mirror of https://github.com/amiaopensource/ffmprovisr.git (synced 2025-10-15 18:29:57 +02:00)

Compare commits: v2018-11-0...v2019-08-0 (70 commits)
Commit SHA1s (author and date columns were not captured in this mirror):
839d50111e, a6dd9c203c, 3402d968a7, 283756f8cf, d28ae29f5c, 2be5576012,
43c98527a7, 164d757309, 2a87a120c3, fbe5f216a7, f93922a9c3, e06a76f559,
0279c1d842, 7e72b1c254, 07fe8bf966, c32a7f44ad, ea2c29a38c, d624a3fc11,
ade2615da3, 72545d5c31, c6215c1953, abfb9ea982, 02beb6ab1d, b552ec4a31,
c26c0d57ea, d023bf7500, 5b795e53dd, 806fd0c49b, 8a2cdbc088, 9df208345c,
19e38145dd, 7453e500df, 8ceb0f4fc6, d95c2e6aa1, ef82e43fb8, 445bd681a0,
c01f821b59, 47575a57ed, 2d6bf9159f, 60d452a431, dbd7687fb4, 193d5f30fb,
7ad290734e, 9686a76ed6, 2dee34d429, 6daace9149, 01a7404ece, 1062f8cf36,
65161a567e, c78323d8e7, afac0cda74, cc188eaf07, 2f2ba5e6f1, c6021ea19b,
7a4ae9d2ea, 6b99821230, ed8c09daa6, c0181f51f8, 1aeb95468d, a3005e42d3,
3934d85f54, f771ff3816, db219e201c, a727aa7d5c, 5f7a01e920, 76a93b7211,
e431fbb3c5, 14e66c13db, 2d14e3266b, 7615c872e4
index.html (307 changed lines)
@@ -62,6 +62,7 @@
 <span class="intro-lead">Sister projects</span>
 <p><a href="https://dd388.github.io/crals/" target="_blank">Script Ahoy</a>: Community Resource for Archivists and Librarians Scripting</p>
 <p><a href="https://datapraxis.github.io/sourcecaster/" target="_blank">The Sourcecaster</a>: an app that helps you use the command line to work through common challenges that come up when working with digital primary sources.</p>
+<p><a href="https://pugetsoundandvision.github.io/micropops/" target="_blank">Micropops</a>: One liners and automation tools from Moving Image Preservation of Puget Sound</p>
 <p><a href="https://amiaopensource.github.io/cable-bible/" target="_blank">Cable Bible</a>: A Guide to Cables and Connectors Used for Audiovisual Tech</p>
 </div>
 
@@ -131,9 +132,9 @@
 <div class="hiding">
 <h3>Filtergraphs</h3>
 <p>Many FFmpeg commands use filters that manipulate the video or audio stream in some way: for example, <a href="https://ffmpeg.org/ffmpeg-filters.html#hflip" target="_blank">hflip</a> to horizontally flip a video, or <a href="https://ffmpeg.org/ffmpeg-filters.html#amerge-1" target="_blank">amerge</a> to merge two or more audio tracks into a single stream.</p>
-<p>The use of a filter is signalled by the flag <code>-vf</code> (video filter) or <code>-af</code> (audio filter), followed by the name and options of the filter itself. For example, take the <a href="#convert-colourspace">convert colourspace</a> command:</p>
+<p>The use of a filter is signaled by the flag <code>-vf</code> (video filter) or <code>-af</code> (audio filter), followed by the name and options of the filter itself. For example, take the <a href="#convert-colorspace">convert colorspace</a> command:</p>
 <p><code>ffmpeg -i <em>input_file</em> -c:v libx264 -vf colormatrix=<em>src</em>:<em>dst</em> <em>output_file</em></code>
-<p>Here, <a href="https://ffmpeg.org/ffmpeg-filters.html#colormatrix" target="_blank">colormatrix</a> is the filter used, with <em>src</em> and <em>dst</em> representing the source and destination colourspaces. This part following the <code>-vf</code> is a <strong>filtergraph</strong>.</p>
+<p>Here, <a href="https://ffmpeg.org/ffmpeg-filters.html#colormatrix" target="_blank">colormatrix</a> is the filter used, with <em>src</em> and <em>dst</em> representing the source and destination colorspaces. This part following the <code>-vf</code> is a <strong>filtergraph</strong>.</p>
 <p>It is also possible to apply multiple filters to an input, which are sequenced together in the filtergraph. A chained set of filters is called a filter chain, and a filtergraph may include multiple filter chains. Filters in a filterchain are separated from each other by commas (<code>,</code>), and filterchains are separated from each other by semicolons (<code>;</code>). For example, take the <a href="#inverse-telecine">inverse telecine</a> command:</p>
 <p><code>ffmpeg -i <em>input_file</em> -c:v libx264 -vf "fieldmatch,yadif,decimate" <em>output_file</em></code></p>
 <p>Here we have a filtergraph including one filter chain, which is made up of three video filters.</p>
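For orientation while reading this hunk: the comma syntax it describes composes like this (a sketch with placeholder file names, not a command from the diff):

    # one filter chain containing two filters, separated by a comma
    ffmpeg -i input.mov -c:v libx264 -vf "hflip,colormatrix=bt601:bt709" output.mp4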
@@ -161,7 +162,7 @@
 <input type="checkbox" id="stream-mapping">
 <div class="hiding">
 <h3>Stream mapping</h3>
-<p>Stream mapping is the practice of defining which of the streams (e.g., video or audio tracks) present in an input file will be present in the output file. FFmpeg recognises five stream types:</p>
+<p>Stream mapping is the practice of defining which of the streams (e.g., video or audio tracks) present in an input file will be present in the output file. FFmpeg recognizes five stream types:</p>
 <ul>
 <li><code>a</code> - audio</li>
 <li><code>v</code> - video</li>
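A minimal mapping sketch to accompany this hunk, assuming a source with one video track and two audio tracks (file names are placeholders):

    # keep all video streams and only the second audio stream, without re-encoding
    ffmpeg -i input.mov -map 0:v -map 0:a:1 -c copy output.mov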
@@ -212,28 +213,6 @@
 </div>
 <!-- End Basic rewrap command -->
 
-<!-- MKV to MP4 -->
-<label class="recipe" for="mkv_to_mp4">Convert Matroska (MKV) to MP4</label>
-<input type="checkbox" id="mkv_to_mp4">
-<div class="hiding">
-<h3>MKV to MP4</h3>
-<p><code>ffmpeg -i <em>input_file</em>.mkv -c:v copy -c:a aac <em>output_file</em>.mp4</code></p>
-<p>This will convert your Matroska (MKV) files to MP4 files.</p>
-<dl>
-<dt>ffmpeg</dt><dd>starts the command</dd>
-<dt>-i <em>input_file</em></dt><dd>path and name of the input file<br>
-The extension for the Matroska container is <code>.mkv</code>.</dd>
-<dt>-c:v copy</dt><dd>copies the video stream without re-encoding it</dd>
-<dt>-c:a aac</dt><dd>re-encodes the audio stream using the AAC audio codec<br>
-Note that sadly MP4 cannot contain sound encoded by a PCM (Pulse-Code Modulation) audio codec.<br>
-For silent videos you can replace <code>-c:a aac</code> by <code>-an</code>, which means that there will be no audio track in the output file.</dd>
-<dt><em>output_file</em></dt><dd>path and name of the output file<br>
-The extension for the MP4 container is <code>.mp4</code>.</dd>
-</dl>
-<p class="link"></p>
-</div>
-<!-- ends MKV to MP4 -->
-
 <!-- Rewrap DV -->
 <label class="recipe" for="rewrap-dv">Rewrap DV video to .dv file</label>
 <input type="checkbox" id="rewrap-dv">
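The deleted entry's prose also described a silent-video variant; written out it would have been (placeholder names, a sketch rather than a line from the repo):

    # drop the audio track entirely instead of re-encoding it to AAC
    ffmpeg -i input.mkv -c:v copy -an output.mp4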
@@ -314,7 +293,7 @@
 <dl>
 <dt>-preset <em>veryslow</em></dt><dd>This option tells FFmpeg to use the slowest preset possible for the best compression quality.<br>
 Available presets, from slowest to fastest, are: <code>veryslow</code>, <code>slower</code>, <code>slow</code>, <code>medium</code>, <code>fast</code>, <code>faster</code>, <code>veryfast</code>, <code>superfast</code>, <code>ultrafast</code>.</dd>
-<dt>-crf <em>18</em></dt><dd>Specifying a lower CRF will make a larger file with better visual quality. For H.264 files being encoded with a 4:2:0 chroma subsampling scheme (i.e., using <code>-pix_fmt yuv420p</code>), the scale ranges between 0-51, with 0 being lossless and 51 the worst possible quality.<br>
+<dt>-crf <em>18</em></dt><dd>Specifying a lower CRF will make a larger file with better visual quality. For H.264 files being encoded with a 4:2:0 chroma subsampling scheme (i.e., using <code>-pix_fmt yuv420p</code>), the scale ranges between 0-51 for 8-bit content, with 0 being lossless and 51 the worst possible quality.<br>
 If no crf is specified, <code>libx264</code> will use a default value of 23. 18 is often considered a “visually lossless” compression.</dd>
 </dl>
 <p>For more information, see the <a href="https://trac.ffmpeg.org/wiki/Encode/H.264" target="_blank">FFmpeg and H.264 Encoding Guide</a> on the FFmpeg wiki.</p>
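Both options from this hunk combined into one invocation, for reference (a sketch; file names are placeholders):

    # slowest preset plus "visually lossless" CRF, audio stream-copied
    ffmpeg -i input.mov -c:v libx264 -pix_fmt yuv420p -preset veryslow -crf 18 -c:a copy output.mp4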
@@ -367,7 +346,7 @@
 <dt>-slicecrc 1</dt><dd>Adds CRC information for each slice. This makes it possible for a decoder to detect errors in the bitstream, rather than blindly decoding a broken slice. (Read more <a href="http://ndsr.nycdigital.org/diving-in-head-first/" target="_blank">here</a>).</dd>
 <dt>-slices 16</dt><dd>Each frame is split into 16 slices. 16 is a good trade-off between filesize and encoding time.</dd>
 <dt>-c:a copy</dt><dd>copies all mapped audio streams.</dd>
-<dt><em>output_file</em>.mkv</dt><dd>path and name of the output file. Use the <code>.mkv</code> extension to save your file in a Matroska container. Optionally, choose a different extension if you want a different container, such as <code>.mov</code> or <code>.avi</code>.</dd>
+<dt><em>output_file</em>.mkv</dt><dd>path and name of the output file. Use the <code>.mkv</code> extension to save your file in a Matroska container.</dd>
 <dt>-f framemd5</dt><dd>Decodes video with the framemd5 muxer in order to generate MD5 checksums for every frame of your input file. This allows you to verify losslessness when compared against the framemd5s of the output file.</dd>
 <dt>-an</dt><dd>ignores the audio stream when creating framemd5 (audio no)</dd>
 <dt><em>framemd5_output_file</em></dt><dd>path, name and extension of the framemd5 file.</dd>
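A sketch of the verification step these options describe, run against the recipe's output file (names are placeholders):

    # generate per-frame MD5s for the new file, to compare against those of the source
    ffmpeg -i output.mkv -an -f framemd5 output.framemd5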
@@ -427,7 +406,7 @@
 <dt><em>output file</em></dt><dd>path, name and extension of the output file</dd>
 </dl>
 <p>The libx265 encoding library defaults to a ‘medium’ preset for compression quality and a CRF of 28. CRF stands for ‘constant rate factor’ and determines the quality and file size of the resulting H.265 video. The CRF scale ranges from 0 (best quality [lossless]; largest file size) to 51 (worst quality; smallest file size).</p>
-<p>A CRF of 28 for H.265 can be considered a medium setting, <a href="https://trac.ffmpeg.org/wiki/Encode/H.265#ConstantRateFactorCRF" target="_blank">corresponding</a> to a CRF of 23 in <a href="./index.html#transcode_h264">encoding H.264</a>, but should result in about half the file size.</p>
+<p>A CRF of 28 for H.265 can be considered a medium setting, <a href="https://trac.ffmpeg.org/wiki/Encode/H.265#ConstantRateFactorCRF" target="_blank">corresponding</a> to a CRF of 23 in <a href="#transcode_h264">encoding H.264</a>, but should result in about half the file size.</p>
 <p>To create a higher quality file, you can add these presets:</p>
 <p><code>ffmpeg -i <em>input_file</em> -c:v libx265 -pix_fmt yuv420p -preset veryslow -crf 18 -c:a copy <em>output_file</em></code></p>
 <dl>
@@ -453,7 +432,7 @@
 <dt>-b:v 690k</dt><dd>specifies the 690k video bitrate</dd>
 <dt><em>output file</em></dt><dd>path, name and extension of the output file (make sure to include the <code>.ogv</code> filename suffix)</dd>
 </dl>
-<p>This recipe is based on <a href="http://paulrouget.com/e/converttohtml5video">Paul Rouget's recipes</a>.</p>
+<p>This recipe is based on <a href="http://paulrouget.com/e/converttohtml5video" target="_blank">Paul Rouget's recipes</a>.</p>
 <p class="link"></p>
 </div>
 <!-- ends Transcode to Ogg/Theora -->
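The full command this hunk belongs to falls outside the diff context; a plausible form, assuming an FFmpeg build with libtheora and libvorbis (an assumption, not the repo's exact line):

    # encode Theora video at 690k with Vorbis audio in an .ogv container
    ffmpeg -i input.mov -c:v libtheora -b:v 690k -c:a libvorbis output.ogv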
@@ -488,7 +467,7 @@
 <!-- ends WAV to MP3 -->
 
 <!-- append notice to access mp3 -->
-<label class="recipe" for="append_mp3">Generate two access MP3s (with and without copyright).</label>
+<label class="recipe" for="append_mp3">Generate two access MP3s (with and without copyright)</label>
 <input type="checkbox" id="append_mp3">
 <div class="hiding">
 <h3>Generate two access MP3s from input. One with appended audio (such as a copyright notice) and one unmodified.</h3>
@@ -607,7 +586,7 @@
 <dt>-i <em>input_file</em></dt><dd>path, name and extension of the input file</dd>
 <dt>-filter:v "colormatrix=bt601:bt709, scale=1440:1080:flags=lanczos, pad=1920:1080:240:0"</dt><dd>set colour matrix, video scaling and padding<br>Three filters are applied:
 <ol>
-<li>The luma coefficients are modified from SD video (according to Rec. 601) to HD video (according to Rec. 709) by a colour matrix. Note that today Rec. 709 is often used also for SD and therefore you may cancel this parameter.</li>
+<li>The luma coefficients are modified from SD video (according to Rec. 601) to HD video (according to Rec. 709) by a color matrix. Note that today Rec. 709 is often used also for SD and therefore you may cancel this parameter.</li>
 <li>The scaling filter (<code>scale=1440:1080</code>) works for both upscaling and downscaling. We use the Lanczos scaling algorithm (<code>flags=lanczos</code>), which is slower but gives better results than the default bilinear algorithm.</li>
 <li>The padding filter (<code>pad=1920:1080:240:0</code>) completes the transformation from SD to HD.</li>
 </ol></dd>
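The enclosing command is cropped out of this hunk; built around the filter string shown above it would read roughly as follows (a sketch; file names assumed):

    # SD (Rec. 601) to pillarboxed HD (Rec. 709) with Lanczos scaling
    ffmpeg -i input.mov -filter:v "colormatrix=bt601:bt709, scale=1440:1080:flags=lanczos, pad=1920:1080:240:0" -c:a copy output.mov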
@@ -623,7 +602,7 @@
 <label class="recipe" for="change_DAR">Change display aspect ratio without re-encoding</label>
 <input type="checkbox" id="change_DAR">
 <div class="hiding">
-<h3>Change Display Aspect Ratio without reencoding video</h3>
+<h3>Change Display Aspect Ratio without re-encoding video</h3>
 <p><code>ffmpeg -i <em>input_file</em> -c:v copy -aspect 4:3 <em>output_file</em></code></p>
 <dl>
 <dt>ffmpeg</dt><dd>starts the command</dd>
@@ -636,32 +615,32 @@
 </div>
 <!-- ends Change display aspect ratio without re-encoding video -->
 
-<!-- Convert colourspace -->
-<label class="recipe" for="convert-colourspace">Convert colourspace of video</label>
-<input type="checkbox" id="convert-colourspace">
+<!-- Convert colorspace -->
+<label class="recipe" for="convert-colorspace">Convert colorspace of video</label>
+<input type="checkbox" id="convert-colorspace">
 <div class="hiding">
-<h3>Transcode video to a different colourspace</h3>
-<p>This command uses a filter to convert the video to a different colour space.</p>
+<h3>Transcode video to a different colorspace</h3>
+<p>This command uses a filter to convert the video to a different colorspace.</p>
 <p><code>ffmpeg -i <em>input_file</em> -c:v libx264 -vf colormatrix=src:dst <em>output_file</em></code></p>
 <dl>
 <dt>ffmpeg</dt><dd>starts the command</dd>
 <dt>-i <em>input file</em></dt><dd>path, name and extension of the input file</dd>
 <dt>-c:v libx264</dt><dd>tells FFmpeg to encode the video stream as H.264</dd>
-<dt>-vf colormatrix=<em>src</em>:<em>dst</em></dt><dd>the video filter <strong>colormatrix</strong> will be applied, with the given source and destination colourspaces.<br>
+<dt>-vf colormatrix=<em>src</em>:<em>dst</em></dt><dd>the video filter <strong>colormatrix</strong> will be applied, with the given source and destination colorspaces.<br>
 Accepted values include <code>bt601</code> (Rec.601), <code>smpte170m</code> (Rec.601, 525-line/<a href="https://en.wikipedia.org/wiki/NTSC#NTSC-M" target="_blank">NTSC</a> version), <code>bt470bg</code> (Rec.601, 625-line/<a href="https://en.wikipedia.org/wiki/PAL#PAL-B.2FG.2FD.2FK.2FI" target="_blank">PAL</a> version), <code>bt709</code> (Rec.709), and <code>bt2020</code> (Rec.2020).<br>
 For example, to convert from Rec.601 to Rec.709, you would use <code>-vf colormatrix=bt601:bt709</code>.</dd>
 <dt><em>output file</em></dt><dd>path, name and extension of the output file</dd>
 </dl>
-<p><strong>Note:</strong> Converting between colourspaces with FFmpeg can be done via either the <strong>colormatrix</strong> or <strong>colorspace</strong> filters, with colorspace allowing finer control (individual setting of colourspace, transfer characteristics, primaries, range, pixel format, etc). See <a href="https://trac.ffmpeg.org/wiki/colorspace" target="_blank">this</a> entry on the FFmpeg wiki, and the FFmpeg documentation for <a href="https://ffmpeg.org/ffmpeg-filters.html#colormatrix" target="_blank">colormatrix</a> and <a href="https://ffmpeg.org/ffmpeg-filters.html#colorspace" target="_blank">colorspace</a>.</p>
+<p><strong>Note:</strong> Converting between colorspaces with FFmpeg can be done via either the <strong>colormatrix</strong> or <strong>colorspace</strong> filters, with colorspace allowing finer control (individual setting of colorspace, transfer characteristics, primaries, range, pixel format, etc). See <a href="https://trac.ffmpeg.org/wiki/colorspace" target="_blank">this</a> entry on the FFmpeg wiki, and the FFmpeg documentation for <a href="https://ffmpeg.org/ffmpeg-filters.html#colormatrix" target="_blank">colormatrix</a> and <a href="https://ffmpeg.org/ffmpeg-filters.html#colorspace" target="_blank">colorspace</a>.</p>
 <hr>
-<h4>Convert colourspace and embed colourspace metadata</h4>
+<h4>Convert colorspace and embed colorspace metadata</h4>
 <p><code>ffmpeg -i <em>input_file</em> -c:v libx264 -vf colormatrix=src:dst -color_primaries <em>val</em> -color_trc <em>val</em> -colorspace <em>val</em> <em>output_file</em></code></p>
 <dl>
 <dt>ffmpeg</dt><dd>starts the command</dd>
 <dt>-i <em>input file</em></dt><dd>path, name and extension of the input file</dd>
 <dt>-c:v libx264</dt><dd>encode video as H.264</dd>
-<dt>-vf colormatrix=<em>src</em>:<em>dst</em></dt><dd>the video filter <strong>colormatrix</strong> will be applied, with the given source and destination colourspaces.</dd>
-<dt>-color_primaries <em>val</em></dt><dd>tags video with the given colour primaries.<br>
+<dt>-vf colormatrix=<em>src</em>:<em>dst</em></dt><dd>the video filter <strong>colormatrix</strong> will be applied, with the given source and destination colorspaces.</dd>
+<dt>-color_primaries <em>val</em></dt><dd>tags video with the given color primaries.<br>
 Accepted values include <code>smpte170m</code> (Rec.601, 525-line/NTSC version), <code>bt470bg</code> (Rec.601, 625-line/PAL version), <code>bt709</code> (Rec.709), and <code>bt2020</code> (Rec.2020).
 <dt>-color_trc <em>val</em></dt><dd>tags video with the given transfer characteristics (gamma).<br>
 Accepted values include <code>smpte170m</code> (Rec.601, 525-line/NTSC version), <code>gamma28</code> (Rec.601, 625-line/PAL version)<sup><a href="#fn1" id="ref1">1</a></sup>, <code>bt709</code> (Rec.709), <code>bt2020_10</code> (Rec.2020 10-bit), and <code>bt2020_12</code> (Rec.2020 12-bit).</dd>
@@ -677,17 +656,17 @@
 <p>To Rec.709:</p>
 <p><code>ffmpeg -i <em>input_file</em> -c:v libx264 -vf colormatrix=bt601:bt709 -color_primaries bt709 -color_trc bt709 -colorspace bt709 <em>output_file</em></code></p>
 <p>MediaInfo output examples:</p>
-<img src="./img/colourspace_metadata_mediainfo.png" alt="MediaInfo screenshots of colourspace metadata"><br>
+<img src="./img/colourspace_metadata_mediainfo.png" alt="MediaInfo screenshots of colorspace metadata"><br>
 <p><span class="beware">⚠</span> Using this command it is possible to add Rec.709 tags to a file that is actually Rec.601 (etc), so apply with caution!</p>
 <p>These commands are relevant for H.264 and H.265 videos, encoded with <code>libx264</code> and <code>libx265</code> respectively.</p>
-<p><strong>Note:</strong> If you wish to embed colourspace metadata <em>without</em> changing to another colourspace, omit <code>-vf colormatrix=src:dst</code>. However, since it is <code>libx264</code>/<code>libx265</code> that writes the metadata, it’s not possible to add these tags without reencoding the video stream.</p>
+<p><strong>Note:</strong> If you wish to embed colorspace metadata <em>without</em> changing to another colorspace, omit <code>-vf colormatrix=src:dst</code>. However, since it is <code>libx264</code>/<code>libx265</code> that writes the metadata, it’s not possible to add these tags without re-encoding the video stream.</p>
 <p>For all possible values for <code>-color_primaries</code>, <code>-color_trc</code>, and <code>-colorspace</code>, see the FFmpeg documentation on <a href="https://ffmpeg.org/ffmpeg-codecs.html#Codec-Options" target="_blank">codec options</a>.</p>
 <hr>
 <p id="fn1" class="footnote">1. Out of step with the regular pattern, <code>-color_trc</code> doesn’t accept <code>bt470bg</code>; it is instead here referred to directly as gamma.<br>
 In the Rec.601 standard, 525-line/NTSC and 625-line/PAL video have assumed gammas of 2.2 and 2.8 respectively. <a href="#ref1" title="Jump back.">↩</a></p>
 <p class="link"></p>
 </div>
-<!-- ends Convert colourspace -->
+<!-- ends Convert colorspace -->
 
 <!-- Modify speed -->
 <label class="recipe" for="modify_speed">Modify image and sound speed</label>
@@ -714,6 +693,26 @@
 </div>
 <!-- ends Modify speed -->
 
+<!-- Synchronize video and audio streams -->
+<label class="recipe" for="sync_streams">Synchronize video and audio streams</label>
+<input type="checkbox" id="sync_streams">
+<div class="hiding">
+<h3>Synchronize video and audio streams</h3>
+<p><code>ffmpeg -i <em>input_file</em> -itsoffset 0.125 -i <em>input_file</em> -map 1:v -map 0:a -c copy <em>output_file</em></code></p>
+<p>A command to slip the video channel approximately 2 frames (0.125 seconds for a 25fps timeline) to align video and audio, correcting drift that may have been generated during video tape capture, for example.</p>
+<dl>
+<dt>ffmpeg</dt><dd>starts the command</dd>
+<dt>-i <em>input_file</em></dt><dd>path, name and extension of the input file</dd>
+<dt>-itsoffset 0.125</dt><dd>uses the itsoffset option to set the offset to 0.125 of a second. The offset time must be a time duration specification, see <a href="https://ffmpeg.org/ffmpeg-utils.html#time-duration-syntax" target="_blank">FFMPEG Utils Time Duration Syntax</a>.</dd>
+<dt>-i <em>input_file</em></dt><dd>repeat path, name and extension of the input file</dd>
+<dt>-map 1:v -map 0:a</dt><dd>selects the video channel for the itsoffset command. To slip the audio channel instead, reverse the selection to -map 0:v -map 1:a.</dd>
+<dt>-c copy</dt><dd>copies the streams of the input_file to the output_file without re-encoding</dd>
+<dt><em>output_file</em></dt><dd>path, name and extension of the output_file</dd>
+</dl>
+<p class="link"></p>
+</div>
+<!-- ends Synchronize video and audio streams -->
+
 <!-- Make stream properties explicate -->
 <label class="recipe" for="clarify_stream">Clarify stream properties</label>
 <input type="checkbox" id="clarify_stream">
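The audio-slip variant this entry describes in prose, written out (a sketch; placeholder names):

    # offset the audio instead of the video by mapping audio from the delayed input
    ffmpeg -i input.mov -itsoffset 0.125 -i input.mov -map 0:v -map 1:a -c copy output.mov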
@@ -762,25 +761,54 @@
 <dt>ffmpeg</dt><dd>starts the command</dd>
 <dt>-i <em>input_file</em></dt><dd>path, name and extension of the input file</dd>
 <dt>-vf "crop=<em>width</em>:<em>height</em>"</dt><dd>Crops the video to the given width and height (in pixels).<br>
-By default, the crop area is centred: that is, the position of the top left of the cropped area is set to x = (<em>input_width</em> - <em>output_width</em>) / 2, y = <em>input_height</em> - <em>output_height</em>) / 2.
+By default, the crop area is centered: that is, the position of the top left of the cropped area is set to x = (<em>input_width</em> - <em>output_width</em>) / 2, y = (<em>input_height</em> - <em>output_height</em>) / 2.
 </dd>
 <dt><em>output_file</em></dt><dd>path, name and extension of the output file</dd>
 </dl>
 <p>It's also possible to specify the crop position by adding the x and y coordinates representing the top left of your cropped area to your crop filter, as such:</p>
 <p><code>ffmpeg -i <em>input_file</em> -vf "crop=<em>width</em>:<em>height</em>[:<em>x_position</em>:<em>y_position</em>]" <em>output_file</em></code></p>
 <h3>Examples</h3>
-<p>The original frame, a screenshot of the SMPTE colourbars:</p>
+<p>The original frame, a screenshot of the SMPTE colorbars:</p>
 <img class="sample-image" src="img/crop_example_orig.png" alt="VLC screenshot of Maggie Cheung">
-<p>Result of the command <code>ffmpeg -i <em>smpte_coloursbars.mov</em> -vf "crop=500:500" <em>output_file</em></code>:</p>
+<p>Result of the command <code>ffmpeg -i <em>smpte_colorsbars.mov</em> -vf "crop=500:500" <em>output_file</em></code>:</p>
 <img class="sample-image-small" src="img/crop_example_aftercrop1.png" alt="VLC screenshot of Maggie Cheung, cropped from original">
-<p>Result of the command <code>ffmpeg -i <em>smpte_coloursbars.mov</em> -vf "crop=500:500:0:0" <em>output_file</em></code>, appending <code>:0:0</code> to crop from the top left corner:</p>
+<p>Result of the command <code>ffmpeg -i <em>smpte_colorsbars.mov</em> -vf "crop=500:500:0:0" <em>output_file</em></code>, appending <code>:0:0</code> to crop from the top left corner:</p>
 <img class="sample-image-small" src="img/crop_example_aftercrop2.png" alt="VLC screenshot of Maggie Cheung, cropped from original">
-<p>Result of the command <code>ffmpeg -i <em>smpte_coloursbars.mov</em> -vf "crop=500:300:500:30" <em>output_file</em></code>:</p>
+<p>Result of the command <code>ffmpeg -i <em>smpte_colorsbars.mov</em> -vf "crop=500:300:500:30" <em>output_file</em></code>:</p>
 <img class="sample-image-small" src="img/crop_example_aftercrop3.png" alt="VLC screenshot of Maggie Cheung, cropped from original">
 <p class="link"></p>
 </div>
 <!-- ends Crop video -->
 
+<!-- Change video color to black and white -->
+<label class="recipe" for="col_change">Change video color to black and white</label>
+<input type="checkbox" id="col_change">
+<div class="hiding">
+<h3>Change video color to black and white</h3>
+<p><code>ffmpeg -i <em>input_file</em> -filter_complex hue=s=0 -c:a copy <em>output_file</em></code></p>
+<p>A basic command to alter color hue to black and white using filter_complex (credit @FFMPEG via Twitter).</p>
+<dl>
+<dt>ffmpeg</dt><dd>starts the command</dd>
+<dt>-i <em>input_file</em></dt><dd>path, name and extension of the input file</dd>
+<dt>-filter_complex hue=s=0</dt><dd>uses the hue filter via filter_complex, setting saturation to zero to render the image black and white</dd>
+<dt>-c:a copy</dt><dd>copies the audio stream of the input_file to the output_file without re-encoding</dd>
+<dt><em>output_file</em></dt><dd>path, name and extension of the output_file</dd>
+</dl>
+<p>An alternative that preserves interlacing information for a ProRes 422 HQ file generated, for example, from a tape master (credit Dave Rice):</p>
+<p><code>ffmpeg -i <em>input_file</em> -c:v prores_ks -flags +ildct -map 0 -c:a copy -profile:v 3 -vf hue=s=0 <em>output_file</em></code></p>
+<dl>
+<dt>ffmpeg</dt><dd>starts the command</dd>
+<dt>-i <em>input_file</em></dt><dd>path, name and extension of the input file</dd>
+<dt>-c:v prores_ks</dt><dd>encodes the video to ProRes (prores_ks marks the stream as interlaced, unlike prores)</dd>
+<dt>-flags +ildct</dt><dd>ensures that the output_file has interlaced field encoding, using interlace aware discrete cosine transform</dd>
+<dt>-map 0</dt><dd>ensures ffmpeg maps all streams of the input_file to the output_file</dd>
+<dt>-c:a copy</dt><dd>copies the audio stream of the input_file to the output_file without re-encoding</dd>
+<dt><em>output_file</em></dt><dd>path, name and extension of the output file</dd>
+</dl>
+<p class="link"></p>
+</div>
+<!-- ends Change video color to black and white -->
+
 </div>
 <div class="well">
 <h2 id="audio-files">Change or view audio properties</h2>
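A worked instance of the default centering formula above, assuming a 720x576 source cropped to 500x500 (hypothetical numbers):

    # x = (720 - 500) / 2 = 110, y = (576 - 500) / 2 = 38, so the centered crop is:
    ffmpeg -i input.mov -vf "crop=500:500:110:38" output.mov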
@@ -883,6 +911,23 @@
 </div>
 <!-- ends RIAA equalization -->
 
+<!-- CD De-emphasis -->
+<label class="recipe" for="cd_eq">Reverse CD Pre-Emphasis</label>
+<input type="checkbox" id="cd_eq">
+<div class="hiding">
+<h3>Reverse CD Pre-Emphasis</h3>
+<p><code>ffmpeg -i <em>input_file</em> -af aemphasis=type=cd <em>output_file</em></code></p>
+<p>This will apply de-emphasis to reverse the effects of CD pre-emphasis in the somewhat rare case of CDs that were created with this technology. Use this command to create more accurate listening copies of files that were ripped 'flat' (without any de-emphasis) where the original source utilized emphasis. For more information about CD pre-emphasis see the <a href="https://wiki.hydrogenaud.io/index.php?title=Pre-emphasis" target="_blank">Hydrogen Audio page</a> on this subject.</p>
+<dl>
+<dt>ffmpeg</dt><dd>starts the command</dd>
+<dt>-i <em>input_file</em></dt><dd>path, name and extension of the input file</dd>
+<dt>-af aemphasis=type=cd</dt><dd>activates the aemphasis filter and sets it to use CD equalization</dd>
+<dt><em>output_file</em></dt><dd>path and name of output file</dd>
+</dl>
+<p class="link"></p>
+</div>
+<!-- ends CD De-emphasis -->
+
 <!-- one pass loudnorm -->
 <label class="recipe" for="loudnorm_one_pass">One Pass Loudness Normalization</label>
 <input type="checkbox" id="loudnorm_one_pass">
@@ -949,7 +994,7 @@
 
 </div>
 <div class="well">
-<h2 id="join-trim">Join, trim, or excerpt a video</h2>
+<h2 id="join-trim">Join, trim, or create an excerpt</h2>
 
 <!-- Join files of the same type together -->
 <label class="recipe" for="join_files">Join (concatenate) two or more files of the same type</label>
@@ -986,7 +1031,7 @@
 <p><code>ffmpeg -i input_1.avi -i input_2.mp4 -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[video_out][audio_out]" -map "[video_out]" -map "[audio_out]" <em>output_file</em></code></p>
 <p>This command takes two or more files of different file types and joins them together to make a single file.</p>
 <p>The input files may differ in many respects - container, codec, chroma subsampling scheme, framerate, etc. However, the above command only works properly if the files to be combined have the same dimensions (e.g., 720x576). Also note that if the input files have different framerates, then the output file will be of variable framerate.</p>
-<p>Some aspects of the input files will be normalised: for example, if an input file contains a video track and an audio track that do not have exactly the same duration, the shorter one will be padded. In the case of a shorter video track, the last frame will be repeated in order to cover the missing video; in the case of a shorter audio track, the audio stream will be padded with silence.</p>
+<p>Some aspects of the input files will be normalized: for example, if an input file contains a video track and an audio track that do not have exactly the same duration, the shorter one will be padded. In the case of a shorter video track, the last frame will be repeated in order to cover the missing video; in the case of a shorter audio track, the audio stream will be padded with silence.</p>
 <dl>
 <dt>ffmpeg</dt><dd>starts the command</dd>
 <dt>-i <em>input_1.ext</em></dt><dd>path, name and extension of the first input file</dd>
@@ -1023,12 +1068,12 @@
 <p>(The Lanczos scaling algorithm is recommended, as it is slower but better than the default bilinear algorithm).</p>
 <p>The rescaling should be applied just before the point where the streams to be used in the output file are listed. Select the stream you want to rescale, apply the filter, and assign that to a variable name (<code>rescaled_video</code> in the below example). Then you use this variable name in the list of streams to be concatenated.</p>
 <p><code>ffmpeg -i input_1.avi -i input_2.mp4 -filter_complex "[0:v:0] scale=1920:1080:flags=lanczos [rescaled_video], [rescaled_video] [0:a:0] [1:v:0] [1:a:0] concat=n=2:v=1:a=1 [video_out] [audio_out]" -map "[video_out]" -map "[audio_out]" <em>output_file</em></code></p>
-<p>However, this will only have the desired visual output if the inputs have the same aspect ratio. If you wish to concatenate an SD and an HD file, you will also wish to pillarbox the SD file while upscaling. (See the <a href="https://amiaopensource.github.io/ffmprovisr/#SD_HD_2">Convert 4:3 to pillarboxed HD</a> command). The full command would look like this:</p>
+<p>However, this will only have the desired visual output if the inputs have the same aspect ratio. If you wish to concatenate an SD and an HD file, you will also wish to pillarbox the SD file while upscaling. (See the <a href="#SD_HD_2">Convert 4:3 to pillarboxed HD</a> command). The full command would look like this:</p>
 <p><code>ffmpeg -i input_1.avi -i input_2.mp4 -filter_complex "[0:v:0] scale=1440:1080:flags=lanczos, pad=1920:1080:(ow-iw)/2:(oh-ih)/2 [to_hd_video], [to_hd_video] [0:a:0] [1:v:0] [1:a:0] concat=n=2:v=1:a=1 [video_out] [audio_out]" -map "[video_out]" -map "[audio_out]" <em>output_file</em></code></p>
 <p>Here, the first input is an SD file which needs to be upscaled to match the second input, which is 1920x1080. The scale filter enlarges the SD input to the height of the HD frame, keeping the 4:3 aspect ratio; then, the video is pillarboxed within a 1920x1080 frame.</p>
 <h4>Variation: concatenating files of different framerates</h4>
 <p>If the input files have different framerates, then the output file may be of variable framerate. To explicitly obtain an output file of constant framerate, you may wish to convert an input (or multiple inputs) to a different framerate prior to concatenation.</p>
-<p>You can speed up or slow down a file using the <code>fps</code> and <code>atempo</code> filters (see also the <a href="https://amiaopensource.github.io/ffmprovisr/#modify_speed">Modify speed</a> command).</p>
+<p>You can speed up or slow down a file using the <code>fps</code> and <code>atempo</code> filters (see also the <a href="#modify_speed">Modify speed</a> command).</p>
 <p>Here's an example of the full command, in which input_1 is 30fps, input_2 is 25fps, and 25fps is the desired output speed.</p>
 <p><code>ffmpeg -i input_1.avi -i input_2.mp4 -filter_complex "[0:v:0] fps=fps=25 [video_to_25fps]; [0:a:0] atempo=(25/30) [audio_to_25fps]; [video_to_25fps] [audio_to_25fps] [1:v:0] [1:a:0] concat=n=2:v=1:a=1 [video_out] [audio_out]" -map "[video_out]" -map "[audio_out]" <em>output_file</em></code></p>
 <p>Note that the <code>fps</code> filter will drop or repeat frames as necessary in order to achieve the desired frame rate - see the FFmpeg <a href="https://ffmpeg.org/ffmpeg-filters.html#fps-1" target="_blank">fps docs</a> for more details.</p>
@@ -1069,12 +1114,12 @@
 <!-- ends Split file into segments -->
 
 <!-- Trim -->
-<label class="recipe" for="trim">Trim video</label>
+<label class="recipe" for="trim">Trim file</label>
 <input type="checkbox" id="trim">
 <div class="hiding">
 <h3>Trim a video without re-encoding</h3>
 <p><code>ffmpeg -i <em>input_file</em> -ss 00:02:00 -to 00:55:00 -c copy -map 0 <em>output_file</em></code></p>
-<p>This command allows you to create an excerpt from a video file without re-encoding the image data.</p>
+<p>This command allows you to create an excerpt from a file without re-encoding the audiovisual data.</p>
 <dl>
 <dt>ffmpeg</dt><dd>starts the command</dd>
 <dt>-i <em>input_file</em></dt><dd>path, name and extension of the input file</dd>
@@ -1085,12 +1130,12 @@
 <strong>Note:</strong> watch out when using <code>-ss</code> with <code>-c copy</code> if the source is encoded with an interframe codec (e.g., H.264). Since FFmpeg must split on i-frames, it will seek to the nearest i-frame to begin the stream copy.</dd>
 <dt><em>output_file</em></dt><dd>path, name and extension of the output file</dd>
 </dl>
-<p>Variation: trim video by setting duration, by using <code>-t</code> instead of <code>-to</code></p>
+<p>Variation: trim file by setting duration, by using <code>-t</code> instead of <code>-to</code></p>
 <p><code>ffmpeg -i <em>input_file</em> -ss 00:05:00 -t 10 -c copy <em>output_file</em></code></p>
 <dl>
 <dt>-ss 00:05:00 -t 10</dt><dd>Beginning five minutes into the original video, this command will create a 10-second-long excerpt.</dd>
 </dl>
-<p>Note: In order to keep the original timestamps, without trying to sanitise them, you may add the <code>-copyts</code> option.</p>
+<p>Note: In order to keep the original timestamps, without trying to sanitize them, you may add the <code>-copyts</code> option.</p>
 <p class="link"></p>
 </div>
 <!-- ends Trim -->
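The -copyts note spelled out as a full command, for reference (a sketch; placeholder names):

    # 10-second excerpt starting at 00:05:00, keeping the original timestamps
    ffmpeg -i input.mov -ss 00:05:00 -t 10 -c copy -copyts output.mov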
@@ -1101,7 +1146,7 @@
 <div class="hiding">
 <h3>Excerpt from beginning</h3>
 <p><code>ffmpeg -i <em>input_file</em> -t <em>5</em> -c copy -map 0 <em>output_file</em></code></p>
-<p>This command captures a certain portion of a video file, starting from the beginning and continuing for the amount of time (in seconds) specified in the script. This can be used to create a preview file, or to remove unwanted content from the end of the file. To be more specific, use timecode, such as 00:00:05.</p>
+<p>This command captures a certain portion of a file, starting from the beginning and continuing for the amount of time (in seconds) specified in the script. This can be used to create a preview file, or to remove unwanted content from the end of the file. To be more specific, use timecode, such as 00:00:05.</p>
 <dl>
 <dt>ffmpeg</dt><dd>starts the command</dd>
 <dt>-i <em>input_file</em></dt><dd>path, name and extension of the input file</dd>
@@ -1115,12 +1160,12 @@
 <!-- ends Excerpt from beginning -->
 
 <!-- Excerpt to end -->
-<label class="recipe" for="excerpt_to_end">Create a new video file with the first five seconds trimmed off the original</label>
+<label class="recipe" for="excerpt_to_end">Create a new file with the first five seconds trimmed off the original</label>
 <input type="checkbox" id="excerpt_to_end">
 <div class="hiding">
 <h3>Excerpt to end</h3>
 <p><code>ffmpeg -i <em>input_file</em> -ss <em>5</em> -c copy -map 0 <em>output_file</em></code></p>
-<p>This command copies a video file starting from a specified time, removing the first few seconds from the output. This can be used to create an excerpt, or remove unwanted content from the beginning of a video file.</p>
+<p>This command copies a file starting from a specified time, removing the first few seconds from the output. This can be used to create an excerpt, or remove unwanted content from the beginning of a file.</p>
 <dl>
 <dt>ffmpeg</dt><dd>starts the command</dd>
 <dt>-i <em>input_file</em></dt><dd>path, name and extension of the input file</dd>
@@ -1134,12 +1179,12 @@
 <!-- ends Excerpt to end -->
 
 <!-- Excerpt from end -->
-<label class="recipe" for="excerpt_from_end">Create a new video file with the final five seconds of the original</label>
+<label class="recipe" for="excerpt_from_end">Create a new file with the final five seconds of the original</label>
 <input type="checkbox" id="excerpt_from_end">
 <div class="hiding">
 <h3>Excerpt from end</h3>
 <p><code>ffmpeg -sseof <em>-5</em> -i <em>input_file</em> -c copy -map 0 <em>output_file</em></code></p>
-<p>This command copies a video file starting from a specified time before the end of the file, removing everything before from the output. This can be used to create an excerpt, or extract content from the end of a video file (e.g. for extracting the closing credits).</p>
+<p>This command copies a file starting from a specified time before the end of the file, removing everything before from the output. This can be used to create an excerpt, or extract content from the end of a file (e.g. for extracting the closing credits).</p>
 <dl>
 <dt>ffmpeg</dt><dd>starts the command</dd>
 <dt>-sseof <em>-5</em></dt><dd>This parameter must stay before the input file. It tells FFmpeg what timecode in the file to look for to start copying, and specifies the number of seconds from the end of the video that FFmpeg should start copying. The end of the file has index 0 and the minus sign is needed to reference earlier portions. To be more specific, you can use timecode such as -00:00:05. Note that in most file formats it is not possible to seek exactly, so FFmpeg will seek to the closest point before.</dd>
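The timecode form of -sseof mentioned in the description, written out (a sketch; placeholder names):

    # copy the last five seconds, addressed as a timecode rather than plain seconds
    ffmpeg -sseof -00:00:05 -i input.mov -c copy -map 0 output.mov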
@@ -1152,6 +1197,50 @@
 </div>
 <!-- ends Excerpt from end -->
 
+<!-- Trim start silence -->
+<label class="recipe" for="trim_start_silence">Trim silence from beginning of an audio file</label>
+<input type="checkbox" id="trim_start_silence">
+<div class="hiding">
+<h3>Remove silent portion at the beginning of an audio file</h3>
+<p><code>ffmpeg -i <em>input_file</em> -af silenceremove=start_threshold=-57dB:start_duration=1:start_periods=1 -c:a <em>your_codec_choice</em> -ar <em>your_sample_rate_choice</em> <em>output_file</em></code></p>
+<p>This command will automatically remove silence at the beginning of an audio file. The threshold for what qualifies as silence can be changed - this example uses anything under -57 dB, which is a decent level for accounting for analogue hiss.</p>
+<p><strong>Note:</strong> Since this command uses a filter, the audio stream will be re-encoded for the output. If you do not specify a sample rate or codec, this command will use the sample rate from your input and <a href='#codec-defaults'>the codec defaults for your output format</a>. Take care that you are getting your intended results!</p>
+<dl>
+<dt>ffmpeg</dt><dd>starts the command</dd>
+<dt>-i <em>input_file</em></dt><dd>path, name and extension of the input file (e.g. input_file.wav)</dd>
+<dt>-af silenceremove</dt><dd>applies the silence remove filter</dd>
+<dt>start_threshold=-57dB</dt><dd>tells the filter the threshold for what to call 'silence' for the purpose of removal. This can be increased or decreased as necessary.</dd>
+<dt>start_duration=1</dt><dd>This tells the filter how much non-silent audio must be detected before it stops trimming. With a value of <code>0</code> the filter would stop after detecting any non-silent audio. A setting of <code>1</code> allows it to continue trimming through short 'pops' such as those caused by engaging the playback device, or the recorded sound of a microphone being plugged in.</dd>
+<dt>start_periods=1</dt><dd>This tells the filter to trim the first example of silence it discovers from the beginning of the file. This value could be increased to remove subsequent silent portions from the file if desired.</dd>
+<dt>-c:a <em>your_codec_choice</em></dt><dd>This tells the filter what codec to use, and must be specified to avoid defaults. If you want 24 bit PCM, your value would be <code>-c:a pcm_s24le</code>.</dd>
+<dt><em>output_file</em></dt><dd>path, name and extension of the output file (e.g. output_file.wav).</dd>
+</dl>
+</div>
+<!-- ends Trim start silence -->
+
+<!-- Trim end silence -->
+<label class="recipe" for="trim_end_silence">Trim silence from the end of an audio file</label>
+<input type="checkbox" id="trim_end_silence">
+<div class="hiding">
+<h3>Remove silent portion from the end of an audio file</h3>
+<p><code>ffmpeg -i <em>input_file</em> -af areverse,silenceremove=start_threshold=-57dB:start_duration=1:start_periods=1,areverse -c:a <em>your_codec_choice</em> -ar <em>your_sample_rate_choice</em> <em>output_file</em></code></p>
+<p>This command will automatically remove silence at the end of an audio file. Since the <code>silenceremove</code> filter is best at removing silence from the beginning of files, this command uses the <code>areverse</code> filter twice to reverse the input, remove silence and then restore correct orientation.</p>
+<p><strong>Note:</strong> Since this command uses a filter, the audio stream will be re-encoded for the output. If you do not specify a sample rate or codec, this command will use the sample rate from your input and <a href='#codec-defaults'>the codec defaults for your output format</a>. Take care that you are getting your intended results!</p>
+<dl>
+<dt>ffmpeg</dt><dd>starts the command</dd>
+<dt>-i <em>input_file</em></dt><dd>path, name and extension of the input file (e.g. input_file.wav)</dd>
+<dt>-af areverse,</dt><dd>starts the filter chain with reversing the input</dd>
+<dt>silenceremove</dt><dd>applies the silence remove filter</dd>
+<dt>start_threshold=-57dB</dt><dd>tells the filter the threshold for what to call 'silence' for the purpose of removal. This can be increased or decreased as necessary.</dd>
+<dt>start_duration=1</dt><dd>This tells the filter how much non-silent audio must be detected before it stops trimming. With a value of <code>0</code> the filter would stop after detecting any non-silent audio. A setting of <code>1</code> allows it to continue trimming through short 'pops' such as those caused by engaging the playback device, or the recorded sound of a microphone being plugged in.</dd>
+<dt>start_periods=1</dt><dd>This tells the filter to trim the first example of silence it discovers.</dd>
+<dt>areverse</dt><dd>applies the audio reverse filter again to restore input to correct orientation.</dd>
+<dt>-c:a <em>your_codec_choice</em></dt><dd>This tells the filter what codec to use, and must be specified to avoid defaults. If you want 24 bit PCM, your value would be <code>-c:a pcm_s24le</code>.</dd>
+<dt><em>output_file</em></dt><dd>path, name and extension of the output file (e.g. output_file.wav).</dd>
+</dl>
+</div>
+<!-- ends Trim end silence -->
+
 </div>
 <div class="well">
 <h2 id="interlacing">Work with interlaced video</h2>
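The placeholders in these two recipes filled in for one concrete case (assuming a 24-bit/96 kHz WAV; values are illustrative):

    # trim leading silence while keeping 24-bit PCM and the 96 kHz sample rate
    ffmpeg -i input.wav -af silenceremove=start_threshold=-57dB:start_duration=1:start_periods=1 -c:a pcm_s24le -ar 96000 output.wav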
@@ -1205,7 +1294,7 @@
 <p><code>"yadif,format=yuv420p"</code> is an FFmpeg <a href="https://trac.ffmpeg.org/wiki/FilteringGuide#FiltergraphChainFilterrelationship" target="_blank">filtergraph</a>. Here the filtergraph is made up of one filter chain, which is itself made up of the two filters (separated by the comma).<br>
 The enclosing quote marks are necessary when you use spaces within the filtergraph, e.g. <code>-vf "yadif, format=yuv420p"</code>, and are included above as an example of good practice.</p>
 <p><strong>Note:</strong> FFmpeg includes several deinterlacers apart from <a href="https://ffmpeg.org/ffmpeg-filters.html#yadif-1" target="_blank">yadif</a>: <a href="https://ffmpeg.org/ffmpeg-filters.html#bwdif" target="_blank">bwdif</a>, <a href="https://ffmpeg.org/ffmpeg-filters.html#w3fdif" target="_blank">w3fdif</a>, <a href="https://ffmpeg.org/ffmpeg-filters.html#kerndeint" target="_blank">kerndeint</a>, and <a href="https://ffmpeg.org/ffmpeg-filters.html#nnedi" target="_blank">nnedi</a>.</p>
-<p>For more H.264 encoding options, see the latter section of the <a href="./index.html#transcode_h264">encode H.264 command</a>.</p>
+<p>For more H.264 encoding options, see the latter section of the <a href="#transcode_h264">encode H.264 command</a>.</p>
 <div class="sample-image">
 <h2>Example</h2>
 <p>Before and after deinterlacing:</p>
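The deinterlace command itself sits outside this hunk's context; built around the quoted filtergraph it would look roughly like this (an assumption, not the repo's exact line):

    # deinterlace with yadif and force a widely compatible pixel format
    ffmpeg -i input.mov -c:v libx264 -vf "yadif,format=yuv420p" output.mp4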
@@ -1291,7 +1380,7 @@
 <div class="hiding">
 <h3>Create centered, transparent text watermark</h3>
 <p>E.g. for creating access copies with your institution's name</p>
-<p><code>ffmpeg -i <em>input_file</em> -vf drawtext="fontfile=<em>font_path</em>:fontsize=<em>font_size</em>:text=<em>watermark_text</em>:fontcolor=<em>font_colour</em>:alpha=0.4:x=(w-text_w)/2:y=(h-text_h)/2" <em>output_file</em></code></p>
+<p><code>ffmpeg -i <em>input_file</em> -vf drawtext="fontfile=<em>font_path</em>:fontsize=<em>font_size</em>:text=<em>watermark_text</em>:fontcolor=<em>font_color</em>:alpha=0.4:x=(w-text_w)/2:y=(h-text_h)/2" <em>output_file</em></code></p>
 <dl>
 <dt>ffmpeg</dt><dd>starts the command</dd>
 <dt>-i <em>input_file</em></dt><dd>path, name and extension of the input file</dd>
@@ -1300,7 +1389,7 @@
 <dt>fontfile=<em>font_path</em></dt><dd>Set path to font. For example in macOS: <code>fontfile=/Library/Fonts/AppleGothic.ttf</code></dd>
 <dt>fontsize=<em>font_size</em></dt><dd>Set font size. <code>35</code> is a good starting point for SD. Ideally this value is proportional to video size, for example use ffprobe to acquire video height and divide by 14.</dd>
 <dt>text=<em>watermark_text</em></dt><dd>Set the content of your watermark text. For example: <code>text='FFMPROVISR EXAMPLE TEXT'</code></dd>
-<dt>fontcolor=<em>font_colour</em></dt><dd>Set colour of font. Can be a text string such as <code>fontcolor=white</code> or a hexadecimal value such as <code>fontcolor=0xFFFFFF</code></dd>
+<dt>fontcolor=<em>font_color</em></dt><dd>Set color of font. Can be a text string such as <code>fontcolor=white</code> or a hexadecimal value such as <code>fontcolor=0xFFFFFF</code></dd>
 <dt>alpha=0.4</dt><dd>Set transparency value.</dd>
 <dt>x=(w-text_w)/2:y=(h-text_h)/2</dt><dd>Sets <em>x</em> and <em>y</em> coordinates for the watermark. These relative values will centre your watermark regardless of video dimensions.</dd>
 </dl>
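The placeholder parameters above, filled in for one illustrative run (macOS font path and sample text taken from the entry's own examples):

    # centered translucent watermark; all values are example choices, not requirements
    ffmpeg -i input.mov -vf drawtext="fontfile=/Library/Fonts/AppleGothic.ttf:fontsize=35:text='FFMPROVISR EXAMPLE TEXT':fontcolor=white:alpha=0.4:x=(w-text_w)/2:y=(h-text_h)/2" output.mov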
@@ -1342,9 +1431,9 @@
 <dt>fontfile=<em>font_path</em></dt><dd>Set path to font. For example in macOS: <code>fontfile=/Library/Fonts/AppleGothic.ttf</code></dd>
 <dt>fontsize=<em>font_size</em></dt><dd>Set font size. <code>35</code> is a good starting point for SD. Ideally this value is proportional to video size, for example use ffprobe to acquire video height and divide by 14.</dd>
 <dt>timecode=<em>starting_timecode</em></dt><dd>Set the timecode to be displayed for the first frame. Timecode is to be represented as <code>hh:mm:ss[:;.]ff</code>. Colon escaping is determined by OS, for example in Ubuntu <code>timecode='09\\:50\\:01\\:23'</code>. Ideally, this value would be generated from the file itself using ffprobe.</dd>
-<dt>fontcolor=<em>font_colour</em></dt><dd>Set colour of font. Can be a text string such as <code>fontcolor=white</code> or a hexadecimal value such as <code>fontcolor=0xFFFFFF</code></dd>
+<dt>fontcolor=<em>font_color</em></dt><dd>Set color of font. Can be a text string such as <code>fontcolor=white</code> or a hexadecimal value such as <code>fontcolor=0xFFFFFF</code></dd>
 <dt>box=1</dt><dd>Enable box around timecode</dd>
-<dt>boxcolor=<em>box_colour</em></dt><dd>Set colour of box. Can be a text string such as <code>fontcolor=black</code> or a hexadecimal value such as <code>fontcolor=0x000000</code></dd>
+<dt>boxcolor=<em>box_color</em></dt><dd>Set color of box. Can be a text string such as <code>fontcolor=black</code> or a hexadecimal value such as <code>fontcolor=0x000000</code></dd>
 <dt>rate=<em>timecode_rate</em></dt><dd>Framerate of video. For example <code>25/1</code></dd>
 <dt>x=(w-text_w)/2:y=h/1.2</dt><dd>Sets <em>x</em> and <em>y</em> coordinates for the timecode. These relative values will horizontally centre your timecode in the bottom third regardless of video dimensions.</dd>
 <dt>"</dt><dd>quotation mark to end drawtext filter command</dd>
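Assembled from the parameter examples above into one hypothetical invocation (timecode, rate, and colors are the entry's own sample values):

    # burn a boxed timecode into the lower third of the frame
    ffmpeg -i input.mov -vf drawtext="fontfile=/Library/Fonts/AppleGothic.ttf:fontsize=35:timecode='09\\:50\\:01\\:23':fontcolor=white:box=1:boxcolor=black:rate=25/1:x=(w-text_w)/2:y=h/1.2" output.mov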
@@ -1646,6 +1735,29 @@
 </div>
 <!-- ends Side by Side Videos/Temporal Difference Filter -->
 
+<!-- xstack -->
+<label class="recipe" for="xstack">Use xstack to arrange output layout of multiple video sources</label>
+<input type="checkbox" id="xstack">
+<div class="hiding">
+<h3>This filter enables vertical and horizontal stacking of multiple video sources into one output.</h3>
+<p>This filter is useful for the creation of output windows such as the one utilized in <a href="https://github.com/amiaopensource/vrecord" target="_blank">vrecord.</a></p>
+<p><code>ffplay -f lavfi -i <em>testsrc</em> -vf "split=3[a][b][c],[a][b][c]xstack=inputs=3:layout=0_0|0_h0|0_h0+h1[out]"</code></p>
+<p>The following example uses the 'testsrc' virtual input combined with the <a href="https://ffmpeg.org/ffmpeg-filters.html#split_002c-asplit" target="_blank">split filter</a> to generate the multiple inputs.</p>
+<dl>
+<dt>ffplay</dt><dd>starts the command</dd>
+<dt>-f lavfi -i testsrc</dt><dd>tells ffplay to use the <a href="https://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">Libavfilter's virtual device input 'testsrc'</a></dd>
+<dt>-vf</dt><dd>tells ffmpeg that you will be applying a filter chain to the input</dd>
+<dt>split=3[a][b][c],</dt><dd>splits the input into three separate signals within the filter graph, named a, b and c respectively. (These are variables and any names could be used as long as they are kept consistent in following steps). The <code>,</code> separates this from the next part of the filter chain.</dd>
+<dt>[a][b][c]xstack=inputs=3:</dt><dd>tells ffmpeg that you will be using the xstack filter on the three named inputs a, b and c. The final <code>:</code> is a necessary divider between the number of inputs, and the orientation of outputs portion of the xstack command.</dd>
+<dt>layout=0_0|0_h0|0_h0+h1</dt><dd>This is where the locations of the video sources in the output stack are designated. The locations are specified in order of input (so in this example <code>0_0</code> corresponds to input <code>[a]</code>). Inputs must be separated with a <code>|</code>. The two numbers represent columns and rows, with counting starting at zero rather than one. In this example, <code>0_0</code> means that input <code>[a]</code> is placed at the first row of the first column in the output. <code>0_h0</code> places the next input in the first column, at a row corresponding with the height of the first input. <code>0_h0+h1</code> places the final input in the first column, at a row corresponding with the height of the first input plus the height of the second input. This has the effect of creating a vertical stack of the three inputs. This could be made a horizontal stack by changing this portion of the command to <code>layout=0_0|w0_0|w0+w1_0</code>.</dd>
+<dt>[out]</dt><dd>this ends the filter chain and designates the final output.</dd>
+</dl>
+<div class="sample-image">
+</div>
+<p class="link"></p>
+</div>
+<!-- ends xstack -->
+
 </div>
 <div class="well">
 <h2 id="metadata">View or strip metadata</h2>
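The horizontal-stack variant mentioned at the end of the layout explanation, written out against the same test source:

    # three inputs side by side instead of stacked vertically
    ffplay -f lavfi -i testsrc -vf "split=3[a][b][c],[a][b][c]xstack=inputs=3:layout=0_0|w0_0|w0+w1_0[out]"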
@@ -1728,8 +1840,8 @@
|
||||
<input type="checkbox" id="batch_processing_win">
|
||||
<div class="hiding">
|
||||
<h3>Create PowerShell script to batch process with FFmpeg</h3>
<p>As of Windows 10, it is possible to run Bash via <a href="https://msdn.microsoft.com/en-us/commandline/wsl/about" target="_blank">Bash on Ubuntu on Windows</a>, allowing you to use <a href="#batch_processing_bash">bash scripting</a>. To enable Bash on Windows, see <a href="https://msdn.microsoft.com/en-us/commandline/wsl/install_guide" target="_blank">these instructions</a>.</p>
<p>On Windows, the primary native command line program is <strong>PowerShell</strong>. PowerShell scripts are plain text files saved with a .ps1 extension. This entry explains how they work with the example of a PowerShell script named “rewrap-mp4.ps1”, which rewraps .mp4 files in a given directory to .mkv files.</p>
<p>“rewrap-mp4.ps1” contains the following text:</p>
<pre class="codeblock"><code>$inputfiles = ls *.mp4
foreach ($file in $inputfiles) {
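    # The diff excerpt truncates the script at this point; a minimal sketch of the
    # rest of the loop, assuming each .mp4 is rewrapped to .mkv with all streams copied:
    $output = [io.path]::ChangeExtension($file, '.mkv')
    ffmpeg -i $file -map 0 -c copy $output
}</code></pre>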

@@ -1825,7 +1937,7 @@

<li>44.1 kHz: "asetnsamples=n=44100"</li>
<li>96 kHz: "asetnsamples=n=96000"</li>
</ul>
<p><strong>Note:</strong> This filter transcodes audio to 16 bit PCM by default, and the generated framemd5s will represent that value; validating these framemd5s will require using the same default settings. Alternatively, when your file has a different quantization rate (e.g. 24 bit), you might add the audio codec <code>-c:a pcm_s24le</code> to the command, for compatibility with other tools such as <a href="https://mediaarea.net/BWFMetaEdit" target="_blank">BWF MetaEdit</a>. A sketch of that variant follows.</p>
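<p>For example, a sketch of the 24-bit variant, based on the audio framemd5 command given in the project's recipes list and assuming 48 kHz material:</p>
<p><code>ffmpeg -i <em>input_file</em> -af "asetnsamples=n=48000" -c:a pcm_s24le -f framemd5 -vn <em>output_file</em></code></p>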
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <em>input_file</em></dt><dd>path, name and extension of the input file</dd>

@@ -1856,11 +1968,30 @@

<dt>-c:a copy</dt><dd>ensures that FFmpeg will not transcode the audio to a different codec before generating the MD5 (by default FFmpeg will use 16 bit PCM for audio MD5s).</dd>
<dt><em>output_file_2</em></dt><dd>is the output file for the audio stream MD5.</dd>
</dl>
<p><strong>Note:</strong> The MD5s generated by running this command on WAV files are compatible with those embedded by the <a href="https://mediaarea.net/BWFMetaEdit" target="_blank">BWF MetaEdit</a> tool and can be compared.</p>
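<p>For reference, the full command that the list above explains, as given in the project's recipes list (recipes.txt), is:</p>
<p><code>ffmpeg -i <em>input_file</em> -map 0:v:0 -c:v copy -f md5 <em>output_file_1</em> -map 0:a:0 -c:a copy -f md5 <em>output_file_2</em></code></p>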
<p class="link"></p>
</div>
<!-- ends Create stream md5s -->

<!-- Get checksum for video/audio stream -->
<label class="recipe" for="get_stream_checksum">Get checksum for video/audio stream</label>
<input type="checkbox" id="get_stream_checksum">
<div class="hiding">
<h3>Get checksum for video/audio stream</h3>
<p><code>ffmpeg -loglevel error -i <em>input_file</em> -map 0:v:0 -f hash -hash md5 -</code></p>
<p>This command will perform a fixity check on a specified audio or video stream of the file, useful for checking that the content within a video has not changed even if the container format has changed.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-loglevel error</dt><dd>sets the verbosity of logging to show all errors</dd>
<dt>-i <em>input_file</em></dt><dd>path, name and extension of the input file</dd>
<dt>-map 0:v:0</dt><dd>designates the first video stream as the stream on which to perform this hash generation operation. <code>-map 0</code> can be used to run the operation on all streams; see the variant after this list for hashing an audio stream.</dd>
<dt>-f hash -hash md5</dt><dd>produce a checksum hash, and set the hash algorithm to md5. See the official <a href="https://ffmpeg.org/ffmpeg-formats.html#hash" target="_blank">documentation on hash</a> for other algorithms.</dd>
<dt>-</dt><dd>FFmpeg syntax requires a specified output, and <code>-</code> is just a placeholder. No file is actually created.</dd>
</dl>
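<p>For example, a sketch of the same check run on the first audio stream instead (only the <code>-map</code> value changes):</p>
<p><code>ffmpeg -loglevel error -i <em>input_file</em> -map 0:a:0 -f hash -hash md5 -</code></p>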
<p class="link"></p>
</div>
<!-- ends Get checksum for video/audio stream -->

<!-- QCTools Report -->
<label class="recipe" for="qctools">QCTools report (with audio)</label>
<input type="checkbox" id="qctools">

@@ -1985,7 +2116,7 @@

<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-f lavfi</dt><dd>tells FFmpeg to use the <a href="https://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">libavfilter</a> input virtual device</dd>
<dt>-i testsrc=size=720x576:rate=25</dt><dd>asks for the testsrc filter pattern as input. Adjusting the <code>size</code> and <code>rate</code> options allows you to choose a specific frame size and framerate.<br>
The different test patterns that can be generated are listed <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc_002c-testsrc2_002c-yuvtestsrc" target="_blank">here</a>.</dd>
<dt>-c:v v210</dt><dd>transcodes video from rawvideo to 10-bit Uncompressed Y′C<sub>B</sub>C<sub>R</sub> 4:2:2. Alter this setting to set your desired codec.</dd>
<dt>-t 10</dt><dd>specifies recording time of 10 seconds</dd>

@@ -2000,12 +2131,12 @@

<input type="checkbox" id="play_hd_smpte">
<div class="hiding">
<h3>Play HD SMPTE bars</h3>
<p>Test an HD video projector by playing the SMPTE color bars pattern.</p>
<p><code>ffplay -f lavfi -i smptehdbars=size=1920x1080</code></p>
<dl>
<dt>ffplay</dt><dd>starts the command</dd>
<dt>-f lavfi</dt><dd>tells ffplay to use the <a href="https://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">Libavfilter</a> input virtual device</dd>
<dt>-i smptehdbars=size=1920x1080</dt><dd>asks for the <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc_002c-testsrc2_002c-yuvtestsrc" target="_blank">smptehdbars filter pattern</a> as input and sets the HD resolution. This generates a color bars pattern, based on the SMPTE RP 219–2002.</dd>
</dl>
<p class="link"></p>
</div>

@@ -2016,12 +2147,12 @@

<input type="checkbox" id="play_vga_smpte">
<div class="hiding">
<h3>Play VGA SMPTE bars</h3>
<p>Test a VGA (SD) video projector by playing the SMPTE color bars pattern.</p>
<p><code>ffplay -f lavfi -i smptebars=size=640x480</code></p>
<dl>
<dt>ffplay</dt><dd>starts the command</dd>
<dt>-f lavfi</dt><dd>tells ffplay to use the <a href="https://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">Libavfilter</a> input virtual device</dd>
<dt>-i smptebars=size=640x480</dt><dd>asks for the <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc_002c-testsrc2_002c-yuvtestsrc" target="_blank">smptebars filter pattern</a> as input and sets the VGA (SD) resolution. This generates a color bars pattern, based on the SMPTE Engineering Guideline EG 1–1990.</dd>
</dl>
<p class="link"></p>
</div>

@@ -2091,7 +2222,7 @@

<div class="hiding">
<h3>Conway's Game of Life</h3>
<p>Simulates <a href="https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life" target="_blank">Conway's Game of Life</a></p>
<p><code>ffplay -f lavfi life=s=300x200:mold=10:r=60:ratio=0.1:death_color=#c83232:life_color=#00ff00,scale=1200:800</code></p>
<dl>
<dt>ffplay</dt><dd>starts the command</dd>
<dt>-f lavfi</dt><dd>tells ffplay to use the <a href="https://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">Libavfilter</a> input virtual device</dd>

@@ -2099,13 +2230,13 @@

<dt>:</dt><dd>indicates there’s another parameter coming</dd>
<dt>mold=10:r=60:ratio=0.1</dt><dd>sets up the rules of the game: cell mold speed, video rate, and random fill ratio</dd>
<dt>:</dt><dd>indicates there’s another parameter coming</dd>
<dt>death_color=#c83232:life_color=#00ff00</dt><dd>specifies color for cell death and cell life; mold_color can also be set</dd>
<dt>,</dt><dd>the comma signifies the end of the video source section and the start of the filter section</dd>
<dt>scale=1200:800</dt><dd>scale to 1200 width and 800 height</dd>
</dl>
<img src="img/life.gif" alt="GIF of above command">
<p>To save a portion of the stream instead of playing it back infinitely, use the following command:</p>
<p><code>ffmpeg -f lavfi -i life=s=300x200:mold=10:r=60:ratio=0.1:death_color=#c83232:life_color=#00ff00,scale=1200:800 -t 5 <em>output_file</em></code></p>
<p class="link"></p>
</div>
<!-- ends Game of Life -->

@@ -2409,7 +2540,7 @@

<input type="checkbox" id="find-offset">
<div class="hiding">
<h3>Find Drive Offset for Exact CD Ripping</h3>
<p>If you want to make CD rips that can be verified via checksums to other rips of the same content, you need to know the offset of your CD drive. Put simply, different models of CD drives have different offsets, meaning they start reading in slightly different locations. This must be compensated for in order for files created on different (model) drives to generate the same checksum. For a more detailed explanation of drive offsets see the explanation <a href="https://dbpoweramp.com/spoons-audio-guide-cd-ripping.htm" target="_blank">here.</a> In order to find your drive offset, first you will need to know exactly what model your drive is, then you can look it up in the list of drive offsets by Accurate Rip.</p>
<p>Often it can be difficult to tell what model your drive is simply by looking at it - it may be housed inside your computer or have external branding that is different from the actual drive manufacturer. For this reason, it can be useful to query your drive with CD ripping software in order to ID it. The following commands should give you a better idea of what drive you have.</p>
<p><strong>Cdda2wav:</strong> <code>cdda2wav -scanbus</code> or simply <code>cdda2wav</code></p>
<p><strong>CD Paranoia:</strong> <code>cdparanoia -vsQ</code></p>

@@ -2460,6 +2591,20 @@

<p class="link"></p>
</div>
<!-- ends Rip with CDDA2WAV -->

<!-- Check for CD Emphasis -->
<label class="recipe" for="cd-emph-check">Check/Compensate for CD Emphasis</label>
<input type="checkbox" id="cd-emph-check">
<div class="hiding">
<h3>Check/Compensate for CD Emphasis</h3>
<p>While somewhat rare, certain CDs had 'emphasis' applied as a form of noise reduction. This seems to mostly affect early (1980s) era CDs and some CDs pressed in Japan. Emphasis is part of the <a href="https://en.wikipedia.org/wiki/Compact_Disc_Digital_Audio#Standard" target="_blank">Red Book standard</a> and, if present, must be compensated for to ensure accurate playback. CDs that use emphasis contain flags on tracks that tell the CD player to de-emphasize the audio on playback. When ripping a CD with emphasis, it is important to take this into account and either apply de-emphasis while ripping, or, if storing a 'flat' copy, create another de-emphasized listening copy.</p>
<p>The following commands will output information about the presence of emphasis when run on a target CD:</p>
<p><strong>Cdda2wav:</strong> <code>cdda2wav -J</code></p>
<p><strong>CD Paranoia:</strong> <code>cdparanoia -Q</code></p>
<p>In order to compensate for emphasis during ripping while using Cdda2wav, the <code>-T</code> flag can be added to the <a href="#cdda2wav">standard ripping command</a>. For a recipe to compensate for a flat rip, see the section on <a href="#cd_eq">de-emphasizing with FFmpeg</a>.</p>
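<p>For reference, the FFmpeg de-emphasis recipe mentioned above boils down to the <code>aemphasis</code> filter in CD mode, as given in the project's recipes list:</p>
<p><code>ffmpeg -i <em>input_file</em> -af aemphasis=type=cd <em>output_file</em></code></p>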
<p class="link"></p>
</div>
<!-- ends Check for CD Emphasis -->
</div>
<!-- ends CDDA Tools -->

35
js/js.js
@@ -1,23 +1,38 @@

$(document).ready(function() {

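  // helper (added comment): writes the "Link to this command" permalink into a recipe's .link element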
  function appendLink(id) {
    $(id).next('div').find('.link').empty();
    $(id).next('div').find('.link').append("<small>Link to this command: <a href='https://amiaopensource.github.io/ffmprovisr/index.html" + id + "'>https://amiaopensource.github.io/ffmprovisr/index.html" + id + "</a></small>");
  }
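
  // helper (added comment): checks a recipe's checkbox open, scrolls to it, and appends its permalink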
  function moveToRecipe(id) {
    document.getElementById(id.substring(1)).checked = true;
    $('html, body').animate({ scrollTop: $(id).offset().top }, 1000);
    appendLink(id)
  }

  // open recipe window if a hash is found in URL
  if (window.location.hash) {
    id = window.location.hash
    moveToRecipe(id)
  }

  // add hash URL when recipe is opened
  $('label[class="recipe"]').on("click", function(){
    id = $(this).attr("for");
    window.location.hash = ('#' + id)
    appendLink('#' + id)
  })

  // open recipe when clicked
  $('a').on("click", function(){
    intralink = $(this).attr("href")
    if (intralink[0] == "#") {
      moveToRecipe(intralink)
    }
  })

  // open all windows if button is clicked
  $('#open-all').on("click", function(){
    $('input[type=checkbox]').each(function(){
      this.checked = !this.checked;
37
readme.md
@@ -12,15 +12,36 @@ To facilitate better understanding of FFmpeg through collaborative sharing of us

The code is found in the gh-pages branch (the default primary branch). Readme is right here. You can see the site live on [GitHub pages](http://amiaopensource.github.io/ffmprovisr).

You can also install the latest [release](https://github.com/amiaopensource/ffmprovisr/releases) on your computer with the two commands:
```
brew tap amiaopensource/amiaos
brew install ffmprovisr
```
and then call it locally with the command:
```
ffmprovisr
```
This works currently under macOS, Linux and the Linux apps on Windows (Ubuntu and Debian tested). On classic Windows you can install the latest [release](https://github.com/amiaopensource/ffmprovisr/releases) manually and then open `index.html` in a browser.

#### Parseable list of the commands

A list of all recipes in an easily parseable [ASCII text](recipes.txt) format is provided as well. For each recipe it contains the title and the command, in the following format:

```
# title of recipe 1
ffmpeg command 1
# title of recipe 2
ffmpeg command 2

...

# title of recipe n-1
ffmpeg command n-1
# title of recipe n
ffmpeg command n
```

The [one-liner](scripts/get_recipe_list) used to generate this list is in the `scripts` folder.

## How do I contribute?

@@ -52,6 +73,7 @@ You can read our contributor code of conduct [here](https://github.com/amiaopens

*Code Contributors*:
ablwr (Ashley)
bastibeckr (Basti Becker)
b00giehead (Joanna White)
bturkus
dericed (Dave Rice)
edsu (Ed Summers)

@@ -61,6 +83,7 @@ kfrn (Katherine Frances Nagels)

kgrons (Kathryn Gronsbell)
kieranjol (Kieran O'Leary)
llogan (Lou)
mgiraldo (Mauricio Giraldo)
pjotrek-b (Peter B.)
privatezero (Andrew Weaver)
retokromer (Reto Kromer)

@@ -70,6 +93,7 @@ rfraimow

ablwr (Ashley)
audiovisualopen
bastibeckr (Basti Becker)
b00giehead (Joanna White)
brainwane (Sumana Harihareswara)
bturkus
dericed (Dave Rice)

@@ -88,6 +112,7 @@ kfrn (Katherine Frances Nagels)

kgrons (Kathryn Gronsbell)
kieranjol (Kieran O'Leary)
llogan (Lou)
mgiraldo (Mauricio Giraldo)
mulvya
nkrabben (Nick Krabbenhoeft)
pjotrek-b (Peter B.)

@@ -99,9 +124,9 @@ ross-spencer (Ross Spencer)

todrobbins (Tod Robbins)

Repo: amiaopensource/ffmprovisr
Code Contributors: 17
All Contributors: 32
Last updated: 2019-02-11

## AVHack Team

210
recipes.txt
Normal file
@@ -0,0 +1,210 @@

# Basic rewrap command
ffmpeg -i input_file.ext -c copy -map 0 output_file.ext
# Rewrap DV video to .dv file
ffmpeg -i input_file -f rawvideo -c:v copy output_file.dv
# Transcode to deinterlaced Apple ProRes LT
ffmpeg -i input_file -c:v prores -profile:v 1 -vf yadif -c:a pcm_s16le output_file.mov
# Transcode to an H.264 access file
ffmpeg -i input_file -c:v libx264 -pix_fmt yuv420p -c:a aac output_file
# Transcode from DCP to an H.264 access file
ffmpeg -i input_video_file.mxf -i input_audio_file.mxf -c:v libx264 -pix_fmt yuv420p -c:a aac output_file.mp4
# Transcode your file with the FFV1 Version 3 Codec in a Matroska container
ffmpeg -i input_file -map 0 -dn -c:v ffv1 -level 3 -g 1 -slicecrc 1 -slices 16 -c:a copy output_file.mkv -f framemd5 -an framemd5_output_file
# Convert DVD to H.264
ffmpeg -i concat:input_file_1\|input_file_2\|input_file_3 -c:v libx264 -c:a aac output_file.mp4
# Transcode to an H.265/HEVC MP4
ffmpeg -i input_file -c:v libx265 -pix_fmt yuv420p -c:a copy output_file
# Transcode to an Ogg Theora
ffmpeg -i input_file -acodec libvorbis -b:v 690k output_file
# Convert WAV to MP3
ffmpeg -i input_file.wav -write_id3v1 1 -id3v2_version 3 -dither_method rectangular -out_sample_rate 48k -qscale:a 1 output_file.mp3
# Generate two access MP3s (with and without copyright).
ffmpeg -i input_file -i input_file_to_append -filter_complex "[0:a:0]asplit=2[a][b];[b]afifo[bb];[1:a:0][bb]concat=n=2:v=0:a=1[concatout]" -map "[a]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 output_file.mp3 -map "[concatout]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 output_file_appended.mp3
# Convert WAV to AAC/MP4
ffmpeg -i input_file.wav -c:a aac -b:a 128k -dither_method rectangular -ar 44100 output_file.mp4
# Transform 4:3 aspect ratio into 16:9 with pillarbox
ffmpeg -i input_file -filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2" -c:a copy output_file
# Transform 16:9 aspect ratio video into 4:3 with letterbox
ffmpeg -i input_file -filter:v "pad=iw:iw*3/4:(ow-iw)/2:(oh-ih)/2" -c:a copy output_file
# Flip video image
ffmpeg -i input_file -filter:v "hflip,vflip" -c:a copy output_file
# Transform SD to HD with pillarbox
ffmpeg -i input_file -filter:v "colormatrix=bt601:bt709, scale=1440:1080:flags=lanczos, pad=1920:1080:240:0" -c:a copy output_file
# Change display aspect ratio without re-encoding
ffmpeg -i input_file -c:v copy -aspect 4:3 output_file
# Convert colorspace of video
ffmpeg -i input_file -c:v libx264 -vf colormatrix=src:dst output_file
# Modify image and sound speed
ffmpeg -i input_file -r output_fps -filter_complex "[0:v]setpts=input_fps/output_fps*PTS[v]; [0:a]atempo=output_fps/input_fps[a]" -map "[v]" -map "[a]" output_file
# Synchronize video and audio streams
ffmpeg -i input_file -itsoffset 0.125 -i input_file -map 1:v -map 0:a -c copy output_file
# Clarify stream properties
ffprobe input_file -show_streams
# Crop video
ffmpeg -i input_file -vf "crop=width:height" output_file
# Change video color to black and white
ffmpeg -i input_file -filter_complex hue=s=0 -c:a copy output_file
# Extract audio without loss from an AV file
ffmpeg -i input_file -c:a copy -vn output_file
# Combine audio tracks
ffmpeg -i input_file -filter_complex "[0:a:0][0:a:1]amerge[out]" -map 0:v -map "[out]" -c:v copy -shortest output_file
# Inverts the audio phase of the second channel
ffmpeg -i input_file -af pan="stereo|c0=c0|c1=-1*c1" output_file
# Calculate Loudness Levels
ffmpeg -i input_file -af loudnorm=print_format=json -f null -
# RIAA Equalization
ffmpeg -i input_file -af aemphasis=type=riaa output_file
# Reverse CD Pre-Emphasis
ffmpeg -i input_file -af aemphasis=type=cd output_file
# One Pass Loudness Normalization
ffmpeg -i input_file -af loudnorm=dual_mono=true -ar 48k output_file
# Two Pass Loudness Normalization
ffmpeg -i input_file -af loudnorm=dual_mono=true:measured_I=input_i:measured_TP=input_tp:measured_LRA=input_lra:measured_thresh=input_thresh:offset=target_offset:linear=true -ar 48k output_file
# Fix A/V sync issues by resampling audio
ffmpeg -i input_file -c:v copy -c:a pcm_s16le -af "aresample=async=1000" output_file
# Join (concatenate) two or more files of the same type
ffmpeg -f concat -i mylist.txt -c copy output_file
# Join (concatenate) two or more files of different types
ffmpeg -i input_1.avi -i input_2.mp4 -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[video_out][audio_out]" -map "[video_out]" -map "[audio_out]" output_file
# Split one file into several smaller segments
ffmpeg -i input_file -c copy -map 0 -f segment -segment_time 60 -reset_timestamps 1 output_file-%03d.mkv
# Trim file
ffmpeg -i input_file -ss 00:02:00 -to 00:55:00 -c copy -map 0 output_file
# Create an excerpt, starting from the beginning of the file
ffmpeg -i input_file -t 5 -c copy -map 0 output_file
# Create a new file with the first five seconds trimmed off the original
ffmpeg -i input_file -ss 5 -c copy -map 0 output_file
# Create a new file with the final five seconds of the original
ffmpeg -sseof -5 -i input_file -c copy -map 0 output_file
# Trim silence from beginning of an audio file
ffmpeg -i input_file -af silenceremove=start_threshold=-57dB:start_duration=1:start_periods=1 -c:a your_codec_choice -ar your_sample_rate_choice output_file
# Trim silence from the end of an audio file
ffmpeg -i input_file -af areverse,silenceremove=start_threshold=-57dB:start_duration=1:start_periods=1,areverse -c:a your_codec_choice -ar your_sample_rate_choice output_file
# Upscaled, pillar-boxed HD H.264 access files from SD NTSC source
ffmpeg -i input_file -c:v libx264 -filter:v "yadif, scale=1440:1080:flags=lanczos, pad=1920:1080:(ow-iw)/2:(oh-ih)/2, format=yuv420p" output_file
# Deinterlace video
ffmpeg -i input_file -c:v libx264 -vf "yadif,format=yuv420p" output_file
# Inverse telecine
ffmpeg -i input_file -c:v libx264 -vf "fieldmatch,yadif,decimate" output_file
# Set field order for interlaced video
ffmpeg -i input_file -c:v video_codec -filter:v setfield=tff output_file
# Identify interlacement patterns in a video file
ffmpeg -i input_file -filter:v idet -f null -
# Create opaque centered text watermark
ffmpeg -i input_file -vf drawtext="fontfile=font_path:fontsize=font_size:text=watermark_text:fontcolor=font_color:alpha=0.4:x=(w-text_w)/2:y=(h-text_h)/2" output_file
# Overlay image watermark on video
ffmpeg -i input_video_file -i input_image_file -filter_complex overlay=main_w-overlay_w-5:5 output_file
# Burn in timecode
ffmpeg -i input_file -vf drawtext="fontfile=font_path:fontsize=font_size:timecode=starting_timecode:fontcolor=font_colour:box=1:boxcolor=box_colour:rate=timecode_rate:x=(w-text_w)/2:y=h/1.2" output_file
# Embed subtitles
ffmpeg -i input_file -i subtitles_file -c copy -c:s mov_text output_file
# Export one thumbnail per video file
ffmpeg -i input_file -ss 00:00:20 -vframes 1 thumb.png
# Export many thumbnails per video file
ffmpeg -i input_file -vf fps=1/60 out%d.png
# Create GIF from still images
ffmpeg -f image2 -framerate 9 -pattern_type glob -i "input_image_*.jpg" -vf scale=250x250 output_file.gif
# Create GIF from a video
ffmpeg -ss HH:MM:SS -i input_file -filter_complex "fps=10,scale=500:-1:flags=lanczos,palettegen" -t 3 palette.png
ffmpeg -ss HH:MM:SS -i input_file -i palette.png -filter_complex "[0:v]fps=10, scale=500:-1:flags=lanczos[v], [v][1:v]paletteuse" -t 3 -loop 6 output_file
# Transcode an image sequence into uncompressed 10-bit video
ffmpeg -f image2 -framerate 24 -i input_file_%06d.ext -c:v v210 output_file
# Create video from image and audio
ffmpeg -r 1 -loop 1 -i image_file -i audio_file -acodec copy -shortest -vf scale=1280:720 output_file
# Audio Bitscope
ffplay -f lavfi "amovie=input_file, asplit=2[out1][a], [a]abitscope=colors=purple|yellow[out0]"
# Play a graphical output showing decibel levels of an input file
ffplay -f lavfi "amovie='input.mp3', astats=metadata=1:reset=1, adrawgraph=lavfi.astats.Overall.Peak_level:max=0:min=-30.0:size=700x256:bg=Black[out]"
# Identify pixels out of broadcast range
ffplay -f lavfi "movie='input.mp4', signalstats=out=brng:color=cyan[out]"
# Vectorscope from video to screen
ffplay input_file -vf "split=2[m][v], [v]vectorscope=b=0.7:m=color3:g=green[v], [m][v]overlay=x=W-w:y=H-h"
# Side by Side Videos/Temporal Difference Filter
ffmpeg -i input01 -i input02 -filter_complex "[0:v:0]tblend=all_mode=difference128[a];[1:v:0]tblend=all_mode=difference128[b];[a][b]hstack[out]" -map [out] -f nut -c:v rawvideo - | ffplay -
# Use xstack to arrange output layout of multiple video sources
ffplay -f lavfi -i testsrc -vf "split=3[a][b][c],[a][b][c]xstack=inputs=3:layout=0_0|0_h0|0_h0+h1[out]"
# Pull specs from video file
ffprobe -i input_file -show_format -show_streams -show_data -print_format xml
# Strip metadata
ffmpeg -i input_file -map_metadata -1 -c:v copy -c:a copy output_file
# Batch processing (Mac/Linux)
for file in *.mxf; do ffmpeg -i "$file" -map 0 -c copy "${file%.mxf}.mov"; done
# Check decoder errors
ffmpeg -i input_file -f null -
# Check FFV1 fixity
ffmpeg -report -i input_file -f null -
# Create MD5 checksums (video frames)
ffmpeg -i input_file -f framemd5 -an output_file
# Create MD5 checksums (audio samples)
ffmpeg -i input_file -af "asetnsamples=n=48000" -f framemd5 -vn output_file
# Create MD5 checksum(s) for A/V stream data only
ffmpeg -i input_file -map 0:v:0 -c:v copy -f md5 output_file_1 -map 0:a:0 -c:a copy -f md5 output_file_2
# Get checksum for video/audio stream
ffmpeg -loglevel error -i input_file -map 0:v:0 -f hash -hash md5 -
# QCTools report (with audio)
ffprobe -f lavfi -i "movie=input_file:s=v+a[in0][in1], [in0]signalstats=stat=tout+vrep+brng, cropdetect=reset=1:round=1, idet=half_life=1, split[a][b];[a]field=top[a1];[b]field=bottom, split[b1][b2];[a1][b1]psnr[c1];[c1][b2]ssim[out0];[in1]ebur128=metadata=1, astats=metadata=1:reset=1:length=0.4[out1]" -show_frames -show_versions -of xml=x=1:q=1 -noprivate | gzip > input_file.qctools.xml.gz
# QCTools report (no audio)
ffprobe -f lavfi -i "movie=input_file,signalstats=stat=tout+vrep+brng, cropdetect=reset=1:round=1, idet=half_life=1, split[a][b];[a]field=top[a1];[b]field=bottom,split[b1][b2];[a1][b1]psnr[c1];[c1][b2]ssim" -show_frames -show_versions -of xml=x=1:q=1 -noprivate | gzip > input_file.qctools.xml.gz
# Read/Extract EIA-608 Closed Captioning
ffprobe -f lavfi -i movie=input_file,readeia608 -show_entries frame=pkt_pts_time:frame_tags=lavfi.readeia608.0.line,lavfi.readeia608.0.cc,lavfi.readeia608.1.line,lavfi.readeia608.1.cc -of csv > input_file.csv
# Make a mandelbrot test pattern video
ffmpeg -f lavfi -i mandelbrot=size=1280x720:rate=25 -c:v libx264 -t 10 output_file
# Make a SMPTE bars test pattern video
ffmpeg -f lavfi -i smptebars=size=720x576:rate=25 -c:v prores -t 10 output_file
# Make a test pattern video
ffmpeg -f lavfi -i testsrc=size=720x576:rate=25 -c:v v210 -t 10 output_file
# Play HD SMPTE bars
ffplay -f lavfi -i smptehdbars=size=1920x1080
# Play VGA SMPTE bars
ffplay -f lavfi -i smptebars=size=640x480
# Generate a sine wave test audio file
ffmpeg -f lavfi -i "sine=frequency=1000:sample_rate=48000:duration=5" -c:a pcm_s16le output_file.wav
# SMPTE bars + Sine wave audio
ffmpeg -f lavfi -i "smptebars=size=720x576:rate=25" -f lavfi -i "sine=frequency=1000:sample_rate=48000" -c:a pcm_s16le -t 10 -c:v ffv1 output_file
# Make a broken file
ffmpeg -i input_file -bsf noise=1 -c copy output_file
# Conway's Game of Life
ffplay -f lavfi life=s=300x200:mold=10:r=60:ratio=0.1:death_color=#C83232:life_color=#00ff00,scale=1200:800
# Play video with OCR
ffplay input_file -vf "ocr,drawtext=fontfile=/Library/Fonts/Andale Mono.ttf:text=%{metadata\\\:lavfi.ocr.text}:fontcolor=white"
# Export OCR from video to screen
ffprobe -show_entries frame_tags=lavfi.ocr.text -f lavfi -i "movie=input_file,ocr"
# Compare Video Fingerprints
ffmpeg -i input_one -i input_two -filter_complex signature=detectmode=full:nb_inputs=2 -f null -
# Generate Video Fingerprint
ffmpeg -i input -vf signature=format=xml:filename="output.xml" -an -f null -
# Play an image sequence
ffplay -framerate 5 input_file_%06d.ext
# Split audio and video tracks
ffmpeg -i input_file -map 0:v:0 video_output_file -map 0:a:0 audio_output_file
# Merge audio and video tracks
ffmpeg -i video_file -i audio_file -map 0:v -map 1:a -c copy output_file
# Create ISO files for DVD access
ffmpeg -i input_file -aspect 4:3 -target ntsc-dvd output_file.mpg
# CSV with timecodes and YDIF
ffprobe -f lavfi -i movie=input_file,signalstats -show_entries frame=pkt_pts_time:frame_tags=lavfi.signalstats.YDIF -of csv
# Cover head switching noise
ffmpeg -i input_file -filter:v drawbox=w=iw:h=7:y=ih-h:t=max output_file
# Record and live-stream simultaneously
ffmpeg -re -i ${INPUTFILE} -map 0 -flags +global_header -vf scale="1280:-1,format=yuv420p" -pix_fmt yuv420p -level 3.1 -vsync passthrough -crf 26 -g 50 -bufsize 3500k -maxrate 1800k -c:v libx264 -c:a aac -b:a 128000 -r:a 44100 -ac 2 -t ${STREAMDURATION} -f tee "[movflags=+faststart]${TARGETFILE}|[f=flv]${STREAMTARGET}"
# View FFmpeg subprogram information
ffmpeg -h type=name
# Rip a CD with CD Paranoia
cdparanoia -L -B -O [Drive Offset] [Starting Track Number]-[Ending Track Number] output_file.wav
# Rip a CD with Cdda2wav
cdda2wav -L0 -t all -cuefile -paranoia paraopts=retries=200,readahead=600,minoverlap=sectors-per-request-1 -verbose-level all output.wav
# Compare two images
compare -metric ae image1.ext image2.ext null:
# Create thumbnails of images
mogrify -resize 80x80 -format jpg -quality 75 -path thumbs *.jpg
# Creates grid of images from text file
montage @list.txt -tile 6x12 -geometry +0+0 output_grid.jpg
# Get file signature data
convert -verbose input_file.ext | grep -i signature
# Removes exif metadata
mogrify -path ./stripped/ -strip *.jpg
# Resizes image to specific pixel width
convert input_file.ext -resize 750 output_file.ext
# Transcoding to/from FLAC
flac --best --keep-foreign-metadata --preserve-modtime --verify input.wav
flac --decode --keep-foreign-metadata --preserve-modtime --verify input.flac
1
scripts/get_recipe_list
Normal file
@@ -0,0 +1 @@
curl https://amiaopensource.github.io/ffmprovisr/ -s | grep -E '<h3>.*</h3>|<p><code>.*</code></p>' | sed 's/.*<code>\(.*\)<\/code>/\1/' | sed 's/.*<h3>\(.*\)<\/h3>/# \1/' | grep -v '\*\*\*' | sed -e 's/<[^>]*>//g'