<h3>Streaming vs. Saving</h3>
<p>FFplay allows you to stream created video and FFmpeg allows you to save video.</p>
<p>The following command creates and saves a five-second video of SMPTE bars:</p>
<p><code>ffmpeg -f lavfi -i smptebars=size=640x480 -t 5 output_file</code></p>
<p>This command plays and streams SMPTE bars but does not save them on the computer:</p>
<p><code>ffplay -f lavfi smptebars=size=640x480</code></p>
<p>The main difference is small but significant: the <code>-i</code> flag is required by FFmpeg but not by FFplay. Additionally, the FFmpeg command needs <code>-t 5</code> and <code>output_file</code> added to specify the length of time to record and the place to save the video.</p>
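<p>If you want to check the video you have just saved, one quick way is to open it with FFplay (here <code>output_file</code> stands for whatever name and extension you gave the saved file):</p>
<p><code>ffplay output_file</code></p>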
<p class="link"></p>
</div>
<p>It is also possible to apply multiple filters to an input, sequenced together in the filtergraph. A chained set of filters is called a filterchain, and a filtergraph may include multiple filterchains. Filters in a filterchain are separated from each other by commas (<code>,</code>), and filterchains are separated from each other by semicolons (<code>;</code>). For example, take the <a href="#inverse-telecine">inverse telecine</a> command:</p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf "fieldmatch,yadif,decimate" <i>output_file</i></code></p>
<p>Here we have a filtergraph including one filterchain, which is made up of three video filters.</p>
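<p>For contrast, here is a sketch of a filtergraph containing three filterchains, adapted from the example in the FFmpeg filtering documentation: the named pads <code>[main]</code>, <code>[tmp]</code> and <code>[flip]</code> carry streams between the semicolon-separated chains, and the result overlays a vertically flipped copy of the top half of the frame onto its bottom half:</p>
<p><code>ffmpeg -i <i>input_file</i> -vf "split [main][tmp]; [tmp] crop=iw:ih/2:0:0, vflip [flip]; [main][flip] overlay=0:H/2" <i>output_file</i></code></p>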
<p>It is often prudent to enclose your filtergraph in quotation marks; this means that you can use spaces within the filtergraph. Using the inverse telecine example again, the following filter commands are all valid and equivalent:</p>
<ul>
<li><code>-vf fieldmatch,yadif,decimate</code></li>
<li><code>-vf "fieldmatch,yadif,decimate"</code></li>
<li><code>-vf "fieldmatch, yadif, decimate"</code></li>
</ul>
<p>but <code>-vf fieldmatch, yadif, decimate</code> is not valid.</p>
<p>The ordering of the filters is significant. Video filters are applied in the order given, with the output of one filter being passed along as the input to the next filter in the chain. In the example above, <code>fieldmatch</code> reconstructs the original frames from the inverse telecined video, <code>yadif</code> deinterlaces (this is a failsafe in case any combed frames remain, for example if the source mixes telecined and real interlaced content), and <code>decimate</code> deletes duplicated frames. Clearly, it is not possible to delete duplicated frames before those frames are reconstructed.</p>
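<p>As a small hypothetical illustration of why order matters: <code>-vf "scale=1280:720,crop=640:360"</code> first resizes the whole frame to 1280x720 and then cuts a centered 640x360 window out of it, whereas <code>-vf "crop=640:360,scale=1280:720"</code> cuts the window first and then enlarges just that window to 1280x720, producing a visibly different (and softer) image.</p>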
<h4>Notes</h4>
<ul>
<li><code>-vf</code> is an alias for <code>-filter:v</code></li>
<li>If the command involves more than one input or output, you must use the flag <code>-filter_complex</code> instead of <code>-vf</code> (see the example after this list).</li>
<li>Straight quotation marks ("like this") rather than curved quotation marks (“like this”) should be used.</li>
</ul>
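<p>For instance, a minimal sketch of a two-input command that requires <code>-filter_complex</code> might overlay a hypothetical <code>watermark_file</code> in the lower-right corner of the frame, five pixels in from each edge:</p>
<p><code>ffmpeg -i <i>input_file</i> -i <i>watermark_file</i> -filter_complex "overlay=main_w-overlay_w-5:main_h-overlay_h-5" <i>output_file</i></code></p>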
<code>-map "[video_out]" -c:v libx264 -pix_fmt yuv420p -preset veryslow -crf 18</code></p>
<p>Likewise, to encode the output audio stream as mp3, the command could include the following:<br>
<code>-map "[audio_out]" -c:a libmp3lame -dither_method modified_e_weighted -qscale:a 2</code></p>
<h4>Variation: concatenating files of different resolutions</h4>
<p>To concatenate files of different resolutions, you need to resize the videos to have matching resolutions prior to concatenation. The most basic way to do this is by using a scale filter and giving the dimensions of the file you wish to match:</p>
<p><code>-vf scale=1920:1080:flags=lanczos</code></p>
<p>(The Lanczos scaling algorithm is recommended; it is slower but produces better results than the default bilinear algorithm.)</p>
<p>The rescaling should be applied just before the point where the streams to be used in the output file are listed. Select the stream you want to rescale, apply the filter, and assign the result a label (<code>rescaled_video</code> in the example below). You then use this label in the list of streams to be concatenated.</p>
<p><code>ffmpeg -i input1.avi -i input2.mp4 -filter_complex "[0:v:0] scale=1920:1080:flags=lanczos [rescaled_video], [rescaled_video] [0:a:0] [1:v:0] [1:a:0] concat=n=2:v=1:a=1 [video_out] [audio_out]" -map "[video_out]" -map "[audio_out]" <i>output_file</i></code></p>
<p>However, this will only have the desired visual output if the inputs have the same aspect ratio. If you wish to concatenate an SD and an HD file, you will also wish to pillarbox the SD file while upscaling. (See the <a href="https://amiaopensource.github.io/ffmprovisr/#SD_HD_2">Convert 4:3 to pillarboxed HD</a> command). The full command would look like this:</p>
<p><code>ffmpeg -i input1.avi -i input2.mp4 -filter_complex "[0:v:0] scale=1440:1080:flags=lanczos, pad=1920:1080:(ow-iw)/2:(oh-ih)/2 [to_hd_video], [to_hd_video] [0:a:0] [1:v:0] [1:a:0] concat=n=2:v=1:a=1 [video_out] [audio_out]" -map "[video_out]" -map "[audio_out]" <i>output_file</i></code></p>
<p>Here, the first input is an SD file which needs to be upscaled to match the second input, which is 1920x1080. The scale filter enlarges the SD input to the height of the HD frame, keeping the 4:3 aspect ratio; then, the video is pillarboxed within a 1920x1080 frame.</p>
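<p>The numbers follow from the aspect ratios: scaling a 4:3 frame to a height of 1080 gives a width of 1080 × 4/3 = 1440, and padding that out to the 1920-pixel HD width leaves (1920 − 1440)/2 = 240 pixels of black on each side, which is exactly what the <code>(ow-iw)/2</code> expression in the pad filter computes.</p>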
<p>For more information, see the <a href="https://trac.ffmpeg.org/wiki/Concatenate#differentcodec" target="_blank">FFmpeg wiki page on concatenating files of different types</a>.</p>
<p class="link"></p>
</div>
<dt>-ss 00:02:00</dt><dd>sets in point at 00:02:00</dd>
<dt>-to 00:55:00</dt><dd>sets out point at 00:55:00</dd>
<dt>-c copy</dt><dd>use stream copy mode (no re-encoding)<br>
<b>Note:</b> watch out when using <code>-ss</code> with <code>-c copy</code> if the source is encoded with an interframe codec (e.g., H.264). Since FFmpeg must split on i-frames, it will seek to the nearest i-frame to begin the stream copy (see the sketch after this list).</dd>
<dt>-map 0</dt><dd>tells FFmpeg to map all streams of the input to the output.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
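<p>If frame accuracy matters more than speed, one possible variation (a sketch, not part of the recipe above) is to drop the stream copy and re-encode instead, so that the excerpt can begin exactly at the requested in point:</p>
<p><code>ffmpeg -i <i>input_file</i> -ss 00:02:00 -to 00:55:00 -c:v libx264 -c:a aac <i>output_file</i></code></p>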