<p>FFmpeg is a powerful tool for manipulating audiovisual files. Unfortunately, it also has a steep learning curve, especially for users unfamiliar with a command line interface. This app helps users through the command generation process so that more people can reap the benefits of FFmpeg.</p>
<p>Each button displays helpful information about how to perform a wide variety of tasks using FFmpeg. To use this site, click on the task you would like to perform. A new window will open up with a sample command and a description of how that command works. You can copy this command, and a breakdown of each flag explains how the command works.</p>
<p>This page does not have search functionality, but you can open all recipes (second option in the sidebar) and use your browser's search tool (often ctrl+f or cmd+f) to perform a keyword search through all recipes.</p>
<p>For instructions on how to install FFmpeg on Mac, Linux, and Windows, refer to Reto Kromer’s <a href="https://avpres.net/FFmpeg/#ch1" target="_blank">installation instructions</a>.</p>
<p>For Bash and command line basics, try the <a href="https://learnpythonthehardway.org/book/appendixa.html" target="_blank">Command Line Crash Course</a>. For a little more context presented in an ffmprovisr style, try <a href="https://explainshell.com/" target="_blank">explainshell.com</a>!</p>
This work is licensed under a <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank">Creative Commons Attribution 4.0 International License</a>.
<p><a href="https://datapraxis.github.io/sourcecaster/" target="_blank">The Sourcecaster</a>: an app that helps you use the command line to work through common challenges that come up when working with digital primary sources.</p>
<p><a href="https://amiaopensource.github.io/cable-bible/" target="_blank">Cable Bible</a>: A Guide to Cables and Connectors Used for Audiovisual Tech</p>
<p>At its most basic, an FFmpeg command is relatively simple. After you have installed FFmpeg (see the <a href="https://avpres.net/FFmpeg/#ch1" target="_blank">installation instructions</a>), the program is invoked simply by typing <code>ffmpeg</code> at the command prompt.</p>
<p>Subsequently, each instruction that you supply to FFmpeg is actually a pair: a flag, which designates the <em>type</em> of action you want to carry out; and then the specifics of that action. Flags are always prefixed with a hyphen.</p>
<p>For example, in the instruction <code>-i <em>input_file.ext</em></code>, the <code>-i</code> flag tells FFmpeg that you are supplying an input file, and <code>input_file.ext</code> states which file it is.</p>
<p>Likewise, in the instruction <code>-c:v prores</code>, the flag <code>-c:v</code> tells FFmpeg that you want to encode the video stream, and <code>prores</code> specifies which codec is to be used. (<code>-c:v</code> is shorthand for <code>-codec:v</code>/<code>-codec:video</code>).</p>
<p>A very basic FFmpeg command looks like this:</p>
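<p><code>ffmpeg -i <em>input_file</em> <em>parameters</em> <em>output_file</em></code><br>
For instance, an illustrative sketch with placeholder names: <code>ffmpeg -i input_file.mov -c:v prores -c:a pcm_s16le output_file.mov</code></p>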
<p>The main difference is small but significant: the <code>-i</code> flag is required for FFmpeg but not required for FFplay. Additionally, the FFmpeg script needs to have <code>-t 5</code> and <code>output.mkv</code> added to specify the length of time to record and the place to save the video.</p>
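<p>As an illustrative pair of commands (using FFmpeg's built-in <code>smptebars</code> test source rather than a real input): <code>ffplay -f lavfi smptebars</code> plays the test pattern directly, while <code>ffmpeg -f lavfi -i smptebars -t 5 output.mkv</code> records five seconds of it to a file.</p>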
<p>Unless specified, FFmpeg will automatically set codec choices and codec parameters based on its internal defaults. These defaults are applied based on the file type used in the output (for example <code>.mov</code> or <code>.wav</code>).</p>
<p>When creating or transcoding files with FFmpeg, it is important to consider codec settings for both audio and video, as the default options may not be desirable in your particular context. The following is a brief list of codec defaults for some common file types:</p>
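<p>These defaults vary with your FFmpeg version and build, so verify them with a test file; for example, an <code>.mp4</code> output typically defaults to H.264 video with AAC audio, while a <code>.wav</code> output defaults to 16-bit linear PCM (<code>pcm_s16le</code>).</p>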
<p>Many FFmpeg commands use filters that manipulate the video or audio stream in some way: for example, <a href="https://ffmpeg.org/ffmpeg-filters.html#hflip" target="_blank">hflip</a> to horizontally flip a video, or <a href="https://ffmpeg.org/ffmpeg-filters.html#amerge-1" target="_blank">amerge</a> to merge two or more audio tracks into a single stream.</p>
<p>The use of a filter is signalled by the flag <code>-vf</code> (video filter) or <code>-af</code> (audio filter), followed by the name and options of the filter itself. For example, take the <a href="#convert-colourspace">convert colourspace</a> command:</p>
<p>Here, <a href="https://ffmpeg.org/ffmpeg-filters.html#colormatrix" target="_blank">colormatrix</a> is the filter used, with <em>src</em> and <em>dst</em> representing the source and destination colourspaces. This part following the <code>-vf</code> is a <strong>filtergraph</strong>.</p>
<p>It is also possible to apply multiple filters to an input, which are sequenced together in the filtergraph. A chained set of filters is called a filterchain, and a filtergraph may include multiple filterchains. Filters in a filterchain are separated from each other by commas (<code>,</code>), and filterchains are separated from each other by semicolons (<code>;</code>). For example, take the <a href="#inverse-telecine">inverse telecine</a> command:</p>
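<p><code>ffmpeg -i <em>input_file</em> -c:v libx264 -vf colormatrix=<em>src</em>:<em>dst</em> <em>output_file</em></code></p>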
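<p><code>-vf "fieldmatch,yadif,decimate"</code> (the full command appears in the inverse telecine recipe below)</p>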
<p>It is often prudent to enclose your filtergraph in quotation marks; this means that you can use spaces within the filtergraph. Using the inverse telecine example again, the following filter commands are all valid and equivalent:</p>
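<p><code>-vf fieldmatch,yadif,decimate</code><br>
<code>-vf "fieldmatch,yadif,decimate"</code><br>
<code>-vf "fieldmatch, yadif, decimate"</code></p>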
<p>The ordering of the filters is significant. Video filters are applied in the order given, with the output of one filter being passed along as the input to the next filter in the chain. In the example above, <code>fieldmatch</code> reconstructs the original frames from the inverse telecined video, <code>yadif</code> deinterlaces (this is a failsafe in case any combed frames remain, for example if the source mixes telecined and real interlaced content), and <code>decimate</code> deletes duplicated frames. Clearly, it is not possible to delete duplicated frames before those frames are reconstructed.</p>
<p>Stream mapping is the practice of defining which of the streams (e.g., video or audio tracks) present in an input file will be present in the output file. FFmpeg recognises five stream types:</p>
<ul>
<li><code>a</code> - audio</li>
<li><code>v</code> - video</li>
<li><code>s</code> - subtitle</li>
<li><code>d</code> - data (including timecode tracks)</li>
<li><code>t</code> - attachment</li>
</ul>
<p>Mapping is achieved by use of the <code>-map</code> flag, followed by an argument of the form <code>file_number:stream_type[:stream_number]</code>. Numbering is zero-indexed, and it's possible to map by stream type and/or overall stream order within the input file. For example:</p>
<ul>
<li><code>-map 0:v</code> means ‘take all video streams from the first input file’.</li>
<li><code>-map 0:3</code> means ‘take the fourth stream from the first input file’.</li>
<li><code>-map 0:a:2</code> means ‘take the third audio stream from the first input file’.</li>
<li><code>-map 0:0 -map 0:2</code> means ‘take the first and third streams from the first input file’.</li>
<li><code>-map 0:1 -map 1:0</code> means ‘take the second stream from the first input file and the first stream from the second input file’.</li>
</ul>
<p>To map <em>all</em> streams in the input file to the output file, use <code>-map 0</code>. However, note that not all container formats can include all stream types: for example, .mp4 cannot contain timecode.</p>
<p>When no mapping is specified in an ffmpeg command, the default for video files is to take just one video and one audio stream for the output: other stream types, such as timecode or subtitles, will not be copied to the output file by default. If multiple video or audio streams are present, the best quality one is automatically selected by FFmpeg.</p>
<p>For more information, check out the FFmpeg wiki <ahref="https://trac.ffmpeg.org/wiki/Map"target="_blank">Map</a> page, and the official FFmpeg <ahref="https://ffmpeg.org/ffmpeg.html#Advanced-options"target="_blank">documentation on <code>-map</code></a>.</p>
<p>This script will rewrap a video file. It will create a new video file where the inner content (the video, audio, and subtitle data) of the original file is unchanged, but these streams are rehoused within a different container format.</p>
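<p><code>ffmpeg -i <em>input_file</em>.ext -c copy -map 0 <em>output_file</em>.ext</code> (filenames and extensions are placeholders)</p>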
<dt>-c copy</dt><dd>copy the streams directly, without re-encoding.</dd>
<dt>-map 0</dt><dd>map all streams of the input to the output.<br>
By default, FFmpeg will only map one stream of each type (video, audio, subtitles) to the output file. However, files may have multiple streams of a given type - for example, a video may have several audio tracks for different languages. Therefore, if you want to preserve all the streams in the original, it's necessary to use this option.</dd>
<p>It may not be possible to rewrap a file's contents to a new container without re-encoding one or more of the streams within (that is, the video, audio, and subtitle tracks). Some containers can only contain streams of a certain encoding type: for example, the .mp4 container does not support uncompressed audio tracks. (In practice .mp4 goes hand-in-hand with an H.264-encoded video stream and an AAC-encoded audio stream, although other types of video and audio streams are possible). Another example is that the Matroska container does not allow data tracks; see the <a href="#mkv-to-mp4">MKV to MP4 recipe</a>.</p>
<p>In such cases, FFmpeg will throw an error. If you encounter errors of this kind, you may wish to consult the <ahref="#transcode">list of transcoding recipes</a>.</p>
<p>This script will take a video that is encoded in the <a href="https://en.wikipedia.org/wiki/DV" target="_blank">DV Codec</a> but wrapped in a different container (such as MOV) and rewrap it into a raw DV file (with the .dv extension). Since DV files potentially contain a great deal of provenance metadata within the DV stream, it is necessary to rewrap files in this way to avoid unintentionally stripping that metadata.</p>
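<p><code>ffmpeg -i <em>input_file</em> -f rawvideo -c:v copy <em>output_file</em>.dv</code></p>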
<dt>-f rawvideo</dt><dd>this tells FFmpeg to pass the video stream as raw video data without remuxing. This step is what ensures the survival of embedded metadata versus a standard rewrap.</dd>
<dt>-c:v copy</dt><dd>copy the DV stream directly, without re-encoding.</dd>
<p>This command transcodes an input file into a deinterlaced Apple ProRes 422 LT file with 16-bit linear PCM encoded audio. The file is deinterlaced using the yadif filter (Yet Another De-Interlacing Filter).</p>
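<p>One possible form of the command (here <code>-profile:v 1</code> selects the LT profile, an assumption worth verifying against your FFmpeg build): <code>ffmpeg -i <em>input_file</em> -c:v prores -profile:v 1 -vf yadif -c:a pcm_s16le <em>output_file</em>.mov</code></p>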
<dt>-vf yadif</dt><dd>Runs a deinterlacing video filter (yet another deinterlacing filter) on the new file. <code>-vf</code> is an alias for <code>-filter:v</code>.</dd>
<dt>-c:a pcm_s16le</dt><dd>tells FFmpeg to encode the audio stream in 16-bit linear PCM</dd>
<li><code>prores</code> is much faster, can be used for progressive video only, and seems to be better for video according to Rec. 601 (Recommendation ITU-R BT.601).</li>
<li><code>prores_ks</code> generates a better file, can also be used for interlaced video, also allows encoding of ProRes 4444 (<code>-c:v prores_ks -profile:v 4</code>) and ProRes 4444 XQ (<code>-c:v prores_ks -profile:v 5</code>), and seems to be better for video according to Rec. 709 (Recommendation ITU-R BT.709).</li>
<p>This command takes an input file and transcodes it to H.264 with an .mp4 wrapper; audio is transcoded to AAC. The libx264 codec defaults to a “medium” preset for compression quality and a CRF of 23. CRF stands for constant rate factor and determines the quality and file size of the resulting H.264 video. A low CRF means high quality and large file size; a high CRF means the opposite.</p>
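<p><code>ffmpeg -i <em>input_file</em> -c:v libx264 -pix_fmt yuv420p -c:a aac <em>output_file</em>.mp4</code></p>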
<dt>-pix_fmt yuv420p</dt><dd>By default, libx264 will use a chroma subsampling scheme that is the closest match to that of the input. This can result in Y′C<sub>B</sub>C<sub>R</sub> 4:2:0, 4:2:2, or 4:4:4 chroma subsampling. QuickTime and most other non-FFmpeg based players can’t decode H.264 files that are not 4:2:0. In order to allow the video to play in all players, you can specify 4:2:0 chroma subsampling.</dd>
<dt>-crf <em>18</em></dt><dd>Specifying a lower CRF will make a larger file with better visual quality. For H.264 files being encoded with a 4:2:0 chroma subsampling scheme (i.e., using <code>-pix_fmt yuv420p</code>), the scale ranges between 0-51, with 0 being lossless and 51 the worst possible quality.<br>
If no CRF is specified, <code>libx264</code> will use a default value of 23. A CRF of 18 is often considered “visually lossless” compression.</dd>
</dl>
<p>For more information, see the <a href="https://trac.ffmpeg.org/wiki/Encode/H.264" target="_blank">FFmpeg and H.264 Encoding Guide</a> on the FFmpeg wiki.</p>
<p>This will transcode MXF-wrapped video and audio files to an H.264-encoded MP4 file. Please note this only works for unencrypted, single-reel DCPs.</p>
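<p>A sketch of the command, assuming separate picture and sound MXFs as in a typical DCP (filenames are placeholders): <code>ffmpeg -i <em>input_video_file</em>.mxf -i <em>input_audio_file</em>.mxf -c:v libx264 -pix_fmt yuv420p -c:a aac <em>output_file</em>.mp4</code></p>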
<p>This will losslessly transcode your video with the FFV1 Version 3 codec in a Matroska container. In order to verify losslessness, a framemd5 of the source video is also generated. For more information on FFV1 encoding, <a href="https://trac.ffmpeg.org/wiki/Encode/FFV1" target="_blank">try the FFmpeg wiki</a>.</p>
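<p>Assembled from the flags explained below: <code>ffmpeg -i <em>input_file</em> -map 0 -dn -c:v ffv1 -level 3 -g 1 -slicecrc 1 -slices 16 -c:a copy <em>output_file</em>.mkv -f framemd5 -an <em>framemd5_output_file</em></code></p>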
<dt>-map 0</dt><dd>Map all streams that are present in the input file. This is important as FFmpeg will map only one stream of each type (video, audio, subtitles) by default to the output video.</dd>
<dt>-dn</dt><dd>ignore data streams (data no). The Matroska container does not allow data tracks.</dd>
<dt>-c:v ffv1</dt><dd>specifies the FFV1 video codec.</dd>
<dt>-level 3</dt><dd>specifies Version 3 of the FFV1 codec.</dd>
<dt>-g 1</dt><dd>specifies intra-frame encoding, or GOP=1.</dd>
<dt>-slicecrc 1</dt><dd>Adds CRC information for each slice. This makes it possible for a decoder to detect errors in the bitstream, rather than blindly decoding a broken slice. (Read more <a href="http://ndsr.nycdigital.org/diving-in-head-first/" target="_blank">here</a>).</dd>
<dt>-slices 16</dt><dd>Each frame is split into 16 slices. 16 is a good trade-off between filesize and encoding time.</dd>
<dt>-c:a copy</dt><dd>copies all mapped audio streams.</dd>
<dt><em>output_file</em>.mkv</dt><dd>path and name of the output file. Use the <code>.mkv</code> extension to save your file in a Matroska container. Optionally, choose a different extension if you want a different container, such as <code>.mov</code> or <code>.avi</code>.</dd>
<dt>-f framemd5</dt><dd>Decodes video with the framemd5 muxer in order to generate MD5 checksums for every frame of your input file. This allows you to verify losslessness when compared against the framemd5s of the output file.</dd>
<p>This command allows you to create an H.264 file from a DVD source that is not copy-protected.</p>
<p>Before encoding, you’ll need to establish which of the .VOB files on the DVD or .iso contain the content that you wish to encode. Inside the VIDEO_TS directory, you will see a series of files with names like VTS_01_0.VOB, VTS_01_1.VOB, etc. Some of the .VOB files will contain menus, special features, etc., so locate the ones that contain target content by playing them back in VLC.</p>
<dt>-i concat:<em>input files</em></dt><dd>lists the input VOB files and directs FFmpeg to concatenate them. Each input file should be separated by a backslash and a pipe, like so:<br>
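<code>-i concat:<em>input_file_1</em>.VOB\|<em>input_file_2</em>.VOB\|<em>input_file_3</em>.VOB</code><br>
(Filenames here are placeholders; the backslash escapes the pipe for the shell.)</dd>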
<dt>-crf 18</dt><dd>sets the constant rate factor to a visually lossless value. Libx264 defaults to a <ahref="https://trac.ffmpeg.org/wiki/Encode/H.264#crf"target="_blank">crf of 23</a>, considered medium quality; a smaller CRF value produces a larger and higher quality video.</dd>
<dt>-preset veryslow</dt><dd>A slower preset will result in better compression and therefore a higher-quality file. The default is <strong>medium</strong>; slower presets are <strong>slow</strong>, <strong>slower</strong>, and <strong>veryslow</strong>.</dd>
<p>Bear in mind that by default, libx264 will only encode a single video stream and a single audio stream, picking the ‘best’ of the options available. To preserve all video and audio streams, add <strong>-map</strong> parameters:</p>
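<p>For example, adding <code>-map 0:v -map 0:a</code> before the output path will include all video and all audio streams from the first input (an illustrative pairing; see the stream mapping section above).</p>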
<p><strong>Note:</strong> FFmpeg must be compiled with libx265, the library of the H.265 codec, for this script to work. (Add the flag <code>--with-x265</code> if using the <code>brew install ffmpeg</code> method).</p>
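<p><code>ffmpeg -i <em>input_file</em> -c:v libx265 -pix_fmt yuv420p -c:a copy <em>output_file</em></code></p>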
<dt>-c:v libx265</dt><dd>tells FFmpeg to encode the video as H.265</dd>
<dt>-pix_fmt yuv420p</dt><dd>libx265 will use a chroma subsampling scheme that is the closest match to that of the input. This can result in Y′C<sub>B</sub>C<sub>R</sub> 4:2:0, 4:2:2, or 4:4:4 chroma subsampling. For widest accessibility, it’s a good idea to specify 4:2:0 chroma subsampling.</dd>
<dt>-c:a copy</dt><dd>tells FFmpeg not to change the audio codec</dd>
<p>The libx265 encoding library defaults to a ‘medium’ preset for compression quality and a CRF of 28. CRF stands for ‘constant rate factor’ and determines the quality and file size of the resulting H.265 video. The CRF scale ranges from 0 (best quality [lossless]; largest file size) to 51 (worst quality; smallest file size).</p>
<p>A CRF of 28 for H.265 can be considered a medium setting, <a href="https://trac.ffmpeg.org/wiki/Encode/H.265#ConstantRateFactorCRF" target="_blank">corresponding</a> to a CRF of 23 in <a href="./index.html#transcode_h264">encoding H.264</a>, but should result in about half the file size.</p>
<p>To create a higher quality file, you can add these presets:</p>
<dt>-preset <em>veryslow</em></dt><dd>This option tells FFmpeg to use the slowest preset possible for the best compression quality.</dd>
<dt>-crf <em>18</em></dt><dd>Specifying a lower CRF will make a larger file with better visual quality. 18 is often considered a ‘visually lossless’ compression.</dd>
<p><strong>Note:</strong> FFmpeg must be installed with support for Ogg Theora. If you are using Homebrew, you can check with <code>brew info ffmpeg</code> and then update it with <code>brew upgrade ffmpeg --with-theora --with-libvorbis</code> if necessary.</p>
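<p>A minimal sketch of the transcode (codecs and extension assumed; add quality options such as <code>-q:v</code> as needed): <code>ffmpeg -i <em>input_file</em> -c:v libtheora -c:a libvorbis <em>output_file</em>.ogv</code></p>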
<dt>-write_id3v1 1</dt><dd>This will write metadata to an ID3v1 tag at the end of the file, assuming you’ve embedded metadata into the WAV file.</dd>
<dt>-id3v2_version 3</dt><dd>This will write metadata to an ID3v2.3 tag at the beginning of the file, assuming you’ve embedded metadata into the WAV file.</dd>
<dt>-dither_method rectangular</dt><dd>Dither makes sure you don’t unnecessarily truncate the dynamic range of your audio.</dd>
<dt>-out_sample_rate 48k</dt><dd>Sets the audio sampling frequency to 48 kHz. This can be omitted to use the same sampling frequency as the input.</dd>
<dt>-qscale:a 1</dt><dd>This sets the encoder to use a constant quality with a variable bitrate between 190 and 250 kbit/s. If you would prefer to use a constant bitrate, this could be replaced with <code>-b:a 320k</code> to set the maximum bitrate allowed by the MP3 format. For a more detailed discussion of variable vs constant bitrates, see <a href="https://trac.ffmpeg.org/wiki/Encode/MP3" target="_blank">here</a>.</dd>
<li>About the ID3v2.3 tag: ID3v2.3 is better supported than ID3v2.4, FFmpeg's default ID3v2 setting.</li>
<li>About dither methods: FFmpeg comes with a variety of dither algorithms, outlined in the <a href="https://ffmpeg.org/ffmpeg-resampler.html" target="_blank">official docs</a>, though some may lead to unintended, drastic digital clipping on some systems.</li>
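<p>Assembled from the flags above, the full command might look like: <code>ffmpeg -i <em>input_file</em>.wav -write_id3v1 1 -id3v2_version 3 -dither_method rectangular -out_sample_rate 48k -qscale:a 1 <em>output_file</em>.mp3</code></p>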
<p>This script allows you to generate two derivative audio files from a master while appending audio from a separate file (for example a copyright or institutional notice) to one of them.</p>
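<p>One possible form of the command (filenames and output formats are placeholders): <code>ffmpeg -i <em>master_file</em>.wav -i <em>notice_file</em>.wav -filter_complex "[0:a:0]asplit=2[a][b];[b]afifo[bb];[1:a:0][bb]concat=n=2:v=0:a=1[concatout]" -map "[a]" <em>output_one</em>.wav -map "[concatout]" <em>output_two</em>.wav</code></p>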
<dt>-filter_complex</dt><dd>enables the complex filtering to manage splitting the input to two audio streams</dd>
<dt>[0:a:0]asplit=2[a][b];</dt><dd><code>asplit</code> allows audio streams to be split up for separate manipulation. This command splits the audio from the first input (the master file) into two streams "a" and "b"</dd>
<dt>[b]afifo[bb];</dt><dd>this buffers the stream "b" to help prevent dropped samples and renames stream to "bb"</dd>
<dt>[1:a:0][bb]concat=n=2:v=0:a=1[concatout]</dt><dd><code>concat</code> is used to join files. <code>n=2</code> tells the filter there are two inputs. <code>v=0:a=1</code> tells the filter there are 0 video outputs and 1 audio output. This command appends the audio from the second input to the beginning of stream "bb" and names the output "concatout"</dd>
<dt>-map "[a]"</dt><dd>this maps the unmodified audio stream to the first output</dd>
<p>A note about dither methods: FFmpeg comes with a variety of dither algorithms, outlined in the <a href="https://ffmpeg.org/ffmpeg-resampler.html" target="_blank">official docs</a>, though some may lead to unintended, drastic digital clipping on some systems.</p>
<dt>-filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"</dt><dd>video padding<br>This resolution independent formula is actually padding any aspect ratio into 16:9 by pillarboxing, because the video filter uses relative values for input width (iw), input height (ih), output width (ow) and output height (oh).</dd>
<dt>-c:a copy</dt><dd>copies the audio stream without re-encoding it<br>
For silent videos you can replace <code>-c:a copy</code> with <code>-an</code>.</dd>
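<p>Putting it together: <code>ffmpeg -i <em>input_file</em> -filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2" -c:a copy <em>output_file</em></code></p>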
<dt>-filter:v "pad=iw:iw*3/4:(ow-iw)/2:(oh-ih)/2"</dt><dd>video padding<br>This resolution-independent formula pads any aspect ratio into 4:3 by letterboxing, because the video filter uses relative values for input width (iw), input height (ih), output width (ow) and output height (oh).</dd>
<dt>-c:a copy</dt><dd>copies the audio stream without re-encoding it<br>
For silent videos you can replace <code>-c:a copy</code> with <code>-an</code>.</dd>
<dt>-filter:v "hflip,vflip"</dt><dd>flips the image horizontally and vertically<br>By using only one of the parameters hflip or vflip for filtering the image is flipped on that axis only. The quote marks are not mandatory.</dd>
<dt>-c:a copy</dt><dd>copies the audio stream without re-encoding it<br>
For silent videos you can replace <code>-c:a copy</code> with <code>-an</code>.</dd>
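<p>Putting it together: <code>ffmpeg -i <em>input_file</em> -filter:v "hflip,vflip" -c:a copy <em>output_file</em></code></p>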
<dt>-filter:v "colormatrix=bt601:bt709, scale=1440:1080:flags=lanczos, pad=1920:1080:240:0"</dt><dd>set colour matrix, video scaling and padding<br>Three filters are applied:
<ol>
<li>The luma coefficients are modified from SD video (according to Rec. 601) to HD video (according to Rec. 709) by a colour matrix. Note that today Rec. 709 is often used also for SD, in which case you may omit this filter.</li>
<li>The scaling filter (<code>scale=1440:1080</code>) works for both upscaling and downscaling. We use the Lanczos scaling algorithm (<code>flags=lanczos</code>), which is slower but gives better results than the default bilinear algorithm.</li>
<li>The padding filter (<code>pad=1920:1080:240:0</code>) completes the transformation from SD to HD.</li>
</ol></dd>
<dt>-c:a copy</dt><dd>copies the audio stream without re-encoding it<br>
For silent videos you can replace <code>-c:a copy</code> with <code>-an</code>.</dd>
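<p>Putting it together: <code>ffmpeg -i <em>input_file</em> -filter:v "colormatrix=bt601:bt709, scale=1440:1080:flags=lanczos, pad=1920:1080:240:0" -c:a copy <em>output_file</em></code></p>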
<dt>-c:v copy</dt><dd>Copy all mapped video streams.</dd>
<dt>-aspect 4:3</dt><dd>Change Display Aspect Ratio to <code>4:3</code>. Experiment with other aspect ratios such as <code>16:9</code>. If used together with <code>-c:v copy</code>, it will affect the aspect ratio stored at container level, but not the aspect ratio stored in encoded frames, if it exists.</dd>
<dt>-vf colormatrix=<em>src</em>:<em>dst</em></dt><dd>the video filter <strong>colormatrix</strong> will be applied, with the given source and destination colourspaces.<br>
<p><strong>Note:</strong> Converting between colourspaces with FFmpeg can be done via either the <strong>colormatrix</strong> or <strong>colorspace</strong> filters, with colorspace allowing finer control (individual setting of colourspace, transfer characteristics, primaries, range, pixel format, etc). See <a href="https://trac.ffmpeg.org/wiki/colorspace" target="_blank">this</a> entry on the FFmpeg wiki, and the FFmpeg documentation for <a href="https://ffmpeg.org/ffmpeg-filters.html#colormatrix" target="_blank">colormatrix</a> and <a href="https://ffmpeg.org/ffmpeg-filters.html#colorspace" target="_blank">colorspace</a>.</p>
<dt>-vf colormatrix=<em>src</em>:<em>dst</em></dt><dd>the video filter <strong>colormatrix</strong> will be applied, with the given source and destination colourspaces.</dd>
<dt>-color_primaries <em>val</em></dt><dd>tags video with the given colour primaries.<br>
<img src="./img/colourspace_metadata_mediainfo.png" alt="MediaInfo screenshots of colourspace metadata"><br>
<p><span class="beware">⚠</span> Using this command it is possible to add Rec.709 tags to a file that is actually Rec.601 (etc), so apply with caution!</p>
<p>These commands are relevant for H.264 and H.265 videos, encoded with <code>libx264</code> and <code>libx265</code> respectively.</p>
<p><strong>Note:</strong> If you wish to embed colourspace metadata <em>without</em> changing to another colourspace, omit <code>-vf colormatrix=src:dst</code>. However, since it is <code>libx264</code>/<code>libx265</code> that writes the metadata, it’s not possible to add these tags without re-encoding the video stream.</p>
<p>For all possible values for <code>-color_primaries</code>, <code>-color_trc</code>, and <code>-colorspace</code>, see the FFmpeg documentation on <a href="https://ffmpeg.org/ffmpeg-codecs.html#Codec-Options" target="_blank">codec options</a>.</p>
<p id="fn1" class="footnote">1. Out of step with the regular pattern, <code>-color_trc</code> doesn’t accept <code>bt470bg</code>; it is instead here referred to directly as gamma.<br>
In the Rec.601 standard, 525-line/NTSC and 625-line/PAL video have assumed gammas of 2.2 and 2.8 respectively. <a href="#ref1" title="Jump back.">↩</a></p>
<dt>-filter_complex "[0:v]setpts=<em>input_fps</em>/<em>output_fps</em>*PTS[v]; [0:a]atempo=<em>output_fps</em>/<em>input_fps</em>[a]"</dt><dd>A complex filter is needed here, in order to handle the video stream and the audio stream separately. The <code>setpts</code> video filter modifies the PTS (presentation time stamp) of the video stream, and the <code>atempo</code> audio filter modifies the speed of the audio stream while keeping the same sound pitch. Note that the parameter order for the image and for the sound are inverted:
<ul>
<li>In the video filter <code>setpts</code> the numerator <code>input_fps</code> sets the input speed and the denominator <code>output_fps</code> sets the output speed; both values are given in frames per second.</li>
<li>In the sound filter <code>atempo</code> the numerator <code>output_fps</code> sets the output speed and the denominator <code>input_fps</code> sets the input speed; both values are given in frames per second.</li>
</ul>
The two filterchains in this complex filter are separated by a semicolon, while filters within a single chain would be separated by commas. The quotation marks allow you to insert spaces between the filters for readability.</dd>
<dt>-map "[v]"</dt><dd>maps the video stream and:</dd>
<dt>-map "[a]"</dt><dd>maps the audio stream together into:</dd>
<h2>Find undetermined or unknown stream properties</h2>
<p>These examples use QuickTime inputs and outputs. The strategy will vary or may not be possible in other file formats. These examples aim to make a lossless copy while clarifying an unknown characteristic of the stream.</p>
<dt>-show_streams</dt><dd>Shows metadata of stream properties</dd>
</dl>
<p>Values set to 'unknown' or 'undetermined' may be unspecified within the stream. An unknown aspect ratio would be expressed as '0:1'. Streams with many unknown properties may have interoperability issues or not play as intended. In some cases an unknown or undetermined value is accurate, because the information about the source is genuinely unclear; more often, though, the value is intended to be known. Players will often use an assumed value for an undetermined property (for instance, a display_aspect_ratio of '0:1' may be played as 'WIDTH:HEIGHT'), but this may or may not be what is intended. Use carefully.</p>
<dt>-aspect DAR_NUM:DAR_DEN</dt><dd>Replace DAR_NUM with the display aspect ratio numerator and DAR_DEN with the display aspect ratio denominator, such as <em>-aspect 4:3</em> or <em>-aspect 16:9</em>.</dd>
<dt><em>output_file</em></dt><dd>path, name and extension of the output file</dd>
<p>Other properties may be clarified in a similar way. Replace <em>-aspect</em> and its value with other properties such as shown in the options below. Note that setting color values in QuickTime requires that <em>-movflags write_colr</em> is set.</p>
<p>The possible values for <code>-color_primaries</code>, <code>-color_trc</code>, and <code>-field_order</code> are given in the <a href="https://ffmpeg.org/ffmpeg-all.html#toc-Codec-Options" target="_blank">Codec Options</a> section of the FFmpeg docs - scroll down to near the bottom of the section.</p>
<dt>-i <em>input_file</em></dt><dd>path, name and extension of the input file</dd>
<dt>-vf "<em>width</em>:<em>height</em>"</dt><dd>Crops the video to the given width and height (in pixels).<br>
By default, the crop area is centred: that is, the position of the top left of the cropped area is set to x = (<em>input_width</em> - <em>output_width</em>) / 2, y = <em>input_height</em> - <em>output_height</em>) / 2.
<p>It's also possible to specify the crop position by adding the x and y coordinates representing the top left of your cropped area to your crop filter, as such:</p>
<p>Result of the command <code>ffmpeg -i <em>smpte_coloursbars.mov</em> -vf "crop=500:500:0:0" <em>output_file</em></code>, appending <code>:0:0</code> to crop from the top left corner:</p>
<p>This command combines two audio tracks present in a video file into one stream. It can be useful in situations where a downstream process, like YouTube’s automatic captioning, expects one audio track. To ensure that you’re mapping the right audio tracks, run ffprobe before writing the script to identify which tracks are desired. More than two audio streams can be combined by extending the pattern present in the <code>-filter_complex</code> option.</p>
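<p>One possible form (stream indexes and filenames are placeholders; check yours with ffprobe): <code>ffmpeg -i <em>input_file</em> -filter_complex "[0:a:0][0:a:1]amerge[out]" -map 0:v -map "[out]" -c:v copy -shortest <em>output_file</em></code></p>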
<dt>-af</dt><dd>specifies that the next section should be interpreted as an audio filter</dd>
<dt>pan=</dt><dd>starts the <a href="https://ffmpeg.org/ffmpeg-filters.html#pan-1" target="_blank">pan filter</a>, which the quoted text that follows configures</dd>
<dt>"stereo|c0=c0|c1=-1*c1"</dt><dd>maps the output's first channel (c0) to the input's first channel and the output's second channel (c1) to the inverse of the input's second channel</dd>
<p>This filter calculates and outputs loudness information in JSON about an input file (labeled input) as well as what the levels would be if loudnorm were applied in its one-pass mode (labeled output). The values generated can be used as inputs for a ‘second pass’ of the loudnorm filter, allowing more accurate loudness normalization than a single pass.</p>
<p>These instructions use the loudnorm defaults, which align well with PBS recommendations for target loudness. More information can be found at the <a href="https://ffmpeg.org/ffmpeg-filters.html#loudnorm" target="_blank">loudnorm documentation</a>.</p>
<p>Information about PBS loudness standards can be found in the <a href="https://www-tc.pbs.org/capt/Producing/TOS-2012-Pt2-Distribution.pdf" target="_blank">PBS Technical Operating Specifications</a> document. Information about EBU loudness standards can be found in the <a href="https://tech.ebu.ch/docs/r/r128-2014.pdf" target="_blank">EBU R 128</a> recommendation document.</p>
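<p><code>ffmpeg -i <em>input_file</em> -af loudnorm=print_format=json -f null -</code></p>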
<dt>-af loudnorm</dt><dd>activates the loudnorm filter</dd>
<dt>print_format=json</dt><dd>sets the output format for loudness information to JSON. This format makes it easy to use in a second pass. For a more human-readable output, this can be set to <code>print_format=summary</code></dd>
<p>This will apply RIAA equalization to an input file, allowing correct listening of audio transferred 'flat' (without EQ) from records that used this EQ curve. For more information about RIAA equalization, see the <a href="https://en.wikipedia.org/wiki/RIAA_equalization" target="_blank">Wikipedia page</a> on the subject.</p>
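<p>A minimal sketch, assuming the aemphasis filter is present in your FFmpeg build: <code>ffmpeg -i <em>input_file</em> -af aemphasis=type=riaa <em>output_file</em></code></p>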
<p>This will normalize the loudness of an input using one pass, which is quicker but less accurate than using two passes. This command uses the loudnorm filter defaults for target loudness. These defaults align well with PBS recommendations, but loudnorm does allow targeting of specific loudness levels. More information can be found at the <a href="https://ffmpeg.org/ffmpeg-filters.html#loudnorm" target="_blank">loudnorm documentation</a>.</p>
<p>Information about PBS loudness standards can be found in the <a href="https://www-tc.pbs.org/capt/Producing/TOS-2012-Pt2-Distribution.pdf" target="_blank">PBS Technical Operating Specifications</a> document. Information about EBU loudness standards can be found in the <a href="https://tech.ebu.ch/docs/r/r128-2014.pdf" target="_blank">EBU R 128</a> recommendation document.</p>
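<p><code>ffmpeg -i <em>input_file</em> -af loudnorm=dual_mono=true -ar 48k <em>output_file</em></code></p>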
<dt>-af loudnorm</dt><dd>activates the loudnorm filter with default settings</dd>
<dt>dual_mono=true</dt><dd>(optional) Use this for mono files meant to be played back on stereo systems for correct loudness. Not necessary for multi-track inputs.</dd>
<dt>-ar 48k</dt><dd>Sets the output sample rate to 48 kHz. (The loudnorm filter upsamples to 192 kHz so it is best to manually set a desired output sample rate).</dd>
<p>This command allows using the levels calculated using a <a href="#loudnorm_metadata">first pass of the loudnorm filter</a> to more accurately normalize loudness. This command uses the loudnorm filter defaults for target loudness. These defaults align well with PBS recommendations, but loudnorm does allow targeting of specific loudness levels. More information can be found at the <a href="https://ffmpeg.org/ffmpeg-filters.html#loudnorm" target="_blank">loudnorm documentation</a>.</p>
<p>Information about PBS loudness standards can be found in the <a href="https://www-tc.pbs.org/capt/Producing/TOS-2012-Pt2-Distribution.pdf" target="_blank">PBS Technical Operating Specifications</a> document. Information about EBU loudness standards can be found in the <a href="https://tech.ebu.ch/docs/r/r128-2014.pdf" target="_blank">EBU R 128</a> recommendation document.</p>
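<p>One possible form, where each <em>measured value</em> placeholder is filled in from the first pass's JSON output: <code>ffmpeg -i <em>input_file</em> -af loudnorm=dual_mono=true:measured_I=<em>input_i</em>:measured_TP=<em>input_tp</em>:measured_LRA=<em>input_lra</em>:measured_thresh=<em>input_thresh</em>:linear=true -ar 48k <em>output_file</em></code></p>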
<dt>-af loudnorm</dt><dd>activates the loudnorm filter with default settings</dd>
<dt>dual_mono=true</dt><dd>(optional) use this for mono files meant to be played back on stereo systems for correct loudness. Not necessary for multi-track inputs.</dd>
<dt>linear=true</dt><dd>tells loudnorm to use linear normalization</dd>
<dt>-ar 48k</dt><dd>Sets the output sample rate to 48 kHz. (The loudnorm filter upsamples to 192 kHz so it is best to manually set a desired output sample rate).</dd>
<dt>-c:v copy</dt><dd>Copy all mapped video streams.</dd>
<dt>-c:a pcm_s16le</dt><dd>tells FFmpeg to encode the audio stream in 16-bit linear PCM (<a href="https://en.wikipedia.org/wiki/Endianness#Little-endian" target="_blank">little endian</a>)</dd>
<dt>-af "aresample=async=1000"</dt><dd>Uses the <a href="https://ffmpeg.org/ffmpeg-filters.html#aresample-1" target="_blank">aresample</a> filter to stretch/squeeze samples to given timestamps, with a maximum of 1000 samples per second compensation.</dd>
<p>This command takes two or more files of the same file type and joins them together to make a single file. All that the program needs is a text file with a list specifying the files that should be joined. However, it only works properly if the files to be combined have the exact same codec and technical specifications. Be careful: FFmpeg may appear to have successfully joined two video files with different codecs, but may only bring over the audio from the second file or exhibit other odd behaviors. Don’t use this command for joining files with different codecs and technical specs, and always preview your resulting video file!</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-f concat</dt><dd>forces ffmpeg to concatenate the files and to keep the same file format</dd>
<dt>-i <em>mylist.txt</em></dt><dd>path, name and extension of the input file. Per the <a href="https://ffmpeg.org/ffmpeg-formats.html#Options" target="_blank">FFmpeg documentation</a>, it is preferable to specify relative rather than absolute file paths, as allowing absolute file paths may pose a security risk.<br>
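For example (placeholder paths), <em>mylist.txt</em> might read:<br>
<code>file './first_file.ext'</code><br>
<code>file './second_file.ext'</code><br>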
In the above, <strong>file</strong> is simply the word "file". Straight apostrophes ('like this') rather than curved quotation marks (‘like this’) must be used to enclose the file paths.<br>
<strong>Note:</strong> If specifying absolute file paths in the .txt file, add <code>-safe 0</code> before the input file.<br>
<p>The input files may differ in many respects - container, codec, chroma subsampling scheme, framerate, etc. However, the above command only works properly if the files to be combined have the same dimensions (e.g., 720x576). Also note that if the input files have different framerates, then the output file will be of variable framerate.</p>
<p>Some aspects of the input files will be normalised: for example, if an input file contains a video track and an audio track that do not have exactly the same duration, the shorter one will be padded. In the case of a shorter video track, the last frame will be repeated in order to cover the missing video; in the case of a shorter audio track, the audio stream will be padded with silence.</p>
Each reference to a specific stream is enclosed in square brackets. In the first stream reference, <code>0:v:0</code>, the first zero refers to the first input file, <code>v</code> means video stream, and the second zero indicates that it is the <em>first</em> video stream in the file that should be selected. Likewise, <code>0:a:0</code> means the first audio stream in the first input file.<br>
As demonstrated above, ffmpeg uses zero-indexing: <code>0</code> means the first input/stream/etc, <code>1</code> means the second input/stream/etc, and <code>4</code> would mean the fifth input/stream/etc.</dd>
<dt>[1:v:0][1:a:0]</dt><dd>As described above, this means select the first video and audio streams from the second input file.</dd>
<dt>concat=</dt><dd>starts the <code>concat</code> filter</dd>
<dt>n=2</dt><dd>states that there are two input files</dd>
<dt>:</dt><dd>separator</dd>
<dt>v=1</dt><dd>sets the number of output video streams.<br>
Note that this must be equal to the number of video streams selected from each segment.</dd>
<dt>:</dt><dd>separator</dd>
<dt>a=1</dt><dd>sets the number of output audio streams.<br>
Note that this must be equal to the number of audio streams selected from each segment.</dd>
<dt>[video_out]</dt><dd>name of the concatenated output video stream. This is a variable name which you define, so you could call it something different, like “vOut”, “outv”, or “banana”.</dd>
<dt>[audio_out]</dt><dd>name of the concatenated output audio stream. Again, this is a variable name which you define.</dd>
<p>If no characteristics of the output files are specified, ffmpeg will use the default encodings associated with the given output file type. To specify the characteristics of the output stream(s), add flags after each <code>-map "[out]"</code> part of the command.</p>
<p>For example, to ensure that the video stream of the output file is visually lossless H.264 with a 4:2:0 chroma subsampling scheme, the command above could be amended to include the following:<br>
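<code>-map "[video_out]" -c:v libx264 -pix_fmt yuv420p -crf 18 -map "[audio_out]"</code> (an illustrative set of flags)</p>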
<h4>Variation: concatenating files of different resolutions</h4>
<p>To concatenate files of different resolutions, you need to resize the videos to have matching resolutions prior to concatenation. The most basic way to do this is by using a scale filter and giving the dimensions of the file you wish to match:</p>
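<p><code>scale=1920:1080:flags=lanczos</code> (dimensions here are placeholders for those of the file you are matching)</p>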
<p>(The Lanczos scaling algorithm is recommended, as it is slower but better than the default bilinear algorithm).</p>
<p>The rescaling should be applied just before the point where the streams to be used in the output file are listed. Select the stream you want to rescale, apply the filter, and assign that to a variable name (<code>rescaled_video</code> in the below example). Then you use this variable name in the list of streams to be concatenated.</p>
<p>However, this will only have the desired visual output if the inputs have the same aspect ratio. If you wish to concatenate an SD and an HD file, you will also wish to pillarbox the SD file while upscaling. (See the <a href="https://amiaopensource.github.io/ffmprovisr/#SD_HD_2">Convert 4:3 to pillarboxed HD</a> command). The full command would look like this:</p>
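<p>A sketch of the full command (input names and exact values are placeholders): <code>ffmpeg -i <em>input_1</em> -i <em>input_2</em> -filter_complex "[0:v:0]scale=1440:1080:flags=lanczos, pad=1920:1080:(ow-iw)/2:(oh-ih)/2[rescaled_video]; [rescaled_video][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[video_out][audio_out]" -map "[video_out]" -map "[audio_out]" <em>output_file</em></code></p>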
<p>Here, the first input is an SD file which needs to be upscaled to match the second input, which is 1920x1080. The scale filter enlarges the SD input to the height of the HD frame, keeping the 4:3 aspect ratio; then, the video is pillarboxed within a 1920x1080 frame.</p>
<h4>Variation: concatenating files of different framerates</h4>
<p>If the input files have different framerates, then the output file may be of variable framerate. To explicitly obtain an output file of constant framerate, you may wish to convert an input (or multiple inputs) to a different framerate prior to concatenation.</p>
<p>You can speed up or slow down a file using the <code>fps</code> and <code>atempo</code> filters (see also the <a href="https://amiaopensource.github.io/ffmprovisr/#modify_speed">Modify speed</a> command).</p>
<p>Here's an example of the full command, in which input_1 is 30fps, input_2 is 25fps, and 25fps is the desired output speed.</p>
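<p>One possible form (names are placeholders): <code>ffmpeg -i <em>input_1</em> -i <em>input_2</em> -filter_complex "[0:v:0]fps=fps=25[video1_at_25fps]; [video1_at_25fps][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[video_out][audio_out]" -map "[video_out]" -map "[audio_out]" <em>output_file</em></code></p>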
<p>Note that the <code>fps</code> filter will drop or repeat frames as necessary in order to achieve the desired frame rate - see the FFmpeg <a href="https://ffmpeg.org/ffmpeg-filters.html#fps-1" target="_blank">fps docs</a> for more details.</p>
<p>For more information, see the <a href="https://trac.ffmpeg.org/wiki/Concatenate#differentcodec" target="_blank">FFmpeg wiki page on concatenating files of different types</a>.</p>
</div>
<!-- ends Join files of the different types together -->
<dt>-f segment</dt><dd>Use <a href="https://ffmpeg.org/ffmpeg-formats.html#toc-segment_002c-stream_005fsegment_002c-ssegment" target="_blank">segment muxer</a> for generating the output.</dd>
<p>Path, name and extension of the output file.<br>
In order to have an incrementing number in each segment filename, FFmpeg supports <a href="http://www.cplusplus.com/reference/cstdio/printf/" target="_blank">printf-style</a> syntax for a counter.</p>
<p>In this example, '%03d' means: 3-digits, zero-padded<br>
<strong>Note:</strong> watch out when using <code>-ss</code> with <code>-c copy</code> if the source is encoded with an interframe codec (e.g., H.264). Since FFmpeg must split on i-frames, it will seek to the nearest i-frame to begin the stream copy.</dd>
<dt><em>output_file</em></dt><dd>path, name and extension of the output file</dd>
<p>This command captures a certain portion of a video file, starting from the beginning and continuing for the amount of time (in seconds) specified in the script. This can be used to create a preview file, or to remove unwanted content from the end of the file. To be more specific, use timecode, such as 00:00:05.</p>
<dt>-i <em>input_file</em></dt><dd>path, name and extension of the input file</dd>
<dt>-t <em>5</em></dt><dd>tells FFmpeg to stop copying from the input file after a certain time, and specifies the number of seconds after which to stop copying. In this case, 5 seconds is specified.</dd>
<p>This command copies a video file starting from a specified time, removing the first few seconds from the output. This can be used to create an excerpt, or remove unwanted content from the beginning of a video file.</p>
<dt>-i <em>input_file</em></dt><dd>path, name and extension of the input file</dd>
<dt>-ss <em>5</em></dt><dd>tells FFmpeg what timecode in the file to look for to start copying, and specifies the number of seconds into the video that FFmpeg should start copying. To be more specific, you can use timecode such as 00:00:05.</dd>
<p>This command copies a video file starting from a specified time before the end of the file, removing everything before from the output. This can be used to create an excerpt, or extract content from the end of a video file (e.g. for extracting the closing credits).</p>
<dt>-sseof <em>-5</em></dt><dd>This parameter must stay before the input file. It tells FFmpeg what timecode in the file to look for to start copying, and specifies the number of seconds from the end of the video that FFmpeg should start copying. The end of the file has index 0 and the minus sign is needed to reference earlier portions. To be more specific, you can use timecode such as -00:00:05. Note that in most file formats it is not possible to seek exactly, so FFmpeg will seek to the closest point before.</dd>
<dt>-i <em>input_file</em></dt><dd>path, name and extension of the input file</dd>
By default, <a href="https://ffmpeg.org/ffmpeg-filters.html#yadif-1" target="_blank">yadif</a> will output one frame for each frame. Outputting one frame for each <em>field</em> (thereby doubling the frame rate) with <code>yadif=1</code> may produce visually better results.</dd>
<dt>scale=1440:1080:flags=lanczos</dt><dd>resizes the image to 1440x1080, using the Lanczos scaling algorithm, which is slower but better than the default bilinear algorithm.</dd>
<dt>pad=1920:1080:(ow-iw)/2:(oh-ih)/2</dt><dd>pads the area around the 4:3 input video to create a 16:9 output video</dd>
<dt>format=yuv420p</dt><dd>specifies a pixel format of Y′C<sub>B</sub>C<sub>R</sub> 4:2:0</dd>
<dt>"</dt><dd>quotation mark to end filtergraph</dd>
By default, <a href="https://ffmpeg.org/ffmpeg-filters.html#yadif-1" target="_blank">yadif</a> will output one frame for each frame. Outputting one frame for each <em>field</em> (thereby doubling the frame rate) with <code>yadif=1</code> may produce visually better results.</dd>
<dt>format=yuv420p</dt><dd>chroma subsampling set to 4:2:0<br>
By default, <code>libx264</code> will use a chroma subsampling scheme that is the closest match to that of the input. This can result in Y′C<sub>B</sub>C<sub>R</sub> 4:2:0, 4:2:2, or 4:4:4 chroma subsampling. QuickTime and most other non-FFmpeg based players can’t decode H.264 files that are not 4:2:0, therefore it’s advisable to specify 4:2:0 chroma subsampling.</dd>
<p><code>"yadif,format=yuv420p"</code> is an FFmpeg <ahref="https://trac.ffmpeg.org/wiki/FilteringGuide#FiltergraphChainFilterrelationship"target="_blank">filtergraph</a>. Here the filtergraph is made up of one filter chain, which is itself made up of the two filters (separated by the comma).<br>
The enclosing quote marks are necessary when you use spaces within the filtergraph, e.g. <code>-vf "yadif, format=yuv420p"</code>, and are included above as an example of good practice.</p>
<p><strong>Note:</strong> FFmpeg includes several deinterlacers apart from <a href="https://ffmpeg.org/ffmpeg-filters.html#yadif-1" target="_blank">yadif</a>: <a href="https://ffmpeg.org/ffmpeg-filters.html#bwdif" target="_blank">bwdif</a>, <a href="https://ffmpeg.org/ffmpeg-filters.html#w3fdif" target="_blank">w3fdif</a>, <a href="https://ffmpeg.org/ffmpeg-filters.html#kerndeint" target="_blank">kerndeint</a>, and <a href="https://ffmpeg.org/ffmpeg-filters.html#nnedi" target="_blank">nnedi</a>.</p>
<p>The inverse telecine procedure reverses the <a href="https://en.wikipedia.org/wiki/Three-two_pull_down" target="_blank">3:2 pull down</a> process, restoring 29.97fps interlaced video to the 24fps frame rate of the original film source.</p>
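<p><code>ffmpeg -i <em>input_file</em> -c:v libx264 -vf "fieldmatch,yadif,decimate" <em>output_file</em></code></p>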
<dt>-c:v libx264</dt><dd>encode video as H.264</dd>
<dt>-vf "fieldmatch,yadif,decimate"</dt><dd>applies these three video filters to the input video.<br>
<a href="https://ffmpeg.org/ffmpeg-filters.html#fieldmatch" target="_blank">Fieldmatch</a> is a field matching filter for inverse telecine - it reconstructs the progressive frames from a telecined stream.<br>
<a href="https://ffmpeg.org/ffmpeg-filters.html#yadif-1" target="_blank">Yadif</a> (‘yet another deinterlacing filter’) deinterlaces the video. (Note that FFmpeg also includes several other deinterlacers).<br>
<p><code>"fieldmatch,yadif,decimate"</code> is an FFmpeg <a href="https://trac.ffmpeg.org/wiki/FilteringGuide#FiltergraphChainFilterrelationship" target="_blank">filtergraph</a>. Here the filtergraph is made up of one filter chain, which is itself made up of the three filters (separated by commas).<br>
The enclosing quote marks are necessary when you use spaces within the filtergraph, e.g. <code>-vf "fieldmatch, yadif, decimate"</code>, and are included above as an example of good practice.</p>
<p>Note that if applying an inverse telecine procedure to a 29.97i file, the output framerate will actually be 23.976fps.</p>
<p>This command can also be used to restore other framerates.</p>
<div class="sample-image">
<h2>Example</h2>
<p>Before and after inverse telecine:</p>
<img src="img/ivtc_originalvideo.gif" alt="GIF of original video">
<img src="img/ivtc_result.gif" alt="GIF of video after inverse telecine">
<dt>-c:v <em>video_codec</em></dt><dd>As a video filter is used, it is not possible to use <code>-c copy</code>. The video must be re-encoded with whatever video codec is chosen, e.g. <code>ffv1</code>, <code>v210</code> or <code>prores</code>.</dd>
<dt><em>output_file</em></dt><dd>path, name and extension of the output file</dd>
<dt>-filter:v idet</dt><dd>This calls the <a href="https://ffmpeg.org/ffmpeg-filters.html#idet" target="_blank">idet (detect video interlacing type) filter</a>.</dd>
<dt>-f null</dt><dd>Video is decoded with the <code>null</code> muxer. This allows video decoding without creating an output file.</dd>
<dt>-</dt><dd>FFmpeg syntax requires a specified output, and <code>-</code> is just a place holder. No file is actually created.</dd>
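<p>Putting it together: <code>ffmpeg -i <em>input_file</em> -filter:v idet -f null -</code></p>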
<dt>fontfile=<em>font_path</em></dt><dd>Set path to font. For example in macOS: <code>fontfile=/Library/Fonts/AppleGothic.ttf</code></dd>
<dt>fontsize=<em>font_size</em></dt><dd>Set font size. <code>35</code> is a good starting point for SD. Ideally this value is proportional to video size, for example use ffprobe to acquire video height and divide by 14.</dd>
<dt>text=<em>watermark_text</em></dt><dd>Set the content of your watermark text. For example: <code>text='FFMPROVISR EXAMPLE TEXT'</code></dd>
<dt>fontcolor=<em>font_colour</em></dt><dd>Set colour of font. Can be a text string such as <code>fontcolor=white</code> or a hexadecimal value such as <code>fontcolor=0xFFFFFF</code></dd>
<dt>x=(w-text_w)/2:y=(h-text_h)/2</dt><dd>Sets <em>x</em> and <em>y</em> coordinates for the watermark. These relative values will centre your watermark regardless of video dimensions.</dd>
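<p>Assembled from the options above (each italicised value is a placeholder): <code>ffmpeg -i <em>input_file</em> -vf drawtext="fontfile=<em>font_path</em>:fontsize=<em>font_size</em>:text=<em>watermark_text</em>:fontcolor=<em>font_colour</em>:x=(w-text_w)/2:y=(h-text_h)/2" <em>output_file</em></code></p>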
<dt>-filter_complex overlay=main_w-overlay_w-5:5</dt><dd>This calls the overlay filter and sets x and y coordinates for the position of the watermark on the video. Instead of hardcoding specific x and y coordinates, <code>main_w-overlay_w-5:5</code> uses relative coordinates to place the watermark in the upper right hand corner, based on the width of your input files. Please see the <a href="https://ffmpeg.org/ffmpeg-all.html#toc-Examples-102" target="_blank">FFmpeg documentation for more examples.</a></dd>
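<p>For example (names are placeholders): <code>ffmpeg -i <em>input_video</em> -i <em>watermark_image</em> -filter_complex overlay=main_w-overlay_w-5:5 <em>output_file</em></code></p>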
<dt>fontfile=<em>font_path</em></dt><dd>Set path to font. For example in macOS: <code>fontfile=/Library/Fonts/AppleGothic.ttf</code></dd>
<dt>fontsize=<em>font_size</em></dt><dd>Set font size. <code>35</code> is a good starting point for SD. Ideally this value is proportional to video size, for example use ffprobe to acquire video height and divide by 14.</dd>
<dt>timecode=<em>starting_timecode</em></dt><dd>Set the timecode to be displayed for the first frame. Timecode is to be represented as <code>hh:mm:ss[:;.]ff</code>. Colon escaping depends on your operating system; for example, in Ubuntu <code>timecode='09\\:50\\:01\\:23'</code>. Ideally, this value would be generated from the file itself using ffprobe.</dd>
<dt>fontcolor=<em>font_colour</em></dt><dd>Set colour of font. Can be a text string such as <code>fontcolor=white</code> or a hexadecimal value such as <code>fontcolor=0xFFFFFF</code></dd>
<dt>boxcolor=<em>box_colour</em></dt><dd>Set colour of box. Can be a text string such as <code>boxcolor=black</code> or a hexadecimal value such as <code>boxcolor=0x000000</code></dd>
<dt>rate=<em>timecode_rate</em></dt><dd>Framerate of video. For example <code>25/1</code></dd>
<dt>x=(w-text_w)/2:y=h/1.2</dt><dd>Sets <em>x</em> and <em>y</em> coordinates for the timecode. These relative values will horizontally centre your timecode in the bottom third regardless of video dimensions.</dd>
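<p>Assembled from the options above (italicised values are placeholders; <code>box=1</code>, an assumption here, draws the box that <code>boxcolor</code> fills): <code>ffmpeg -i <em>input_file</em> -vf drawtext="fontfile=<em>font_path</em>:fontsize=<em>font_size</em>:timecode=<em>starting_timecode</em>:fontcolor=<em>font_colour</em>:box=1:boxcolor=<em>box_colour</em>:rate=<em>timecode_rate</em>:x=(w-text_w)/2:y=h/1.2" <em>output_file</em></code></p>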
<dt>-c:s mov_text</dt><dd>Encode subtitles using the <code>mov_text</code> codec. Note: The <code>mov_text</code> codec works for MP4 and MOV containers. For the MKV container, acceptable formats are <code>ASS</code>, <code>SRT</code>, and <code>SSA</code>.</dd>
<dt>-vf fps=1/60</dt><dd>Creates a filtergraph to use for the streams. The rest of the command identifies filtering by frames per second, and sets the frames per second at 1/60 (which is one per minute). Omitting this will output all frames from the video.</dd>
<dt><em>output file</em></dt><dd>path, name and extension of the output file. In the example out%d.png, %d is a printf-style format specifier that adds a number (d is for digit) and increments with each frame (out1.png, out2.png, out3.png…). You may also choose a specifier like out%04d.png, which gives 4 digits with leading 0s (out0001.png, out0002.png, out0003.png, …).</dd>
<p>This will convert a series of image files into a GIF.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-f image2</dt><dd>forces input or output file format. <code>image2</code> specifies the image file demuxer.</dd>
<dt>-framerate 9</dt><dd>sets framerate to 9 frames per second</dd>
<dt>-pattern_type glob</dt><dd>tells FFmpeg that the following mapping should "interpret like a <a href="https://en.wikipedia.org/wiki/Glob_%28programming%29" target="_blank">glob</a>" (a "global command" function that relies on the * as a wildcard and finds everything that matches)</dd>
<dt>-i <em>"input_image_*.jpg"</em></dt><dd>maps all files in the directory that start with input_image_, for example input_image_001.jpg, input_image_002.jpg, input_image_003.jpg... etc.<br>
<p>The first command will use the palettegen filter to create a custom palette, then the second command will create the GIF with the paletteuse filter. The result is a high quality GIF.</p>
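<p>A sketch of the two commands (timings and sizes are placeholders): <code>ffmpeg -ss HH:MM:SS -i <em>input_file</em> -filter_complex "fps=10,scale=500:-1:flags=lanczos,palettegen" -t 3 palette.png</code> followed by <code>ffmpeg -ss HH:MM:SS -i <em>input_file</em> -i palette.png -filter_complex "[0:v]fps=10,scale=500:-1:flags=lanczos[v];[v][1:v]paletteuse" -t 3 -loop 6 <em>output_file</em>.gif</code></p>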
Then the scale filter resizes the image. You can specify both the width and the height, or specify a value for one and use a scale value of <em>-1</em> for the other to preserve the aspect ratio. (For example, <code>500:-1</code> would create a GIF 500 pixels wide and with a height proportional to the original video). In the first script above, <code>:flags=lanczos</code> specifies that the Lanczos rescaling algorithm will be used to resize the image.<br>
<dt>-t <em>3</em></dt><dd>duration in seconds (here 3; can be specified also with a full timestamp, i.e. here 00:00:03)</dd>
<dt>-loop <em>6</em></dt><dd>sets the number of times to loop the GIF. A value of <em>-1</em> will disable looping. Omitting <em>-loop</em> will use the default, which will loop infinitely.</dd>
<dt><em>output_file</em></dt><dd>path, name and extension of the output file</dd>
<p>The second command has a slightly different filtergraph, which breaks down as follows:</p>
<dl>
<dt>-filter_complex "[0:v]fps=10, scale=500:-1:flags=lanczos[v], [v][1:v]paletteuse"</dt><dd><code>[0:v]fps=10,scale=500:-1:flags=lanczos[v]</code>: applies the fps and scale filters described above to the first input file (the video).<br>
<code>[v][1:v]paletteuse"</code>: applies the <code>paletteuse</code> filter, setting the second input file (the palette) as the reference file.</dd>
<p>This is a quick and easy method. Dithering is more apparent than the above method using the palette filters, but the file size will be smaller. Perfect for that “legacy” GIF look.</p>
This must match the naming convention actually used! The format specifier %06d matches six-digit numbers, possibly with leading zeroes. This allows FFmpeg to read the full sequence inside one folder in ascending order, one image after the other. For image sequences starting with 086400 (i.e. captured with a timecode starting at 01:00:00:00 and at 24 fps), add the flag <code>-start_number 086400</code> before <code>-i input_file_%06d.ext</code>. The extension for TIFF files is .tif or .tiff; the extension for DPX files is .dpx (or possibly .cin for older files).</dd>
<dt>-c:v v210</dt><dd>encodes an uncompressed 10-bit video stream</dd>
<p>This command will take an image file (e.g. image.jpg) and an audio file (e.g. audio.mp3) and combine them into a video file that contains the audio track with the image used as the video. It can be useful in a situation where you might want to upload an audio file to a platform like YouTube. You may want to adjust the scaling with -vf to suit your needs.</p>
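<p>A sketch, assuming H.264/AAC output (filenames are placeholders): <code>ffmpeg -loop 1 -i <em>image_file</em> -i <em>audio_file</em> -c:v libx264 -c:a aac -shortest <em>output_file</em>.mp4</code>; here <code>-loop 1</code> repeats the still image and <code>-shortest</code> ends the video when the audio ends.</p>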
<p>This filter allows visual analysis of the information held in various bit depths of an audio stream. This can aid with identifying when a file that is nominally of a higher bit depth actually has been 'padded' with null information. The provided GIF shows a 16 bit WAV file (left) and then the results of converting that same WAV to 32 bit (right). Note that in the 32 bit version, there is still only information in the first 16 bits.</p>
<dl>
<dt>ffplay -f lavfi</dt><dd>starts the command and tells ffplay that you will be using the lavfi virtual device to create the input</dd>
<dt>"</dt><dd>quotation mark to start the lavfi filtergraph</dd>
<dt>asplit=2[out1][a]</dt><dd>splits the audio stream in two. One of these [a] will be passed to the filter, and the other [out1] will be the audible stream.</dd>
<dt>[a]abitscope=colors=purple|yellow[out0]</dt><dd>sends stream [a] into the abitscope filter, sets the colors for the channels to purple and yellow, and outputs the results to [out0]. This is what will be the visualization.</dd>
<dt>"</dt><dd>quotation mark to end the lavfi filtergraph</dd>
</dl>
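<p>Assembled from the parts above, the command might look like this (<code>amovie</code> loads the audio file as a lavfi source; the filename is a placeholder):</p>
<p><code>ffplay -f lavfi "amovie=<em>input_file</em>,asplit=2[out1][a],[a]abitscope=colors=purple|yellow[out0]"</code></p>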
<divclass="sample-image">
<h2>Comparison of mono 16 bit and mono 16 bit padded to 32 bit.</h2>
<dt>-f lavfi</dt><dd>tells ffplay to use the <ahref="https://ffmpeg.org/ffmpeg-devices.html#lavfi"target="_blank">Libavfilter input virtual device</a></dd>
<dt>,</dt><dd>comma signifies the end of audio source section and the beginning of the filter section</dd>
<dt>astats=metadata=1</dt><dd>tells the astats filter to output metadata that can be passed to another filter (in this case adrawgraph)</dd>
<dt>:</dt><dd>divides between options of the same filter</dd>
<dt>reset=1</dt><dd>tells the filter to calculate the stats on every frame (increasing this number would calculate stats for groups of frames)</dd>
<dt>,</dt><dd>comma divides one filter in the chain from another</dd>
<dt>adrawgraph=lavfi.astats.Overall.Peak_level:max=0:min=-30.0</dt><dd>draws a graph using the overall peak volume calculated by the astats filter. It sets the max for the graph to 0 (dB) and the minimum to -30 (dB). For more options on data points that can be graphed see the <ahref="https://ffmpeg.org/ffmpeg-filters.html#astats-1"target="_blank">FFmpeg astats documentation</a></dd>
<dt>size=700x256:bg=Black</dt><dd>sets the background color and size of the output</dd>
<dt>[out]</dt><dd>ends the filterchain and sets the output</dd>
<dt>"</dt><dd>quotation mark to end the lavfi filtergraph</dd>
<dt>-f lavfi</dt><dd>tells ffplay to use the <ahref="https://ffmpeg.org/ffmpeg-devices.html#lavfi"target="_blank">Libavfilter input virtual device</a></dd>
<dt>-vf</dt><dd>creates a filtergraph to use for the streams</dd>
<dt>"</dt><dd>quotation mark to start filtergraph</dd>
<dt>split=2[m][v]</dt><dd>Splits the input into two identical outputs and names them [m] and [v]</dd>
<dt>,</dt><dd>comma signifies there is another parameter coming</dd>
<dt>[v]vectorscope=b=0.7:m=color3:g=green[v]</dt><dd>applies the vectorscope filter, setting a light background opacity (b, alias for bgopacity), a background color style (m, alias for mode), and a green graticule (g, alias for graticule)</dd>
<dt>,</dt><dd>comma signifies there is another parameter coming</dd>
<dt>[m][v]overlay=x=W-w:y=H-h</dt><dd>declares where the vectorscope will overlay on top of the video image as it plays</dd>
<dt>"</dt><dd>quotation mark to end filtergraph</dd>
<dt>-filter_complex</dt><dd>Lets FFmpeg know we will be using a complex filter (this must be used for multiple inputs)</dd>
<dt>"</dt><dd>quotation mark to start filtergraph</dd>
<dt>[0:v:0]tblend=all_mode=difference128[a]</dt><dd>Applies the tblend filter (setting all_mode to difference128) to the first video stream from the first input and assigns the result to the output [a]</dd>
<dt>[1:v:0]tblend=all_mode=difference128[b]</dt><dd>Applies the tblend filter (setting all_mode to difference128) to the first video stream from the second input and assigns the result to the output [b]</dd>
<dt>[a][b]hstack[out]</dt><dd>Takes the outputs from the previous steps ([a] and [b]) and uses the hstack (horizontal stack) filter on them to create the side-by-side output, which is then named [out]</dd>
<dt>"</dt><dd>quotation mark to end filtergraph</dd>
<dt>-map [out]</dt><dd>Maps the output of the filter chain</dd>
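<p>A sketch of the whole command, writing the comparison to a file (on recent FFmpeg versions the <code>difference128</code> blend mode may be named <code>grainextract</code>):</p>
<p><code>ffmpeg -i <em>input_file_1</em> -i <em>input_file_2</em> -filter_complex "[0:v:0]tblend=all_mode=difference128[a];[1:v:0]tblend=all_mode=difference128[b];[a][b]hstack[out]" -map "[out]" <em>output_file</em>.mkv</code></p>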
<p>See also the <ahref="https://ffmpeg.org/ffprobe.html"target="_blank"> FFmpeg documentation on ffprobe</a> for a full list of flags, commands, and options.</p>
<h3>Create Bash script to batch process with FFmpeg</h3>
<p>Bash scripts are plain text files saved with a .sh extension. This entry explains how they work with the example of a bash script named “Rewrap-MXF.sh”, which rewraps .mxf files in a given directory to .mov files.</p>
<p>“Rewrap-MXF.sh” contains the following text:</p>
<p><code>for file in *.mxf; do ffmpeg -i "$file" -map 0 -c copy "${file%.mxf}.mov"; done</code></p>
<dl>
<dt>for file in *.mxf</dt><dd>starts the loop, and states what the input files will be. Here, the FFmpeg command within the loop will be applied to all files with an extension of .mxf.<br>
The word ‘file’ is an arbitrary variable which will represent each .mxf file in turn as it is looped over.</dd>
<dt>do ffmpeg -i "$file"</dt><dd>carry out the following FFmpeg command for each input file.<br>
Per Bash syntax, within the command the variable is referred to by <strong>“$file”</strong>. The dollar sign is used to reference the variable ‘file’, and the enclosing quotation marks prevent reinterpretation of any special characters that may occur within the filename, ensuring that the original filename is retained.</dd>
<p><strong>Note:</strong> the shell script (.sh file) and all .mxf files to be processed must be contained within the same directory, and the script must be run from that directory.</p>
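<p>To run the script, make it executable and invoke it from within that directory, for example:</p>
<p><code>chmod +x Rewrap-MXF.sh && ./Rewrap-MXF.sh</code></p>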
<h3>Create PowerShell script to batch process with FFmpeg</h3>
<p>As of Windows 10, it is possible to run Bash via <ahref="https://msdn.microsoft.com/en-us/commandline/wsl/about"target="_blank">Bash on Ubuntu on Windows</a>, allowing you to use <ahref="index.html#batch_processing_bash">bash scripting</a>. To enable Bash on Windows, see <ahref="https://msdn.microsoft.com/en-us/commandline/wsl/install_guide"target="_blank">these instructions</a>.</p>
<p>On Windows, the primary native command line programme is <strong>PowerShell</strong>. PowerShell scripts are plain text files saved with a .ps1 extension. This entry explains how they work with the example of a PowerShell script named “rewrap-mp4.ps1”, which rewraps .mp4 files in a given directory to .mkv files.</p>
<dt>$inputfiles = ls *.mp4</dt><dd>Creates the variable <code>$inputfiles</code>, which is a list of all the .mp4 files in the current folder.<br>
In PowerShell, all variable names start with the dollar-sign character.</dd>
<dt>foreach ($file in $inputfiles)</dt><dd>Creates a loop and states the subsequent code block will be applied to each file listed in <code>$inputfiles</code>.<br>
<code>$file</code> is an arbitrary variable which will represent each .mp4 file in turn as it is looped over.</dd>
<dt>{</dt><dd>Opens the code block.</dd>
<dt>$output = [io.path]::ChangeExtension($file, '.mkv')</dt><dd>Sets up the output file: it will be located in the current folder and keep the same filename, but will have an .mkv extension instead of .mp4.</dd>
<dt>ffmpeg -i $file</dt><dd>Carry out the following FFmpeg command for each input file.<br>
<strong>Note:</strong> To call FFmpeg here as just ‘ffmpeg’ (rather than entering the full path to ffmpeg.exe), you must make sure that it’s correctly configured. See <ahref="http://adaptivesamples.com/how-to-install-ffmpeg-on-windows/"target="_blank">this article</a>, especially the section ‘Add to Path’.</dd>
<dt>-c copy</dt><dd>enable stream copy (no re-encode)</dd>
<dt>$output</dt><dd>The output file is set to the value of the <code>$output</code> variable declared above: i.e., the current file name with an .mkv extension.</dd>
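<p>Assembled from the parts above, the whole script might read:</p>
<p><code>$inputfiles = ls *.mp4<br>
foreach ($file in $inputfiles)<br>
{<br>
&nbsp;&nbsp;$output = [io.path]::ChangeExtension($file, '.mkv')<br>
&nbsp;&nbsp;ffmpeg -i $file -c copy $output<br>
}</code></p>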
<p><strong>Note:</strong> the PowerShell script (.ps1 file) and all .mp4 files to be rewrapped must be contained within the same directory, and the script must be run from that directory.</p>
<p>This decodes your video and displays any CRC checksum mismatches. These errors will display in your terminal like this: <code>[ffv1 @ 0x1b04660] CRC mismatch 350FBD8A!at 0.272000 seconds</code></p>
<p>Frame CRCs are enabled by default in FFV1 Version 3.</p>
<dt>-report</dt><dd>Dump full command line and console output to a file named <em>ffmpeg-YYYYMMDD-HHMMSS.log</em> in the current directory. It also implies <code>-loglevel verbose</code>.</dd>
<dt>-i <em>input_file</em></dt><dd>path, name and extension of the input file</dd>
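<p>A minimal sketch of such a command, decoding the file and discarding the result via the null muxer so that only the log remains:</p>
<p><code>ffmpeg -report -i <em>input_file</em> -f null -</code></p>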
<p>This will create an MD5 checksum for each group of 48000 audio samples.<br>
The number of samples per group can be set arbitrarily, but it is good practice to match the sample rate of the media file (so that you get one checksum per second).</p>
<p><strong>Note:</strong> This filter transcodes audio to 16 bit PCM by default, and the generated framemd5s will reflect that; validating them requires using the same default settings. Alternatively, if your file has a different quantisation rate (e.g. 24 bit), you might add the audio codec <code>-c:a pcm_s24le</code> to the command, for compatibility with other tools such as <ahref="https://mediaarea.net/BWFMetaEdit"target="_blank">BWF MetaEdit</a>.</p>
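<p>One way to express this as a command (a sketch, assuming a 48 kHz file; <code>asetnsamples</code> groups the samples and the framemd5 muxer writes one checksum per group):</p>
<p><code>ffmpeg -i <em>input_file</em> -af "asetnsamples=n=48000" -f framemd5 -vn <em>output_file</em>.framemd5</code></p>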
<p>This will create MD5 checksums for the first video and the first audio stream in a file. If only one of these is necessary (for example if used on a WAV file) either part of the command can be excluded to create the desired MD5 only. Use of this kind of checksum enables integrity of the A/V information to be verified independently of any changes to surrounding metadata.</p>
<dt><em>output_file_1</em></dt><dd>is the output file for the video stream MD5. Example file extensions are <code>.md5</code> and <code>.txt</code></dd>
<dt>-map 0:a:0</dt><dd>selects the first audio stream from the input</dd>
<dt>-c:a copy</dt><dd>ensures that FFmpeg will not transcode the audio to a different codec before generating the MD5 (by default FFmpeg will use 16 bit PCM for audio MD5s).</dd>
<p><strong>Note:</strong> The MD5s generated by running this command on WAV files are compatible with those embedded by the <ahref="https://mediaarea.net/BWFMetaEdit"target="_blank">BWF MetaEdit</a> tool and can be compared.</p>
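<p>A sketch of the full command, producing both MD5s in one pass:</p>
<p><code>ffmpeg -i <em>input_file</em> -map 0:v:0 -c:v copy -f md5 <em>output_file_1</em> -map 0:a:0 -c:a copy -f md5 <em>output_file_2</em></code></p>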
<p>This will create an XML report for use in <ahref="https://github.com/bavc/qctools"target="_blank">QCTools</a> for a video file with one video track and one audio track. See also the <ahref="https://github.com/bavc/qctools/blob/master/docs/data_format.md#creating-a-qctools-document"target="_blank">QCTools documentation</a>.</p>
<dt>-f lavfi</dt><dd>tells ffprobe to use the <ahref="https://ffmpeg.org/ffmpeg-devices.html#lavfi"target="_blank">Libavfilter</a> input virtual device</dd>
<dd>This very large lump of commands declares the input file and passes in a request for all potential data signal information for a file with one video and one audio track</dd>
<dt>-show_frames</dt><dd>asks for information about each frame and subtitle contained in the input multimedia stream</dd>
<dt>-show_versions</dt><dd>asks for information related to program and library versions</dd>
<dt>-of xml=x=1:q=1</dt><dd>sets the data export format to XML</dd>
<dt>-noprivate</dt><dd>hides any private data that might exist in the file</dd>
<dt>| gzip</dt><dd>the | "pipes" (pushes) the data produced by ffprobe into gzip, which compresses it</dd>
<dt><code>></code></dt><dd>redirects the standard output (the data made by ffprobe about the video)</dd>
<dt><em>input_file</em>.qctools.xml.gz</dt><dd>names the gzipped data output file, which can be named anything, but needs the extension qctools.xml.gz for compatibility with QCTools</dd>
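<p>The full filter string is long; as a simplified sketch (the complete version is given in the QCTools documentation linked above), the command follows this shape:</p>
<p><code>ffprobe -f lavfi -i "movie=<em>input_file</em>:s=v+a[in0][in1],[in0]signalstats=stat=tout+vrep+brng[out0];[in1]ebur128=metadata=1[out1]" -show_frames -show_versions -of xml=x=1:q=1 -noprivate | gzip > <em>input_file</em>.qctools.xml.gz</code></p>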
<p>This will create an XML report for use in <ahref="https://github.com/bavc/qctools"target="_blank">QCTools</a> for a video file with one video track and NO audio track. See also the <ahref="https://github.com/bavc/qctools/blob/master/docs/data_format.md#creating-a-qctools-document"target="_blank">QCTools documentation</a>.</p>
<dt>-f lavfi</dt><dd>tells ffprobe to use the <ahref="https://ffmpeg.org/ffmpeg-devices.html#lavfi"target="_blank">Libavfilter</a> input virtual device</dd>
<dd>This very large lump of commands declares the input file and passes in a request for all potential data signal information for a file with one video track and no audio track</dd>
<dt>-show_frames</dt><dd>asks for information about each frame and subtitle contained in the input multimedia stream</dd>
<dt>-show_versions</dt><dd>asks for information related to program and library versions</dd>
<dt>-of xml=x=1:q=1</dt><dd>sets the data export format to XML</dd>
<dt>-noprivate</dt><dd>hides any private data that might exist in the file</dd>
<dt>| gzip</dt><dd>the | "pipes" (pushes) the data produced by ffprobe into gzip, which compresses it</dd>
<dt><code>></code></dt><dd>redirects the standard output (the data made by ffprobe about the video)</dd>
<dt><em>input_file</em>.qctools.xml.gz</dt><dd>names the gzipped data output file, which can be named anything, but needs the extension qctools.xml.gz for compatibility with QCTools</dd>
<p>This command uses FFmpeg's <ahref="https://ffmpeg.org/ffmpeg-filters.html#readeia608"target="_blank">readeia608</a> filter to extract the hexadecimal values hidden within <ahref="https://en.wikipedia.org/wiki/EIA-608"target="_blank">EIA-608 (Line 21)</a> Closed Captioning, outputting a csv file. For more information about EIA-608, check out Adobe's <ahref="https://www.adobe.com/content/dam/Adobe/en/devnet/video/pdfs/introduction_to_closed_captions.pdf"target="_blank">Introduction to Closed Captions</a>.</p>
<p>If hex isn't your thing, closed captioning <ahref="http://www.theneitherworld.com/mcpoodle/SCC_TOOLS/DOCS/CC_CHARS.HTML"target="_blank">character</a> and <ahref="http://www.theneitherworld.com/mcpoodle/SCC_TOOLS/DOCS/CC_CODES.HTML"target="_blank">code</a> sets can be found in the documentation for SCTools.</p>
<dt>-f lavfi</dt><dd>tells ffprobe to use the <ahref="https://ffmpeg.org/ffmpeg-devices.html#lavfi"target="_blank">libavfilter</a> input virtual device</dd>
<dt>readeia608 -show_entries frame=pkt_pts_time:frame_tags=lavfi.readeia608.0.line,lavfi.readeia608.0.cc,lavfi.readeia608.1.line,lavfi.readeia608.1.cc -of csv</dt><dd>specifies the first two lines of video in which EIA-608 data (hexadecimal byte pairs) are identifiable by ffprobe, outputting comma separated values (CSV)</dd>
<dt>></dt><dd>redirects the standard output (the data created by ffprobe about the video)</dd>
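<p>Assembled from the parts above, the command might read:</p>
<p><code>ffprobe -f lavfi -i "movie=<em>input_file</em>,readeia608" -show_entries frame=pkt_pts_time:frame_tags=lavfi.readeia608.0.line,lavfi.readeia608.0.cc,lavfi.readeia608.1.line,lavfi.readeia608.1.cc -of csv > <em>input_file</em>.csv</code></p>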
<p>Side-by-side video with true EIA-608 captions on the left, zoomed in view of the captions on the right (with hex values represented). To achieve something similar with your own captioned video, try out the EIA608/VITC viewer in <ahref="https://github.com/bavc/qctools"target="_blank">QCTools</a>.</p>
<imgsrc="./img/eia608_captions.gif"alt="GIF of Closed Captions">
<dt>-f lavfi</dt><dd>tells FFmpeg to use the <ahref="https://ffmpeg.org/ffmpeg-devices.html#lavfi"target="_blank">Libavfilter</a> input virtual device</dd>
<dt>-i mandelbrot=size=1280x720:rate=25</dt><dd>asks for the <ahref="https://ffmpeg.org/ffmpeg-filters.html#mandelbrot"target="_blank">mandelbrot test filter</a> as input. Adjusting the <code>size</code> and <code>rate</code> options allows you to choose a specific frame size and framerate.</dd>
<dt>-c:v libx264</dt><dd>transcodes video from rawvideo to H.264. Set <code>-pix_fmt</code> to <code>yuv420p</code> for greater H.264 compatibility with media players.</dd>
<dt>-t 10</dt><dd>specifies recording time of 10 seconds</dd>
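<p>Put together, the command might look like this:</p>
<p><code>ffmpeg -f lavfi -i mandelbrot=size=1280x720:rate=25 -c:v libx264 -pix_fmt yuv420p -t 10 <em>output_file</em>.mp4</code></p>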
<dt>-f lavfi</dt><dd>tells FFmpeg to use the <ahref="https://ffmpeg.org/ffmpeg-devices.html#lavfi"target="_blank">Libavfilter</a> input virtual device</dd>
<dt>-i smptebars=size=720x576:rate=25</dt><dd>asks for the <ahref="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc_002c-testsrc2_002c-yuvtestsrc"target="_blank">smptebars test filter</a> as input. Adjusting the <code>size</code> and <code>rate</code> options allows you to choose a specific frame size and framerate.</dd>
<dt>-c:v prores</dt><dd>transcodes video from rawvideo to Apple ProRes 4:2:2.</dd>
<dt>-t 10</dt><dd>specifies recording time of 10 seconds</dd>
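<p>Assembled, a command of this shape might read:</p>
<p><code>ffmpeg -f lavfi -i smptebars=size=720x576:rate=25 -c:v prores -t 10 <em>output_file</em>.mov</code></p>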
<dt>-f lavfi</dt><dd>tells FFmpeg to use the <ahref="https://ffmpeg.org/ffmpeg-devices.html#lavfi"target="_blank">libavfilter</a> input virtual device</dd>
<dt>-i testsrc=size=720x576:rate=25</dt><dd>asks for the testsrc filter pattern as input. Adjusting the <code>size</code> and <code>rate</code> options allows you to choose a specific frame size and framerate. <br>
The different test patterns that can be generated are listed <ahref="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc_002c-testsrc2_002c-yuvtestsrc"target="_blank">here</a>.</dd>
<dt>-c:v v210</dt><dd>transcodes video from rawvideo to 10-bit Uncompressed Y′C<sub>B</sub>C<sub>R</sub> 4:2:2. Alter this setting to set your desired codec.</dd>
<dt>-t 10</dt><dd>specifies recording time of 10 seconds</dd>
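<p>As a sketch, the command might look like this:</p>
<p><code>ffmpeg -f lavfi -i testsrc=size=720x576:rate=25 -c:v v210 -t 10 <em>output_file</em>.mov</code></p>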
<dt>-f lavfi</dt><dd>tells ffplay to use the <ahref="https://ffmpeg.org/ffmpeg-devices.html#lavfi"target="_blank">Libavfilter</a> input virtual device</dd>
<dt>-i smptehdbars=size=1920x1080</dt><dd>asks for the <ahref="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc_002c-testsrc2_002c-yuvtestsrc"target="_blank">smptehdbars filter pattern</a> as input and sets the HD resolution. This generates a colour bars pattern, based on the SMPTE RP 219–2002.</dd>
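<p>In full, the command might simply be:</p>
<p><code>ffplay -f lavfi -i smptehdbars=size=1920x1080</code></p>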
<dt>-f lavfi</dt><dd>tells ffplay to use the <ahref="https://ffmpeg.org/ffmpeg-devices.html#lavfi"target="_blank">Libavfilter</a> input virtual device</dd>
<dt>-i smptebars=size=640x480</dt><dd>asks for the <ahref="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc_002c-testsrc2_002c-yuvtestsrc"target="_blank">smptebars filter pattern</a> as input and sets the VGA (SD) resolution. This generates a colour bars pattern, based on the SMPTE Engineering Guideline EG 1–1990.</dd>
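<p>Likewise, the full command might simply be:</p>
<p><code>ffplay -f lavfi -i smptebars=size=640x480</code></p>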
<dt>-f lavfi</dt><dd>tells FFmpeg to use the <ahref="https://ffmpeg.org/ffmpeg-devices.html#lavfi"target="_blank">Libavfilter</a> input virtual device</dd>
<dt>-c:a pcm_s16le</dt><dd>encodes the audio codec in <code>pcm_s16le</code> (the default encoding for wav files). <code>pcm</code> represents pulse-code modulation format (raw bytes), <code>16</code> means 16 bits per sample, and <code>le</code> means "little endian"</dd>
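<p>A sketch of such a command, assuming a 1 kHz tone sampled at 48 kHz and a five-second duration (all example values):</p>
<p><code>ffmpeg -f lavfi -i "sine=frequency=1000:sample_rate=48000:duration=5" -c:a pcm_s16le <em>output_file</em>.wav</code></p>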
<dt>-f lavfi</dt><dd>tells FFmpeg to use the <ahref="https://ffmpeg.org/ffmpeg-devices.html#lavfi"target="_blank">libavfilter</a> input virtual device</dd>
<dt>-i smptebars=size=720x576:rate=25</dt><dd>asks for the <ahref="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc_002c-testsrc2_002c-yuvtestsrc"target="_blank">smptebars test filter</a> as input. Adjusting the <code>size</code> and <code>rate</code> options allows you to choose a specific frame size and framerate.</dd>
<dt>-f lavfi</dt><dd>use libavfilter again, but now for audio</dd>
<dt>-i "sine=frequency=1000:sample_rate=48000"</dt><dd>Sets the signal to 1000 Hz, sampling at 48 kHz.</dd>
<dt>-c:a pcm_s16le</dt><dd>encodes the audio codec in <code>pcm_s16le</code> (the default encoding for wav files). <code>pcm</code> represents pulse-code modulation format (raw bytes), <code>16</code> means 16 bits per sample, and <code>le</code> means "little endian"</dd>
<dt>-t 10</dt><dd>specifies recording time of 10 seconds</dd>
<dt>-c:v ffv1</dt><dd>Encodes to <ahref="https://en.wikipedia.org/wiki/FFV1"target="_blank">FFV1</a>. Alter this setting to set your desired codec.</dd>
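<p>Combining the two lavfi inputs and the codec choices above, the command might read:</p>
<p><code>ffmpeg -f lavfi -i smptebars=size=720x576:rate=25 -f lavfi -i "sine=frequency=1000:sample_rate=48000" -c:a pcm_s16le -t 10 -c:v ffv1 <em>output_file</em>.mkv</code></p>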
<dt>-bsf noise=1</dt><dd>sets the 'noise' bitstream filter for all streams. Bitstream filters can be applied to specific streams using syntax such as <code>-bsf:v</code> for video, <code>-bsf:a</code> for audio, etc. The <ahref="https://ffmpeg.org/ffmpeg-bitstream-filters.html#noise"target="_blank">noise filter</a> intentionally damages the contents of packets without damaging the container. This sets the noise level to 1, but it could be left blank or set to any number above 0.</dd>
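<p>As a sketch, applied to an existing file with stream copy so only the packets (not the container) are damaged:</p>
<p><code>ffmpeg -i <em>input_file</em> -c copy -bsf noise=1 <em>output_file</em></code></p>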
<dt>-f lavfi</dt><dd>tells ffplay to use the <ahref="https://ffmpeg.org/ffmpeg-devices.html#lavfi"target="_blank">Libavfilter</a> input virtual device</dd>
<labelclass="recipe"for="ocr_on_top">Play video with OCR</label>
<inputtype="checkbox"id="ocr_on_top">
<divclass="hiding">
<h3>Plays video with OCR on top</h3>
<p>Note: ffmpeg must be compiled with the tesseract library for this script to work (<code>--with-tesseract</code> if using the <code>brew install ffmpeg</code> method).</p>
<dt>-vf</dt><dd>creates a filtergraph to use for the streams</dd>
<dt>"</dt><dd>quotation mark to start filtergraph</dd>
<dt>ocr,</dt><dd>applies the ocr filter to the video; the comma signifies that the next filter in the chain follows</dd>
<dt>drawtext=fontfile=/Library/Fonts/Andale Mono.ttf</dt><dd>tells ffplay to drawtext and use a specific font (Andale Mono) when doing so</dd>
<dt>:</dt><dd>indicates there’s another parameter coming</dd>
<dt>text=%{metadata\\\:lavfi.ocr.text}</dt><dd>tells ffplay what text to draw during playback; in this case, the text stored in the lavfi.ocr.text frame metadata tag by the ocr filter</dd>
<dt>:</dt><dd>indicates there’s another parameter coming</dd>
<dt>fontcolor=white</dt><dd>specifies font color as white</dd>
<dt>"</dt><dd>quotation mark to end filtergraph</dd>
<labelclass="recipe"for="ffprobe_ocr">Export OCR from video to screen</label>
<inputtype="checkbox"id="ffprobe_ocr">
<divclass="hiding">
<h3>Exports OCR data to screen</h3>
<p>Note: FFmpeg must be compiled with the tesseract library for this script to work (<code>--with-tesseract</code> if using the <code>brew install ffmpeg</code> method)</p>
<dt>-f lavfi</dt><dd>tells ffprobe to use the <ahref="https://ffmpeg.org/ffmpeg-devices.html#lavfi"target="_blank">Libavfilter input virtual device</a></dd>
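<p>One possible form of the command (a sketch; the exact <code>-show_entries</code> selection can be adjusted):</p>
<p><code>ffprobe -f lavfi -i "movie=<em>input_file</em>,ocr" -show_entries frame_tags=lavfi.ocr.text -of csv</code></p>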
This must match the naming convention used! The pattern %06d matches numbers that are exactly six digits long, including any leading zeroes. This allows the full sequence to be read in ascending order, one image after the other.<br>
The extension for TIFF files is .tif or .tiff; the extension for DPX files is .dpx (or possibly .cin for older files). Screenshots are often in .png format.</dd>
<p>If <code>-framerate</code> is omitted, the playback speed depends on the images’ file sizes and on the computer’s processing power. It may be rather slow for large image files.</p>
<p>You can navigate durationally by clicking within the playback window. Clicking towards the left-hand side of the playback window takes you towards the beginning of the playback sequence; clicking towards the right takes you towards the end of the sequence.</p>
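<p>A minimal sketch of such a playback command (the framerate of 24 is an example value):</p>
<p><code>ffplay -framerate 24 <em>input_file_%06d.ext</em></code></p>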
<p>This command splits the original input file into a video stream and an audio stream. The -map option identifies which streams are mapped to which file. To ensure that you’re mapping the right streams to the right file, run ffprobe before writing the command to identify which streams are desired.</p>
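<p>A sketch of such a command (note that FFmpeg will re-encode with its defaults unless <code>-c copy</code> is added per output):</p>
<p><code>ffmpeg -i <em>input_file</em> -map 0:v:0 <em>video_output_file</em> -map 0:a:0 <em>audio_output_file</em></code></p>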
<p>This command takes a video file and an audio file as inputs, and creates an output file that combines the video stream in the first file with the audio stream in the second file.</p>
<p><strong>Note:</strong> in the example above, the video input file is given prior to the audio input file. However, input files can be added in any order, as long as they are indexed correctly when stream mapping with <code>-map</code>. See the entry on <ahref="#stream-mapping">stream mapping</a>.</p>
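<p>One possible form, stream-copying both inputs into a Matroska container (the container choice is an example):</p>
<p><code>ffmpeg -i <em>video_file</em> -i <em>audio_file</em> -map 0:v -map 1:a -c copy <em>output_file</em>.mkv</code></p>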
<labelclass="recipe"for="create_iso">Create ISO files for DVD access</label>
<inputtype="checkbox"id="create_iso">
<divclass="hiding">
<h3>Create ISO files for DVD access</h3>
<p>Create an ISO file that can be used to burn a DVD. Please note, you will have to install dvdauthor. To install dvdauthor using Homebrew, run: <code>brew install dvdauthor</code></p>
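<p>As a first step, FFmpeg can transcode the source into a DVD-compliant MPEG-2 file, which dvdauthor then turns into a DVD structure. A sketch of the FFmpeg step (use <code>pal-dvd</code> instead for PAL material):</p>
<p><code>ffmpeg -i <em>input_file</em> -aspect 4:3 -target ntsc-dvd <em>output_file</em>.mpg</code></p>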
<dt>-f lavfi</dt><dd>uses the <ahref="https://ffmpeg.org/ffmpeg-devices.html#lavfi"target="_blank">Libavfilter input virtual device</a> as chosen format</dd>
<dt>,</dt><dd>comma signifies closing of video source assertion and ready for filter assertion</dd>
<dt>signalstats</dt><dd>tells ffprobe to use the signalstats filter</dd>
<dt>-show_entries</dt><dd>sets list of entries to show per column, determined on the next line</dd>
<dt>frame=pkt_pts_time:frame_tags=lavfi.signalstats.YDIF</dt><dd>specifies showing the timecode (<code>pkt_pts_time</code>) in the frame stream and the YDIF section of the frame_tags stream</dd>
<dt>-of csv</dt><dd>sets the output printing format to CSV. <code>-of</code> is an alias of <code>-print_format</code>.</dd>
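<p>Assembled, the command might read:</p>
<p><code>ffprobe -f lavfi "movie=<em>input_file</em>,signalstats" -show_entries frame=pkt_pts_time:frame_tags=lavfi.signalstats.YDIF -of csv</code></p>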
<dd>This calls the drawbox filter with the following options:
<dl>
<dt>w=in_w</dt><dd>Width is set to the input width. Shorthand for this command would be w=iw</dd>
<dt>h=7</dt><dd>Height is set to 7 pixels.</dd>
<dt>y=ih-h</dt><dd>Y represents the offset, and ih-h sets it to the input height minus the height declared in the previous parameter, setting the box at the bottom of the frame.</dd>
<dt>t=max</dt><dd>T represents the thickness of the drawn box. Default is 3.</dd>
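<p>Together, those options produce a filter along these lines (a sketch; the colour value is an assumption, as it is not specified above):</p>
<p><code>drawbox=w=in_w:h=7:y=ih-h:t=max:color=red</code></p>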
<p>I use this script to stream to an RTMP target and record the stream locally as .mp4 with only one FFmpeg instance.</p>
<p>As input, I use <code>bmdcapture</code>, which is piped to FFmpeg. But it can also be used with a static video file as input.</p>
<p>The input will be scaled to 1280px width, maintaining the aspect ratio. The stream will also stop after a given time (see the <code>-t</code> option).</p>
<h4>Notes</h4>
<ol>
<li>I recommend using this inside a shell script - then you can define the variables <code>${INPUTFILE}</code>, <code>${STREAMDURATION}</code>, <code>${TARGETFILE}</code>, and <code>${STREAMTARGET}</code>.</li>
<li>This is in daily use to live-stream a real-world TV show. It has run with no errors for nearly 4 years. Some parameters were found by trial and error or empirical testing, so suggestions/questions are welcome.</li>
<dt>"[movflags=+faststart]target-file.mp4|[f=flv]rtmp://stream-url/stream-id"</dt><dd>The outputs, separated by a pipe (|). The first is the local file, the second is the live stream. Options for each target are given in square brackets before the target.</dd>
</dl>
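<p>Under these assumptions, a sketch of the whole command could look like this (the encoder choices are illustrative; <code>scale=1280:-2</code> keeps the width at 1280px with an even, proportional height, and the tee muxer writes both outputs at once):</p>
<p><code>ffmpeg -re -i ${INPUTFILE} -vf scale=1280:-2 -c:v libx264 -c:a aac -t ${STREAMDURATION} -map 0 -f tee "[movflags=+faststart]${TARGETFILE}|[f=flv]${STREAMTARGET}"</code></p>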
<pclass="link"></p>
</div>
<!-- END Record and live-stream at the same time -->
<p>This section introduces and explains the usage of some additional command line tools similar to FFmpeg for use in digital preservation workflows (and beyond!).</p>
<p>Its official website can be found <ahref="https://www.imagemagick.org/script/index.php"target="_blank">here</a>.</p>
<p>Another great resource with lots of supplemental explanations of filters is available at <ahref="http://www.fmwconcepts.com/imagemagick/index.php"target="_blank">Fred's ImageMagick Scripts</a>.</p>
<p>Unlike many other command line tools, ImageMagick isn't summoned by calling its name. Rather, ImageMagick installs links to several more specific commands: <code>convert</code>, <code>montage</code>, and <code>mogrify</code>, to name a few.</p>
<dt>-metric ae</dt><dd>applies the absolute error count metric, returning the number of different pixels. <ahref="https://www.imagemagick.org/script/command-line-options.php#metric"target="_blank">Other parameters</a> are available for image comparison.</dd>
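<p>For example, to count the pixels that differ between two images (a sketch; <code>null:</code> discards the difference image that compare would otherwise write):</p>
<p><code>compare -metric ae <em>image_01</em>.tif <em>image_02</em>.tif null:</code></p>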
<p>The flac tool is the command line tool created by the FLAC project to transcode to/from FLAC and to manipulate metadata in FLAC files. One advantage it has over other tools used to transcode into FLAC is the capability of embedding foreign metadata (such as BWF metadata). This means that it is possible to compress a BWF file into FLAC and maintain the ability to transcode back into an identical BWF, metadata and all. For a more detailed explanation, see <ahref="http://dericed.com/2013/flac-in-the-archives/"target="_blank">Dave Rice's article</a> on the topic, from which the following commands are adapted.</p>
<h3>Transcode to FLAC</h3>
<p>Use this command to transcode from WAV to FLAC while maintaining BWF metadata</p>
<dt>-i <em>input_file.ext</em></dt><dd>path and name of the input file</dd>
<dt>--best</dt><dd>sets the file for the most efficient compression (resulting in a smaller file at the expense of a slower process).</dd>
<dt>--keep-foreign-metadata</dt><dd>tells the flac tool to maintain original metadata within the FLAC file.</dd>
<dt>--preserve-modtime</dt><dd>preserves the file timestamps of the input file.</dd>
<dt>--verify</dt><dd>verifies the validity of the output file.</dd>
</dl>
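<p>Put together, the command might look like this:</p>
<p><code>flac --best --keep-foreign-metadata --preserve-modtime --verify <em>input_file</em>.wav</code></p>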
<h3>Transcode from FLAC</h3>
<p>Use this command to transcode from FLAC to reconstruct the original BWF file. The command is the same as the prior command, with the exception of substituting <code>--decode</code> for <code>--best</code> and changing the input to a <code>.flac</code> file.</p>
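<p>That is, the command might read:</p>
<p><code>flac --decode --keep-foreign-metadata --preserve-modtime --verify <em>input_file</em>.flac</code></p>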
<p>Made with ♥ at <ahref="https://wiki.curatecamp.org/index.php/Association_of_Moving_Image_Archivists_%26_Digital_Library_Federation_Hack_Day_2015"target="_blank">AMIA #AVhack15</a>! Contribute to the project via <ahref="https://github.com/amiaopensource/ffmprovisr"target="_blank">our GitHub page</a>!</p>