<p>FFmpeg is a powerful tool for manipulating audiovisual files. Unfortunately, it also has a steep learning curve, especially for users unfamiliar with a command line interface. This app helps users through the command generation process so that more people can reap the benefits of FFmpeg.</p>
<p>Each button displays helpful information about how to perform a wide variety of tasks using FFmpeg. To use this site, click on the task you would like to perform. A new window will open up with a sample command and a description of how that command works. You can copy the command and learn how it works from the breakdown of each of its flags.</p>
<p>For instructions on how to install FFmpeg on Mac, Linux, and Windows, refer to Reto Kromer’s <a href="https://avpres.net/FFmpeg/#ch1" target="_blank">installation instructions</a>.</p>
<p>For Bash and command line basics, try the <a href="https://learnpythonthehardway.org/book/appendixa.html" target="_blank">Command Line Crash Course</a>. For a little more context presented in an ffmprovisr style, try <a href="http://explainshell.com/" target="_blank">explainshell.com</a>!</p>
<p>This work is licensed under a <a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.</p>
<p><a target="_blank" href="http://dd388.github.io/crals/">Script Ahoy</a>: Community Resource for Archivists and Librarians Scripting</p>
<p><a target="_blank" href="https://datapraxis.github.io/sourcecaster/">The Sourcecaster</a>: an app that helps you use the command line to work through common challenges that come up when working with digital primary sources.</p>
<p><a target="_blank" href="https://amiaopensource.github.io/cable-bible/">Cable Bible</a>: A Guide to Cables and Connectors Used for Audiovisual Tech</p>
<span data-toggle="modal" data-target="#wav_to_mp3"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Convert WAV to MP3">WAV to MP3</button></span>
<dt>-i <i>input_file</i></dt><dd>path and name of the input file</dd>
<dt>-write_id3v1 <i>1</i></dt><dd>Write ID3v1 tag. This will add metadata to the old MP3 format, assuming you’ve embedded metadata into the WAV file.</dd>
<dt>-id3v2_version <i>3</i></dt><dd>Write ID3v2 tag. This will add metadata to a newer MP3 format, assuming you’ve embedded metadata into the WAV file.</dd>
<dt>-dither_method <i>modified_e_weighted</i></dt><dd>Dither makes sure you don’t unnecessarily truncate the dynamic range of your audio.</dd>
<dt>-out_sample_rate <i>48k</i></dt><dd>Sets the audio sampling frequency to 48 kHz. This can be omitted to use the same sampling frequency as the input.</dd>
<dt>-qscale:a <i>1</i></dt><dd>This sets the encoder to use a constant quality with a variable bitrate of between 190-250kbit/s. If you would prefer to use a constant bitrate, this could be replaced with <code>-b:a 320k</code> to set to the maximum bitrate allowed by the MP3 format. For more detailed discussion on variable vs constant bitrates see <a href="https://trac.ffmpeg.org/wiki/Encode/MP3" target="_blank">here</a>.</dd>
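<p>Assembled from the flags above, the complete command might look like this (a sketch; filenames are placeholders):</p>
<p><code>ffmpeg -i <i>input_file</i>.wav -write_id3v1 1 -id3v2_version 3 -dither_method modified_e_weighted -out_sample_rate 48k -qscale:a 1 <i>output_file</i>.mp3</code></p>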
<span data-toggle="modal" data-target="#wav_to_mp4"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Convert WAV to AAC/MP4">WAV to AAC/MP4</button></span>
<span data-toggle="modal" data-target="#to_prores"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transcode to deinterlaced Apple ProRes LT">Transcode to ProRes</button></span>
<p>This command transcodes an input file into a deinterlaced Apple ProRes 422 LT file with 16-bit linear PCM encoded audio. The file is deinterlaced using the yadif filter (Yet Another De-Interlacing Filter).</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-c:v prores</dt><dd>Tells ffmpeg to transcode the video stream into Apple ProRes 422</dd>
<dt>-profile:v <i>1</i></dt><dd>Declares profile of ProRes you want to use. The profiles are explained below:
<ul>
<li>0 = ProRes 422 (Proxy)</li>
<li>1 = ProRes 422 (LT)</li>
<li>2 = ProRes 422 (Standard)</li>
<li>3 = ProRes 422 (HQ)</li>
</ul></dd>
<dt>-vf yadif</dt><dd>Runs a deinterlacing video filter (yet another deinterlacing filter) on the new file</dd>
<dt>-c:a pcm_s16le</dt><dd>Tells ffmpeg to encode the audio stream in 16-bit linear PCM</dd>
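<p>Putting the flags above together, a full command along these lines should produce the file (a sketch; the .mov extension is a common container choice for ProRes):</p>
<p><code>ffmpeg -i <i>input_file</i> -c:v prores -profile:v 1 -vf yadif -c:a pcm_s16le <i>output_file</i>.mov</code></p>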
<p>FFmpeg comes with more than one ProRes encoder:</p>
<ul>
<li><code>prores</code> is much faster, can be used for progressive video only, and seems to be better for video according to Rec. 601 (Recommendation ITU-R BT.601).</li>
<li><code>prores_ks</code> generates a better file, can also be used for interlaced video, also allows encoding of ProRes 4444 (<code>-c:v prores_ks -profile:v 4</code>), and seems to be better for video according to Rec. 709 (Recommendation ITU-R BT.709).</li>
</ul>
<span data-toggle="modal" data-target="#transcode_h264"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transcode to an H.264 access file">Transcode to H.264</button></span>
<p>This command takes an input file and transcodes it to H.264 with an .mp4 wrapper, keeping the audio the same codec as the original. The libx264 codec defaults to a “medium” preset for compression quality and a CRF of 23. CRF stands for constant rate factor and determines the quality and file size of the resulting H.264 video. A low CRF means high quality and large file size; a high CRF means the opposite.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-c:v libx264</dt><dd>tells ffmpeg to change the video codec of the file to H.264</dd>
<dt>-pix_fmt yuv420p</dt><dd> libx264 will use a chroma subsampling scheme that is the closest match to that of the input. This can result in Y′C<sub>B</sub>C<sub>R</sub> 4:2:0, 4:2:2, or 4:4:4 chroma subsampling. QuickTime and most other non-FFmpeg based players can’t decode H.264 files that are not 4:2:0. In order to allow the video to play in all players, you can specify 4:2:0 chroma subsampling.</dd>
<dt>-preset <i>veryslow</i></dt><dd>This option tells ffmpeg to use the slowest preset possible for the best compression quality.</dd>
<dt>-crf <i>18</i></dt><dd>Specifying a lower CRF will make a larger file with better visual quality. 18 is often considered a “visually lossless” compression.</dd>
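<p>Assembled from the flags above, the complete command might read (a sketch; <code>-c:a copy</code> keeps the audio codec the same as the original, per the description above):</p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -pix_fmt yuv420p -preset veryslow -crf 18 -c:a copy <i>output_file</i>.mp4</code></p>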
<span data-toggle="modal" data-target="#dcp_to_h264"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transcode from DCP to an H.264 access file">H.264 from DCP</button></span>
<p>This will transcode MXF-wrapped video and audio files to an H.264 encoded .mp4 file. Please note this only works for unencrypted, single-reel DCPs.</p>
<span data-toggle="modal" data-target="#ntsc_to_h264"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Upscaled, pillar-boxed HD H.264 access files from SD NTSC source">NTSC to H.264</button></span>
<dt>-i</dt><dd>for input video file and audio file</dd>
<dt>-c:v libx264</dt><dd>encodes video stream with libx264 (H.264)</dd>
<dt>-filter:v</dt><dd>calls an option to apply filtering to the video stream. yadif deinterlaces; scale and pad do the math: scale resizes the video frame and pad fills the area around the 4:3 image to complete the 16:9 frame. flags=lanczos uses the Lanczos scaling algorithm, which is slower but better than the default bilinear algorithm. Finally, format specifies a pixel format of YUV 4:2:0. The very same scaling filter also downscales a bigger image size into HD.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
<span data-toggle="modal" data-target="#SD_HD"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transform 4:3 aspect ratio into 16:9 with pillarbox">4:3 to 16:9</button></span>
<dt>-filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"</dt><dd>video padding<br>This resolution independent formula is actually padding any aspect ratio into 16:9 by pillarboxing, because the video filter uses relative values for input width (iw), input height (ih), output width (ow) and output height (oh).</dd>
<dt>-c:a copy</dt><dd>re-encodes using the same audio codec<br>
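<p>For example (a sketch assembled from the flags above):</p>
<p><code>ffmpeg -i <i>input_file</i> -filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2" -c:a copy <i>output_file</i></code></p>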
<span data-toggle="modal" data-target="#SD_HD_2"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transform SD to HD with pillarbox">SD to HD</button></span>
<dt>-filter:v "colormatrix=bt601:bt709, scale=1440:1080:flags=lanczos, pad=1920:1080:240:0"</dt><dd>set colour matrix, video scaling and padding<br>Three filters are applied:
<li>The luma coefficients are modified from SD video (according to Rec. 601) to HD video (according to Rec. 709) by a colour matrix. Note that today Rec. 709 is often used also for SD, in which case you may omit this parameter.</li>
<li>The scaling filter (<code>scale=1440:1080</code>) works for both upscaling and downscaling. We use the Lanczos scaling algorithm (<code>flags=lanczos</code>), which is slower but gives better results than the default bilinear algorithm.</li>
<li>The padding filter (<code>pad=1920:1080:240:0</code>) completes the transformation from SD to HD.</li>
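<p>Assembled, the command might read (a sketch based on the three filters above):</p>
<p><code>ffmpeg -i <i>input_file</i> -filter:v "colormatrix=bt601:bt709, scale=1440:1080:flags=lanczos, pad=1920:1080:240:0" -c:a copy <i>output_file</i></code></p>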
<span data-toggle="modal" data-target="#HD_SD"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transform 16:9 aspect ratio video into 4:3 with letterbox">16:9 to 4:3</button></span>
This resolution-independent formula actually pads any aspect ratio into 4:3 by letterboxing, because the video filter uses relative values for input width (iw), input height (ih), output width (ow) and output height (oh).</dd>
<span data-toggle="modal" data-target="#create_FFV1_mkv"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transcode your file with the FFV1 Version 3 Codec in a Matroska container">Create FFV1.mkv</button></span>
<p>This will losslessly transcode your video with the FFV1 Version 3 codec in a Matroska container. In order to verify losslessness, a framemd5 of the source video is also generated. For more information on FFV1 encoding, <a href="https://trac.ffmpeg.org/wiki/Encode/FFV1" target="_blank">try the ffmpeg wiki</a>.</p>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file.</dd>
<dt>-map 0</dt><dd>Map all streams that are present in the input file. This is important as ffmpeg will map only one stream of each type (video, audio, subtitles) by default to the output video.</dd>
<dt>-c:v ffv1</dt><dd>specifies the FFV1 video codec.</dd>
<dt>-level 3</dt><dd>specifies Version 3 of the FFV1 codec.</dd>
<dt>-g 1</dt><dd>specifies intra-frame encoding, or GOP=1.</dd>
<dt>-slicecrc 1</dt><dd>Adds CRC information for each slice. This makes it possible for a decoder to detect errors in the bitstream, rather than blindly decoding a broken slice.</dd>
<dt>-slices 16</dt><dd>Each frame is split into 16 slices. 16 is a good trade-off between filesize and encoding time. <a href="http://ndsr.nycdigital.org/diving-in-head-first/" target="_blank">[more]</a></dd>
<dt><i>output_file</i>.mkv</dt><dd>path and name of the output file. Use the <code>.mkv</code> extension to save your file in a Matroska container. Optionally, choose a different extension if you want a different container, such as <code>.mov</code> or <code>.avi</code>.</dd>
<dt>-f framemd5</dt><dd> Decodes video with the framemd5 muxer in order to generate md5 checksums for every frame of your input file. This allows you to verify losslessness when compared against the framemd5s of the output file.</dd>
<dt>-an</dt><dd>ignores the audio stream when creating framemd5 (audio no)</dd>
<dt><i>framemd5_output_file</i></dt><dd>path, name and extension of the framemd5 file.</dd>
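<p>Putting the documented flags together, the whole command might look like this (a sketch; add an audio option such as <code>-c:a copy</code> if you want to carry the audio stream over unchanged):</p>
<p><code>ffmpeg -i <i>input_file</i> -map 0 -c:v ffv1 -level 3 -g 1 -slicecrc 1 -slices 16 <i>output_file</i>.mkv -f framemd5 -an <i>framemd5_output_file</i></code></p>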
<span data-toggle="modal" data-target="#change_DAR"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Change display aspect ratio without re-encoding">Change Display Aspect Ratio</button></span>
<dt>-aspect 4:3</dt><dd>Change Display Aspect Ratio to <code>4:3</code>. Experiment with other aspect ratios such as <code>16:9</code>. If used together with <code>-c:v copy</code>, it will affect the aspect ratio stored at container level, but not the aspect ratio stored in encoded frames, if it exists.</dd>
<span data-toggle="modal" data-target="#mkv_to_mp4"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Convert Matroska (MKV) to MP4">MKV to MP4</button></span>
<span data-toggle="modal" data-target="#img_to_gif"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Converts images to GIF">Images to GIF</button></span>
<dt>-pattern_type glob</dt><dd>tells ffmpeg that the following input should be interpreted as a <a href="https://en.wikipedia.org/wiki/Glob_%28programming%29" target="_blank">glob</a> (a pattern-matching function that uses * as a wildcard and finds everything that matches)</dd>
<dt>-i <i>"input_image_*.jpg"</i></dt><dd>maps all files in the directory that start with input_image_, for example input_image_001.jpg, input_image_002.jpg, input_image_003.jpg... etc.<br>
<span data-toggle="modal" data-target="#dvd_to_file"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Basic DVD to file conversion">Convert DVD to H.264</button></span>
<p>Before encoding, you’ll need to establish which of the .VOB files on the DVD or .iso contain the content that you wish to encode. Inside the VIDEO_TS directory, you will see a series of files with names like VTS_01_0.VOB, VTS_01_1.VOB, etc. Some of the .VOB files will contain menus, special features, etc, so locate the ones that contain target content by playing them back in VLC.</p>
<dt>-i concat:<i>input files</i></dt><dd>lists the input VOB files and directs ffmpeg to concatenate them. Each input file should be separated by a backslash and a pipe, like so:<br><code>-i concat:input_file1\|input_file2\|input_file3</code></dd>
<dt>-crf 18</dt><dd>sets the constant rate factor to a visually lossless value. Libx264 defaults to a <a href="https://trac.ffmpeg.org/wiki/Encode/H.264#crf" target="_blank">crf of 23</a>, considered medium quality; a smaller crf value produces a larger, higher-quality video.</dd>
<dt>-preset veryslow</dt><dd>A slower preset will result in better compression and therefore a higher-quality file. The default is <b>medium</b>; slower presets are <b>slow</b>, <b>slower</b>, and <b>veryslow</b>.</dd>
<p>Bear in mind that by default, libx264 will only encode a single video stream and a single audio stream, picking the ‘best’ of the options available. To preserve all video and audio streams, add <b>-map</b> parameters:</p>
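<p>For example (a sketch; the stream specifiers <code>0:v</code> and <code>0:a</code> select all video and all audio streams from the first input):</p>
<p><code>ffmpeg -i concat:<i>input files</i> -c:v libx264 -crf 18 -preset veryslow -map 0:v -map 0:a <i>output_file</i>.mp4</code></p>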
<span data-toggle="modal" data-target="#transcode_h265"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transcode to an H.265/HEVC MP4">Transcode to H.265/HEVC</button></span>
<p><b>Note</b>: ffmpeg must be compiled with libx265, the library of the H.265 codec, for this script to work. (Add the flag <code>--with-x265</code> if using the <code>brew install ffmpeg</code> method.)</p>
<dt>-pix_fmt yuv420p</dt><dd>libx265 will use a chroma subsampling scheme that is the closest match to that of the input. This can result in YUV 4:2:0, 4:2:2, or 4:4:4 chroma subsampling. For widest accessibility, it’s a good idea to specify 4:2:0 chroma subsampling.</dd>
<dt>-c:a copy</dt><dd>tells ffmpeg not to change the audio codec</dd>
<dt><i>output file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
<p>The libx265 encoding library defaults to a ‘medium’ preset for compression quality and a CRF of 28. CRF stands for ‘constant rate factor’ and determines the quality and file size of the resulting H.265 video. The CRF scale ranges from 0 (best quality [lossless]; largest file size) to 51 (worst quality; smallest file size).</p>
<p>A CRF of 28 for H.265 can be considered a medium setting, <a href="https://trac.ffmpeg.org/wiki/Encode/H.265#ConstantRateFactorCRF" target="_blank">corresponding</a> to a CRF of 23 in <a href="./index.html#transcode_h264">encoding H.264</a>, but should result in about half the file size.</p>
<dt>-preset <i>veryslow</i></dt><dd>This option tells ffmpeg to use the slowest preset possible for the best compression quality.</dd>
<dt>-crf <i>18</i></dt><dd>Specifying a lower CRF will make a larger file with better visual quality. 18 is often considered a ‘visually lossless’ compression.</dd>
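<p>A complete command might read (a sketch assembled from the flags above):</p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx265 -pix_fmt yuv420p -preset veryslow -crf 18 -c:a copy <i>output_file</i>.mp4</code></p>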
By default, <a href="https://ffmpeg.org/ffmpeg-filters.html#yadif-1" target="_blank">yadif</a> will output one frame for each frame. Outputting one frame for each <i>field</i> (thereby doubling the frame rate) with <code>yadif=1</code> may produce visually better results.</dd>
By default, <code>libx264</code> will use a chroma subsampling scheme that is the closest match to that of the input. This can result in Y′C<sub>B</sub>C<sub>R</sub> 4:2:0, 4:2:2, or 4:4:4 chroma subsampling. QuickTime and most other non-FFmpeg based players can’t decode H.264 files that are not 4:2:0, therefore it’s advisable to specify 4:2:0 chroma subsampling.</dd>
<p><code>"yadif,format=yuv420p"</code> is an ffmpeg <ahref="https://trac.ffmpeg.org/wiki/FilteringGuide#FiltergraphChainFilterrelationship"target="_blank">filtergraph</a>. Here the filtergraph is made up of one filter chain, which is itself made up of the two filters (separated by the comma).<br>
The enclosing quote marks are necessary when you use spaces within the filtergraph, e.g. <code>-vf "yadif, format=yuv420p"</code>, and are included above as an example of good practice.</p>
<p><b>Note</b>: ffmpeg includes several deinterlacers apart from <a href="https://ffmpeg.org/ffmpeg-filters.html#yadif-1" target="_blank">yadif</a>: <a href="https://ffmpeg.org/ffmpeg-filters.html#bwdif" target="_blank">bwdif</a>, <a href="https://ffmpeg.org/ffmpeg-filters.html#w3fdif" target="_blank">w3fdif</a>, <a href="https://ffmpeg.org/ffmpeg-filters.html#kerndeint" target="_blank">kerndeint</a>, and <a href="https://ffmpeg.org/ffmpeg-filters.html#nnedi" target="_blank">nnedi</a>.</p>
<dt>-vf colormatrix=<i>src</i>:<i>dst</i></dt><dd>the video filter <b>colormatrix</b> will be applied, with the given source and destination colourspaces.<br>
Accepted values include <code>bt601</code> (Rec.601), <code>smpte170m</code> (Rec.601, 525-line/<a href="https://en.wikipedia.org/wiki/NTSC#NTSC-M" target="_blank">NTSC</a> version), <code>bt470bg</code> (Rec.601, 625-line/<a href="https://en.wikipedia.org/wiki/PAL#PAL-B.2FG.2FD.2FK.2FI" target="_blank">PAL</a> version), <code>bt709</code> (Rec.709), and <code>bt2020</code> (Rec.2020).<br>
<p><b>Note</b>: Converting between colourspaces with ffmpeg can be done via either the <b>colormatrix</b> or <b>colorspace</b> filters, with colorspace allowing finer control (individual setting of colourspace, transfer characteristics, primaries, range, pixel format, etc). See <a href="https://trac.ffmpeg.org/wiki/colorspace" target="_blank">this</a> entry on the ffmpeg wiki, and the ffmpeg documentation for <a href="http://ffmpeg.org/ffmpeg-filters.html#colormatrix" target="_blank">colormatrix</a> and <a href="http://ffmpeg.org/ffmpeg-filters.html#colorspace" target="_blank">colorspace</a>.</p>
<dt>-vf colormatrix=<i>src</i>:<i>dst</i></dt><dd>the video filter <b>colormatrix</b> will be applied, with the given source and destination colourspaces.</dd>
<dt>-color_primaries <i>val</i></dt><dd>tags video with the given colour primaries.<br>
<img src="./img/colourspace_metadata_mediainfo.png" alt="MediaInfo screenshots of colourspace metadata"><br>
<p><span class="beware">⚠</span> Using this command it is possible to add Rec.709 tags to a file that is actually Rec.601 (etc), so apply with caution!</p>
<p>These commands are relevant for H.264 and H.265 videos, encoded with <code>libx264</code> and <code>libx265</code> respectively.</p>
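<p>A tagging command might look like this (a sketch; here converting from Rec.601 to Rec.709 and tagging the output accordingly):</p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf colormatrix=bt601:bt709 -color_primaries bt709 -color_trc bt709 -colorspace bt709 <i>output_file</i></code></p>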
<p><b>Note</b>: If you wish to embed colourspace metadata <i>without</i> changing to another colourspace, omit <code>-vf colormatrix=src:dst</code>. However, since it is <code>libx264</code>/<code>libx265</code> that writes the metadata, it’s not possible to add these tags without re-encoding the video stream.</p>
<p>For all possible values for <code>-color_primaries</code>, <code>-color_trc</code>, and <code>-colorspace</code>, see the ffmpeg documentation on <a href="./index.html#Codec-Options" target="_blank">codec options</a>.</p>
<pid="fn1"class="footnote">1. Out of step with the regular pattern, <code>-color_trc</code> doesn’t accept <code>bt470bg</code>; it is instead here referred to directly as gamma.<br>
In the Rec.601 standard, 525-line/NTSC and 625-line/PAL video have assumed gammas of 2.2 and 2.8 respectively. <ahref="#ref1"title="Jump back.">↩</a></p>
<p>The inverse telecine procedure reverses the <a href="https://en.wikipedia.org/wiki/Three-two_pull_down">3:2 pull down</a> process, restoring 29.97fps interlaced video to the 24fps frame rate of the original film source.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-c:v libx264</dt><dd>encode video as H.264</dd>
<dt>-vf "fieldmatch,yadif,decimate"</dt><dd>applies these three video filters to the input video.<br>
<ahref="https://ffmpeg.org/ffmpeg-filters.html#fieldmatch">Fieldmatch</a> is a field matching filter for inverse telecine - it reconstructs the progressive frames from a telecined stream.<br>
<ahref="https://ffmpeg.org/ffmpeg-filters.html#yadif-1">Yadif</a> (‘yet another deinterlacing filter’) deinterlaces the video. (Note that ffmpeg also includes several other deinterlacers).<br>
<dt><i>output file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
<p><code>"fieldmatch,yadif,decimate"</code> is an ffmpeg <ahref="https://trac.ffmpeg.org/wiki/FilteringGuide#FiltergraphChainFilterrelationship"target="_blank">filtergraph</a>. Here the filtergraph is made up of one filter chain, which is itself made up of the three filters (separated by commas).<br>
The enclosing quote marks are necessary when you use spaces within the filtergraph, e.g. <code>-vf "fieldmatch, yadif, decimate"</code>, and are included above as an example of good practice.</p>
<p>Note that if applying an inverse telecine procedure to a 29.97i file, the output framerate will actually be 23.976fps.</p>
<p>This command can also be used to restore other framerates.</p>
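<p>The full command reads along these lines (a sketch assembled from the flags above):</p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf "fieldmatch,yadif,decimate" <i>output_file</i></code></p>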
<span data-toggle="modal" data-target="#astats"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Play a graphical output showing decibel levels of an input file">Graphic for audio</button></span>
<dt>-f lavfi</dt><dd>tells ffmpeg to use the <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">Libavfilter input virtual device</a></dd>
<dt>adrawgraph=lavfi.astats.Overall.Peak_level:max=0:min=-30.0</dt><dd>draws a graph using the overall peak volume calculated by the astats filter. It sets the max for the graph to 0 (dB) and the minimum to -30 (dB). For more options on data points that can be graphed see the <a href="https://ffmpeg.org/ffmpeg-filters.html#astats-1" target="_blank">ffmpeg astats documentation</a></dd>
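<p>A full playback command might look like this (a sketch; the <code>amovie</code> source and the <code>astats=metadata=1:reset=1</code> step, which exposes the statistics as frame metadata for the graph to read, are assumptions not covered in the breakdown above):</p>
<p><code>ffplay -f lavfi "amovie=<i>input_file</i>, astats=metadata=1:reset=1, adrawgraph=lavfi.astats.Overall.Peak_level:max=0:min=-30.0"</code></p>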
<span data-toggle="modal" data-target="#brng"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Identify pixels out of broadcast range">Broadcast Range</button></span>
<dt>-f lavfi</dt><dd>tells ffmpeg to use the <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">Libavfilter input virtual device</a></dd>
<span data-toggle="modal" data-target="#ocr_on_top"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Play video with OCR on top">Shows OCR</button></span>
<p>Note: ffmpeg must be compiled with the tesseract library for this script to work (<code>--with-tesseract</code> if using the <code>brew install ffmpeg</code> method).</p>
<dt>text=%{metadata\\\:lavfi.ocr.text}</dt><dd>tells ffplay what text to use when playing. In this case, calls for metadata that lives in the lavfi.ocr.text library</dd>
<span data-toggle="modal" data-target="#ffprobe_ocr"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Export OCR from video to screen">Exports OCR</button></span>
<p>Note: ffmpeg must be compiled with the tesseract library for this script to work (<code>--with-tesseract</code> if using the <code>brew install ffmpeg</code> method).</p>
<dt>-f lavfi</dt><dd>tells ffmpeg to use the <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">Libavfilter input virtual device</a></dd>
<span data-toggle="modal" data-target="#vectorscope"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Vectorscope from video to screen">Vectorscope</button></span>
<dt>[v]vectorscope=b=0.7:m=color3:g=green[v]</dt><dd>asserts usage of the vectorscope filter and sets a light background opacity (b, alias for bgopacity), sets a background color style (m, alias for mode), and graticule color (g, alias for graticule)</dd>
<span data-toggle="modal" data-target="#tempdif"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Play two videos side by side while applying the temporal difference filter to both">Side by Side Videos/Temporal Difference Filter</button></span>
<dt>[0:v:0]tblend=all_mode=difference128[a]</dt><dd>Applies the tblend filter (with the settings all_mode and difference128) to the first video stream from the first input and assigns the result to the output [a]</dd>
<dt>[1:v:0]tblend=all_mode=difference128[b]</dt><dd>Applies the tblend filter (with the settings all_mode and difference128) to the first video stream from the second input and assigns the result to the output [b]</dd>
<dt>[a][b]hstack[out]</dt><dd>Takes the outputs from the previous steps ([a] and [b]) and uses the hstack (horizontal stack) filter on them to create the side by side output. This output is then named [out]</dd>
<dt>-map [out]</dt><dd>Maps the output of the filter chain</dd>
<dt>-f nut</dt><dd>Sets the format for the output video stream to <a href="https://www.ffmpeg.org/ffmpeg-formats.html#nut" target="_blank">Nut</a></dd>
<dt>-c:v rawvideo</dt><dd>Sets the video codec of the output video stream to raw video</dd>
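<p>Combined, the command might read as follows (a sketch; piping the raw stream to ffplay is one way to view the result):</p>
<p><code>ffmpeg -i <i>input01</i> -i <i>input02</i> -filter_complex "[0:v:0]tblend=all_mode=difference128[a];[1:v:0]tblend=all_mode=difference128[b];[a][b]hstack[out]" -map [out] -f nut -c:v rawvideo - | ffplay -</code></p>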
<span data-toggle="modal" data-target="#create_gif"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Create a GIF from a video">Create GIF</button></span>
<p>The first command will use the palettegen filter to create a custom palette, then the second command will create the GIF with the paletteuse filter. The result is a high quality GIF.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-ss <i>HH:MM:SS</i></dt><dd>starting point of the gif. If a plain numerical value is used it will be interpreted as seconds</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-filter_complex "fps=<i>frame rate</i>,scale=<i>width</i>:<i>height</i>,palettegen"</dt><dd>a complex filtergraph using the fps filter to set frame rate, the scale filter to resize, and the palettegen filter to generate the palette. The scale value of <i>-1</i> preserves the aspect ratio</dd>
<dt>-t <i>3</i></dt><dd>duration in seconds (here 3; can be specified also with a full timestamp, i.e. here 00:00:03)</dd>
<dt>-loop <i>6</i></dt><dd>number of times to loop the gif. A value of <i>-1</i> will disable looping. Omitting <i>-loop</i> will use the default which will loop infinitely</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
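<p>The pair of commands might read like this (a sketch; the frame rate, size and palette filename are example values):</p>
<p><code>ffmpeg -ss HH:MM:SS -i <i>input_file</i> -filter_complex "fps=10,scale=500:-1,palettegen" -t 3 palette.png</code></p>
<p><code>ffmpeg -ss HH:MM:SS -i <i>input_file</i> -i palette.png -filter_complex "[0:v]fps=10,scale=500:-1[v];[v][1:v]paletteuse" -t 3 -loop 6 <i>output_file</i>.gif</code></p>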
<p>This is a quick and easy method. Dithering is more apparent than the above method using the palette* filters, but the file size will be smaller. Perfect for that “legacy” GIF look.</p>
<span data-toggle="modal" data-target="#one_thumbnail"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Export one thumbnail per video file">One thumbnail</button></span>
<span data-toggle="modal" data-target="#multi_thumbnail"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Export many thumbnails per video file">Many thumbnails</button></span>
<dt>-vf fps=1/60</dt><dd>Creates a filtergraph to use for the streams. The rest of the command identifies filtering by frames per second, and sets the frames per second at 1/60 (which is one per minute). Omitting this will output all frames from the video.</dd>
<dt><i>output file</i></dt><dd>path, name and extension of the output file. In the example out%d.png, %d is a placeholder that adds a number (d is for digit) and increments with each frame (out1.png, out2.png, out3.png…). You may also choose a pattern like out%04d.png, which gives 4 digits with leading 0 (out0001.png, out0002.png, out0003.png, …).</dd>
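<p>For example (a sketch producing one PNG per minute):</p>
<p><code>ffmpeg -i <i>input_file</i> -vf fps=1/60 out%d.png</code></p>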
<span data-toggle="modal" data-target="#excerpt_from_start"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Create an excerpt, starting from the beginning of the file">Excerpt from beginning</button></span>
<p>This command captures a certain portion of a video file, starting from the beginning and continuing for the amount of time (in seconds) specified in the script. This can be used to create a preview file, or to remove unwanted content from the end of the file. To be more specific, use timecode, such as 00:00:05.</p>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-t <i>5</i></dt><dd>Tells ffmpeg to stop copying from the input file after a certain time, and specifies the number of seconds after which to stop copying. In this case, 5 seconds is specified.</dd>
<dt>-c copy</dt><dd>use stream copy mode to re-mux instead of re-encode</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
<i>Note:</i> watch out when using <code>-ss</code> with <code>-c copy</code> if the source is encoded with an interframe codec (e.g., H.264). Since ffmpeg must split on i-frames, it will seek to the nearest i-frame to begin the stream copy.</dd>
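<p>The full command is simply (a sketch):</p>
<p><code>ffmpeg -i <i>input_file</i> -t 5 -c copy <i>output_file</i></code></p>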
<span data-toggle="modal" data-target="#excerpt_to_end"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Create a new video file with the first five seconds trimmed off the original">Excerpt to end</button></span>
<p>This command copies a video file starting from a specified time, removing the first few seconds from the output. This can be used to create an excerpt, or remove unwanted content from the beginning of a video file.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-ss <i>5</i></dt><dd>Tells ffmpeg what timecode in the file to look for to start copying, and specifies the number of seconds into the video that ffmpeg should start copying. To be more specific, you can use timecode such as 00:00:05.</dd>
<dt>-c copy</dt><dd>use stream copy mode to re-mux instead of re-encode</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
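<p>Assembled (a sketch):</p>
<p><code>ffmpeg -i <i>input_file</i> -ss 5 -c copy <i>output_file</i></code></p>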
<span data-toggle="modal" data-target="#excerpt_from_end"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Create a new video file with the final five seconds of the original">Excerpt from end</button></span>
<p>This command copies a video file starting from a specified time before the end of the file, removing everything before that point from the output. This can be used to create an excerpt, or to extract content from the end of a video file (e.g. the closing credits).</p>
<dt>-sseof <i>-5</i></dt><dd>This parameter must stay before the input file. It tells ffmpeg what timecode in the file to look for to start copying, and specifies the number of seconds from the end of the video that ffmpeg should start copying. The end of the file has index 0 and the minus sign is needed to reference earlier portions. To be more specific, you can use timecode such as -00:00:05. Note that in most file formats it is not possible to seek exactly, so ffmpeg will seek to the closest point before.</dd>
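<p>Since <code>-sseof</code> must precede the input, the command reads (a sketch; <code>-c copy</code> re-muxes without re-encoding, as in the other excerpt examples):</p>
<p><code>ffmpeg -sseof -5 -i <i>input_file</i> -c copy <i>output_file</i></code></p>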
<span data-toggle="modal" data-target="#create_iso"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Create ISO files for DVD access">Create ISO</button></span>
<p>Create an ISO file that can be used to burn a DVD. Please note, you will have to install dvdauthor. To install dvdauthor using Homebrew, run: <code>brew install dvdauthor</code></p>
<dd>This calls the drawtext filter with the following options:
<dl>
<dt>w=in_w</dt><dd>Width is set to the input width. Shorthand for this command would be w=iw</dd>
<dt>h=7</dt><dd>Height is set to 7 pixels.</dd>
<dt>y=ih-h</dt><dd>Y represents the offset, and ih-h sets it to the input height minus the height declared in the previous parameter, setting the box at the bottom of the frame.</dd>
<span data-toggle="modal" data-target="#append_mp3"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Generate two access MP3s from input. One with added audio (such as copyright notice) and one unmodified.">Append notice to access MP3</button></span>
<p>This script allows you to generate two derivative audio files from a master while appending audio from a separate file (for example a copyright or institutional notice) to one of them.</p>
<dt>-filter_complex</dt><dd>enables the complex filtering to manage splitting the input to two audio streams</dd>
<dt>[0:a:0]asplit=2[a][b];</dt><dd><code>asplit</code> allows audio streams to be split up for separate manipulation. This command splits the audio from the first input (the master file) into two streams "a" and "b"</dd>
<dt>[b]afifo[bb];</dt><dd>this buffers the stream "b" to help prevent dropped samples and renames stream to "bb"</dd>
<dt>[1:a:0][bb]concat=n=2:v=0:a=1[concatout]</dt><dd><code>concat</code> is used to join files. <code>n=2</code> tells the filter there are two inputs. <code>v=0:a=1</code> tells the filter there are 0 video outputs and 1 audio output. This command appends the audio from the second input to the beginning of stream "bb" and names the output "concatout"</dd>
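<p>A full command might look like this (a sketch; the MP3 encoder choice <code>libmp3lame</code> on the two outputs is an assumption not covered in the breakdown above):</p>
<p><code>ffmpeg -i <i>input_file</i> -i <i>input_file_to_append</i> -filter_complex "[0:a:0]asplit=2[a][b];[b]afifo[bb];[1:a:0][bb]concat=n=2:v=0:a=1[concatout]" -map "[a]" -c:a libmp3lame <i>output_file</i>.mp3 -map "[concatout]" -c:a libmp3lame <i>output_file_appended</i>.mp3</code></p>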
<p>Bash scripts are plain text files saved with a .sh extension. This entry explains how they work with the example of a bash script named “Rewrap-MXF.sh”, which rewraps .MXF files in a given directory to .MOV files.</p>
<dt>for file in *.MXF</dt><dd>starts the loop, and states what the input files will be. Here, the ffmpeg command within the loop will be applied to all files with an extension of .MXF.<br>
Per Bash syntax, within the command the variable is referred to by <b>“$file”</b>. The dollar sign is used to reference the variable ‘file’, and the enclosing quotation marks prevent reinterpretation of any special characters that may occur within the filename, ensuring that the original filename is retained.</dd>
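<p>The loop as a whole might read (a sketch; <code>-map 0 -c copy</code> rewraps all streams without re-encoding, and <code>${file%.MXF}.mov</code> swaps the extension):</p>
<p><code>for file in *.MXF; do ffmpeg -i "$file" -map 0 -c copy "${file%.MXF}.mov"; done</code></p>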
<p><b>Note</b>: the shell script (.sh file) and all .MXF files to be processed must be contained within the same directory, and the script must be run from that directory.<br>
<p>As of Windows 10, it is possible to run Bash via <a href="https://msdn.microsoft.com/en-us/commandline/wsl/about" target="_blank">Bash on Ubuntu on Windows</a>, allowing you to use <a href="index.html#batch_processing_bash" target="_blank">bash scripting</a>. To enable Bash on Windows, see <a href="https://msdn.microsoft.com/en-us/commandline/wsl/install_guide" target="_blank">these instructions</a>.</p>
<p>On Windows, the primary native command line programme is <b>PowerShell</b>. PowerShell scripts are plain text files saved with a .ps1 extension. This entry explains how they work with the example of a PowerShell script named “rewrap-mp4.ps1”, which rewraps .mp4 files in a given directory to .mkv files.</p>
<dt>foreach ($file in $inputfiles)</dt><dd>Creates a loop and states the subsequent code block will be applied to each file listed in <code>$inputfiles</code>.<br>
<dt>$output = [io.path]::ChangeExtension($file, '.mkv')</dt><dd>Sets up the output file: it will be located in the current folder and keep the same filename, but will have an .mkv extension instead of .mp4.</dd>
<dt>ffmpeg -i $file</dt><dd>Carry out the following ffmpeg command for each input file.<br>
<b>Note</b>: To call ffmpeg here as just ‘ffmpeg’ (rather than entering the full path to ffmpeg.exe), you must make sure that it’s correctly configured. See <a href="http://adaptivesamples.com/how-to-install-ffmpeg-on-windows/" target="_blank">this article</a>, especially the section ‘Add to Path’.</dd>
<dt>-c copy</dt><dd>enable stream copy (no re-encode)</dd>
<dt>$output</dt><dd>The output file is set to the value of the <code>$output</code> variable declared above: i.e., the current file name with an .mkv extension.</dd>
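<p>Assembled, the whole script might read (a sketch; the <code>$inputfiles = ls *.mp4</code> line is an assumption about how the file list is gathered):</p>
<p><code>$inputfiles = ls *.mp4<br>
foreach ($file in $inputfiles) {<br>
&nbsp;&nbsp;$output = [io.path]::ChangeExtension($file, '.mkv')<br>
&nbsp;&nbsp;ffmpeg -i $file -c copy $output<br>
}</code></p>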
<p><b>Note</b>: the PowerShell script (.ps1 file) and all .mp4 files to be rewrapped must be contained within the same directory, and the script must be run from that directory.</p>
<span data-toggle="modal" data-target="#create_frame_md5s"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Create an MD5 checksum per video frame">Create MD5 checksums</button></span>
<span data-toggle="modal" data-target="#pull_specs"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Pull specs from video file">Pull specs</button></span>
<p>This command extracts technical metadata from a video file and displays it as XML.</p>
<p>ffmpeg documentation on ffprobe (full list of flags, commands: <a href="https://www.ffmpeg.org/ffprobe.html" target="_blank">www.ffmpeg.org/ffprobe.html</a>)</p>
<dl>
<dt>ffprobe</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
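<p>For example (a sketch; <code>-show_format</code>, <code>-show_streams</code> and the XML writer give a full technical readout):</p>
<p><code>ffprobe -i <i>input_file</i> -show_format -show_streams -print_format xml</code></p>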
<span data-toggle="modal" data-target="#check_FFV1_fixity"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Decode your video and verify the internal CRC checksums">Check FFV1 fixity</button></span>
<p>This decodes your video and displays any CRC checksum mismatches. These errors will display in your terminal like this: <code>[ffv1 @ 0x1b04660] CRC mismatch 350FBD8A!at 0.272000 seconds</code></p>
<dt>-report</dt><dd>Dump full command line and console output to a file named <i>ffmpeg-YYYYMMDD-HHMMSS.log</i> in the current directory. It also implies <code>-loglevel verbose</code>.</dd>
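<p>Decoding to the null muxer exercises the whole file without writing any output, so a check might read (a sketch):</p>
<p><code>ffmpeg -report -i <i>input_file</i> -f null -</code></p>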
<span data-toggle="modal" data-target="#check_interlacement"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Identify interlacement patterns in a video file">Check interlacement</button></span>
<dt>-filter:v idet</dt><dd>This calls the <a href="https://ffmpeg.org/ffmpeg-filters.html#idet" target="_blank">idet (detect video interlacing type) filter</a>.</dd>
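<p>For example (a sketch; <code>-f null -</code> discards the decoded output while the filter prints its statistics):</p>
<p><code>ffmpeg -i <i>input_file</i> -filter:v idet -f null -</code></p>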
<span data-toggle="modal" data-target="#qctools"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Create a QCTools report for a video file with audio track">QCTools report (with audio)</button></span>
<p>This will create an XML report for use in <a href="https://github.com/bavc/qctools" target="_blank">QCTools</a> for a video file with one video track and one audio track. See also the <a href="https://github.com/bavc/qctools/blob/master/docs/data_format.md#creating-a-qctools-document" target="_blank">QCTools documentation</a>.</p>
<dt>-f lavfi</dt><dd>tells ffprobe to use the Libavfilter input virtual device <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">[more]</a></dd>
<dd>This very large lump of commands declares the input file and passes in a request for all potential data signal information for a file with one video and one audio track</dd>
<dt>-show_frames</dt><dd>asks for information about each frame and subtitle contained in the input multimedia stream</dd>
<dt>-show_versions</dt><dd>asks for information related to program and library versions</dd>
<dt><i>input_file</i>.qctools.xml.gz</dt><dd>names the zipped data output file, which can be named anything, but needs the extension qctools.xml.gz for compatibility with QCTools</dd>
<span data-toggle="modal" data-target="#qctools_no_audio"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Create a QCTools report for a video file with no audio track">QCTools report (no audio)</button></span>
<p>This will create an XML report for use in <a href="https://github.com/bavc/qctools" target="_blank">QCTools</a> for a video file with one video track and NO audio track. See also the <a href="https://github.com/bavc/qctools/blob/master/docs/data_format.md#creating-a-qctools-document" target="_blank">QCTools documentation</a>.</p>
<dt>-f lavfi</dt><dd>tells ffprobe to use the Libavfilter input virtual device <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">[more]</a></dd>
<dd>This very large lump of commands declares the input file and passes in a request for all potential data signal information for a file with one video track and no audio track</dd>
<dt>-show_frames</dt><dd>asks for information about each frame and subtitle contained in the input multimedia stream</dd>
<dt>-show_versions</dt><dd>asks for information related to program and library versions</dd>
<dt><i>input_file</i>.qctools.xml.gz</dt><dd>names the zipped data output file, which can be named anything, but needs the extension qctools.xml.gz for compatibility with QCTools</dd>
<span data-toggle="modal" data-target="#mandelbrot"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Make a mandelbrot test pattern video">Mandelbrot</button></span>
<dt>-f lavfi</dt><dd>tells ffmpeg to use the Libavfilter input virtual device <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">[more]</a></dd>
<dt>-i mandelbrot=size=1280x720:rate=25</dt><dd>asks for the mandelbrot test filter as input. Adjusting the <code>size</code> and <code>rate</code> options allows you to choose a specific frame size and framerate. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc" target="_blank">[more]</a></dd>
<dt>-c:v <i>libx264</i></dt><dd>transcodes video from rawvideo to H.264. Set <code>-pix_fmt</code> to <code>yuv420p</code> for greater H.264 compatibility with media players.</dd>
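<p>For example (a sketch; <code>-t 10</code> is an arbitrary ten-second duration):</p>
<p><code>ffmpeg -f lavfi -i mandelbrot=size=1280x720:rate=25 -c:v libx264 -pix_fmt yuv420p -t 10 <i>output_file</i></code></p>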
<span data-toggle="modal" data-target="#smpte_bars"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Make a SMPTE bars test pattern video">SMPTE bars</button></span>
<dt>-f lavfi</dt><dd>tells ffmpeg to use the Libavfilter input virtual device <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">[more]</a></dd>
<dt>-i smptebars=size=720x576:rate=25</dt><dd>asks for the smptebars test filter as input. Adjusting the <code>size</code> and <code>rate</code> options allows you to choose a specific frame size and framerate. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc" target="_blank">[more]</a></dd>
<dt>-c:v <i>prores</i></dt><dd>transcodes video from rawvideo to Apple ProRes 4:2:2.</dd>
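<p>For example (a sketch; <code>-t 10</code> is an arbitrary ten-second duration, and .mov is a common container choice for ProRes):</p>
<p><code>ffmpeg -f lavfi -i smptebars=size=720x576:rate=25 -c:v prores -t 10 <i>output_file</i>.mov</code></p>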
<span data-toggle="modal" data-target="#test"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Make a test pattern video">Test pattern</button></span>
<dt>-f lavfi</dt><dd>tells ffmpeg to use the <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">libavfilter</a> input virtual device</dd>
<dt>-i testsrc=size=720x576:rate=25</dt><dd>asks for the testsrc filter pattern as input. Adjusting the <code>size</code> and <code>rate</code> options allows you to choose a specific frame size and framerate. <br>
The different test patterns that can be generated are listed <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc_002c-testsrc2_002c-yuvtestsrc" target="_blank">here</a>.</dd>
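<p>For example (a sketch; the codec and the ten-second duration are example choices):</p>
<p><code>ffmpeg -f lavfi -i testsrc=size=720x576:rate=25 -c:v v210 -t 10 <i>output_file</i></code></p>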
<span data-toggle="modal" data-target="#play_hd_smpte"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Test an HD video projector by playing the SMPTE colour bars pattern">Play HD SMPTE bars</button></span>
<dt>-f lavfi</dt><dd>tells ffmpeg to use the libavfilter input virtual device <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">[more]</a></dd>
<dt>-i smptehdbars=size=1920x1080</dt><dd>asks for the smptehdbars filter pattern as input and sets the HD resolution. This generates a colour bars pattern, based on the SMPTE RP 219–2002. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc" target="_blank">[more]</a></dd>
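<p>For example (a sketch):</p>
<p><code>ffplay -f lavfi -i smptehdbars=size=1920x1080</code></p>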
<span data-toggle="modal" data-target="#play_vga_smpte"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Test a VGA video projector by playing the SMPTE colour bars pattern">Play VGA SMPTE bars</button></span>
<dt>-f lavfi</dt><dd>tells ffmpeg to use the libavfilter input virtual device <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">[more]</a></dd>
<dt>-i smptebars=size=640x480</dt><dd>asks for the smptebars filter pattern as input and sets the VGA (SD) resolution. This generates a colour bars pattern, based on the SMPTE Engineering Guideline EG 1–1990. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc" target="_blank">[more]</a></dd>
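<p>For example (a sketch):</p>
<p><code>ffplay -f lavfi -i smptebars=size=640x480</code></p>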
<span data-toggle="modal" data-target="#broken_dv"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Make a broken DV file from a perfectly good one">Broken DV</button></span>
<dt>-bsf noise</dt><dd>sets bitstream filters for all to 'noise'. The <a href="https://www.ffmpeg.org/ffmpeg-bitstream-filters.html#noise">noise filter</a> intentionally damages the contents of packets without damaging the container</dd>
<span data-toggle="modal" data-target="#sine_wave"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Generate a test audio file playing a sine wave">Sine wave</button></span>
<dt>-f lavfi</dt><dd>tells ffmpeg to use the libavfilter input virtual device <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">[more]</a></dd>
<dt>-i "sine=frequency=1000:sample_rate=48000:duration=5"</dt><dd>Sets the signal to 1000 Hz, sampling at 48 kHz, with a duration of 5 seconds</dd>
<dt>-c:a pcm_s16le</dt><dd>encodes the audio codec in <code>pcm_s16le</code> (the default encoding for wav files). pcm represents pulse-code modulation format (raw bytes), <code>16</code> means 16 bits per sample, and <code>le</code> means "little endian"</dd>
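<p>Assembled (a sketch; the .wav extension matches the PCM encoding):</p>
<p><code>ffmpeg -f lavfi -i "sine=frequency=1000:sample_rate=48000:duration=5" -c:a pcm_s16le <i>output_file</i>.wav</code></p>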
<span data-toggle="modal" data-target="#join_files"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Join (concatenate) two or more files into a single file">Join files together</button></span>
<p>This command takes two or more files of the same file type and joins them together to make a single file. All that the program needs is a text file with a list specifying the files that should be joined. However, it only works properly if the files to be combined have the exact same codec and technical specifications. Be careful, ffmpeg may appear to have successfully joined two video files with different codecs, but may only bring over the audio from the second file or have other weird behaviors. Don’t use this command for joining files with different codecs and technical specs and always preview your resulting video file!</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-f concat</dt><dd>forces ffmpeg to concatenate the files and to keep the same file format</dd>
<dt>-i <i>mylist.txt</i></dt><dd>path, name and extension of the input file. Per the <a href="https://www.ffmpeg.org/ffmpeg-formats.html#Options" target="_blank">ffmpeg documentation</a>, it is preferable to specify relative rather than absolute file paths, as allowing absolute file paths may pose a security risk.<br>
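For example, the list file could contain (hypothetical filenames):<br>
<code>file './first_file.wav'<br>
file './second_file.wav'</code><br>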
In the above, <b>file</b> is simply the word "file". Straight apostrophes ('like this') rather than curved quotation marks (‘like this’) must be used to enclose the file paths.</dd>
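<p>The command itself might then read (a sketch; <code>-c copy</code> re-muxes without re-encoding):</p>
<p><code>ffmpeg -f concat -i mylist.txt -c copy <i>output_file</i></code></p>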
<span data-toggle="modal" data-target="#play_im_seq"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Play an image sequence directly as moving images">Play an image sequence</button></span>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file<br>
This must match the naming convention used! The pattern %06d matches six-digit-long numbers, possibly with leading zeroes. This allows the full sequence to be read in ascending order, one image after the other.<br>
The extension for TIFF files is .tif or maybe .tiff; the extension for DPX files is .dpx (or even .cin for old files). Screenshots are often in .png format.</dd>
<p>If <code>-framerate</code> is omitted, the playback speed depends on the images’ file sizes and on the computer’s processing power. It may be rather slow for large image files.</p>
<p>You can navigate durationally by clicking within the playback window. Clicking towards the left-hand side of the playback window takes you towards the beginning of the playback sequence; clicking towards the right takes you towards the end of the sequence.</p>
<span data-toggle="modal" data-target="#split_audio_video"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Create separate audio and video tracks from an audiovisual file">Split audio and video tracks</button></span>
<p>This command splits the original input file into a video and audio stream. The -map command identifies which streams are mapped to which file. To ensure that you’re mapping the right streams to the right file, run ffprobe before writing the script to identify which streams are desired.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-map <i>0:v:0</i></dt><dd>grabs the first video stream and maps it into:</dd>
<dt><i>video_output_file</i></dt><dd>path, name and extension of the video output file</dd>
<dt>-map <i>0:a:0</i></dt><dd>grabs the first audio stream and maps it into:</dd>
<dt><i>audio_output_file</i></dt><dd>path, name and extension of the audio output file</dd>
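<p>Assembled (a sketch):</p>
<p><code>ffmpeg -i <i>input_file</i> -map 0:v:0 <i>video_output_file</i> -map 0:a:0 <i>audio_output_file</i></code></p>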
<p>This command combines two audio tracks present in a video file into one stream. It can be useful in situations where a downstream process, like YouTube’s automatic captioning, expects a single audio track. To ensure that you’re mapping the right audio tracks, run ffprobe before writing the script to identify which tracks are desired. More than two audio streams can be combined by extending the pattern present in the -filter_complex option.</p>
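<p>One way to do this is with the amerge filter (a sketch; <code>-c:v copy</code> keeps the video untouched, and the stream labels assume the file has two audio tracks):</p>
<p><code>ffmpeg -i <i>input_file</i> -filter_complex "[0:a:0][0:a:1]amerge[out]" -map 0:v -map "[out]" -c:v copy <i>output_file</i></code></p>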
<span data-toggle="modal" data-target="#extract_audio"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Extract audio without loss from an AV file">Extract audio</button></span>
<span data-toggle="modal" data-target="#flip_image"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Flip the image">Flip image</button></span>
<dt>-filter:v "hflip,vflip"</dt><dd>flips the image horizontally and vertically<br>By using only one of the parameters hflip or vflip for filtering the image is flipped on that axis only. The quote marks are not mandatory.</dd>
<dt>-c:a copy</dt><dd>copies the audio stream without re-encoding it</dd>
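<p>For example (a sketch):</p>
<p><code>ffmpeg -i <i>input_file</i> -filter:v "hflip,vflip" -c:a copy <i>output_file</i></code></p>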
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-filter_complex "[0:v]setpts=<i>input_fps</i>/<i>output_fps</i>*PTS[v]; [0:a]atempo=<i>output_fps</i>/<i>input_fps</i>[a]"</dt><dd>A complex filter is needed here, in order to handle video stream and the audio stream separately. The <code>setpts</code> video filter modifies the PTS (presentation time stamp) of the video stream, and the <code>atempo</code> audio filter modifies the speed of the audio stream while keeping the same sound pitch. Note that the parameter’s order for the image and for the sound are inverted:
<ul>
<li>In the video filter <code>setpts</code> the numerator <code>input_fps</code> sets the input speed and the denominator <code>output_fps</code> sets the output speed; both values are given in frames per second.</li>
<li>In the sound filter <code>atempo</code> the numerator <code>output_fps</code> sets the output speed and the denominator <code>input_fps</code> sets the input speed; both values are given in frames per second.</li>
</ul>
The different filters in a complex filter can be divided either by comma or semicolon. The quotation marks allow you to insert spaces between the filters for readability.</dd>
<dt>-map "[v]"</dt><dd>maps the video stream and:</dd>
<dt>-map "[a]"</dt><dd>maps the audio stream together into:</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
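<p>Assembled (a sketch; substitute your actual input and output frame rates):</p>
<p><code>ffmpeg -i <i>input_file</i> -filter_complex "[0:v]setpts=<i>input_fps</i>/<i>output_fps</i>*PTS[v]; [0:a]atempo=<i>output_fps</i>/<i>input_fps</i>[a]" -map "[v]" -map "[a]" <i>output_file</i></code></p>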
<dt>fontsize=<i>font_size</i></dt><dd> Set font size. <code>35</code> is a good starting point for SD. Ideally this value is proportional to video size, for example use ffprobe to acquire video height and divide by 14.</dd>
<dt>text=<i>watermark_text</i></dt><dd> Set the content of your watermark text. For example: <code>text='FFMPROVISR EXAMPLE TEXT'</code></dd>
<dt>fontcolor=<i>font_colour</i></dt><dd> Set colour of font. Can be a text string such as <code>fontcolor=white</code> or a hexadecimal value such as <code>fontcolor=0xFFFFFF</code></dd>
<dt>alpha=0.4</dt><dd> Set transparency value.</dd>
<dt>x=(w-text_w)/2:y=(h-text_h)/2</dt><dd> Sets <i>x</i> and <i>y</i> coordinates for the watermark. These relative values will centre your watermark regardless of video dimensions.</dd>
</dl>
Note: <code>-vf</code> is a shortcut for <code>-filter:v</code>.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file.</dd>
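<p>Assembled from the drawtext options above, the command might read (a sketch; <code>fontfile=<i>font_path</i></code>, pointing drawtext at a font file on your system, is an addition not covered in the breakdown):</p>
<p><code>ffmpeg -i <i>input_file</i> -vf drawtext="fontfile=<i>font_path</i>:fontsize=<i>font_size</i>:text=<i>watermark_text</i>:fontcolor=<i>font_colour</i>:alpha=0.4:x=(w-text_w)/2:y=(h-text_h)/2" <i>output_file</i></code></p>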
<dt>-filter_complex overlay=main_w-overlay_w-5:5</dt><dd>This calls the overlay filter and sets x and y coordinates for the position of the watermark on the video. Instead of hardcoding specific x and y coordinates, <code>main_w-overlay_w-5:5</code> uses relative coordinates to place the watermark in the upper right-hand corner, based on the width of your input files. Please see the <a href="https://www.ffmpeg.org/ffmpeg-all.html#toc-Examples-102" target="_blank">ffmpeg documentation for more examples</a>.</dd>
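<p>For example (a sketch; the second input is the watermark image):</p>
<p><code>ffmpeg -i <i>input_file</i> -i <i>watermark_file</i> -filter_complex overlay=main_w-overlay_w-5:5 <i>output_file</i></code></p>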
<span data-toggle="modal" data-target="#burn_in_timecode"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Burn in timecode">Burn in timecode</button></span>
<dt>fontsize=<i>font_size</i></dt><dd> Set font size. <code>35</code> is a good starting point for SD. Ideally this value is proportional to video size, for example use ffprobe to acquire video height and divide by 14.</dd>
<dt>timecode=<i>starting_timecode</i></dt><dd> Set the timecode to be displayed for the first frame. Timecode is to be represented as <code>hh:mm:ss[:;.]ff</code>. Colon escaping is determined by OS; for example, in Ubuntu <code>timecode='09\\:50\\:01\\:23'</code>. Ideally, this value would be generated from the file itself using ffprobe.</dd>
<dt>fontcolor=<i>font_colour</i></dt><dd> Set colour of font. Can be a text string such as <code>fontcolor=white</code> or a hexadecimal value such as <code>fontcolor=0xFFFFFF</code></dd>
<dt>box=1</dt><dd> Enable box around timecode</dd>
<dt>boxcolor=<i>box_colour</i></dt><dd> Set colour of box. Can be a text string such as <code>boxcolor=black</code> or a hexadecimal value such as <code>boxcolor=0x000000</code></dd>
<dt>rate=<i>timecode_rate</i></dt><dd> Framerate of video. For example <code>25/1</code></dd>
<dt>x=(w-text_w)/2:y=h/1.2</dt><dd> Sets <i>x</i> and <i>y</i> coordinates for the timecode. These relative values will horizontally centre your timecode in the bottom third regardless of video dimensions.</dd>
</dl>
Note: <code>-vf</code> is a shortcut for <code>-filter:v</code>.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file.</dd>
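<p>Assembled (a sketch; as with the watermark example, <code>fontfile=<i>font_path</i></code> is an addition not covered in the breakdown):</p>
<p><code>ffmpeg -i <i>input_file</i> -vf drawtext="fontfile=<i>font_path</i>:fontsize=<i>font_size</i>:timecode=<i>starting_timecode</i>:fontcolor=<i>font_colour</i>:box=1:boxcolor=<i>box_colour</i>:rate=<i>timecode_rate</i>:x=(w-text_w)/2:y=h/1.2" <i>output_file</i></code></p>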
<span data-toggle="modal" data-target="#images_2_video"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transcode an image sequence into uncompressed 10-bit video">Image sequence into video</button></span>
This must match the naming convention actually used! The pattern %06d matches six-digit-long numbers, possibly with leading zeroes. This allows the full sequence inside one folder to be read in ascending order, one image after the other. For image sequences starting with 086400 (i.e. captured with a timecode starting at 01:00:00:00 and at 24 fps), add the flag <code>-start_number 086400</code> before <code>-i input_file_%06d.ext</code>. The extension for TIFF files is .tif or maybe .tiff; the extension for DPX files is .dpx (or even .cin for old files).</dd>
<dt>-c:v v210</dt><dd>encodes an uncompressed 10-bit video stream</dd>
<dt>-an</dt><dd>no audio</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
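<p>Assembled (a sketch; <code>-framerate 24</code> is an example playback rate):</p>
<p><code>ffmpeg -framerate 24 -i <i>input_file</i>_%06d.ext -c:v v210 <i>output_file</i></code></p>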
<span data-toggle="modal" data-target="#image-audio"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Create video from image and audio">Create video from image and audio</button></span>
<p>This command will take an image file (e.g. image.jpg) and an audio file (e.g. audio.mp3) and combine them into a video file that contains the audio track with the image used as the video. It can be useful in a situation where you might want to upload an audio file to a platform like YouTube. You may want to adjust the scaling with -vf to suit your needs.</p>
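<p>A command along these lines should work (a sketch; <code>-loop 1</code> repeats the single image and <code>-shortest</code> ends the video when the audio ends, both assumptions not stated elsewhere in this entry):</p>
<p><code>ffmpeg -loop 1 -i <i>image_file</i> -i <i>audio_file</i> -c:a copy -shortest -vf scale=1280:720 <i>output_file</i></code></p>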
<span data-toggle="modal" data-target="#set_field_order"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Set field order for interlaced video">Set field order</button></span>
<h4>Find undetermined or unknown stream properties</h4>
<p>These examples use QuickTime inputs and outputs. The strategy will vary or may not be possible in other file formats. In the case of these examples it is the intention to make a lossless copy while clarifying an unknown characteristic of the stream.</p>
<dt>-show_streams</dt><dd>Shows metadata of stream properties</dd>
</dl>
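<p>For example (a sketch):</p>
<p><code>ffprobe -i <i>input_file</i> -show_streams</code></p>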
<p>Values that are set to 'unknown' or 'undetermined' may be unspecified within the stream. An unknown aspect ratio would be expressed as '0:1'. Streams with many unknown properties may have interoperability issues or not play as intended. In many cases, an unknown or undetermined value may be accurate because the information about the source is unclear, but often the value is intended to be known. If a value is undetermined, the stream will often be played with an assumed value (for instance a display_aspect_ratio of '0:1' may be played as 'WIDTH:HEIGHT'), but this may or may not be what is intended. Use carefully.</p>
<h4>Set aspect ratio</h4>
<p>If the display_aspect_ratio is set to '0:1' it may be clarified with the <i>-aspect</i> option and stream copy.</p>
<dt>-map 0</dt><dd>Tells ffmpeg to map all streams of the input to the output.</dd>
<dt>-aspect DAR_NUM:DAR_DEN</dt><dd>Replace DAR_NUM with the display aspect ratio numerator and DAR_DEN with the display aspect ratio denominator, such as <i>-aspect 4:3</i> or <i>-aspect 16:9</i>.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
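<p>For example (a sketch; stream copy plus the aspect override):</p>
<p><code>ffmpeg -i <i>input_file</i> -c copy -map 0 -aspect DAR_NUM:DAR_DEN <i>output_file</i></code></p>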
<h4>Add other stream properties</h4>
<p>Other properties may be clarified in a similar way. Replace <i>-aspect</i> and its value with other properties such as shown in the options below. Note that setting color values in QuickTime requires that <i>-movflags write_colr</i> is set.</p>
<dt>-color_primaries <i>VALUE</i> -movflags write_colr</dt><dd>Set a new color_primaries value. The vocabulary for values is at <a href="http://ffmpeg.org/ffmpeg-all.html" target="_blank">ffmpeg</a>.</dd>
<dt>-color_trc <i>VALUE</i> -movflags write_colr</dt><dd>Set a new color_transfer value. The vocabulary for values is at <a href="http://ffmpeg.org/ffmpeg-all.html" target="_blank">ffmpeg</a>.</dd>
<dt>-field_order <i>VALUE</i></dt><dd>Set interlacement values. The vocabulary for values is at <a href="http://ffmpeg.org/ffmpeg-all.html" target="_blank">ffmpeg</a>.</dd>
<span data-toggle="modal" data-target="#view_format_info"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="View format information">View format information</button></span>
<p>This is all about info!</p>
<p>Made with ♥ at <a href="http://wiki.curatecamp.org/index.php/Association_of_Moving_Image_Archivists_%26_Digital_Library_Federation_Hack_Day_2015" target="_blank">AMIA #AVhack15</a>! Contribute to the project via <a href="https://github.com/amiaopensource/ffmprovisr">our GitHub page</a>!</p>