<p>FFmpeg is a powerful tool for manipulating audiovisual files. Unfortunately, it also has a steep learning curve, especially for users unfamiliar with a command line interface. This app helps users through the command generation process so that more people can reap the benefits of FFmpeg.</p>
<p>Each button displays helpful information about how to perform a wide variety of tasks using FFmpeg. To use this site, click on the task you would like to perform. A new window will open with a sample command and a description of how that command works. You can copy the command, and a breakdown of each flag explains what every part of it does.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input file</i></dt><dd>path, name and extension of the input file</dd>
<span data-toggle="modal" data-target=".create_gif"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Create a GIF from a video">Create GIF</button></span>
<p>The first command will use the palettegen filter to create a custom palette, then the second command will create the GIF with the paletteuse filter. The result is a high quality GIF.</p>
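<p>A minimal sketch of the two commands, assuming a 10 fps frame rate, a 500-pixel-wide output and an intermediate palette file named palette.png (all placeholder values):</p>
<p><code>ffmpeg -i input_file -filter_complex "fps=10,scale=500:-1,palettegen" -t 3 palette.png</code></p>
<p><code>ffmpeg -i input_file -i palette.png -filter_complex "[0:v]fps=10,scale=500:-1[x]; [x][1:v]paletteuse" -t 3 -loop 6 output_file.gif</code></p>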
<dt>-filter_complex "fps=<i>frame rate</i>,scale=<i>width</i>:<i>height</i>,palettegen"</dt><dd>a complex filtergraph using the fps filter to set frame rate, the scale filter to resize, and the palettegen filter to generate the palette. The scale value of <i>-1</i> preserves the aspect ratio</dd>
<dt>-t <i>3</i></dt><dd>duration in seconds (here 3; it can also be specified as a full timestamp, i.e. 00:00:03)</dd>
<dt>-loop <i>6</i></dt><dd>number of times to loop the GIF. A value of <i>-1</i> will disable looping. Omitting <i>-loop</i> will use the default, which loops infinitely</dd>
<p>This is a quick and easy method. Dithering is more apparent than the above method using the palette* filters, but the file size will be smaller. Perfect for that "legacy" GIF look.</p>
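<p>A sketch of that quick-and-dirty approach (the frame rate, size and duration values are placeholders):</p>
<p><code>ffmpeg -i input_file -vf "fps=10,scale=500:-1" -t 3 -loop 6 output_file.gif</code></p>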
<span data-toggle="modal" data-target=".wav_to_mp3"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Converts WAV to MP3">WAV to MP3</button></span>
<dt>-write_id3v1 <i>1</i></dt><dd>Write an ID3v1 tag. This adds metadata in the older ID3 format, assuming you've embedded metadata in the WAV file.</dd>
<dt>-id3v2_version <i>3</i></dt><dd>Write an ID3v2.3 tag. This adds metadata in the newer ID3 format, assuming you've embedded metadata in the WAV file.</dd>
<dt>-dither_method <i>modified_e_weighted</i></dt><dd>Dither makes sure you don't unnecessarily truncate the dynamic range of your audio.</dd>
<dt>-out_sample_rate <i>48k</i></dt><dd>Sets the audio sampling frequency to 48 kHz. This can be omitted to use the same sampling frequency as the input.</dd>
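<p>Putting the flags above together, a sample command might look like this (the flag order and file names are illustrative):</p>
<p><code>ffmpeg -i input_file.wav -write_id3v1 1 -id3v2_version 3 -dither_method modified_e_weighted -out_sample_rate 48k output_file.mp3</code></p>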
<span data-toggle="modal" data-target=".batch_processing"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="FFmpeg batch processing within a single folder">Batch processing</button></span>
<p>Re-wrap .MXF files in a specified directory to .mov files by using this code within a .sh file. The shell script (.sh file) and all MXF files must be contained in the same directory, and the script must be run from within that directory (<code>cd ~/Desktop/MXF_file_directory</code>). Execute the .sh file with the command <code>sh Rewrap-MXF.sh</code>.</p>
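<p>The loop inside Rewrap-MXF.sh might look something like this minimal sketch, which assumes lowercase .mxf extensions (adjust the pattern to match your files) and a straight stream copy into the .mov wrapper:</p>
<p><code>for file in *.mxf ; do ffmpeg -i "$file" -map 0 -c copy "${file%.mxf}.mov" ; done</code></p>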
<span data-toggle="modal" data-target=".create_frame_md5s"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="This will create an MD5 checksum per video frame">Create MD5 checksums</button></span>
<span data-toggle="modal" data-target=".to_prores"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="This will transcode to deinterlaced Apple ProRes LT">Transcode to ProRes</button></span>
<p>This command transcodes an input file into a deinterlaced Apple ProRes 422 LT file with 16-bit linear PCM encoded audio. The file is deinterlaced using the yadif filter (Yet Another De-Interlacing Filter).</p>
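<p>A sketch of such a command, assuming the prores encoder with <code>-profile:v 1</code> selecting the LT profile and 16-bit PCM audio:</p>
<p><code>ffmpeg -i input_file -c:v prores -profile:v 1 -vf yadif -c:a pcm_s16le output_file.mov</code></p>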
<span data-toggle="modal" data-target=".one_thumbnail"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Export one thumbnail per video file">One thumbnail</button></span>
<span data-toggle="modal" data-target=".multi_thumbnail"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Export many thumbnails per video file">Many thumbnails</button></span>
<dt>-vf fps=1/60</dt><dd>-vf is an alias for -filter:v, which creates a filtergraph to apply to the video stream. Here the fps filter sets the output frame rate to 1/60 (that is, one frame per minute). Omitting this will output all frames from the video</dd>
<dt><i>output file</i></dt><dd>path, name and extension of the output file. In the example out%d.png, %d is a placeholder that adds a number (d stands for digit) and increments with each frame (out1.png, out2.png, out3.png…). You may also choose a pattern like out%04d.png, which gives 4 digits with leading zeroes (out0001.png, out0002.png, out0003.png, …).</dd>
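<p>For example, exporting one thumbnail per minute as numbered PNG files:</p>
<p><code>ffmpeg -i input_file -vf fps=1/60 out%d.png</code></p>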
<span data-toggle="modal" data-target=".thumbnails"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Generate thumbnails from a video at regular intervals">Generate thumbnails</button></span>
<span data-toggle="modal" data-target=".pull_specs"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Pull specs from video file">Pull specs</button></span>
<p>This command extracts technical metadata from a video file and displays it as XML.</p>
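<p>One way to do this with ffprobe (the exact -show_* sections you request may vary):</p>
<p><code>ffprobe -i input_file -show_format -show_streams -print_format xml</code></p>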
<p>ffmpeg documentation on ffprobe (full list of flags, commands, <a href="https://www.ffmpeg.org/ffprobe.html" target="_blank">www.ffmpeg.org/ffprobe.html</a>)</p>
<span data-toggle="modal" data-target=".join_files"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Join (concatenate) two or more files into a single file">Join files together</button></span>
<p>This command takes two or more files of the same file type and joins them together to make a single file. All that the program needs is a text file with a list specifying the files that should be joined. However, it only works properly if the files to be combined have the exact same codec and technical specifications. Be careful, ffmpeg may appear to have successfully joined two video files with different codecs, but may only bring over the audio from the second file or have other weird behaviors. Don’t use this command for joining files with different codecs and technical specs and always preview your resulting video file!</p>
<p>ffmpeg documentation on concatenating files (full list of flags, commands, <a href="https://trac.ffmpeg.org/wiki/Concatenate">https://trac.ffmpeg.org/wiki/Concatenate</a>)</p>
<dt>-f concat</dt><dd>forces ffmpeg to concatenate the files and to keep the same file format</dd>
<dt>-i <i>mylist.txt</i></dt><dd>path, name and extension of the input file. This text file contains the list of files to be concatenated and should be formatted as follows:
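<br/><code>file './first_file.ext'</code><br/>
<code>file './second_file.ext'</code><br/>
. . .<br/>
(The file names above are placeholders; each line consists of the keyword <code>file</code> followed by the quoted path of a file to be joined, in playback order.)</dd>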
<span data-toggle="modal" data-target=".excerpt_from_start"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Create an excerpt, starting from the beginning of the file">Excerpt from beginning</button></span>
<p>This command captures a certain portion of a video file, starting from the beginning and continuing for the amount of time (in seconds) specified in the script. This can be used to create a preview file, or to remove unwanted content from the end of the file. To be more specific, use timecode, such as 00:00:05.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-t <i>5</i></dt><dd>Tells ffmpeg to stop copying from the input file after a certain time, and specifies the number of seconds after which to stop copying. In this case, 5 seconds is specified.</dd>
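<p>Putting these together (the <code>-c copy</code>, an assumption here, keeps the excerpt from being re-encoded):</p>
<p><code>ffmpeg -i input_file -t 5 -c copy output_file</code></p>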
<span data-toggle="modal" data-target=".excerpt_from_middle"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Capture five seconds from the middle of a video file">Excerpt from middle</button></span>
<p>This command captures a certain portion of a video file, starting from a designated point in the file and taking an excerpt as long as the amount of time (in seconds) specified in the script. This can be used to create a preview or clip out a desired segment. To be more specific, use timecode, such as 00:00:05.</p>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-ss <i>5</i></dt><dd>Tells ffmpeg what timecode in the file to look for to start copying, and specifies the number of seconds into the video that ffmpeg should start copying. To be more specific, you can use timecode such as 00:00:05.</dd>
<dt>-t <i>10</i></dt><dd>Tells ffmpeg to stop copying from the input file after a certain time, and specifies the number of seconds after which to stop copying. In this case, 10 seconds is specified.</dd>
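<p>A sample command along these lines (again assuming a stream copy with <code>-c copy</code>):</p>
<p><code>ffmpeg -i input_file -ss 5 -t 10 -c copy output_file</code></p>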
<span data-toggle="modal" data-target=".excerpt_to_end"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Create a new video file with the first five seconds trimmed off the original">Excerpt to end</button></span>
<p>This command copies a video file starting from a specified time, removing the first few seconds from the output. This can be used to create an excerpt, or remove unwanted content from the beginning of a video file.</p>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-ss <i>5</i></dt><dd>Tells ffmpeg what timecode in the file to look for to start copying, and specifies the number of seconds into the video that ffmpeg should start copying. To be more specific, you can use timecode such as 00:00:05.</dd>
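<p>For example, trimming the first five seconds off while stream-copying (the <code>-c copy</code> is an assumption):</p>
<p><code>ffmpeg -i input_file -ss 5 -c copy output_file</code></p>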
<span data-toggle="modal" data-target=".split_audio_video"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Create separate audio and video tracks from an audiovisual file">Split audio and video tracks</button></span>
<p>This command splits the original input file into a video and audio stream. The -map command identifies which streams are mapped to which file. To ensure that you’re mapping the right streams to the right file, run ffprobe before writing the script to identify which streams are desired.</p>
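<p>A minimal sketch, assuming the first video and first audio streams are wanted and that the chosen output containers (.mov and .wav here, as placeholders) suit the copied codecs:</p>
<p><code>ffmpeg -i input_file -map 0:v -c:v copy video_output_file.mov -map 0:a -c:a copy audio_output_file.wav</code></p>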
<span data-toggle="modal" data-target=".transcode_h264"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transcode to an H.264 access file">Transcode to H.264</button></span>
<p>This command takes an input file and transcodes it to H.264 with an .mp4 wrapper, keeping the audio the same codec as the original. The libx264 codec defaults to a “medium” preset for compression quality and a CRF of 23. CRF stands for constant rate factor and determines the quality and file size of the resulting H.264 video. A low CRF means high quality and large file size; a high CRF means the opposite.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-c:v libx264</dt><dd>tells ffmpeg to change the video codec of the file to H.264</dd>
<dt>-c:a copy</dt><dd>tells ffmpeg not to change the audio codec</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
<dt>-crf <i>18</i></dt><dd>Specifying a lower CRF will make a larger file with better visual quality. 18 is often considered a “visually lossless” compression.</dd>
<p>libx264 will use a chroma subsampling scheme that is the closest match to that of the input. This can result in YUV 4:2:0, 4:2:2, or 4:4:4 chroma subsampling. QuickTime and most other non-FFmpeg based players can’t decode H.264 files that are not 4:2:0. In order to allow the video to play in all players, you can specify 4:2:0 chroma subsampling:</p>
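<p>For example, a command that forces 4:2:0 chroma subsampling via the pixel format flag might look like:</p>
<p><code>ffmpeg -i input_file -c:v libx264 -pix_fmt yuv420p -c:a copy output_file.mp4</code></p>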
<span data-toggle="modal" data-target=".dcp_to_h264"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transcode from DCP to an H.264 access file">H.264 from DCP</button></span>
<span data-toggle="modal" data-target=".create_iso"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Create ISO files for DVD access">Create ISO</button></span>
<p>Create an ISO file that can be used to burn a DVD. Please note, you will have to install dvdauthor. To install dvdauthor using Homebrew, run: <code>brew install dvdauthor</code></p>
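<p>The ffmpeg step of that workflow might look like the sketch below, which produces a DVD-compliant MPEG-2 file for dvdauthor to work with (the 4:3 aspect ratio and NTSC target are assumptions; the subsequent dvdauthor and ISO-creation steps are not shown here):</p>
<p><code>ffmpeg -i input_file -aspect 4:3 -target ntsc-dvd output_file.mpg</code></p>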
<span data-toggle="modal" data-target=".ntsc_to_h264"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Upscaled, Pillar-boxed HD H.264 Access Files from SD NTSC source">NTSC to H.264</button></span>
<dt>-filter:v</dt><dd>calls an option to apply filtering to the video stream. yadif deinterlaces; scale resizes the video frame and pad then fills the area around the 4:3 image to complete the 16:9 frame. flags=lanczos uses the Lanczos scaling algorithm, which is slower but better than the default bilinear algorithm. Finally, format specifies a pixel format of YUV 4:2:0. The very same scaling filter can also downscale a larger image to HD.</dd>
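<p>A sketch of the full command, assuming an H.264 (libx264) output and a 1440×1080 scale padded to 1920×1080 (these numbers are illustrative for a 1080-line target):</p>
<p><code>ffmpeg -i input_file -c:v libx264 -filter:v "yadif,scale=1440:1080:flags=lanczos,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,format=yuv420p" -c:a copy output_file</code></p>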
<span data-toggle="modal" data-target=".SD_HD"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transform 4:3 aspect ratio into 16:9 with pillarbox">4:3 to 16:9</button></span>
<dt>-filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"</dt><dd>video padding<br/>This resolution independent formula is actually padding any aspect ratio into 16:9 by pillarboxing, because the video filter uses relative values for input width (iw), input height (ih), output width (ow) and output height (oh).</dd>
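<p>For example (the <code>-c:a copy</code> audio handling is an assumption):</p>
<p><code>ffmpeg -i input_file -filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2" -c:a copy output_file</code></p>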
<span data-toggle="modal" data-target=".HD_SD"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transform 16:9 aspect ratio video into 4:3 with letterbox">16:9 to 4:3</button></span>
<dt>-filter:v "pad=iw:iw*3/4:(ow-iw)/2:(oh-ih)/2"</dt><dd>video padding<br/>This resolution independent formula is actually padding any aspect ratio into 4:3 by letterboxing, because the video filter uses relative values for input width (iw), input height (ih), output width (ow) and output height (oh).</dd>
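<p>For example, using the same pattern (again with an assumed <code>-c:a copy</code>):</p>
<p><code>ffmpeg -i input_file -filter:v "pad=iw:iw*3/4:(ow-iw)/2:(oh-ih)/2" -c:a copy output_file</code></p>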
<span data-toggle="modal" data-target=".flip_image"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Flip the image">Flip image</button></span>
<dt>-filter:v "hflip,vflip"</dt><dd>flips the image horizontally and vertically<br/>By using only one of the parameters hflip or vflip for filtering the image is flipped on that axis only. The quote marks are not mandatory.</dd>
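<p>A minimal example (the audio is simply stream-copied here as an assumption):</p>
<p><code>ffmpeg -i input_file -filter:v "hflip,vflip" -c:a copy output_file</code></p>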
<dt>-filter_complex "[0:v]setpts=<i>input_fps</i>/<i>output_fps</i>*PTS[v]; [0:a]atempo=<i>output_fps</i>/<i>input_fps</i>[a]"</dt><dd>A complex filtergraph is needed here in order to handle the video stream and the audio stream separately. The <code>setpts</code> video filter modifies the PTS (presentation time stamp) of the video stream, and the <code>atempo</code> audio filter modifies the speed of the audio stream while keeping the same sound pitch. Note that the order of the parameters for the image and for the sound is inverted:
<ul>
<li>In the video filter <code>setpts</code> the numerator <code>input_fps</code> sets the input speed and the denominator <code>output_fps</code> sets the output speed; both values are given in frames per second.</li>
<li>In the sound filter <code>atempo</code> the numerator <code>output_fps</code> sets the output speed and the denominator <code>input_fps</code> sets the input speed; both values are given in frames per second.</li>
</ul>
Filters within one filterchain are separated by commas, while distinct filterchains are separated by semicolons. The quotation marks allow you to insert spaces between the filters for readability.</dd>
<dt>-map "[v]"</dt><dd>maps the filtered video stream labelled [v] and:</dd>
<dt>-map "[a]"</dt><dd>maps the filtered audio stream labelled [a] together into:</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
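<p>A worked example of the formula above, halving the speed of a 25 fps file: the output plays at 12.5 fps, so setpts is multiplied by 25/12.5 = 2 and atempo is set to 12.5/25 = 0.5 (all values here are placeholders):</p>
<p><code>ffmpeg -i input_file -filter_complex "[0:v]setpts=25/12.5*PTS[v]; [0:a]atempo=0.5[a]" -map "[v]" -map "[a]" output_file</code></p>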
<dt>fontsize=<i>font_size</i></dt><dd> Set font size. <code>35</code> is a good starting point for SD. Ideally this value is proportional to video size, for example use ffprobe to acquire video height and divide by 14.</dd>
<dt>fontcolor=<i>font_colour</i></dt><dd> Set colour of font. Can be a text string such as <code>fontcolor=white</code> or a hexadecimal value such as <code>fontcolor=0xFFFFFF</code></dd>
<dt>x=(w-text_w)/2:y=(h-text_h)/2</dt><dd> Sets <i>x</i> and <i>y</i> coordinates for the watermark. These relative values will centre your watermark regardless of video dimensions.</dd>
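<p>Putting the drawtext options above together into one sketch; the <code>text=</code> and <code>fontfile=</code> values are assumptions and should be replaced with your own watermark text and font path:</p>
<p><code>ffmpeg -i input_file -vf drawtext="fontfile=/Library/Fonts/AppleGothic.ttf:text='watermark':fontsize=35:fontcolor=white:x=(w-text_w)/2:y=(h-text_h)/2" output_file</code></p>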
<span data-toggle="modal" data-target=".burn_in_timecode"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Burn in timecode">Burn in timecode</button></span>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-vf drawtext=</dt><dd>This calls the drawtext filter with the following options:
<dl>
<dt>fontfile=<i>font_path</i></dt><dd> Set path to font. For example in OSX: <code>fontfile=/Library/Fonts/AppleGothic.ttf</code></dd>
<dt>fontsize=<i>font_size</i></dt><dd> Set font size. <code>35</code> is a good starting point for SD. Ideally this value is proportional to video size, for example use ffprobe to acquire video height and divide by 14.</dd>
<dt>timecode=<i>starting_timecode</i></dt><dd> Set the timecode to be displayed for the first frame. Timecode is to be represented as <code>hh:mm:ss[:;.]ff</code>. Colon escaping is determined by the OS; for example in Ubuntu <code>timecode='09\\:50\\:01\\:23'</code>. Ideally, this value would be generated from the file itself using ffprobe.</dd>
<dt>fontcolor=<i>font_colour</i></dt><dd> Set colour of font. Can be a text string such as <code>fontcolor=white</code> or a hexadecimal value such as <code>fontcolor=0xFFFFFF</code></dd>
<dt>box=1</dt><dd> Enable box around timecode</dd>
<dt>boxcolor=<i>box_colour</i></dt><dd> Set colour of box. Can be a text string such as <code>boxcolor=black</code> or a hexadecimal value such as <code>boxcolor=0x000000</code></dd>
<dt>rate=<i>timecode_rate</i></dt><dd> Framerate of video. For example <code>25/1</code></dd>
<dt>x=(w-text_w)/2:y=h/1.2</dt><dd> Sets <i>x</i> and <i>y</i> coordinates for the timecode. These relative values will horizontally centre your timecode in the bottom third regardless of video dimensions.</dd>
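<p>Assembled into a single sketch (the font path, frame rate and starting timecode are placeholders, and the colon escaping may need adjusting for your OS, as noted above):</p>
<p><code>ffmpeg -i input_file -vf drawtext="fontfile=/Library/Fonts/AppleGothic.ttf:fontsize=35:timecode='00\:00\:00\:00':rate=25/1:fontcolor=white:box=1:boxcolor=black:x=(w-text_w)/2:y=h/1.2" output_file</code></p>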
<span data-toggle="modal" data-target=".check_FFV1_fixity"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="This decodes your video and verifies the internal crc checksums">Check FFV1 fixity</button></span>
<p>This decodes your video and displays any CRC checksum mismatches. These errors will display in your terminal like this: <code>[ffv1 @ 0x1b04660] CRC mismatch 350FBD8A!at 0.272000 seconds</code></p>
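<p>One way to run such a decode pass and surface CRC errors might be the following sketch; the <code>-report</code> flag (an assumption here) additionally writes a log file you can keep:</p>
<p><code>ffmpeg -report -i input_file -f null -</code></p>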
<span data-toggle="modal" data-target=".images_2_video"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transcode an image sequence into uncompressed 10-bit video">Image sequence into video</button></span>
<dt>-f image2</dt><dd>forces the image file de-muxer for single image files
</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file<br/>
This must match the naming convention actually used! The pattern %06d matches numbers that are six digits long, possibly with leading zeroes. This allows the full sequence inside one folder to be read in ascending order, one image after the other. For image sequences starting with 086400 (i.e. captured with a timecode starting at 01:00:00:00 and at 24 fps), add the flag <code>-start_number 086400</code> before <code>-i input_file_%06d.ext</code>. The extension for TIFF files is .tif or maybe .tiff; the extension for DPX files is .dpx (or possibly .cin for older files).</dd>
<dt>-c:v v210</dt><dd>encodes an uncompressed 10-bit video stream</dd>
<dt>-an</dt><dd>no audio (audio no)</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
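<p>A sketch of the whole command, assuming a 24 fps DPX sequence wrapped into a .mov file (the frame rate and extensions are placeholders):</p>
<p><code>ffmpeg -f image2 -framerate 24 -i input_file_%06d.dpx -c:v v210 -an output_file.mov</code></p>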
<span data-toggle="modal" data-target=".create_FFV1_mkv"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transcode your file with the FFV1 Version 3 Codec in a matroska container">Create FFV1.mkv</button></span>
<p>This will losslessly transcode your video with the FFV1 Version 3 codec in a Matroska container. In order to verify losslessness, a framemd5 of the source video is also generated. For more information on FFV1 encoding, <a href="https://trac.ffmpeg.org/wiki/Encode/FFV1" target="_blank">try the ffmpeg wiki.</a></p>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file.</dd>
<dt>-map 0</dt><dd>Map all streams that are present in the input file. This is important as ffmpeg will map only one stream of each type (video, audio, subtitles) by default to the output video.</dd>
<dt>-dn</dt><dd>ignore data streams (data no). The matroska container does not allow data tracks.</dd>
<dt>-g 1</dt><dd>specifies intra-frame encoding, or GOP=1.</dd>
<dt>-slicecrc 1</dt><dd>Adds CRC information for each slice. This makes it possible for a decoder to detect errors in the bitstream, rather than blindly decoding a broken slice.</dd>
<dt>-slices 16</dt><dd>Each frame is split into 16 slices. 16 is a good trade-off between filesize and encoding time. <a href="http://ndsr.nycdigital.org/diving-in-head-first/" target="_blank">Click here for more information.</a></dd>
<dt>-c:a copy</dt><dd>copies all mapped audio streams.</dd>
<dt><i>output_file</i>.mkv</dt><dd>path and name of the output file. Use the <code>.mkv</code> extension to save your file in a Matroska container. Optionally, choose a different extension if you want a different container, such as <code>.mov</code> or <code>.avi</code>.</dd>
<dt>-f framemd5</dt><dd> Decodes video with the framemd5 muxer in order to generate md5 checksums for every frame of your input file. This allows you to verify losslessness when compared against the framemd5s of the output file.</dd>
<dt>-an</dt><dd>ignores the audio stream when creating framemd5 (audio no)</dd>
<dt><i>framemd5_output_file</i></dt><dd>path, name and extension of the framemd5 file.</dd>
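<p>Assembled into one sketch; note that <code>-c:v ffv1 -level 3</code> (selecting FFV1 version 3) is assumed here, since the codec flag itself is not listed in the breakdown above:</p>
<p><code>ffmpeg -i input_file -map 0 -dn -c:v ffv1 -level 3 -g 1 -slicecrc 1 -slices 16 -c:a copy output_file.mkv -f framemd5 -an framemd5_output_file</code></p>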
<p>Made with ♥ at <a href="http://wiki.curatecamp.org/index.php/Association_of_Moving_Image_Archivists_%26_Digital_Library_Federation_Hack_Day_2015">AMIA #AVhack15</a>! Contribute to the project via <a href="https://github.com/amiaopensource/ffmprovisr">our GitHub page</a>!</p>