diff --git a/index.html b/index.html index b882278..5c6edb7 100644 --- a/index.html +++ b/index.html @@ -17,8 +17,7 @@ - Table of Contents - + Table of Contents About this resource FFmpeg basics Advanced FFmpeg concepts @@ -63,13 +62,11 @@ Learn about FFmpeg basics - Basic structure of an FFmpeg command Basic structure of an FFmpeg command - At its basis, an FFmpeg command is relatively simple. After you have installed FFmpeg (see instructions here), the program is invoked simply by typing ffmpeg at the command prompt. Subsequently, each instruction that you supply to FFmpeg is actually a pair: a flag, which designates the type of action you want to carry out; and then the specifics of that action. Flags are always prepended with a hyphen. For example, in the instruction -i input_file.ext, the -i flag tells FFmpeg that you are supplying an input file, and input_file.ext states which file it is. @@ -82,6 +79,7 @@ output_file.extpath and name of the output file. Because this is the last part of the command, the filename you type here does not have a flag designating it as the output file. + @@ -94,7 +92,6 @@ Filtergraphs - Many FFmpeg commands use filters that manipulate the video or audio stream in some way: for example, hflip to horizontally flip a video, or amerge to merge two or more audio tracks into a single stream. The use of a filter is signalled by the flag -vf (video filter) or -af (audio filter), followed by the name and options of the filter itself. For example, take the convert colourspace command: ffmpeg -i input_file -c:v libx264 -vf colormatrix=src:dst output_file @@ -116,6 +113,7 @@ Straight quotation marks ("like this") rather than curved quotation marks (“like this”) should be used. For more information, check out the FFmpeg wiki Filtering Guide. + @@ -128,7 +126,6 @@ Rewrap a file - ffmpeg -i input_file.ext -c copy -map 0 output_file.ext This script will rewrap a video file. 
It will create a new video file where the inner content (the video, audio, and subtitle data) of the original file is unchanged, but these streams are rehoused within a different container format. Note: rewrapping is also known as remuxing, short for re-multiplexing. @@ -144,6 +141,7 @@ Important caveat It may not be possible to rewrap a file's contents to a new container without re-encoding one or more of the streams within (that is, the video, audio, and subtitle tracks). Some containers can only contain streams of a certain encoding type: for example, the .mp4 container does not support uncompressed audio tracks. (In practice .mp4 goes hand-in-hand with an H.264-encoded video stream and an AAC-encoded audio stream, although other types of video and audio streams are possible). Another example is that the Matroska container does not allow data tracks; see the MKV to MP4 recipe. In such cases, FFmpeg will throw an error. If you encounter errors of this kind, you may wish to consult the list of transcoding recipes. + @@ -152,7 +150,6 @@ MKV to MP4 - ffmpeg -i input_file.mkv -c:v copy -c:a aac output_file.mp4 This will convert your Matroska (MKV) files to MP4 files. @@ -166,6 +163,7 @@ output_filepath and name of the output file The extension for the MP4 container is .mp4. + @@ -178,7 +176,6 @@ Transcode into a deinterlaced Apple ProRes LT - ffmpeg -i input_file -c:v prores -profile:v 1 -vf yadif -c:a pcm_s16le output_file.mov This command transcodes an input file into a deinterlaced Apple ProRes 422 LT file with 16-bit linear PCM encoded audio. The file is deinterlaced using the yadif filter (Yet Another De-Interlacing Filter). @@ -202,6 +199,7 @@ prores is much faster, can be used for progressive video only, and seems to be better for video according to Rec. 601 (Recommendation ITU-R BT.601).
prores_ks generates a better file, can also be used for interlaced video, also allows encoding of ProRes 4444 (-c:v prores_ks -profile:v 4), and seems to be better for video according to Rec. 709 (Recommendation ITU-R BT.709). + @@ -210,7 +208,6 @@ Transcode to H.264 - ffmpeg -i input_file -c:v libx264 -pix_fmt yuv420p -c:a copy output_file This command takes an input file and transcodes it to H.264 with an .mp4 wrapper, keeping the audio the same codec as the original. The libx264 codec defaults to a “medium” preset for compression quality and a CRF of 23. CRF stands for constant rate factor and determines the quality and file size of the resulting H.264 video. A low CRF means high quality and large file size; a high CRF means the opposite. @@ -230,6 +227,7 @@ If no crf is specified, libx264 will use a default value of 23. 18 is often considered a “visually lossless” compression. For more information, see the FFmpeg and H.264 Encoding Guide on the FFmpeg wiki. + @@ -238,7 +236,6 @@ H.264 from DCP - ffmpeg -i input_video_file.mxf -i input_audio_file.mxf -c:v libx264 -pix_fmt yuv420p -c:a aac output_file.mp4 This will transcode MXF wrapped video and audio files to an H.264 encoded MP4 file. Please note this only works for unencrypted, single reel DCPs. @@ -257,6 +254,7 @@ -c:a copycopies the audio stream without re-encoding output_file.mkvpath, name and .mkv extension of the output file + @@ -265,7 +263,6 @@ Create FFV1 Version 3 video in a Matroska container with framemd5 of input - ffmpeg -i input_file -map 0 -dn -c:v ffv1 -level 3 -g 1 -slicecrc 1 -slices 16 -c:a copy output_file.mkv -f framemd5 -an framemd5_output_file This will losslessly transcode your video with the FFV1 Version 3 codec in a Matroska container. In order to verify losslessness, a framemd5 of the source video is also generated. For more information on FFV1 encoding, try the FFmpeg wiki.
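The losslessness check described here can be completed by generating a second framemd5 from the FFV1 output and comparing it with the one made from the source. A minimal sketch, with hypothetical filenames; the ffmpeg steps are shown as comments so the comparison logic itself runs on mock framemd5 files:

```shell
# The real workflow would generate the two framemd5 files like this
# (assumes ffmpeg is installed; filenames are hypothetical):
# ffmpeg -i input_file -f framemd5 -an source.framemd5
# ffmpeg -i output_file.mkv -f framemd5 -an ffv1.framemd5

# Mock two framemd5 outputs so the comparison step can be demonstrated
# (the hash shown is a placeholder, not from a real file).
printf '0, 0, 0, 1, 622080, d41d8cd98f00b204e9800998ecf8427e\n' > source.framemd5
printf '0, 0, 0, 1, 622080, d41d8cd98f00b204e9800998ecf8427e\n' > ffv1.framemd5

# framemd5 hashes the decoded frames, not the container, so a plain
# diff of the two files verifies the transcode was lossless.
if diff -q source.framemd5 ffv1.framemd5 >/dev/null; then
  echo "lossless: frame MD5s match"
else
  echo "MISMATCH: transcode was not lossless"
fi
```

If any frame differs, diff reports the files as different and the transcode should be investigated.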
@@ -284,6 +281,7 @@ -anignores the audio stream when creating framemd5 (audio no) framemd5_output_filepath, name and extension of the framemd5 file. + @@ -292,7 +290,6 @@ Convert DVD to H.264 - ffmpeg -i concat:input_file1\|input_file2\|input_file3 -c:v libx264 -c:a copy output_file.mp4 This command allows you to create an H.264 file from a DVD source that is not copy-protected. Before encoding, you’ll need to establish which of the .VOB files on the DVD or .iso contain the content that you wish to encode. Inside the VIDEO_TS directory, you will see a series of files with names like VTS_01_0.VOB, VTS_01_1.VOB, etc. Some of the .VOB files will contain menus, special features, etc, so locate the ones that contain target content by playing them back in VLC. @@ -317,6 +314,7 @@ -map 0:vencodes all video streams -map 0:aencodes all audio streams + @@ -325,7 +323,6 @@ Transcode to H.265/HEVC - ffmpeg -i input_file -c:v libx265 -pix_fmt yuv420p -c:a copy output_file This command takes an input file and transcodes it to H.265/HEVC in an .mp4 wrapper, keeping the audio codec the same as in the original file. Note: FFmpeg must be compiled with libx265, the library of the H.265 codec, for this script to work. (Add the flag --with-x265 if using the brew install ffmpeg method). @@ -345,6 +342,7 @@ -preset veryslowThis option tells FFmpeg to use the slowest preset possible for the best compression quality. -crf 18Specifying a lower CRF will make a larger file with better visual quality. 18 is often considered a ‘visually lossless’ compression. + @@ -356,7 +354,6 @@ WAV to MP3 - ffmpeg -i input_file.wav -write_id3v1 1 -id3v2_version 3 -dither_method rectangular -out_sample_rate 48k -qscale:a 1 output_file.mp3 This will convert your WAV files to MP3s. @@ -370,10 +367,11 @@ output_filepath and name of the output file A couple notes - - About ID3v2.3 tag: ID3v2.3 is better supported than ID3v2.4, FFmpeg's default ID3v2 setting. 
- About dither methods: FFmpeg comes with a variety of dither algorithms, outlined in the official docs, though some may lead to unintended, drastic digital clipping on some systems. - + + About ID3v2.3 tag: ID3v2.3 is better supported than ID3v2.4, FFmpeg's default ID3v2 setting. + About dither methods: FFmpeg comes with a variety of dither algorithms, outlined in the official docs, though some may lead to unintended, drastic digital clipping on some systems. + + @@ -382,7 +380,6 @@ Generate two access MP3s from input. One with appended audio (such as a copyright notice) and one unmodified. - ffmpeg -i input_file -i input_file_to_append -filter_complex "[0:a:0]asplit=2[a][b];[b]afifo[bb];[1:a:0][bb]concat=n=2:v=0:a=1[concatout]" -map "[a]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 output_file.mp3 -map "[concatout]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 output_file_appended.mp3 This script allows you to generate two derivative audio files from a master while appending audio from a separate file (for example a copyright or institutional notice) to one of them. @@ -400,6 +397,7 @@ -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2sets up MP3 options (using constant quality) output_file_appendedpath, name and extension of the output file (with appended notice) + @@ -408,7 +406,6 @@ WAV to AAC/MP4 - ffmpeg -i input_file.wav -c:a aac -b:a 128k -dither_method rectangular -ar 44100 output_file.mp4 This will convert your WAV file to AAC/MP4. @@ -421,6 +418,7 @@ output_filepath and name of the output file A note about dither methods. FFmpeg comes with a variety of dither algorithms, outlined in the official docs, though some may lead to unintended, not-subtle digital clipping on some systems. + @@ -433,7 +431,6 @@ Transform 4:3 aspect ratio into 16:9 with pillarbox - Transform a video file with 4:3 aspect ratio into a video file with 16:9 aspect ratio by correct pillarboxing. 
ffmpeg -i input_file -filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2" -c:a copy output_file @@ -444,6 +441,7 @@ For silent videos you can replace -c:a copy by -an. output_filepath, name and extension of the output file + @@ -452,7 +450,6 @@ Transform 16:9 aspect ratio video into 4:3 with letterbox - Transform a video file with 16:9 aspect ratio into a video file with 4:3 aspect ratio by correct letterboxing. ffmpeg -i input_file -filter:v "pad=iw:iw*3/4:(ow-iw)/2:(oh-ih)/2" -c:a copy output_file @@ -464,15 +461,33 @@ For silent videos you can replace -c:a copy by -an. output_filepath, name and extension of the output file + + + Flip video image + + + Flip the video image horizontally and/or vertically + ffmpeg -i input_file -filter:v "hflip,vflip" -c:a copy output_file + + ffmpegstarts the command + -i input_filepath, name and extension of the input file + -filter:v "hflip,vflip"flips the image horizontally and verticallyBy using only one of the parameters hflip or vflip, the image is flipped on that axis only. The quote marks are not mandatory. + -c:a copycopies the audio stream without re-encoding + For silent videos you can replace -c:a copy by -an. + output_filepath, name and extension of the output file + + + + + Transform SD to HD with pillarbox Transform SD into HD with pillarbox - Transform an SD video file with 4:3 aspect ratio into an HD video file with 16:9 aspect ratio by correct pillarboxing. ffmpeg -i input_file -filter:v "colormatrix=bt601:bt709, scale=1440:1080:flags=lanczos, pad=1920:1080:240:0" -c:a copy output_file @@ -488,6 +503,7 @@ For silent videos you can replace -c:a copy with -an. output_filepath, name and extension of the output file + @@ -496,7 +512,6 @@ Change Display Aspect Ratio without reencoding video - ffmpeg -i input_file -c:v copy -aspect 4:3 output_file ffmpegstarts the command @@ -505,6 +520,7 @@ -aspect 4:3Change Display Aspect Ratio to 4:3. Experiment with other aspect ratios such as 16:9.
If used together with -c:v copy, it will affect the aspect ratio stored at container level, but not the aspect ratio stored in encoded frames, if it exists. output_filepath, name and extension of the output file + @@ -513,7 +529,6 @@ Transcode video to a different colourspace - This command uses a filter to convert the video to a different colour space. ffmpeg -i input_file -c:v libx264 -vf colormatrix=src:dst output_file @@ -558,6 +573,7 @@ 1. Out of step with the regular pattern, -color_trc doesn’t accept bt470bg; it is instead here referred to directly as gamma. In the Rec.601 standard, 525-line/NTSC and 625-line/PAL video have assumed gammas of 2.2 and 2.8 respectively. ↩ + @@ -566,7 +582,6 @@ Modify image and sound speed - E.g. for converting 24fps to 25fps with audio pitch compensation for PAL access copies. (Thanks @kieranjol!) ffmpeg -i input_file -filter_complex "[0:v]setpts=input_fps/output_fps*PTS[v]; [0:a]atempo=output_fps/input_fps[a]" -map "[v]" -map "[a]" output_file @@ -582,6 +597,7 @@ -map "[a]"maps the audio stream together into: output_filepath, name and extension of the output file + @@ -590,7 +606,6 @@ Set stream properties - Find undetermined or unknown stream properties These examples use QuickTime inputs and outputs. The strategy will vary or may not be possible in other file formats. In the case of these examples it is the intention to make a lossless copy while clarifying an unknown characteristic of the stream. ffprobe input_file -show_streams @@ -619,6 +634,7 @@ -field_order VALUESet interlacement values. The possible values for -color_primaries, -color_trc, and -field_order are given in the Codec Options section of the FFmpeg docs - scroll down to near the bottom of the section. + @@ -631,7 +647,6 @@ Extract audio from an AV file - ffmpeg -i input_file -c:a copy -vn output_file This command extracts the audio stream without loss from an audiovisual file. 
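The extraction command above can be wrapped in a loop to process a whole folder, in the spirit of the Bash scripting entry later in this resource. A sketch with hypothetical filenames; the .m4a extension assumes the source audio is AAC, so match the extension to your actual codec, and echo is used as a dry run so nothing is transcoded here:

```shell
# Build a stand-in folder with one mock input so the loop has a file
# to match (hypothetical filename).
dir=$(mktemp -d)
touch "$dir/demo_tape.mov"

for file in "$dir"/*.mov; do
  # Drop the echo to actually run ffmpeg on each file.
  cmd="ffmpeg -i $file -c:a copy -vn ${file%.mov}.m4a"
  echo "$cmd"
done
```

The `${file%.mov}` expansion strips the input extension so the output sits next to the input with a new extension.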
@@ -641,6 +656,7 @@ -vnno video stream output_filepath, name and extension of the output file + @@ -649,7 +665,6 @@ Combine audio tracks into one in a video file - ffmpeg -i input_file -filter_complex "[0:a:0][0:a:1]amerge[out]" -map 0:v -map "[out]" -c:v copy -shortest output_file This command combines two audio tracks present in a video file into one stream. It can be useful in situations where a downstream process, like YouTube’s automatic captioning, expects one audio track. To ensure that you’re mapping the right audio tracks, run ffprobe before writing the script to identify which tracks are desired. More than two audio streams can be combined by extending the pattern present in the -filter_complex option. @@ -665,6 +680,7 @@ -shortestlimit to the shortest stream output_filepath, name and extension of the video output file + @@ -673,7 +689,6 @@ Flip audio phase shift - ffmpeg -i input_file -af pan="stereo|c0=c0|c1=-1*c1" output_file This command inverts the audio phase of the second channel by rotating it 180°. @@ -684,6 +699,7 @@ "stereo|c0=c0|c1=-1*c1"maps the output's first channel (c0) to the input's first channel and the output's second channel (c1) to the inverse of the input's second channel output filepath, name and extension of the output file + @@ -692,7 +708,6 @@ Calculate Loudness Levels - ffmpeg -i input_file -af loudnorm=print_format=json -f null - This filter calculates and outputs loudness information in json about an input file (labeled input) as well as what the levels would be if loudnorm were applied in its one pass mode (labeled output). The values generated can be used as inputs for a 'second pass' of the loudnorm filter allowing more accurate loudness normalization than if it is used in a single pass. These instructions use the loudnorm defaults, which align well with PBS recommendations for target loudness. More information can be found at the loudnorm documentation.
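Using the values from this first pass means copying each measured number into the second-pass filter string. A sketch of that hand-off, run on a mock of the JSON block loudnorm prints (the numbers are illustrative, not from a real file) so it works without FFmpeg installed:

```shell
# Mock of the JSON loudnorm prints at the end of a first pass
# (illustrative values only).
cat > loudnorm.json <<'EOF'
{
    "input_i" : "-27.61",
    "input_tp" : "-4.47",
    "input_lra" : "18.06",
    "input_thresh" : "-39.20",
    "target_offset" : "0.42"
}
EOF

# Pull one measured value out of the JSON by key.
get() { sed -n "s/.*\"$1\" : \"\([^\"]*\)\".*/\1/p" loudnorm.json; }

# Assemble the second-pass filter string in the shape used by the
# Two Pass Loudness Normalization recipe.
filter="loudnorm=dual_mono=true:measured_I=$(get input_i):measured_TP=$(get input_tp):measured_LRA=$(get input_lra):measured_thresh=$(get input_thresh):offset=$(get target_offset):linear=true"
echo "$filter"
```

The resulting string is what you would pass to `-af` in the second pass, replacing the `input_i`/`input_tp`/etc. placeholders shown in that recipe.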
@@ -704,6 +719,7 @@ print_format=jsonsets the output format for loudness information to json. This format makes it easy to use in a second pass. For a more human readable output, this can be set to print_format=summary -f null -sets the file output to null (since we are only interested in the metadata generated) + @@ -712,7 +728,6 @@ RIAA Equalization - ffmpeg -i input_file -af aemphasis=type=riaa output_file This will apply RIAA equalization to an input file allowing correct listening of audio transferred 'flat' (without EQ) from records that used this EQ curve. For more information about RIAA equalization see the Wikipedia page on the subject. @@ -721,6 +736,7 @@ -af aemphasis=type=riaaactivates the aemphasis filter and sets it to use RIAA equalization output_filepath and name of output file + @@ -729,7 +745,6 @@ One Pass Loudness Normalization - ffmpeg -i input_file -af loudnorm=dual_mono=true -ar 48k output_file This will normalize the loudness of an input using one pass, which is quicker but less accurate than using two passes. This command uses the loudnorm filter defaults for target loudness. These defaults align well with PBS recommendations, but loudnorm does allow targeting of specific loudness levels. More information can be found at the loudnorm documentation. Information about PBS loudness standards can be found in the PBS Technical Operating Specifications document. Information about EBU loudness standards can be found in the EBU R 128 recommendation document. @@ -741,6 +756,7 @@ -ar 48kSets the output sample rate to 48 kHz. (The loudnorm filter upsamples to 192 kHz so it is best to manually set a desired output sample rate). 
output_filepath, name and extension for output file + @@ -749,7 +765,6 @@ Two Pass Loudness Normalization - ffmpeg -i input_file -af loudnorm=dual_mono=true:measured_I=input_i:measured_TP=input_tp:measured_LRA=input_lra:measured_thresh=input_thresh:offset=target_offset:linear=true -ar 48k output_file This command allows using the levels calculated using a first pass of the loudnorm filter to more accurately normalize loudness. This command uses the loudnorm filter defaults for target loudness. These defaults align well with PBS recommendations, but loudnorm does allow targeting of specific loudness levels. More information can be found at the loudnorm documentation. Information about PBS loudness standards can be found in the PBS Technical Operating Specifications document. Information about EBU loudness standards can be found in the EBU R 128 recommendation document. @@ -767,6 +782,7 @@ -ar 48kSets the output sample rate to 48 kHz. (The loudnorm filter upsamples to 192 kHz so it is best to manually set a desired output sample rate). output_filepath, name and extension for output file + @@ -775,7 +791,6 @@ Fix AV Sync: Resample audio - ffmpeg -i input_file -c:v copy -c:a pcm_s16le -af "aresample=async=1000" output_file ffmpegstarts the command @@ -785,6 +800,7 @@ -af "aresample=async=1000"Uses the aresample filter to stretch/squeeze samples to given timestamps, with a maximum of 1000 samples per second compensation. output_filepath, name and extension of the output file. Try different file extensions such as mkv, mov, mp4, or avi. + @@ -797,7 +813,6 @@ Join files together - ffmpeg -f concat -i mylist.txt -c copy output_file This command takes two or more files of the same file type and joins them together to make a single file. All that the program needs is a text file with a list specifying the files that should be joined. However, it only works properly if the files to be combined have the exact same codec and technical specifications. 
Be careful, FFmpeg may appear to have successfully joined two video files with different codecs, but may only bring over the audio from the second file or have other weird behaviors. Don’t use this command for joining files with different codecs and technical specs and always preview your resulting video file! @@ -816,6 +831,7 @@ output_filepath, name and extension of the output file For more information, see the FFmpeg wiki page on concatenating files. + @@ -824,7 +840,6 @@ Split file into segments - ffmpeg -i input_file -c copy -map 0 -f segment -segment_time 60 -reset_timestamps 1 output_file-%03d.mkv ffmpegStarts the command. @@ -847,6 +862,7 @@ + @@ -855,7 +871,6 @@ Trim a video without re-encoding - ffmpeg -i input_file -ss 00:02:00 -to 00:55:00 -c copy -map 0 output_file This command allows you to create an excerpt from a video file without re-encoding the image data. @@ -874,6 +889,7 @@ -ss 00:05:00 -t 10Beginning five minutes into the original video, this command will create a 10-second-long excerpt. Note: In order to keep the original timestamps, without trying to sanitise them, you may add the -copyts option. + @@ -882,7 +898,6 @@ Excerpt from beginning - ffmpeg -i input_file -t 5 -c copy -map 0 output_file This command captures a certain portion of a video file, starting from the beginning and continuing for the amount of time (in seconds) specified in the script. This can be used to create a preview file, or to remove unwanted content from the end of the file. To be more specific, use timecode, such as 00:00:05. @@ -893,6 +908,7 @@ -map 0tells FFmpeg to map all streams of the input to the output. output_filepath, name and extension of the output file + @@ -901,7 +917,6 @@ Excerpt to end - ffmpeg -i input_file -ss 5 -c copy -map 0 output_file This command copies a video file starting from a specified time, removing the first few seconds from the output. This can be used to create an excerpt, or remove unwanted content from the beginning of a video file. 
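As the excerpt recipes show, -ss, -to, and -t accept either plain seconds or HH:MM:SS timecode. When scripting excerpts it can help to convert between the two; a tiny helper (not part of FFmpeg, just plain shell arithmetic):

```shell
# Convert a second count into the HH:MM:SS form accepted by
# -ss, -to, and -t.
to_timecode() {
  printf '%02d:%02d:%02d\n' $(($1 / 3600)) $(($1 % 3600 / 60)) $(($1 % 60))
}

to_timecode 5      # prints 00:00:05
to_timecode 3300   # prints 00:55:00
```

For example, `ffmpeg -i input_file -ss "$(to_timecode 120)" -to "$(to_timecode 3300)" -c copy -map 0 output_file` reproduces the trim recipe above with computed in and out points.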
@@ -912,6 +927,7 @@ -map 0tells FFmpeg to map all streams of the input to the output. output_filepath, name and extension of the output file + @@ -920,7 +936,6 @@ Excerpt from end - ffmpeg -sseof -5 -i input_file -c copy -map 0 output_file This command copies a video file starting from a specified time before the end of the file, removing everything before from the output. This can be used to create an excerpt, or extract content from the end of a video file (e.g. for extracting the closing credits). @@ -931,6 +946,7 @@ -map 0tells FFmpeg to map all streams of the input to the output. output_filepath, name and extension of the output file + @@ -943,7 +959,6 @@ Upscaled, Pillar-boxed HD H.264 Access Files from SD NTSC source - ffmpeg -i input_file -c:v libx264 -filter:v "yadif, scale=1440:1080:flags=lanczos, pad=1920:1080:(ow-iw)/2:(oh-ih)/2, format=yuv420p" output_file ffmpegstarts the command @@ -959,6 +974,7 @@ output_filepath, name and extension of the output file Note: the very same scaling filter also downscales a bigger image size into HD. + @@ -967,7 +983,6 @@ Deinterlace a video - ffmpeg -i input_file -c:v libx264 -vf "yadif,format=yuv420p" output_file This command takes an interlaced input file and outputs a deinterlaced H.264 MP4. @@ -994,6 +1009,7 @@ + @@ -1002,7 +1018,6 @@ Inverse telecine a video file - ffmpeg -i input_file -c:v libx264 -vf "fieldmatch,yadif,decimate" output_file The inverse telecine procedure reverses the 3:2 pull down process, restoring 29.97fps interlaced video to the 24fps frame rate of the original film source. @@ -1025,6 +1040,7 @@ + @@ -1033,7 +1049,6 @@ Change field order of an interlaced video - ffmpeg -i input_file -c:v video_codec -filter:v setfield=tff output_file ffmpegstarts the command @@ -1042,6 +1057,7 @@ -c:v video_codecAs a video filter is used, it is not possible to use -c copy. The video must be re-encoded with whatever video codec is chosen, e.g. ffv1, v210 or prores. 
output_filepath, name and extension of the output file + @@ -1050,7 +1066,6 @@ Check video file interlacement patterns - ffmpeg -i input_file -filter:v idet -f null - ffmpegstarts the command @@ -1059,6 +1074,7 @@ -f nullVideo is decoded with the null muxer. This allows video decoding without creating an output file. -FFmpeg syntax requires a specified output, and - is just a placeholder. No file is actually created. + @@ -1071,7 +1087,6 @@ Create centered, transparent text watermark - E.g., for creating access copies with your institution's name ffmpeg -i input_file -vf drawtext="fontfile=font_path:fontsize=font_size:text=watermark_text:fontcolor=font_colour:alpha=0.4:x=(w-text_w)/2:y=(h-text_h)/2" output_file @@ -1089,6 +1104,7 @@ Note: -vf is a shortcut for -filter:v. output_filepath, name and extension of the output file. + @@ -1097,7 +1113,6 @@ Overlay image watermark on video - ffmpeg -i input_video_file -i input_image_file -filter_complex overlay=main_w-overlay_w-5:5 output_file ffmpegstarts the command @@ -1106,6 +1121,7 @@ -filter_complex overlay=main_w-overlay_w-5:5This calls the overlay filter and sets x and y coordinates for the position of the watermark on the video. Instead of hardcoding specific x and y coordinates, main_w-overlay_w-5:5 uses relative coordinates to place the watermark in the upper right hand corner, based on the width of your input files. Please see the FFmpeg documentation for more examples. output_filepath, name and extension of the output file + @@ -1114,7 +1130,6 @@ Create a burnt-in timecode on your image - ffmpeg -i input_file -vf drawtext="fontfile=font_path:fontsize=font_size:timecode=starting_timecode:fontcolor=font_colour:box=1:boxcolor=box_colour:rate=timecode_rate:x=(w-text_w)/2:y=h/1.2" output_file ffmpegstarts the command @@ -1133,6 +1148,7 @@ output_filepath, name and extension of the output file. Note: -vf is a shortcut for -filter:v.
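One easy slip with the timecode option: the colons inside the starting timecode value must be escaped, or FFmpeg reads them as drawtext option separators. A sketch that builds the filter string with the escaping applied (the font path and values are hypothetical):

```shell
# Hypothetical starting timecode; the colons inside it must become \:
start_tc='01:00:00:00'
escaped=$(printf '%s' "$start_tc" | sed 's/:/\\:/g')

# Assemble a drawtext filter string like the recipe above
# (font path is a hypothetical macOS example).
filter="drawtext=fontfile=/Library/Fonts/Arial.ttf:fontsize=35:timecode=$escaped:rate=25:fontcolor=white:box=1:boxcolor=black:x=(w-text_w)/2:y=h/1.2"
echo "$filter"
```

On the command line the whole drawtext value should still be wrapped in quotes, exactly as in the recipe above, so the shell does not interpret the parentheses.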
+ @@ -1145,7 +1161,6 @@ One thumbnail - ffmpeg -i input_file -ss 00:00:20 -vframes 1 thumb.png This command will grab a thumbnail 20 seconds into the video. @@ -1155,6 +1170,7 @@ -vframes 1sets the number of frames (in this example, one frame) output filepath, name and extension of the output file + @@ -1163,7 +1179,6 @@ Many thumbnails - ffmpeg -i input_file -vf fps=1/60 out%d.png This will grab a thumbnail every minute and output sequential png files. @@ -1173,6 +1188,7 @@ -vf fps=1/60Creates a filtergraph to use for the streams. The rest of the command identifies filtering by frames per second, and sets the frames per second at 1/60 (which is one per minute). Omitting this will output all frames from the video. output filepath, name and extension of the output file. In the example out%d.png, %d is a placeholder that adds a number (d is for digit) and increments with each frame (out1.png, out2.png, out3.png…). You may also choose a pattern like out%04d.png, which gives 4 digits with leading 0 (out0001.png, out0002.png, out0003.png, …). + @@ -1181,7 +1197,6 @@ Images to GIF - ffmpeg -f image2 -framerate 9 -pattern_type glob -i "input_image_*.jpg" -vf scale=250x250 output_file.gif This will convert a series of image files into a GIF. @@ -1194,6 +1209,7 @@ -vf scale=250x250filter the video to scale it to 250x250; -vf is an alias for -filter:v output_file.gifpath and name of the output file + @@ -1202,7 +1218,6 @@ Create GIF - Create high quality GIF ffmpeg -ss HH:MM:SS -i input_file -filter_complex "fps=10,scale=500:-1:flags=lanczos,palettegen" -t 3 palette.png ffmpeg -ss HH:MM:SS -i input_file -i palette.png -filter_complex "[0:v]fps=10, scale=500:-1:flags=lanczos[v], [v][1:v]paletteuse" -t 3 -loop 6 output_file @@ -1227,6 +1242,7 @@ Simpler GIF creation ffmpeg -ss HH:MM:SS -i input_file -vf "fps=10,scale=500:-1" -t 3 -loop 6 output_file This is a quick and easy method.
Dithering is more apparent than the above method using the palette filters, but the file size will be smaller. Perfect for that “legacy” GIF look. + @@ -1239,7 +1255,6 @@ Transcode an image sequence into uncompressed 10-bit video - ffmpeg -f image2 -framerate 24 -i input_file_%06d.ext -c:v v210 output_file ffmpegstarts the command @@ -1250,6 +1265,7 @@ -c:v v210encodes an uncompressed 10-bit video stream output_filepath, name and extension of the output file + @@ -1258,7 +1274,6 @@ Create a video from an image and audio file. - ffmpeg -r 1 -loop 1 -i image_file -i audio_file -acodec copy -shortest -vf scale=1280:720 output_file This command will take an image file (e.g. image.jpg) and an audio file (e.g. audio.mp3) and combine them into a video file that contains the audio track with the image used as the video. It can be useful in a situation where you might want to upload an audio file to a platform like YouTube. You may want to adjust the scaling with -vf to suit your needs. @@ -1272,6 +1287,7 @@ -vf scale=1280:720filter the video to scale it to 1280x720 for YouTube. -vf is an alias for -filter:v output_filepath, name and extension of the video output file + @@ -1284,7 +1300,6 @@ Creates a visualization of the bits in an audio stream - ffplay -f lavfi "amovie=input_file, asplit=2[out1][a], [a]abitscope=colors=purple|yellow[out0]" This filter allows visual analysis of the information held in various bit depths of an audio stream. This can aid with identifying when a file that is nominally of a higher bit depth actually has been 'padded' with null information. The provided GIF shows a 16 bit WAV file (left) and then the results of converting that same WAV to 32 bit (right). Note that in the 32 bit version, there is still only information in the first 16 bits. @@ -1299,6 +1314,7 @@ Comparison of mono 16 bit and mono 16 bit padded to 32 bit. 
+ @@ -1307,7 +1323,6 @@ Plays a graphical output showing decibel levels of an input file - ffplay -f lavfi "amovie='input.mp3', astats=metadata=1:reset=1, adrawgraph=lavfi.astats.Overall.Peak_level:max=0:min=-30.0:size=700x256:bg=Black[out]" ffplaystarts the command @@ -1328,6 +1343,7 @@ Example of filter output + @@ -1336,7 +1352,6 @@ Shows all pixels outside of broadcast range - ffplay -f lavfi "movie='input.mp4', signalstats=out=brng:color=cyan[out]" ffplaystarts the command @@ -1353,6 +1368,7 @@ Example of filter output + @@ -1361,7 +1377,6 @@ Plays vectorscope of video - ffplay input_file -vf "split=2[m][v], [v]vectorscope=b=0.7:m=color3:g=green[v], [m][v]overlay=x=W-w:y=H-h" ffplaystarts the command @@ -1375,6 +1390,7 @@ [m][v]overlay=x=W-w:y=H-hdeclares where the vectorscope will overlay on top of the video image as it plays "quotation mark to end filtergraph + @@ -1383,7 +1399,6 @@ This will play two input videos side by side while also applying the temporal difference filter to them - ffmpeg -i input01 -i input02 -filter_complex "[0:v:0]tblend=all_mode=difference128[a];[1:v:0]tblend=all_mode=difference128[b];[a][b]hstack[out]" -map [out] -f nut -c:v rawvideo - | ffplay - ffmpegstarts the command @@ -1405,6 +1420,7 @@ Example of filter output + @@ -1417,7 +1433,6 @@ Pull specs from video file - ffprobe -i input_file -show_format -show_streams -show_data -print_format xml This command extracts technical metadata from a video file and displays it in xml. @@ -1429,6 +1444,7 @@ -print_formatSet the output printing format (in this example “xml”; other formats include “json” and “flat”) See also the FFmpeg documentation on ffprobe for a full list of flags, commands, and options. 
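When only a few fields are needed from ffprobe, the JSON output is easy to post-process with standard tools. A sketch run on a mock fragment of `-print_format json` output (illustrative values) so it works without ffprobe installed:

```shell
# Mock of a fragment ffprobe prints with -print_format json
# (illustrative values only; a real run would use something like:
# ffprobe -i input_file -show_entries stream=codec_name,width,height -print_format json)
cat > specs.json <<'EOF'
{ "streams": [ { "codec_name": "prores", "width": 720, "height": 486 } ] }
EOF

# Pull a single field out of the JSON with sed.
codec=$(sed -n 's/.*"codec_name": "\([^"]*\)".*/\1/p' specs.json)
echo "video codec: $codec"
```

For anything beyond a one-off field, a proper JSON tool is safer than sed, since ffprobe's pretty-printed output may split keys and values across lines.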
+ @@ -1437,7 +1453,6 @@ Strips metadata from video file - ffmpeg -i input_file -map_metadata -1 -c:v copy -c:a copy output_file ffmpegstarts the command @@ -1447,6 +1462,7 @@ -acodec copycopies audio track output_fileMakes copy of original file and names output file + @@ -1459,7 +1475,6 @@ Create Bash script to batch process with FFmpeg - Bash scripts are plain text files saved with a .sh extension. This entry explains how they work with the example of a bash script named “Rewrap-MXF.sh”, which rewraps .mxf files in a given directory to .mov files. “Rewrap-MXF.sh” contains the following text: for file in *.mxf; do ffmpeg -i "$file" -map 0 -c copy "${file%.mxf}.mov"; done @@ -1481,6 +1496,7 @@ e.g., if an input file is bestmovie002.avi, its output will be bestmovie002_suffix.avi. Variation: recursively process all MXF files in subdirectories using find instead of for: find input_directory -iname "*.mxf" -exec ffmpeg -i {} -map 0 -c copy {}.mov \; + @@ -1489,7 +1505,6 @@ Create PowerShell script to batch process with FFmpeg - As of Windows 10, it is possible to run Bash via Bash on Ubuntu on Windows, allowing you to use bash scripting. To enable Bash on Windows, see these instructions. On Windows, the primary native command line programme is PowerShell. PowerShell scripts are plain text files saved with a .ps1 extension. This entry explains how they work with the example of a PowerShell script named “rewrap-mp4.ps1”, which rewraps .mp4 files in a given directory to .mkv files. “rewrap-mp4.ps1” contains the following text: @@ -1515,6 +1530,7 @@ Note: the PowerShell script (.ps1 file) and all .mp4 files to be rewrapped must be contained within the same directory, and the script must be run from that directory. Execute the .ps1 file by typing .\rewrap-mp4.ps1 in PowerShell. Modify the script as needed to perform different transcodes, or to use with ffprobe. 
:)

Create MD5 checksums (video frames)
ffmpeg -i input_file -f framemd5 -an output_file
This will create an MD5 checksum per video frame.
output_file: path, name and extension of the output file
You may verify an MD5 checksum file created this way by using a Bash script.

Create MD5 checksums (audio samples)
ffmpeg -i input_file -af "asetnsamples=n=48000" -f framemd5 -vn output_file
This will create an MD5 checksum for each group of 48000 audio samples. The number of samples per group can be set arbitrarily, but it's good practice to match the sample rate of the media file (so you will get one checksum per second).
output_file: path, name and extension of the output file
You may verify an MD5 checksum file created this way by using a Bash script.

Creates a QCTools report
ffprobe -f lavfi -i "movie=input_file:s=v+a[in0][in1], [in0]signalstats=stat=tout+vrep+brng, cropdetect=reset=1:round=1, idet=half_life=1, split[a][b];[a]field=top[a1];[b]field=bottom, split[b1][b2];[a1][b1]psnr[c1];[c1][b2]ssim[out0];[in1]ebur128=metadata=1, astats=metadata=1:reset=1:length=0.4[out1]" -show_frames -show_versions -of xml=x=1:q=1 -noprivate | gzip > input_file.qctools.xml.gz
This will create an XML report for use in QCTools for a video file with one video track and one audio track. See also the QCTools documentation.
>: redirects the standard output (the data made by ffprobe about the video)
input_file.qctools.xml.gz: names the zipped data output file, which can be named anything but needs the extension qctools.xml.gz for compatibility issues

Creates a QCTools report
ffprobe -f lavfi -i "movie=input_file,signalstats=stat=tout+vrep+brng, cropdetect=reset=1:round=1, idet=half_life=1, split[a][b];[a]field=top[a1];[b]field=bottom,split[b1][b2];[a1][b1]psnr[c1];[c1][b2]ssim" -show_frames -show_versions -of xml=x=1:q=1 -noprivate | gzip > input_file.qctools.xml.gz
This will create an XML report for use in QCTools for a video file with one video track and NO audio track. See also the QCTools documentation.
>: redirects the standard output (the data made by ffprobe about the video)
input_file.qctools.xml.gz: names the zipped data output file, which can be named anything but needs the extension qctools.xml.gz for compatibility issues

Check FFV1 Version 3 fixity
ffmpeg -report -i input_file -f null -
This decodes your video and displays any CRC checksum mismatches. These errors will display in your terminal like this: [ffv1 @ 0x1b04660] CRC mismatch 350FBD8A! at 0.272000 seconds
Frame CRCs are enabled by default in FFV1 Version 3.
-f null: video is decoded with the null muxer. This allows video decoding without creating an output file.
-: FFmpeg syntax requires a specified output, and - is just a placeholder. No file is actually created.

Read/Extract EIA-608 (Line 21) closed captioning
ffprobe -f lavfi -i movie=input_file,readeia608 -show_entries frame=pkt_pts_time:frame_tags=lavfi.readeia608.0.line,lavfi.readeia608.0.cc,lavfi.readeia608.1.line,lavfi.readeia608.1.cc -of csv > input_file.csv
This command uses FFmpeg's readeia608 filter to extract the hexadecimal values hidden within EIA-608 (Line 21) Closed Captioning, outputting a csv file.
For more information about EIA-608, check out Adobe's Introduction to Closed Captions. If hex isn't your thing, closed captioning character and code sets can be found in the documentation for SCTools.
Side-by-side video with true EIA-608 captions on the left, zoomed in view of the captions on the right (with hex values represented). To achieve something similar with your own captioned video, try out the EIA608/VITC viewer in QCTools.

Makes a mandelbrot test pattern video
ffmpeg -f lavfi -i mandelbrot=size=1280x720:rate=25 -c:v libx264 -t 10 output_file
ffmpeg: starts the command
-t 10: specifies recording time of 10 seconds
output_file: path, name and extension of the output file. Try different file extensions such as mkv, mov, mp4, or avi.

Makes a SMPTE bars test pattern video
ffmpeg -f lavfi -i smptebars=size=720x576:rate=25 -c:v prores -t 10 output_file
ffmpeg: starts the command
-t 10: specifies recording time of 10 seconds
output_file: path, name and extension of the output file. Try different file extensions such as mov or avi.

Make a test pattern video
ffmpeg -f lavfi -i testsrc=size=720x576:rate=25 -c:v v210 -t 10 output_file
ffmpeg: starts the command
-t 10: specifies recording time of 10 seconds
output_file: path, name and extension of the output file. Try different file extensions such as mkv, mov, mp4, or avi.

Play HD SMPTE bars
Test an HD video projector by playing the SMPTE colour bars pattern.
ffplay -f lavfi -i smptehdbars=size=1920x1080
-f lavfi: tells ffplay to use the Libavfilter input virtual device
-i smptehdbars=size=1920x1080: asks for the smptehdbars filter pattern as input and sets the HD resolution. This generates a colour bars pattern, based on the SMPTE RP 219–2002.
Play VGA SMPTE bars
Test a VGA (SD) video projector by playing the SMPTE colour bars pattern.
ffplay -f lavfi -i smptebars=size=640x480
-f lavfi: tells ffplay to use the Libavfilter input virtual device
-i smptebars=size=640x480: asks for the smptebars filter pattern as input and sets the VGA (SD) resolution. This generates a colour bars pattern, based on the SMPTE Engineering Guideline EG 1–1990.

Sine wave
Generate a test audio file playing a sine wave.
ffmpeg -f lavfi -i "sine=frequency=1000:sample_rate=48000:duration=5" -c:a pcm_s16le output_file.wav
-c:a pcm_s16le: encodes the audio codec in pcm_s16le (the default encoding for wav files). pcm represents pulse-code modulation format (raw bytes), 16 means 16 bits per sample, and le means "little endian"
output_file.wav: path, name and extension of the output file

SMPTE bars + Sine wave audio
Generate a SMPTE bars test video + a 1kHz sine wave as audio test signal.
ffmpeg -f lavfi -i smptebars=size=720x576:rate=25 -f lavfi -i "sine=frequency=1000:sample_rate=48000" -c:a pcm_s16le -t 10 -c:v ffv1 output_file
-c:v ffv1: Encodes to FFV1. Alter this setting to set your desired codec.
output_file: path, name and extension of the output file

Makes a broken test file
Modifies an existing, functioning file and intentionally breaks it for testing purposes.
ffmpeg -i input_file -bsf noise=1 -c copy output_file
-c copy: use stream copy mode to re-mux instead of re-encode
output_file: path, name and extension of the output file

Plays video with OCR on top
Note: ffmpeg must be compiled with the tesseract library for this script to work (--with-tesseract if using the brew install ffmpeg method).
ffplay input_file -vf "ocr,drawtext=fontfile=/Library/Fonts/Andale Mono.ttf:text=%{metadata\\\:lavfi.ocr.text}:fontcolor=white"
fontcolor=white: specifies font color as white
": quotation mark to end filtergraph

Exports OCR data to screen
Note: FFmpeg must be compiled with the tesseract library for this script to work (--with-tesseract if using the brew install ffmpeg method)
ffprobe -show_entries frame_tags=lavfi.ocr.text -f lavfi -i "movie=input_file,ocr"
-f lavfi: tells ffprobe to use the Libavfilter input virtual device
-i "movie=input_file,ocr": declares 'movie' as input_file and passes in the 'ocr' command

Compare two video files for content similarity using perceptual hashing
ffmpeg -i input_one -i input_two -filter_complex signature=detectmode=full:nb_inputs=2 -f null -
ffmpeg: starts the command
nb_inputs=2: tells the filter to expect two input files
-f null -: Sets the output of FFmpeg to a null stream (since we are not creating a transcoded file, just viewing metadata).

Generate a perceptual hash for an input video file
ffmpeg -i input -vf signature=format=xml:filename="output.xml" -an -f null -
ffmpeg -i input: starts the command using your input file
-an: tells FFmpeg to ignore the audio stream of the input file
-f null -: Sets the FFmpeg output to a null stream (since we are only interested in the output generated by the filter).

Play an image sequence
Play an image sequence directly as moving images, without having to create a video first.
ffplay -framerate 5 input_file_%06d.ext
Notes: If -framerate is omitted, the playback speed depends on the images’ file sizes and on the computer’s processing power. It may be rather slow for large image files.
You can navigate durationally by clicking within the playback window.
Clicking towards the left-hand side of the playback window takes you towards the beginning of the playback sequence; clicking towards the right takes you towards the end of the sequence.

Split audio and video tracks
ffmpeg -i input_file -map 0:v:0 video_output_file -map 0:a:0 audio_output_file
This command splits the original input file into a video and audio stream. The -map command identifies which streams are mapped to which file. To ensure that you’re mapping the right streams to the right file, run ffprobe before writing the script to identify which streams are desired.
-map 0:a:0: grabs the first audio stream and maps it into:
audio_output_file: path, name and extension of the audio output file

Flip video image
Flip the video image horizontally and/or vertically
ffmpeg -i input_file -filter:v "hflip,vflip" -c:a copy output_file
ffmpeg: starts the command
-i input_file: path, name and extension of the input file
-filter:v "hflip,vflip": flips the image horizontally and vertically. By using only one of the parameters hflip or vflip for filtering, the image is flipped on that axis only. The quote marks are not mandatory.
-c:a copy: re-encodes using the same audio codec
For silent videos you can replace -c:a copy by -an.
output_file: path, name and extension of the output file

Create ISO files for DVD access
Create an ISO file that can be used to burn a DVD. Please note, you will have to install dvdauthor. To install dvdauthor using Homebrew run: brew install dvdauthor
ffmpeg -i input_file -aspect 4:3 -target ntsc-dvd output_file.mpg
This command will take any file and create an MPEG file that dvdauthor can use to create an ISO.
-target ntsc-dvd: specifies the region for your DVD. This could be also pal-dvd.
output_file.mpg: path and name of the output file.
The extension must be .mpg

Exports CSV for scene detection using YDIF
ffprobe -f lavfi -i movie=input_file,signalstats -show_entries frame=pkt_pts_time:frame_tags=lavfi.signalstats.YDIF -of csv
This ffprobe command prints a CSV correlating timestamps and their YDIF values, useful for determining cuts.
frame=pkt_pts_time:frame_tags=lavfi.signalstats.YDIF: specifies showing the timecode (pkt_pts_time) in the frame stream and the YDIF section of the frame_tags stream
-of csv: sets the output printing format to CSV. -of is an alias of -print_format.

Cover head switching noise
ffmpeg -i input_file -filter:v drawbox=w=iw:h=7:y=ih-h:t=max output_file
This command will draw a black box over a small area of the bottom of the frame, which can be used to cover up head switching noise.
output_file: path and name of the output file

Record and live-stream simultaneously
ffmpeg -re -i ${INPUTFILE} -map 0 -flags +global_header -vf scale="1280:-1,format=yuv420p" -pix_fmt yuv420p -level 3.1 -vsync passthrough -crf 26 -g 50 -bufsize 3500k -maxrate 1800k -c:v libx264 -c:a aac -b:a 128000 -r:a 44100 -ac 2 -t ${STREAMDURATION} -f tee "[movflags=+faststart]${TARGETFILE}|[f=flv]${STREAMTARGET}"
I use this script to stream to a RTMP target and record the stream locally as .mp4 with only one ffmpeg instance. As input, I use bmdcapture which is piped to ffmpeg. But it can also be used with a static video file as input.
-f tee: Use multiple outputs. Outputs defined below.
"[movflags=+faststart]target-file.mp4|[f=flv]rtmp://stream-url/stream-id": The outputs, separated by a pipe (|). The first is the local file, the second is the live stream. Options for each target are given in square brackets before the target.
View information about a specific decoder, encoder, demuxer, muxer, or filter
ffmpeg -h type=name
ffmpeg: starts the command

Change the above data-target field, the hover-over description, the button text, and the below div ID
*****Longer title*****
ffmpeg -i input_file *****code goes here***** output_file
This is all about info!
At its basis, an FFmpeg command is relatively simple. After you have installed FFmpeg (see instructions here), the program is invoked simply by typing ffmpeg at the command prompt.
ffmpeg
Subsequently, each instruction that you supply to FFmpeg is actually a pair: a flag, which designates the type of action you want to carry out; and then the specifics of that action. Flags are always prepended with a hyphen.
For example, in the instruction -i input_file.ext, the -i flag tells FFmpeg that you are supplying an input file, and input_file.ext states which file it is.
-i input_file.ext
-i
input_file.ext
Many FFmpeg commands use filters that manipulate the video or audio stream in some way: for example, hflip to horizontally flip a video, or amerge to merge two or more audio tracks into a single stream.
The use of a filter is signalled by the flag -vf (video filter) or -af (audio filter), followed by the name and options of the filter itself. For example, take the convert colourspace command:
-vf
-af
ffmpeg -i input_file -c:v libx264 -vf colormatrix=src:dst output_file
For more information, check out the FFmpeg wiki Filtering Guide.
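As a sketch of the syntax described above (the comma-separated filter chain and the straight quotation marks around it), a filtergraph can be assembled in a shell variable before being handed to -vf; the particular filters chosen here are just placeholders:

```shell
# Filters are joined with commas; each filter's options follow '=' and
# are separated by ':' (as in colormatrix=src:dst above).
filter_one="hflip"
filter_two="colormatrix=src:dst"
filtergraph="${filter_one},${filter_two}"

# The whole chain is wrapped in straight quotation marks when used:
command_line="ffmpeg -i input_file -vf \"${filtergraph}\" output_file"
echo "$command_line"
```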
ffmpeg -i input_file.ext -c copy -map 0 output_file.ext
This script will rewrap a video file. It will create a new video file where the inner content (the video, audio, and subtitle data) of the original file is unchanged, but these streams are rehoused within a different container format.
Note: rewrapping is also known as remuxing, short for re-multiplexing.
It may not be possible to rewrap a file's contents to a new container without re-encoding one or more of the streams within (that is, the video, audio, and subtitle tracks). Some containers can only contain streams of a certain encoding type: for example, the .mp4 container does not support uncompressed audio tracks. (In practice .mp4 goes hand-in-hand with an H.264-encoded video stream and an AAC-encoded audio stream, although other types of video and audio streams are possible). Another example is that the Matroska container does not allow data tracks; see the MKV to MP4 recipe.
In such cases, FFmpeg will throw an error. If you encounter errors of this kind, you may wish to consult the list of transcoding recipes.
ffmpeg -i input_file.mkv -c:v copy -c:a aac output_file.mp4
This will convert your Matroska (MKV) files to MP4 files.
.mp4
ffmpeg -i input_file -c:v prores -profile:v 1 -vf yadif -c:a pcm_s16le output_file.mov
This command transcodes an input file into a deinterlaced Apple ProRes 422 LT file with 16-bit linear PCM encoded audio. The file is deinterlaced using the yadif filter (Yet Another De-Interlacing Filter).
prores
prores_ks
-c:v prores_ks -profile:v 4
ffmpeg -i input_file -c:v libx264 -pix_fmt yuv420p -c:a copy output_file
This command takes an input file and transcodes it to H.264 with an .mp4 wrapper, keeping the audio the same codec as the original. The libx264 codec defaults to a “medium” preset for compression quality and a CRF of 23. CRF stands for constant rate factor and determines the quality and file size of the resulting H.264 video. A low CRF means high quality and large file size; a high CRF means the opposite.
libx264
For more information, see the FFmpeg and H.264 Encoding Guide on the FFmpeg wiki.
ffmpeg -i input_video_file.mxf -i input_audio_file.mxf -c:v libx264 -pix_fmt yuv420p -c:a aac output_file.mp4
This will transcode MXF wrapped video and audio files to an H.264 encoded MP4 file. Please note this only works for unencrypted, single reel DCPs.
ffmpeg -i input_file -map 0 -dn -c:v ffv1 -level 3 -g 1 -slicecrc 1 -slices 16 -c:a copy output_file.mkv -f framemd5 -an framemd5_output_file
This will losslessly transcode your video with the FFV1 Version 3 codec in a Matroska container. In order to verify losslessness, a framemd5 of the source video is also generated. For more information on FFV1 encoding, try the FFmpeg wiki.
ffmpeg -i concat:input_file1\|input_file2\|input_file3 -c:v libx264 -c:a copy output_file.mp4
This command allows you to create an H.264 file from a DVD source that is not copy-protected.
Before encoding, you’ll need to establish which of the .VOB files on the DVD or .iso contain the content that you wish to encode. Inside the VIDEO_TS directory, you will see a series of files with names like VTS_01_0.VOB, VTS_01_1.VOB, etc. Some of the .VOB files will contain menus, special features, etc., so locate the ones that contain target content by playing them back in VLC.
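The concat: input is simply the chosen .VOB paths joined with pipes (escaped on the command line, as above, so the shell does not interpret them); assuming three hypothetical VTS files, the string looks like this:

```shell
# Join the target VOBs with '|' for FFmpeg's concat protocol.
# Inside double quotes the pipes need no backslash-escaping.
concat_input="concat:VTS_01_1.VOB|VTS_01_2.VOB|VTS_01_3.VOB"
echo "$concat_input"
```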
ffmpeg -i input_file -c:v libx265 -pix_fmt yuv420p -c:a copy output_file
This command takes an input file and transcodes it to H.265/HEVC in an .mp4 wrapper, keeping the audio codec the same as in the original file.
Note: FFmpeg must be compiled with libx265, the library of the H.265 codec, for this script to work. (Add the flag --with-x265 if using the brew install ffmpeg method).
--with-x265
brew install ffmpeg
ffmpeg -i input_file.wav -write_id3v1 1 -id3v2_version 3 -dither_method rectangular -out_sample_rate 48k -qscale:a 1 output_file.mp3
This will convert your WAV files to MP3s.
A couple notes
ffmpeg -i input_file -i input_file_to_append -filter_complex "[0:a:0]asplit=2[a][b];[b]afifo[bb];[1:a:0][bb]concat=n=2:v=0:a=1[concatout]" -map "[a]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 output_file.mp3 -map "[concatout]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 output_file_appended.mp3
This script allows you to generate two derivative audio files from a master while appending audio from a separate file (for example a copyright or institutional notice) to one of them.
ffmpeg -i input_file.wav -c:a aac -b:a 128k -dither_method rectangular -ar 44100 output_file.mp4
This will convert your WAV file to AAC/MP4.
A note about dither methods. FFmpeg comes with a variety of dither algorithms, outlined in the official docs, though some may lead to unintended, not-subtle digital clipping on some systems.
Transform a video file with 4:3 aspect ratio into a video file with 16:9 aspect ratio by correct pillarboxing.
ffmpeg -i input_file -filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2" -c:a copy output_file
-c:a copy
-an
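The pad width ih*16/9 can be sanity-checked with ordinary shell arithmetic; assuming a hypothetical 720x576 (4:3 PAL) input, the padded frame comes out 1024 pixels wide:

```shell
# For a 4:3 input of height 576, pad=ih*16/9 gives the 16:9 width.
ih=576
padded_width=$(( ih * 16 / 9 ))   # 9216 / 9
echo "$padded_width"              # a 576-line input is padded to 1024 wide
```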
Transform a video file with 16:9 aspect ratio into a video file with 4:3 aspect ratio by correct letterboxing.
ffmpeg -i input_file -filter:v "pad=iw:iw*3/4:(ow-iw)/2:(oh-ih)/2" -c:a copy output_file
ffmpeg -i input_file -filter:v "hflip,vflip" -c:a copy output_file
Transform a SD video file with 4:3 aspect ratio into an HD video file with 16:9 aspect ratio by correct pillarboxing.
ffmpeg -i input_file -filter:v "colormatrix=bt601:bt709, scale=1440:1080:flags=lanczos, pad=1920:1080:240:0" -c:a copy output_file
ffmpeg -i input_file -c:v copy -aspect 4:3 output_file
4:3
16:9
-c:v copy
This command uses a filter to convert the video to a different colour space.
1. Out of step with the regular pattern, -color_trc doesn’t accept bt470bg; it is instead here referred to directly as gamma. In the Rec.601 standard, 525-line/NTSC and 625-line/PAL video have assumed gammas of 2.2 and 2.8 respectively. ↩
-color_trc
bt470bg
E.g. for converting 24fps to 25fps with audio pitch compensation for PAL access copies. (Thanks @kieranjol!)
ffmpeg -i input_file -filter_complex "[0:v]setpts=input_fps/output_fps*PTS[v]; [0:a]atempo=output_fps/input_fps[a]" -map "[v]" -map "[a]" output_file
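For the 24-to-25 fps case mentioned above, the two factors in the filtergraph can be checked with awk: setpts scales each timestamp by input_fps/output_fps, and atempo speeds the audio up by the inverse ratio (values rounded to four decimals here):

```shell
# setpts factor: 24/25 shortens the video; atempo factor: 25/24
# compensates the audio speed (and pitch) to match.
setpts_factor=$(awk 'BEGIN { printf "%.4f", 24/25 }')
atempo_factor=$(awk 'BEGIN { printf "%.4f", 25/24 }')
echo "$setpts_factor $atempo_factor"
```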
These examples use QuickTime inputs and outputs. The strategy will vary or may not be possible in other file formats. In the case of these examples it is the intention to make a lossless copy while clarifying an unknown characteristic of the stream.
ffprobe input_file -show_streams
The possible values for -color_primaries, -color_trc, and -field_order are given in the Codec Options section of the FFmpeg docs - scroll down to near the bottom of the section.
-color_primaries
-field_order
ffmpeg -i input_file -c:a copy -vn output_file
This command extracts the audio stream without loss from an audiovisual file.
ffmpeg -i input_file -filter_complex "[0:a:0][0:a:1]amerge[out]" -map 0:v -map "[out]" -c:v copy -shortest output_file
This command combines two audio tracks present in a video file into one stream. It can be useful in situations where a downstream process, like YouTube’s automatic captioning, expect one audio track. To ensure that you’re mapping the right audio tracks run ffprobe before writing the script to identify which tracks are desired. More than two audio streams can be combined by extending the pattern present in the -filter_complex option.
ffmpeg -i input_file -af pan="stereo|c0=c0|c1=-1*c1" output_file
This command inverts the audio phase of the second channel by rotating it 180°.
ffmpeg -i input_file -af loudnorm=print_format=json -f null -
This filter calculates and outputs loudness information in json about an input file (labeled input) as well as what the levels would be if loudnorm were applied in its one pass mode (labeled output). The values generated can be used as inputs for a 'second pass' of the loudnorm filter allowing more accurate loudness normalization than if it is used in a single pass.
These instructions use the loudnorm defaults, which align well with PBS recommendations for target loudness. More information can be found at the loudnorm documentation.
print_format=summary
ffmpeg -i input_file -af aemphasis=type=riaa output_file
This will apply RIAA equalization to an input file allowing correct listening of audio transferred 'flat' (without EQ) from records that used this EQ curve. For more information about RIAA equalization see the Wikipedia page on the subject.
ffmpeg -i input_file -af loudnorm=dual_mono=true -ar 48k output_file
This will normalize the loudness of an input using one pass, which is quicker but less accurate than using two passes. This command uses the loudnorm filter defaults for target loudness. These defaults align well with PBS recommendations, but loudnorm does allow targeting of specific loudness levels. More information can be found at the loudnorm documentation.
Information about PBS loudness standards can be found in the PBS Technical Operating Specifications document. Information about EBU loudness standards can be found in the EBU R 128 recommendation document.
ffmpeg -i input_file -af loudnorm=dual_mono=true:measured_I=input_i:measured_TP=input_tp:measured_LRA=input_lra:measured_thresh=input_thresh:offset=target_offset:linear=true -ar 48k output_file
This command allows using the levels calculated using a first pass of the loudnorm filter to more accurately normalize loudness. This command uses the loudnorm filter defaults for target loudness. These defaults align well with PBS recommendations, but loudnorm does allow targeting of specific loudness levels. More information can be found at the loudnorm documentation.
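A sketch of wiring the two passes together in shell: the JSON below is an invented sample standing in for real first-pass output (the key names input_i, input_tp, etc. are the ones the loudnorm filter prints, matching the measured_* placeholders above), and the get helper is ours, not an FFmpeg feature:

```shell
# First pass (for real use):
#   ffmpeg -i input_file -af loudnorm=print_format=json -f null -
# Sample of what the first pass prints; the values are invented.
json='{ "input_i" : "-27.61", "input_tp" : "-4.47", "input_lra" : "18.06", "input_thresh" : "-39.20", "target_offset" : "0.58" }'

# Pull a single measurement out of the JSON with sed.
get() { printf '%s\n' "$json" | sed -n "s/.*\"$1\" : \"\([^\"]*\)\".*/\1/p"; }

input_i=$(get input_i)
input_tp=$(get input_tp)
# Only two of the measured_* options are shown here for brevity.
echo "loudnorm=dual_mono=true:measured_I=${input_i}:measured_TP=${input_tp}:linear=true"
```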
ffmpeg -i input_file -c:v copy -c:a pcm_s16le -af "aresample=async=1000" output_file
ffmpeg -f concat -i mylist.txt -c copy output_file
This command takes two or more files of the same file type and joins them together to make a single file. All that the program needs is a text file with a list specifying the files that should be joined. However, it only works properly if the files to be combined have the exact same codec and technical specifications. Be careful, FFmpeg may appear to have successfully joined two video files with different codecs, but may only bring over the audio from the second file or have other weird behaviors. Don’t use this command for joining files with different codecs and technical specs and always preview your resulting video file!
For more information, see the FFmpeg wiki page on concatenating files.
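The list file the concat demuxer reads contains one file 'path' line per input; a sketch of generating mylist.txt for some hypothetical clips:

```shell
# Write a concat demuxer list file: one "file 'path'" line per input.
list=$(mktemp)
for clip in clip01.mkv clip02.mkv clip03.mkv; do
  printf "file '%s'\n" "$clip"
done > "$list"
cat "$list"
# Then join them as above: ffmpeg -f concat -i "$list" -c copy output_file
```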
ffmpeg -i input_file -c copy -map 0 -f segment -segment_time 60 -reset_timestamps 1 output_file-%03d.mkv
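The %03d in the output name is a printf-style counter, zero-padded to three digits, so the segments are numbered sequentially:

```shell
# The segment muxer substitutes a zero-padded counter into the name.
for n in 0 1 2; do
  printf 'output_file-%03d.mkv\n' "$n"
done
```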
ffmpeg -i input_file -ss 00:02:00 -to 00:55:00 -c copy -map 0 output_file
This command allows you to create an excerpt from a video file without re-encoding the image data.
Note: In order to keep the original timestamps, without trying to sanitise them, you may add the -copyts option.
-copyts
ffmpeg -i input_file -t 5 -c copy -map 0 output_file
This command captures a certain portion of a video file, starting from the beginning and continuing for the amount of time (in seconds) specified in the script. This can be used to create a preview file, or to remove unwanted content from the end of the file. To be more specific, use timecode, such as 00:00:05.
ffmpeg -i input_file -ss 5 -c copy -map 0 output_file
This command copies a video file starting from a specified time, removing the first few seconds from the output. This can be used to create an excerpt, or remove unwanted content from the beginning of a video file.
ffmpeg -sseof -5 -i input_file -c copy -map 0 output_file
This command copies a video file starting from a specified time before the end of the file, removing everything before from the output. This can be used to create an excerpt, or extract content from the end of a video file (e.g. for extracting the closing credits).
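The time options in the excerpting commands above accept either plain seconds or HH:MM:SS timecode; the equivalence can be sketched with a small shell function (the function name is ours, not an FFmpeg feature):

```shell
# Convert an HH:MM:SS timecode to the plain seconds FFmpeg also accepts.
timecode_to_seconds() {
  echo "$1" | awk -F: '{ print $1 * 3600 + $2 * 60 + $3 }'
}
timecode_to_seconds 00:02:00   # -ss 00:02:00 is the same as -ss 120
timecode_to_seconds 00:55:00
```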
ffmpeg -i input_file -c:v libx264 -filter:v "yadif, scale=1440:1080:flags=lanczos, pad=1920:1080:(ow-iw)/2:(oh-ih)/2, format=yuv420p" output_file
Note: the very same scaling filter also downscales a bigger image size into HD.
ffmpeg -i input_file -c:v libx264 -vf "yadif,format=yuv420p" output_file
This command takes an interlaced input file and outputs a deinterlaced H.264 MP4.
ffmpeg -i input_file -c:v libx264 -vf "fieldmatch,yadif,decimate" output_file
The inverse telecine procedure reverses the 3:2 pull down process, restoring 29.97fps interlaced video to the 24fps frame rate of the original film source.
ffmpeg -i input_file -c:v video_codec -filter:v setfield=tff output_file
-c copy
ffv1
v210
ffmpeg -i input_file -filter:v idet -f null -
null
-
E.g. for creating access copies with your institution’s name
ffmpeg -i input_file -vf drawtext="fontfile=font_path:fontsize=font_size:text=watermark_text:fontcolor=font_colour:alpha=0.4:x=(w-text_w)/2:y=(h-text_h)/2" output_file
-filter:v
ffmpeg -i input_video_file -i input_image_file -filter_complex overlay=main_w-overlay_w-5:5 output_file
main_w-overlay_w-5:5
ffmpeg -i input_file -vf drawtext="fontfile=font_path:fontsize=font_size:timecode=starting_timecode:fontcolor=font_colour:box=1:boxcolor=box_colour:rate=timecode_rate:x=(w-text_w)/2:y=h/1.2" output_file
Note: -vf is a shortcut for -filter:v.
ffmpeg -i input_file -ss 00:00:20 -vframes 1 thumb.png
This command will grab a thumbnail 20 seconds into the video.
ffmpeg -i input_file -vf fps=1/60 out%d.png
This will grab a thumbnail every minute and output sequential png files.
ffmpeg -f image2 -framerate 9 -pattern_type glob -i "input_image_*.jpg" -vf scale=250x250 output_file.gif
This will convert a series of image files into a GIF.
Create high quality GIF
ffmpeg -ss HH:MM:SS -i input_file -filter_complex "fps=10,scale=500:-1:flags=lanczos,palettegen" -t 3 palette.png
ffmpeg -ss HH:MM:SS -i input_file -i palette.png -filter_complex "[0:v]fps=10, scale=500:-1:flags=lanczos[v], [v][1:v]paletteuse" -t 3 -loop 6 output_file
Simpler GIF creation
ffmpeg -ss HH:MM:SS -i input_file -vf "fps=10,scale=500:-1" -t 3 -loop 6 output_file
This is a quick and easy method. Dithering is more apparent than the above method using the palette filters, but the file size will be smaller. Perfect for that “legacy” GIF look.
ffmpeg -f image2 -framerate 24 -i input_file_%06d.ext -c:v v210 output_file
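The %06d pattern means the importer expects frame numbers zero-padded to six digits; the filenames it matches look like this:

```shell
# Filenames matched by the input_file_%06d.ext pattern.
for frame in 0 1 2; do
  printf 'input_file_%06d.ext\n' "$frame"
done
```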
ffmpeg -r 1 -loop 1 -i image_file -i audio_file -acodec copy -shortest -vf scale=1280:720 output_file
This command will take an image file (e.g. image.jpg) and an audio file (e.g. audio.mp3) and combine them into a video file that contains the audio track with the image used as the video. It can be useful in a situation where you might want to upload an audio file to a platform like YouTube. You may want to adjust the scaling with -vf to suit your needs.
ffplay -f lavfi "amovie=input_file, asplit=2[out1][a], [a]abitscope=colors=purple|yellow[out0]"
This filter allows visual analysis of the information held in various bit depths of an audio stream. This can aid with identifying when a file that is nominally of a higher bit depth actually has been 'padded' with null information. The provided GIF shows a 16 bit WAV file (left) and then the results of converting that same WAV to 32 bit (right). Note that in the 32 bit version, there is still only information in the first 16 bits.
ffplay -f lavfi "amovie='input.mp3', astats=metadata=1:reset=1, adrawgraph=lavfi.astats.Overall.Peak_level:max=0:min=-30.0:size=700x256:bg=Black[out]"
ffplay -f lavfi "movie='input.mp4', signalstats=out=brng:color=cyan[out]"
ffplay input_file -vf "split=2[m][v], [v]vectorscope=b=0.7:m=color3:g=green[v], [m][v]overlay=x=W-w:y=H-h"
ffmpeg -i input01 -i input02 -filter_complex "[0:v:0]tblend=all_mode=difference128[a];[1:v:0]tblend=all_mode=difference128[b];[a][b]hstack[out]" -map [out] -f nut -c:v rawvideo - | ffplay -
ffprobe -i input_file -show_format -show_streams -show_data -print_format xml
This command extracts technical metadata from a video file and displays it in xml.
See also the FFmpeg documentation on ffprobe for a full list of flags, commands, and options.
ffmpeg -i input_file -map_metadata -1 -c:v copy -c:a copy output_file
Bash scripts are plain text files saved with a .sh extension. This entry explains how they work with the example of a bash script named “Rewrap-MXF.sh”, which rewraps .mxf files in a given directory to .mov files.
“Rewrap-MXF.sh” contains the following text:
for file in *.mxf; do ffmpeg -i "$file" -map 0 -c copy "${file%.mxf}.mov"; done
e.g., if an input file is bestmovie002.avi, its output will be bestmovie002_suffix.avi.
Variation: recursively process all MXF files in subdirectories using find instead of for:
find
for
find input_directory -iname "*.mxf" -exec ffmpeg -i {} -map 0 -c copy {}.mov \;
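The renaming inside the loop relies on Bash’s %-suffix removal; for a hypothetical filename it behaves like this, and the same idiom builds the "_suffix" variant mentioned above:

```shell
# ${file%.mxf} strips the trailing ".mxf", so the rewrapped copy
# keeps the original basename with a new extension.
file="bestmovie002.mxf"
output="${file%.mxf}.mov"
echo "$output"

# Appending a suffix before the extension works the same way:
avi="bestmovie002.avi"
echo "${avi%.avi}_suffix.avi"
```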
As of Windows 10, it is possible to run Bash via Bash on Ubuntu on Windows, allowing you to use bash scripting. To enable Bash on Windows, see these instructions.
On Windows, the primary native command line programme is PowerShell. PowerShell scripts are plain text files saved with a .ps1 extension. This entry explains how they work with the example of a PowerShell script named “rewrap-mp4.ps1”, which rewraps .mp4 files in a given directory to .mkv files.
“rewrap-mp4.ps1” contains the following text:
Note: the PowerShell script (.ps1 file) and all .mp4 files to be rewrapped must be contained within the same directory, and the script must be run from that directory.
Execute the .ps1 file by typing .\rewrap-mp4.ps1 in PowerShell.
Modify the script as needed to perform different transcodes, or to use with ffprobe. :)
ffmpeg -i input_file -f framemd5 -an output_file
This will create an MD5 checksum per video frame.
You may verify an MD5 checksum file created this way by using a Bash script.
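For example, one such verification script might look like this (a sketch under the assumption that the original checksums were saved to original.framemd5; the filenames are illustrative):

```shell
# Regenerate the per-frame MD5s from the file under test, then compare
# against the stored original; identical files mean the video frames
# are unchanged.
ffmpeg -i input_file -f framemd5 -an current.framemd5
if diff -q original.framemd5 current.framemd5 > /dev/null; then
  echo "Checksums match: video content is unchanged."
else
  echo "Checksums differ: video content has changed!"
fi
```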
ffmpeg -i input_file -af "asetnsamples=n=48000" -f framemd5 -vn output_file
This will create an MD5 checksum for each group of 48000 audio samples. The number of samples per group can be set arbitrarily, but it is good practice to match the sample rate of the media file, so that you get one checksum per second.
ffprobe -f lavfi -i "movie=input_file:s=v+a[in0][in1], [in0]signalstats=stat=tout+vrep+brng, cropdetect=reset=1:round=1, idet=half_life=1, split[a][b];[a]field=top[a1];[b]field=bottom, split[b1][b2];[a1][b1]psnr[c1];[c1][b2]ssim[out0];[in1]ebur128=metadata=1, astats=metadata=1:reset=1:length=0.4[out1]" -show_frames -show_versions -of xml=x=1:q=1 -noprivate | gzip > input_file.qctools.xml.gz
This will create an XML report for use in QCTools for a video file with one video track and one audio track. See also the QCTools documentation.
ffprobe -f lavfi -i "movie=input_file,signalstats=stat=tout+vrep+brng, cropdetect=reset=1:round=1, idet=half_life=1, split[a][b];[a]field=top[a1];[b]field=bottom,split[b1][b2];[a1][b1]psnr[c1];[c1][b2]ssim" -show_frames -show_versions -of xml=x=1:q=1 -noprivate | gzip > input_file.qctools.xml.gz
This will create an XML report for use in QCTools for a video file with one video track and NO audio track. See also the QCTools documentation.
ffmpeg -report -i input_file -f null -
This decodes your video and displays any CRC checksum mismatches. These errors will display in your terminal like this: [ffv1 @ 0x1b04660] CRC mismatch 350FBD8A!at 0.272000 seconds
Frame CRCs are enabled by default in FFV1 version 3.
ffprobe -f lavfi -i movie=input_file,readeia608 -show_entries frame=pkt_pts_time:frame_tags=lavfi.readeia608.0.line,lavfi.readeia608.0.cc,lavfi.readeia608.1.line,lavfi.readeia608.1.cc -of csv > input_file.csv
This command uses FFmpeg's readeia608 filter to extract the hexadecimal values hidden within EIA-608 (Line 21) Closed Captioning, outputting a CSV file. For more information about EIA-608, check out Adobe's Introduction to Closed Captions.
If hex isn't your thing, closed captioning character and code sets can be found in the documentation for SCTools.
Side-by-side video with true EIA-608 captions on the left, zoomed in view of the captions on the right (with hex values represented). To achieve something similar with your own captioned video, try out the EIA608/VITC viewer in QCTools.
ffmpeg -f lavfi -i mandelbrot=size=1280x720:rate=25 -c:v libx264 -t 10 output_file
ffmpeg -f lavfi -i smptebars=size=720x576:rate=25 -c:v prores -t 10 output_file
ffmpeg -f lavfi -i testsrc=size=720x576:rate=25 -c:v v210 -t 10 output_file
Test an HD video projector by playing the SMPTE colour bars pattern.
ffplay -f lavfi -i smptehdbars=size=1920x1080
Test a VGA (SD) video projector by playing the SMPTE colour bars pattern.
ffplay -f lavfi -i smptebars=size=640x480
Generate a test audio file playing a sine wave.
ffmpeg -f lavfi -i "sine=frequency=1000:sample_rate=48000:duration=5" -c:a pcm_s16le output_file.wav
pcm_s16le: the audio codec; uncompressed PCM audio with signed 16-bit little-endian (le) samples.
Generate a SMPTE bars test video plus a 1 kHz sine wave as an audio test signal.
ffmpeg -f lavfi -i smptebars=size=720x576:rate=25 -f lavfi -i "sine=frequency=1000:sample_rate=48000" -c:a pcm_s16le -t 10 -c:v ffv1 output_file
Modifies an existing, functioning file and intentionally breaks it for testing purposes.
ffmpeg -i input_file -bsf noise=1 -c copy output_file
Note: ffmpeg must be compiled with the tesseract library for this script to work (--with-tesseract if using the brew install ffmpeg method).
ffplay input_file -vf "ocr,drawtext=fontfile=/Library/Fonts/Andale Mono.ttf:text=%{metadata\\\:lavfi.ocr.text}:fontcolor=white"
Note: FFmpeg must be compiled with the tesseract library for this script to work (--with-tesseract if using the brew install ffmpeg method).
ffprobe -show_entries frame_tags=lavfi.ocr.text -f lavfi -i "movie=input_file,ocr"
ffmpeg -i input_one -i input_two -filter_complex signature=detectmode=full:nb_inputs=2 -f null -
ffmpeg -i input -vf signature=format=xml:filename="output.xml" -an -f null -
Play an image sequence directly as moving images, without having to create a video first.
ffplay -framerate 5 input_file_%06d.ext
Notes:
If -framerate is omitted, the playback speed depends on the images’ file sizes and on the computer’s processing power. It may be rather slow for large image files.
You can navigate durationally by clicking within the playback window. Clicking towards the left-hand side of the playback window takes you towards the beginning of the playback sequence; clicking towards the right takes you towards the end of the sequence.
ffmpeg -i input_file -map 0:v:0 video_output_file -map 0:a:0 audio_output_file
This command splits the original input file into a video and audio stream. The -map command identifies which streams are mapped to which file. To ensure that you’re mapping the right streams to the right file, run ffprobe before writing the script to identify which streams are desired.
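For instance, one way to inspect the streams before writing the mapping (a minimal sketch; the particular entries selected here are an assumption, not the only option):

```shell
# List each stream's index and type (video, audio, subtitle, data) so you
# can pick the right -map arguments.
ffprobe -i input_file -show_entries stream=index,codec_type -of csv
```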
Create an ISO file that can be used to burn a DVD. Please note, you will have to install dvdauthor. To install dvdauthor using Homebrew, run: brew install dvdauthor
ffmpeg -i input_file -aspect 4:3 -target ntsc-dvd output_file.mpg
This command will take any file and create an MPEG file (.mpg) that dvdauthor can use to create an ISO.
ffprobe -f lavfi -i movie=input_file,signalstats -show_entries frame=pkt_pts_time:frame_tags=lavfi.signalstats.YDIF -of csv
This ffprobe command prints a CSV correlating timestamps and their YDIF values, useful for determining cuts.
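As a hypothetical follow-up (the 20-unit threshold and the ydif.csv filename are assumptions), you could flag likely cuts by thresholding the YDIF column of the resulting CSV:

```shell
# Each CSV row looks like: frame,<pkt_pts_time>,<YDIF>. A large jump in
# YDIF between consecutive frames usually indicates a cut.
awk -F, '$3 > 20 { print "possible cut at " $2 "s (YDIF=" $3 ")" }' ydif.csv
```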
Note: pkt_pts_time lists each frame's timestamp, and -of (an alias of -print_format) sets the output format.
ffmpeg -i input_file -filter:v drawbox=w=iw:h=7:y=ih-h:t=max output_file
This command will draw a black box over a small area of the bottom of the frame, which can be used to cover up head switching noise.
ffmpeg -re -i ${INPUTFILE} -map 0 -flags +global_header -vf scale="1280:-1,format=yuv420p" -pix_fmt yuv420p -level 3.1 -vsync passthrough -crf 26 -g 50 -bufsize 3500k -maxrate 1800k -c:v libx264 -c:a aac -b:a 128000 -r:a 44100 -ac 2 -t ${STREAMDURATION} -f tee "[movflags=+faststart]${TARGETFILE}|[f=flv]${STREAMTARGET}"
I use this script to stream to an RTMP target and record the stream locally as .mp4 with only one FFmpeg instance.
As input, I use bmdcapture, which is piped to FFmpeg, but a static video file can also be used as input.
ffmpeg -h type=name
Display help on a specific element of FFmpeg: replace type with one of decoder, encoder, demuxer, muxer, or filter, and name with the element's name, e.g. ffmpeg -h encoder=libx264.