At its most basic, an FFmpeg command is relatively simple. After you have installed FFmpeg (see instructions here), the program is invoked simply by typing ffmpeg at the command prompt.
Subsequently, each instruction that you supply to FFmpeg is actually a pair: a flag, which designates the type of action you want to carry out; and then the specifics of that action. Flags are always prepended with a hyphen.
For example, in the instruction -i input_file.ext
, the -i
flag tells FFmpeg that you are supplying an input file, and input_file.ext
states which file it is.
Likewise, in the instruction -c:v prores
, the flag -c:v
tells FFmpeg that you want to encode the video stream, and prores
specifies which codec is to be used. (-c:v
is shorthand for -codec:v
/-codec:video
).
A very basic FFmpeg command looks like this:
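A minimal sketch of such a command (FFmpeg will pick default codecs based on the output extension; any flags, such as those described above, go between the input and the output):
ffmpeg -i input_file.ext output_file.ext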
Many FFmpeg commands use filters that manipulate the video or audio stream in some way: for example, hflip to horizontally flip a video, or amerge to merge two or more audio tracks into a single stream.
The use of a filter is signalled by the flag -vf
(video filter) or -af
(audio filter), followed by the name and options of the filter itself. For example, take the convert colourspace command:
ffmpeg -i input_file -c:v libx264 -vf colormatrix=src:dst output_file
Here, colormatrix is the filter used, with src and dst representing the source and destination colourspaces. This part following the -vf
is a filtergraph.
It is also possible to apply multiple filters to an input, which are sequenced together in the filtergraph. A chained set of filters is called a filter chain, and a filtergraph may include multiple filter chains. Filters in a filter chain are separated from each other by commas (,), and filter chains are separated from each other by semicolons (;). For example, take the inverse telecine command:
ffmpeg -i input_file -c:v libx264 -vf "fieldmatch,yadif,decimate" output_file
ffmpeg -i input_file -c:v libx264 -vf "fieldmatch,yadif,decimate" output_file
Here we have a filtergraph including one filter chain, which is made up of three video filters.
It is often prudent to enclose your filtergraph in quotation marks; this means that you can use spaces within the filtergraph. Using the inverse telecine example again, the following filter commands are all valid and equivalent:
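For instance, a sketch of such equivalent forms (the unquoted variant only works because it contains no spaces):
-vf "fieldmatch,yadif,decimate"
-vf "fieldmatch, yadif, decimate"
-vf fieldmatch,yadif,decimate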
-map 0:0 -map 0:2 means ‘take the first and third streams from the first input file’.
-map 0:1 -map 1:0 means ‘take the second stream from the first input file and the first stream from the second input file’.
To map all streams in the input file to the output file, use -map 0. However, note that not all container formats can include all stream types: for example, .mp4 cannot contain timecode.
When no mapping is specified in an ffmpeg command, the default for video files is to take just one video and one audio stream for the output: other stream types, such as timecode or subtitles, will not be copied to the output file by default. If multiple video or audio streams are present, the best quality one is automatically selected by FFmpeg.
For more information, check out the FFmpeg wiki Map page, and the official FFmpeg documentation on -map
.
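As a concrete sketch of selective mapping with stream copy (the stream indices here are illustrative):
ffmpeg -i input_file -map 0:0 -map 0:2 -c copy output_file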
ffmpeg -i input_file.ext -c copy -map 0 output_file.ext
This script will rewrap a video file. It will create a new video file where the inner content (the video, audio, and subtitle data) of the original file is unchanged, but these streams are rehoused within a different container format.
Note: rewrapping is also known as remuxing, short for re-multiplexing.
ffmpeg -i input_file.mkv -c:v copy -c:a aac output_file.mp4
This will convert your Matroska (MKV) files to MP4 files.
The input file takes the .mkv extension; the output file takes the .mp4 extension.
For silent videos, you can replace -c:a aac with -an, which means that there will be no audio track in the output file.
ffmpeg -i input_file -f rawvideo -c:v copy output_file.dv
This script will take a video that is encoded in the DV Codec but wrapped in a different container (such as MOV) and rewrap it into a raw DV file (with the .dv extension). Since DV files potentially contain a great deal of provenance metadata within the DV stream, it is necessary to rewrap files using this method to avoid unintentionally stripping this metadata.
ffmpeg -i input_file -c:v prores -profile:v 1 -vf yadif -c:a pcm_s16le output_file.mov
This command transcodes an input file into a deinterlaced Apple ProRes 422 LT file with 16-bit linear PCM encoded audio. The file is deinterlaced using the yadif filter (Yet Another De-Interlacing Filter).
-vf is an alias for -filter:v. The output file takes the .mov extension.
FFmpeg comes with more than one ProRes encoder:
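The available encoders are prores, prores_aw, and prores_ks. As a sketch, selecting one of them explicitly might look like this (assuming prores_ks accepts the same options used above):
ffmpeg -i input_file -c:v prores_ks -profile:v 1 -vf yadif -c:a pcm_s16le output_file.mov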
ffmpeg -i input_file -c:v libx264 -pix_fmt yuv420p -c:a aac output_file
This command takes an input file and transcodes it to H.264 with an .mp4 wrapper, re-encoding the audio to AAC. The libx264 codec defaults to a “medium” preset for compression quality and a CRF of 23. CRF stands for constant rate factor and determines the quality and file size of the resulting H.264 video. A low CRF means high quality and large file size; a high CRF means the opposite.
In order to use the same basic command to make a higher quality file, you can add some of these presets:
ffmpeg -i input_file -c:v libx264 -pix_fmt yuv420p -preset veryslow -crf 18 -c:a aac output_file
The presets, in order from slowest (best compression) to fastest, are: veryslow, slower, slow, medium, fast, faster, veryfast, superfast, ultrafast.
For 8-bit video (such as that produced with -pix_fmt yuv420p), the CRF scale ranges between 0-51, with 0 being lossless and 51 the worst possible quality.
If no CRF is specified, libx264 will use a default value of 23. 18 is often considered a “visually lossless” compression.
For more information, see the FFmpeg and H.264 Encoding Guide on the FFmpeg wiki.
ffmpeg -i input_video_file.mxf -i input_audio_file.mxf -c:v libx264 -pix_fmt yuv420p -c:a aac output_file.mp4
This will transcode MXF wrapped video and audio files to an H.264 encoded MP4 file. Please note this only works for unencrypted, single reel DCPs.
Both the video input file and the audio input file take the .mxf extension.
Variation: Copy PCM audio streams by using Matroska instead of the MP4 container
ffmpeg -i input_video_file.mxf -i input_audio_file.mxf -c:v libx264 -pix_fmt yuv420p -c:a copy output_file.mkv
ffmpeg -i input_file -map 0 -dn -c:v ffv1 -level 3 -g 1 -slicecrc 1 -slices 16 -c:a copy output_file.mkv -f framemd5 -an framemd5_output_file
This will losslessly transcode your video with the FFV1 Version 3 codec in a Matroska container. In order to verify losslessness, a framemd5 of the source video is also generated. For more information on FFV1 encoding, try the FFmpeg wiki.
Use the .mkv extension to save your file in a Matroska container. Optionally, choose a different extension if you want a different container, such as .mov or .avi.
ffmpeg -i concat:input_file_1\|input_file_2\|input_file_3 -c:v libx264 -c:a aac output_file.mp4
This command allows you to create an H.264 file from a DVD source that is not copy-protected.
Before encoding, you’ll need to establish which of the .VOB files on the DVD or .iso contain the content that you wish to encode. Inside the VIDEO_TS directory, you will see a series of files with names like VTS_01_0.VOB, VTS_01_1.VOB, etc. Some of the .VOB files will contain menus, special features, etc, so locate the ones that contain target content by playing them back in VLC.
-i concat:VTS_01_1.VOB\|VTS_01_2.VOB\|VTS_01_3.VOB
It’s also possible to adjust the quality of your output by setting the -crf and -preset values:
ffmpeg -i concat:input_file_1\|input_file_2\|input_file_3 -c:v libx264 -crf 18 -preset veryslow -c:a aac output_file.mp4
Bear in mind that by default, FFmpeg will only encode a single video stream and a single audio stream, picking the ‘best’ of the options available. To preserve all video and audio streams, add -map parameters:
ffmpeg -i concat:input_file_1\|input_file_2 -map 0:v -map 0:a -c:v libx264 -c:a aac output_file.mp4
ffmpeg -i input_file -c:v libx265 -pix_fmt yuv420p -c:a copy output_file
This command takes an input file and transcodes it to H.265/HEVC in an .mp4 wrapper, keeping the audio codec the same as in the original file.
Note: FFmpeg must be compiled with libx265, the library of the H.265 codec, for this script to work. (Add the flag --with-x265 if using the brew install ffmpeg method.)
The libx265 encoding library defaults to a ‘medium’ preset for compression quality and a CRF of 28. CRF stands for ‘constant rate factor’ and determines the quality and file size of the resulting H.265 video. The CRF scale ranges from 0 (best quality [lossless]; largest file size) to 51 (worst quality; smallest file size).
A CRF of 28 for H.265 can be considered a medium setting, corresponding to a CRF of 23 in encoding H.264, but should result in about half the file size.
To create a higher quality file, you can add these presets:
ffmpeg -i input_file -c:v libx265 -pix_fmt yuv420p -preset veryslow -crf 18 -c:a copy output_file
ffmpeg -i input_file -acodec libvorbis -b:v 690k output_file
This command takes an input file and transcodes it to Ogg/Theora in an .ogv wrapper with 690k video bitrate.
Note: FFmpeg must be installed with support for Ogg Theora. If you are using Homebrew, you can check with brew info ffmpeg and then update it with brew upgrade ffmpeg --with-theora --with-libvorbis if necessary.
(The output file should take the .ogv filename suffix.)
This recipe is based on Paul Rouget's recipes.
ffmpeg -i input_file.wav -write_id3v1 1 -id3v2_version 3 -dither_method rectangular -out_sample_rate 48k -qscale:a 1 output_file.mp3
This will convert your WAV files to MP3s.
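As a sketch, a constant-bitrate variant would substitute -b:a 320k (discussed below) for the variable-bitrate setting -qscale:a 1:
ffmpeg -i input_file.wav -write_id3v1 1 -id3v2_version 3 -dither_method rectangular -out_sample_rate 48k -b:a 320k output_file.mp3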
You may want to use -b:a 320k instead, to set the maximum bitrate allowed by the MP3 format. For more detailed discussion on variable vs constant bitrates see here.
A couple of notes:
ffmpeg -i input_file -i input_file_to_append -filter_complex "[0:a:0]asplit=2[a][b];[b]afifo[bb];[1:a:0][bb]concat=n=2:v=0:a=1[concatout]" -map "[a]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 output_file.mp3 -map "[concatout]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 output_file_appended.mp3
This script allows you to generate two derivative audio files from a master while appending audio from a separate file (for example a copyright or institutional notice) to one of them.
asplit allows audio streams to be split up for separate manipulation. This command splits the audio from the first input (the master file) into two streams, "a" and "b".
concat is used to join files. n=2 tells the filter there are two inputs. v=0:a=1 tells the filter there are 0 video outputs and 1 audio output. This command appends the audio from the second input to the beginning of stream "bb" and names the output "concatout".
ffmpeg -i input_file.wav -c:a aac -b:a 128k -dither_method rectangular -ar 44100 output_file.mp4
This will convert your WAV file to AAC/MP4.
A note about dither methods. FFmpeg comes with a variety of dither algorithms, outlined in the official docs, though some may lead to unintended, not-subtle digital clipping on some systems.
Transform a video file with 4:3 aspect ratio into a video file with 16:9 aspect ratio by correct pillarboxing.
-ffmpeg -i input_file -filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2" -c:a copy output_file
ffmpeg -i input_file -filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2" -c:a copy output_file
For silent videos, you can replace -c:a copy with -an.
Transform a video file with 16:9 aspect ratio into a video file with 4:3 aspect ratio by correct letterboxing.
-ffmpeg -i input_file -filter:v "pad=iw:iw*3/4:(ow-iw)/2:(oh-ih)/2" -c:a copy output_file
ffmpeg -i input_file -filter:v "pad=iw:iw*3/4:(ow-iw)/2:(oh-ih)/2" -c:a copy output_file
For silent videos, you can replace -c:a copy with -an.
ffmpeg -i input_file -filter:v "hflip,vflip" -c:a copy output_file
For silent videos, you can replace -c:a copy with -an.
Transform an SD video file with 4:3 aspect ratio into an HD video file with 16:9 aspect ratio by correct pillarboxing.
-ffmpeg -i input_file -filter:v "colormatrix=bt601:bt709, scale=1440:1080:flags=lanczos, pad=1920:1080:240:0" -c:a copy output_file
ffmpeg -i input_file -filter:v "colormatrix=bt601:bt709, scale=1440:1080:flags=lanczos, pad=1920:1080:240:0" -c:a copy output_file
For silent videos, you can replace -c:a copy with -an.
ffmpeg -i input_file -c:v copy -aspect 4:3 output_file
Here the aspect ratio is set to 4:3. Experiment with other aspect ratios such as 16:9. If used together with -c:v copy, it will affect the aspect ratio stored at container level, but not the aspect ratio stored in encoded frames, if it exists.
This command uses a filter to convert the video to a different colour space.
ffmpeg -i input_file -c:v libx264 -vf colormatrix=src:dst output_file
The accepted values for src and dst include bt601 (Rec.601), smpte170m (Rec.601, 525-line/NTSC version), bt470bg (Rec.601, 625-line/PAL version), bt709 (Rec.709), and bt2020 (Rec.2020).
For example, to convert from Rec.601 to Rec.709, you would use -vf colormatrix=bt601:bt709.
Note: Converting between colourspaces with FFmpeg can be done via either the colormatrix or colorspace filters, with colorspace allowing finer control (individual setting of colourspace, transfer characteristics, primaries, range, pixel format, etc). See this entry on the FFmpeg wiki, and the FFmpeg documentation for colormatrix and colorspace.
ffmpeg -i input_file -c:v libx264 -vf colormatrix=src:dst -color_primaries val -color_trc val -colorspace val output_file
-color_primaries accepts (among others) smpte170m (Rec.601, 525-line/NTSC version), bt470bg (Rec.601, 625-line/PAL version), bt709 (Rec.709), and bt2020 (Rec.2020).
-color_trc accepts (among others) smpte170m (Rec.601, 525-line/NTSC version), gamma28 (Rec.601, 625-line/PAL version)1, bt709 (Rec.709), bt2020_10 (Rec.2020 10-bit), and bt2020_12 (Rec.2020 12-bit).
-colorspace accepts (among others) smpte170m (Rec.601, 525-line/NTSC version), bt470bg (Rec.601, 625-line/PAL version), bt709 (Rec.709), bt2020_cl (Rec.2020 constant luminance), and bt2020_ncl (Rec.2020 non-constant luminance).
To Rec.601 (525-line/NTSC):
ffmpeg -i input_file -c:v libx264 -vf colormatrix=bt709:smpte170m -color_primaries smpte170m -color_trc smpte170m -colorspace smpte170m output_file
To Rec.601 (625-line/PAL):
ffmpeg -i input_file -c:v libx264 -vf colormatrix=bt709:bt470bg -color_primaries bt470bg -color_trc gamma28 -colorspace bt470bg output_file
To Rec.709:
ffmpeg -i input_file -c:v libx264 -vf colormatrix=bt601:bt709 -color_primaries bt709 -color_trc bt709 -colorspace bt709 output_file
MediaInfo output examples:
⚠ Using this command it is possible to add Rec.709 tags to a file that is actually Rec.601 (etc), so apply with caution!
These commands are relevant for H.264 and H.265 videos, encoded with libx264
and libx265
respectively.
Note: If you wish to embed colourspace metadata without changing to another colourspace, omit -vf colormatrix=src:dst
. However, since it is libx264
/libx265
that writes the metadata, it’s not possible to add these tags without reencoding the video stream.
For all possible values for -color_primaries
, -color_trc
, and -colorspace
, see the FFmpeg documentation on codec options.
1. Out of step with the regular pattern, -color_trc doesn’t accept bt470bg; the Rec.601 625-line/PAL transfer curve is instead referred to directly as gamma28.
E.g. for converting 24fps to 25fps with audio pitch compensation for PAL access copies. (Thanks @kieranjol!)
-ffmpeg -i input_file -filter_complex "[0:v]setpts=input_fps/output_fps*PTS[v]; [0:a]atempo=output_fps/input_fps[a]" -map "[v]" -map "[a]" output_file
ffmpeg -i input_file -filter_complex "[0:v]setpts=input_fps/output_fps*PTS[v]; [0:a]atempo=output_fps/input_fps[a]" -map "[v]" -map "[a]" output_file
The setpts video filter modifies the PTS (presentation time stamp) of the video stream, and the atempo audio filter modifies the speed of the audio stream while keeping the same sound pitch. Note that the parameter order for the image and for the sound are inverted:
For setpts, the numerator input_fps sets the input speed and the denominator output_fps sets the output speed; both values are given in frames per second.
For atempo, the numerator output_fps sets the output speed and the denominator input_fps sets the input speed; both values are given in frames per second.
These examples use QuickTime inputs and outputs. The strategy will vary or may not be possible in other file formats. In the case of these examples it is the intention to make a lossless copy while clarifying an unknown characteristic of the stream.
ffprobe input_file -show_streams
Values that are set to 'unknown' and 'undetermined' may be unspecified within the stream. An unknown aspect ratio would be expressed as '0:1'. Streams with many unknown properties may have interoperability issues or not play as intended. In many cases, an unknown or undetermined value may be accurate because the information about the source is unclear, but often the value is intended to be known. In many cases the stream will be played with an assumed value if undetermined (for instance a display_aspect_ratio of '0:1' may be played as 'WIDTH:HEIGHT'), but this may or may not be what is intended. Use carefully.
If the display_aspect_ratio is set to '0:1' it may be clarified with the -aspect option and stream copy.
ffmpeg -i input_file -c copy -map 0 -aspect DAR_NUM:DAR_DEN output_file
Other properties may be clarified in a similar way. Replace -aspect and its value with other properties such as shown in the options below. Note that setting color values in QuickTime requires that -movflags write_colr is set.
The possible values for -color_primaries
, -color_trc
, and -field_order
are given in the Codec Options section of the FFmpeg docs - scroll down to near the bottom of the section.
ffmpeg -i input_file -vf "crop=width:height" output_file
ffmpeg -i input_file -vf "crop=width:height" output_file
This command crops the input video to the dimensions defined by the given width and height.
It's also possible to specify the crop position by adding the x and y coordinates representing the top left of your cropped area to your crop filter, as such:
-ffmpeg -i input_file -vf "crop=width:height[:x_position:y_position]" output_file
ffmpeg -i input_file -vf "crop=width:height[:x_position:y_position]" output_file
The original frame, a screenshot of the SMPTE colourbars:
Result of the command ffmpeg -i smpte_coloursbars.mov -vf "crop=500:500" output_file:
Result of the command ffmpeg -i smpte_coloursbars.mov -vf "crop=500:500:0:0" output_file, appending :0:0 to crop from the top left corner:
Result of the command ffmpeg -i smpte_coloursbars.mov -vf "crop=500:300:500:30" output_file:
ffmpeg -i input_file -c:a copy -vn output_file
This command extracts the audio stream without loss from an audiovisual file.
ffmpeg -i input_file -filter_complex "[0:a:0][0:a:1]amerge[out]" -map 0:v -map "[out]" -c:v copy -shortest output_file
ffmpeg -i input_file -filter_complex "[0:a:0][0:a:1]amerge[out]" -map 0:v -map "[out]" -c:v copy -shortest output_file
This command combines two audio tracks present in a video file into one stream. It can be useful in situations where a downstream process, like YouTube’s automatic captioning, expects one audio track. To ensure that you’re mapping the right audio tracks, run ffprobe before writing the script to identify which tracks are desired. More than two audio streams can be combined by extending the pattern present in the -filter_complex option, as in the sketch below.
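For instance, a sketch merging three audio streams (the stream indices are illustrative):
ffmpeg -i input_file -filter_complex "[0:a:0][0:a:1][0:a:2]amerge=inputs=3[out]" -map 0:v -map "[out]" -c:v copy -shortest output_file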
ffmpeg -i input_file -af pan="stereo|c0=c0|c1=-1*c1" output_file
This command inverts the audio phase of the second channel by rotating it 180°.
ffmpeg -i input_file -af loudnorm=print_format=json -f null -
This filter calculates and outputs loudness information in JSON about an input file (labeled input), as well as what the levels would be if loudnorm were applied in its one-pass mode (labeled output). The values generated can be used as inputs for a 'second pass' of the loudnorm filter, allowing more accurate loudness normalization than if it is used in a single pass.
These instructions use the loudnorm defaults, which align well with PBS recommendations for target loudness. More information can be found at the loudnorm documentation.
Information about PBS loudness standards can be found in the PBS Technical Operating Specifications document. Information about EBU loudness standards can be found in the EBU R 128 recommendation document.
Variation: print_format=summary produces a more human-readable summary instead of JSON, as sketched below.
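A sketch of that variation:
ffmpeg -i input_file -af loudnorm=print_format=summary -f null -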
ffmpeg -i input_file -af aemphasis=type=riaa output_file
This will apply RIAA equalization to an input file allowing correct listening of audio transferred 'flat' (without EQ) from records that used this EQ curve. For more information about RIAA equalization see the Wikipedia page on the subject.
ffmpeg -i input_file -af loudnorm=dual_mono=true -ar 48k output_file
This will normalize the loudness of an input using one pass, which is quicker but less accurate than using two passes. This command uses the loudnorm filter defaults for target loudness. These defaults align well with PBS recommendations, but loudnorm does allow targeting of specific loudness levels. More information can be found at the loudnorm documentation.
Information about PBS loudness standards can be found in the PBS Technical Operating Specifications document. Information about EBU loudness standards can be found in the EBU R 128 recommendation document.
ffmpeg -i input_file -af loudnorm=dual_mono=true:measured_I=input_i:measured_TP=input_tp:measured_LRA=input_lra:measured_thresh=input_thresh:offset=target_offset:linear=true -ar 48k output_file
This command allows using the levels calculated using a first pass of the loudnorm filter to more accurately normalize loudness. This command uses the loudnorm filter defaults for target loudness. These defaults align well with PBS recommendations, but loudnorm does allow targeting of specific loudness levels. More information can be found at the loudnorm documentation.
Information about PBS loudness standards can be found in the PBS Technical Operating Specifications document. Information about EBU loudness standards can be found in the EBU R 128 recommendation document.
ffmpeg -i input_file -c:v copy -c:a pcm_s16le -af "aresample=async=1000" output_file
ffmpeg -f concat -i mylist.txt -c copy output_file
This command takes two or more files of the same file type and joins them together to make a single file. All that the program needs is a text file with a list specifying the files that should be joined. However, it only works properly if the files to be combined have the exact same codec and technical specifications. Be careful: FFmpeg may appear to have successfully joined two video files with different codecs, but may only bring over the audio from the second file or have other weird behaviors. Don’t use this command for joining files with different codecs and technical specs, and always preview your resulting video file!
file './first_file.ext'
file './second_file.ext'
. . .
file './last_file.ext'
In the above, file is simply the word "file". Straight apostrophes ('like this') rather than curved quotation marks (‘like this’) must be used to enclose the file paths.
Note: If specifying absolute file paths in the .txt file, add -safe 0 before the input file.
e.g.: ffmpeg -f concat -safe 0 -i mylist.txt -c copy output_file
For more information, see the FFmpeg wiki page on concatenating files.
ffmpeg -i input_1.avi -i input_2.mp4 -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[video_out][audio_out]" -map "[video_out]" -map "[audio_out]" output_file
This command takes two or more files of different file types and joins them together to make a single file.
The input files may differ in many respects - container, codec, chroma subsampling scheme, framerate, etc. However, the above command only works properly if the files to be combined have the same dimensions (e.g., 720x576). Also note that if the input files have different framerates, then the output file will be of variable framerate.
Some aspects of the input files will be normalised: for example, if an input file contains a video track and an audio track that do not have exactly the same duration, the shorter one will be padded. In the case of a shorter video track, the last frame will be repeated in order to cover the missing video; in the case of a shorter audio track, the audio stream will be padded with silence.
In 0:v:0, the first zero refers to the first input file, v means video stream, and the second zero indicates that it is the first video stream in the file that should be selected. Likewise, 0:a:0 means the first audio stream in the first input file.
As elsewhere in FFmpeg, 0 means the first input/stream/etc, 1 means the second input/stream/etc, and 4 would mean the fifth input/stream/etc.
The selected streams are then passed to the concat filter.
If no characteristics of the output files are specified, ffmpeg will use the default encodings associated with the given output file type. To specify the characteristics of the output stream(s), add flags after each -map "[out]" part of the command.
For example, to ensure that the video stream of the output file is visually lossless H.264 with a 4:2:0 chroma subsampling scheme, the command above could be amended to include the following:
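A sketch of that amendment, borrowing the ‘visually lossless’ H.264 settings shown earlier (CRF 18, preset veryslow, 4:2:0 chroma subsampling):
-map "[video_out]" -c:v libx264 -pix_fmt yuv420p -preset veryslow -crf 18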
-vf scale=1920:1080:flags=lanczos
(The Lanczos scaling algorithm is recommended, as it is slower but better than the default bilinear algorithm).
The rescaling should be applied just before the point where the streams to be used in the output file are listed. Select the stream you want to rescale, apply the filter, and assign that to a variable name (rescaled_video
in the below example). Then you use this variable name in the list of streams to be concatenated.
ffmpeg -i input_1.avi -i input_2.mp4 -filter_complex "[0:v:0] scale=1920:1080:flags=lanczos [rescaled_video], [rescaled_video] [0:a:0] [1:v:0] [1:a:0] concat=n=2:v=1:a=1 [video_out] [audio_out]" -map "[video_out]" -map "[audio_out]" output_file
ffmpeg -i input_1.avi -i input_2.mp4 -filter_complex "[0:v:0] scale=1920:1080:flags=lanczos [rescaled_video], [rescaled_video] [0:a:0] [1:v:0] [1:a:0] concat=n=2:v=1:a=1 [video_out] [audio_out]" -map "[video_out]" -map "[audio_out]" output_file
However, this will only have the desired visual output if the inputs have the same aspect ratio. If you wish to concatenate an SD and an HD file, you will also wish to pillarbox the SD file while upscaling. (See the Convert 4:3 to pillarboxed HD command). The full command would look like this:
ffmpeg -i input_1.avi -i input_2.mp4 -filter_complex "[0:v:0] scale=1440:1080:flags=lanczos, pad=1920:1080:(ow-iw)/2:(oh-ih)/2 [to_hd_video], [to_hd_video] [0:a:0] [1:v:0] [1:a:0] concat=n=2:v=1:a=1 [video_out] [audio_out]" -map "[video_out]" -map "[audio_out]" output_file
Here, the first input is an SD file which needs to be upscaled to match the second input, which is 1920x1080. The scale filter enlarges the SD input to the height of the HD frame, keeping the 4:3 aspect ratio; then, the video is pillarboxed within a 1920x1080 frame.
If the input files have different framerates, then the output file may be of variable framerate. To explicitly obtain an output file of constant framerate, you may wish to convert an input (or multiple inputs) to a different framerate prior to concatenation.
You can speed up or slow down a file using the fps
and atempo
filters (see also the Modify speed command).
Here's an example of the full command, in which input_1 is 30fps, input_2 is 25fps, and 25fps is the desired output speed.
ffmpeg -i input_1.avi -i input_2.mp4 -filter_complex "[0:v:0] fps=fps=25 [video_to_25fps]; [0:a:0] atempo=(25/30) [audio_to_25fps]; [video_to_25fps] [audio_to_25fps] [1:v:0] [1:a:0] concat=n=2:v=1:a=1 [video_out] [audio_out]" -map "[video_out]" -map "[audio_out]" output_file
Note that the fps
filter will drop or repeat frames as necessary in order to achieve the desired frame rate - see the FFmpeg fps docs for more details.
For more information, see the FFmpeg wiki page on concatenating files of different types.
ffmpeg -i input_file -c copy -map 0 -f segment -segment_time 60 -reset_timestamps 1 output_file-%03d.mkv
Path, name and extension of the output file.
In order to have an incrementing number in each segment filename, FFmpeg supports printf-style syntax for a counter.
ffmpeg -i input_file -ss 00:02:00 -to 00:55:00 -c copy -map 0 output_file
This command allows you to create an excerpt from a video file without re-encoding the image data.
Note: watch out when using -ss with -c copy if the source is encoded with an interframe codec (e.g., H.264). Since FFmpeg must split on i-frames, it will seek to the nearest i-frame to begin the stream copy.
Variation: trim video by setting duration, by using -t instead of -to:
ffmpeg -i input_file -ss 00:05:00 -t 10 -c copy output_file
ffmpeg -i input_file -t 5 -c copy -map 0 output_file
This command captures a certain portion of a video file, starting from the beginning and continuing for the amount of time (in seconds) specified in the script. This can be used to create a preview file, or to remove unwanted content from the end of the file. To be more specific, use timecode, such as 00:00:05.
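For example, a sketch using timecode rather than plain seconds:
ffmpeg -i input_file -t 00:00:05 -c copy -map 0 output_file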
ffmpeg -i input_file -ss 5 -c copy -map 0 output_file
This command copies a video file starting from a specified time, removing the first few seconds from the output. This can be used to create an excerpt, or remove unwanted content from the beginning of a video file.
ffmpeg -sseof -5 -i input_file -c copy -map 0 output_file
This command copies a video file starting from a specified time before the end of the file, removing everything before from the output. This can be used to create an excerpt, or extract content from the end of a video file (e.g. for extracting the closing credits).
ffmpeg -i input_file -c:v libx264 -filter:v "yadif, scale=1440:1080:flags=lanczos, pad=1920:1080:(ow-iw)/2:(oh-ih)/2, format=yuv420p" output_file
ffmpeg -i input_file -c:v libx264 -filter:v "yadif, scale=1440:1080:flags=lanczos, pad=1920:1080:(ow-iw)/2:(oh-ih)/2, format=yuv420p" output_file
Using yadif=1 may produce visually better results.
Note: the very same scaling filter also downscales a bigger image size into HD.
ffmpeg -i input_file -c:v libx264 -vf "yadif,format=yuv420p" output_file
ffmpeg -i input_file -c:v libx264 -vf "yadif,format=yuv420p" output_file
This command takes an interlaced input file and outputs a deinterlaced H.264 MP4.
(-vf is an alias of -filter:v.) Using yadif=1 may produce visually better results.
If the pixel format is left unspecified, libx264 will use a chroma subsampling scheme that is the closest match to that of the input. This can result in Y′CBCR 4:2:0, 4:2:2, or 4:4:4 chroma subsampling. QuickTime and most other non-FFmpeg based players can’t decode H.264 files that are not 4:2:0, therefore it’s advisable to specify 4:2:0 chroma subsampling.
"yadif,format=yuv420p" is an FFmpeg filtergraph. Here the filtergraph is made up of one filter chain, which is itself made up of the two filters (separated by the comma).
The enclosing quote marks are necessary when you use spaces within the filtergraph, e.g. -vf "yadif, format=yuv420p"
, and are included above as an example of good practice.
Note: FFmpeg includes several deinterlacers apart from yadif: bwdif, w3fdif, kerndeint, and nnedi.
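As a sketch, swapping bwdif in for yadif in the command above:
ffmpeg -i input_file -c:v libx264 -vf "bwdif,format=yuv420p" output_file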
For more H.264 encoding options, see the latter section of the encode H.264 command.
ffmpeg -i input_file -c:v libx264 -vf "fieldmatch,yadif,decimate" output_file
ffmpeg -i input_file -c:v libx264 -vf "fieldmatch,yadif,decimate" output_file
The inverse telecine procedure reverses the 3:2 pull down process, restoring 29.97fps interlaced video to the 24fps frame rate of the original film source.
"fieldmatch,yadif,decimate"
is an FFmpeg filtergraph. Here the filtergraph is made up of one filter chain, which is itself made up of the three filters (separated by commas).
The enclosing quote marks are necessary when you use spaces within the filtergraph, e.g. -vf "fieldmatch, yadif, decimate"
, and are included above as an example of good practice.
ffmpeg -i input_file -c:v video_codec -filter:v setfield=tff output_file
Use setfield=bff for bottom field first.
Note that the field order cannot be set with -c copy. The video must be re-encoded with whatever video codec is chosen, e.g. ffv1, v210 or prores.
ffmpeg -i input_file -filter:v idet -f null -
The video is decoded with the null muxer, which allows video decoding without creating an output file. FFmpeg syntax requires a specified output, and - is just a placeholder; no file is actually created.
E.g. for creating access copies watermarked with your institution's name:
-ffmpeg -i input_file -vf drawtext="fontfile=font_path:fontsize=font_size:text=watermark_text:fontcolor=font_colour:alpha=0.4:x=(w-text_w)/2:y=(h-text_h)/2" output_file
ffmpeg -i input_file -vf drawtext="fontfile=font_path:fontsize=font_size:text=watermark_text:fontcolor=font_colour:alpha=0.4:x=(w-text_w)/2:y=(h-text_h)/2" output_file
Set the font path, e.g. fontfile=/Library/Fonts/AppleGothic.ttf.
For the font size, 35 is a good starting point for SD. Ideally this value is proportional to video size; for example, use ffprobe to acquire video height and divide by 14.
Set the watermark text, e.g. text='FFMPROVISR EXAMPLE TEXT'.
Set the font colour with a word, e.g. fontcolor=white, or a hexadecimal value such as fontcolor=0xFFFFFF.
Note: -vf is a shortcut for -filter:v.
ffmpeg -i input_video_file -i input_image_file -filter_complex overlay=main_w-overlay_w-5:5 output_file
main_w-overlay_w-5:5 uses relative coordinates to place the watermark in the upper right hand corner, based on the width of your input files. Please see the FFmpeg documentation for more examples.
ffmpeg -i input_file -vf drawtext="fontfile=font_path:fontsize=font_size:timecode=starting_timecode:fontcolor=font_colour:box=1:boxcolor=box_colour:rate=timecode_rate:x=(w-text_w)/2:y=h/1.2" output_file
Set the font path, e.g. fontfile=/Library/Fonts/AppleGothic.ttf.
For the font size, 35 is a good starting point for SD. Ideally this value is proportional to video size; for example, use ffprobe to acquire video height and divide by 14.
The starting timecode takes the format hh:mm:ss[:;.]ff. Colon escaping is determined by the OS; for example, in Ubuntu: timecode='09\\:50\\:01\\:23'. Ideally, this value would be generated from the file itself using ffprobe.
Set the font colour with a word, e.g. fontcolor=white, or a hexadecimal value such as fontcolor=0xFFFFFF.
Set the box colour, e.g. boxcolor=black, or a hexadecimal value such as boxcolor=0x000000.
Set the timecode rate, e.g. 25/1.
Note: -vf is a shortcut for -filter:v.
ffmpeg -i input_file -i subtitles_file -c copy -c:s mov_text output_file
The subtitles file, e.g. subtitles.srt, is supplied as a second input, and the subtitles are encoded with the mov_text codec. Note: The mov_text codec works for MP4 and MOV containers. For the MKV container, acceptable formats are ASS, SRT, and SSA.
Note: -c:s is a shortcut for -scodec.
ffmpeg -i input_file -ss 00:00:20 -vframes 1 thumb.png
This command will grab a thumbnail 20 seconds into the video.
ffmpeg -i input_file -vf fps=1/60 out%d.png
This will grab a thumbnail every minute and output sequential png files.
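Varying the fps value changes the interval; for instance, a sketch producing one thumbnail every ten minutes:
ffmpeg -i input_file -vf fps=1/600 out%d.png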
ffmpeg -f image2 -framerate 9 -pattern_type glob -i "input_image_*.jpg" -vf scale=250x250 output_file.gif
This will convert a series of image files into a GIF.
image2 specifies the image file demuxer; -vf is an alias for -filter:v.
Create high quality GIF
ffmpeg -ss HH:MM:SS -i input_file -filter_complex "fps=10,scale=500:-1:flags=lanczos,palettegen" -t 3 palette.png
ffmpeg -ss HH:MM:SS -i input_file -i palette.png -filter_complex "[0:v]fps=10, scale=500:-1:flags=lanczos[v], [v][1:v]paletteuse" -t 3 -loop 6 output_file
The first command will use the palettegen filter to create a custom palette, then the second command will create the GIF with the paletteuse filter. The result is a high quality GIF.
For example, 500:-1 would create a GIF 500 pixels wide and with a height proportional to the original video. In the first script above, :flags=lanczos specifies that the Lanczos rescaling algorithm will be used to resize the image.
The second command has a slightly different filtergraph, which breaks down as follows:
[v][1:v]paletteuse - applies the paletteuse filter, setting the second input file (the palette) as the reference file.
Simpler GIF creation
ffmpeg -ss HH:MM:SS -i input_file -vf "fps=10,scale=500:-1" -t 3 -loop 6 output_file
This is a quick and easy method. Dithering is more apparent than the above method using the palette filters, but the file size will be smaller. Perfect for that “legacy” GIF look.
ffmpeg -f image2 -framerate 24 -i input_file_%06d.ext -c:v v210 output_file
If the numbering of your sequence does not start at 000001, add e.g. -start_number 086400 before -i input_file_%06d.ext. The extension for TIFF files is .tif or maybe .tiff; the extension for DPX files is .dpx (or occasionally .cin for old files).
ffmpeg -r 1 -loop 1 -i image_file -i audio_file -acodec copy -shortest -vf scale=1280:720 output_file
This command will take an image file (e.g. image.jpg) and an audio file (e.g. audio.mp3) and combine them into a video file that contains the audio track with the image used as the video. It can be useful in a situation where you might want to upload an audio file to a platform like YouTube. You may want to adjust the scaling with -vf to suit your needs.
ffplay -f lavfi "amovie=input_file, asplit=2[out1][a], [a]abitscope=colors=purple|yellow[out0]"
ffplay -f lavfi "amovie=input_file, asplit=2[out1][a], [a]abitscope=colors=purple|yellow[out0]"
This filter allows visual analysis of the information held in various bit depths of an audio stream. This can aid with identifying when a file that is nominally of a higher bit depth actually has been 'padded' with null information. The provided GIF shows a 16 bit WAV file (left) and then the results of converting that same WAV to 32 bit (right). Note that in the 32 bit version, there is still only information in the first 16 bits.
ffplay -f lavfi "movie='input.mp4', signalstats=out=brng:color=cyan[out]"
ffplay -f lavfi "movie='input.mp4', signalstats=out=brng:color=cyan[out]"
ffplay input_file -vf "split=2[m][v], [v]vectorscope=b=0.7:m=color3:g=green[v], [m][v]overlay=x=W-w:y=H-h"
ffmpeg -i input01 -i input02 -filter_complex "[0:v:0]tblend=all_mode=difference128[a];[1:v:0]tblend=all_mode=difference128[b];[a][b]hstack[out]" -map [out] -f nut -c:v rawvideo - | ffplay -
ffprobe -i input_file -show_format -show_streams -show_data -print_format xml
This command extracts technical metadata from a video file and displays it in XML.
ffmpeg -i input_file -map_metadata -1 -c:v copy -c:a copy output_file
Note: -c:v and -c:a are shortcuts for -vcodec and -acodec.
Note: the shell script (.sh file) and all .mxf files to be processed must be contained within the same directory, and the script must be run from that directory.
Execute the .sh file with the command sh Rewrap-MXF.sh
.
Modify the script as needed to perform different transcodes, or to use with ffprobe. :)
The basic pattern will look similar to this:
for item in *.ext; do ffmpeg -i "$item" (FFmpeg options here) "${item%.ext}_suffix.ext"; done
e.g., if an input file is bestmovie002.avi, its output will be bestmovie002_suffix.avi.
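A concrete sketch of that pattern (the .avi extension and the _rewrapped suffix are illustrative):
for item in *.avi; do ffmpeg -i "$item" -map 0 -c copy "${item%.avi}_rewrapped.mkv"; done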
Variation: recursively process all MXF files in subdirectories using find
instead of for
:
find input_directory -iname "*.mxf" -exec ffmpeg -i {} -map 0 -c copy {}.mov \;
As of Windows 10, it is possible to run Bash via Bash on Ubuntu on Windows, allowing you to use bash scripting. To enable Bash on Windows, see these instructions.
On Windows, the primary native command line programme is PowerShell. PowerShell scripts are plain text files saved with a .ps1 extension. This entry explains how they work with the example of a PowerShell script named “rewrap-mp4.ps1”, which rewraps .mp4 files in a given directory to .mkv files.
“rewrap-mp4.ps1” contains the following text:
$inputfiles = ls *.mp4
foreach ($file in $inputfiles) {
    $output = [io.path]::ChangeExtension($file, '.mkv')
    ffmpeg -i $file -map 0 -c copy $output
}
{ - Opens the code block.
$output = [io.path]::ChangeExtension($file, '.mkv') - Sets up the output file: it will be located in the current folder and keep the same filename, but will have an .mkv extension instead of .mp4.
ffmpeg -i $file - Carry out the following FFmpeg command for each input file.
Note: To call FFmpeg here as just ‘ffmpeg’ (rather than entering the full path to ffmpeg.exe), you must make sure that it’s correctly configured. See this article, especially the section ‘Add to Path’.
-map 0 - retain all streams
-c copy - enable stream copy (no re-encode)
$output - The output file is set to the value of the $output variable declared above: i.e., the current file name with an .mkv extension.
} - Closes the code block.
Note: the PowerShell script (.ps1 file) and all .mp4 files to be rewrapped must be contained within the same directory, and the script must be run from that directory.
Execute the .ps1 file by typing .\rewrap-mp4.ps1
in PowerShell.
Modify the script as needed to perform different transcodes, or to use with ffprobe. :)
Check decoder errors
ffmpeg -i input_file -f null -
This decodes your video and prints any errors or found issues to the screen.
ffmpeg - starts the command
-i input_file - path, name and extension of the input file
-f null - Video is decoded with the null muxer. This allows video decoding without creating an output file.
- - FFmpeg syntax requires a specified output, and - is just a placeholder. No file is actually created.
Check FFV1 Version 3 fixity
ffmpeg -report -i input_file -f null -
This decodes your video and displays any CRC checksum mismatches. These errors will display in your terminal like this: [ffv1 @ 0x1b04660] CRC mismatch 350FBD8A!at 0.272000 seconds
Frame CRCs are enabled by default in FFV1 Version 3.
ffmpeg - starts the command
-report - Dump full command line and console output to a file named ffmpeg-YYYYMMDD-HHMMSS.log in the current directory. It also implies -loglevel verbose.
-i input_file - path, name and extension of the input file
-f null - Video is decoded with the null muxer. This allows video decoding without creating an output file.
- - FFmpeg syntax requires a specified output, and - is just a placeholder. No file is actually created.
Create MD5 checksums (video frames)
ffmpeg -i input_file -f framemd5 -an output_file
This will create an MD5 checksum per video frame.
ffmpeg - starts the command
-i input_file - path, name and extension of the input file
-f framemd5 - library used to calculate the MD5 checksums
-an - ignores the audio stream (audio no)
output_file - path, name and extension of the output file
You may verify an MD5 checksum file created this way by using a Bash script.
Create MD5 checksums (audio samples)
ffmpeg -i input_file -af "asetnsamples=n=48000" -f framemd5 -vn output_file
This will create an MD5 checksum for each group of 48000 audio samples.
The number of samples per group can be set arbitrarily, but it's good practice to match the samplerate of the media file (so you will get one checksum per second).
Examples for other samplerates:
- 44.1 kHz: "asetnsamples=n=44100"
- 96 kHz: "asetnsamples=n=96000"
Note: This filter transcodes audio to 16 bit PCM by default. The generated framemd5s will represent this value. Validating these framemd5s will require using the same default settings. Alternatively, when your file has another quantisation rate (e.g. 24 bit), then you might add the audio codec -c:a pcm_s24le to the command, for compatibility reasons with other tools, like BWF MetaEdit.
ffmpeg - starts the command
-i input_file - path, name and extension of the input file
-af "asetnsamples=n=48000" - the audio filter sets the number of samples per group
-f framemd5 - library used to calculate the MD5 checksums
-vn - ignores the video stream (video no)
output_file - path, name and extension of the output file
You may verify an MD5 checksum file created this way by using a Bash script.
Create stream MD5s
ffmpeg -i input_file -map 0:v:0 -c:v copy -f md5 output_file_1 -map 0:a:0 -c:a copy -f md5 output_file_2
This will create MD5 checksums for the first video and the first audio stream in a file. If only one of these is necessary (for example if used on a WAV file) either part of the command can be excluded to create the desired MD5 only. Use of this kind of checksum enables integrity of the A/V information to be verified independently of any changes to surrounding metadata.
- ffmpeg
- starts the command
- -i input_file
- path, name and extension of the input file
- -map 0:v:0
- selects the first video stream from the input
- -c:v copy
- ensures that FFmpeg will not transcode the video to a different codec before generating the MD5
- output_file_1
- is the output file for the video stream MD5. Example file extensions are .md5 and .txt
- -map 0:a:0
- selects the first audio stream from the input
- -c:a copy
- ensures that FFmpeg will not transcode the audio to a different codec before generating the MD5 (by default FFmpeg will use 16 bit PCM for audio MD5s).
- output_file_2
- is the output file for the audio stream MD5.
Note: The MD5s generated by running this command on WAV files are compatible with those embedded by the BWF MetaEdit tool and can be compared.
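Because FFmpeg accepts - as an output, the checksum can also be printed straight to the terminal instead of to a file; for example, for the audio stream of a WAV file:
ffmpeg -i input_file.wav -map 0:a:0 -c:a copy -f md5 -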
Creates a QCTools report (video file with audio track)
ffprobe -f lavfi -i "movie=input_file:s=v+a[in0][in1], [in0]signalstats=stat=tout+vrep+brng, cropdetect=reset=1:round=1, idet=half_life=1, split[a][b];[a]field=top[a1];[b]field=bottom, split[b1][b2];[a1][b1]psnr[c1];[c1][b2]ssim[out0];[in1]ebur128=metadata=1, astats=metadata=1:reset=1:length=0.4[out1]" -show_frames -show_versions -of xml=x=1:q=1 -noprivate | gzip > input_file.qctools.xml.gz
This will create an XML report for use in QCTools for a video file with one video track and one audio track. See also the QCTools documentation.
- ffprobe
- starts the command
- -f lavfi
- tells ffprobe to use the Libavfilter input virtual device
- -i
- input file and parameters
- - "movie=input_file:s=v+a[in0][in1], [in0]signalstats=stat=tout+vrep+brng, cropdetect=reset=1:round=1, idet=half_life=1, split[a][b];[a]field=top[a1];[b]field=bottom, split[b1][b2];[a1][b1]psnr[c1];[c1][b2]ssim[out0];[in1]ebur128=metadata=1, astats=metadata=1:reset=1:length=0.4[out1]"
+ - "movie=input_file:s=v+a[in0][in1], [in0]signalstats=stat=tout+vrep+brng, cropdetect=reset=1:round=1, idet=half_life=1, split[a][b];[a]field=top[a1];[b]field=bottom, split[b1][b2];[a1][b1]psnr[c1];[c1][b2]ssim[out0];[in1]ebur128=metadata=1, astats=metadata=1:reset=1:length=0.4[out1]"
- This very large lump of commands declares the input file and passes in a request for all potential data signal information for a file with one video and one audio track
- -show_frames
- asks for information about each frame and subtitle contained in the input multimedia stream
- -show_versions
- asks for information related to program and library versions
- -noprivate
- hides any private data that might exist in the file
- | gzip
- the | "pipes" the ffprobe output into gzip, which compresses it
- >
- redirects the compressed output into a file
- input_file.qctools.xml.gz
- names the zipped output file, which can be named anything but needs the extension .qctools.xml.gz for compatibility with QCTools
Creates a QCTools report (video file without audio track)
ffprobe -f lavfi -i "movie=input_file,signalstats=stat=tout+vrep+brng, cropdetect=reset=1:round=1, idet=half_life=1, split[a][b];[a]field=top[a1];[b]field=bottom,split[b1][b2];[a1][b1]psnr[c1];[c1][b2]ssim" -show_frames -show_versions -of xml=x=1:q=1 -noprivate | gzip > input_file.qctools.xml.gz
This will create an XML report for use in QCTools for a video file with one video track and NO audio track. See also the QCTools documentation.
- ffprobe
- starts the command
- -f lavfi
- tells ffprobe to use the Libavfilter input virtual device
- -i
- input file and parameters
- - "movie=input_file,signalstats=stat=tout+vrep+brng, cropdetect=reset=1:round=1, idet=half_life=1, split[a][b];[a]field=top[a1];[b]field=bottom,split[b1][b2];[a1][b1]psnr[c1];[c1][b2]ssim"
+ - "movie=input_file,signalstats=stat=tout+vrep+brng, cropdetect=reset=1:round=1, idet=half_life=1, split[a][b];[a]field=top[a1];[b]field=bottom,split[b1][b2];[a1][b1]psnr[c1];[c1][b2]ssim"
- This very large lump of commands declares the input file and passes in a request for all potential data signal information for a file with one video and one audio track
- -show_frames
- asks for information about each frame and subtitle contained in the input multimedia stream
- -show_versions
- asks for information related to program and library versions
- -noprivate
- hides any private data that might exist in the file
- | gzip
- the | "pipes" the ffprobe output into gzip, which compresses it
- >
- redirects the compressed output into a file
- input_file.qctools.xml.gz
- names the zipped output file, which can be named anything but needs the extension .qctools.xml.gz for compatibility with QCTools
Read/Extract EIA-608 (Line 21) closed captioning
ffprobe -f lavfi -i movie=input_file,readeia608 -show_entries frame=pkt_pts_time:frame_tags=lavfi.readeia608.0.line,lavfi.readeia608.0.cc,lavfi.readeia608.1.line,lavfi.readeia608.1.cc -of csv > input_file.csv
This command uses FFmpeg's readeia608 filter to extract the hexadecimal values hidden within EIA-608 (Line 21) Closed Captioning, outputting a csv file. For more information about EIA-608, check out Adobe's Introduction to Closed Captions.
If hex isn't your thing, closed captioning character and code sets can be found in the documentation for SCTools.
- ffprobe
- starts the command
- -f lavfi
- tells ffprobe to use the libavfilter input virtual device
- -i input_file
- input file and parameters
- readeia608 -show_entries frame=pkt_pts_time:frame_tags=lavfi.readeia608.0.line,lavfi.readeia608.0.cc,lavfi.readeia608.1.line,lavfi.readeia608.1.cc -of csv
- specifies the first two lines of video in which EIA-608 data (hexadecimal byte pairs) are identifiable by ffprobe, outputting comma separated values (CSV)
- >
- redirects the standard output (the data created by ffprobe about the video)
- input_file.csv
- names the CSV output file (matching the redirection in the command above)
Example
Makes a mandelbrot test pattern video
ffmpeg -f lavfi -i mandelbrot=size=1280x720:rate=25 -c:v libx264 -t 10 output_file
- ffmpeg
- starts the command
- -f lavfi
- tells FFmpeg to use the Libavfilter input virtual device
- -i mandelbrot=size=1280x720:rate=25
- asks for the mandelbrot test filter as input. Adjusting the size and rate options allows you to choose a specific frame size and framerate.
- -c:v libx264
- transcodes video from rawvideo to H.264. Set -pix_fmt to yuv420p for greater H.264 compatibility with media players.
- -t 10
- specifies recording time of 10 seconds
- output_file
- path, name and extension of the output file. Try different file extensions such as mkv, mov, mp4, or avi.
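Putting the -pix_fmt suggestion above into practice, a more player-friendly variant might look like this:
ffmpeg -f lavfi -i mandelbrot=size=1280x720:rate=25 -c:v libx264 -pix_fmt yuv420p -t 10 output_file.mp4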
Makes a SMPTE bars test pattern video
ffmpeg -f lavfi -i smptebars=size=720x576:rate=25 -c:v prores -t 10 output_file
- ffmpeg
- starts the command
- -f lavfi
- tells FFmpeg to use the Libavfilter input virtual device
- -i smptebars=size=720x576:rate=25
- asks for the smptebars test filter as input. Adjusting the size and rate options allows you to choose a specific frame size and framerate.
- -c:v prores
- transcodes video from rawvideo to Apple ProRes 4:2:2.
- -t 10
- specifies recording time of 10 seconds
- output_file
- path, name and extension of the output file. Try different file extensions such as mov or avi.
Make a test pattern video
ffmpeg -f lavfi -i testsrc=size=720x576:rate=25 -c:v v210 -t 10 output_file
- ffmpeg
- starts the command
- -f lavfi
- tells FFmpeg to use the libavfilter input virtual device
The different test patterns that can be generated are listed here.
- -c:v v210
- transcodes video from rawvideo to 10-bit Uncompressed Y′CbCr 4:2:2. Alter this setting to set your desired codec.
- -t 10
- specifies recording time of 10 seconds
- output_file
- path, name and extension of the output file. Try different file extensions such as mkv, mov, mp4, or avi.
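As one example from that list, the smptehdbars source generates high-definition SMPTE colour bars; the pattern name simply replaces testsrc:
ffmpeg -f lavfi -i smptehdbars=size=1920x1080:rate=25 -c:v v210 -t 10 output_file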
Sine wave
Generate a test audio file playing a sine wave.
- ffmpeg -f lavfi -i "sine=frequency=1000:sample_rate=48000:duration=5" -c:a pcm_s16le output_file.wav
+ ffmpeg -f lavfi -i "sine=frequency=1000:sample_rate=48000:duration=5" -c:a pcm_s16le output_file.wav
- ffmpeg
- starts the command
- -f lavfi
- tells FFmpeg to use the Libavfilter input virtual device
- -i "sine=frequency=1000:sample_rate=48000:duration=5"
- generates a 1000 Hz sine tone, sampled at 48 kHz, lasting 5 seconds
- -c:a pcm_s16le
- encodes the audio as pcm_s16le (the default encoding for wav files). pcm represents pulse-code modulation format (raw bytes), 16 means 16 bits per sample, and le means "little endian"
- output_file.wav
- path, name and extension of the output file
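To audition the tone without writing a file, the same lavfi input can be handed to ffplay:
ffplay -f lavfi -i "sine=frequency=1000:sample_rate=48000:duration=5"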
SMPTE bars + Sine wave audio
Generate a SMPTE bars test video + a 1 kHz sine wave as an audio test signal.
ffmpeg -f lavfi -i smptebars=size=720x576:rate=25 -f lavfi -i "sine=frequency=1000:sample_rate=48000" -c:a pcm_s16le -t 10 -c:v ffv1 output_file
- ffmpeg
- starts the command
- -f lavfi
- tells FFmpeg to use the libavfilter input virtual device
- -c:a pcm_s16le
- encodes the audio as pcm_s16le (the default encoding for wav files). pcm represents pulse-code modulation format (raw bytes), 16 means 16 bits per sample, and le means "little endian"
- -t 10
- specifies recording time of 10 seconds
- -c:v ffv1
- Encodes to FFV1. Alter this setting to set your desired codec.
- output_file
- path, name and extension of the output file
Makes a broken test file
Modifies an existing, functioning file and intentionally breaks it for testing purposes.
ffmpeg -i input_file -bsf noise=1 -c copy output_file
- ffmpeg
- starts the command
- -i input_file
- takes in a normal file
- -bsf noise=1
- sets bitstream filters for all streams to 'noise'. Filters can be set on specific streams using syntax such as -bsf:v for video, -bsf:a for audio, etc. The noise filter intentionally damages the contents of packets without damaging the container. This sets the noise level to 1, but it could be left blank or set to any number above 0.
- -c copy
- use stream copy mode to re-mux instead of re-encode
- output_file
- path, name and extension of the output file
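For instance, to damage only the video packets and leave the audio intact, the per-stream syntax mentioned above would be:
ffmpeg -i input_file -bsf:v noise=1 -c copy output_file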
To save a portion of the stream instead of playing it back infinitely, use the following command:
ffmpeg -f lavfi -i life=s=300x200:mold=10:r=60:ratio=0.1:death_color=#C83232:life_color=#00ff00,scale=1200:800 -t 5 output_file
ffplay input_file -vf "ocr,drawtext=fontfile=/Library/Fonts/Andale Mono.ttf:text=%{metadata\\\:lavfi.ocr.text}:fontcolor=white"
- ffplay
- starts the command
- input_file
- path, name and extension of the input file
- -vf
- creates a filtergraph to use for the streams
- "
- quotation mark to start filtergraph
- ocr,
- applies the ocr filter; the comma separates it from the next filter in the chain
Exports OCR data to screen
Note: FFmpeg must be compiled with the tesseract library for this script to work (--with-tesseract if using the brew install ffmpeg method)
ffprobe -show_entries frame_tags=lavfi.ocr.text -f lavfi -i "movie=input_file,ocr"
- ffprobe
- starts the command
- -show_entries
- sets a list of entries to show
- frame_tags=lavfi.ocr.text
- shows the lavfi.ocr.text tag in the frame section of the video
- -f lavfi
- tells ffprobe to use the Libavfilter input virtual device
- - -i "movie=input_file,ocr"
- declares 'movie' as input_file and passes in the 'ocr' command
+ - -i "movie=input_file,ocr"
- declares 'movie' as input_file and passes in the 'ocr' command
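To keep the recognised text rather than just printing it to the screen, the output can be formatted as CSV and redirected to a file, mirroring the redirection used elsewhere in this guide (ocr_output.csv is an arbitrary name):
ffprobe -show_entries frame_tags=lavfi.ocr.text -f lavfi -i "movie=input_file,ocr" -of csv > ocr_output.csv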
Compare two video files for content similarity using perceptual hashing
ffmpeg -i input_one -i input_two -filter_complex signature=detectmode=full:nb_inputs=2 -f null -
- ffmpeg
- starts the command
- -i input_one -i input_two
- assigns the input files
- -filter_complex
- enables the use of more than one input file with the filter
- signature=detectmode=full
- Applies the signature filter to the inputs in 'full' mode. The other option is 'fast'.
- nb_inputs=2
- tells the filter to expect two input files
Generate a perceptual hash for an input video file
ffmpeg -i input -vf signature=format=xml:filename="output.xml" -an -f null -
- ffmpeg -i input
- starts the command using your input file
- -vf signature=format=xml
- applies the signature filter to the input file and sets the output format for the fingerprint to xml
- filename="output.xml"
- sets the output for the signature filter
- -an
- tells FFmpeg to ignore the audio stream of the input file
Play an image sequence
Play an image sequence directly as moving images, without having to create a video first.
ffplay -framerate 5 input_file_%06d.ext
- ffplay
- starts the command
- -framerate 5
- plays image sequence at rate of 5 images per second
Note: this low framerate will produce a slideshow effect.
- input_file_%06d.ext
- path, name and extension of the input files
This must match the naming convention used! The pattern %06d matches six-digit-long numbers, possibly with leading zeroes. This allows the full sequence to be read in ascending order, one image after the other.
The extension for TIFF files is .tif or maybe .tiff; the extension for DPX files is .dpx (or even .cin for old files). Screenshots are often in .png format.
Notes:
If -framerate is omitted, the playback speed depends on the images’ file sizes and on the computer’s processing power. It may be rather slow for large image files.
You can navigate durationally by clicking within the playback window. Clicking towards the left-hand side of the playback window takes you towards the beginning of the playback sequence; clicking towards the right takes you towards the end of the sequence.
Split audio and video tracks
ffmpeg -i input_file -map 0:v:0 video_output_file -map 0:a:0 audio_output_file
This command splits the original input file into a video and audio stream. The -map command identifies which streams are mapped to which file. To ensure that you’re mapping the right streams to the right file, run ffprobe before writing the script to identify which streams are desired.
- ffmpeg
- starts the command
- -i input_file
- path, name and extension of the input file
- -map 0:v:0
- grabs the first video stream and maps it into:
- video_output_file
- path, name and extension of the video output file
- -map 0:a:0
- grabs the first audio stream and maps it into:
- audio_output_file
- path, name and extension of the audio output file
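If the extracted streams should be bit-for-bit identical to the originals rather than re-encoded, a variant using stream copy might look like this, provided each output container supports the source codec:
ffmpeg -i input_file -map 0:v:0 -c copy video_output_file -map 0:a:0 -c copy audio_output_file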
Merge audio and video tracks
ffmpeg -i video_file -i audio_file -map 0:v -map 1:a -c copy output_file
This command takes a video file and an audio file as inputs, and creates an output file that combines the video stream in the first file with the audio stream in the second file.
- ffmpeg
- starts the command
- -i video_file
- path, name and extension of the first input file (the video file)
- -i audio_file
- path, name and extension of the second input file (the audio file)
- -map 0:v
- selects the video streams from the first input file
- -map 1:a
- selects the audio streams from the second input file
- -c copy
- copies streams without re-encoding
- output_file
- path, name and extension of the output file
Note: in the example above, the video input file is given prior to the audio input file. However, input files can be added in any order, as long as they are indexed correctly when stream mapping with -map. See the entry on stream mapping.
Variation:
Include the audio tracks from both input files with the following command:
ffmpeg -i video_file -i audio_file -map 0:v -map 0:a -map 1:a -c copy output_file
Create ISO files for DVD access
Create an ISO file that can be used to burn a DVD. Please note, you will have to install dvdauthor. To install dvdauthor using Homebrew run: brew install dvdauthor
ffmpeg -i input_file -aspect 4:3 -target ntsc-dvd output_file.mpg
This command will take any file and create an MPEG file that dvdauthor can use to create an ISO.
- ffmpeg
- starts the command
- -i input_file
- path, name and extension of the input file
- -aspect 4:3
- declares the aspect ratio of the resulting video file. You can also use 16:9.
- -target ntsc-dvd
- specifies the region for your DVD. This could also be pal-dvd.
- output_file.mpg
- path and name of the output file. The extension must be .mpg
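The remaining steps happen outside FFmpeg. One possible sequence, sketched from dvdauthor's usual usage (consult the dvdauthor documentation for your version; dvd_dir and output.iso are arbitrary names, and mkisofs may be called genisoimage on some systems):
dvdauthor -o dvd_dir -t output_file.mpg
dvdauthor -o dvd_dir -T
mkisofs -dvd-video -o output.iso dvd_dir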
Exports CSV for scene detection using YDIF
ffprobe -f lavfi -i movie=input_file,signalstats -show_entries frame=pkt_pts_time:frame_tags=lavfi.signalstats.YDIF -of csv
This ffprobe command prints a CSV correlating timestamps and their YDIF values, useful for determining cuts.
- ffprobe
- starts the command
- -f lavfi
- uses the Libavfilter input virtual device as chosen format
- -i movie=input_file
- path, name and extension of the input video file
- ,
- the comma closes the video source declaration and introduces the filter
- signalstats
- tells ffprobe to use the signalstats filter
- -show_entries
- sets list of entries to show per column, determined on the next line
Cover head switching noise
ffmpeg -i input_file -filter:v drawbox=w=iw:h=7:y=ih-h:t=max output_file
This command will draw a black box over a small area of the bottom of the frame, which can be used to cover up head switching noise.
- ffmpeg
- starts the command
- -i input_file
- path, name and extension of the input file
- -filter:v drawbox=
- This calls the drawbox filter with the following options:
- t=max
- t represents the thickness of the drawn box; the special value max fills the box completely. Default is 3.
- output_file
- path and name of the output file
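To check the placement of the box before writing a file, the same filter can be previewed with ffplay:
ffplay input_file -vf "drawbox=w=iw:h=7:y=ih-h:t=max"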
Record and live-stream simultaneously
ffmpeg -re -i ${INPUTFILE} -map 0 -flags +global_header -vf scale="1280:-1,format=yuv420p" -pix_fmt yuv420p -level 3.1 -vsync passthrough -crf 26 -g 50 -bufsize 3500k -maxrate 1800k -c:v libx264 -c:a aac -b:a 128000 -r:a 44100 -ac 2 -t ${STREAMDURATION} -f tee "[movflags=+faststart]${TARGETFILE}|[f=flv]${STREAMTARGET}"
I use this script to stream to an RTMP target and record the stream locally as .mp4 with only one ffmpeg instance.
As input, I use bmdcapture, which is piped to ffmpeg. But it can also be used with a static video file as input.
The input will be scaled to 1280 px width, preserving the aspect ratio. The stream will also stop after a given time (see the -t option).
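Stripped to its essentials, the tee muxer split looks like this (the RTMP URL is a placeholder):
ffmpeg -re -i input_file -map 0 -flags +global_header -c:v libx264 -c:a aac -f tee "[movflags=+faststart]local_recording.mp4|[f=flv]rtmp://example.com/live/stream_key"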
View information about a specific decoder, encoder, demuxer, muxer, or filter
ffmpeg -h type=name
- ffmpeg
- starts the command
- -h
- Call the help option
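For example, to see every option the libx264 encoder accepts:
ffmpeg -h encoder=libx264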
Compare two images
compare -metric ae image1.ext image2.ext null:
Compares two images to each other.
- compare
- starts the command
- -metric ae
- applies the absolute error count metric, returning the number of different pixels. Other parameters are available for image comparison.
- image1.ext image2.ext
- takes two images as input
- null:
- throws away the comparison image that would be generated
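Swapping in another metric changes what is measured; for example, psnr reports the peak signal-to-noise ratio between the two images instead of a raw pixel count:
compare -metric psnr image1.ext image2.ext null: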
- -quality 75
- sets quality to 75 (out of 100), adding light compression to smaller files
- -path thumbs
- specifies where to save the thumbnails -- this goes to a folder within the active folder called "thumbs".
Note: You will have to make this folder if it doesn't already exist.
- *.jpg
- The asterisk acts as a "wildcard" to be applied to every file in the directory.
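As the note above says, the thumbs folder must already exist; creating it first is a one-liner:
mkdir -p thumbs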
Create grid of images
montage @list.txt -tile 6x12 -geometry +0+0 output_grid.jpg
- montage
- starts the command
- @list.txt
- path and name of a text file containing a list of filenames, one per line
- -tile 6x12
- specifies the dimensions of the proposed grid (6 images wide, 12 images high)
- -geometry +0+0
- specifies to include no spacing around any of the tiles; they will be flush against each other
- output_grid.jpg
- path and name of the output file
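One quick way to build list.txt, relying on the shell printing one filename per line when its output is redirected:
ls *.jpg > list.txt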
Get file signature data
convert -verbose input_file.ext | grep -i signature
Gets signature data from an image file, which is a hash that can be used to uniquely identify the image.
- convert
- starts the command
- -verbose
- sets verbose flag for collecting the most data
- input_file.ext
- path and name of image file
- |
- pipes the data into the next command
- grep
- starts the grep command
- -i signature
- ignore case and search for the phrase "signature"
Resize to width
convert input_file.ext -resize 750 output_file.ext
This script will also convert the file format, if the output has a different file extension than the input.
- convert
- starts the command
- input_file.ext
- path and name of the input file
- -resize 750
- resizes the image to 750 pixels wide, retaining aspect ratio
- output_file.ext
- path and name of the output file
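To constrain the height instead of the width, ImageMagick's geometry syntax takes a leading x; for example, to make the image 750 pixels tall:
convert input_file.ext -resize x750 output_file.ext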