FFmpeg is a powerful tool for manipulating audiovisual files. Unfortunately, it also has a steep learning curve, especially for users unfamiliar with a command line interface. This app helps users through the command generation process so that more people can reap the benefits of FFmpeg.
Each button displays helpful information about how to perform a wide variety of tasks using FFmpeg. To use this site, click on the task you would like to perform. A new window will open up with a sample command and a description of how that command works. You can copy this command and understand how the command works with a breakdown of each of the flags.
For FFmpeg basics, check out the program’s official website.
For instructions on how to install FFmpeg on Mac, Linux, and Windows, refer to Reto Kromer’s installation instructions.
For Bash and command line basics, try the Command Line Crash Course. For a little more context presented in an ffmprovisr style, try explainshell.com!
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Script Ahoy: Community Resource for Archivists and Librarians Scripting
The Sourcecaster: an app that helps you use the command line to work through common challenges that come up when working with digital primary sources.
Cable Bible: A Guide to Cables and Connectors Used for Audiovisual Tech
ffmpeg -i input_file.wav -write_id3v1 1 -id3v2_version 3 -dither_method modified_e_weighted -out_sample_rate 48k -qscale:a 1 output_file.mp3
This will convert your WAV files to MP3s.
Variation: to set the maximum constant bitrate allowed by the MP3 format, replace -qscale:a 1 with -b:a 320k. For more detailed discussion on variable vs constant bitrates see here.
ffmpeg -i input_file.wav -c:a aac -b:a 128k -dither_method modified_e_weighted -ar 44100 output_file.mp4
This will convert your WAV file to AAC/MP4.
ffmpeg -i input_file -c:v prores -profile:v 1 -vf yadif -c:a pcm_s16le output_file.mov
This command transcodes an input file into a deinterlaced Apple ProRes 422 LT file with 16-bit linear PCM encoded audio. The file is deinterlaced using the yadif filter (Yet Another De-Interlacing Filter).
Use the .mov extension to save your file in a QuickTime container. FFmpeg comes with more than one ProRes encoder:
prores is much faster, can be used for progressive video only, and seems to be better for video according to Rec. 601 (Recommendation ITU-R BT.601).
prores_ks generates a better file, can also be used for interlaced video, allows also encoding of ProRes 4444 (-c:v prores_ks -profile:v 4), and seems to be better for video according to Rec. 709 (Recommendation ITU-R BT.709).
ffmpeg -i input_file -c:v libx264 -pix_fmt yuv420p -c:a copy output_file
This command takes an input file and transcodes it to H.264 with an .mp4 wrapper, keeping the audio the same codec as the original. The libx264 codec defaults to a “medium” preset for compression quality and a CRF of 23. CRF stands for constant rate factor and determines the quality and file size of the resulting H.264 video. A low CRF means high quality and large file size; a high CRF means the opposite.
In order to use the same basic command to make a higher quality file, you can add some of these presets:
ffmpeg -i input_file -c:v libx264 -pix_fmt yuv420p -preset veryslow -crf 18 -c:a copy output_file
ffmpeg -i input_video_file.mxf -i input_audio_file.mxf -c:v libx264 -pix_fmt yuv420p -c:a aac output_file.mp4
This will transcode MXF-wrapped video and audio files to an H.264-encoded .mp4 file. Please note this only works for unencrypted, single-reel DCPs.
Variation: Copy PCM audio streams by using Matroska instead of the MP4 container
ffmpeg -i input_video_file.mxf -i input_audio_file.mxf -c:v libx264 -pix_fmt yuv420p -c:a copy output_file.mkv
ffmpeg -i input_file -c:v libx264 -filter:v "yadif,scale=1440:1080:flags=lanczos,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,format=yuv420p" output_file
This deinterlaces an SD input, upscales it to 1440x1080 with the Lanczos algorithm, and pads it to 1920x1080, producing a pillarboxed HD H.264 access file.
Transform a video file with 4:3 aspect ratio into a video file with 16:9 aspect ratio by correct pillarboxing.
ffmpeg -i input_file -filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2" -c:a copy output_file
To remove the audio stream, replace -c:a copy with -an.
Transform an SD video file with 4:3 aspect ratio into an HD video file with 16:9 aspect ratio by correct pillarboxing.
ffmpeg -i input_file -filter:v "colormatrix=bt601:bt709, scale=1440:1080:flags=lanczos, pad=1920:1080:240:0" -c:a copy output_file
The scaling filter (scale=1440:1080) works for both upscaling and downscaling. We use the Lanczos scaling algorithm (flags=lanczos), which is slower but gives better results than the default bilinear algorithm. The padding filter (pad=1920:1080:240:0) completes the transformation from SD to HD. To remove the audio stream, replace -c:a copy with -an.
Transform a video file with 16:9 aspect ratio into a video file with 4:3 aspect ratio by correct letterboxing.
ffmpeg -i input_file -filter:v "pad=iw:iw*3/4:(ow-iw)/2:(oh-ih)/2" -c:a copy output_file
To remove the audio stream, replace -c:a copy with -an.
ffmpeg -i input_file -map 0 -dn -c:v ffv1 -level 3 -g 1 -slicecrc 1 -slices 16 -c:a copy output_file.mkv -f framemd5 -an md5_output_file
This will losslessly transcode your video with the FFV1 Version 3 codec in a Matroska container. In order to verify losslessness, a framemd5 of the source video is also generated. For more information on FFV1 encoding, try the ffmpeg wiki.
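To confirm losslessness after encoding, you can also generate a framemd5 report for the new file and compare the two reports, e.g. (a sketch with hypothetical file names):
ffmpeg -i output_file.mkv -f framemd5 -an md5_output_file2
diff md5_output_file md5_output_file2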
Use the .mkv extension to save your file in a Matroska container. Optionally, choose a different extension if you want a different container, such as .mov or .avi.
ffmpeg -i input_file -c:v copy -aspect 4:3 output_file
The -aspect option sets the aspect ratio to 4:3. Experiment with other aspect ratios such as 16:9. If used together with -c:v copy, it will affect the aspect ratio stored at container level, but not the aspect ratio stored in encoded frames, if it exists.
ffmpeg -i input_file.mkv -c:v copy -c:a aac output_file.mp4
This will convert your Matroska (MKV) files to MP4 files.
The extension for the Matroska container is .mkv. To remove the audio stream, replace -c:a aac with -an. The extension for the MP4 container is .mp4.
ffmpeg -f image2 -framerate 9 -pattern_type glob -i "input_image_*.jpg" -vf scale=250x250 output_file.gif
This will convert a series of image files into a gif.
image2 specifies the image file demuxer.
ffmpeg -i concat:input_file1\|input_file2\|input_file3 -c:v libx264 -c:a copy output_file.mp4
This command allows you to create an H.264 file from a DVD source that is not copy-protected.
Before encoding, you’ll need to establish which of the .VOB files on the DVD or .iso contain the content that you wish to encode. Inside the VIDEO_TS directory, you will see a series of files with names like VTS_01_0.VOB, VTS_01_1.VOB, etc. Some of the .VOB files will contain menus, special features, etc., so locate the ones that contain target content by playing them back in VLC. Then reference them in the input, for example:
-i concat:VTS_01_1.VOB\|VTS_01_2.VOB\|VTS_01_3.VOB
It’s also possible to adjust the quality of your output by setting the -crf and -preset values:
ffmpeg -i concat:input_file1\|input_file2\|input_file3 -c:v libx264 -crf 18 -preset veryslow -c:a copy output_file.mp4
Bear in mind that by default, libx264 will only encode a single video stream and a single audio stream, picking the ‘best’ of the options available. To preserve all video and audio streams, add -map parameters:
ffmpeg -i concat:input_file1\|input_file2 -map 0:v -map 0:a -c:v libx264 -c:a copy output_file.mp4
ffmpeg -i input_file -c:v libx265 -pix_fmt yuv420p -c:a copy output_file
This command takes an input file and transcodes it to H.265/HEVC in an .mp4 wrapper, keeping the audio codec the same as in the original file.
Note: ffmpeg must be compiled with libx265, the library of the H.265 codec, for this script to work. (Add the flag --with-x265 if using the brew install ffmpeg method.)
The libx265 encoding library defaults to a ‘medium’ preset for compression quality and a CRF of 28. CRF stands for ‘constant rate factor’ and determines the quality and file size of the resulting H.265 video. The CRF scale ranges from 0 (best quality [lossless]; largest file size) to 51 (worst quality; smallest file size).
A CRF of 28 for H.265 can be considered a medium setting, corresponding to a CRF of 23 in encoding H.264, but should result in about half the file size.
To create a higher quality file, you can add these presets:
ffmpeg -i input_file -c:v libx265 -pix_fmt yuv420p -preset veryslow -crf 18 -c:a copy output_file
ffmpeg -i input_file -c:v libx264 -vf "yadif,format=yuv420p" output_file
This command takes an interlaced input file and outputs a deinterlaced H.264 MP4.
(-vf is an alias of -filter:v.)
Running the deinterlacer with one frame output per field, yadif=1, may produce visually better results (see the sketch after these notes).
Without further specification, libx264 will use a chroma subsampling scheme that is the closest match to that of the input. This can result in Y′CBCR 4:2:0, 4:2:2, or 4:4:4 chroma subsampling. QuickTime and most other non-FFmpeg based players can’t decode H.264 files that are not 4:2:0, therefore it’s advisable to specify 4:2:0 chroma subsampling. "yadif,format=yuv420p"
is an ffmpeg filtergraph. Here the filtergraph is made up of one filter chain, which is itself made up of the two filters (separated by the comma).
The enclosing quote marks are necessary when you use spaces within the filtergraph, e.g. -vf "yadif, format=yuv420p"
, and are included above as an example of good practice.
Note: ffmpeg includes several deinterlacers apart from yadif: bwdif, w3fdif, kerndeint, and nnedi.
For more H.264 encoding options, see the latter section of the encode H.264 command.
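For instance, a variation using yadif=1 might look like this (a sketch, not part of the original command):
ffmpeg -i input_file -c:v libx264 -vf "yadif=1,format=yuv420p" output_file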
Before and after deinterlacing:
This command uses a filter to convert the video to a different colour space.
ffmpeg -i input_file -c:v libx264 -vf colormatrix=src:dst output_file
The accepted values for the src and dst parameters include bt601 (Rec.601), smpte170m (Rec.601, 525-line/NTSC version), bt470bg (Rec.601, 625-line/PAL version), bt709 (Rec.709), and bt2020 (Rec.2020). For example, to convert from Rec.601 to Rec.709, you would use -vf colormatrix=bt601:bt709.
Note: Converting between colourspaces with ffmpeg can be done via either the colormatrix or colorspace filters, with colorspace allowing finer control (individual setting of colourspace, transfer characteristics, primaries, range, pixel format, etc). See this entry on the ffmpeg wiki, and the ffmpeg documentation for colormatrix and colorspace.
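For instance, a minimal sketch using the colorspace filter instead (assuming everything is to be converted to Rec.709):
ffmpeg -i input_file -c:v libx264 -vf colorspace=all=bt709 output_file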
ffmpeg -i input_file -c:v libx264 -vf colormatrix=src:dst -color_primaries val -color_trc val -colorspace val output_file
The accepted values for -color_primaries include smpte170m (Rec.601, 525-line/NTSC version), bt470bg (Rec.601, 625-line/PAL version), bt709 (Rec.709), and bt2020 (Rec.2020).
The accepted values for -color_trc include smpte170m (Rec.601, 525-line/NTSC version), gamma28 (Rec.601, 625-line/PAL version) [1], bt709 (Rec.709), bt2020_10 (Rec.2020 10-bit), and bt2020_12 (Rec.2020 12-bit).
The accepted values for -colorspace include smpte170m (Rec.601, 525-line/NTSC version), bt470bg (Rec.601, 625-line/PAL version), bt709 (Rec.709), bt2020_cl (Rec.2020 constant luminance), and bt2020_ncl (Rec.2020 non-constant luminance).
To Rec.601 (525-line/NTSC):
ffmpeg -i input_file -c:v libx264 -vf colormatrix=bt709:smpte170m -color_primaries smpte170m -color_trc smpte170m -colorspace smpte170m output_file
To Rec.601 (625-line/PAL):
ffmpeg -i input_file -c:v libx264 -vf colormatrix=bt709:bt470bg -color_primaries bt470bg -color_trc gamma28 -colorspace bt470bg output_file
To Rec.709:
ffmpeg -i input_file -c:v libx264 -vf colormatrix=bt601:bt709 -color_primaries bt709 -color_trc bt709 -colorspace bt709 output_file
MediaInfo output examples:
⚠ Using this command it is possible to add Rec.709 tags to a file that is actually Rec.601 (etc), so apply with caution!
These commands are relevant for H.264 and H.265 videos, encoded with libx264 and libx265 respectively.
Note: If you wish to embed colourspace metadata without changing to another colourspace, omit -vf colormatrix=src:dst. However, since it is libx264/libx265 that writes the metadata, it’s not possible to add these tags without reencoding the video stream.
For all possible values for -color_primaries, -color_trc, and -colorspace, see the ffmpeg documentation on codec options.
[1] Out of step with the regular pattern, -color_trc doesn't accept bt470bg; the transfer characteristic is instead referred to directly as a gamma value. In the Rec.601 standard, 525-line/NTSC and 625-line/PAL video have assumed gammas of 2.2 and 2.8 respectively.
ffmpeg -i input_file -c:v libx264 -vf "fieldmatch,yadif,decimate" output_file
The inverse telecine procedure reverses the 3:2 pull down process, restoring 29.97fps interlaced video to the 24fps frame rate of the original film source.
"fieldmatch,yadif,decimate"
is an ffmpeg filtergraph. Here the filtergraph is made up of one filter chain, which is itself made up of the three filters (separated by commas).
The enclosing quote marks are necessary when you use spaces within the filtergraph, e.g. -vf "fieldmatch, yadif, decimate"
, and are included above as an example of good practice.
Note that if applying an inverse telecine procedure to a 29.97i file, the output framerate will actually be 23.976fps.
This command can also be used to restore other framerates.
Before and after inverse telecine:
ffplay -f lavfi "amovie='input.mp3',astats=metadata=1:reset=1,adrawgraph=lavfi.astats.Overall.Peak_level:max=0:min=-30.0:size=700x256:bg=Black[out]"
ffplay -f lavfi "movie='input.mp4',signalstats=out=brng:color=cyan[out]"
Play a video with OCR of on-screen text drawn on top.
Note: ffmpeg must be compiled with the tesseract library for this script to work (--with-tesseract if using the brew install ffmpeg method).
ffplay input_file -vf "ocr,drawtext=fontfile=/Library/Fonts/Andale Mono.ttf:text=%{metadata\\\:lavfi.ocr.text}:fontcolor=white"
Export OCR data from a video as frame metadata.
Note: ffmpeg must be compiled with the tesseract library for this script to work (--with-tesseract if using the brew install ffmpeg method).
ffprobe -show_entries frame_tags=lavfi.ocr.text -f lavfi -i "movie=input_file,ocr"
ffplay input_file -vf "split=2[m][v],[v]vectorscope=b=0.7:m=color3:g=green[v],[m][v]overlay=x=W-w:y=H-h"
Play two files side by side, each rendered as the difference between its consecutive frames:
ffmpeg -i input01 -i input02 -filter_complex "[0:v:0]tblend=all_mode=difference128[a];[1:v:0]tblend=all_mode=difference128[b];[a][b]hstack[out]" -map [out] -f nut -c:v rawvideo - | ffplay -
Create high quality GIF
ffmpeg -ss HH:MM:SS -i input_file -filter_complex "fps=10,scale=500:-1:flags=lanczos,palettegen" -t 3 palette.png
ffmpeg -ss HH:MM:SS -i input_file -i palette.png -filter_complex "[0:v]fps=10,scale=500:-1:flags=lanczos[v],[v][1:v]paletteuse" -t 3 -loop 6 output_file
The first command will use the palettegen filter to create a custom palette, then the second command will create the GIF with the paletteuse filter. The result is a high quality GIF.
Simpler GIF creation
ffmpeg -ss HH:MM:SS -i input_file -vf "fps=10,scale=500:-1" -t 3 -loop 6 output_file
This is a quick and easy method. Dithering is more apparent than the above method using the palette* filters, but the file size will be smaller. Perfect for that “legacy” GIF look.
ffmpeg -i input_file -ss 00:00:20 -vframes 1 thumb.png
This command will grab a thumbnail 20 seconds into the video.
ffmpeg -i input_file -vf fps=1/60 out%d.png
This will grab a thumbnail every minute and output sequential png files.
ffmpeg -i input_file -t 5 -c copy output_file
This command captures a certain portion of a video file, starting from the beginning and continuing for the amount of time (in seconds) specified in the script. This can be used to create a preview file, or to remove unwanted content from the end of the file. To be more specific, use timecode, such as 00:00:05.
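For example, a minimal sketch taking the first five seconds by timecode:
ffmpeg -i input_file -t 00:00:05 -c copy output_file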
ffmpeg -i input_file -ss 00:02:00 -to 00:55:00 -c copy output_file
This command allows you to create an excerpt from a video file without re-encoding the image data.
Note: take care when using -ss with -c copy if the source is encoded with an interframe codec (e.g., H.264). Since ffmpeg must split on i-frames, it will seek to the nearest i-frame to begin the stream copy.
Variation: trim video by setting duration, by using -t instead of -to:
ffmpeg -i input_file -ss 00:05:00 -t 10 -c copy output_file
Note: In order to keep the original timestamps, without trying to sanitise them, you may add the -copyts option.
ffmpeg -i input_file -ss 5 -c copy output_file
This command copies a video file starting from a specified time, removing the first few seconds from the output. This can be used to create an excerpt, or remove unwanted content from the beginning of a video file.
ffmpeg -sseof -5 -i input_file -c copy output_file
This command copies a video file starting from a specified time before the end of the file, removing everything before from the output. This can be used to create an excerpt, or extract content from the end of a video file (e.g. for extracting the closing credits).
Create an ISO file that can be used to burn a DVD. Please note, you will have to install dvdauthor. To install dvdauthor using Homebrew run: brew install dvdauthor
ffmpeg -i input_file -aspect 4:3 -target ntsc-dvd output_file.mpg
This command will take any file and create an MPEG file that dvdauthor can use to create an ISO.
ffmpeg -i input_file -filter:v drawbox=w=iw:h=7:y=ih-h:t=max output_file
This command will draw a black box over a small area of the bottom of the frame, which can be used to cover up head switching noise.
ffmpeg -i input_file -i input_file_to_append -filter_complex "[0:a:0]asplit=2[a][b];[b]afifo[bb];[1:a:0][bb]concat=n=2:v=0:a=1[concatout]" -map "[a]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 output_file.mp3 -map "[concatout]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 output_file_appended.mp3
This script allows you to generate two derivative audio files from a master while appending audio from a separate file (for example a copyright or institutional notice) to one of them.
asplit allows audio streams to be split up for separate manipulation. This command splits the audio from the first input (the master file) into two streams, "a" and "b".
concat is used to join files. n=2 tells the filter there are two inputs. v=0:a=1 tells the filter there are 0 video outputs and 1 audio output. This command appends the audio from the second input to the beginning of stream "bb" and names the output "concatout".
Bash scripts are plain text files saved with a .sh extension. This entry explains how they work with the example of a bash script named “Rewrap-MXF.sh”, which rewraps .MXF files in a given directory to .MOV files.
“Rewrap-MXF.sh” contains the following text:
for file in *.MXF; do ffmpeg -i "$file" -map 0 -c copy "${file%.MXF}.mov"; done
Note: the shell script (.sh file) and all .MXF files to be processed must be contained within the same directory, and the script must be run from that directory.
Execute the .sh file with the command sh Rewrap-MXF.sh
.
Modify the script as needed to perform different transcodes, or to use with ffprobe. :)
The basic pattern will look similar to this:
for item in *.ext; do ffmpeg -i "$item" (ffmpeg options here) "${item%.ext}_suffix.ext"; done
e.g., if an input file is bestmovie002.avi, its output will be bestmovie002_suffix.avi.
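As a hypothetical instance of the pattern (file names and options are illustrative, not from the original entry), a batch transcode of AVI files to ProRes might look like:
for item in *.avi; do ffmpeg -i "$item" -c:v prores -c:a pcm_s16le "${item%.avi}_prores.mov"; done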
As of Windows 10, it is possible to run Bash via Bash on Ubuntu on Windows, allowing you to use bash scripting. To enable Bash on Windows, see these instructions.
On Windows, the primary native command line programme is PowerShell. PowerShell scripts are plain text files saved with a .ps1 extension. This entry explains how they work with the example of a PowerShell script named “rewrap-mp4.ps1”, which rewraps .mp4 files in a given directory to .mkv files.
“rewrap-mp4.ps1” contains the following text:
$inputfiles = ls *.mp4
foreach ($file in $inputfiles) {
$output = [io.path]::ChangeExtension($file, '.mkv')
ffmpeg -i $file -map 0 -c copy $output
}
The first line sets the variable $inputfiles to a list of all the .mp4 files in the current folder. The script then loops over $inputfiles; $file is an arbitrary variable which will represent each .mp4 file in turn as it is looped over. Each file is rewrapped to the $output variable declared above: i.e., the current file name with an .mkv extension.
Note: the PowerShell script (.ps1 file) and all .mp4 files to be rewrapped must be contained within the same directory, and the script must be run from that directory.
Execute the .ps1 file by typing .\rewrap-mp4.ps1
in PowerShell.
Modify the script as needed to perform different transcodes, or to use with ffprobe. :)
ffmpeg -i input_file -f framemd5 -an output_file
This will create an MD5 checksum per video frame.
You may verify an MD5 checksum file created this way by using a Bash script.
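A minimal verification sketch in Bash (hypothetical file names; it regenerates the frame-level checksums and compares them with the stored report):
ffmpeg -i input_file -f framemd5 -an check.framemd5
diff original.framemd5 check.framemd5 && echo "Checksums match"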
ffprobe -i input_file -show_format -show_streams -show_data -print_format xml
This command extracts technical metadata from a video file and displays it in xml.
See the ffmpeg documentation on ffprobe for a full list of flags and commands: www.ffmpeg.org/ffprobe.html
ffmpeg -report -i input_file -f null -
This decodes your video and displays any CRC checksum mismatches. These errors will display in your terminal like this: [ffv1 @ 0x1b04660] CRC mismatch 350FBD8A! at 0.272000 seconds
Frame CRCs are enabled by default in FFV1 Version 3.
To receive more information, add -loglevel verbose.
-f null targets the null muxer, which allows video decoding without creating an output file; the final - is just a placeholder, and no file is actually created.
To check a video file for interlacement patterns, use the idet (interlace detection) filter:
ffmpeg -i input_file -filter:v idet -f null -
Again, -f null targets the null muxer, allowing video decoding without creating an output file; the final - is just a placeholder, and no file is actually created.
ffprobe -f lavfi -i "movie=input_file:s=v+a[in0][in1],[in0]signalstats=stat=tout+vrep+brng,cropdetect=reset=1:round=1,idet=half_life=1,split[a][b];[a]field=top[a1];[b]field=bottom,split[b1][b2];[a1][b1]psnr[c1];[c1][b2]ssim[out0];[in1]ebur128=metadata=1,astats=metadata=1:reset=1:length=0.4[out1]" -show_frames -show_versions -of xml=x=1:q=1 -noprivate | gzip > input_file.qctools.xml.gz
This will create an XML report for use in QCTools for a video file with one video track and one audio track. See also the QCTools documentation.
ffprobe -f lavfi -i "movie=input_file,signalstats=stat=tout+vrep+brng,cropdetect=reset=1:round=1,idet=half_life=1,split[a][b];[a]field=top[a1];[b]field=bottom,split[b1][b2];[a1][b1]psnr[c1];[c1][b2]ssim" -show_frames -show_versions -of xml=x=1:q=1 -noprivate | gzip > input_file.qctools.xml.gz
This will create an XML report for use in QCTools for a video file with one video track and NO audio track. See also the QCTools documentation.
ffmpeg -f lavfi -i mandelbrot=size=1280x720:rate=25 -c:v libx264 -t 10 output_file
The size and rate options allow you to choose a specific frame size and framerate. Set -pix_fmt to yuv420p for greater H.264 compatibility with media players.
Make an SMPTE colour bars test pattern video:
ffmpeg -f lavfi -i smptebars=size=720x576:rate=25 -c:v prores -t 10 output_file
The size and rate options allow you to choose a specific frame size and framerate.
Make a test pattern video:
ffmpeg -f lavfi -i testsrc=size=720x576:rate=25 -c:v v210 -t 10 output_file
The size and rate options allow you to choose a specific frame size and framerate.
Test an HD video projector by playing the SMPTE colour bars pattern.
ffplay -f lavfi -i smptehdbars=size=1920x1080
Test a VGA (SD) video projector by playing the SMPTE colour bars pattern.
ffplay -f lavfi -i smptebars=size=640x480
Generate a test audio file playing a sine wave.
ffmpeg -f lavfi -i "sine=frequency=1000:sample_rate=48000:duration=5" -c:a pcm_s16le output_file.wav
The audio codec is pcm_s16le (the default encoding for WAV files). pcm represents pulse-code modulation (raw bytes), 16 means 16 bits per sample, and le means “little endian”.
ffmpeg -f concat -i mylist.txt -c copy output_file
This command takes two or more files of the same file type and joins them together to make a single file. All that the program needs is a text file with a list specifying the files that should be joined. However, it only works properly if the files to be combined have the exact same codec and technical specifications. Be careful, ffmpeg may appear to have successfully joined two video files with different codecs, but may only bring over the audio from the second file or have other weird behaviors. Don’t use this command for joining files with different codecs and technical specs and always preview your resulting video file!
file './first_file.ext'
file './second_file.ext'
. . .
file './last_file.ext'
In the above, file is simply the word "file". Straight apostrophes ('like this') rather than curved quotation marks (‘like this’) must be used to enclose the file paths.
Note: if the list file uses absolute paths, you may need to add -safe 0 before the input file:
ffmpeg -f concat -safe 0 -i mylist.txt -c copy output_file
For more information, see the ffmpeg wiki page on concatenating files.
Play an image sequence directly as moving images, without having to create a video first.
ffplay -framerate 5 input_file_%06d.ext
Notes:
If -framerate
is omitted, the playback speed depends on the images’ file sizes and on the computer’s processing power. It may be rather slow for large image files.
You can navigate durationally by clicking within the playback window. Clicking towards the left-hand side of the playback window takes you towards the beginning of the playback sequence; clicking towards the right takes you towards the end of the sequence.
ffmpeg -i input_file -map 0:v video_output_file -map 0:a audio_output_file
This command splits the original input file into a video and audio stream. The -map option identifies which streams are mapped to which file. To ensure that you’re mapping the right streams to the right file, run ffprobe before writing the script to identify which streams are desired.
ffmpeg -i input_file -filter_complex "[0:a:0][0:a:1]amerge[out]" -map 0:v -map "[out]" -c:v copy -shortest output_file
This command combines two audio tracks present in a video file into one stream. It can be useful in situations where a downstream process, like YouTube’s automatic captioning, expects one audio track. To ensure that you’re mapping the right audio tracks, run ffprobe before writing the script to identify which tracks are desired. More than two audio streams can be combined by extending the pattern present in the -filter_complex option, as in the sketch below.
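For instance, a hypothetical three-track variant (note that amerge needs inputs=3 when merging three streams, since it defaults to two):
ffmpeg -i input_file -filter_complex "[0:a:0][0:a:1][0:a:2]amerge=inputs=3[out]" -map 0:v -map "[out]" -c:v copy -shortest output_file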
ffmpeg -i input_file -c:a copy -vn output_file
This command extracts the audio stream without loss from an audiovisual file.
ffmpeg -i input_file -filter:v "hflip,vflip" -c:a copy output_file
-c:a copy
by -an
.E.g. for converting 24fps to 25fps with audio pitch compensation for PAL access copies. (Thanks @kieranjol!)
ffmpeg -i input_file -filter_complex "[0:v]setpts=input_fps/output_fps*PTS[v]; [0:a]atempo=output_fps/input_fps[a]" -map "[v]" -map "[a]" output_file
The setpts video filter modifies the PTS (presentation time stamp) of the video stream, and the atempo audio filter modifies the speed of the audio stream while keeping the same sound pitch. Note that the parameter’s order for the image and for the sound are inverted:
In setpts the numerator input_fps sets the input speed and the denominator output_fps sets the output speed; both values are given in frames per second.
In atempo the numerator output_fps sets the output speed and the denominator input_fps sets the input speed; both values are given in frames per second.
E.g. for creating access copies with your institution’s name:
ffmpeg -i input_file -vf drawtext="fontfile=font_path:fontsize=font_size:text=watermark_text:fontcolor=font_colour:alpha=0.4:x=(w-text_w)/2:y=(h-text_h)/2" output_file
Set the font path, for example fontfile=/Library/Fonts/AppleGothic.ttf.
For font size, 35 is a good starting point for SD. Ideally this value is proportional to video size; for example, use ffprobe to acquire video height and divide by 14.
Set the watermark text, for example text='FFMPROVISR EXAMPLE TEXT'.
Set the font colour with a colour name such as fontcolor=white or a hexadecimal value such as fontcolor=0xFFFFFF.
(-vf is a shortcut for -filter:v.)
ffmpeg -i input_video_file -i input_image_file -filter_complex overlay=main_w-overlay_w-5:5 output_file
main_w-overlay_w-5:5 uses relative coordinates to place the watermark in the upper right-hand corner, based on the width of your input files. Please see the ffmpeg documentation for more examples.
ffmpeg -i input_file -vf drawtext="fontfile=font_path:fontsize=font_size:timecode=starting_timecode:fontcolor=font_colour:box=1:boxcolor=box_colour:rate=timecode_rate:x=(w-text_w)/2:y=h/1.2" output_file
Set the font path, for example fontfile=/Library/Fonts/AppleGothic.ttf.
For font size, 35 is a good starting point for SD. Ideally this value is proportional to video size; for example, use ffprobe to acquire video height and divide by 14.
The starting timecode uses the format hh:mm:ss[:;.]ff. Colon escaping is determined by the OS; for example, in Ubuntu timecode='09\\:50\\:01\\:23'. Ideally, this value would be generated from the file itself using ffprobe.
Set the font colour with a colour name such as fontcolor=white or a hexadecimal value such as fontcolor=0xFFFFFF.
Set the box colour with a colour name such as boxcolor=black or a hexadecimal value such as boxcolor=0x000000.
Set the timecode rate, for example 25/1.
(-vf is a shortcut for -filter:v.)
ffmpeg -f image2 -framerate 24 -i input_file_%06d.ext -c:v v210 -an output_file
If the first frame of the sequence is not numbered 000000, specify the starting number by adding, for example, -start_number 086400 before -i input_file_%06d.ext. The extension for TIFF files is .tif or maybe .tiff; the extension for DPX files is .dpx (or eventually .cin for old files).
ffmpeg -r 1 -loop 1 -i image_file -i audio_file -acodec copy -shortest -vf scale=1280:720 output_file
This command will take an image file (e.g. image.jpg) and an audio file (e.g. audio.mp3) and combine them into a video file that contains the audio track with the image used as the video. It can be useful in a situation where you might want to upload an audio file to a platform like YouTube. You may want to adjust the scaling with -vf to suit your needs.
Set the field order of an interlaced video to top field first:
ffmpeg -i input_file -c:v prores -filter:v setfield=tff output_file
Use setfield=bff for bottom field first.
These examples use QuickTime inputs and outputs. The strategy will vary or may not be possible in other file formats. In the case of these examples it is the intention to make a lossless copy while clarifying an unknown characteristic of the stream.
ffprobe input_file -show_streams
Values that are set to 'unknown' and 'undetermined' may be unspecified within the stream. An unknown aspect ratio would be expressed as '0:1'. Streams with many unknown properties may have interoperability issues or not play as intended. In many cases, an unknown or undetermined value may be accurate because the information about the source is unclear, but often the value is intended to be known. In many cases the stream will be played with an assumed value if undetermined (for instance, a display_aspect_ratio of '0:1' may be played as 'WIDTH:HEIGHT'), but this may or may not be what is intended. Use carefully.
If the display_aspect_ratio is set to '0:1' it may be clarified with the -aspect option and stream copy.
ffmpeg -i input_file -c copy -map 0 -aspect DAR_NUM:DAR_DEN output_file
Other properties may be clarified in a similar way. Replace -aspect and its value with other properties, such as those shown in the sketch below. Note that setting color values in QuickTime requires that -movflags write_colr is set.
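For instance, a sketch clarifying colour metadata while stream-copying (flag values as in the colourspace section above; this assumes a Rec.709 source):
ffmpeg -i input_file -c copy -map 0 -color_primaries bt709 -color_trc bt709 -colorspace bt709 -movflags write_colr output_file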
View information about a specific encoder, decoder, muxer, demuxer, or filter:
ffmpeg -h type=name
Replace type=name with, for example:
encoder=libx264
decoder=mp3
muxer=matroska
demuxer=mov
filter=crop
Made with ♥ at AMIA #AVhack15! Contribute to the project via our GitHub page!