FFmpeg is a powerful tool for manipulating audiovisual files. Unfortunately, it also has a steep learning curve, especially for users unfamiliar with a command line interface. This app helps users through the command generation process so that more people can reap the benefits of FFmpeg.
Each button displays helpful information about how to perform a wide variety of tasks using FFmpeg. To use this site, click on the task you would like to perform. A new window will open up with a sample command and a description of how that command works. You can copy this command and understand how the command works with a breakdown of each of the flags.
Tutorials
For FFmpeg basics, check out the program's official website.
For instructions on how to install FFmpeg on Mac, Linux, and Windows, refer to Reto Kromer’s installation instructions.
For Bash and command line basics, try the Command Line Crash Course. For a little more context presented in an ffmprovisr style, try explainshell.com!
License
This work is licensed under a Creative Commons Attribution 4.0 International License.
Script Ahoy: Community Resource for Archivists and Librarians Scripting
The Sourcecaster: an app that helps you use the command line to work through common challenges that come up when working with digital primary sources.
Cable Bible: A Guide to Cables and Connectors Used for Audiovisual Tech
Many FFmpeg commands use filters that manipulate the video or audio stream in some way: for example, hflip to horizontally flip a video, or amerge to merge two or more audio tracks into a single stream.
The use of a filter is signalled by the flag -vf (video filter) or -af (audio filter), followed by the name and options of the filter itself. For example, take the convert colourspace command:
ffmpeg -i input_file -c:v libx264 -vf colormatrix=src:dst output_file
Here, colormatrix is the filter used, with src and dst representing the source and destination colourspaces. This part following the -vf is a filtergraph.
It is also possible to apply multiple filters to an input, which are sequenced together in the filtergraph. A chained set of filters is called a filter chain, and a filtergraph may include multiple filter chains. Filters in a filter chain are separated from each other by commas (,), and filter chains are separated from each other by semicolons (;). For example, take the inverse telecine command:
ffmpeg -i input_file -c:v libx264 -vf "fieldmatch,yadif,decimate" output_file
Here we have a filtergraph including one filter chain, which is made up of three video filters.
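As an illustrative sketch of multiple filter chains (our own example, not taken from the entries above), the following splits the video, flips one copy, and stacks the two side by side; the semicolons separate three filter chains within one filtergraph:
ffmpeg -i input_file -filter_complex "[0:v]split[a][b];[a]hflip[flipped];[b][flipped]hstack[out]" -map "[out]" -c:v libx264 output_file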
To map all streams in the input file to the output file, use -map 0. However, note that not all container formats can include all stream types: for example, .mp4 cannot contain timecode.
When no mapping is specified in an ffmpeg command, the default for video files is to take just one video and one audio stream for the output: other stream types, such as timecode or subtitles, will not be copied to the output file by default. If multiple video or audio streams are present, the best quality one is automatically selected by FFmpeg.
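As a short illustration (our own, using an assumed .mkv output since Matroska can hold most stream types), the following copies every stream from the input without reencoding:
ffmpeg -i input_file -map 0 -c copy output_file.mkv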
For more information, check out the FFmpeg wiki Map page, and the official FFmpeg documentation on -map.
For example, to convert from Rec.601 to Rec.709: -vf colormatrix=bt601:bt709.
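Dropped into a full command, that conversion might look like the following sketch (the libx264 encoding choice here is an assumption):
ffmpeg -i input_file -c:v libx264 -vf colormatrix=bt601:bt709 output_file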
Note: Converting between colourspaces with FFmpeg can be done via either the colormatrix or colorspace filters, with colorspace allowing finer control (individual setting of colourspace, transfer characteristics, primaries, range, pixel format, etc). See this entry on the FFmpeg wiki, and the FFmpeg documentation for colormatrix and colorspace.
ffmpeg -i input_file -c:v libx264 -vf colormatrix=src:dst -color_primaries val -color_trc val -colorspace val output_file
⚠ Using this command it is possible to add Rec.709 tags to a file that is actually Rec.601 (etc), so apply with caution!
These commands are relevant for H.264 and H.265 videos, encoded with libx264 and libx265 respectively.
Note: If you wish to embed colourspace metadata without changing to another colourspace, omit -vf colormatrix=src:dst. However, since it is libx264/libx265 that writes the metadata, it's not possible to add these tags without reencoding the video stream.
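For instance, a minimal sketch for tagging a stream that is already Rec.709 without converting it (the values here are illustrative):
ffmpeg -i input_file -c:v libx264 -color_primaries bt709 -color_trc bt709 -colorspace bt709 output_file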
For all possible values for -color_primaries, -color_trc, and -colorspace, see the FFmpeg documentation on codec options.
1. Out of step with the regular pattern, -color_trc doesn't accept bt470bg; it is instead here referred to directly as gamma. In the Rec.601 standard, 525-line/NTSC and 625-line/PAL video have assumed gammas of 2.2 and 2.8 respectively.
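As a sketch of what this footnote describes (our own example, not from the source), tagging 625-line/PAL Rec.601 material would use gamma28 in place of bt470bg for the transfer characteristics:
ffmpeg -i input_file -c:v libx264 -color_primaries bt470bg -color_trc gamma28 -colorspace bt470bg output_file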
The possible values for -color_primaries, -color_trc, and -field_order are given in the Codec Options section of the FFmpeg docs; scroll down to near the bottom of the section.
ffmpeg -i input_file -af loudnorm=print_format=json -f null -
This filter calculates and outputs loudness information in JSON about an input file (labeled input), as well as what the levels would be if loudnorm were applied in its one-pass mode (labeled output). The values generated can be used as inputs for a 'second pass' of the loudnorm filter, allowing more accurate loudness normalization than a single pass.
These instructions use the loudnorm defaults, which align well with PBS recommendations for target loudness. More information can be found at the loudnorm documentation.
Information about PBS loudness standards can be found in the PBS Technical Operating Specifications document. Information about EBU loudness standards can be found in the EBU R 128 recommendation document.
ffmpeg -i input_file -af loudnorm=dual_mono=true -ar 48k output_file
This will normalize the loudness of an input using one pass, which is quicker but less accurate than using two passes. This command uses the loudnorm filter defaults for target loudness. These defaults align well with PBS recommendations, but loudnorm does allow targeting of specific loudness levels. More information can be found at the loudnorm documentation.
Information about PBS loudness standards can be found in the PBS Technical Operating Specifications document. Information about EBU loudness standards can be found in the EBU R 128 recommendation document.
ffmpeg -i input_file -af loudnorm=dual_mono=true:measured_I=input_i:measured_TP=input_tp:measured_LRA=input_lra:measured_thresh=input_thresh:offset=target_offset:linear=true -ar 48k output_file
This command allows using the levels calculated using a first pass of the loudnorm filter to more accurately normalize loudness. This command uses the loudnorm filter defaults for target loudness. These defaults align well with PBS recommendations, but loudnorm does allow targeting of specific loudness levels. More information can be found at the loudnorm documentation.
Information about PBS loudness standards can be found in the PBS Technical Operating Specifications document. Information about EBU loudness standards can be found in the EBU R 128 recommendation document.
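To make the placeholder names above concrete, here is a hypothetical second pass with made-up numbers standing in for the values printed by the first pass (your measured values will differ):
ffmpeg -i input_file -af loudnorm=dual_mono=true:measured_I=-28.5:measured_TP=-3.2:measured_LRA=5.9:measured_thresh=-38.6:offset=0.5:linear=true -ar 48k output_file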
file './first_file.ext'
file './second_file.ext'
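Assuming that list is saved as mylist.txt, the matching concat demuxer command would be along these lines:
ffmpeg -f concat -safe 0 -i mylist.txt -c copy output_file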
main_w-overlay_w-5:5 uses relative coordinates to place the watermark in the upper right hand corner, based on the width of your input files. Please see the FFmpeg documentation for more examples.
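A sketch of an overlay command using those coordinates (watermark_file is a placeholder for your watermark image):
ffmpeg -i input_file -i watermark_file -filter_complex overlay=main_w-overlay_w-5:5 output_file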
ffplay -f lavfi "amovie='input.mp3', astats=metadata=1:reset=1, adrawgraph=lavfi.astats.Overall.Peak_level:max=0:min=-30.0:size=700x256:bg=Black[out]"
ffplay -f lavfi "movie='input.mp4', signalstats=out=brng:color=cyan[out]"
See also the FFmpeg documentation on ffprobe for a full list of flags, commands, and options.
This will create an XML report for use in QCTools for a video file with one video track and one audio track. See also the QCTools documentation.
This will create an XML report for use in QCTools for a video file with one video track and NO audio track. See also the QCTools documentation.
If hex isn't your thing, closed captioning character and code sets can be found in the documentation for SCTools.
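For reference, one way to dump those hex byte pairs is with the readeia608 filter; this is a sketch, so treat the exact entries as an assumption and adjust to taste:
ffprobe -f lavfi -i movie=input_file,readeia608 -show_entries frame=pkt_pts_time:frame_tags=lavfi.readeia608.0.cc,lavfi.readeia608.0.line -of csv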
ffmpeg -f lavfi -i mandelbrot=size=1280x720:rate=25 -c:v libx264 -t 10 output_file
The size and rate options allow you to choose a specific frame size and framerate.
Set -pix_fmt to yuv420p for greater H.264 compatibility with media players.
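As an illustrative variant (our own), here is the mandelbrot command above with the pixel format set:
ffmpeg -f lavfi -i mandelbrot=size=1280x720:rate=25 -c:v libx264 -pix_fmt yuv420p -t 10 output_file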
ffmpeg -f lavfi -i smptebars=size=720x576:rate=25 -c:v prores -t 10 output_file
The size and rate options allow you to choose a specific frame size and framerate.
ffmpeg -f lavfi -i testsrc=size=720x576:rate=25 -c:v v210 -t 10 output_file
The size and rate options allow you to choose a specific frame size and framerate.
ffplay -f lavfi -i smptehdbars=size=1920x1080
ffplay -f lavfi -i smptebars=size=640x480
ffmpeg -f lavfi -i "sine=frequency=1000:sample_rate=48000:duration=5" -c:a pcm_s16le output_file.wav
pcm_s16le (the default encoding for wav files): pcm represents pulse-code modulation format (raw bytes), 16 means 16 bits per sample, and le means "little endian".
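Following the same naming convention, a hypothetical 24-bit variant would simply swap in pcm_s24le:
ffmpeg -f lavfi -i "sine=frequency=1000:sample_rate=48000:duration=5" -c:a pcm_s24le output_file.wav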
means "little endian"ffmpeg -f lavfi -i smptebars=size=720x576:rate=25 -f lavfi -i "sine=frequency=1000:sample_rate=48000" -c:a pcm_s16le -t 10 -c:v ffv1 output_file
The size and rate options allow you to choose a specific frame size and framerate.
-bsf:v is for video, -bsf:a for audio, etc. The noise filter intentionally damages the contents of packets without damaging the container. This sets the noise level to 1, but it could be left blank or set to any number above 0.
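Put together, a sketch of a packet-damaging command along the lines described (the exact noise syntax may vary between FFmpeg versions):
ffmpeg -i input_file -c copy -bsf:v noise=1 output_file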
ffplay -f lavfi life=s=300x200:mold=10:r=60:ratio=0.1:death_color=#C83232:life_color=#00ff00,scale=1200:800
This ffprobe command prints a CSV correlating timestamps and their YDIF values, useful for determining cuts.
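The command itself is not shown here; a representative invocation using the signalstats filter (our reconstruction, so treat the exact flags as an assumption) would be:
ffprobe -f lavfi -i movie=input_file,signalstats -show_entries frame=pkt_pts_time:frame_tags=lavfi.signalstats.YDIF -of csv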