diff --git a/index.html b/index.html index 9a6893c..32d77f1 100644 --- a/index.html +++ b/index.html @@ -27,19 +27,17 @@
Advanced FFmpeg concepts
Change container (rewrap)
Change codec (transcode)
-
Change video properties
+
Change video properties
+
Change or view audio properties
Join/trim/create an excerpt
Work with interlaced video
Overlay timecode or text on a video
-
Generate image files from a video
-
Generate an animated GIF
+
Create thumbnails or GIFs
Create a video from image(s) and audio
Use filters or scopes
-
Normalize/equalize audio
View or strip metadata
Preservation tasks
Generate test files
-
Repair a file
Use OCR
Compare similarity of videos
Something else
@@ -374,6 +372,31 @@ + + +
+

Generate two access MP3s from input: one with appended audio (such as a copyright notice) and one unmodified.

+ +

ffmpeg -i input_file -i input_file_to_append -filter_complex "[0:a:0]asplit=2[a][b];[b]afifo[bb];[1:a:0][bb]concat=n=2:v=0:a=1[concatout]" -map "[a]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 output_file.mp3 -map "[concatout]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 output_file_appended.mp3

+

This command generates two derivative audio files from a master while appending audio from a separate file (for example, a copyright or institutional notice) to one of them.

+
+
ffmpeg
starts the command
+
-i input_file
path, name and extension of the input file (the master file)
+
-i input_file_to_append
path, name and extension of the input file (the file to be appended to the access copy)
+
-filter_complex
enables complex filtering, used here to split the input into two audio streams
+
[0:a:0]asplit=2[a][b];
asplit allows audio streams to be split up for separate manipulation. This command splits the audio from the first input (the master file) into two streams "a" and "b"
+
[b]afifo[bb];
buffers stream "b" to help prevent dropped samples and labels the result "bb"
+
[1:a:0][bb]concat=n=2:v=0:a=1[concatout]
concat is used to join streams. n=2 tells the filter there are two inputs; v=0:a=1 tells it to produce 0 video outputs and 1 audio output. Here the audio from the second input is placed ahead of stream "bb", and the combined result is labeled "concatout"
+
-map "[a]"
this maps the unmodified audio stream to the first output
+
-codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2
sets up MP3 options (using constant quality)
+
output_file
path, name and extension of the output file (unmodified)
+
-map "[concatout]"
this maps the modified stream to the second output
+
-codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2
sets up MP3 options (using constant quality)
+
output_file_appended
path, name and extension of the output file (with appended notice)
+
+
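For instance, with a hypothetical master recording and a separately recorded notice (all filenames here are illustrative, not part of the recipe), the filled-in command might look like this:
ffmpeg -i master.wav -i notice.wav -filter_complex "[0:a:0]asplit=2[a][b];[b]afifo[bb];[1:a:0][bb]concat=n=2:v=0:a=1[concatout]" -map "[a]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 access.mp3 -map "[concatout]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 access_with_notice.mp3  # example filenames only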
+ +
@@ -396,7 +419,7 @@
-

Change video properties

+

Change video properties

@@ -586,6 +609,164 @@
+ +
+

Change or view audio properties

+ + + +
+

Extract audio from an AV file

+ +

ffmpeg -i input_file -c:a copy -vn output_file

+

This command extracts the audio stream without loss from an audiovisual file.

+
+
ffmpeg
starts the command
+
-i input_file
path, name and extension of the input file
+
-c:a copy
copies the audio stream as-is, without re-encoding
+
-vn
no video stream
+
output_file
path, name and extension of the output file
+
+
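As a sketch, assuming a MOV master whose audio track is PCM (filenames illustrative), the extracted stream can be written straight to WAV; if the source audio were AAC instead, an .m4a extension would be a better match, since the stream is copied rather than converted:
ffmpeg -i interview.mov -c:a copy -vn interview_audio.wav  # assumes PCM source audio; filenames illustrative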
+ + + + +
+

Combine audio tracks into one in a video file

+ +

ffmpeg -i input_file -filter_complex "[0:a:0][0:a:1]amerge[out]" -map 0:v -map "[out]" -c:v copy -shortest output_file

+

This command combines two audio tracks present in a video file into one stream. It can be useful in situations where a downstream process, like YouTube’s automatic captioning, expects one audio track. To ensure that you’re mapping the right audio tracks, run ffprobe before writing the command to identify which tracks are desired. More than two audio streams can be combined by extending the pattern present in the -filter_complex option, as in the sketch after the parameter breakdown below.

+
+
ffmpeg
starts the command
+
-i input_file
path, name and extension of the input file
+
-filter_complex
tells ffmpeg that we will be using a complex filter
+
"
quotation mark to start filtergraph
+
[0:a:0][0:a:1]amerge[out]
combines the two audio tracks into one
+
"
quotation mark to end filtergraph
+
-map 0:v
map the video
+
-map "[out]"
map the combined audio defined by the filter
+
-c:v copy
copy the video
+
-shortest
limit to the shortest stream
+
output_file
path, name and extension of the video output file
+
+
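For example, a hypothetical source with three audio tracks (filenames illustrative) could be checked with ffprobe and then merged by extending the filter with amerge's inputs parameter:
ffprobe presentation.mov  # list the streams first to confirm which audio tracks exist
ffmpeg -i presentation.mov -filter_complex "[0:a:0][0:a:1][0:a:2]amerge=inputs=3[out]" -map 0:v -map "[out]" -c:v copy -shortest presentation_onetrack.mov  # filenames illustrative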
+ + + + +
+

Flip audio phase shift

+ +

ffmpeg -i input_file -af pan="stereo|c0=c0|c1=-1*c1" output_file

+

This command inverts the audio phase of the second channel by rotating it 180°.

+
+
ffmpeg
starts the command
+
-i input_file
path, name and extension of the input file
+
-af
specifies that the next section should be interpreted as an audio filter
+
pan=
applies the pan filter, using the parameters in the quoted string that follows
+
"stereo|c0=c0|c1=-1*c1"
maps the output's first channel (c0) to the input's first channel and the output's second channel (c1) to the inverse of the input's second channel
+
output_file
path, name and extension of the output file
+
+
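As a variation on the recipe above, with illustrative filenames, the -1 factor can be moved to the first channel instead if that is the one recorded out of phase:
ffmpeg -i stereo_master.wav -af pan="stereo|c0=-1*c0|c1=c1" stereo_master_inverted.wav  # filenames illustrative; inverts the first channel (c0) rather than the second (c1)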
+ + + + +
+

Calculate Loudness Levels

+ +

ffmpeg -i input_file -af loudnorm=print_format=json -f null -

+

This filter calculates and outputs loudness information in JSON about an input file (labeled input), as well as what the levels would be if loudnorm were applied in its one-pass mode (labeled output). The values generated can be used as inputs for a 'second pass' of the loudnorm filter, allowing more accurate loudness normalization than a single pass alone.

+

These instructions use the loudnorm defaults, which align well with PBS recommendations for target loudness. More information can be found at the loudnorm documentation.

+

Information about PBS loudness standards can be found in the PBS Technical Operating Specifications document. Information about EBU loudness standards can be found in the EBU R 128 recommendation document.

+
+
ffmpeg
starts the command
+
input_file
path, name and extension of the input file
+
-af loudnorm
activates the loudnorm filter
+
print_format=json
sets the output format for loudness information to JSON. This format makes it easy to use in a second pass. For a more human-readable output, this can be set to print_format=summary
+
-f null -
sets the file output to null (since we are only interested in the metadata generated)
+
+
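A sketch of a typical run (filename illustrative): the JSON block printed at the end of the console output contains input_i, input_tp, input_lra, input_thresh and target_offset, which are the values the two-pass recipe below expects.
ffmpeg -i access_copy.wav -af loudnorm=print_format=json -f null -  # filename illustrative; loudness report appears in the console log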
+ + + + +
+

RIAA Equalization

+ +

ffmpeg -i input_file -af aemphasis=type=riaa output_file

+

This will apply RIAA equalization to an input file, allowing correct playback of audio transferred 'flat' (without EQ) from records that used this EQ curve. For more information about RIAA equalization see the Wikipedia page on the subject.

+
+
ffmpeg
starts the command
+
input_file
path, name and extension of the input file
+
-af aemphasis=type=riaa
activates the aemphasis filter and sets it to use RIAA equalization
+
output_file
path and name of output file
+
+
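For example, applying the curve to a hypothetical flat transfer of one side of a disc (filenames illustrative):
ffmpeg -i side_a_flat_transfer.wav -af aemphasis=type=riaa side_a_riaa.wav  # filenames illustrative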
+ + + + +
+

One Pass Loudness Normalization

+ +

ffmpeg -i input_file -af loudnorm=dual_mono=true -ar 48k output_file

+

This will normalize the loudness of an input using one pass, which is quicker but less accurate than using two passes. This command uses the loudnorm filter defaults for target loudness. These defaults align well with PBS recommendations, but loudnorm does allow targeting of specific loudness levels. More information can be found at the loudnorm documentation.

+

Information about PBS loudness standards can be found in the PBS Technical Operating Specifications document. Information about EBU loudness standards can be found in the EBU R 128 recommendation document.

+
+
ffmpeg
starts the command
+
input_file
path, name and extension of the input file
+
-af loudnorm
activates the loudnorm filter with default settings
+
dual_mono=true
(optional) Use this for mono files meant to be played back on stereo systems for correct loudness. Not necessary for multi-track inputs.
+
-ar 48k
Sets the output sample rate to 48 kHz. (The loudnorm filter upsamples to 192 kHz so it is best to manually set a desired output sample rate).
+
output_file
path, name and extension for output file
+
+
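Since loudnorm also accepts explicit targets, here is a sketch with those targets spelled out (filenames illustrative; the I, TP and LRA values shown equal the filter defaults, and dual_mono is omitted on the assumption of a stereo input):
ffmpeg -i field_recording.wav -af loudnorm=I=-24:TP=-2:LRA=7 -ar 48k field_recording_norm.wav  # filenames illustrative; targets shown are the loudnorm defaults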
+ + + + +
+

Two Pass Loudness Normalization

+ +

ffmpeg -i input_file -af loudnorm=dual_mono=true:measured_I=input_i:measured_TP=input_tp:measured_LRA=input_lra:measured_thresh=input_thresh:offset=target_offset:linear=true -ar 48k output_file

+

This command uses the levels calculated in a first pass of the loudnorm filter to normalize loudness more accurately. It uses the loudnorm filter defaults for target loudness. These defaults align well with PBS recommendations, but loudnorm does allow targeting of specific loudness levels. More information can be found at the loudnorm documentation.

+

Information about PBS loudness standards can be found in the PBS Technical Operating Specifications document. Information about EBU loudness standards can be found in the EBU R 128 recommendation document.

+
+
ffmpeg
starts the command
+
input_file
path, name and extension of the input file
+
-af loudnorm
activates the loudnorm filter with default settings
+
dual_mono=true
(optional) use this for mono files meant to be played back on stereo systems for correct loudness. Not necessary for multi-track inputs.
+
measured_I=input_i
replace input_i with the integrated loudness ('input_i') value measured by the first pass
+
measured_TP=input_tp
replace input_tp with the true peak ('input_tp') value measured by the first pass
+
measured_LRA=input_lra
replace input_lra with the loudness range ('input_lra') value measured by the first pass
+
measured_thresh=input_thresh
replace input_thresh with the threshold ('input_thresh') value measured by the first pass
+
offset=target_offset
replace target_offset with the offset ('target_offset') value calculated by the first pass
+
linear=true
tells loudnorm to use linear normalization
+
-ar 48k
Sets the output sample rate to 48 kHz. (The loudnorm filter upsamples to 192 kHz so it is best to manually set a desired output sample rate).
+
output_file
path, name and extension for output file
+
+
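A sketch with made-up first-pass numbers plugged into the placeholders (filenames and all measured values are invented for illustration; substitute the figures reported by your own first pass):
ffmpeg -i mono_oral_history.wav -af loudnorm=dual_mono=true:measured_I=-28.4:measured_TP=-5.6:measured_LRA=10.2:measured_thresh=-38.9:offset=0.4:linear=true -ar 48k mono_oral_history_norm.wav  # all values illustrative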
+ + + + +
+

Fix AV Sync: Resample audio

+ +

ffmpeg -i input_file -c:v copy -c:a pcm_s16le -af "aresample=async=1000" output_file

+
+
ffmpeg
starts the command
+
input_file
path, name and extension of the input file
+
-c:v copy
Copy all mapped video streams.
+
-c:a pcm_s16le
tells FFmpeg to encode the audio stream in 16-bit linear PCM (little endian)
+
-af "aresample=async=1000"
Uses the aresample filter to stretch/squeeze samples to given timestamps, with a maximum of 1000 samples per second compensation.
+
output_file
path, name and extension of the output file. Try different file extensions such as mkv, mov, mp4, or avi.
+
+
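As a sketch with illustrative filenames, keeping the video untouched and writing the resampled PCM audio to a Matroska container:
ffmpeg -i drifting_recording.mov -c:v copy -c:a pcm_s16le -af "aresample=async=1000" drifting_recording_fixed.mkv  # filenames illustrative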
+ +

Join, trim, or excerpt a video

@@ -922,7 +1103,7 @@ e.g.: ffmpeg -f concat -safe 0 -i mylist.txt -c copy output_file
-

Generate image files from a video

+

Create thumbnails or GIFs

@@ -958,10 +1139,6 @@ e.g.: ffmpeg -f concat -safe 0 -i mylist.txt -c copy output_file -
-
-

Create an animated GIF

-
@@ -1185,107 +1362,6 @@ e.g.: ffmpeg -f concat -safe 0 -i mylist.txt -c copy output_file -
-
-

Normalize/equalize audio

- - - -
-

Flip audio phase shift

- -

ffmpeg -i input_file -af pan="stereo|c0=c0|c1=-1*c1" output_file

-

This command inverses the audio phase of the second channel by rotating it 180°.

-
-
ffmpeg
starts the command
-
-i input file
path, name and extension of the input file
-
-af
specifies that the next section should be interpreted as an audio filter
-
pan=
tell the quoted text below to use the pan filter
-
"stereo|c0=c0|c1=-1*c1"
maps the output's first channel (c0) to the input's first channel and the output's second channel (c1) to the inverse of the input's second channel
-
output file
path, name and extension of the output file
-
-
- - - - -
-

Calculate Loudness Levels

- -

ffmpeg -i input_file -af loudnorm=print_format=json -f null -

-

This filter calculates and outputs loudness information in json about an input file (labeled input) as well as what the levels would be if loudnorm were applied in its one pass mode (labeled output). The values generated can be used as inputs for a 'second pass' of the loudnorm filter allowing more accurate loudness normalization than if it is used in a single pass.

-

These instructions use the loudnorm defaults, which align well with PBS recommendations for target loudness. More information can be found at the loudnorm documentation.

-

Information about PBS loudness standards can be found in the PBS Technical Operating Specifications document. Information about EBU loudness standards can be found in the EBU R 128 recommendation document.

-
-
ffmpeg
starts the command
-
input_file
path, name and extension of the input file
-
-af loudnorm
activates the loudnorm filter
-
print_format=json
sets the output format for loudness information to json. This format makes it easy to use in a second pass. For a more human readable output, this can be set to print_format=summary
-
-f null -
sets the file output to null (since we are only interested in the metadata generated)
-
-
- - - - -
-

RIAA Equalization

- -

ffmpeg -i input_file -af aemphasis=type=riaa output_file

-

This will apply RIAA equalization to an input file allowing correct listening of audio transferred 'flat' (without EQ) from records that used this EQ curve. For more information about RIAA equalization see the Wikipedia page on the subject.

-
-
ffmpeg
starts the command
-
input_file
path, name and extension of the input file
-
-af aemphasis=type=riaa
activates the aemphasis filter and sets it to use RIAA equalization
-
output_file
path and name of output file
-
-
- - - - -
-

One Pass Loudness Normalization

- -

ffmpeg -i input_file -af loudnorm=dual_mono=true -ar 48k output_file

-

This will normalize the loudness of an input using one pass, which is quicker but less accurate than using two passes. This command uses the loudnorm filter defaults for target loudness. These defaults align well with PBS recommendations, but loudnorm does allow targeting of specific loudness levels. More information can be found at the loudnorm documentation.

-

Information about PBS loudness standards can be found in the PBS Technical Operating Specifications document. Information about EBU loudness standards can be found in the EBU R 128 recommendation document.

-
-
ffmpeg
starts the command
-
input_file
path, name and extension of the input file
-
-af loudnorm
activates the loudnorm filter with default settings
-
dual_mono=true
(optional) Use this for mono files meant to be played back on stereo systems for correct loudness. Not necessary for multi-track inputs.
-
-ar 48k
Sets the output sample rate to 48 kHz. (The loudnorm filter upsamples to 192 kHz so it is best to manually set a desired output sample rate).
-
output_file
path, name and extension for output file
-
-
- - - - -
-

Two Pass Loudness Normalization

- -

ffmpeg -i input_file -af loudnorm=dual_mono=true:measured_I=input_i:measured_TP=input_tp:measured_LRA=input_lra:measured_thresh=input_thresh:offset=target_offset:linear=true -ar 48k output_file

-

This command allows using the levels calculated using a first pass of the loudnorm filter to more accurately normalize loudness. This command uses the loudnorm filter defaults for target loudness. These defaults align well with PBS recommendations, but loudnorm does allow targeting of specific loudness levels. More information can be found at the loudnorm documentation.

-

Information about PBS loudness standards can be found in the PBS Technical Operating Specifications document. Information about EBU loudness standards can be found in the EBU R 128 recommendation document.

-
-
ffmpeg
starts the command
-
input_file
path, name and extension of the input file
-
-af loudnorm
activates the loudnorm filter with default settings
-
dual_mono=true
(optional) use this for mono files meant to be played back on stereo systems for correct loudness. Not necessary for multi-track inputs.
-
measured_I=input_i
use the 'input_i' value (integrated loudness) from the first pass in place of input_i
-
measured_TP=input_tp
use the 'input_tp' value (true peak) from the first pass in place of input_tp
-
measured_LRA=input_lra
use the 'input_lra' value (loudness range) from the first pass in place of input_lra
-
measured_LRA=input_thresh
use the 'input_thresh' value (threshold) from the first pass in place of input_thresh
-
offset=target_offset
use the 'target_offset' value (offset) from the first pass in place of target_offset
-
linear=true
tells loudnorm to use linear normalization
-
-ar 48k
Sets the output sample rate to 48 kHz. (The loudnorm filter upsamples to 192 kHz so it is best to manually set a desired output sample rate).
-
output_file
path, name and extension for output file
-
-
- -

View or strip metadata

@@ -1666,27 +1742,6 @@ ffmpeg -i $file -map 0 -c copy $output
-
-
-

Repair

- - - -
-

Fix AV Sync: Resample audio

- -

ffmpeg -i input_file -c:v copy -c:a pcm_s16le -af "aresample=async=1000" output_file

-
-
ffmpeg
starts the command
-
input_file
path, name and extension of the input file
-
-c:v copy
Copy all mapped video streams.
-
-c:a pcm_s16le
tells FFmpeg to encode the audio stream in 16-bit linear PCM (little endian)
-
-af "aresample=async=1000"
Uses the aresample filter to stretch/squeeze samples to given timestamps, with a maximum of 1000 samples per second compensation.
-
output_file
path, name and extension of the output file. Try different file extensions such as mkv, mov, mp4, or avi.
-
-
- -

Use OCR

@@ -1811,71 +1866,6 @@ ffmpeg -i $file -map 0 -c copy $output
- - -
-

Extract audio from an AV file

- -

ffmpeg -i input_file -c:a copy -vn output_file

-

This command extracts the audio stream without loss from an audiovisual file.

-
-
ffmpeg
starts the command
-
-i input_file
path, name and extension of the input file
-
-c:a copy
re-encodes using the same audio codec
-
-vn
no video stream
-
output_file
path, name and extension of the output file
-
-
- - - - -
-

Combine audio tracks into one in a video file

- -

ffmpeg -i input_file -filter_complex "[0:a:0][0:a:1]amerge[out]" -map 0:v -map "[out]" -c:v copy -shortest output_file

-

This command combines two audio tracks present in a video file into one stream. It can be useful in situations where a downstream process, like YouTube’s automatic captioning, expect one audio track. To ensure that you’re mapping the right audio tracks run ffprobe before writing the script to identify which tracks are desired. More than two audio streams can be combined by extending the pattern present in the -filter_complex option.

-
-
ffmpeg
starts the command
-
-i input_file
path, name and extension of the input file
-
-filter_complex
tells fmpeg that we will be using a complex filter
-
"
quotation mark to start filtergraph
-
[0:a:0][0:a:1]amerge[out]
combines the two audio tracks into one
-
"
quotation mark to end filtergraph
-
-map 0:v
map the video
-
-map "[out]"
map the combined audio defined by the filter
-
-c:v copy
copy the video
-
-shortest
limit to the shortest stream
-
output_file
path, name and extension of the video output file
-
-
- - - - -
-

Generate two access MP3s from input. One with appended audio (such as a copyright notice) and one unmodified.

- -

ffmpeg -i input_file -i input_file_to_append -filter_complex "[0:a:0]asplit=2[a][b];[b]afifo[bb];[1:a:0][bb]concat=n=2:v=0:a=1[concatout]" -map "[a]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 output_file.mp3 -map "[concatout]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 output_file_appended.mp3

-

This script allows you to generate two derivative audio files from a master while appending audio from a separate file (for example a copyright or institutional notice) to one of them.

-
-
ffmpeg
starts the command
-
-i input_file
path, name and extension of the input file (the master file)
-
-i input_file_to_append
path, name and extension of the input file (the file to be appended to access file)
-
-filter_complex
enables the complex filtering to manage splitting the input to two audio streams
-
[0:a:0]asplit=2[a][b];
asplit allows audio streams to be split up for separate manipulation. This command splits the audio from the first input (the master file) into two streams "a" and "b"
-
[b]afifo[bb];
this buffers the stream "b" to help prevent dropped samples and renames stream to "bb"
-
[1:a:0][bb]concat=n=2:v=0:a=1[concatout]
concat is used to join files. n=2 tells the filter there are two inputs. v=0:a=1 Tells the filter there are 0 video outputs and 1 audio output. This command appends the audio from the second input to the beginning of stream "bb" and names the output "concatout"
-
-map "[a]"
this maps the unmodified audio stream to the first output
-
-codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2
sets up MP3 options (using constant quality)
-
output_file
path, name and extension of the output file (unmodified)
-
-map "[concatout]"
this maps the modified stream to the second output
-
-codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2
sets up MP3 options (using constant quality)
-
output_file_appended
path, name and extension of the output file (with appended notice)
-
-
- -