diff --git a/index.html b/index.html
index 1798154..a569a72 100644
--- a/index.html
+++ b/index.html
@@ -286,7 +286,7 @@
ffmpeg -i input_file -c:v libx265 -pix_fmt yuv420p -c:a copy output_file
This command takes an input file and transcodes it to H.265/HEVC in an .mp4 wrapper, keeping the audio codec the same as in the original file.
Note: ffmpeg must be compiled with libx265, the library of the H.265 codec, for this script to work. (Add the flag --with-x265 if using the brew install ffmpeg method.)
Note: the very same scaling filter can also be used to downscale a larger image to HD.
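As an illustration only (the -2 width, which preserves the aspect ratio, and the lanczos scaler are assumptions rather than part of the original recipe), a downscale to 1080-line HD could look like:
ffmpeg -i input_file -c:v libx264 -vf "scale=-2:1080:flags=lanczos" -pix_fmt yuv420p -c:a copy output_file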
@@ -528,7 +528,7 @@
"yadif,format=yuv420p" is an ffmpeg filtergraph. Here the filtergraph is made up of one filter chain, which is itself made up of the two filters (separated by the comma). The enclosing quote marks are necessary when you use spaces within the filtergraph, e.g. -vf "yadif, format=yuv420p", and are included above as an example of good practice.
Note: ffmpeg includes several deinterlacers apart from yadif: bwdif, w3fdif, kerndeint, and nnedi.
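As a sketch of swapping in one of those alternatives (the surrounding options simply mirror the yadif example above and are not prescriptive):
ffmpeg -i input_file -c:v libx264 -vf "bwdif,format=yuv420p" -c:a copy output_file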
For more H.264 encoding options, see the latter section of the encode H.264 command.
-vf colormatrix=bt601:bt709.
Note: Converting between colourspaces with ffmpeg can be done via either the colormatrix or colorspace filters, with colorspace allowing finer control (individual setting of colourspace, transfer characteristics, primaries, range, pixel format, etc). See this entry on the ffmpeg wiki, and the ffmpeg documentation for colormatrix and colorspace.
ffmpeg -i input_file -c:v libx264 -vf colormatrix=src:dst -color_primaries val -color_trc val -colorspace val output_file
⚠ Using this command it is possible to add Rec.709 tags to a file that is actually Rec.601 (etc), so apply with caution!
These commands are relevant for H.264 and H.265 videos, encoded with libx264 and libx265 respectively.
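For instance, a Rec.601-to-Rec.709 conversion with matching tags might look like the following sketch (substitute values appropriate to your source and target):
ffmpeg -i input_file -c:v libx264 -vf colormatrix=bt601:bt709 -color_primaries bt709 -color_trc bt709 -colorspace bt709 output_file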
Note: If you wish to embed colourspace metadata without changing to another colourspace, omit -vf colormatrix=src:dst. However, since it is libx264/libx265 that writes the metadata, it’s not possible to add these tags without reencoding the video stream.
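A sketch of that tagging-only case, assuming the file really is Rec.709 (note that the video stream is still reencoded):
ffmpeg -i input_file -c:v libx264 -color_primaries bt709 -color_trc bt709 -colorspace bt709 -c:a copy output_file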
For all possible values for -color_primaries, -color_trc, and -colorspace, see the ffmpeg documentation on codec options.
1. Out of step with the regular pattern, -color_trc doesn’t accept bt470bg; it is instead here referred to directly as gamma.
@@ -679,7 +679,7 @@
ffplay -f lavfi "amovie='input.mp3', astats=metadata=1:reset=1, adrawgraph=lavfi.astats.Overall.Peak_level:max=0:min=-30.0:size=700x256:bg=Black[out]"
ffplay -f lavfi "movie='input.mp4', signalstats=out=brng:color=cyan[out]"
Note: watch out when using -ss with -c copy if the source is encoded with an interframe codec (e.g., H.264). Since ffmpeg must split on i-frames, it will seek to the nearest i-frame to begin the stream copy.
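If frame accuracy matters more than speed, one workaround is to reencode instead of stream copying, for example (timestamps and codecs here are only illustrative):
ffmpeg -i input_file -ss 00:02:00 -to 00:55:00 -c:v libx264 -c:a aac output_file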
Variation: trim video by setting duration, by using -t instead of -to.
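As a sketch of that variation, this would extract ten seconds starting at the five-minute mark (times are placeholders):
ffmpeg -i input_file -ss 00:05:00 -t 10 -c copy output_file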
Note: the shell script (.sh file) and all .mxf files to be processed must be contained within the same directory, and the script must be run from that directory.
Execute the .sh file with the command sh Rewrap-MXF.sh.
Modify the script as needed to perform different transcodes, or to use with ffprobe. :)
The basic pattern will look similar to this:
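(A sketch of such a loop, assuming .mxf inputs rewrapped to .mov with all streams copied; adjust the extensions and ffmpeg options to suit.)
for file in *.mxf; do
  # rewrap each .mxf in the current directory to .mov, copying all streams
  ffmpeg -i "$file" -map 0 -c copy "${file%.mxf}.mov"
done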
@@ -1317,13 +1317,13 @@ foreach ($file in $inputfiles) {
$output variable declared above: i.e., the current file name with an .mkv extension.
Note: the PowerShell script (.ps1 file) and all .mp4 files to be rewrapped must be contained within the same directory, and the script must be run from that directory.
Execute the .ps1 file by typing .\rewrap-mp4.ps1 in PowerShell.
Modify the script as needed to perform different transcodes, or to use with ffprobe. :)
@@ -1470,7 +1470,7 @@ foreach ($file in $inputfiles) {
This will create an XML report for use in QCTools for a video file with one video track and one audio track. See also the QCTools documentation.
This will create an XML report for use in QCTools for a video file with one video track and NO audio track. See also the QCTools documentation.
ffplay -f lavfi -i smptehdbars=size=1920x1080
ffplay -f lavfi -i smptebars=size=640x480
-safe 0 before the input file.
ffmpeg -f concat -safe 0 -i mylist.txt -c copy output_file
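Here mylist.txt is a plain text file listing the files to join, one per line, in the concat demuxer's format; a sketch with placeholder names:
file './first_file.mp4'
file './second_file.mp4'
file './third_file.mp4'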
- ffplay
- starts the command
- -framerate 5
- plays image sequence at rate of 5 images per second
- Note: this low framerate will produce a slideshow effect.
- -i input_file
- path, name and extension of the input file
This must match the naming convention used! The pattern %06d matches six-digit-long numbers, possibly with leading zeroes. This allows the full sequence to be read in ascending order, one image after the other.
The extension for TIFF files is .tif or maybe .tiff; the extension for DPX files is .dpx (or even .cin for old files). Screenshots are often in .png format.
Notes:
If -framerate is omitted, the playback speed depends on the images’ file sizes and on the computer’s processing power. It may be rather slow for large image files.
You can navigate durationally by clicking within the playback window. Clicking towards the left-hand side of the playback window takes you towards the beginning of the playback sequence; clicking towards the right takes you towards the end of the sequence.
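Putting the options above together, a full playback command might look like this sketch (the file name, numbering pattern, and extension must be adjusted to match your sequence):
ffplay -framerate 5 -i input_file_%06d.tif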
@@ -2069,7 +2069,7 @@ e.g.: ffmpeg -f concat -safe 0 -i mylist.txt -c copy output_file
- "
- quotation mark to end drawtext filter command
- output_file
- path, name and extension of the output file.
Note: -vf is a shortcut for -filter:v.
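For reference, a complete drawtext command following this pattern might look like the sketch below (the font path, text, and styling values are placeholders, not recommendations):
ffmpeg -i input_file -vf drawtext="fontfile=/path/to/font.ttf:text='Example text':fontcolor=white:fontsize=24:x=10:y=10" output_file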