diff --git a/index.html b/index.html index 56c48a3..01949f2 100644 --- a/index.html +++ b/index.html @@ -256,7 +256,7 @@
ffmpeg
starts the command
-i input_file
path, name and extension of the input file
-c:v libx264
tells FFmpeg to encode the video stream as H.264
-
-pix_fmt yuv420p
libx264 will use the chroma subsampling scheme that most closely matches that of the input. This can result in Y′CbCr 4:2:0, 4:2:2, or 4:4:4 chroma subsampling. QuickTime and most other non-FFmpeg-based players can’t decode H.264 files that are not 4:2:0, so to allow the video to play in all players, you can specify 4:2:0 chroma subsampling.
+
-pix_fmt yuv420p
libx264 will use the chroma subsampling scheme that most closely matches that of the input. This can result in Y′CbCr 4:2:0, 4:2:2, or 4:4:4 chroma subsampling. QuickTime and most other non-FFmpeg-based players can’t decode H.264 files that are not 4:2:0, so to allow the video to play in all players, you can specify 4:2:0 chroma subsampling.
-c:a copy
tells FFmpeg to copy the audio stream without re-encoding it
output_file
path, name and extension of the output file
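Assembled, the options above form a single command; input_file and output_file are placeholders for your own paths:

```
ffmpeg -i input_file -c:v libx264 -pix_fmt yuv420p -c:a copy output_file
```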
@@ -319,7 +319,7 @@
-slices 16
Each frame is split into 16 slices. 16 is a good trade-off between filesize and encoding time.
-c:a copy
copies all mapped audio streams.
output_file.mkv
path and name of the output file. Use the .mkv extension to save your file in a Matroska container. Optionally, choose a different extension if you want a different container, such as .mov or .avi.
-
-f framemd5
Decodes video with the framemd5 muxer in order to generate MD5 checksums for every frame of your input file. This allows you to verify losslessness when compared against the framemd5s of the output file.
+
-f framemd5
Uses the framemd5 muxer to generate an MD5 checksum for every decoded frame of your input file. Comparing these against the framemd5s of the output file allows you to verify losslessness.
-an
ignores the audio stream when creating framemd5 (audio no)
framemd5_output_file
path, name and extension of the framemd5 file.
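Only part of the lossless recipe appears in this hunk. A minimal sketch using the documented flags, assuming FFV1 (-c:v ffv1) as the lossless encoder and placeholder filenames:

```
# encode losslessly, splitting each frame into 16 slices
ffmpeg -i input_file -c:v ffv1 -slices 16 -c:a copy output_file.mkv
# generate per-frame MD5 checksums for the source
ffmpeg -i input_file -f framemd5 -an framemd5_output_file
```

Running the framemd5 step on both the input and the output file, then diffing the two manifests, confirms losslessness.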
@@ -742,7 +742,7 @@
ffmpeg
starts the command
-i input_file
path, name and extension of the input file
-
-filter_complex
tells FFmpeg that we will be using a complex filter
+
-filter_complex
tells FFmpeg that we will be using a complex filter
"
quotation mark to start filtergraph
[0:a:0][0:a:1]amerge[out]
combines the two audio tracks into one
"
quotation mark to end filtergraph
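A sketch of the full merge command, assuming the first input's video stream and the merged audio are both mapped to the output (the -map options are not shown in this hunk):

```
ffmpeg -i input_file -filter_complex "[0:a:0][0:a:1]amerge[out]" -map 0:v -map "[out]" output_file
```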
@@ -1166,12 +1166,12 @@
-i input_file
path, name and extension of the input file
-vf drawtext=
This calls the drawtext filter with the following options:
-
fontfile=font_path
Set path to font. For example in macOS: fontfile=/Library/Fonts/AppleGothic.ttf
-
fontsize=font_size
Set font size. 35 is a good starting point for SD. Ideally this value is proportional to video size, for example use ffprobe to acquire video height and divide by 14.
-
text=watermark_text
Set the content of your watermark text. For example: text='FFMPROVISR EXAMPLE TEXT'
-
fontcolor=font_colour
Set colour of font. Can be a text string such as fontcolor=white or a hexadecimal value such as fontcolor=0xFFFFFF
-
alpha=0.4
Set transparency value.
-
x=(w-text_w)/2:y=(h-text_h)/2
Sets x and y coordinates for the watermark. These relative values will centre your watermark regardless of video dimensions.
+
fontfile=font_path
Set path to font. For example in macOS: fontfile=/Library/Fonts/AppleGothic.ttf
+
fontsize=font_size
Set font size. 35 is a good starting point for SD. Ideally this value is proportional to video size, for example use ffprobe to acquire video height and divide by 14.
+
text=watermark_text
Set the content of your watermark text. For example: text='FFMPROVISR EXAMPLE TEXT'
+
fontcolor=font_colour
Set colour of font. Can be a text string such as fontcolor=white or a hexadecimal value such as fontcolor=0xFFFFFF
+
alpha=0.4
Set transparency value.
+
x=(w-text_w)/2:y=(h-text_h)/2
Sets x and y coordinates for the watermark. These relative values will centre your watermark regardless of video dimensions.
Note: -vf is a shortcut for -filter:v.
output_file
path, name and extension of the output file.
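Substituting the example values given above, a full watermark command might read as follows; the font path is the macOS example, so adjust it for your system:

```
ffmpeg -i input_file -vf drawtext="fontfile=/Library/Fonts/AppleGothic.ttf: \
text='FFMPROVISR EXAMPLE TEXT': fontsize=35: fontcolor=white: alpha=0.4: \
x=(w-text_w)/2: y=(h-text_h)/2" output_file
```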
@@ -1208,14 +1208,14 @@
-i input_file
path, name and extension of the input file
-vf drawtext=
This calls the drawtext filter with the following options:
"
quotation mark to start drawtext filter command
-
fontfile=font_path
Set path to font. For example in macOS: fontfile=/Library/Fonts/AppleGothic.ttf
-
fontsize=font_size
Set font size. 35 is a good starting point for SD. Ideally this value is proportional to video size, for example use ffprobe to acquire video height and divide by 14.
-
timecode=starting_timecode
Set the timecode to be displayed for the first frame. Timecode is represented as hh:mm:ss[:;.]ff. Colon escaping depends on the operating system; for example, in Ubuntu: timecode='09\\:50\\:01\\:23'. Ideally, this value would be generated from the file itself using ffprobe.
-
fontcolor=font_colour
Set colour of font. Can be a text string such as fontcolor=white or a hexadecimal value such as fontcolor=0xFFFFFF
-
box=1
Enable box around timecode
-
boxcolor=box_colour
Set colour of box. Can be a text string such as boxcolor=black or a hexadecimal value such as boxcolor=0x000000
-
rate=timecode_rate
Framerate of video. For example 25/1
-
x=(w-text_w)/2:y=h/1.2
Sets x and y coordinates for the timecode. These relative values will horizontally centre your timecode in the bottom third regardless of video dimensions.
+
fontfile=font_path
Set path to font. For example in macOS: fontfile=/Library/Fonts/AppleGothic.ttf
+
fontsize=font_size
Set font size. 35 is a good starting point for SD. Ideally this value is proportional to video size, for example use ffprobe to acquire video height and divide by 14.
+
timecode=starting_timecode
Set the timecode to be displayed for the first frame. Timecode is represented as hh:mm:ss[:;.]ff. Colon escaping depends on the operating system; for example, in Ubuntu: timecode='09\\:50\\:01\\:23'. Ideally, this value would be generated from the file itself using ffprobe.
+
fontcolor=font_colour
Set colour of font. Can be a text string such as fontcolor=white or a hexadecimal value such as fontcolor=0xFFFFFF
+
box=1
Enable box around timecode
+
boxcolor=box_colour
Set colour of box. Can be a text string such as boxcolor=black or a hexadecimal value such as boxcolor=0x000000
+
rate=timecode_rate
Frame rate of the video, for example 25/1
+
x=(w-text_w)/2:y=h/1.2
Sets x and y coordinates for the timecode. These relative values will horizontally centre your timecode in the bottom third regardless of video dimensions.
"
quotation mark to end drawtext filter command
output_file
path, name and extension of the output file.
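With the example values filled in, the burnt-in timecode command might look like this; colon escaping varies by operating system as noted above, and this sketch shows the Ubuntu-style escaping:

```
ffmpeg -i input_file -vf drawtext="fontfile=/Library/Fonts/AppleGothic.ttf: \
fontsize=35: timecode='09\\:50\\:01\\:23': fontcolor=white: box=1: \
boxcolor=black: rate=25/1: x=(w-text_w)/2: y=h/1.2" output_file
```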
@@ -2153,26 +2153,26 @@
  • This is in daily use to live-stream a real-world TV show. No errors for nearly 4 years. Some parameters were found by trial and error or empirical testing, so suggestions and questions are welcome.
  • -
    ffmpeg
    starts the command
    -
    -re
    Read input at native framerate
    -
    -i input.mov
    The input file. Can also be a - to use STDIN if you pipe in from webcam or SDI.
    -
    -map 0
    map ALL streams from input file to output
    -
    -flags +global_header
    Don't place extra data in every keyframe
    -
    -vf scale="1280:-1"
    Scale to 1280 width, maintain aspect ratio.
    -
    -pix_fmt yuv420p
    convert to 4:2:0 chroma subsampling scheme
    -
    -level 3.1
    H264 Level (defines some thresholds for bitrate)
    -
    -vsync passthrough
    Each frame is passed with its timestamp from the demuxer to the muxer.
    -
    -crf 26
    Constant rate factor - basically the quality
    -
    -g 50
    GOP size.
    -
    -bufsize 3500k
    Ratecontrol buffer size (~ maxrate x2)
    -
    -maxrate 1800k
    Maximum bit rate
    -
    -c:v libx264
    encode output video stream as H.264
    -
    -c:a aac
    encode output audio stream as AAC
    -
    -b:a 128000
    The audio bitrate
    -
    -r:a 44100
    The audio samplerate
    -
    -ac 2
    Two audio channels
    -
    -t ${STREAMDURATION}
    Time (in seconds) after which the stream should automatically end.
    -
    -f tee
    Use multiple outputs. Outputs defined below.
    +
    ffmpeg
    starts the command
    +
    -re
    Read input at native framerate
    +
    -i input.mov
    The input file. Can also be a - to use STDIN if you pipe in from webcam or SDI.
    +
    -map 0
    map ALL streams from input file to output
    +
    -flags +global_header
    Don't place extra data in every keyframe
    +
    -vf scale="1280:-1"
    Scale to 1280 width, maintain aspect ratio.
    +
    -pix_fmt yuv420p
    convert to 4:2:0 chroma subsampling scheme
    +
    -level 3.1
    H.264 level (defines thresholds for bitrate and other decoder constraints)
    +
    -vsync passthrough
    Each frame is passed with its timestamp from the demuxer to the muxer.
    +
    -crf 26
    Constant rate factor, which controls quality (lower values mean higher quality and larger files)
    +
    -g 50
    GOP size.
    +
    -bufsize 3500k
    Rate-control buffer size (roughly 2× maxrate)
    +
    -maxrate 1800k
    Maximum bit rate
    +
    -c:v libx264
    encode output video stream as H.264
    +
    -c:a aac
    encode output audio stream as AAC
    +
    -b:a 128000
    The audio bitrate
    +
    -r:a 44100
    The audio samplerate
    +
    -ac 2
    Two audio channels
    +
    -t ${STREAMDURATION}
    Time (in seconds) after which the stream should automatically end.
    +
    -f tee
    Use multiple outputs. Outputs defined below.
    "[movflags=+faststart]target-file.mp4|[f=flv]rtmp://stream-url/stream-id"
    The outputs, separated by a pipe (|). The first is the local file, the second is the live stream. Options for each target are given in square brackets before the target.
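Reassembled into one command, with the options in the order given above (${STREAMDURATION} is a shell variable you must set beforehand):

```
ffmpeg -re -i input.mov -map 0 -flags +global_header \
  -vf scale="1280:-1" -pix_fmt yuv420p -c:v libx264 -level 3.1 \
  -vsync passthrough -crf 26 -g 50 -bufsize 3500k -maxrate 1800k \
  -c:a aac -b:a 128000 -r:a 44100 -ac 2 -t ${STREAMDURATION} \
  -f tee "[movflags=+faststart]target-file.mp4|[f=flv]rtmp://stream-url/stream-id"
```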