FFplay allows you to stream created video and FFmpeg allows you to save video.
The following command creates and saves a five-second video of SMPTE bars:
ffmpeg -f lavfi -i smptebars=size=640x480 -t 5 output_file
This command plays and streams SMPTE bars but does not save them on the computer:
ffplay -f lavfi smptebars=size=640x480
The main difference is small but significant: the -i flag is required for FFmpeg but not for FFplay. Additionally, the FFmpeg command needs -t 5 and output_file added to specify the length of time to record and the place to save the video.
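If you would like a test pattern with a built-in timestamp counter for timing checks, lavfi also provides the standard testsrc generator, which can be swapped in for smptebars in either command, for example:
ffplay -f lavfi testsrc=size=640x480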
ffmpeg -i input_file -i input_file_to_append -filter_complex "[0:a:0]asplit=2[a][b];[b]afifo[bb];[1:a:0][bb]concat=n=2:v=0:a=1[concatout]" -map "[a]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 output_file.mp3 -map "[concatout]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 output_file_appended.mp3
This script allows you to generate two derivative audio files from a master while appending audio from a separate file (for example a copyright or institutional notice) to one of them.
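If you only need the appended derivative, the same idea can be expressed more simply without the asplit/afifo stage. A sketch, not part of the original recipe: notice.wav stands in for your notice file, and the two inputs must share a sample rate and channel layout for the concat filter to join them.
ffmpeg -i input_file -i notice.wav -filter_complex "[0:a:0][1:a:0]concat=n=2:v=0:a=1[concatout]" -map "[concatout]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 output_file_appended.mp3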
-map "[video_out]" -c:v libx264 -pix_fmt yuv420p -preset veryslow -crf 18
Likewise, to encode the output audio stream as mp3, the command could include the following:
-map "[audio_out]" -c:a libmp3lame -dither_method modified_e_weighted -qscale:a 2
To concatenate files of different resolutions, you need to resize the videos to have matching resolutions prior to concatenation. The most basic way to do this is by using a scale filter and giving the dimensions of the file you wish to match:
-vf scale=1920:1080:flags=lanczos
(The Lanczos scaling algorithm is recommended: it is slower than the default bilinear algorithm but produces better results.)
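To check what a rescale will look like before building the full concatenation command, you can preview the filter with FFplay, which accepts the same -vf option:
ffplay -vf "scale=1920:1080:flags=lanczos" input1.avi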
The rescaling should be applied just before the point where the streams to be used in the output file are listed. Select the stream you want to rescale, apply the filter, and give the result a label (rescaled_video in the example below). Then use this label in the list of streams to be concatenated.
ffmpeg -i input1.avi -i input2.mp4 -filter_complex "[0:v:0] scale=1920:1080:flags=lanczos [rescaled_video], [rescaled_video] [0:a:0] [1:v:0] [1:a:0] concat=n=2:v=1:a=1 [video_out] [audio_out]" -map "[video_out]" -map "[audio_out]" output_file
However, this will only have the desired visual output if the inputs have the same aspect ratio. If you wish to concatenate an SD and an HD file, you will also need to pillarbox the SD file while upscaling. (See the Convert 4:3 to pillarboxed HD command.) The full command would look like this:
ffmpeg -i input1.avi -i input2.mp4 -filter_complex "[0:v:0] scale=1440:1080:flags=lanczos, pad=1920:1080:(ow-iw)/2:(oh-ih)/2 [to_hd_video], [to_hd_video] [0:a:0] [1:v:0] [1:a:0] concat=n=2:v=1:a=1 [video_out] [audio_out]" -map "[video_out]" -map "[audio_out]" output_file
Here, the first input is an SD file which needs to be upscaled to match the second input, which is 1920x1080. The scale filter enlarges the SD input to the height of the HD frame while keeping the 4:3 aspect ratio; the pad filter then pillarboxes the video within a 1920x1080 frame.
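One caveat not covered by the command above: if the SD input uses non-square pixels, its sample aspect ratio can interfere with both the padding arithmetic and the concat filter's requirement that segments match, so it may be worth resetting it with the setsar filter after scaling, for example:
scale=1440:1080:flags=lanczos, setsar=1, pad=1920:1080:(ow-iw)/2:(oh-ih)/2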
For more information, see the FFmpeg wiki page on concatenating files of different types.
"yadif,format=yuv420p"
is an FFmpeg filtergraph. Here the filtergraph is made up of one filter chain, which is itself made up of the two filters (separated by the comma).
+
"yadif,format=yuv420p"
is an FFmpeg filtergraph. Here the filtergraph is made up of one filter chain, which is itself made up of the two filters (separated by the comma).
The enclosing quote marks are necessary when you use spaces within the filtergraph, e.g. -vf "yadif, format=yuv420p"
, and are included above as an example of good practice.
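As a point of reference, a complete deinterlacing command built around this filtergraph could look like the following (a sketch; the H.264 settings shown are one common choice, not the only one):
ffmpeg -i input_file -c:v libx264 -vf "yadif,format=yuv420p" output_file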
Note: FFmpeg includes several deinterlacers apart from yadif: bwdif, w3fdif, kerndeint, and nnedi.
For more H.264 encoding options, see the latter section of the encode H.264 command.