mirror of https://github.com/amiaopensource/ffmprovisr.git, synced 2024-11-10 07:27:23 +01:00
index: various edits
Create GIF: Add examples using the palettegen and paletteuse filters.
WAV to MP3: Mention what happens if "-ar" is omitted.
Batch Processing: Add "-map 0" to map all streams (from input 0) instead of relying on stream selection defaults.
Split audio and video tracks: Use stream specifiers instead of indexes. More efficient and less prone to errors.
Transcode to H.264: Default is not 4:2:2, but depends on input.
NTSC to H.264: Do all filtering in one filtergraph for better control. Use format instead of "-pix_fmt" when filtering.
Various: Replace "-c:v copy -c:a copy" with "-c copy" and other minor edits. Remove superfluous "-f image2" instances.

Signed-off-by: Lou Logan <lou@lrcd.com>
This commit is contained in:
parent 1b09c29e62
commit 97ce2a6928
81 index.html
@@ -69,24 +69,22 @@ Change the above data-target field, the button text, and the below div class (th
<div class="modal-content">
<div class="well">
<h3>Create GIF</h3>
<p>Part 1: Create 3 second clip from an existing source file (no audio necessary)</p>
<p><code>ffmpeg -ss HH:MM:SS -i <i>input</i> -c:v copy -c:a copy -t 3 <i>output</i></code></p>
<p>Create high quality GIF</p>
<p><code>ffmpeg -ss HH:MM:SS -i <i>input</i> -filter_complex "fps=10,scale=500:-1:flags=lanczos,palettegen" -t 3 <i>palette.png</i></code></p>
<p><code>ffmpeg -ss HH:MM:SS -i <i>input</i> -i palette.png -filter_complex "[0:v]fps=10,scale=500:-1:flags=lanczos[v],[v][1:v]paletteuse" -t 3 -loop 6 <i>output</i></code></p>
<p>The first command will use the palettegen filter to create a custom palette, then the second command will create the GIF with the paletteuse filter. The result is a high quality GIF.</p>
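<p>As a rough sketch of an alternative, the split filter should let palettegen and paletteuse run in a single command, with the clip trimmed on the input side; the filter values here simply mirror the two-command example above:</p>
<p><code>ffmpeg -ss HH:MM:SS -t 3 -i <i>input</i> -filter_complex "[0:v]fps=10,scale=500:-1:flags=lanczos,split[a][b];[a]palettegen[p];[b][p]paletteuse" -loop 6 <i>output</i></code></p>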
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-ss <i>HH:MM:SS</i></dt><dd>starting point of the gif. If a plain numerical value is used it will be interpreted as seconds</dd>
<dt>-i <i>input file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-ss <i>HH:MM:SS</i></dt><dd>starting point of the gif</dd>
<dt>-t <i>3</i></dt><dd>number of seconds after the starting point repeated in the gif (here 3; can be specified also with a full timestamp, i.e. here 00:00:03)</dd>
<dt><i>output file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
<p>Part 2: Make the gif</p>
<p><code>ffmpeg -i <i>input</i> -vf scale=500:-1 -t 10 -r 30 <i>output.gif</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-vf scale=<i>width</i>:<i>height</i></dt><dd>in pixels (a negative number keeps it in proportion)</dd>
<dt>-t <i>10</i></dt><dd>running time in seconds (here 10)</dd>
<dt>-r <i>30</i></dt><dd>run at 30 fps (frames per second)</dd>
<dt>-filter_complex "fps=<i>frame rate</i>,scale=<i>width</i>:<i>height</i>,palettegen"</dt><dd>a complex filtergraph using the fps filter to set frame rate, the scale filter to resize, and the palettegen filter to generate the palette. The scale value of <i>-1</i> preserves the aspect ratio</dd>
<dt>-t <i>3</i></dt><dd>duration in seconds (here 3; can be specified also with a full timestamp, i.e. here 00:00:03)</dd>
<dt>-loop <i>6</i></dt><dd>number of times to loop the gif. A value of <i>-1</i> will disable looping. Omitting <i>-loop</i> will use the default which will loop infinitely</dd>
<dt><i>output file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
<p>Simpler GIF creation</p>
<p><code>ffmpeg -ss HH:MM:SS -i <i>input</i> -vf "fps=10,scale=500:-1" -t 3 -loop 6 <i>output</i></code></p>
<p>This is a quick and easy method. Dithering is more apparent than the above method using the palette* filters, but the file size will be smaller. Perfect for that "legacy" GIF look.</p>
</div>
</div>
</div>
@@ -106,7 +104,7 @@ Change the above data-target field, the button text, and the below div class (th
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-sample_fmt <i>s16p</i></dt><dd>sample format. This will give you 16 bit audio (To see a list of supported sample formats, type: <code>ffmpeg -sample_fmts</code>)</dd>
<dt>-ar <i>44100</i></dt><dd>Sets the audio sampling frequency to 44.1 kHz (CD quality).</dd>
<dt>-ar <i>44100</i></dt><dd>Sets the audio sampling frequency to 44.1 kHz (CD quality). This can be omitted to use the same sampling frequency as the input</dd>
<dt><i>output file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
</div>
@@ -122,11 +120,11 @@ Change the above data-target field, the button text, and the below div class (th
<div class="modal-content">
<div class="well">
<h3>Create Bash Script named “Rewrap.MXF.sh” to do Batch FFmpeg Processing</h3>
<p><code>for f in *.MXF; do ffmpeg -i "$f" -c:a copy -c:v copy "${f%.MXF}.mov"; done</code></p>
<p><code>for f in *.MXF; do ffmpeg -i "$f" -map 0 -c copy "${f%.MXF}.mov"; done</code></p>
<p>Re-wrap .MXF files in a specified directory to .mov files by using this code within a .sh file. The shell script (.sh file) and all MXF files must be contained in the same directory, and the script must be run from the directory itself (cd ~/Desktop/MXF_file_directory). Execute the .sh file with the command <code>sh Rewrap-MXF.sh</code></p>
<dl>
<dt>-c:a copy</dt><dd>copy audio codec</dd>
<dt>-c:v copy</dt><dd>copy video codec</dd>
<dt>-map 0</dt><dd>select all input streams to map to output</dd>
<dt>-c copy</dt><dd>enable stream copy. This will re-mux without re-encoding, so quality is preserved</dd>
</dl>
<p>Modify the ffmpeg script as needed to perform different transcodes :)</p>
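<p>As a sketch of one possible variation, the rewrapped files could be written into a separate folder (here a hypothetical <code>mov</code> subdirectory) so the originals stay untouched:</p>
<p><code>mkdir -p mov; for f in *.MXF; do ffmpeg -i "$f" -map 0 -c copy "mov/${f%.MXF}.mov"; done</code></p>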
</div>
@@ -143,7 +141,7 @@ Change the above data-target field, the button text, and the below div class (th
<div class="well">
<h3>Create frame md5s</h3>
<p><code>ffmpeg -i [inputfile.extension] -an -f framemd5 [outputfile.framemd5]</code></p>
<p>This will create an md5 checksum per frame</p>
<p>This will create an md5 checksum per video frame</p>
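<p>As a sketch of one way to use the result (the filenames here are only illustrative), generate a second framemd5 file from another copy of the material and compare the two with diff; if diff prints nothing, every video frame matched:</p>
<p><code>ffmpeg -i copy.mov -an -f framemd5 copy.framemd5</code></p>
<p><code>diff original.framemd5 copy.framemd5</code></p>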
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input file</i></dt><dd>path, name and extension of the input file</dd>
@@ -165,7 +163,7 @@ Change the above data-target field, the button text, and the below div class (th
<div class="well">
<h3>Transcode into a deinterlaced Apple ProRes LT</h3>
<p><code>ffmpeg -i input.mov -c:v prores -profile:v 1 -c:a pcm_s16le -vf yadif output.mov</code></p>
<p>This command transcodes an input file (input.mov) into a deinterlaced Apple ProRes LT .mov file with 16-bit linear PCM encoded audio. The file is deinterlaced using the yadif (Yet Another De-Interlacing Filter) command.</p>
<p>This command transcodes an input file (input.mov) into a deinterlaced Apple ProRes LT .mov file with 16-bit linear PCM encoded audio. The file is deinterlaced using the yadif filter (Yet Another De-Interlacing Filter).</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input file</i></dt><dd>path, name and extension of the input file</dd>
@@ -194,13 +192,12 @@ Change the above data-target field, the button text, and the below div class (th
<div class="modal-content">
<div class="well">
<h3>One thumbnail</h3>
<p><code>ffmpeg -i [file path] -ss 00:00:20 -f image2 -vframes 1 thumb.png</code></p>
<p><code>ffmpeg -i [file path] -ss 00:00:20 -vframes 1 thumb.png</code></p>
<p>This command will grab a thumbnail 20 seconds into the video.</p>
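<p>A possible variant for long files (a sketch producing the same thumbnail): placing -ss before -i seeks the input before decoding, which is usually much faster:</p>
<p><code>ffmpeg -ss 00:00:20 -i [file path] -vframes 1 thumb.png</code></p>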
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-ss <i>00:00:20</i></dt><dd>seeks video file to 20 seconds into the video</dd>
<dt>-f image2</dt><dd>Forces the file format. image2 is an image file demuxer.</dd>
<dt>-vframes <i>1</i></dt><dd>sets the number of frames (in this example, one frame)</dd>
<dt><i>output file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
@@ -217,14 +214,13 @@ Change the above data-target field, the button text, and the below div class (th
<div class="modal-content">
<div class="well">
<h3>Many thumbnails</h3>
<p><code>ffmpeg -i {path/inputfile.extension} -f image2 -vf fps=fps=1/60 out%d.png</code></p>
<p><code>ffmpeg -i {path/inputfile.extension} -vf fps=1/60 out%d.png</code></p>
<p>This will grab a thumbnail every minute and output sequential png files.</p>
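<p>The denominator of the fps value sets the sampling interval, so (as a sketch) fps=1/300 would keep roughly one frame every five minutes instead:</p>
<p><code>ffmpeg -i {path/inputfile.extension} -vf fps=1/300 out%d.png</code></p>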
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-ss <i>00:00:20</i></dt><dd>seeks video file to 20 seconds into the video</dd>
<dt>-f image2</dt><dd>Forces the file format. image2 is an image file demuxer.</dd>
<dt>-vf fps=fps=1/60</dt><dd>-vf is an alias for -filter:v, which creates a filtergraph to use for the streams. The rest of the command identifies filtering by frames per second, and sets the frames per second at 1/60 (which is one per minute).</dd>
<dt>-vf fps=1/60</dt><dd>-vf is an alias for -filter:v, which creates a filtergraph to use for the streams. The rest of the command identifies filtering by frames per second, and sets the frames per second at 1/60 (which is one per minute). Omitting this will output all frames from the video</dd>
<dt><i>output file</i></dt><dd>path, name and extension of the output file. In the example out%d.png where %d is a regular expression that adds a number (d is for digit) and increments with each frame (out1.png, out2.png, out3.png…).</dd>
</dl>
</div>
@@ -287,7 +283,7 @@ Change the above data-target field, the button text, and the below div class (th
<div class="modal-content">
<div class="well">
<h3>Join files together</h3>
<p><code>ffmpeg -f concat -i mylist.txt -c:a copy -c:v copy <i>output_file</i></code></p>
<p><code>ffmpeg -f concat -i mylist.txt -c copy <i>output_file</i></code></p>
<p>This command takes two or more files of the same file type and joins them together to make a single file. All that the program needs is a text file with a list specifying the files that should be joined. However, it only works properly if the files to be combined have the exact same codec and technical specifications. Be careful, ffmpeg may appear to have successfully joined two video files with different codecs, but may only bring over the audio from the second file or have other weird behaviors. Don’t use this command for joining files with different codecs and technical specs and always preview your resulting video file!</p>
<p>ffmpeg documentation on concatenating files (full list of flags, commands, <a href="https://trac.ffmpeg.org/wiki/Concatenate">https://trac.ffmpeg.org/wiki/Concatenate</a>)</p>
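<p>As a sketch of one way to build mylist.txt (assuming the files to join are .mov files in the current directory and alphabetical order is the desired order), each entry uses the concat demuxer's file 'name' syntax:</p>
<p><code>for f in *.mov; do echo "file '$f'" >> mylist.txt; done</code></p>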
<dl>
@@ -298,8 +294,7 @@ Change the above data-target field, the button text, and the below div class (th
path_name_and_extension_to_the_second_file
. . .
path_name_and_extension_to_the_last_file</i></pre></dd>
<dt>-c:a copy</dt><dd>the audio codec is copied</dd>
<dt>-c:v copy</dt><dd>the video codec is copied</dd>
<dt>-c copy</dt><dd>use stream copy mode to re-mux instead of re-encode</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
</div>
@@ -315,14 +310,13 @@ path_name_and_extension_to_the_last_file</i></pre></dd>
<div class="modal-content">
<div class="well">
<h3>Excerpt from beginning</h3>
<p><code>ffmpeg -i <i>input_file</i> -t <i>5</i> -c:v copy -c:a copy <i>output_file</i></code></p>
<p><code>ffmpeg -i <i>input_file</i> -t <i>5</i> -c copy <i>output_file</i></code></p>
<p>This command captures a certain portion of a video file, starting from the beginning and continuing for the amount of time (in seconds) specified in the script. This can be used to create a preview file, or to remove unwanted content from the end of the file. To be more specific, use timecode, such as 00:00:05.</p>
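<p>For example, the same excerpt written with a timecode instead of plain seconds (a sketch of the command above):</p>
<p><code>ffmpeg -i <i>input_file</i> -t <i>00:00:05</i> -c copy <i>output_file</i></code></p>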
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-t <i>5</i></dt><dd>Tells ffmpeg to stop copying from the input file after a certain time, and specifies the number of seconds after which to stop copying. In this case, 5 seconds is specified.</dd>
<dt>-c:a copy</dt><dd>the audio codec is copied</dd>
<dt>-c:v copy</dt><dd>the video codec is copied</dd>
<dt>-c copy</dt><dd>use stream copy mode to re-mux instead of re-encode</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
</div>
@@ -338,15 +332,14 @@ path_name_and_extension_to_the_last_file</i></pre></dd>
<div class="modal-content">
<div class="well">
<h3>Excerpt from middle</h3>
<p><code>ffmpeg -i <i>input_file</i> -ss <i>5</i> -t <i>10</i> -c:v copy -c:a copy <i>output_file</i></code></p>
<p><code>ffmpeg -i <i>input_file</i> -ss <i>5</i> -t <i>10</i> -c copy <i>output_file</i></code></p>
<p>This command captures a certain portion of a video file, starting from a designated point in the file and taking an excerpt as long as the amount of time (in seconds) specified in the script. This can be used to create a preview or clip out a desired segment. To be more specific, use timecode, such as 00:00:05.</p>
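<p>Note that with -c copy the cut points can only land on keyframes, so the excerpt may start slightly earlier or later than requested. If a frame-accurate cut matters more than speed, one possible approach is to re-encode instead of stream copying (the codec choices here are only an example):</p>
<p><code>ffmpeg -i <i>input_file</i> -ss <i>5</i> -t <i>10</i> -c:v libx264 -c:a aac <i>output_file</i></code></p>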
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-ss <i>5</i></dt><dd>Tells ffmpeg what timecode in the file to look for to start copying, and specifies the number of seconds into the video that ffmpeg should start copying. To be more specific, you can use timecode such as 00:00:05.</dd>
<dt>-t <i>10</i></dt><dd>Tells ffmpeg to stop copying from the input file after a certain time, and specifies the number of seconds after which to stop copying. In this case, 10 seconds is specified.</dd>
<dt>-c:a copy</dt><dd>the audio codec is copied</dd>
<dt>-c:v copy</dt><dd>the video codec is copied</dd>
<dt>-c copy</dt><dd>use stream copy mode to re-mux instead of re-encode</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
</div>
@@ -362,14 +355,13 @@ path_name_and_extension_to_the_last_file</i></pre></dd>
<div class="modal-content">
<div class="well">
<h3>Excerpt to end</h3>
<p><code>ffmpeg -i <i>input_file</i> -ss <i>5</i> -c:v copy -c:a copy <i>output_file</i></code></p>
<p><code>ffmpeg -i <i>input_file</i> -ss <i>5</i> -c copy <i>output_file</i></code></p>
<p>This command copies a video file starting from a specified time, removing the first few seconds from the output. This can be used to create an excerpt, or remove unwanted content from the beginning of a video file.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-ss <i>5</i></dt><dd>Tells ffmpeg what timecode in the file to look for to start copying, and specifies the number of seconds into the video that ffmpeg should start copying. To be more specific, you can use timecode such as 00:00:05.</dd>
<dt>-c:a copy</dt><dd>the audio codec is copied</dd>
<dt>-c:v copy</dt><dd>the video codec is copied</dd>
<dt>-c copy</dt><dd>use stream copy mode to re-mux instead of re-encode</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
</div>
@@ -385,14 +377,14 @@ path_name_and_extension_to_the_last_file</i></pre></dd>
<div class="modal-content">
<div class="well">
<h3>Split audio and video tracks</h3>
<p><code>ffmpeg -i <i>input_file</i> -map <i>0:0 video_output_file</i> -map <i>0:1 audio_output_file</i></code></p>
<p>This command splits the original input file into a video and audio stream. The -map command identifies which streams are mapped to which file. To ensure that you’re mapping the right streams to the right file, run ffprobe before writing the script to identify which stream is 0:0 and which is 0:1.</p>
<p><code>ffmpeg -i <i>input_file</i> -map <i>0:v video_output_file</i> -map <i>0:a audio_output_file</i></code></p>
<p>This command splits the original input file into a video and audio stream. The -map command identifies which streams are mapped to which file. To ensure that you’re mapping the right streams to the right file, run ffprobe before writing the script to identify which streams are desired.</p>
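<p>As a sketch of one way to list the streams before mapping (the ffprobe fields chosen here are just one possible selection):</p>
<p><code>ffprobe <i>input_file</i> -show_entries stream=index,codec_type,codec_name -of compact</code></p>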
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-map <i>0:0</i></dt><dd>grabs the first stream (0:0) and maps it into:</dd>
<dt>-map <i>0:v:0</i></dt><dd>grabs the first video stream and maps it into:</dd>
<dt><i>video_output_file</i></dt><dd>path, name and extension of the video output file</dd>
<dt>-map <i>0:1</i></dt><dd>grabs the second stream (0:1) and maps it into:</dd>
<dt>-map <i>0:a:0</i></dt><dd>grabs the first audio stream and maps it into:</dd>
<dt><i>audio_output_file</i></dt><dd>path, name and extension of the audio output file</dd>
</dl>
</div>
@@ -423,7 +415,7 @@ path_name_and_extension_to_the_last_file</i></pre></dd>
<dt>-preset <i>veryslow</i></dt><dd>This option tells ffmpeg to use the slowest preset possible for the best compression quality.</dd>
<dt>-crf <i>18</i></dt><dd>Specifying a lower CRF will make a larger file with better visual quality. 18 is often considered a “visually lossless” compression.</dd>
</dl>
<p>libx264 also defaults to 4:2:2 chroma subsampling. Some versions of QuickTime can’t read H.264 files in 4:2:2. In order to allow the video to play in all QuickTime players, you can specify 4:2:0 chroma subsampling instead:</p>
<p>libx264 will use a chroma subsampling scheme that is the closest match to that of the input. This can result in YUV 4:2:0, 4:2:2, or 4:4:4 chroma subsampling. QuickTime and most other non-FFmpeg based players can’t decode H.264 files that are not 4:2:0. In order to allow the video to play in all players, you can specify 4:2:0 chroma subsampling:</p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -pix_fmt yuv420p -preset veryslow -crf 18 -c:a copy <i>output_file</i></code></p>
<dl>
<dt>-pix_fmt <i>yuv420p</i></dt><dd>Specifies a pixel format of YUV 4:2:0 to allow the file to play in a standard QuickTime player.</dd>
@@ -486,15 +478,12 @@ path_name_and_extension_to_the_last_file</i></pre></dd>
<div class="modal-content">
<div class="well">
<h3>Upscaled, Pillar-boxed HD H.264 Access Files from SD NTSC source</h3>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -pix_fmt yuv420p -filter:v "scale=1440:1080, pad=1920:1080:240:0" -vf yadif <i>output_file</i></code></p>
<p>Pad without specifying pad width, just put the input video in the middle of the output: <code>pad=1920:1080:(ow-iw)/2:(oh-ih)/2</code></p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -filter:v "yadif,scale=1440:1080:flags=lanczos,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,format=yuv420p" <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>Calls the program ffmpeg</dd>
<dt>-i</dt><dd>for input video file and audio file</dd>
<dt>-c:v libx264</dt><dd>encodes video stream with libx264 (h264)</dd>
<dt>-pix_fmt yuv420p</dt><dd>specifies a pixel format of YUV 4:2:0</dd>
<dt>-filter:v</dt><dd>calls an option to apply a filter to the video stream. scale=1440:1080, pad=1920:1080:240:0": does the math! resizes the video frame then pads the area around the 4:3 aspect to complete 16:9. The very same scaling filter also downscales a bigger image size into HD.</dd>
<dt>-vf yadif</dt><dd>deinterlaces the file (optional)</dd>
<dt>-filter:v</dt><dd>calls an option to apply filtering to the video stream. yadif deinterlaces. scale and pad do the math! resizes the video frame then pads the area around the 4:3 aspect to complete 16:9. flags=lanczos uses the Lanczos scaling algorithm which is slower but better than the default bicubic. Finally, format specifies a pixel format of YUV 4:2:0. The very same scaling filter also downscales a bigger image size into HD.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
</div>