Merge branch 'gh-pages' of github.com:amiaopensource/ffmprovisr into gh-pages

Ashley Blewer 2015-12-06 20:54:49 -05:00
commit bd13a2889e


@ -69,24 +69,22 @@ Change the above data-target field, the button text, and the below div class (th
<div class="modal-content">
<div class="well">
<h3>Create GIF</h3>
<p>Part 1: Create 3 second clip from an existing source file (no audio necessary)</p>
<p><code>ffmpeg -ss HH:MM:SS -i <i>input</i> -c:v copy -c:a copy -t 3 <i>output</i></code></p>
<p>Create high quality GIF</p>
<p><code>ffmpeg -ss HH:MM:SS -i <i>input</i> -filter_complex "fps=10,scale=500:-1:flags=lanczos,palettegen" -t 3 <i>palette.png</i></code></p>
<p><code>ffmpeg -ss HH:MM:SS -i <i>input</i> -i palette.png -filter_complex "[0:v]fps=10,scale=500:-1:flags=lanczos[v],[v][1:v]paletteuse" -t 3 -loop 6 <i>output</i></code></p>
<p>The first command will use the palettegen filter to create a custom palette, then the second command will create the GIF with the paletteuse filter. The result is a high quality GIF.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-ss <i>HH:MM:SS</i></dt><dd>starting point of the gif. If a plain numerical value is used it will be interpreted as seconds</dd>
<dt>-i <i>input file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-ss <i>HH:MM:SS</i></dt><dd>starting point of the gif</dd>
<dt>-t <i>3</i></dt><dd>number of seconds after the starting point repeated in the gif (here 3; can be specified also with a full timestamp, i.e. here 00:00:03)</dd>
<dt><i>output file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
<p>Part 2: Make the gif</p>
<p><code>ffmpeg -i <i>input</i> -vf scale=500:-1 -t 10 -r 30 <i>output.gif</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-vf scale=<i>width</i>:<i>height</i></dt><dd>in pixels (a negative number keeps it in proportion)</dd>
<dt>-t <i>10</i></dt><dd>running time in seconds (here 10)</dd>
<dt>-r <i>30</i></dt><dd>run at 30 fps (frames per second)</dd>
<dt>-filter_complex "fps=<i>frame rate</i>,scale=<i>width</i>:<i>height</i>,palettegen"</dt><dd>a complex filtergraph using the fps filter to set frame rate, the scale filter to resize, and the palettegen filter to generate the palette. The scale value of <i>-1</i> preserves the aspect ratio</dd>
<dt>-t <i>3</i></dt><dd>duration in seconds (here 3; can be specified also with a full timestamp, i.e. here 00:00:03)</dd>
<dt>-loop <i>6</i></dt><dd>number of times to loop the gif. A value of <i>-1</i> will disable looping. Omitting <i>-loop</i> will use the default which will loop infinitely</dd>
<dt><i>output file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
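<p>Putting the high quality method above together with concrete values (the 00:00:30 start point and the filenames below are only examples):</p>
<p><code>ffmpeg -ss 00:00:30 -i input.mov -filter_complex "fps=10,scale=500:-1:flags=lanczos,palettegen" -t 3 palette.png</code></p>
<p><code>ffmpeg -ss 00:00:30 -i input.mov -i palette.png -filter_complex "[0:v]fps=10,scale=500:-1:flags=lanczos[v];[v][1:v]paletteuse" -t 3 -loop 6 output.gif</code></p>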
<p>Simpler GIF creation</p>
<p><code>ffmpeg -ss HH:MM:SS -i <i>input</i> -vf "fps=10,scale=500:-1" -t 3 -loop 6 <i>output</i></code></p>
<p>This is a quick and easy method. Dithering is more apparent than with the above method using the palette* filters, but the file size will be smaller. Perfect for that "legacy" GIF look.</p>
</div>
</div>
</div>
@ -106,7 +104,7 @@ Change the above data-target field, the button text, and the below div class (th
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-sample_fmt <i>s16p</i></dt><dd>sample format. This will give you 16 bit audio (To see a list of supported sample formats, type: <code>ffmpeg -sample_fmts</code>)</dd>
<dt>-ar <i>44100</i></dt><dd>Sets the audio sampling frequency to 44.1 kHz (CD quality).</dd>
<dt>-ar <i>44100</i></dt><dd>Sets the audio sampling frequency to 44.1 kHz (CD quality). This can be omitted to use the same sampling frequency as the input</dd>
<dt><i>output file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
</div>
@ -122,11 +120,11 @@ Change the above data-target field, the button text, and the below div class (th
<div class="modal-content">
<div class="well">
<h3>Create Bash Script named “Rewrap-MXF.sh” to do Batch FFmpeg Processing</h3>
<p><code>for f in *.MXF; do ffmpeg -i "$f" -c:a copy -c:v copy "${f%.MXF}.mov"; done</code></p>
<p><code>for f in *.MXF; do ffmpeg -i "$f" -map 0 -c copy "${f%.MXF}.mov"; done</code></p>
<p>Re-wrap .MXF files in a specified directory to .mov files by using this code within a .sh file. The shell script (.sh file) and all MXF files must be contained in the same directory, and the script must be run from that directory (cd ~/Desktop/MXF_file_directory). Execute the .sh file with the command <code>sh Rewrap-MXF.sh</code></p>
<dl>
<dt>-c:a copy (copy audio codec)</dt>
<dt>-c:v copy (copy video codec)</dt>
<dt>-map 0</dt><dd>select all input streams to map to output</dd>
<dt>-c copy</dt><dd>enable stream copy. This will re-mux without re-encoding, so quality is preserved</dd>
</dl>
<p>Modify the ffmpeg script as needed to perform different transcodes :)</p>
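<p>For reference, a sketch of the complete script file (assuming it is saved as Rewrap-MXF.sh alongside the MXF files) could be as simple as:</p>
<pre>#!/bin/bash
# re-wrap every .MXF file in the current directory into a .mov,
# copying all streams without re-encoding
for f in *.MXF; do
  ffmpeg -i "$f" -map 0 -c copy "${f%.MXF}.mov"
done</pre>
<p>Run it from that directory with <code>sh Rewrap-MXF.sh</code>.</p>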
</div>
@ -136,20 +134,20 @@ Change the above data-target field, the button text, and the below div class (th
<!-- ends batch processing -->
<!-- Create frame md5s -->
<span data-toggle="modal" data-target=".create_frame_md5s"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="This will create an md5 checksum per frame">Create frame md5s</button></span>
<span data-toggle="modal" data-target=".create_frame_md5s"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="This will create an MD5 checksum per video frame">Create MD5 checksums</button></span>
<div class="modal fade create_frame_md5s" tabindex="-1" role="dialog" aria-labelledby="myLargeModalLabel">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Create frame md5s</h3>
<p><code>ffmpeg -i [inputfile.extension] -an -f framemd5 [outputfile.framemd5]</code></p>
<p>This will create an md5 checksum per frame</p>
<h3>Create MD5 checksums</h3>
<p><code>ffmpeg -i <i>input_file</i> -f framemd5 -an <i>output_file</i></code></p>
<p>This will create an MD5 checksum per video frame.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-f framemd5</dt><dd>use the framemd5 muxer to calculate the MD5 checksums</dd>
<dt>-an</dt><dd>ignores the audio stream (audio no)</dd>
<dt>-f framemd5</dt><dd>file type</dd>
<dt><i>output file</i></dt><dd>path, name and extension of the output file</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
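<p>A usage sketch with hypothetical filenames: generate a checksum manifest at ingest, generate a second one at a later audit, and compare the two with <code>diff</code>:</p>
<p><code>ffmpeg -i vid001.mov -f framemd5 -an vid001_ingest.framemd5</code></p>
<p><code>ffmpeg -i vid001.mov -f framemd5 -an vid001_audit.framemd5</code></p>
<p><code>diff vid001_ingest.framemd5 vid001_audit.framemd5</code></p>
<p>If the checksum lines are identical (the commented header lines may differ between ffmpeg versions), the video frames are unchanged.</p>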
</div>
</div>
@ -164,11 +162,11 @@ Change the above data-target field, the button text, and the below div class (th
<div class="modal-content">
<div class="well">
<h3>Transcode into a deinterlaced Apple ProRes LT</h3>
<p><code>ffmpeg -i input.mov -c:v prores -profile:v 1 -c:a pcm_s16le -vf yadif output.mov</code></p>
<p>This command transcodes an input file (input.mov) into a deinterlaced Apple ProRes LT .mov file with 16-bit linear PCM encoded audio. The file is deinterlaced using the yadif (Yet Another De-Interlacing Filter) command.</p>
<p><code>ffmpeg -i <i>input_file</i> -c:v prores -profile:v 1 -vf yadif -c:a pcm_s16le <i>output_file</i>.mov</code></p>
<p>This command transcodes an input file into a deinterlaced Apple ProRes 422 LT file with 16-bit linear PCM encoded audio. The file is deinterlaced using the yadif filter (Yet Another De-Interlacing Filter).</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-c:v prores</dt><dd>Tells ffmpeg to transcode the video stream into Apple ProRes 422</dd>
<dt>-profile:v <i>1</i></dt><dd>Declares the ProRes profile you want to use. The profiles are explained below:
<ul>
@ -177,9 +175,9 @@ Change the above data-target field, the button text, and the below div class (th
<li>2 = ProRes 422 (Standard)</li>
<li>3 = ProRes 422 (HQ)</li>
</ul></dd>
<dt>-c:a pcm_s16le</dt><dd>Tells ffmpeg to encode the audio stream in 16-bit linear PCM</dd>
<dt>-vf yadif</dt><dd>Runs a deinterlacing video filter (yet another deinterlacing filter) on the new file</dd>
<dt><i>output file</i></dt><dd>path, name and extension of the output file</dd>
<dt>-c:a pcm_s16le</dt><dd>Tells ffmpeg to encode the audio stream in 16-bit linear PCM</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
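<p>For example (hypothetical filenames), the same command with the HQ profile selected instead of LT would be:</p>
<p><code>ffmpeg -i capture01.mov -c:v prores -profile:v 3 -vf yadif -c:a pcm_s16le capture01_prores_hq.mov</code></p>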
</div>
</div>
@ -194,13 +192,12 @@ Change the above data-target field, the button text, and the below div class (th
<div class="modal-content">
<div class="well">
<h3>One thumbnail</h3>
<p><code>ffmpeg -i [file path] -ss 00:00:20 -f image2 -vframes 1 thumb.png</code></p>
<p><code>ffmpeg -i <i>input_file</i> -ss 00:00:20 -vframes 1 thumb.png</code></p>
<p>This command will grab a thumbnail 20 seconds into the video.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-ss <i>00:00:20</i></dt><dd>seeks video file to 20 seconds into the video</dd>
<dt>-f image2</dt><dd>Forces the image2 muxer, which writes the output as an image file.</dd>
<dt>-vframes <i>1</i></dt><dd>sets the number of frames (in this example, one frame)</dd>
<dt><i>output file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
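<p>For example (hypothetical filenames), a JPEG thumbnail taken one minute into the video would be:</p>
<p><code>ffmpeg -i input.mov -ss 00:01:00 -vframes 1 thumb.jpg</code></p>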
@ -217,15 +214,14 @@ Change the above data-target field, the button text, and the below div class (th
<div class="modal-content">
<div class="well">
<h3>Many thumbnails</h3>
<p><code>ffmpeg -i {path/inputfile.extension} -f image2 -vf fps=fps=1/60 out%d.png</code></p>
<p><code>ffmpeg -i <i>input_file</i> -vf fps=1/60 out%d.png</code></p>
<p>This will grab a thumbnail every minute and output sequential png files.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-ss <i>00:00:20</i></dt><dd>seeks video file to 20 seconds into the video</dd>
<dt>-f image2</dt><dd>Forces the image2 muxer, which writes the output as image files.</dd>
<dt>-vf fps=fps=1/60</dt><dd>-vf is an alias for -filter:v, which creates a filtergraph to use for the streams. The rest of the command identifies filtering by frames per second, and sets the frames per second at 1/60 (which is one per minute).</dd>
<dt><i>output file</i></dt><dd>path, name and extension of the output file. In the example out%d.png, %d is a number pattern (d is for digit) that increments with each frame (out1.png, out2.png, out3.png…).</dd>
<dt>-vf fps=1/60</dt><dd>-vf is an alias for -filter:v, which creates a filtergraph to use for the streams. The rest of the command identifies filtering by frames per second, and sets the frames per second at 1/60 (which is one per minute). Omitting this will output all frames from the video</dd>
<dt><i>output file</i></dt><dd>path, name and extension of the output file. In the example out%d.png, %d is a number pattern (d is for digit) that increments with each frame (out1.png, out2.png, out3.png…). You may also choose a pattern like out%04d.png, which gives 4 digits with leading zeros (out0001.png, out0002.png, out0003.png, …).</dd>
</dl>
</div>
</div>
@ -233,14 +229,13 @@ Change the above data-target field, the button text, and the below div class (th
</div>
<!-- ends Multi thumbnail -->
<!-- Generate thumbnails -->
<span data-toggle="modal" data-target=".thumbnails"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Generate thumbnails from a video at regular intervals">Generate thumbnails</button></span>
<div class="modal fade thumbnails" tabindex="-1" role="dialog" aria-labelledby="myLargeModalLabel">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3> Generate thumbnails from a video at regular intervals</h3>
<h3>Generate thumbnails from a video at regular intervals</h3>
<p><code>ffmpeg -i <i>input_file</i> -ss <i>00:12.235</i> -vframes 1 <i>output_file</i></code></p>
<p>Create one thumbnail in JPEG format from a video file at a specific time, in this example 12.235 seconds in (0 hours, 0 minutes, 12 seconds, 235 milliseconds).</p>
<dl>
@ -264,15 +259,15 @@ Change the above data-target field, the button text, and the below div class (th
<div class="well">
<h3>Pull specs from video file</h3>
<p><code>ffprobe -i <i>input_file</i> -show_format -show_streams -show_data -print_format xml</code></p>
<p>This command extracts technical metadata from a video file and displays it in xml. </p>
<p>ffmpeg documentation on ffprobe (full list of flags, commands, <a href="https://www.ffmpeg.org/ffprobe.html" target="_blank">www.ffmpeg.org/ffprobe.html</a>) </p>
<p>This command extracts technical metadata from a video file and displays it in xml.</p>
<p>ffmpeg documentation on ffprobe (full list of flags, commands, <a href="https://www.ffmpeg.org/ffprobe.html" target="_blank">www.ffmpeg.org/ffprobe.html</a>)</p>
<dl>
<dt>ffprobe</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-show_format</dt><dd>outputs file container information</dd>
<dt>-show_streams</dt><dd>outputs audio and video codec information</dd>
<dt>-show_data</dt><dd>adds “hexdump” to show_streams command output</dd>
<dt>-print_format</dt><dd>Set the output printing format (in this example “xml”; other formats are “json” and “flat”)</dd>
<dt>-show_data</dt><dd>adds a short “hexdump” to show_streams command output</dd>
<dt>-print_format</dt><dd>Set the output printing format (in this example “xml”; other formats include “json” and “flat”)</dd>
</dl>
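<p>To keep the report alongside the video (hypothetical filenames), the XML written to standard output can simply be redirected into a sidecar file:</p>
<p><code>ffprobe -i input.mov -show_format -show_streams -show_data -print_format xml > input_metadata.xml</code></p>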
</div>
</div>
@ -287,9 +282,9 @@ Change the above data-target field, the button text, and the below div class (th
<div class="modal-content">
<div class="well">
<h3>Join files together</h3>
<p><code>ffmpeg -f concat -i mylist.txt -c:a copy -c:v copy <i>output_file</i></code></p>
<p>This command takes two or more files of the same file type and joins them together to make a single file. All that the program needs is a text file with a list specifying the files that should be joined. However, it only works properly if the files to be combined have the exact same codec and technical specifications. Be careful: ffmpeg may appear to have successfully joined two video files with different codecs, but may only bring over the audio from the second file or have other weird behaviors. Don't use this command for joining files with different codecs and technical specs and always preview your resulting video file!</p>
<p>ffmpeg documentation on concatenating files (full list of flags, commands, <a href="https://trac.ffmpeg.org/wiki/Concatenate">https://trac.ffmpeg.org/wiki/Concatenate</a>) </p>
<p><code>ffmpeg -f concat -i mylist.txt -c copy <i>output_file</i></code></p>
<p>This command takes two or more files of the same file type and joins them together to make a single file. All that the program needs is a text file with a list specifying the files that should be joined. However, it only works properly if the files to be combined have the exact same codec and technical specifications. Be careful: ffmpeg may appear to have successfully joined two video files with different codecs, but may only bring over the audio from the second file or have other weird behaviors. Don't use this command for joining files with different codecs and technical specs and always preview your resulting video file!</p>
<p>ffmpeg documentation on concatenating files (full list of flags, commands, <a href="https://trac.ffmpeg.org/wiki/Concatenate">https://trac.ffmpeg.org/wiki/Concatenate</a>)</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-f concat</dt><dd>forces ffmpeg to concatenate the files and to keep the same file format</dd>
@ -298,8 +293,7 @@ Change the above data-target field, the button text, and the below div class (th
path_name_and_extension_to_the_second_file
. . .
path_name_and_extension_to_the_last_file</i></pre></dd>
<dt>-c:a copy</dt><dd>the audio codec is copied</dd>
<dt>-c:v copy</dt><dd>the video codec is copied</dd>
<dt>-c copy</dt><dd>use stream copy mode to re-mux instead of re-encode</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
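<p>Note that the concat demuxer expects each line of the list file to use the <code>file</code> directive, so a mylist.txt with hypothetical filenames would look like:</p>
<pre>file 'first_file.mov'
file 'second_file.mov'
file 'last_file.mov'</pre>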
</div>
@ -315,14 +309,13 @@ path_name_and_extension_to_the_last_file</i></pre></dd>
<div class="modal-content">
<div class="well">
<h3>Excerpt from beginning</h3>
<p><code>ffmpeg -i <i>input_file</i> -t <i>5</i> -c:v copy -c:a copy <i>output_file</i></code></p>
<p><code>ffmpeg -i <i>input_file</i> -t <i>5</i> -c copy <i>output_file</i></code></p>
<p>This command captures a certain portion of a video file, starting from the beginning and continuing for the amount of time (in seconds) specified in the script. This can be used to create a preview file, or to remove unwanted content from the end of the file. To be more specific, use timecode, such as 00:00:05.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-t <i>5</i></dt><dd>Tells ffmpeg to stop copying from the input file after a certain time, and specifies the number of seconds after which to stop copying. In this case, 5 seconds is specified.</dd>
<dt>-c:a copy</dt><dd>the audio codec is copied</dd>
<dt>-c:v copy</dt><dd>the video codec is copied</dd>
<dt>-c copy</dt><dd>use stream copy mode to re-mux instead of re-encode</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
</div>
@ -338,15 +331,14 @@ path_name_and_extension_to_the_last_file</i></pre></dd>
<div class="modal-content">
<div class="well">
<h3>Excerpt from middle</h3>
<p><code>ffmpeg -i <i>input_file</i> -ss <i>5</i> -t <i>10</i> -c:v copy -c:a copy <i>output_file</i></code></p>
<p><code>ffmpeg -i <i>input_file</i> -ss <i>5</i> -t <i>10</i> -c copy <i>output_file</i></code></p>
<p>This command captures a certain portion of a video file, starting from a designated point in the file and taking an excerpt as long as the amount of time (in seconds) specified in the script. This can be used to create a preview or clip out a desired segment. To be more specific, use timecode, such as 00:00:05.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-ss <i>5</i></dt><dd>Tells ffmpeg what timecode in the file to look for to start copying, and specifies the number of seconds into the video that ffmpeg should start copying. To be more specific, you can use timecode such as 00:00:05.</dd>
<dt>-t <i>10</i></dt><dd>Tells ffmpeg to stop copying from the input file after a certain time, and specifies the number of seconds after which to stop copying. In this case, 10 seconds is specified.</dd>
<dt>-c:a copy</dt><dd>the audio codec is copied</dd>
<dt>-c:v copy</dt><dd>the video codec is copied</dd>
<dt>-c copy</dt><dd>use stream copy mode to re-mux instead of re-encode</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
</div>
@ -362,14 +354,13 @@ path_name_and_extension_to_the_last_file</i></pre></dd>
<div class="modal-content">
<div class="well">
<h3>Excerpt to end</h3>
<p><code>ffmpeg -i <i>input_file</i> -ss <i>5</i> -c:v copy -c:a copy <i>output_file</i></code></p>
<p><code>ffmpeg -i <i>input_file</i> -ss <i>5</i> -c copy <i>output_file</i></code></p>
<p>This command copies a video file starting from a specified time, removing the first few seconds from the output. This can be used to create an excerpt, or remove unwanted content from the beginning of a video file.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-ss <i>5</i></dt><dd>Tells ffmpeg what timecode in the file to look for to start copying, and specifies the number of seconds into the video that ffmpeg should start copying. To be more specific, you can use timecode such as 00:00:05.</dd>
<dt>-c:a copy</dt><dd>the audio codec is copied</dd>
<dt>-c:v copy</dt><dd>the video codec is copied</dd>
<dt>-c copy</dt><dd>use stream copy mode to re-mux instead of re-encode</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
</div>
@ -385,14 +376,14 @@ path_name_and_extension_to_the_last_file</i></pre></dd>
<div class="modal-content">
<div class="well">
<h3>Split audio and video tracks</h3>
<p><code>ffmpeg -i <i>input_file</i> -map <i>0:0 video_output_file</i> -map <i>0:1 audio_output_file</i></code></p>
<p>This command splits the original input file into a video and audio stream. The -map command identifies which streams are mapped to which file. To ensure that you're mapping the right streams to the right file, run ffprobe before writing the script to identify which stream is 0:0 and which is 0:1.</p>
<p><code>ffmpeg -i <i>input_file</i> -map <i>0:v video_output_file</i> -map <i>0:a audio_output_file</i></code></p>
<p>This command splits the original input file into a video and audio stream. The -map command identifies which streams are mapped to which file. To ensure that you're mapping the right streams to the right file, run ffprobe before writing the script to identify which streams are desired.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-map <i>0:0</i></dt><dd>grabs the first stream (0:0) and maps it into:</dd>
<dt>-map <i>0:v:0</i></dt><dd>grabs the first video stream and maps it into:</dd>
<dt><i>video_output_file</i></dt><dd>path, name and extension of the video output file</dd>
<dt>-map <i>0:1</i></dt><dd>grabs the second stream (0:1) and maps it into:</dd>
<dt>-map <i>0:a:0</i></dt><dd>grabs the first audio stream and maps it into:</dd>
<dt><i>audio_output_file</i></dt><dd>path, name and extension of the audio output file</dd>
</dl>
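<p>A sketch with hypothetical filenames, keeping the video as-is and writing the audio out as WAV (assuming a single audio stream):</p>
<p><code>ffmpeg -i input.mov -map 0:v -c:v copy video_only.mov -map 0:a audio_only.wav</code></p>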
</div>
@ -402,7 +393,7 @@ path_name_and_extension_to_the_last_file</i></pre></dd>
<!-- ends Split audio and video tracks -->
<!-- Transcode to H.264 -->
<span data-toggle="modal" data-target=".transcode_h264"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transcode to an H.264 access file">Transcode to h.264</button></span>
<span data-toggle="modal" data-target=".transcode_h264"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transcode to an H.264 access file">Transcode to H.264</button></span>
<div class="modal fade transcode_h264" tabindex="-1" role="dialog" aria-labelledby="myLargeModalLabel">
<div class="modal-dialog modal-lg">
<div class="modal-content">
@ -421,9 +412,9 @@ path_name_and_extension_to_the_last_file</i></pre></dd>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -preset veryslow -crf 18 -c:a copy <i>output_file</i></code></p>
<dl>
<dt>-preset <i>veryslow</i></dt><dd>This option tells ffmpeg to use the slowest preset possible for the best compression quality.</dd>
<dt>-crf <i>18</i></dt><dd>Specifying a lower CRF will make a larger file with better visual quality.</dd>
<dt>-crf <i>18</i></dt><dd>Specifying a lower CRF will make a larger file with better visual quality. 18 is often considered a “visually lossless” compression.</dd>
</dl>
<p>libx264 also defaults to 4:2:2 chroma subsampling. Some versions of QuickTime can't read H.264 files in 4:2:2. In order to allow the video to play in all QuickTime players, you can specify 4:2:0 chroma subsampling instead:</p>
<p>libx264 will use a chroma subsampling scheme that is the closest match to that of the input. This can result in YUV 4:2:0, 4:2:2, or 4:4:4 chroma subsampling. QuickTime and most other non-FFmpeg-based players can't decode H.264 files that are not 4:2:0. In order to allow the video to play in all players, you can specify 4:2:0 chroma subsampling:</p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -pix_fmt yuv420p -preset veryslow -crf 18 -c:a copy <i>output_file</i></code></p>
<dl>
<dt>-pix_fmt <i>yuv420p</i></dt><dd>Specifies a pixel format of YUV 4:2:0 to allow the file to play in a standard QuickTime player.</dd>
@ -486,15 +477,12 @@ path_name_and_extension_to_the_last_file</i></pre></dd>
<div class="modal-content">
<div class="well">
<h3>Upscaled, Pillar-boxed HD H.264 Access Files from SD NTSC source</h3>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -pix_fmt yuv420p -filter:v "scale=1440:1080, pad=1920:1080:240:0" -vf yadif <i>output_file</i></code></p>
<p>To pad without computing the offsets by hand, just put the input video in the middle of the output: <code>pad=1920:1080:(ow-iw)/2:(oh-ih)/2</code></p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -filter:v "yadif,scale=1440:1080:flags=lanczos,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,format=yuv420p" <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>Calls the program ffmpeg</dd>
<dt>-i</dt><dd>for input video file and audio file</dd>
<dt>-c:v libx264</dt><dd>encodes video stream with libx264 (h264)</dd>
<dt>-pix_fmt yuv420p</dt><dd> specifies a pixel format of YUV 4:2:0</dd>
<dt>-filter:v</dt><dd>calls an option to apply a filter to the video stream. scale=1440:1080, pad=1920:1080:240:0": does the math! resizes the video frame then pads the area around the 4:3 aspect to complete 16:9.</dd>
<dt>-vf yadif</dt><dd>deinterlaces the file (optional)</dd>
<dt>-filter:v</dt><dd>calls an option to apply filtering to the video stream: yadif deinterlaces; scale and pad do the math, resizing the video frame and then padding the area around the 4:3 picture to fill 16:9; flags=lanczos selects the Lanczos scaling algorithm, which is slower but better than the default bilinear; finally, format specifies a pixel format of YUV 4:2:0. The same scale filter can also downscale a larger picture to HD.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
</div>
@ -511,12 +499,14 @@ path_name_and_extension_to_the_last_file</i></pre></dd>
<div class="well">
<h3>Transform 4:3 aspect ratio into 16:9 with pillarbox</h3>
<p>Transform a video file with 4:3 aspect ratio into a video file with 16:9 aspect ratio by correct pillarboxing.</p>
<p><code>ffmpeg -i <i>input_file</i> -filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2" <i>output_file</i></code></p>
<p><code>ffmpeg -i <i>input_file</i> -filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2" -c:a copy <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"</dt><dd>video padding<br/>This resolution independent formula is actually padding any aspect ratio into 16:9 by pillarboxing, because the video filter uses relative values for input width (iw), input height (ih), output width (ow) and output height (oh).</dd>
<dt><i>output_file</i>.mpg</dt><dd>path and name of the output file</dd>
<dt>-c:a copy</dt><dd>copies the audio stream without re-encoding it<br/>
For silent videos you can replace <code>-c:a copy</code> with <code>-an</code>.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
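<p>As a worked example: for a 768×576 (4:3) input, the formula gives a width of 576 × 16/9 = 1024, so the output is 1024×576 with the original picture centered by a pad of (1024 − 768) / 2 = 128 pixels on each side.</p>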
</div>
</div>
@ -532,12 +522,14 @@ path_name_and_extension_to_the_last_file</i></pre></dd>
<div class="well">
<h3>Transform 16:9 aspect ratio video into 4:3 with letterbox</h3>
<p>Transform a video file with 16:9 aspect ratio into a video file with 4:3 aspect ratio by correct letterboxing.</p>
<p><code>ffmpeg -i <i>input_file</i> -filter:v "pad=iw:iw*3/4:(ow-iw)/2:(oh-ih)/2" <i>output_file</i></code></p>
<p><code>ffmpeg -i <i>input_file</i> -filter:v "pad=iw:iw*3/4:(ow-iw)/2:(oh-ih)/2" -c:a copy <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-filter:v "pad=iw:iw*3/4:(ow-iw)/2:(oh-ih)/2"</dt><dd>video padding<br/>This resolution independent formula is actually padding any aspect ratio into 4:3 by letterboxing, because the video filter uses relative values for input width (iw), input height (ih), output width (ow) and output height (oh).</dd>
<dt><i>output_file</i>.mpg</dt><dd>path and name of the output file</dd>
<dt>-c:a copy</dt><dd>copies the audio stream without re-encoding it<br/>
For silent videos you can replace <code>-c:a copy</code> with <code>-an</code>.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
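<p>As a worked example: for a 1920×1080 (16:9) input, the formula gives a height of 1920 × 3/4 = 1440, so the output is 1920×1440 with the original picture centered by a pad of (1440 − 1080) / 2 = 180 pixels above and below.</p>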
</div>
</div>
@ -545,6 +537,28 @@ path_name_and_extension_to_the_last_file</i></pre></dd>
</div>
<!-- ends 16:9 to 4:3 -->
<!-- Flip image -->
<span data-toggle="modal" data-target=".flip_image"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Flip the image">Flip image</button></span>
<div class="modal fade flip_image" tabindex="-1" role="dialog" aria-labelledby="myLargeModalLabel">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Flip the video image horizontally and/or vertically</h3>
<p><code>ffmpeg -i <i>input_file</i> -filter:v "hflip,vflip" -c:a copy <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-filter:v "hflip,vflip"</dt><dd>flips the image horizontally and vertically<br/>By using only one of the parameters hflip or vflip for filtering the image is flipped on that axis only. The quote marks are not mandatory.</dd>
<dt>-c:a copy</dt><dd>copies the audio stream without re-encoding it<br/>
For silent videos you can replace <code>-c:a copy</code> with <code>-an</code>.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
</div>
</div>
</div>
</div>
<!-- ends Flip image -->
</div> <!-- end "well col-md-6 col-md-offset-2" -->
</div> <!-- row -->