mirror of https://github.com/amiaopensource/ffmprovisr.git
synced 2025-10-21 21:29:18 +02:00
Compare commits
49 Commits
v2017-03-0...v2017-04-1
SHA1
b2a04d138f
b53f6c9984
64a362314c
2a71179776
77e346c067
b995fb05c5
d30741e378
e3b01e2aa8
750a763157
0a6e5a4a7a
54a8ab6057
e762c7dc42
cb7f001444
89039f55b3
59e6c6d879
195bc5446e
a1cc5a4428
58663a869f
05d16367f0
9477bcfe0a
ad439d3b78
736b01e426
57166fe61d
ec26d2038a
321d998b5a
009670eed1
d3a941a725
d334d7b4de
d08cf349f6
b5b06021b0
e0ceeb0d73
dc47dbc618
28ae979652
0e2a2c2bfe
388b107a3f
117ebf506d
9356f9af93
4d2d5c7c81
db202300e4
bf55928a63
6cea5862d6
dca518a9b6
45828357c9
10765da46a
ab2e6fb789
e3e48ded1a
9d58f313d4
1371b380c9
dcb74e8da6
index.html (628)
@@ -30,12 +30,12 @@
|
||||
<p>For instructions on how to install FFmpeg on Mac, Linux, and Windows, refer to Reto Kromer’s <a href="https://avpres.net/FFmpeg/#ch1" target="_blank">installation instructions</a>.</p>
|
||||
<p>For Bash and command line basics, try the <a href="https://learnpythonthehardway.org/book/appendixa.html" target="_blank">Command Line Crash Course</a>. For a little more context presented in an ffmprovisr style, try <a href="http://explainshell.com/" target="_blank">explainshell.com</a>!</p>
|
||||
<h5>License</h5>
|
||||
<p><a target="_blank" href="https://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png"></a><br>
|
||||
This work is licensed under a <a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.</p>
|
||||
<p><a href="https://creativecommons.org/licenses/by-sa/4.0/" target="_blank"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png"></a><br>
|
||||
This work is licensed under a <a href="https://creativecommons.org/licenses/by-sa/4.0/" target="_blank">Creative Commons Attribution-ShareAlike 4.0 International License</a>.</p>
|
||||
<h5>Sister projects</h5>
|
||||
<p><a target="_blank" href="http://dd388.github.io/crals/">Script Ahoy</a>: Community Resource for Archivists and Librarians Scripting</p>
|
||||
<p><a target="_blank" href="https://datapraxis.github.io/sourcecaster/">The Sourcecaster</a>: an app that helps you use the command line to work through common challenges that come up when working with digital primary sources.</p>
|
||||
<p><a target="_blank" href="https://amiaopensource.github.io/cable-bible/">Cable Bible</a>: A Guide to Cables and Connectors Used for Audiovisual Tech</p>
|
||||
<p><a href="http://dd388.github.io/crals/" target="_blank">Script Ahoy</a>: Community Resource for Archivists and Librarians Scripting</p>
|
||||
<p><a href="https://datapraxis.github.io/sourcecaster/" target="_blank">The Sourcecaster</a>: an app that helps you use the command line to work through common challenges that come up when working with digital primary sources.</p>
|
||||
<p><a href="https://amiaopensource.github.io/cable-bible/" target="_blank">Cable Bible</a>: A Guide to Cables and Connectors Used for Audiovisual Tech</p>
|
||||
</div>
|
||||
|
||||
<div class="well col-md-8 col-md-offset-0">
|
||||
@@ -43,58 +43,42 @@
|
||||
<h3>What do you want to do?</h3>
|
||||
<h6>Select from the following.</h6>
|
||||
</div>
|
||||
<div class="well"><h4>Change formats</h4>
|
||||
|
||||
<!-- WAV to MP3 -->
|
||||
<span data-toggle="modal" data-target="#wav_to_mp3"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Convert WAV to MP3">WAV to MP3</button></span>
|
||||
<div id="wav_to_mp3" class="modal fade" tabindex="-1" role="dialog">
|
||||
<div class="well">
|
||||
<h4>Change container (rewrap)</h4>
|
||||
|
||||
<!-- MKV to MP4 -->
|
||||
<span data-toggle="modal" data-target="#mkv_to_mp4"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Convert Matroska (MKV) to MP4">MKV to MP4</button></span>
|
||||
<div id="mkv_to_mp4" class="modal fade" tabindex="-1" role="dialog">
|
||||
<div class="modal-dialog modal-lg">
|
||||
<div class="modal-content">
|
||||
<div class="well">
|
||||
<h3>WAV to MP3</h3>
|
||||
<p><code>ffmpeg -i <i>input_file</i>.wav -write_id3v1 1 -id3v2_version 3 -dither_method modified_e_weighted -out_sample_rate 48k -qscale:a 1 <i>output_file</i>.mp3</code></p>
|
||||
<p>This will convert your WAV files to MP3s.</p>
|
||||
<h3>MKV to MP4</h3>
|
||||
<p><code>ffmpeg -i <i>input_file</i>.mkv -c:v copy -c:a aac <i>output_file</i>.mp4</code></p>
|
||||
<p>This will convert your Matroska (MKV) files to MP4 files.</p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-i <i>input_file</i></dt><dd>path and name of the input file</dd>
|
||||
<dt>-write_id3v1 <i>1</i></dt><dd>Write ID3v1 tag. This will add metadata to the old MP3 format, assuming you’ve embedded metadata into the WAV file.</dd>
|
||||
<dt>-id3v2_version <i>3</i></dt><dd>Write ID3v2 tag. This will add metadata to a newer MP3 format, assuming you’ve embedded metadata into the WAV file.</dd>
|
||||
<dt>-dither_method <i>modified_e_weighted</i></dt><dd>Dither makes sure you don’t unnecessarily truncate the dynamic range of your audio.</dd>
|
||||
<dt>-out_sample_rate <i>48k</i></dt><dd>Sets the audio sampling frequency to 48 kHz. This can be omitted to use the same sampling frequency as the input.</dd>
|
||||
<dt>-qscale:a <i>1</i></dt><dd>This sets the encoder to use a constant quality with a variable bitrate of between 190-250kbit/s. If you would prefer to use a constant bitrate, this could be replaced with <code>-b:a 320k</code> to set to the maximum bitrate allowed by the MP3 format. For more detailed discussion on variable vs constant bitrates see <a href="https://trac.ffmpeg.org/wiki/Encode/MP3" target="_blank">here.</a></dd>
|
||||
<dt><i>output_file</i></dt><dd>path and name of the output file</dd>
|
||||
<dt>-i <i>input_file</i></dt><dd>path and name of the input file<br>
|
||||
The extension for the Matroska container is <code>.mkv</code>.</dd>
|
||||
<dt>-c:v copy</dt><dd>copies the video stream as-is, without re-encoding</dd>
|
||||
<dt>-c:a aac</dt><dd>re-encodes using the AAC audio codec<br>
|
||||
Note that sadly MP4 cannot contain sound encoded by a PCM (Pulse-Code Modulation) audio codec.<br>
|
||||
For silent videos you can replace <code>-c:a aac</code> by <code>-an</code>.</dd>
|
||||
<dt><i>output_file</i></dt><dd>path and name of the output file<br>
|
||||
The extension for the MP4 container is <code>.mp4</code>.</dd>
|
||||
</dl>
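<p>If you would rather use the constant bitrate mentioned above, a minimal sketch of the same WAV to MP3 command with <code>-b:a 320k</code> in place of <code>-qscale:a 1</code> (the filename <i>interview</i> is just a placeholder) would be:</p>
<p><code>ffmpeg -i <i>interview</i>.wav -write_id3v1 1 -id3v2_version 3 -dither_method modified_e_weighted -out_sample_rate 48k -b:a 320k <i>interview</i>.mp3</code></p>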
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<!-- ends WAV to MP3 -->
|
||||
<!-- ends MKV to MP4 -->
|
||||
|
||||
</div>
|
||||
<!-- ends well -->
|
||||
|
||||
<!-- WAV to AAC/MP4 -->
|
||||
<span data-toggle="modal" data-target="#wav_to_mp4"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Convert WAV to AAC/MP4">WAV to AAC/MP4</button></span>
|
||||
<div id="wav_to_mp4" class="modal fade" tabindex="-1" role="dialog">
|
||||
<div class="modal-dialog modal-lg">
|
||||
<div class="modal-content">
|
||||
<div class="well">
|
||||
<h3>WAV to AAC/MP4</h3>
|
||||
<p><code>ffmpeg -i <i>input_file</i>.wav -c:a aac -b:a 128k -dither_method modified_e_weighted -ar 44100 <i>output_file</i>.mp4</code></p>
|
||||
<p>This will convert your WAV file to AAC/MP4.</p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-i <i>input_file</i></dt><dd>path and name of the input file</dd>
|
||||
<dt>-c:a aac</dt><dd>sets the audio codec to AAC</dd>
|
||||
<dt>-b:a 128k</dt><dd>sets the bitrate of the audio to 128k</dd>
|
||||
<dt>-dither_method modified_e_weighted</dt><dd>Dither makes sure you don’t unnecessarily truncate the dynamic range of your audio.</dd>
|
||||
<dt>-ar 44100</dt><dd>sets the audio sampling frequency to 44100 Hz, or 44.1 kHz, or “CD quality”</dd>
|
||||
<dt><i>output_file</i></dt><dd>path and name of the output file</dd>
|
||||
</dl>
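<p>As a variation, omitting <code>-ar</code> keeps the sampling frequency of the source WAV; a sketch of such a command, with a hypothetical filename and a slightly higher bitrate, might look like:</p>
<p><code>ffmpeg -i <i>lecture</i>.wav -c:a aac -b:a 192k -dither_method modified_e_weighted <i>lecture</i>.mp4</code></p>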
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<!-- ends WAV to AAC/MP4 -->
|
||||
<h4>Change codec (transcode)</h4>
|
||||
|
||||
<!-- Transcode to ProRes -->
|
||||
<span data-toggle="modal" data-target="#to_prores"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transcode to deinterlaced Apple ProRes LT">Transcode to ProRes</button></span>
|
||||
@@ -153,9 +137,12 @@
|
||||
<p>In order to use the same basic command to make a higher quality file, you can add some of these presets:</p>
|
||||
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -pix_fmt yuv420p -preset veryslow -crf 18 -c:a copy <i>output_file</i></code></p>
|
||||
<dl>
|
||||
<dt>-preset <i>veryslow</i></dt><dd>This option tells ffmpeg to use the slowest preset possible for the best compression quality.</dd>
|
||||
<dt>-crf <i>18</i></dt><dd>Specifying a lower CRF will make a larger file with better visual quality. 18 is often considered a “visually lossless” compression.</dd>
|
||||
<dt>-preset <i>veryslow</i></dt><dd>This option tells ffmpeg to use the slowest preset possible for the best compression quality.<br>
|
||||
Available presets, from slowest to fastest, are: <code>veryslow</code>, <code>slower</code>, <code>slow</code>, <code>medium</code>, <code>fast</code>, <code>faster</code>, <code>veryfast</code>, <code>superfast</code>, <code>ultrafast</code>.</dd>
|
||||
<dt>-crf <i>18</i></dt><dd>Specifying a lower CRF will make a larger file with better visual quality. For H.264 files being encoded with a 4:2:0 chroma subsampling scheme (i.e., using <code>-pix_fmt yuv420p</code>), the scale ranges between 0-51, with 0 being lossless and 51 the worst possible quality.<br>
|
||||
If no crf is specified, <code>libx264</code> will use a default value of 23. 18 is often considered a “visually lossless” compression.</dd>
|
||||
</dl>
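<p>As an illustration of the trade-off, if encoding time matters more than squeezing out the last bit of compression, the same CRF can be paired with a quicker preset; a sketch (not a recommendation) would be:</p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -pix_fmt yuv420p -preset medium -crf 18 -c:a copy <i>output_file</i></code></p>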
|
||||
<p>For more information, see the <a href="https://trac.ffmpeg.org/wiki/Encode/H.264" target="_blank">FFmpeg and H.264 Encoding Guide</a> on the ffmpeg wiki.</p>
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
</div>
|
||||
@@ -195,106 +182,6 @@
|
||||
</div>
|
||||
<!-- ends H.264 from DCP -->
|
||||
|
||||
<!-- NTSC to H.264 -->
|
||||
<span data-toggle="modal" data-target="#ntsc_to_h264"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Upscaled, pillar-boxed HD H.264 access files from SD NTSC source">NTSC to H.264</button></span>
|
||||
<div id="ntsc_to_h264" class="modal fade" tabindex="-1" role="dialog">
|
||||
<div class="modal-dialog modal-lg">
|
||||
<div class="modal-content">
|
||||
<div class="well">
|
||||
<h3>Upscaled, Pillar-boxed HD H.264 Access Files from SD NTSC source</h3>
|
||||
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -filter:v "yadif,scale=1440:1080:flags=lanczos,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,format=yuv420p" <i>output_file</i></code></p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>Calls the program ffmpeg</dd>
|
||||
<dt>-i</dt><dd>path, name and extension of the input file</dd>
|
||||
<dt>-c:v libx264</dt><dd>encodes video stream with libx264 (h264)</dd>
|
||||
<dt>-filter:v</dt><dd>applies a filter chain to the video stream: yadif deinterlaces; scale resizes the video frame and pad fills in the area around the 4:3 picture to complete the 16:9 frame; flags=lanczos selects the Lanczos scaling algorithm, which is slower but better than the default bilinear; finally, format specifies a pixel format of YUV 4:2:0. The same scale filter also downscales a larger image into HD.</dd>
|
||||
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
|
||||
</dl>
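<p>The command above leaves the audio handling to FFmpeg's defaults; if the access file should carry explicitly transcoded AAC audio, a sketch of that variant (an assumption, not part of the original recipe) might be:</p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -filter:v "yadif,scale=1440:1080:flags=lanczos,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,format=yuv420p" -c:a aac <i>output_file</i></code></p>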
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<!-- ends NTSC to H.264 -->
|
||||
|
||||
<!-- 4:3 to 16:9 -->
|
||||
<span data-toggle="modal" data-target="#SD_HD"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transform 4:3 aspect ratio into 16:9 with pillarbox">4:3 to 16:9</button></span>
|
||||
<div id="SD_HD" class="modal fade" tabindex="-1" role="dialog">
|
||||
<div class="modal-dialog modal-lg">
|
||||
<div class="modal-content">
|
||||
<div class="well">
|
||||
<h3>Transform 4:3 aspect ratio into 16:9 with pillarbox</h3>
|
||||
<p>Transform a video file with 4:3 aspect ratio into a video file with 16:9 aspect ratio by correct pillarboxing.</p>
|
||||
<p><code>ffmpeg -i <i>input_file</i> -filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2" -c:a copy <i>output_file</i></code></p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
|
||||
<dt>-filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"</dt><dd>video padding<br>This resolution independent formula is actually padding any aspect ratio into 16:9 by pillarboxing, because the video filter uses relative values for input width (iw), input height (ih), output width (ow) and output height (oh).</dd>
|
||||
<dt>-c:a copy</dt><dd>copies the audio stream as-is, without re-encoding<br>
|
||||
For silent videos you can replace <code>-c:a copy</code> by <code>-an</code>.</dd>
|
||||
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
|
||||
</dl>
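<p>To make the padding formula concrete: for a 720×576 (4:3) input, <code>ih*16/9</code> evaluates to 1024, so the output is a 1024×576 frame with the original picture centred and black pillars of (1024 − 720) / 2 = 152 pixels on each side.</p>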
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<!-- ends 4:3 to 16:9 -->
|
||||
|
||||
<!-- SD to HD -->
|
||||
<span data-toggle="modal" data-target="#SD_HD_2"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transform SD to HD with pillarbox">SD to HD</button></span>
|
||||
<div id="SD_HD_2" class="modal fade" tabindex="-1" role="dialog">
|
||||
<div class="modal-dialog modal-lg">
|
||||
<div class="modal-content">
|
||||
<div class="well">
|
||||
<h3>Transform SD into HD with pillarbox</h3>
|
||||
<p>Transform an SD video file with 4:3 aspect ratio into an HD video file with 16:9 aspect ratio by correct pillarboxing.</p>
|
||||
<p><code>ffmpeg -i <i>input_file</i> -filter:v "colormatrix=bt601:bt709, scale=1440:1080:flags=lanczos, pad=1920:1080:240:0" -c:a copy <i>output_file</i></code></p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
|
||||
<dt>-filter:v "colormatrix=bt601:bt709, scale=1440:1080:flags=lanczos, pad=1920:1080:240:0"</dt><dd>set colour matrix, video scaling and padding<br>Three filters are applied:
|
||||
<ol>
|
||||
<li>The luma coefficients are modified from SD video (according to Rec. 601) to HD video (according to Rec. 709) by a colour matrix. Note that today Rec. 709 is often used for SD as well, in which case you may omit this filter.</li>
|
||||
<li>The scaling filter (<code>scale=1440:1080</code>) works for both upscaling and downscaling. We use the Lanczos scaling algorithm (<code>flags=lanczos</code>), which is slower but gives better results than the default bilinear algorithm.</li>
|
||||
<li>The padding filter (<code>pad=1920:1080:240:0</code>) completes the transformation from SD to HD.</li>
|
||||
</ol></dd>
|
||||
<dt>-c:a copy</dt><dd>copies the audio stream as-is, without re-encoding<br>
|
||||
For silent videos you can replace <code>-c:a copy</code> with <code>-an</code>.</dd>
|
||||
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
|
||||
</dl>
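<p>The horizontal offset of 240 in the padding filter comes from (1920 − 1440) / 2, which centres the scaled 1440×1080 picture inside the 1920×1080 frame. If the source is already flagged as Rec. 709 (see point 1 above), a sketch without the colour-matrix step would be:</p>
<p><code>ffmpeg -i <i>input_file</i> -filter:v "scale=1440:1080:flags=lanczos, pad=1920:1080:240:0" -c:a copy <i>output_file</i></code></p>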
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<!-- ends SD to HD -->
|
||||
|
||||
<!-- 16:9 to 4:3 -->
|
||||
<span data-toggle="modal" data-target="#HD_SD"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transform 16:9 aspect ratio video into 4:3 with letterbox">16:9 to 4:3</button></span>
|
||||
<div id="HD_SD" class="modal fade" tabindex="-1" role="dialog">
|
||||
<div class="modal-dialog modal-lg">
|
||||
<div class="modal-content">
|
||||
<div class="well">
|
||||
<h3>Transform 16:9 aspect ratio video into 4:3 with letterbox</h3>
|
||||
<p>Transform a video file with 16:9 aspect ratio into a video file with 4:3 aspect ratio by correct letterboxing.</p>
|
||||
<p><code>ffmpeg -i <i>input_file</i> -filter:v "pad=iw:iw*3/4:(ow-iw)/2:(oh-ih)/2" -c:a copy <i>output_file</i></code></p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
|
||||
<dt>-filter:v "pad=iw:iw*3/4:(ow-iw)/2:(oh-ih)/2"</dt><dd>video padding<br>
|
||||
This resolution independent formula is actually padding any aspect ratio into 4:3 by letterboxing, because the video filter uses relative values for input width (iw), input height (ih), output width (ow) and output height (oh).</dd>
|
||||
<dt>-c:a copy</dt><dd>copies the audio stream as-is, without re-encoding<br>
|
||||
For silent videos you can replace <code>-c:a copy</code> by <code>-an</code>.</dd>
|
||||
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
|
||||
</dl>
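<p>As a worked example: for a 1920×1080 (16:9) input, <code>iw*3/4</code> gives 1440, so the output is a 1920×1440 frame with letterbox bars of (1440 − 1080) / 2 = 180 pixels above and below the picture.</p>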
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<!-- ends 16:9 to 4:3 -->
|
||||
|
||||
<!-- Transcode to FFV1.mkv -->
|
||||
<span data-toggle="modal" data-target="#create_FFV1_mkv"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transcode your file with the FFV1 Version 3 Codec in a Matroska container">Create FFV1.mkv</button></span>
|
||||
<div id="create_FFV1_mkv" class="modal fade" tabindex="-1" role="dialog">
|
||||
@@ -327,55 +214,6 @@
|
||||
</div>
|
||||
<!-- ends Transcode to FFV1.mkv-->
|
||||
|
||||
<!-- Change display aspect ratio without re-encoding video-->
|
||||
<span data-toggle="modal" data-target="#change_DAR"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Change display aspect ratio without re-encoding">Change Display Aspect Ratio</button></span>
|
||||
<div id="change_DAR" class="modal fade" tabindex="-1" role="dialog">
|
||||
<div class="modal-dialog modal-lg">
|
||||
<div class="modal-content">
|
||||
<div class="well">
|
||||
<h3>Change Display Aspect Ratio without reencoding video</h3>
|
||||
<p><code>ffmpeg -i <i>input_file</i> -c:v copy -aspect 4:3 <i>output_file</i></code></p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
|
||||
<dt>-c:v copy</dt><dd>Copy all mapped video streams.</dd>
|
||||
<dt>-aspect 4:3</dt><dd>Change Display Aspect Ratio to <code>4:3</code>. Experiment with other aspect ratios such as <code>16:9</code>. If used together with <code>-c:v copy</code>, it will affect the aspect ratio stored at container level, but not the aspect ratio stored in encoded frames, if it exists.</dd>
|
||||
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
|
||||
</dl>
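<p>As an example of the 16:9 variant mentioned above, a sketch of the same container-level change would be:</p>
<p><code>ffmpeg -i <i>input_file</i> -c:v copy -aspect 16:9 <i>output_file</i></code></p>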
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<!-- ends Change display aspect ratio without re-encoding video -->
|
||||
|
||||
<!-- MKV to MP4 -->
|
||||
<span data-toggle="modal" data-target="#mkv_to_mp4"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Convert Matroska (MKV) to MP4">MKV to MP4</button></span>
|
||||
<div id="mkv_to_mp4" class="modal fade" tabindex="-1" role="dialog">
|
||||
<div class="modal-dialog modal-lg">
|
||||
<div class="modal-content">
|
||||
<div class="well">
|
||||
<h3>MKV to MP4</h3>
|
||||
<p><code>ffmpeg -i <i>input_file</i>.mkv -c:v copy -c:a aac <i>output_file</i>.mp4</code></p>
|
||||
<p>This will convert your Matroska (MKV) files to MP4 files.</p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-i <i>input_file</i></dt><dd>path and name of the input file<br>
|
||||
The extension for the Matroska container is <code>.mkv</code>.</dd>
|
||||
<dt>-c:v copy</dt><dd>copies the video stream as-is, without re-encoding</dd>
|
||||
<dt>-c:a aac</dt><dd>re-encodes using the AAC audio codec<br>
|
||||
Note that sadly MP4 cannot contain sound encoded by a PCM (Pulse-Code Modulation) audio codec.<br>
|
||||
For silent videos you can replace <code>-c:a aac</code> by <code>-an</code>.</dd>
|
||||
<dt><i>output_file</i></dt><dd>path and name of the output file<br>
|
||||
The extension for the MP4 container is <code>.mp4</code>.</dd>
|
||||
</dl>
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<!-- ends MKV to MP4 -->
|
||||
|
||||
<!-- Images to GIF -->
|
||||
<span data-toggle="modal" data-target="#img_to_gif"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Converts images to GIF">Images to GIF</button></span>
|
||||
<div id="img_to_gif" class="modal fade" tabindex="-1" role="dialog">
|
||||
@@ -384,7 +222,7 @@
|
||||
<div class="well">
|
||||
<h3>Images to GIF</h3>
|
||||
<p><code>ffmpeg -f image2 -framerate 9 -pattern_type glob -i <i>"input_image_*.jpg"</i> -vf scale=250x250 <i>output_file</i>.gif</code></p>
|
||||
<p>This will convert a series of image files into a gif.</p>
|
||||
<p>This will convert a series of image files into a GIF.</p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-f image2</dt><dd>forces input or output file format. <code>image2</code> specifies the image file demuxer.</dd>
|
||||
@@ -473,6 +311,188 @@
|
||||
</div>
|
||||
<!-- ends Transcode to H.265 -->
|
||||
|
||||
<p> </p>
|
||||
<!-- Here comes audio-only transcoding -->
|
||||
|
||||
<!-- WAV to MP3 -->
|
||||
<span data-toggle="modal" data-target="#wav_to_mp3"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Convert WAV to MP3">WAV to MP3</button></span>
|
||||
<div id="wav_to_mp3" class="modal fade" tabindex="-1" role="dialog">
|
||||
<div class="modal-dialog modal-lg">
|
||||
<div class="modal-content">
|
||||
<div class="well">
|
||||
<h3>WAV to MP3</h3>
|
||||
<p><code>ffmpeg -i <i>input_file</i>.wav -write_id3v1 1 -id3v2_version 3 -dither_method modified_e_weighted -out_sample_rate 48k -qscale:a 1 <i>output_file</i>.mp3</code></p>
|
||||
<p>This will convert your WAV files to MP3s.</p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-i <i>input_file</i></dt><dd>path and name of the input file</dd>
|
||||
<dt>-write_id3v1 <i>1</i></dt><dd>Write ID3v1 tag. This will add metadata to the old MP3 format, assuming you’ve embedded metadata into the WAV file.</dd>
|
||||
<dt>-id3v2_version <i>3</i></dt><dd>Write ID3v2 tag. This will add metadata to a newer MP3 format, assuming you’ve embedded metadata into the WAV file.</dd>
|
||||
<dt>-dither_method <i>modified_e_weighted</i></dt><dd>Dither makes sure you don’t unnecessarily truncate the dynamic range of your audio.</dd>
|
||||
<dt>-out_sample_rate <i>48k</i></dt><dd>Sets the audio sampling frequency to 48 kHz. This can be omitted to use the same sampling frequency as the input.</dd>
|
||||
<dt>-qscale:a <i>1</i></dt><dd>This sets the encoder to use a constant quality with a variable bitrate of between 190-250kbit/s. If you would prefer to use a constant bitrate, this could be replaced with <code>-b:a 320k</code> to set to the maximum bitrate allowed by the MP3 format. For more detailed discussion on variable vs constant bitrates see <a href="https://trac.ffmpeg.org/wiki/Encode/MP3" target="_blank">here.</a></dd>
|
||||
<dt><i>output_file</i></dt><dd>path and name of the output file</dd>
|
||||
</dl>
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<!-- ends WAV to MP3 -->
|
||||
|
||||
<!-- WAV to AAC/MP4 -->
|
||||
<span data-toggle="modal" data-target="#wav_to_mp4"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Convert WAV to AAC/MP4">WAV to AAC/MP4</button></span>
|
||||
<div id="wav_to_mp4" class="modal fade" tabindex="-1" role="dialog">
|
||||
<div class="modal-dialog modal-lg">
|
||||
<div class="modal-content">
|
||||
<div class="well">
|
||||
<h3>WAV to AAC/MP4</h3>
|
||||
<p><code>ffmpeg -i <i>input_file</i>.wav -c:a aac -b:a 128k -dither_method modified_e_weighted -ar 44100 <i>output_file</i>.mp4</code></p>
|
||||
<p>This will convert your WAV file to AAC/MP4.</p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-i <i>input_file</i></dt><dd>path and name of the input file</dd>
|
||||
<dt>-c:a aac</dt><dd>sets the audio codec to AAC</dd>
|
||||
<dt>-b:a 128k</dt><dd>sets the bitrate of the audio to 128k</dd>
|
||||
<dt>-dither_method modified_e_weighted</dt><dd>Dither makes sure you don’t unnecessarily truncate the dynamic range of your audio.</dd>
|
||||
<dt>-ar 44100</dt><dd>sets the audio sampling frequency to 44100 Hz, or 44.1 kHz, or “CD quality”</dd>
|
||||
<dt><i>output_file</i></dt><dd>path and name of the output file</dd>
|
||||
</dl>
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<!-- ends WAV to AAC/MP4 -->
|
||||
|
||||
</div>
|
||||
<!-- ends well -->
|
||||
|
||||
<div class="well">
|
||||
<h4>Change formats</h4>
|
||||
|
||||
<!-- NTSC to H.264 -->
|
||||
<span data-toggle="modal" data-target="#ntsc_to_h264"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Upscaled, pillar-boxed HD H.264 access files from SD NTSC source">NTSC to H.264</button></span>
|
||||
<div id="ntsc_to_h264" class="modal fade" tabindex="-1" role="dialog">
|
||||
<div class="modal-dialog modal-lg">
|
||||
<div class="modal-content">
|
||||
<div class="well">
|
||||
<h3>Upscaled, Pillar-boxed HD H.264 Access Files from SD NTSC source</h3>
|
||||
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -filter:v "yadif,scale=1440:1080:flags=lanczos,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,format=yuv420p" <i>output_file</i></code></p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>Calls the program ffmpeg</dd>
|
||||
<dt>-i</dt><dd>path, name and extension of the input file</dd>
|
||||
<dt>-c:v libx264</dt><dd>encodes video stream with libx264 (h264)</dd>
|
||||
<dt>-filter:v</dt><dd>applies a filter chain to the video stream: yadif deinterlaces; scale resizes the video frame and pad fills in the area around the 4:3 picture to complete the 16:9 frame; flags=lanczos selects the Lanczos scaling algorithm, which is slower but better than the default bilinear; finally, format specifies a pixel format of YUV 4:2:0. The same scale filter also downscales a larger image into HD.</dd>
|
||||
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
|
||||
</dl>
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<!-- ends NTSC to H.264 -->
|
||||
|
||||
<!-- 4:3 to 16:9 -->
|
||||
<span data-toggle="modal" data-target="#SD_HD"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transform 4:3 aspect ratio into 16:9 with pillarbox">4:3 to 16:9</button></span>
|
||||
<div id="SD_HD" class="modal fade" tabindex="-1" role="dialog">
|
||||
<div class="modal-dialog modal-lg">
|
||||
<div class="modal-content">
|
||||
<div class="well">
|
||||
<h3>Transform 4:3 aspect ratio into 16:9 with pillarbox</h3>
|
||||
<p>Transform a video file with 4:3 aspect ratio into a video file with 16:9 aspect ratio by correct pillarboxing.</p>
|
||||
<p><code>ffmpeg -i <i>input_file</i> -filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2" -c:a copy <i>output_file</i></code></p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
|
||||
<dt>-filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"</dt><dd>video padding<br>This resolution independent formula is actually padding any aspect ratio into 16:9 by pillarboxing, because the video filter uses relative values for input width (iw), input height (ih), output width (ow) and output height (oh).</dd>
|
||||
<dt>-c:a copy</dt><dd>copies the audio stream as-is, without re-encoding<br>
|
||||
For silent videos you can replace <code>-c:a copy</code> by <code>-an</code>.</dd>
|
||||
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
|
||||
</dl>
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<!-- ends 4:3 to 16:9 -->
|
||||
|
||||
<!-- 16:9 to 4:3 -->
|
||||
<span data-toggle="modal" data-target="#HD_SD"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transform 16:9 aspect ratio video into 4:3 with letterbox">16:9 to 4:3</button></span>
|
||||
<div id="HD_SD" class="modal fade" tabindex="-1" role="dialog">
|
||||
<div class="modal-dialog modal-lg">
|
||||
<div class="modal-content">
|
||||
<div class="well">
|
||||
<h3>Transform 16:9 aspect ratio video into 4:3 with letterbox</h3>
|
||||
<p>Transform a video file with 16:9 aspect ratio into a video file with 4:3 aspect ratio by correct letterboxing.</p>
|
||||
<p><code>ffmpeg -i <i>input_file</i> -filter:v "pad=iw:iw*3/4:(ow-iw)/2:(oh-ih)/2" -c:a copy <i>output_file</i></code></p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
|
||||
<dt>-filter:v "pad=iw:iw*3/4:(ow-iw)/2:(oh-ih)/2"</dt><dd>video padding<br>
|
||||
This resolution independent formula is actually padding any aspect ratio into 4:3 by letterboxing, because the video filter uses relative values for input width (iw), input height (ih), output width (ow) and output height (oh).</dd>
|
||||
<dt>-c:a copy</dt><dd>copies the audio stream as-is, without re-encoding<br>
|
||||
For silent videos you can replace <code>-c:a copy</code> by <code>-an</code>.</dd>
|
||||
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
|
||||
</dl>
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<!-- ends 16:9 to 4:3 -->
|
||||
|
||||
<!-- SD to HD -->
|
||||
<span data-toggle="modal" data-target="#SD_HD_2"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transform SD to HD with pillarbox">SD to HD</button></span>
|
||||
<div id="SD_HD_2" class="modal fade" tabindex="-1" role="dialog">
|
||||
<div class="modal-dialog modal-lg">
|
||||
<div class="modal-content">
|
||||
<div class="well">
|
||||
<h3>Transform SD into HD with pillarbox</h3>
|
||||
<p>Transform an SD video file with 4:3 aspect ratio into an HD video file with 16:9 aspect ratio by correct pillarboxing.</p>
|
||||
<p><code>ffmpeg -i <i>input_file</i> -filter:v "colormatrix=bt601:bt709, scale=1440:1080:flags=lanczos, pad=1920:1080:240:0" -c:a copy <i>output_file</i></code></p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
|
||||
<dt>-filter:v "colormatrix=bt601:bt709, scale=1440:1080:flags=lanczos, pad=1920:1080:240:0"</dt><dd>set colour matrix, video scaling and padding<br>Three filters are applied:
|
||||
<ol>
|
||||
<li>The luma coefficients are modified from SD video (according to Rec. 601) to HD video (according to Rec. 709) by a colour matrix. Note that today Rec. 709 is often used for SD as well, in which case you may omit this filter.</li>
|
||||
<li>The scaling filter (<code>scale=1440:1080</code>) works for both upscaling and downscaling. We use the Lanczos scaling algorithm (<code>flags=lanczos</code>), which is slower but gives better results than the default bilinear algorithm.</li>
|
||||
<li>The padding filter (<code>pad=1920:1080:240:0</code>) completes the transformation from SD to HD.</li>
|
||||
</ol></dd>
|
||||
<dt>-c:a copy</dt><dd>copies the audio stream as-is, without re-encoding<br>
|
||||
For silent videos you can replace <code>-c:a copy</code> with <code>-an</code>.</dd>
|
||||
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
|
||||
</dl>
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<!-- ends SD to HD -->
|
||||
|
||||
<!-- Change display aspect ratio without re-encoding video-->
|
||||
<span data-toggle="modal" data-target="#change_DAR"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Change display aspect ratio without re-encoding">Change Display Aspect Ratio</button></span>
|
||||
<div id="change_DAR" class="modal fade" tabindex="-1" role="dialog">
|
||||
<div class="modal-dialog modal-lg">
|
||||
<div class="modal-content">
|
||||
<div class="well">
|
||||
<h3>Change Display Aspect Ratio without reencoding video</h3>
|
||||
<p><code>ffmpeg -i <i>input_file</i> -c:v copy -aspect 4:3 <i>output_file</i></code></p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
|
||||
<dt>-c:v copy</dt><dd>Copy all mapped video streams.</dd>
|
||||
<dt>-aspect 4:3</dt><dd>Change Display Aspect Ratio to <code>4:3</code>. Experiment with other aspect ratios such as <code>16:9</code>. If used together with <code>-c:v copy</code>, it will affect the aspect ratio stored at container level, but not the aspect ratio stored in encoded frames, if it exists.</dd>
|
||||
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
|
||||
</dl>
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<!-- ends Change display aspect ratio without re-encoding video -->
|
||||
|
||||
<!-- Deinterlace video -->
|
||||
<span data-toggle="modal" data-target="#deinterlace"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Deinterlace video">Deinterlace video</button></span>
|
||||
<div id="deinterlace" class="modal fade" tabindex="-1" role="dialog">
|
||||
@@ -480,7 +500,7 @@
|
||||
<div class="modal-content">
|
||||
<div class="well">
|
||||
<h3>Deinterlace a video</h3>
|
||||
<p> <code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf "yadif,format=yuv420p" <i>output_file</i></code> </p>
|
||||
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf "yadif,format=yuv420p" <i>output_file</i></code></p>
|
||||
<p>This command takes an interlaced input file and outputs a deinterlaced H.264 MP4.</p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
@@ -521,7 +541,7 @@
|
||||
<div class="well">
|
||||
<h3>Transcode video to a different colourspace</h3>
|
||||
<p>This command uses a filter to convert the video to a different colour space.</p>
|
||||
<p> <code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf colormatrix=src:dst <i>output_file</i></code> </p>
|
||||
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf colormatrix=src:dst <i>output_file</i></code></p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-i <i>input file</i></dt><dd>path, name and extension of the input file</dd>
|
||||
@@ -534,7 +554,7 @@
|
||||
<p><b>Note</b>: Converting between colourspaces with ffmpeg can be done via either the <b>colormatrix</b> or <b>colorspace</b> filters, with colorspace allowing finer control (individual setting of colourspace, transfer characteristics, primaries, range, pixel format, etc). See <a href="https://trac.ffmpeg.org/wiki/colorspace" target="_blank">this</a> entry on the ffmpeg wiki, and the ffmpeg documentation for <a href="http://ffmpeg.org/ffmpeg-filters.html#colormatrix" target="_blank">colormatrix</a> and <a href="http://ffmpeg.org/ffmpeg-filters.html#colorspace" target="_blank">colorspace</a>.</p>
|
||||
<hr>
|
||||
<h4>Convert colourspace and embed colourspace metadata</h4>
|
||||
<p> <code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf colormatrix=src:dst -color_primaries <i>val</i> -color_trc <i>val</i> -colorspace <i>val</i> <i>output_file</i></code> </p>
|
||||
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf colormatrix=src:dst -color_primaries <i>val</i> -color_trc <i>val</i> -colorspace <i>val</i> <i>output_file</i></code></p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-i <i>input file</i></dt><dd>path, name and extension of the input file</dd>
|
||||
@@ -550,11 +570,11 @@
|
||||
</dl>
|
||||
<h5>Examples</h5>
|
||||
<p>To Rec.601 (525-line/NTSC):</p>
|
||||
<p> <code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf colormatrix=bt709:smpte170m -color_primaries smpte170m -color_trc smpte170m -colorspace smpte170m <i>output_file</i></code> </p>
|
||||
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf colormatrix=bt709:smpte170m -color_primaries smpte170m -color_trc smpte170m -colorspace smpte170m <i>output_file</i></code></p>
|
||||
<p>To Rec.601 (625-line/PAL):</p>
|
||||
<p> <code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf colormatrix=bt709:bt470bg -color_primaries bt470bg -color_trc gamma28 -colorspace bt470bg <i>output_file</i></code> </p>
|
||||
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf colormatrix=bt709:bt470bg -color_primaries bt470bg -color_trc gamma28 -colorspace bt470bg <i>output_file</i></code></p>
|
||||
<p>To Rec.709:</p>
|
||||
<p> <code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf colormatrix=bt601:bt709 -color_primaries bt709 -color_trc bt709 -colorspace bt709 <i>output_file</i></code> </p>
|
||||
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf colormatrix=bt601:bt709 -color_primaries bt709 -color_trc bt709 -colorspace bt709 <i>output_file</i></code></p>
|
||||
<p>MediaInfo output examples:</p>
|
||||
<img src="./img/colourspace_metadata_mediainfo.png" alt="MediaInfo screenshots of colourspace metadata"><br>
|
||||
<p><span class="beware">⚠</span> Using this command it is possible to add Rec.709 tags to a file that is actually Rec.601 (etc), so apply with caution!</p>
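<p>One way to double-check which colourspace tags actually ended up in the output (assuming ffprobe and a Unix-like shell with grep are available) is a quick sketch such as:</p>
<p><code>ffprobe -show_streams -select_streams v:0 <i>output_file</i> | grep color</code></p>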
|
||||
@@ -579,18 +599,18 @@
|
||||
<div class="well">
|
||||
<h3>Inverse telecine a video file</h3>
|
||||
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf "fieldmatch,yadif,decimate" <i>output_file</i></code></p>
|
||||
<p>The inverse telecine procedure reverses the <a href="https://en.wikipedia.org/wiki/Three-two_pull_down">3:2 pull down</a> process, restoring 29.97fps interlaced video to the 24fps frame rate of the original film source.</p>
|
||||
<p>The inverse telecine procedure reverses the <a href="https://en.wikipedia.org/wiki/Three-two_pull_down" target="_blank">3:2 pull down</a> process, restoring 29.97fps interlaced video to the 24fps frame rate of the original film source.</p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-i <i>input file</i></dt><dd>path, name and extension of the input file</dd>
|
||||
<dt>-c:v libx264</dt><dd>encode video as H.264</dd>
|
||||
<dt>-vf "fieldmatch,yadif,decimate"</dt><dd>applies these three video filters to the input video.<br>
|
||||
<a href="https://ffmpeg.org/ffmpeg-filters.html#fieldmatch">Fieldmatch</a> is a field matching filter for inverse telecine - it reconstructs the progressive frames from a telecined stream.<br>
|
||||
<a href="https://ffmpeg.org/ffmpeg-filters.html#yadif-1">Yadif</a> (‘yet another deinterlacing filter’) deinterlaces the video. (Note that ffmpeg also includes several other deinterlacers).<br>
|
||||
<a href="https://ffmpeg.org/ffmpeg-filters.html#decimate-1">Decimate</a> deletes duplicated frames.</dd>
|
||||
<a href="https://ffmpeg.org/ffmpeg-filters.html#fieldmatch" target="_blank">Fieldmatch</a> is a field matching filter for inverse telecine - it reconstructs the progressive frames from a telecined stream.<br>
|
||||
<a href="https://ffmpeg.org/ffmpeg-filters.html#yadif-1" target="_blank">Yadif</a> (‘yet another deinterlacing filter’) deinterlaces the video. (Note that ffmpeg also includes several other deinterlacers).<br>
|
||||
<a href="https://ffmpeg.org/ffmpeg-filters.html#decimate-1" target="_blank">Decimate</a> deletes duplicated frames.</dd>
|
||||
<dt><i>output file</i></dt><dd>path, name and extension of the output file</dd>
|
||||
</dl>
|
||||
<p> <code>"fieldmatch,yadif,decimate"</code> is an ffmpeg <a href="https://trac.ffmpeg.org/wiki/FilteringGuide#FiltergraphChainFilterrelationship" target="_blank">filtergraph</a>. Here the filtergraph is made up of one filter chain, which is itself made up of the three filters (separated by commas).<br>
|
||||
<p><code>"fieldmatch,yadif,decimate"</code> is an ffmpeg <a href="https://trac.ffmpeg.org/wiki/FilteringGuide#FiltergraphChainFilterrelationship" target="_blank">filtergraph</a>. Here the filtergraph is made up of one filter chain, which is itself made up of the three filters (separated by commas).<br>
|
||||
The enclosing quote marks are necessary when you use spaces within the filtergraph, e.g. <code>-vf "fieldmatch, yadif, decimate"</code>, and are included above as an example of good practice.</p>
|
||||
<p>Note that if applying an inverse telecine procedure to a 29.97i file, the output framerate will actually be 23.976fps.</p>
|
||||
<p>This command can also be used to restore other framerates.</p>
|
||||
@@ -697,10 +717,6 @@
|
||||
<dt>fontcolor=white</dt><dd>specifies font color as white</dd>
|
||||
<dt>"</dt><dd>quotation mark to close filter command</dd>
|
||||
</dl>
|
||||
<div class="sample-image">
|
||||
<!-- <h4>Example of filter output</h4> -->
|
||||
<!-- <img src="" alt="ocr example"> -->
|
||||
</div>
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
</div>
|
||||
@@ -724,10 +740,6 @@
|
||||
<dt>-f lavfi</dt><dd>tells ffmpeg to use the <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">Libavfilter input virtual device</a></dd>
|
||||
<dt>-i "movie=<i>input_file</i>,ocr"</dt><dd>declares 'movie' as <i>input_file</i> and passes in the 'ocr' command</dd>
|
||||
</dl>
|
||||
<div class="sample-image">
|
||||
<!-- <h4>Example of filter output</h4> -->
|
||||
<!-- <img src="" alt="Exports OCR example"> -->
|
||||
</div>
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
</div>
|
||||
@@ -812,16 +824,24 @@
|
||||
<p>The first command will use the palettegen filter to create a custom palette, then the second command will create the GIF with the paletteuse filter. The result is a high quality GIF.</p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-ss <i>HH:MM:SS</i></dt><dd>starting point of the gif. If a plain numerical value is used it will be interpreted as seconds</dd>
|
||||
<dt>-ss <i>HH:MM:SS</i></dt><dd>starting point of the GIF. If a plain numerical value is used it will be interpreted as seconds</dd>
|
||||
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
|
||||
<dt>-filter_complex "fps=<i>frame rate</i>,scale=<i>width</i>:<i>height</i>,palettegen"</dt><dd>a complex filtergraph using the fps filter to set frame rate, the scale filter to resize, and the palettegen filter to generate the palette. The scale value of <i>-1</i> preserves the aspect ratio</dd>
|
||||
<dt>-filter_complex "fps=<i>frame rate</i>,scale=<i>width</i>:<i>height</i>,palettegen"</dt><dd>a complex filtergraph.<br>
|
||||
Firstly, the fps filter sets the frame rate.<br>
|
||||
Then the scale filter resizes the image. You can specify both the width and the height, or specify a value for one and use a scale value of <i>-1</i> for the other to preserve the aspect ratio. (For example, <code>500:-1</code> would create a GIF 500 pixels wide and with a height proportional to the original video). In the first script above, <code>:flags=lanczos</code> specifies that the Lanczos rescaling algorithm will be used to resize the image.<br>
|
||||
Lastly, the palettegen filter generates the palette.</dd>
|
||||
<dt>-t <i>3</i></dt><dd>duration in seconds (here 3; can be specified also with a full timestamp, i.e. here 00:00:03)</dd>
|
||||
<dt>-loop <i>6</i></dt><dd>number of times to loop the gif. A value of <i>-1</i> will disable looping. Omitting <i>-loop</i> will use the default which will loop infinitely</dd>
|
||||
<dt>-loop <i>6</i></dt><dd>number of times to loop the GIF. A value of <i>-1</i> will disable looping. Omitting <i>-loop</i> will use the default which will loop infinitely</dd>
|
||||
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
|
||||
</dl>
|
||||
<p>The second command has a slightly different filtergraph, which breaks down as follows:</p>
|
||||
<dl>
|
||||
<dt>-filter_complex "[0:v]fps=10,scale=500:-1:flags=lanczos[v],[v][1:v]paletteuse"</dt><dd><code>[0:v]fps=10,scale=500:-1:flags=lanczos[v]</code>: applies the fps and scale filters described above to the first input file (the video).<br>
|
||||
<code>[v][1:v]paletteuse"</code>: applies the <code>paletteuse</code> filter, setting the second input file (the palette) as the reference file.</dd>
|
||||
</dl>
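<p>Putting both passes together with the example values described above (the palette filename is hypothetical), a sketch of the full two-step workflow might be:</p>
<p><code>ffmpeg -ss 00:00:30 -i <i>input_file</i> -filter_complex "fps=10,scale=500:-1:flags=lanczos,palettegen" -t 3 <i>palette</i>.png</code><br>
<code>ffmpeg -ss 00:00:30 -i <i>input_file</i> -i <i>palette</i>.png -filter_complex "[0:v]fps=10,scale=500:-1:flags=lanczos[v],[v][1:v]paletteuse" -t 3 -loop 6 <i>output_file</i>.gif</code></p>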
|
||||
<p>Simpler GIF creation</p>
|
||||
<p><code>ffmpeg -ss HH:MM:SS -i <i>input_file</i> -vf "fps=10,scale=500:-1" -t 3 -loop 6 <i>output_file</i></code></p>
|
||||
<p>This is a quick and easy method. Dithering is more apparent than the above method using the palette* filters, but the file size will be smaller. Perfect for that “legacy” GIF look.</p>
|
||||
<p>This is a quick and easy method. Dithering is more apparent than the above method using the palette filters, but the file size will be smaller. Perfect for that “legacy” GIF look.</p>
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
</div>
|
||||
@@ -1137,12 +1157,12 @@ foreach ($file in $inputfiles) {
|
||||
<!-- ends batch processing (Windows) -->
|
||||
|
||||
<!-- Create frame md5s -->
|
||||
<span data-toggle="modal" data-target="#create_frame_md5s"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Create an MD5 checksum per video frame">Create MD5 checksums</button></span>
|
||||
<div id="create_frame_md5s" class="modal fade" tabindex="-1" role="dialog">
|
||||
<span data-toggle="modal" data-target="#create_frame_md5s_v"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Create an MD5 checksum per video frame">Create MD5 checksums (video frames)</button></span>
|
||||
<div id="create_frame_md5s_v" class="modal fade" tabindex="-1" role="dialog">
|
||||
<div class="modal-dialog modal-lg">
|
||||
<div class="modal-content">
|
||||
<div class="well">
|
||||
<h3>Create MD5 checksums</h3>
|
||||
<h3>Create MD5 checksums (video frames)</h3>
|
||||
<p><code>ffmpeg -i <i>input_file</i> -f framemd5 -an <i>output_file</i></code></p>
|
||||
<p>This will create an MD5 checksum per video frame.</p>
|
||||
<dl>
|
||||
@@ -1152,7 +1172,7 @@ foreach ($file in $inputfiles) {
|
||||
<dt>-an</dt><dd>ignores the audio stream (audio no)</dd>
|
||||
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
|
||||
</dl>
|
||||
<p>You may verify an MD5 checksum file created this way by using a <a href="check_framemd5.sh" target="_blank">Bash script</a>.</p>
|
||||
<p>You may verify an MD5 checksum file created this way by using a <a href="scripts/check_video_framemd5.sh" target="_blank">Bash script</a>.</p>
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
</div>
|
||||
@@ -1160,6 +1180,36 @@ foreach ($file in $inputfiles) {
|
||||
</div>
|
||||
<!-- ends Create frame md5s -->
|
||||
|
||||
<!-- Create frame md5s (audio) -->
|
||||
<span data-toggle="modal" data-target="#create_frame_md5s_a"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Create MD5 checksums for audio samples">Create MD5 checksums (audio samples)</button></span>
|
||||
<div id="create_frame_md5s_a" class="modal fade" tabindex="-1" role="dialog">
|
||||
<div class="modal-dialog modal-lg">
|
||||
<div class="modal-content">
|
||||
<div class="well">
|
||||
<h3>Create MD5 checksums (audio samples)</h3>
|
||||
<p><code>ffmpeg -i <i>input_file</i> -af "asetnsamples=<i>n=48000</i>" -f framemd5 -vn <i>output_file</i></code></p>
|
||||
<p>This will create an MD5 checksum for each group of 48000 audio samples.<br> The number of samples per group can be set arbitrarily, but it's good practice to match the sample rate of the media file (so you will get one checksum per second).</p>
|
||||
<p>Examples for other sample rates:</p>
|
||||
<ul>
|
||||
<li>44.1 kHz: "asetnsamples=n=44100"</li>
|
||||
<li>96 kHz: "asetnsamples=n=96000"</li>
|
||||
</ul>
|
||||
<p>Note: This filter transcodes the audio to 16-bit PCM by default, and the generated framemd5 checksums reflect that conversion. Validating these framemd5s will require using the same default settings.</p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
|
||||
<dt>-f framemd5</dt><dd>use the framemd5 format (a muxer) to calculate and write the MD5 checksums</dd>
|
||||
<dt>-vn</dt><dd>ignores the video stream (video no)</dd>
|
||||
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
|
||||
</dl>
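<p>For a 44.1 kHz source, the same command with the matching group size from the list above would be:</p>
<p><code>ffmpeg -i <i>input_file</i> -af "asetnsamples=n=44100" -f framemd5 -vn <i>output_file</i></code></p>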
|
||||
<p>You may verify an MD5 checksum file created this way by using a <a href="scripts/check_audio_framemd5.sh" target="_blank">Bash script</a>.</p>
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<!-- ends Create frame md5s (audio) -->
|
||||
|
||||
<!-- Pull specs -->
|
||||
<span data-toggle="modal" data-target="#pull_specs"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Pull specs from video file">Pull specs</button></span>
|
||||
<div id="pull_specs" class="modal fade" tabindex="-1" role="dialog">
|
||||
@@ -1217,13 +1267,12 @@ foreach ($file in $inputfiles) {
|
||||
<div class="well">
|
||||
<h3>Check video file interlacement patterns</h3>
|
||||
<p><code>ffmpeg -i <i>input file</i> -filter:v idet -f null -</code></p>
|
||||
<p></p>
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
|
||||
<dt>-filter:v idet</dt><dd>This calls the <a href="https://ffmpeg.org/ffmpeg-filters.html#idet" target="_blank">idet (detect video interlacing type) filter</a>.</dd>
|
||||
<dt>-f null</dt><dd>sends the decoded video to the <code>null</code> muxer, which discards it. This allows video decoding without creating an output file.</dd>
|
||||
<dt>-</dt><dd>FFmpeg syntax requires a specified output, and <code>-</code> is just a place holder. No file is actually created. </dd>
|
||||
<dt>-</dt><dd>FFmpeg syntax requires a specified output, and <code>-</code> is just a place holder. No file is actually created.</dd>
|
||||
</dl>
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
@@ -1307,7 +1356,7 @@ foreach ($file in $inputfiles) {
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-f lavfi</dt><dd>tells ffmpeg to use the Libavfilter input virtual device <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">[more]</a></dd>
|
||||
<dt>-i mandelbrot=size=1280x720:rate=25</dt><dd>asks for the mandelbrot test filter as input. Adjusting the <code>size</code> and <code>rate</code> options allow you to choose a specific frame size and framerate. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc" target="_blank">[more]</a></dd>
|
||||
<dt>-i mandelbrot=size=1280x720:rate=25</dt><dd>asks for the mandelbrot test filter as input. Adjusting the <code>size</code> and <code>rate</code> options allow you to choose a specific frame size and framerate. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc_002c-testsrc2_002c-yuvtestsrc" target="_blank">[more]</a></dd>
|
||||
<dt>-c:v <i>libx264</i></dt><dd>transcodes video from rawvideo to H.264. Set <code>-pix_fmt</code> to <code>yuv420p</code> for greater H.264 compatibility with media players.</dd>
|
||||
<dt>-t 10</dt><dd>specifies recording time of 10 seconds</dd>
|
||||
<dt><i>output_file</i></dt><dd>path, name and extension of the output file. Try different file extensions such as mkv, mov, mp4, or avi.</dd>
|
||||
@@ -1330,7 +1379,7 @@ foreach ($file in $inputfiles) {
|
||||
<dl>
|
||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||
<dt>-f lavfi</dt><dd>tells ffmpeg to use the Libavfilter input virtual device <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">[more]</a></dd>
|
||||
<dt>-i smptebars=size=720x576:rate=25</dt><dd>asks for the smptebars test filter as input. Adjusting the <code>size</code> and <code>rate</code> options allow you to choose a specific frame size and framerate. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc" target="_blank">[more]</a></dd>
|
||||
<dt>-i smptebars=size=720x576:rate=25</dt><dd>asks for the smptebars test filter as input. Adjusting the <code>size</code> and <code>rate</code> options allow you to choose a specific frame size and framerate. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc_002c-testsrc2_002c-yuvtestsrc" target="_blank">[more]</a></dd>
|
||||
<dt>-c:v <i>prores</i></dt><dd>transcodes video from rawvideo to Apple ProRes 4:2:2.</dd>
|
||||
<dt>-t 10</dt><dd>specifies recording time of 10 seconds</dd>
|
||||
<dt><i>output_file</i></dt><dd>path, name and extension of the output file. Try different file extensions such as mov or avi.</dd>
|
||||
@@ -1378,7 +1427,7 @@ foreach ($file in $inputfiles) {
|
||||
<dl>
|
||||
<dt>ffplay</dt><dd>starts the command</dd>
|
||||
<dt>-f lavfi</dt><dd>tells ffmpeg to use the libavfilter input virtual device <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">[more]</a></dd>
|
||||
<dt>-i smptehdbars=size=1920x1080</dt><dd>asks for the smptehdbars filter pattern as input and sets the HD resolution. This generates a colour bars pattern, based on the SMPTE RP 219–2002. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc" target="_blank">[more]</a></dd>
|
||||
<dt>-i smptehdbars=size=1920x1080</dt><dd>asks for the smptehdbars filter pattern as input and sets the HD resolution. This generates a colour bars pattern, based on the SMPTE RP 219–2002. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc_002c-testsrc2_002c-yuvtestsrc" target="_blank">[more]</a></dd>
|
||||
</dl>
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
@@ -1399,7 +1448,7 @@ foreach ($file in $inputfiles) {
|
||||
<dl>
|
||||
<dt>ffplay</dt><dd>starts the command</dd>
|
||||
<dt>-f lavfi</dt><dd>tells ffmpeg to use the libavfilter input virtual device <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">[more]</a></dd>
|
||||
<dt>-i smptebars=size=640x480</dt><dd>asks for the smptehdbars filter pattern as input and sets the VGA (SD) resolution. This generates a colour bars pattern, based on the SMPTE Engineering Guideline EG 1–1990. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc" target="_blank">[more]</a></dd>
|
||||
<dt>-i smptebars=size=640x480</dt><dd>asks for the smptehdbars filter pattern as input and sets the VGA (SD) resolution. This generates a colour bars pattern, based on the SMPTE Engineering Guideline EG 1–1990. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc_002c-testsrc2_002c-yuvtestsrc" target="_blank">[more]</a></dd>
|
||||
</dl>
|
||||
<p class="link"></p>
|
||||
</div>
|
||||
@@ -1408,6 +1457,29 @@ foreach ($file in $inputfiles) {
|
||||
</div>
|
||||
<!-- ends Play VGA SMPTE bars -->
|
||||
|
||||
<!-- Broken File -->
|
||||
<span data-toggle="modal" data-target="#broken_file"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Make a broken file out of a perfectly good one">Broken file</button></span>
<div id="broken_file" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Makes a broken test file</h3>
<p>Modifies an existing, functioning file and intentionally breaks it for testing purposes.</p>
<p><code>ffmpeg -i <i>input_file</i> -bsf noise=1 -c copy <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>takes in a normal file</dd>
<dt>-bsf noise=1</dt><dd>sets bitstream filters for all streams to 'noise'. Filters can be applied to specific streams using syntax such as <code>-bsf:v</code> for video, <code>-bsf:a</code> for audio, etc. The <a href="https://www.ffmpeg.org/ffmpeg-bitstream-filters.html#noise" target="_blank">noise filter</a> intentionally damages the contents of packets without damaging the container. This sets the noise level to 1, but it could be left blank or set to any number above 0.</dd>
<dt>-c copy</dt><dd>uses stream copy mode to re-mux instead of re-encode</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
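<p>As a variation, the stream-specific syntax mentioned above could be used to corrupt only the video stream and leave the audio untouched; a sketch:</p>
<p><code>ffmpeg -i <i>input_file</i> -bsf:v noise=1 -c copy <i>output_file</i></code></p>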
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends Broken File -->

<!-- Sine wave -->
<span data-toggle="modal" data-target="#sine_wave"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Generate a test audio file playing a sine wave">Sine wave</button></span>
<div id="sine_wave" class="modal fade" tabindex="-1" role="dialog">
@@ -1431,6 +1503,33 @@ foreach ($file in $inputfiles) {
</div>
<!-- ends Sine wave -->

<!-- SMPTE bars + Sine wave -->
<span data-toggle="modal" data-target="#smpte_bars_and_sine_wave"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Generate a SMPTE bars test video + audio playing a sine wave">SMPTE bars + Sine wave audio</button></span>
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>SMPTE bars + Sine wave audio</h3>
<p>Generates SMPTE colour bars with a 1 kHz sine wave as an audio test signal.</p>
<p><code>ffmpeg -f lavfi -i smptebars=size=720x576:rate=25 -f lavfi -i "sine=frequency=1000:sample_rate=48000" -c:a pcm_s16le -t 10 -c:v ffv1 <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-f lavfi</dt><dd>tells ffmpeg to use the <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">libavfilter</a> input virtual device</dd>
<dt>-i smptebars=size=720x576:rate=25</dt><dd>asks for the smptebars test filter as input. Adjusting the <code>size</code> and <code>rate</code> options allows you to choose a specific frame size and framerate. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc_002c-testsrc2_002c-yuvtestsrc" target="_blank">[more]</a></dd>
<dt>-f lavfi</dt><dd>uses libavfilter again, but now for audio</dd>
<dt>-i "sine=frequency=1000:sample_rate=48000"</dt><dd>sets the signal to a 1000 Hz sine tone, sampled at 48 kHz.</dd>
<dt>-c:a pcm_s16le</dt><dd>encodes the audio as <code>pcm_s16le</code> (the default encoding for WAV files). PCM stands for pulse-code modulation (raw samples), <code>16</code> means 16 bits per sample, and <code>le</code> means "little endian"</dd>
<dt>-t 10</dt><dd>specifies a recording time of 10 seconds</dd>
<dt>-c:v <i>ffv1</i></dt><dd>encodes the video to <a href="https://en.wikipedia.org/wiki/FFV1" target="_blank">FFV1</a>. Alter this setting to choose your desired codec.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
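<p>As a variation, the same bars-and-tone pattern could be written with a different video codec, for example ProRes in a QuickTime container (a sketch; the output name is a placeholder):</p>
<p><code>ffmpeg -f lavfi -i smptebars=size=720x576:rate=25 -f lavfi -i "sine=frequency=1000:sample_rate=48000" -c:a pcm_s16le -c:v prores -t 10 output.mov</code></p>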
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends SMPTE bars + Sine wave -->

</div>
<div class="well">
<h4>Other</h4>
@@ -1848,7 +1947,78 @@ e.g.: <code>ffmpeg -f concat -safe 0 -i mylist.txt -c copy <i>output_file</i></c
</div>
<!-- ends View Format info -->

</div><!-- closes the well -->
<!-- Compare Video Fingerprints -->
<span data-toggle="modal" data-target="#compare_video_fingerprints"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Compare Video Fingerprints">Compare Video Fingerprints</button></span>
<div id="compare_video_fingerprints" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Compare two video files for content similarity using perceptual hashing</h3>
<p><code>ffmpeg -i <i>input_one</i> -i <i>input_two</i> -filter_complex signature=detectmode=full:nb_inputs=2 -f null -</code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_one</i> -i <i>input_two</i></dt><dd>assigns the input files</dd>
<dt>-filter_complex</dt><dd>enables the use of more than one input file with the filter</dd>
<dt>signature=detectmode=full</dt><dd>applies the signature filter to the inputs in 'full' mode. The other option is 'fast'.</dd>
<dt>nb_inputs=2</dt><dd>tells the filter to expect two input files</dd>
<dt>-f null -</dt><dd>sets the output of ffmpeg to a null stream (since we are not creating a transcoded file, just viewing metadata).</dd>
</dl>
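<p>The match result is reported in ffmpeg's log output. If you also want to keep the fingerprints themselves, the signature filter accepts a <code>filename</code> option; with more than one input the path must contain a <code>%d</code> placeholder, roughly as in this sketch:</p>
<p><code>ffmpeg -i <i>input_one</i> -i <i>input_two</i> -filter_complex signature=detectmode=full:nb_inputs=2:format=xml:filename="signature%d.xml" -f null -</code></p>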
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends Compare Video Fingerprints -->

<!-- Generate Video Fingerprint -->
<span data-toggle="modal" data-target="#generate_video_fingerprint"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Generate Video Fingerprint">Generate Video Fingerprint</button></span>
<div id="generate_video_fingerprint" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Generate a perceptual hash for an input video file</h3>
<p><code>ffmpeg -i <i>input</i> -vf signature=format=xml:filename="output.xml" -an -f null -</code></p>
<dl>
<dt>ffmpeg -i <i>input</i></dt><dd>starts the command using your input file</dd>
<dt>-vf signature=format=xml</dt><dd>applies the signature filter to the input file and sets the output format for the fingerprint to xml</dd>
<dt>filename="output.xml"</dt><dd>sets the output path for the fingerprint file written by the signature filter</dd>
<dt>-an</dt><dd>tells ffmpeg to ignore the audio stream of the input file</dd>
<dt>-f null -</dt><dd>sets the ffmpeg output to a null stream (since we are only interested in the output generated by the filter).</dd>
</dl>
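<p>If XML is not required, omitting <code>format=xml</code> should fall back to the filter's default binary signature format; a sketch with a placeholder file name:</p>
<p><code>ffmpeg -i <i>input</i> -vf signature=filename="output.bin" -an -f null -</code></p>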
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends Generate Video Fingerprint -->
</div><!-- closes the well -->

<div class="well">
<h4>Repair</h4>

<!-- Fix A/V async 1 -->
<span data-toggle="modal" data-target="#avsync_aresample"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Fix A/V sync issues by resampling audio">Fix AV Sync: Resample audio</button></span>
<div id="avsync_aresample" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Fix AV Sync: Resample audio</h3>
<p><code>ffmpeg -i <i>input_file</i> -c:v copy -c:a pcm_s16le -af "aresample=async=1000" <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-c:v copy</dt><dd>copies all mapped video streams without re-encoding</dd>
<dt>-c:a pcm_s16le</dt><dd>tells ffmpeg to encode the audio stream in 16-bit linear PCM (<a href="https://en.wikipedia.org/wiki/Endianness#Little-endian" target="_blank">little endian</a>)</dd>
<dt>-af "aresample=async=1000"</dt><dd>stretches/squeezes the audio samples to match their timestamps, compensating by a maximum of 1000 samples per second <a href="https://ffmpeg.org/ffmpeg-filters.html#aresample-1" target="_blank">[more]</a></dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file. Try different file extensions such as mkv, mov, mp4, or avi.</dd>
</dl>
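<p>If the target container does not take PCM audio well (an .mp4 access copy, for instance), the same resampling filter could be paired with a lossy audio codec instead; a sketch, with the codec and extension as placeholders:</p>
<p><code>ffmpeg -i <i>input_file</i> -c:v copy -c:a aac -af "aresample=async=1000" <i>output_file</i>.mp4</code></p>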
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends Fix A/V async 1 -->
</div><!-- closes the well -->


<!-- sample example -->
38
readme.md
@@ -1,33 +1,41 @@
# [ffmprovisr](http://amiaopensource.github.io/ffmprovisr)

Repository of useful FFmpeg command lines for archivists! [AMIA hackday](http://wiki.curatecamp.org/index.php/Association_of_Moving_Image_Archivists_%26_Digital_Library_Federation_Hack_Day_2015) edition.
Repository of useful FFmpeg command lines for archivists!

## What is this?

Project Objective: To facilitate better understanding of FFmpeg through collaborative sharing of useful scripts and detailed flag-level description of how each script works so archivists can copy-paste and produce their own scripts but also understand how and why they work.
#### Project Objective

To facilitate better understanding of FFmpeg through collaborative sharing of useful scripts and detailed flag-level description of how each script works, so archivists can copy-paste and produce their own scripts, but also understand how and why they work.

## How do I see it?

Code stuff in the gh-pages branch (the default primary branch). Readme is right here. The site is live and lives on github pages. You can see it [here](http://amiaopensource.github.io/ffmprovisr), or you can download a [release](https://github.com/amiaopensource/ffmprovisr/releases) and use it locally.
The code is found in the gh-pages branch (the default primary branch). Readme is right here. You can see the site live on [GitHub pages](http://amiaopensource.github.io/ffmprovisr), or you can download a [release](https://github.com/amiaopensource/ffmprovisr/releases) and use it locally.

## How do I contribute?

You are welcome to edit the codebase yourself or just supply the information and ask it to be added to the site.
You are welcome to edit the codebase yourself, or just supply the information and ask it to be added to the site.

To contribute to this project directly (and more quickly), clone this repository and create a new branch (`git checkout -b your-branch-name`) and add or modify a new block in index.html. Then submit a pull request and someone can review and integrate your code. There is a commented-out sample available at the bottom of index.html that can be used to build your own block.
#### Edit codebase

If you are having trouble with the coding it yourself or with github, feel free to [submit an issue](https://github.com/amiaopensource/ffmprovisr/issues) with the kind of command you would like to see added to the site.
To contribute to this project directly (and more quickly), clone this repository and create a new branch (`git checkout -b your-branch-name`) and add or modify a new block in `index.html`. Then [submit a pull request](https://github.com/amiaopensource/ffmprovisr/pulls) and the maintainers will review and integrate your code. There is a commented-out sample block available at the bottom of `index.html` that can be used as a guideline for your command.

#### Make a request

If you are having trouble with coding it yourself or with github, feel free to [submit an issue](https://github.com/amiaopensource/ffmprovisr/issues) with the kind of command you would like to see added to the site.

#### General help

If you want to help but don't have a new script to add, you can help us by testing out the scripts available, by refining or clarifying the documentation, or [creating an issue](https://github.com/amiaopensource/ffmprovisr/issues) for anything that sounds confusing and requires clarification.

## License

<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/80x15.png" /></a><br />This <span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/InteractiveResource" rel="dct:type">work</span> by <a xmlns:cc="http://creativecommons.org/ns#" href="http://amiaopensource.github.io/ffmprovisr/" property="cc:attributionName" rel="cc:attributionURL">ffmprovisr</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.<br />Based on a work at <a xmlns:dct="http://purl.org/dc/terms/" href="https://github.com/amiaopensource/ffmprovisr" rel="dct:source">https://github.com/amiaopensource/ffmprovisr</a>.

## Code of Conduct

You can read our contributor code of conduct [here](https://github.com/amiaopensource/ffmprovisr/blob/gh-pages/code_of_conduct.md).

## Maintainers

[Ashley Blewer](https://github.com/ablwr), [Katherine Frances Nagels](https://github.com/kfrn), [Kieran O'Leary](https://github.com/kieranjol) and [Reto Kromer](https://github.com/retokromer)

## Contributors

* Gathered using [octohatrack](https://github.com/LABHR/octohatrack)

@@ -68,11 +76,17 @@ Repo: amiaopensource/ffmprovisr
GitHub Contributors: 11
All Contributors: 18

## AVHack Team:
## AVHack Team

[Ashley Blewer](https://github.com/ablwr), Eddy Colloton, Rebecca Dillmeier, [Jonathan Farbowitz](https://github.com/jfarbowitz), Rebecca Fraimow, Samuel Gutterman, Kelly Haydon, [Reto Kromer](https://github.com/retokromer), Nicole Martin, [Katherine Frances Nagels](https://github.com/kfrn), [Kieran O'Leary](https://github.com/kieranjol), Catriona Schlosser, Ben Turkus
[Association of Moving Image Archivists & Digital Library Federation Hack Day 2015](http://wiki.curatecamp.org/index.php/Association_of_Moving_Image_Archivists_%26_Digital_Library_Federation_Hack_Day_2015)
[Ashley Blewer](https://github.com/ablwr), [Eddy Colloton](https://github.com/eddycolloton), Rebecca Dillmeier, [Jonathan Farbowitz](https://github.com/jfarbowitz), [Rebecca Fraimow](https://github.com/rfraimow), Samuel Gutterman, [Kelly Haydon](https://github.com/kellyhaydon), [Reto Kromer](https://github.com/retokromer), Nicole Martin, [Katherine Frances Nagels](https://github.com/kfrn), [Kieran O'Leary](https://github.com/kieranjol), Catriona Schlosser, [Ben Turkus](https://github.com/bturkus)

## Sister projects

[Script Ahoy](http://dd388.github.io/crals/): Community Resource for Archivists and Librarians Scripting
[sourcecaster](https://datapraxis.github.io/sourcecaster/): helps you use the command line to work through common challenges that come up when working with digital primary sources.

## License

<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/80x15.png" /></a><br />This <span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/InteractiveResource" rel="dct:type">work</span> by <a xmlns:cc="http://creativecommons.org/ns#" href="http://amiaopensource.github.io/ffmprovisr/" property="cc:attributionName" rel="cc:attributionURL">ffmprovisr</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.<br />Based on a work at <a xmlns:dct="http://purl.org/dc/terms/" href="https://github.com/amiaopensource/ffmprovisr" rel="dct:source">https://github.com/amiaopensource/ffmprovisr</a>.
23
check_framemd5.sh → scripts/check_audio_framemd5.sh
Normal file → Executable file
@@ -1,6 +1,6 @@
#!/usr/bin/env bash
SCRIPT=$(basename "${0}")
VERSION='2016-12-31'
VERSION='2017-04-17'
AUTHOR='ffmprovisr'
RED='\033[1;31m'
BLUE='\033[1;34m'
@@ -13,7 +13,7 @@ fi

_output_prompt(){
cat <<EOF
Usage: ${SCRIPT} [-h] | [ -i <av_file> -m <md5_file> ]
Usage: ${SCRIPT} -h | -i <av_file> -m <md5_file>
EOF
exit 1
}
@@ -32,7 +32,7 @@ Dependency:
ffmpeg
About:
Version: ${VERSION}
Website: https://github.com/amiaopensource/ffmprovisr/blob/gh-pages/check_framemd5.sh
Website: https://github.com/amiaopensource/ffmprovisr/blob/gh-pages/scripts/check_audio_framemd5.sh
EOF
exit 0
}
@@ -54,21 +54,24 @@ done
echo -e "${BLUE}Please wait...${NC}"
unset md5_tmp
if [[ $OSTYPE = "cygwin" ]]; then
md5_tmp=""${USERPROFILE}/$(basename ${input_hash}).tmp""
md5_tmp="${USERPROFILE}/$(basename "${input_hash}").tmp"
else
md5_tmp="${HOME}/$(basename ${input_hash}).tmp"
md5_tmp="${HOME}/$(basename "${input_hash}").tmp"
fi
$(ffmpeg -i ${input_file} -loglevel 0 -f framemd5 -an ${md5_tmp})
# Find audio frame size for hash calculation
sample_rate=$(grep -v '^#' "${input_hash}" | head -n 1 | tr -d ' ' | cut -d',' -f4)
ffmpeg -i "${input_file}" -loglevel 0 -af "asetnsamples=n='$sample_rate'" -f framemd5 -vn "${md5_tmp}"
[[ ! -f ${md5_tmp} ]] && { echo -e "${RED}Error: '${input_file}' is not a valid audio-visual file.${NC}" ; _output_prompt ; }
unset old_file
unset tmp_file
old_file=$(grep -v '^#' ${input_hash})
tmp_file=$(grep -v '^#' ${md5_tmp})
unset sample_rate
old_file=$(grep -v '^#' "${input_hash}")
tmp_file=$(grep -v '^#' "${md5_tmp}")
if [[ "${old_file}" = "${tmp_file}" ]]; then
echo -e "${BLUE}'$(basename ${input_file})' matches '$(basename ${input_hash})'${NC}"
echo -e "${BLUE}'$(basename "${input_file}")' matches '$(basename "${input_hash}")'${NC}"
rm "${md5_tmp}"
else
echo -e "${RED}The following differences were detected between '$(basename ${input_file})' and '$(basename ${input_hash})':${NC}"
echo -e "${RED}The following differences were detected between '$(basename "${input_file}")' and '$(basename "${input_hash}")':${NC}"
diff "${input_hash}" "${md5_tmp}"
rm "${md5_tmp}"
fi
74
scripts/check_video_framemd5.sh
Normal file
@@ -0,0 +1,74 @@
#!/usr/bin/env bash
SCRIPT=$(basename "${0}")
VERSION='2017-04-17'
AUTHOR='ffmprovisr'
RED='\033[1;31m'
BLUE='\033[1;34m'
NC='\033[0m'

if [[ ${OSTYPE} = "cygwin" ]] || [ ! $(which diff) ]; then
echo -e "${RED}Error: 'diff' is not installed by default. Please install 'diffutils' from Cygwin.${NC}"
exit 1
fi

_output_prompt(){
cat <<EOF
Usage: ${SCRIPT} -h | -i <av_file> -m <md5_file>
EOF
exit 1
}

_output_help(){
cat <<EOF
Syntax:
${SCRIPT}
Prompts a short help message.
${SCRIPT} -h
This help.
${SCRIPT} -i <av_file> -m <md5_file>
Pass to the script the audio-visual file and the corresponding MD5
file to check.
Dependency:
ffmpeg
About:
Version: ${VERSION}
Website: https://github.com/amiaopensource/ffmprovisr/blob/gh-pages/scripts/check_video_framemd5.sh
EOF
exit 0
}

unset input_file
unset input_hash

while getopts ":hi:m:" opt; do
case "${opt}" in
h) _output_help ;;
i) input_file=$OPTARG ;;
m) input_hash=$OPTARG ;;
:) echo -e "${RED}Error: option -${OPTARG} requires an argument${NC}" ; _output_prompt ;;
*) echo -e "${RED}Error: bad option -${OPTARG}${NC}" ; _output_prompt ;;
esac
done

[[ -z "${#}" || ! ${input_file} || ! ${input_hash} ]] && _output_prompt
echo -e "${BLUE}Please wait...${NC}"
unset md5_tmp
if [[ $OSTYPE = "cygwin" ]]; then
md5_tmp="${USERPROFILE}/$(basename "${input_hash}").tmp"
else
md5_tmp="${HOME}/$(basename "${input_hash}").tmp"
fi
ffmpeg -i "${input_file}" -loglevel 0 -f framemd5 -an "${md5_tmp}"
[[ ! -f "${md5_tmp}" ]] && { echo -e "${RED}Error: '${input_file}' is not a valid audio-visual file.${NC}" ; _output_prompt ; }
unset old_file
unset tmp_file
old_file=$(grep -v '^#' "${input_hash}")
tmp_file=$(grep -v '^#' "${md5_tmp}")
if [[ "${old_file}" = "${tmp_file}" ]]; then
echo -e "${BLUE}'$(basename "${input_file}")' matches '$(basename "${input_hash}")'${NC}"
rm "${md5_tmp}"
else
echo -e "${RED}The following differences were detected between '$(basename "${input_file}")' and '$(basename "${input_hash}")':${NC}"
diff "${input_hash}" "${md5_tmp}"
rm "${md5_tmp}"
fi
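
A typical invocation of the new script, assuming it has been made executable and that a framemd5 file already exists for the video (file names here are placeholders):

`./scripts/check_video_framemd5.sh -i input.mkv -m input.framemd5`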