Compare commits

...

41 Commits

Author SHA1 Message Date
Andrew Weaver
18e2c17ab4 Script to open ffmprovisr when installed via homebrew (#181)
- script for opening ffmprovisr from terminal
2017-05-05 17:37:50 +02:00
Andrew Weaver
e4309d6664 add audio bitscope (#180)
* add audio bitscope

* alphabetical

* ch ch ch ch changes
2017-05-04 19:58:24 +02:00
Andrew Weaver
750810d392 Loudness/RIAA Curve (#179)
- start normalization section
- add RIAA
2017-05-03 04:58:20 +02:00
Reto Kromer
b2a04d138f Merge pull request #177 from kfrn/gh-pages
Tweak wording of readme
2017-04-19 10:15:32 +02:00
kfrn
b53f6c9984 Amend formatting & wording 2017-04-19 20:13:17 +12:00
kfrn
64a362314c Tweak wording of readme 2017-04-19 19:33:51 +12:00
Reto Kromer
2a71179776 Merge pull request #175 from amiaopensource/script_folder
scripts folder
2017-04-18 19:38:56 +02:00
Reto Kromer
77e346c067 Merge pull request #176 from amiaopensource/signature-examples
Add examples for using new Signature filter
2017-04-17 20:28:55 +02:00
Andrew Weaver
b995fb05c5 FFmpeg -> ffmpeg 2017-04-17 14:27:58 -04:00
Andrew Weaver
d30741e378 add generate fingerprint
wording
2017-04-17 14:11:26 -04:00
Andrew Weaver
e3b01e2aa8 add compare fingerprints 2017-04-17 13:40:18 -04:00
Reto Kromer
750a763157 more coherent output prompt 2017-04-17 11:46:46 +02:00
Reto Kromer
0a6e5a4a7a more coherent output prompt 2017-04-17 11:46:10 +02:00
Reto Kromer
54a8ab6057 plural 2017-04-17 07:58:26 +02:00
Reto Kromer
e762c7dc42 plural 2017-04-17 07:57:49 +02:00
Reto Kromer
cb7f001444 plural 2017-04-17 07:55:58 +02:00
Reto Kromer
89039f55b3 [WIP] update readme
- add maintainers
- add link to PR
- move licence at the end

Remarks:
- It needs more work by an English native.
- The section `AVHack Team` could be called `History` and start with Ashley’s first release.
2017-04-17 07:52:12 +02:00
Reto Kromer
59e6c6d879 correct link 2017-04-17 07:05:07 +02:00
Reto Kromer
195bc5446e script folder 2017-04-17 07:04:00 +02:00
Reto Kromer
a1cc5a4428 update date 2017-04-17 07:00:48 +02:00
Reto Kromer
58663a869f script folder 2017-04-17 07:00:15 +02:00
Reto Kromer
05d16367f0 script folder 2017-04-17 06:58:14 +02:00
Reto Kromer
9477bcfe0a Merge pull request #173 from amiaopensource/weaver-branch
add script for comparing audio framemd5
2017-04-16 16:56:05 +02:00
aweaver
ad439d3b78 version 2017-04-16 07:53:28 -07:00
aweaver
736b01e426 caveat about transcode 2017-04-16 07:40:00 -07:00
Ashley
57166fe61d Merge pull request #174 from amiaopensource/don_t_leave_ffmprovisr
external link opens a new window
2017-04-16 10:25:08 -04:00
aweaver
ec26d2038a urls and quotes 2017-04-16 07:17:27 -07:00
aweaver
321d998b5a name,filter and spaces 2017-04-16 07:01:40 -07:00
Reto Kromer
009670eed1 external link opens a new window
- add missing `target="_blank"`
- uniform `href` before `target` (easier to maintain)
2017-04-16 10:17:43 +02:00
aweaver
d3a941a725 add script for audio 2017-04-15 14:02:23 -07:00
Reto Kromer
d334d7b4de HTML cleanup
before new release
2017-04-14 08:31:40 +02:00
Reto Kromer
d08cf349f6 Merge pull request #172 from pjotrek-b/text
Split "change formats" into separate categories
2017-04-13 19:11:34 +02:00
Peter B
b5b06021b0 Rearranged aspect-ratio conversions to be next to each other. 2017-04-13 16:24:17 +02:00
Peter B
e0ceeb0d73 Split "change formats" into: rewrap,transcode,formats 2017-04-13 16:23:09 +02:00
Reto Kromer
dc47dbc618 Merge pull request #170 from pjotrek-b/avsync
Added AV sync repair option (aresample/async)
2017-04-13 15:42:46 +02:00
Reto Kromer
28ae979652 Merge pull request #171 from pjotrek-b/gh-pages
Fixed outdated link to ffmpeg filters paragaph.
2017-04-13 15:32:38 +02:00
Peter B
0e2a2c2bfe Fixed outdated link to ffmpeg filters paragaph. 2017-04-13 15:00:21 +02:00
Peter B
388b107a3f Added AV sync repair option (aresample/async) 2017-04-13 14:43:35 +02:00
Reto Kromer
117ebf506d Merge pull request #167 from pjotrek-b/gh-pages
Added framemd5 for audio.
2017-04-13 12:33:05 +02:00
Peter B
9356f9af93 Added testvideo with image + sound (SMPTE bars) 2017-04-13 11:10:41 +02:00
Peter B
4d2d5c7c81 Added framemd5 for audio. 2017-04-13 10:35:42 +02:00
6 changed files with 786 additions and 409 deletions

BIN  img/16_32_abitscope.gif  (new binary file, 450 KiB; content not shown)
@@ -30,12 +30,12 @@
<p>For instructions on how to install FFmpeg on Mac, Linux, and Windows, refer to Reto Kromer's <a href="https://avpres.net/FFmpeg/#ch1" target="_blank">installation instructions</a>.</p>
<p>For Bash and command line basics, try the <a href="https://learnpythonthehardway.org/book/appendixa.html" target="_blank">Command Line Crash Course</a>. For a little more context presented in an ffmprovisr style, try <a href="http://explainshell.com/" target="_blank">explainshell.com</a>!</p>
<h5>License</h5>
<p><a target="_blank" href="https://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png"></a><br>
This work is licensed under a <a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.</p>
<p><a href="https://creativecommons.org/licenses/by-sa/4.0/" target="_blank"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png"></a><br>
This work is licensed under a <a href="https://creativecommons.org/licenses/by-sa/4.0/" target="_blank">Creative Commons Attribution-ShareAlike 4.0 International License</a>.</p>
<h5>Sister projects</h5>
<p><a target="_blank" href="http://dd388.github.io/crals/">Script Ahoy</a>: Community Resource for Archivists and Librarians Scripting</p>
<p><a target="_blank" href="https://datapraxis.github.io/sourcecaster/">The Sourcecaster</a>: an app that helps you use the command line to work through common challenges that come up when working with digital primary sources.</p>
<p><a target="_blank" href="https://amiaopensource.github.io/cable-bible/">Cable Bible</a>: A Guide to Cables and Connectors Used for Audiovisual Tech</p>
<p><a href="http://dd388.github.io/crals/" target="_blank">Script Ahoy</a>: Community Resource for Archivists and Librarians Scripting</p>
<p><a href="https://datapraxis.github.io/sourcecaster/" target="_blank">The Sourcecaster</a>: an app that helps you use the command line to work through common challenges that come up when working with digital primary sources.</p>
<p><a href="https://amiaopensource.github.io/cable-bible/" target="_blank">Cable Bible</a>: A Guide to Cables and Connectors Used for Audiovisual Tech</p>
</div>
<div class="well col-md-8 col-md-offset-0">
@@ -43,58 +43,42 @@
<h3>What do you want to do?</h3>
<h6>Select from the following.</h6>
</div>
<div class="well"><h4>Change formats</h4>
<!-- WAV to MP3 -->
<span data-toggle="modal" data-target="#wav_to_mp3"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Convert WAV to MP3">WAV to MP3</button></span>
<div id="wav_to_mp3" class="modal fade" tabindex="-1" role="dialog">
<div class="well">
<h4>Change container (rewrap)</h4>
<!-- MKV to MP4 -->
<span data-toggle="modal" data-target="#mkv_to_mp4"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Convert Matroska (MKV) to MP4">MKV to MP4</button></span>
<div id="mkv_to_mp4" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>WAV to MP3</h3>
<p><code>ffmpeg -i <i>input_file</i>.wav -write_id3v1 1 -id3v2_version 3 -dither_method modified_e_weighted -out_sample_rate 48k -qscale:a 1 <i>output_file</i>.mp3</code></p>
<p>This will convert your WAV files to MP3s.</p>
<h3>MKV to MP4</h3>
<p><code>ffmpeg -i <i>input_file</i>.mkv -c:v copy -c:a aac <i>output_file</i>.mp4</code></p>
<p>This will convert your Matroska (MKV) files to MP4 files.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path and name of the input file</dd>
<dt>-write_id3v1 <i>1</i></dt><dd>Write ID3v1 tag. This will add metadata to the old MP3 format, assuming you've embedded metadata into the WAV file.</dd>
<dt>-id3v2_version <i>3</i></dt><dd>Write ID3v2 tag. This will add metadata to a newer MP3 format, assuming you've embedded metadata into the WAV file.</dd>
<dt>-dither_method <i>modified_e_weighted</i></dt><dd>Dither makes sure you don't unnecessarily truncate the dynamic range of your audio.</dd>
<dt>-out_sample_rate <i>48k</i></dt><dd>Sets the audio sampling frequency to 48 kHz. This can be omitted to use the same sampling frequency as the input.</dd>
<dt>-qscale:a <i>1</i></dt><dd>This sets the encoder to use a constant quality with a variable bitrate between 190 and 250 kbit/s. If you would prefer a constant bitrate, this could be replaced with <code>-b:a 320k</code> to set the maximum bitrate allowed by the MP3 format. For a more detailed discussion of variable vs constant bitrates, see <a href="https://trac.ffmpeg.org/wiki/Encode/MP3" target="_blank">here</a>.</dd>
<dt><i>output_file</i></dt><dd>path and name of the output file</dd>
<dt>-i <i>input_file</i></dt><dd>path and name of the input file<br>
The extension for the Matroska container is <code>.mkv</code>.</dd>
<dt>-c:v copy</dt><dd>copies the video stream without re-encoding</dd>
<dt>-c:a aac</dt><dd>re-encodes using the AAC audio codec<br>
Note that sadly MP4 cannot contain sound encoded by a PCM (Pulse-Code Modulation) audio codec.<br>
For silent videos you can replace <code>-c:a aac</code> by <code>-an</code>.</dd>
<dt><i>output_file</i></dt><dd>path and name of the output file<br>
The extension for the MP4 container is <code>.mp4</code>.</dd>
</dl>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends WAV to MP3 -->
<!-- ends MKV to MP4 -->
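<!-- hypothetical usage example: input.mkv and output.mp4 are placeholder names, not from the original recipe -->
<p>As a filled-in sketch of the MKV to MP4 command above, assuming a hypothetical source file named <code>input.mkv</code>:</p>
<p><code>ffmpeg -i input.mkv -c:v copy -c:a aac output.mp4</code></p>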
</div>
<!-- ends well -->
<!-- WAV to AAC/MP4 -->
<span data-toggle="modal" data-target="#wav_to_mp4"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Convert WAV to AAC/MP4">WAV to AAC/MP4</button></span>
<div id="wav_to_mp4" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>WAV to AAC/MP4</h3>
<p><code>ffmpeg -i <i>input_file</i>.wav -c:a aac -b:a 128k -dither_method modified_e_weighted -ar 44100 <i>output_file</i>.mp4</code></p>
<p>This will convert your WAV file to AAC/MP4.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path and name of the input file</dd>
<dt>-c:a aac</dt><dd>sets the audio codec to AAC</dd>
<dt>-b:a 128k</dt><dd>sets the bitrate of the audio to 128k</dd>
<dt>-dither_method modified_e_weighted</dt><dd>Dither makes sure you don't unnecessarily truncate the dynamic range of your audio.</dd>
<dt>-ar 44100</dt><dd>sets the audio sampling frequency to 44100 Hz, or 44.1 kHz, or “CD quality”</dd>
<dt><i>output_file</i></dt><dd>path and name of the output file</dd>
</dl>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends WAV to AAC/MP4 -->
<h4>Change codec (transcode)</h4>
<!-- Transcode to ProRes -->
<span data-toggle="modal" data-target="#to_prores"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transcode to deinterlaced Apple ProRes LT">Transcode to ProRes</button></span>
@@ -198,106 +182,6 @@
</div>
<!-- ends H.264 from DCP -->
<!-- NTSC to H.264 -->
<span data-toggle="modal" data-target="#ntsc_to_h264"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Upscaled, pillar-boxed HD H.264 access files from SD NTSC source">NTSC to H.264</button></span>
<div id="ntsc_to_h264" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Upscaled, Pillar-boxed HD H.264 Access Files from SD NTSC source</h3>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -filter:v "yadif,scale=1440:1080:flags=lanczos,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,format=yuv420p" <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>Calls the program ffmpeg</dd>
<dt>-i</dt><dd>path, name and extension of the input file</dd>
<dt>-c:v libx264</dt><dd>encodes video stream with libx264 (h264)</dd>
<dt>-filter:v</dt><dd>applies a filter chain to the video stream: yadif deinterlaces; scale resizes the video frame and pad fills the area around the 4:3 image to complete the 16:9 frame; flags=lanczos uses the Lanczos scaling algorithm, which is slower but better than the default bilinear algorithm; finally, format specifies a pixel format of YUV 4:2:0. The same scaling filter also downscales a larger image size into HD.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends NTSC to H.264 -->
<!-- 4:3 to 16:9 -->
<span data-toggle="modal" data-target="#SD_HD"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transform 4:3 aspect ratio into 16:9 with pillarbox">4:3 to 16:9</button></span>
<div id="SD_HD" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Transform 4:3 aspect ratio into 16:9 with pillarbox</h3>
<p>Transform a video file with 4:3 aspect ratio into a video file with 16:9 aspect ratio by correct pillarboxing.</p>
<p><code>ffmpeg -i <i>input_file</i> -filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2" -c:a copy <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"</dt><dd>video padding<br>This resolution independent formula is actually padding any aspect ratio into 16:9 by pillarboxing, because the video filter uses relative values for input width (iw), input height (ih), output width (ow) and output height (oh).</dd>
<dt>-c:a copy</dt><dd>copies the audio stream without re-encoding<br>
For silent videos you can replace <code>-c:a copy</code> by <code>-an</code>.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends 4:3 to 16:9 -->
<!-- SD to HD -->
<span data-toggle="modal" data-target="#SD_HD_2"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transform SD to HD with pillarbox">SD to HD</button></span>
<div id="SD_HD_2" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Transform SD into HD with pillarbox</h3>
<p>Transform an SD video file with 4:3 aspect ratio into an HD video file with 16:9 aspect ratio by correct pillarboxing.</p>
<p><code>ffmpeg -i <i>input_file</i> -filter:v "colormatrix=bt601:bt709, scale=1440:1080:flags=lanczos, pad=1920:1080:240:0" -c:a copy <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-filter:v "colormatrix=bt601:bt709, scale=1440:1080:flags=lanczos, pad=1920:1080:240:0"</dt><dd>set colour matrix, video scaling and padding<br>Three filters are applied:
<ol>
<li>The luma coefficients are modified from SD video (according to Rec. 601) to HD video (according to Rec. 709) by a colour matrix. Note that nowadays Rec. 709 is often used for SD as well, in which case you may omit this filter.</li>
<li>The scaling filter (<code>scale=1440:1080</code>) works for both upscaling and downscaling. We use the Lanczos scaling algorithm (<code>flags=lanczos</code>), which is slower but gives better results than the default bilinear algorithm.</li>
<li>The padding filter (<code>pad=1920:1080:240:0</code>) completes the transformation from SD to HD.</li>
</ol></dd>
<dt>-c:a copy</dt><dd>copies the audio stream without re-encoding<br>
For silent videos you can replace <code>-c:a copy</code> with <code>-an</code>.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends SD to HD -->
<!-- 16:9 to 4:3 -->
<span data-toggle="modal" data-target="#HD_SD"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transform 16:9 aspect ratio video into 4:3 with letterbox">16:9 to 4:3</button></span>
<div id="HD_SD" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Transform 16:9 aspect ratio video into 4:3 with letterbox</h3>
<p>Transform a video file with 16:9 aspect ratio into a video file with 4:3 aspect ratio by correct letterboxing.</p>
<p><code>ffmpeg -i <i>input_file</i> -filter:v "pad=iw:iw*3/4:(ow-iw)/2:(oh-ih)/2" -c:a copy <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-filter:v "pad=iw:iw*3/4:(ow-iw)/2:(oh-ih)/2"</dt><dd>video padding<br>
This resolution independent formula is actually padding any aspect ratio into 4:3 by letterboxing, because the video filter uses relative values for input width (iw), input height (ih), output width (ow) and output height (oh).</dd>
<dt>-c:a copy</dt><dd>copies the audio stream without re-encoding<br>
For silent videos you can replace <code>-c:a copy</code> by <code>-an</code>.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends 16:9 to 4:3 -->
<!-- Transcode to FFV1.mkv -->
<span data-toggle="modal" data-target="#create_FFV1_mkv"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transcode your file with the FFV1 Version 3 Codec in a Matroska container">Create FFV1.mkv</button></span>
<div id="create_FFV1_mkv" class="modal fade" tabindex="-1" role="dialog">
@@ -330,55 +214,6 @@
</div>
<!-- ends Transcode to FFV1.mkv-->
<!-- Change display aspect ratio without re-encoding video-->
<span data-toggle="modal" data-target="#change_DAR"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Change display aspect ratio without re-encoding">Change Display Aspect Ratio</button></span>
<div id="change_DAR" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Change Display Aspect Ratio without re-encoding video</h3>
<p><code>ffmpeg -i <i>input_file</i> -c:v copy -aspect 4:3 <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-c:v copy</dt><dd>Copy all mapped video streams.</dd>
<dt>-aspect 4:3</dt><dd>Change Display Aspect Ratio to <code>4:3</code>. Experiment with other aspect ratios such as <code>16:9</code>. If used together with <code>-c:v copy</code>, it will affect the aspect ratio stored at container level, but not the aspect ratio stored in encoded frames, if it exists.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends Change display aspect ratio without re-encoding video -->
<!-- MKV to MP4 -->
<span data-toggle="modal" data-target="#mkv_to_mp4"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Convert Matroska (MKV) to MP4">MKV to MP4</button></span>
<div id="mkv_to_mp4" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>MKV to MP4</h3>
<p><code>ffmpeg -i <i>input_file</i>.mkv -c:v copy -c:a aac <i>output_file</i>.mp4</code></p>
<p>This will convert your Matroska (MKV) files to MP4 files.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path and name of the input file<br>
The extension for the Matroska container is <code>.mkv</code>.</dd>
<dt>-c:v copy</dt><dd>copies the video stream without re-encoding</dd>
<dt>-c:a aac</dt><dd>re-encodes using the AAC audio codec<br>
Note that sadly MP4 cannot contain sound encoded by a PCM (Pulse-Code Modulation) audio codec.<br>
For silent videos you can replace <code>-c:a aac</code> by <code>-an</code>.</dd>
<dt><i>output_file</i></dt><dd>path and name of the output file<br>
The extension for the MP4 container is <code>.mp4</code>.</dd>
</dl>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends MKV to MP4 -->
<!-- Images to GIF -->
<span data-toggle="modal" data-target="#img_to_gif"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Converts images to GIF">Images to GIF</button></span>
<div id="img_to_gif" class="modal fade" tabindex="-1" role="dialog">
@@ -476,6 +311,188 @@
</div>
<!-- ends Transcode to H.265 -->
<p>&nbsp;</p>
<!-- Here comes audio-only transcoding -->
<!-- WAV to MP3 -->
<span data-toggle="modal" data-target="#wav_to_mp3"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Convert WAV to MP3">WAV to MP3</button></span>
<div id="wav_to_mp3" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>WAV to MP3</h3>
<p><code>ffmpeg -i <i>input_file</i>.wav -write_id3v1 1 -id3v2_version 3 -dither_method modified_e_weighted -out_sample_rate 48k -qscale:a 1 <i>output_file</i>.mp3</code></p>
<p>This will convert your WAV files to MP3s.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path and name of the input file</dd>
<dt>-write_id3v1 <i>1</i></dt><dd>Write ID3v1 tag. This will add metadata to the old MP3 format, assuming you've embedded metadata into the WAV file.</dd>
<dt>-id3v2_version <i>3</i></dt><dd>Write ID3v2 tag. This will add metadata to a newer MP3 format, assuming you've embedded metadata into the WAV file.</dd>
<dt>-dither_method <i>modified_e_weighted</i></dt><dd>Dither makes sure you don't unnecessarily truncate the dynamic range of your audio.</dd>
<dt>-out_sample_rate <i>48k</i></dt><dd>Sets the audio sampling frequency to 48 kHz. This can be omitted to use the same sampling frequency as the input.</dd>
<dt>-qscale:a <i>1</i></dt><dd>This sets the encoder to use a constant quality with a variable bitrate between 190 and 250 kbit/s. If you would prefer a constant bitrate, this could be replaced with <code>-b:a 320k</code> to set the maximum bitrate allowed by the MP3 format. For a more detailed discussion of variable vs constant bitrates, see <a href="https://trac.ffmpeg.org/wiki/Encode/MP3" target="_blank">here</a>.</dd>
<dt><i>output_file</i></dt><dd>path and name of the output file</dd>
</dl>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends WAV to MP3 -->
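<!-- hypothetical usage example: interview.wav and interview.mp3 are placeholder names -->
<p>A concrete sketch of the WAV to MP3 command above, assuming a hypothetical source file <code>interview.wav</code>:</p>
<p><code>ffmpeg -i interview.wav -write_id3v1 1 -id3v2_version 3 -dither_method modified_e_weighted -out_sample_rate 48k -qscale:a 1 interview.mp3</code></p>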
<!-- WAV to AAC/MP4 -->
<span data-toggle="modal" data-target="#wav_to_mp4"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Convert WAV to AAC/MP4">WAV to AAC/MP4</button></span>
<div id="wav_to_mp4" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>WAV to AAC/MP4</h3>
<p><code>ffmpeg -i <i>input_file</i>.wav -c:a aac -b:a 128k -dither_method modified_e_weighted -ar 44100 <i>output_file</i>.mp4</code></p>
<p>This will convert your WAV file to AAC/MP4.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path and name of the input file</dd>
<dt>-c:a aac</dt><dd>sets the audio codec to AAC</dd>
<dt>-b:a 128k</dt><dd>sets the bitrate of the audio to 128k</dd>
<dt>-dither_method modified_e_weighted</dt><dd>Dither makes sure you don't unnecessarily truncate the dynamic range of your audio.</dd>
<dt>-ar 44100</dt><dd>sets the audio sampling frequency to 44100 Hz, or 44.1 kHz, or “CD quality”</dd>
<dt><i>output_file</i></dt><dd>path and name of the output file</dd>
</dl>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends WAV to AAC/MP4 -->
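<!-- hypothetical usage example: input.wav and output.mp4 are placeholder names -->
<p>With placeholder filenames, the WAV to AAC/MP4 command above could be run as:</p>
<p><code>ffmpeg -i input.wav -c:a aac -b:a 128k -dither_method modified_e_weighted -ar 44100 output.mp4</code></p>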
</div>
<!-- ends well -->
<div class="well">
<h4>Change formats</h4>
<!-- NTSC to H.264 -->
<span data-toggle="modal" data-target="#ntsc_to_h264"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Upscaled, pillar-boxed HD H.264 access files from SD NTSC source">NTSC to H.264</button></span>
<div id="ntsc_to_h264" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Upscaled, Pillar-boxed HD H.264 Access Files from SD NTSC source</h3>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -filter:v "yadif,scale=1440:1080:flags=lanczos,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,format=yuv420p" <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>Calls the program ffmpeg</dd>
<dt>-i</dt><dd>path, name and extension of the input file</dd>
<dt>-c:v libx264</dt><dd>encodes video stream with libx264 (h264)</dd>
<dt>-filter:v</dt><dd>applies a filter chain to the video stream: yadif deinterlaces; scale resizes the video frame and pad fills the area around the 4:3 image to complete the 16:9 frame; flags=lanczos uses the Lanczos scaling algorithm, which is slower but better than the default bilinear algorithm; finally, format specifies a pixel format of YUV 4:2:0. The same scaling filter also downscales a larger image size into HD.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends NTSC to H.264 -->
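<!-- hypothetical usage example: ntsc_tape.mov and access.mp4 are placeholder names for an SD NTSC master and its HD access copy -->
<p>A filled-in sketch of the NTSC to H.264 command above, using placeholder filenames for an SD NTSC master and its HD access copy:</p>
<p><code>ffmpeg -i ntsc_tape.mov -c:v libx264 -filter:v "yadif,scale=1440:1080:flags=lanczos,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,format=yuv420p" access.mp4</code></p>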
<!-- 4:3 to 16:9 -->
<span data-toggle="modal" data-target="#SD_HD"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transform 4:3 aspect ratio into 16:9 with pillarbox">4:3 to 16:9</button></span>
<div id="SD_HD" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Transform 4:3 aspect ratio into 16:9 with pillarbox</h3>
<p>Transform a video file with 4:3 aspect ratio into a video file with 16:9 aspect ratio by correct pillarboxing.</p>
<p><code>ffmpeg -i <i>input_file</i> -filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2" -c:a copy <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"</dt><dd>video padding<br>This resolution independent formula is actually padding any aspect ratio into 16:9 by pillarboxing, because the video filter uses relative values for input width (iw), input height (ih), output width (ow) and output height (oh).</dd>
<dt>-c:a copy</dt><dd>copies the audio stream without re-encoding<br>
For silent videos you can replace <code>-c:a copy</code> by <code>-an</code>.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends 4:3 to 16:9 -->
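<!-- illustrative arithmetic only: the 720x576 input size is a hypothetical example -->
<p>To illustrate the formula, for a hypothetical 720x576 (4:3) input the filter evaluates to <code>pad=1024:576:152:0</code>: the output width is ih*16/9 = 576*16/9 = 1024, the height stays 576, and the 720-pixel-wide image is centred with (1024-720)/2 = 152 pixels of pillarbox on each side.</p>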
<!-- 16:9 to 4:3 -->
<span data-toggle="modal" data-target="#HD_SD"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transform 16:9 aspect ratio video into 4:3 with letterbox">16:9 to 4:3</button></span>
<div id="HD_SD" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Transform 16:9 aspect ratio video into 4:3 with letterbox</h3>
<p>Transform a video file with 16:9 aspect ratio into a video file with 4:3 aspect ratio by correct letterboxing.</p>
<p><code>ffmpeg -i <i>input_file</i> -filter:v "pad=iw:iw*3/4:(ow-iw)/2:(oh-ih)/2" -c:a copy <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-filter:v "pad=iw:iw*3/4:(ow-iw)/2:(oh-ih)/2"</dt><dd>video padding<br>
This resolution independent formula is actually padding any aspect ratio into 4:3 by letterboxing, because the video filter uses relative values for input width (iw), input height (ih), output width (ow) and output height (oh).</dd>
<dt>-c:a copy</dt><dd>copies the audio stream without re-encoding<br>
For silent videos you can replace <code>-c:a copy</code> by <code>-an</code>.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends 16:9 to 4:3 -->
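<!-- illustrative arithmetic only: the 1920x1080 input size is a hypothetical example -->
<p>To illustrate the formula, for a hypothetical 1920x1080 (16:9) input the filter evaluates to <code>pad=1920:1440:0:180</code>: the output height is iw*3/4 = 1920*3/4 = 1440, and the 1080-pixel-high image is centred with (1440-1080)/2 = 180 pixels of letterbox above and below.</p>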
<!-- SD to HD -->
<span data-toggle="modal" data-target="#SD_HD_2"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Transform SD to HD with pillarbox">SD to HD</button></span>
<div id="SD_HD_2" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Transform SD into HD with pillarbox</h3>
<p>Transform an SD video file with 4:3 aspect ratio into an HD video file with 16:9 aspect ratio by correct pillarboxing.</p>
<p><code>ffmpeg -i <i>input_file</i> -filter:v "colormatrix=bt601:bt709, scale=1440:1080:flags=lanczos, pad=1920:1080:240:0" -c:a copy <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-filter:v "colormatrix=bt601:bt709, scale=1440:1080:flags=lanczos, pad=1920:1080:240:0"</dt><dd>set colour matrix, video scaling and padding<br>Three filters are applied:
<ol>
<li>The luma coefficients are modified from SD video (according to Rec. 601) to HD video (according to Rec. 709) by a colour matrix. Note that nowadays Rec. 709 is often used for SD as well, in which case you may omit this filter.</li>
<li>The scaling filter (<code>scale=1440:1080</code>) works for both upscaling and downscaling. We use the Lanczos scaling algorithm (<code>flags=lanczos</code>), which is slower but gives better results than the default bilinear algorithm.</li>
<li>The padding filter (<code>pad=1920:1080:240:0</code>) completes the transformation from SD to HD.</li>
</ol></dd>
<dt>-c:a copy</dt><dd>copies the audio stream without re-encoding<br>
For silent videos you can replace <code>-c:a copy</code> with <code>-an</code>.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends SD to HD -->
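<!-- hypothetical variant: drops the colour matrix step for material already treated as Rec. 709; filenames are placeholders -->
<p>As noted above, the colour matrix step may be omitted when the source is already treated as Rec. 709; a sketch of that variant, with placeholder filenames, would be:</p>
<p><code>ffmpeg -i input.mov -filter:v "scale=1440:1080:flags=lanczos, pad=1920:1080:240:0" -c:a copy output.mov</code></p>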
<!-- Change display aspect ratio without re-encoding video-->
<span data-toggle="modal" data-target="#change_DAR"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Change display aspect ratio without re-encoding">Change Display Aspect Ratio</button></span>
<div id="change_DAR" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Change Display Aspect Ratio without re-encoding video</h3>
<p><code>ffmpeg -i <i>input_file</i> -c:v copy -aspect 4:3 <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-c:v copy</dt><dd>Copy all mapped video streams.</dd>
<dt>-aspect 4:3</dt><dd>Change Display Aspect Ratio to <code>4:3</code>. Experiment with other aspect ratios such as <code>16:9</code>. If used together with <code>-c:v copy</code>, it will affect the aspect ratio stored at container level, but not the aspect ratio stored in encoded frames, if it exists.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends Change display aspect ratio without re-encoding video -->
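<!-- hypothetical usage example with the 16:9 ratio mentioned above; filenames are placeholders -->
<p>For instance, to set the container-level display aspect ratio to the 16:9 value mentioned above:</p>
<p><code>ffmpeg -i input.mp4 -c:v copy -aspect 16:9 output.mp4</code></p>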
<!-- Deinterlace video -->
<span data-toggle="modal" data-target="#deinterlace"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Deinterlace video">Deinterlace video</button></span>
<div id="deinterlace" class="modal fade" tabindex="-1" role="dialog">
@@ -483,7 +500,7 @@
<div class="modal-content">
<div class="well">
<h3>Deinterlace a video</h3>
<p> <code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf "yadif,format=yuv420p" <i>output_file</i></code> </p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf "yadif,format=yuv420p" <i>output_file</i></code></p>
<p>This command takes an interlaced input file and outputs a deinterlaced H.264 MP4.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
@@ -524,7 +541,7 @@
<div class="well">
<h3>Transcode video to a different colourspace</h3>
<p>This command uses a filter to convert the video to a different colour space.</p>
<p> <code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf colormatrix=src:dst <i>output_file</i></code> </p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf colormatrix=src:dst <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input file</i></dt><dd>path, name and extension of the input file</dd>
@@ -537,7 +554,7 @@
<p><b>Note</b>: Converting between colourspaces with ffmpeg can be done via either the <b>colormatrix</b> or <b>colorspace</b> filters, with colorspace allowing finer control (individual setting of colourspace, transfer characteristics, primaries, range, pixel format, etc). See <a href="https://trac.ffmpeg.org/wiki/colorspace" target="_blank">this</a> entry on the ffmpeg wiki, and the ffmpeg documentation for <a href="http://ffmpeg.org/ffmpeg-filters.html#colormatrix" target="_blank">colormatrix</a> and <a href="http://ffmpeg.org/ffmpeg-filters.html#colorspace" target="_blank">colorspace</a>.</p>
<hr>
<h4>Convert colourspace and embed colourspace metadata</h4>
<p> <code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf colormatrix=src:dst -color_primaries <i>val</i> -color_trc <i>val</i> -colorspace <i>val</i> <i>output_file</i></code> </p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf colormatrix=src:dst -color_primaries <i>val</i> -color_trc <i>val</i> -colorspace <i>val</i> <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input file</i></dt><dd>path, name and extension of the input file</dd>
@@ -553,11 +570,11 @@
</dl>
<h5>Examples</h5>
<p>To Rec.601 (525-line/NTSC):</p>
<p> <code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf colormatrix=bt709:smpte170m -color_primaries smpte170m -color_trc smpte170m -colorspace smpte170m <i>output_file</i></code> </p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf colormatrix=bt709:smpte170m -color_primaries smpte170m -color_trc smpte170m -colorspace smpte170m <i>output_file</i></code></p>
<p>To Rec.601 (625-line/PAL):</p>
<p> <code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf colormatrix=bt709:bt470bg -color_primaries bt470bg -color_trc gamma28 -colorspace bt470bg <i>output_file</i></code> </p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf colormatrix=bt709:bt470bg -color_primaries bt470bg -color_trc gamma28 -colorspace bt470bg <i>output_file</i></code></p>
<p>To Rec.709:</p>
<p> <code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf colormatrix=bt601:bt709 -color_primaries bt709 -color_trc bt709 -colorspace bt709 <i>output_file</i></code> </p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf colormatrix=bt601:bt709 -color_primaries bt709 -color_trc bt709 -colorspace bt709 <i>output_file</i></code></p>
<p>MediaInfo output examples:</p>
<img src="./img/colourspace_metadata_mediainfo.png" alt="MediaInfo screenshots of colourspace metadata"><br>
<p><span class="beware"></span> Using this command it is possible to add Rec.709 tags to a file that is actually Rec.601 (etc), so apply with caution!</p>
@@ -582,18 +599,18 @@
<div class="well">
<h3>Inverse telecine a video file</h3>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf "fieldmatch,yadif,decimate" <i>output_file</i></code></p>
<p>The inverse telecine procedure reverses the <a href="https://en.wikipedia.org/wiki/Three-two_pull_down">3:2 pull down</a> process, restoring 29.97fps interlaced video to the 24fps frame rate of the original film source.</p>
<p>The inverse telecine procedure reverses the <a href="https://en.wikipedia.org/wiki/Three-two_pull_down" target="_blank">3:2 pull down</a> process, restoring 29.97fps interlaced video to the 24fps frame rate of the original film source.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-c:v libx264</dt><dd>encode video as H.264</dd>
<dt>-vf "fieldmatch,yadif,decimate"</dt><dd>applies these three video filters to the input video.<br>
<a href="https://ffmpeg.org/ffmpeg-filters.html#fieldmatch">Fieldmatch</a> is a field matching filter for inverse telecine - it reconstructs the progressive frames from a telecined stream.<br>
<a href="https://ffmpeg.org/ffmpeg-filters.html#yadif-1">Yadif</a> (yet another deinterlacing filter) deinterlaces the video. (Note that ffmpeg also includes several other deinterlacers).<br>
<a href="https://ffmpeg.org/ffmpeg-filters.html#decimate-1">Decimate</a> deletes duplicated frames.</dd>
<a href="https://ffmpeg.org/ffmpeg-filters.html#fieldmatch" target="_blank">Fieldmatch</a> is a field matching filter for inverse telecine - it reconstructs the progressive frames from a telecined stream.<br>
<a href="https://ffmpeg.org/ffmpeg-filters.html#yadif-1" target="_blank">Yadif</a> (yet another deinterlacing filter) deinterlaces the video. (Note that ffmpeg also includes several other deinterlacers).<br>
<a href="https://ffmpeg.org/ffmpeg-filters.html#decimate-1" target="_blank">Decimate</a> deletes duplicated frames.</dd>
<dt><i>output file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
<p> <code>"fieldmatch,yadif,decimate"</code> is an ffmpeg <a href="https://trac.ffmpeg.org/wiki/FilteringGuide#FiltergraphChainFilterrelationship" target="_blank">filtergraph</a>. Here the filtergraph is made up of one filter chain, which is itself made up of the three filters (separated by commas).<br>
<p><code>"fieldmatch,yadif,decimate"</code> is an ffmpeg <a href="https://trac.ffmpeg.org/wiki/FilteringGuide#FiltergraphChainFilterrelationship" target="_blank">filtergraph</a>. Here the filtergraph is made up of one filter chain, which is itself made up of the three filters (separated by commas).<br>
The enclosing quote marks are necessary when you use spaces within the filtergraph, e.g. <code>-vf "fieldmatch, yadif, decimate"</code>, and are included above as an example of good practice.</p>
<p>Note that if applying an inverse telecine procedure to a 29.97i file, the output framerate will actually be 23.976fps.</p>
<p>This command can also be used to restore other framerates.</p>
@@ -614,6 +631,32 @@
<div class="well">
<h4>Filters</h4>
<!-- abitscope -->
<span data-toggle="modal" data-target="#abitscope"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Audio Bitscope">Audio Bitscope</button></span>
<div id="abitscope" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Creates a visualization of the bits in an audio stream</h3>
<p><code>ffplay -f lavfi "amovie=<i>input_file</i>,asplit=2[out1][a],[a]abitscope=colors=purple|yellow[out0]"</code></p>
<p>This filter allows visual analysis of the information held in various bit depths of an audio stream. This can aid with identifying when a file that is nominally of a higher bit depth actually has been 'padded' with null information. The provided GIF shows a 16 bit WAV file (left) and then the results of converting that same WAV to 32 bit (right). Note that in the 32 bit version, there is still only information in the first 16 bits.</p>
<dl>
<dt>ffplay -f lavfi</dt><dd>starts the command and tells ffplay that you will be using the lavfi virtual device to create the input</dd>
<dt>amovie=<i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>asplit=2[out1][a]</dt><dd>splits the audio stream in two. One of these [a] will be passed to the filter, and the other [out1] will be the audible stream.</dd>
<dt>[a]abitscope=colors=purple|yellow[out0]</dt><dd>sends stream [a] into the abitscope filter, sets the colors for the channels to purple and yellow, and outputs the results to [out0]. This is what will be the visualization.</dd>
</dl>
<div class="sample-image">
<h4>Comparison of mono 16 bit and mono 16 bit padded to 32 bit.</h4>
<img src="img/16_32_abitscope.gif" alt="bit_scope_comparison">
</div>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends abitscope -->
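<!-- hypothetical usage example: my_audio.wav is a placeholder name -->
<p>A concrete sketch of the audio bitscope command above, assuming a hypothetical file <code>my_audio.wav</code>:</p>
<p><code>ffplay -f lavfi "amovie=my_audio.wav,asplit=2[out1][a],[a]abitscope=colors=purple|yellow[out0]"</code></p>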
<!-- astats -->
<span data-toggle="modal" data-target="#astats"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Play a graphical output showing decibel levels of an input file">Graphic for audio</button></span>
<div id="astats" class="modal fade" tabindex="-1" role="dialog">
@@ -700,10 +743,6 @@
<dt>fontcolor=white</dt><dd>specifies font color as white</dd>
<dt>"</dt><dd>quotation mark to close filter command</dd>
</dl>
<div class="sample-image">
<!-- <h4>Example of filter output</h4> -->
<!-- <img src="" alt="ocr example"> -->
</div>
<p class="link"></p>
</div>
</div>
@@ -727,10 +766,6 @@
<dt>-f lavfi</dt><dd>tells ffmpeg to use the <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">Libavfilter input virtual device</a></dd>
<dt>-i "movie=<i>input_file</i>,ocr"</dt><dd>declares 'movie' as <i>input_file</i> and passes in the 'ocr' command</dd>
</dl>
<div class="sample-image">
<!-- <h4>Example of filter output</h4> -->
<!-- <img src="" alt="Exports OCR example"> -->
</div>
<p class="link"></p>
</div>
</div>
@@ -827,8 +862,7 @@
</dl>
<p>The second command has a slightly different filtergraph, which breaks down as follows:</p>
<dl>
<dt>-filter_complex "[0:v]fps=10,scale=500:-1:flags=lanczos[v],[v][1:v]paletteuse"</dt><dd>
<code>[0:v]fps=10,scale=500:-1:flags=lanczos[v]</code>: applies the fps and scale filters described above to the first input file (the video).<br>
<dt>-filter_complex "[0:v]fps=10,scale=500:-1:flags=lanczos[v],[v][1:v]paletteuse"</dt><dd><code>[0:v]fps=10,scale=500:-1:flags=lanczos[v]</code>: applies the fps and scale filters described above to the first input file (the video).<br>
<code>[v][1:v]paletteuse"</code>: applies the <code>paletteuse</code> filter, setting the second input file (the palette) as the reference file.</dd>
</dl>
<p>Simpler GIF creation</p>
@@ -1073,6 +1107,112 @@
<!-- ends append notice to access mp3 -->
</div>
<div class="well">
<h4>Normalize/Equalize Audio</h4>
<!-- loudnorm metadata -->
<span data-toggle="modal" data-target="#loudnorm_metadata"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Calculate Loudness Levels">Calculate Loudness Levels</button></span>
<div id="loudnorm_metadata" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Calculate Loudness Levels</h3>
<p><code>ffmpeg -i <i>input_file</i> -af loudnorm=print_format=json -f null -</code></p>
<p>This filter calculates and outputs loudness information in JSON format about an input file (labeled "input"), as well as what the levels would be if loudnorm were applied in its one-pass mode (labeled "output"). The values generated can be used as inputs for a 'second pass' of the loudnorm filter, allowing more accurate loudness normalization than a single pass alone.</p>
<p>These instructions use the loudnorm defaults, which align well with PBS recommendations for target loudness. More information can be found at the <a href="https://ffmpeg.org/ffmpeg-filters.html#loudnorm" target="_blank">loudnorm documentation</a>.</p>
<p>Information about PBS loudness standards can be found in the <a href="http://www-tc.pbs.org/capt/Producing/TOS-2012-Pt2-Distribution.pdf" target="_blank">PBS Technical Operating Specifications</a> document. Information about EBU loudness standards can be found in the <a href="https://tech.ebu.ch/docs/r/r128-2014.pdf" target="_blank">EBU R 128</a> recommendation document.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt><i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-af loudnorm</dt><dd>activates the loudnorm filter</dd>
<dt>print_format=json</dt><dd>sets the output format for loudness information to json. This format makes it easy to use in a second pass. For a more human readable output, this can be set to <code>print_format=summary</code></dd>
<dt><i>-f null -</i></dt><dd>sets the file output to null (since we are only interested in the metadata generated)</dd>
</dl>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends loudnorm metadata -->
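<!-- hypothetical usage example using the print_format=summary option described above; input.wav is a placeholder name -->
<p>For a quick human-readable check rather than JSON, the same first pass can be run with the <code>summary</code> format mentioned above:</p>
<p><code>ffmpeg -i input.wav -af loudnorm=print_format=summary -f null -</code></p>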
<!-- one pass loudnorm -->
<span data-toggle="modal" data-target="#loudnorm_one_pass"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="One Pass Loudness Normalization">One Pass Loudness Normalization</button></span>
<div id="loudnorm_one_pass" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>One Pass Loudness Normalization</h3>
<p><code>ffmpeg -i <i>input_file</i> -af loudnorm=dual_mono=true <i>output_file</i></code></p>
<p>This will normalize the loudness of an input using one pass, which is quicker but less accurate than using two passes. This command uses the loudnorm filter defaults for target loudness. These defaults align well with PBS recommendations, but loudnorm does allow targeting of specific loudness levels. More information can be found at the <a href="https://ffmpeg.org/ffmpeg-filters.html#loudnorm" target="_blank">loudnorm documentation</a>.</p>
<p>Information about PBS loudness standards can be found in the <a href="http://www-tc.pbs.org/capt/Producing/TOS-2012-Pt2-Distribution.pdf" target="_blank">PBS Technical Operating Specifications</a> document. Information about EBU loudness standards can be found in the <a href="https://tech.ebu.ch/docs/r/r128-2014.pdf" target="_blank">EBU R 128</a> recommendation document.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt><i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-af loudnorm</dt><dd>activates the loudnorm filter with default settings</dd>
<dt>dual_mono=true</dt><dd>(optional) Use this for mono files meant to be played back on stereo systems for correct loudness. Not necessary for multi-track inputs.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension for output file</dd>
</dl>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends one pass loudnorm -->
<!-- two pass loudnorm -->
<span data-toggle="modal" data-target="#loudnorm_two_pass"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Two Pass Loudness Normalization">Two Pass Loudness Normalization</button></span>
<div id="loudnorm_two_pass" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Two Pass Loudness Normalization</h3>
<p><code>ffmpeg -i <i>input_file</i> -af loudnorm=dual_mono=true:measured_I=<i>input_i</i>:measured_TP=<i>input_tp</i>:measured_LRA=<i>input_lra</i>:measured_thresh=<i>input_thresh</i>:offset=<i>target_offset</i>:linear=true <i>output_file</i></code></p>
<p>This command allows using the levels calculated using a <a href="#loudnorm_metadata">first pass of the loudnorm filter</a> to more accurately normalize loudness. This command uses the loudnorm filter defaults for target loudness. These defaults align well with PBS recommendations, but loudnorm does allow targeting of specific loudness levels. More information can be found at the <a href="https://ffmpeg.org/ffmpeg-filters.html#loudnorm" target="_blank">loudnorm documentation</a>.</p>
<p>Information about PBS loudness standards can be found in the <a href="http://www-tc.pbs.org/capt/Producing/TOS-2012-Pt2-Distribution.pdf" target="_blank">PBS Technical Operating Specifications</a> document. Information about EBU loudness standards can be found in the <a href="https://tech.ebu.ch/docs/r/r128-2014.pdf" target="_blank">EBU R 128</a> recommendation document.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt><i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-af loudnorm</dt><dd>activates the loudnorm filter with default settings</dd>
<dt>dual_mono=true</dt><dd>(optional) use this for mono files meant to be played back on stereo systems for correct loudness. Not necessary for multi-track inputs.</dd>
<dt>measured_I=<i>input_i</i></dt><dd>use the 'input_i' value (integrated loudness) from the first pass in place of input_i</dd>
<dt>measured_TP=<i>input_tp</i></dt><dd>use the 'input_tp' value (true peak) from the first pass in place of input_tp</dd>
<dt>measured_LRA=<i>input_lra</i></dt><dd>use the 'input_lra' value (loudness range) from the first pass in place of input_lra</dd>
<dt>measured_thresh=<i>input_thresh</i></dt><dd>use the 'input_thresh' value (threshold) from the first pass in place of input_thresh</dd>
<dt>offset=<i>target_offset</i></dt><dd>use the 'target_offset' value (offset) from the first pass in place of target_offset</dd>
<dt>linear=true</dt><dd>tells loudnorm to use linear normalization</dd>
<dt><i>output_file</i></dt><dd>path, name and extension for output file</dd>
</dl>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends two pass loudnorm -->
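<!-- hypothetical example: the measured values below are made-up first-pass results, not reference numbers; filenames are placeholders -->
<p>A sketch of a second pass, plugging in hypothetical values reported by a first pass (input_i = -27.6, input_tp = -4.5, input_lra = 18.1, input_thresh = -39.2, target_offset = 0.6):</p>
<p><code>ffmpeg -i input.wav -af loudnorm=dual_mono=true:measured_I=-27.6:measured_TP=-4.5:measured_LRA=18.1:measured_thresh=-39.2:offset=0.6:linear=true output.wav</code></p>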
<!-- RIAA equalization -->
<span data-toggle="modal" data-target="#riaa_eq"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="RIAA Equalization">RIAA Equalization</button></span>
<div id="riaa_eq" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>RIAA Equalization</h3>
<p><code>ffmpeg -i <i>input_file</i> -af aemphasis=type=riaa <i>output_file</i></code></p>
<p>This will apply RIAA equalization to an input file, allowing correct listening of audio transferred 'flat' (without EQ) from records that used this EQ curve. For more information about RIAA equalization see the <a href="https://en.wikipedia.org/wiki/RIAA_equalization" target="_blank">Wikipedia page</a> on the subject.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt><i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-af aemphasis=type=riaa</dt><dd>activates the aemphasis filter and sets it to use RIAA equalization</dd>
<dt><i>output_file</i></dt><dd>path and name of output file</dd>
</dl>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends RIAA equalization -->
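<!-- hypothetical usage example: flat_transfer.wav is a placeholder name for an unequalized record transfer -->
<p>As a sketch, for a hypothetical flat transfer named <code>flat_transfer.wav</code>:</p>
<p><code>ffmpeg -i flat_transfer.wav -af aemphasis=type=riaa flat_transfer_riaa.wav</code></p>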
</div><!-- closes the well -->
<div class="well">
<h4>Preservation</h4>
@@ -1149,12 +1289,12 @@ foreach ($file in $inputfiles) {
<!-- ends batch processing (Windows) -->
<!-- Create frame md5s -->
<span data-toggle="modal" data-target="#create_frame_md5s"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Create an MD5 checksum per video frame">Create MD5 checksums</button></span>
<div id="create_frame_md5s" class="modal fade" tabindex="-1" role="dialog">
<span data-toggle="modal" data-target="#create_frame_md5s_v"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Create an MD5 checksum per video frame">Create MD5 checksums (video frames)</button></span>
<div id="create_frame_md5s_v" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Create MD5 checksums</h3>
<h3>Create MD5 checksums (video frames)</h3>
<p><code>ffmpeg -i <i>input_file</i> -f framemd5 -an <i>output_file</i></code></p>
<p>This will create an MD5 checksum per video frame.</p>
<dl>
@@ -1164,7 +1304,7 @@ foreach ($file in $inputfiles) {
<dt>-an</dt><dd>ignores the audio stream (audio no)</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
<p>You may verify an MD5 checksum file created this way by using a <a href="check_framemd5.sh" target="_blank">Bash script</a>.</p>
<p>You may verify an MD5 checksum file created this way by using a <a href="scripts/check_video_framemd5.sh" target="_blank">Bash script</a>.</p>
<p class="link"></p>
</div>
</div>
@@ -1172,6 +1312,36 @@ foreach ($file in $inputfiles) {
</div>
<!-- ends Create frame md5s -->
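<!-- hypothetical usage example: input.mov and input.framemd5 are placeholder names -->
<p>A filled-in sketch of the per-frame checksum command above, with placeholder filenames:</p>
<p><code>ffmpeg -i input.mov -f framemd5 -an input.framemd5</code></p>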
<!-- Create frame md5s (audio) -->
<span data-toggle="modal" data-target="#create_frame_md5s_a"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Create MD5 checksums for audio samples">Create MD5 checksums (audio samples)</button></span>
<div id="create_frame_md5s_a" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Create MD5 checksums (audio samples)</h3>
<p><code>ffmpeg -i <i>input_file</i> -af "asetnsamples=<i>n=48000</i>" -f framemd5 -vn <i>output_file</i></code></p>
<p>This will create an MD5 checksum for each group of 48000 audio samples.<br> The number of samples per group can be set arbitrarily, but it's good practice to match the sample rate of the media file (so you will get one checksum per second).</p>
<p>Examples for other sample rates:</p>
<ul>
<li>44.1 kHz: "asetnsamples=n=44100"</li>
<li>96 kHz: "asetnsamples=n=96000"</li>
</ul>
<p>Note: This filter transcodes audio to 16 bit PCM by default, and the generated framemd5s reflect this conversion. Validating these framemd5s will require using the same default settings.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-f framemd5</dt><dd>use the framemd5 format (muxer) to calculate the MD5 checksums</dd>
<dt>-vn</dt><dd>ignores the video stream (video no)</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
<p>You may verify an MD5 checksum file created this way by using a <a href="scripts/check_audio_framemd5.sh" target="_blank">Bash script</a>.</p>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends Create frame md5s (audio) -->
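<!-- hypothetical usage example for the 44.1 kHz case listed above; filenames are placeholders -->
<p>For the 44.1 kHz case listed above, a sketch with placeholder filenames would be:</p>
<p><code>ffmpeg -i input.wav -af "asetnsamples=n=44100" -f framemd5 -vn input.framemd5</code></p>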
<!-- Pull specs -->
<span data-toggle="modal" data-target="#pull_specs"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Pull specs from video file">Pull specs</button></span>
<div id="pull_specs" class="modal fade" tabindex="-1" role="dialog">
@@ -1229,13 +1399,12 @@ foreach ($file in $inputfiles) {
<div class="well">
<h3>Check video file interlacement patterns</h3>
<p><code>ffmpeg -i <i>input file</i> -filter:v idet -f null -</code></p>
<p></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-filter:v idet</dt><dd>This calls the <a href="https://ffmpeg.org/ffmpeg-filters.html#idet" target="_blank">idet (detect video interlacing type) filter</a>.</dd>
<dt>-f null</dt><dd>Video is decoded with the <code>null</code> muxer. This allows video decoding without creating an output file.</dd>
<dt>-</dt><dd>FFmpeg syntax requires a specified output, and <code>-</code> is just a place holder. No file is actually created. </dd>
<dt>-</dt><dd>FFmpeg syntax requires a specified output, and <code>-</code> is just a place holder. No file is actually created.</dd>
</dl>
<p class="link"></p>
</div>
@@ -1319,7 +1488,7 @@ foreach ($file in $inputfiles) {
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-f lavfi</dt><dd>tells ffmpeg to use the Libavfilter input virtual device <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">[more]</a></dd>
<dt>-i mandelbrot=size=1280x720:rate=25</dt><dd>asks for the mandelbrot test filter as input. Adjusting the <code>size</code> and <code>rate</code> options allow you to choose a specific frame size and framerate. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc" target="_blank">[more]</a></dd>
<dt>-i mandelbrot=size=1280x720:rate=25</dt><dd>asks for the mandelbrot test filter as input. Adjusting the <code>size</code> and <code>rate</code> options allow you to choose a specific frame size and framerate. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc_002c-testsrc2_002c-yuvtestsrc" target="_blank">[more]</a></dd>
<dt>-c:v <i>libx264</i></dt><dd>transcodes video from rawvideo to H.264. Set <code>-pix_fmt</code> to <code>yuv420p</code> for greater H.264 compatibility with media players.</dd>
<dt>-t 10</dt><dd>specifies recording time of 10 seconds</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file. Try different file extensions such as mkv, mov, mp4, or avi.</dd>
@@ -1342,7 +1511,7 @@ foreach ($file in $inputfiles) {
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-f lavfi</dt><dd>tells ffmpeg to use the Libavfilter input virtual device <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">[more]</a></dd>
<dt>-i smptebars=size=720x576:rate=25</dt><dd>asks for the smptebars test filter as input. Adjusting the <code>size</code> and <code>rate</code> options allows you to choose a specific frame size and framerate. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc" target="_blank">[more]</a></dd>
<dt>-i smptebars=size=720x576:rate=25</dt><dd>asks for the smptebars test filter as input. Adjusting the <code>size</code> and <code>rate</code> options allows you to choose a specific frame size and framerate. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc_002c-testsrc2_002c-yuvtestsrc" target="_blank">[more]</a></dd>
<dt>-c:v <i>prores</i></dt><dd>transcodes video from rawvideo to Apple ProRes 4:2:2.</dd>
<dt>-t 10</dt><dd>specifies recording time of 10 seconds</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file. Try different file extensions such as mov or avi.</dd>
@@ -1390,7 +1559,7 @@ foreach ($file in $inputfiles) {
<dl>
<dt>ffplay</dt><dd>starts the command</dd>
<dt>-f lavfi</dt><dd>tells ffmpeg to use the libavfilter input virtual device <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">[more]</a></dd>
<dt>-i smptehdbars=size=1920x1080</dt><dd>asks for the smptehdbars filter pattern as input and sets the HD resolution. This generates a colour bars pattern, based on the SMPTE RP 219-2002. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc" target="_blank">[more]</a></dd>
<dt>-i smptehdbars=size=1920x1080</dt><dd>asks for the smptehdbars filter pattern as input and sets the HD resolution. This generates a colour bars pattern, based on the SMPTE RP 219-2002. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc_002c-testsrc2_002c-yuvtestsrc" target="_blank">[more]</a></dd>
</dl>
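<p>Put together, the flags above form a minimal one-liner; no file is written, the pattern simply plays in an ffplay window:</p>
<p><code>ffplay -f lavfi -i smptehdbars=size=1920x1080</code></p>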
<p class="link"></p>
</div>
@@ -1411,7 +1580,7 @@ foreach ($file in $inputfiles) {
<dl>
<dt>ffplay</dt><dd>starts the command</dd>
<dt>-f lavfi</dt><dd>tells ffmpeg to use the libavfilter input virtual device <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">[more]</a></dd>
<dt>-i smptebars=size=640x480</dt><dd>asks for the smptebars filter pattern as input and sets the VGA (SD) resolution. This generates a colour bars pattern, based on the SMPTE Engineering Guideline EG 1-1990. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc" target="_blank">[more]</a></dd>
<dt>-i smptebars=size=640x480</dt><dd>asks for the smptebars filter pattern as input and sets the VGA (SD) resolution. This generates a colour bars pattern, based on the SMPTE Engineering Guideline EG 1-1990. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc_002c-testsrc2_002c-yuvtestsrc" target="_blank">[more]</a></dd>
</dl>
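<p>Likewise, the flags above combine into a minimal one-liner:</p>
<p><code>ffplay -f lavfi -i smptebars=size=640x480</code></p>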
<p class="link"></p>
</div>
@@ -1432,7 +1601,7 @@ foreach ($file in $inputfiles) {
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>takes in a normal file</dd>
<dt>-bsf noise=1</dt><dd>sets bitstream filters for all streams to 'noise'. Filters can be set on specific streams using syntax such as <code>-bsf:v</code> for video, <code>-bsf:a</code> for audio, etc. The <a target="_blank" href="https://www.ffmpeg.org/ffmpeg-bitstream-filters.html#noise">noise filter</a> intentionally damages the contents of packets without damaging the container. This sets the noise level to 1, but it could be left blank or set to any number above 0.</dd>
<dt>-bsf noise=1</dt><dd>sets bitstream filters for all streams to 'noise'. Filters can be set on specific streams using syntax such as <code>-bsf:v</code> for video, <code>-bsf:a</code> for audio, etc. The <a href="https://www.ffmpeg.org/ffmpeg-bitstream-filters.html#noise" target="_blank">noise filter</a> intentionally damages the contents of packets without damaging the container. This sets the noise level to 1, but it could be left blank or set to any number above 0 (see the example below).</dd>
<dt>-c copy</dt><dd>use stream copy mode to re-mux instead of re-encode</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
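<p>As noted above, the bitstream filter can also be limited to one stream type; for example, to damage only the video packets (a sketch; filenames are illustrative):</p>
<p><code>ffmpeg -i <i>input_file</i> -bsf:v noise=1 -c copy <i>output_file</i></code></p>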
@@ -1466,6 +1635,33 @@ foreach ($file in $inputfiles) {
</div>
<!-- ends Sine wave -->
<!-- SMPTE bars + Sine wave -->
<span data-toggle="modal" data-target="#smpte_bars_and_sine_wave"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Generate a SMPTE bars test video + audio playing a sine wave">SMPTE bars + Sine wave audio</button></span>
<div id="smpte_bars_and_sine_wave" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>SMPTE bars + Sine wave audio</h3>
<p>Generate a SMPTE bars test video with a 1 kHz sine wave as the audio test signal.</p>
<p><code>ffmpeg -f lavfi -i smptebars=size=720x576:rate=25 -f lavfi -i "sine=frequency=1000:sample_rate=48000" -c:a pcm_s16le -t 10 -c:v ffv1 <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-f lavfi</dt><dd>tells ffmpeg to use the <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">libavfilter</a> input virtual device</dd>
<dt>-i smptebars=size=720x576:rate=25</dt><dd>asks for the smptebars test filter as input. Adjusting the <code>size</code> and <code>rate</code> options allows you to choose a specific frame size and framerate. <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc_002c-testsrc2_002c-yuvtestsrc" target="_blank">[more]</a></dd>
<dt>-f lavfi</dt><dd>use libavfilter again, but now for audio</dd>
<dt>-i "sine=frequency=1000:sample_rate=48000"</dt><dd>Sets the signal to 1000 Hz, sampling at 48 kHz.</dd>
<dt>-c:a pcm_s16le</dt><dd>encodes the audio as <code>pcm_s16le</code> (the default encoding for wav files). PCM stands for pulse-code modulation (uncompressed samples), <code>s16</code> means signed 16-bit samples, and <code>le</code> means "little endian"</dd>
<dt>-t 10</dt><dd>specifies recording time of 10 seconds</dd>
<dt>-c:v <i>ffv1</i></dt><dd>Encodes the video to <a href="https://en.wikipedia.org/wiki/FFV1" target="_blank">FFV1</a>. Alter this setting to choose a different codec.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file. Try an extension that suits FFV1 video and PCM audio, such as mkv or avi.</dd>
</dl>
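<p>The same recipe can target other codecs and containers; a hedged variation writing Apple ProRes into a QuickTime file (the codec and extension choices are illustrative):</p>
<p><code>ffmpeg -f lavfi -i smptebars=size=720x576:rate=25 -f lavfi -i "sine=frequency=1000:sample_rate=48000" -c:a pcm_s16le -c:v prores -t 10 <i>output_file</i>.mov</code></p>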
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends SMPTE bars + Sine wave -->
</div>
<div class="well">
<h4>Other</h4>
@@ -1883,7 +2079,78 @@ e.g.: <code>ffmpeg -f concat -safe 0 -i mylist.txt -c copy <i>output_file</i></c
</div>
<!-- ends View Format info -->
</div><!-- closes the well -->
<!-- Compare Video Fingerprints -->
<span data-toggle="modal" data-target="#compare_video_fingerprints"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Compare Video Fingerprints">Compare Video Fingerprints</button></span>
<div id="compare_video_fingerprints" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Compare two video files for content similarity using perceptual hashing</h3>
<p><code>ffmpeg -i <i>input_one</i> -i <i>input_two</i> -filter_complex signature=detectmode=full:nb_inputs=2 -f null -</code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_one</i> -i <i>input_two</i></dt><dd>assigns the input files</dd>
<dt>-filter_complex</dt><dd>enables using more than one input file to the filter</dd>
<dt>signature=detectmode=full</dt><dd>Applies the signature filter to the inputs in 'full' mode. The other option is 'fast'.</dd>
<dt>nb_inputs=2</dt><dd>tells the filter to expect two input files</dd>
<dt>-f null -</dt><dd>Sets the output of ffmpeg to a null stream (since we are not creating a transcoded file, just viewing metadata).</dd>
</dl>
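<p>If you also want to keep the fingerprints the comparison is based on, the filter's <code>filename</code> option can be added to the same command; with more than one input the path must contain a <code>%d</code> placeholder, which is replaced by the input number (a sketch; the XML filenames are illustrative):</p>
<p><code>ffmpeg -i <i>input_one</i> -i <i>input_two</i> -filter_complex signature=detectmode=full:nb_inputs=2:format=xml:filename=signature%d.xml -f null -</code></p>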
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends Compare Video Fingerprints -->
<!-- Generate Video Fingerprint -->
<span data-toggle="modal" data-target="#generate_video_fingerprint"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Generate Video Fingerprint">Generate Video Fingerprint</button></span>
<div id="generate_video_fingerprint" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Generate a perceptual hash for an input video file</h3>
<p><code>ffmpeg -i <i>input</i> -vf signature=format=xml:filename="output.xml" -an -f null -</code></p>
<dl>
<dt>ffmpeg -i <i>input</i></dt><dd>starts the command using your input file</dd>
<dt>-vf signature=format=xml</dt><dd>applies the signature filter to the input file and sets the output format for the fingerprint to xml</dd>
<dt>filename="output.xml"</dt><dd>sets the output for the signature filter</dd>
<dt>-an</dt><dd>tells ffmpeg to ignore the audio stream of the input file</dd>
<dt>-f null -</dt><dd>Sets the ffmpeg output to a null stream (since we are only interested in the output generated by the filter).</dd>
</dl>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends Generate Video Fingerprint -->
</div><!-- closes the well -->
<div class="well">
<h4>Repair</h4>
<!-- Fix A/V async 1 -->
<span data-toggle="modal" data-target="#avsync_aresample"><button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="bottom" title="Fix A/V sync issues by resampling audio">Fix AV Sync: Resample audio</button></span>
<div id="avsync_aresample" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="well">
<h3>Fix AV Sync: Resample audio</h3>
<p><code>ffmpeg -i <i>input_file</i> -c:v copy -c:a pcm_s16le -af "aresample=async=1000" <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-c:v copy</dt><dd>Copy all mapped video streams.</dd>
<dt>-c:a pcm_s16le</dt><dd>Tells ffmpeg to encode the audio stream in 16-bit linear PCM (<a href="https://en.wikipedia.org/wiki/Endianness#Little-endian" target="_blank">little endian</a>)</dd>
<dt>-af "aresample=async=1000"</dt><dd>Stretch/squeezes samples to given timestamps, with maximum of 1000 samples per second compensation <a href="https://ffmpeg.org/ffmpeg-filters.html#aresample-1" target="_blank">[more]</a></dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file. Try different file extensions such as mkv, mov, mp4, or avi.</dd>
</dl>
<p class="link"></p>
</div>
</div>
</div>
</div>
<!-- ends Fix A/V async 1 -->
</div><!-- closes the well -->
<!-- sample example -->

README.md

@@ -1,33 +1,41 @@
# [ffmprovisr](http://amiaopensource.github.io/ffmprovisr)
Repository of useful FFmpeg command lines for archivists! [AMIA hackday](http://wiki.curatecamp.org/index.php/Association_of_Moving_Image_Archivists_%26_Digital_Library_Federation_Hack_Day_2015) edition.
Repository of useful FFmpeg command lines for archivists!
## What is this?
Project Objective: To facilitate better understanding of FFmpeg through collaborative sharing of useful scripts and detailed flag-level description of how each script works so archivists can copy-paste and produce their own scripts but also understand how and why they work.
#### Project Objective
To facilitate better understanding of FFmpeg through collaborative sharing of useful scripts and detailed flag-level description of how each script works, so archivists can copy-paste and produce their own scripts, but also understand how and why they work.
## How do I see it?
Code stuff in the gh-pages branch (the default primary branch). Readme is right here. The site is live and lives on github pages. You can see it [here](http://amiaopensource.github.io/ffmprovisr), or you can download a [release](https://github.com/amiaopensource/ffmprovisr/releases) and use it locally.
The code is found in the gh-pages branch (the default primary branch). Readme is right here. You can see the site live on [GitHub pages](http://amiaopensource.github.io/ffmprovisr), or you can download a [release](https://github.com/amiaopensource/ffmprovisr/releases) and use it locally.
## How do I contribute?
You are welcome to edit the codebase yourself or just supply the information and ask it to be added to the site.
You are welcome to edit the codebase yourself, or just supply the information and ask it to be added to the site.
To contribute to this project directly (and more quickly), clone this repository and create a new branch (`git checkout -b your-branch-name`) and add or modify a new block in index.html. Then submit a pull request and someone can review and integrate your code. There is a commented-out sample available at the bottom of index.html that can be used to build your own block.
#### Edit codebase
If you are having trouble with the coding it yourself or with github, feel free to [submit an issue](https://github.com/amiaopensource/ffmprovisr/issues) with the kind of command you would like to see added to the site.
To contribute to this project directly (and more quickly), clone this repository, create a new branch (`git checkout -b your-branch-name`), and add a new block or modify an existing one in `index.html`. Then [submit a pull request](https://github.com/amiaopensource/ffmprovisr/pulls) and the maintainers will review and integrate your code. There is a commented-out sample block at the bottom of `index.html` that can be used as a guideline for your command.
#### Make a request
If you are having trouble with coding it yourself or with GitHub, feel free to [submit an issue](https://github.com/amiaopensource/ffmprovisr/issues) with the kind of command you would like to see added to the site.
#### General help
If you want to help but don't have a new script to add, you can help us by testing out the scripts available, by refining or clarifying the documentation, or [creating an issue](https://github.com/amiaopensource/ffmprovisr/issues) for anything that sounds confusing and requires clarification.
## License
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/80x15.png" /></a><br />This <span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/InteractiveResource" rel="dct:type">work</span> by <a xmlns:cc="http://creativecommons.org/ns#" href="http://amiaopensource.github.io/ffmprovisr/" property="cc:attributionName" rel="cc:attributionURL">ffmprovisr</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.<br />Based on a work at <a xmlns:dct="http://purl.org/dc/terms/" href="https://github.com/amiaopensource/ffmprovisr" rel="dct:source">https://github.com/amiaopensource/ffmprovisr</a>.
## Code of Conduct
You can read our contributor code of conduct [here](https://github.com/amiaopensource/ffmprovisr/blob/gh-pages/code_of_conduct.md).
## Maintainers
[Ashley Blewer](https://github.com/ablwr), [Katherine Frances Nagels](https://github.com/kfrn), [Kieran O'Leary](https://github.com/kieranjol) and [Reto Kromer](https://github.com/retokromer)
## Contributors
* Gathered using [octohatrack](https://github.com/LABHR/octohatrack)
@@ -68,11 +76,17 @@ Repo: amiaopensource/ffmprovisr
GitHub Contributors: 11
All Contributors: 18
## AVHack Team:
## AVHack Team
[Ashley Blewer](https://github.com/ablwr), Eddy Colloton, Rebecca Dillmeier, [Jonathan Farbowitz](https://github.com/jfarbowitz), Rebecca Fraimow, Samuel Gutterman, Kelly Haydon, [Reto Kromer](https://github.com/retokromer), Nicole Martin, [Katherine Frances Nagels](https://github.com/kfrn), [Kieran O'Leary](https://github.com/kieranjol), Catriona Schlosser, Ben Turkus
[Association of Moving Image Archivists & Digital Library Federation Hack Day 2015](http://wiki.curatecamp.org/index.php/Association_of_Moving_Image_Archivists_%26_Digital_Library_Federation_Hack_Day_2015)
[Ashley Blewer](https://github.com/ablwr), [Eddy Colloton](https://github.com/eddycolloton), Rebecca Dillmeier, [Jonathan Farbowitz](https://github.com/jfarbowitz), [Rebecca Fraimow](https://github.com/rfraimow), Samuel Gutterman, [Kelly Haydon](https://github.com/kellyhaydon), [Reto Kromer](https://github.com/retokromer), Nicole Martin, [Katherine Frances Nagels](https://github.com/kfrn), [Kieran O'Leary](https://github.com/kieranjol), Catriona Schlosser, [Ben Turkus](https://github.com/bturkus)
## Sister projects
[Script Ahoy](http://dd388.github.io/crals/): Community Resource for Archivists and Librarians Scripting
[sourcecaster](https://datapraxis.github.io/sourcecaster/): helps you use the command line to work through common challenges that come up when working with digital primary sources.
## License
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/80x15.png" /></a><br />This <span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/InteractiveResource" rel="dct:type">work</span> by <a xmlns:cc="http://creativecommons.org/ns#" href="http://amiaopensource.github.io/ffmprovisr/" property="cc:attributionName" rel="cc:attributionURL">ffmprovisr</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.<br />Based on a work at <a xmlns:dct="http://purl.org/dc/terms/" href="https://github.com/amiaopensource/ffmprovisr" rel="dct:source">https://github.com/amiaopensource/ffmprovisr</a>.

check_framemd5.sh → scripts/check_audio_framemd5.sh Normal file → Executable file

@@ -1,6 +1,6 @@
#!/usr/bin/env bash
SCRIPT=$(basename "${0}")
VERSION='2016-12-31'
VERSION='2017-04-17'
AUTHOR='ffmprovisr'
RED='\033[1;31m'
BLUE='\033[1;34m'
@@ -13,7 +13,7 @@ fi
_output_prompt(){
cat <<EOF
Usage: ${SCRIPT} [-h] | [ -i <av_file> -m <md5_file> ]
Usage: ${SCRIPT} -h | -i <av_file> -m <md5_file>
EOF
exit 1
}
@@ -32,7 +32,7 @@ Dependency:
ffmpeg
About:
Version: ${VERSION}
Website: https://github.com/amiaopensource/ffmprovisr/blob/gh-pages/check_framemd5.sh
Website: https://github.com/amiaopensource/ffmprovisr/blob/gh-pages/scripts/check_audio_framemd5.sh
EOF
exit 0
}
@@ -54,21 +54,24 @@ done
echo -e "${BLUE}Please wait...${NC}"
unset md5_tmp
if [[ $OSTYPE = "cygwin" ]]; then
md5_tmp=""${USERPROFILE}/$(basename ${input_hash}).tmp""
md5_tmp="${USERPROFILE}/$(basename "${input_hash}").tmp"
else
md5_tmp="${HOME}/$(basename ${input_hash}).tmp"
md5_tmp="${HOME}/$(basename "${input_hash}").tmp"
fi
$(ffmpeg -i ${input_file} -loglevel 0 -f framemd5 -an ${md5_tmp})
# Find audio frame size for hash calculation
sample_rate=$(grep -v '^#' "${input_hash}" | head -n 1 | tr -d ' ' | cut -d',' -f4)
ffmpeg -i "${input_file}" -loglevel 0 -af "asetnsamples=n='$sample_rate'" -f framemd5 -vn "${md5_tmp}"
[[ ! -f ${md5_tmp} ]] && { echo -e "${RED}Error: '${input_file}' is not a valid audio-visual file.${NC}" ; _output_prompt ; }
unset old_file
unset tmp_file
old_file=$(grep -v '^#' ${input_hash})
tmp_file=$(grep -v '^#' ${md5_tmp})
unset sample_rate
old_file=$(grep -v '^#' "${input_hash}")
tmp_file=$(grep -v '^#' "${md5_tmp}")
if [[ "${old_file}" = "${tmp_file}" ]]; then
echo -e "${BLUE}'$(basename ${input_file})' matches '$(basename ${input_hash})'${NC}"
echo -e "${BLUE}'$(basename "${input_file}")' matches '$(basename "${input_hash}")'${NC}"
rm "${md5_tmp}"
else
echo -e "${RED}The following differences were detected between '$(basename ${input_file})' and '$(basename ${input_hash})':${NC}"
echo -e "${RED}The following differences were detected between '$(basename "${input_file}")' and '$(basename "${input_hash}")':${NC}"
diff "${input_hash}" "${md5_tmp}"
rm "${md5_tmp}"
fi

scripts/check_video_framemd5.sh

@@ -0,0 +1,74 @@
#!/usr/bin/env bash
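# Compares a video file against an existing framemd5 file and reports any differences
# (video-only counterpart of scripts/check_audio_framemd5.sh)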
SCRIPT=$(basename "${0}")
VERSION='2017-04-17'
AUTHOR='ffmprovisr'
RED='\033[1;31m'
BLUE='\033[1;34m'
NC='\033[0m'
if [[ ${OSTYPE} = "cygwin" ]] || [ ! $(which diff) ]; then
echo -e "${RED}Error: 'diff' is not installed by default. Please install 'diffutils' from Cygwin.${NC}"
exit 1
fi
_output_prompt(){
cat <<EOF
Usage: ${SCRIPT} -h | -i <av_file> -m <md5_file>
EOF
exit 1
}
_output_help(){
cat <<EOF
Syntax:
${SCRIPT}
Prompts a short help message.
${SCRIPT} -h
This help.
${SCRIPT} -i <av_file> -m <md5_file>
Pass to the script the audio-visual file and the corresponding MD5
file to check.
Dependency:
ffmpeg
About:
Version: ${VERSION}
Website: https://github.com/amiaopensource/ffmprovisr/blob/gh-pages/scripts/check_video_framemd5.sh
EOF
exit 0
}
unset input_file
unset input_hash
while getopts ":hi:m:" opt; do
case "${opt}" in
h) _output_help ;;
i) input_file=$OPTARG ;;
m) input_hash=$OPTARG ;;
:) echo -e "${RED}Error: option -${OPTARG} requires an argument${NC}" ; _output_prompt ;;
*) echo -e "${RED}Error: bad option -${OPTARG}${NC}" ; _output_prompt ;;
esac
done
[[ -z "${#}" || ! ${input_file} || ! ${input_hash} ]] && _output_prompt
echo -e "${BLUE}Please wait...${NC}"
unset md5_tmp
if [[ $OSTYPE = "cygwin" ]]; then
md5_tmp="${USERPROFILE}/$(basename "${input_hash}").tmp"
else
md5_tmp="${HOME}/$(basename "${input_hash}").tmp"
fi
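# Recompute the framemd5s of the video stream only (-an ignores audio) and write them to a temporary file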
ffmpeg -i "${input_file}" -loglevel 0 -f framemd5 -an "${md5_tmp}"
[[ ! -f "${md5_tmp}" ]] && { echo -e "${RED}Error: '${input_file}' is not a valid audio-visual file.${NC}" ; _output_prompt ; }
unset old_file
unset tmp_file
old_file=$(grep -v '^#' "${input_hash}")
tmp_file=$(grep -v '^#' "${md5_tmp}")
if [[ "${old_file}" = "${tmp_file}" ]]; then
echo -e "${BLUE}'$(basename "${input_file}")' matches '$(basename "${input_hash}")'${NC}"
rm "${md5_tmp}"
else
echo -e "${RED}The following differences were detected between '$(basename "${input_file}")' and '$(basename "${input_hash}")':${NC}"
diff "${input_hash}" "${md5_tmp}"
rm "${md5_tmp}"
fi

scripts/ffmprovisr Executable file

@@ -0,0 +1,19 @@
#!/usr/bin/env bash
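# Open the locally installed ffmprovisr page (Homebrew on macOS, Linuxbrew on Linux),
# falling back to the online version if no local install is found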
if [[ $OSTYPE = darwin* ]] ; then
if [ -d /usr/local/Cellar/ffmprovisr ] ; then
ffmprovisr_path=$(find /usr/local/Cellar/ffmprovisr -iname 'index.html' | sort -M | tail -n1)
fi
if [ -z "${ffmprovisr_path}" ] ; then
ffmprovisr_path='https://amiaopensource.github.io/ffmprovisr/'
fi
open "${ffmprovisr_path}"
elif [[ $OSTYPE = linux-gnu ]] ; then
if [ -d ~/.linuxbrew/Cellar/ffmprovisr ] ; then
ffmprovisr_path=$(find ~/.linuxbrew/Cellar/ffmprovisr -iname 'index.html' | sort -M | tail -n1)
fi
if [ -z "${ffmprovisr_path}" ] ; then
ffmprovisr_path='https://amiaopensource.github.io/ffmprovisr/'
fi
xdg-open "${ffmprovisr_path}"
fi