<p><a href="https://datapraxis.github.io/sourcecaster/" target="_blank">The Sourcecaster</a>: an app that helps you use the command line to work through common challenges that come up when working with digital primary sources.</p>
<p><a href="https://pugetsoundandvision.github.io/micropops/" target="_blank">Micropops</a>: one-liners and automation tools from Moving Image Preservation of Puget Sound.</p>
<p><a href="https://amiaopensource.github.io/cable-bible/" target="_blank">Cable Bible</a>: a guide to cables and connectors used for audiovisual tech.</p>
<p><a href="https://eaasi.gitlab.io/qemu-qed/" target="_blank">QEMU QED</a>: instructions for using QEMU (Quick EMUlator), a command line application for computer emulation and virtualization.</p>
</div>
<div class="well">
<label class="recipe" for="basic-structure">Basic structure of an FFmpeg command</label>
<input type="checkbox" id="basic-structure">
<div class="hiding">
<h5>Basic structure of an FFmpeg command</h5>
<p>At its basis, an FFmpeg command is relatively simple. After you have installed FFmpeg (see instructions <a href="https://avpres.net/FFmpeg/#ch1" target="_blank">here</a>), the program is invoked simply by typing <code>ffmpeg</code> at the command prompt.</p>
<p>Subsequently, each instruction that you supply to FFmpeg is actually a pair: a flag, which designates the <em>type</em> of action you want to carry out; and then the specifics of that action. Flags are always prepended with a hyphen.</p>
<p>For example, in the instruction <code>-i <em>input_file.ext</em></code>, the <code>-i</code> flag tells FFmpeg that you are supplying an input file, and <code>input_file.ext</code> states which file it is.</p>
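<p>Putting these pieces together, a minimal command pairs one input with one output. The following is a sketch with hypothetical filenames; FFmpeg chooses default codecs based on the output extension:</p>

```shell
# Read input_file.ext and write output_file.avi using FFmpeg's
# internal defaults for the .avi container. (Hypothetical filenames.)
ffmpeg -i input_file.ext output_file.avi
```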
<label class="recipe" for="streaming-saving">Streaming vs. Saving</label>
<input type="checkbox" id="streaming-saving">
<div class="hiding">
<h5>Streaming vs. Saving</h5>
<p>FFplay allows you to stream created video and FFmpeg allows you to save video.</p>
<p>The following command creates and saves a 10-second video of SMPTE bars:</p>
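<p>That command is not reproduced above; a sketch consistent with the description follows (the frame size, rate, and lossless FFV1 output codec are assumptions, and output_file is a placeholder):</p>

```shell
# Stream 10 seconds of SMPTE bars to the screen with FFplay:
ffplay -f lavfi smptebars=duration=10:size=640x480:rate=25
# Save the same 10 seconds of bars to a file with FFmpeg:
ffmpeg -f lavfi -i smptebars=duration=10:size=640x480:rate=25 -c:v ffv1 output_file.mkv
```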
<p>Unless specified, FFmpeg will automatically set codec choices and codec parameters based off of internal defaults. These defaults are applied based on the file type used in the output (for example <code>.mov</code> or <code>.wav</code>).</p>
<p>When creating or transcoding files with FFmpeg, it is important to consider codec settings for both audio and video, as the default options may not be desirable in your particular context. The following is a brief list of codec defaults for some common file types:</p>
<p>Many FFmpeg commands use filters that manipulate the video or audio stream in some way: for example, <a href="https://ffmpeg.org/ffmpeg-filters.html#hflip" target="_blank">hflip</a> to horizontally flip a video, or <a href="https://ffmpeg.org/ffmpeg-filters.html#amerge-1" target="_blank">amerge</a> to merge two or more audio tracks into a single stream.</p>
<p>The use of a filter is signaled by the flag <code>-vf</code> (video filter) or <code>-af</code> (audio filter), followed by the name and options of the filter itself. For example, take the <a href="#convert-colorspace">convert colorspace</a> command:</p>
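<p>The colorspace command itself is more involved; as a minimal, hypothetical sketch of the <code>-vf</code> syntax, here is the hflip filter mentioned above:</p>

```shell
# Flip the picture horizontally with the hflip video filter;
# the audio stream is copied through untouched.
ffmpeg -i input_file -vf hflip -c:a copy output_file
```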
<p>Stream mapping is the practice of defining which of the streams (e.g., video or audio tracks) present in an input file will be present in the output file. FFmpeg recognizes five stream types:</p>
<ul>
<li><code>a</code> - audio</li>
<li><code>-map 0:0 -map 0:2</code> means ‘take the first and third streams from the first input file’.</li>
<li><code>-map 0:1 -map 1:0</code> means ‘take the second stream from the first input file and the first stream from the second input file’.</li>
</ul>
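<p>In full (and hypothetical) form, the first example above could look like this, stream-copying the selected streams:</p>

```shell
# Keep only the first and third streams of the input, without re-encoding.
ffmpeg -i input_file -map 0:0 -map 0:2 -c copy output_file.mov
```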
<p>When no mapping is specified in an ffmpeg command, the default for video files is to take just one video and one audio stream for the output: other stream types, such as timecode or subtitles, will not be copied to the output file by default. If multiple video or audio streams are present, the best quality one is automatically selected by FFmpeg.</p>
<p>To map <em>all</em> streams in the input file to the output file, use <code>-map 0</code>. However, note that not all container formats can include all stream types: for example, .mp4 cannot contain timecode.</p>
<h4>Mapping with a failsafe</h4>
<p>To safely process files that may or may not contain a given type of stream, you can add a trailing <code>?</code> to your map commands: for example, <code>-map 0:a?</code> instead of <code>-map 0:a</code>.</p>
<p>This makes the map optional: audio streams will be mapped over if they are present in the file—but if the file contains no audio streams, the transcode will proceed as usual, minus the audio stream mapping. Without the trailing <code>?</code>, FFmpeg will exit with an error on that file.</p>
<p>This is especially recommended when batch processing video files: it ensures that all files in your batch will be transcoded, whether or not they contain audio streams.</p>
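<p>A batch-friendly sketch (filenames are placeholders): video is always mapped, audio only when present.</p>

```shell
# The trailing ? makes the audio map optional,
# so video-only files are processed without error.
ffmpeg -i input_file -map 0:v -map 0:a? -c copy output_file.mkv
```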
<p>For more information, check out the FFmpeg wiki <a href="https://trac.ffmpeg.org/wiki/Map" target="_blank">Map</a> page, and the official FFmpeg <a href="https://ffmpeg.org/ffmpeg.html#Advanced-options" target="_blank">documentation on <code>-map</code></a>.</p>
<p>This script will rewrap a video file. It will create a new video file in which the inner content (the video, audio, and subtitle data) of the original file is unchanged, but these streams are rehoused within a different container format.</p>
<p><strong>Note:</strong> rewrapping is also known as remuxing, short for re-multiplexing.</p>
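<p>A typical rewrap command looks like the following sketch; Matroska is assumed here as the example target container:</p>

```shell
# Copy every stream bit-for-bit into a new container; nothing is re-encoded.
ffmpeg -i input_file.mov -map 0 -c copy output_file.mkv
```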
<label class="recipe" for="rewrap-dv">Rewrap DV video to .dv file</label>
<p>This script will take a video that is encoded in the <a href="https://en.wikipedia.org/wiki/DV" target="_blank">DV Codec</a> but wrapped in a different container (such as MOV) and rewrap it into a raw DV file (with the .dv extension). Since DV files potentially contain a great deal of provenance metadata within the DV stream, it is necessary to rewrap files in this way to avoid unintentionally stripping that metadata.</p>
<dl>
<label class="recipe" for="to_prores">Transcode to deinterlaced Apple ProRes LT</label>
<input type="checkbox" id="to_prores">
<div class="hiding">
<h5>Transcode into a deinterlaced Apple ProRes LT</h5>
<p>This command transcodes an input file into a deinterlaced Apple ProRes 422 LT file with 16-bit linear PCM encoded audio. The file is deinterlaced using the yadif filter (Yet Another De-Interlacing Filter).</p>
<dl>
<label class="recipe" for="transcode_h264">Transcode to an H.264 access file</label>
<p>This command takes an input file and transcodes it to H.264 with an .mp4 wrapper; audio is transcoded to AAC. The libx264 codec defaults to a “medium” preset for compression quality and a CRF of 23. CRF stands for constant rate factor and determines the quality and file size of the resulting H.264 video. A low CRF means high quality and a large file size; a high CRF means the opposite.</p>
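<p>A sketch of such a command (the <code>-pix_fmt yuv420p</code> choice is an assumption, added for wide player compatibility):</p>

```shell
# H.264 video (libx264 defaults: preset medium, CRF 23) with AAC audio.
ffmpeg -i input_file -c:v libx264 -pix_fmt yuv420p -c:a aac output_file.mp4
```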
<dl>
<label class="recipe" for="dcp_to_h264">Transcode from DCP to an H.264 access file</label>
<p>This will transcode MXF wrapped video and audio files to an H.264 encoded MP4 file. Please note this only works for unencrypted, single reel DCPs.</p>
<dl>
<label class="recipe" for="create_FFV1_mkv">Transcode your file with the FFV1 Version 3 Codec in a Matroska container</label>
<input type="checkbox" id="create_FFV1_mkv">
<div class="hiding">
<h5>Create FFV1 Version 3 video in a Matroska container with framemd5 of input</h5>
<p>This will losslessly transcode your video with the FFV1 Version 3 codec in a Matroska container. In order to verify losslessness, a framemd5 of the source video is also generated. For more information on FFV1 encoding, <a href="https://trac.ffmpeg.org/wiki/Encode/FFV1" target="_blank">try the FFmpeg wiki</a>.</p>
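<p>A sketch of such a command (the slice count of 16 is an assumption; filenames are placeholders):</p>

```shell
# Lossless FFV1 version 3 in Matroska, plus a framemd5 manifest
# of the source video frames for later verification.
ffmpeg -i input_file -map 0 -dn -c:v ffv1 -level 3 -g 1 -slicecrc 1 -slices 16 \
  -c:a copy output_file.mkv -f framemd5 -an framemd5_output_file
```

<p><code>-slicecrc 1</code> embeds per-slice CRCs so decode errors can later be localized; <code>-dn</code> omits data streams the target container may not carry.</p>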
<dl>
<label class="recipe" for="dvd_to_file">Convert DVD to H.264</label>
<p>This command allows you to create an H.264 file from a DVD source that is not copy-protected.</p>
<p>Before encoding, you’ll need to establish which of the .VOB files on the DVD or .iso contain the content that you wish to encode. Inside the VIDEO_TS directory, you will see a series of files with names like VTS_01_0.VOB, VTS_01_1.VOB, etc. Some of the .VOB files will contain menus, special features, etc., so locate the ones that contain your target content by playing them back in VLC.</p>
<label class="recipe" for="transcode_h265">Transcode to an H.265/HEVC MP4</label>
<p>This command takes an input file and transcodes it to H.265/HEVC in an .mp4 wrapper, keeping the audio codec the same as in the original file.</p>
<p><strong>Note:</strong> FFmpeg must be compiled with libx265, the library of the H.265 codec, for this script to work. (Add the flag <code>--with-x265</code> if using the <code>brew install ffmpeg</code> method).</p>
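<p>A sketch of such a command (again, <code>-pix_fmt yuv420p</code> is an assumed compatibility choice):</p>

```shell
# H.265/HEVC video; the audio stream is copied unchanged.
ffmpeg -i input_file -c:v libx265 -pix_fmt yuv420p -c:a copy output_file.mp4
```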
<label class="recipe" for="transcode_ogg">Transcode to Ogg Theora</label>
<p>This command takes an input file and transcodes it to Ogg/Theora in an .ogv wrapper with 690k video bitrate.</p>
<p><strong>Note:</strong> FFmpeg must be installed with support for Ogg Theora. If you are using Homebrew, you can check with <code>brew info ffmpeg</code> and then update it with <code>brew upgrade ffmpeg --with-theora --with-libvorbis</code> if necessary.</p>
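<p>A sketch of such a command (filenames are placeholders; the video encoder defaults to libtheora for .ogv output):</p>

```shell
# Theora video at a 690k bitrate with Vorbis audio in an .ogv wrapper.
ffmpeg -i input_file -acodec libvorbis -b:v 690k output_file.ogv
```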
<label class="recipe" for="wav_to_mp3">Convert WAV to MP3</label>
<dt>-i <em>input_file</em></dt><dd>path and name of the input file</dd>
<dt>-write_id3v1 1</dt><dd>This will write metadata to an ID3v1 tag at the end of the file, assuming you’ve embedded metadata into the WAV file.</dd>
<dt>-id3v2_version 3</dt><dd>This will write metadata to an ID3v2.3 tag at the head of the file, assuming you’ve embedded metadata into the WAV file.</dd>
<dt>-dither_method triangular</dt><dd>Dither makes sure you don’t unnecessarily truncate the dynamic range of your audio.</dd>
<dt>-out_sample_rate 48k</dt><dd>Sets the audio sampling frequency to 48 kHz. This can be omitted to use the same sampling frequency as the input.</dd>
<dt>-qscale:a 1</dt><dd>This sets the encoder to use a constant quality with a variable bitrate of between 190–250 kbit/s. If you would prefer to use a constant bitrate, this could be replaced with <code>-b:a 320k</code> to set to the maximum bitrate allowed by the MP3 format. For more detailed discussion on variable vs constant bitrates see <a href="https://trac.ffmpeg.org/wiki/Encode/MP3" target="_blank">here</a>.</dd>
<dt><em>output_file</em></dt><dd>path and name of the output file</dd>
<label class="recipe" for="append_mp3">Generate two access MP3s (with and without copyright)</label>
<input type="checkbox" id="append_mp3">
<div class="hiding">
<h5>Generate two access MP3s from input. One with appended audio (such as a copyright notice) and one unmodified.</h5>
<p>This script allows you to generate two derivative audio files from a master while appending audio from a separate file (for example a copyright or institutional notice) to one of them.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>[b]afifo[bb];</dt><dd>this buffers the stream "b" to help prevent dropped samples and renames stream to "bb"</dd>
<dt>[1:a:0][bb]concat=n=2:v=0:a=1[concatout]</dt><dd><code>concat</code> is used to join files. <code>n=2</code> tells the filter there are two inputs. <code>v=0:a=1</code> Tells the filter there are 0 video outputs and 1 audio output. This command appends the audio from the second input to the beginning of stream "bb" and names the output "concatout"</dd>
<dt>-map "[a]"</dt><dd>this maps the unmodified audio stream to the first output</dd>
<p>A command to slip the video channel by approximately 2 frames (0.125 seconds for a 25 fps timeline) to align video and audio drift, as may be generated during video tape capture, for example.</p>
<h2>Find undetermined or unknown stream properties</h2>
<p>These examples use QuickTime inputs and outputs. The strategy will vary or may not be possible in other file formats. In the case of these examples it is the intention to make a lossless copy while clarifying an unknown characteristic of the stream.</p>
<p>This command crops the input video to the dimensions defined</p>
<dl>
</dl>
<p>It's also possible to specify the crop position by adding the x and y coordinates representing the top left of your cropped area to your crop filter, as such:</p>
<p>This command combines two audio tracks present in a video file into one stream. It can be useful in situations where a downstream process, like YouTube’s automatic captioning, expects one audio track. To ensure that you’re mapping the right audio tracks, run ffprobe before writing the script to identify which tracks are desired. More than two audio streams can be combined by extending the pattern present in the <code>-filter_complex</code> option.</p>
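<p>A sketch of such a command (stream indices and filenames are hypothetical):</p>

```shell
# Merge the first two audio streams into one; the video is copied unchanged.
ffmpeg -i input_file -filter_complex "[0:a:0][0:a:1]amerge[out]" \
  -map 0:v -map "[out]" -c:v copy output_file.mov
```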
<dl>
<label class="recipe" for="phase_shift">Invert the audio phase of the second channel</label>
<p>This filter calculates and outputs loudness information in JSON about an input file (labeled input) as well as what the levels would be if loudnorm were applied in its one pass mode (labeled output). The values generated can be used as inputs for a 'second pass' of the loudnorm filter, allowing more accurate loudness normalization than if it is used in a single pass.</p>
<p>These instructions use the loudnorm defaults, which align well with PBS recommendations for target loudness. More information can be found at the <a href="https://ffmpeg.org/ffmpeg-filters.html#loudnorm" target="_blank">loudnorm documentation</a>.</p>
<p>This will apply RIAA equalization to an input file, allowing correct listening of audio transferred 'flat' (without EQ) from records that used this EQ curve. For more information about RIAA equalization, see the <a href="https://en.wikipedia.org/wiki/RIAA_equalization" target="_blank">Wikipedia page</a> on the subject.</p>
<dl>
<label class="recipe" for="cd_eq">Reverse CD Pre-Emphasis</label>
<p>This will apply de-emphasis to reverse the effects of CD pre-emphasis in the somewhat rare case of CDs that were created with this technology. Use this command to create more accurate listening copies of files that were ripped 'flat' (without any de-emphasis) where the original source utilized emphasis. For more information about CD pre-emphasis see the <a href="https://wiki.hydrogenaud.io/index.php?title=Pre-emphasis" target="_blank">Hydrogen Audio page</a> on this subject.</p>
<p>This will normalize the loudness of an input using one pass, which is quicker but less accurate than using two passes. This command uses the loudnorm filter defaults for target loudness. These defaults align well with PBS recommendations, but loudnorm does allow targeting of specific loudness levels. More information can be found at the <a href="https://ffmpeg.org/ffmpeg-filters.html#loudnorm" target="_blank">loudnorm documentation</a>.</p>
<p>Information about PBS loudness standards can be found in the <a href="https://www-tc.pbs.org/capt/Producing/TOS-2012-Pt2-Distribution.pdf" target="_blank">PBS Technical Operating Specifications</a> document. Information about EBU loudness standards can be found in the <a href="https://tech.ebu.ch/docs/r/r128-2014.pdf" target="_blank">EBU R 128</a> recommendation document.</p>
<p>This command allows using the levels calculated using a <a href="#loudnorm_metadata">first pass of the loudnorm filter</a> to more accurately normalize loudness. This command uses the loudnorm filter defaults for target loudness. These defaults align well with PBS recommendations, but loudnorm does allow targeting of specific loudness levels. More information can be found at the <a href="https://ffmpeg.org/ffmpeg-filters.html#loudnorm" target="_blank">loudnorm documentation</a>.</p>
<p>Information about PBS loudness standards can be found in the <a href="https://www-tc.pbs.org/capt/Producing/TOS-2012-Pt2-Distribution.pdf" target="_blank">PBS Technical Operating Specifications</a> document. Information about EBU loudness standards can be found in the <a href="https://tech.ebu.ch/docs/r/r128-2014.pdf" target="_blank">EBU R 128</a> recommendation document.</p>
<label class="recipe" for="avsync_aresample">Fix A/V sync issues by resampling audio</label>
<p>This command takes two or more files of the same file type and joins them together to make a single file. All that the program needs is a text file with a list specifying the files that should be joined. However, it only works properly if the files to be combined have the exact same codec and technical specifications. Be careful, FFmpeg may appear to have successfully joined two video files with different codecs, but may only bring over the audio from the second file or have other weird behaviors. Don’t use this command for joining files with different codecs and technical specs and always preview your resulting video file!</p>
<dl>
<label class="recipe" for="join_different_files">Join (concatenate) two or more files of different types</label>
<p>This command takes two or more files of different file types and joins them together to make a single file.</p>
<p>The input files may differ in many respects - container, codec, chroma subsampling scheme, framerate, etc. However, the above command only works properly if the files to be combined have the same dimensions (e.g., 720x576). Also note that if the input files have different framerates, then the output file will be of variable framerate.</p>
<p>For example, to ensure that the video stream of the output file is visually lossless H.264 with a 4:2:0 chroma subsampling scheme, the command above could be amended to include the following:<br>
<h4>Variation: concatenating files of different resolutions</h4>
<p>To concatenate files of different resolutions, you need to resize the videos to have matching resolutions prior to concatenation. The most basic way to do this is by using a scale filter and giving the dimensions of the file you wish to match:</p>
<p>This command captures a certain portion of a file, starting from the beginning and continuing for the amount of time (in seconds) specified in the script. This can be used to create a preview file, or to remove unwanted content from the end of the file. To be more specific, use timecode, such as 00:00:05.</p>
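<p>A sketch of such a command (5 seconds is the assumed excerpt length; filenames are placeholders):</p>

```shell
# Copy the first 5 seconds of the input without re-encoding.
ffmpeg -i input_file -t 5 -c copy output_file
```

<p>Note that with <code>-c copy</code> the cut lands on the nearest keyframe; re-encode instead if frame accuracy matters.</p>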
<dl>
<label class="recipe" for="excerpt_to_end">Create a new file with the first five seconds trimmed off the original</label>
<p>This command copies a file starting from a specified time, removing the first few seconds from the output. This can be used to create an excerpt, or remove unwanted content from the beginning of a file.</p>
<dl>
<label class="recipe" for="excerpt_from_end">Create a new file with the final five seconds of the original</label>
<p>This command copies a file starting from a specified time before the end of the file, removing everything before from the output. This can be used to create an excerpt, or extract content from the end of a file (e.g. for extracting the closing credits).</p>
<dl>
<label class="recipe" for="trim_start_silence">Trim silence from beginning of an audio file</label>
<input type="checkbox" id="trim_start_silence">
<div class="hiding">
<h5>Remove silent portion at the beginning of an audio file</h5>
<p>This command will automatically remove silence at the beginning of an audio file. The threshold for what qualifies as silence can be changed - this example uses anything under -57 dB, which is a decent level for accounting for analogue hiss.</p>
<p><strong>Note:</strong> Since this command uses a filter, the audio stream will be re-encoded for the output. If you do not specify a sample rate or codec, this command will use the sample rate from your input and <a href='#codec-defaults'>the codec defaults for your output format</a>. Take care that you are getting your intended results!</p>
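<p>A sketch of such a command, using the -57 dB threshold from the description (the one-second <code>start_duration</code> is an assumption):</p>

```shell
# Remove leading audio quieter than -57 dB from the start of the file.
ffmpeg -i input_file -af silenceremove=start_threshold=-57dB:start_duration=1:start_periods=1 output_file
```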
<label class="recipe" for="trim_end_silence">Trim silence from the end of an audio file</label>
<input type="checkbox" id="trim_end_silence">
<div class="hiding">
<h5>Remove silent portion from the end of an audio file</h5>
<p>This command will automatically remove silence at the end of an audio file. Since the <code>silenceremove</code> filter is best at removing silence from the beginning of files, this command uses the <code>areverse</code> filter twice to reverse the input, remove silence, and then restore the correct orientation.</p>
<p><strong>Note:</strong> Since this command uses a filter, the audio stream will be re-encoded for the output. If you do not specify a sample rate or codec, this command will use the sample rate from your input and <a href='#codec-defaults'>the codec defaults for your output format</a>. Take care that you are getting your intended results!</p>
<label class="recipe" for="ntsc_to_h264">Upscaled, pillar-boxed HD H.264 access files from SD NTSC source</label>
<input type="checkbox" id="ntsc_to_h264">
<div class="hiding">
<h5>Upscaled, Pillar-boxed HD H.264 Access Files from SD NTSC source</h5>
<p>The inverse telecine procedure reverses the <a href="https://en.wikipedia.org/wiki/Three-two_pull_down" target="_blank">3:2 pull down</a> process, restoring 29.97fps interlaced video to the 24fps frame rate of the original film source.</p>
<dl>
<label class="recipe" for="set_field_order">Set field order for interlaced video</label>
<input type="checkbox" id="set_field_order">
<div class="hiding">
<h5>Change field order of an interlaced video</h5>
<p>This command will take an image file (e.g. image.jpg) and an audio file (e.g. audio.mp3) and combine them into a video file that contains the audio track with the image used as the video. It can be useful in a situation where you might want to upload an audio file to a platform like YouTube. You may want to adjust the scaling with -vf to suit your needs.</p>
<p>This filter allows visual analysis of the information held in various bit depths of an audio stream. This can aid with identifying when a file that is nominally of a higher bit depth actually has been 'padded' with null information. The provided GIF shows a 16 bit WAV file (left) and then the results of converting that same WAV to 32 bit (right). Note that in the 32 bit version, there is still only information in the first 16 bits.</p>
<dl>
<label class="recipe" for="astats">Play a graphical output showing decibel levels of an input file</label>
<input type="checkbox" id="astats">
<div class="hiding">
<h5>Plays a graphical output showing decibel levels of an input file</h5>
<label class="recipe" for="xstack">Use xstack to arrange output layout of multiple video sources</label>
<input type="checkbox" id="xstack">
<div class="hiding">
<h5>This filter enables vertical and horizontal stacking of multiple video sources into one output.</h5>
<p>This filter is useful for the creation of output windows such as the one utilized in <a href="https://github.com/amiaopensource/vrecord" target="_blank">vrecord</a>.</p>
<p>The following example uses the 'testsrc' virtual input combined with the <a href="https://ffmpeg.org/ffmpeg-filters.html#split_002c-asplit" target="_blank">split filter</a> to generate the multiple inputs.</p>
<label class="recipe" for="pull_specs">Pull specs from video file</label>
<h5>Create Bash script to batch process with FFmpeg</h5>
<p>Bash scripts are plain text files saved with a .sh extension. This entry explains how they work with the example of a bash script named “Rewrap-MXF.sh”, which rewraps .mxf files in a given directory to .mov files.</p>
<p>“Rewrap-MXF.sh” contains the following text:</p>
<p><code>for file in *.mxf; do ffmpeg -i "$file" -map 0 -c copy "${file%.mxf}.mov"; done</code></p>
<h5>Create PowerShell script to batch process with FFmpeg</h5>
<p>As of Windows 10, it is possible to run Bash via <a href="https://msdn.microsoft.com/en-us/commandline/wsl/about" target="_blank">Bash on Ubuntu on Windows</a>, allowing you to use <a href="#batch_processing_bash">bash scripting</a>. To enable Bash on Windows, see <a href="https://msdn.microsoft.com/en-us/commandline/wsl/install_guide" target="_blank">these instructions</a>.</p>
<p>On Windows, the primary native command line program is <strong>PowerShell</strong>. PowerShell scripts are plain text files saved with a .ps1 extension. This entry explains how they work with the example of a PowerShell script named “rewrap-mp4.ps1”, which rewraps .mp4 files in a given directory to .mkv files.</p>
<p>“rewrap-mp4.ps1” contains the following text:</p>
<dt>$output</dt><dd>The output file is set to the value of the <code>$output</code> variable declared above: i.e., the current file name with an .mkv extension.</dd>
<dt>}</dt><dd>Closes the code block.</dd>
</dl>
<p><strong>Note:</strong> the PowerShell script (.ps1 file) and all .mp4 files to be rewrapped must be contained within the same directory, and the script must be run from that directory.</p>
<p>Execute the .ps1 file by typing <code>.\rewrap-mp4.ps1</code> in PowerShell.</p>
<p>Modify the script as needed to perform different transcodes, or to use with ffprobe. :)</p>
<p>This will create an MD5 checksum for each group of 48000 audio samples.<br>
The number of samples per group can be set arbitrarily, but it's good practice to match the sample rate of the media file (so you will get one checksum per second).</p>
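<p>A sketch of such a command (filenames are placeholders):</p>

```shell
# One MD5 per 48000-sample block of audio: one checksum per second at 48 kHz.
ffmpeg -i input_file -af asetnsamples=n=48000 -f framemd5 -vn output_file
```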
<label class="recipe" for="create_stream_md5s">Create MD5 checksum(s) for A/V stream data only</label>
<p>This will create MD5 checksums for the first video and the first audio stream in a file. If only one of these is necessary (for example if used on a WAV file) either part of the command can be excluded to create the desired MD5 only. Use of this kind of checksum enables integrity of the A/V information to be verified independently of any changes to surrounding metadata.</p>
<dl>
<label class="recipe" for="get_stream_checksum">Get checksum for video/audio stream</label>
<p>This script will perform a fixity check on a specified audio or video stream of the file, useful for checking that the content within a video has not changed even if the container format has changed.</p>
<dl>
</div>
<!-- ends Get checksum for video/audio stream -->
<!-- Get checksum for all video/audio streams -->
<label class="recipe" for="get_streamhash">Get individual checksums for all video/audio streams ("Streamhash")</label>
<input type="checkbox" id="get_streamhash">
<div class="hiding">
<h5>Get individual checksums for all video/audio streams ("Streamhash")</h5>
<p>The outcome is very similar to that of <code>-f hash</code>, except you get one hash per stream, instead of one summary hash. Another benefit is that you don't have to know which streams, or how many, to expect in the source file. This is very handy for hashing mixed born-digital material.</p>
<p>This script will perform a fixity check on all audio and video streams in the file and return one hashcode for each. This is useful if, for example, the container or codec format is changed later on: the stream hashes can be used to validate that the content still matches the original source.</p>
<p>The output is formatted for easy further processing in any kind of programming or scripting language.</p>
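<p>The command the list below explains can be sketched as:</p>

```shell
# One MD5 per stream, written to standard output.
ffmpeg -i input_file -map 0 -f streamhash -hash md5 - -v quiet
```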
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <em>input_file</em></dt><dd>path, name and extension of the input file</dd>
<dt>-map 0</dt><dd>map ALL streams from input file to output. If you omit this, ffmpeg chooses only the first "best" (*) stream: 1 for audio, 1 for video (not all streams).</dd>
<dt>-f streamhash -hash md5</dt><dd>produce a checksum hash per-stream, and set the hash algorithm to md5. See the official <ahref="https://www.ffmpeg.org/ffmpeg-formats.html#streamhash-1"target="_blank">documentation on streamhash</a> for other algorithms and more details.</dd>
<dt>-</dt><dd>FFmpeg syntax requires a specified output, and <code>-</code> is just a placeholder. No file is actually created. Replace it with an output filename to write the hash lines into a text file.</dd>
<dt>-v quiet</dt><dd>(Optional) Disables FFmpeg's processing output. With this option it's easier to see the text output of the hashes.</dd>
</dl>
<p>The output looks like this, for example (1 video, 2 audio streams):
<code>
0,v,MD5=89bed8031048d985b48550b6b4cb171c<br>
0,a,MD5=36daadb543b63610f63f9dcff11680fb<br>
1,a,MD5=f21269116a847f887710cfc67ecc3e6e
</code></p>
<p class="link"></p>
</div>
<!-- ends Get checksum for all video/audio streams -->
<p>This will create an XML report for use in <a href="https://github.com/bavc/qctools" target="_blank">QCTools</a> for a video file with one video track and one audio track. See also the <a href="https://github.com/bavc/qctools/blob/master/docs/data_format.md#creating-a-qctools-document" target="_blank">QCTools documentation</a>.</p>
<dl>
<label class="recipe" for="qctools_no_audio">QCTools report (no audio)</label>
<p>This will create an XML report for use in <a href="https://github.com/bavc/qctools" target="_blank">QCTools</a> for a video file with one video track and NO audio track. See also the <a href="https://github.com/bavc/qctools/blob/master/docs/data_format.md#creating-a-qctools-document" target="_blank">QCTools documentation</a>.</p>
<p>This command uses FFmpeg's <a href="https://ffmpeg.org/ffmpeg-filters.html#readeia608" target="_blank">readeia608</a> filter to extract the hexadecimal values hidden within <a href="https://en.wikipedia.org/wiki/EIA-608" target="_blank">EIA-608 (Line 21)</a> Closed Captioning, outputting a csv file. For more information about EIA-608, check out Adobe's <a href="https://www.adobe.com/content/dam/Adobe/en/devnet/video/pdfs/introduction_to_closed_captions.pdf" target="_blank">Introduction to Closed Captions</a>.</p>
<p>If hex isn't your thing, closed captioning <a href="http://www.theneitherworld.com/mcpoodle/SCC_TOOLS/DOCS/CC_CHARS.HTML" target="_blank">character</a> and <a href="http://www.theneitherworld.com/mcpoodle/SCC_TOOLS/DOCS/CC_CODES.HTML" target="_blank">code</a> sets can be found in the documentation for SCTools.</p>
<label class="recipe" for="mandelbrot">Make a mandelbrot test pattern video</label>
<label class="recipe" for="ocr_on_top">Play video with OCR</label>
<input type="checkbox" id="ocr_on_top">
<div class="hiding">
<h5>Plays video with OCR on top</h5>
<p>Note: FFmpeg must be compiled with the tesseract library for this script to work (<code>--with-tesseract</code> if using the <code>brew install ffmpeg</code> method).</p>
<label class="recipe" for="ffprobe_ocr">Export OCR from video to screen</label>
<input type="checkbox" id="ffprobe_ocr">
<div class="hiding">
<h3>Exports OCR data to screen</h3>
<h5>Exports OCR data to screen</h5>
<p>Note: FFmpeg must be compiled with the tesseract library for this script to work (<code>--with-tesseract</code> if using the <code>brew install ffmpeg</code> method).</p>
<p>This command splits the original input file into separate video and audio streams. The <code>-map</code> flag identifies which streams are mapped to which output file. To ensure that you’re mapping the right streams to the right files, run <code>ffprobe</code> before writing the script to identify which streams are desired.</p>
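A minimal sketch of such a split, with placeholder filenames and FFmpeg's standard stream-selector syntax:

```shell
# -map 0:v:0 selects the first video stream of the first input;
# -map 0:a:0 selects the first audio stream. Filenames are placeholders.
cmd='ffmpeg -i input_file.mov -map 0:v:0 video_only.mov -map 0:a:0 audio_only.wav'
echo "$cmd"   # assembled for inspection; run against a real file
```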
<dl>
@@ -2399,7 +2431,7 @@
<label class="recipe" for="create_iso">Create ISO files for DVD access</label>
<input type="checkbox" id="create_iso">
<div class="hiding">
<h3>Create ISO files for DVD access</h3>
<h5>Create ISO files for DVD access</h5>
<p>Create an ISO file that can be used to burn a DVD. Please note, you will have to install dvdauthor. To install dvdauthor using Homebrew, run: <code>brew install dvdauthor</code></p>
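A sketch of the authoring steps, assuming a DVD-compliant MPEG-2 file already exists (all names here are placeholders, and this is a simplified outline rather than the guide's exact recipe):

```shell
# 1. Lay out a title set from the MPEG-2 file; 2. write the table of contents
# (the VIDEO_TS structures); 3. pack the authored directory into a DVD-Video ISO.
step1='dvdauthor -o dvd_dir -t movie.mpg'
step2='dvdauthor -o dvd_dir -T'
step3='mkisofs -dvd-video -o output.iso dvd_dir'
printf '%s\n' "$step1" "$step2" "$step3"   # assembled for inspection
```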
<h3>View information about a specific decoder, encoder, demuxer, muxer, or filter</h3>
<h5>View information about a specific decoder, encoder, demuxer, muxer, or filter</h5>
<p><code>ffmpeg -h <em>type=name</em></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
@@ -2539,7 +2571,7 @@
<label class="recipe" for="find-offset">Find Drive Offset for Exact CD Ripping</label>
<input type="checkbox" id="find-offset">
<div class="hiding">
<h3>Find Drive Offset for Exact CD Ripping</h3>
<h5>Find Drive Offset for Exact CD Ripping</h5>
<p>If you want to make CD rips that can be verified via checksums against other rips of the same content, you need to know the offset of your CD drive. Put simply, different models of CD drives have different offsets, meaning they start reading in slightly different locations. This must be compensated for in order for files created on different (model) drives to generate the same checksum. For a more detailed explanation of drive offsets, see the explanation <a href="https://dbpoweramp.com/spoons-audio-guide-cd-ripping.htm" target="_blank">here</a>. In order to find your drive offset, first you will need to know exactly what model your drive is; then you can look it up in the list of drive offsets maintained by AccurateRip.</p>
<p>Often it can be difficult to tell what model your drive is simply by looking at it: it may be housed inside your computer, or carry external branding that is different from the actual drive manufacturer. For this reason, it can be useful to query your drive with CD ripping software in order to identify it. The following commands should give you a better idea of what drive you have.</p>
<p><strong>Cdda2wav:</strong> <code>cdda2wav -scanbus</code> or simply <code>cdda2wav</code></p>
@@ -2554,7 +2586,7 @@
<label class="recipe" for="cdparanoia">Rip a CD with CD Paranoia</label>
<p>This command will use CD Paranoia to rip a CD into separate tracks while compensating for the sample offset of the CD drive. (For more information about drive offset, see <a href="#find-offset">the related ffmprovisr command</a>.)</p>
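A sketch of such a rip (the offset value 6 and the filename are placeholders; look up your drive's real offset first):

```shell
# -L writes a log, -B splits output into one file per track, -O applies the
# drive's sample offset, and "1-" means track 1 through the last track.
# The offset 6 and output.wav are placeholder values.
cmd='cdparanoia -L -B -O 6 1- output.wav'
echo "$cmd"   # assembled for inspection; run with a disc in the drive
```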
<dl>
@@ -2573,7 +2605,7 @@
<label class="recipe" for="cdda2wav">Rip a CD with Cdda2wav</label>
<input type="checkbox" id="cdda2wav">
<div class="hiding">
<h3>Rip a CD with Cdda2wav</h3>
<h5>Rip a CD with Cdda2wav</h5>
<p><code>cdda2wav -L0 -t all -cuefile -paranoia paraopts=retries=200,readahead=600,minoverlap=sectors-per-request-1 -verbose-level all <em>output.wav</em></code></p>
<p>Cdda2wav is a tool that uses the <a href="https://www.xiph.org/paranoia/">Paranoia library</a> to facilitate accurate ripping of audio CDs (CDDA). It can be installed via Homebrew with the command <code>brew install cdrtools</code>. This command will accurately rip an audio CD into a single WAV file, while querying the CDDB database for track information and creating a cue sheet. This cue sheet can then be used either for playback of the WAV file or to split it into individual access files. Any <a href="https://en.wikipedia.org/wiki/CD-Text">CD-Text</a> information that is discovered will be stored as a sidecar. For more information about cue sheets, see <a href="https://en.wikipedia.org/wiki/Cue_sheet_(computing)">this Wikipedia article</a>.</p>
<p><strong>Notes: </strong>On macOS the CD must be unmounted before this command is run. This can be done with the command <code>sudo umount '/Volumes/Name_of_CD'</code></p>
@@ -2596,7 +2628,7 @@
<label class="recipe" for="cd-emph-check">Check/Compensate for CD Emphasis</label>
<input type="checkbox" id="cd-emph-check">
<div class="hiding">
<h3>Check/Compensate for CD Emphasis</h3>
<h5>Check/Compensate for CD Emphasis</h5>
<p>While somewhat rare, certain CDs had 'emphasis' applied as a form of noise reduction. This seems to mostly affect early (1980s) era CDs and some CDs pressed in Japan. Emphasis is part of the <a href="https://en.wikipedia.org/wiki/Compact_Disc_Digital_Audio#Standard">Red Book standard</a> and, if present, must be compensated for to ensure accurate playback. CDs that use emphasis contain flags on tracks that tell the CD player to de-emphasize the audio on playback. When ripping a CD with emphasis, it is important to take this into account and either apply de-emphasis while ripping, or if storing a 'flat' copy, create another de-emphasized listening copy.</p>
<p>The following commands will output information about the presence of emphasis when run on a target CD:</p>
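If a disc does turn out to carry emphasis, one way to produce the de-emphasized listening copy from a flat rip is SoX's deemph effect (filenames are placeholders; this is a sketch, not necessarily this guide's exact recipe):

```shell
# Apply standard CD de-emphasis to a flat rip, writing a listening copy.
# "flat_rip.wav" and "deemphasized_copy.wav" are placeholder names.
cmd='sox flat_rip.wav deemphasized_copy.wav deemph'
echo "$cmd"   # assembled for inspection; run against a real rip
```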
<p>ImageMagick is a free and open-source software suite for displaying, converting, and editing raster image and vector image files.</p>
<p>Its official website can be found <a href="https://www.imagemagick.org/script/index.php" target="_blank">here</a>.</p>
<p>Another great resource with lots of supplemental explanations of filters is available at <a href="http://www.fmwconcepts.com/imagemagick/index.php" target="_blank">Fred's ImageMagick Scripts</a>.</p>
@@ -2628,7 +2660,7 @@
<label class="recipe" for="im_compare">Compare two images</label>
<p>The flac tool, created by the FLAC project, transcodes to/from FLAC and manipulates metadata in FLAC files. One advantage it has over other tools used to transcode into FLAC is the capability of embedding foreign metadata (such as BWF metadata). This means that it is possible to compress a BWF file into FLAC and maintain the ability to transcode back into an identical BWF, metadata and all. For a more detailed explanation, see <a href="http://dericed.com/2013/flac-in-the-archives/" target="_blank">Dave Rice's article</a> on the topic, from which the following commands are adapted.</p>
<h3>Transcode to FLAC</h3>
<p>Use this command to transcode from WAV to FLAC while maintaining BWF metadata</p>
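A sketch of what that transcode can look like (the filename is a placeholder; <code>--keep-foreign-metadata</code> is the flag that carries the BWF chunks along):

```shell
# --best = maximum compression, --keep-foreign-metadata = preserve non-audio
# (BWF) chunks so the file can round-trip to an identical WAV,
# --verify = decode-check the result. "input_file.wav" is a placeholder.
cmd='flac --best --keep-foreign-metadata --verify input_file.wav'
echo "$cmd"   # assembled for inspection; run against a real BWF file
```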
Repository of useful FFmpeg command lines for archivists!
* [What is this?](#what-is-this)
* [How do I see it?](#how-do-i-see-it)
* [How do I contribute?](#how-do-i-contribute)
* [Code of conduct](#code-of-conduct)
* [Maintainers](#maintainers)
* [Contributors](#contributors)
* [AVHack Team](#avhack-team)
* [Sister projects](#sister-projects)
* [Articles and mentions](#articles-and-mentions)
* [License](#license)
## What is this?
#### Project Objective
@@ -137,9 +148,18 @@ Last updated: 2019-02-11
## Sister projects
[The Cable Bible](https://amiaopensource.github.io/cable-bible/): A Guide to Cables and Connectors Used for Audiovisual Tech
[QEMU QED](https://eaasi.gitlab.io/qemu-qed): instructions for using QEMU (Quick EMUlator), a command line application for computer emulation and virtualization
[Script Ahoy](http://dd388.github.io/crals/): Community Resource for Archivists and Librarians Scripting
[sourcecaster](https://datapraxis.github.io/sourcecaster/): helps you use the command line to work through common challenges that come up when working with digital primary sources.
## Articles and mentions
* 2019-09: **Andrew Weaver & Ashley Blewer**, [Sustainability through community: ffmprovisr and the Case for Collaborative Knowledge Transfer](https://ipres2019.org/static/pdf/iPres2019_paper_97.pdf) (PDF), iPRES 2019
- Andrew Weaver [won](https://twitter.com/iPRES2019/status/1177136202144768000) iPRES's Best First Time Contribution Award for his work on this paper :)
* 2018-11: ffmprovisr is mentioned in [a job advert](http://web.library.emory.edu/documents/pa_staff_Audiovisual%20Conservator_Nov2018.pdf)!
* 2015-11: **AMIA & DLF Hack Day 2015**, [ffmprovsr](https://wiki.curatecamp.org/index.php/Association_of_Moving_Image_Archivists_&_Digital_Library_Federation_Hack_Day_2015#ffmprovsr) - the genesis of ffmprovisr (then spelled without the 'i')
curl https://amiaopensource.github.io/ffmprovisr/ -s | grep -E '<h3>.*</h3>|<p><code>.*</code></p>' | sed 's/.*<code>\(.*\)<\/code>/\1/' | sed 's/.*<h3>\(.*\)<\/h3>/# \1/' | grep -v '\*\*\*' | sed -e 's/<[^>]*>//g'
curl https://amiaopensource.github.io/ffmprovisr/ -s | grep -E '<h5>.*</h5>|<p><code>.*</code></p>' | sed 's/.*<code>\(.*\)<\/code>/\1/' | sed 's/.*<h5>\(.*\)<\/h5>/# \1/' | grep -v '\*\*\*' | sed -e 's/<[^>]*>//g'
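The pipeline's behavior can be seen on a canned two-line HTML fragment instead of the live page: headings become `# ...` comment lines, and the contents of `<code>` elements pass through with surrounding tags stripped.

```shell
# Same grep/sed chain as above, fed a small sample instead of curl output.
printf '%s\n' \
  '<h5>Rewrap a file</h5>' \
  '<p><code>ffmpeg -i input_file.mov -c copy output_file.mkv</code></p>' |
  grep -E '<h5>.*</h5>|<p><code>.*</code></p>' |
  sed 's/.*<code>\(.*\)<\/code>/\1/' |
  sed 's/.*<h5>\(.*\)<\/h5>/# \1/' |
  grep -v '\*\*\*' |
  sed -e 's/<[^>]*>//g'
```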