mirror of https://github.com/amiaopensource/ffmprovisr.git
synced 2024-12-25 11:18:20 +01:00

Reorder several sections of the page

This commit is contained in:
parent 24b8c5ac35
commit 1ac02df52a

432 index.html
@@ -30,13 +30,13 @@
<a href="#properties"><div class="contents-list">Change video properties</div></a>
<a href="#join-trim"><div class="contents-list">Join/trim/create an excerpt</div></a>
<a href="#interlacing"><div class="contents-list">Work with interlaced video</div></a>
<a href="#filters-scopes"><div class="contents-list">Use filters or scopes</div></a>
<a href="#metadata"><div class="contents-list">View or strip metadata</div></a>
<a href="#overlay"><div class="contents-list">Overlay timecode or text on a video</div></a>
<a href="#create-thumbnails"><div class="contents-list">Generate image files from a video</div></a>
<a href="#animated-gif"><div class="contents-list">Generate an animated GIF</div></a>
<a href="#create-video"><div class="contents-list">Create a video from image(s) and audio</div></a>
<a href="#overlay"><div class="contents-list">Overlay timecode or text on a video</div></a>
<a href="#filters-scopes"><div class="contents-list">Use filters or scopes</div></a>
<a href="#normalise-audio"><div class="contents-list">Normalize/equalize audio</div></a>
<a href="#metadata"><div class="contents-list">View or strip metadata</div></a>
<a href="#preservation"><div class="contents-list">Preservation tasks</div></a>
<a href="#test-files"><div class="contents-list">Generate test files</div></a>
<a href="#repair"><div class="contents-list">Repair a file</div></a>
@@ -851,171 +851,74 @@ e.g.: <code>ffmpeg -f concat -safe 0 -i mylist.txt -c copy <i>output_file</i></code>

</div>
<div class="well">
<h2 id="filters-scopes">Use filters or scopes</h2>
<h2 id="overlay">Overlay timecode or text</h2>

<!-- abitscope -->
<span data-toggle="collapse" data-target="#abitscope"><button type="button" class="btn" data-toggle="tooltip" data-placement="bottom" title="Audio Bitscope">Audio Bitscope</button></span>
<div id="abitscope" class="collapse">
<h3>Creates a visualization of the bits in an audio stream</h3>
<!-- Text Watermark -->
<span data-toggle="collapse" data-target="#text_watermark"><button type="button" class="btn" data-toggle="tooltip" data-placement="bottom" title="Create opaque centered text watermark ">Text Watermark</button></span>
<div id="text_watermark" class="collapse">
<h3>Create centered, transparent text watermark</h3>
<p class="link"></p>
<p><code>ffplay -f lavfi "amovie=<i>input_file</i>, asplit=2[out1][a], [a]abitscope=colors=purple|yellow[out0]"</code></p>
<p>This filter allows visual analysis of the information held in various bit depths of an audio stream. This can aid in identifying when a file that is nominally of a higher bit depth has actually been 'padded' with null information. The provided GIF shows a 16-bit WAV file (left) and the result of converting that same WAV to 32 bit (right). Note that in the 32-bit version, there is still only information in the first 16 bits.</p>
<dl>
<dt>ffplay -f lavfi</dt><dd>starts the command and tells ffplay that you will be using the lavfi virtual device to create the input</dd>
<dt>"</dt><dd>quotation mark to start the lavfi filtergraph</dd>
<dt>amovie=<i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>asplit=2[out1][a]</dt><dd>splits the audio stream in two. One of these [a] will be passed to the filter, and the other [out1] will be the audible stream.</dd>
<dt>[a]abitscope=colors=purple|yellow[out0]</dt><dd>sends stream [a] into the abitscope filter, sets the colors for the channels to purple and yellow, and outputs the results to [out0]. This will be the visualization.</dd>
<dt>"</dt><dd>quotation mark to end the lavfi filtergraph</dd>
</dl>
<div class="sample-image">
<h2>Comparison of mono 16 bit and mono 16 bit padded to 32 bit.</h2>
<img src="img/16_32_abitscope.gif" alt="bit_scope_comparison">
</div>
</div>
<!-- ends abitscope -->
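If you want to keep the visualization rather than just watch it, the same filter can be run through ffmpeg and written to a file. A minimal sketch, assuming a recent FFmpeg build; the lavfi sine tone and all file names here are stand-ins for your own material:

```shell
# Generate a one-second sine tone as a stand-in for a real audio file.
ffmpeg -y -f lavfi -i "sine=frequency=440:duration=1" input.wav
# Run the same abitscope filter, but write the resulting video to a file
# (rawvideo in a NUT container, so no external encoder is required).
ffmpeg -y -i input.wav -filter_complex "abitscope=colors=purple|yellow" -c:v rawvideo -t 1 abitscope.nut
```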

<!-- astats -->
<span data-toggle="collapse" data-target="#astats"><button type="button" class="btn" data-toggle="tooltip" data-placement="bottom" title="Play a graphical output showing decibel levels of an input file">Graphic for audio</button></span>
<div id="astats" class="collapse">
<h3>Plays a graphical output showing decibel levels of an input file</h3>
<p class="link"></p>
<p><code>ffplay -f lavfi "amovie='input.mp3', astats=metadata=1:reset=1, adrawgraph=lavfi.astats.Overall.Peak_level:max=0:min=-30.0:size=700x256:bg=Black[out]"</code></p>
<dl>
<dt>ffplay</dt><dd>starts the command</dd>
<dt>-f lavfi</dt><dd>tells ffplay to use the <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">Libavfilter input virtual device</a></dd>
<dt>"</dt><dd>quotation mark to start the lavfi filtergraph</dd>
<dt>amovie='<i>input.mp3</i>'</dt><dd>declares the audio source file on which to apply the filter</dd>
<dt>,</dt><dd>comma signifies the end of the audio source section and the beginning of the filter section</dd>
<dt>astats=metadata=1</dt><dd>tells the astats filter to output metadata that can be passed to another filter (in this case adrawgraph)</dd>
<dt>:</dt><dd>divides between options of the same filter</dd>
<dt>reset=1</dt><dd>tells the filter to calculate the stats on every frame (increasing this number would calculate stats for groups of frames)</dd>
<dt>,</dt><dd>comma divides one filter in the chain from another</dd>
<dt>adrawgraph=lavfi.astats.Overall.Peak_level:max=0:min=-30.0</dt><dd>draws a graph using the overall peak volume calculated by the astats filter. It sets the max for the graph to 0 (dB) and the minimum to -30 (dB). For more options on data points that can be graphed see the <a href="https://ffmpeg.org/ffmpeg-filters.html#astats-1" target="_blank">FFmpeg astats documentation</a></dd>
<dt>size=700x256:bg=Black</dt><dd>sets the size and background color of the output</dd>
<dt>[out]</dt><dd>ends the filterchain and sets the output</dd>
<dt>"</dt><dd>quotation mark to end the lavfi filtergraph</dd>
</dl>
<div class="sample-image">
<h2>Example of filter output</h2>
<img src="img/astats_levels.gif" alt="astats example">
</div>
</div>
<!-- ends astats -->
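If a live graph is more than you need, astats can also print its summary statistics to the terminal. A hedged sketch (the lavfi sine tone stands in for <code>input.mp3</code>):

```shell
# Decode the audio through the astats filter and discard the decoded output;
# astats prints per-channel and overall statistics (peak level, RMS level,
# bit depth, etc.) to stderr when processing ends.
ffmpeg -f lavfi -i "sine=frequency=440:duration=1" -af astats -f null - 2> astats.log
grep "RMS level" astats.log
```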

<!-- BRNG -->
<span data-toggle="collapse" data-target="#brng"><button type="button" class="btn" data-toggle="tooltip" data-placement="bottom" title="Identify pixels out of broadcast range">Broadcast Range</button></span>
<div id="brng" class="collapse">
<h3>Shows all pixels outside of broadcast range</h3>
<p class="link"></p>
<p><code>ffplay -f lavfi "movie='<i>input.mp4</i>', signalstats=out=brng:color=cyan[out]"</code></p>
<dl>
<dt>ffplay</dt><dd>starts the command</dd>
<dt>-f lavfi</dt><dd>tells ffplay to use the <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">Libavfilter input virtual device</a></dd>
<dt>"</dt><dd>quotation mark to start the lavfi filtergraph</dd>
<dt>movie='<i>input.mp4</i>'</dt><dd>declares the video source file on which to apply the filter</dd>
<dt>,</dt><dd>comma signifies the end of the video source section and the beginning of the filter section</dd>
<dt>signalstats=out=brng</dt><dd>calls the signalstats filter and tells it to highlight pixels outside broadcast range (brng)</dd>
<dt>:</dt><dd>indicates there’s another parameter coming</dd>
<dt>color=cyan[out]</dt><dd>sets the color of out-of-range pixels to cyan</dd>
<dt>"</dt><dd>quotation mark to end the lavfi filtergraph</dd>
</dl>
<div class="sample-image">
<h2>Example of filter output</h2>
<img src="./img/outside_broadcast_range.gif" alt="BRNG example">
</div>
</div>
<!-- ends BRNG -->
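For batch work you may not want to watch the highlighted playback at all: signalstats also exposes a per-frame BRNG value (the proportion of out-of-range pixels) that ffprobe can report. A sketch, assuming a recent FFmpeg/ffprobe; the generated test pattern stands in for <code>input.mp4</code>:

```shell
# Create a one-second test clip as a stand-in input.
ffmpeg -y -f lavfi -i "testsrc=duration=1:size=320x240:rate=10" -c:v rawvideo input.nut
# Report the per-frame broadcast-range statistic as CSV.
ffprobe -v error -f lavfi -i "movie=input.nut,signalstats" \
  -show_entries frame_tags=lavfi.signalstats.BRNG -of csv > brng.csv
```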

<!-- Vectorscope -->
<span data-toggle="collapse" data-target="#vectorscope"><button type="button" class="btn" data-toggle="tooltip" data-placement="bottom" title="Vectorscope from video to screen">Vectorscope</button></span>
<div id="vectorscope" class="collapse">
<h3>Plays vectorscope of video</h3>
<p class="link"></p>
<p><code>ffplay <i>input_file</i> -vf "split=2[m][v], [v]vectorscope=b=0.7:m=color3:g=green[v], [m][v]overlay=x=W-w:y=H-h"</code></p>
<dl>
<dt>ffplay</dt><dd>starts the command</dd>
<dt><i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-vf</dt><dd>creates a filtergraph to use for the streams</dd>
<dt>"</dt><dd>quotation mark to start filtergraph</dd>
<dt>split=2[m][v]</dt><dd>Splits the input into two identical outputs and names them [m] and [v]</dd>
<dt>,</dt><dd>comma signifies there is another parameter coming</dd>
<dt>[v]vectorscope=b=0.7:m=color3:g=green[v]</dt><dd>asserts usage of the vectorscope filter and sets a light background opacity (b, alias for bgopacity), sets a background color style (m, alias for mode), and graticule color (g, alias for graticule)</dd>
<dt>,</dt><dd>comma signifies there is another parameter coming</dd>
<dt>[m][v]overlay=x=W-w:y=H-h</dt><dd>declares where the vectorscope will overlay on top of the video image as it plays</dd>
<dt>"</dt><dd>quotation mark to end filtergraph</dd>
</dl>
</div>
<!-- ends Vectorscope -->

<!--Side by Side Videos/Temporal Difference Filter-->
<span data-toggle="collapse" data-target="#tempdif"><button type="button" class="btn" data-toggle="tooltip" data-placement="bottom" title="Play two videos side by side while applying the temporal difference filter to both">Side by Side Videos/Temporal Difference Filter</button></span>
<div id="tempdif" class="collapse">
<h3>This will play two input videos side by side while also applying the temporal difference filter to them</h3>
<p class="link"></p>
<p><code>ffmpeg -i input01 -i input02 -filter_complex "[0:v:0]tblend=all_mode=difference128[a];[1:v:0]tblend=all_mode=difference128[b];[a][b]hstack[out]" -map [out] -f nut -c:v rawvideo - | ffplay -</code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input01</i> -i <i>input02</i></dt><dd>Designates the files to use for inputs one and two respectively</dd>
<dt>-filter_complex</dt><dd>Lets FFmpeg know we will be using a complex filter (this must be used for multiple inputs)</dd>
<dt>"</dt><dd>quotation mark to start filtergraph</dd>
<dt>[0:v:0]tblend=all_mode=difference128[a]</dt><dd>Applies the tblend filter (with the settings all_mode and difference128) to the first video stream from the first input and assigns the result to the output [a]</dd>
<dt>[1:v:0]tblend=all_mode=difference128[b]</dt><dd>Applies the tblend filter (with the settings all_mode and difference128) to the first video stream from the second input and assigns the result to the output [b]</dd>
<dt>[a][b]hstack[out]</dt><dd>Takes the outputs from the previous steps ([a] and [b]) and uses the hstack (horizontal stack) filter on them to create the side-by-side output. This output is then named [out]</dd>
<dt>"</dt><dd>quotation mark to end filtergraph</dd>
<dt>-map [out]</dt><dd>Maps the output of the filter chain</dd>
<dt>-f nut</dt><dd>Sets the format for the output video stream to <a href="https://www.ffmpeg.org/ffmpeg-formats.html#nut" target="_blank">Nut</a></dd>
<dt>-c:v rawvideo</dt><dd>Sets the video codec of the output video stream to raw video</dd>
<dt>-</dt><dd>tells FFmpeg that the output will be piped to a new command (as opposed to a file)</dd>
<dt>|</dt><dd>Tells the system you will be piping the output of the previous command into a new command</dd>
<dt>ffplay -</dt><dd>Starts ffplay and tells it to use the pipe from the previous command as its input</dd>
</dl>
<div class="sample-image">
<h2>Example of filter output</h2>
<img src="img/tempdif.gif" alt="tempdif example">
</div>
</div>
<!-- ends Side by Side Videos/Temporal Difference Filter -->
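To keep the comparison instead of piping it straight into ffplay, map the stacked output into a file. Note that newer FFmpeg releases renamed the <code>difference128</code> blend mode to <code>grainextract</code>; this sketch assumes such a build, and the two lavfi test clips stand in for input01/input02:

```shell
# Two one-second stand-in clips.
ffmpeg -y -f lavfi -i "testsrc=duration=1:size=320x240:rate=10" -c:v rawvideo in1.nut
ffmpeg -y -f lavfi -i "smptebars=duration=1:size=320x240:rate=10" -c:v rawvideo in2.nut
# Same filtergraph as above, written losslessly to a Matroska file
# (grainextract is the current name of the difference128 mode).
ffmpeg -y -i in1.nut -i in2.nut \
  -filter_complex "[0:v:0]tblend=all_mode=grainextract[a];[1:v:0]tblend=all_mode=grainextract[b];[a][b]hstack[out]" \
  -map "[out]" -c:v ffv1 sidebyside.mkv
```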

</div>
<div class="well">
<h2 id="metadata">View or strip metadata</h2>

<!-- Pull specs -->
<span data-toggle="collapse" data-target="#pull_specs"><button type="button" class="btn" data-toggle="tooltip" data-placement="bottom" title="Pull specs from video file">Pull specs</button></span>
<div id="pull_specs" class="collapse">
<h3>Pull specs from video file</h3>
<p class="link"></p>
<p><code>ffprobe -i <i>input_file</i> -show_format -show_streams -show_data -print_format xml</code></p>
<p>This command extracts technical metadata from a video file and displays it as XML.</p>
<dl>
<dt>ffprobe</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-show_format</dt><dd>outputs file container information</dd>
<dt>-show_streams</dt><dd>outputs audio and video codec information</dd>
<dt>-show_data</dt><dd>adds a short “hexdump” to show_streams command output</dd>
<dt>-print_format</dt><dd>Set the output printing format (in this example “xml”; other formats include “json” and “flat”)</dd>
</dl>
<p>See also the <a href="https://www.ffmpeg.org/ffprobe.html" target="_blank">FFmpeg documentation on ffprobe</a> for a full list of flags, commands, and options.</p>
</div>
<!-- ends Pull specs -->
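The same report is easy to save for later comparison; a sketch using JSON output redirected to a file (the lavfi test clip stands in for a real input):

```shell
# Stand-in input file.
ffmpeg -y -f lavfi -i "testsrc=duration=1:size=320x240:rate=10" -c:v rawvideo input.nut
# Save container and stream metadata as JSON.
ffprobe -v error -i input.nut -show_format -show_streams -print_format json > specs.json
```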

<!-- Strip metadata -->
<span data-toggle="collapse" data-target="#strip_metadata"><button type="button" class="btn" data-toggle="tooltip" data-placement="bottom" title="Strip metadata">Strip metadata</button></span>
<div id="strip_metadata" class="collapse">
<h3>Strips metadata from video file</h3>
<p class="link"></p>
<p><code>ffmpeg -i <i>input_file</i> -map_metadata -1 -c:v copy -c:a copy <i>output_file</i></code></p>
<p>E.g. for creating access copies with your institution's name</p>
<p><code>ffmpeg -i <i>input_file</i> -vf drawtext="fontfile=<i>font_path</i>:fontsize=<i>font_size</i>:text=<i>watermark_text</i>:fontcolor=<i>font_colour</i>:alpha=0.4:x=(w-text_w)/2:y=(h-text_h)/2" <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-map_metadata -1</dt><dd>sets metadata copying to -1, which copies nothing</dd>
<dt>-c:v copy</dt><dd>copies video track</dd>
<dt>-c:a copy</dt><dd>copies audio track</dd>
<dt><i>output_file</i></dt><dd>makes a copy of the original file and names the output file</dd>
<dt>-vf drawtext=</dt><dd>This calls the drawtext filter with the following options:
<dl>
<dt>fontfile=<i>font_path</i></dt><dd> Set path to font. For example in macOS: <code>fontfile=/Library/Fonts/AppleGothic.ttf</code></dd>
<dt>fontsize=<i>font_size</i></dt><dd> Set font size. <code>35</code> is a good starting point for SD. Ideally this value is proportional to video size, for example use ffprobe to acquire video height and divide by 14.</dd>
<dt>text=<i>watermark_text</i> </dt><dd> Set the content of your watermark text. For example: <code>text='FFMPROVISR EXAMPLE TEXT'</code></dd>
<dt>fontcolor=<i>font_colour</i> </dt><dd> Set colour of font. Can be a text string such as <code>fontcolor=white</code> or a hexadecimal value such as <code>fontcolor=0xFFFFFF</code></dd>
<dt>alpha=0.4</dt><dd> Set transparency value.</dd>
<dt>x=(w-text_w)/2:y=(h-text_h)/2</dt><dd> Sets <i>x</i> and <i>y</i> coordinates for the watermark. These relative values will centre your watermark regardless of video dimensions.</dd>
</dl>
Note: <code>-vf</code> is a shortcut for <code>-filter:v</code>.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file.</dd>
</dl>
</div>
<!-- ends Strip metadata -->
<!-- ends Text watermark -->
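A quick way to confirm the stripping worked is to write a known tag into a test file, strip it, and check with ffprobe that the tag is gone. A sketch, under the assumption that FFV1 in Matroska is available in your build:

```shell
# Create a test file carrying a title tag.
ffmpeg -y -f lavfi -i "testsrc=duration=1:size=320x240:rate=10" -c:v ffv1 -metadata title="demo" tagged.mkv
# Strip all metadata while stream-copying.
ffmpeg -y -i tagged.mkv -map_metadata -1 -c:v copy stripped.mkv
# The first probe reports the title; the second should report nothing.
ffprobe -v error -show_entries format_tags=title -of default=nw=1 tagged.mkv
ffprobe -v error -show_entries format_tags=title -of default=nw=1 stripped.mkv
```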

<!-- Transparent Image Watermark -->
<span data-toggle="collapse" data-target="#image_watermark"><button type="button" class="btn" data-toggle="tooltip" data-placement="bottom" title="Overlay image watermark">Overlay image watermark on video</button></span>
<div id="image_watermark" class="collapse">
<h3>Overlay image watermark on video</h3>
<p class="link"></p>
<p><code>ffmpeg -i <i>input_video_file</i> -i <i>input_image_file</i> -filter_complex overlay=main_w-overlay_w-5:5 <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_video_file</i></dt><dd>path, name and extension of the input video file</dd>
<dt>-i <i>input_image_file</i></dt><dd>path, name and extension of the image file</dd>
<dt>-filter_complex overlay=main_w-overlay_w-5:5</dt><dd>This calls the overlay filter and sets x and y coordinates for the position of the watermark on the video. Instead of hardcoding specific x and y coordinates, <code>main_w-overlay_w-5:5</code> uses relative coordinates to place the watermark in the upper right hand corner, based on the width of your input files. Please see the <a href="https://www.ffmpeg.org/ffmpeg-all.html#toc-Examples-102" target="_blank">FFmpeg documentation for more examples.</a></dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
</div>
<!-- ends Image Watermark -->

<!-- Burn in timecode-->
<span data-toggle="collapse" data-target="#burn_in_timecode"><button type="button" class="btn" data-toggle="tooltip" data-placement="bottom" title="Burn in timecode ">Burn in timecode</button></span>
<div id="burn_in_timecode" class="collapse">
<h3>Create a burnt-in timecode on your video</h3>
<p class="link"></p>
<p><code>ffmpeg -i <i>input_file</i> -vf drawtext="fontfile=<i>font_path</i>:fontsize=<i>font_size</i>:timecode=<i>starting_timecode</i>:fontcolor=<i>font_colour</i>:box=1:boxcolor=<i>box_colour</i>:rate=<i>timecode_rate</i>:x=(w-text_w)/2:y=h/1.2" <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-vf drawtext=</dt><dd>This calls the drawtext filter with the following options:
<dt>"</dt><dd>quotation mark to start drawtext filter command</dd>
<dt>fontfile=<i>font_path</i></dt><dd> Set path to font. For example in macOS: <code>fontfile=/Library/Fonts/AppleGothic.ttf</code></dd>
<dt>fontsize=<i>font_size</i></dt><dd> Set font size. <code>35</code> is a good starting point for SD. Ideally this value is proportional to video size, for example use ffprobe to acquire video height and divide by 14.</dd>
<dt>timecode=<i>starting_timecode</i> </dt><dd> Set the timecode to be displayed for the first frame. Timecode is to be represented as <code>hh:mm:ss[:;.]ff</code>. Colon escaping is determined by the OS; for example, in Ubuntu <code>timecode='09\\:50\\:01\\:23'</code>. Ideally, this value would be generated from the file itself using ffprobe.</dd>
<dt>fontcolor=<i>font_colour</i> </dt><dd> Set colour of font. Can be a text string such as <code>fontcolor=white</code> or a hexadecimal value such as <code>fontcolor=0xFFFFFF</code></dd>
<dt>box=1</dt><dd> Enable box around timecode</dd>
<dt>boxcolor=<i>box_colour</i></dt><dd> Set colour of box. Can be a text string such as <code>fontcolor=black</code> or a hexadecimal value such as <code>fontcolor=0x000000</code></dd>
<dt>rate=<i>timecode_rate</i></dt><dd> Framerate of video. For example <code>25/1</code></dd>
<dt>x=(w-text_w)/2:y=h/1.2</dt><dd> Sets <i>x</i> and <i>y</i> coordinates for the timecode. These relative values will horizontally centre your timecode in the bottom third regardless of video dimensions.</dd>
<dt>"</dt><dd>quotation mark to end drawtext filter command</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file.</dd>
</dl>
<p>Note: <code>-vf</code> is a shortcut for <code>-filter:v</code>.</p>
</div>
<!-- ends Burn in timecode -->

</div>
<div class="well">
@@ -1156,74 +1059,131 @@ e.g.: <code>ffmpeg -f concat -safe 0 -i mylist.txt -c copy <i>output_file</i></code>

</div>
<div class="well">
<h2 id="overlay">Overlay timecode or text</h2>
<h2 id="filters-scopes">Use filters or scopes</h2>

<!-- Text Watermark -->
<span data-toggle="collapse" data-target="#text_watermark"><button type="button" class="btn" data-toggle="tooltip" data-placement="bottom" title="Create opaque centered text watermark ">Text Watermark</button></span>
<div id="text_watermark" class="collapse">
<h3>Create centered, transparent text watermark</h3>
<!-- abitscope -->
<span data-toggle="collapse" data-target="#abitscope"><button type="button" class="btn" data-toggle="tooltip" data-placement="bottom" title="Audio Bitscope">Audio Bitscope</button></span>
<div id="abitscope" class="collapse">
<h3>Creates a visualization of the bits in an audio stream</h3>
<p class="link"></p>
<p>E.g. for creating access copies with your institution's name</p>
<p><code>ffmpeg -i <i>input_file</i> -vf drawtext="fontfile=<i>font_path</i>:fontsize=<i>font_size</i>:text=<i>watermark_text</i>:fontcolor=<i>font_colour</i>:alpha=0.4:x=(w-text_w)/2:y=(h-text_h)/2" <i>output_file</i></code></p>
<p><code>ffplay -f lavfi "amovie=<i>input_file</i>, asplit=2[out1][a], [a]abitscope=colors=purple|yellow[out0]"</code></p>
<p>This filter allows visual analysis of the information held in various bit depths of an audio stream. This can aid in identifying when a file that is nominally of a higher bit depth has actually been 'padded' with null information. The provided GIF shows a 16-bit WAV file (left) and the result of converting that same WAV to 32 bit (right). Note that in the 32-bit version, there is still only information in the first 16 bits.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-vf drawtext=</dt><dd>This calls the drawtext filter with the following options:
<dl>
<dt>fontfile=<i>font_path</i></dt><dd> Set path to font. For example in macOS: <code>fontfile=/Library/Fonts/AppleGothic.ttf</code></dd>
<dt>fontsize=<i>font_size</i></dt><dd> Set font size. <code>35</code> is a good starting point for SD. Ideally this value is proportional to video size, for example use ffprobe to acquire video height and divide by 14.</dd>
<dt>text=<i>watermark_text</i> </dt><dd> Set the content of your watermark text. For example: <code>text='FFMPROVISR EXAMPLE TEXT'</code></dd>
<dt>fontcolor=<i>font_colour</i> </dt><dd> Set colour of font. Can be a text string such as <code>fontcolor=white</code> or a hexadecimal value such as <code>fontcolor=0xFFFFFF</code></dd>
<dt>alpha=0.4</dt><dd> Set transparency value.</dd>
<dt>x=(w-text_w)/2:y=(h-text_h)/2</dt><dd> Sets <i>x</i> and <i>y</i> coordinates for the watermark. These relative values will centre your watermark regardless of video dimensions.</dd>
<dt>ffplay -f lavfi</dt><dd>starts the command and tells ffplay that you will be using the lavfi virtual device to create the input</dd>
<dt>"</dt><dd>quotation mark to start the lavfi filtergraph</dd>
<dt>amovie=<i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>asplit=2[out1][a]</dt><dd>splits the audio stream in two. One of these [a] will be passed to the filter, and the other [out1] will be the audible stream.</dd>
<dt>[a]abitscope=colors=purple|yellow[out0]</dt><dd>sends stream [a] into the abitscope filter, sets the colors for the channels to purple and yellow, and outputs the results to [out0]. This will be the visualization.</dd>
<dt>"</dt><dd>quotation mark to end the lavfi filtergraph</dd>
</dl>
Note: <code>-vf</code> is a shortcut for <code>-filter:v</code>.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file.</dd>
<div class="sample-image">
<h2>Comparison of mono 16 bit and mono 16 bit padded to 32 bit.</h2>
<img src="img/16_32_abitscope.gif" alt="bit_scope_comparison">
</div>
</div>
<!-- ends abitscope -->

<!-- astats -->
<span data-toggle="collapse" data-target="#astats"><button type="button" class="btn" data-toggle="tooltip" data-placement="bottom" title="Play a graphical output showing decibel levels of an input file">Graphic for audio</button></span>
<div id="astats" class="collapse">
<h3>Plays a graphical output showing decibel levels of an input file</h3>
<p class="link"></p>
<p><code>ffplay -f lavfi "amovie='input.mp3', astats=metadata=1:reset=1, adrawgraph=lavfi.astats.Overall.Peak_level:max=0:min=-30.0:size=700x256:bg=Black[out]"</code></p>
<dl>
<dt>ffplay</dt><dd>starts the command</dd>
<dt>-f lavfi</dt><dd>tells ffplay to use the <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">Libavfilter input virtual device</a></dd>
<dt>"</dt><dd>quotation mark to start the lavfi filtergraph</dd>
<dt>amovie='<i>input.mp3</i>'</dt><dd>declares the audio source file on which to apply the filter</dd>
<dt>,</dt><dd>comma signifies the end of the audio source section and the beginning of the filter section</dd>
<dt>astats=metadata=1</dt><dd>tells the astats filter to output metadata that can be passed to another filter (in this case adrawgraph)</dd>
<dt>:</dt><dd>divides between options of the same filter</dd>
<dt>reset=1</dt><dd>tells the filter to calculate the stats on every frame (increasing this number would calculate stats for groups of frames)</dd>
<dt>,</dt><dd>comma divides one filter in the chain from another</dd>
<dt>adrawgraph=lavfi.astats.Overall.Peak_level:max=0:min=-30.0</dt><dd>draws a graph using the overall peak volume calculated by the astats filter. It sets the max for the graph to 0 (dB) and the minimum to -30 (dB). For more options on data points that can be graphed see the <a href="https://ffmpeg.org/ffmpeg-filters.html#astats-1" target="_blank">FFmpeg astats documentation</a></dd>
<dt>size=700x256:bg=Black</dt><dd>sets the size and background color of the output</dd>
<dt>[out]</dt><dd>ends the filterchain and sets the output</dd>
<dt>"</dt><dd>quotation mark to end the lavfi filtergraph</dd>
</dl>
<div class="sample-image">
<h2>Example of filter output</h2>
<img src="img/astats_levels.gif" alt="astats example">
</div>
</div>
<!-- ends astats -->

<!-- BRNG -->
<span data-toggle="collapse" data-target="#brng"><button type="button" class="btn" data-toggle="tooltip" data-placement="bottom" title="Identify pixels out of broadcast range">Broadcast Range</button></span>
<div id="brng" class="collapse">
<h3>Shows all pixels outside of broadcast range</h3>
<p class="link"></p>
<p><code>ffplay -f lavfi "movie='<i>input.mp4</i>', signalstats=out=brng:color=cyan[out]"</code></p>
<dl>
<dt>ffplay</dt><dd>starts the command</dd>
<dt>-f lavfi</dt><dd>tells ffplay to use the <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">Libavfilter input virtual device</a></dd>
<dt>"</dt><dd>quotation mark to start the lavfi filtergraph</dd>
<dt>movie='<i>input.mp4</i>'</dt><dd>declares the video source file on which to apply the filter</dd>
<dt>,</dt><dd>comma signifies the end of the video source section and the beginning of the filter section</dd>
<dt>signalstats=out=brng</dt><dd>calls the signalstats filter and tells it to highlight pixels outside broadcast range (brng)</dd>
<dt>:</dt><dd>indicates there’s another parameter coming</dd>
<dt>color=cyan[out]</dt><dd>sets the color of out-of-range pixels to cyan</dd>
<dt>"</dt><dd>quotation mark to end the lavfi filtergraph</dd>
</dl>
<div class="sample-image">
<h2>Example of filter output</h2>
<img src="./img/outside_broadcast_range.gif" alt="BRNG example">
</div>
</div>
<!-- ends BRNG -->

<!-- Vectorscope -->
<span data-toggle="collapse" data-target="#vectorscope"><button type="button" class="btn" data-toggle="tooltip" data-placement="bottom" title="Vectorscope from video to screen">Vectorscope</button></span>
<div id="vectorscope" class="collapse">
<h3>Plays vectorscope of video</h3>
<p class="link"></p>
<p><code>ffplay <i>input_file</i> -vf "split=2[m][v], [v]vectorscope=b=0.7:m=color3:g=green[v], [m][v]overlay=x=W-w:y=H-h"</code></p>
<dl>
<dt>ffplay</dt><dd>starts the command</dd>
<dt><i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-vf</dt><dd>creates a filtergraph to use for the streams</dd>
<dt>"</dt><dd>quotation mark to start filtergraph</dd>
<dt>split=2[m][v]</dt><dd>Splits the input into two identical outputs and names them [m] and [v]</dd>
<dt>,</dt><dd>comma signifies there is another parameter coming</dd>
<dt>[v]vectorscope=b=0.7:m=color3:g=green[v]</dt><dd>asserts usage of the vectorscope filter and sets a light background opacity (b, alias for bgopacity), sets a background color style (m, alias for mode), and graticule color (g, alias for graticule)</dd>
<dt>,</dt><dd>comma signifies there is another parameter coming</dd>
<dt>[m][v]overlay=x=W-w:y=H-h</dt><dd>declares where the vectorscope will overlay on top of the video image as it plays</dd>
<dt>"</dt><dd>quotation mark to end filtergraph</dd>
</dl>
</div>
<!-- ends Text watermark -->
<!-- ends Vectorscope -->
|
||||
|
||||
<!-- Transparent Image Watermark -->
|
||||
<span data-toggle="collapse" data-target="#image_watermark"><button type="button" class="btn" data-toggle="tooltip" data-placement="bottom" title="Overlay image watermark">Overlay image watermark on video</button></span>
|
||||
<div id="image_watermark" class="collapse">
|
||||
<h3>Overlay image watermark on video</h3>
|
||||
<!--Side by Side Videos/Temporal Difference Filter-->
|
||||
<span data-toggle="collapse" data-target="#tempdif"><button type="button" class="btn" data-toggle="tooltip" data-placement="bottom" title="Play two videos side by side while applying the temporal difference filter to both">Side by Side Videos/Temporal Difference Filter</button></span>
|
||||
<div id="tempdif" class="collapse">
|
||||
<h3>This will play two input videos side by side while also applying the temporal difference filter to them</h3>
|
||||
<p class="link"></p>
|
||||
<p><code>ffmpeg -i <i>input_video file</i> -i <i>input_image_file</i> -filter_complex overlay=main_w-overlay_w-5:5 <i>output_file</i></code></p>
|
||||
<p><code>ffmpeg -i input01 -i input02 -filter_complex "[0:v:0]tblend=all_mode=difference128[a];[1:v:0]tblend=all_mode=difference128[b];[a][b]hstack[out]" -map [out] -f nut -c:v rawvideo - | ffplay -</code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_video_file</i></dt><dd>path, name and extension of the input video file</dd>
<dt>-i <i>input_image_file</i></dt><dd>path, name and extension of the image file</dd>
<dt>-filter_complex overlay=main_w-overlay_w-5:5</dt><dd>This calls the overlay filter and sets x and y coordinates for the position of the watermark on the video. Instead of hardcoding specific x and y coordinates, <code>main_w-overlay_w-5:5</code> uses relative coordinates to place the watermark in the upper right hand corner, based on the width of your input files. Please see the <a href="https://www.ffmpeg.org/ffmpeg-all.html#toc-Examples-102" target="_blank">FFmpeg documentation for more examples.</a></dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
<dt>-i <i>input01</i> -i <i>input02</i></dt><dd>Designates the files to use for inputs one and two respectively</dd>
<dt>-filter_complex</dt><dd>Lets FFmpeg know we will be using a complex filter (this must be used for multiple inputs)</dd>
<dt>"</dt><dd>quotation mark to start filtergraph</dd>
<dt>[0:v:0]tblend=all_mode=difference128[a]</dt><dd>Applies the tblend filter (setting all_mode to difference128) to the first video stream from the first input and assigns the result to the output [a]</dd>
<dt>[1:v:0]tblend=all_mode=difference128[b]</dt><dd>Applies the tblend filter (setting all_mode to difference128) to the first video stream from the second input and assigns the result to the output [b]</dd>
<dt>[a][b]hstack[out]</dt><dd>Takes the outputs from the previous steps ([a] and [b]) and uses the hstack (horizontal stack) filter on them to create the side by side output, which is then named [out]</dd>
<dt>"</dt><dd>quotation mark to end filtergraph</dd>
<dt>-map [out]</dt><dd>Maps the output of the filter chain</dd>
<dt>-f nut</dt><dd>Sets the format for the output video stream to <a href="https://www.ffmpeg.org/ffmpeg-formats.html#nut" target="_blank">Nut</a></dd>
<dt>-c:v rawvideo</dt><dd>Sets the video codec of the output video stream to raw video</dd>
<dt>-</dt><dd>tells FFmpeg that the output will be piped to a new command (as opposed to a file)</dd>
<dt>|</dt><dd>Tells the system you will be piping the output of the previous command into a new command</dd>
<dt>ffplay -</dt><dd>Starts ffplay and tells it to use the pipe from the previous command as its input</dd>
</dl>
<div class="sample-image">
<h2>Example of filter output</h2>
<img src="img/tempdif.gif" alt="tempdif example">
</div>
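<p>The overlay filter's relative coordinates can place the image watermark in any corner; a lower-left placement, for instance, would look like this (a variant sketch of the image watermark command above, not from the original recipe):</p>
<p><code>ffmpeg -i <i>input_video_file</i> -i <i>input_image_file</i> -filter_complex overlay=5:main_h-overlay_h-5 <i>output_file</i></code></p>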
<!-- ends Image Watermark -->

<!-- Burn in timecode-->
<span data-toggle="collapse" data-target="#burn_in_timecode"><button type="button" class="btn" data-toggle="tooltip" data-placement="bottom" title="Burn in timecode">Burn in timecode</button></span>
<div id="burn_in_timecode" class="collapse">
<h3>Create a burnt in timecode on your video</h3>
<p class="link"></p>
<p><code>ffmpeg -i <i>input_file</i> -vf drawtext="fontfile=<i>font_path</i>:fontsize=<i>font_size</i>:timecode=<i>starting_timecode</i>:fontcolor=<i>font_colour</i>:box=1:boxcolor=<i>box_colour</i>:rate=<i>timecode_rate</i>:x=(w-text_w)/2:y=h/1.2" <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-vf drawtext=</dt><dd>This calls the drawtext filter with the following options:</dd>
<dt>"</dt><dd>quotation mark to start drawtext filter command</dd>
<dt>fontfile=<i>font_path</i></dt><dd> Set path to font. For example in macOS: <code>fontfile=/Library/Fonts/AppleGothic.ttf</code></dd>
<dt>fontsize=<i>font_size</i></dt><dd> Set font size. <code>35</code> is a good starting point for SD. Ideally this value is proportional to video size, for example use ffprobe to acquire video height and divide by 14.</dd>
<dt>timecode=<i>starting_timecode</i></dt><dd> Set the timecode to be displayed for the first frame. Timecode is to be represented as <code>hh:mm:ss[:;.]ff</code>. Colon escaping depends on the operating system; for example, in Ubuntu use <code>timecode='09\\:50\\:01\\:23'</code>. Ideally, this value would be generated from the file itself using ffprobe.</dd>
<dt>fontcolor=<i>font_colour</i></dt><dd> Set colour of font. Can be a text string such as <code>fontcolor=white</code> or a hexadecimal value such as <code>fontcolor=0xFFFFFF</code></dd>
<dt>box=1</dt><dd> Enable box around timecode</dd>
<dt>boxcolor=<i>box_colour</i></dt><dd> Set colour of box. Can be a text string such as <code>boxcolor=black</code> or a hexadecimal value such as <code>boxcolor=0x000000</code></dd>
<dt>rate=<i>timecode_rate</i></dt><dd> Framerate of video. For example <code>25/1</code></dd>
<dt>x=(w-text_w)/2:y=h/1.2</dt><dd> Sets <i>x</i> and <i>y</i> coordinates for the timecode. These relative values will horizontally centre your timecode in the bottom third regardless of video dimensions.</dd>
<dt>"</dt><dd>quotation mark to end drawtext filter command</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file.</dd>
</dl>
<p>Note: <code>-vf</code> is a shortcut for <code>-filter:v</code>.</p>
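<p>With hypothetical concrete values substituted for the placeholders (a macOS font path, SD-sized text, a PAL frame rate, and timecode starting at ten hours), the command above might read:</p>
<p><code>ffmpeg -i <i>input_file</i> -vf drawtext="fontfile=/Library/Fonts/AppleGothic.ttf:fontsize=35:timecode='10\\:00\\:00\\:00':fontcolor=white:box=1:boxcolor=black:rate=25/1:x=(w-text_w)/2:y=h/1.2" <i>output_file</i></code></p>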
</div>
<!-- ends Burn in timecode -->
<!-- ends Side by Side Videos/Temporal Difference Filter -->

</div>
<div class="well">
@ -1326,6 +1286,46 @@ e.g.: <code>ffmpeg -f concat -safe 0 -i mylist.txt -c copy <i>output_file</i></c
</div>
<!-- ends two pass loudnorm -->

</div>
<div class="well">
<h2 id="metadata">View or strip metadata</h2>

<!-- Pull specs -->
<span data-toggle="collapse" data-target="#pull_specs"><button type="button" class="btn" data-toggle="tooltip" data-placement="bottom" title="Pull specs from video file">Pull specs</button></span>
<div id="pull_specs" class="collapse">
<h3>Pull specs from video file</h3>
<p class="link"></p>
<p><code>ffprobe -i <i>input_file</i> -show_format -show_streams -show_data -print_format xml</code></p>
<p>This command extracts technical metadata from a video file and displays it as XML.</p>
<dl>
<dt>ffprobe</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-show_format</dt><dd>outputs file container information</dd>
<dt>-show_streams</dt><dd>outputs audio and video codec information</dd>
<dt>-show_data</dt><dd>adds a short “hexdump” to show_streams command output</dd>
<dt>-print_format</dt><dd>Set the output printing format (in this example “xml”; other formats include “json” and “flat”)</dd>
</dl>
<p>See also the <a href="https://www.ffmpeg.org/ffprobe.html" target="_blank">FFmpeg documentation on ffprobe</a> for a full list of flags, commands, and options.</p>
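<p>To pull a single value rather than the full report (for example, the codec of the first video stream), ffprobe's stream selection and entry options can be combined; this is a sketch using standard ffprobe flags:</p>
<p><code>ffprobe -i <i>input_file</i> -select_streams v:0 -show_entries stream=codec_name -print_format flat</code></p>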
</div>
<!-- ends Pull specs -->

<!-- Strip metadata -->
<span data-toggle="collapse" data-target="#strip_metadata"><button type="button" class="btn" data-toggle="tooltip" data-placement="bottom" title="Strip metadata">Strip metadata</button></span>
<div id="strip_metadata" class="collapse">
<h3>Strip metadata from video file</h3>
<p class="link"></p>
<p><code>ffmpeg -i <i>input_file</i> -map_metadata -1 -c:v copy -c:a copy <i>output_file</i></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-map_metadata -1</dt><dd>sets metadata copying to -1, which copies nothing</dd>
<dt>-c:v copy</dt><dd>copies the video stream without re-encoding</dd>
<dt>-c:a copy</dt><dd>copies the audio stream without re-encoding</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
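<p>To confirm that the metadata is gone, the output can be re-inspected with ffprobe (a quick check, not part of the original recipe):</p>
<p><code>ffprobe -i <i>output_file</i> -show_format</code></p>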
</div>
<!-- ends Strip metadata -->

</div>
<div class="well">
<h2 id="preservation">Preservation tasks</h2>
