Mirror of https://github.com/amiaopensource/ffmprovisr.git (synced 2025-10-22 21:59:13 +02:00)

Compare commits: v2019-02-1...v2019-09-2 (42 commits)
Commits in this range (SHA1): 782b1a992a, 9f6e6846e0, c4bd6a9191, 8b48abf751, 54aab85937, 51ca7a4200, 76c3fe1f88, 839d50111e, a6dd9c203c, 3402d968a7, 283756f8cf, d28ae29f5c, 2be5576012, 43c98527a7, 164d757309, 2a87a120c3, fbe5f216a7, f93922a9c3, e06a76f559, 0279c1d842, 7e72b1c254, 07fe8bf966, c32a7f44ad, ea2c29a38c, d624a3fc11, ade2615da3, 72545d5c31, c6215c1953, abfb9ea982, 02beb6ab1d, b552ec4a31, c26c0d57ea, d023bf7500, 5b795e53dd, 806fd0c49b, 8a2cdbc088, 9df208345c, 19e38145dd, 7453e500df, 8ceb0f4fc6, d95c2e6aa1, ef82e43fb8

@@ -105,7 +105,7 @@ h2 {
margin: 6px 0px 12px 0px;
}

-h3 {
+h3, h5 {
font-size: 1.5em;
}

index.html (55 changed lines)

@@ -62,6 +62,7 @@
<span class="intro-lead">Sister projects</span>
<p><a href="https://dd388.github.io/crals/" target="_blank">Script Ahoy</a>: Community Resource for Archivists and Librarians Scripting</p>
<p><a href="https://datapraxis.github.io/sourcecaster/" target="_blank">The Sourcecaster</a>: an app that helps you use the command line to work through common challenges that come up when working with digital primary sources.</p>
+<p><a href="https://pugetsoundandvision.github.io/micropops/" target="_blank">Micropops</a>: One liners and automation tools from Moving Image Preservation of Puget Sound</p>
<p><a href="https://amiaopensource.github.io/cable-bible/" target="_blank">Cable Bible</a>: A Guide to Cables and Connectors Used for Audiovisual Tech</p>
</div>

@@ -71,7 +72,7 @@
<label class="recipe" for="basic-structure">Basic structure of an FFmpeg command</label>
<input type="checkbox" id="basic-structure">
<div class="hiding">
-<h3>Basic structure of an FFmpeg command</h3>
+<h5>Basic structure of an FFmpeg command</h5>
<p>At its basis, an FFmpeg command is relatively simple. After you have installed FFmpeg (see instructions <a href="https://avpres.net/FFmpeg/#ch1" target="_blank">here</a>), the program is invoked simply by typing <code>ffmpeg</code> at the command prompt.</p>
<p>Subsequently, each instruction that you supply to FFmpeg is actually a pair: a flag, which designates the <em>type</em> of action you want to carry out; and then the specifics of that action. Flags are always prepended with a hyphen.</p>
<p>For example, in the instruction <code>-i <em>input_file.ext</em></code>, the <code>-i</code> flag tells FFmpeg that you are supplying an input file, and <code>input_file.ext</code> states which file it is.</p>
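
For context, the flag-and-value pattern described above assembles into a complete command in the Basic rewrap recipe that appears in recipes.txt later in this diff: `-i` names the input, `-c copy` and `-map 0` say what to do with it, and the final argument names the output.

```
ffmpeg -i input_file.ext -c copy -map 0 output_file.ext
```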

@@ -110,7 +111,7 @@
<label class="recipe" for="codec-defaults">Codec defaults</label>
<input type="checkbox" id="codec-defaults">
<div class="hiding">
-<h3>Codec Defaults</h3>
+<h5>Codec Defaults</h5>
<p>Unless specified, FFmpeg will automatically set codec choices and codec parameters based off of internal defaults. These defaults are applied based on the file type used in the output (for example <code>.mov</code> or <code>.wav</code>).</p>
<p>When creating or transcoding files with FFmpeg, it is important to consider codec settings for both audio and video, as the default options may not be desirable in your particular context. The following is a brief list of codec defaults for some common file types:</p>
<ul>
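
One way to see those defaults in practice (a sketch, not part of the diff; file names are placeholders) is to transcode without any codec flags and then inspect what FFmpeg chose, using the same `-show_streams` probe the site recommends elsewhere:

```
ffmpeg -i input_file output_file.mov
ffprobe output_file.mov -show_streams
```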

@@ -160,7 +161,7 @@
<label class="recipe" for="stream-mapping">Stream mapping</label>
<input type="checkbox" id="stream-mapping">
<div class="hiding">
-<h3>Stream mapping</h3>
+<h5>Stream mapping</h5>
<p>Stream mapping is the practice of defining which of the streams (e.g., video or audio tracks) present in an input file will be present in the output file. FFmpeg recognizes five stream types:</p>
<ul>
<li><code>a</code> - audio</li>

@@ -177,8 +178,12 @@
<li><code>-map 0:0 -map 0:2</code> means ‘take the first and third streams from the first input file’.</li>
<li><code>-map 0:1 -map 1:0</code> means ‘take the second stream from the first input file and the first stream from the second input file’.</li>
</ul>
-<p>To map <em>all</em> streams in the input file to the output file, use <code>-map 0</code>. However, note that not all container formats can include all stream types: for example, .mp4 cannot contain timecode.</p>
<p>When no mapping is specified in an ffmpeg command, the default for video files is to take just one video and one audio stream for the output: other stream types, such as timecode or subtitles, will not be copied to the output file by default. If multiple video or audio streams are present, the best quality one is automatically selected by FFmpeg.</p>
+<p>To map <em>all</em> streams in the input file to the output file, use <code>-map 0</code>. However, note that not all container formats can include all stream types: for example, .mp4 cannot contain timecode.</p>
+<h4>Mapping with a failsafe</h4>
+<p>To safely process files that may or may not contain a given type of stream, you can add a trailing <code>?</code> to your map commands: for example, <code>-map 0:a?</code> instead of <code>-map 0:a</code>.</p>
+<p>This makes the map optional: audio streams will be mapped over if they are present in the file—but if the file contains no audio streams, the transcode will proceed as usual, minus the audio stream mapping. Without adding the trailing <code>?</code>, FFmpeg will exit with an error on that file.</p>
+<p>This is especially recommended when batch processing video files: it ensures that all files in your batch will be transcoded, whether or not they contain audio streams.</p>
<p>For more information, check out the FFmpeg wiki <a href="https://trac.ffmpeg.org/wiki/Map" target="_blank">Map</a> page, and the official FFmpeg <a href="https://ffmpeg.org/ffmpeg.html#Advanced-options" target="_blank">documentation on <code>-map</code></a>.</p>
<p class="link"></p>
</div>
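
As an illustration of the optional audio map described above (placeholder file names, not part of the diff), a copy-only rewrap that tolerates silent files could look like:

```
ffmpeg -i input_file -map 0:v -map 0:a? -c copy output_file
```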

@@ -212,28 +217,6 @@
</div>
<!-- End Basic rewrap command -->

-<!-- MKV to MP4 -->
-<label class="recipe" for="mkv_to_mp4">Convert Matroska (MKV) to MP4</label>
-<input type="checkbox" id="mkv_to_mp4">
-<div class="hiding">
-<h3>MKV to MP4</h3>
-<p><code>ffmpeg -i <em>input_file</em>.mkv -c:v copy -c:a aac <em>output_file</em>.mp4</code></p>
-<p>This will convert your Matroska (MKV) files to MP4 files.</p>
-<dl>
-<dt>ffmpeg</dt><dd>starts the command</dd>
-<dt>-i <em>input_file</em></dt><dd>path and name of the input file<br>
-The extension for the Matroska container is <code>.mkv</code>.</dd>
-<dt>-c:v copy</dt><dd>copies the video stream without re-encoding it</dd>
-<dt>-c:a aac</dt><dd>re-encodes the audio stream using the AAC audio codec<br>
-Note that sadly MP4 cannot contain sound encoded by a PCM (Pulse-Code Modulation) audio codec.<br>
-For silent videos you can replace <code>-c:a aac</code> by <code>-an</code>, which means that there will be no audio track in the output file.</dd>
-<dt><em>output_file</em></dt><dd>path and name of the output file<br>
-The extension for the MP4 container is <code>.mp4</code>.</dd>
-</dl>
-<p class="link"></p>
-</div>
-<!-- ends MKV to MP4 -->
-
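
If you still need the silent-video variant mentioned in the removed recipe above, the substitution it describes would look like this (a sketch using the recipe's own placeholder names):

```
ffmpeg -i input_file.mkv -c:v copy -an output_file.mp4
```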

<!-- Rewrap DV -->
<label class="recipe" for="rewrap-dv">Rewrap DV video to .dv file</label>
<input type="checkbox" id="rewrap-dv">

@@ -314,7 +297,7 @@
<dl>
<dt>-preset <em>veryslow</em></dt><dd>This option tells FFmpeg to use the slowest preset possible for the best compression quality.<br>
Available presets, from slowest to fastest, are: <code>veryslow</code>, <code>slower</code>, <code>slow</code>, <code>medium</code>, <code>fast</code>, <code>faster</code>, <code>veryfast</code>, <code>superfast</code>, <code>ultrafast</code>.</dd>
-<dt>-crf <em>18</em></dt><dd>Specifying a lower CRF will make a larger file with better visual quality. For H.264 files being encoded with a 4:2:0 chroma subsampling scheme (i.e., using <code>-pix_fmt yuv420p</code>), the scale ranges between 0-51, with 0 being lossless and 51 the worst possible quality.<br>
+<dt>-crf <em>18</em></dt><dd>Specifying a lower CRF will make a larger file with better visual quality. For H.264 files being encoded with a 4:2:0 chroma subsampling scheme (i.e., using <code>-pix_fmt yuv420p</code>), the scale ranges between 0-51 for 8-bit content, with 0 being lossless and 51 the worst possible quality.<br>
If no crf is specified, <code>libx264</code> will use a default value of 23. 18 is often considered a “visually lossless” compression.</dd>
</dl>
<p>For more information, see the <a href="https://trac.ffmpeg.org/wiki/Encode/H.264" target="_blank">FFmpeg and H.264 Encoding Guide</a> on the FFmpeg wiki.</p>
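
Putting the preset and CRF options explained above into the site's H.264 access-file recipe would give a command roughly like the following (an illustration, not part of the diff):

```
ffmpeg -i input_file -c:v libx264 -pix_fmt yuv420p -preset veryslow -crf 18 -c:a aac output_file
```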

@@ -427,7 +410,7 @@
<dt><em>output file</em></dt><dd>path, name and extension of the output file</dd>
</dl>
<p>The libx265 encoding library defaults to a ‘medium’ preset for compression quality and a CRF of 28. CRF stands for ‘constant rate factor’ and determines the quality and file size of the resulting H.265 video. The CRF scale ranges from 0 (best quality [lossless]; largest file size) to 51 (worst quality; smallest file size).</p>
-<p>A CRF of 28 for H.265 can be considered a medium setting, <a href="https://trac.ffmpeg.org/wiki/Encode/H.265#ConstantRateFactorCRF" target="_blank">corresponding</a> to a CRF of 23 in <a href="./index.html#transcode_h264">encoding H.264</a>, but should result in about half the file size.</p>
+<p>A CRF of 28 for H.265 can be considered a medium setting, <a href="https://trac.ffmpeg.org/wiki/Encode/H.265#ConstantRateFactorCRF" target="_blank">corresponding</a> to a CRF of 23 in <a href="#transcode_h264">encoding H.264</a>, but should result in about half the file size.</p>
<p>To create a higher quality file, you can add these presets:</p>
<p><code>ffmpeg -i <em>input_file</em> -c:v libx265 -pix_fmt yuv420p -preset veryslow -crf 18 -c:a copy <em>output_file</em></code></p>
<dl>

@@ -488,7 +471,7 @@
<!-- ends WAV to MP3 -->

<!-- append notice to access mp3 -->
-<label class="recipe" for="append_mp3">Generate two access MP3s (with and without copyright).</label>
+<label class="recipe" for="append_mp3">Generate two access MP3s (with and without copyright)</label>
<input type="checkbox" id="append_mp3">
<div class="hiding">
<h3>Generate two access MP3s from input. One with appended audio (such as a copyright notice) and one unmodified.</h3>

@@ -623,7 +606,7 @@
<label class="recipe" for="change_DAR">Change display aspect ratio without re-encoding</label>
<input type="checkbox" id="change_DAR">
<div class="hiding">
-<h3>Change Display Aspect Ratio without reencoding video</h3>
+<h3>Change Display Aspect Ratio without re-encoding video</h3>
<p><code>ffmpeg -i <em>input_file</em> -c:v copy -aspect 4:3 <em>output_file</em></code></p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>

@@ -680,7 +663,7 @@
<img src="./img/colourspace_metadata_mediainfo.png" alt="MediaInfo screenshots of colorspace metadata"><br>
<p><span class="beware">⚠</span> Using this command it is possible to add Rec.709 tags to a file that is actually Rec.601 (etc), so apply with caution!</p>
<p>These commands are relevant for H.264 and H.265 videos, encoded with <code>libx264</code> and <code>libx265</code> respectively.</p>
-<p><strong>Note:</strong> If you wish to embed colorspace metadata <em>without</em> changing to another colorspace, omit <code>-vf colormatrix=src:dst</code>. However, since it is <code>libx264</code>/<code>libx265</code> that writes the metadata, it’s not possible to add these tags without reencoding the video stream.</p>
+<p><strong>Note:</strong> If you wish to embed colorspace metadata <em>without</em> changing to another colorspace, omit <code>-vf colormatrix=src:dst</code>. However, since it is <code>libx264</code>/<code>libx265</code> that writes the metadata, it’s not possible to add these tags without re-encoding the video stream.</p>
<p>For all possible values for <code>-color_primaries</code>, <code>-color_trc</code>, and <code>-colorspace</code>, see the FFmpeg documentation on <a href="https://ffmpeg.org/ffmpeg-codecs.html#Codec-Options" target="_blank">codec options</a>.</p>
<hr>
<p id="fn1" class="footnote">1. Out of step with the regular pattern, <code>-color_trc</code> doesn’t accept <code>bt470bg</code>; it is instead here referred to directly as gamma.<br>
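
For orientation, a 601-to-709 conversion that also writes the matching metadata tags discussed above could be assembled roughly like this (a hedged sketch combining the site's colormatrix recipe with the -color_* options; verify the values against your source material):

```
ffmpeg -i input_file -c:v libx264 -vf colormatrix=bt601:bt709 -color_primaries bt709 -color_trc bt709 -colorspace bt709 output_file
```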

@@ -788,7 +771,7 @@
</dl>
<p>It's also possible to specify the crop position by adding the x and y coordinates representing the top left of your cropped area to your crop filter, as such:</p>
<p><code>ffmpeg -i <em>input_file</em> -vf "crop=<em>width</em>:<em>height</em>[:<em>x_position</em>:<em>y_position</em>]" <em>output_file</em></code></p>
-<h3>Examples</h3>
+<h5>Examples</h5>
<p>The original frame, a screenshot of the SMPTE colorbars:</p>
<img class="sample-image" src="img/crop_example_orig.png" alt="VLC screenshot of Maggie Cheung">
<p>Result of the command <code>ffmpeg -i <em>smpte_colorsbars.mov</em> -vf "crop=500:500" <em>output_file</em></code>:</p>
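
To make the x/y positioning concrete, a crop that keeps a 500×500 area starting 20 pixels in from the top-left corner would look like this (example values, not taken from the page):

```
ffmpeg -i input_file -vf "crop=500:500:20:20" output_file
```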

@@ -2137,7 +2120,7 @@
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-f lavfi</dt><dd>tells FFmpeg to use the <a href="https://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">libavfilter</a> input virtual device</dd>
-<dt>-i testsrc=size=720x576:rate=25</dt><dd>asks for the testsrc filter pattern as input. Adjusting the <code>size</code> and <code>rate</code> options allows you to choose a specific frame size and framerate. <br>
+<dt>-i testsrc=size=720x576:rate=25</dt><dd>asks for the testsrc filter pattern as input. Adjusting the <code>size</code> and <code>rate</code> options allows you to choose a specific frame size and framerate.<br>
The different test patterns that can be generated are listed <a href="https://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc_002c-testsrc2_002c-yuvtestsrc" target="_blank">here</a>.</dd>
<dt>-c:v v210</dt><dd>transcodes video from rawvideo to 10-bit Uncompressed Y′C<sub>B</sub>C<sub>R</sub> 4:2:2. Alter this setting to set your desired codec.</dd>
<dt>-t 10</dt><dd>specifies recording time of 10 seconds</dd>
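
For reference, the complete test-pattern command this definition list breaks down appears in recipes.txt later in this diff:

```
ffmpeg -f lavfi -i testsrc=size=720x576:rate=25 -c:v v210 -t 10 output_file
```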

@@ -2243,7 +2226,7 @@
<div class="hiding">
<h3>Conway's Game of Life</h3>
<p>Simulates <a href="https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life" target="_blank">Conway's Game of Life</a></p>
-<p><code>ffplay -f lavfi life=s=300x200:mold=10:r=60:ratio=0.1:death_color=#C83232:life_color=#00ff00,scale=1200:800</code></p>
+<p><code>ffplay -f lavfi life=s=300x200:mold=10:r=60:ratio=0.1:death_color=#c83232:life_color=#00ff00,scale=1200:800</code></p>
<dl>
<dt>ffplay</dt><dd>starts the command</dd>
<dt>-f lavfi</dt><dd>tells ffplay to use the <a href="https://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">Libavfilter</a> input virtual device</dd>

@@ -2251,13 +2234,13 @@
<dt>:</dt><dd>indicates there’s another parameter coming</dd>
<dt>mold=10:r=60:ratio=0.1</dt><dd>sets up the rules of the game: cell mold speed, video rate, and random fill ratio</dd>
<dt>:</dt><dd>indicates there’s another parameter coming</dd>
-<dt>death_color=#C83232:life_color=#00ff00</dt><dd>specifies color for cell death and cell life; mold_color can also be set</dd>
+<dt>death_color=#c83232:life_color=#00ff00</dt><dd>specifies color for cell death and cell life; mold_color can also be set</dd>
<dt>,</dt><dd>comma signifies closing of video source assertion and ready for filter assertion</dd>
<dt>scale=1200:800</dt><dd>scale to 1200 width and 800 height</dd>
</dl>
<img src="img/life.gif" alt="GIF of above command">
<p>To save a portion of the stream instead of playing it back infinitely, use the following command:</p>
-<p><code>ffmpeg -f lavfi -i life=s=300x200:mold=10:r=60:ratio=0.1:death_color=#C83232:life_color=#00ff00,scale=1200:800 -t 5 <em>output_file</em></code></p>
+<p><code>ffmpeg -f lavfi -i life=s=300x200:mold=10:r=60:ratio=0.1:death_color=#c83232:life_color=#00ff00,scale=1200:800 -t 5 <em>output_file</em></code></p>
<p class="link"></p>
</div>
<!-- ends Game of Life -->

js/js.js (35 changed lines)

@@ -1,23 +1,38 @@
$(document).ready(function() {

-  // open recipe window if a hash is found in URL
-  if(window.location.hash) {
-    id = window.location.hash
-    console.log(id.substring(1))
+  function appendLink(id) {
+    $(id).next('div').find('.link').empty();
+    $(id).next('div').find('.link').append("<small>Link to this command: <a href='https://amiaopensource.github.io/ffmprovisr/index.html" + id + "'>https://amiaopensource.github.io/ffmprovisr/index.html" + id + "</a></small>");
+  }
+
+  function moveToRecipe(id) {
     document.getElementById(id.substring(1)).checked = true;
-    $('html, body').animate({ scrollTop: $(id).offset().top}, 1000);
-    $(id).closest('div').find('.link').empty();
-    $(id).closest('div').find('.link').append("<small>Link to this command: <a href='https://amiaopensource.github.io/ffmprovisr/index.html"+window.location.hash+"'>https://amiaopensource.github.io/ffmprovisr/index.html"+window.location.hash+"</a></small>");
+    $('html, body').animate({ scrollTop: $(id).offset().top }, 1000);
+    appendLink(id)
   }

+  // open recipe window if a hash is found in URL
+  if (window.location.hash) {
+    id = window.location.hash
+    moveToRecipe(id)
+  }
+
   // add hash URL when recipe is opened
   $('label[class="recipe"]').on("click", function(){
     id = $(this).attr("for");
     window.location.hash = ('#' + id)
-    $('#' + id).closest('div').find('.link').empty();
-    $('#' + id).closest('div').find('.link').append("<small>Link to this command: <a href='https://amiaopensource.github.io/ffmprovisr/index.html"+window.location.hash+"'>https://amiaopensource.github.io/ffmprovisr/index.html"+window.location.hash+"</a></small>");
-  });
+    appendLink('#' + id)
+  })
+
+  // open recipe when clicked
+  $('a').on("click", function(){
+    intralink = $(this).attr("href")
+    if (intralink[0] == "#") {
+      moveToRecipe(intralink)
+    }
+  })

   // open all windows if button is clicked
   $('#open-all').on("click", function(){
     $('input[type=checkbox]').each(function(){
       this.checked = !this.checked;

readme.md (30 changed lines)

@@ -23,6 +23,26 @@ ffmprovisr
```
This works currently under macOS, Linux and the Linux apps on Windows (Ubuntu and Debian tested). On classic Windows you can install the last [release](https://github.com/amiaopensource/ffmprovisr/releases) manually and then open `index.html` in a browser.

+#### Parseable list of the commands
+
+A list of all recipes in an easily parseable [ASCII text](recipes.txt) format is provided as well. For each recipe it contains the title and the command, in the following format:
+
+```
+# title of recipe 1
+ffmpeg command 1
+# title of recipe 2
+ffmpeg command 2
+
+...
+
+# title of recipe n-1
+ffmpeg command n-1
+# title of recipe n
+ffmpeg command n
+```
+
+The used [one-liner](scripts/get_recipe_list) is in the `scripts` folder.
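
As a quick illustration of how that flat format can be consumed (a hypothetical snippet, not part of the repo), printing the command for a single recipe by its exact title only needs awk: the pattern matches the title line, and the next line is the command.

```
awk '/^# Deinterlace video$/ { getline; print }' recipes.txt
```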

## How do I contribute?

You are welcome to edit the codebase yourself, or just supply the information and ask it to be added to the site.

@@ -53,6 +73,7 @@ You can read our contributor code of conduct [here](https://github.com/amiaopens
*Code Contributors*:
ablwr (Ashley)
bastibeckr (Basti Becker)
b00giehead (Joanna White)
bturkus
dericed (Dave Rice)
edsu (Ed Summers)

@@ -62,6 +83,7 @@ kfrn (Katherine Frances Nagels)
kgrons (Kathryn Gronsbell)
kieranjol (Kieran O'Leary)
llogan (Lou)
mgiraldo (Mauricio Giraldo)
pjotrek-b (Peter B.)
privatezero (Andrew Weaver)
retokromer (Reto Kromer)

@@ -71,6 +93,7 @@ rfraimow
ablwr (Ashley)
audiovisualopen
bastibeckr (Basti Becker)
b00giehead (Joanna White)
brainwane (Sumana Harihareswara)
bturkus
dericed (Dave Rice)

@@ -89,6 +112,7 @@ kfrn (Katherine Frances Nagels)
kgrons (Kathryn Gronsbell)
kieranjol (Kieran O'Leary)
llogan (Lou)
mgiraldo (Mauricio Giraldo)
mulvya
nkrabben (Nick Krabbenhoeft)
pjotrek-b (Peter B.)

@@ -100,9 +124,9 @@ ross-spencer (Ross Spencer)
todrobbins (Tod Robbins)

Repo: amiaopensource/ffmprovisr
-Code Contributors: 15
-All Contributors: 30
-Last updated: 2018-04-22 (4:2:2 Day)
+Code Contributors: 17
+All Contributors: 32
+Last updated: 2019-02-11

## AVHack Team

recipes.txt (new file, 210 lines)

@@ -0,0 +1,210 @@
# Basic rewrap command
ffmpeg -i input_file.ext -c copy -map 0 output_file.ext
# Rewrap DV video to .dv file
ffmpeg -i input_file -f rawvideo -c:v copy output_file.dv
# Transcode to deinterlaced Apple ProRes LT
ffmpeg -i input_file -c:v prores -profile:v 1 -vf yadif -c:a pcm_s16le output_file.mov
# Transcode to an H.264 access file
ffmpeg -i input_file -c:v libx264 -pix_fmt yuv420p -c:a aac output_file
# Transcode from DCP to an H.264 access file
ffmpeg -i input_video_file.mxf -i input_audio_file.mxf -c:v libx264 -pix_fmt yuv420p -c:a aac output_file.mp4
# Transcode your file with the FFV1 Version 3 Codec in a Matroska container
ffmpeg -i input_file -map 0 -dn -c:v ffv1 -level 3 -g 1 -slicecrc 1 -slices 16 -c:a copy output_file.mkv -f framemd5 -an framemd5_output_file
# Convert DVD to H.264
ffmpeg -i concat:input_file_1\|input_file_2\|input_file_3 -c:v libx264 -c:a aac output_file.mp4
# Transcode to an H.265/HEVC MP4
ffmpeg -i input_file -c:v libx265 -pix_fmt yuv420p -c:a copy output_file
# Transcode to an Ogg Theora
ffmpeg -i input_file -acodec libvorbis -b:v 690k output_file
# Convert WAV to MP3
ffmpeg -i input_file.wav -write_id3v1 1 -id3v2_version 3 -dither_method rectangular -out_sample_rate 48k -qscale:a 1 output_file.mp3
# Generate two access MP3s (with and without copyright).
ffmpeg -i input_file -i input_file_to_append -filter_complex "[0:a:0]asplit=2[a][b];[b]afifo[bb];[1:a:0][bb]concat=n=2:v=0:a=1[concatout]" -map "[a]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 output_file.mp3 -map "[concatout]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 output_file_appended.mp3
# Convert WAV to AAC/MP4
ffmpeg -i input_file.wav -c:a aac -b:a 128k -dither_method rectangular -ar 44100 output_file.mp4
# Transform 4:3 aspect ratio into 16:9 with pillarbox
ffmpeg -i input_file -filter:v "pad=ih*16/9:ih:(ow-iw)/2:(oh-ih)/2" -c:a copy output_file
# Transform 16:9 aspect ratio video into 4:3 with letterbox
ffmpeg -i input_file -filter:v "pad=iw:iw*3/4:(ow-iw)/2:(oh-ih)/2" -c:a copy output_file
# Flip video image
ffmpeg -i input_file -filter:v "hflip,vflip" -c:a copy output_file
# Transform SD to HD with pillarbox
ffmpeg -i input_file -filter:v "colormatrix=bt601:bt709, scale=1440:1080:flags=lanczos, pad=1920:1080:240:0" -c:a copy output_file
# Change display aspect ratio without re-encoding
ffmpeg -i input_file -c:v copy -aspect 4:3 output_file
# Convert colorspace of video
ffmpeg -i input_file -c:v libx264 -vf colormatrix=src:dst output_file
# Modify image and sound speed
ffmpeg -i input_file -r output_fps -filter_complex "[0:v]setpts=input_fps/output_fps*PTS[v]; [0:a]atempo=output_fps/input_fps[a]" -map "[v]" -map "[a]" output_file
# Synchronize video and audio streams
ffmpeg -i input_file -itsoffset 0.125 -i input_file -map 1:v -map 0:a -c copy output_file
# Clarify stream properties
ffprobe input_file -show_streams
# Crop video
ffmpeg -i input_file -vf "crop=width:height" output_file
# Change video color to black and white
ffmpeg -i input_file -filter_complex hue=s=0 -c:a copy output_file
# Extract audio without loss from an AV file
ffmpeg -i input_file -c:a copy -vn output_file
# Combine audio tracks
ffmpeg -i input_file -filter_complex "[0:a:0][0:a:1]amerge[out]" -map 0:v -map "[out]" -c:v copy -shortest output_file
# Inverses the audio phase of the second channel
ffmpeg -i input_file -af pan="stereo|c0=c0|c1=-1*c1" output_file
# Calculate Loudness Levels
ffmpeg -i input_file -af loudnorm=print_format=json -f null -
# RIAA Equalization
ffmpeg -i input_file -af aemphasis=type=riaa output_file
# Reverse CD Pre-Emphasis
ffmpeg -i input_file -af aemphasis=type=cd output_file
# One Pass Loudness Normalization
ffmpeg -i input_file -af loudnorm=dual_mono=true -ar 48k output_file
# Two Pass Loudness Normalization
ffmpeg -i input_file -af loudnorm=dual_mono=true:measured_I=input_i:measured_TP=input_tp:measured_LRA=input_lra:measured_thresh=input_thresh:offset=target_offset:linear=true -ar 48k output_file
# Fix A/V sync issues by resampling audio
ffmpeg -i input_file -c:v copy -c:a pcm_s16le -af "aresample=async=1000" output_file
# Join (concatenate) two or more files of the same type
ffmpeg -f concat -i mylist.txt -c copy output_file
# Join (concatenate) two or more files of different types
ffmpeg -i input_1.avi -i input_2.mp4 -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[video_out][audio_out]" -map "[video_out]" -map "[audio_out]" output_file
# Split one file into several smaller segments
ffmpeg -i input_file -c copy -map 0 -f segment -segment_time 60 -reset_timestamps 1 output_file-%03d.mkv
# Trim file
ffmpeg -i input_file -ss 00:02:00 -to 00:55:00 -c copy -map 0 output_file
# Create an excerpt, starting from the beginning of the file
ffmpeg -i input_file -t 5 -c copy -map 0 output_file
# Create a new file with the first five seconds trimmed off the original
ffmpeg -i input_file -ss 5 -c copy -map 0 output_file
# Create a new file with the final five seconds of the original
ffmpeg -sseof -5 -i input_file -c copy -map 0 output_file
# Trim silence from beginning of an audio file
ffmpeg -i input_file -af silenceremove=start_threshold=-57dB:start_duration=1:start_periods=1 -c:a your_codec_choice -ar your_sample_rate_choice output_file
# Trim silence from the end of an audio file
ffmpeg -i input_file -af areverse,silenceremove=start_threshold=-57dB:start_duration=1:start_periods=1,areverse -c:a your_codec_choice -ar your_sample_rate_choice output_file
# Upscaled, pillar-boxed HD H.264 access files from SD NTSC source
ffmpeg -i input_file -c:v libx264 -filter:v "yadif, scale=1440:1080:flags=lanczos, pad=1920:1080:(ow-iw)/2:(oh-ih)/2, format=yuv420p" output_file
# Deinterlace video
ffmpeg -i input_file -c:v libx264 -vf "yadif,format=yuv420p" output_file
# Inverse telecine
ffmpeg -i input_file -c:v libx264 -vf "fieldmatch,yadif,decimate" output_file
# Set field order for interlaced video
ffmpeg -i input_file -c:v video_codec -filter:v setfield=tff output_file
# Identify interlacement patterns in a video file
ffmpeg -i input file -filter:v idet -f null -
# Create opaque centered text watermark
ffmpeg -i input_file -vf drawtext="fontfile=font_path:fontsize=font_size:text=watermark_text:fontcolor=font_color:alpha=0.4:x=(w-text_w)/2:y=(h-text_h)/2" output_file
# Overlay image watermark on video
ffmpeg -i input_video file -i input_image_file -filter_complex overlay=main_w-overlay_w-5:5 output_file
# Burn in timecode
ffmpeg -i input_file -vf drawtext="fontfile=font_path:fontsize=font_size:timecode=starting_timecode:fontcolor=font_colour:box=1:boxcolor=box_colour:rate=timecode_rate:x=(w-text_w)/2:y=h/1.2" output_file
Embed subtitles
ffmpeg -i input_file -i subtitles_file -c copy -c:s mov_text output_file
# Export one thumbnail per video file
ffmpeg -i input_file -ss 00:00:20 -vframes 1 thumb.png
# Export many thumbnails per video file
ffmpeg -i input_file -vf fps=1/60 out%d.png
# Create GIF from still images
ffmpeg -f image2 -framerate 9 -pattern_type glob -i "input_image_*.jpg" -vf scale=250x250 output_file.gif
# Create GIF from a video
ffmpeg -ss HH:MM:SS -i input_file -filter_complex "fps=10,scale=500:-1:flags=lanczos,palettegen" -t 3 palette.png
ffmpeg -ss HH:MM:SS -i input_file -i palette.png -filter_complex "[0:v]fps=10, scale=500:-1:flags=lanczos[v], [v][1:v]paletteuse" -t 3 -loop 6 output_file
# Transcode an image sequence into uncompressed 10-bit video
ffmpeg -f image2 -framerate 24 -i input_file_%06d.ext -c:v v210 output_file
# Create video from image and audio
ffmpeg -r 1 -loop 1 -i image_file -i audio_file -acodec copy -shortest -vf scale=1280:720 output_file
# Audio Bitscope
ffplay -f lavfi "amovie=input_file, asplit=2[out1][a], [a]abitscope=colors=purple|yellow[out0]"
# Play a graphical output showing decibel levels of an input file
ffplay -f lavfi "amovie='input.mp3', astats=metadata=1:reset=1, adrawgraph=lavfi.astats.Overall.Peak_level:max=0:min=-30.0:size=700x256:bg=Black[out]"
# Identify pixels out of broadcast range
ffplay -f lavfi "movie='input.mp4', signalstats=out=brng:color=cyan[out]"
# Vectorscope from video to screen
ffplay input_file -vf "split=2[m][v], [v]vectorscope=b=0.7:m=color3:g=green[v], [m][v]overlay=x=W-w:y=H-h"
# Side by Side Videos/Temporal Difference Filter
ffmpeg -i input01 -i input02 -filter_complex "[0:v:0]tblend=all_mode=difference128[a];[1:v:0]tblend=all_mode=difference128[b];[a][b]hstack[out]" -map [out] -f nut -c:v rawvideo - | ffplay -
# Use xstack to arrange output layout of multiple video sources
ffplay -f lavfi -i testsrc -vf "split=3[a][b][c],[a][b][c]xstack=inputs=3:layout=0_0|0_h0|0_h0+h1[out]"
# Pull specs from video file
ffprobe -i input_file -show_format -show_streams -show_data -print_format xml
# Strip metadata
ffmpeg -i input_file -map_metadata -1 -c:v copy -c:a copy output_file
# Batch processing (Mac/Linux)
for file in *.mxf; do ffmpeg -i "$file" -map 0 -c copy "${file%.mxf}.mov"; done
# Check decoder errors
ffmpeg -i input_file -f null -
# Check FFV1 fixity
ffmpeg -report -i input_file -f null -
# Create MD5 checksums (video frames)
ffmpeg -i input_file -f framemd5 -an output_file
# Create MD5 checksums (audio samples)
ffmpeg -i input_file -af "asetnsamples=n=48000" -f framemd5 -vn output_file
# Create MD5 checksum(s) for A/V stream data only
ffmpeg -i input_file -map 0:v:0 -c:v copy -f md5 output_file_1 -map 0:a:0 -c:a copy -f md5 output_file_2
# Get checksum for video/audio stream
ffmpeg -loglevel error -i input_file -map 0:v:0 -f hash -hash md5 -
# QCTools report (with audio)
ffprobe -f lavfi -i "movie=input_file:s=v+a[in0][in1], [in0]signalstats=stat=tout+vrep+brng, cropdetect=reset=1:round=1, idet=half_life=1, split[a][b];[a]field=top[a1];[b]field=bottom, split[b1][b2];[a1][b1]psnr[c1];[c1][b2]ssim[out0];[in1]ebur128=metadata=1, astats=metadata=1:reset=1:length=0.4[out1]" -show_frames -show_versions -of xml=x=1:q=1 -noprivate | gzip > input_file.qctools.xml.gz
# QCTools report (no audio)
ffprobe -f lavfi -i "movie=input_file,signalstats=stat=tout+vrep+brng, cropdetect=reset=1:round=1, idet=half_life=1, split[a][b];[a]field=top[a1];[b]field=bottom,split[b1][b2];[a1][b1]psnr[c1];[c1][b2]ssim" -show_frames -show_versions -of xml=x=1:q=1 -noprivate | gzip > input_file.qctools.xml.gz
# Read/Extract EIA-608 Closed Captioning
ffprobe -f lavfi -i movie=input_file,readeia608 -show_entries frame=pkt_pts_time:frame_tags=lavfi.readeia608.0.line,lavfi.readeia608.0.cc,lavfi.readeia608.1.line,lavfi.readeia608.1.cc -of csv > input_file.csv
# Make a mandelbrot test pattern video
ffmpeg -f lavfi -i mandelbrot=size=1280x720:rate=25 -c:v libx264 -t 10 output_file
# Make a SMPTE bars test pattern video
ffmpeg -f lavfi -i smptebars=size=720x576:rate=25 -c:v prores -t 10 output_file
# Make a test pattern video
ffmpeg -f lavfi -i testsrc=size=720x576:rate=25 -c:v v210 -t 10 output_file
# Play HD SMPTE bars
ffplay -f lavfi -i smptehdbars=size=1920x1080
# Play VGA SMPTE bars
ffplay -f lavfi -i smptebars=size=640x480
# Generate a sine wave test audio file
ffmpeg -f lavfi -i "sine=frequency=1000:sample_rate=48000:duration=5" -c:a pcm_s16le output_file.wav
# SMPTE bars + Sine wave audio
ffmpeg -f lavfi -i "smptebars=size=720x576:rate=25" -f lavfi -i "sine=frequency=1000:sample_rate=48000" -c:a pcm_s16le -t 10 -c:v ffv1 output_file
# Make a broken file
ffmpeg -i input_file -bsf noise=1 -c copy output_file
# Conway's Game of Life
ffplay -f lavfi life=s=300x200:mold=10:r=60:ratio=0.1:death_color=#C83232:life_color=#00ff00,scale=1200:800
# Play video with OCR
ffplay input_file -vf "ocr,drawtext=fontfile=/Library/Fonts/Andale Mono.ttf:text=%{metadata\\\:lavfi.ocr.text}:fontcolor=white"
# Export OCR from video to screen
ffprobe -show_entries frame_tags=lavfi.ocr.text -f lavfi -i "movie=input_file,ocr"
# Compare Video Fingerprints
ffmpeg -i input_one -i input_two -filter_complex signature=detectmode=full:nb_inputs=2 -f null -
# Generate Video Fingerprint
ffmpeg -i input -vf signature=format=xml:filename="output.xml" -an -f null -
# Play an image sequence
ffplay -framerate 5 input_file_%06d.ext
# Split audio and video tracks
ffmpeg -i input_file -map 0:v:0 video_output_file -map 0:a:0 audio_output_file
# Merge audio and video tracks
ffmpeg -i video_file -i audio_file -map 0:v -map 1:a -c copy output_file
# Create ISO files for DVD access
ffmpeg -i input_file -aspect 4:3 -target ntsc-dvd output_file.mpg
# CSV with timecodes and YDIF
ffprobe -f lavfi -i movie=input_file,signalstats -show_entries frame=pkt_pts_time:frame_tags=lavfi.signalstats.YDIF -of csv
# Cover head switching noise
ffmpeg -i input_file -filter:v drawbox=w=iw:h=7:y=ih-h:t=max output_file
# Record and live-stream simultaneously
ffmpeg -re -i ${INPUTFILE} -map 0 -flags +global_header -vf scale="1280:-1,format=yuv420p" -pix_fmt yuv420p -level 3.1 -vsync passthrough -crf 26 -g 50 -bufsize 3500k -maxrate 1800k -c:v libx264 -c:a aac -b:a 128000 -r:a 44100 -ac 2 -t ${STREAMDURATION} -f tee "[movflags=+faststart]${TARGETFILE}|[f=flv]${STREAMTARGET}"
# View FFmpeg subprogram information
ffmpeg -h type=name
# Rip a CD with CD Paranoia
cdparanoia -L -B -O [Drive Offset] [Starting Track Number]-[Ending Track Number] output_file.wav
# Rip a CD with Cdda2wav
cdda2wav -L0 -t all -cuefile -paranoia paraopts=retries=200,readahead=600,minoverlap=sectors-per-request-1 -verbose-level all output.wav
# Compare two images
compare -metric ae image1.ext image2.ext null:
# Create thumbnails of images
mogrify -resize 80x80 -format jpg -quality 75 -path thumbs *.jpg
# Creates grid of images from text file
montage @list.txt -tile 6x12 -geometry +0+0 output_grid.jpg
# Get file signature data
convert -verbose input_file.ext | grep -i signature
# Removes exif metadata
mogrify -path ./stripped/ -strip *.jpg
# Resizes image to specific pixel width
convert input_file.ext -resize 750 output_file.ext
# Transcoding to/from FLAC
flac --best --keep-foreign-metadata --preserve-modtime --verify input.wav
flac --decode --keep-foreign-metadata --preserve-modtime --verify input.flac

scripts/get_recipe_list (new file, 1 line)

@@ -0,0 +1 @@
curl https://amiaopensource.github.io/ffmprovisr/ -s | grep -E '<h3>.*</h3>|<p><code>.*</code></p>' | sed 's/.*<code>\(.*\)<\/code>/\1/' | sed 's/.*<h3>\(.*\)<\/h3>/# \1/' | grep -v '\*\*\*' | sed -e 's/<[^>]*>//g'
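
Presumably recipes.txt above is regenerated by redirecting this one-liner's output; since the script is a single line with no shebang, invoking it through a shell explicitly is the safest assumption:

```
sh scripts/get_recipe_list > recipes.txt
```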