mirror of https://github.com/amiaopensource/ffmprovisr.git
synced 2024-11-10 07:27:23 +01:00
commit dbe9e1a049
index.html (36 changed lines)
@@ -75,8 +75,6 @@
<!-- ends MKV to MP4 -->
</div>
<!-- ends well -->
<div class="well">
<h4>Change codec (transcode)</h4>
@@ -100,7 +98,7 @@
<li>2 = ProRes 422 (Standard)</li>
<li>3 = ProRes 422 (HQ)</li>
</ul></dd>
-<dt>-vf yadif</dt><dd>Runs a deinterlacing video filter (yet another deinterlacing filter) on the new file</dd>
+<dt>-vf yadif</dt><dd>Runs a deinterlacing video filter (yet another deinterlacing filter) on the new file. <code>-vf</code> is an alias for <code>-filter:v</code>.</dd>
<dt>-c:a pcm_s16le</dt><dd>Tells ffmpeg to encode the audio stream in 16-bit linear PCM</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file<br>
The extension for the QuickTime container is <code>.mov</code>.</dd>
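For reference, the options in this hunk describe a ProRes transcode. Assembled into one command they take roughly this shape; the <code>-c:v prores</code> encoder and the <code>-profile:v</code> flag are inferred from the profile list above (3 = HQ is used purely as an illustration), and the filenames are placeholders:

    ffmpeg -i input_file -c:v prores -profile:v 3 -vf yadif -c:a pcm_s16le output_file.mov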
@@ -158,11 +156,11 @@
<div class="well">
<h3>H.264 from DCP</h3>
<p><code>ffmpeg -i <i>input_video_file</i>.mxf -i <i>input_audio_file</i>.mxf -c:v <i>libx264</i> -pix_fmt <i>yuv420p</i> -c:a <i>aac output_file.mp4</i></code></p>
-<p>This will transcode mxf wrapped video and audio files to an H.264 encoded .mp4 file. Please note this only works for unencrypted, single reel DCPs.</p>
+<p>This will transcode MXF wrapped video and audio files to an H.264 encoded MP4 file. Please note this only works for unencrypted, single reel DCPs.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
-<dt>-i <i>input_video_file</i></dt><dd>path and name of the video input file. This extension must be .mxf</dd>
-<dt>-i <i>input_audio_file</i></dt><dd>path and name of the audio input file. This extension must be .mxf</dd>
+<dt>-i <i>input_video_file</i></dt><dd>path and name of the video input file. This extension must be <code>.mxf</code></dd>
+<dt>-i <i>input_audio_file</i></dt><dd>path and name of the audio input file. This extension must be <code>.mxf</code></dd>
<dt>-c:v <i>libx264</i></dt><dd>transcodes video to H.264</dd>
<dt>-pix_fmt <i>yuv420p</i></dt><dd>sets pixel format to yuv420p for greater compatibility with media players</dd>
<dt>-c:a aac</dt><dd>re-encodes using the AAC audio codec<br>
@@ -203,7 +201,7 @@
<dt>-slices 16</dt><dd>Each frame is split into 16 slices. 16 is a good trade-off between filesize and encoding time. <a href="http://ndsr.nycdigital.org/diving-in-head-first/" target="_blank">[more]</a></dd>
<dt>-c:a copy</dt><dd>copies all mapped audio streams.</dd>
<dt><i>output_file</i>.mkv</dt><dd>path and name of the output file. Use the <code>.mkv</code> extension to save your file in a Matroska container. Optionally, choose a different extension if you want a different container, such as <code>.mov</code> or <code>.avi</code>.</dd>
-<dt>-f framemd5</dt><dd> Decodes video with the framemd5 muxer in order to generate md5 checksums for every frame of your input file. This allows you to verify losslessness when compared against the framemd5s of the output file.</dd>
+<dt>-f framemd5</dt><dd> Decodes video with the framemd5 muxer in order to generate MD5 checksums for every frame of your input file. This allows you to verify losslessness when compared against the framemd5s of the output file.</dd>
<dt>-an</dt><dd>ignores the audio stream when creating framemd5 (audio no)</dd>
<dt><i>framemd5_output_file</i></dt><dd>path, name and extension of the framemd5 file.</dd>
</dl>
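These options belong to a lossless FFV1/Matroska recipe that also writes a framemd5 of the input. A sketch of the whole command, where the FFV1 encoder options preceding this excerpt (-c:v ffv1 -level 3 -g 1 -slicecrc 1) are assumptions and the rest comes from the hunk:

    ffmpeg -i input_file -c:v ffv1 -level 3 -g 1 -slicecrc 1 -slices 16 -c:a copy output_file.mkv -f framemd5 -an framemd5_output_file

The second output (-f framemd5 -an framemd5_output_file) writes per-frame MD5 checksums of the decoded input in the same run, so the lossless transcode can later be verified against it.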
@@ -230,7 +228,7 @@
<dt>-pattern_type glob</dt><dd>tells ffmpeg that the following mapping should "interpret like a <a href="https://en.wikipedia.org/wiki/Glob_%28programming%29" target="_blank">glob</a>" (a "global command" function that relies on the * as a wildcard and finds everything that matches)</dd>
<dt>-i <i>"input_image_*.jpg"</i></dt><dd>maps all files in the directory that start with input_image_, for example input_image_001.jpg, input_image_002.jpg, input_image_003.jpg... etc.<br>
(The quotation marks are necessary for the above “glob” pattern!)</dd>
-<dt>-vf scale=250x250</dt><dd>filter the video to scale it to 250x250; -vf is an alias for -filter:v</dd>
+<dt>-vf scale=250x250</dt><dd>filter the video to scale it to 250x250; <code>-vf</code> is an alias for <code>-filter:v</code></dd>
<dt><i>output_file.gif</i></dt><dd>path and name of the output file</dd>
</dl>
<p class="link"></p>
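Assembled from the options above, a minimal sketch of the GIF-from-stills command looks like the following; the scale filter is written here with its usual width:height colon syntax, and the glob pattern requires an ffmpeg build with glob support:

    ffmpeg -pattern_type glob -i "input_image_*.jpg" -vf scale=250:250 output_file.gif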
@@ -262,7 +260,7 @@
<p>It’s also possible to adjust the quality of your output by setting the <b>-crf</b> and <b>-preset</b> values:</p>
<p><code>ffmpeg -i concat:<i>input_file1</i>\|<i>input_file2</i>\|<i>input_file3</i> -c:v libx264 -crf 18 -preset veryslow -c:a copy <i>output_file</i>.mp4</code></p>
<dl>
-<dt>-crf 18</dt><dd>sets the constant rate factor to a visually lossless value. Libx264 defaults to a <a href="https://trac.ffmpeg.org/wiki/Encode/H.264#crf" target="_blank">crf of 23</a>, considered medium quality; a smaller crf value produces a larger and higher quality video.</dd>
+<dt>-crf 18</dt><dd>sets the constant rate factor to a visually lossless value. Libx264 defaults to a <a href="https://trac.ffmpeg.org/wiki/Encode/H.264#crf" target="_blank">crf of 23</a>, considered medium quality; a smaller CRF value produces a larger and higher quality video.</dd>
<dt>-preset veryslow</dt><dd>A slower preset will result in better compression and therefore a higher-quality file. The default is <b>medium</b>; slower presets are <b>slow</b>, <b>slower</b>, and <b>veryslow</b>.</dd>
</dl>
<p>Bear in mind that by default, libx264 will only encode a single video stream and a single audio stream, picking the ‘best’ of the options available. To preserve all video and audio streams, add <b>-map</b> parameters:</p>
@@ -366,8 +364,6 @@
<!-- ends WAV to AAC/MP4 -->
</div>
<!-- ends well -->
<div class="well">
<h4>Change formats</h4>
@@ -1036,7 +1032,7 @@
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-aspect <i>4:3</i></dt><dd>declares the aspect ratio of the resulting video file. You can also use 16:9.</dd>
<dt>-target <i>ntsc-dvd</i></dt><dd>specifies the region for your DVD. This could be also pal-dvd.</dd>
-<dt><i>output_file</i>.mpg</dt><dd>path and name of the output file. The extension must be .mpg</dd>
+<dt><i>output_file</i>.mpg</dt><dd>path and name of the output file. The extension must be <code>.mpg</code></dd>
</dl>
<p class="link"></p>
</div>
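Put together, the options documented above form a DVD-compliant MPEG-2 command of this shape (the filenames are placeholders):

    ffmpeg -i input_file -aspect 4:3 -target ntsc-dvd output_file.mpg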
@@ -1093,10 +1089,10 @@
<dt>[b]afifo[bb];</dt><dd>this buffers the stream "b" to help prevent dropped samples and renames stream to "bb"</dd>
<dt>[1:a:0][bb]concat=n=2:v=0:a=1[concatout]</dt><dd><code>concat</code> is used to join files. <code>n=2</code> tells the filter there are two inputs. <code>v=0:a=1</code> Tells the filter there are 0 video outputs and 1 audio output. This command appends the audio from the second input to the beginning of stream "bb" and names the output "concatout"</dd>
<dt>-map "[a]"</dt><dd>this maps the unmodified audio stream to the first output</dd>
-<dt>-codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2</dt><dd>sets up mp3 options (using constant quality)</dd>
+<dt>-codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2</dt><dd>sets up MP3 options (using constant quality)</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file (unmodified)</dd>
<dt>-map "[concatout]"</dt><dd>this maps the modified stream to the second output</dd>
-<dt>-codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2</dt><dd>sets up mp3 options (using constant quality)</dd>
+<dt>-codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2</dt><dd>sets up MP3 options (using constant quality)</dd>
<dt><i>output_file_appended</i></dt><dd>path, name and extension of the output file (with appended notice)</dd>
</dl>
<p class="link"></p>
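The filtergraph in this hunk starts partway through. A sketch of the full command, assuming a leading [0:a:0]asplit=2[a][b] step that produces the "a" and "b" streams referenced above and a second input holding the notice to append (both assumptions, not shown in the diff):

    ffmpeg -i input_file -i input_file_to_append -filter_complex "[0:a:0]asplit=2[a][b];[b]afifo[bb];[1:a:0][bb]concat=n=2:v=0:a=1[concatout]" -map "[a]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 output_file -map "[concatout]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 output_file_appended

The first -map writes the original audio untouched; the second writes the version with the appended notice, both encoded with the same libmp3lame settings.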
@@ -1107,7 +1103,6 @@
<!-- ends append notice to access mp3 -->
</div>
<div class="well">
<h4>Normalize/Equalize Audio</h4>
@@ -1180,7 +1175,7 @@
<dt>measured_LRA=<i>input_lra</i></dt><dd>use the 'input_lra' value (loudness range) from the first pass in place of input_lra</dd>
<dt>measured_thresh=<i>input_thresh</i></dt><dd>use the 'input_thresh' value (threshold) from the first pass in place of input_thresh</dd>
<dt>offset=<i>target_offset</i></dt><dd>use the 'target_offset' value (offset) from the first pass in place of target_offset</dd>
-<dt>linear=true</i></dt><dd>tells loudnorm to use linear normalization</dd>
+<dt>linear=true</dt><dd>tells loudnorm to use linear normalization</dd>
<dt><i>output_file</i></dt><dd>path, name and extension for output file</dd>
</dl>
<p class="link"></p>
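These options belong to the second, linear pass of a two-pass loudnorm run. A sketch of that pass, where input_i, input_tp, input_lra, input_thresh and target_offset stand for the values measured and printed by the first pass (the measured_I and measured_TP keys are assumptions; the other keys appear in the hunk):

    ffmpeg -i input_file -af loudnorm=measured_I=input_i:measured_TP=input_tp:measured_LRA=input_lra:measured_thresh=input_thresh:offset=target_offset:linear=true output_file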
@@ -1211,8 +1206,8 @@
</div>
</div>
<!-- ends RIAA equalization -->
</div><!-- closes the well -->
</div>
<div class="well">
<h4>Preservation</h4>
@@ -2123,8 +2118,8 @@ e.g.: <code>ffmpeg -f concat -safe 0 -i mylist.txt -c copy <i>output_file</i></c
</div>
</div>
<!-- ends Generate Video Fingerprint -->
</div><!-- closes the well -->
</div>
<div class="well">
<h4>Repair</h4>
@@ -2150,7 +2145,8 @@ e.g.: <code>ffmpeg -f concat -safe 0 -i mylist.txt -c copy <i>output_file</i></c
</div>
</div>
<!-- ends Fix A/V async 1 -->
</div><!-- closes the well -->
</div>
<!-- sample example -->