Compare commits

...

20 Commits

Author SHA1 Message Date
Katherine Frances Nagels
1ef6c3305b Concatenate files of different resolutions (#307) 2018-02-10 09:31:48 +01:00
Katherine Frances Nagels
9c4da4102a Merge pull request #306 from amiaopensource/style
unify coding style
2018-02-10 19:25:31 +13:00
Reto Kromer
c47a7a534f unify coding style 2018-02-10 06:53:13 +01:00
Reto Kromer
4d8fdc9f4e uniform coding style 2018-02-10 06:50:06 +01:00
Reto Kromer
ae590706b0 Merge pull request #305 (Code style: add enclosing quotation marks)
Code style: add enclosing quotation marks
2018-02-10 06:45:22 +01:00
kfrn
e84f0a9fb6 Code style: add enclosing quotation marks 2018-02-10 12:20:21 +13:00
Reto Kromer
e2850d38c0 simplify code (#304) 2018-02-09 21:22:31 +01:00
Reto Kromer
8927478efb Merge pull request #303 (uniform syntax) 2018-02-09 17:06:52 +01:00
Reto Kromer
3c815b1f3b uniform syntax 2018-02-09 16:20:26 +01:00
Ashley
58aa0549ff Merge pull request #301 from amiaopensource/css
replace tabs by spaces
2018-02-05 11:00:11 -05:00
Ashley
4a83b45e7e Merge pull request #302 from amiaopensource/html
fix HTML5
2018-02-05 10:59:59 -05:00
Reto Kromer
f6b44c56ce fix HTML5 2018-02-04 20:58:14 +01:00
Reto Kromer
8149aa163c replace tabs by spaces 2018-02-04 20:45:25 +01:00
Reto Kromer
94f935198f Merge pull request #300 (add alias) 2018-01-25 18:44:17 +01:00
Reto Kromer
c04c9ff12f add alias 2018-01-25 15:55:21 +01:00
Reto Kromer
64787edd4e Merge pull request #299 (Add explanation about input files for join_different_files recipe) 2018-01-25 10:41:06 +01:00
kfrn
f995e8b483 Improve explanation about input files for join_different_files recipe 2018-01-25 18:28:18 +13:00
Katherine Frances Nagels
dea85d1e47 Merge pull request #297 from amiaopensource/uniform_style
uniform style
2018-01-25 07:47:24 +13:00
Reto Kromer
e9fd3fd002 uniform style 2018-01-24 13:16:28 +01:00
Katherine Frances Nagels
debc510205 New recipe: concat files of different types (+ a few fixups) (#296) 2018-01-24 13:10:22 +01:00
5 changed files with 102 additions and 43 deletions

View File

@@ -111,6 +111,10 @@ h3 {
font-size: 1.5em;
}
h4 {
font-size: 1.2em;
}
.intro-lead {
font-family: 'Montserrat', sans-serif;
font-size: 1em;
@@ -250,17 +254,17 @@ nav .heading {
.hiding {
opacity: 0;
height: 0;
height: 0;
overflow: hidden;
}
input {
position: absolute;
left: -999em
position: absolute;
left: -999em;
}
input[type=checkbox]:checked + div {
opacity: 1;
opacity: 1;
height: auto;
overflow: hidden;
transition: opacity .5s linear, height .5s linear;

View File

@@ -90,9 +90,9 @@
<h3>Streaming vs. Saving</h3>
<p>FFplay allows you to stream created video and FFmpeg allows you to save video.</p>
<p>The following command creates and saves a five-second video of SMPTE bars:</p>
<code>ffmpeg -f lavfi -i smptebars=size=640x480 -t 5 output_file</code>
<p><code>ffmpeg -f lavfi -i smptebars=size=640x480 -t 5 output_file</code></p>
<p>This command plays and streams SMPTE bars but does not save them on the computer:</p>
<code>ffplay -f lavfi smptebars=size=640x480</code>
<p><code>ffplay -f lavfi smptebars=size=640x480</code></p>
<p>The main difference is small but significant: the <code>-i</code> flag is required for FFmpeg but not required for FFplay. Additionally, the FFmpeg command needs to have <code>-t 5</code> and <code>output_file</code> added to specify the length of time to record and the place to save the video.</p>
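<p>As a rough sketch of how the two tools complement each other (here <code>testsrc</code> and the output name are only placeholders, not part of the recipe above), you could preview a generated source with FFplay first, then run the matching FFmpeg command to save it:</p>
<p><code>ffplay -f lavfi testsrc=size=640x480</code></p>
<p><code>ffmpeg -f lavfi -i testsrc=size=640x480 -t 5 <i>output_file</i></code></p>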
<p class="link"></p>
</div>
@@ -113,16 +113,17 @@
<p>It is also possible to apply multiple filters to an input, which are sequenced together in the filtergraph. A chained set of filters is called a filter chain, and a filtergraph may include multiple filter chains. Filters in a filter chain are separated from each other by commas (<code>,</code>), and filter chains are separated from each other by semicolons (<code>;</code>). For example, take the <a href="#inverse-telecine">inverse telecine</a> command:</p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf "fieldmatch,yadif,decimate" <i>output_file</i></code></p>
<p>Here we have a filtergraph including one filter chain, which is made up of three video filters.</p>
<p>It is often prudent to enclose your filtergraph in quotation marks; this means that you can use spaces within the filtergraph. Using the inverse telecine example again, the following filter commands are all valid and equivalent:
<ul>
<li><code>-vf fieldmatch,yadif,decimate</code></li>
<li><code>-vf "fieldmatch,yadif,decimate"</code></li>
<li><code>-vf "fieldmatch, yadif, decimate"</code></li>
</ul>
but <code>-vf fieldmatch, yadif, decimate</code> is not valid.</p>
<p>It is often prudent to enclose your filtergraph in quotation marks; this means that you can use spaces within the filtergraph. Using the inverse telecine example again, the following filter commands are all valid and equivalent:</p>
<ul>
<li><code>-vf fieldmatch,yadif,decimate</code></li>
<li><code>-vf "fieldmatch,yadif,decimate"</code></li>
<li><code>-vf "fieldmatch, yadif, decimate"</code></li>
</ul>
<p>but <code>-vf fieldmatch, yadif, decimate</code> is not valid.</p>
<p>The ordering of the filters is significant. Video filters are applied in the order given, with the output of one filter being passed along as the input to the next filter in the chain. In the example above, <code>fieldmatch</code> reconstructs the original frames from the inverse telecined video, <code>yadif</code> deinterlaces (this is a failsafe in case any combed frames remain, for example if the source mixes telecined and real interlaced content), and <code>decimate</code> deletes duplicated frames. Clearly, it is not possible to delete duplicated frames before those frames are reconstructed.</p>
<h4>Notes</h4>
<ul>
<li><code>-vf</code> is an alias for <code>-filter:v</code></li>
<li>If the command involves more than one input or output, you must use the flag <code>-filter_complex</code> instead of <code>-vf</code> (see the sketch after this list).</li>
<li>Straight quotation marks ("like this") rather than curved quotation marks (“like this”) should be used.</li>
</ul>
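<p>As a rough illustration of that last point (the file names below are placeholders, and <code>hstack</code> is just one example of a filter that takes two inputs; it assumes both videos share the same height), a command with two inputs routes its filtergraph through <code>-filter_complex</code> and <code>-map</code> rather than <code>-vf</code>:</p>
<p><code>ffmpeg -i <i>input_one</i> -i <i>input_two</i> -filter_complex "[0:v][1:v]hstack[stacked]" -map "[stacked]" -map 0:a <i>output_file</i></code></p>
<p>Here <code>-map 0:a</code> simply carries over the audio stream(s) of the first input.</p>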
@@ -250,18 +251,19 @@
<input type="checkbox" id="transcode_h264">
<div class="hiding">
<h3>Transcode to H.264</h3>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -pix_fmt yuv420p -c:a copy <i>output_file</i></code></p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -pix_fmt yuv420p -c:a aac <i>output_file</i></code></p>
<p>This command takes an input file and transcodes it to H.264 with an .mp4 wrapper, re-encoding the audio as AAC. The libx264 codec defaults to a “medium” preset for compression quality and a CRF of 23. CRF stands for constant rate factor and determines the quality and file size of the resulting H.264 video. A low CRF means high quality and large file size; a high CRF means the opposite.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-c:v libx264</dt><dd>tells FFmpeg to encode the video stream as H.264</dd>
<dt>-pix_fmt yuv420p</dt><dd>libx264 will use a chroma subsampling scheme that is the closest match to that of the input. This can result in YC<sub>B</sub>C<sub>R</sub> 4:2:0, 4:2:2, or 4:4:4 chroma subsampling. QuickTime and most other non-FFmpeg based players can't decode H.264 files that are not 4:2:0. In order to allow the video to play in all players, you can specify 4:2:0 chroma subsampling.</dd>
<dt>-c:a copy</dt><dd>tells FFmpeg to copy the audio stream without re-encoding it</dd>
<dt>-c:a aac</dt><dd>encode audio as AAC.<br>
AAC is the codec most often used for audio streams within an .mp4 container.</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
<p>In order to use the same basic command to make a higher quality file, you can add some of these presets:</p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -pix_fmt yuv420p -preset veryslow -crf 18 -c:a copy <i>output_file</i></code></p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -pix_fmt yuv420p -preset veryslow -crf 18 -c:a aac <i>output_file</i></code></p>
<dl>
<dt>-preset <i>veryslow</i></dt><dd>This option tells FFmpeg to use the slowest preset possible for the best compression quality.<br>
Available presets, from slowest to fastest, are: <code>veryslow</code>, <code>slower</code>, <code>slow</code>, <code>medium</code>, <code>fast</code>, <code>faster</code>, <code>veryfast</code>, <code>superfast</code>, <code>ultrafast</code>.</dd>
@@ -332,7 +334,7 @@
<input type="checkbox" id="dvd_to_file">
<div class="hiding">
<h3>Convert DVD to H.264</h3>
<p><code>ffmpeg -i concat:<i>input_file1</i>\|<i>input_file2</i>\|<i>input_file3</i> -c:v libx264 -c:a copy <i>output_file</i>.mp4</code></p>
<p><code>ffmpeg -i concat:<i>input_file_1</i>\|<i>input_file_2</i>\|<i>input_file_3</i> -c:v libx264 -c:a aac <i>output_file</i>.mp4</code></p>
<p>This command allows you to create an H.264 file from a DVD source that is not copy-protected.</p>
<p>Before encoding, you'll need to establish which of the .VOB files on the DVD or .iso contain the content that you wish to encode. Inside the VIDEO_TS directory, you will see a series of files with names like VTS_01_0.VOB, VTS_01_1.VOB, etc. Some of the .VOB files will contain menus, special features, etc., so locate the ones that contain target content by playing them back in VLC.</p>
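<p>As a supplementary check (a rough sketch; <code>VTS_01_1.VOB</code> stands in for whichever file you are inspecting), ffprobe can report each file's duration, which often helps single out the main feature:</p>
<p><code>ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1 VTS_01_1.VOB</code></p>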
<dl>
@@ -341,17 +343,18 @@
<code>-i concat:VTS_01_1.VOB\|VTS_01_2.VOB\|VTS_01_3.VOB</code><br>
The backslash is simply an escape character for the pipe (<b>|</b>).</dd>
<dt>-c:v libx264</dt><dd>sets the video codec as H.264</dd>
<dt>-c:a copy</dt><dd>audio remains as-is (no re-encode)</dd>
<dt>-c:a aac</dt><dd>encode audio as AAC.<br>
AAC is the codec most often used for audio streams within an .mp4 container.</dd>
<dt><i>output_file.mp4</i></dt><dd>path and name of the output file</dd>
</dl>
<p>It's also possible to adjust the quality of your output by setting the <b>-crf</b> and <b>-preset</b> values:</p>
<p><code>ffmpeg -i concat:<i>input_file1</i>\|<i>input_file2</i>\|<i>input_file3</i> -c:v libx264 -crf 18 -preset veryslow -c:a copy <i>output_file</i>.mp4</code></p>
<p><code>ffmpeg -i concat:<i>input_file_1</i>\|<i>input_file_2</i>\|<i>input_file_3</i> -c:v libx264 -crf 18 -preset veryslow -c:a aac <i>output_file</i>.mp4</code></p>
<dl>
<dt>-crf 18</dt><dd>sets the constant rate factor to a visually lossless value. Libx264 defaults to a <a href="https://trac.ffmpeg.org/wiki/Encode/H.264#crf" target="_blank">crf of 23</a>, considered medium quality; a smaller CRF value produces a larger and higher quality video.</dd>
<dt>-preset veryslow</dt><dd>A slower preset will result in better compression and therefore a higher-quality file. The default is <b>medium</b>; slower presets are <b>slow</b>, <b>slower</b>, and <b>veryslow</b>.</dd>
</dl>
<p>Bear in mind that by default, libx264 will only encode a single video stream and a single audio stream, picking the best of the options available. To preserve all video and audio streams, add <b>-map</b> parameters:</p>
<p><code>ffmpeg -i concat:<i>input_file1</i>\|<i>input_file2</i> -map 0:v -map 0:a -c:v libx264 -c:a copy <i>output_file</i>.mp4</code></p>
<p><code>ffmpeg -i concat:<i>input_file_1</i>\|<i>input_file_2</i> -map 0:v -map 0:a -c:v libx264 -c:a aac <i>output_file</i>.mp4</code></p>
<dl>
<dt>-map 0:v</dt><dd>encodes all video streams</dd>
<dt>-map 0:a</dt><dd>encodes all audio streams</dd>
@@ -422,7 +425,7 @@
<input type="checkbox" id="append_mp3">
<div class="hiding">
<h3>Generate two access MP3s from an input file: one with appended audio (such as a copyright notice) and one unmodified</h3>
<p> <code>ffmpeg -i <i>input_file</i> -i <i>input_file_to_append</i> -filter_complex "[0:a:0]asplit=2[a][b];[b]afifo[bb];[1:a:0][bb]concat=n=2:v=0:a=1[concatout]" -map "[a]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 <i>output_file.mp3</i> -map "[concatout]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 <i>output_file_appended.mp3</i></code></p>
<p><code>ffmpeg -i <i>input_file</i> -i <i>input_file_to_append</i> -filter_complex "[0:a:0]asplit=2[a][b];[b]afifo[bb];[1:a:0][bb]concat=n=2:v=0:a=1[concatout]" -map "[a]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 <i>output_file.mp3</i> -map "[concatout]" -codec:a libmp3lame -dither_method modified_e_weighted -qscale:a 2 <i>output_file_appended.mp3</i></code></p>
<p>This script allows you to generate two derivative audio files from a master while appending audio from a separate file (for example a copyright or institutional notice) to one of them.</p>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
@@ -880,8 +883,8 @@
<div class="well">
<h2 id="join-trim">Join, trim, or excerpt a video</h2>
<!-- Join files together -->
<label class="recipe" for="join_files">Join (concatenate) two or more files into a single file</label>
<!-- Join files of the same type together -->
<label class="recipe" for="join_files">Join (concatenate) two or more files of the same type</label>
<input type="checkbox" id="join_files">
<div class="hiding">
<h3>Join files together</h3>
@@ -905,7 +908,60 @@
<p>For more information, see the <a href="https://trac.ffmpeg.org/wiki/Concatenate" target="_blank">FFmpeg wiki page on concatenating files</a>.</p>
<p class="link"></p>
</div>
<!-- ends Join files together -->
<!-- ends Join files of the same type together -->
<!-- Join files of different types together -->
<label class="recipe" for="join_different_files">Join (concatenate) two or more files of different types</label>
<input type="checkbox" id="join_different_files">
<div class="hiding">
<h3>Join files of different types together</h3>
<p><code>ffmpeg -i input1.avi -i input2.mp4 -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[video_out][audio_out]" -map "[video_out]" -map "[audio_out]" <i>output_file</i></code></p>
<p>This command takes two or more files of different types and joins them together to make a single file.</p>
<p>The input files may differ in many respects: container, codec, chroma subsampling scheme, framerate, etc. However, the above command only works properly if the files to be combined have the same dimensions (e.g., 720x576). Also note that if the input files have different framerates, then the output file will be of variable framerate.</p>
<p>Some aspects of the input files will be normalised: for example, if an input file contains a video track and an audio track that do not have exactly the same duration, the shorter one will be padded. In the case of a shorter video track, the last frame will be repeated in order to cover the missing video; in the case of a shorter audio track, the audio stream will be padded with silence.</p>
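<p>As a quick pre-flight check (a rough sketch, not part of the recipe itself; <i>input_file</i> is a placeholder), ffprobe can report the dimensions and framerate of each input so you can confirm they match before concatenating:</p>
<p><code>ffprobe -v error -select_streams v:0 -show_entries stream=width,height,r_frame_rate -of csv=p=0 <i>input_file</i></code></p>
<p>Run this on each input; the output is a line such as <code>720,576,25/1</code> (width, height, framerate).</p>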
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input1.ext</i></dt><dd>path, name and extension of the first input file</dd>
<dt>-i <i>input2.ext</i></dt><dd>path, name and extension of the second input file</dd>
<dt>-filter_complex</dt><dd>states that a complex filtergraph will be used</dd>
<dt>"</dt><dd>quotation mark to start filtergraph</dd>
<dt>[0:v:0][0:a:0]</dt><dd>selects the first video stream and first audio stream from the first input.<br>
Each reference to a specific stream is enclosed in square brackets. In the first stream reference, <code>0:v:0</code>, the first zero refers to the first input file, <code>v</code> means video stream, and the second zero indicates that it is the <i>first</i> video stream in the file that should be selected. Likewise, <code>0:a:0</code> means the first audio stream in the first input file.<br>
As demonstrated above, ffmpeg uses zero-indexing: <code>0</code> means the first input/stream/etc, <code>1</code> means the second input/stream/etc, and <code>4</code> would mean the fifth input/stream/etc.</dd>
<dt>[1:v:0][1:a:0]</dt><dd>As described above, this means select the first video and audio streams from the second input file.</dd>
<dt>concat=</dt><dd>starts the <code>concat</code> filter</dd>
<dt>n=2</dt><dd>states that there are two input files</dd>
<dt>:</dt><dd>separator</dd>
<dt>v=1</dt><dd>sets the number of output video streams.<br>
Note that this must be equal to the number of video streams selected from each segment.</dd>
<dt>:</dt><dd>separator</dd>
<dt>a=1</dt><dd>sets the number of output audio streams.<br>
Note that this must be equal to the number of audio streams selected from each segment.</dd>
<dt>[video_out]</dt><dd>name of the concatenated output video stream. This is a variable name which you define, so you could call it something different, like “vOut”, “outv”, or “banana”.</dd>
<dt>[audio_out]</dt><dd>name of the concatenated output audio stream. Again, this is a variable name which you define.</dd>
<dt>"</dt><dd>quotation mark to end filtergraph</dd>
<dt>-map "[video_out]"</dt><dd>map the concatenated video stream into the output file by referencing the variable defined above</dd>
<dt>-map "[audio_out]"</dt><dd>map the concatenated audio stream into the output file by referencing the variable defined above</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
<p>If no characteristics of the output files are specified, ffmpeg will use the default encodings associated with the given output file type. To specify the characteristics of the output stream(s), add flags after each <code>-map "[out]"</code> part of the command.</p>
<p>For example, to ensure that the video stream of the output file is visually lossless H.264 with a 4:2:0 chroma subsampling scheme, the command above could be amended to include the following:<br>
<code>-map "[video_out]" -c:v libx264 -pix_fmt yuv420p -preset veryslow -crf 18</code></p>
<p>Likewise, to encode the output audio stream as mp3, the command could include the following:<br>
<code>-map "[audio_out]" -c:a libmp3lame -dither_method modified_e_weighted -qscale:a 2</code></p>
<h4>Variation: concatenating files of different resolutions</h4>
<p>To concatenate files of different resolutions, you need to resize the videos to have matching resolutions prior to concatenation. The most basic way to do this is by using a scale filter and giving the dimensions of the file you wish to match:</p>
<p><code>-vf scale=1920:1080:flags=lanczos</code></p>
<p>(The Lanczos scaling algorithm is recommended, as it is slower but better than the default bilinear algorithm).</p>
<p>The rescaling should be applied just before the point where the streams to be used in the output file are listed. Select the stream you want to rescale, apply the filter, and assign that to a variable name (<code>rescaled_video</code> in the below example). Then you use this variable name in the list of streams to be concatenated.</p>
<p><code>ffmpeg -i input1.avi -i input2.mp4 -filter_complex "[0:v:0] scale=1920:1080:flags=lanczos [rescaled_video], [rescaled_video] [0:a:0] [1:v:0] [1:a:0] concat=n=2:v=1:a=1 [video_out] [audio_out]" -map "[video_out]" -map "[audio_out]" <i>output_file</i></code></p>
<p>However, this will only have the desired visual output if the inputs have the same aspect ratio. If you wish to concatenate an SD and an HD file, you will also need to pillarbox the SD file while upscaling. (See the <a href="https://amiaopensource.github.io/ffmprovisr/#SD_HD_2">Convert 4:3 to pillarboxed HD</a> command). The full command would look like this:</p>
<p><code>ffmpeg -i input1.avi -i input2.mp4 -filter_complex "[0:v:0] scale=1440:1080:flags=lanczos, pad=1920:1080:(ow-iw)/2:(oh-ih)/2 [to_hd_video], [to_hd_video] [0:a:0] [1:v:0] [1:a:0] concat=n=2:v=1:a=1 [video_out] [audio_out]" -map "[video_out]" -map "[audio_out]" <i>output_file</i></code></p>
<p>Here, the first input is an SD file that needs to be upscaled to match the second input, which is 1920x1080. The scale filter enlarges the SD input to the height of the HD frame, keeping the 4:3 aspect ratio; then, the video is pillarboxed within a 1920x1080 frame.</p>
<p>For more information, see the <a href="https://trac.ffmpeg.org/wiki/Concatenate#differentcodec" target="_blank">FFmpeg wiki page on concatenating files of different types</a>.</p>
<p class="link"></p>
</div>
<!-- ends Join files of different types together -->
<!-- Split file into segments -->
<label class="recipe" for="segment_file">Split one file into several smaller segments</label>
@@ -951,7 +1007,7 @@
<dt>-ss 00:02:00</dt><dd>sets in point at 00:02:00</dd>
<dt>-to 00:55:00</dt><dd>sets out point at 00:55:00</dd>
<dt>-c copy</dt><dd>use stream copy mode (no re-encoding)<br>
<dt>-map 0</dt><dd>tells FFmpeg to map all streams of the input to the output.</dd>
<dt>-map 0</dt><dd>tells FFmpeg to map all streams of the input to the output.<br>
<b>Note:</b> watch out when using <code>-ss</code> with <code>-c copy</code> if the source is encoded with an interframe codec (e.g., H.264). Since FFmpeg must split on i-frames, it will seek to the nearest i-frame to begin the stream copy (a frame-accurate alternative is sketched after this list).</dd>
<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
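<p>If exact in and out points matter more than speed, one possible alternative (a rough sketch using the same placeholder names and timings as above) is to re-encode instead of stream-copying, so the excerpt can start on any frame rather than the nearest i-frame:</p>
<p><code>ffmpeg -i <i>input_file</i> -ss 00:02:00 -to 00:55:00 -c:v libx264 -c:a aac <i>output_file</i></code></p>
<p>Re-encoding is slower and introduces generation loss, so it is only worth it when frame-accurate cut points are required.</p>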
@@ -1036,6 +1092,7 @@
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
<dt>-c:v libx264</dt><dd>encodes video stream with libx264 (h264)</dd>
<dt>-filter:v</dt><dd>a video filter will be used</dd>
<dt>"</dt><dd>quotation mark to start filtergraph</dd>
<dt>yadif</dt><dd>deinterlacing filter (yet another deinterlacing filter)<br>
By default, <a href="https://ffmpeg.org/ffmpeg-filters.html#yadif-1" target="_blank">yadif</a> will output one frame for each frame. Outputting one frame for each <i>field</i> (thereby doubling the frame rate) with <code>yadif=1</code> may produce visually better results (see the sketch after this list).</dd>
@@ -1071,7 +1128,7 @@
<dt>"</dt><dd>end of filtergraph</dd>
<dt><i>output file</i></dt><dd>path, name and extension of the output file</dd>
</dl>
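<p>As noted above, <code>yadif=1</code> outputs one frame per <i>field</i>, doubling the frame rate. A rough sketch of that variant (using the same placeholder file names) would be:</p>
<p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf "yadif=1,format=yuv420p" <i>output_file</i></code></p>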
<p> <code>"yadif,format=yuv420p"</code> is an FFmpeg <a href="https://trac.ffmpeg.org/wiki/FilteringGuide#FiltergraphChainFilterrelationship" target="_blank">filtergraph</a>. Here the filtergraph is made up of one filter chain, which is itself made up of the two filters (separated by the comma).<br>
<p><code>"yadif,format=yuv420p"</code> is an FFmpeg <a href="https://trac.ffmpeg.org/wiki/FilteringGuide#FiltergraphChainFilterrelationship" target="_blank">filtergraph</a>. Here the filtergraph is made up of one filter chain, which is itself made up of the two filters (separated by the comma).<br>
The enclosing quote marks are necessary when you use spaces within the filtergraph, e.g. <code>-vf "yadif, format=yuv420p"</code>, and are included above as an example of good practice.</p>
<p><b>Note:</b> FFmpeg includes several deinterlacers apart from <a href="https://ffmpeg.org/ffmpeg-filters.html#yadif-1" target="_blank">yadif</a>: <a href="https://ffmpeg.org/ffmpeg-filters.html#bwdif" target="_blank">bwdif</a>, <a href="https://ffmpeg.org/ffmpeg-filters.html#w3fdif" target="_blank">w3fdif</a>, <a href="https://ffmpeg.org/ffmpeg-filters.html#kerndeint" target="_blank">kerndeint</a>, and <a href="https://ffmpeg.org/ffmpeg-filters.html#nnedi" target="_blank">nnedi</a>.</p>
<p>For more H.264 encoding options, see the latter section of the <a href="./index.html#transcode_h264">encode H.264 command</a>.</p>

View File

@@ -1,12 +1,12 @@
#!/usr/bin/env bash
SCRIPT=$(basename "${0}")
VERSION='2017-07-08'
AUTHOR='ffmprovisr'
RED='\033[1;31m'
BLUE='\033[1;34m'
NC='\033[0m'
VERSION="2018-02-10"
AUTHOR="ffmprovisr"
RED="\033[1;31m"
BLUE="\033[1;34m"
NC="\033[0m"
if [[ "${OSTYPE}" = "cygwin" ]] || [ ! $(which diff) ]; then
if [[ "${OSTYPE}" = "cygwin" ]] || [[ ! "$(which diff)" ]]; then
echo -e "${RED}Error: 'diff' is not installed by default. Please install 'diffutils' from Cygwin.${NC}"
exit 1
fi
@@ -67,9 +67,8 @@ old_file=$(grep -v '^#' "${input_hash}")
tmp_file=$(grep -v '^#' "${md5_tmp}")
if [[ "${old_file}" = "${tmp_file}" ]]; then
echo -e "${BLUE}'$(basename "${input_file}")' matches '$(basename "${input_hash}")'${NC}"
rm "${md5_tmp}"
else
echo -e "${RED}The following differences were detected between '$(basename "${input_file}")' and '$(basename "${input_hash}")':${NC}"
diff "${input_hash}" "${md5_tmp}"
rm "${md5_tmp}"
fi
rm "${md5_tmp}"

View File

@@ -1,12 +1,12 @@
#!/usr/bin/env bash
SCRIPT=$(basename "${0}")
VERSION='2017-07-08'
AUTHOR='ffmprovisr'
RED='\033[1;31m'
BLUE='\033[1;34m'
NC='\033[0m'
VERSION="2018-02-10"
AUTHOR="ffmprovisr"
RED="\033[1;31m"
BLUE="\033[1;34m"
NC="\033[0m"
if [[ "${OSTYPE}" = "cygwin" ]] || [ ! $(which diff) ]; then
if [[ "${OSTYPE}" = "cygwin" ]] || [[ ! "$(which diff)" ]]; then
echo -e "${RED}Error: 'diff' is not installed by default. Please install 'diffutils' from Cygwin.${NC}"
exit 1
fi
@@ -64,9 +64,8 @@ old_file=$(grep -v '^#' "${input_hash}")
tmp_file=$(grep -v '^#' "${md5_tmp}")
if [[ "${old_file}" = "${tmp_file}" ]]; then
echo -e "${BLUE}'$(basename "${input_file}")' matches '$(basename "${input_hash}")'${NC}"
rm "${md5_tmp}"
else
echo -e "${RED}The following differences were detected between '$(basename "${input_file}")' and '$(basename "${input_hash}")':${NC}"
diff "${input_hash}" "${md5_tmp}"
rm "${md5_tmp}"
fi
rm "${md5_tmp}"

View File

@@ -10,7 +10,7 @@ if [[ "$(uname -s)" = "Darwin" ]] ; then
else
ffmprovisr_path=$(find /usr/local/Cellar/ffmprovisr -iname 'index.html' | sort -M | tail -n1)
fi
if [ -n "${default_browser}" ] ; then
if [[ -n "${default_browser}" ]] ; then
open -b "${default_browser}" "${ffmprovisr_path}"
else
open "${ffmprovisr_path}"