Mirror of https://github.com/amiaopensource/ffmprovisr.git (synced 2025-10-26 06:32:06 +01:00)

Compare commits: v2017-11-0 ... v2017-12-1 (29 commits)
Commit SHAs (author and date fields were not captured in the mirror view):

1fec9b21c8, 42189e5b94, 929f92a52a, 02e2f11718, 10636e24e2, 61b890d31c,
85a79d2eb5, b5ec56174a, f0e0cf8ed3, 5c1c336d77, d71793583e, 6705bdf41d,
25e779a59f, ba0852a957, 88024c040f, 3d9b9edf1c, c0326ad7d9, 0cb6827b39,
0d68614c04, 1e86b70ba4, ced142a215, bf301daa71, 278ac2baae, 10b8e4c941,
1d1b3e4eac, 5a3e437d76, 75a7aa1299, cf13529485, 7c03ae2f80
css/css.css (12 changed lines)

@@ -44,6 +44,9 @@ html, body {
     "content"
     "footer";
 }
+code {
+  word-break: break-all;
+}
 }
 
 @media only screen and (min-width: 1000px) {
@@ -118,8 +121,8 @@ code {
   color: #c7254e;
   background-color: #f9f2f4;
   border-radius: 4px;
-  word-break: break-all;
   word-wrap: break-word;
+  max-width: 800px;
   white-space: normal;
   display: inline-block;
 }
@@ -150,6 +153,13 @@ img {
   text-align: center;
 }
 
+.sample-image-small {
+  margin: 0 auto;
+  margin-bottom: 18px;
+  max-width: 250px;
+  text-align: center;
+}
+
 div {
   font-family: 'Merriweather', serif;
   color: white;
New binary files (contents not shown):

  img/crop_example_aftercrop1.png (245 KiB)
  img/crop_example_aftercrop2.png (167 KiB)
  img/crop_example_aftercrop3.png (146 KiB)
  img/crop_example_orig.png (436 KiB)
  img/life.gif (574 KiB)
index.html (209 changed lines)
@@ -83,6 +83,20 @@
 </div>
 <!-- End Basic structure of an FFmpeg command -->
 
+<!-- Streaming vs. Saving -->
+<label class="recipe" for="streaming-saving">Streaming vs. Saving</label>
+<input type="checkbox" id="streaming-saving">
+<div class="hiding">
+<h3>Streaming vs. Saving</h3>
+<p>FFplay allows you to stream created video and FFmpeg allows you to save video.</p>
+<p>The following command creates and saves a 5-second video of SMPTE bars:</p>
+<code>ffmpeg -f lavfi -i smptebars=size=640x480 -t 5 output_file</code>
+<p>This command plays and streams SMPTE bars but does not save them on the computer:</p>
+<code>ffplay -f lavfi smptebars=size=640x480</code>
+<p>The main difference is small but significant: the <code>-i</code> flag is required for FFmpeg but not for FFplay. Additionally, the FFmpeg command needs <code>-t 5</code> and <code>output_file</code> added to specify the length of time to record and the place to save the video.</p>
+<p class="link"></p>
+</div>
+<!-- End Streaming vs. Saving -->
 </div>
 <div class="well">
 <h2 id="concepts">Learn about more advanced FFmpeg concepts</h2>
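The distinction the new recipe draws can be sketched as two argument lists; the commands mirror the ones shown above, and the comparison itself is illustrative, not part of the commit.

```python
# Sketch: the saving (ffmpeg) and streaming (ffplay) commands as argv lists.
# Only the ffmpeg variant needs -i, a duration (-t 5), and an output path.
save_cmd = ["ffmpeg", "-f", "lavfi", "-i", "smptebars=size=640x480",
            "-t", "5", "output_file"]
play_cmd = ["ffplay", "-f", "lavfi", "smptebars=size=640x480"]
```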
@@ -99,7 +113,7 @@
 <p>It is also possible to apply multiple filters to an input, which are sequenced together in the filtergraph. A chained set of filters is called a filter chain, and a filtergraph may include multiple filter chains. Filters in a filterchain are separated from each other by commas (<code>,</code>), and filterchains are separated from each other by semicolons (<code>;</code>). For example, take the <a href="#inverse-telecine">inverse telecine</a> command:</p>
 <p><code>ffmpeg -i <i>input_file</i> -c:v libx264 -vf "fieldmatch,yadif,decimate" <i>output_file</i></code></p>
 <p>Here we have a filtergraph including one filter chain, which is made up of three video filters.</p>
 <p>It is often prudent to enclose your filtergraph in quotation marks; this means that you can use spaces within the filtergraph. Using the inverse telecine example again, the following filter commands are all valid and equivalent:
 <ul>
 <li><code>-vf fieldmatch,yadif,decimate</code></li>
 <li><code>-vf "fieldmatch,yadif,decimate"</code></li>
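The comma/semicolon convention described in that paragraph can be shown with a small helper (the helper is hypothetical, not part of ffmprovisr or FFmpeg):

```python
# Minimal sketch: assemble a filtergraph string from filter chains.
# Filters within a chain are joined by commas; chains are joined by semicolons.
def build_filtergraph(chains):
    return ";".join(",".join(chain) for chain in chains)

# The inverse telecine example is a single chain of three video filters:
itc = build_filtergraph([["fieldmatch", "yadif", "decimate"]])
```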
@@ -242,7 +256,7 @@
 <dt>ffmpeg</dt><dd>starts the command</dd>
 <dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
 <dt>-c:v libx264</dt><dd>tells FFmpeg to encode the video stream as H.264</dd>
-<dt>-pix_fmt yuv420p</dt><dd> libx264 will use a chroma subsampling scheme that is the closest match to that of the input. This can result in Y′C<sub>B</sub>C<sub>R</sub> 4:2:0, 4:2:2, or 4:4:4 chroma subsampling. QuickTime and most other non-FFmpeg based players can’t decode H.264 files that are not 4:2:0. In order to allow the video to play in all players, you can specify 4:2:0 chroma subsampling.</dd>
+<dt>-pix_fmt yuv420p</dt><dd>libx264 will use a chroma subsampling scheme that is the closest match to that of the input. This can result in Y′C<sub>B</sub>C<sub>R</sub> 4:2:0, 4:2:2, or 4:4:4 chroma subsampling. QuickTime and most other non-FFmpeg based players can’t decode H.264 files that are not 4:2:0. In order to allow the video to play in all players, you can specify 4:2:0 chroma subsampling.</dd>
 <dt>-c:a copy</dt><dd>tells FFmpeg to copy the audio stream without re-encoding it</dd>
 <dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
 </dl>
@@ -305,7 +319,7 @@
 <dt>-slices 16</dt><dd>Each frame is split into 16 slices. 16 is a good trade-off between filesize and encoding time.</dd>
 <dt>-c:a copy</dt><dd>copies all mapped audio streams.</dd>
 <dt><i>output_file</i>.mkv</dt><dd>path and name of the output file. Use the <code>.mkv</code> extension to save your file in a Matroska container. Optionally, choose a different extension if you want a different container, such as <code>.mov</code> or <code>.avi</code>.</dd>
-<dt>-f framemd5</dt><dd> Decodes video with the framemd5 muxer in order to generate MD5 checksums for every frame of your input file. This allows you to verify losslessness when compared against the framemd5s of the output file.</dd>
+<dt>-f framemd5</dt><dd>Decodes video with the framemd5 muxer in order to generate MD5 checksums for every frame of your input file. This allows you to verify losslessness when compared against the framemd5s of the output file.</dd>
 <dt>-an</dt><dd>ignores the audio stream when creating framemd5 (audio no)</dd>
 <dt><i>framemd5_output_file</i></dt><dd>path, name and extension of the framemd5 file.</dd>
 </dl>
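The idea behind framemd5 (one checksum per decoded frame, so two encodings can be compared for losslessness) can be illustrated without FFmpeg; the byte strings below are stand-ins for raw decoded frames, not real video data:

```python
import hashlib

# Illustration of the per-frame checksum idea (not the framemd5 muxer itself):
# if a copy is lossless, it decodes to identical frames, so per-frame MD5s match.
def frame_md5s(frames):
    return [hashlib.md5(f).hexdigest() for f in frames]

source_frames = [b"\x00" * 16, b"\x01" * 16]   # stand-ins for decoded frames
lossless_copy = [b"\x00" * 16, b"\x01" * 16]   # decodes to the same bytes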
@@ -666,6 +680,36 @@
 </div>
 <!-- ends Make stream properties explicate -->
 
+<!-- Crop video -->
+<label class="recipe" for="crop_video">Crop video</label>
+<input type="checkbox" id="crop_video">
+<div class="hiding">
+<h3>Crop video</h3>
+<p><code>ffmpeg -i <i>input_file</i> -vf "crop=<i>width</i>:<i>height</i>" <i>output_file</i></code></p>
+<p>This command crops the input video to the dimensions defined.</p>
+<dl>
+<dt>ffmpeg</dt><dd>starts the command</dd>
+<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
+<dt>-vf "crop=<i>width</i>:<i>height</i>"</dt><dd>Crops the video to the given width and height (in pixels).<br>
+By default, the crop area is centred: that is, the position of the top left of the cropped area is set to x = (<i>input_width</i> - <i>output_width</i>) / 2, y = (<i>input_height</i> - <i>output_height</i>) / 2.
+</dd>
+<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
+</dl>
+<p>It's also possible to specify the crop position by adding the x and y coordinates representing the top left of your cropped area to your crop filter, as such:</p>
+<p><code>ffmpeg -i <i>input_file</i> -vf "crop=<i>width</i>:<i>height</i>[:<i>x_position</i>:<i>y_position</i>]" <i>output_file</i></code></p>
+<h3>Examples</h3>
+<p>The original frame, a screenshot of the SMPTE colourbars:</p>
+<img class="sample-image" src="img/crop_example_orig.png" alt="Screenshot of the original SMPTE colourbars">
+<p>Result of the command <code>ffmpeg -i <i>smpte_coloursbars.mov</i> -vf "crop=500:500" <i>output_file</i></code>:</p>
+<img class="sample-image-small" src="img/crop_example_aftercrop1.png" alt="Screenshot of the SMPTE colourbars, cropped from the original">
+<p>Result of the command <code>ffmpeg -i <i>smpte_coloursbars.mov</i> -vf "crop=500:500:0:0" <i>output_file</i></code>, appending <code>:0:0</code> to crop from the top left corner:</p>
+<img class="sample-image-small" src="img/crop_example_aftercrop2.png" alt="Screenshot of the SMPTE colourbars, cropped from the original">
+<p>Result of the command <code>ffmpeg -i <i>smpte_coloursbars.mov</i> -vf "crop=500:300:500:30" <i>output_file</i></code>:</p>
+<img class="sample-image-small" src="img/crop_example_aftercrop3.png" alt="Screenshot of the SMPTE colourbars, cropped from the original">
+<p class="link"></p>
+</div>
+<!-- ends Crop video -->
+
 </div>
 <div class="well">
 <h2 id="audio-files">Change or view audio properties</h2>
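The default centring rule in the crop recipe is plain arithmetic and can be checked directly; the 640x480 frame size below is an assumption for illustration, not taken from the commit:

```python
# The crop filter's default offsets, per the recipe above:
# x = (input_width - output_width) / 2, y = (input_height - output_height) / 2.
def default_crop_offsets(in_w, in_h, out_w, out_h):
    return (in_w - out_w) // 2, (in_h - out_h) // 2

# e.g. crop=500:300 applied to an assumed 640x480 frame is centred at (70, 90)
offsets = default_crop_offsets(640, 480, 500, 300)
```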
@@ -698,7 +742,7 @@
 <dl>
 <dt>ffmpeg</dt><dd>starts the command</dd>
 <dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
-<dt>-filter_complex </dt><dd>tells ffmpeg that we will be using a complex filter</dd>
+<dt>-filter_complex</dt><dd>tells ffmpeg that we will be using a complex filter</dd>
 <dt>"</dt><dd>quotation mark to start filtergraph</dd>
 <dt>[0:a:0][0:a:1]amerge[out]</dt><dd>combines the two audio tracks into one</dd>
 <dt>"</dt><dd>quotation mark to end filtergraph</dd>
@@ -710,7 +754,7 @@
 </dl>
 <p class="link"></p>
 </div>
 <!-- ends Combine audio tracks -->
 
 <!-- phase shift -->
 <label class="recipe" for="phase_shift">Inverses the audio phase of the second channel</label>
@@ -1122,12 +1166,12 @@
 <dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
 <dt>-vf drawtext=</dt><dd>This calls the drawtext filter with the following options:
 <dl>
-<dt>fontfile=<i>font_path</i></dt><dd> Set path to font. For example in macOS: <code>fontfile=/Library/Fonts/AppleGothic.ttf</code></dd>
+<dt>fontfile=<i>font_path</i></dt><dd>Set path to font. For example in macOS: <code>fontfile=/Library/Fonts/AppleGothic.ttf</code></dd>
-<dt>fontsize=<i>font_size</i></dt><dd> Set font size. <code>35</code> is a good starting point for SD. Ideally this value is proportional to video size, for example use ffprobe to acquire video height and divide by 14.</dd>
+<dt>fontsize=<i>font_size</i></dt><dd>Set font size. <code>35</code> is a good starting point for SD. Ideally this value is proportional to video size, for example use ffprobe to acquire video height and divide by 14.</dd>
-<dt>text=<i>watermark_text</i> </dt><dd> Set the content of your watermark text. For example: <code>text='FFMPROVISR EXAMPLE TEXT'</code></dd>
+<dt>text=<i>watermark_text</i></dt><dd>Set the content of your watermark text. For example: <code>text='FFMPROVISR EXAMPLE TEXT'</code></dd>
-<dt>fontcolor=<i>font_colour</i> </dt><dd> Set colour of font. Can be a text string such as <code>fontcolor=white</code> or a hexadecimal value such as <code>fontcolor=0xFFFFFF</code></dd>
+<dt>fontcolor=<i>font_colour</i></dt><dd>Set colour of font. Can be a text string such as <code>fontcolor=white</code> or a hexadecimal value such as <code>fontcolor=0xFFFFFF</code></dd>
-<dt>alpha=0.4</dt><dd> Set transparency value.</dd>
+<dt>alpha=0.4</dt><dd>Set transparency value.</dd>
-<dt>x=(w-text_w)/2:y=(h-text_h)/2</dt><dd> Sets <i>x</i> and <i>y</i> coordinates for the watermark. These relative values will centre your watermark regardless of video dimensions.</dd>
+<dt>x=(w-text_w)/2:y=(h-text_h)/2</dt><dd>Sets <i>x</i> and <i>y</i> coordinates for the watermark. These relative values will centre your watermark regardless of video dimensions.</dd>
 </dl>
 Note: <code>-vf</code> is a shortcut for <code>-filter:v</code>.</dd>
 <dt><i>output_file</i></dt><dd>path, name and extension of the output file.</dd>
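The drawtext options above are ultimately joined into one `key=value:key=value` string; a hypothetical helper (not part of FFmpeg or ffmprovisr) makes that assembly explicit:

```python
# Hypothetical sketch: assemble drawtext options into the colon-separated
# key=value form that -vf drawtext= expects.
def drawtext_options(**opts):
    return ":".join(f"{key}={value}" for key, value in opts.items())

watermark = drawtext_options(
    fontfile="/Library/Fonts/AppleGothic.ttf",  # example macOS font path
    fontsize=35,
    text="'FFMPROVISR EXAMPLE TEXT'",
    fontcolor="white",
    alpha=0.4,
    x="(w-text_w)/2",
    y="(h-text_h)/2",
)
```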
@@ -1164,14 +1208,14 @@
 <dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
 <dt>-vf drawtext=</dt><dd>This calls the drawtext filter with the following options:
 <dt>"</dt><dd>quotation mark to start drawtext filter command</dd>
-<dt>fontfile=<i>font_path</i></dt><dd> Set path to font. For example in macOS: <code>fontfile=/Library/Fonts/AppleGothic.ttf</code></dd>
+<dt>fontfile=<i>font_path</i></dt><dd>Set path to font. For example in macOS: <code>fontfile=/Library/Fonts/AppleGothic.ttf</code></dd>
-<dt>fontsize=<i>font_size</i></dt><dd> Set font size. <code>35</code> is a good starting point for SD. Ideally this value is proportional to video size, for example use ffprobe to acquire video height and divide by 14.</dd>
+<dt>fontsize=<i>font_size</i></dt><dd>Set font size. <code>35</code> is a good starting point for SD. Ideally this value is proportional to video size, for example use ffprobe to acquire video height and divide by 14.</dd>
-<dt>timecode=<i>starting_timecode</i> </dt><dd> Set the timecode to be displayed for the first frame. Timecode is to be represented as <code>hh:mm:ss[:;.]ff</code>. Colon escaping is determined by the OS; for example in Ubuntu <code>timecode='09\\:50\\:01\\:23'</code>. Ideally, this value would be generated from the file itself using ffprobe.</dd>
+<dt>timecode=<i>starting_timecode</i></dt><dd>Set the timecode to be displayed for the first frame. Timecode is to be represented as <code>hh:mm:ss[:;.]ff</code>. Colon escaping is determined by the OS; for example in Ubuntu <code>timecode='09\\:50\\:01\\:23'</code>. Ideally, this value would be generated from the file itself using ffprobe.</dd>
-<dt>fontcolor=<i>font_colour</i> </dt><dd> Set colour of font. Can be a text string such as <code>fontcolor=white</code> or a hexadecimal value such as <code>fontcolor=0xFFFFFF</code></dd>
+<dt>fontcolor=<i>font_colour</i></dt><dd>Set colour of font. Can be a text string such as <code>fontcolor=white</code> or a hexadecimal value such as <code>fontcolor=0xFFFFFF</code></dd>
-<dt>box=1</dt><dd> Enable box around timecode</dd>
+<dt>box=1</dt><dd>Enable box around timecode</dd>
-<dt>boxcolor=<i>box_colour</i></dt><dd> Set colour of box. Can be a text string such as <code>fontcolor=black</code> or a hexadecimal value such as <code>fontcolor=0x000000</code></dd>
+<dt>boxcolor=<i>box_colour</i></dt><dd>Set colour of box. Can be a text string such as <code>fontcolor=black</code> or a hexadecimal value such as <code>fontcolor=0x000000</code></dd>
-<dt>rate=<i>timecode_rate</i></dt><dd> Framerate of video. For example <code>25/1</code></dd>
+<dt>rate=<i>timecode_rate</i></dt><dd>Framerate of video. For example <code>25/1</code></dd>
-<dt>x=(w-text_w)/2:y=h/1.2</dt><dd> Sets <i>x</i> and <i>y</i> coordinates for the timecode. These relative values will horizontally centre your timecode in the bottom third regardless of video dimensions.</dd>
+<dt>x=(w-text_w)/2:y=h/1.2</dt><dd>Sets <i>x</i> and <i>y</i> coordinates for the timecode. These relative values will horizontally centre your timecode in the bottom third regardless of video dimensions.</dd>
 <dt>"</dt><dd>quotation mark to end drawtext filter command</dd>
 <dt><i>output_file</i></dt><dd>path, name and extension of the output file.</dd>
 </dl>
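The colon escaping mentioned for the timecode value can be expressed as a one-line transform; this mirrors the Ubuntu-style `\\:` escaping shown above and is an illustrative sketch, not FFmpeg code:

```python
# Escape colons inside a drawtext timecode value as '\\:' (Ubuntu example
# from the recipe above); escaping conventions vary by OS and shell.
def escape_timecode(tc):
    return tc.replace(":", "\\\\:")

escaped = escape_timecode("09:50:01:23")
```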
@@ -1180,6 +1224,25 @@
 </div>
 <!-- ends Burn in timecode -->
 
+<!-- Embed subtitles -->
+<label class="recipe" for="embed_subtitles">Embed subtitles</label>
+<input type="checkbox" id="embed_subtitles">
+<div class="hiding">
+<h3>Embed a subtitle file into a movie file</h3>
+<p><code>ffmpeg -i <i>input_file</i> -i <i>subtitles_file</i> -c copy -c:s mov_text <i>output_file</i></code></p>
+<dl>
+<dt>ffmpeg</dt><dd>starts the command</dd>
+<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
+<dt>-i <i>subtitles_file</i></dt><dd>path to subtitles file, e.g. <code>subtitles.srt</code></dd>
+<dt>-c copy</dt><dd>enable stream copy (no re-encode)</dd>
+<dt>-c:s mov_text</dt><dd>Encode subtitles using the <code>mov_text</code> codec. Note: The <code>mov_text</code> codec works for MP4 and MOV containers. For the MKV container, acceptable formats are <code>ASS</code>, <code>SRT</code>, and <code>SSA</code>.</dd>
+<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
+</dl>
+<p>Note: <code>-c:s</code> is a shortcut for <code>-scodec</code></p>
+<p class="link"></p>
+</div>
+<!-- ends Embed subtitles -->
+
 </div>
 <div class="well">
 <h2 id="create-images">Create thumbnails or GIFs</h2>
@@ -1387,7 +1450,7 @@
 <dt>"</dt><dd>quotation mark to start the lavfi filtergraph</dd>
 <dt>movie='<i>input.mp4</i>'</dt><dd>declares video file source to apply filter</dd>
 <dt>,</dt><dd>comma signifies closing of video source assertion and ready for filter assertion</dd>
-<dt>signalstats=out=brng:</dt><dd>tells ffplay to use the signalstats command, output the data, use the brng filter</dd>
+<dt>signalstats=out=brng</dt><dd>tells ffplay to use the signalstats command, output the data, use the brng filter</dd>
 <dt>:</dt><dd>indicates there’s another parameter coming</dd>
 <dt>color=cyan[out]</dt><dd>sets the color of out-of-range pixels to cyan</dd>
 <dt>"</dt><dd>quotation mark to end the lavfi filtergraph</dd>
@@ -1471,7 +1534,7 @@
 <dt>-show_data</dt><dd>adds a short “hexdump” to show_streams command output</dd>
 <dt>-print_format</dt><dd>Set the output printing format (in this example “xml”; other formats include “json” and “flat”)</dd>
 </dl>
-<p>See also the <a href="www.ffmpeg.org/ffprobe.html" target="_blank">FFmpeg documentation on ffprobe</a> for a full list of flags, commands, and options.</p>
+<p>See also the <a href="http://www.ffmpeg.org/ffprobe.html" target="_blank">FFmpeg documentation on ffprobe</a> for a full list of flags, commands, and options.</p>
 <p class="link"></p>
 </div>
 <!-- ends Pull specs -->
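When `-print_format json` is chosen, ffprobe's report can be consumed programmatically; the stream values below are made up for illustration, not taken from a real file:

```python
import json

# Sketch: reading a (fabricated) ffprobe -print_format json report.
report = json.loads("""
{"streams": [{"codec_type": "video", "codec_name": "ffv1"},
             {"codec_type": "audio", "codec_name": "pcm_s16le"}]}
""")

video_streams = [s for s in report["streams"] if s["codec_type"] == "video"]
```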
@@ -1486,10 +1549,11 @@
 <dt>ffmpeg</dt><dd>starts the command</dd>
 <dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
 <dt>-map_metadata -1</dt><dd>sets metadata copying to -1, which copies nothing</dd>
-<dt>-vcodec copy</dt><dd>copies video track</dd>
+<dt>-c:v copy</dt><dd>copies video track</dd>
-<dt>-acodec copy</dt><dd>copies audio track</dd>
+<dt>-c:a copy</dt><dd>copies audio track</dd>
 <dt><i>output_file</i></dt><dd>Makes copy of original file and names output file</dd>
 </dl>
+<p>Note: <code>-c:v</code> and <code>-c:a</code> are shortcuts for <code>-vcodec</code> and <code>-acodec</code>.</p>
 <p class="link"></p>
 </div>
 <!-- ends Strip metadata -->
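The strip-metadata command this hunk touches can be sketched as an argv list, which also shows the `-c:v`/`-c:a` spellings the commit switches to; the helper and file names are illustrative only:

```python
# Sketch of the strip-metadata invocation as an argv list.
# -c:v / -c:a are the stream-specifier spellings of -vcodec / -acodec.
def strip_metadata_cmd(input_file, output_file):
    return ["ffmpeg", "-i", input_file,
            "-map_metadata", "-1",      # copy no global metadata
            "-c:v", "copy",             # copy video track, no re-encode
            "-c:a", "copy",             # copy audio track, no re-encode
            output_file]

cmd = strip_metadata_cmd("in.mov", "out.mov")
```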
@@ -1674,7 +1738,7 @@
 </dl>
 <p class="link"></p>
 </div>
 <!-- ends Check FFV1 Fixity -->
 
 <!-- Read/Extract EIA-608 Closed Captions -->
 <label class="recipe" for="readeia608">Read/Extract EIA-608 Closed Captioning</label>
@@ -1826,7 +1890,7 @@
 <dt>-c:a pcm_s16le</dt><dd>encodes the audio codec in <code>pcm_s16le</code> (the default encoding for wav files). <code>pcm</code> represents pulse-code modulation format (raw bytes), <code>16</code> means 16 bits per sample, and <code>le</code> means "little endian"</dd>
 <dt>-t 10</dt><dd>specifies recording time of 10 seconds</dd>
 <dt>-c:v ffv1</dt><dd>Encodes to <a href="https://en.wikipedia.org/wiki/FFV1" target="_blank">FFV1</a>. Alter this setting to set your desired codec.</dd>
-<dt><i>output_file</i>.wav</dt><dd>path, name and extension of the output file</dd>
+<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
 </dl>
 <p class="link"></p>
 </div>
@@ -1850,6 +1914,31 @@
 </div>
 <!-- ends Broken File -->
 
+<!-- Game of Life -->
+<label class="recipe" for="game_of_life">Conway's Game of Life</label>
+<input type="checkbox" id="game_of_life">
+<div class="hiding">
+<h3>Conway's Game of Life</h3>
+<p>Simulates <a href="https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life">Conway's Game of Life</a></p>
+<p><code>ffplay -f lavfi life=s=300x200:mold=10:r=60:ratio=0.1:death_color=#C83232:life_color=#00ff00,scale=1200:800</code></p>
+<dl>
+<dt>ffplay</dt><dd>starts the command</dd>
+<dt>-f lavfi</dt><dd>tells ffplay to use the <a href="http://ffmpeg.org/ffmpeg-devices.html#lavfi" target="_blank">Libavfilter</a> input virtual device</dd>
+<dt>life=s=300x200</dt><dd>use the life filter and set the size of the video to 300x200</dd>
+<dt>:</dt><dd>indicates there’s another parameter coming</dd>
+<dt>mold=10:r=60:ratio=0.1</dt><dd>sets up the rules of the game: cell mold speed, video rate, and random fill ratio</dd>
+<dt>:</dt><dd>indicates there’s another parameter coming</dd>
+<dt>death_color=#C83232:life_color=#00ff00</dt><dd>specifies color for cell death and cell life; mold_color can also be set</dd>
+<dt>,</dt><dd>comma signifies closing of video source assertion and ready for filter assertion</dd>
+<dt>scale=1200:800</dt><dd>scale to 1200 width and 800 height</dd>
+</dl>
+<img src="img/life.gif" alt="GIF of above command">
+<p>To save a portion of the stream instead of playing it back infinitely, use the following command:</p>
+<p><code>ffmpeg -f lavfi -i life=s=300x200:mold=10:r=60:ratio=0.1:death_color=#C83232:life_color=#00ff00,scale=1200:800 -t 5 <i>output_file</i></code></p>
+<p class="link"></p>
+</div>
+<!-- ends Game of Life -->
+
 </div>
 <div class="well">
 <h2 id="ocr">Use OCR</h2>
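The life filter this hunk adds follows Conway's standard rules (a dead cell with exactly 3 live neighbours is born; a live cell with 2 or 3 survives). A minimal sketch of one generation, independent of FFmpeg:

```python
from itertools import product

# One generation of Conway's Game of Life on a set of live (x, y) cells.
def step(live):
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
    # birth on exactly 3 neighbours; survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 1), (1, 1), (2, 1)}   # a period-2 oscillator
```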
@@ -1966,20 +2055,44 @@
|
|||||||
<input type="checkbox" id="split_audio_video">
|
<input type="checkbox" id="split_audio_video">
|
||||||
<div class="hiding">
|
<div class="hiding">
|
||||||
<h3>Split audio and video tracks</h3>
|
<h3>Split audio and video tracks</h3>
|
||||||
<p><code>ffmpeg -i <i>input_file</i> -map <i>0:v:0 video_output_file</i> -map <i>0:a:0 audio_output_file</i></code></p>
|
<p><code>ffmpeg -i <i>input_file</i> -map 0:v:0 <i>video_output_file</i> -map 0:a:0 <i>audio_output_file</i></code></p>
|
||||||
<p>This command splits the original input file into a video and audio stream. The -map command identifies which streams are mapped to which file. To ensure that you’re mapping the right streams to the right file, run ffprobe before writing the script to identify which streams are desired.</p>
|
<p>This command splits the original input file into a video and audio stream. The -map command identifies which streams are mapped to which file. To ensure that you’re mapping the right streams to the right file, run ffprobe before writing the script to identify which streams are desired.</p>
|
||||||
<dl>
|
<dl>
|
||||||
<dt>ffmpeg</dt><dd>starts the command</dd>
|
<dt>ffmpeg</dt><dd>starts the command</dd>
|
||||||
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
|
<dt>-i <i>input_file</i></dt><dd>path, name and extension of the input file</dd>
|
||||||
<dt>-map <i>0:v:0</i></dt><dd>grabs the first video stream and maps it into:</dd>
|
<dt>-map 0:v:0</dt><dd>grabs the first video stream and maps it into:</dd>
|
||||||
<dt><i>video_output_file</i></dt><dd>path, name and extension of the video output file</dd>
|
<dt><i>video_output_file</i></dt><dd>path, name and extension of the video output file</dd>
|
||||||
<dt>-map <i>0:a:0</i></dt><dd>grabs the first audio stream and maps it into:</dd>
|
<dt>-map 0:a:0</dt><dd>grabs the first audio stream and maps it into:</dd>
|
<dt><i>audio_output_file</i></dt><dd>path, name and extension of the audio output file</dd>
</dl>
<p class="link"></p>
</div>
<!-- ends Split audio and video tracks -->
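The <code>-map</code> stream specifiers used above follow the pattern <code>file_index:stream_type:stream_index</code>. As a minimal sketch of how such a specifier decomposes (the <code>parse_map_specifier</code> helper is hypothetical, written purely for illustration; ffmpeg parses these internally):

```python
# Sketch: decompose an ffmpeg -map stream specifier such as "0:v:0" into
# (input index, stream type, stream index). Hypothetical helper for
# illustration only; not part of ffmpeg.

def parse_map_specifier(spec):
    """Split a specifier like '0:v:0' or '1:a' into its parts."""
    parts = spec.split(":")
    input_index = int(parts[0])                               # which -i input file
    stream_type = parts[1] if len(parts) > 1 else None        # 'v' video, 'a' audio
    stream_index = int(parts[2]) if len(parts) > 2 else None  # nth stream of that type
    return input_index, stream_type, stream_index

# "0:v:0" is the first video stream of the first input;
# "0:a:0" is the first audio stream of the first input.
print(parse_map_specifier("0:v:0"))  # (0, 'v', 0)
print(parse_map_specifier("0:a:0"))  # (0, 'a', 0)
```

A bare specifier such as <code>1:a</code> (no trailing index) selects all audio streams of the second input rather than a single stream.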
+<!-- Merge audio and video tracks -->
+<label class="recipe" for="merge_audio_video">Merge audio and video tracks</label>
+<input type="checkbox" id="merge_audio_video">
+<div class="hiding">
+<h3>Merge audio and video tracks</h3>
+<p><code>ffmpeg -i <i>video_file</i> -i <i>audio_file</i> -map 0:v -map 1:a -c copy <i>output_file</i></code></p>
+<p>This command takes a video file and an audio file as inputs, and creates an output file that combines the video stream in the first file with the audio stream in the second file.</p>
+<dl>
+<dt>ffmpeg</dt><dd>starts the command</dd>
+<dt>-i <i>video_file</i></dt><dd>path, name and extension of the first input file (the video file)</dd>
+<dt>-i <i>audio_file</i></dt><dd>path, name and extension of the second input file (the audio file)</dd>
+<dt>-map <i>0:v</i></dt><dd>selects the video streams from the first input file</dd>
+<dt>-map <i>1:a</i></dt><dd>selects the audio streams from the second input file</dd>
+<dt>-c copy</dt><dd>copies streams without re-encoding</dd>
+<dt><i>output_file</i></dt><dd>path, name and extension of the output file</dd>
+</dl>
+<p><b>Note:</b> in the example above, the video input file is given prior to the audio input file. However, input files can be added in any order, as long as they are indexed correctly when stream mapping with <code>-map</code>. See the entry on <a href="#stream-mapping">stream mapping</a>.</p>
+<h4>Variation:</h4>
+<p>Include the audio tracks from both input files with the following command:</p>
+<p><code>ffmpeg -i <i>video_file</i> -i <i>audio_file</i> -map 0:v -map 0:a -map 1:a -c copy <i>output_file</i></code></p>
+<p class="link"></p>
+</div>
+<!-- ends Merge audio and video tracks -->
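Built as an argument list, for example for Python's <code>subprocess</code>, the merge command reads as follows. This is a sketch: the filenames are placeholders, and the invocation is left commented out since it requires ffmpeg to be installed.

```python
import subprocess

# Placeholder filenames for illustration only.
video_file, audio_file, output_file = "video.mov", "audio.wav", "output.mov"

cmd = [
    "ffmpeg",
    "-i", video_file,   # input 0: the video file
    "-i", audio_file,   # input 1: the audio file
    "-map", "0:v",      # video streams from input 0
    "-map", "1:a",      # audio streams from input 1
    "-c", "copy",       # copy streams without re-encoding
    output_file,
]
# subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
print(" ".join(cmd))
```

Building the command as a list (rather than one shell string) avoids quoting problems when filenames contain spaces.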

<!-- Create ISO -->
<label class="recipe" for="create_iso">Create ISO files for DVD access</label>
<input type="checkbox" id="create_iso">
@@ -2060,26 +2173,26 @@
<li>This is in daily use to live-stream a real-world TV show. No errors for nearly 4 years. Some parameters were found by trial and error or empirical testing, so suggestions and questions are welcome.</li>
</ol>
<dl>
<dt>ffmpeg</dt><dd>starts the command</dd>
<dt>-re</dt><dd>read the input at its native frame rate</dd>
<dt>-i input.mov</dt><dd>the input file; can also be <code>-</code> to use STDIN if you pipe in from a webcam or SDI</dd>
<dt>-map 0</dt><dd>map ALL streams from the input file to the output</dd>
<dt>-flags +global_header</dt><dd>don't place extra data in every keyframe</dd>
<dt>-vf scale="1280:-1"</dt><dd>scale to a width of 1280 pixels, maintaining the aspect ratio</dd>
<dt>-pix_fmt yuv420p</dt><dd>convert to the 4:2:0 chroma subsampling scheme</dd>
<dt>-level 3.1</dt><dd>H.264 level (defines some thresholds for bitrate)</dd>
<dt>-vsync passthrough</dt><dd>each frame is passed with its timestamp from the demuxer to the muxer</dd>
<dt>-crf 26</dt><dd>constant rate factor, i.e. the quality</dd>
<dt>-g 50</dt><dd>GOP size</dd>
<dt>-bufsize 3500k</dt><dd>rate-control buffer size (approximately 2x the maxrate)</dd>
<dt>-maxrate 1800k</dt><dd>maximum bit rate</dd>
<dt>-c:v libx264</dt><dd>encode the output video stream as H.264</dd>
<dt>-c:a aac</dt><dd>encode the output audio stream as AAC</dd>
<dt>-b:a 128000</dt><dd>the audio bitrate</dd>
<dt>-r:a 44100</dt><dd>the audio sample rate</dd>
<dt>-ac 2</dt><dd>two audio channels</dd>
<dt>-t ${STREAMDURATION}</dt><dd>time (in seconds) after which the stream should automatically end</dd>
<dt>-f tee</dt><dd>use multiple outputs; the outputs are defined below</dd>
<dt>"[movflags=+faststart]target-file.mp4|[f=flv]rtmp://stream-url/stream-id"</dt><dd>the outputs, separated by a pipe (|). The first is the local file, the second is the live stream. Options for each target are given in square brackets before the target.</dd>
</dl>
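To make the keyframe and rate-control numbers above concrete, here is a small sketch of how the parameters relate. The 25 fps (PAL) source frame rate is an assumption; the recipe does not state it.

```python
# Assumption: a 25 fps (PAL) source; the recipe does not state the frame rate.
fps = 25
gop = 50                       # -g 50
keyframe_interval_s = gop / fps
print(keyframe_interval_s)     # 2.0 -> a keyframe every 2 seconds

maxrate_k = 1800               # -maxrate 1800k (kbit/s)
bufsize_k = 3500               # -bufsize 3500k, roughly 2x the maxrate
print(bufsize_k / maxrate_k)   # just under 2
```

A roughly two-second keyframe interval and a buffer of about twice the maximum bitrate are common starting points for live streaming; both can be tuned for the target platform.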
<p class="link"></p>
js/js.js
@@ -3,10 +3,11 @@ $(document).ready(function() {
// open recipe window if a hash is found in URL
if(window.location.hash) {
id = window.location.hash
+console.log(id.substring(1))
document.getElementById(id.substring(1)).checked = true;
$('html, body').animate({ scrollTop: $(id).offset().top}, 1000);
$(id).closest('div').find('.link').empty();
-$(id).closest('div').find('.link').append("<small>Link to this command: <a href="+window.location.href+">"+window.location.href+"</a></small>");
+$(id).closest('div').find('.link').append("<small>Link to this command: <a href='https://amiaopensource.github.io/ffmprovisr/index.html"+window.location.hash+"'>https://amiaopensource.github.io/ffmprovisr/index.html"+window.location.hash+"</a></small>");
}

// add hash URL when recipe is opened
@@ -14,7 +15,7 @@ $(document).ready(function() {
id = $(this).attr("for");
window.location.hash = ('#' + id)
$('#' + id).closest('div').find('.link').empty();
-$('#' + id).closest('div').find('.link').append("<small>Link to this command: <a href="+window.location.href+">"+window.location.href+"</a></small>");
+$('#' + id).closest('div').find('.link').append("<small>Link to this command: <a href='https://amiaopensource.github.io/ffmprovisr/index.html"+window.location.hash+"'>https://amiaopensource.github.io/ffmprovisr/index.html"+window.location.hash+"</a></small>");
});

});
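The intent of the two <code>append()</code> changes above can be sketched as a pure function: the permalink is now built from the site's canonical base URL plus the recipe's hash, rather than from <code>window.location.href</code>, which would bake in whatever URL the page happened to be viewed at (for example a local file path). A sketch in Python for illustration; the real code is the jQuery in js/js.js:

```python
# Canonical base URL of the published site, as used in the changed js/js.js.
BASE_URL = "https://amiaopensource.github.io/ffmprovisr/index.html"

def permalink(location_hash):
    """Build the canonical link for a recipe from its URL hash."""
    return BASE_URL + location_hash

print(permalink("#merge_audio_video"))
```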
@@ -44,7 +44,7 @@ You can read our contributor code of conduct [here](https://github.com/amiaopens
## Maintainers

-[Ashley Blewer](https://github.com/ablwr), [Katherine Frances Nagels](https://github.com/kfrn), [Kieran O'Leary](https://github.com/kieranjol) and [Reto Kromer](https://github.com/retokromer)
+[Ashley Blewer](https://github.com/ablwr), [Katherine Frances Nagels](https://github.com/kfrn), [Kieran O'Leary](https://github.com/kieranjol), [Reto Kromer](https://github.com/retokromer) and [Andrew Weaver](https://github.com/privatezero)

## Contributors
* Gathered using [octohatrack](https://github.com/LABHR/octohatrack)
@@ -104,4 +104,6 @@ All Contributors: 22
## License

-<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/80x15.png" /></a><br />This <span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/InteractiveResource" rel="dct:type">work</span> by <a xmlns:cc="http://creativecommons.org/ns#" href="http://amiaopensource.github.io/ffmprovisr/" property="cc:attributionName" rel="cc:attributionURL">ffmprovisr</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.<br />Based on a work at <a xmlns:dct="http://purl.org/dc/terms/" href="https://github.com/amiaopensource/ffmprovisr" rel="dct:source">https://github.com/amiaopensource/ffmprovisr</a>.
+<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/80x15.png"></a><br>
+This <span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/InteractiveResource" rel="dct:type">work</span> by <a xmlns:cc="http://creativecommons.org/ns#" href="http://amiaopensource.github.io/ffmprovisr/" property="cc:attributionName" rel="cc:attributionURL">ffmprovisr</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.<br>
+Based on a work at <a xmlns:dct="http://purl.org/dc/terms/" href="https://github.com/amiaopensource/ffmprovisr" rel="dct:source">https://github.com/amiaopensource/ffmprovisr</a>.