From 1ac02df52a91490f6322eae395550a464c629ec0 Mon Sep 17 00:00:00 2001 From: kfrn Date: Tue, 17 Oct 2017 18:36:26 +1300 Subject: [PATCH] Reorder several sections of the page --- index.html | 404 ++++++++++++++++++++++++++--------------------------- 1 file changed, 202 insertions(+), 202 deletions(-) diff --git a/index.html b/index.html index a50f1f9..9a6893c 100644 --- a/index.html +++ b/index.html @@ -30,13 +30,13 @@

Table of Contents

Change video properties
Join/trim/create an excerpt
Work with interlaced video
-
Use filters or scopes
-
View or strip metadata
+
Overlay timecode or text on a video
Generate image files from a video
Generate an animated GIF
Create a video from image(s) and audio
-
Overlay timecode or text on a video
+
Use filters or scopes
Normalize/equalize audio
+
View or strip metadata
Preservation tasks
Generate test files
Repair a file
@@ -851,171 +851,74 @@

Check video file interlacement patterns

-

Use filters or scopes

- - - -
-

Creates a visualization of the bits in an audio stream

- -

ffplay -f lavfi "amovie=input_file, asplit=2[out1][a], [a]abitscope=colors=purple|yellow[out0]"

-

This filter allows visual analysis of the information held in various bit depths of an audio stream. This can aid with identifying when a file that is nominally of a higher bit depth actually has been 'padded' with null information. The provided GIF shows a 16 bit WAV file (left) and then the results of converting that same WAV to 32 bit (right). Note that in the 32 bit version, there is still only information in the first 16 bits.

-
-
ffplay -f lavfi
starts the command and tells ffplay that you will be using the lavfi virtual device to create the input
-
"
quotation mark to start the lavfi filtergraph
-
amovie=input_file
path, name and extension of the input file
-
asplit=2[out1][a]
splits the audio stream in two. One of these [a] will be passed to the filter, and the other [out1] will be the audible stream.
-
[a]abitscope=colors=purple|yellow[out0]
sends stream [a] into the abitscope filter, sets the colors for the channels to purple and yellow, and outputs the results to [out0]. This is what will be the visualization.
-
"
quotation mark to end the lavfi filtergraph
-
-
-

Comparison of mono 16 bit and mono 16 bit padded to 32 bit.

- bit_scope_comparison -
-
- - - - -
-

Plays a graphical output showing decibel levels of an input file

- -

ffplay -f lavfi "amovie='input.mp3', astats=metadata=1:reset=1, adrawgraph=lavfi.astats.Overall.Peak_level:max=0:min=-30.0:size=700x256:bg=Black[out]"

-
-
ffplay
starts the command
-
-f lavfi
tells ffplay to use the Libavfilter input virtual device
-
"
quotation mark to start the lavfi filtergraph
-
amovie='input.mp3'
declares audio source file on which to apply filter
-
,
comma signifies the end of audio source section and the beginning of the filter section
-
astats=metadata=1
tells the astats filter to output metadata that can be passed to another filter (in this case adrawgraph)
-
:
divides between options of the same filter
-
reset=1
tells the filter to calculate the stats on every frame (increasing this number would calculate stats for groups of frames)
-
,
comma divides one filter in the chain from another
-
adrawgraph=lavfi.astats.Overall.Peak_level:max=0:min=-30.0
draws a graph using the overall peak volume calculated by the astats filter. It sets the max for the graph to 0 (dB) and the minimum to -30 (dB). For more options on data points that can be graphed see the FFmpeg astats documentation
-
size=700x256:bg=Black
sets the background color and size of the output
-
[out]
ends the filterchain and sets the output
-
"
quotation mark to end the lavfi filtergraph
-
-
-

Example of filter output

- astats example -
-
- - - - -
-

Shows all pixels outside of broadcast range

- -

ffplay -f lavfi "movie='input.mp4', signalstats=out=brng:color=cyan[out]"

-
-
ffplay
starts the command
-
-f lavfi
tells ffplay to use the Libavfilter input virtual device
-
"
quotation mark to start the lavfi filtergraph
-
movie='input.mp4'
declares video file source to apply filter
-
,
comma signifies the end of the video source section and the beginning of the filter section
-
signalstats=out=brng:
tells ffplay to use the signalstats filter and sets its output option to brng (broadcast range)
-
:
indicates there’s another parameter coming
-
color=cyan[out]
sets the color of out-of-range pixels to cyan
-
"
quotation mark to end the lavfi filtergraph
-
-
-

Example of filter output

- BRNG example -
-
- - - - -
-

Plays vectorscope of video

- -

ffplay input_file -vf "split=2[m][v], [v]vectorscope=b=0.7:m=color3:g=green[v], [m][v]overlay=x=W-w:y=H-h"

-
-
ffplay
starts the command
-
input_file
path, name and extension of the input file
-
-vf
creates a filtergraph to use for the streams
-
"
quotation mark to start filtergraph
-
split=2[m][v]
Splits the input into two identical outputs and names them [m] and [v]
-
,
comma signifies there is another parameter coming
-
[v]vectorscope=b=0.7:m=color3:g=green[v]
asserts usage of the vectorscope filter and sets a light background opacity (b, alias for bgopacity), sets a background color style (m, alias for mode), and graticule color (g, alias for graticule)
-
,
comma signifies there is another parameter coming
-
[m][v]overlay=x=W-w:y=H-h
declares where the vectorscope will overlay on top of the video image as it plays
-
"
quotation mark to end filtergraph
-
-
- +

Overlay timecode or text

- - -
-

This will play two input videos side by side while also applying the temporal difference filter to them

+ + +
+

Create centered, transparent text watermark

-

ffmpeg -i input01 -i input02 -filter_complex "[0:v:0]tblend=all_mode=difference128[a];[1:v:0]tblend=all_mode=difference128[b];[a][b]hstack[out]" -map [out] -f nut -c:v rawvideo - | ffplay -

+

E.g. for creating access copies with your institution's name

+

ffmpeg -i input_file -vf drawtext="fontfile=font_path:fontsize=font_size:text=watermark_text:fontcolor=font_colour:alpha=0.4:x=(w-text_w)/2:y=(h-text_h)/2" output_file

ffmpeg
starts the command
-
-i input01 -i input02
Designates the files to use for inputs one and two respectively
-
-filter_complex
Lets FFmpeg know we will be using a complex filter (this must be used for multiple inputs)
-
"
quotation mark to start filtergraph
-
[0:v:0]tblend=all_mode=difference128[a]
Applies the tblend filter (with the settings all_mode and difference128) to the first video stream from the first input and assigns the result to the output [a]
-
[1:v:0]tblend=all_mode=difference128[b]
Applies the tblend filter (with the settings all_mode and difference128) to the first video stream from the second input and assigns the result to the output [b]
-
[a][b]hstack[out]
Takes the outputs from the previous steps ([a] and [b]) and uses the hstack (horizontal stack) filter on them to create the side-by-side output. This output is then named [out]
-
"
quotation mark to end filtergraph
-
-map [out]
Maps the output of the filter chain
-
-f nut
Sets the format for the output video stream to Nut
-
-c:v rawvideo
Sets the video codec of the output video stream to raw video
-
-
tells FFmpeg that the output will be piped to a new command (as opposed to a file)
-
|
Tells the system you will be piping the output of the previous command into a new command
-
ffplay -
Starts ffplay and tells it to use the pipe from the previous command as its input
+
-i input_file
path, name and extension of the input file
+
-vf drawtext=
This calls the drawtext filter with the following options: +
+
fontfile=font_path
Set path to font. For example in macOS: fontfile=/Library/Fonts/AppleGothic.ttf
+
fontsize=font_size
Set font size. 35 is a good starting point for SD. Ideally this value is proportional to the video size; for example, use ffprobe to acquire the video height and divide by 14.
+
text=watermark_text
Set the content of your watermark text. For example: text='FFMPROVISR EXAMPLE TEXT'
+
fontcolor=font_colour
Set colour of font. Can be a text string such as fontcolor=white or a hexadecimal value such as fontcolor=0xFFFFFF
+
alpha=0.4
Set transparency value.
+
x=(w-text_w)/2:y=(h-text_h)/2
Sets x and y coordinates for the watermark. These relative values will centre your watermark regardless of video dimensions.
+
+ Note: -vf is a shortcut for -filter:v.
+
output_file
path, name and extension of the output file.
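The relative drawtext expressions above reduce to simple arithmetic. A minimal Python sketch, using hypothetical values (a 720×576 SD frame and a 200×40 px rendered text box), of how the centering expression and the height/14 font-size rule of thumb resolve:

```python
def centered_xy(w, h, text_w, text_h):
    # mirrors drawtext's x=(w-text_w)/2:y=(h-text_h)/2
    return (w - text_w) / 2, (h - text_h) / 2

def fontsize_for_height(height, divisor=14):
    # rule of thumb from the text: video height divided by 14
    return round(height / divisor)

# hypothetical 720x576 frame with a 200x40 px text box
print(centered_xy(720, 576, 200, 40))   # (260.0, 268.0)
print(fontsize_for_height(576))         # 41
```

Because the expressions use the frame's own width and height, the same filter string centres the watermark on any input.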
-
-

Example of filter output

- astats example -
-
- -
-
-

View or strip metadata

+ - - -
-

Pull specs from video file

+ + +
+

Overlay image watermark on video

-

ffprobe -i input_file -show_format -show_streams -show_data -print_format xml

-

This command extracts technical metadata from a video file and displays it in xml.

+

ffmpeg -i input_video_file -i input_image_file -filter_complex overlay=main_w-overlay_w-5:5 output_file

-
ffprobe
starts the command
-
-i input_file
path, name and extension of the input file
-
-show_format
outputs file container information
-
-show_streams
outputs audio and video codec information
-
-show_data
adds a short “hexdump” to show_streams command output
-
-print_format
Set the output printing format (in this example “xml”; other formats include “json” and “flat”)
+
ffmpeg
starts the command
+
-i input_video_file
path, name and extension of the input video file
+
-i input_image_file
path, name and extension of the image file
+
-filter_complex overlay=main_w-overlay_w-5:5
This calls the overlay filter and sets x and y coordinates for the position of the watermark on the video. Instead of hardcoding specific x and y coordinates, main_w-overlay_w-5:5 uses relative coordinates to place the watermark in the upper right hand corner, based on the width of your input files. Please see the FFmpeg documentation for more examples.
+
output_file
path, name and extension of the output file
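The overlay expression is likewise plain arithmetic on the two widths. A sketch, with hypothetical dimensions (a 1920 px wide video and a 320 px wide watermark image), of how main_w-overlay_w-5:5 resolves to top-right coordinates:

```python
def overlay_top_right(main_w, overlay_w, margin=5):
    # mirrors overlay=main_w-overlay_w-5:5 -- x is the main video's width
    # minus the watermark's width minus a 5 px margin; y is fixed at 5
    return main_w - overlay_w - margin, margin

# hypothetical 1920 px wide video with a 320 px wide watermark
print(overlay_top_right(1920, 320))  # (1595, 5)
```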
-

See also the FFmpeg documentation on ffprobe for a full list of flags, commands, and options.

- + - - -
-

Strips metadata from video file

+ + +
+

Create a burnt in timecode on your image

-

ffmpeg -i input_file -map_metadata -1 -c:v copy -c:a copy output_file

+

ffmpeg -i input_file -vf drawtext="fontfile=font_path:fontsize=font_size:timecode=starting_timecode:fontcolor=font_colour:box=1:boxcolor=box_colour:rate=timecode_rate:x=(w-text_w)/2:y=h/1.2" output_file

ffmpeg
starts the command
-i input_file
path, name and extension of the input file
-
-map_metadata -1
sets metadata copying to -1, which copies nothing
-
-c:v copy
copies video track
-
-c:a copy
copies audio track
-
output_file
Makes copy of original file and names output file
+
-vf drawtext=
This calls the drawtext filter with the following options: +
"
quotation mark to start drawtext filter command
+
fontfile=font_path
Set path to font. For example in macOS: fontfile=/Library/Fonts/AppleGothic.ttf
+
fontsize=font_size
Set font size. 35 is a good starting point for SD. Ideally this value is proportional to the video size; for example, use ffprobe to acquire the video height and divide by 14.
+
timecode=starting_timecode
Set the timecode to be displayed for the first frame. Timecode is to be represented as hh:mm:ss[:;.]ff. Colon escaping is determined by the operating system; for example, in Ubuntu timecode='09\\:50\\:01\\:23'. Ideally, this value would be generated from the file itself using ffprobe.
+
fontcolor=font_colour
Set colour of font. Can be a text string such as fontcolor=white or a hexadecimal value such as fontcolor=0xFFFFFF
+
box=1
Enable box around timecode
+
boxcolor=box_colour
Set colour of box. Can be a text string such as boxcolor=black or a hexadecimal value such as boxcolor=0x000000
+
rate=timecode_rate
Framerate of video. For example 25/1
+
x=(w-text_w)/2:y=h/1.2
Sets x and y coordinates for the timecode. These relative values will horizontally centre your timecode in the bottom third regardless of video dimensions.
+
"
quotation mark to end drawtext filter command
+
output_file
path, name and extension of the output file.
+

Note: -vf is a shortcut for -filter:v.
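The hh:mm:ss:ff starting value can be derived from a frame count and the rate= framerate. A sketch of that conversion, assuming non-drop-frame timecode at an integer rate such as 25/1:

```python
def frames_to_timecode(frame, fps=25):
    # non-drop-frame hh:mm:ss:ff for an integer framerate like rate=25/1
    ff = frame % fps
    total_seconds = frame // fps
    hh, rem = divmod(total_seconds, 3600)
    mm, ss = divmod(rem, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(frames_to_timecode(0))       # 00:00:00:00
print(frames_to_timecode(885048))  # 09:50:01:23
```

Drop-frame rates such as 30000/1001 need the drop-frame counting rules and are not covered by this sketch.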

- +
@@ -1156,74 +1059,131 @@

Create a video from an image and audio file.

-

Overlay timecode or text

+

Use filters or scopes

- - -
-

Create centered, transparent text watermark

+ + +
+

Creates a visualization of the bits in an audio stream

-

E.g. for creating access copies with your institution's name

-

ffmpeg -i input_file -vf drawtext="fontfile=font_path:fontsize=font_size:text=watermark_text:fontcolor=font_colour:alpha=0.4:x=(w-text_w)/2:y=(h-text_h)/2" output_file

+

ffplay -f lavfi "amovie=input_file, asplit=2[out1][a], [a]abitscope=colors=purple|yellow[out0]"

+

This filter allows visual analysis of the information held in various bit depths of an audio stream. This can aid with identifying when a file that is nominally of a higher bit depth actually has been 'padded' with null information. The provided GIF shows a 16 bit WAV file (left) and then the results of converting that same WAV to 32 bit (right). Note that in the 32 bit version, there is still only information in the first 16 bits.

-
ffmpeg
starts the command
-
-i input_file
path, name and extension of the input file
-
-vf drawtext=
This calls the drawtext filter with the following options: -
-
fontfile=font_path
Set path to font. For example in macOS: fontfile=/Library/Fonts/AppleGothic.ttf
-
fontsize=font_size
Set font size. 35 is a good starting point for SD. Ideally this value is proportional to the video size; for example, use ffprobe to acquire the video height and divide by 14.
-
text=watermark_text
Set the content of your watermark text. For example: text='FFMPROVISR EXAMPLE TEXT'
-
fontcolor=font_colour
Set colour of font. Can be a text string such as fontcolor=white or a hexadecimal value such as fontcolor=0xFFFFFF
-
alpha=0.4
Set transparency value.
-
x=(w-text_w)/2:y=(h-text_h)/2
Sets x and y coordinates for the watermark. These relative values will centre your watermark regardless of video dimensions.
-
- Note: -vf is a shortcut for -filter:v.
-
output_file
path, name and extension of the output file.
+
ffplay -f lavfi
starts the command and tells ffplay that you will be using the lavfi virtual device to create the input
+
"
quotation mark to start the lavfi filtergraph
+
amovie=input_file
path, name and extension of the input file
+
asplit=2[out1][a]
splits the audio stream in two. One of these [a] will be passed to the filter, and the other [out1] will be the audible stream.
+
[a]abitscope=colors=purple|yellow[out0]
sends stream [a] into the abitscope filter, sets the colors for the channels to purple and yellow, and outputs the results to [out0]. This is what will be the visualization.
+
"
quotation mark to end the lavfi filtergraph
+
+

Comparison of mono 16 bit and mono 16 bit padded to 32 bit.

+ bit_scope_comparison +
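The padding effect described above is easy to reproduce: widening a 16-bit sample to 32 bits moves the value into the high bits and leaves the low 16 bits as zeros, which is exactly the null information abitscope makes visible. A sketch of that widening (illustrative only, not how any particular converter is implemented):

```python
def pad_16_to_32(sample16):
    # widen a 16-bit integer sample to 32 bits; the value moves into the
    # high 16 bits and the low 16 bits stay zero (the 'padding')
    return sample16 << 16

s32 = pad_16_to_32(0x1ABC)
print(hex(s32))        # 0x1abc0000
print(s32 & 0xFFFF)    # 0 -- no information below bit 16
```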
- + - - -
-

Overlay image watermark on video

+ + +
+

Plays a graphical output showing decibel levels of an input file

-

ffmpeg -i input_video_file -i input_image_file -filter_complex overlay=main_w-overlay_w-5:5 output_file

+

ffplay -f lavfi "amovie='input.mp3', astats=metadata=1:reset=1, adrawgraph=lavfi.astats.Overall.Peak_level:max=0:min=-30.0:size=700x256:bg=Black[out]"

-
ffmpeg
starts the command
-
-i input_video_file
path, name and extension of the input video file
-
-i input_image_file
path, name and extension of the image file
-
-filter_complex overlay=main_w-overlay_w-5:5
This calls the overlay filter and sets x and y coordinates for the position of the watermark on the video. Instead of hardcoding specific x and y coordinates, main_w-overlay_w-5:5 uses relative coordinates to place the watermark in the upper right hand corner, based on the width of your input files. Please see the FFmpeg documentation for more examples.
-
output_file
path, name and extension of the output file
+
ffplay
starts the command
+
-f lavfi
tells ffplay to use the Libavfilter input virtual device
+
"
quotation mark to start the lavfi filtergraph
+
amovie='input.mp3'
declares audio source file on which to apply filter
+
,
comma signifies the end of audio source section and the beginning of the filter section
+
astats=metadata=1
tells the astats filter to output metadata that can be passed to another filter (in this case adrawgraph)
+
:
divides between options of the same filter
+
reset=1
tells the filter to calculate the stats on every frame (increasing this number would calculate stats for groups of frames)
+
,
comma divides one filter in the chain from another
+
adrawgraph=lavfi.astats.Overall.Peak_level:max=0:min=-30.0
draws a graph using the overall peak volume calculated by the astats filter. It sets the max for the graph to 0 (dB) and the minimum to -30 (dB). For more options on data points that can be graphed see the FFmpeg astats documentation
+
size=700x256:bg=Black
sets the background color and size of the output
+
[out]
ends the filterchain and sets the output
+
"
quotation mark to end the lavfi filtergraph
+
+

Example of filter output

+ astats example +
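The peak level being graphed is a dBFS figure: for float samples in [-1.0, 1.0] it is 20·log10 of the largest absolute sample. A sketch of that arithmetic (a simplification for illustration, not a re-implementation of astats):

```python
import math

def peak_level_db(samples):
    # dBFS of the largest absolute sample; -inf for digital silence
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

print(round(peak_level_db([0.0, 0.5, -0.25]), 2))  # -6.02
print(peak_level_db([1.0]))                        # 0.0
```

This is why the graph's max is set to 0 (full scale) and anything quieter plots below it.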
- + - - -
-

Create a burnt in timecode on your image

+ + +
+

Shows all pixels outside of broadcast range

-

ffmpeg -i input_file -vf drawtext="fontfile=font_path:fontsize=font_size:timecode=starting_timecode:fontcolor=font_colour:box=1:boxcolor=box_colour:rate=timecode_rate:x=(w-text_w)/2:y=h/1.2" output_file

+

ffplay -f lavfi "movie='input.mp4', signalstats=out=brng:color=cyan[out]"

+
+
ffplay
starts the command
+
-f lavfi
tells ffplay to use the Libavfilter input virtual device
+
"
quotation mark to start the lavfi filtergraph
+
movie='input.mp4'
declares video file source to apply filter
+
,
comma signifies the end of the video source section and the beginning of the filter section
+
signalstats=out=brng:
tells ffplay to use the signalstats filter and sets its output option to brng (broadcast range)
+
:
indicates there’s another parameter coming
+
color=cyan[out]
sets the color of out-of-range pixels to cyan
+
"
quotation mark to end the lavfi filtergraph
+
+
+

Example of filter output

+ BRNG example +
+
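For 8-bit video, broadcast-range luma runs from 16 to 235; brng flags samples outside that band. A sketch of the per-sample test, assuming 8-bit luma values:

```python
def out_of_broadcast_range(luma_samples, lo=16, hi=235):
    # 8-bit broadcast-range luma is 16-235; values outside it are the
    # samples the brng output highlights (here in cyan)
    return [y for y in luma_samples if y < lo or y > hi]

print(out_of_broadcast_range([0, 16, 128, 235, 255]))  # [0, 255]
```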
+ + + + +
+

Plays vectorscope of video

+ +

ffplay input_file -vf "split=2[m][v], [v]vectorscope=b=0.7:m=color3:g=green[v], [m][v]overlay=x=W-w:y=H-h"

+
+
ffplay
starts the command
+
input_file
path, name and extension of the input file
+
-vf
creates a filtergraph to use for the streams
+
"
quotation mark to start filtergraph
+
split=2[m][v]
Splits the input into two identical outputs and names them [m] and [v]
+
,
comma signifies there is another parameter coming
+
[v]vectorscope=b=0.7:m=color3:g=green[v]
asserts usage of the vectorscope filter and sets a light background opacity (b, alias for bgopacity), sets a background color style (m, alias for mode), and graticule color (g, alias for graticule)
+
,
comma signifies there is another parameter coming
+
[m][v]overlay=x=W-w:y=H-h
declares where the vectorscope will overlay on top of the video image as it plays
+
"
quotation mark to end filtergraph
+
+
+ + + + +
+

This will play two input videos side by side while also applying the temporal difference filter to them

+ +

ffmpeg -i input01 -i input02 -filter_complex "[0:v:0]tblend=all_mode=difference128[a];[1:v:0]tblend=all_mode=difference128[b];[a][b]hstack[out]" -map [out] -f nut -c:v rawvideo - | ffplay -

ffmpeg
starts the command
-
-i input_file
path, name and extension of the input file
-
-vf drawtext=
This calls the drawtext filter with the following options: -
"
quotation mark to start drawtext filter command
-
fontfile=font_path
Set path to font. For example in macOS: fontfile=/Library/Fonts/AppleGothic.ttf
-
fontsize=font_size
Set font size. 35 is a good starting point for SD. Ideally this value is proportional to the video size; for example, use ffprobe to acquire the video height and divide by 14.
-
timecode=starting_timecode
Set the timecode to be displayed for the first frame. Timecode is to be represented as hh:mm:ss[:;.]ff. Colon escaping is determined by the operating system; for example, in Ubuntu timecode='09\\:50\\:01\\:23'. Ideally, this value would be generated from the file itself using ffprobe.
-
fontcolor=font_colour
Set colour of font. Can be a text string such as fontcolor=white or a hexadecimal value such as fontcolor=0xFFFFFF
-
box=1
Enable box around timecode
-
boxcolor=box_colour
Set colour of box. Can be a text string such as boxcolor=black or a hexadecimal value such as boxcolor=0x000000
-
rate=timecode_rate
Framerate of video. For example 25/1
-
x=(w-text_w)/2:y=h/1.2
Sets x and y coordinates for the timecode. These relative values will horizontally centre your timecode in the bottom third regardless of video dimensions.
-
"
quotation mark to end drawtext filter command
-
output_file
path, name and extension of the output file.
+
-i input01 -i input02
Designates the files to use for inputs one and two respectively
+
-filter_complex
Lets FFmpeg know we will be using a complex filter (this must be used for multiple inputs)
+
"
quotation mark to start filtergraph
+
[0:v:0]tblend=all_mode=difference128[a]
Applies the tblend filter (with the settings all_mode and difference128) to the first video stream from the first input and assigns the result to the output [a]
+
[1:v:0]tblend=all_mode=difference128[b]
Applies the tblend filter (with the settings all_mode and difference128) to the first video stream from the second input and assigns the result to the output [b]
+
[a][b]hstack[out]
Takes the outputs from the previous steps ([a] and [b]) and uses the hstack (horizontal stack) filter on them to create the side-by-side output. This output is then named [out]
+
"
quotation mark to end filtergraph
+
-map [out]
Maps the output of the filter chain
+
-f nut
Sets the format for the output video stream to Nut
+
-c:v rawvideo
Sets the video codec of the output video stream to raw video
+
-
tells FFmpeg that the output will be piped to a new command (as opposed to a file)
+
|
Tells the system you will be piping the output of the previous command into a new command
+
ffplay -
Starts ffplay and tells it to use the pipe from the previous command as its input
-

Note: -vf is a shortcut for -filter:v.

+
+

Example of filter output

+ astats example +
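Per pixel, the difference128 blend mode amounts to mid-grey plus the difference between the two frames, clamped to the 8-bit range, so areas that match between frames read as flat grey. A sketch of that arithmetic, assuming 8-bit sample values (an illustration of the blend mode's behaviour, not FFmpeg's implementation):

```python
def difference128(top, bottom):
    # 128 + (top - bottom), clamped to [0, 255];
    # identical pixels land on mid-grey
    return max(0, min(255, 128 + top - bottom))

print(difference128(100, 100))  # 128 -- no temporal change
print(difference128(200, 40))   # 255 (clamped)
print(difference128(40, 200))   # 0 (clamped)
```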
- +
@@ -1326,6 +1286,46 @@

Two Pass Loudness Normalization

+
+
+

View or strip metadata

+ + + +
+

Pull specs from video file

+ +

ffprobe -i input_file -show_format -show_streams -show_data -print_format xml

+

This command extracts technical metadata from a video file and displays it in xml.

+
+
ffprobe
starts the command
+
-i input_file
path, name and extension of the input file
+
-show_format
outputs file container information
+
-show_streams
outputs audio and video codec information
+
-show_data
adds a short “hexdump” to show_streams command output
+
-print_format
Set the output printing format (in this example “xml”; other formats include “json” and “flat”)
+
+

See also the FFmpeg documentation on ffprobe for a full list of flags, commands, and options.
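The xml and json print formats are machine-readable, which makes the probe easy to post-process. A sketch of pulling specs out of a hypothetical, heavily abridged -print_format json result (a real probe returns many more fields per stream plus a fuller format section):

```python
import json

# hypothetical, abridged ffprobe -print_format json output
probe = json.loads("""
{"streams": [{"codec_type": "video", "codec_name": "ffv1", "height": 576}],
 "format": {"format_name": "matroska,webm", "duration": "60.000000"}}
""")

# pick out the first video stream and read its specs
video = next(s for s in probe["streams"] if s["codec_type"] == "video")
print(video["codec_name"], video["height"])  # ffv1 576
print(probe["format"]["format_name"])        # matroska,webm
```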

+
+ + + + +
+

Strips metadata from video file

+ +

ffmpeg -i input_file -map_metadata -1 -c:v copy -c:a copy output_file

+
+
ffmpeg
starts the command
+
-i input_file
path, name and extension of the input file
+
-map_metadata -1
sets metadata copying to -1, which copies nothing
+
-c:v copy
copies video track
+
-c:a copy
copies audio track
+
output_file
Makes copy of original file and names output file
+
+
+ +

Preservation tasks