VMAF
Short for Video Multimethod Assessment Fusion, VMAF is a full-reference video quality assessment algorithm developed primarily by Netflix.
Installation
VMAF comes as part of libvmaf. There are two ways it is commonly used:
- As an FFmpeg filter
- As a standalone binary
The instructions below are written for Linux & macOS. On Windows, you can use the Windows Subsystem for Linux to follow along.
Standalone Binary
In order to build from source, follow the instructions below.
- Install the required dependencies via your package manager of choice. The necessary dependencies are nasm, ninja-build, doxygen, & xxd.
- Clone the repository & enter the corresponding directory:
git clone https://github.com/Netflix/vmaf/
cd vmaf/
- Compile with meson & ninja:
meson setup libvmaf libvmaf/build --buildtype release -Denable_float=true
sudo ninja -vC libvmaf/build install
Now, you can run the VMAF binary with the following command:
vmaf --help
If you would rather not build from source, you can grab the latest build for your operating system from the VMAF GitHub releases.
Now, you can:
/path/to/vmaf --reference reference.y4m --distorted distorted.y4m
Tip: If the VMAF binary exists but is not marked as executable, you might need to run chmod +x /path/to/vmaf
An explainer on the command-line flags can be found here.
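As an illustration, a slightly fuller invocation might look like the following (a sketch; verify flag names such as --json, --output, & --threads against vmaf --help, since options can differ between releases):
# score a pair of Y4M files with 4 threads & write per-frame results as JSON
vmaf --reference reference.y4m --distorted distorted.y4m \
     --threads 4 --json --output scores.json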
The disadvantage of using the binary is that you need .yuv or .y4m files; you can overcome that by using named pipes. A simple example using FFmpeg as a decoder:
# create the pipes
mkfifo ref.pipe
mkfifo dist.pipe
# run these each in a new terminal; order doesn't matter
ffmpeg -v error -i ref.mkv -strict -1 -f yuv4mpegpipe - > ref.pipe
ffmpeg -v error -i dist.mkv -strict -1 -f yuv4mpegpipe - > dist.pipe
# after starting the two ffmpeg processes,
# start vmaf in a new terminal
/path/to/vmaf --reference ref.pipe --distorted dist.pipe
# delete the pipes after usage
rm ref.pipe dist.pipe
The advantages of this are:
- No need for an FFmpeg build with --enable-libvmaf
- Clear & simple usage of VMAF's various options, like --aom_ctc
Disadvantages are:
- Difficult/awkward to use without a wrapper script (a sketch of such a wrapper follows below)
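For illustration, a minimal wrapper script might look something like this (a sketch assuming ffmpeg & the vmaf binary are on your PATH; the file names and flags are only examples):
#!/usr/bin/env bash
# usage: ./vmaf_pipe.sh reference.mkv distorted.mkv
set -euo pipefail

ref="$1"
dist="$2"
tmp="$(mktemp -d)"

# create the named pipes in a temporary directory
mkfifo "$tmp/ref.pipe" "$tmp/dist.pipe"

# decode both inputs to y4m in the background, writing into the pipes
ffmpeg -v error -i "$ref" -strict -1 -f yuv4mpegpipe - > "$tmp/ref.pipe" &
ffmpeg -v error -i "$dist" -strict -1 -f yuv4mpegpipe - > "$tmp/dist.pipe" &

# read both pipes with the standalone vmaf binary
vmaf --reference "$tmp/ref.pipe" --distorted "$tmp/dist.pipe" --json --output vmaf.json

# wait for the decoders to finish & clean up the pipes
wait
rm -r "$tmp"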
FFmpeg Filter
If you are not sure if you have VMAF available in FFmpeg, you can check by running ffmpeg -help and looking for whether or not the --enable-libvmaf flag appears in the banner that is printed to the terminal. If you do not see this, you will need to build FFmpeg from source with the --enable-libvmaf flag or grab a pre-compiled build of FFmpeg with the flag enabled.
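For a quicker check, you can also search FFmpeg's filter list directly (assuming a POSIX shell with grep available):
# prints a line containing "libvmaf" if the filter was compiled in
ffmpeg -hide_banner -filters | grep libvmaf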
Via the VMAF GitHub repo:
Using VMAF with FFmpeg
After installing libvmaf, you can use it with FFmpeg. Under the FFmpeg directory, configure, build and install FFmpeg with:
./configure --enable-libvmaf
make -j4
make install
Using FFmpeg+libvmaf is very powerful, as you can create complex filters to calculate VMAF directly on videos of different encoding formats and resolutions. For the best practices of computing VMAF at the right resolution, refer to our tech blog.
We provide a few examples of how you can construct the FFmpeg command line and use VMAF as a filter. Note that you may need to download the test videos from vmaf_resource.
Below is an example of how you can run FFmpeg+libvmaf on a pair of YUV files. First, download the reference video src01_hrc00_576x324.yuv and the distorted video src01_hrc01_576x324.yuv. -r 24 sets the frame rate (note that it needs to be before -i), and PTS-STARTPTS synchronizes the PTS (presentation timestamp) of the two videos (this is crucial if one of your videos does not start at PTS 0, for example, if you cut your video out of a long video stream). It is important to set the frame rate and the PTS right, since FFmpeg filters synchronize based on timestamps instead of frames.
The log_path is set to standard output /dev/stdout. It uses the model_path at location /usr/local/share/model/vmaf_float_v0.6.1.json (which is the default and can be omitted).
ffmpeg -video_size 576x324 -r 24 -pixel_format yuv420p -i src01_hrc00_576x324.yuv \
-video_size 576x324 -r 24 -pixel_format yuv420p -i src01_hrc01_576x324.yuv \
-lavfi "[0:v]setpts=PTS-STARTPTS[reference]; \
[1:v]setpts=PTS-STARTPTS[distorted]; \
[distorted][reference]libvmaf=log_fmt=xml:log_path=/dev/stdout:model_path={your_vmaf_dir}/model/vmaf_v0.6.1.json:n_threads=4" \
-f null -
The expected output is:
[libvmaf @ 0x7fcfa3403980] VMAF score: 76.668905
Below is a more complicated example where the inputs are packaged .mp4 files. It takes in 1) a reference video Seeking_30_480_1050.mp4 of 480p and 2) a distorted video Seeking_10_288_375.mp4 of 288p upsampled to 720x480 using bicubic, and computes VMAF on the two 480p videos. Bicubic is used as the recommended upsampling method (also see the techblog for more details).
ffmpeg \
-r 24 -i Seeking_30_480_1050.mp4 \
-r 24 -i Seeking_10_288_375.mp4 \
-lavfi "[0:v]setpts=PTS-STARTPTS[reference]; \
[1:v]scale=720:480:flags=bicubic,setpts=PTS-STARTPTS[distorted]; \
[distorted][reference]libvmaf=log_fmt=xml:log_path=/dev/stdout:model_path={your_vmaf_dir}/model/vmaf_v0.6.1.json:n_threads=4" \
-f null -
The expected output is:
[libvmaf @ 0x7fb5b672bc00] VMAF score: 51.017497
See FFmpeg's guide to libvmaf, the FFmpeg Filtering Guide for more examples of complex filters, and the Scaling Guide for information about scaling and using different scaling algorithms.
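If you would rather keep the per-frame scores instead of only the final score line, the filter's log_path can point at a file and log_fmt can be switched, for example to JSON. Below is a sketch reusing the first pair of YUV files; the default model is used since none is specified, and the option names are those of the libvmaf filter shown above:
ffmpeg -video_size 576x324 -r 24 -pixel_format yuv420p -i src01_hrc00_576x324.yuv \
-video_size 576x324 -r 24 -pixel_format yuv420p -i src01_hrc01_576x324.yuv \
-lavfi "[0:v]setpts=PTS-STARTPTS[reference]; \
[1:v]setpts=PTS-STARTPTS[distorted]; \
[distorted][reference]libvmaf=log_fmt=json:log_path=vmaf.json:n_threads=4" \
-f null -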
Note about the model path on Windows
Due to Windows not having a good default for where to pull the VMAF model from, you will always need to specify model_path when calling libvmaf through ffmpeg. However, you will need to be careful about the path you pass to model_path.
If you are using a relative path for your model_path, you can completely ignore this whole section. If instead you are trying to use an absolute Windows path (D:\mypath\vmaf_v0.6.1.json) for your model_path argument, you will need to be careful so that ffmpeg passes the right path to libvmaf.
The final command line will depend on what shell you are running ffmpeg through, so you will need to go through the following steps to make sure your path is okay.
- Convert all of the backslashes \ to forward slashes / (D:/mypath/vmaf_v0.6.1.json)
- Escape the colon : character using a backslash \ (D\:/mypath/vmaf_v0.6.1.json)
- Then escape that backslash with another backslash (D\\:/mypath/vmaf_v0.6.1.json)
- The next step will depend on the shell that will run ffmpeg:
For PowerShell and Command Prompt, this will be enough, and your final ffmpeg command line will look something like:
./ffmpeg.exe -i dist.y4m -i ref.y4m \
-lavfi libvmaf=model_path="D\\:/mypath/vmaf_v0.6.1.json" \
-f null -
Quoting the path
Note: the path is only quoted here for trivial reasons; in this specific case it can be left unquoted, or you can quote the whole part after -lavfi, starting from libvmaf to json, and it should give the same result, since neither shell treats the \ as a special character.
For bash, or specifically msys2 bash, there are some additional considerations. The first thing to know is that bash treats the backslash character \ a bit specially, in that it is an escape character when not placed inside single quotes. The second thing to know is that msys2's bash attempts to convert a posix-like path (/mingw64/share/model/vmaf_v0.6.1.json) to a Windows mixed path (D:/msys2/mingw64/share/model/vmaf_v0.6.1.json) when passing arguments to a program. Normally this would be fine; however, in our case it works against us, since we cannot allow it to convert the path to a normal path with an un-escaped colon. For this, we will need to not only escape the escaped backslash, but also pass the MSYS2_ARG_CONV_EXCL environment variable with the value of * to make sure it doesn't apply that special conversion to any of the arguments:
MSYS2_ARG_CONV_EXCL="*" \
./ffmpeg.exe -i dist.y4m -i ref.y4m -lavfi \
libvmaf=model_path="D\\\:/mypath/vmaf_v0.6.1.json" -f null -
Quotes
Note: in this case, the quotes are not as trivial as in the PowerShell/cmd version, as removing the quotes entirely will require you to re-escape the backslash, resulting in 4 total backslashes, but quoting the whole argument will be fine.
Single Quotes
Second Note: if you use single quotes around the path, it will be fine as well, and the final command line would look like:
MSYS2_ARG_CONV_EXCL="*" \
./ffmpeg.exe -i dist.y4m -i ref.y4m -lavfi \
libvmaf=model_path='D\\:/mypath/vmaf_v0.6.1.json' -f null -
with only a double backslash instead of a triple.
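In practice, the simplest way to sidestep the escaping entirely is to use a relative path, as mentioned above. A sketch, run from the directory that contains the model file:
# relative model paths need no colon escaping in any shell
./ffmpeg.exe -i dist.y4m -i ref.y4m -lavfi libvmaf=model_path=vmaf_v0.6.1.json -f null -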
Scoring
Scores range from 0 to 100 and are best interpreted in a linear way: 100 means perfect quality, and 0 means not recognisable. More info can be found in the Best Practices section here. VMAF aligns with mean opinion scores (MOS) really well at low/medium bitrates, as stated in this benchmark.
Some weaknesses
- Newer codecs like AV1 and VVC introduce new kinds of artifacts that v0.6.2 (the current model as of January 2024) doesn't recognise, which is why its performance might degrade; for example, high-motion scenes can be affected badly
- It's bad at "transparent" levels of quality, i.e. quality differences that the average viewer might not notice
- Synthetic grain throws off scores. This issue is not isolated to VMAF, but it should be noted regardless. Tip: place -filmgrain 0 before -i in the above ffmpeg commands (limited to decoding with dav1d; see the example after this list).
TODO: replace this tip with an export_side_data solution
- As of January 2024, it doesn't work on HDR content; nothing prevents you from feeding it un-tonemapped PQ, but scores will be off
- In contrast with SSIMULACRA2, it focuses on appeal, not necessarily on fidelity. They often align, but not always.
- Due to its machine-learning nature, comparing the same video to itself will not always result in a score of 100
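For the film grain tip above, a sketch of what the command could look like when the distorted file is AV1 (the -c:v libdav1d option forces the dav1d decoder so that -filmgrain 0 applies; the file names are placeholders):
# decode the distorted AV1 stream without film grain synthesis, then score it
ffmpeg -c:v libdav1d -filmgrain 0 -i distorted_av1.mkv -i reference.mkv \
-lavfi "[0:v]setpts=PTS-STARTPTS[distorted]; \
[1:v]setpts=PTS-STARTPTS[reference]; \
[distorted][reference]libvmaf" -f null -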
Comparing to SSIMULACRA2
One big advantage over SSIMULACRA2 is the inclusion of some temporal information. This means that VMAF weights frames that have a lot of motion more heavily. Meanwhile, SSIMULACRA2-based solutions compare each frame to the reference frame individually, since it is an image metric at heart. VMAF also wins in speed and general ease of use.