Main Page


About HDR-VDP

Input images and output predictions of the HDR-VDP-2. HDR image courtesy of HDR-VFX, LLC 2008.

HDR-VDP is a visual metric that compares a pair of images (a reference and a test image) and predicts:

  • Visibility - the probability that the differences between the images are visible to an average observer;
  • Quality - the quality degradation with respect to the reference image, expressed as a mean-opinion score.

The metric can be used for testing fidelity (e.g. how distracting are image compression distortions?) or visibility (e.g. is the information sufficiently visible?).

The image on the right shows how two input images, a reference image (upper left) and a distorted image (lower left), are processed by the HDR-VDP-2 to produce a probability-of-detection map, an overall probability value <math>P_{det}</math> (from 0 to 1), and a quality predictor <math>Q_{MOS}</math> (from 0 to 100). The probability-of-detection map indicates how likely it is that an observer will notice a difference between the two images: red denotes high probability, green low probability. As the distortion is an interleaved pattern of noise and blur, the probability of detection is highest either in the flat areas (for noise) or in the high-contrast areas (for blur).
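The workflow in the figure can be sketched in a few lines of Matlab. This is a minimal sketch, assuming the call signature documented by "doc hdrvdp" (test image, reference image, colour encoding, pixels per degree); the file names and display geometry are placeholders.

```matlab
% Sketch: comparing a test image against a reference with HDR-VDP-2.
% Check "doc hdrvdp" for the exact signature in your release.
reference = hdrread('reference.hdr');  % HDR image, linear RGB
test      = hdrread('test.hdr');

% Angular resolution for a 24" full-HD display viewed from 1 m (assumed setup)
ppd = hdrvdp_pix_per_deg( 24, [size(reference,2) size(reference,1)], 1.0 );

res = hdrvdp( test, reference, 'rgb-bt.709', ppd );

res.P_det     % overall probability of detection (0 to 1)
res.Q         % quality predictor
% res.P_map holds the per-pixel probability-of-detection map;
% see "doc hdrvdp_visualize" for rendering it as in the figure.
```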

Although there are dozens of visible difference metrics that serve a similar purpose, the HDR-VDP-2 (Visual Difference Predictor for HDR images) has several unique advantages:

  • It works with the full range of luminance values found in the real world (HDR images), not only the luminance range that can be shown on a standard display.
  • The complete source code of the metric is available (but do not forget to cite us :-).
  • It produces separate predictions for visibility and quality. These two measures serve very different purposes and are not necessarily well correlated.
  • It is extensively tested and calibrated against actual measurements (see Calibration datasets and reports below) to ensure accurate predictions.

The HDR-VDP-2 works within the complete range of luminance the human eye can see. The input to the metric is a pair of high dynamic range (HDR) images, or a pair of ordinary 8-bit-per-color images converted to actual luminance values assuming a certain display model. The metric takes into account the aspects relevant to viewing high-contrast stimuli, such as the scattering of light in the optics of the eye (OTF), the non-linear response of the photoreceptors to light, and local adaptation.
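As an illustration of such a display model, a simple gamma model can map 8-bit pixel values to absolute luminance. This is a sketch only: the gamma, peak luminance and black level below are assumed values, and the display model actually used by the metric may differ.

```matlab
% Sketch: a simple gamma display model converting an 8-bit image to
% absolute luminance (all display parameters are assumptions).
V       = double( imread('image.png') ) / 255;  % normalized pixel values
gamma   = 2.2;    % display gamma (assumed)
L_peak  = 100;    % peak luminance in cd/m^2 (assumed)
L_black = 0.5;    % display black level in cd/m^2 (assumed)

L = (L_peak - L_black) * V.^gamma + L_black;    % luminance map in cd/m^2
```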

What is new in HDR-VDP-3

HDR-VDP-3 adds new features, such as age-adaptive predictions, is trained on multiple "tasks", and can run much faster on a CUDA-capable GPU. The most important changes are:

  • HDR-VDP-3 now requires an additional parameter `task`, which controls the type of image comparison:
     `side-by-side` - side-by-side comparison of two images
     `flicker` - comparison of two images shown in the same place and swapped every 0.5 seconds
     `detection` - detection of a single difference in a multiple-alternative-forced-choice task (the same task as in HDR-VDP-2)
     `quality` - prediction of image quality
     `civdm` - contrast-invariant visual difference metric that can compare LDR and HDR images
  • `civdm` is a new, experimental metric based on the ideas from [9]. See examples/compare_hdr_vs_tonemapped.m.
  • The CSF has been refitted to newer data (from 2014).
  • The MTF can be disabled or switched to the CIE99 Glare Spread Function.
  • The metric now accounts for age-related effects, as described in [4].
  • The metric includes a model of local adaptation from [5]
  • The tasks `side-by-side` and `flicker` have been calibrated on large datasets from [6] and [7].
  • The task `quality` has been recalibrated using the new UPIQ dataset [8] with over 4,000 SDR and HDR images, all scaled in JOD units.
  • Added multiple examples in the `examples` folder
  • The code has been re-organized and tested to run on a recent version of Matlab (2022a).
  • The code runs in Octave.
  • The code runs on a GPU (in Matlab, if CUDA is available).
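The new `task` parameter changes how the metric is invoked. The sketch below assumes the signature implied by the release notes, with `task` as the first argument; check "doc hdrvdp3" and the `examples` folder of the distribution for the authoritative interface.

```matlab
% Sketch: calling HDR-VDP-3 with the required `task` parameter
% (signature assumed from the release notes; verify with "doc hdrvdp3").
reference = hdrread('reference.hdr');
test      = hdrread('test.hdr');

% Example display geometry: 30" 4K display viewed from 0.6 m (assumed)
ppd = hdrvdp_pix_per_deg( 30, [size(reference,2) size(reference,1)], 0.6 );

res = hdrvdp3( 'side-by-side', test, reference, 'rgb-bt.709', ppd );
res.P_det   % probability that the difference is visible
```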

The differences are explained in more detail in the publication [3].

What is new in HDR-VDP-2

HDR-VDP-2 is a major revision of the original HDR-VDP. The entire architecture of the metric and the visual model have been changed to improve the accuracy of the predictions. The most important changes are:

  • The metric predicts both visibility (detection/discrimination) and image quality (mean-opinion-score).
  • The metric is based on new CSF measurements, made in consistent viewing conditions for a large variety of background luminance and spatial frequencies.
  • The new metric models L-, M-, S-cone and rod sensitivities and is sensitive to different spectral characteristics of the incoming light.
  • Photoreceptor light sensitivity is modelled separately for cones and rods, though L- and M- cones share the same characteristic.
  • The intra-ocular light scatter function (glare) has been improved by fitting to the experimental data.
  • The metric uses a steerable pyramid rather than a cortex transform to decompose an image into spatially- and orientation-selective bands. The steerable pyramid introduces less ringing and, in the general case, is computationally more efficient.
  • The new model of contrast masking introduces inter-band masking and the effect of CSF flattening.
  • A simple spatial-integration formula using probability summation is used to account for the effect of stimuli size.
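For intuition, probability summation takes its standard textbook form: if the stimulus covers <math>N</math> independent detection sites with per-site detection probabilities <math>p_i</math>, the pooled probability of detection is

<math>P_{det} = 1 - \prod_{i=1}^{N} (1 - p_i)</math>

so a larger stimulus, with more sites contributing, is more likely to be detected. The exact pooling formula used by the metric is defined in [1].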

The previous version of the HDR-VDP can still be found at the MPI web pages and in the SourceForge file archive.

News

  • 6 July 2023 - The original HDR-VDP-2 paper [1] received the Test-of-Time Award from SIGGRAPH. From the announcement, the award recognizes "papers that have had a significant and lasting impact on computer graphics and interactive techniques over at least a decade."
  • 30 April 2023 - HDR-VDP 3.0.7 released - adds support for CUDA (Matlab) and Octave (CPU only).
  • 31 December 2022 - HDR-VDP-3 won the HDR Video Quality Measurement Grand Challenge at the WACV conference. This was achieved with a metric that had not been retrained specifically for the challenge and which does not involve deep-learning models.
  • 8 June 2020 - A major release, HDR-VDP-3, is now available for download. Check the README.md file for the list of major changes. This web page has not yet been updated to reflect some major changes in the metric.
  • 29 March 2020 - A minor update to HDR-VDP-2 (v2.2.2) is now available for download. It fixes a few issues with running the metric on newer versions of Matlab and it includes a few examples showing how to run the metric on HDR and SDR images (in the examples folder).
  • 24 October 2014 - HDR-VDP-2.2.0 released - improved quality predictions in HDR images, simplified installation. Details can be found in the "HDR-VDP-2.2" paper (see Publications section below).
  • 28 January 2013 - We published a paper with the details of the CSF measurements (see [10] below).
  • 30 August 2011 - HDR-VDP-2.1.1 released

This is a minor release that fixes the equation for the CSF, which was inconsistent with the paper. New parameter values are provided for the fixed CSF.

  • 17 June 2011 - HDR-VDP-2.1 released

Revision 2.1 fixes an important bug that caused the nCSF to remain fixed below 1 cd/m^2. To extend the operational dynamic range, the CSF was measured at an additional luminance level of 0.002 cd/m^2. The CSF was also measured for all observers, resulting in a more accurate CSF function fit. The predictions are improved for the majority of datasets.

  • 10 May 2011 - HDR-VDP-2.0 matlab code available for download.
  • 28 April 2011 - Together with the release of HDR-VDP-2, the project home page was moved to a Trac wiki, which will hopefully be easier to maintain.
  • 30 March 2011 - Our paper on HDR-VDP-2 (see Publications below) has been accepted for the SIGGRAPH 2011 conference in Vancouver, the most prestigious conference in computer graphics. Many thanks to the entire team who worked on the project and to those who participated in the experiments.

Documentation

After installing HDR-VDP-2, check the documentation for the hdrvdp Matlab function (type "doc hdrvdp" in Matlab). Also make sure to check the Frequently Asked Questions.

The best forum to ask questions is the HDR-VDP Google group. You can also contact the author directly.

Publications


If you find the metric useful, please cite the paper below and include the version number, for example "HDR-VDP-2.2.2 [Mantiuk et al., 2013]". Also mention which predictor (Q, Q_MOS, P_det, etc.) is reported in your paper. The version number should be included so that your results can be reproduced. As new datasets become available, we will update the HDR-VDP-2 code and its calibration parameters and release new versions, but older versions will still be available for download. The HDR-VDP-2 version can be queried by calling the function hdrvdp_version. It returns a fractional number, such as 2.21, which should be interpreted as release 2.2.1.
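The fractional version number can be decoded into the release string mechanically. This is an illustrative helper, not part of the toolbox; only hdrvdp_version itself comes from the distribution.

```matlab
% Decode the fractional version number returned by hdrvdp_version,
% e.g. 2.21 -> '2.2.1' (illustrative helper, not part of the toolbox).
v     = hdrvdp_version();               % e.g. 2.21
major = floor( v );
minor = floor( v*10 ) - major*10;
patch = round( v*100 ) - major*100 - minor*10;
version_str = sprintf( '%d.%d.%d', major, minor, patch );  % '2.2.1'
```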

Main paper:

[1] HDR-VDP-2: A calibrated visual metric for visibility and quality predictions in all luminance conditions
Rafał Mantiuk, Kil Joong Kim, Allan G. Rempel and Wolfgang Heidrich.
In: ACM Transactions on Graphics (Proc. of SIGGRAPH'11), 30(4), article no. 40, 2011
DOI 10.1145/1964921.1964935 pre-print PDF
SIGGRAPH presentation slides

Improved quality predictions for HDR images (HDR-VDP-2.2):

[2] HDR-VDP-2.2: A Calibrated Method for Objective Quality Prediction of High Dynamic Range and Standard Images.
Manish Narwaria, Rafal K. Mantiuk, Matthieu Perreira Da Silva and Patrick Le Callet.
In: Journal of Electronic Imaging, 24(1), 2015
pre-print PDF

A short paper explaining the main changes introduced in HDR-VDP-3:

[3] HDR-VDP-3: A multi-metric for predicting image differences, quality and contrast distortions in high dynamic range and regular content
Rafal K. Mantiuk, Dounia Hammou, Param Hanji.
In: arXiv pre-print arXiv:2304.13625, 2023
Publication link

Age-adaptive predictions:

[4] Mantiuk, R. K., & Ramponi, G. (2018).
Age-dependent predictor of visibility in complex scenes.
Journal of the Society for Information Display, 1–21.
Publication link

A model of local adaptation:

[5] Vangorp, P., Myszkowski, K., Graf, E. W., & Mantiuk, R. K. (2015).
A model of local adaptation.
ACM Transactions on Graphics, 34(6), 1–13.
Publication link

The tasks `side-by-side` and `flicker` have been calibrated on large datasets from:

[6] Wolski, K., Giunchi, D., Ye, N., Didyk, P., Myszkowski, K., Mantiuk, R., … Mantiuk, R. K. (2018).
Dataset and Metrics for Predicting Local Visible Differences.
ACM Transactions on Graphics, 37(5), 1–14.
Publication link

and

[7] Ye, N., Wolski, K., & Mantiuk, R. K. (2019).
Predicting Visible Image Differences Under Varying Display Brightness and Viewing Distance.
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 5429–5437.
Publication link

The task `quality` has been calibrated on a new UPIQ dataset:

[8] Consolidated dataset and metrics for high-dynamic-range image quality
Aliaksei Mikhailiuk, Maria Pérez-Ortiz, Dingcheng Yue, Wilson Suen and Rafał K. Mantiuk.
In: IEEE Transactions on Multimedia, 2021
Publication link Project page

The 'civdm' metric is experimental and based on the ideas from:

[9] Aydin, T. O., Mantiuk, R., Myszkowski, K., & Seidel, H.-P. (2008).
Dynamic range independent image quality assessment.
ACM Transactions on Graphics (Proc. of SIGGRAPH), 27(3), 69.
Publication link

The details on the CSF measurements:

[10] Measurements of achromatic and chromatic contrast sensitivity functions for an extended range of adaptation luminance
Kil Joong Kim, Rafał Mantiuk, Kyoung Ho Lee.
In: Proc. of Human Vision and Electronic Imaging XVIII, IS&T/SPIE's Symposium on Electronic Imaging, article no. 8651-47, 2013
pre-print PDF

Help and support

If you have a question or would like to report a problem with HDR-VDP-2, you can post your question on the HDR-VDP discussion group. Note that the group is moderated because of the large amount of spam, so you may need to wait a day or two before your post appears on the group.

If you represent a company, we encourage you to enquire about a consulting service arrangement by e-mailing [mantiuk@gmail.com]. Such an arrangement ensures confidentiality and a much broader form of support, including advice on the best use of the metric in a particular application, customizations, as well as custom calibration and testing. We are also looking for sponsors of studentships (MSc and PhD), which is a longer-term but also more cost-effective form of technology transfer, especially for UK-based companies.

Download

The current version of the HDR-VDP-3 is available as a Matlab / GNU Octave code and can be downloaded from SourceForge. Older versions of the metric can also be found there.

A Python port of HDR-VDP-2 (with HDR-VDP-3 in preparation) can be found in the JPEG AI metrics repo.

Calibration datasets and reports

Great care was taken to calibrate the HDR-VDP-2 against experimental data. Check the calibration reports for the current release of the metric.

If you want to use any of the datasets we used for calibration, send me an e-mail.

Other metrics

  • FovVideoVDP - a video and image quality metric, implemented in Python and Matlab. It can be used with both foveated and non-foveated content, SDR or HDR. It lacks some more advanced components of HDR-VDP-3, but it is much faster and more suitable for video.
  • Deep Photometric Visual Metric - an image visibility metric that predicts in which parts of an image the differences will be visible. It is equivalent to the `side-by-side` mode of HDR-VDP-3, but uses a trained CNN to predict visibility.
  • PU21 - a colour transfer function that enables any quality metric to process HDR images.