SFB TRR 161 TP A 05 Image and Video Quality Assessment: Benchmarks, Hybrid Methods, and Perceptual, Dynamic Metrics
- Department of Computer and Information Science
- SFB TRR 161 Quantitative Methods for Visual Computing
(2022): Boosting for Visual Quality Assessment with Applications for Frame Interpolation Methods
Author: Men, Hui
(2020): Subjective Assessment of Global Picture-Wise Just Noticeable Difference. 2020 IEEE International Conference on Multimedia & Expo Workshops (ICMEW). Piscataway, NJ: IEEE, 2020. ISBN 978-1-72811-485-9
The picture-wise just noticeable difference (PJND) for a given image and a compression scheme is a statistical quantity giving the smallest distortion that a subject can perceive when the image is compressed with that scheme. The PJND is determined with subjective assessment tests for a sample of subjects. We introduce and apply two methods of adjustment in which the subject interactively selects the distortion level at the PJND using either a slider or keystrokes. We compare the results and the times required to those of the adaptive binary-search approach, in which image pairs with distortions that bracket the PJND are displayed and the difference in distortion levels is reduced until the PJND is identified. For all three methods, two images are compared using the flicker test, in which the displayed images alternate at a frequency of 8 Hz. Unlike previous work, our goal is a global one: determining the PJND not only for the original pristine image but also for a sequence of compressed versions. Results for the MCL-JCI dataset show that the PJND measurements based on adjustment are comparable with those of the traditional binary-search approach, yet significantly faster. Moreover, we conducted a crowdsourcing study with side-by-side comparisons and forced choice, which suggests that the flicker test is more sensitive than a side-by-side comparison.
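The adaptive binary-search procedure described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: `subject_sees_difference` stands in for the subjective flicker-test judgment and is an assumed callback.

```python
# Hypothetical sketch of the adaptive binary-search PJND procedure.
# `subject_sees_difference(i)` is an assumed callback returning True when
# the flicker test between the reference and distortion level i reveals
# a visible difference; in the real study this is a human judgment.

def find_pjnd(levels, subject_sees_difference):
    """Binary-search the smallest distortion level the subject can
    perceive relative to the reference (levels[0], the pristine image).

    levels: distortion levels sorted from pristine to most distorted.
    Returns the PJND level, or None if no level is distinguishable.
    """
    lo, hi = 0, len(levels) - 1        # bracket the PJND
    if not subject_sees_difference(hi):
        return None                    # even the strongest distortion is invisible
    while hi - lo > 1:                 # shrink the bracket to adjacent levels
        mid = (lo + hi) // 2
        if subject_sees_difference(mid):
            hi = mid                   # PJND is at or below mid
        else:
            lo = mid                   # PJND is above mid
    return levels[hi]
```

With n distortion levels, the subject answers only about log2(n) flicker-test questions per bracket, which is why the methods of adjustment had to be compared against this baseline for both accuracy and time.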
(2018): Blind Image and Video Quality Assessment
The popularity and affordability of handheld imaging devices, especially smartphones, along with the rapid development of social media such as Facebook, Flickr, and YouTube, have made videos and images a popular and integral part of everyday communication. With the development of image and video transmission systems and the advancement of consumer video technologies, it is becoming increasingly important to improve visual quality in order to meet the quality expectations of end users. This thesis focuses on designing algorithms that accurately predict the perceptual and technical quality of images or videos, as well as on constructing authentic video quality databases. Image quality assessment (IQA) can be classified based on the amount of information available to the algorithm. This thesis focuses on blind or no-reference image quality assessment (BIQA), where only the input (possibly distorted) image is available to the algorithm. BIQA is further classified into two main groups based on the need for subjective mean opinion scores (MOS): methods that require the corresponding MOS for each training image, known as opinion-aware, and fully blind IQA methods, which have no access to any subjective scores. In Chapters 3 and 4, we propose two opinion-aware image quality methods. The first is based on Wakeby modeling of natural scene statistics (NSS), and the second incorporates aesthetics and content information as well as NSS features in order to predict human judgments of image quality more accurately. The development of modern imaging technology in smartphones allows a variety of image and video applications, such as iris recognition systems, to be integrated into mobile devices. Ensuring the quality of iris images acquired in visible light poses many challenges to iris recognition in an uncontrolled environment.
In Chapter 5, we propose a real-time, general-purpose, and fully blind image quality metric for filtering out poor-quality iris images, in order to improve the recognition performance of iris recognition systems and to reduce the false rejection rate. Training machine learning methods for video quality assessment (VQA) requires a wide range of video sequences with diverse semantic contexts, visual appearance, and types and combinations of quality distortions. Existing VQA databases are mostly benchmarks meant for training restricted quality models; they contain few original content videos, which have been artificially distorted without concern for the dataset's ecological validity. Chapter 6 discusses the results of our joint work with the Multimedia Signal Processing (MMSP) group of the University of Konstanz. We present the challenges and choices we have made in creating VQA databases with "in the wild" authentic distortions, depicting a wide variety of content. Due to the large number of videos, we crowdsourced the subjective scores using the widely used Figure Eight platform.
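The opinion-aware training setup described in the abstract can be sketched as a regression from image features to MOS labels. The following is a minimal illustrative sketch only: the toy statistics and the linear model are assumptions for demonstration, not the thesis's Wakeby-based NSS method.

```python
# Minimal sketch of opinion-aware BIQA training: a regressor maps image
# features (here, toy NSS-style statistics) to subjective MOS labels.
# Feature choice and model are illustrative assumptions, not the actual
# Wakeby-based method from the thesis.
import numpy as np

def nss_features(img):
    """Toy natural-scene-statistics features of a grayscale image:
    mean, standard deviation, and a kurtosis proxy of the normalized
    mean-subtracted luminance. A stand-in for real NSS features."""
    x = img - img.mean()
    s = x.std() + 1e-8
    return np.array([img.mean(), s, np.mean((x / s) ** 4)])

def fit_linear_biqa(images, mos):
    """Opinion-aware training: least-squares fit from features to MOS."""
    X = np.stack([nss_features(im) for im in images])
    X = np.hstack([X, np.ones((len(X), 1))])  # bias column
    w, *_ = np.linalg.lstsq(X, np.asarray(mos, dtype=float), rcond=None)
    return w

def predict_quality(w, img):
    """Blind prediction: only the (possibly distorted) image is needed."""
    f = np.append(nss_features(img), 1.0)
    return float(f @ w)
```

A fully blind (opinion-unaware) method would drop `fit_linear_biqa` entirely and score images without ever seeing MOS labels, which is the distinction the abstract draws between the two BIQA groups.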
SFB: 631/15 TP A 05
Period: 01.07.2015 – 30.06.2019