
Feature based no-reference continuous video quality prediction model for coded stereo video

Formal Metadata

Title
Feature based no-reference continuous video quality prediction model for coded stereo video
Title of Series
Part Number
23
Number of Parts
31
Author
License
CC Attribution - NoDerivatives 2.0 UK: England & Wales:
You are free to use, copy, distribute and transmit the work or content in unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
In this paper, we propose a continuous no-reference video quality evaluation model for MPEG-2 MP@ML coded stereoscopic video based on spatial, temporal, and disparity features, incorporating human visual system characteristics. We believe edge distortion is the major cue for perceiving spatial distortion in an image frame, and that it depends strongly on the smooth and non-smooth areas of the frame. We also claim that the perceived depth of an image or video depends mainly on the central objects and structures of its content; thus, the visibility of depth is firmly dependent on object distance, such as near, far, and very far. Temporal perception, in turn, is based mostly on video jerkiness, which depends on both the motion and the scene content of the video. Therefore, this method evaluates segmented local features: edge distortion over smooth and non-smooth areas, and depth measures based on object distance. Video jerkiness is then estimated from segmented temporal information. Different weighting factors are applied to the edge distortion and depth features to measure the overall features of a temporal segment, and all features are calculated separately for each temporal segment. A subjective stereo video database containing both symmetrically and asymmetrically coded videos is used to verify the performance of the model. The results indicate that the proposed model has sufficient prediction performance.
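The per-segment pooling described in the abstract, where edge distortion, depth, and jerkiness features are combined with weighting factors and then evaluated separately for each temporal segment, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: all feature names, weights, and the linear pooling form are assumptions for clarity.

```python
# Hypothetical sketch of the per-segment feature pooling outlined in the
# abstract. Weights and the linear combination are illustrative
# placeholders, not the authors' actual parameters or model form.

def segment_quality(edge_smooth, edge_nonsmooth,
                    depth_near, depth_far, depth_veryfar,
                    jerkiness,
                    w_edge=(0.6, 0.4),          # smooth vs. non-smooth areas
                    w_depth=(0.5, 0.3, 0.2),    # near, far, very far objects
                    alpha=0.5, beta=0.3, gamma=0.2):
    """Pool segmented local features into one score for a temporal segment."""
    edge = w_edge[0] * edge_smooth + w_edge[1] * edge_nonsmooth
    depth = (w_depth[0] * depth_near
             + w_depth[1] * depth_far
             + w_depth[2] * depth_veryfar)
    return alpha * edge + beta * depth + gamma * jerkiness

def video_quality(segments):
    """Average per-segment scores over all temporal segments of the video."""
    scores = [segment_quality(**seg) for seg in segments]
    return sum(scores) / len(scores)
```

In this sketch each temporal segment contributes one pooled score, and the video-level prediction is simply their mean; the actual model may use a different pooling strategy.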