
RankDVQA-mini: Knowledge Distillation-Driven Deep Video Quality Assessment

PCS (2024)

Abstract
Deep learning-based video quality assessment (deep VQA) has demonstrated significant potential in surpassing conventional metrics, with promising improvements in terms of correlation with human perception. However, the practical deployment of such deep VQA models is often limited due to their high computational complexity and large memory requirements. To address this issue, we aim to significantly reduce the model size and runtime of one of the state-of-the-art deep VQA methods, RankDVQA, by employing a two-phase workflow that integrates pruning-driven model compression with multi-level knowledge distillation. The resulting lightweight full-reference quality metric, RankDVQA-mini, requires less than 10% of the model parameters of its full version (14% in terms of FLOPs), while achieving a quality prediction performance that is superior to most existing deep VQA methods. The source code of RankDVQA-mini has been released at https://chenfeng-bristol.github.io/RankDVQA-mini/ for public evaluation.
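The two ingredients named in the abstract, pruning-driven model compression and knowledge distillation, can be illustrated with a minimal generic sketch. This is not the authors' implementation: the function names, the magnitude-pruning criterion, the classification-style distillation loss, and the alpha/temperature values are all illustrative assumptions (the paper's actual method uses multi-level distillation on a full-reference VQA network).

```python
# Hypothetical sketch of a two-phase compression workflow:
# (1) magnitude-based weight pruning, (2) a standard distillation loss that
# blends the hard-label loss with a softened teacher-matching term.
# All names and hyperparameters here are illustrative, not from the paper.
import math


def prune_by_magnitude(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    k = int(len(weights) * sparsity)
    # Indices of the weights we keep: the (n - k) largest in magnitude.
    keep = set(sorted(range(len(weights)), key=lambda i: abs(weights[i]))[k:])
    return [w if i in keep else 0.0 for i, w in enumerate(weights)]


def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T gives softer distributions."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]


def distillation_loss(student_logits, teacher_logits, label, alpha=0.5, temp=2.0):
    """Blend the hard-label cross-entropy with a soft teacher-matching term."""
    p_teacher = softmax(teacher_logits, temp)
    p_student = softmax(student_logits, temp)
    # Cross-entropy against the teacher's softened outputs, scaled by T^2
    # (the conventional gradient-magnitude correction in distillation).
    soft = -sum(t * math.log(s) for t, s in zip(p_teacher, p_student)) * temp * temp
    # Standard cross-entropy against the ground-truth label.
    hard = -math.log(softmax(student_logits)[label])
    return alpha * hard + (1 - alpha) * soft
```

For example, pruning `[0.1, -2.0, 0.05, 3.0]` at 50% sparsity zeroes the two smallest-magnitude entries, leaving `[0.0, -2.0, 0.0, 3.0]`; the distillation loss is then minimized during fine-tuning so the pruned student tracks both the labels and the uncompressed teacher.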
Key words
Video quality assessment, deep learning, model compression, knowledge distillation, RankDVQA-mini