Academic Paper

Adapting Learned Image Codecs to Screen Content via Adjustable Transformations
Document Type: Working Paper
Subject: Electrical Engineering and Systems Science - Image and Video Processing; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Machine Learning
Abstract
As learned image codecs (LICs) become more prevalent, their low coding efficiency for out-of-distribution data becomes a bottleneck for some applications. To improve the performance of LICs for screen content (SC) images without breaking backwards compatibility, we propose to introduce parameterized and invertible linear transformations into the coding pipeline without changing the underlying baseline codec's operation flow. We design two neural networks to act as prefilters and postfilters in our setup to increase the coding efficiency and help with the recovery from coding artifacts. Our end-to-end trained solution achieves up to 10% bitrate savings on SC compression compared to the baseline LICs while introducing only 1% extra parameters.
Comment: 7 pages, 6 figures, 2 tables
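For illustration only, the sketch below shows one plausible way to realize the adjustable-transformation idea described in the abstract: a learnable, invertible channel-mixing matrix applied before a frozen baseline LIC and inverted exactly after decoding, with two small residual CNNs acting as prefilter and postfilter. It is not the authors' implementation; the class names (AdjustableTransformWrapper, FilterCNN), the residual-CNN filter design, and the assumption that the baseline codec returns a CompressAI-style dictionary with "x_hat" and "likelihoods" are all hypothetical.

```python
# Hypothetical sketch, not the paper's code: wrap a frozen baseline LIC with a
# learnable invertible channel transform plus small prefilter/postfilter CNNs.
import torch
import torch.nn as nn


class FilterCNN(nn.Module):
    """Small residual CNN used as a prefilter or postfilter (illustrative only)."""

    def __init__(self, channels: int = 3, hidden: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual correction keeps the filter lightweight


class AdjustableTransformWrapper(nn.Module):
    """Applies a parameterized invertible linear (channel) transform before a frozen
    baseline codec and its exact inverse after decoding, leaving the codec's own
    operation flow and bitstream format untouched."""

    def __init__(self, baseline_codec: nn.Module, channels: int = 3):
        super().__init__()
        self.codec = baseline_codec  # pre-trained LIC, kept frozen
        for p in self.codec.parameters():
            p.requires_grad_(False)
        # Learnable channel-mixing matrix, initialized to identity so it is
        # invertible at the start of training.
        self.weight = nn.Parameter(torch.eye(channels))
        self.prefilter = FilterCNN(channels)
        self.postfilter = FilterCNN(channels)

    def forward(self, x):
        x = self.prefilter(x)
        # Forward channel transform: y[c] = sum_k W[c, k] * x[k]
        x = torch.einsum("ck,bkhw->bchw", self.weight, x)
        out = self.codec(x)  # assumed to return {"x_hat": ..., "likelihoods": ...}
        # Invert the transform to return to the codec's native color space,
        # then let the postfilter suppress remaining coding artifacts.
        inv = torch.linalg.inv(self.weight)
        x_hat = torch.einsum("ck,bkhw->bchw", inv, out["x_hat"])
        x_hat = self.postfilter(x_hat)
        return {"x_hat": x_hat, "likelihoods": out["likelihoods"]}
```

Under a setup like this, only the mixing matrix and the two filter networks would be trained end-to-end with a rate-distortion loss on the returned likelihoods, which is consistent with the abstract's claim of adapting to screen content while adding only a small fraction of extra parameters and preserving backwards compatibility with the baseline codec.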