The market is heating up with solutions that claim to reduce the size of video files by impressive percentages, some up to 50%. But reducing bitrate is only one side of the coin – what happens to the quality of the video when the bitrate is reduced? How can you make sure that the right bits are eliminated and the visual quality is not compromised? I only know one way to do it right: verifying video quality with a perceptual quality measure.
Perceptual quality measuring goes beyond existing quality metrics such as PSNR and SSIM: it quantifies the actual visual quality of the content and demonstrates a very high correlation with subjective human ratings. How can I be so sure? After six years of intensive research and development here at Beamr, you’ll have to trust me on this.
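For context, PSNR, the most common of those baseline metrics, is a pure pixel-difference measure, which is exactly why it can disagree with what viewers actually perceive. A minimal sketch of the standard PSNR formula over flat lists of 8-bit pixel values (the function name and list-based representation are illustrative, not any particular library's API):

```python
import math

def psnr(original, compressed, max_val=255.0):
    """Peak signal-to-noise ratio (in dB) between two equal-length
    sequences of pixel values. Higher means closer to the original;
    identical frames give infinity."""
    if len(original) != len(compressed):
        raise ValueError("frames must have the same number of pixels")
    # Mean squared error over all pixels.
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
    if mse == 0:
        return float("inf")
    # Standard PSNR definition: 10 * log10(MAX^2 / MSE).
    return 10 * math.log10(max_val ** 2 / mse)

# Three pixels, slightly perturbed: small errors yield a high PSNR (~45 dB).
print(psnr([100, 150, 200], [101, 149, 202]))
```

Note that this score depends only on squared pixel differences; two distortions with the same MSE get the same PSNR even if one is far more visible to a human, which is the gap perceptual measures aim to close.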
I’m also relying on testing that follows the strict requirements of the ITU-R BT.500 standard for subjective image quality assessment, in which the correlation of our own quality measure with subjective results has been proven. I can vouch for the testing we conducted with real users under actual viewing conditions, which validated that the optimized version is perceptually indistinguishable from the original.
Video optimization based on perceptual quality measuring works by determining the subjective quality level of the input and output video streams, and using this information to control the video encoding process. The key advantage of this method is that it guarantees the preservation of perceptual quality by actually checking the encoded content in a closed loop, on a frame-by-frame basis: specific encoding parameters are adjusted until every output frame is perceptually identical to the corresponding input frame.
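The closed loop described above can be sketched as follows. This is a minimal illustration, not Beamr's actual implementation: the callables `encode`, `decode`, and `quality_score`, the quality `target`, and the choice of the quantization parameter (QP) as the single knob being adjusted are all assumptions made for the sake of the example.

```python
def optimize_frame(frame, encode, decode, quality_score,
                   target=0.99, qp_start=30, qp_min=10):
    """Closed-loop sketch: start at an aggressive (high) QP and spend
    more bits (lower QP) until the decoded frame scores at or above the
    perceptual-quality target against the input frame.
    `encode(frame, qp)` returns a bitstream, `decode(bitstream)` returns
    a reconstructed frame, and `quality_score(a, b)` returns a similarity
    in [0, 1] (1.0 = perceptually identical)."""
    qp = qp_start
    while qp >= qp_min:
        bitstream = encode(frame, qp)
        # Check the actual encoded output, not the input to a pre-filter.
        if quality_score(frame, decode(bitstream)) >= target:
            return bitstream, qp  # fewest bits that still pass the check
        qp -= 1  # spend more bits and try again
    # Fallback: best quality the parameter range allows.
    return encode(frame, qp_min), qp_min

# Toy stand-ins so the sketch runs: "encoding" is coarse quantization,
# "decoding" is the identity, and the score is mean-absolute-error based.
def toy_encode(frame, qp):
    return [round(p / qp) * qp for p in frame]

def toy_decode(bitstream):
    return bitstream

def toy_score(a, b):
    return 1 - sum(abs(x - y) for x, y in zip(a, b)) / (255 * len(a))

bits, qp = optimize_frame([100, 150, 200], toy_encode, toy_decode, toy_score)
print(qp, toy_score([100, 150, 200], toy_decode(bits)))
```

The design point is that the quality check runs on the *output* of the encoder, so any parameter choice that would degrade perceived quality is caught and retried before the frame is committed.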
Other optimization methods use pre-filtering, meaning that they work in an open loop since the video is filtered before it’s encoded. These methods don’t check the output frames and are unable to feed back the information to the pre-filter since they don’t employ a quality measure. This means they have no way of guaranteeing that the output of the encoding or optimization process is perceptually identical to the input.
Furthermore, some optimization solutions require information about the viewing conditions of the video, which creates the need for special client-side software and a real-time back channel between the client and the pre-filter. In the absence of such information, the pre-filter makes assumptions about the viewing conditions, which may not be accurate.
Based on what I hear from premium content owners and producers, pre-filtering solutions are suitable only for specific use cases, such as videos that originate from low-quality smartphones and contain a lot of noise. But they are a non-starter in any general application where video quality must be kept high regardless of the input content.
In these applications, closed-loop inspection of each output video frame, comparing it to the input frame using a subjective quality measure that correlates highly with human vision, is the only way to ensure that the perceptual quality of the video is never compromised.
I hope you now have a better understanding of the difference between the various optimization methods, and the advantage of perceptual quality measuring over all other methods. If you have any questions, please contact me through Facebook, Twitter or LinkedIn.