Translating Opinions into Fact When it Comes to Video Quality

This post was originally featured at https://www.linkedin.com/pulse/translating-opinions-fact-when-comes-video-quality-mark-donnigan 

In this post, we attempt to de-mystify the topic of perceptual video quality, which is the foundation of Beamr’s content adaptive encoding and content adaptive optimization solutions. 

National Geographic has a hit TV franchise on its hands. It’s called Brain Games, starring Jason Silva, a talent described as “a Timothy Leary of the viral video age” by the Atlantic. Brain Games is accessible, fun, and accurate. It’s a dive into brain science that relies on well-produced demonstrations of illusions and puzzles to showcase the power, and the limitations, of the human brain. It’s compelling TV that illuminates how we perceive the world. (Intrigued? Watch the first minute of this clip featuring Charlie Rose, Silva, and excerpts from the show: https://youtu.be/8pkQM_BQVSo)

At Beamr, we’re passionate about the topic of perceptual quality. In fact, we are so passionate that we built an entire company based on it. Our technology leverages science’s knowledge of the human visual system to significantly reduce video delivery costs, reduce buffering, and speed up video starts without any change in the quality perceived by viewers. We’re also inspired by the show’s ability to turn complex subjects into compelling, accessible television without distorting the truth. No easy feat. But let’s see if we can pull it off with a discussion of video quality measurement, which is also a dense topic.

Basics of Perceptual Video Quality

Our brains are amazing, especially in the way we process rich visual information. If a picture’s worth 1,000 words, what’s 60 frames per second in 4K HDR worth?

The answer varies based on what part of the ecosystem or business you come from, but we can all agree that it’s really impactful. And data intensive, too. But our eyeballs aren’t perfect, and our brains aren’t either, as Brain Games points out. As such, it’s odd that established metrics for video compression quality in the TV business have been built on the idea that human vision is mechanically perfect.

See, video engineers have historically relied heavily on two key measures to evaluate the quality of a video encode: Peak Signal to Noise Ratio, or PSNR, and Structural Similarity, or SSIM. Both are ‘objective’ metrics. That is, we use tools to directly measure the physics of the video signal and construct mathematical algorithms from that data to create metrics. But is it possible to really quantify a beautiful landscape with a number? Let’s see about that.

PSNR and SSIM look at different physical properties of a video, but the underlying mechanics of both metrics are similar. You compress a source video, analyze specific properties of both the original and its compressed derivative, and calculate a metric for each. The more similar the two metrics are, the more similar the properties of the two videos, and the more confidently we can say that our manipulation of the video, i.e. our encode, has high or acceptable quality.
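For instance, the PSNR half of that workflow can be sketched in a few lines of Python with NumPy. This is a minimal illustration only: the frames here are synthetic stand-ins, not the output of a real encoder.

```python
import numpy as np

def psnr(original: np.ndarray, encoded: np.ndarray) -> float:
    """Peak Signal to Noise Ratio between two 8-bit frames, in dB."""
    mse = np.mean((original.astype(np.float64) - encoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical frames
    return 10 * np.log10(255.0 ** 2 / mse)

# Synthetic stand-ins for a source frame and its encoded derivative
rng = np.random.default_rng(0)
source = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)
noise = rng.integers(-2, 3, size=source.shape)          # mild "encoding" error
encode = np.clip(source.astype(np.int16) + noise, 0, 255).astype(np.uint8)

print(f"PSNR: {psnr(source, encode):.2f} dB")  # higher = closer to the source
```

The single dB number is exactly the kind of “objective” summary the rest of this post questions.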

Objective Quality vs. Subjective Quality


However, it turns out that these objectively calculated metrics do not correlate well with the human visual experience. In other words, in many cases humans cannot perceive variations that objective metrics highlight, while at the same time objective metrics can miss artifacts a human easily perceives.

The concept that human visual processing might be less than perfect is intuitive. It’s also widely understood in the encoding community. This fact opens a path to saving money, reducing buffering and speeding-up time-to-first-frame. After all, why would you knowingly send bits that can’t be seen?

But given the complexity of the human brain, can we reliably measure opinions about picture quality to know what bits can be removed and which cannot? This is the holy grail for anyone working in the area of video encoding.

Measuring Perceptual Quality

Actually, a rigorous, scientific, and peer-reviewed discipline has developed over the years to accurately measure human opinions about the picture quality on a TV. The math and science behind these methods are memorialized in ITU BT.500, an important standard on the topic originally published in 2008 and updated in 2012. (The International Telecommunication Union is the largest standards body in global telecom.) I’ll provide a quick rundown.

First, a set of clips is selected for testing. A good test has a variety of clips with diverse characteristics: talking heads, sports, news, animation, UGC – the goal is to get a wide range of videos in front of human subjects.

Then, a subject pool of sufficient size is created and screened for 20/20 vision. They are placed in a light-controlled environment with a screen or two, depending on the set-up and testing method.

Instructions for one method are below, as a tangible example.

In this experiment, you will see short video sequences on the screen that is in front of you. Each sequence will be presented twice in rapid succession: within each pair, only the second sequence is processed. At the end of each paired presentation, you should evaluate the impairment of the second sequence with respect to the first one.

You will express your judgment by using the following scale:

5 Imperceptible

4 Perceptible but not annoying

3 Slightly annoying

2 Annoying

1 Very annoying

Observe carefully the entire pair of video sequences before making your judgment.
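The scores collected with this five-point scale are typically reduced to a Mean Opinion Score (MOS), reported with a confidence interval so that outlier viewers don’t dominate the result. A minimal sketch, with invented ratings purely for illustration:

```python
import statistics

# Hypothetical ratings from 15 viewers on the 5-point impairment scale above
ratings = [5, 4, 4, 5, 3, 4, 5, 4, 4, 3, 5, 4, 4, 5, 4]

mos = statistics.mean(ratings)
stdev = statistics.stdev(ratings)
# 95% confidence interval for the mean (normal approximation)
ci95 = 1.96 * stdev / (len(ratings) ** 0.5)

print(f"MOS = {mos:.2f} ± {ci95:.2f}")
```

Run per clip and per bitrate, numbers like these are what let researchers say one encode is perceptually equivalent to another.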

As you can imagine, testing like this is an expensive proposition indeed. It requires specialized facilities, trained researchers, vast amounts of time, and a budget to recruit subjects.

Thankfully, the rewards were worth the effort for teams like Beamr that have been doing this for years.

It turns out that if you run these types of subjective tests, you’ll find there are numerous ways to remove 20-50% of the bits from a video signal without losing the ‘eyeball’ video quality – even when objective metrics like PSNR and SSIM produce failing grades.

But most of the methods that have been tried are still stuck in academic institutions or research labs, because the complexity of upgrading or integrating them into the playback and distribution chain makes them unusable. Have you ever had to update 20 million set-top boxes? Well, if you have, you know exactly what I’m talking about.

We know the broadcast and large-scale OTT industry, which is why, when we developed our approach to measuring perceptual quality and applied it to reducing bitrates, we insisted on staying 100% inside the AVC (H.264) and HEVC (H.265) standards.

By pioneering the use of perceptual video quality metrics, Beamr is enabling media and entertainment companies of all stripes to reduce the bits they send by up to 50%. This reduces re-buffering events by up to 50%, improves video start time by 20% or more, and reduces storage and delivery costs.

Fortunately, you now understand the basics of perceptual video quality. You also see why most of the video engineering community believes content-adaptive encoding sits at the heart of next-generation encoding technologies.

Unfortunately, when we stated above that there were numerous ways to reduce bits by up to 50% without sacrificing ‘eyeball’ video quality, we skipped over some very important details, such as how we can utilize subjective testing techniques on an entire catalog of videos at scale, and cost efficiently.

Next time: Part 2 and the Opinionated Robot

Looking for better tools to assess subjective video quality?

You definitely want to check out Beamr’s VCT, the best software player available on the market for judging HEVC, AVC, and YUV sequences in modes that are highly useful for a video engineer or compressionist.

VCT is available for Mac and PC. And best of all, we offer a FREE evaluation to qualified users.

Learn more about VCT: http://beamr.com/h264-hevc-video-comparison-player/

 

VCT, the Secret to Confident Subjective Video Quality Testing

We can all agree that analyzing video quality is one of the biggest challenges when evaluating codecs. Companies use a combination of objective and subjective tests to validate encoder efficiency. In this post, I’ll explore why it is difficult to measure video quality with quantitative metrics alone, because they fail to match the subjective quality perception of the human eye.

Furthermore, we’ll look at why it’s important to equip yourself with the best resources when doing subjective testing, and how Beamr’s VCT visual comparison tool can help you with video quality testing.

But first, if you haven’t done so already, be sure to download your free trial of VCT here.

OBJECTIVE TESTING

The most common objective measurement used today is pixel-based Peak Signal to Noise Ratio (PSNR). PSNR is a popular test because it is easy to calculate and nearly everyone working in video is familiar with interpreting its values. But it does have limitations. Typically a higher PSNR value correlates to higher quality, and a lower PSNR value to lower quality. However, since this test measures pixel-based mean-squared error over an entire frame, reducing the quality of a frame (or collection of frames) to a single number does not always parallel true subjective quality.

PSNR gives equal weight to every pixel in the frame and each frame in a sequence, ignoring many factors that affect human perception. For example, below are two encoded images of the same frame (1). Image (a) and Image (b) have the same PSNR, which should theoretically mean two encoded images of the same quality. However, the difference in perceived quality is easy to see: viewers would rate Image (a) as significantly higher quality than Image (b).

Example: 

PSNR value example of why it shouldn't be the absolute measurement for assessing video quality
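A similar effect is easy to reproduce numerically. In this sketch (synthetic frames, for illustration only), two different distortions produce identical MSE, and therefore identical PSNR, even though one spreads an invisible ±1 error over every pixel while the other concentrates the same error energy into one clearly visible patch:

```python
import numpy as np

def psnr(a, b):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

source = np.full((256, 256), 128, dtype=np.uint8)  # flat gray frame

# Distortion (a): error of ±1 spread evenly over every pixel
spread = source.astype(np.int16)
spread[::2] += 1
spread[1::2] -= 1
spread = spread.astype(np.uint8)

# Distortion (b): the same total squared error in one 16x16 block.
# Total squared error of (a) is 256*256*1 = 65536; with 16*16 pixels
# at per-pixel error e we need 256 * e^2 = 65536, i.e. e = 16.
concentrated = source.astype(np.int16)
concentrated[:16, :16] += 16
concentrated = concentrated.astype(np.uint8)

print(psnr(source, spread), psnr(source, concentrated))  # identical values
```

A single frame-wide average simply cannot distinguish these two cases.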

Due to the inability of error-based methods like PSNR to adequately mimic human visual perception, other methods for analyzing video quality have been developed, including the Structural Similarity Index Metric (SSIM), which measures structural distortion. Unlike PSNR, SSIM addresses image degradation as perceived change in three major aspects of an image: luminance, contrast, and structure. SSIM has gained popularity, but as with PSNR, it has its limitations. Studies have suggested that SSIM’s performance is equal to PSNR’s, and some have cited evidence of a systematic relationship between SSIM and Mean Squared Error (MSE) (2).
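To make the three components concrete, here is a simplified, single-window version of the SSIM formula. Real implementations (such as scikit-image’s) compute local scores over a sliding window and average them; this global sketch is only meant to show where luminance, contrast, and structure enter.

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Simplified single-window SSIM over the whole image."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # stabilizing constants
    mx, my = x.mean(), y.mean()                 # luminance terms
    vx, vy = x.var(), y.var()                   # contrast terms
    cov = ((x - mx) * (y - my)).mean()          # structure term
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)
noisy = np.clip(frame + rng.normal(0, 20, frame.shape), 0, 255).astype(np.uint8)

print(ssim_global(frame, frame))  # identical images score 1.0
print(ssim_global(frame, noisy))  # noise lowers the score
```

Note that, like PSNR, nothing in this formula looks across frames, which is exactly the motion-blindness discussed next.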

While SSIM and other quantitative measures, including multi-scale structural similarity (MS-SSIM) and the Sarnoff Picture Quality Rating (PQR), have made significant gains, none can truly deliver the same assurance as subjective evaluation using the human eye. It is also important to note that the two most widely used objective quality metrics mentioned above, PSNR and SSIM, were designed to evaluate static image quality. This means that both algorithms provide no meaningful information regarding motion artifacts, thereby limiting their effectiveness for video.

SUBJECTIVE TESTING

While objective methods attempt to model human perception, there is no substitute for subjective “golden-eye” tests. But we are all familiar with the drawbacks of subjective analysis, including variance in individual quality perception and the difficulty of executing proper subjective tests in 100% controlled viewing environments so that a large number of testers can participate. Evaluating video using subjective visual tests can reveal key differences that may not get caught by objective measures alone, which is why it is important to use a combination of both objective and subjective testing methodologies.

One of the logistic difficulties of performing subjective quality comparisons is coordinating simultaneous playback of two streams. Recognizing some of the drawbacks of current subjective evaluation methods, in particular single-stream playback or awkward dual-stream review workarounds, Beamr spent years in research and development to build a tool that offers simultaneous playback of two videos with various comparison modes, to significantly improve the golden-eye test execution necessary to properly evaluate encoder efficiency.

Powered by our professional HEVC and H.264 codec SDK decoders, the Beamr video comparison tool VCT allows encoding engineers and compressionists to play back two frame-synchronized independent HEVC, H.264, or YUV sequences simultaneously, and to compare the quality of these streams in four modes:

  1. Split screen
  2. Side-by-side
  3. Overlay
  4. Butterfly (the newest mode)

MPEG-2 TS and MP4 files containing either HEVC or H.264 elementary streams are also supported. Additionally, VCT displays valuable clip information such as bitrate, screen resolution, frame rate, number of frames, and other important video details.

Developed in 2012, VCT was the industry’s first internal software player offered as a tool to help Beamr customers conduct subjective testing while evaluating our encoder’s efficiency. Today, VCT has been tested by many content and equipment companies around the world in multiple markets, including broadcast, mobile, and internet streaming, making it the de facto standard for subjective golden-eye video quality testing and evaluation.

VCT BENEFITS AND TIPS

Your FREE trial of VCT will come with an extensive user guide that contains everything you need to get started. But we know you are eager to begin your testing, so following are a few quick tips we trust you will find useful. Take advantage of this “golden” opportunity and get started today!

Note: use Command (⌘) instead of Ctrl for the OS X version of VCT.

  1.      Split Screen Comparison Mode:
    • Benefits:
      • Great for viewing two clips when only one screen is available.
      • Moving slider bar allows you to clearly see quality difference between two streams in your desired region of interest. For example, you can move the slider bar back and forth across a face to see quality differences between two discrete files.
    • Pro Tips:
      • Use the keyboard shortcut Ctrl + \ to re-center the slider bar after it is moved.
      • Shortcut key Ctrl + Tab allows you to change which video appears on the left or right of the slider bar.

VCT split screen comparison mode for subjective video quality assessment

 

  2. Side-by-side Comparison Mode:
    • Benefits:
      • Great for tradeshows. Solves the lack of synchronization of side by side comparison tests when using two independent players.
      • Single control for both streams.
    • Pro Tip:
      • Shortcut key Ctrl + Tab allows you to change which video appears on which screen without moving the windows.

VCT side-by-side comparison mode for subjective video quality assessment

 

  3. Overlay Comparison Mode:
    • Benefits:
      • Great for viewing the full frame of one stream on a single window.
    • Tips:
      • Shortcut key Ctrl + Tab allows you to cycle between the two videos. Toggling quickly is a great way to spot quality differences between the two streams that you might not otherwise notice.

Overlay Mode

 

  4. Butterfly Comparison Mode:
    • Benefits:
      • Very useful for determining the accuracy of the encoding process. The butterfly mode displays mirrored images of two sequences to help you assess whether an artifact occurs in the source when comparing an encoded sequence to the original.
    • Tips:
      • Use shortcut key Ctrl + \ to reset the frame to the leftmost view, and shortcut Ctrl + Alt + \ to switch to the rightmost view in butterfly mode.
      • Use shortcut keys Ctrl + [ and Ctrl + ] to move the image left or right in butterfly mode.

VCT butterfly comparison mode for subjective video quality assessment

  5. Other Useful Tips:
    • Ctrl + m allows you to toggle through the 4 comparison modes.
    • Shift + Left Click opens the magnifier tool that allows you to zoom into hard to see areas of the video.
    • Easily scale frames of different resolutions to the same resolution by clicking “scale to same look” on the main menu.
    • NEW automatic download feature on the splash screen notifies you of the latest version updates to ensure you’re always up to date.
    • For more great features be sure to check out the VCT userguide beamr.com/vct/userguide.com.

 

Reference:

(1)   P. M. Arun Kumar and S. Chandramathi. Video Quality Assessment Methods: A Bird’s-Eye View

(2)   Richard Dosselmann and Xue Dong Yang. A Formal Assessment of the Structural Similarity Index

Will Virtual Reality Determine the Future of Streaming?

As video services take a more aggressive approach to virtual reality (VR), the question of how to scale and deliver this bandwidth intensive content must be addressed to bring it to a mainstream audience.

While we’ve been talking about VR for a long time, you could say it was reinvigorated when Oculus grabbed the attention of Facebook, which injected $2 billion in investment based on Mark Zuckerberg’s vision that VR is a future technology people will actively embrace. Industry forecasters tend to agree, suggesting VR will be front and center in the digital economy within the next decade. According to research by Canalys, vendors will ship 6.3 million VR headsets globally in 2016, and CCS Insight suggests that as many as 96 million headsets will be snapped up by consumers by 2020.

One of VR’s key advantages is the fact that you have the freedom to look anywhere in 360 degrees using a fully panoramic video in a highly intimate setting. Panoramic video files and resolution dimensions are large, often 4K (4096 pixels wide, 2048 pixels tall, depending on the standard) or bigger.

While VR is considered to be the next big revolution in the consumption of media content, we also see it popping up in professional fields such as education, health, law enforcement, defense telecom and media. It can provide a far more immersive live experience than TV, by adding presence, the feeling that “you are really there.”

Development of VR projects have already started to take off and high-quality VR devices are surprisingly affordable. Earlier this summer, Google announced that 360-degree live streaming support was coming to YouTube.

Of course, all these new angles and sharpness of imagery creates new and challenging sets of engineering hurdles which we’ll discuss below.

Resolution and Quality?

Frame rate, resolution, and bandwidth are affected by the sheer volume of pixels that VR transmits. Developers and distributors of VR content will need to maximize frame rates and resolution throughout the entire workflow. They must keep up with the wide range of viewers’ devices as sporting events in particular, demand precise detail and high frame rates, such as what we see with instant replay, slow motion, and 360-degree cameras.

In a recent Vicon industry survey, 28 percent of respondents stated that high-quality content was important to ensuring a good VR experience. Let’s think about simple file size comparisons: we already know that Ultra HD files take up considerably more storage space than SD, and the greater the file size, the greater the chance it will impede delivery. VR file sizes are no small potatoes. When you’re talking about VR video, you’re talking about four to six times the foundational resolution that you are transmitting. And if you thought that Ultra HD was cumbersome, think about how you’re going to deal with resolutions beyond 4K for an immersive VR HD experience.
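The “four to six times” figure is easy to sanity-check with raw pixel counts. The resolutions, 60 fps rate, and 8-bit 4:2:0 sampling below are assumed example parameters, shown only for scale:

```python
# Raw pixel counts per frame (assumed example resolutions)
hd = 1920 * 1080            # a 1080p "foundational" frame
vr = 4096 * 2048            # a full panoramic frame (one common standard)

print(vr / hd)              # roughly 4x the pixels of 1080p

# Uncompressed data rate at 60 fps, 8-bit 4:2:0 (12 bits per pixel), in Gbps
bits_per_pixel = 12
gbps = vr * bits_per_pixel * 60 / 1e9
print(round(gbps, 1))       # gigabits per second before any compression
```

Even before HDR or higher frame rates, the raw numbers make clear why compression efficiency dominates the VR delivery problem.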

In order to catch up with these file sizes, we need to continue to develop video codecs that can quickly interpret the frame-by-frame data. HEVC is a great starting point, but frankly, given hardware device limitations, many content distributors are forced to continue using H.264 codecs. For this reason we must harness advanced tools in image processing and compression. One example of such an approach is content-adaptive perceptual optimization.

I want my VR now! Reaching End Users

VR video content comes in a variety of file formats, including combinations of stereoscopic 3D, 360-degree panoramas, and spherical views, all of which bring obvious challenges such as added strain on processors, memory, and network bandwidth. Modern codecs use a variety of algorithms to quickly and efficiently detect similarities, but they are usually tailored to 2D content. A content delivery mechanism must be able to send this content to every user, and should be smart enough to optimize the processing and transmission of the video.

Minimizing latency, how long can you roll the boulder up the hill?

We’ve seen significant improvements in the graphics processing capabilities of desktops and laptops. However, to take advantage of the immersive environment that VR offers, it’s important that high-end graphics are delivered to the viewer as quickly and smoothly as possible. The VR hardware also needs to display large images properly, with the highest fidelity and lowest latency. There is very limited room for things like color correction or adjusting panning from different directions, for instance; if you have to stitch or rework artifacts, you will likely lose ground. You need to be smart about it. Typical decoders for tablets or smart TVs are more likely to cause latency, and they only support lower frame rates. How you build the infrastructure will be the key to offering the image quality and life-like resolution consumers expect to see.

Bandwidth, where art thou?

According to Netflix, for an Ultra HD streaming experience your Internet connection must have a speed of 25 Mbps or higher. However, according to Akamai, the average Internet speed in the US is only approximately 11 Mbps. Effectively, this prohibits live streaming to any typical mobile VR device, which may need a minimum of 25 Mbps to achieve the quality and resolution required.

Most certainly, the improvements in graphics processing and hardware will continue to drive forward the realism of immersive VR content, as the ability to render an image quickly becomes easier and cheaper. Just recently, Netflix jumped on the bandwagon and became the first of many streaming media apps to launch on Oculus’ virtual reality app store. As soon as all the VR display devices are able to integrate with these higher-resolution screens, we will see another step change in the quality and realism of virtual environments. But whether the available bandwidth will be sufficient is a very real question.

To understand the applications for VR, you really have to see it to believe it

A heart-warming campaign from Expedia recently offered children at a research hospital in Memphis Tennessee the opportunity to be taken on a journey of their dreams through immersive, real-time virtual travel – all without getting on a plane:  https://www.youtube.com/watch?time_continue=179&v=2wQQh5tbSPw

The National Multiple Sclerosis Society also launched a VR campaign that inventively used the tech to give two people with MS the opportunity to experience their lifelong passions. These are the type of immersive experiences we hope will unlock a better future for mankind. We applaud the massive projects and time spent on developing meaningful VR content and programming such as this.

Frost & Sullivan forecasts $1.5 billion in revenue from pay-TV operators delivering VR content by 2020. The adoption of VR, in my estimation, is only limited by the quality of the user experience, as consumer expectations will no doubt be high.

For VR to really take off, the industry needs to address these challenges, making VR more accessible and, most importantly, filling it with unique and meaningful content. But it’s hard to talk about VR without experiencing it. I suggest you try it – you will like it.

M-GO Upgrades Streaming UX With Beamr Video

It is with great joy and excitement that we announce a new member of the Beamr Video family: M-GO, a premium over-the-top VOD service that is a joint venture between Technicolor and DreamWorks Animation. M-GO is leveraging strategic partnerships with tier-one media companies to grow its vast premium content catalog, including 4K UHD titles, and has recently announced CE partnerships with Samsung and LG to secure availability on all major platforms.

M-Go-Beamr-Video

Having Beamr Video integrated with M-GO’s platform means a breakthrough in video quality and bandwidth utilization. It fits perfectly with M-GO’s strategy to leverage the best available technologies to address the growing bandwidth squeeze challenge, and they have found our technology to deliver network-friendly streams with excellent image quality, resulting in enhanced user experience and significant cost savings.

Based on a patent-pending perceptual quality measure, our software automatically reduces the bitrate of any H.264 or HEVC video stream by up to 50 percent while retaining the full perceptual quality and format of the original file. Our technology enables a smoother streaming experience with reduced buffering and faster stream starts, resulting in increased ARPU and higher customer satisfaction, in addition to reduced distribution costs. Recognizing these advantages, M-GO is now integrating Beamr Video into its video delivery workflow.

Are you looking to improve user experience and reduce the costs associated with storing and transmitting media files, just like M-GO? We work with the world’s leading content providers, aggregators, and media companies to enable an optimal user experience across any device or platform. For more information, check out our website – www.BeamrVideo.com.

You can read the full press release here

Using Media Optimization to Improve Streaming Performance

Back in November we attended Streaming Media West, where our Director of Sales and Strategy, Mark Donnigan, moderated a panel on media optimization. The panel included the following industry experts:

Brad Collar, SVP, Warner Bros. Technical Operations (GDMX)

Samir Ahmed, CTO, M-GO

Glen Marzan, VP, Information Technology Production Services & Studio Operations, Sony Pictures Entertainment

Tim Miller, Director, Back-end Engineering, Yahoo! Flickr

See how Yahoo!, Sony Pictures, M-GO and Warner Bros. use Beamr Optimization solutions to improve the quality of experience for their customers.

Industry Executives Acknowledge Beamr’s Media Optimization Leadership

This week at Streaming Media West, Beamr’s Director of Sales and Strategy, Mark Donnigan, moderated a panel on media optimization. The panel included industry experts from Sony, Warner Bros., Yahoo! and M-GO, who discussed their use of media optimization technologies to improve user experience, provide the best quality on every device, and reduce storage and delivery costs. The panel was united on two things:

1) The need for media optimization to meet the challenges of increased video traffic over limited available bandwidth.
2) The fact that Beamr provides the best media optimization solution in the industry.

Beamr-Streaming-Media-West

Watch Mark Donnigan interviewing Sony Pictures, M-GO, Yahoo! and Warner Bros. about their use of Beamr Optimizer to improve user experience, reduce distribution costs, and delight users.

Beamr Video is Heading to Streaming Media West

UPDATE: Watch the video here.

We are headed to Streaming Media West in Huntington Beach, California! This is the place to learn all about cutting-edge online video technologies, and new business strategies, filled with case studies, how-to sessions, panel discussions, and in-depth tutorials.
On Wednesday, November 19 at 1:45 PM, our very own Mark Donnigan will moderate the panel “Using Media Optimization to Improve Streaming Performance”. For this panel, we are bringing together some of the biggest experts in media and entertainment:
Brad Collar, SVP – Warner Bros. Technical Operations (GDMX)
Samir Ahmed, CTO – M-GO
Glen Marzan, VP, Information Technology Production Services & Studio Operations – Sony Pictures Entertainment
Tim Miller, Director, Back-end Engineering – Yahoo! Flickr

Streaming-Media-East
Come and learn how these companies apply media optimization to solve the most common problems with streaming like slow stream starts, chronic buffering and network peering congestion.
We are 100% focused on improving the quality, speed, and user experience of both photo and video sharing. Our technology is installed in three of Hollywood’s largest studios, and our clients include Sony Crackle, a premium advertising-sponsored SVOD service, Interlude, Netflix, Groupon, and others. We hope you can join us for this exciting session!

How Clogged Will the Internet be by 2020?

How clogged will the internet be by 2020? Well, let’s take a look at a few things. The number of TV sets connected to the internet will reach 965 million by 2020, up from 103 million at the end of 2010 and the 339 million expected at the end of 2014. The number of televisions connected via media streaming devices and dongles is forecast to reach 183 million in 2020, up from 36 million at the end of 2014. By the end of this decade, nearly half of television households worldwide will be watching some form of online television or video, with around 200 million homes subscribing to an online video on demand package.

Source: Digital TV Research

So what does all this mean for internet service providers? Massive companies like AT&T and Comcast have spent the first two months of 2014 announcing plans to close and control the internet through additional fees and pay-to-play schemes. Today’s consumer is streaming more video than ever before, and they see the effects on their congested networks. ISPs and infrastructure providers can’t keep up with the consistent bandwidth required to enable a high-quality service for OTT customers.

AT&T-Comcast

This is just the beginning of the internet bottleneck issue, and it’s only 2014. What’s going to happen when 2020 hits? Cisco recently reported in its Visual Networking Index report that by 2018, video will comprise a whopping 79 percent of global consumer Internet traffic.

An obvious solution to unclog the internet is to reduce video bitrates in order to lower the bandwidth requirements of streamed video files. However, because video quality is directly related to the bitrate allocated to the video stream, blindly lowering the bitrate will result in a poor viewing experience and unsatisfied customers, an option that is unacceptable in the age of retina displays and UHD 4K televisions.

Another solution is caching the most frequently viewed video files at the network edges. This ensures that when a popular video file is being requested by a user, it can be streamed from a location that is close to the user’s physical location, and does not have to travel again over the Internet backbone. Since most of the online video traffic is generated by a relatively small number of popular streams, caching those streams can be cost-effective when taking into account the storage costs of the cached files vs. the delivery costs of each copy that travels over the network.
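That storage-versus-delivery trade-off reduces to a simple break-even check. The sketch below uses entirely hypothetical unit costs, purely for illustration:

```python
# Hypothetical unit costs (illustrative only)
STORAGE_COST_PER_GB_MONTH = 0.03   # keeping a copy cached at the edge
BACKBONE_COST_PER_GB = 0.02        # delivering one copy over the backbone

def caching_saves_money(file_size_gb: float, views_per_month: int) -> bool:
    """Caching pays off when the backbone delivery avoided by serving
    from the edge exceeds the monthly cost of storing the copy there."""
    storage = file_size_gb * STORAGE_COST_PER_GB_MONTH
    avoided_delivery = file_size_gb * views_per_month * BACKBONE_COST_PER_GB
    return avoided_delivery > storage

print(caching_saves_money(2.0, views_per_month=1))    # rarely watched title
print(caching_saves_money(2.0, views_per_month=500))  # popular title
```

This is why caching only the small set of popular streams, rather than the whole catalog, is the cost-effective strategy.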

Adaptive bitrate streaming is another common solution used by content delivery networks. This method detects a user’s bandwidth and CPU capacity in real time, then adjusts the quality of a video stream accordingly. While this strategy can provide consistent streaming on high-end and low-end connections, it incurs additional storage and encoding costs, and has a challenge to maintain overall quality on a global scale.
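The selection logic such a player runs can be sketched as follows; the bitrate ladder and safety margin are made-up examples, not any particular service’s configuration:

```python
# Hypothetical bitrate ladder: (bitrate in kbps, resolution)
ladder = [
    (400,  "426x240"),
    (1000, "640x360"),
    (2500, "1280x720"),
    (5000, "1920x1080"),
]

def pick_rendition(measured_kbps: float, safety: float = 0.8):
    """Choose the highest rendition whose bitrate fits within a safety
    margin of the measured throughput; fall back to the lowest rung."""
    budget = measured_kbps * safety
    best = ladder[0]
    for bitrate, res in ladder:
        if bitrate <= budget:
            best = (bitrate, res)
    return best

print(pick_rendition(3500))  # throughput dips -> player steps down to 720p
print(pick_rendition(9000))  # ample bandwidth -> top rung
```

Note the storage implication: every rung of the ladder is a separate encode that must be produced and stored, which is the extra cost the paragraph above refers to.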

Finally, there’s media optimization, which takes an already-compressed video stream, analyzes its perceptual properties, and encodes it to a lower bitrate to increase streaming speeds without affecting the original video quality. This would be like taking a ball of modeling clay and squeezing it to make it smaller: It still has the same amount of clay, but occupies a smaller amount of space. Some forms of media optimization may struggle to maintain the quality of the video while reducing file size, but when done correctly using a reliable perceptual quality measure, this process can reduce a bitrate and file size by 20-50 percent while retaining the full perceptual quality. And that, ladies and gentlemen, is Beamr Video.

While these major players continue to sort through the congestion issues, utilizing current solutions like caching, adaptive bitrate streaming and media optimization can alleviate the bandwidth bottleneck problem while providing a win-win-win situation for content providers, telcos and end users.

Beamr Video Optimizes Videos and… T-shirts?

As we were getting ready for IBC a few weeks ago, we started thinking about how to make the visit to our booth more special. The idea of giving away T-shirts came up, and since we didn’t think people would be happy to walk around wearing a Beamr Video logo, we decided to put a nice slogan on them: “Network Friendly”. This is the essence of what Beamr Video does: it reduces the bitrate of videos without compromising their quality, resulting in video files that are more “network friendly” – they don’t clog the network as much as regular videos, and they provide a better streaming experience to the end user. So who wouldn’t want to wear a T-shirt saying they’re Network Friendly? It’s a very friendly statement and a conversation starter…

But now we faced a problem: we estimated that we could give away 200 T-shirts at the show, but we had no room to store that many in our booth – the organizers provided only a small storage space. And shipping the T-shirts to the show would cost us a lot of money. So we thought: how can we reduce the storage requirements and the delivery costs of our T-shirts? And then it hit us: optimization! That’s what we must do to the T-shirts! In the same way that Beamr Video optimizes videos to reduce storage and delivery costs, we would compress our T-shirts.

Immediately we started looking for vendors of compressed T-shirts that could deliver the goods in time for IBC. Luckily we found GoTeez, who assured us the T-shirts would be ready on time. How is a compressed T-shirt even made? Check out this video.

So: Problem solved! And, we now had the perfect pitch for IBC. This is what we told every visitor after explaining the benefits of Beamr Video, and just before they left the booth:
“Oh, and one more thing: we also compress T-shirts! We had a problem of storage and delivery costs in bringing the T-shirts to the show, so we optimized them for delivery – which is exactly what Beamr Video does to your videos. And just like Beamr Video, the T-shirts are fully standard, and you can use a standard ‘decoder’ to de-compress them: just open the packaging, unfold the shirt and iron it with a standard iron, and you have your full-sized T-shirt!” The result: everyone left the booth with a big smile, and we were confident that our message came across clearly…


Cutting Bitrate by 50% Just Became a Reality

It is with great joy and excitement that we announce the launch of Beamr 2.0 today. It was just one short year ago that we launched Beamr Video and set out to reduce the bitrate of any H.264 or HEVC video stream by up to fifty percent, enabling a smoother streaming experience with reduced buffering and a faster stream start. So what’s new in Beamr 2.0?

Beamr Video 2.0 now offers a web dashboard and multi-core processing capabilities. The dashboard lets you easily monitor and control the video optimization process. With the dashboard, users can view the progress and optimization parameters used for each job, check the overall and average bitrate savings across all jobs, and monitor the system resource utilization for CPU and memory.

The multi-core processing capability makes the most efficient use of computing resources. Once a user selects the number of cores allocated for processing, the tool divides the video file into multiple segments and processes them in parallel on different cores, ensuring maximum performance and the fastest turnaround times for a user’s video optimization jobs. Once the optimization is complete, Beamr Video 2.0 “stitches” the segments back together to create the output file.
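The split/process/stitch workflow described above can be sketched as follows. This is a simplified illustration, using threads and a dummy per-segment transform as stand-ins for Beamr’s actual per-core optimization:

```python
from concurrent.futures import ThreadPoolExecutor

def split(frames, n_segments):
    """Divide a list of frames into n roughly equal contiguous segments."""
    size = -(-len(frames) // n_segments)  # ceiling division
    return [frames[i:i + size] for i in range(0, len(frames), size)]

def optimize_segment(segment):
    # Placeholder for the real per-segment optimization work.
    return [frame * 2 for frame in segment]

def process(frames, cores=4):
    """Process segments concurrently, then stitch the results back together."""
    segments = split(frames, cores)
    with ThreadPoolExecutor(max_workers=cores) as pool:
        results = list(pool.map(optimize_segment, segments))
    # "Stitch": concatenate the processed segments, preserving segment order.
    return [frame for seg in results for frame in seg]
```

Because `pool.map` returns results in submission order regardless of which worker finishes first, the stitched output always matches the original frame order – the property the stitching step depends on.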

Will you be at IBC 2014 in Amsterdam? Stop by our booth (RAI, Hall 3, Booth B20) to see Beamr 2.0 in action. If you can’t meet us at IBC, you can always request a free trial by clicking here.