Data Caps, Zero-rated, Net Neutrality: The Video Tsunami Doesn’t Take Sides

We Need to Work Together to Conserve Bits in the Zettabyte Era

Over the past year, and again last week, there has been no shortage of articles and discussion around data caps, Binge On, zero-rated content, and of course net neutrality.

We know the story. Consumer demand for Internet and over-the-top video content is insatiable. This is creating an unstoppable tsunami of video.

Vendors like Cisco have published the Visual Networking Index (VNI) to help the industry forecast how big that wave is, so we can work together to find sustainable ways to deliver it.

The Cisco VNI projects that internet video traffic will more than double to 2.3 zettabytes per year by 2020. (Endnote 1.) To put it another way, that’s 1.3 billion DVDs’ worth of video crossing the internet daily in 2020, versus the 543 million DVDs’ worth that cross the internet each day today.

That’s still tough to visualize, so here’s a back-of-the-envelope thought experiment.

Let’s take the single largest TV event in history: Super Bowl 49.

An average of 114 million viewers watched Super Bowl 49 in 2015, every minute of a broadcast that ran about 3 hours and 35 minutes (215 minutes). We might say that 24.5 billion cumulative viewer-minutes of video were watched.

Assume that a DVD holds 180 minutes of video. (Note: this is an inexact guess assuming a conservative video quality.) If one person watched 543 million DVDs of video, she would have to spend 97.7 billion cumulative minutes watching all of it. That’s four Super Bowl 49s every day.

And in 2020, close to 10 Super Bowl 49s’ worth of cumulative viewer-minutes of video will be crossing the network. In one day.
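
If you want to check the arithmetic, here is a minimal Python sketch of the numbers above. The 180-minutes-per-DVD figure is the same rough assumption flagged in the text:

```python
# Back-of-the-envelope math behind the Super Bowl comparison.
SB49_VIEWERS = 114e6                 # average audience, every minute
SB49_MINUTES = 3 * 60 + 35           # 215-minute broadcast
sb49_viewer_minutes = SB49_VIEWERS * SB49_MINUTES   # ~24.5 billion

MINUTES_PER_DVD = 180                # rough, conservative-quality assumption
dvds_per_day_today = 543e6
dvds_per_day_2020 = 1.3e9

for label, dvds in [("today", dvds_per_day_today), ("2020", dvds_per_day_2020)]:
    viewer_minutes = dvds * MINUTES_PER_DVD
    print(f"{label}: {viewer_minutes / sb49_viewer_minutes:.1f} Super Bowl 49s per day")

# today: 4.0 Super Bowl 49s per day
# 2020: 9.5 Super Bowl 49s per day
```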

That is a lot of traffic, and it is going to be hard work to transport those bits in a reliable, high-quality fashion that is also economically sustainable.

And that’s true no matter whether you are a network operator or an over-the-top content distributor. Here’s why.

All Costs Are Variable in the Long Run

Recently, Comcast and Netflix agreed to partner, which bodes well for both companies’ business models and for consumers at large. However, last week there were several news headlines about data caps and zero-rated content, and those stories will undoubtedly continue.

Now, it’s obvious that OTT companies like Netflix & M-GO need to do everything they can to reduce the costs of video delivery. That’s why both companies have pioneered new approaches to video quality optimization.

On the other hand, it might seem that network operators have a fixed cost structure that gives them wiggle room for sub-optimal encodes.

But it’s worth noting an important economic adage: in the long run, all costs are variable. With the kind of growth in video traffic that industry analysts are projecting through 2020, everything is a variable cost.

And when it comes to delivering video sustainably, there’s no room for wasting bits. Both network operators and over-the-top content suppliers will need to do everything they can to lower the number of bits they transport without damaging the picture quality of the video.

In the age of the Zettabyte, we all need to be bit conservationists.

 

Endnote 1: http://www.cisco.com/c/dam/m/en_us/solutions/service-provider/vni-forecast-widget/forecast-widget/index.html

Net Neutrality Means Bye Bye HOV Lanes

What started as a rant from comedian John Oliver calling for support of an open internet has developed into a monumental philosophical debate, riddled with controversy and centered on the Federal Communications Commission’s (FCC) move to reclassify broadband based on its role in modern life. For the past two decades the internet has been unregulated and free, but that is about to change, and the change shines a light on the importance of optimization solutions for bandwidth savings.

What is net neutrality?

In broad terms, net neutrality is the principle that the internet should be equally accessible to all. Advocates say it is meant to stop internet service providers, including mobile wireless carriers, from discriminating against any application or content that travels over their networks. It is the idea of keeping the internet free of “fast lanes” that give preferential treatment to companies and websites willing to pay more to deliver their content faster, and it therefore prohibits internet service providers from putting everyone else in “slow lanes” that discriminate against smaller companies that cannot afford to pay a premium.

Broadband is now considered a utility

The decision by the United States Court of Appeals for the District of Columbia Circuit upheld the FCC’s view that broadband internet should be reclassified as a utility, a telecommunications service as opposed to an information service, giving the agency more power to enforce stricter rules and regulations in a once free market. Essentially, this ruling affirms that the internet has become as important to the American way of life as the telephone, and it forces providers to adhere to some of the same regulations faced by phone companies.

But what’s the point? In theory, the reclassification will increase competition among service providers, which, as we learned in economics class, should cut costs to consumers, raise the quality of service, and drive innovation. Opponents counter that the new policy will hurt consumers: prices will rise because companies will have less incentive to build out and update their networks to meet consumer demand, which is growing exponentially because of video. In fact, Cisco has reported that by 2019, video will represent 80% of all global consumer internet traffic.

Whichever side you fight for, in the end what net neutrality comes down to is the internet’s core commodity: bits.

Bandwidth savings is now a necessity, making optimization a requirement

Now that, as a service provider, you can no longer pay to get your bits (content) delivered faster or more efficiently, everyone is on the same playing field. We are all stuck in the same traffic jam. Couple this with consumers’ increasing demand for a reliable, stable experience and, in one of the most famous phrases ever uttered, “Houston, we have a problem.” The good news is that companies have developed solutions to improve encoding efficiency, which can lead to additional bitrate reductions of 20-40% without a perceptible change in viewing quality.

Optimization solutions, like Beamr’s JPEGmini and Beamr Video, are already well entrenched in the fabric of media workflows, especially video, and are likely to grow in importance over the years to come. Why? Because the name of the game is getting content to your consumer with the best possible quality and user experience, and now the only way to do that is to optimize your file sizes to conserve bandwidth so you can cut through the clutter.

For example, with an image-intensive website, JPEGmini can cut the file size of your photos by up to 80% without compromising quality, thereby giving your users faster page load times. That matters because Google recently changed its ranking algorithms to weight website searches by page load time and mobile performance; in short, if your site is slow, you are going to see other sites rank more highly on Google search engine results pages. For video, Beamr Video is able to reduce H.264 and HEVC bitrates by 20-40% without any compromise in quality, thereby enabling a smoother streaming experience. With the ruling as it stands, only companies that can lower file sizes and save bandwidth will break through congested networks to deliver content that meets consumer expectations for entertainment anytime, anywhere, and on any device.
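
To see why those percentages matter at scale, here is a hypothetical back-of-the-envelope calculation. The 5 Mbps HD stream and the one million concurrent viewers are illustrative assumptions, not Beamr figures:

```python
# Illustrative only: what a 20-40% bitrate reduction means at scale.
baseline_bitrate_mbps = 5.0      # assumed bitrate of an HD H.264 stream
concurrent_viewers = 1_000_000   # assumed audience size

for reduction in (0.20, 0.40):
    saved_mbps_per_viewer = baseline_bitrate_mbps * reduction
    total_saved_tbps = saved_mbps_per_viewer * concurrent_viewers / 1e6
    print(f"{reduction:.0%} reduction frees {total_saved_tbps:.0f} Tbps of capacity")

# 20% reduction frees 1 Tbps of capacity
# 40% reduction frees 2 Tbps of capacity
```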

BONUS: Four secrets to keep in mind when selecting a video bitrate reduction solution

  1. Automation is your friend.  Though an encoder in the hands of a well-trained expert can yield amazingly small files with good quality, nearly all streaming service providers will need automation to deploy a bitrate reduction solution at scale.  And anything less than scale will not have a meaningful impact.  For this reason, you should select a product that gives you the ability to automate the analysis and workflow implementation.
  2. Solutions that work at the frame level will give you the best overall video quality.  Products that analyze video to determine an optimum fixed bitrate cannot provide the granularity of methods that perform this analysis at the frame or scene level.  Beamr Video performs its analysis and sets compression parameters on a per-frame basis, in a closed loop, to ensure that the optimal allocation of bits per frame is achieved.  No other optimization method can achieve the same level of quality across a disparate video library as Beamr Video.
  3. Leave neural networks to Google; you want a solution that uses a perceptual quality metric to validate video quality.  A sophisticated quality measure that is closely aligned with human vision will deliver a more accurate result across an entire library of content.  Quality measures such as PSNR and SSIM provide inconsistent results and cannot be used with a high degree of accuracy.  These measures have some application for classifying content, but they are less useful as a mechanism to drive an encoder to achieve the best quality possible.
  4. Guaranteed quality, is there any other choice?  Encoder configurations that use CRF (Constant Rate Factor) can appear viable at first look; however, without a post-encode quality verification step, the final result will likely contain degradation or other transient quality issues.  If the product or technique you have selected does not operate in a closed loop or contain some method for verifying quality, the potential to introduce artifacts will be unacceptably high (see the sketch after this list).
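
To make point 4 concrete, below is a minimal sketch of a closed-loop encode with post-encode verification, built on ffmpeg’s libx264 CRF mode. This is not Beamr’s method: the CRF ladder, the 0.98 threshold, and the use of ffmpeg’s built-in ssim filter are illustrative assumptions, and as point 3 notes, a production system would drive the loop with a perceptual quality measure rather than SSIM.

```python
import re
import subprocess

def encode_crf(src: str, dst: str, crf: int) -> None:
    """Encode src to dst with libx264 at the given CRF value."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-crf", str(crf), dst],
        check=True,
    )

def measure_ssim(encoded: str, reference: str) -> float:
    """Score the encode against the reference using ffmpeg's ssim filter."""
    result = subprocess.run(
        ["ffmpeg", "-i", encoded, "-i", reference,
         "-lavfi", "ssim", "-f", "null", "-"],
        capture_output=True, text=True,
    )
    # ffmpeg prints a summary like "SSIM ... All:0.9832 (17.7)" to stderr.
    match = re.search(r"All:([0-9.]+)", result.stderr)
    return float(match.group(1)) if match else 0.0

def encode_with_verification(src: str, dst: str, target_ssim: float = 0.98) -> int:
    """Closed loop: try aggressive settings first, back off until quality verifies."""
    for crf in (28, 26, 24, 22, 20):      # higher CRF = smaller file
        encode_crf(src, dst, crf)
        if measure_ssim(dst, src) >= target_ssim:
            return crf                    # quality verified post-encode
    return 20                             # dst holds the most conservative encode

# Usage (hypothetical filenames):
# encode_with_verification("mezzanine.mp4", "optimized.mp4")
```

The point is the loop itself: every encode is checked against the source before it ships, and the settings back off automatically whenever quality fails to verify.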

 

For more information or to receive a free trial of Beamr Video, please visit http://beamr.com/product