How clogged will the internet be by 2020? Well, let's take a look at a few projections. The number of TV sets connected to the internet is expected to reach 965 million by 2020, up from 103 million at the end of 2010 and the 339 million expected at the end of 2014. The number of televisions connected via media streaming devices and dongles is forecast to reach 183 million in 2020, up from 36 million at the end of 2014. By the end of this decade, nearly half of television households worldwide will be watching some form of online television or video, with around 200 million homes subscribing to an online video on demand package.

Source: Digital TV Research

So what does all this mean for internet service providers? Massive companies like AT&T and Comcast have spent the first two months of 2014 announcing plans to control access to the internet through additional fees and pay-to-play schemes. Today's consumers are streaming more video than ever before, and ISPs see the effect on their congested networks. ISPs and infrastructure providers can't keep up with the sustained bandwidth required to deliver a high-quality service to OTT customers.


This is just the beginning of the internet bottleneck issue, and it's only 2014. What's going to happen when 2020 hits? Cisco recently reported in its Visual Networking Index that by 2018, video will comprise a whopping 79 percent of global consumer internet traffic.

An obvious solution to unclogging the internet is to reduce video bitrates, which lowers the bandwidth requirements of the streamed video files. However, because video quality is directly related to the bitrate allocated to the stream, blindly lowering the bitrate will result in a poor viewing experience and unsatisfied customers, which is unacceptable in the age of retina displays and 4K UHD televisions.
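To see why bitrate matters so much at scale, here is a rough back-of-the-envelope sketch in Python. The audience size and per-stream bitrates below are hypothetical figures chosen purely for illustration, not numbers from this article.

```python
# Illustrative arithmetic: aggregate delivery bandwidth for a concurrent audience
# at different per-stream bitrates. All figures here are hypothetical.

def aggregate_bandwidth_gbps(viewers: int, bitrate_mbps: float) -> float:
    """Total delivery bandwidth in Gbps for `viewers` concurrent streams."""
    return viewers * bitrate_mbps / 1000.0

if __name__ == "__main__":
    viewers = 1_000_000                      # hypothetical concurrent audience
    for bitrate in (8.0, 6.0, 4.0):          # hypothetical Mbps per HD stream
        gbps = aggregate_bandwidth_gbps(viewers, bitrate)
        print(f"{bitrate} Mbps x {viewers:,} viewers = {gbps:,.0f} Gbps")
```

Even a modest cut in per-stream bitrate translates into thousands of gigabits per second saved once the audience reaches into the millions, which is exactly why the quality question cannot be ignored.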

Another solution is caching the most frequently viewed video files at the network edge. When a user requests a popular video, it can then be streamed from a location close to that user's physical location and does not have to travel again over the internet backbone. Since most online video traffic is generated by a relatively small number of popular streams, caching those streams can be cost-effective once the storage cost of the cached copies is weighed against the delivery cost of each copy that travels over the network.
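A minimal sketch of the idea, assuming a hypothetical origin_fetch() callable that pulls a video segment over the backbone: popular segments stay in a least-recently-used cache at the edge, so repeat requests never leave the local network.

```python
# Minimal LRU edge cache sketch. origin_fetch is a hypothetical callable that
# retrieves a segment from the origin server over the backbone.

from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._store = OrderedDict()             # url -> segment bytes

    def get_segment(self, url: str, origin_fetch) -> bytes:
        if url in self._store:
            self._store.move_to_end(url)        # cache hit: mark as recently used
            return self._store[url]
        data = origin_fetch(url)                # cache miss: one trip over the backbone
        self._store[url] = data
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)     # evict the least recently used segment
        return data
```

Real CDN caches add expiry, byte-range handling, and tiered storage, but the economics are the same: the more requests served from the edge, the less traffic crosses the backbone.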

Adaptive bitrate streaming is another common solution used by content delivery networks. This method detects a user's bandwidth and CPU capacity in real time, then adjusts the quality of the video stream accordingly. While this strategy can provide consistent streaming on both high-end and low-end connections, it incurs additional storage and encoding costs, and it is challenging to maintain consistent overall quality on a global scale.
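The client-side half of that logic can be sketched in a few lines, assuming a ladder of pre-encoded renditions. The rendition bitrates and safety margin below are hypothetical values for illustration only: the player measures throughput from its recent segment downloads and picks the highest rendition that fits comfortably within it.

```python
# Sketch of adaptive bitrate rendition selection. The bitrate ladder and safety
# margin are hypothetical examples, not values from any specific player.

RENDITIONS_KBPS = [400, 1_200, 2_500, 5_000, 8_000]   # hypothetical bitrate ladder

def pick_rendition(measured_throughput_kbps: float, safety: float = 0.8) -> int:
    """Return the highest rendition bitrate (kbps) that fits the measured throughput."""
    budget = measured_throughput_kbps * safety          # leave headroom for variance
    candidates = [r for r in RENDITIONS_KBPS if r <= budget]
    return candidates[-1] if candidates else RENDITIONS_KBPS[0]

if __name__ == "__main__":
    for throughput in (600, 3_000, 12_000):             # kbps measured by the player
        print(f"{throughput} kbps link -> {pick_rendition(throughput)} kbps rendition")
```

The storage and encoding cost mentioned above comes from having to produce and host every rung of that ladder for every title.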

Finally, there’s media optimization, which takes an already-compressed video stream, analyzes its perceptual properties, and encodes it to a lower bitrate to increase streaming speeds without affecting the original video quality. This would be like taking a ball of modeling clay and squeezing it to make it smaller: It still has the same amount of clay, but occupies a smaller amount of space. Some forms of media optimization may struggle to maintain the quality of the video while reducing file size, but when done correctly using a reliable perceptual quality measure, this process can reduce a bitrate and file size by 20-50 percent while retaining the full perceptual quality. And that, ladies and gentlemen, is Beamr Video.

While the major players continue to sort through the congestion issues, using current solutions like caching, adaptive bitrate streaming, and media optimization can alleviate the bandwidth bottleneck while providing a win-win-win for content providers, telcos, and end users.
