Can we profitably surf the Video Zettabyte Tsunami?

Two key ingredients are in place. But we need to get started now.

In a previous post, we warned about the Zettabyte video tsunami – and the accompanying flood of challenges and opportunities for video publishers of all stripes, old and new. 

Real-life tsunamis are devastating. But California’s all about big wave surfing, so we’ve been asking this question: Can we surf this tsunami?

The ability to do so is going to hinge on economics. So a better phrasing is perhaps: Can we profitably surf this video tsunami?

Two surprising facts came to light recently that point to an optimistic answer, and so we felt it was essential to highlight them.

1. The first fact is about the Upfronts – and it provides evidence that 4K UHD content can drive growth in top-line sales for media companies.

The results from the Upfronts – the annual marketplace where networks sell ad inventory to premium brand marketers – provided TV industry watchers a major upside surprise. This year, the networks sold a greater share of ad inventory at their upfront events, and at higher prices too. As Brian Steinberg put it in his July 27, 2016 Variety1 article:

“The nation’s five big English-language broadcast networks secured between $8.41 billion and $9.25 billion in advance ad commitments for primetime as part of the annual “upfront” market, according to Variety estimates. It’s the first time in three years they’ve managed to break the $9 billion mark. The upfront finish is a clear signal that Madison Avenue is putting more faith in TV even as digital-video options abound.”

Our conclusion? Beautiful, immersive content environments with a more limited number of high-quality ads can fuel new growth in TV. And 4K UHD, including the stunning impact of HDR, is where some of this additional value will surely come from.

Conventional wisdom is that today’s consumers are increasingly embracing ad-free SVOD OTT content from premium catalogs like Netflix, even when they have to pay for it. Since they are also taking the lead on 4K UHD content programming, that’s a great sign that higher value 4K UHD content will drive strong economics. But the data from the Upfronts also seems to suggest that premium ad-based TV content can be successful as well, especially when the Networks create immersive, clutter-free environments with beautiful pictures. 

Indeed, if the Olympics are any measure, Madison Avenue has received the message and upped its game on the creative. I saw more than a few head-turning 30-second spots. Have you seen the Chobani ads in pristine HD? They’re as powerful as it gets.2

Check out the link in reference 2 below to see the ads.

2. The second fact is about the operational side of the equation.

Can we deliver great content at a reasonable cost to a large enough number of homes?  On that front, we have more good news. 

The Internet in the United States is getting much faster. This, along with advanced methods of compression including HEVC, Content Adaptive Encoding, and Perceptual Quality Metrics, will result in a ‘virtual upgrade’ of existing delivery network infrastructure. In particular, data published by Ookla’s Speedtest.net on August 3, 2016 contained several stunning nuggets of information. But before we reveal the data, we need to provide a bit of context.

It’s important to note that 4K UHD content requires bandwidth of 15 Mbps or greater. Let’s be clear: this assumes Content Adaptive Encoding, Perceptual Quality Metrics, and HEVC compression are all used in combination. However, per Akamai’s State of the Internet report released in Q1 of this year, only 35% of the US population could access broadband speeds of 15 Mbps or higher.

(Note: We have seen suggestions that 4K UHD content requires up to 25 Mbps. Compression technologies improve over time and those data points may well be old news. Beamr is on the cutting edge of compression and we firmly believe that 10 – 15 Mbps is the bandwidth needed – today – to achieve stunning 4K UHD audio visual quality.)
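To put those bitrate numbers in perspective, here’s a quick back-of-the-envelope sketch (in Python, with illustrative figures only) of how much data a single continuous 4K stream would move per hour at the bitrates discussed above.

```python
# Data-volume arithmetic for a single continuous stream at the bitrates
# discussed above. Bitrate values are illustrative, not measurements.

def gigabytes_per_hour(bitrate_mbps: float) -> float:
    """Data moved by one continuous stream in an hour, in gigabytes."""
    bits_per_hour = bitrate_mbps * 1_000_000 * 3600
    return bits_per_hour / 8 / 1e9

for mbps in (10, 15, 25):
    print(f"{mbps:>2} Mbps -> {gigabytes_per_hour(mbps):.2f} GB per hour")
# 10 Mbps -> 4.50 GB, 15 Mbps -> 6.75 GB, 25 Mbps -> 11.25 GB
```

The gap between a 25 Mbps and a 15 Mbps stream works out to roughly 4.5 GB per viewing hour.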

And that’s what makes Ookla’s data so important. Ookla found that in the first 6 months of 2016, fixed broadband customers saw a 42% year-over-year increase in average download speeds to a whopping 54.97 Mbps. Even more importantly, while 10% of Americans lack basic access to FCC target speeds of 25 Mbps, only 4% of urban Americans lack access to those speeds. This speed boost seems to be a direct result of industry consolidation, network upgrades, and growth in fiber optic deployments.

After seeing this news, we also decided to take a closer look at that Akamai data. And guess what we found? A steep slope upward from prior quarters (see chart below).

To put it back into surfing terms: Surf’s Up!
[Chart: Time-based trends in internet connection speeds and adoption rates]

References:

(1) “How TV Tuned in More Upfront Ad Dollars: Soap, Toothpaste and Pushy Tactics” Brian Steinberg, July 27, 2016: http://variety.com/2016/tv/news/2016-tv-upftont-networks-advertising-increases-1201824887/ 

(2)  Chobani ad examples from their YouTube profile: https://www.youtube.com/watch?v=DD5CUPtFqxE&list=PLqmZKErBXL-Nk4IxQmpgpL2z27cFzHoHu

Data Caps, Zero-rated, Net Neutrality: The Video Tsunami Doesn’t Take Sides

We Need to Work Together to Conserve Bits in the Zettabyte Era

Over the past year, and again last week, there has been no shortage of articles and discussion around data caps, binge-on, zero rated content, and of course network neutrality.

We know the story. Consumer demand for Internet and over-the-top video content is insatiable. This is creating an unstoppable tsunami of video.

Vendors like Cisco have published the Visual Network Index to help the industry forecast how big that wave is, so we can work together to find sustainable ways to deliver it.

The Cisco VNI projects that internet video traffic will more than double to 2.3 Zettabytes by 2020. (Endnote 1.) To put it another way, that’s 1.3 billion DVDs’ worth of video crossing the internet daily in 2020, versus the 543 million DVDs’ worth that cross the internet each day today.

That’s still tough to visualize, so here’s a back-of-the-envelope thought experiment.

Let’s take the single largest TV event in history, Super Bowl 49.

An average of 114 million viewers watched Super Bowl 49 in 2015, every minute of a broadcast that ran about 3 hours and 35 minutes. We might say that roughly 24.5 billion cumulative viewer-minutes of video were watched.

Assume that a DVD holds 180 minutes of video. (Note, this is an inexact guess assuming a conservative video quality.) If one person watched 543 Million DVDs of video, she would have to spend 97.8 billion cumulative minutes watching all of it. That’s four Super Bowl 49s every day.

And in 2020, it’s going to be close to 10 Super Bowl 49s of cumulative viewer-minutes of video trafficking across the network. In one day.
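For readers who want to check the arithmetic, here’s a minimal sketch that reproduces the back-of-the-envelope numbers above, using the same assumptions (a 180-minute DVD, 114 million average viewers, a roughly 215-minute broadcast).

```python
# Reproducing the back-of-the-envelope comparison above.
# Assumptions from the text: one DVD holds ~180 minutes of video;
# Super Bowl 49 averaged 114 million viewers over a ~215-minute broadcast.

DVD_MINUTES = 180
SB49_VIEWERS = 114e6
SB49_BROADCAST_MINUTES = 3 * 60 + 35          # ~215 minutes

sb49_viewer_minutes = SB49_VIEWERS * SB49_BROADCAST_MINUTES   # ~24.5 billion

def super_bowls_per_day(dvds_per_day: float) -> float:
    """How many Super Bowl 49s of viewer-minutes a daily DVD count equals."""
    return dvds_per_day * DVD_MINUTES / sb49_viewer_minutes

print(super_bowls_per_day(543e6))   # today: ~4 Super Bowl 49s per day
print(super_bowls_per_day(1.3e9))   # 2020:  ~10 Super Bowl 49s per day
```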

That is a lot of traffic and it is going to be hard work to transport those bits in a reliable, high-quality fashion that is also economically sustainable.

And that’s true no matter whether you are a network operator or an over-the-top content distributor. Here’s why.

All Costs are Variable in the Long-run

Recently, Comcast and Netflix have agreed to partner, which bodes well for both companies’ business models, and for the consumer at large. However, last week there were several news headlines about data caps and zero-rated content. These will undoubtedly continue.

Now, it’s obvious that OTT companies like Netflix & M-GO need to do everything they can to reduce the costs of video delivery. That’s why both companies have pioneered new approaches to video quality optimization.

On the other hand, it might seem that network operators have a fixed cost structure that gives them wiggle room for sub-optimal encodes.

But it’s worth noting this important economic adage: In the long run, all costs are variable. When you’re talking about the kind of growth in video traffic that industry analysts are projecting to 2020, everything is a variable cost.

And when it comes to delivering video sustainably, there’s no room for wasting bits. Both network operators and over-the-top content suppliers will need to do everything they can to lower the number of bits they transport without damaging the picture quality of the video.

In the age of the Zettabyte, we all need to be bit conservationists.

 

Endnote 1: http://www.cisco.com/c/dam/m/en_us/solutions/service-provider/vni-forecast-widget/forecast-widget/index.html

Translating Opinions into Fact When it Comes to Video Quality

This post was originally featured at https://www.linkedin.com/pulse/translating-opinions-fact-when-comes-video-quality-mark-donnigan 

In this post, we attempt to de-mystify the topic of perceptual video quality, which is the foundation of Beamr’s content adaptive encoding and content adaptive optimization solutions. 

National Geographic has a hit TV franchise on its hands. It’s called Brain Games, starring Jason Silva, a talent described as “a Timothy Leary of the viral video age” by the Atlantic. Brain Games is accessible, fun, and accurate. It’s a dive into brain science that relies on well-produced demonstrations of illusions and puzzles to showcase the power – and limitations – of the human brain. It’s compelling TV that illuminates how we perceive the world. (Intrigued? Watch the first minute of this clip featuring Charlie Rose, Silva, and excerpts from the show: https://youtu.be/8pkQM_BQVSo )

At Beamr, we’re passionate about the topic of perceptual quality. In fact, we are so passionate that we built an entire company around it. Our technology leverages science’s knowledge of the human vision system to significantly reduce video delivery costs, reduce buffering, and speed up video starts without any change in the quality perceived by viewers. We’re also inspired by the show’s ability to make complex things compelling and accessible without distorting the truth. No easy feat. But let’s see if we can pull it off with a discussion of video quality measurement, which is also a dense topic.

Basics of Perceptual Video Quality

Our brains are amazing, especially in the way we process rich visual information. If a picture’s worth 1,000 words, what’s 60 frames per second in 4K HDR worth?

The answer varies based on what part of the ecosystem or business you come from, but we can all agree that it’s really impactful. And data intensive, too. But our eyeballs aren’t perfect and our brains aren’t either – as Brain Games points out. As such, it’s odd that established metrics for video compression quality in the TV business have been built on the idea that human vision is mechanically perfect.
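To make “data intensive” concrete, here’s a rough sketch of the uncompressed data rate behind that question. The assumptions (3840x2160 resolution, 60 frames per second, 10-bit samples, 4:2:0 chroma subsampling) are ours for illustration, not figures from this post.

```python
# A rough sense of "data intensive": the uncompressed data rate of a
# 4K HDR stream, under assumed parameters.

WIDTH, HEIGHT = 3840, 2160
FPS = 60
BITS_PER_SAMPLE = 10          # 10-bit video is typical for HDR
SAMPLES_PER_PIXEL = 1.5       # 4:2:0: one luma sample plus a quarter of each chroma plane

raw_bits_per_second = WIDTH * HEIGHT * SAMPLES_PER_PIXEL * BITS_PER_SAMPLE * FPS
print(f"~{raw_bits_per_second / 1e9:.1f} Gbps uncompressed")            # ~7.5 Gbps

# Squeezing that into the ~15 Mbps discussed earlier is roughly a 500:1 reduction.
compression_ratio = raw_bits_per_second / 15e6
print(f"~{compression_ratio:.0f}:1 compression to reach 15 Mbps")
```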

See, video engineers have historically relied heavily on two key measures to evaluate the quality of a video encode: Peak Signal to Noise Ratio, or PSNR, and Structural Similarity, or SSIM. Both are ‘objective’ metrics. That is, we use tools to directly measure the physics of the video signal and construct mathematical algorithms from that data to create metrics. But is it possible to really quantify a beautiful landscape with a number? Let’s see about that.

PSNR and SSIM look at different physical properties of a video, but the underlying mechanics for both metrics are similar. You compress a source video, then compare the “original” with its compressed derivative: the metric is computed from the differences between the two. The smaller those differences, the more similar the two videos are judged to be, and the more confidently we can describe our manipulation of the video, i.e. our encode, as having high or acceptable quality.
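As a concrete illustration, here is a minimal PSNR calculation in Python. The frame data below is synthetic, and in practice SSIM would typically come from an image-processing library rather than being hand-rolled; this is a sketch of the mechanics, not a production tool.

```python
import numpy as np

def psnr(original: np.ndarray, encoded: np.ndarray, max_value: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio between two frames of equal shape.

    Both inputs are pixel arrays, e.g. a decoded source frame and the
    corresponding decoded frame from the compressed encode.
    """
    mse = np.mean((original.astype(np.float64) - encoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")        # identical frames
    return 10.0 * np.log10((max_value ** 2) / mse)

# Toy usage: compare a synthetic frame against a noisy version of itself.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)
noisy = np.clip(frame + rng.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(frame, noisy):.2f} dB")
```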

Objective Quality vs. Subjective Quality


However, it turns out that these objectively calculated metrics do not correlate well to the human visual experience. In other words, in many cases, humans cannot perceive variations that objective metrics can highlight while at the same time, objective metrics can miss artifacts a human easily perceives.

The concept that human visual processing might be less than perfect is intuitive. It’s also widely understood in the encoding community. This fact opens a path to saving money, reducing buffering and speeding-up time-to-first-frame. After all, why would you knowingly send bits that can’t be seen?

But given the complexity of the human brain, can we reliably measure opinions about picture quality to know what bits can be removed and which cannot? This is the holy grail for anyone working in the area of video encoding.

Measuring Perceptual Quality

Actually, a rigorous, scientific, and peer-reviewed discipline has developed over the years to accurately measure human opinions about the picture quality on a TV. The math and science behind these methods are memorialized in an important ITU standard, ITU-R BT.500 (the International Telecommunication Union is the largest standards body in global telecom), most recently updated in 2012. I’ll provide a quick rundown.

First, a set of clips is selected for testing. A good test has a variety of clips with diverse characteristics: talking heads, sports, news, animation, UGC – the goal is to get a wide range of videos in front of human subjects.

Then, a subject pool of sufficient size is created and screened for 20/20 vision. They are placed in a light-controlled environment with a screen or two, depending on the set-up and testing method.

Instructions for one method are below, as a tangible example.

In this experiment, you will see short video sequences on the screen that is in front of you. Each sequence will be presented twice in rapid succession: within each pair, only the second sequence is processed. At the end of each paired presentation, you should evaluate the impairment of the second sequence with respect to the first one.

You will express your judgment by using the following scale:

5 Imperceptible

4 Perceptible but not annoying

3 Slightly annoying

2 Annoying

1 Very annoying

Observe carefully the entire pair of video sequences before making your judgment.
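Once the ratings are collected, they are reduced to a mean opinion score (MOS) with a confidence interval for each clip. Here’s a hedged sketch of that reduction step; the ratings below are hypothetical, and a real BT.500 analysis also includes subject screening and outlier rejection.

```python
from statistics import mean, stdev
from math import sqrt

def mean_opinion_score(ratings: list[int]) -> tuple[float, float]:
    """Mean opinion score and an approximate 95% confidence half-width
    for one test clip, given per-subject ratings on the 1-5 scale above."""
    n = len(ratings)
    mos = mean(ratings)
    ci95 = 1.96 * stdev(ratings) / sqrt(n)
    return mos, ci95

# Hypothetical ratings from 15 viewers for one processed sequence.
ratings = [5, 4, 5, 4, 4, 5, 3, 4, 5, 4, 4, 5, 4, 3, 5]
mos, ci = mean_opinion_score(ratings)
print(f"MOS = {mos:.2f} +/- {ci:.2f}")
```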

As you can imagine, testing like this is an expensive proposition indeed. It requires specialized facilities, trained researchers, vast amounts of time, and a budget to recruit subjects.

Thankfully, the rewards were worth the effort for teams like Beamr that have been doing this for years.

It turns out, if you run these types of subjective tests, you’ll find that there are numerous ways to remove 20 – 50% of the bits from a video signal without losing the ‘eyeball’ video quality – even when the objective metrics like PSNR and SSIM produce failing grades.

But most of the methods that have been tried are still stuck in academic institutions or research labs. This is because the complexities of upgrading or integrating the solution into the playback and distribution chain make them unusable. Have you ever had to update 20 million set-top boxes? Well if you have, you know exactly what I’m talking about.

We know the broadcast and large-scale OTT industry, which is why, when we developed our approach to measuring perceptual quality and applied it to reducing bitrates, we insisted on staying 100% inside the AVC/H.264 and HEVC/H.265 standards.

By pioneering the use of perceptual video quality metrics, Beamr is enabling media and entertainment companies of all stripes to reduce the bits they send by up to 50%. This reduces re-buffering events by up to 50%, improves video start time by 20% or more, and reduces storage and delivery costs.

Hopefully, you now understand the basics of perceptual video quality. You can also see why much of the video engineering community believes content adaptive encoding sits at the heart of next-generation encoding technologies.

Of course, when we stated above that there are numerous ways to remove 20 – 50% of the bits without sacrificing ‘eyeball’ video quality, we skipped over some very important details – such as how to apply subjective testing techniques to an entire catalog of videos, at scale and cost-efficiently.

Next time: Part 2 and the Opinionated Robot

Looking for better tools to assess subjective video quality?

You definitely want to check out Beamr’s VCT, the best software player available on the market for judging HEVC, AVC, and YUV sequences in modes that are highly useful to a video engineer or compressionist.

VCT is available for Mac and PC. And best of all, we offer a FREE evaluation to qualified users.

Learn more about VCT: http://beamr.com/h264-hevc-video-comparison-player/

 

Will Virtual Reality Determine the Future of Streaming?

As video services take a more aggressive approach to virtual reality (VR), the question of how to scale and deliver this bandwidth intensive content must be addressed to bring it to a mainstream audience.

While we’ve been talking about VR for a long time, you could say it was reinvigorated when Oculus grabbed the attention of Facebook, which put down $2 billion based on Mark Zuckerberg’s vision that VR is a future technology people will actively embrace. Industry forecasters tend to agree, suggesting VR will be front and center in the digital economy within the next decade. According to research by Canalys, vendors will ship 6.3 million VR headsets globally in 2016, and CCS Insight suggests that as many as 96 million headsets will be snapped up by consumers by 2020.

One of VR’s key advantages is the freedom to look anywhere in 360 degrees using fully panoramic video in a highly intimate setting. Panoramic video files are large and their resolutions high, often 4K (4096 pixels wide by 2048 pixels tall, depending on the standard) or bigger.

While VR is considered to be the next big revolution in the consumption of media content, we also see it popping up in professional fields such as education, health, law enforcement, defense, telecom, and media. It can provide a far more immersive live experience than TV by adding presence, the feeling that “you are really there.”

Development of VR projects have already started to take off and high-quality VR devices are surprisingly affordable. Earlier this summer, Google announced that 360-degree live streaming support was coming to YouTube.

Of course, all these new angles and sharpness of imagery creates new and challenging sets of engineering hurdles which we’ll discuss below.

Resolution and Quality?

Frame rate, resolution, and bandwidth are all affected by the sheer volume of pixels that VR transmits. Developers and distributors of VR content will need to maximize frame rates and resolution throughout the entire workflow. They must also keep up with the wide range of viewers’ devices, as sporting events in particular demand precise detail and high frame rates – think instant replay, slow motion, and 360-degree cameras.

In a recent Vicon industry survey, 28 percent of respondents stated that high-quality content was important to ensuring a good VR experience. Let’s think about simple file size comparisons: we already know that Ultra HD files take up considerably more storage space than SD, and the greater the file size, the greater the chance it will impede delivery. VR file sizes are no small potatoes. When you’re talking about VR video, you’re talking about four to six times the foundational resolution you would otherwise be transmitting. And if you thought Ultra HD was cumbersome, think about how you’re going to deal with resolutions beyond 4K for an immersive VR HD experience.
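Here’s a rough pixel-count sketch behind that “four to six times” remark. The 4096x2048 panorama size comes from the figure quoted above; the flat HD viewport is our own illustrative assumption.

```python
# Rough pixel-count arithmetic behind the "four to six times" remark.
# The 4096x2048 panorama comes from the figure quoted earlier; the
# 1920x1080 "flat" viewport is an illustrative assumption.

def pixels(width: int, height: int) -> int:
    return width * height

panorama_4k = pixels(4096, 2048)      # full 360-degree frame that must be transmitted
hd_viewport = pixels(1920, 1080)      # a single flat HD view

print(f"4K panorama vs HD viewport: {panorama_4k / hd_viewport:.1f}x")   # ~4x

# Higher-resolution panoramas (and stereoscopic views, which double the
# payload) push that multiple toward 6x and beyond.
```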

To keep pace with these file sizes, we need to continue developing video codecs that can quickly interpret the frame-by-frame data. HEVC is a great starting point, but frankly, given hardware device limitations, many content distributors are forced to continue using the H.264 codec. For this reason, we must harness advanced tools in image processing and compression; one example of such an approach is content adaptive perceptual optimization.

I want my VR now! Reaching End Users

Video content comes in a variety of formats, including combinations of stereoscopic 3D, 360-degree panoramas, and spherical views – all of which bring obvious challenges such as added strain on processors, memory, and network bandwidth. Modern codecs use a variety of algorithms to quickly and efficiently detect similarities between frames, but they are usually tailored to 2D content. A content delivery mechanism must be able to send this material to every user, and it should be smart about optimizing the processing and transmission of the video.

Minimizing latency, how long can you roll the boulder up the hill?

We’ve seen significant improvements in the graphic processing capabilities of desktops and laptops. However, to take advantage of the immersive environment that VR offers, it’s important that high-end graphics are delivered to the viewer as quickly and smoothly as possible. The VR hardware also needs to display large images properly and with the highest fidelity and lowest latency. There really is very limited room for things like color correction or for adjusting panning from different directions for instance. If you have to stitch or rework artifacts, you will likely lose ground. You need to be smart about it. Typical decoders for tablets or smart TVs are more likely to cause latency and they only support lower framerates. This means how you build the infrastructure will be the key to offering image quality and life-like resolution that consumers expect to see.

Bandwidth, where art thou?

According to Netflix, an Ultra HD streaming experience requires an Internet connection of 25 Mbps or higher. However, according to Akamai, the average Internet speed in the US is only approximately 11 Mbps. Effectively, this prohibits live streaming to a typical mobile VR device, which may need 25 Mbps at a minimum to achieve the required quality and resolution.

Most certainly, improvements in graphics processing and hardware will continue to drive forward the realism of immersive VR content, as the ability to render an image quickly becomes easier and cheaper. Just recently, Netflix jumped on the bandwagon and became the first of many streaming media apps to launch on Oculus’ virtual reality app store. As soon as VR display devices are able to integrate these higher-resolution screens, we will see another step change in the quality and realism of virtual environments. But whether the available bandwidth will be sufficient is a very real question.

To understand the applications for VR, you really have to see it to believe it

A heart-warming campaign from Expedia recently offered children at a research hospital in Memphis, Tennessee the opportunity to be taken on a journey of their dreams through immersive, real-time virtual travel – all without getting on a plane: https://www.youtube.com/watch?time_continue=179&v=2wQQh5tbSPw

The National Multiple Sclerosis Society also launched a VR campaign that inventively used the tech to give two people with MS the opportunity to experience their lifelong passions. These are the type of immersive experiences we hope will unlock a better future for mankind. We applaud the massive projects and time spent on developing meaningful VR content and programming such as this.

Frost & Sullivan forecasts that Pay TV operators will generate $1.5 billion in revenue from delivering VR content by 2020. In my estimation, the adoption of VR is limited only by the quality of the user experience, as consumer expectations will no doubt be high.

For VR to really take off, the industry needs to address these challenges, making VR more accessible and, most importantly, pairing it with unique and meaningful content. But it’s hard to talk about VR without experiencing it. I suggest you try it – you will like it.

The TV of Tomorrow Needs Standards Today: Why the streaming video industry must work together to solve video delivery quality issues

Nearly 50 percent of Americans have an entertainment subscription service like Netflix, Amazon Prime, or Hulu, accessed via a connected television or devices like Amazon Fire TV, Roku, or Apple TV, according to recent research from Nielsen. Furthermore, a quarter of those in the coveted 18-to-34 demographic have either cut their cable or satellite services or never signed up for a pay-TV package, according to ComScore.

It’s Not Just Millennials Cutting the Cord – Content Providers Are Too

For decades, cable and satellite services provided the exclusive gateway to mass audiences for premium and niche content channels. Today, with the ease of going consumer-direct via the Internet and over-the-top (OTT) streaming, new networks are joining video platforms and licensing content to transactional and subscription video-on-demand services at an unprecedented rate. The future of streamed TV, delivered whenever and wherever the viewer desires, is becoming a reality – or already is the reality for an ever-growing percentage of US households.

Yet reaching consumers where they are means today’s content publisher must support a wide array of devices and players to enable video viewing ‘anytime and anywhere’ across computers, televisions, and mobile devices. Device capabilities vary significantly, so the content publisher must build different applications to support each device and ensure the best possible user experience.

Solving these issues will require collaboration among many players, who each have a vested interest in building the digital (streaming) OTT industry, in a quest to meet and exceed the “broadcast quality” standard that viewers have come to expect.

As streaming, or OTT, moves from novelty to dominant distribution method, viewers are demanding better quality. Conviva, the leading streaming-experience measurement company, consistently reports in its consumer survey results that re-buffering events and poor video quality are the most cited frustrations for people watching online video. With the adoption of new technologies such as 4K, virtual reality, and OTT delivery of broadcast events, the demands on bandwidth will increase notably. Which explains why M-GO – a leading premium video-on-demand movie service partnered with Samsung and recently acquired by Fandango – reported that when it reduced bitrates using perceptual content adaptive technology, it saw improvements in its streaming user experience and consumer satisfaction.

The key role that video quality plays in user engagement and UX – and consequently in service provider revenues – has spurred recent efforts to improve video quality. These include progress on adaptive bitrate selection, better transport-layer algorithms, and CDN optimization. Think about it: a single IP video packet carries approximately 1,400 bytes of payload, and each IP packet contains multiple MPEG-encapsulated video packets. The loss of even one IP packet can lead to video impairments lasting a half second or more.
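To see why a single lost packet can hurt so much, here’s a small illustrative calculation. The 1,400-byte payload figure comes from the paragraph above, the 188-byte MPEG transport-stream packet size is standard, and the GOP length is an assumption picked for illustration.

```python
# Illustrating why a single lost IP packet matters. The 1,400-byte figure
# comes from the text; the 188-byte MPEG-TS packet size is standard; the
# GOP length is an assumed value for illustration.

IP_PAYLOAD_BYTES = 1400
TS_PACKET_BYTES = 188          # fixed MPEG transport-stream packet size
GOP_SECONDS = 2.0              # assumed interval between keyframes

ts_packets_lost = IP_PAYLOAD_BYTES // TS_PACKET_BYTES
print(f"One lost IP packet drops ~{ts_packets_lost} transport-stream packets")

# Because later frames reference the damaged one, the visible artifact can
# persist until the next keyframe arrives -- on average about half a GOP.
print(f"Average artifact duration: ~{GOP_SECONDS / 2:.1f} s (worst case ~{GOP_SECONDS:.1f} s)")
```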

The Need for Standardization Before Reaching the End User

While these efforts are valuable and demonstrate real improvements, one of the key missing pieces is an understanding of the structural constraints that handcuff video quality – constraints that take hold well before the stream ever reaches the client. Standardization of online video quality, particularly quality of experience (QoE), is more important than ever. But traditional methods of measuring quality do not translate well to OTT video.

Pay TV operators, such as cable companies, have a specific advantage when it comes to the quality they can deliver: they control every aspect of the delivery process, including the network and the playback device, known as the STB or set-top box. In contrast, the OTT delivery chain is fragmented across multiple vendors – delivery, storage, transcoding – each responsible for part of the overall system. Viewers care little about the complex network or routes involved in getting content to their device. They simply expect the same high-quality viewing experience they are accustomed to with traditional pay TV or broadcast systems.

Given this fragmentation, coupled with the numerous formats that must be supported across devices, the need for standardization and the related challenges are apparent. While we rely on monitoring and analysis, there is enough variation in measurement methodologies and definitions across the industry to impede our ability not only to maintain, but to improve, video quality. More than one video engineer would likely admit, privately, that they spend their day just making sure the video is working, and only after this task is accomplished do they consider what can be done to improve the quality of the video they are delivering.

Strides are being made to develop and evangelize best practices for high-quality delivery of video over the Internet, thanks in part to the Streaming Video Alliance (SVA). The recommendations, requirements, and guidelines being assembled by the SVA are helping to define new industry architectures and contribute to building best practices across the streaming video ecosystem to accelerate adoption worldwide.

Standards Pave the Way for Valuable Metrics

Without agreed-upon industry standards for both quality of service (QoS) and quality of experience (QoE), there can be no objective benchmark or performance measurement in the video delivery ecosystem.

The SVA’s guidelines define a common language that describes the effectiveness of network delivery and outlines key network delivery metrics: ideal video startup time, acceptable re-buffering ratio, average video encoding bitrates, and video start failure metrics.
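As a hedged sketch of what computing those metrics might look like in practice, here is some Python that summarizes hypothetical playback session records. The field names and session structure are ours for illustration, not an SVA specification.

```python
# A sketch of how the delivery metrics named above might be computed from
# playback session records. Session structure and field names are hypothetical.

from dataclasses import dataclass

@dataclass
class Session:
    startup_seconds: float        # time from play request to first frame
    watch_seconds: float          # total time spent playing video
    rebuffer_seconds: float       # total time stalled after playback started
    mean_bitrate_kbps: float      # average video encoding bitrate delivered
    started: bool                 # False if the video never started (start failure)

def summarize(sessions: list[Session]) -> dict:
    played = [s for s in sessions if s.started]
    return {
        "avg_startup_time_s": sum(s.startup_seconds for s in played) / len(played),
        "rebuffering_ratio": sum(s.rebuffer_seconds for s in played)
                             / sum(s.watch_seconds for s in played),
        "avg_bitrate_kbps": sum(s.mean_bitrate_kbps for s in played) / len(played),
        "video_start_failure_rate": 1 - len(played) / len(sessions),
    }

sessions = [
    Session(1.8, 600, 3.0, 4200, True),
    Session(2.4, 1200, 0.0, 5100, True),
    Session(0.0, 0, 0.0, 0, False),      # a start failure
]
print(summarize(sessions))
```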

The alliance’s main focus is on the bits you’ll never see – optimization and delivery techniques. As a technology enabler working on exactly those bits, and on improving all of the metrics above, we are excited to join with content providers, CDNs, and service providers to tackle the most pressing issues of streaming video delivery.

Content Is Going Everywhere

To feed the beast, the industry must band together to provide constant easy access to high-quality video content.  The name of the game is getting content to your consumer with the best quality and highest user experience possible, and the only way to do that is to increase file efficiency by optimizing file sizes and conserving bandwidth, to cut through the Internet clutter.

Today, consumers have widespread access to streaming video services with content choices coming online in ever greater quantities and at vastly improved quality. What is critically lacking is a broad-spectrum understanding of the nature of video quality problems as they occur.  Also, the cost of bandwidth in the age of data caps continues to be an open question. To help navigate through the clutter and help answer critical questions, visit our resource center for useful information.

 

How HDR, Network Function Virtualization, and IP Video are Shaping Cable

Beamr just returned from the Internet & Television Expo, or INTX, previously known as the Cable Show, where we identified three technology trends that are advancing rapidly and, in some cases, are already here: HDR, Network Function Virtualization, and IP Video.

HDR (High Dynamic Range) is probably the most exciting innovation in display technology in recent years.

There is a raging debate about resolution – are more pixels really better? – but there is no debating the visual impact of HDR. That’s why it’s great to see HDR-capable TVs in the market reaching lower and lower price points, with better and better performance. However, being able to display HDR is not enough. Without content, there is no impact.

For this reason, Comcast EVP and CTO Tony Werner’s announcement at INTX that on July 4th, Comcast will be shipping their Xi5 STB to meet NBC Universal’s schedule of transmitting select Olympic events in HDR, is a huge deal. Though there will be limited broadcast content available in HDR, once Comcast has a sufficiently high number of HDR set top boxes in the field, and as consumers buy more HDR enabled TVs, the HDR bit will flip from zero to one and we’ll wonder how we ever watched TV without it.

Virtualization is coming – and for some cable companies, it’s already here.

Though on the surface NFV (Network Function Virtualization) may be thought of as nothing more than the cable industry moving their data centers to the cloud, it’s actually much more than that. NFV offers an alternative to design, deploy and manage networking services by allowing network functions to run in software rather than traditional, “purpose-built” hardware appliances. In turn, this helps alleviate the limitations of designing networks using these “fixed” hardware appliances, giving network architects a lot more flexibility.

There are two places in the network where the efficiencies of virtualization can be leveraged: Access and Video. By digitizing access, the virtual CCAP removes the physical CCAP and CMTS completely, allowing the DOCSIS control plane to be virtualized. Distributing the PHY and the MAC is a critical step, but separating their functions is ground zero for virtualization.

Access virtualization is exciting, but what’s of great interest to those involved in video is virtualizing the video workflow from ingest to play out. This includes the encoding, transcoding, ad insertion, and packaging steps and is mainly tailored for IP video, though one cable operator took this approach to the legacy QAM delivery by leveraging converged services for IP and QAM. In doing this, the operator is able to simplify their video ingest workflow.

By utilizing a virtualized approach, operators are able to build more agile and flexible video workflows using “best of breed” components, meaning they can hand-pick the best transcoder, packager, etc. from separate vendors if needed. It also allows operators to select the best codec and video optimizer solutions – processes that are considered the most crucial parts of the video ingestion workflow, as the biggest IP (intellectual property) lies in the video processing, not in packaging, DRM, etc. With content adaptive encoding and optimization solutions being introduced in the last few years, an operator with a virtualized video workflow is free to add innovations as they reach the market. Gone are the days when service providers are forced to buy an entire solution from one vendor using proprietary, customized hardware.

With the IT industry (CPU, networking, storage) making tremendous progress in running video processing, packagers, and streamers as software-only solutions on standard COTS hardware, virtualization helps vendors focus on their core expertise, whether that is video processing, workflow, streaming, or ad insertion.

Virtualization can lower TCO, but it can also introduce operational and management challenges. Today, service providers buy “N” transcoders, “N” streamers, and so on to accommodate peak usage requirements. With virtualization, the main advantage is shared hardware, so less hardware is needed overall: file-based transcoders can run during off-peak times (the middle of the night), while more streamers run during peak times to accommodate a higher volume of unicast stream sessions (concurrency). This will require new pay-per-usage models, as well as sophisticated management and workflow solutions to spin up and tear down instances as demand rises and falls.
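Here’s a toy illustration of that hardware-sharing argument: dedicated pools must each be sized for their own peak, while a shared pool only needs to cover the combined peak. All the demand numbers below are made up.

```python
# Toy illustration of the hardware-sharing argument above. Dedicated pools
# are each sized for their own peak; a shared pool is sized for the combined
# peak. Demand figures are invented for illustration.

hours = range(24)
# Hypothetical demand, in server instances needed per hour.
streamers_needed   = [10 if 18 <= h <= 23 else 4 for h in hours]   # peaks in the evening
transcoders_needed = [8 if 1 <= h <= 5 else 2 for h in hours]      # file jobs run overnight

dedicated = max(streamers_needed) + max(transcoders_needed)
shared = max(s + t for s, t in zip(streamers_needed, transcoders_needed))

print(f"Dedicated pools: {dedicated} instances")   # 10 + 8 = 18
print(f"Shared pool:     {shared} instances")      # combined peak = 12
```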

For this reason we are seeing some vendors align with this strategy. Imagine Communications is entering the market with solutions for providing workflow management tools that are agnostic to the video processing blocks. Meanwhile, Cisco and Ericsson provide open workflows capable of interoperating with their transcoders, packagers, etc. while being open to third party integration. This opens the door for vendors like Beamr to provide video processing applications for encoding and perceptual quality optimization.

It is an IP Video world and that is a good thing.

Once the network is virtual, the distribution architecture flattens: an operator no longer needs to maintain separate topologies for service delivery in the home, outside the home, over fixed wire, wireless, and so on. The old days of separate RF, on-net, and off-net (OTT) systems are quickly moving behind us.

IP video is the enabler that frees up new distribution and business models, but most importantly it meets end users’ expectation of accessing their content anywhere, on any device, and at any time. Of course, there is that little thing called content licensing that can hold back the promise of anytime, anywhere, any place – especially for sports. But as content owners adapt to the reality that opening up availability will spur rather than hamper consumption, it may not be long before users can enjoy entertainment content on the terms they are willing to pay for.

Could we be entering the golden age of cable? I guess we’ll have to wait and see. One thing is certain: vendors should ask themselves whether they can be the best in every critical path of the workflow, because what is obvious is that service providers will be deciding for them – no single-vendor solution can be best of breed across today’s modern network and video architectures. Vendors who adapt to the changes virtualization brings to the market will be the leaders of the future.

At Beamr we have a 60-person engineering team focused solely on the video processing block of the virtualized network, specifically HEVC and H.264 encoding and content adaptive optimization solutions. Our team comes into the office every day with the single objective of pushing the boundary for delivering the highest quality video at the lowest bitrates possible. The innovations we are developing translate to improved customer experience and video quality, whether that is 4K HDR with Dolby Vision or reliable 1080p on a tablet.

IP Video is here, and in tandem with virtualized networks and the transition of video from QAM to the DOCSIS network, we are reaching a technology inflection point that is enabling better quality video than previous technological generations were able to deliver. We think it’s an exciting time to be in cable!