How to deal with the tension on the mobile network – part 2 (VIDEO Interview)

In late July, I reported on the “news” that Verizon was throttling video traffic for some users. As usual, the facts around this seemingly punitive act were not fully understood, which triggered this blog post.

At IBC last month (September 2017), I was interviewed by RapidTV, and much of the conversation centered on Apple’s announcement of HEVC support across the device ecosystem running iOS 11 and High Sierra. As I was reviewing this interview, it seemed natural to publish it as a follow-up to the original post.

There is no doubt that mobile operators are under pressure as a result of the network-crushing video traffic they are being forced to deliver. But the good news is that operators who adopt HEVC will enjoy significant bitrate efficiencies, possibly as high as 50%. And while many services will choose to pocket some of those savings, it also means they can upgrade their resolutions to full 1080p while simultaneously improving the video quality they deliver.

I hope you find this video insightful. Our team has a very simple evaluation offer to discuss with all qualified video services and video distributors. Just send an email to sales@beamr.com and we’ll get in touch with the details.

How to deal with the tension of video on the mobile network – Part 1

Last week, the Internet erupted in furor over Verizon’s alleged “throttling” of video streaming services over their mobile network. With a quick glance at the headlines, and to the uninitiated, this could be perceived as an example of a wireless company taking their market dominance too far. Most commenters were quick to pontificate, calling Verizon’s “interference” a violation of net neutrality.

But this article isn’t about the argument for, or against, network neutrality. Instead, let’s examine the tension that exists as a result of the rapid increase in video consumption on mobile devices for the OTT and video streaming industry. Let’s explore why T-Mobile, Verizon, and others that have yet to come forward, feel the need to reduce the size of the video files that are streaming across their networks.

Cisco reports that by 2021, 82% of all Internet traffic will be video. Mobile networks are on the same trajectory: according to Ericsson, 75% of the data flowing over mobile networks will be video by 2022, and BGR reports that by 2021 the average user is set to consume a whopping 8.9GB of data every month. These data points reveal why escalating consumption of video by wireless subscribers is creating tension in the ecosystem.

So what are the wireless operators trying to achieve by reducing the bitrates of video that is being delivered on their network?

Many mobile service operators offer their own entertainment video service packages, which means they are free to deliver the content in the quality that is consistent with their service level positioning. For some, this may be low to medium quality, but most viewers won’t settle for anything short of medium to high quality.

Most mobile networks carry video for both internal and third-party distribution services. AT&T, for example, delivers its own DirecTV Now while at the same time carrying video for Netflix. This means AT&T is free to modify DirecTV Now’s encoded files to the maximum extent in order to achieve the right blend of quality and low bitrate, while for premium services like Netflix the video packets cannot be touched due to DRM and the widespread adoption of HTTPS encryption. The point is, mobile carriers don’t always control the formats or quality of the video they carry over the network, and for this reason every content owner and video distributor should have an equal interest in pre-packaging (optimizing) their content for the highest quality and smallest file size possible.

As consumers grow more savvy to the difference in video and service quality between content services, many are becoming less willing to compromise. After all, you don’t invest in a top-of-the-line phone with an AMOLED screen to watch blocky low resolution video. Yet, because of the way services deliver content to mobile devices, in some cases the full quality of the device’s screen cannot be realized by the consumer.

We see this point accentuated when a mobile network operator implements technology designed to reduce the resolution, or lower video complexity, in order to achieve a reduced bandwidth target. Attempts are made to make these changes while preserving the original video quality as much as possible, but it stands to reason that if you start with 1080p (full HD) and reduce the resolution to 480p (standard definition), the customer experience will suffer. Currently, the way bandwidth is being reduced on mobile networks is best described as a brute force method. In scenarios where mobile operators force 480p, the bitrate is reduced at the expense of resolution. But is this the best approach? Let’s take a look.
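To put the brute-force trade-off in perspective, here is a quick back-of-the-envelope calculation; the frame sizes are the standard 1080p and 16:9 480p dimensions, and the proportional-bitrate scaling is our own simplifying assumption, used purely for illustration:

```python
# Rough illustration of why forcing 480p cuts bitrate at the expense of detail.
full_hd = 1920 * 1080      # pixels per 1080p frame
sd = 854 * 480             # pixels per 16:9 480p frame

ratio = full_hd / sd
print(f"1080p carries {ratio:.1f}x more pixels than 480p")   # ~5.1x

# If bitrate were scaled roughly in proportion to pixel count (an illustrative
# assumption), a 4 Mbps 1080p stream would drop to well under 1 Mbps at 480p,
# but every one of those discarded pixels is detail the viewer paid for.
print(f"4 Mbps at 1080p ~ {4 / ratio:.2f} Mbps at 480p (proportional scaling)")
```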

Beamr published a case study with findings from M-GO where our optimization solution helped to reduce buffering events by up to 50%, and improved stream start times by as much as 20%. These are impressive achievements, and indicative of the value of optimizing video for the smallest size possible, provided the original quality is retained.

A recent study “Bit rate and business model” published by Akamai in conjunction with Sensum also supports M-GO and Conviva’s Viewer Experience Report findings. In the Akamai/Sensum study, the human reaction to quality was measured and the researchers found that three out of four participants would stop using a service after even a few re-buffering events.

For the study, viewers were split into two control groups with one group exposed only to a lower resolution (quality) stream that contained at least one stream interruption (re-buffering event). This group was 20% less likely to associate a positive word with the viewing experience as compared to viewers who watched the higher quality full resolution stream that played smoothly without buffering (resolutions displayed were up to 4K). Accordingly, lower quality streams led to a 16% increase in negative emotions, while higher quality streams led to a 20% increase in emotional engagement.

There are those who claim “you can’t see 4K,” or use phrases like “smarter pixels, not more pixels.” Given the complexity of the human visual system and its interconnection with our brain, the Akamai study shows that our physiological systems are able to detect differences between higher and lower resolutions. These disruptions were validated by changes in the viewers’ eye movements, breathing patterns, and increased perspiration.

Balancing the needs of the network, video distributor, and consumer.

  • Consumers expect content at their fingertips, and they also expect the total cost of the content and the service needed to deliver it, to be affordable.
  • Service providers are driven by the need to deliver higher quality video to increase viewer engagement.
  • Mobile network operators welcome with open arms any application that drives more demand for their product (data), yet they must face the challenge of dealing with this expanding data demand, which is beginning to outstrip customers’ willingness to pay.

Delivering content over the Internet is not free, as some assume. Since the streaming video distributor pays the CDN by the size of the package, e.g. gigabytes delivered, they are able to exploit the massive network investments made by mobile operators. Meanwhile, they (or more specifically their end customers) carry the expectation that the capacity needed to deliver their videos will always be available to meet demand. Thus, a network operator must invest ahead of revenues on the promise that growth will justify the investment.

All of this can be summed up by this simple statement, “If you don’t take care of the bandwidth, someone else will.”

Video codecs are evolutionary, with each progressive codec being more efficient than the last. The current standard is H.264, and though this codec delivers amazing quality with reasonable performance and bitrate reduction, it’s built on a standard that is now fourteen years old. As even entry-level mobile phones now support 1080p, video encoding engineers are running into an issue: H.264 cannot reach the quality they need below 3 Mbps. In fact, some distributors are pushing their H.264 bitrates lower than 3 Mbps for 1080p, but in doing so they must be willing to introduce noticeable artifacts. So the question is, how do we get to 2 Mbps or lower, but with the same quality as 3-4 Mbps, and at the original resolution?

Enter HEVC.

With Apple’s recent announcement of HEVC support across as many as 400 million devices with hardware decoding, content owners should be looking seriously at adopting HEVC in order to realize the 40% bitrate reduction over H.264 that Apple is reporting. But how exactly can HEVC bring relief to an overburdened mobile network?

In the future it can be argued that once HEVC has reached broad adoption, the situation we have today with bitrates being higher than we’d like, will no longer exist. After all, if you could flip a switch and reduce all the video traffic on the network by 40% with a more efficient compression scheme (HEVC), then it’s quite possible that we’ll push the bandwidth crunch out for another 3-5 years.

But this thinking is more related to fairytales and unicorns than real life. For one thing, video encoding workflows and networks do not function like light switches. Not only does it take time to integrate and test new technology, but a bigger issue is that video consumption and advanced entertainment experiences, like VR, AR, and 360, will consume the new white space as quickly as it becomes available, bringing us back to where we are today.

Meeting the bandwidth challenge will require working together.

In the above scenario, there is a shared responsibility on both the distributor and the network to play their role in guaranteeing that quality remains high while not wasting bits. For those who are wondering, inefficient encoding methods and dated codecs such as H.264 both fall into the “wasting bits” category.

The Internet is a shared resource and whether it stays under some modicum of government regulation, or becomes open again, it’s critical for all members of the ecosystem to recognize that the network is not of infinite capacity and those using it to distribute video should respect this by taking the following steps:

  1. Adopt HEVC across all platforms and resolutions. This step alone will yield up to a 40% reduction over your current H.264 bandwidths.
  2. Implement advanced content-adaptive technologies such as Beamr CABR (Content-Adaptive Bitrate), which can reduce video bitrates by an additional 30-50% on top of the 40% that HEVC affords (a quick back-of-the-envelope calculation of the combined effect follows this list).
  3. Adopt just-in-time encoding that allows real-time dynamic control of bitrate based on the needs of the viewing device and network conditions. Intel and Beamr have partnered to offer an ultra-high density, low-cost HEVC 4K live 10-bit encoding solution using the E3 platform with IRIS PRO P580 graphics accelerator.
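To make the combined effect of steps 1 and 2 concrete, here is a minimal back-of-the-envelope sketch using only the figures quoted in this post (up to 40% from HEVC, a further 30-50% from content-adaptive optimization); actual savings will vary with the content:

```python
# Back-of-the-envelope: combining the savings from steps 1 and 2 above.
h264_bitrate_mbps = 3.0          # typical 1080p H.264 target discussed earlier

hevc_saving = 0.40               # step 1: up to 40% from HEVC over H.264
cabr_saving_low, cabr_saving_high = 0.30, 0.50   # step 2: applied on top of HEVC

hevc_bitrate = h264_bitrate_mbps * (1 - hevc_saving)
optimized_low = hevc_bitrate * (1 - cabr_saving_low)
optimized_high = hevc_bitrate * (1 - cabr_saving_high)

print(f"HEVC alone:  {hevc_bitrate:.2f} Mbps")                              # 1.80 Mbps
print(f"HEVC + CABR: {optimized_high:.2f}-{optimized_low:.2f} Mbps")        # 0.90-1.26 Mbps
```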

In conclusion.

  • With or without network neutrality, reducing video bandwidth will be a perpetual need for the foreseeable future. Whether to delay capex investment, meet competitive pressure on video quality, or simply increase profitability and decrease opex, the benefits of always delivering the smallest files and streams possible are easy to model.
  • The current method of brute-forcing lower resolutions, or transcoding to a reduced framerate, will not be sustainable as consumers expect the original experience to be delivered. The technical solutions implemented must deliver high quality and be ready for next-generation entertainment experiences. At the same time, if you don’t work to trim the fat from your video files, someone else may do it, and it most certainly will be at the expense of video quality and user experience.
  • HEVC and Beamr CABR represent the state of the art in high quality video encoding and bitrate reduction (optimization) without compromise.

If you’d like to learn more, keep an eye out for part two in this series, or take a moment to read this relevant article: It’s Unreasonable to Expect ISP’s Alone to Finance OTT Traffic

In the meantime, you can download our VP9 vs. HEVC white paper, learn how to encode content for the future, or contact us at sales@beamr.com to talk further.


How the Magic of Beamr Beats SSIM and PSNR

Every video encoding professional faces the dilemma of how best to detect artifacts and measure video quality. If you have the luxury of dealing with high bitrate files, this is less of an issue, since for many videos throwing enough bits at the problem means an acceptably high video quality is nearly guaranteed. However, for those living in the real world where 3 Mbps is the average bitrate they must target, compressing at scale requires metrics (algorithms) to help measure and analyze the visual artifacts in a file after encoding. This process is becoming even more sophisticated as some tools enable a quality measure to feed back into the encoding decision matrix, though more commonly quality measures are used as part of the QC step. For this post we are going to focus on quality measures used as part of the encoding process.

We will discuss the two common quality measures, PSNR and SSIM, but as you will see there is a third: the Beamr quality measure, which is the focus of the bulk of this article.

PSNR, the Original Objective Quality Measure

PSNR, peak signal-to-noise ratio represents the ratio between the highest power of an original signal and the power level of the distortion. PSNR is one of the original engineering metrics that is used to measure the quality of image and video codecs. When comparing or measuring the quantitative quality of two files such as an original and a compressed version, PSNR attempts to approximate the difference between the compressed and the original. A significant shortcoming is that PSNR may indicate that the reconstruction is of suitably high quality when in some cases it is not. For this reason a user must be careful to not hold the results in high regard.
What is SSIM?

SSIM, or the structural similarity index, is a technique for predicting the perceived quality of digital images and videos. The initial version was developed at the University of Texas at Austin, while the full SSIM algorithm was developed jointly with New York University’s Laboratory for Computational Vision. SSIM is a perceptual model based algorithm that treats image degradation as a perceived change in structural information, while also incorporating crucial perceptual details such as luminance and contrast masking. The difference compared with techniques like PSNR is that those approaches estimate absolute errors, whereas SSIM models perception.

The basis of SSIM is the assumption that pixels have strong inter-dependencies, and these dependencies carry important information about the structure of the objects in the scene, GOP, or adjacent frames. Put simply, structural similarity is used for computing the similarity of two images. SSIM is a full reference metric, meaning the computation and measurement of image quality is based on an uncompressed image as a reference. SSIM was developed as a step up from traditional methods such as PSNR (peak signal-to-noise ratio), which has proven to be poorly correlated with human vision. Yet SSIM itself is not perfect and can be easily fooled, as shown by the following graphic: though the original and compressed images are visually very close, PSNR and SSIM scored them as being not similar, while Beamr and MOS (mean opinion score) show them as being closely correlated.
[Image: beamr_ssim_psnr_2 – original vs. compressed frames scored by PSNR, SSIM, Beamr, and MOS]
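
For comparison, here is the core SSIM formula in the same minimal style. Note that production SSIM implementations evaluate this per local window (typically Gaussian-weighted) and average the results; applying it over the whole image, as below, is a simplification for illustration only:

```python
import numpy as np

def global_ssim(x: np.ndarray, y: np.ndarray, peak: float = 255.0) -> float:
    """Core SSIM formula applied over the whole image (single window).

    Real SSIM computes this per local window and averages the results;
    this version only shows the luminance/contrast/structure terms."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * peak) ** 2
    c2 = (0.03 * peak) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# Sanity check: an image compared with itself scores 1.0.
img = np.arange(64, dtype=np.float64).reshape(8, 8)
print(global_ssim(img, img))   # -> 1.0
```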

Beamr Quality Measure

The Beamr quality measure is a proprietary, low complexity, reliable, perceptually aligned quality measure. It enables controlling a video encoder to obtain an output clip with (near) maximal compression of the video input, while still maintaining the input video’s resolution, format, and visual quality. This is done by controlling the compression level of each frame, or GOP, in the video sequence so that each is compressed as deeply as it can be while still producing a perceptually identical output.

The Beamr quality measure is also a full-reference measure, i.e. it indicates the quality of a recompressed image or video frame when compared to a reference (original) image or video frame. This is in line with the challenge our technology aims to tackle: reducing bitrates to the maximum extent possible without imposing any quality degradation, as perceived by the human visual system, relative to the original. The Beamr quality measure calculation consists of two parts: a pre-process of the input video frames to obtain various score configuration parameters, and an actual score calculation done per candidate recompressed frame. Following is a system diagram of how the Beamr quality measure interacts with an encoder.
[Image: beamr_ssim_psnr_1 – system diagram of the Beamr quality measure controlling an encoder]
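
To make the control loop in that diagram more concrete, here is a highly simplified sketch of how a perceptual score can steer a per-frame compression decision. The helper functions encode_frame and perceptual_score are hypothetical stand-ins, and the binary search over QP is our own illustration, not Beamr’s actual (proprietary) rate control:

```python
# Illustrative sketch of a quality-driven per-frame control loop.
# Hypothetical stand-ins (not real Beamr APIs):
#   encode_frame(frame, qp)      -> compressed candidate frame at quantizer qp
#   perceptual_score(ref, cand)  -> score in [0, 1], 1.0 == perceptually identical

def optimize_frame(frame, baseline_qp, quality_floor=0.99, qp_range=12):
    """Push QP as high (compression as deep) as possible while the candidate
    still scores above the perceptual quality floor."""
    best = encode_frame(frame, baseline_qp)
    lo, hi = baseline_qp, baseline_qp + qp_range
    while lo < hi:
        mid = (lo + hi + 1) // 2
        candidate = encode_frame(frame, mid)
        if perceptual_score(frame, candidate) >= quality_floor:
            best, lo = candidate, mid          # still perceptually identical: go deeper
        else:
            hi = mid - 1                       # visible degradation: back off
    return best
```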

Application of the Beamr Quality Measure in an Encoder

The Beamr quality measure when integrated with an encoder enables the bitrate of video files to be reduced by up to an additional 50% over the current state of the art standard compliant block based encoders, without compromising image quality or changing the artistic intent. If you view a source video and a Beamr-optimized video side by side, they will look exactly the same to the human eye.

A question we get asked frequently is “How do you perform the “magic” of removing bits with no visual impact?”  

Well, believe it or not there is no magic here, just solid technology that has been actively in development since 2009, and is now covered by 26 granted patents and over 30 additional patent applications.  

When we first approached the task of reducing video bitrates based on the needs of the content and not a rudimentary bitrate control mechanism, we asked ourselves a simple starting question, “Given that the video file has already been compressed, how many additional bits can the encoder remove before the typical viewer would notice?”

There is a simple manual method of answering this question, just take a typical viewer, show them the source video and the processed video side by side, and then start turning down the bitrate knob on the processed video, by gradually increasing the compression.  And at some point, the user will say “Stop! Now I can see the videos are no longer the same!”  

At that point, turn the compression knob slightly backwards, and there you have it – a video clip that has an acceptably lower bitrate than the source, and just at the point before the average user can notice the visual differences.

Of course I recognize what you are likely thinking, “Yes, this solution clearly works, but it doesn’t scale!” and you are correct. Unfortunately many academic solutions suffer from this problem. They make for good hand built demos in carefully controlled environments with hand picked content, but put them out in the “wild” and they fall down almost immediately. And I won’t even go into the issues of varying perception among viewers of different ages, or across multiple viewing conditions.

Another problem with such a solution is that different parts of the videos, such as different scenes and frames, require different bitrates.  So the question is, how do you continually adjust the bitrate throughout the video clip, all the time confirming with your test viewer that the quality is still acceptable?  Clearly this is not feasible.

Automation to the Rescue

Today, it seems the entire world is being infected with artificial intelligence, which in many cases is not much more than automation that is smart and able to adapt to its environment. So we too looked for a way to automate this image analysis process. That is, take a source video and discover a way to reduce the “non-visible” bits in a fully automatic manner, with no human intervention involved. A suitable solution would enable the bitrate to vary continuously throughout the video clip based on the needs of the content at that moment.

What is CABR?

You’ve heard of VBR, or variable bitrate. Beamr has coined the term CABR, or content-adaptive bitrate, to summarize the process just described, where the encoder is adjusted at the frame level based on quality requirements rather than relying only on a bit budget to decide where bits are applied and how many are needed. But we understood that in order to accomplish the vision of CABR, we would need to be able to simulate the perception of a human viewer.

We needed an algorithm that would answer the question, “Given two videos, can a human viewer tell them apart?”  This algorithm is called a Perceptual Quality Measure and it is the very essence of what sets Beamr so far apart from every other encoding solution in the market today.

A quality measure is a mathematical formula, which tries to quantify the differences between two video frames.  To implement our video optimization technology, we could have used one of the well-known quality measures, such as PSNR (Peak Signal to Noise Ratio) or SSIM (Structural SIMilarity). But as already discussed, the problem with these existing quality measures is that they are simply not reliable enough as they do not correlate highly enough with human vision.

There are other sophisticated quality measures which correlate highly enough with human viewer opinions to be useful, but since they require extensive CPU power they cannot be utilized in an encoding optimization process, which requires computing the quality measures several times for each input frame.

Advantages of the Beamr Quality Measure

With the constraints of objective quality measures we had no choice but to develop our own quality measure, and we developed it with a very focused goal: To identify and quantify the specific artifacts created by block-based compression methods.

All of the current image and video compression standards, including JPEG, MPEG-1, MPEG-2, H.264 (AVC) and H.265 (HEVC) are built upon block based principles.

They divide an image into blocks, attempt to predict the block from previously encoded pixels, and then transform the block into the frequency domain, and quantize it.  

All of these steps create specific artifacts, which the Beamr quality measure is trained to detect and measure.  So instead of looking for general deformations, such as out of focus images, missing pixels etc. which is what general quality measures do, in contrast, we look for artifacts that were created by the video encoder.

This means that our quality measure is tightly focused and extremely efficient, and as a result, the CPU requirements of our quality measure are much lower than quality measures that try to model the Human Visual System (HVS).
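
As a toy illustration of where such block artifacts originate, here is a plain 8x8 DCT followed by coarse quantization, the generic JPEG/MPEG-style intra pipeline described above; it is not Beamr’s detector, and the quantization step size is an arbitrary example value:

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis used by JPEG/MPEG-style block transforms."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos((2 * i + 1) * k * np.pi / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

C = dct_matrix()
rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(8, 8)).astype(np.float64)   # one 8x8 pixel block

coeffs = C @ (block - 128) @ C.T           # transform to the frequency domain
step = 40.0                                # coarse quantizer: the source of blockiness
quantized = np.round(coeffs / step) * step
reconstructed = C.T @ quantized @ C + 128  # what the decoder sees

print("max per-pixel error:", np.abs(block - reconstructed).max())
```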

Beamr Quality Measure and the Human Visual System

After years of developing our quality measure, we put it to the test under the strict requirements of ITU-R BT.500, an international standard for testing image quality. We were happy to find that the correlation of our quality measure with subjective (human) results was extremely high.

When the testing was complete, we felt certain this revolutionary quality measure was ready for the task of accurately comparing two images for similarity, from a human point of view.

But compression artifacts are only part of the secret. When a human looks at an image or video, the eye and the brain are drawn to particular places in the scene, for example, places where there is movement, and in fact we are especially “tuned” to capture details in faces.

Since our attention is focused on these areas, artifacts are more disturbing than the same artifacts in other areas of the image, such as background regions or out-of-focus areas. For this reason the Beamr quality measure takes this into account, and it ensures that when we measure quality proper attention is given to the areas that require it.

Furthermore, the Beamr quality measure takes into account temporal artifacts, introduced by the encoder, because it is not sufficient to ensure that each frame is not degraded, it is also necessary to preserve the quality and feel of the video’s temporal flow.

The Magic of Beamr

With the acquisition last year of Vanguard Video, many industry observers have gone public with the idea that the combination of our highly innovative quality measure, tightly integrated with the world’s best encoder, could lead to a real shake-up of the ecosystem.

We encourage you to see for yourself what is possible when the world’s most advanced perceptual quality measure becomes the rate-control mechanism for the industry’s best quality software encoder. Check out Beamr Optimizer.

Connecting Virtual Reality with the History of Encoding Technology

Two fun and surprising brain factoids are revealed that connect virtual reality with the history of encoding technology.

Bloomberg featured a Charlie Rose interview with Jeremy Bailenson, the founding director of Stanford University’s Virtual Human Interaction Lab. Not surprisingly, the lab houses the sharpest minds and insightful datasets in its discipline of focus: Virtual Reality.

It’s a 20-minute video that touches on some fascinating elements of VR – very few of which are about commercial television or sports entertainment experiences.

In fact, it is as much an interview about the brain, human interaction, and the physical body, as it is about media and entertainment.

As Jeremy says: “The medium [of VR] puts you inside of the media. It feels like you are actually doing something.”

Then, he states our first stunning fact about the brain, which illustrates why VR will be so impactful on modern civilization:

We can’t tell the difference!

Professor Bailenson: “The brain is going to treat that as if it is a real experience. Humans have been around for a very long time [evolving in the real world.] The brain hasn’t yet evolved to really understand the difference between a compelling virtual reality experience and a real one.”

The full video is here.

So there you have it. Our brains are nothing short of miraculous, but they’ve evolved some peculiar wiring to say the least. To put it bluntly, while humans are exceptionally clever in many ways, we’re not so much in others.

Which is the perfect segue into my second surprising factoid about the brain, and it’s taken 25 years for commercial video markets to exploit this fact!

To be fair, that’s not an exact statement, but here’s the timeline for reference.

According to Wikipedia, Cinepak was one of the very first commercial implementations of video compression technology. It made it possible to watch video utilizing CD-ROM. (Just typing the words taps into nostalgia.) Cinepak was released in 1991 and became part of Apple’s QuickTime toolset a year later.

It was 16 years later, in 2007, that the Video Quality Experts Group decided to create and benchmark a new metric that – while not perfect – served as a milestone for the video coding community. For the first time, there was a recognition that maximum compression required us to take human vision biology into account when designing algorithms to shrink video files. Their perceptual metric was known as Perceptual Evaluation of Video Quality, and despite its impracticality for implementation, it became part of the International Telecommunication Union standards.

Then in 2009, Beamr was formed to solve the very real need to reduce file sizes while retaining quality. This need became evident after an encounter with a consumer technology products company who indicated the massive cost of storage for digital media was an inhibitor for them to offer services that could extend the capacity of their devices. So we set out to solve the technical challenge of reducing redundant bits without compromising quality, and to do this in a fully automatic manner. The result? 50 patents have now been granted or are pending. And we have commercial implementations of our solution that have been working in some of the largest new media and video distribution platforms for more than three years.

But beyond this, there is another subjective data point from Beamr’s experience over the last few quarters: many of the conversations and evaluations we are entering into about next-generation encoding are not limited to advanced codecs, but increasingly concern subjective quality metrics – leveraging our knowledge of the human visual system to remove bits from a compressed video file with no human-noticeable difference.

As VR, 360, UHD, HDR and other exciting new consumer entertainment technologies are beginning to take hold in the market, never before has there been a greater need to advance the state of the art in the area of maximizing quality at a given bitrate. Beamr was the first company to step up to address and solve this challenge, and with our demonstrable quality, it’s not a stretch to suggest that we have the lead.

More information on Beamr’s software encoding and optimization solutions can be found at beamr.com

2016 Paves the Way for a Next-Gen Video Encoding Technology Explosion in 2017

2016 has been a significant year for video compression as 4K, HDR, VR and 360 video picked up steam, paving the road for an EXPLOSION of HEVC adoption in 2017. With HEVC’s ability to reduce bitrate and file sizes up to 50% over H.264, it is no surprise that HEVC has transitioned to be the essential enabler of high-quality and reliable streaming video powering all the new and exciting entertainment experiences being launched.

Couple this with the latest announcement from HEVC Advance removing royalty uncertainties that plagued the market in 2016 and we have a perfect marriage of technology and capability with HEVC.

In this post we’ll discuss 2016 through the lens of Beamr’s own product and company news, combined with notable trends that will shape 2017 in the advanced video encoding space.

>> The Market Speaks: Setting the Groundwork for an Explosion of HEVC

The State of 4K

With 4K content creation growing and the average selling price of UHD 4K TVs dropping (and being adopted faster than HDTVs), 4K is here and the critical mass of demand will follow closely. We recently did a little investigative research on the state of 4K and four of the most significant trends pushing its adoption by consumers:

  • The upgrade in picture quality is significant and will drive an increase in value to the consumer – and, most importantly, additional revenue opportunities for services as consumers are preconditioned to pay more for a premium experience. It only takes a few minutes viewing time to see that 4K offers premium video quality and enhances the entertainment experience.
  • Competitive forces are operating at scale – Service Providers and OTT distributors will drive the adoption of 4K. MSOs are upping their game, and in 2017 you will see several deliver highly formidable services to take on pure-play OTT distributors. Who’s going to win, who’s going to lose? We think it’s going to be a win-win, as services are able to increase ARPUs and reduce churn, while consumers will be able to actually experience the full quality and resolution that their new TV can deliver.
  • Commercially available 4K UHD services will be scaling rapidly –  SNL Kagan forecasts the number of global UHD Linear channels at 237 globally by 2020, which is great news for consumers. The UltraHD Forum recently published a list of UHD services that are “live” today numbering 18 VOD and 37 Live services with 8 in the US and 47 outside the US. Clearly, content will not be the weak link in UHD 4K market acceptance for much longer.
  • Geographic deployments — 4K is more widely deployed in Asia Pacific and Western Europe than in the U.S. today. But we see this as a massive opportunity, since many people travel abroad and will be exposed to the incredible quality. They will then return home and ask their service provider why they had to travel outside the country to see 4K. Which means that as soon as the planned services in the U.S. are launched, they will likely attract customers more quickly than we’ve seen in the past.

HDR adds WOW factor to 4K

High Dynamic Range (HDR) improves video quality by going beyond more pixels to increase the amount of data delivered by each pixel. HDR video is capable of capturing a larger range of brightness and luminosity to produce an image closer to what can be seen in real life. Show anyone HDR content encoded in 4K resolution, and it’s no surprise that content providers and TV manufacturers are quickly jumping on board to deliver content with HDR. Yes, it’s “that good.” There is no disputing that HDR delivers the “wow” factor that the market and consumers are looking for. But what’s even more promising is the industry’s overwhelmingly positive reaction to it. Read more here.

Beamr has been working with Dolby to enable Dolby Vision HDR support for several years now, even jointly presenting a white paper at SMPTE in 2014. The V.265 codec is optimized for Dolby Vision and HDR10 and takes into account all requirements for both standards including full support for VUI signaling, SEI messaging, SMPTE ST 2084:2014 and ITU-R BT.2020. For more information visit http://beamr.com/vanguard-by-beamr-content-adaptive-hevc-codec-sdk

Beamr is honored to have customers who are best in class and span OTT delivery, Broadcast, Service Providers and other entertainment video applications. From what we see and hear, studios are uber excited about HDR, cable companies are prepping for HDR delivery, Satellite distributors are building the capability to distribute HDR, and of course OTT services like Netflix, FandangoNow (formerly M-GO), VUDU, and Amazon are already distributing content using either Dolby Vision or HDR10 (or both). If your current video encoding workflow cannot fully support or adequately encode content with HDR, it’s time to update. Our V.265 video encoder SDK is a perfect place to start.

VR & 360 Video at Streamable Bitrates

360-degree video made a lot of noise in 2016.  YouTube, Facebook and Twitter added support for 360-degree videos, including live streaming in 360 degrees, to their platforms. 360-degree video content and computer-generated VR content is being delivered to web browsers, mobile devices, and a range of Virtual Reality headsets.  The Oculus Rift, HTC Vive, Gear VR and Daydream View have all shipped this year, creating a new market for immersive content experiences.

But, there is an inherent problem with delivering VR and 360 video on today’s platforms. In order to enable HD video viewing in your “viewport” (the part of the 360-degree space that you actually look at), the resolution of the full 360 video delivered to you should be 4K or more. On the other hand, the devices used to view this content today, including desktops, mobile devices, and VR headsets, only support H.264 video decoding. So delivering the high-resolution video content requires very high bitrates – twice as much as using the more modern HEVC standard.
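
A rough viewport calculation shows why the full panorama needs to be 4K or more; the 90-degree field of view and the equirectangular layout are our simplifying assumptions:

```python
# Rough estimate of how many delivered pixels actually land in the viewport.
pano_w, pano_h = 3840, 2160        # "4K" equirectangular 360 frame
fov_h, fov_v = 90, 90              # assumed headset field of view, in degrees

viewport_w = pano_w * fov_h / 360  # 960 px
viewport_h = pano_h * fov_v / 180  # 1080 px

print(f"Visible region: ~{viewport_w:.0f} x {viewport_h:.0f} pixels")
# Even from a 4K panorama the viewer only sees roughly a 960x1080 window,
# barely HD, which is why the full 360 stream has to be 4K or more.
```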

The current solution to this issue is lowered video quality in order to fit the H.264 video stream into a reasonable bandwidth. This creates an experience for users which is not the best possible, a factor that can discourage them from consuming this newly-available VR and 360 video content.  But there’s one thing we know for sure – next generation compression including HEVC and content adaptive encoding – and perceptual optimization – will be a critical part of the final solution. Read more about VR and 360 here.

Patent Pool HEVC Advance Announces “Royalty Free” HEVC software

As 4K, HDR, VR and 360 video gather steam, Beamr has seen the adoption rate moving faster than expected, but with the unanswered questions around royalties, and concerns over who would shoulder the cost burden, distributors have been tentative. The latest move by HEVC Advance to offer a royalty free option is meant to encourage and accelerate the adoption (implementation) of HEVC by removing royalty uncertainties.

Internet streaming distributors and software application providers can be at ease knowing they can offer applications with HEVC software decoders without incurring onerous royalties or licensing fees. This is important as streaming app content consumption continues to increase, with more and more companies investing in its future.

By initiating a software-only royalty solution, HEVC Advance expects this move to push the rest of the market i.e. device manufacturers and browser providers to implement HEVC capability in their hardware and offer their customers the best and most efficient video experience possible.


>> 2017 Predictions

Mobile Video Services will Drive the Need for Content-adaptive Optimization

Given the trend toward better quality and higher resolution (4K), it’s more important than ever for video content distributors to pursue more efficient methods of encoding their video so they can adapt to the rapidly changing market, and this is where content-adaptive optimization provides a massive benefit.

The boundaries between OTT services and traditional MSO (cable and satellite) are being blurred now that all major MSOs include TVE (TV Everywhere streaming services with both VOD and Linear channels) in their subscription packages (some even break these services out separately as is the case with SlingTV). And in October, AT&T CEO Randall Stephenson vowed that DirecTV Now would disrupt the pay-TV business with revolutionary pricing for an  Internet-streaming service at a mere $35 per month for a package with more than 100 channels.

And get this – AT&T wireless is adopting the practice of “zero rating” for their customers, that is, they will not count the OTT service streaming video usage toward the subscriber’s monthly data plan. This represents a great value for customers, but there is no doubt that it puts pricing pressure on the operational side of all zero rated services.

2017 is the year that consumers will finally be able to enjoy linear as well as VOD content anywhere they wish even outside the home.

Beamr’s Contribution to MSOs, Service Providers, and OTT Distributors is More Critical Than Ever

When reaching consumers across multiple platforms, with different constraints and delivery cost models, Beamr’s content-adaptive optimizer tunes the encoding process to the most efficient quality and bitrate combination.

Whether you pay by the bit delivered to a traditional CDN provider, or operate your own infrastructure, the benefits of delivering less traffic are realized with improved UX such as faster stream start times and reduced re-buffering events, in addition to the cost savings. One popular streaming service reported to us that after implementing our content-adaptive optimization solution their rebuffering events as measured on the player were reduced by up to 50%, while their stream start times improved 20%.

Recently popularized by Netflix and Google, content-adaptive encoding is the idea that not all videos are created equal in terms of their encoding requirements. Content-adaptive optimization complements the encoding process by driving the encoder to the lowest bitrate possible based on the needs of the content, and not a fixed target bitrate (as seen in traditional encoding processes and products).

A content-adaptive solution can optimize more efficiently by analyzing already-encoded video on a frame-by-frame and scene-by-scene level, detecting areas of the video that can be further compressed without losing perceptual quality (e.g. slow motion scenes, smooth surfaces).

Provided the perceptual quality calculation is performed at the frame level with an optimizer that contains a closed loop perceptual quality measure, the output can be guaranteed to be the highest quality at the lowest bitrate possible. Click the following link to learn how Beamr’s patented content adaptive optimization technology achieves exactly this result.

Encoding and Optimization Working Together to Build the Future

Since the content-adaptive optimization process is applied to files that have already been encoded, by combining an industry leading H.264 and HEVC encoder with the best optimization solution (Beamr Video), the market will be sure to benefit by receiving the highest quality video at the lowest possible bitrate and file size. As a result, this will allow content providers to improve the end-user experience with high quality video, while meeting the growing network constraints due to increased mobile consumption and general Internet congestion.

Beamr made a bold step towards delivering on this stated market requirement by disrupting the video encoding space when in April 2016 we acquired Vanguard Video – a premier video encoding and technology company. This move will benefit the industry starting in 2017 when we introduce a new class of video encoder that we call a Content Adaptive Encoder.

As content adaptive encoding techniques are being adopted by major streaming services and video platforms like YouTube and Netflix, the market is gearing up for more advanced rate control and optimization methods, something that fits our perceptual quality measure technology perfectly. This fact when combined with Beamr having the best in class HEVC software encoder in the industry, will yield exciting benefits for the market. Read the Beamr Encoder Superguide that details the most popular methods for performing content adaptive encoding and how you can integrate them into your video workflow.

One Year from Now…

One year from now, when you read our post summarizing 2017 and heralding 2018, what you will likely hear is that 2017 was the year advanced codecs like HEVC, combined with efficient perceptually based quality measures such as Beamr’s, provided an additional 20% or greater bitrate reduction.

The ripple effect of this technology leap will be that services struggling to compete today on quality or bitrate, may fall so far behind that they lose their ability to grow the market. We know of many multi-service operator platforms who are gearing up to increase the quality of their video beyond the current best of class for OTT services. That is correct, they’ve watched the consumer response to new entrants in the market offering superior video quality, and they are not sitting still. In fact, many are planning to leapfrog the competition with their aggressive adoption of content adaptive perceptual quality driven solutions.  

If any one service assumes they have the leadership position based on bitrate or quality, 2017 may prove to be a reshuffling of the deck.

For Beamr, the industry can expect to see an expansion of our software encoder line with the integration of our perceptual quality measure which has been developed over the last 7 years, and is covered by more than 50 patents granted and pending. We are proud of the fact that this solution has been shipping for more than 3 years in our stand-alone video and photo optimizer solutions.

It’s going to be an exciting year for Beamr and the industry and we welcome you to join us. If you are intrigued and would like to learn more about our products or are interested in evaluating any of our solutions, check us out at beamr.com.

Before you evaluate x265, read this!

With video consumption rising and consumer preferences shifting to 4K UHD, adoption is moving even faster than what we saw with the move to HD TV. Consumer demand for a seamless (buffer-free) video experience is the new expectation, and with the latest announcement from HEVC Advance removing royalty uncertainties in the market, it’s time to start thinking about building and deploying an HEVC workflow, starting with a robust HEVC encoder.

As you may know, Beamr’s V.265 was the first commercially deployed HEVC codec SDK and it is in use today by the world’s largest OTT streaming service. Even still, we receive questions regarding V.265 in comparison to x265 and in this post we’d like to address a few of them.

In future posts, we will discuss the differences in two distinct categories, performance (speed) and quality, but in this post we’ll focus on feature-related differences between V.265 and x265.

Beginning with the instruction set, specifically support for x86/x64 SMP architectures, V.265 improves encoding performance by leveraging the resource-efficient symmetric multiprocessing model used by most multiprocessor systems today. With this support, each processor can execute different tasks on discrete data sets while sharing common resources (memory, the I/O and interrupt system, and so on) connected via a system bus or crossbar. The result is a notable increase in overall encoding speed with V.265 over x265. For any application where speed is important, V.265 will generally pull ahead as the winner.

Another area where V.265 shines compared to x265 is its advanced preprocessing support, which provides resizing and de-interlacing. As many of you know, working with interlaced video can lead to poor video quality, so to minimize the various visual defects V.265 uses a variety of techniques, such as line doubling, where smart algorithms detect and fill in an empty row by averaging the line above and the line below. The advantage of having a built-in resizing feature is clear, largely saving time and resources, and out of the box V.265 allows you to easily convert video from one resolution to another (e.g. 4K to HD). One note: we are aware that x265 supports these features via FFmpeg. However, when a user is not able to use FFmpeg, the fact that V.265 supports them directly is a benefit.
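
For readers unfamiliar with line averaging, here is a minimal sketch of the generic technique; it illustrates the idea only and is not V.265’s proprietary de-interlacing algorithm:

```python
import numpy as np

def deinterlace_field(field_lines: np.ndarray) -> np.ndarray:
    """Rebuild a full frame from one field by averaging the lines above and below
    each missing row (generic linear interpolation, shown only for illustration)."""
    h, w = field_lines.shape
    frame = np.zeros((h * 2, w), dtype=np.float64)
    frame[0::2] = field_lines                                    # keep the lines we have
    frame[1:-1:2] = (field_lines[:-1] + field_lines[1:]) / 2.0   # average the neighbours
    frame[-1] = field_lines[-1]                                  # bottom row: duplicate the last line
    return frame
```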

V.265 boasts an unmatched pre-analysis library, with fading detection and complexity analysis capabilities not supported in x265. One application for this library is video segmentation, which is problematic with many encoders because of the different ways two consecutive shots may be linked. In V.265, the fading detection method identifies the type of gradual transition, fade type, and so on, which is needed to detect hard-to-recognize soft cuts. V.265’s complexity analysis is able to discriminate temporal and spatial complexity in video sequences using patented multi-step motion estimation methods that are more advanced than standard “textbook” motion estimation algorithms. The information gained from the complexity analysis is used during the encoding process to improve encoding quality, especially during transitions between scenes.

One of the most significant features V.265 offers compared to x265 is multistreaming (ABR) support. V.265 can produce multiple GOP-aligned video output streams that are extremely important when encoding for adaptive streaming. It is critical that all bitrates have IDRs aligned to enable seamless stream switching, which V.265 provides.

Additionally, with V.265 users can produce multiple GOP-aligned HEVC streams from a single input. This is extremely important for use cases when a user has one chance to synchronize video of different resolutions and bitrates.  Multistreaming helps to provide encoded data to HLS or DASH packagers in an optimal way and it provides performance savings – especially when the service must output multiple streams of the same resolution, but at varying bitrates.
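
As a simple illustration of why GOP alignment matters for adaptive streaming, here is the arithmetic a packager relies on; the 2-second segments, 30 fps, and bitrate ladder are example values, not V.265 defaults:

```python
# IDR frames must land on the same timestamps in every rendition so an ABR
# player can switch streams at any segment boundary without a decode glitch.
segment_seconds = 2.0        # example HLS/DASH segment duration
fps = 30                     # example frame rate shared by all renditions

gop_length = int(segment_seconds * fps)                  # 60 frames between IDRs
idr_positions = list(range(0, 10 * gop_length + 1, gop_length))

renditions_kbps = [4500, 2500, 1200, 600]                # example ABR ladder
for bitrate in renditions_kbps:
    print(f"{bitrate:>5} kbps rendition: IDRs at frames {idr_positions[:5]} ...")
# Because every rendition shares the same GOP length and frame rate, the IDR
# positions line up, which is what GOP-aligned multistream output provides.
```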


Another significant feature V.265 has over x265 is its content-adaptive speed settings, which make codec configuration more convenient for different workflows, such as real-time versus VOD. Currently we offer presets ranging from ultra fast, for extremely low latency live broadcast streams, to the highest quality VOD.

To combat packet losses and produce the most robust stream possible, V.265 supports slicing by compressed slice size, which produces encoded slices of limited size (typically the size of a network packet) for use on an error-prone network. This is an important feature for anyone distributing content on networks with highly variable QoS.

Continuing on to parallel processing features, V.265 offers support for tiles, which divide the frame into a grid of rectangular regions that can be decoded and encoded independently. Enabling this feature increases encoding performance.

V.265 is regarded as one of the most robust codecs in the market because of its ability to suit both demanding real-time and offline file based workflows. To deliver the industry leading quality that makes V.265 so powerful, it offers motion estimation features like patented high performance search algorithms and motion vectors over a picture boundary to provide additional quality improvements over x265.

For encoding by frame type, V.265 offers bi- and uni-directional non-reference P-frames, which are useful where low-delay encoding is needed and help improve temporal scalability.

As for encoding tools, V.265 offers a unique set of tools over x265:

  1. Joint bi-directional Motion Vector Search, an internal motion estimation technique that provides a better bi-directional motion vector search.
  2. Sub-LCU QP modulation, which allows the user to change QP from block to block inside an LCU as a way to control in-frame bits/quality more precisely.
  3. Support for up to 4 temporal layers of multiple resolutions in the same bitstream to help with changing network conditions.
  4. Region of Interest (ROI) control, which allows a specific ROI to be encoded with a particular encoding parameter (QP) to add flexibility and improve encoding quality (illustrated in the sketch below).
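
To illustrate what ROI control means in practice, here is a hypothetical QP-offset map for a single frame; the block size, offsets, and face rectangle are made-up example values rather than V.265 parameters:

```python
import numpy as np

# Hypothetical illustration of a region-of-interest QP map for one 1080p frame.
# Negative offsets spend more bits (higher quality) inside the ROI,
# positive offsets save bits in the background.
block = 64                                           # example LCU/CTU size in pixels
blocks_w, blocks_h = 1920 // block, 1088 // block    # 30 x 17 blocks

qp_offset = np.full((blocks_h, blocks_w), +2, dtype=np.int8)   # background: save bits

# Example ROI: a face detected roughly in the middle of the frame (made-up values).
roi_x, roi_y, roi_w, roi_h = 12, 4, 6, 6             # in block units
qp_offset[roi_y:roi_y + roi_h, roi_x:roi_x + roi_w] = -4       # face: spend bits

print(qp_offset.shape, qp_offset.min(), qp_offset.max())
```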

Another major advantage over x265 is the proprietary rate control implementation offered with V.265. This ensures target bitrates are always maintained.

The more supplemental enhancement information (SEI) messages a codec supports, the more supplemental metadata can be delivered to the decoder in an encoded bitstream. For this reason, Beamr found it necessary to include in V.265 support for Recovery point, Field indication, Decoded Picture Hash, User data unregistered, and User data as specified by ITU-T T.35.

V.265’s ability to change encoding parameters on the fly is another extremely important feature that sets it apart from x265. With the ability to change encoder resolution, bitrate, and other key elements of the encoding profile, video distributors can achieve a significant advantage by creating recipes appropriate to each piece of content without needing to interrupt their workflows or processing cycles to reset and restart an encoder.

We trust this feature comparison was useful. In the event that you require more information or would like to evaluate the V.265, feel free to reach out to us at http://beamr.com/info-request and someone will get in touch to discuss your application and interest.

Patent Pool HEVC Advance Responds: Announces “Royalty Free” HEVC Software

HEVC Advance Releases New Software Policy

November 22nd, 2016 may be remembered as the day that wholesale adoption of HEVC as the preferred next-generation codec began. For companies like Beamr who are innovating on next-generation video encoding technologies such as HEVC, the news that HEVC Advance will drop royalties (license fees) on certain applications of their patents is huge.

In their press release, HEVC Advance, the patent pool for key HEVC technologies stated that they will not seek a license fee or royalties on software applications that utilize the HEVC compression standard for encoding and decoding. This carve out only applies to software which is able to be run on commodity servers, but we think the restriction fits beautifully with where the industry is headed.

Did you catch that? NO HEVC ROYALTIES FOR SOFTWARE ENCODERS AND DECODERS!

Specifically, the policy will protect  “application layer software downloaded to mobile devices or personal computers after the initial sales of the device, where the HEVC encoding or decoding is fully executed in software on a general purpose CPU” from royalty and licensing fees.  

Requirements of Eligible Software

For those trying to wrap their heads around eligibility, the new policy outlines three requirements which the software products performing HEVC decoding or encoding must meet:

  1. Application layer software, or codec libraries used by application layer software, enabling software-only encoding or decoding of HEVC.
  2. Software downloaded after the initial sale of a related product (mobile device or desktop personal computer). In the case of software which otherwise would fit the exclusion but is being shipped with a product, then the manufacturer of the product would need to pay a royalty.
  3. Software must not be specifically excluded.

Examples of exempted software applications where an HEVC decode royalty will likely not be due includes web browsers, personal video conferencing software and video players provided by various internet streaming distributors or software application providers.

For more information check out  https://www.hevcadvance.com/

As stated previously, and driven by the rise of virtual private and public cloud encoding workflows, it appears that for many companies there will be no added cost to utilize HEVC in place of H.264, provided the HEVC encoder meets the eligibility requirements.

A Much Needed Push for HEVC Adoption

As 4k, HDR, VR and 360 video are gathering steam, Beamr has seen the adoption rate moving faster than expected, but with the unanswered questions around royalties, and concerns of the cost burden, even the largest distributors have been tentative. This move by HEVC Advance is meant to encourage and accelerate the adoption (implementation) of HEVC, by removing uncertainties in the market.

Internet streaming distributors and software application providers can be at ease knowing they can offer applications with HEVC software decoders without incurring onerous royalties or licensing fees. This is important as streaming app content consumption continues to increase, with more and more companies investing in its future.

By initiating a software-only royalty solution, HEVC Advance expects this move to push the rest of the market i.e. device manufacturers and browser providers to implement HEVC capability in their hardware and offer their customers the best and most efficient video experience possible.

What this Means for a Video Distributor

Beamr is the leader in H.265/HEVC encoding. With 60 engineers around the world working at the codec level to produce the highest performing HEVC codec SDK in the market, Beamr V.265 delivers exceptional quality with much better scalability than any other software codec.

Industry benchmarks are showing that H.265/HEVC provides on average a 30% bitrate efficiency gain over H.264 for the same quality and resolution. Given the bandwidth pressure all networks are under to upgrade quality while minimizing the bits used, there is only one video encoding technology available at scale to meet the needs of the market, and that is HEVC.

The classic chicken and egg problem no longer exists with HEVC.

The challenge every new technology faces as it is introduced into the market is the classic problem of needing to attract both implementers and users. In the case of a video encoding technology, no matter the benefits, it cannot be deployed without an appropriately scaled playback ecosystem, that is, a sufficiently large number of players in the market.

But the good news is that over the last few years, and as consumers have propelled the TV upgrade cycle forward, many have opted to purchase UHD 4k TVs.

Most of the 2015-2016 models of major brand TVs have built-in HEVC decoders and this trend will continue in 2017 and beyond. Netflix, Amazon, VUDU, and FandangoNow (M-GO) are shipping their players on most models of UHD TVs that are capable of decoding and playing back H.265/HEVC content from these services. These distributors were all able to utilize the native HEVC decoder in the TV, easing the complexity of launching a 4k app.

For those who wonder whether there is a sufficiently large ecosystem of HEVC playback in the market, just look at the 90 million UHD TVs that are in homes today globally (approximately 40 million are in the US). And consider that in 2017 the number of 4K HEVC-capable TVs will nearly double to 167 million according to Cisco, as illustrated below.

[Chart: Cisco VNI Global IP Traffic Forecast, 2015-2020]

The industry has spoken regarding the superior quality and performance of Beamr's own HEVC encoder, and we will be providing benchmarks and documentation in future blog posts. Meanwhile, our team of architects and implementation specialists, who work with the largest service providers, SVOD consumer streaming services, and broadcasters in the world, is ready to discuss your migration plans from H.264 to HEVC.

Just fill out our short Info Request form and the appropriate person will get in touch.

We Need a Revolution of 4K!

Don't panic or stop reading. Yes, we used the word 'revolution' in the title, and admittedly it's provocative less than a week from the US Presidential elections, but we are talking about entertainment and TV, not politics. Cue the massive sigh of relief here…

Our story starts with a recent article published in PC Magazine titled "Meet Two Companies That Want to Revolutionize 4K Video," in which the author Troy Dreier examines the state of 4K and some of the issues surrounding the rate of 4K adoption, specifically a chicken-and-egg problem. As Dreier points out, 4K UHD TVs are being bought in considerable numbers: "over 8 million 4K TVs to date, 1.4 million in the US."

But what about content?

Although 4K is already far more widely deployed in Asia Pacific and Western Europe, in the US cable and satellite customers are seeing limited content choices, with almost no options in broadcast, leaving consumers to turn to online distribution services to satisfy their needs.

But with this comes another problem facing streaming providers: the commodity of the internet, bits.

Though the internet is getting much faster and infrastructure is improving, overall average speeds are still just 15.3 Mbps per household, making it difficult to deliver 4K UHD video sustainably, or at least with the quality promise that the TV vendors are making. This ultimately puts the pressure on network operators and over-the-top content suppliers to do everything they can to lower the number of bits they transport without damaging the picture quality of the video.

To this point, Dreier suggests that video optimization solutions are needed to "condense 4K video." He goes on to point out two solutions tackling this problem, one of which is Beamr's content adaptive optimization solution, Beamr Video.

At the heart of our video encoding and processing technology solutions is the Beamr content adaptive quality measure that is backed up by more than 20 granted patents with another 30 still pending.  

The Beamr Video optimization technology is based on a proprietary, low-complexity, reliable perceptual quality measure; put simply, we have the most advanced content adaptive quality measure commercially available. This measure makes it possible to control a video encoder so that the output clip is maximally compressed while the input video's resolution, format, and visual quality are maintained. This is done by controlling the compression level frame by frame, squeezing the maximum number of bits out of the file while still producing a perceptually identical visual output.

An important characteristic of our quality measure is that it operates as a full-reference against the source, which ensures that artifacts are never introduced as a result of the bitrate reduction process. Many "alternative" solutions struggle with inconsistent quality because they operate in an open loop, which means at times quality may be degraded while at other times they leave "bits on the table."
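
For readers who think in code, here is a minimal sketch of the closed-loop, per-frame concept described above. The encoder and quality-measure functions are hypothetical stand-ins, not Beamr's actual API, and the threshold is purely illustrative:

```python
# Minimal sketch of closed-loop, per-frame optimization.
# encode_fn and quality_fn stand in for an encoder and a full-reference
# perceptual quality measure; they are hypothetical placeholders.

QUALITY_THRESHOLD = 0.95   # illustrative "perceptually identical" cutoff

def optimize_frame(source_frame, encode_fn, quality_fn, qp_start=20, qp_max=40):
    """Raise the compression level (QP) step by step and keep the last
    candidate that the full-reference quality measure still accepts."""
    best = encode_fn(source_frame, qp_start)
    for qp in range(qp_start + 1, qp_max + 1):
        candidate = encode_fn(source_frame, qp)
        # Full-reference check: the candidate is always compared against
        # the original source frame, so degradation cannot go unnoticed.
        if quality_fn(source_frame, candidate) < QUALITY_THRESHOLD:
            break
        best = candidate
    return best
```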

With so much at stake for next generation entertainment formats, it is critical that every new encoding and video processing technology be evaluated for quality and usability. This is why we are proud of our customers, which include major Hollywood studios, premium OTT content distributors, MSOs, and large video platforms.

Beamr Video in the real world with 720p VBR input, reduced 21%:

[Image: Beamr Video live example]

For more information on the why and how behind content adaptive solutions, download the free Beamr Content Adaptive Tech Guide.

Immersive VR and 360 video at streamable bitrates: Are you crazy?

There have been many high-profile experiments with VR and 360 video in the past year. Immersive video is compelling, but large and unwieldy to deliver. This area will require huge advancements in video processing – including shortcuts and tricks that border on ‘magical’.

Most of us have experienced breathtaking demonstrations that provide a window into the powerful capacity of VR and 360 video – and into the future of premium immersive video experiences.

However, if you search the web for an understanding of how much bandwidth is required to create these video environments, you’re likely to get lost in a tangled thicket of theories and calculations.

Can the industry support the bitrates these formats require?

One such post on Forbes in February 2016 says No.

It provides a detailed mathematical account of why fully immersive VR will require each eye to receive 720 million pixels at 36 bits per pixel and 60 frames per second, or a total of 3.1 trillion bits per second. (1)

We’ve taken a poll at Beamr, and no one in the office has access to those kinds of download speeds. And some of these folks pay the equivalent of a part-time salary to their ISP!

Thankfully the Forbes article goes on to explain that it’s not quite that bad.

Existing video compression standards can improve this number by a factor of 300, according to the author, and HEVC can compress it by a factor of 600, down to roughly 5.2 Gbps.
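
The arithmetic behind those headline figures is easy to reproduce; the snippet below uses only the numbers quoted from the Forbes piece:

```python
# Reproducing the Forbes figures quoted above.
pixels_per_eye = 720e6      # pixels per eye
bits_per_pixel = 36
fps = 60
eyes = 2

raw_bps = pixels_per_eye * bits_per_pixel * fps * eyes
print(f"Uncompressed: {raw_bps / 1e12:.1f} Tbps")                  # ~3.1 Tbps

hevc_bps = raw_bps / 600                                            # 600:1 with HEVC, per the article
print(f"After HEVC-class compression: {hevc_bps / 1e9:.1f} Gbps")   # ~5.2 Gbps
```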

The truth is, the calculations put forth in the Forbes piece are very ambitious indeed. As the author states:

“The ultimate display would need a region of 720 million pixels for full coverage because even though your foveal vision has a more narrow field of view, your eyes can saccade across that full space within an instant. Now add head and body rotation for 360 horizontal and 180 vertical degrees for a total of more than 2.5 billion (giga) pixels.”

A more realistic view of the way VR will roll out was presented by Charles Cheevers of network equipment vendor ARRIS at INTX in May of this year. (2)

Great VR experiences, including a full 360-degree stereoscopic video environment at 4K resolution, could easily require a streaming bandwidth of 500 Mbps or more.

That’s still way too high, so what’s a VR producer to do?

Magical illusion, of course. 

In fact, just like your average Vegas magician, the current state of the art in VR delivery relies on tricks and shortcuts that leverage the imperfect way we humans see.

For example, Foveated Rendering can be used to aggressively compress the areas of a VR video where your eyes are not focused.

This technique alone, and variations on this theme, can take the bandwidth required by companies like NextVR dramatically lower, with some reports that an 8 Mbps stream can provide a compelling immersive experience. The fact is, there are endless ways to configure the end-to-end workflow for VR, and much will depend on the hardware, software, and networking environments in which it is deployed.
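
As a toy illustration of the foveated idea, the sketch below builds a per-block QP-offset map from the distance to the gaze point. It assumes a hypothetical encoder that accepts such a map; a real system would of course be driven by live eye tracking:

```python
import math

def foveated_qp_offsets(blocks_x, blocks_y, gaze_x, gaze_y, max_offset=12):
    """Return a per-block QP-offset map: 0 near the gaze point (full quality),
    rising toward max_offset (heavier compression) in the periphery.
    Purely illustrative of the foveated principle, not a real pipeline."""
    max_dist = math.hypot(blocks_x, blocks_y)
    offsets = []
    for by in range(blocks_y):
        row = []
        for bx in range(blocks_x):
            dist = math.hypot(bx - gaze_x, by - gaze_y)
            row.append(round(max_offset * dist / max_dist))
        offsets.append(row)
    return offsets

# Example: a 16x8 grid of blocks, viewer looking slightly left of center.
qp_map = foveated_qp_offsets(16, 8, gaze_x=6, gaze_y=4)
```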

Compression innovations are also being tried, some utilizing perceptual frame-by-frame rate control methodologies and others mapping spherical images onto cubes and pyramids, transposing the image into 5 or 6 viewing planes so that the highest resolution is always on the plane where the eyes are most intensely focused. (3)
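
As a rough sketch of the cube-mapping concept, the snippet below assigns a viewing direction to one of six cube faces using standard cubemap face selection (not any particular vendor's implementation); the face the viewer is looking at can then be carried at the highest resolution while the remaining faces are downscaled:

```python
def cube_face(x, y, z):
    """Map a 3D viewing direction to one of the six cube faces.
    The face the viewer is looking at can be streamed at full resolution,
    while the other five faces are delivered at reduced resolution."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+x" if x > 0 else "-x"
    if ay >= ax and ay >= az:
        return "+y" if y > 0 else "-y"
    return "+z" if z > 0 else "-z"

# Example: viewer looking mostly forward and slightly up.
print(cube_face(0.1, 0.3, 0.9))   # "+z"
```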

At the end of the day, it's going to be hard to pin down your nearest VR dealer on the amount of bandwidth that's required for a compelling VR experience. But there's one thing we know for sure: next generation compression, including HEVC, content adaptive encoding, and perceptual optimization, will be a critical part of the final solution.

References:

(1) Found on August 10, 2016 at the following URL: http://www.forbes.com/sites/valleyvoices/2016/02/09/why-the-internet-pipes-will-burst-if-virtual-reality-takes-off/#ca7563d64e8c

(2) Start at 56 minutes: https://www.intxshow.com/session/1041/ Information and a chart are also available online here: http://www.onlinereporter.com/2016/06/17/arris-gives-us-hint-bandwidth-requirements-vr/

(3) Facebook’s developer site gives a fascinating look at these approaches, which they call dynamic streaming techniques. Found on August 10, 2016 at the following URL:  https://code.facebook.com/posts/1126354007399553/next-generation-video-encoding-techniques-for-360-video-and-vr/

Can we profitably surf the Video Zettabyte Tsunami?

Two key ingredients are in place. But we need to get started now.

In a previous post, we warned about the Zettabyte video tsunami – and the accompanying flood of challenges and opportunities for video publishers of all stripes, old and new. 

Real-life tsunamis are devastating. But California’s all about big wave surfing, so we’ve been asking this question: Can we surf this tsunami?

The ability to do so is going to hinge on economics. So a better phrasing is perhaps: Can we profitably surf this video tsunami?

Two surprising facts came to light recently that point to an optimistic answer, and so we felt it was essential to highlight them.

1. The first fact is about the Upfronts – and it provides evidence that 4K UHD content can drive growth in top-line sales for media companies.

The results from the Upfronts (the annual marketplace where networks sell ad inventory to premium brand marketers) provided TV industry watchers a major upside surprise. This year, the networks sold a greater share of ad inventory at their upfront events, and at higher prices too. As Brian Steinberg put it in his July 27, 2016 Variety article (1):

“The nation’s five big English-language broadcast networks secured between $8.41 billion and $9.25 billion in advance ad commitments for primetime as part of the annual “upfront” market, according to Variety estimates. It’s the first time in three years they’ve managed to break the $9 billion mark. The upfront finish is a clear signal that Madison Avenue is putting more faith in TV even as digital-video options abound.”

Our conclusion? Beautiful, immersive content environments with a more limited number of high-quality ads can fuel new growth in TV. And 4K UHD, including the stunning impact of HDR, is where some of this additional value will surely come from.

Conventional wisdom is that today's consumers are increasingly embracing ad-free SVOD OTT content from premium catalogs like Netflix, even when they have to pay for it. Since those services are also taking the lead on 4K UHD content programming, that's a great sign that higher-value 4K UHD content will drive strong economics. But the data from the Upfronts also seems to suggest that premium ad-based TV content can be successful as well, especially when the networks create immersive, clutter-free environments with beautiful pictures.

Indeed, if the Olympics are any measure, Madison Avenue has received the message and turned up its game on the creative. I saw more than a few head-turning 30-second spots. Have you seen the Chobani ads in pristine HD? They're as powerful as it gets. (2)

Check out the link in reference (2) below to see the ads.

2. The second fact is about the operational side of the equation.

Can we deliver great content at a reasonable cost to a large enough number of homes?  On that front, we have more good news. 

The Internet in the United States is getting much faster. This, along with advanced methods of compression including HEVC, Content Adaptive Encoding, and Perceptual Quality Metrics, will result in a 'virtual upgrade' of existing delivery network infrastructure. In particular, data published by Ookla's Speedtest.net on August 3, 2016 contained several stunning nuggets of information. But before we reveal the data, we need to provide a bit of context.

It's important to note that 4K UHD content requires bandwidth of 15 Mbps or greater. Let's be clear, this assumes Content Adaptive Encoding, Perceptual Quality Metrics, and HEVC compression are all used in combination. However, according to Akamai's State of the Internet report released in Q1 of this year, only 35% of the US population could access broadband speeds of 15 Mbps or higher.

(Note: We have seen suggestions that 4K UHD content requires up to 25 Mbps. Compression technologies improve over time and those data points may well be old news. Beamr is on the cutting edge of compression and we firmly believe that 10 – 15 Mbps is the bandwidth needed – today – to achieve stunning 4K UHD audio visual quality.)

And that’s what makes Ookla’s data so important. Ookla found that in the first 6 months of 2016, fixed broadband customers saw a 42% year-over-year increase in average download speeds to a whopping 54.97 Mbps. Even more importantly, while 10% of Americans lack basic access to FCC target speeds of 25 Mbps, only 4% of urban Americans lack access to those speeds. This speed boost seems to be a direct result of industry consolidation, network upgrades, and growth in fiber optic deployments.

After seeing this news, we also decided to take a closer look at that Akamai data. And guess what we found? A steep slope upward from prior quarters (see chart below).

To put it back into surfing terms: Surf’s Up!
[Chart: Time-based trends in internet connection speeds and adoption rates]

References:

(1) “How TV Tuned in More Upfront Ad Dollars: Soap, Toothpaste and Pushy Tactics” Brian Steinberg, July 27, 2016: http://variety.com/2016/tv/news/2016-tv-upftont-networks-advertising-increases-1201824887/ 

(2)  Chobani ad examples from their YouTube profile: https://www.youtube.com/watch?v=DD5CUPtFqxE&list=PLqmZKErBXL-Nk4IxQmpgpL2z27cFzHoHu