Patent Pool HEVC Advance Responds: Announces “Royalty Free” HEVC Software

HEVC Advance Releases New Software Policy

November 22nd, 2016 may be remembered by history as the day that wholesale adoption of HEVC as the preferred next generation codec began. For companies like Beamr that are innovating on next-generation video encoding technologies such as HEVC, the news that HEVC Advance will drop royalties (license fees) on certain applications of their patents is huge.

In their press release, HEVC Advance, the patent pool for key HEVC technologies, stated that they will not seek a license fee or royalties on software applications that utilize the HEVC compression standard for encoding and decoding. This carve-out only applies to software that can run on commodity servers, but we think the restriction fits beautifully with where the industry is headed.

Did you catch that? NO HEVC ROYALTIES FOR SOFTWARE ENCODERS AND DECODERS!

Specifically, the policy will protect “application layer software downloaded to mobile devices or personal computers after the initial sales of the device, where the HEVC encoding or decoding is fully executed in software on a general purpose CPU” from royalty and licensing fees.

Requirements of Eligible Software

For those trying to wrap their heads around eligibility, the new policy outlines three requirements that software products performing HEVC decoding or encoding must meet:

  1. Application layer software, or codec libraries used by application layer software, enabling software-only encoding or decoding of HEVC.
  2. Software downloaded after the initial sale of a related product (mobile device or desktop personal computer). If software that would otherwise fit the exclusion is instead shipped with a product, the manufacturer of the product would need to pay a royalty.
  3. Software must not be specifically excluded.

Examples of exempted software applications where an HEVC decode royalty will likely not be due include web browsers, personal video conferencing software, and video players provided by various internet streaming distributors or software application providers.

For more information check out https://www.hevcadvance.com/

As stated previously, with the rise of virtual private and public cloud encoding workflows, it appears that for many companies there will be no added cost to utilize HEVC in place of H.264, provided their HEVC encoder meets the eligibility requirements.

A Much Needed Push for HEVC Adoption

As 4K, HDR, VR and 360 video gather steam, Beamr has seen the adoption rate moving faster than expected, but with unanswered questions around royalties and concerns over the cost burden, even the largest distributors have been tentative. This move by HEVC Advance is meant to encourage and accelerate the adoption (implementation) of HEVC by removing uncertainties in the market.

Internet streaming distributors and software application providers can be at ease knowing they can offer applications with HEVC software decoders without incurring onerous royalties or licensing fees. This is important as streaming app content consumption continues to increase, with more and more companies investing in its future.

By initiating a software-only royalty solution, HEVC Advance expects this move to push the rest of the market, i.e. device manufacturers and browser providers, to implement HEVC capability in their hardware and offer their customers the best and most efficient video experience possible.

What this Means for a Video Distributor

Beamr is the leader in H.265/HEVC encoding. With 60 engineers around the world working at the codec level to produce the highest performing HEVC codec SDK in the market, Beamr V.265 delivers exceptional quality with much better scalability than any other software codec.

Industry benchmarks show that H.265/HEVC provides on average a 30% bitrate efficiency gain over H.264 at the same quality and resolution. Given the bandwidth pressure all networks are under to upgrade quality while minimizing the bits used, there is only one video encoding technology available at scale to meet the needs of the market, and that is HEVC.
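To make that 30% figure concrete, here is a back-of-the-envelope sketch. The 8 Mbps H.264 bitrate and the monthly streaming volume are hypothetical numbers chosen purely for illustration, not benchmark data:

```python
# Hypothetical illustration of HEVC's ~30% bitrate savings over H.264.
# The input bitrate and streaming volume below are made-up example numbers.
h264_bitrate_mbps = 8.0        # hypothetical 1080p H.264 stream
hevc_bitrate_mbps = h264_bitrate_mbps * (1 - 0.30)   # ~30% fewer bits

def gb_per_hour(mbps):
    """Convert a stream bitrate in Mbps to delivered GB per hour."""
    return mbps * 3600 / 8 / 1000

hours_streamed = 1_000_000     # hypothetical hours delivered per month
saved_tb = (gb_per_hour(h264_bitrate_mbps)
            - gb_per_hour(hevc_bitrate_mbps)) * hours_streamed / 1000

print(f"HEVC bitrate: {hevc_bitrate_mbps:.1f} Mbps")       # 5.6 Mbps
print(f"Bandwidth saved per month: {saved_tb:,.0f} TB")    # ~1,080 TB
```

At any meaningful scale, a 30% reduction in delivered bits translates directly into CDN and transit cost savings.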

The classic chicken and egg problem no longer exists with HEVC.

The challenge every new technology faces as it is introduced into the market is the classic problem of needing to attract both implementers and users. In the case of a video encoding technology, no matter the benefits, it cannot be deployed without a sufficiently large playback ecosystem already in the market.

But the good news is that over the last few years, as consumers have propelled the TV upgrade cycle forward, many have opted to purchase UHD 4K TVs.

Most of the 2015-2016 models of major brand TVs have built-in HEVC decoders and this trend will continue in 2017 and beyond. Netflix, Amazon, VUDU, and FandangoNow (M-GO) are shipping their players on most models of UHD TVs that are capable of decoding and playing back H.265/HEVC content from these services. These distributors were all able to utilize the native HEVC decoder in the TV, easing the complexity of launching a 4k app.

For those who wonder if there is a sufficiently large ecosystem of HEVC playback in the market, just look at the 90 million TVs that are in homes today globally (approximately 40 million are in the US). And consider that in 2017 the number of 4K HEVC capable TVs will nearly double to 167 million according to Cisco, as illustrated below.

[Chart: Cisco VNI Global IP Traffic Forecast, 2015-2020]

The industry has spoken regarding the superior quality and performance of Beamr’s own HEVC encoder, and we will be providing benchmarks and documentation in future blog posts. Meanwhile our team of architects and implementation specialists who work with the largest service providers, SVOD consumer streaming services, and broadcasters in the world are ready to discuss your migration plans from H.264 to HEVC.

Just fill out our short Info Request form and the appropriate person will get in touch.

Shows Without Safety Nets: The Lasting Appeal of Live TV

Live video streaming is certainly popular these days, but it’s not a new concept. Instead, it hearkens back to a beloved form of 20th-century entertainment: live scripted television. In fact, this type of non-news, non-sports programming endures to this day.

Live TV Enthralls a Nation

During the 1950s, comedies and dramas on TV were often live. Variety programs like “Your Show of Shows” were full of energetic comedy sketches. Anthology shows were popular as well. “Playhouse 90,” for one, staged different dramatic productions each week. When you tuned in, it was like watching a 90-minute play. For example, “Days of Wine and Roses,” which concerns a couple battling alcoholism, was a gripping 1958 TV movie before it became an acclaimed 1962 film.

However, in 1951, CBS decided to film “I Love Lucy” in front of a studio audience. By the 1960s, live primetime TV had become scarce partly due to this show’s success.

Live Sitcoms? Not.

Let’s give credit to “Roc,” a Fox sitcom about a sanitation worker. The show averaged around 9 million viewers during its first season (1991-92). But when it aired a live show in February 1992, the episode attracted approximately 11 million. The producers then decided to do the whole second season live. It didn’t go as well the second time around.

In March 2015, NBC’s “Undateable” revived this approach. The comedy, which revolves around some slovenly singles, televised a season of live episodes. Unfortunately, like “Roc,” it’s far from a hit.

Live Episodes and Musicals

In 1997, the season premiere of the NBC medical drama “ER,” entitled “Ambush,” was broadcast live. Cast member George Clooney, a fan of 1950s TV, had urged the producers to approve a live episode. The actors had to perform it twice, the second time for the West Coast. With 42.7 million viewers, it was a massive ratings success. And it prompted several other shows to telecast their own live episodes, including NBC’s “30 Rock” and “The West Wing.”

Additionally, in recent years, television has been lighting up social media with live versions of musicals. This trend started with 2013’s “The Sound of Music Live!” on NBC, which starred country singer Carrie Underwood. Filmed at New York’s Grumman Studios, it was the first live Broadway musical on TV in more than 50 years. Nearly 38.7 million viewers caught at least some of this three-hour show, and it averaged about 18.6 million viewers at any one moment.

Video Streaming: Live TV for the 21st Century?

Live TV has lasted because it gives people the feeling that they’re having a unique experience. After all, no one knows what might happen during such a performance. Actors could forget their lines, or an earthquake could hit the studio.

Live video streaming offers that same anything-can-happen thrill. Plus, marketing professionals value it as it lets them interact with consumers and get a sense of their opinions.

Best of all, these videos draw people closer. Friends and family members can sit with laptops, tablets and smartphones and watch them at the same time. Whether they’re in the same room or viewing remotely, a special camaraderie arises. It’s the emotional connection that forms when you know that others are feeling what you’re feeling. Yes, there’s real joy in laughing, crying and gasping as a group.

Moreover, whenever you’re watching a video of a one-time-only event ― for instance, a Periscope video of a birth or graduation ― the shared viewing becomes even more powerful.

It’s the kind of togetherness that many people must’ve known as they gathered in living rooms to watch “Your Show of Shows” and “Playhouse 90” way back when.

Which has us thinking: will services like Facebook Live become the new reality TV format? With Facebook now serving 8 billion views a day, a 100% increase over just six months earlier according to this TechCrunch article, there is no doubt that the shared, social experience of live video is here to stay.

But what this means technically is what motivates Beamr's team of 60 video codec engineers and image scientists to never stop innovating. As consumer expectations rise for better video quality and improved streaming stability, the need for high quality video encoding that makes the best use of as few bits as possible has never been greater.

The State of Commercially Available 4K UHD Services

In a recent article we did a little investigative research on the state of 4K and found four significant trends:

  1. The upgrade in picture quality is significant and will drive an increase in value to the consumer – and additional revenues for services.
  2. Competitive forces are operating at scale – Service Providers and OTT distributors will drive the adoption of 4K.
  3. SNL Kagan forecasts the number of global UHD Linear channels at 95 by the end of 2016 – and 237 globally by 2020.
  4. Geography. 4K is already far more widely deployed in Asia Pacific and Western Europe than in the U.S.

In this article we want to further highlight the state of commercially available 4K UHD services. The UltraHD Forum published a list of UHD services that are “live” and it’s worth checking out.

To break it down, there are 18 VOD and 37 Live services with 8 in the US and 47 outside the US.

The 4K adoption rate isn’t moving as slowly as one might think, so don’t make the mistake of misreading its speed. It’s time to start building your 4K workflows now as the competitive pressure is fast approaching.

Note: The following UHD service chart is courtesy UltraHD Forum.

Operator Country Service Topology Delivery Model Notes
AcTVila Japan VoD OTT Unicast ABR
airtel 4K India Live IPTV broadcast
Amazon US VoD OTT Unicast ABR
Bein Middle East Live DTH Broadcast
BT UK Live IPTV broadcast
Comcast US Push VoD Cable DOCSIS 3.x NBC used HDR10 & Atmos for Rio Olympics
Dalian Tiantu China TS Playout Cable unverified
DirecTV US VoD DTH Push VoD
Dish UHD promo Live IPTV broadcast
Fashion one (SES) Luxembourg Live DTH broadcast
Festival4K France Live IPTV broadcast
Fransat France Live DTH broadcast
Fransat France TS Playout DTH broadcast
Free France Live IPTV Multicast Android middleware, 1 channel at launch: Fashion TV loop
Globo TV Brazil VoD OTT Unicast ABR
High 4K TV Live IPTV broadcast
insight Live IPTV broadcast
Inspur China Live Cable unverified
J:COM Japan Live Cable Broadcast
KPN Netherlands Live IPTV Multicast
KT Korea Live IPTV Multicast
LG Uplus Korea VoD / Live ? IPTV Multicast
M-Go US VoD OTT Unicast ABR
Nasa TV US/Europe Live IPTV broadcast
Netflix US VoD OTT Unicast ABR
NOS Portugal Live Cable Broadcast, Multicast, Unicast ABR OTT trials have occurred
NTT Plala Japan Live / VoD IPTV Multicast
Orange France France Live IPTV Multicast Dolby Atmos available on some broadcasts
pearl tv Luxembourg Live DTH broadcast
SFR France Live IPTV Multicast UHD used to promote Fiber
SKBB Korea Live IPTV Multicast
Sky Deutschland Germany Live / Push-VoD DTH / Cable broadcast Launched October 5th 2016, 2 Live channels + Push VoD
Sky Italia Italy Live DTH broadcast “Super HD” launched June 2016, HDR Announced for 2017
Sky UK UK Live DTH broadcast Available to premium Sky Q customers
SkyLife Korea Live DTH broadcast
SkyPerfecTV Japan Live DTH / Opticast broadcast HDR announced for October 2016
Slovak Telecom Slovakia VoD OTT Unicast ABR
Sony US VoD OTT Unicast ABR
Sth Korea’s Pandora Korea VoD OTT Unicast ABR
Stofa Denmark Live cable Multicast Viasat Ultra HD
Swisscom Switzerland Live & VoD IPTV Multicast Testing HDR
Tata Sky India Live DTH broadcast cricket world cup’15
Telekom Malaysia Malaysia Live IPTV Multicast Demonstration/Trials – Launch soon
Telus Canada VoD OTT Unicast ABR Starts with VoD – Live coming soon
Tivusat Italy Live DTH Broadcast
Tricolor Russia TS Playout DTH broadcast
Turkcell Turkey Live IPTV Multicast
UHD-1 Live IPTV broadcast
UMAX Korea TS Playout Cable broadcast
Videocon India Live DTH broadcast cricket world cup’15
Vidity US VoD OTT Unicast ABR
Vodafone Portugal Portugal Live IPTV Multicast
Vodafone Spain Spain Live / VoD IPTV Multicast, Unicast
VUDU US VoD OTT Unicast ABR Dolby Vision and Atmos support announced
waiku tv France VoD OTT Unicast ABR

We Need a Revolution of 4K!

Don’t panic or stop reading: we used the word ‘revolution’ in the title, and though admittedly that’s provocative less than a week from the US Presidential elections, we are talking about entertainment and TV, not politics. Cue the massive sigh of relief here…

Our story starts with a recent article published in PC Magazine titled “Meet Two Companies That Want to Revolutionize 4K Video”, where the author Troy Dreier examines the state of 4K and some of the issues surrounding the rate of 4K adoption, specifically a chicken-and-egg problem. As Dreier points out, 4K UHD TVs are being bought in considerable numbers “over 8 million 4K TVs to date, 1.4 million in the US.”

But what about content?

Although 4K is already far more widely deployed in Asia Pacific and Western Europe, in the US cable and satellite customers are seeing limited content choices, with almost no options in broadcast, leaving consumers to turn to online distribution services to satisfy their needs.

But with this comes another problem facing streaming providers, the commodity of the internet: bits.

Though the internet is getting much faster and infrastructure is improving, overall average speeds are still just 15.3 Mbps per household, making it difficult to deliver 4K UHD video sustainably – or at least with the quality promise that the TV vendors are making. This ultimately puts the pressure on network operators and over-the-top content suppliers to do everything they can to lower the number of bits they transport without damaging the picture quality of the video.

To this point, Dreier suggests that video optimization solutions are needed to “condense 4K video.” Dreier goes on to point out two solutions that are solving this problem, and one of them he highlights is Beamr’s content adaptive optimization solution, Beamr Video.

At the heart of our video encoding and processing technology solutions is the Beamr content adaptive quality measure, which is backed by more than 20 granted patents with another 30 still pending.

The Beamr Video optimization technology is based on a proprietary, low complexity, reliable, perceptual quality measure. Put simply, it is the most advanced content adaptive quality measure commercially available. This measure makes it possible to control a video encoder so that the output clip achieves maximal compression of the input video while maintaining the input resolution, format and visual quality. This is done by controlling the compression level frame by frame, so that the maximum number of bits is squeezed out of the file while still producing a perceptually identical visual output.

An important characteristic of our quality measure is that it operates as a full-reference measure against the source, which ensures that artifacts are never introduced as a result of the bitrate reduction process. Many “alternative” solutions struggle with inconsistent quality because they operate in an open loop, which means at times quality may be degraded while at other times they leave “bits on the table.”
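As a rough sketch of what closed-loop, full-reference operation means in practice: for each frame, candidate encodes are compared back to the source, and compression is pushed only as far as a perceptual quality threshold allows. This is illustrative pseudologic only; `encode_frame` and `perceptual_quality` are toy stand-ins for a real codec and a real quality measure, not Beamr's actual implementation:

```python
# Illustrative sketch of closed-loop content adaptive optimization.
# encode_frame() and perceptual_quality() are hypothetical stand-ins for a
# real codec and a real full-reference quality measure; this is NOT Beamr's
# actual algorithm.

def encode_frame(frame, qp):
    """Toy encoder: higher qp -> stronger compression, smaller output.
    Returns (compressed_size, decoded_frame)."""
    size = max(1, 100 - 3 * qp)
    return size, frame  # decode returned so it can be compared to the source

def perceptual_quality(source, decoded, qp):
    """Toy full-reference score in [0, 1] comparing decode to source."""
    return max(0.0, 1.0 - 0.02 * qp)

def optimize(frames, threshold=0.8, qp_range=range(0, 40)):
    """Per frame, pick the strongest compression whose full-reference
    quality score stays at or above the perceptual threshold."""
    results = []
    for frame in frames:
        best = None
        for qp in qp_range:  # a real system would search, not scan linearly
            size, decoded = encode_frame(frame, qp)
            if perceptual_quality(frame, decoded, qp) >= threshold:
                best = (qp, size)   # still perceptually identical: keep going
            else:
                break               # closed loop: quality slipped, stop here
        results.append(best)
    return results
```

Because every candidate encode is checked against the source before it is accepted, quality can never silently degrade – which is the essential difference from open-loop approaches.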

With so much at stake for next generation entertainment formats, it is critical that every new encoding and video processing technology be evaluated for quality and useability. This is why we are proud of the customers we have which include major Hollywood studios, premium OTT content distributors, MSOs and large video platforms.

Beamr Video in the real world with 720p VBR input, reduced 21%:

[Image: Beamr Video live example]

For more information on the why and how behind content adaptive solutions, download the free Beamr Content Adaptive Tech Guide.

Immersive VR and 360 video at streamable bitrates: Are you crazy?

There have been many high-profile experiments with VR and 360 video in the past year. Immersive video is compelling, but large and unwieldy to deliver. This area will require huge advancements in video processing – including shortcuts and tricks that border on ‘magical’.

Most of us have experienced breathtaking demonstrations that provide a window into the powerful capacity of VR and 360 video – and into the future of premium immersive video experiences.

However, if you search the web for an understanding of how much bandwidth is required to create these video environments, you’re likely to get lost in a tangled thicket of theories and calculations.

Can the industry support the bitrates these formats require?

One such post on Forbes in February 2016 says No.

It provides a detailed mathematical account of why fully immersive VR will require each eye to receive 720 million pixels at 36 bits per pixel and 60 frames per second – or a total of 3.1 trillion bits per second. (1)

We’ve taken a poll at Beamr, and no one in the office has access to those kinds of download speeds. And some of these folks pay the equivalent of a part-time salary to their ISP!

Thankfully the Forbes article goes on to explain that it’s not quite that bad.

According to the author, existing video compression standards will be able to improve this number by a factor of 300, and HEVC by a factor of 600 – down to what might be 5.2 Gbps.
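The arithmetic behind those figures can be checked in a few lines, using the constants taken directly from the Forbes piece:

```python
# Reproducing the Forbes back-of-the-envelope VR bandwidth arithmetic.
pixels_per_eye = 720e6       # "ultimate display" coverage per eye
bits_per_pixel = 36
frames_per_second = 60
eyes = 2

raw_bps = pixels_per_eye * bits_per_pixel * frames_per_second * eyes
print(f"Uncompressed: {raw_bps / 1e12:.1f} Tbps")   # ~3.1 trillion bits/s

hevc_factor = 600            # compression factor the article credits to HEVC
print(f"With HEVC:    {raw_bps / hevc_factor / 1e9:.1f} Gbps")  # ~5.2 Gbps
```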

The truth is, the calculations put forth in the Forbes piece are very ambitious indeed. As the author states:

“The ultimate display would need a region of 720 million pixels for full coverage because even though your foveal vision has a more narrow field of view, your eyes can saccade across that full space within an instant. Now add head and body rotation for 360 horizontal and 180 vertical degrees for a total of more than 2.5 billion (giga) pixels.”

A more realistic view of the way VR will rollout was presented by Charles Cheevers of network equipment vendor ARRIS at INTX in May of this year. (2)

Great VR experiences including a full 360 degree stereoscopic video environment at 4K resolutions could easily require a streaming bandwidth of 500 Mbps or more.

That’s still way too high, so what’s a VR producer to do?

Magical illusion, of course. 

In fact, just like your average Vegas magician, the current state of the art in VR delivery relies on tricks and shortcuts that leverage the imperfect way we humans see.

For example, Foveated Rendering can be used to aggressively compress the areas of a VR video where your eyes are not focused.

This technique alone, along with variations on the theme, can bring the bandwidth required by companies like NextVR dramatically lower, with some reports that an 8 Mbps stream can provide a compelling immersive experience. The fact is, there are endless ways to configure the end-to-end workflow for VR, and much will depend on the hardware, software and networking environments in which it is deployed.

Compression innovations are also being tried that use perceptual frame-by-frame rate control methodologies, and some that map spherical images onto cubes and pyramids – transposing the image into 5 or 6 viewing planes so that the highest resolution is always on the plane where the eyes are most intensely focused. (3)

At the end of the day, it’s going to be hard to pin down your nearest VR dealer on the amount of bandwidth that’s required for a compelling VR experience. But there’s one thing we know for sure – next generation compression including HEVC and content adaptive encoding – and perceptual optimization – will be a critical part of the final solution.

References:

(1) Found on August 10, 2016 at the following URL: http://www.forbes.com/sites/valleyvoices/2016/02/09/why-the-internet-pipes-will-burst-if-virtual-reality-takes-off/#ca7563d64e8c

(2) Start at 56 minutes. https://www.intxshow.com/session/1041/  — Information and a chart is also available online here: http://www.onlinereporter.com/2016/06/17/arris-gives-us-hint-bandwidth-requirements-vr/ 

(3) Facebook’s developer site gives a fascinating look at these approaches, which they call dynamic streaming techniques. Found on August 10, 2016 at the following URL:  https://code.facebook.com/posts/1126354007399553/next-generation-video-encoding-techniques-for-360-video-and-vr/

4 Facts about 4K

We recently did a little investigative research on the state of 4K and here are four highlights of what we found.

To start, as an industry, we’ve been anticipating 4K for a few years now, but it was just this past April that DIRECTV launched the first-ever Live 4K broadcast from the Masters Golf Tournament. Read more here:

http://ktla.com/2016/03/30/get-ready-for-4k-programming-with-directv/

In May Comcast EVP Matt Strauss spoke with Multichannel News about the company’s plans to begin distributing a 4K HDR capable Xi6 set-top box, but not until 2017.

http://www.multichannel.com/news/content/building-video-momentum/405085

And Comcast did broadcast the Olympics in 4K, but only to the Xfinity App built in to a select set of Smart TVs. Also, as with DIRECTV and DISH Network, the 4K signals were broadcast after a 24-hour delay which I understand was caused mostly by content prep requirements. 

Meanwhile for VOD, Netflix and Amazon are in the game producing and delivering 4K content. While VUDU and FandangoNow also have a limited set of licensed content available for streaming delivery.

Watch Dave Ronca discuss Netflix 4K workflow and technology architecture at Streaming Media East.

As for linear 4K UHD options, in the U.S. today there are just a few TV channels available with the only major operator offering a 24×7 4K UHD linear TV channel being DIRECTV. (There is also a small operator in Chattanooga Tennessee with five 4K UHD channels)

Given the seeming “lack of content” and esoteric discussions about 4K not being easy to “actually see” because most screen sizes are too small due to the extended viewing distance in most homes, you’d be excused for thinking that 4K is still a ways out.

But… our research took us to Best Buy, where the store is filled wall to wall with 4K UHD capable TVs.

Our conclusion?

Forget everything you’ve read: The upgrade in picture quality is real and it’s awesome.

And that brings us to the first key fact about 4K UHD:

  1. The upgrade in picture quality is significant – and it will drive an increase in value to the consumer – and drive additional revenues in return.

SNL Kagan released the following data in July 2016: nearly 2 out of 3 service providers and content producers surveyed reported they believe consumers are willing to pay more for 4K UHD content. (4K Global Industry Forecast, SNL Kagan, July 2016)

However, it’s important to note that this stunning picture quality isn’t simply resolution. In fact, as we’ll point out in an upcoming white paper, High Dynamic Range is probably as important a feature in today’s 4K UHD TVs as resolution.

HDR enables three key things. Most essentially, HDR captures the high contrast ratios – lighter lights and darker darks – that exist in the real world; as such, HDR images provide more ‘realism’, to stunning effect. Second, HDR provides greater luminance (brighter images), and third, it offers a wider color gamut (redder reds and greener greens).

If that consumer benefit can translate into revenue impact, and we believe it will, it will drive accelerated service provider adoption, particularly given our 2nd key fact about 4K:

  2. Competitive forces operating at scale amongst Service Providers and OTT providers will drive the adoption of 4K.

Once 4K rollouts start, many in the business feel it will move lightning fast compared to the HD rollout. Why? Consolidation has created more scale in the TV market.

Plus you need to add competitive pressure to the mix with digital leaders like Netflix setting a high video quality bar for not only OTT competitors but MVPDs.

Meantime, major video service providers have been aggressive in efforts to dominate and extend their footprint into consumer homes. Fear and competition will drive decision making and actions at MVPDs as much as consumer delight.

All of the growth pressure described in #2 manifests itself in the growing forecasts for UHD linear TV channel launches.

  3. SNL Kagan forecasts the number of global UHD Linear channels at 95 by the end of 2016 – and 237 globally by 2020.

Of course, this is a chicken-and-egg problem. Few consumers want to purchase 4K TVs if there isn’t enough content to be displayed on them.

But as Tim Bajarin of Creative Strategies points out, until 35-40% of homes have a 4K TV, the cable and broadcast networks won’t justify sizable numbers of 4K channel launches. [USA TODAY Jan 2 2016, “More 4K TV programming finally here in 2016”]

Which leads us to our 4th key fact about 4K UHD TV.

  4. Don’t forget about geography. 4K is already far more widely deployed in Asia Pacific and Western Europe than in the U.S.

It’s clear that 4K UHD is in the earliest stages of a commercial rollout. Yet it is surprising to see how far behind the U.S. is in 4K UHD channel launches, at least according to the SNL Kagan report previously referenced.

In that report, the North American region had just 12% of linear 4K UHD channels globally, compared with 42% in Asia Pacific, and 30% in Western Europe.

But as you think about the state of 4K and your company’s investment level – whether that be in acquiring content rights, licensing HEVC encoders, or upgrading your network and streaming technologies to accommodate the increased bandwidth demands – don’t make the mistake of misreading the speed of adoption. Start acquiring content and building your 4K workflows now, because when the competitive pressure arrives to have a full 4K UHD offer (and it will come), you do not want to be scrambling.

Can we profitably surf the Video Zettabyte Tsunami?

Two key ingredients are in place. But we need to get started now.

In a previous post, we warned about the Zettabyte video tsunami – and the accompanying flood of challenges and opportunities for video publishers of all stripes, old and new. 

Real-life tsunamis are devastating. But California’s all about big wave surfing, so we’ve been asking this question: Can we surf this tsunami?

The ability to do so is going to hinge on economics. So a better phrasing is perhaps: Can we profitably surf this video tsunami?

Two surprising facts came to light recently that point to an optimistic answer, and so we felt it was essential to highlight them.

1. The first fact is about the Upfronts – and it provides evidence that 4K UHD content can drive growth in top-line sales for media companies.

The results from the Upfronts – the annual marketplace where networks sell ad inventory to premium brand marketers – provided TV industry watchers a major upside surprise. This year, the networks sold a greater share of ad inventory at their upfront events, and at higher prices too. As Brian Steinberg put it in his July 27, 2016 Variety article (1):

“The nation’s five big English-language broadcast networks secured between $8.41 billion and $9.25 billion in advance ad commitments for primetime as part of the annual “upfront” market, according to Variety estimates. It’s the first time in three years they’ve managed to break the $9 billion mark. The upfront finish is a clear signal that Madison Avenue is putting more faith in TV even as digital-video options abound.”

Our conclusion? Beautiful, immersive content environments with a more limited number of high-quality ads can fuel new growth in TV. And 4K UHD, including the stunning impact of HDR, is where some of this additional value will surely come from.

Conventional wisdom is that today’s consumers are increasingly embracing ad-free SVOD OTT content from premium catalogs like Netflix, even when they have to pay for it. Since they are also taking the lead on 4K UHD content programming, that’s a great sign that higher value 4K UHD content will drive strong economics. But the data from the Upfronts also seems to suggest that premium ad-based TV content can be successful as well, especially when the Networks create immersive, clutter-free environments with beautiful pictures. 

Indeed, if the Olympics are any measure, Madison Avenue has received the message and turned up their game on the creative. I saw more than a few head-turning 30-second spots. Have you seen the Chobani ads in pristine HD? They’re as powerful as it gets. (2)

Check out this link to see the ads.

2. The second fact is about the operational side of the equation.

Can we deliver great content at a reasonable cost to a large enough number of homes?  On that front, we have more good news. 

The Internet in the United States is getting much faster. This, along with advanced methods of compression including HEVC, Content Adaptive Encoding and Perceptual Quality Metrics, will result in a ‘virtual upgrade’ of existing delivery network infrastructure. In particular, Ookla’s Speedtest.net published data on August 3, 2016 contained several stunning nuggets of information. But before we reveal the data, we need to provide a bit of context.

It’s important to note that 4K UHD content requires bandwidth of 15 Mbps or greater. Let’s be clear, this assumes Content Adaptive Encoding, Perceptual Quality Metrics, and HEVC compression are all used in combination. However, in Akamai’s State of the Internet report released in Q1 of this year, only 35% of the US population could access broadband speeds of 15 Mbps.

(Note: We have seen suggestions that 4K UHD content requires up to 25 Mbps. Compression technologies improve over time and those data points may well be old news. Beamr is on the cutting edge of compression and we firmly believe that 10 – 15 Mbps is the bandwidth needed – today – to achieve stunning 4K UHD audio visual quality.)

And that’s what makes Ookla’s data so important. Ookla found that in the first 6 months of 2016, fixed broadband customers saw a 42% year-over-year increase in average download speeds to a whopping 54.97 Mbps. Even more importantly, while 10% of Americans lack basic access to FCC target speeds of 25 Mbps, only 4% of urban Americans lack access to those speeds. This speed boost seems to be a direct result of industry consolidation, network upgrades, and growth in fiber optic deployments.

After seeing this news, we also decided to take a closer look at that Akamai data. And guess what we found? A steep slope upward from prior quarters (see chart below).

To put it back into surfing terms: Surf’s Up!
[Chart: time-based trends in internet connection speeds and adoption rates]

References:

(1) “How TV Tuned in More Upfront Ad Dollars: Soap, Toothpaste and Pushy Tactics” Brian Steinberg, July 27, 2016: http://variety.com/2016/tv/news/2016-tv-upftont-networks-advertising-increases-1201824887/ 

(2)  Chobani ad examples from their YouTube profile: https://www.youtube.com/watch?v=DD5CUPtFqxE&list=PLqmZKErBXL-Nk4IxQmpgpL2z27cFzHoHu

Data Caps, Zero-rated, Net Neutrality: The Video Tsunami Doesn’t Take Sides

We Need to Work Together to Conserve Bits in the Zettabyte Era

Over the past year, and again last week, there has been no shortage of articles and discussion around data caps, binge-on, zero rated content, and of course network neutrality.

We know the story. Consumer demand for Internet and over-the-top video content is insatiable. This is creating an unstoppable tsunami of video.

Vendors like Cisco have published the Visual Network Index to help the industry forecast how big that wave is, so we can work together to find sustainable ways to deliver it.

The Cisco VNI projects that internet video traffic will more than double to 2.3 Zettabytes by 2020. (Endnote 1.) To put it another way, that’s 1.3 Billion DVDs’ worth of video crossing the internet daily in 2020, versus the 543 Million DVDs’ worth crossing it daily today.

That’s still tough to visualize, so here’s a back-of-the-envelope thought experiment.

Let’s take the single largest TV event in history, Super Bowl 49.

On average, 114 million viewers were watching Super Bowl 49 at any given minute in 2015, and the broadcast ran about 3 hours and 35 minutes. We might say, then, that 24.5 Billion cumulative viewer-minutes of video were watched.

Assume that a DVD holds 180 minutes of video. (Note, this is an inexact guess assuming a conservative video quality.) If one person watched 543 Million DVDs of video, she would have to spend 97.8 billion cumulative minutes watching all of it. That’s four Super Bowl 49s every day.

And in 2020, it’s going to be close to 10 Super Bowl 49s of cumulative viewer-minutes of video trafficking across the network. In one day.
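The arithmetic behind this thought experiment is simple enough to check in a few lines of Python, using the figures quoted above:

```python
# Back-of-the-envelope check of the Super Bowl 49 thought experiment.
AVG_VIEWERS = 114e6               # average audience, every minute
BROADCAST_MINUTES = 3 * 60 + 35   # about 3 hours 35 minutes
DVD_MINUTES = 180                 # assumed video capacity of one DVD

super_bowl_viewer_minutes = AVG_VIEWERS * BROADCAST_MINUTES
print(super_bowl_viewer_minutes / 1e9)   # ~24.5 billion viewer-minutes

today_dvds = 543e6   # DVDs' worth of video crossing the internet daily today
print(today_dvds * DVD_MINUTES / super_bowl_viewer_minutes)   # ~4 Super Bowls/day

dvds_2020 = 1.3e9    # Cisco VNI projection for 2020
print(dvds_2020 * DVD_MINUTES / super_bowl_viewer_minutes)    # ~9.5 Super Bowls/day
```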

That is a lot of traffic and it is going to be hard work to transport those bits in a reliable, high-quality fashion that is also economically sustainable.

And that’s true no matter whether you are a network operator or an over-the-top content distributor. Here’s why.

All Costs are Variable in the Long-run

Recently, Comcast and Netflix have agreed to partner, which bodes well for both companies’ business models, and for the consumer at large. However, last week there were several news headlines about data caps and zero-rated content. These will undoubtedly continue.

Now, it’s obvious that OTT companies like Netflix & M-GO need to do everything they can to reduce the costs of video delivery. That’s why both companies have pioneered new approaches to video quality optimization.

On the other hand, it might seem that network operators have a fixed cost structure that gives them wiggle room for sub-optimal encodes.

But it’s worth noting this important economic adage: In the long run, all costs are variable. When you’re talking about the kind of growth in video traffic that industry analysts are projecting to 2020, everything is a variable cost.

And when it comes to delivering video sustainably, there’s no room for wasting bits. Both network operators and over-the-top content suppliers will need to do everything they can to lower the number of bits they transport without damaging the picture quality of the video.

In the age of the Zettabyte, we all need to be bit conservationists.

 

Endnote 1: http://www.cisco.com/c/dam/m/en_us/solutions/service-provider/vni-forecast-widget/forecast-widget/index.html

Translating Opinions into Fact When it Comes to Video Quality

This post was originally featured at https://www.linkedin.com/pulse/translating-opinions-fact-when-comes-video-quality-mark-donnigan 

In this post, we attempt to de-mystify the topic of perceptual video quality, which is the foundation of Beamr’s content adaptive encoding and content adaptive optimization solutions. 

National Geographic has a hit TV franchise on its hands. It’s called Brain Games, starring Jason Silva, a talent described as “a Timothy Leary of the viral video age” by The Atlantic. Brain Games is accessible, fun and accurate. It’s a dive into brain science that relies on well-produced demonstrations of illusions and puzzles to showcase the power — and limitations — of the human brain. It’s compelling TV that illuminates how we perceive the world. (Intrigued? Watch the first minute of this clip featuring Charlie Rose, Silva, and excerpts from the show: https://youtu.be/8pkQM_BQVSo )

At Beamr, we’re passionate about the topic of perceptual quality. In fact, we are so passionate that we built an entire company on it. Our technology leverages science’s knowledge of the human vision system to significantly reduce video delivery costs, reduce buffering, and speed up video starts without any change in the quality perceived by viewers. We’re also inspired by the show’s ability to make complex subjects compelling and accessible without distorting the truth. No easy feat. But let’s see if we can pull it off with a discussion of video quality measurement, which is also a dense topic.

Basics of Perceptual Video Quality

Our brains are amazing, especially in the way we process rich visual information. If a picture’s worth 1,000 words, what’s 60 frames per second in 4K HDR worth?

The answer varies based on what part of the ecosystem or business you come from, but we can all agree that it’s really impactful. And data intensive, too. But our eyeballs aren’t perfect and our brains aren’t either – as Brain Games points out. As such, it’s odd that established metrics for video compression quality in the TV business have been built on the idea that human vision is mechanically perfect.

See, video engineers have historically relied heavily on two key measures to evaluate the quality of a video encode: Peak Signal to Noise Ratio, or PSNR, and Structural Similarity, or SSIM. Both are ‘objective’ metrics. That is, we use tools to directly measure the physics of the video signal and construct mathematical algorithms from that data to create metrics. But is it possible to really quantify a beautiful landscape with a number? Let’s see about that.

PSNR and SSIM look at different physical properties of a video, but the underlying mechanics of both metrics are similar. You compress a source video, analyze specific properties of both the original and the encoded derivative, and calculate the metric for each. The more similar the two results are, the more similar the properties of the two videos, and the more confidently we can call our manipulation of the video, i.e. our encode, high or acceptable quality.
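To make those mechanics concrete, here is a minimal sketch of the PSNR half of that workflow in Python with NumPy. This is the textbook definition (10·log10(MAX²/MSE) for 8-bit frames), not Beamr’s production tooling, and the frames below are synthetic; a real evaluation would average this over every frame of a sequence:

```python
import numpy as np

def psnr(original: np.ndarray, encoded: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal to Noise Ratio between an original frame and its encode, in dB."""
    diff = original.astype(np.float64) - encoded.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")   # identical frames: no noise at all
    return 10 * np.log10(max_val ** 2 / mse)

# Toy example: a flat gray 1080p frame vs. a uniformly shifted "encode".
orig = np.full((1080, 1920), 100, dtype=np.uint8)
enc = np.full((1080, 1920), 110, dtype=np.uint8)
print(round(psnr(orig, enc), 2))   # ~28.13 dB (MSE = 100)
```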

Objective Quality vs. Subjective Quality


However, it turns out that these objectively calculated metrics do not correlate well to the human visual experience. In other words, in many cases, humans cannot perceive variations that objective metrics can highlight while at the same time, objective metrics can miss artifacts a human easily perceives.

The concept that human visual processing might be less than perfect is intuitive. It’s also widely understood in the encoding community. This fact opens a path to saving money, reducing buffering and speeding-up time-to-first-frame. After all, why would you knowingly send bits that can’t be seen?

But given the complexity of the human brain, can we reliably measure opinions about picture quality to know what bits can be removed and which cannot? This is the holy grail for anyone working in the area of video encoding.

Measuring Perceptual Quality

Actually, a rigorous, scientific and peer-reviewed discipline has developed over the years to accurately measure human opinions about the picture quality on a TV. The math and science behind these methods are memorialized in an important ITU standard on the topic, ITU-R BT.500, most recently updated in 2012. (The International Telecommunication Union is the largest standards body in global telecom.) I’ll provide a quick rundown.

First, a set of clips is selected for testing. A good test has a variety of clips with diverse characteristics: talking heads, sports, news, animation, UGC – the goal is to get a wide range of videos in front of human subjects.

Then, a subject pool of sufficient size is created and screened for 20/20 vision. They are placed in a light-controlled environment with a screen or two, depending on the set-up and testing method.

Instructions for one method are below, as a tangible example.

In this experiment, you will see short video sequences on the screen that is in front of you. Each sequence will be presented twice in rapid succession: within each pair, only the second sequence is processed. At the end of each paired presentation, you should evaluate the impairment of the second sequence with respect to the first one.

You will express your judgment by using the following scale:

5 Imperceptible

4 Perceptible but not annoying

3 Slightly annoying

2 Annoying

1 Very annoying

Observe carefully the entire pair of video sequences before making your judgment.
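Once all subjects have graded the clips, the scores for each clip are typically condensed into a mean opinion score (MOS) with a confidence interval, in the spirit of the BT.500 analysis. Here is a minimal sketch in Python; the ratings below are invented purely for illustration:

```python
import math
import statistics

def mean_opinion_score(ratings):
    """Mean opinion score and an approximate 95% confidence interval for one clip."""
    n = len(ratings)
    mos = statistics.mean(ratings)
    # Normal-approximation interval, as commonly used in BT.500-style analysis.
    ci = 1.96 * statistics.stdev(ratings) / math.sqrt(n)
    return mos, ci

# Hypothetical ratings from 15 subjects on the 5-point impairment scale above:
ratings = [5, 4, 4, 5, 3, 4, 4, 5, 4, 3, 4, 5, 4, 4, 4]
mos, ci = mean_opinion_score(ratings)
print(f"MOS = {mos:.2f} +/- {ci:.2f}")
```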

As you can imagine, testing like this is an expensive proposition indeed. It requires specialized facilities, trained researchers, vast amounts of time, and a budget to recruit subjects.

Thankfully, the rewards were worth the effort for teams like Beamr that have been doing this for years.

It turns out, if you run these types of subjective tests, you’ll find that there are numerous ways to remove 20 – 50% of the bits from a video signal without losing the ‘eyeball’ video quality – even when the objective metrics like PSNR and SSIM produce failing grades.

But most of the methods that have been tried are still stuck in academic institutions or research labs. This is because the complexities of upgrading or integrating the solution into the playback and distribution chain make them unusable. Have you ever had to update 20 million set-top boxes? Well if you have, you know exactly what I’m talking about.

We know the broadcast and large-scale OTT industry, which is why, when we developed our approach to measuring perceptual quality and applied it to reducing bitrates, we insisted on staying 100% inside the H.264/AVC and H.265/HEVC standards.

By pioneering the use of perceptual video quality metrics, Beamr is enabling media and entertainment companies of all stripes to reduce the bits they send by up to 50%. This reduces re-buffering events by up to 50%, improves video start time by 20% or more, and reduces storage and delivery costs.

Fortunately, you now understand the basics of perceptual video quality. You also see why most of the video engineering community believes content-adaptive encoding sits at the heart of next-generation encoding technologies.

Unfortunately, when we stated above that there were “numerous ways” to reduce bits by up to 50% without sacrificing ‘eyeball’ video quality, we skipped over some very important details. Such as how we can utilize subjective testing techniques on an entire catalog of videos, at scale and cost efficiently.

Next time: Part 2 and the Opinionated Robot

Looking for better tools to assess subjective video quality?

You definitely want to check out Beamr’s VCT, the best software player available on the market for judging HEVC, AVC, and YUV sequences in modes that are highly useful for a video engineer or compressionist.

VCT is available for Mac and PC. And best of all, we offer a FREE evaluation to qualified users.

Learn more about VCT: http://beamr.com/h264-hevc-video-comparison-player/

 

VCT, the Secret to Confident Subjective Video Quality Testing

We can all agree that analyzing video quality is one of the biggest challenges when evaluating codecs. Companies use a combination of objective and subjective tests to validate encoder efficiency. In this post, I’ll explore why it is difficult to measure video quality with quantitative metrics alone: they fail to match the subjective quality perception of the human eye.

Furthermore, we’ll look at why it’s important to equip yourself with the best resources when doing subjective testing, and how Beamr’s VCT visual comparison tool can help you with video quality testing.

But first, if you haven’t done so already, be sure to download your free trial of VCT here.

OBJECTIVE TESTING

The most common objective measurement used today is pixel-based Peak Signal to Noise Ratio (PSNR). PSNR is a popular test because it is easy to calculate and nearly everyone working in video is familiar with interpreting its values. But it does have limitations. Typically a higher PSNR value correlates with higher quality, and a lower PSNR value with lower quality. However, since this test reduces pixel-based mean-squared error over an entire frame (or collection of frames) to a single number, it does not always parallel true subjective quality.

PSNR gives equal weight to every pixel in the frame and each frame in a sequence, ignoring many factors that affect human perception. For example, below are two encoded images of the same frame.1 Image (a) and Image (b) have the same PSNR, which should theoretically correlate to two encodes of the same quality. However, the difference in perceived quality is easy to see: viewers would rate Image (a) as substantially higher quality than Image (b).

Example: 

[Image: PSNR example of why it shouldn't be the absolute measurement for assessing video quality]

Due to the inability of error-based methods like PSNR to adequately mimic human visual perception, other methods for analyzing video quality have been developed, including the Structural Similarity Index Metric (SSIM), which measures structural distortion. Unlike PSNR, SSIM models image degradation as perceived changes in three major aspects of an image: luminance, contrast, and structure. SSIM has gained popularity, but as with PSNR, it has its limitations. Studies have suggested that SSIM’s performance is on par with PSNR’s, and some have cited evidence of a systematic relationship between SSIM and Mean Squared Error (MSE).2
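For the curious, SSIM combines those three terms into a single score between a source frame x and an encoded frame y. Here is a simplified, single-window sketch in Python with NumPy; production SSIM implementations compute this over local sliding windows and average the results, so treat this as an illustration of the formula, not a reference implementation:

```python
import numpy as np

def global_ssim(x: np.ndarray, y: np.ndarray, max_val: float = 255.0) -> float:
    """Simplified whole-frame SSIM; real SSIM averages over local windows."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * max_val) ** 2   # stabilizing constants from the SSIM definition
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()          # luminance terms
    var_x, var_y = x.var(), y.var()          # contrast terms
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()  # structure term
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    )

# Identical frames score a perfect 1.0; distortion pulls the score down.
frame = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
print(global_ssim(frame, frame))                            # 1.0
noisy = np.clip(frame.astype(int) + 20, 0, 255).astype(np.uint8)
print(global_ssim(frame, noisy) < 1.0)                      # True
```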

While SSIM and other quantitative measures, including multi-scale structural similarity (MS-SSIM) and the Sarnoff Picture Quality Rating (PQR), have made significant gains, none can truly deliver the same assurance as subjective evaluation using the human eye. It is also important to note that the two most widely used objective quality metrics, PSNR and SSIM, were designed to evaluate static image quality. Both algorithms provide no meaningful information about motion artifacts, thereby limiting their effectiveness for video.

SUBJECTIVE TESTING

While objective methods attempt to model human perception, there are no substitutes for subjective “golden-eye” tests. But we are all familiar with the drawbacks of subjective analysis, including variance in individual quality perception and the difficulty of executing proper subjective tests in 100% controlled viewing environments with a large number of testers. Evaluating video using subjective visual tests can reveal key differences that may not be caught by objective measures alone, which is why it is important to use a combination of both objective and subjective testing methodologies.

One of the logistic difficulties of performing subjective quality comparisons is coordinating simultaneous playback of two streams. Recognizing some of the drawbacks of current subjective evaluation methods, in particular single-stream playback or awkward dual-stream review workarounds, Beamr spent years in research and development to build a tool that offers simultaneous playback of two videos with various comparison modes, to significantly improve the golden-eye test execution necessary to properly evaluate encoder efficiency.

Powered by our professional HEVC and H.264 codec SDK decoders, the Beamr video comparison tool VCT allows encoding engineers and compressionists to play back two frame-synchronized independent HEVC, H.264, or YUV sequences simultaneously, and to compare the quality of the streams in four modes:

  1. Split screen
  2. Side-by-side
  3. Overlay
  4. Butterfly (the newest mode)

MPEG2-TS and MP4 files containing either HEVC or H.264 elementary streams are also supported. Additionally, VCT displays valuable clip information such as bit-rate, screen resolution, frame rate, number of frames, and other important video information.

Developed in 2012, VCT was the industry’s first internal software player offered as a tool to help Beamr customers conduct subjective testing while evaluating our encoder’s efficiency. Today, VCT has been tested by many content and equipment companies around the world in multiple markets including broadcast, mobile, and internet streaming, making it the de facto standard for subjective golden-eye video quality testing and evaluation.

VCT BENEFITS AND TIPS

Your FREE trial of VCT will come with an extensive user guide that contains everything you need to get started. But we know you are eager to begin your testing, so following are a few quick tips we trust you will find useful. Take advantage of this “golden” opportunity and get started today!

Note: use Command (⌘) instead of Ctrl for the OS X version of VCT.

  1. Split Screen Comparison Mode:
    • Benefits:
      • Great for viewing two clips when only one screen is available.
      • Moving slider bar allows you to clearly see quality difference between two streams in your desired region of interest. For example, you can move the slider bar back and forth across a face to see quality differences between two discrete files.
    • Pro Tips:
      • Use the keyboard shortcut Ctrl + \ to re-center the slider bar after it is moved.
      • Shortcut key Ctrl + Tab allows you to change which video appears on the left or right of the slider bar.

[Image: VCT split screen comparison mode for subjective video quality assessment]

 

  2. Side-by-side Comparison Mode:
    • Benefits:
      • Great for tradeshows. Solves the lack of synchronization in side-by-side comparison tests that use two independent players.
      • Single control for both streams.
    • Pro Tip:
      • Shortcut key Ctrl + Tab allows you to change which video appears on which screen without moving the windows.

[Image: VCT side-by-side comparison mode for subjective video quality assessment]

 

  3. Overlay Comparison Mode:
    • Benefits:
      • Great for viewing the full frame of one stream on a single window.
    • Tips:
      • Shortcut key Ctrl + Tab allows you to cycle between the two videos. Cycling quickly is a great way to spot quality differences between the two streams that you might not otherwise notice.

[Image: VCT overlay comparison mode]

 

  4. Butterfly Comparison Mode:
    • Benefits:
      • Very useful for determining the accuracy of the encoding process. The butterfly mode displays mirrored images of two sequences to help you assess whether an artifact occurs in the source when comparing an encoded sequence to the original.
    • Tips:
      • Use shortcut key Ctrl + \ to reset to the leftmost view, and shortcut Ctrl + Alt + \ to switch to the rightmost view in butterfly mode.
      • Use shortcut keys Ctrl + [ and Ctrl + ] to move the image left or right in butterfly mode.

[Image: VCT butterfly comparison mode for subjective video quality assessment]

  5. Other Useful Tips:
    • Ctrl + m allows you to toggle through the 4 comparison modes.
    • Shift + Left Click opens the magnifier tool that allows you to zoom into hard-to-see areas of the video.
    • Easily scale frames of different resolutions to the same resolution by clicking “scale to same look” on the main menu.
    • The NEW automatic download feature on the splash screen notifies you of the latest version updates to ensure you’re always up to date.
    • For more great features, be sure to check out the VCT user guide: beamr.com/vct/userguide.com.

 

Reference:

(1)   P. M. Arun Kumar and S. Chandramathi. Video Quality Assessment Methods: A Bird’s-Eye View

(2)   Richard Dosselmann and Xue Dong Yang. A Formal Assessment of the Structural Similarity Index