Microservices – Good on a Bad Day [podcast]

Live streaming is arguably the least forgiving industry in today’s market. Anyone involved with live streaming workflows understands the sensitivity and high stakes of streaming any event live. Your viewers, on the other hand, don’t factor in the complexities of what happens behind the scenes when it comes to their quality expectations – but they certainly notice when something goes awry. In the words of id3as’ Adrian Roe, “What differentiates a great service from a merely good service is what happens when things go wrong.” And that’s where microservices can save the day.

In “Episode 07: Microservices – Good on a Bad Day,” The Video Insiders sit down with Dom Robinson & Dr. Adrian Roe from id3as to discuss how broadcasters are leveraging microservices to solve some of their workflow challenges.

https://youtu.be/Bt2earTfEU8

Press play to hear a snippet from Episode 07 or click here for the full episode.

Want to join the conversation? Reach out to TheVideoInsiders@beamr.com

Increasing Your Website’s Google Ranking With Smaller Images

For web users, waiting on slow-loading websites is about as exciting as watching paint dry. Those slow-moving progress bars and spinning wheels indicate you’re going nowhere fast, but what some may not realize is that the real impact reaches far beyond a poor user experience.

Google is moving to de-prioritize slow-loading sites in mobile search results. Starting in July, page speed will become a ranking factor in mobile web search, pushing content owners to make mobile optimization a priority in their workflows. (See also our mobile optimization video interview with RapidTV News.)

Brands and content owners familiar with Google’s algorithms may know that Google has penalized slow-loading pages in desktop search results for some time now, favoring websites built with user experience as a priority. With consumers now expecting the same experience across all of their devices, Google is extending its page-speed policy to mobile search results. As websites transition away from static content to offer more dynamic, personalized, and engaging experiences for users, we’ve seen design, information architecture, and photo optimization turn from “nice to have” into necessities.

The Speed Update, as Google calls it, will affect pages that deliver the slowest experience to mobile users—raising the standard for all mobile sites and demanding more from your web developer.

“We encourage developers to think broadly about how performance affects a user’s experience of their page and to consider a variety of user experience metrics,” Google said, in a posting. “People want to be able to find answers to their questions as fast as possible. Studies show that people really care about the speed of a page.”

Of course, the challenge in creating fast mobile website experiences somewhat revolves around the other parts of the ecosystem: Mobile phones lack the processing power and memory of desktops and laptops, for one; and even with LTE-Advanced 4G, mobile networks are prone to congestion and slowdowns at busy times of the day. But developers do have the ability to make the experience better.

What you can do – how to get started

The first order of business is to evaluate your website’s performance. Although no tool directly indicates whether a page is affected by the new ranking factor, Google offers Lighthouse, an automated tool for auditing the quality (performance, accessibility and more) of web pages, which developers can use to benchmark their pages’ performance. You should also check the mobile-friendliness of your site with Google’s Mobile-Friendly Test tool. Both tools are free, so there is no excuse not to use them!

Next, identify where the content can be streamlined for speed. For instance, given that photos and videos are the most substantial content on your page and account for the lion’s share of bits that get transferred over the network, site owners can apply optimization tools like JPEGmini to compress image and photo file sizes without compromising quality. By reducing the size of each photo file, pages can deliver higher quality experiences to mobile users over lower-bandwidth channels (like 3G, in-flight WiFi or crowded networks), thus improving the overall user experience – giving the page’s search rankings a boost and reducing CDN costs at the same time.
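To put that in concrete terms, here is a minimal sketch of the arithmetic involved (the page weight, compression ratio and link bandwidth below are illustrative assumptions, not measurements of any particular tool or network):

```python
def estimated_load_time_s(page_bytes: int, bandwidth_mbps: float) -> float:
    """Rough transfer time for a page over a link, ignoring latency and parallelism."""
    return (page_bytes * 8) / (bandwidth_mbps * 1_000_000)

# Illustrative page: 2.5 MB total, of which 1.8 MB is images (the "lion's share").
page_bytes = 2_500_000
image_bytes = 1_800_000

# Assume an image optimizer shrinks the photos by ~60% with no visible quality loss.
optimized_page = page_bytes - int(image_bytes * 0.60)

slow_link_mbps = 1.5  # a congested 3G-class connection
before = estimated_load_time_s(page_bytes, slow_link_mbps)
after = estimated_load_time_s(optimized_page, slow_link_mbps)
print(f"before: {before:.1f}s, after: {after:.1f}s")  # → before: 13.3s, after: 7.6s
```

Even this crude model shows why image optimization matters on mobile: on a slow link, the unoptimized page blows far past the three-second abandonment threshold, while the optimized one nearly halves the wait.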

Other best practices include reducing “code bloat,” where mobile sites are bogged down by excess code. Mobile sites should be as lightweight as possible, without a lot of background graphics, complex design or moving gifs. Reducing pop-up ads can also improve page-loading performance. For other ideas, Google offers PageSpeed Insights, a tool that indicates how well a page performs on the Chrome User Experience Report and suggests performance optimizations.

Sure, the intent of the search query is still pretty crucial for Google, so a slow-loading page may still rank highly if it has highly relevant content (for instance, an exact phrase match to the search query). And most mobile website visits today originate from apps (think Facebook, Snapchat, and Pinterest), driving traffic to viral or curated content presented within a specific context, regardless of speed. But mobile developers should nonetheless heed Google’s decree, given that 40% of mobile web users abandon a site if it fails to load within three seconds.

Check out our article “Are viewers the new content aggregators?”

And those creating business pages should take particular notice: As more internet traffic moves to mobile networks, websites cannot afford to ignore the data load on their pages. So, taking the steps necessary to improve mobile page performance is now a business visibility imperative, while making your audience happier in the process.

Are viewers the new content aggregators?

Consumer spending on video streaming services is expected to rise 39% this year to about $13 billion, according to the Consumer Technology Association – a staggering figure, especially when one considers that the average consumer around the world already views 4.4 hours of video a day.

That CTA data point, released at the just-wrapped Consumer Electronics Show (CES), dovetails with a TiVo survey (also released at CES 2018) that indicates an ongoing blurring of the line between streaming and more traditional sources of content. TiVo found that 90% of households subscribe to a traditional pay TV service. Nonetheless, 60% are also subscribers to at least one streaming service, suggesting that the 4.4 waking hours of video consumption is comprised of a mix of sources—with streaming poised to take on an escalating role.

A big driver for this is the proliferation of pacts between pay-TV providers and OTT providers (for instance, Comcast’s integration of Netflix into its Xfinity X1 interface). In turn, this has created a groundswell of streaming services moving to “input one”— meaning that consumers don’t have to toggle between devices or remotes in order to tap into streamed content. That convenience factor is ushering in a new era of content consumption. On-demand binge watching as a shared experience between families and friends in the living room has made shows like Stranger Things, The Handmaid’s Tale, and Mindhunter into overnight pop-culture sensations. Streaming may still be unicast on a technical level, but it’s no longer strictly experienced by one individual viewing content on a secondary screen on his or her own, or on one platform.

“Consumers today are acting as their own aggregator, piecing together what they need from a variety of video service and device combinations to suit their individual needs,” according to the survey. “Success in this new environment will not be about a single content source monopolizing the living room, instead it will be about adapting the business model to deliver value, integrated services, and personalization to meet the evolving consumer needs.”

It helps, too, that smart TV ownership in the US has nearly doubled since 2013, with an average of three connected TV (CTV) devices owned per household, according to yet another CES survey, from Nielsen. It found that 74% of people use their CTV daily and that CTV streaming patterns mirror traditional linear TV. The highest usage by those surveyed was in the 8-11 pm prime-time block.

Along with this, mobile streaming continues to be an important part of the landscape. At CES, it was clear that the TV isn’t the only screen catering to the OTT use case. For instance, Razer and Netflix announced a deal to make the service available on the new Razer Phone, which supports HDR video and Dolby Digital Plus 5.1 sound. It’s a device tailored for high-end media consumption, with the Razer Phone standing as the first smartphone to support both premium audio and video formats for Netflix.

It’s exciting to imagine where we’ll be when next year’s CES rolls around, given that the trends we’ve highlighted here, taken together, point to OTT video beginning to permeate every corner of consumer lives (and across screens that are making it easier than ever to access it). As a result, a complex entertainment economy is emerging, where cutting-edge devices, marquee content, better carrier networks and infrastructure innovation on the compression and quality-of-experience front all work together to deliver new, revenue-generating consumer experiences that put the viewer in the driver’s seat.

CES 2018: Connected TV Tunes into the User Experience

As we roll into 2018, almost half of all US broadband households (45%) now own a smart TV, according to Parks Associates research – and it’s now the most commonly used platform for watching online video content. At the same time, consumers are getting choosy about their user experience (UX), meaning that key points of differentiation for the connected video device market moving forward will be ease of use, content discoverability and, above all, streaming quality.

Makers of smart TVs and streaming media players are in the process of shifting strategies to focus on the UX, which means beefing up middleware, adding bells and whistles (like intelligent voice control) and implementing strategic advances in video quality optimization, such as support for advanced HEVC encoding.

“Parks Associates’ holiday data found 11% of US broadband households had a strong intention to purchase a 4K/Ultra HD TV this holiday season, but overall, device sales of flat-panel TVs have flattened out,” said Jennifer Kent, director, research quality and product development, Parks Associates. “As a result, we are seeing new partnerships among device manufacturers focused on ways to improve or refresh the UI of the smart TV, to make the device easy to use and a single point of content in the living room.”

When it comes to making UX a key differentiator, improving how users search for and discover new content is a growing battlefield. Consumers, for instance, are expressing a thirst for cross-catalog, cross-platform search, where results from all video providers appear in one place, be they streaming services or linear/traditional pay-TV offerings. To that end, in late 2017, Philips partnered with Roku to launch a line of smart TVs that use Roku’s platform to simplify remote control needs and content navigation.

Meanwhile, voice is making inroads into the connected entertainment area: More than 50% of US broadband households find voice control appealing for entertainment and smart-home devices, according to the Parks survey.

“Voice recognition and control are enabling entertainment equipment manufacturers to improve the user experience. An emphasis on a voice-enabled UX will be a key trend in connected CE for 2018,” said Dina Abdelrazik, research analyst, Parks Associates. “We expect to see more voice innovations in streaming services and connected platforms at CES this year.”

These UX advancements, however, won’t translate into market differentiation without one very important piece: A superior video quality of experience.

On this front, we see moves that place video optimization front-and-center, such as Apple enabling HEVC on up to one billion devices thanks to the release of iOS 11 and High Sierra back in September. Also, video streaming services and hardware manufacturers across the board are reevaluating their codec approaches in light of the fact that HEVC offers significantly improved video quality: Up to 40% greater compression with fewer artifacts and smoother playback than H.264. That also translates to the ability to stream 4K and HDR video over networks with reasonable bandwidth consumption, paving the way for more Ultra HD content availability and thus enhanced consumer demand for those connected devices that support it.

In 2018, a high-quality UX that can woo viewers with the right mix of top-notch streaming quality and advancements in content discovery will no longer be a nice-to-have when it comes to connected TV – it will be a critical linchpin for the competitive landscape moving forward. We expect this to be one of the main conversations at CES this week – and we can’t wait to join in.

2016 Paves the Way for a Next-Gen Video Encoding Technology Explosion in 2017

2016 has been a significant year for video compression as 4K, HDR, VR and 360 video picked up steam, paving the road for an EXPLOSION of HEVC adoption in 2017. With HEVC’s ability to reduce bitrate and file sizes by up to 50% over H.264, it is no surprise that HEVC has become the essential enabler of high-quality, reliable streaming video powering all the new and exciting entertainment experiences being launched.

Couple this with the latest announcement from HEVC Advance removing royalty uncertainties that plagued the market in 2016 and we have a perfect marriage of technology and capability with HEVC.

In this post we’ll discuss 2016 from the lenses of Beamr’s own product and company news, combined with notable trends that will shape 2017 in the advanced video encoding space.  

>> The Market Speaks: Setting the Groundwork for an Explosion of HEVC

The State of 4K

With 4K content creation growing and the average selling price of UHD 4K TVs dropping (and being adopted faster than HDTVs), 4K is here and the critical mass of demand will follow closely. We recently did a little investigative research on the state of 4K and four of the most significant trends pushing its adoption by consumers:

  • The upgrade in picture quality is significant and will drive an increase in value to the consumer – and, most importantly, additional revenue opportunities for services as consumers are preconditioned to pay more for a premium experience. It only takes a few minutes viewing time to see that 4K offers premium video quality and enhances the entertainment experience.
  • Competitive forces are operating at scale – Service Providers and OTT distributors will drive the adoption of 4K. MSOs are upping their game, and in 2017 you will see several deliver highly formidable services to take on pure play OTT distributors. Who’s going to win, who’s going to lose? We think it’s going to be a win-win: services will be able to increase ARPUs and reduce churn, while consumers will be able to actually experience the full quality and resolution that their new TV can deliver.
  • Commercially available 4K UHD services will be scaling rapidly –  SNL Kagan forecasts the number of global UHD Linear channels at 237 globally by 2020, which is great news for consumers. The UltraHD Forum recently published a list of UHD services that are “live” today numbering 18 VOD and 37 Live services with 8 in the US and 47 outside the US. Clearly, content will not be the weak link in UHD 4K market acceptance for much longer.
  • Geographic deployments — 4K is more widely deployed in Asia Pacific and Western Europe than in the U.S. today. But we see this as a massive opportunity: many people travel abroad, will be exposed to the incredible quality, and will return home asking their service provider why they had to leave the country to see 4K. As soon as the planned services in the U.S. launch, they will likely attract customers more quickly than we’ve seen in the past.

HDR adds WOW factor to 4K

High Dynamic Range (HDR) improves video quality by going beyond more pixels to increase the amount of data delivered by each pixel. HDR video is capable of capturing a larger range of brightness and luminosity to produce an image closer to what can be seen in real life. Show anyone HDR content encoded in 4K resolution, and it’s no surprise that content providers and TV manufacturers are quickly jumping on board to deliver content with HDR. Yes, it’s “that good.” There is no disputing that HDR delivers the “wow” factor that the market and consumers are looking for. But what’s even more promising is the industry’s overwhelmingly positive reaction to it. Read more here.

Beamr has been working with Dolby to enable Dolby Vision HDR support for several years now, even jointly presenting a white paper at SMPTE in 2014. The V.265 codec is optimized for Dolby Vision and HDR10 and takes into account all requirements for both standards including full support for VUI signaling, SEI messaging, SMPTE ST 2084:2014 and ITU-R BT.2020. For more information visit http://beamr.com/vanguard-by-beamr-content-adaptive-hevc-codec-sdk

Beamr is honored to have customers who are best in class and span OTT delivery, Broadcast, Service Providers and other entertainment video applications. From what we see and hear, studios are uber excited about HDR, cable companies are prepping for HDR delivery, Satellite distributors are building the capability to distribute HDR, and of course OTT services like Netflix, FandangoNow (formerly M-GO), VUDU, and Amazon are already distributing content using either Dolby Vision or HDR10 (or both). If your current video encoding workflow cannot fully support or adequately encode content with HDR, it’s time to update. Our V.265 video encoder SDK is a perfect place to start.

VR & 360 Video at Streamable Bitrates

360-degree video made a lot of noise in 2016. YouTube, Facebook and Twitter added support for 360-degree videos, including live streaming in 360 degrees, to their platforms. 360-degree video content and computer-generated VR content is being delivered to web browsers, mobile devices, and a range of Virtual Reality headsets. The Oculus Rift, HTC Vive, Gear VR and Daydream View have all shipped this year, creating a new market for immersive content experiences.

But there is an inherent problem with delivering VR and 360 video on today’s platforms. To enable HD viewing in your “viewport” (the part of the 360-degree space that you actually look at), the resolution of the full 360 video delivered to you needs to be 4K or more. Yet the devices used to view this content today – desktops, mobile devices and VR headsets – generally support only H.264 video decoding. So delivering this high-resolution content requires very high bitrates – roughly twice as much as with the more modern HEVC standard.
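As a back-of-the-envelope illustration of that claim (the H.264 bitrate below is an illustrative assumption, not a measurement from any service, and the 2x efficiency ratio is the rough figure cited above):

```python
def hevc_equivalent_mbps(h264_bitrate_mbps: float, efficiency: float = 0.5) -> float:
    """Bitrate HEVC would need for comparable quality,
    assuming it is roughly twice as efficient as H.264."""
    return h264_bitrate_mbps * efficiency

# Assume the full 4K 360-degree sphere needs ~40 Mbps in H.264
# to keep the HD viewport sharp (illustrative figure).
h264_mbps = 40.0
hevc_mbps = hevc_equivalent_mbps(h264_mbps)
print(f"H.264: {h264_mbps} Mbps, HEVC: {hevc_mbps} Mbps")  # → H.264: 40.0 Mbps, HEVC: 20.0 Mbps
```

Halving the delivered bitrate is the difference between a stream that fits comfortably on a good mobile connection and one that does not – which is why decoder support, not encoder capability, is the bottleneck described here.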

The current workaround is to lower video quality so that the H.264 stream fits into a reasonable bandwidth. The resulting experience is not the best possible, which can discourage users from consuming this newly available VR and 360 video content. But one thing we know for sure: next-generation compression – HEVC, content-adaptive encoding and perceptual optimization – will be a critical part of the final solution. Read more about VR and 360 here.

Patent Pool HEVC Advance Announces “Royalty Free” HEVC software

As 4K, HDR, VR and 360 video gather steam, Beamr has seen adoption moving faster than expected, but with unanswered questions around royalties, and concerns over who would shoulder the cost burden, distributors have been tentative. The latest move by HEVC Advance to offer a royalty-free option is meant to encourage and accelerate the adoption (implementation) of HEVC by removing royalty uncertainties.

Internet streaming distributors and software application providers can be at ease knowing they can offer applications with HEVC software decoders without incurring onerous royalties or licensing fees. This is important as streaming app content consumption continues to increase, with more and more companies investing in its future.

By initiating a software-only royalty solution, HEVC Advance expects to push the rest of the market – i.e., device manufacturers and browser providers – to implement HEVC capability in their hardware and offer their customers the best and most efficient video experience possible.


>> 2017 Predictions

Mobile Video Services will Drive the Need for Content-adaptive Optimization

Given the trend toward better quality and higher resolution (4K), it’s more important than ever for video content distributors to pursue more efficient methods of encoding their video so they can adapt to the rapidly changing market, and this is where content-adaptive optimization provides a massive benefit.

The boundaries between OTT services and traditional MSO (cable and satellite) are being blurred now that all major MSOs include TVE (TV Everywhere streaming services with both VOD and Linear channels) in their subscription packages (some even break these services out separately, as is the case with SlingTV). And in October, AT&T CEO Randall Stephenson vowed that DirecTV Now would disrupt the pay-TV business with revolutionary pricing for an Internet-streaming service at a mere $35 per month for a package with more than 100 channels.

And get this – AT&T wireless is adopting the practice of “zero rating” for their customers, that is, they will not count the OTT service streaming video usage toward the subscriber’s monthly data plan. This represents a great value for customers, but there is no doubt that it puts pricing pressure on the operational side of all zero rated services.

2017 is the year that consumers will finally be able to enjoy linear as well as VOD content anywhere they wish even outside the home.

Beamr’s Contribution to MSOs, Service Providers, and OTT Distributors is More Critical Than Ever

When reaching consumers across multiple platforms, each with different constraints and delivery cost models, Beamr’s content-adaptive optimizer tunes the encoding process to the most efficient quality and bitrate combination.

Whether you pay by the bit delivered to a traditional CDN provider, or operate your own infrastructure, the benefits of delivering less traffic are realized with improved UX such as faster stream start times and reduced re-buffering events, in addition to the cost savings. One popular streaming service reported to us that after implementing our content-adaptive optimization solution their rebuffering events as measured on the player were reduced by up to 50%, while their stream start times improved 20%.

Recently popularized by Netflix and Google, content-adaptive encoding is the idea that not all videos are created equal in terms of their encoding requirements. Content-adaptive optimization complements the encoding process by driving the encoder to the lowest bitrate possible based on the needs of the content, and not a fixed target bitrate (as seen in traditional encoding processes and products).

A content-adaptive solution can optimize more efficiently by analyzing already-encoded video on a frame-by-frame and scene-by-scene level, detecting areas of the video that can be further compressed without losing perceptual quality (e.g. slow motion scenes, smooth surfaces).

Provided the perceptual quality calculation is performed at the frame level, by an optimizer with a closed-loop perceptual quality measure, the output can be guaranteed to be the highest quality at the lowest possible bitrate. Click the following link to learn how Beamr’s patented content adaptive optimization technology achieves exactly this result.
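The closed-loop idea described above can be sketched as a simple search per frame. To be clear, this is a toy illustration of the general technique, not Beamr’s actual algorithm; the `encode_at` and `measure_quality` callables and the quantizer range are hypothetical stand-ins for a real codec and a real perceptual metric:

```python
def optimize_frame(frame, measure_quality, encode_at, min_q=18, max_q=40, target=0.95):
    """Find the most aggressive compression level that still meets a
    perceptual-quality target for one frame, via binary search.
    `encode_at(frame, q)` re-encodes the frame at quantizer q;
    `measure_quality(original, encoded)` returns a score in [0, 1].
    Both are placeholders for a real codec and metric."""
    best = encode_at(frame, min_q)  # safe fallback: lightest compression
    lo, hi = min_q, max_q
    while lo <= hi:
        q = (lo + hi) // 2
        candidate = encode_at(frame, q)
        if measure_quality(frame, candidate) >= target:
            best = candidate       # quality target still met: try compressing harder
            lo = q + 1
        else:
            hi = q - 1             # perceptual quality dropped: back off
    return best

# Toy stand-ins: "encoding" just records q, and quality degrades linearly with q.
encode_at = lambda frame, q: (frame, q)
measure_quality = lambda orig, enc: 1 - (enc[1] - 18) / 40
print(optimize_frame("frame-1", measure_quality, encode_at))  # → ('frame-1', 20)
```

The loop converges on the highest quantizer (i.e., lowest bitrate) whose output still scores above the quality threshold, which is exactly the "lowest bitrate the content needs" behavior the paragraph describes, in contrast to encoding to a fixed target bitrate.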

Encoding and Optimization Working Together to Build the Future

Since the content-adaptive optimization process is applied to files that have already been encoded, by combining an industry leading H.264 and HEVC encoder with the best optimization solution (Beamr Video), the market will be sure to benefit by receiving the highest quality video at the lowest possible bitrate and file size. As a result, this will allow content providers to improve the end-user experience with high quality video, while meeting the growing network constraints due to increased mobile consumption and general Internet congestion.

Beamr took a bold step toward delivering on this stated market requirement, disrupting the video encoding space when in April 2016 we acquired Vanguard Video – a premier video encoding and technology company. This move will benefit the industry starting in 2017, when we introduce a new class of video encoder that we call a Content Adaptive Encoder.

As content adaptive encoding techniques are being adopted by major streaming services and video platforms like YouTube and Netflix, the market is gearing up for more advanced rate control and optimization methods, something that fits our perceptual quality measure technology perfectly. This fact when combined with Beamr having the best in class HEVC software encoder in the industry, will yield exciting benefits for the market. Read the Beamr Encoder Superguide that details the most popular methods for performing content adaptive encoding and how you can integrate them into your video workflow.

One Year from Now…

A year from now, when you read our post summarizing 2017 and heralding 2018, what you will likely hear is that 2017 was the year advanced codecs like HEVC, combined with efficient perceptually based quality measures such as Beamr’s, provided an additional 20% or greater bitrate reduction.

The ripple effect of this technology leap will be that services struggling to compete today on quality or bitrate may fall so far behind that they lose their ability to grow in the market. We know of many multi-service operator platforms that are gearing up to increase the quality of their video beyond the current best in class for OTT services. That’s right: they’ve watched the consumer response to new entrants offering superior video quality, and they are not sitting still. In fact, many are planning to leapfrog the competition with aggressive adoption of content-adaptive, perceptual-quality-driven solutions.

If any one service assumes they have the leadership position based on bitrate or quality, 2017 may prove to be a reshuffling of the deck.

For Beamr, the industry can expect to see an expansion of our software encoder line with the integration of our perceptual quality measure which has been developed over the last 7 years, and is covered by more than 50 patents granted and pending. We are proud of the fact that this solution has been shipping for more than 3 years in our stand-alone video and photo optimizer solutions.

It’s going to be an exciting year for Beamr and the industry and we welcome you to join us. If you are intrigued and would like to learn more about our products or are interested in evaluating any of our solutions, check us out at beamr.com.

Patent Pool HEVC Advance Responds: Announces “Royalty Free” HEVC Software

HEVC Advance Releases New Software Policy

November 22nd, 2016 may be remembered as the day that wholesale adoption of HEVC as the preferred next-generation codec began. For companies like Beamr that are innovating on next-generation video encoding technologies such as HEVC, the news that HEVC Advance will drop royalties (license fees) on certain applications of its patents is huge.

In its press release, HEVC Advance, the patent pool for key HEVC technologies, stated that it will not seek a license fee or royalties on software applications that utilize the HEVC compression standard for encoding and decoding. This carve-out applies only to software that can run on commodity servers, but we think the restriction fits beautifully with where the industry is headed.

Did you catch that? NO HEVC ROYALTIES FOR SOFTWARE ENCODERS AND DECODERS!

Specifically, the policy will protect “application layer software downloaded to mobile devices or personal computers after the initial sales of the device, where the HEVC encoding or decoding is fully executed in software on a general purpose CPU” from royalty and licensing fees.

Requirements of Eligible Software

For those trying to wrap their heads around eligibility, the new policy outlines three requirements which the software products performing HEVC decoding or encoding must meet:

  1. Application layer software, or codec libraries used by application layer software, enabling software-only encoding or decoding of HEVC.
  2. Software downloaded after the initial sale of a related product (mobile device or desktop personal computer). In the case of software which otherwise would fit the exclusion but is being shipped with a product, then the manufacturer of the product would need to pay a royalty.
  3. Software must not be specifically excluded.

Examples of exempted software applications, where an HEVC decode royalty will likely not be due, include web browsers, personal video conferencing software, and video players provided by various internet streaming distributors or software application providers.

For more information check out https://www.hevcadvance.com/

As stated previously, with the rise of virtual private and public cloud encoding workflows, it appears that for many companies there will be no added cost to utilize HEVC in place of H.264, provided their HEVC encoder meets the eligibility requirements.

A Much Needed Push for HEVC Adoption

As 4K, HDR, VR and 360 video gather steam, Beamr has seen adoption moving faster than expected, but with unanswered questions around royalties and concerns over the cost burden, even the largest distributors have been tentative. This move by HEVC Advance is meant to encourage and accelerate the adoption (implementation) of HEVC by removing uncertainties in the market.

Internet streaming distributors and software application providers can be at ease knowing they can offer applications with HEVC software decoders without incurring onerous royalties or licensing fees. This is important as streaming app content consumption continues to increase, with more and more companies investing in its future.

By initiating a software-only royalty solution, HEVC Advance expects to push the rest of the market – i.e., device manufacturers and browser providers – to implement HEVC capability in their hardware and offer their customers the best and most efficient video experience possible.

What this Means for a Video Distributor

Beamr is the leader in H.265/HEVC encoding. With 60 engineers around the world working at the codec level to produce the highest performing HEVC codec SDK in the market, Beamr V.265 delivers exceptional quality with much better scalability than any other software codec.

Industry benchmarks show that H.265/HEVC provides on average a 30% bitrate efficiency gain over H.264 at the same quality and resolution. Given the bandwidth pressure all networks are under to upgrade quality while minimizing the bits used, there is only one video encoding technology available at scale to meet the needs of the market, and that is HEVC.
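To make that 30% figure concrete in delivery terms, here is a quick sketch. The bitrate, watch-hour and CDN cost figures are illustrative assumptions chosen for the example, not data from any operator:

```python
def monthly_delivery_gb(bitrate_kbps: float, hours_watched: float) -> float:
    """Total gigabytes delivered for a month of viewing at a given bitrate."""
    bytes_per_second = bitrate_kbps * 1000 / 8
    return bytes_per_second * hours_watched * 3600 / 1e9

h264_kbps = 6000                # illustrative 1080p ladder rung in H.264
hevc_kbps = h264_kbps * 0.70    # the ~30% efficiency gain cited above
hours = 1_000_000               # illustrative monthly watch hours for a service
cost_per_gb = 0.02              # illustrative CDN rate, USD

saved_gb = monthly_delivery_gb(h264_kbps, hours) - monthly_delivery_gb(hevc_kbps, hours)
print(f"saved ~{saved_gb:,.0f} GB ≈ ${saved_gb * cost_per_gb:,.0f}/month")
```

Under these assumptions the switch saves roughly 810,000 GB of delivery a month; even at a modest per-gigabyte rate, the savings compound quickly at streaming scale, before counting the quality-of-experience benefits of the lighter streams.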

The classic chicken and egg problem no longer exists with HEVC.

The challenge every new technology faces as it is introduced into the market is the classic problem of needing to attract both implementers and users. In the case of a video encoding technology, no matter the benefits, it cannot be deployed without an appropriately scaled playback ecosystem – a sufficiently large number of players in the market.

But the good news is that over the last few years, and as consumers have propelled the TV upgrade cycle forward, many have opted to purchase UHD 4k TVs.

Most 2015-2016 models of major-brand TVs have built-in HEVC decoders, and this trend will continue in 2017 and beyond. Netflix, Amazon, VUDU, and FandangoNow (M-GO) ship their players on most models of UHD TVs, which can decode and play back H.265/HEVC content from these services. These distributors were all able to utilize the native HEVC decoder in the TV, easing the complexity of launching a 4K app.

For those who wonder whether there is a sufficiently large ecosystem of HEVC playback in the market, just look at the 90 million such TVs in homes globally today (approximately 40 million of them in the US). And consider that in 2017 the number of 4K HEVC-capable TVs will nearly double to 167 million, according to Cisco, as illustrated below.

[Chart: Cisco VNI global IP traffic forecast, 2015-2020]

The industry has spoken regarding the superior quality and performance of Beamr’s own HEVC encoder, and we will be providing benchmarks and documentation in future blog posts. Meanwhile our team of architects and implementation specialists who work with the largest service providers, SVOD consumer streaming services, and broadcasters in the world are ready to discuss your migration plans from H.264 to HEVC.

Just fill out our short Info Request form and the appropriate person will get in touch.

The State of Commercially Available 4K UHD Services

In a recent article we did a little investigative research on the state of 4K and identified four significant trends:

  1. The upgrade in picture quality is significant and will drive an increase in value to the consumer – and additional revenues for services.
  2. Competitive forces are operating at scale – Service Providers and OTT distributors will drive the adoption of 4K.
  3. SNL Kagan forecasts the number of global UHD linear channels at 95 by the end of 2016 – and 237 globally by 2020.
  4. Geography. 4K is already far more widely deployed in Asia Pacific and Western Europe than in the U.S.

In this article we want to further highlight the state of commercially available 4K UHD services. The UltraHD Forum published a list of UHD services that are “live” and it’s worth checking out.

To break it down: of the 55 services listed, 18 are VOD and 37 are live, with 8 in the US and 47 outside the US.

The 4K adoption rate isn’t moving as slowly as one might think, so don’t make the mistake of misreading its speed. It’s time to start building your 4K workflows now as the competitive pressure is fast approaching.

Note: The following UHD service chart is courtesy UltraHD Forum.

Operator Country Service Topology Delivery Model Notes
AcTVila Japan VoD OTT Unicast ABR
airtel 4K India Live IPTV broadcast
Amazon US VoD OTT Unicast ABR
Bein Middle East Live DTH Broadcast
BT UK Live IPTV broadcast
Comcast US Push VoD Cable DOCSIS 3.x NBC used HDR10 & Atmos for Rio Olympics
Dalian Tiantu China TS Playout Cable unverified
DirecTV US VoD DTH Push VoD
Dish UHD promo Live IPTV broadcast
Fashion one (SES) Luxembourg Live DTH broadcast
Festival4K France Live IPTV broadcast
Fransat France Live DTH broadcast
Fransat France TS Playout DTH broadcast
Free France Live IPTV Multicast Android middleware, 1 channel at launch: Fashion TV loop
Globo TV Brazil VoD OTT Unicast ABR
High 4K TV Live IPTV broadcast
insight Live IPTV broadcast
Inspur China Live Cable unverified
J:COM Japan Live Cable Broadcast
KPN Netherlands Live IPTV Multicast
KT Korea Live IPTV Multicast
LG Uplus Korea VoD / Live ? IPTV Multicast
M-Go US VoD OTT Unicast ABR
Nasa TV US/Europe Live IPTV broadcast
Netflix US VoD OTT Unicast ABR
NOS Portugal Live Cable Broadcast, Multicast, Unicast ABR OTT trials have occurred
NTT Plala Japan Live / VoD IPTV Multicast
Orange France France Live IPTV Multicast Dolby Atmos available on some broadcasts
pearl tv Luxembourg Live DTH broadcast
SFR France Live IPTV Multicast UHD used to promote Fiber
SKBB Korea Live IPTV Multicast
Sky Deutschland Germany Live / Push-VoD DTH / Cable broadcast Launched October 5th 2016, 2 Live channels + Push VoD
Sky Italia Italy Live DTH broadcast “Super HD” launched June 2016, HDR Announced for 2017
Sky UK UK Live DTH broadcast Available to premium Sky Q customers
SkyLife Korea Live DTH broadcast
SkyPerfecTV Japan Live DTH / Opticast broadcast HDR announced for October 2016
Slovak Telecom Slovakia VoD OTT Unicast ABR
Sony US VoD OTT Unicast ABR
Sth Korea’s Pandora Korea VoD OTT Unicast ABR
Stofa Denmark Live cable Multicast Viasat Ultra HD
Swisscom Switzerland Live & VoD IPTV Multicast Testing HDR
Tata Sky India Live DTH broadcast cricket world cup’15
Telekom Malaysia Malaysia Live IPTV Multicast Demonstration/Trials – Launch soon
Telus Canada VoD OTT Unicast ABR Starts with VoD – Live coming soon
Tivusat Italy Live DTH Broadcast
Tricolor Russia TS Playout DTH broadcast
Turkcell Turkey Live IPTV Multicast
UHD-1 Live IPTV broadcast
UMAX Korea TS Playout Cable broadcast
Videocon India Live DTH broadcast cricket world cup’15
Vidity US VoD OTT Unicast ABR
Vodafone Portugal Portugal Live IPTV Multicast
Vodafone Spain Spain Live / VoD IPTV Multicast, Unicast
VUDU US VoD OTT Unicast ABR Dolby Vision and Atmos support announced
waiku tv France VoD OTT Unicast ABR

Can we profitably surf the Video Zettabyte Tsunami?

Two key ingredients are in place. But we need to get started now.

In a previous post, we warned about the Zettabyte video tsunami – and the accompanying flood of challenges and opportunities for video publishers of all stripes, old and new. 

Real-life tsunamis are devastating. But California’s all about big wave surfing, so we’ve been asking this question: Can we surf this tsunami?

The ability to do so is going to hinge on economics. So a better phrasing is perhaps: Can we profitably surf this video tsunami?

Two surprising facts came to light recently that point to an optimistic answer, and so we felt it was essential to highlight them.

1. The first fact is about the Upfronts – and it provides evidence that 4K UHD content can drive growth in top-line sales for media companies.

The results from the Upfronts – the annual marketplace where networks sell ad inventory to premium brand marketers – provided TV industry watchers a major upside surprise. This year, the networks sold a greater share of ad inventory at their upfront events, and at higher prices too. As Brian Steinberg put it in his July 27, 2016 Variety article (1):

“The nation’s five big English-language broadcast networks secured between $8.41 billion and $9.25 billion in advance ad commitments for primetime as part of the annual “upfront” market, according to Variety estimates. It’s the first time in three years they’ve managed to break the $9 billion mark. The upfront finish is a clear signal that Madison Avenue is putting more faith in TV even as digital-video options abound.”

Our conclusion? Beautiful, immersive content environments with a more limited number of high-quality ads can fuel new growth in TV. And 4K UHD, including the stunning impact of HDR, is where some of this additional value will surely come from.

Conventional wisdom is that today’s consumers are increasingly embracing ad-free SVOD OTT content from premium catalogs like Netflix, even when they have to pay for it. Since they are also taking the lead on 4K UHD content programming, that’s a great sign that higher value 4K UHD content will drive strong economics. But the data from the Upfronts also seems to suggest that premium ad-based TV content can be successful as well, especially when the Networks create immersive, clutter-free environments with beautiful pictures. 

Indeed, if the Olympics are any measure, Madison Avenue has received the message and turned up its game on the creative. I saw more than a few head-turning 30-second spots. Have you seen the Chobani ads in pristine HD? They’re as powerful as it gets. (2)

Check out the link in the references below to see the ads.

2. The second fact is about the operational side of the equation.

Can we deliver great content at a reasonable cost to a large enough number of homes?  On that front, we have more good news. 

The Internet in the United States is getting much faster. This, along with advanced methods of compression including HEVC, Content Adaptive Encoding, and Perceptual Quality Metrics, will result in a ‘virtual upgrade’ of existing delivery network infrastructure. In particular, data published by Ookla’s Speedtest.net on August 3, 2016 contained several stunning nuggets of information. But before we reveal the data, we need to provide a bit of context.

It’s important to note that 4K UHD content requires bandwidth of 15 Mbps or greater. Let’s be clear, this assumes Content Adaptive Encoding, Perceptual Quality Metrics, and HEVC compression are all used in combination. However, according to Akamai’s State of the Internet report released in Q1 of this year, only 35% of the US population could access broadband speeds of at least 15 Mbps.

(Note: We have seen suggestions that 4K UHD content requires up to 25 Mbps. Compression technologies improve over time and those data points may well be old news. Beamr is on the cutting edge of compression and we firmly believe that 10 – 15 Mbps is the bandwidth needed – today – to achieve stunning 4K UHD audio visual quality.)

And that’s what makes Ookla’s data so important. Ookla found that in the first 6 months of 2016, fixed broadband customers saw a 42% year-over-year increase in average download speeds to a whopping 54.97 Mbps. Even more importantly, while 10% of Americans lack basic access to FCC target speeds of 25 Mbps, only 4% of urban Americans lack access to those speeds. This speed boost seems to be a direct result of industry consolidation, network upgrades, and growth in fiber optic deployments.
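As a quick sanity check on those figures, the prior-year average implied by a 42% year-over-year increase, and the headroom the new average leaves over a 15 Mbps 4K stream, can be worked out directly (the inputs are the numbers quoted above; the arithmetic is ours):

```python
# Sanity check on the Ookla figures quoted above: what prior-year average
# does a 42% YoY increase to 54.97 Mbps imply, and how much headroom does
# that average leave over a 15 Mbps 4K UHD stream?

AVG_2016_MBPS = 54.97
YOY_INCREASE = 0.42
UHD_BITRATE_MBPS = 15.0   # the 4K UHD bandwidth figure used in this post

prior_year_mbps = AVG_2016_MBPS / (1 + YOY_INCREASE)   # ~38.7 Mbps a year earlier
headroom = AVG_2016_MBPS / UHD_BITRATE_MBPS            # ~3.7x a single 4K stream
```

In other words, the average fixed-broadband connection already carries several times the bandwidth a well-compressed 4K stream needs; the remaining problem is the tail of households below that average.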

After seeing this news, we also decided to take a closer look at that Akamai data. And guess what we found? A steep slope upward from prior quarters (see chart below).

To put it back into surfing terms: Surf’s Up!
[Chart: Akamai time-based trends in internet connection speeds and adoption rates]

References:

(1) “How TV Tuned in More Upfront Ad Dollars: Soap, Toothpaste and Pushy Tactics” Brian Steinberg, July 27, 2016: http://variety.com/2016/tv/news/2016-tv-upftont-networks-advertising-increases-1201824887/ 

(2)  Chobani ad examples from their YouTube profile: https://www.youtube.com/watch?v=DD5CUPtFqxE&list=PLqmZKErBXL-Nk4IxQmpgpL2z27cFzHoHu

Data Caps, Zero-rated, Net Neutrality: The Video Tsunami Doesn’t Take Sides

We Need to Work Together to Conserve Bits in the Zettabyte Era

Over the past year, and again last week, there has been no shortage of articles and discussion around data caps, binge-on, zero rated content, and of course network neutrality.

We know the story. Consumer demand for Internet and over-the-top video content is insatiable. This is creating an unstoppable tsunami of video.

Vendors like Cisco have published the Visual Network Index to help the industry forecast how big that wave is, so we can work together to find sustainable ways to deliver it.

The Cisco VNI projects that internet video traffic will more than double to 2.3 Zettabytes by 2020. (Endnote 1.) To put it another way, that’s 1.3 billion DVDs of video crossing the internet daily in 2020, versus the 543 million DVDs of video that cross the internet daily today.

That’s still tough to visualize, so here’s a back-of-the-envelope thought experiment.

Let’s take the single largest TV event in history, Super Bowl 49.

An average of 114 million viewers, every minute, watched Super Bowl 49 in 2015. The broadcast ran about 3 hours and 35 minutes, or 215 minutes. We might say that 24.5 billion cumulative viewer-minutes of video were watched.

Assume that a DVD holds 180 minutes of video. (Note, this is an inexact guess assuming a conservative video quality.) If one person had to watch 543 million DVDs of video, she would spend 97.7 billion cumulative minutes watching all of it. That’s four Super Bowl 49s’ worth of viewing every day.

And in 2020, it’s going to be close to 10 Super Bowl 49s of cumulative viewer-minutes of video trafficking across the network. In one day.
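The thought experiment above can be worked through in a few lines. Every input is a figure quoted in the text, including the same rough 180-minute-per-DVD guess:

```python
# The back-of-the-envelope thought experiment above, worked through in code.
# All inputs are the figures quoted in the text.

VIEWERS = 114e6              # average per-minute Super Bowl 49 audience
BROADCAST_MIN = 3 * 60 + 35  # 215-minute broadcast
DVD_MIN = 180                # rough minutes of video per DVD (inexact guess)
DVDS_TODAY = 543e6           # DVDs of video crossing the internet daily today
DVDS_2020 = 1.3e9            # Cisco VNI projection for 2020

superbowl_viewer_min = VIEWERS * BROADCAST_MIN                   # ~24.5 billion
superbowls_today = DVDS_TODAY * DVD_MIN / superbowl_viewer_min   # ~4 per day
superbowls_2020 = DVDS_2020 * DVD_MIN / superbowl_viewer_min     # ~10 per day
```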

That is a lot of traffic and it is going to be hard work to transport those bits in a reliable, high-quality fashion that is also economically sustainable.

And that’s true no matter whether you are a network operator or an over-the-top content distributor. Here’s why.

All Costs are Variable in the Long-run

Recently, Comcast and Netflix have agreed to partner, which bodes well for both companies’ business models, and for the consumer at large. However, last week there were several news headlines about data caps and zero-rated content. These will undoubtedly continue.

Now, it’s obvious that OTT companies like Netflix & M-GO need to do everything they can to reduce the costs of video delivery. That’s why both companies have pioneered new approaches to video quality optimization.

On the other hand, it might seem that network operators have a fixed cost structure that gives them wiggle room for sub-optimal encodes.

But it’s worth noting this important economic adage: In the long run, all costs are variable. When you’re talking about the kind of growth in video traffic that industry analysts are projecting to 2020, everything is a variable cost.

And when it comes to delivering video sustainably, there’s no room for wasting bits. Both network operators and over-the-top content suppliers will need to do everything they can to lower the number of bits they transport without damaging the picture quality of the video.

In the age of the Zettabyte, we all need to be bit conservationists.

 

Endnote 1: http://www.cisco.com/c/dam/m/en_us/solutions/service-provider/vni-forecast-widget/forecast-widget/index.html

Translating Opinions into Fact When it Comes to Video Quality

This post was originally featured at https://www.linkedin.com/pulse/translating-opinions-fact-when-comes-video-quality-mark-donnigan 

In this post, we attempt to de-mystify the topic of perceptual video quality, which is the foundation of Beamr’s content adaptive encoding and content adaptive optimization solutions. 

National Geographic has a hit TV franchise on its hands. It’s called Brain Games, starring Jason Silva, a talent described as “a Timothy Leary of the viral video age” by the Atlantic. Brain Games is accessible, fun, and accurate. It’s a dive into brain science that relies on well-produced demonstrations of illusions and puzzles to showcase the power – and limitation – of the human brain. It’s compelling TV that illuminates how we perceive the world. (Intrigued? Watch the first minute of this clip featuring Charlie Rose, Silva, and excerpts from the show: https://youtu.be/8pkQM_BQVSo )

At Beamr, we’re passionate about the topic of perceptual quality. In fact, we are so passionate that we built an entire company based on it. Our technology leverages science’s knowledge of the human visual system to significantly reduce video delivery costs, reduce buffering, and speed up video starts, without any change in the quality perceived by viewers. We’re also inspired by the show’s ability to make complex things compelling and accessible without distorting the truth. No easy feat. But let’s see if we can pull it off with a discussion of video quality measurement, which is also a dense topic.

Basics of Perceptual Video Quality

Our brains are amazing, especially in the way we process rich visual information. If a picture’s worth 1,000 words, what’s 60 frames per second in 4K HDR worth?

The answer varies based on what part of the ecosystem or business you come from, but we can all agree that it’s really impactful. And data intensive, too. But our eyeballs aren’t perfect and our brains aren’t either – as Brain Games points out. As such, it’s odd that established metrics for video compression quality in the TV business have been built on the idea that human vision is mechanically perfect.

See, video engineers have historically relied heavily on two key measures to evaluate the quality of a video encode: Peak Signal to Noise Ratio, or PSNR, and Structural Similarity, or SSIM. Both are ‘objective’ metrics. That is, we use tools to directly measure the physics of the video signal and construct mathematical algorithms from that data to create metrics. But is it possible to really quantify a beautiful landscape with a number? Let’s see about that.

PSNR and SSIM look at different physical properties of a video, but the underlying mechanics of both metrics are similar. You compress a source video, then analyze the properties of the original and the encoded derivative, and calculate the metric for both. The more similar the two measurements are, the more we can say that the properties of each video are similar, and the closer we can come to defining our manipulation of the video, i.e. our encode, as having high or acceptable quality.
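To make those mechanics concrete, here is a minimal PSNR calculation in plain Python. A real pipeline operates on full decoded frames; the flat pixel lists below are toy stand-ins:

```python
import math

def psnr(original, encoded, max_val=255.0):
    """Peak Signal-to-Noise Ratio, in dB, between two pixel sequences.

    Higher means the encode deviates less from the source; identical
    inputs give infinity. Real pipelines run this per decoded frame.
    """
    mse = sum((o - e) ** 2 for o, e in zip(original, encoded)) / len(original)
    if mse == 0:
        return float("inf")   # no measurable difference at all
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy example: a flat source "frame" vs. an encode that shifted every
# pixel value by 16. MSE = 256, giving roughly 24 dB.
score = psnr([0] * 100, [16] * 100)
```

SSIM follows the same compare-source-to-encode pattern but scores local luminance, contrast, and structure rather than raw pixel error. Either way, the number is computed from the signal alone, which is exactly the limitation discussed next.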

Objective Quality vs. Subjective Quality


However, it turns out that these objectively calculated metrics do not correlate well to the human visual experience. In other words, in many cases, humans cannot perceive variations that objective metrics can highlight while at the same time, objective metrics can miss artifacts a human easily perceives.

The concept that human visual processing might be less than perfect is intuitive. It’s also widely understood in the encoding community. This fact opens a path to saving money, reducing buffering and speeding-up time-to-first-frame. After all, why would you knowingly send bits that can’t be seen?

But given the complexity of the human brain, can we reliably measure opinions about picture quality to know what bits can be removed and which cannot? This is the holy grail for anyone working in the area of video encoding.

Measuring Perceptual Quality

Actually, a rigorous, scientific, and peer-reviewed discipline has developed over the years to accurately measure human opinions about the picture quality on a TV. The math and science behind these methods are memorialized in an important standard on the topic, ITU-R BT.500, originally published in 2008 and updated in 2012. (The International Telecommunication Union is the largest standards body in global telecom.) I’ll provide a quick rundown.

First, a set of clips is selected for testing. A good test has a variety of clips with diverse characteristics: talking heads, sports, news, animation, UGC – the goal is to get a wide range of videos in front of human subjects.

Then, a subject pool of sufficient size is created and screened for 20/20 vision. They are placed in a light-controlled environment with a screen or two, depending on the set-up and testing method.

The instructions for one method, the Double-Stimulus Impairment Scale, are below as a tangible example.

In this experiment, you will see short video sequences on the screen that is in front of you. Each sequence will be presented twice in rapid succession: within each pair, only the second sequence is processed. At the end of each paired presentation, you should evaluate the impairment of the second sequence with respect to the first one.

You will express your judgment by using the following scale:

5 Imperceptible

4 Perceptible but not annoying

3 Slightly annoying

2 Annoying

1 Very annoying

Observe carefully the entire pair of video sequences before making your judgment.
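Once the judgments are in, each clip’s ratings are reduced to a Mean Opinion Score with a confidence interval, which is the aggregation step BT.500-style analyses describe. A minimal sketch, with made-up ratings for illustration:

```python
import math
import statistics

def mean_opinion_score(ratings):
    """Aggregate one clip's 1-5 impairment judgments into a Mean Opinion
    Score plus a 95% confidence interval (normal approximation)."""
    mos = statistics.mean(ratings)
    ci95 = 1.96 * statistics.stdev(ratings) / math.sqrt(len(ratings))
    return mos, ci95

# Made-up ratings from eight hypothetical viewers of one paired presentation
mos, ci = mean_opinion_score([5, 4, 4, 5, 3, 4, 5, 4])   # MOS 4.25, +/- ~0.49
```

In a real study this runs over every clip, after screening out inconsistent subjects, so that encodes can be compared by the scores viewers actually gave rather than by signal math.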

As you can imagine, testing like this is an expensive proposition indeed. It requires specialized facilities, trained researchers, vast amounts of time, and a budget to recruit subjects.

Thankfully, the rewards were worth the effort for teams like Beamr that have been doing this for years.

It turns out, if you run these types of subjective tests, you’ll find that there are numerous ways to remove 20 – 50% of the bits from a video signal without losing the ‘eyeball’ video quality – even when the objective metrics like PSNR and SSIM produce failing grades.

But most of the methods that have been tried remain stuck in academic institutions or research labs, because the complexity of upgrading or integrating them into the playback and distribution chain makes them unusable. Have you ever had to update 20 million set-top boxes? Well, if you have, you know exactly what I’m talking about.

We know the broadcast and large-scale OTT industry, which is why, when we developed our approach to measuring perceptual quality and applied it to reducing bitrates, we insisted on staying 100% inside the AVC H.264 and HEVC H.265 standards.

By pioneering the use of perceptual video quality metrics, Beamr is enabling media and entertainment companies of all stripes to reduce the bits they send by up to 50%. This reduces re-buffering events by up to 50%, improves video start time by 20% or more, and reduces storage and delivery costs.

Fortunately, you now understand the basics of perceptual video quality. You also see why most of the video engineering community believes content-adaptive techniques sit at the heart of next-generation encoding technologies.

Unfortunately, when we stated above that there were numerous ways to reduce bits by up to 50% without sacrificing ‘eyeball’ video quality, we skipped over some very important details, such as how we can apply subjective testing techniques to an entire catalog of videos at scale and cost efficiently.

Next time: Part 2 and the Opinionated Robot

Looking for better tools to assess subjective video quality?

You definitely want to check out Beamr’s VCT which is the best software player available on the market to judge HEVC, AVC, and YUV sequences in modes that are highly useful for a video engineer or compressionist.

VCT is available for Mac and PC. And best of all, we offer a FREE evaluation to qualified users.

Learn more about VCT: http://beamr.com/h264-hevc-video-comparison-player/