Immersive VR and 360 video at streamable bitrates: Are you crazy?

There have been many high-profile experiments with VR and 360 video in the past year. Immersive video is compelling, but large and unwieldy to deliver. This area will require huge advancements in video processing – including shortcuts and tricks that border on ‘magical’.

Most of us have experienced breathtaking demonstrations that provide a window into the powerful capacity of VR and 360 video – and into the future of premium immersive video experiences.

However, if you search the web for an understanding of how much bandwidth is required to create these video environments, you’re likely to get lost in a tangled thicket of theories and calculations.

Can the industry support the bitrates these formats require?

One such post, published on Forbes in February 2016, says no.

It provides a detailed mathematical account of why fully immersive VR will require each eye to receive 720 million pixels at 36 bits per pixel and 60 frames per second – or a total of 3.1 trillion bits per second. (1)

We’ve taken a poll at Beamr, and no one in the office has access to those kinds of download speeds. And some of these folks pay the equivalent of a part-time salary to their ISP!

Thankfully the Forbes article goes on to explain that it’s not quite that bad.

According to the author, existing video compression standards can reduce this number by a factor of 300, and HEVC by a factor of 600 – down to roughly 5.2 Gbps.
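The arithmetic behind those figures is easy to verify. A quick sketch in Python, using the article's own numbers:

```python
# Back-of-the-envelope check of the Forbes figures.
PIXELS_PER_EYE = 720_000_000   # the article's "ultimate display" region per eye
BITS_PER_PIXEL = 36
FPS = 60
EYES = 2

raw_bps = PIXELS_PER_EYE * BITS_PER_PIXEL * FPS * EYES
print(f"Uncompressed: {raw_bps / 1e12:.1f} Tbps")   # ~3.1 Tbps

# The article's assumed compression ratios
for name, ratio in [("existing codecs", 300), ("HEVC", 600)]:
    print(f"With {name} (factor of {ratio}): {raw_bps / ratio / 1e9:.1f} Gbps")
```

Dividing 3.1 Tbps by 600 lands at about 5.2 Gbps, which matches the article's conclusion.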

The truth is, the calculations put forth in the Forbes piece are very ambitious indeed. As the author states:

“The ultimate display would need a region of 720 million pixels for full coverage because even though your foveal vision has a more narrow field of view, your eyes can saccade across that full space within an instant. Now add head and body rotation for 360 horizontal and 180 vertical degrees for a total of more than 2.5 billion (giga) pixels.”

A more realistic view of the way VR will roll out was presented by Charles Cheevers of network equipment vendor ARRIS at INTX in May of this year. (2)

A great VR experience, including a full 360-degree stereoscopic video environment at 4K resolution, could easily require a streaming bandwidth of 500 Mbps or more.

That’s still way too high, so what’s a VR producer to do?

Magical illusion, of course. 

In fact, just like your average Vegas magician, the current state of the art in VR delivery relies on tricks and shortcuts that leverage the imperfect way we humans see.

For example, Foveated Rendering can be used to aggressively compress the areas of a VR video where your eyes are not focused.

This technique alone – and variations on this theme – can bring the bandwidth required by companies like NextVR dramatically lower, with some reports that an 8 Mbps stream can provide a compelling immersive experience. The fact is, there are endless ways to configure the end-to-end workflow for VR, and much will depend on the hardware, software, and networking environments in which it is deployed.

Compression innovations are also being tried that use perceptual frame-by-frame rate control methodologies, and others that map spherical images onto cubes and pyramids – transposing the picture into 5 or 6 viewing planes so that the highest resolution is always on the plane where the eyes are most intensely focused. (3)

At the end of the day, it’s going to be hard to pin down your nearest VR dealer on the amount of bandwidth that’s required for a compelling VR experience. But there’s one thing we know for sure – next-generation compression, including HEVC, content-adaptive encoding, and perceptual optimization, will be a critical part of the final solution.

References:

(1) Found on August 10, 2016 at the following URL: http://www.forbes.com/sites/valleyvoices/2016/02/09/why-the-internet-pipes-will-burst-if-virtual-reality-takes-off/#ca7563d64e8c

(2) Start at 56 minutes. https://www.intxshow.com/session/1041/  — Information and a chart are also available online here: http://www.onlinereporter.com/2016/06/17/arris-gives-us-hint-bandwidth-requirements-vr/ 

(3) Facebook’s developer site gives a fascinating look at these approaches, which they call dynamic streaming techniques. Found on August 10, 2016 at the following URL:  https://code.facebook.com/posts/1126354007399553/next-generation-video-encoding-techniques-for-360-video-and-vr/

4 Facts about 4K

We recently did a little investigative research on the state of 4K, and here are four highlights of what we found.

To start, as an industry, we’ve been anticipating 4K for a few years now, but it was just this past April that DIRECTV launched the first-ever Live 4K broadcast from the Masters Golf Tournament. Read more here:

http://ktla.com/2016/03/30/get-ready-for-4k-programming-with-directv/

In May Comcast EVP Matt Strauss spoke with Multichannel News about the company’s plans to begin distributing a 4K HDR capable Xi6 set-top box, but not until 2017.

http://www.multichannel.com/news/content/building-video-momentum/405085

And Comcast did broadcast the Olympics in 4K, but only to the Xfinity App built into a select set of Smart TVs. Also, as with DIRECTV and DISH Network, the 4K signals were broadcast after a 24-hour delay, which I understand was caused mostly by content prep requirements.

Meanwhile for VOD, Netflix and Amazon are in the game producing and delivering 4K content, while VUDU and FandangoNow also have a limited set of licensed content available for streaming delivery.

Watch Dave Ronca discuss Netflix 4K workflow and technology architecture at Streaming Media East.

As for linear 4K UHD options, in the U.S. today there are just a few TV channels available, and the only major operator offering a 24×7 4K UHD linear TV channel is DIRECTV. (There is also a small operator in Chattanooga, Tennessee, with five 4K UHD channels.)

Given the seeming “lack of content” and esoteric discussions about 4K not being easy to “actually see” because most screen sizes are too small due to the extended viewing distance in most homes, you’d be excused for thinking that 4K is still a ways out.

But… our research took us to Best Buy, where the store is filled wall to wall with 4K UHD capable TVs.

Our conclusion?

Forget everything you’ve read: The upgrade in picture quality is real and it’s awesome.

And that brings us to the first key fact about 4K UHD:

  1. The upgrade in picture quality is significant – and it will drive an increase in value to the consumer – and drive additional revenues in return.

SNL Kagan released the following data in July 2016: nearly 2 out of 3 service providers and content producers they surveyed reported they believe consumers are willing to pay more for 4K UHD content. (4K Global Industry Forecast, SNL Kagan, July 2016)

However, it’s important to note that this stunning picture quality isn’t simply resolution. In fact, as we’ll point out in an upcoming white paper, High Dynamic Range is probably as important a feature in today’s 4K UHD TVs as resolution.

HDR enables three key things. Most essentially, HDR captures the high contrast ratios – lighter lights and darker darks – that exist in the real world. As such, HDR images provide more ‘realism’ – and to stunning effect. HDR also provides greater luminance (brighter highlights), and thirdly, it offers a wider color gamut (redder reds and greener greens).

If that consumer benefit can translate into revenue impact, and we believe it will, this will drive accelerated service provider adoption, particularly given our second finding about 4K:

  2. Competitive forces operating at scale – among service providers and OTT providers – will drive the adoption of 4K.

Once 4K rollouts start, many in the business feel it will move lightning fast compared to the HD rollout. Why? Consolidation has created more scale in the TV market.

Plus you need to add competitive pressure to the mix with digital leaders like Netflix setting a high video quality bar for not only OTT competitors but MVPDs.

Meantime, major video service providers have been aggressive in efforts to dominate and extend their footprint into consumer homes. Fear and competition will drive decision making and actions at MVPDs as much as consumer delight.

All of the growth pressure described in #2 manifests itself in the growing forecasts for UHD linear TV channel launches.

  3. SNL Kagan forecasts the number of global UHD linear channels at 95 by the end of 2016 – and 237 globally by 2020.

Of course, this is a chicken-and-egg problem. Few consumers want to purchase 4K TVs if there isn’t enough content to be displayed on them.

But as Tim Bajarin of Creative Strategies points out, until 35-40% of homes have a 4K TV, the cable and broadcast networks won’t justify sizable numbers of 4K channel launches. [USA TODAY Jan 2 2016, “More 4K TV programming finally here in 2016”]

Which leads us to our fourth key fact about 4K UHD TV.

  4. Don’t forget about geography. 4K is already far more widely deployed in Asia Pacific and Western Europe than in the U.S.

It’s clear that 4K UHD is in the earliest stages of a commercial rollout. Yet it is surprising to see how far behind the U.S. is in 4K UHD channel launches, at least according to the SNL Kagan report previously referenced.

In that report, the North American region had just 12% of linear 4K UHD channels globally, compared with 42% in Asia Pacific, and 30% in Western Europe.

But as you think about the state of 4K and your company’s investment level – whether that be in acquiring content rights, licensing HEVC encoders, or upgrading your network and streaming technologies to accommodate the increased bandwidth demands – don’t make the mistake of misreading the speed of adoption. Start acquiring content and building your 4K workflows now, because when the competitive pressure to have a full 4K UHD offer arrives (and it will come), you do not want to be scrambling.

Can we profitably surf the Video Zettabyte Tsunami?

Two key ingredients are in place. But we need to get started now.

In a previous post, we warned about the Zettabyte video tsunami – and the accompanying flood of challenges and opportunities for video publishers of all stripes, old and new. 

Real-life tsunamis are devastating. But California’s all about big wave surfing, so we’ve been asking this question: Can we surf this tsunami?

The ability to do so is going to hinge on economics. So a better phrasing is perhaps: Can we profitably surf this video tsunami?

Two surprising facts came to light recently that point to an optimistic answer, and so we felt it was essential to highlight them.

1. The first fact is about the Upfronts – and it provides evidence that 4K UHD content can drive growth in top-line sales for media companies.

The results from the Upfronts – the annual marketplace where networks sell ad inventory to premium brand marketers – provided TV industry watchers a major upside surprise. This year, the networks sold a greater share of ad inventory at their upfront events, and at higher prices too. As Brian Steinberg put it in his July 27, 2016 Variety article (1):

“The nation’s five big English-language broadcast networks secured between $8.41 billion and $9.25 billion in advance ad commitments for primetime as part of the annual “upfront” market, according to Variety estimates. It’s the first time in three years they’ve managed to break the $9 billion mark. The upfront finish is a clear signal that Madison Avenue is putting more faith in TV even as digital-video options abound.”

Our conclusion? Beautiful, immersive content environments with a more limited number of high-quality ads can fuel new growth in TV. And 4K UHD, including the stunning impact of HDR, is where some of this additional value will surely come from.

Conventional wisdom is that today’s consumers are increasingly embracing ad-free SVOD OTT content from premium catalogs like Netflix, even when they have to pay for it. Since they are also taking the lead on 4K UHD content programming, that’s a great sign that higher value 4K UHD content will drive strong economics. But the data from the Upfronts also seems to suggest that premium ad-based TV content can be successful as well, especially when the Networks create immersive, clutter-free environments with beautiful pictures. 

Indeed, if the Olympics are any measure, Madison Avenue has received the message and turned up its game on the creative. I saw more than a few head-turning :30 spots. Have you seen the Chobani ads in pristine HD? They’re as powerful as it gets. (2)

Check out this link to see the ads.

2. The second fact is about the operational side of the equation.

Can we deliver great content at a reasonable cost to a large enough number of homes?  On that front, we have more good news. 

The Internet in the United States is getting much faster. This, along with advanced methods of compression including HEVC, Content Adaptive Encoding, and Perceptual Quality Metrics, will result in a ‘virtual upgrade’ of existing delivery network infrastructure. In particular, data published by Ookla’s Speedtest.net on August 3, 2016 contained several stunning nuggets of information. But before we reveal the data, we need to provide a bit of context.

It’s important to note that 4K UHD content requires bandwidth of 15 Mbps or greater. Let’s be clear, this assumes Content Adaptive Encoding, Perceptual Quality Metrics, and HEVC compression are all used in combination. However, per Akamai’s State of the Internet report released in Q1 of this year, only 35% of the US population could access broadband speeds of 15 Mbps or higher.

(Note: We have seen suggestions that 4K UHD content requires up to 25 Mbps. Compression technologies improve over time and those data points may well be old news. Beamr is on the cutting edge of compression and we firmly believe that 10 – 15 Mbps is the bandwidth needed – today – to achieve stunning 4K UHD audio visual quality.)

And that’s what makes Ookla’s data so important. Ookla found that in the first 6 months of 2016, fixed broadband customers saw a 42% year-over-year increase in average download speeds to a whopping 54.97 Mbps. Even more importantly, while 10% of Americans lack basic access to FCC target speeds of 25 Mbps, only 4% of urban Americans lack access to those speeds. This speed boost seems to be a direct result of industry consolidation, network upgrades, and growth in fiber optic deployments.

After seeing this news, we also decided to take a closer look at that Akamai data. And guess what we found? A steep slope upward from prior quarters (see chart below).

To put it back into surfing terms: Surf’s Up!
[Chart: Time-based trends in internet connection speeds and adoption rates]

References:

(1) “How TV Tuned in More Upfront Ad Dollars: Soap, Toothpaste and Pushy Tactics” Brian Steinberg, July 27, 2016: http://variety.com/2016/tv/news/2016-tv-upftont-networks-advertising-increases-1201824887/ 

(2)  Chobani ad examples from their YouTube profile: https://www.youtube.com/watch?v=DD5CUPtFqxE&list=PLqmZKErBXL-Nk4IxQmpgpL2z27cFzHoHu

Applications for On-the-Fly Modification of Encoder Parameters

As video encoding workflows modernize to include content adaptive techniques, the ability to change encoder parameters “on-the-fly” will be required. With the ability to change encoder resolution, bitrate, and other key elements of the encoding profile, video distributors can achieve a significant advantage by creating recipes appropriate to each piece of content.

For VOD or file-based encoding workflows, the advantage of on-the-fly reconfigurability is that it enables content-specific encoding recipes without resetting the encoder and disrupting the workflow. At the same time, on-the-fly functionality is a necessary feature for supporting real-time encoding on a network with variable capacity. This way the application can react appropriately to changing bandwidth, network congestion, or other operational requirements.

Vanguard by Beamr V.264 AVC Encoder SDK and V.265 HEVC Encoder SDK have supported on-the-fly modification of the encoder settings for several years. Let’s take a look at a few of the more common applications where having the feature can be helpful.

On-the-fly control of Bitrate

Adjusting bitrate while the encoder is in operation is an obvious application. All Vanguard by Beamr codec SDKs allow for the maximum bitrate to be changed via a simple “C-style” API.  This will enable bitrate adjustments to be made based on the available bandwidth, dynamic channel lineups, or other network conditions.
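As a hedged illustration of the kind of control loop an application might run – the `Encoder` class and `set_max_bitrate` name below are stand-ins, not the actual Vanguard by Beamr C API:

```python
# Illustrative sketch: an application reacting to measured network throughput
# by capping the encoder's max bitrate on the fly. All names are hypothetical.
class Encoder:
    """Stand-in for an encoder that supports on-the-fly bitrate control."""
    def __init__(self, max_bitrate):
        self.max_bitrate = max_bitrate

    def set_max_bitrate(self, bps):  # hypothetical API name
        self.max_bitrate = bps

def adjust_for_throughput(encoder, measured_bps, headroom=0.8):
    """Cap the encoder at a fraction of measured throughput, leaving headroom."""
    target = int(measured_bps * headroom)
    if target != encoder.max_bitrate:
        encoder.set_max_bitrate(target)
    return target

enc = Encoder(max_bitrate=8_000_000)
# Successive throughput samples as the network becomes congested:
for sample_bps in (10_000_000, 6_000_000, 3_000_000):
    adjust_for_throughput(enc, sample_bps)
print(enc.max_bitrate)  # 2400000
```

The key point is that the cap changes while the encoder keeps running, with no reset and no disruption to the stream.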

On-the-fly control of Encoder Speed

Encoder speed is an especially useful parameter, as it directly trades off video encoding quality against processing time. Calling this function triggers a different set of encoding algorithms and internal codec presets. This scenario applies to unicast transmissions, where a service may need to adjust the encoder speed for ever-changing network conditions and client device capabilities.

On-the-fly control of Video Resolution

A useful parameter to access on the fly is video resolution. One use case is in telecommunications, where the end user may shift viewing from a mobile device operating on a slow, congested cellular network to a broadband WiFi network or a hard-wired desktop computer. With control of video resolution, the encoder output can be changed during operation to accommodate the network speed or to match the display resolution, all without interrupting the video program stream.
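As a hedged sketch of the decision an application might make when switching resolutions – the ladder rungs and bandwidth thresholds below are invented for illustration, not an SDK specification:

```python
# Illustrative resolution ladder: (minimum sustained bps, width, height).
# The rungs and thresholds are invented, not taken from any SDK.
LADDER = [
    (6_000_000, 1920, 1080),
    (3_000_000, 1280, 720),
    (1_200_000,  854, 480),
    (0,          640, 360),
]

def pick_resolution(available_bps):
    """Choose the largest resolution the measured bandwidth can sustain."""
    for min_bps, w, h in LADDER:
        if available_bps >= min_bps:
            return (w, h)

# A viewer moving from a congested cellular network to home broadband:
print(pick_resolution(900_000))     # (640, 360)
print(pick_resolution(20_000_000))  # (1920, 1080)
```

With on-the-fly resolution control, the encoder can apply the new choice mid-stream rather than being torn down and restarted.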

On-the-fly control of HEVC SAO and De-blocking Filter

HEVC presents additional opportunities to enhance “on the fly” control of the encoder and the Vanguard by Beamr V.265 encoder leads the market with the capability to turn on or off SAO and De-blocking filters to adjust quality and performance in real-time.

On-the-fly control of HEVC multithreading

V.265 is recognized for having superior multithreading capability.  The V.265 codec SDK provides access to add or remove encoding execution threads dynamically. This is an important feature for environments with a variable number of tasks running concurrently such as encoding functionality that is operating alongside a content adaptive optimization process, or the ABR packaging step.

Beamr’s implementation of on-the-fly controls in our V.264 Codec SDK and V.265 Codec SDK demonstrates the robust design and scalable performance of the Vanguard by Beamr encoder software.

For more information on Vanguard by Beamr Codec SDKs, please visit the V.264 and V.265 pages. Or visit http://beamr.com for more on the company and our technology.

Dolby Vision and HEVC, an Introduction

Note that some material from this post appeared originally in the article, “Integrating HEVC Video Compression with a High Dynamic Range Video Pipeline,” by Raul Diaz, Sam Blinstein, and Sheng Qu, SMPTE Motion Imaging Journal, 125 (1): 14-21, January/February 2016.

You can download the original paper here.

Advances in video capture and display are moving the state of the art beyond higher resolution to include high dynamic range or HDR.  Improvements in video resolution are now informed by advances in the area of color gamut and dynamic range.  New displays shipping today are capable of reproducing a much wider range of colors and brightness levels than can be represented by the video content being produced for streaming, download, digital broadcast, and even Blu-ray.  

By combining technical advances such as higher spatial resolution, higher temporal resolution, and higher dynamic range, distributors looking for differentiated offerings will be able to provide an improved video experience with the workflow and technology available today. Content providers are taking notice of these advances, and the newest video compression standard, HEVC, enables them to compress high-resolution digital video more efficiently for broadcast, mobile, and Internet delivery – enabling exciting experiences such as HDR.

Higher frame rates are now widely supported in modern video compression and delivery standards, yet content producers and distributors largely use workflows and grading protocols that target the dynamic range standards set by technology from the early and mid 20th century. With modern cinema distribution moving to digital, and given the wholesale replacement of cathode ray tube displays by flat panels, the technology is now available to display and deliver a much wider dynamic and color range viewing experience. As a result, an opportunity exists to demonstrate high resolution, high frame rate, and high dynamic range in a single viewing environment.

Is your codec HDR ready?

The question, then, is whether the codec you are using will be able to support HDR. Native HDR support can be found in HEVC, where extensions and new features were added to the standard in July 2014, many of which specifically address high dynamic range. Color profiles previously limited to sample sizes of 8 and 10 bits (named Main and Main10) were expanded. Chroma subsampling had also been limited to 4:2:0, restricting chroma resolution relative to luminance by a factor of 4 to 1; the new HEVC range extensions support bit depths of 12, 14, and even 16 bits, along with less aggressive chroma subsampling of 2:1 (4:2:2) and 1:1 (4:4:4).
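These profile changes translate directly into raw signal size. A quick sketch of the bits-per-pixel arithmetic:

```python
# Raw bits per pixel = (luma sample + chroma samples per pixel) * bit depth.
# In 4:2:0, each of the two chroma planes is quarter resolution (0.25 + 0.25);
# in 4:2:2, half resolution each; in 4:4:4, full resolution each.
CHROMA_SAMPLES_PER_PIXEL = {"4:2:0": 0.5, "4:2:2": 1.0, "4:4:4": 2.0}

def bits_per_pixel(subsampling, bit_depth):
    return (1 + CHROMA_SAMPLES_PER_PIXEL[subsampling]) * bit_depth

print(bits_per_pixel("4:2:0", 8))   # 12.0 (Main)
print(bits_per_pixel("4:2:0", 10))  # 15.0 (Main10)
print(bits_per_pixel("4:4:4", 12))  # 36.0 (range extensions)
```

Moving from 8-bit 4:2:0 to 12-bit 4:4:4 triples the raw payload per pixel, which is why efficient compression matters even more for HDR.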

SEI messages have also been added to accommodate signal conversions from one color space to another and to facilitate conversions across varying dynamic ranges. With these tools, an end-to-end system for any arbitrary dynamic range and color gamut can be built, and transformations between disparate dynamic range systems implemented.

However, a challenging element is the need to address the widely varying dynamic range of display devices, including legacy systems. Various organizations and companies have been working to develop a solution that can support the future while providing playback compatibility with legacy devices. One of them is Dolby Laboratories, which has developed a backwards-compatible approach to HDR.

Dolby Vision

One example of this framework is Dolby Vision, the result of nearly a decade of research and development in dynamic range and color gamut scalability. Dolby Vision HDR exists in two flavors: dual layer and single layer.

The original architecture was a dual layer HDR schema, designed as a combination of a base layer containing video data constrained to current specifications, with an optional enhancement layer carrying a supplementary signal along with the Dolby Vision metadata.  This architecture implements a dual layer HDR workflow where the secondary data is an extension of a traditional signal format that provides backwards compatibility with legacy SDR devices (TVs, Set-top-boxes).  But it has a disadvantage of requiring two decoders for the client device to render video.

Single layer Dolby Vision HDR was introduced later as an alternative to the original approach, and to address competitive single layer technologies. It uses similar metadata, but carries only one HDR video stream multiplexed with that metadata. With this approach, Dolby loses compatibility with legacy devices, but the upside is that it is highly cost-effective for new deployments, as some consumer playback devices and TVs can be upgraded to support single layer Dolby Vision HDR after they were originally sold.

When HDR and wide color gamut are supported on a given device, the two layers are simultaneously decoded and combined using metadata coefficients to present a highly engaging and compelling HDR viewing experience. On legacy equipment, the enhancement layer and metadata are ignored (or not transmitted) and the traditional viewing experience is unaffected. In this way, the dual layer HDR system offers multiple touch-points in the video pipeline to transition without the need for strict synchronization across multiple technologies. In contrast, single layer HDR requires a hardware and software upgrade to the display device, which is not always possible or easily achieved.

As HDR-capable reference monitors become more cost-effective and available, creative directors will be in a better position to review and master content in the higher range in which the video was captured. Such a shift will preserve a much wider range of luminance and color, and invert the mastering stage bottleneck of grading to the lowest common denominator.

Naturally, a dual layer HDR architecture adds complexity to the distribution of video content, since both the encoding and decoding stages must handle the secondary layer Dolby requires for HDR. Also, representing the color volume of traditional SDR signals in the new HDR framework requires more than the traditional 8-bit signal.

By using a modular workflow in which augmented encoding and decoding systems can be integrated separately, introducing auxiliary metadata paths to support higher bit-depth requirements of a dual layer HDR system can leverage new video compression standards while simultaneously offering backwards compatibility with legacy equipment and signals.

From experience in production environments, offline encoded 4K HEVC content requires a bitrate between 10 and 15 Mbps to generate quality that is in line with viewer expectations. AVC has multiple inherent limitations relative to HEVC. These limitations make it particularly difficult for AVC to achieve acceptable quality levels at bitrates that can be streamed and delivered effectively to most viewers, particularly at 4K content bitrates.

Dolby Vision adds approximately 20% to 30% additional bitrate load for the higher dynamic range data, making the challenges even more severe for AVC. For 4K content combined with high dynamic range information, the HEVC video compression standard presents the optimum solution to generate acceptable video quality at the bitrates that are needed.

For traditional HD content, AVC video compression has been used successfully for many years for broadcast, the Internet, and mobile video delivery. By using HEVC video compression, this HD content can be compressed at least 30% more efficiently than with AVC video compression.  For high dynamic range HD content encoded with a dual layer HDR system, HEVC video compression can generate a bitrate that is equal to or less than the bitrate needed for a non-HDR HD bitstream encoded using AVC.

Consequently, any video delivery system that can deliver non-HDR HD content today with AVC can also distribute HDR HD content using HEVC video compression without altering the delivery infrastructure.
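A quick worked example shows how these percentages combine. The 6 Mbps AVC starting point is an illustrative assumption, not a figure from the text:

```python
# Illustrative arithmetic: can HDR HD over HEVC fit in today's AVC HD budget?
avc_hd = 6.0                   # Mbps, non-HDR HD with AVC (assumed figure)
hevc_hd = avc_hd * (1 - 0.30)  # HEVC: at least 30% more efficient than AVC
hevc_hd_hdr = hevc_hd * 1.25   # Dolby Vision adds ~20-30% (25% used here)

print(f"HEVC HD:       {hevc_hd:.2f} Mbps")      # 4.20 Mbps
print(f"HEVC HD + HDR: {hevc_hd_hdr:.2f} Mbps")  # 5.25 Mbps
assert hevc_hd_hdr <= avc_hd   # still within the original AVC HD budget
```

Even after the HDR overhead, the HEVC stream stays under the bandwidth of the original AVC stream, which is the crux of the compatibility argument above.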

HEVC offers two primary advantages for HDR content delivery:

  1. An effective and viable method to deliver 4K content,
  2. Bandwidth compatibility with existing video delivery infrastructure to deliver HDR HD content.

If you have more questions about HEVC and encoding for next-generation high dynamic range solutions such as Dolby Vision, email info@beamr.com to learn how Beamr supports some of the largest streaming distributors in the world to deliver high-quality 4K UHD streams with Dolby Vision.

Using Beamr’s V.265 HEVC Encoder to Generate MPEG-DASH Compliant Streams

Recent developments in video encoding and streaming technology have come together to supply two major tools to optimize the delivery of synchronized video streams across multiple devices.

The first development is the next generation video coding standard HEVC, which offers significant compression efficiency gains over AVC. And the second is MPEG-DASH, which gives key advantages over HLS, in managing adaptive bitrate streaming of synchronized resolutions and profiles across varying network bandwidths. The combination of HEVC and MPEG-DASH supports higher quality video delivery over limited bandwidth networks.

Apple’s HLS ABR standard is in broad use today, but MPEG-DASH is not that new, having been standardized before HEVC and already applied in the distribution of AVC (H.264) content. MPEG-DASH is codec- and media-format agnostic, and the magic of MPEG-DASH is that it splits content into a collection of file segments, each containing a short section of the content. The MPEG-DASH standard defines guidelines for implementing interoperable adaptive streaming services, and it describes specific media formats for use with the ISO Base Media File Format (e.g., MP4) or MPEG-2 Transport Stream containers, making the integration of HEVC into an MPEG-DASH workflow possible within existing standards.

MPEG-DASH targets OTT delivery and CDNs but is also finding a home in broadcast and MSO/MVPD environments as a replacement for MPEG-2 TS-based workflows. Through the exhaustive descriptions available in the MPD, MPEG-DASH clients can determine which media segments best fit their user’s preferences, device capability, and network conditions, guaranteeing a high-quality viewing experience and support for next-generation video services.
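To make that concrete, here is a minimal sketch of the selection logic a DASH player might implement. The representation list and thresholds are invented for illustration, not taken from any real MPD:

```python
# The MPD advertises each Representation's bandwidth and resolution; the
# client picks the best fit for its network and display. Values illustrative.
representations = [
    {"id": "1080p", "bandwidth": 6_000_000, "height": 1080},
    {"id": "720p",  "bandwidth": 3_000_000, "height": 720},
    {"id": "480p",  "bandwidth": 1_200_000, "height": 480},
]

def select_representation(measured_bps, display_height):
    """Highest-bandwidth representation that fits both network and display."""
    candidates = [r for r in representations
                  if r["bandwidth"] <= measured_bps
                  and r["height"] <= display_height]
    if not candidates:  # fall back to the lowest rung rather than stall
        return min(representations, key=lambda r: r["bandwidth"])
    return max(candidates, key=lambda r: r["bandwidth"])

print(select_representation(4_000_000, 1080)["id"])  # 720p
print(select_representation(500_000, 720)["id"])     # 480p (fallback)
```

Because the MPD carries this metadata exhaustively, the client can make these decisions segment by segment without any server-side logic.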

Early in the development of HEVC, Beamr realized the need for true adaptive bitrate (ABR) Multistreaming support in HEVC as a tool for content preparation for multistreaming services. In Version 3.0 of our HEVC encoder SDK, V.265, we introduced several API extensions supporting multistream encoding. This architecture allows for the encoding of a single master source video into multiple, GOP-aligned streams of various resolutions and bitrates with a single encoder instance. Moreover, newly exposed encoder input settings allow for the specification of individual settings and flags for each of the streams.

Supporting multiple streams of varying resolutions, bitrates, and settings from a single source in a single encoder instance, which guarantees GOP alignment and offers computational savings across shared processes, is critical for reliable ABR encoding/transcoding performance. Beamr’s V.265 encoding SDK offers service providers the opportunity to combine the advancements of HEVC coding with the versatility of MPEG-DASH.

This functionality offers two significant advantages for developing a multistreaming workflow. First, the architecture guarantees that the multiple streams generated by the encoder are 100% GOP-aligned, an essential requirement for any multistreaming workflow. Second, it simplifies the encoding process to a single input source and encoder instance, reducing command-and-control and resource management.
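A small sketch illustrates why GOP alignment is the essential requirement: a switch between renditions is only seamless at a segment boundary that starts on a keyframe in every stream. The frame counts and GOP length below are illustrative:

```python
# Seamless ABR switching requires every segment boundary to land on a
# keyframe (IDR) in every rendition. Frame counts here are illustrative.
def keyframe_positions(total_frames, gop_length):
    return set(range(0, total_frames, gop_length))

# Three GOP-aligned renditions: same keyframe cadence in each stream.
streams = {name: keyframe_positions(600, 60)
           for name in ("1080p", "720p", "480p")}

# Segment boundaries every 2 seconds at 30 fps = every 60 frames.
boundaries = set(range(0, 600, 60))
aligned = all(boundaries <= kf for kf in streams.values())
print("GOP-aligned:", aligned)  # True: the player can switch at any boundary
```

Encoding all renditions in a single encoder instance, as V.265 Multistreaming does, guarantees this alignment by construction instead of relying on separate encoders happening to agree.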

Beyond the performance savings available from multistreaming as a result of shared computing resources, our implementation yields optimally synchronized streams that are nearly impossible to generate with separate encoders. Beamr’s Multistreaming capability positions V.265 as highly unique – a contributing factor in why V.265 is in use by leading OTT service providers who use our solution as the basis for encoding their 4K HDR and 1080p ABR profiles.