The Video Codec Race to 2025: How AV1 is Driving New Possibilities

With numerous advantages, AV1 is now supported on about 60% of devices and in all major web browsers. To accelerate its adoption, Beamr has introduced an easy, automated upgrade to the codec at the forefront of today’s video technology.

Four years ago we explored the different video codecs, analyzed their strengths and weaknesses, and took a look at current and predicted market share. While it is gratifying to see that many of our predictions were fairly accurate, this is accompanied by some disappointment: while AV1’s strengths are well known in the industry, a significant change in the adoption of new codecs has yet to materialize.

The bottom line of the 2020 post was: “Only time will tell which will have the highest market share in 5 years’ time, but one easy assessment is that with AVC current market share estimated at around 70%, this one is not going to disappear anytime soon. AV1 is definitely gaining momentum, and with the giants’ backing we expect to see it used a fair bit in online streaming.”

Indeed, we are living in a multi-codec reality: AVC still accounts for by far the largest share of video content, but adoption of AV1 is starting to increase, with large players such as Netflix and YouTube incorporating it into their workflows, and many others using it for specific high-value use cases.

Thus, we are faced with a mixture of the still-dominant AVC, HEVC (serving primarily UHD and HDR use cases), AV1, and additional codecs such as VP9 and VVC, which are used in quite small amounts.

The Untapped Potential of AV1

So while AV1 adoption is increasing, there is still significant untapped potential. One cause of the slower-than-hoped rollout of AV1 is the obstacle facing adoption of any new standard: a critical mass of hardware decoding support on edge devices.

While coverage for AVC and HEVC is very extensive, for AV1 this has only recently become the case, with support on an estimated 60% of devices and in all major web browsers, complementing the efficient software decoding offered by dav1d.

Another obstacle AV1 faces involves the practicalities of deployment. While there is extensive knowledge, within the industry and available online, about how best to configure AVC encoding, and which presets and encoding parameters work well for which use cases, no equivalent body of knowledge exists for AV1. Thus, deploying it requires extensive research by those who intend to use it.

Additionally, AV1 encoding is computationally complex, requiring much more processing power for software encoding. In a world that is constantly trying to cut costs and use lower-power solutions, this can pose a problem. Even at the fastest settings, software AV1 encoding is still significantly slower than AVC encoding at typical speeds. This is a strong motivator to upgrade to AV1 using hardware-accelerated solutions (learn more about Beamr’s solution to this challenge).

The upcoming codec possibilities are also a deterrent for some. With AV2 in the works, VVC finalized and gaining some traction, and various groups working on AI-based encoding solutions, there will always be players waiting for ‘the next big thing’ rather than having to switch out codecs twice.

In a world where JPEG, a 30+ year old standard, is still used in over 70% of websites and is the most popular format on the web for photographic content, it is no surprise that adoption of new video codecs is taking time.

While a multi-codec reality is probably here to stay, we can at least hope that when we revisit this topic in a blog a few years down the line, the balance between deployed codecs will lean more towards the higher-efficiency codecs, like AV1, yielding the best bitrate-quality options for the video world.

Automatically upgrade your video content to a new and improved codec

Easy & Safe Codec Modernization with Beamr using Nvidia GPUs 

Following a decade in which AVC/H.264 was the clear ruler of the video encoding world, recent years have seen many video coding options battling to conquer the video arena. For some insights on the race between modern coding standards, check out our corresponding blog post.

Today we want to share how easy it can be to upgrade your content to a new and improved codec in a fast, fully automatic process which guarantees that the visual quality of the content will not be harmed. This makes the switchover to newer encoders a smooth, easy, and low-cost process which can help accelerate the adoption of new standards such as HEVC and AV1. When this transformation is done by combining Beamr’s technology with the Nvidia NVENC encoder, using Nvidia’s recently released APIs, it becomes a particularly cutting-edge solution, enjoying the benefits of the leading solution in hardware AV1 encoding.

The benefit of switching to more modern codecs lies, of course, in the higher compression efficiency they offer. While the extent of improvement depends heavily on the actual content, bitrates, and encoders used, HEVC is considered to offer gains of 30%-50% over AVC, meaning that for the same quality you can spend up to 50% fewer bits. For AV1 this gain is generally a bit higher. As more and more on-device support is added for these newer codecs, the advantage of utilizing them to reduce both storage and bandwidth is clear.
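As a back-of-the-envelope illustration of these figures, here is a small sketch; the 30%-50% range is the one quoted above, while the 5000 kbps sample bitrate is purely hypothetical:

```python
# Back-of-the-envelope bitrate arithmetic for a codec upgrade.
# The gain range (30%-50% for HEVC over AVC) is the figure quoted above;
# the 5000 kbps sample bitrate is illustrative only.

def target_bitrate_kbps(source_kbps: float, gain: float) -> float:
    """Bitrate expected to match the source quality after the upgrade.

    gain is the fraction of bits saved, e.g. 0.4 for a 40% reduction.
    """
    if not 0.0 <= gain < 1.0:
        raise ValueError("gain must be a fraction in [0, 1)")
    return source_kbps * (1.0 - gain)

# A 5000 kbps AVC stream, assuming HEVC saves 30%-50% of the bits:
print(target_bitrate_kbps(5000, 0.3))  # 3500.0
print(target_bitrate_kbps(5000, 0.5))  # 2500.0
```

The same arithmetic scales directly to storage and CDN egress, which is why even the lower end of the range is attractive at library scale.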

Generally speaking, performing such codec modernization involves some non-trivial steps. 

First, you need to get access to the modern encoder you want to use, and know enough about it in order to configure the encoder correctly for your needs. Then you can proceed to encoding using one of the following approaches.

The first approach is to perform bit-rate-driven encoding. One possibility is to use conservative bitrates, in which case the potential reduction in size will not be achieved. Another possibility is to set target bitrates that reflect the expected savings, in which case there is a risk of losing quality. For example, in an experimental test of files converted from their AVC source to HEVC, we found that on average a bitrate reduction of 50% could be obtained when using the Beamr CABR codec modernization approach. However, when the same files were all brute-force encoded to HEVC at 50% reduced bitrate, using the same encoder and configuration, the quality took a hit for some of the files.

 

This example shows the full AVC source frame on top, with the transcodes to HEVC below it. Note the distortion in the blind HEVC encode, shown on the left, compared to the true-to-source video transformed with CABR on the right.

The second approach is to perform the transcode using a quality-driven encode, for instance using the constant QP (Quantization Parameter) or CRF (Constant Rate Factor) encoding modes with conservative values, which will in all likelihood preserve the quality. However, in this case you are likely to unnecessarily “blow up” some of your files to much higher bitrates. For example, for the UGC content shown below, transcoding to HEVC using a software encoder with CRF set to 21 almost doubled the file size.
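The size impact of such a blow-up is easy to quantify from average bitrate and duration. A quick sketch, with illustrative numbers rather than figures from the test above:

```python
# Rough file-size estimate from average bitrate and duration, showing how a
# conservative quality-driven encode can "blow up" a file.
# The clip duration and bitrates are hypothetical, not measured results.

def size_mb(avg_bitrate_kbps: float, duration_s: float) -> float:
    """Approximate file size in megabytes (using 1 MB = 1000 kB)."""
    return avg_bitrate_kbps * duration_s / 8 / 1000

source = size_mb(2500, 600)    # a 10-minute source at 2500 kbps
blown_up = size_mb(5000, 600)  # same clip if the transcode doubles the bitrate
print(source, blown_up)        # 187.5 375.0
```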

Yet another approach is a trial-and-error encode process for each file, or even each scene, manually verifying that a good target encoding setup was selected which minimizes the bitrate while preserving the quality. This is, of course, an expensive and cumbersome process, and entirely unscalable.
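For illustration, the per-file search loop described above could be automated along these lines; `encode_at` and `quality_of` are hypothetical stand-ins for a real encoder invocation and a real perceptual quality metric, not Beamr APIs:

```python
# A sketch of the trial-and-error flow: walk CRF values from low to high
# (higher CRF = lower bitrate) and keep the last one that still meets the
# quality bar. encode_at() and quality_of() are hypothetical stand-ins for
# a real encoder call and a real perceptual quality measure.

def pick_crf(encode_at, quality_of, min_quality, crf_range=range(18, 36)):
    """Return the highest CRF whose encode still meets min_quality, or None."""
    best = None
    for crf in crf_range:
        encoded = encode_at(crf)
        if quality_of(encoded) >= min_quality:
            best = crf   # still acceptable; keep pushing the bitrate down
        else:
            break        # quality fell below the bar; stop searching
    return best

# Toy stand-ins where quality falls linearly with CRF:
print(pick_crf(lambda c: c, lambda e: 100 - 2 * e, min_quality=50))  # 25
```

Even automated, this still costs one trial encode per candidate CRF for every file or scene, which is exactly the expense the CABR approach avoids.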

By using Beamr CABR, this is all done for you under the hood in a fully automatic process that makes optimized choices for each and every frame of your video, selecting the lowest bitrate that still perfectly preserves the source visual quality. When performed using the Nvidia NVENC SDK with interfaces to Beamr’s CABR technology, this transformation is significantly accelerated and becomes even more cost effective.

The codec modernization flow for AVC to HEVC conversion is demonstrated in the above high-level block diagram. As shown, the CABR controller interacts with NVENC, Nvidia’s hardware video encoder, using the new APIs Nvidia has created for this purpose. At the heart of the CABR controller lies Beamr’s quality measure, BQM, a unique, patented, Emmy-award-winning perceptual video quality measure. BQM has now been adapted and ported to the Nvidia GPU platform, resulting in significant acceleration of the optimization process.

The Beamr optimization technology can be used not only for codec modernization, but also to reduce the bitrate of an input video, or of a target encode, while guaranteeing that perceptual quality is preserved, thus creating encodes with the same perceptual quality at lower bitrates or file sizes. In every usage of the Beamr CABR solution, size or bitrate is reduced as much as possible while each frame of the optimized encode is guaranteed to be perceptually identical to the reference. The codec modernization use case is particularly exciting, as it puts the ability to migrate to more efficient and sophisticated codecs, previously the domain of video experts, into the hands of any user with video content.

For more information please contact us at info@beamr.com 

The Future of 3 Character Codecs [podcast]

Anyone familiar with the streaming video industry knows that we love our acronyms. You would be hard-pressed to have a conversation about the online video industry without bringing one up…

In today’s episode, The Video Insiders focus on the future of three-character codecs: AVC, VP9, and VVC.

But before we can look at the future, we have to take a moment to revisit the past.  

The year 2018 marks the 15-year anniversary of AVC, and in this episode we revisit the process and lifecycle from standardization to adoption, and what that means for the future of these codecs.

Tune in to Episode 03: The Future of 3 Character Codecs, or watch the video below.

https://youtu.be/TmDFpmtnbU8

Want to join the conversation?

Reach out to TheVideoInsiders@beamr.com.

TRANSCRIPTION (lightly edited for improved readability)

Mark Donnigan: 00:49 Well, Hi, Dror!

Dror Gill: 00:50 Is this really episode three?

Mark Donnigan: 00:52 It is, it is episode three. So, today we have a really exciting discussion as we consider the future of codecs named with three characters.

Dror Gill: 01:03 Three character codecs, okay, let’s see.

Mark Donnigan: 01:06 Three character codecs.

Dror Gill: 01:09 I can think of …

Mark Donnigan: 01:09 How many can you name?

Dror Gill: 01:10 Let’s see, that’s today’s trivia question. I can think of AVC, VP9, AV1, and VVC?

Mark Donnigan: 01:21 Well, you just named three that I was thinking about and we’re gonna discuss today! We’ve already covered AV1. Yeah, yeah, you answered correctly, but we haven’t really considered where AVC, VP9, and VVC fit into the codec stew. So when I think about AVC, I’m almost tempted to just skip it because isn’t this codec standard old news? I mean, c’mon. The entire video infrastructure of the internet is enabled by AVC, so what is there to discuss?

Dror Gill: 01:57 Yeah, you’re right. It’s like the default. But in fact, the interesting thing is that today, in 2018, it’s twenty years since the project that became AVC began. ITU’s Video Coding Experts Group issued the call for proposals for a project that at the time was called H.26L, and the target was to double the coding efficiency, which effectively means halving the bit rate necessary for a given level of fidelity. That’s why it was called H.26L: it was supposed to be low bit rate.

Mark Donnigan: 02:33 Ah! That’s an interesting trivia question.

Dror Gill: 02:35 That’s where the L came from!

Mark Donnigan: 02:36 I wonder how many of our listeners knew that? That’s kind of cool. H.26L.

Dror Gill: 02:42 But they didn’t go it alone. In 2001, for the first time, they joined forces with ISO MPEG, that’s the same Moving Picture Experts Group we discussed in the first episode.

Mark Donnigan: 02:56 That’s right.

Dror Gill: 02:57 And they came together, they joined forces, and they created the JVT, the Joint Video Team, and I think it’s a great example of collaboration. ITU is a standards body dealing with video communication standards, and ISO MPEG is a standards body dealing with video entertainment standards. So finally they understood that there’s no point in developing separate video standards for these two different types of applications, and they got all the experts together in the JVT, and this group developed what was the best video compression standard at the time. It was launched May 30, 2003.

Mark Donnigan: 03:35 Wow.

Dror Gill: 03:36 There was one drawback with this collaboration, in that the video standard was known by two names. There was the ITU name, which is H.264, and then there’s the ISO MPEG name, which is AVC, so this created some confusion at the start. I think by now most of our listeners know that H.264 and AVC are one and the same.

Mark Donnigan: 03:57 Yeah, definitely. So, AVC was developed 15 years ago and it’s still around today.

Dror Gill: 04:02 Yeah, yeah. I mean, that’s really impressive, and it’s not only around, it’s the most popular video compression standard in the world today. AVC is used to deliver video over the internet to computers, televisions, and mobile devices, over cable, satellite, and broadcast, and even on Blu-ray discs. This just shows you how long it takes from standardization to adoption, right? 15 years until we got the mass-market dominance of H.264/AVC that we have today.

Dror Gill: 04:31 And the reason it takes so long, as we discussed in our first episode, is that first you need to develop the standard. Then you need to develop the chips that support the standard, then the devices that incorporate the chips. Even when initial implementations of the codec are released, they are still not as efficient as they can be, and it takes codec developers more time to refine them and improve the performance and the quality. You need to develop the tools; all of that takes time.

Mark Donnigan: 04:59 It does. Yeah, I have a background in consumer electronics, and because of that I know with certainty that AVC is gonna be with us for a while, and I’ll explain why. It’s really simple. Decoding of H.264 is fully supported in every chipset on the market. I mean literally every chipset. There is not a device that supports video which does not also support AVC today. It just doesn’t exist; you can’t find it anywhere.

Mark Donnigan: 05:26 And then, when you look at encoding technologies for AVC, H.264, they have advanced to the point where you can really achieve state of the art for very low cost. There’s just so much market momentum, and the encode and decode ecosystems are massive. When you think about entertainment applications and consumer electronics, for a lot of us that’s the primary market we play in.

Mark Donnigan: 05:51 But if you consider the surveillance and industrial markets, which are absolutely massive, all of these security cameras you see literally everywhere, drone cameras, they all have AVC encoders in them. Bottom line: AVC isn’t going anywhere fast.

Dror Gill: 06:09 You’re right, I totally agree with that. It’s dominant, and it’s here to stay. The problem, as we’ve talked about, is video delivery over the internet. The big problem is the bandwidth bottleneck. So much video is being delivered over the internet, and the demand for quality is growing. People want higher resolution, they want HDR, which is high dynamic range, they want higher frame rates. All this means you need more and more bit rate to represent the video. The bit rate efficiency required today is beyond what standard AVC encoding can deliver, and that’s where external technologies such as content-adaptive encoding and perceptual optimization can really help you push AVC to its limits.

Mark Donnigan: 06:54 Yeah. And Dror, I know you’re one of the inventors of a perceptual optimization technique based on a really unique quality measure, which I’ve heard some in the industry believe could even extend the life of AVC from a bit rate efficiency perspective. Tell us about what you developed and what you worked on.

Dror Gill: 07:13 Yeah, that’s right. I did have some part in this. We developed a quality measure and a whole application around it, and this is a solution that can reduce the bit rate of AVC by 30%, sometimes even 40%. It doesn’t get us exactly to where HEVC starts; 50% is pretty difficult, and not for every content type. But for content distributors that recognize AVC will still be part of their codec mix for at least five years, I think what we’ve been able to do can really be helpful, and a welcome relief to this bandwidth bottleneck issue.

Mark Donnigan: 07:52 It sounds like we’re in agreement that, at least over the medium-term horizon, AVC is gonna stay with us.

Dror Gill: 08:01 Yeah, yeah. I definitely think so. For some applications and services and certain regions of the world where the device penetration of the latest, high end models is not as high as in other parts, AVC will be the primary codec for some time to come.

Dror Gill: 08:21 Okay, that’s AVC. Now, let’s talk about VP9.

Mark Donnigan: 08:24 Yes, let’s do that.

Dror Gill: 08:25 It’s interesting to me. Essentially, it’s mostly a YouTube codec. It’s not a bad codec, it has some efficiency advantages over AVC, but outside of Google you don’t see any large-scale deployments. By the way, if you look at the Wikipedia section on where VP9 is used, it says VP9 is used mostly by YouTube, with some use by Netflix, and it’s used by Wikipedia itself.

Mark Donnigan: 08:50 VP9 is supported fairly well in devices. Though it’s obviously hard to say exactly what the penetration is, I think there is hardware decode support for VP9. Certainly it’s ubiquitous on Android, and it’s in many of the UHD TV chipsets as well. It’s not always enabled, but again, from my background on the hardware side, I know that many of those SoCs do have a VP9 decoder built into them.

Mark Donnigan: 09:23 I guess the question in my mind is: it’s talked about, and certainly Google is both a notable developer and user, but why hasn’t it been adopted more widely?

Dror Gill: 09:33 Well, I think there are several issues here. One of them is compression efficiency. VP9 brings maybe a 20-30% improvement in compression efficiency over AVC, but it’s not 50%, so you’re not doubling your compression efficiency. If you want to replace a codec, that’s really a big deal. That’s really a huge investment. You need to invest in encoding infrastructure and new players. You need to do compatibility testing. You need to make sure that your packaging and your DRM work correctly, and all of that.

Dror Gill: 10:04 You really want a huge benefit to offset this investment. I think people are really looking for that 50% improvement, to double the efficiency, which is what you get with HEVC but not quite with VP9. I think the second point is that VP9, even though it’s an open-source codec, is developed, and its standard maintained, by Google. And some industry players are kind of afraid of the dominance of Google. Google has taken over the online advertising market.

Mark Donnigan: 10:32 Yes, that’s a good point.

Dror Gill: 10:34 You know, and search, and mobile operating systems; except for Apple, it’s all Android. So those industry players might be thinking: I don’t want to depend on Google for my video compression format. I think this is especially true for traditional broadcasters: cable companies, satellite companies, TV channels that broadcast over the air. These companies traditionally like to go with established international standards, compression technologies that are standardized and have the seal of approval of ITU and ISO.

Dror Gill: 11:05 They’re typically following that traditional codec development path: MPEG-2, then AVC, then HEVC. What’s coming next?

Mark Donnigan: 11:16 Well, our next three letter codec is VVC. Tell us about VVC, Dror.

Dror Gill: 11:21 Yeah, yeah, VVC. I think this is another great example of collaboration between ITU and ISO. Again, they formed a joint video experts team. This time it’s called JVET.

Dror Gill: 12:10 So, JVET has launched a project to develop a new video coding standard. And you know, we had AVC, which was Advanced Video Coding. Then we had HEVC, which is High Efficiency Video Coding. So they thought: what would be the next generation? It’s already advanced, it’s already high efficiency. So the next one they called VVC, which is Versatile Video Coding. The objective of VVC is obviously to provide a significant improvement in compression efficiency over the existing HEVC standard. Development has already started. The JVET group is meeting every few months in some exotic place in the world, and this process will continue. They plan to complete the standard before the end of 2020, so essentially in the next two years.

Dror Gill: 13:01 Today, even though VVC is in early development and they haven’t implemented all the tools, they already report 30% better compression efficiency than HEVC. So we have high hopes that we’ll be able to fight the video tsunami that is coming upon us with a much-improved standard video codec, which is VVC. I mean, it’s improved at least on the technical side, and I understand that they also want to improve the process, right?

Mark Donnigan: 13:29 That’s right, that’s right. Well, technical capabilities are certainly important, and we’re tracking VVC, of course. 30% better efficiency this early in the game is promising. I wonder if JVET will bring any learnings from the famous HEVC royalty debacles to VVC, because I think what’s in everybody’s mind is: okay, great, this can be much more efficient, technically better, but if we have to go round and round on royalties again, it’s just gonna kill it. So, what do you think?

Dror Gill: 14:02 Yeah, that’s right. I think it’s absolutely true, and many people in the industry have realized that you can’t just develop a video standard and then handle the patent and royalty issues later. Luckily some companies have come together and formed an industry group called the Media Coding Industry Forum, or MC-IF. They held their first meeting a few weeks ago in Macau, during the MPEG meeting. Their purpose statement, let me quote it from their website, and I’ll give you my interpretation of it. They say the Media Coding Industry Forum (MC-IF) is an open industry forum with a purpose of furthering the adoption of standards, initially focusing on VVC, by establishing them as well-accepted and widely used standards for the benefit of consumers and the industry.

Dror Gill: 14:47 My interpretation is that the group was formed so that companies with an interest in this next-generation video codec could come together and attempt to influence the licensing policy of VVC, trying to agree on a reasonable patent licensing policy in advance, to prevent history from repeating itself. We don’t want that whole Hollywood story, the tragedy that took a few years until it reached its happy ending. So what are they talking about? This is very interesting. They’re talking about having a modular structure for the codec, where the tools of the codec, the features, can be plugged in and out very easily.

Dror Gill: 15:23 So, if some company won’t agree to reasonable licensing terms, this group can just decide not to support that feature, and it will be very easily removed from the standard, or at least from the way companies implement the standard.

Mark Donnigan: 15:37 That’s an interesting approach. I wonder how technically feasible it is. I think we’ll get into that in some other episodes.

Dror Gill: 15:46 Yeah. That may have some effect on performance.

Mark Donnigan: 15:49 Exactly. And again, are we back in the situation that the Alliance for Open Media is in with AV1, where part of the issue of the slow performance is trying to work around patents? At the end of the day you end up with a solution that is technically hobbled.

Dror Gill: 16:10 Yeah. I hope it doesn’t go there.

Mark Donnigan: 16:13 Yeah, I hope we’re not there. I think you heard this too, hasn’t Apple joined the consortium recently?

Dror Gill: 16:21 Yeah, yeah, they did. They joined silently as they always do. Silently means that one day somebody discovers their logo… They don’t make any announcement or anything. You just see a logo on the website, and then oh, okay.

Mark Donnigan: 16:34 Apple is in the building.

Mark Donnigan: 16:41 You know, maybe it’s good to bring this discussion back to Earth and close out our three-part series by giving the listeners some pointers about how they should be thinking about the next codec they adopt. I’ve been giving this some thought as we’ve been doing these episodes. I’ll kick it off here, Dror, if you don’t mind; I’ll share some of my thoughts and you can jump in.

Mark Donnigan: 17:11 These are complex decisions, of course. I completely agree that billing this as codec wars and codec battles is not helpful at the end of the day. Maybe it makes for a catchy headline, but it’s not helpful. There are real business decisions to be made. There are technical decisions. I think there’s a good starting point for somebody who’s listening and saying, “okay, great, I now have a better understanding of the lay of the land for HEVC and AV1, I understand VP9, I understand AVC and what some of my options are to further reduce bit rate. But now, what do I do?”

Mark Donnigan: 17:54 And I think a good place to start is to just look at your customers: do they lean towards early adopters? Are you in a strong economic environment, which is to say, quite frankly, do most of your customers carry around the latest devices, like an iPhone X or a Galaxy 9? If your customers largely lean towards early adopters and they’re carrying around the latest devices, then you have an obligation to serve them with the highest quality and the best performance possible.

Dror Gill: 18:26 Right. If your customers can receive HEVC, and it’s half the bit rate, then why not deliver it to them? They get better quality, you save on delivery costs with the more efficient codec, and everybody is happy.

Mark Donnigan: 18:37 Absolutely. And again, just using pure logic: if somebody can afford a more-than-$1000 device in their pocket, probably the TV hanging on the wall is a very new, UHD-capable one. They probably have a game console in the house. The point is that you can make a pretty strong argument, and an assumption, that you can go what I like to think of as all-in on HEVC, including even standard-definition, plain SDR content.

Mark Donnigan: 19:11 So, the industry has really lost sight, to my mind, of the benefits of HEVC as they apply across the board, to all resolutions. All of the major consumer streaming services are delivering 4K using HEVC, but I’m still shocked at how many forget that the same bit rate efficiency advantages that work at 4K apply at 480p. Obviously, the absolute numbers are smaller because the file sizes are smaller, etc.

Mark Donnigan: 19:41 But the point is, 30, 40, 50% savings apply at 4K just as they do at 480p. I understand there are different applications and use cases, right? But would you agree with that?

Dror Gill: 19:55 Yeah, yeah, I surely agree with that. I mean, for 4K, HEVC is really an enabler.

Mark Donnigan: 20:00 That’s right.

Dror Gill: 20:01 With AVC, you would need like 30, 40 megabits for 4K video. Nobody can stream that to the home, but change it to 10, 15 with HEVC, and that’s reasonable. You must use HEVC for 4K, otherwise it won’t even fit the pipe. But for all other resolutions, you get the bandwidth advantage, or you can trade it off for a quality advantage and deliver higher quality to your users, or higher frame rate, or enable HDR. All of these are possibilities with HD and even SD content: give your users a better experience using HEVC while still streaming to devices they already have. So yeah, I agree, I think it’s an excellent analysis. Obviously, if you’re in an emerging market, or your consumers don’t have high-end devices, then AVC is a good solution. The same goes for network constraints; there are many places in the world where network connectivity isn’t that great, or rural areas where very large parts of the population are spread out, and in these cases bandwidth is low and you will hit a bottleneck even with HD.

Mark Donnigan: 21:05 That’s right.

Dror Gill: 21:06 That’s where perceptual optimization can help you reduce the bit rate even for AVC and stay within the constraints that you have. When your consumers upgrade their devices, and the cycle comes around in a few years so that every device has HEVC support, then obviously you upgrade your capability and support HEVC across the board.

Mark Donnigan: 21:30 Yeah, that’s a very important point, Dror: the HEVC adoption curve, in terms of silicon on devices, is in full motion. Just consider the planning life cycles. If you look at what goes into hardware, and especially into silicon, it doesn’t happen overnight. Once these technologies are in the designs, once they are in the dies, once the codec is in silicon, it doesn’t get arbitrarily turned on and off like a light switch.

Mark Donnigan: 22:04 How should somebody be looking at VP9, VVC, and AV1?

Dror Gill: 22:13 Well, VP9 is an easy one. Unless you’re Google, you’re very likely gonna skip over this codec. It’s not that VP9 isn’t a viable choice; it simply doesn’t go as far as HEVC in terms of bit rate efficiency and quality. Maybe two years back we would have considered it an option for reducing bit rate, but now, with the HEVC support that you have, there’s no point in going to VP9; you might as well go to HEVC. As for VVC, the standard is still a few years from being ratified, so we actually don’t have anything to talk about yet.

Dror Gill: 22:49 The important point to remember is that even when VVC launches, it will still be another two to three years after the standard is ratified before you have even a very basic playback ecosystem in place. So I would tell our listeners: if you’re thinking “why should I adopt HEVC when VVC is just around the corner”, well, that corner is very far away. It’s more like the corner of the Earth than the corner of the next block.

Mark Donnigan: 23:15 That’s right.

Dror Gill: 23:18 So, HEVC today, and VVC will be the next step in a few years. And then there’s AV1. You know, we talked a lot about AV1. No doubt, AV1 has support from huge companies. I mean Google, Facebook, Intel, Netflix, Microsoft. And those engineers know what they’re doing. But by now it’s quite clear that its compression efficiency is about the same as HEVC’s. Meanwhile, on the royalty side, HEVC Advance removed the royalty cost for content delivery, so the license situation is much clearer now. Add to this the fact that, at the end of the day, you’re gonna need five to ten times more compute power to encode AV1, reaching effectively the same result. Now, Google, again: maybe they have unlimited compute resources, so they will use it. They developed it.

Dror Gill: 24:13 But for the smaller content providers, all the other ones, the non-Googles of the world, and the broadcasters, with the growing support for HEVC that we expect in a few years, I think it’s obvious. They’re gonna support HEVC, and then a few years later, when VVC is ratified and supported in devices, they’re gonna move to VVC, because that codec does have the required compression efficiency improvement over HEVC.

Mark Donnigan: 24:39 Yeah, that’s an excellent summary Dror. Thank you for breaking this all down for our listeners so succinctly. I’m sure this is really gonna provide massive value. I want to thank our amazing audience because without you, the Video Insiders Podcast would just be Dror and me taking up bits on a server somewhere.

Dror Gill: 24:59 Yeah, talking to ourselves.

Mark Donnigan: 25:01 As you can tell, video is really exciting to us, and so we’re so happy that you’ve joined us to listen. And again, this has been a production of Beamr Imaging Limited. Please subscribe on iTunes, and if you would like to try out Beamr codecs in your lab or your production environment, we are giving away up to $100 of HEVC and H.264 encoding every month. That’s each and every month. Just go to https://beamr.com/free and get started immediately.

2018 Video Trends: 8K Makes a Splash

At the 2018 Consumer Electronics Show, video hardware manufacturers came out swinging on the innovation front—including 8K TVs and a host of whiz-bang UX improvements—leading to key discussions around the business and economic models around content and delivery.

On the hardware side, TV dominated at CES, with LG and Samsung battling it out over premium living room gear. LG, in addition to debuting a 65-inch rollable OLED screen, made headlines with its announcement of an 88-inch 8K prototype television. It’s backed by the new Alpha 9 intelligent processor, which provides seven times better color reproduction than existing models and can handle up to 120 frames per second for improved gaming and sports viewing.

Not to be outdone, Samsung has debuted its Q9S 8K offering (commercially available in the second half of the year), featuring an 85-inch screen with built-in artificial intelligence that uses a proprietary algorithm to continuously learn from itself to intelligently upscale the resolution of the content it displays — no matter the source of that content.

The Korean giant also took the wraps off of what it is calling “the Wall,” which, true to its name, is an enormous 146-inch display. It’s not 8K, but it’s made up of micro LEDs that it says will let consumers “customize their television sizes and shapes to suit their needs.” It also said that its newest TVs will incorporate its artificial digital assistant Bixby and a universal programming guide with AI that learns your viewing preferences.

It’s clear that manufacturers are committed to upping their games when it comes to offering better consumer experiences. And it’s not just TVs that are leading this bleeding edge of hardware development: CES has seen announcements around 4K VR headsets (HTC), video-enabled drones, cars that can utilize a brain-hardware connection to tee up video-laden interactive apps, and a host of connected home gadgets—all of which will be driving the need for a combination of reliable hardware platforms, content availability and, perhaps above all, a positive economic model for content delivery.

This year CES provided a view into the next generation of video entertainment possibilities that are in active development. But it will all be for naught if content producers and distributors don’t have reliable and scalable delivery networks for compatible video, where costs don’t spiral out of control as the network becomes more content-intensive. For instance, driving down the bitrate requirements for delivering, say, 8K, whether it’s in a pay-TV traditional operator model or on an OTT basis, will be one linchpin for this vision of the future.

We’re committed to making sure we are in the strongest position to bring our extensive codec development resources to bear on this ecosystem. HEVC, for instance, is recognized to be 40 to 50 percent more efficient for delivering video than the legacy format, AVC (H.264). With Beamr’s advanced encoding offerings, content owners can optimize their encoding for reduced buffering, faster start times, and increased bandwidth savings.

We’re also keeping an eye on the progression of the Alliance for Open Media (AOMedia)’s AV1 codec standard, which recently added both Apple and Facebook to its list of supporters. It hopes to be up to 30 percent more efficient than HEVC, though it’s very much in the development stages.

We’re excited about the announcements coming out of CES this year, and the real proof that the industry is well on its way to delivering an exponential improvement on the consumer video experience. We also look forward to helping that ecosystem mature and doing our part to make sure that innovation succeeds, for 8K in the living room and very much beyond.

Translating Opinions into Fact When it Comes to Video Quality

This post was originally featured at https://www.linkedin.com/pulse/translating-opinions-fact-when-comes-video-quality-mark-donnigan 

In this post, we attempt to de-mystify the topic of perceptual video quality, which is the foundation of Beamr’s content adaptive encoding and content adaptive optimization solutions. 

National Geographic has a hit TV franchise on its hands. It’s called Brain Games, starring Jason Silva, a talent described as “a Timothy Leary of the viral video age” by the Atlantic. Brain Games is accessible, fun and accurate. It’s a dive into brain science that relies on well-produced demonstrations of illusions and puzzles to showcase the power — and limitation — of the human brain. It’s compelling TV that illuminates how we perceive the world. (Intrigued? Watch the first minute of this clip featuring Charlie Rose, Silva, and excerpts from the show: https://youtu.be/8pkQM_BQVSo)

At Beamr, we’re passionate about the topic of perceptual quality. In fact, we are so passionate that we built an entire company based on it. Our technology leverages science’s knowledge of the human vision system to significantly reduce video delivery costs, reduce buffering, and speed up video starts without any change in the quality perceived by viewers. We’re also inspired by the show’s ability to make complex things compelling and accessible without distorting the truth. No easy feat. But let’s see if we can pull it off with a discussion of video quality measurement, which is also a dense topic.

Basics of Perceptual Video Quality

Our brains are amazing, especially in the way we process rich visual information. If a picture’s worth 1,000 words, what’s 60 frames per second in 4K HDR worth?

The answer varies based on what part of the ecosystem or business you come from, but we can all agree that it’s really impactful. And data intensive, too. But our eyeballs aren’t perfect and our brains aren’t either – as Brain Games points out. As such, it’s odd that established metrics for video compression quality in the TV business have been built on the idea that human vision is mechanically perfect.

See, video engineers have historically relied heavily on two key measures to evaluate the quality of a video encode: Peak Signal to Noise Ratio, or PSNR, and Structural Similarity, or SSIM. Both are ‘objective’ metrics. That is, we use tools to directly measure the physics of the video signal and construct mathematical algorithms from that data to create metrics. But is it possible to really quantify a beautiful landscape with a number? Let’s see about that.

PSNR and SSIM look at different physical properties of a video, but the underlying mechanics for both metrics are similar. You compress a source video, then analyze specific properties of the original and the compressed derivative, and calculate a metric from each. The more similar the two metrics are, the more we can say that the properties of each video are similar, and the more confidently we can define our manipulation of the video, i.e. our encode, as having a high or acceptable quality.

Objective Quality vs. Subjective Quality


However, it turns out that these objectively calculated metrics do not correlate well to the human visual experience. In other words, in many cases, humans cannot perceive variations that objective metrics can highlight while at the same time, objective metrics can miss artifacts a human easily perceives.

The concept that human visual processing might be less than perfect is intuitive. It’s also widely understood in the encoding community. This fact opens a path to saving money, reducing buffering and speeding-up time-to-first-frame. After all, why would you knowingly send bits that can’t be seen?

But given the complexity of the human brain, can we reliably measure opinions about picture quality to know what bits can be removed and which cannot? This is the holy grail for anyone working in the area of video encoding.

Measuring Perceptual Quality

Actually, a rigorous, scientific and peer-reviewed discipline has developed over the years to accurately measure human opinions about the picture quality on a TV. The math and science behind these methods are memorialized in an important ITU standard on the topic, ITU BT.500, originally published in 2008 and updated in 2012. (The International Telecommunication Union is the largest standards body in global telecom.) I’ll provide a quick rundown.

First, a set of clips is selected for testing. A good test has a variety of clips with diverse characteristics: talking heads, sports, news, animation, UGC – the goal is to get a wide range of videos in front of human subjects.

Then, a subject pool of sufficient size is created and screened for 20/20 vision. They are placed in a light-controlled environment with a screen or two, depending on the set-up and testing method.

Instructions for one method is below, as a tangible example.

In this experiment, you will see short video sequences on the screen that is in front of you. Each sequence will be presented twice in rapid succession: within each pair, only the second sequence is processed. At the end of each paired presentation, you should evaluate the impairment of the second sequence with respect to the first one.

You will express your judgment by using the following scale:

5 Imperceptible

4 Perceptible but not annoying

3 Slightly annoying

2 Annoying

1 Very annoying

Observe carefully the entire pair of video sequences before making your judgment.
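Once the ratings are collected, a BT.500-style analysis typically reduces them to a Mean Opinion Score (MOS) per clip, with a confidence interval. Here is a minimal sketch of that arithmetic in Python; the function name and the sample ratings are illustrative, not taken from the standard itself.

```python
# Illustrative sketch: turning raw DSIS ratings (5 = Imperceptible ...
# 1 = Very annoying) into a Mean Opinion Score with a 95% confidence
# interval, as BT.500-style subjective analyses typically do.
from math import sqrt

def mean_opinion_score(ratings):
    """Return (MOS, 95% confidence half-interval) for one test clip."""
    n = len(ratings)
    mos = sum(ratings) / n
    # Sample variance of the ratings across subjects.
    var = sum((r - mos) ** 2 for r in ratings) / (n - 1)
    ci95 = 1.96 * sqrt(var) / sqrt(n)  # normal approximation
    return mos, ci95

# Hypothetical ratings from ten screened subjects for one clip.
ratings = [5, 4, 4, 5, 3, 4, 4, 5, 4, 4]
mos, ci = mean_opinion_score(ratings)
print(f"MOS = {mos:.2f} ± {ci:.2f}")
```

A real study would also screen out inconsistent subjects and run this per clip across the whole test set.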

As you can imagine, testing like this is an expensive proposition indeed. It requires specialized facilities, trained researchers, vast amounts of time, and a budget to recruit subjects.

Thankfully, the rewards were worth the effort for teams like Beamr that have been doing this for years.

It turns out, if you run these types of subjective tests, you’ll find that there are numerous ways to remove 20 – 50% of the bits from a video signal without losing the ‘eyeball’ video quality – even when the objective metrics like PSNR and SSIM produce failing grades.

But most of the methods that have been tried are still stuck in academic institutions or research labs. This is because the complexities of upgrading or integrating the solution into the playback and distribution chain make them unusable. Have you ever had to update 20 million set-top boxes? Well if you have, you know exactly what I’m talking about.

We know the broadcast and large scale OTT industry, which is why when we developed our approach to measuring perceptual quality and applied it to reducing bitrates, we were insistent on staying 100% inside the standard of AVC H.264 and HEVC H.265.

By pioneering the use of perceptual video quality metrics, Beamr is enabling media and entertainment companies of all stripes to reduce the bits they send by up to 50%. This reduces re-buffering events by up to 50%, improves video start time by 20% or more, and reduces storage and delivery costs.

Fortunately, you now understand the basics of perceptual video quality. You also see why most of the video engineering community believes content adaptive sits at the heart of next-generation encoding technologies.

Unfortunately, when we stated above that there were “all kinds of ways” to reduce bits up to 50% without sacrificing ‘eyeball video quality’, we skipped over some very important details. Such as, how we can utilize subjective testing techniques on an entire catalog of videos at scale, and cost efficiently.

Next time: Part 2 and the Opinionated Robot

Looking for better tools to assess subjective video quality?

You definitely want to check out Beamr’s VCT which is the best software player available on the market to judge HEVC, AVC, and YUV sequences in modes that are highly useful for a video engineer or compressionist.

VCT is available for Mac and PC. And best of all, we offer a FREE evaluation to qualified users.

Learn more about VCT: http://beamr.com/h264-hevc-video-comparison-player/

 

VCT, the Secret to Confident Subjective Video Quality Testing

We can all agree that analyzing video quality is one of the biggest challenges when evaluating codecs. Companies use a combination of objective and subjective tests to validate encoder efficiency. In this post, I’ll explore why it is difficult to measure video quality with quantitative metrics alone because they fail to meet the subjective quality perception ability of the human eye.

Furthermore, we’ll look at why it’s important to equip yourself with the best resources when doing subjective testing, and how Beamr’s VCT visual comparison tool can help you with video quality testing.

But first, if you haven’t done so already, be sure to download your free trial of VCT here.

OBJECTIVE TESTING

The most common objective measurement used today is pixel-based Peak Signal to Noise Ratio (PSNR). PSNR is a popular test to use because it is easy to calculate and nearly everyone working in video is familiar with interpreting its values. But it does have limitations. Typically a higher PSNR value correlates to higher quality, while a lower PSNR value correlates to lower quality. However, since this test measures pixel-based mean-squared error over an entire frame, reducing the quality of a frame (or collection of frames) to a single number does not always parallel true subjective quality.

PSNR gives equal weight to every pixel in the frame and each frame in a sequence, ignoring many factors that can affect human perception. For example, below are two encoded images of the same frame (1). Image (a) and Image (b) have the same PSNR, which should theoretically correlate to two encoded images of the same quality. However, the difference in perceived quality is easy to see in this example, as viewers would rate Image (a) as exceptionally higher quality than Image (b).
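The equal-weight behavior is easy to demonstrate numerically. Below is a minimal pure-Python PSNR sketch for 8-bit grayscale frames stored as flat lists (a simplification of real per-plane PSNR), showing how two very different error patterns can score identically:

```python
# Minimal PSNR sketch for 8-bit grayscale frames stored as flat lists.
# Every pixel contributes equally to the MSE, which is exactly why PSNR
# can disagree with what a viewer actually sees.
from math import log10

def psnr(original, encoded, max_val=255):
    assert len(original) == len(encoded)
    mse = sum((a - b) ** 2 for a, b in zip(original, encoded)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * log10(max_val ** 2 / mse)

# One large error on a single pixel vs. a small error spread everywhere
# yields the exact same MSE (and thus PSNR), yet looks very different.
ref = [128] * 100
one_big = ref[:]; one_big[0] = 28      # a single pixel off by 100
spread = [v + 10 for v in ref]         # every pixel off by 10
print(psnr(ref, one_big), psnr(ref, spread))  # same value, ~28.1 dB
```

A human would likely notice the isolated blemish far more than the uniform shift, yet PSNR cannot tell the two apart.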

Example: 

PSNR value example of why it shouldn't be the absolute measurement for assessing video quality

Due to the inability of error-based methods like PSNR to adequately mimic human visual perception, other methods for analyzing video quality have been developed, including the Structural Similarity Index Metric (SSIM), which measures structural distortion. Unlike PSNR, SSIM addresses image degradation as measures of the perceived change in three major aspects of images: luminance, contrast, and structure. SSIM has gained popularity, but as with PSNR, it has its limitations. Studies have suggested that SSIM’s performance is equal to PSNR’s, and some have cited evidence of a systematic relationship between SSIM and Mean Squared Error (MSE) (2).
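To make the three-term structure concrete, here is a hedged, single-window sketch of the SSIM formula for two grayscale frames held as flat lists. Real SSIM is computed over local sliding windows and averaged; this global version only illustrates the luminance, contrast, and structure terms mentioned above.

```python
# Single-window SSIM sketch for two grayscale frames (flat lists of
# 8-bit values). Production SSIM slides a local window over the image
# and averages; this global version is for illustration only.
def ssim_global(x, y, max_val=255):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n                      # mean luminance
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)         # variance (contrast)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)  # structure
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2  # stabilizers
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

frame = [100, 120, 130, 90, 110, 105, 115, 125]
print(ssim_global(frame, frame))  # identical frames score 1.0
```

Note how a uniform brightness shift only lowers the luminance term, while a loss of detail shows up in the variance and covariance terms.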

While SSIM and other quantitative measures including multi-scale structural similarity (MS-SSIM) and the Sarnoff Picture Quality Rating (PQR) have made significant gains, none can truly deliver the same assurance as subjective evaluation using the human eye. It is also important to note that the two most widely used objective quality metrics mentioned above, PSNR and SSIM, were designed to evaluate static image quality. This means that both algorithms provide no meaningful information regarding motion artifacts, thereby limiting the effectiveness of these metrics with regard to video.

SUBJECTIVE TESTING

While objective methods attempt to model human perception, there is no substitute for subjective “golden-eye” tests. But we are all familiar with the drawbacks of subjective analysis, including the variance of individual quality perception and the difficulty of executing proper subjective tests in 100% controlled viewing environments so that a large number of testers can participate. Evaluating video using subjective visual tests can reveal key differences that may not get caught by objective measures alone, which is why it is important to use a combination of both objective and subjective testing methodologies.

One of the logistic difficulties of performing subjective quality comparisons is coordinating simultaneous playback of two streams. Recognizing some of the drawbacks of current subjective evaluation methods, in particular single-stream playback or awkward dual-stream review workarounds, Beamr spent years in research and development to build a tool that offers simultaneous playback of two videos with various comparison modes, to significantly improve the golden-eye test execution necessary to properly evaluate encoder efficiency.

Powered by our professional HEVC and H.264 codec SDK decoders, the Beamr video comparison tool VCT allows encoding engineers and compressionists to play back two frame-synchronized independent HEVC, H.264, or YUV sequences simultaneously, and to compare the quality of these streams in four modes:

  1. Split screen
  2. Side-by-side
  3. Overlay
  4. Butterfly, the newest mode

MPEG2-TS and MP4 files containing either HEVC or H.264 elementary streams are also supported. Additionally, VCT displays valuable clip information such as bit-rate, screen resolution, frame rate, number of frames, and other important video information.

Developed in 2012, VCT was the industry’s first internal software player offered as a tool to help Beamr customers conduct subjective testing while evaluating our encoder’s efficiency. Today, VCT has been tested by many content and equipment companies from around the world in multiple markets including broadcast, mobile, and internet streaming, making it the defacto standard for subjective golden-eye video quality testing and evaluation.

VCT BENEFITS AND TIPS

Your FREE trial of VCT will come with an extensive user guide that contains everything you need to get started. But we know you are eager to begin your testing, so following are a few quick tips we trust you will find useful. Take advantage of this “golden” opportunity and get started today!

Note: use Command (⌘) instead of Ctrl for the OS X version of VCT.

  1. Split Screen Comparison Mode:
    • Benefits:
      • Great for viewing two clips when only one screen is available.
      • Moving slider bar allows you to clearly see quality difference between two streams in your desired region of interest. For example, you can move the slider bar back and forth across a face to see quality differences between two discrete files.
    • Pro Tips:
      • Use the keyboard shortcut Ctrl + \ to re-center the slider bar after it is moved.
      • Shortcut key Ctrl + Tab allows you to change which video appears on the left or right of the slider bar.

VCT split screen comparison mode for subjective video quality assessment

 

  2. Side-by-side Comparison Mode:
    • Benefits:
      • Great for tradeshows. Solves the lack of synchronization of side by side comparison tests when using two independent players.
      • Single control for both streams.
    • Pro Tip:
      • Shortcut key Ctrl + Tab allows you to change which video appears on which screen without moving the windows.

VCT side-by-side comparison mode for subjective video quality assessment

 

  3. Overlay Comparison Mode:
    • Benefits:
      • Great for viewing the full frame of one stream on a single window.
    • Tips:
      • Shortcut key Ctrl + Tab allows you to cycle between the two videos. If you do this fast it is a great way to easily see quality differences between the two streams that you might not have noticed.

Overlay Mode

 

  4. Butterfly Comparison Mode:
    • Benefits:
      • Very useful for determining the accuracy of the encoding process. The butterfly mode displays mirrored images of two sequences to help you assess whether an artifact occurs in the source when comparing an encoded sequence to the original.
    • Tips:
      • Use shortcut key Ctrl + \ to reset the frame to the leftmost view, and use shortcut Ctrl + Alt + \ to switch to the rightmost view in butterfly mode.
      • Use shortcut keys Ctrl + [ and Ctrl + ] to move the image left or right in butterfly mode.

VCT butterfly comparison mode for subjective video quality assessment

  5. Other Useful Tips:
    • Ctrl + m allows you to toggle through the 4 comparison modes.
    • Shift + Left Click opens the magnifier tool that allows you to zoom into hard to see areas of the video.
    • Easily scale frames of different resolutions to the same resolution by clicking “scale to same look” on the main menu.
    • NEW automatic download feature on the splash screen notifies you of the latest version updates to ensure you’re always up to date.
    • For more great features, be sure to check out the VCT user guide: beamr.com/vct/userguide.com.

 

Reference:

(1)   P. M. Arun Kumar and S. Chandramathi. Video Quality Assessment Methods: A Bird’s-Eye View

(2)   Richard Dosselmann and Xue Dong Yang. A Formal Assessment of the Structural Similarity Index

Will Virtual Reality Determine the Future of Streaming?

As video services take a more aggressive approach to virtual reality (VR), the question of how to scale and deliver this bandwidth intensive content must be addressed to bring it to a mainstream audience.

While we’ve been talking about VR for a long time, you could say it was reinvigorated when Oculus grabbed the attention of Facebook, which injected $2 billion in investment based on Mark Zuckerberg’s vision that VR is a future technology people will actively embrace. Industry forecasters tend to agree, suggesting VR will be front and center in the digital economy within the next decade. According to research by Canalys, vendors will ship 6.3 million VR headsets globally in 2016, and CCS Insight suggests that as many as 96 million headsets will get snapped up by consumers by 2020.

One of VR’s key advantages is the freedom to look anywhere in 360 degrees using fully panoramic video in a highly intimate setting. Panoramic video files are large, with resolutions often 4K (4096 pixels wide by 2048 pixels tall, depending on the standard) or bigger.

While VR is considered to be the next big revolution in the consumption of media content, we also see it popping up in professional fields such as education, health, law enforcement, defense, telecom and media. It can provide a far more immersive live experience than TV by adding presence, the feeling that “you are really there.”

Development of VR projects have already started to take off and high-quality VR devices are surprisingly affordable. Earlier this summer, Google announced that 360-degree live streaming support was coming to YouTube.

Of course, all these new angles and sharpness of imagery creates new and challenging sets of engineering hurdles which we’ll discuss below.

Resolution and Quality?

Frame rate, resolution, and bandwidth are affected by the sheer volume of pixels that VR transmits. Developers and distributors of VR content will need to maximize frame rates and resolution throughout the entire workflow. They must keep up with the wide range of viewers’ devices as sporting events in particular, demand precise detail and high frame rates, such as what we see with instant replay, slow motion, and 360-degree cameras.

In a recent Vicon industry survey, 28 percent of respondents stated that high-quality content was important to ensuring a good VR experience. Let’s think about simple file size comparisons: we already know that Ultra HD files take up considerably more storage space than SD, and the greater the file size, the greater the chance it will impede delivery. VR file sizes are no small potatoes. When you’re talking about VR video, you’re talking about four to six times the foundational resolution that you are transmitting. And if you thought that Ultra HD was cumbersome, think about how you’re going to deal with resolutions beyond 4K for an immersive VR HD experience.
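A quick back-of-the-envelope calculation makes the scale of the problem concrete. The sketch below computes raw (pre-compression) bitrates, assuming 8-bit 4:2:0 sampling (1.5 bytes per pixel); the figures are illustrative, not measurements of any particular service.

```python
# Back-of-the-envelope sketch of uncompressed bitrates, to make the
# "four to six times the foundational resolution" point concrete.
# Assumes 8-bit 4:2:0 sampling, i.e. 1.5 bytes per pixel.
def raw_mbps(width, height, fps, bytes_per_pixel=1.5):
    """Raw video bitrate in megabits per second."""
    return width * height * fps * bytes_per_pixel * 8 / 1e6

print(f"1080p at 60 fps      : {raw_mbps(1920, 1080, 60):,.0f} Mbps raw")
print(f"4K panoramic at 60fps: {raw_mbps(4096, 2048, 60):,.0f} Mbps raw")
```

A 4096x2048 panorama carries roughly four times the pixels of 1080p before the viewer even sees more than a window of it, which is why compression efficiency matters so much for VR delivery.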

In order to catch up with these file sizes, we need to continue to develop video codecs that can quickly interpret the frame-by-frame data. HEVC is a great starting point, but frankly, given hardware device limitations, many content distributors are forced to continue using the H.264 codec. For this reason we must harness advanced tools in image processing and compression; one example of such an approach is content adaptive perceptual optimization.

I want my VR now! Reaching End Users

VR content comes in a variety of file formats, including combinations of stereoscopic 3D, 360-degree panoramas and spherical views, all of which bring obvious challenges such as added strain on processors, memory, and network bandwidth. Modern codecs use a variety of algorithms to quickly and efficiently detect similarities between frames, but they are usually tailored to 2D content. A content delivery mechanism must be able to send this video to every user, and it should be smart about optimizing the processing and transmission of the video.

Minimizing latency, how long can you roll the boulder up the hill?

We’ve seen significant improvements in the graphics processing capabilities of desktops and laptops. However, to take advantage of the immersive environment that VR offers, it’s important that high-end graphics are delivered to the viewer as quickly and smoothly as possible. The VR hardware also needs to display large images properly, with the highest fidelity and lowest latency. There is very limited room for things like color correction or adjusting panning from different directions; if you have to stitch or rework artifacts, you will likely lose ground. You need to be smart about it. Typical decoders for tablets or smart TVs are more likely to introduce latency, and they only support lower frame rates. This means how you build the infrastructure will be key to offering the image quality and life-like resolution that consumers expect to see.

Bandwidth, where art thou?

According to Netflix, for an Ultra HD streaming experience your Internet connection must have a speed of 25 Mbps or higher. However, according to Akamai, the average Internet speed in the US is only approximately 11 Mbps. Effectively, this prohibits live streaming to any typical mobile VR device, which may need 25 Mbps at a minimum to achieve the required quality and resolution.
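The practical consequence of that gap is that an adaptive player serving the average US connection cannot reach the top rung. A minimal sketch, with an illustrative bitrate ladder (not any actual service’s values), of how a player picks the highest rung that fits the measured speed with headroom:

```python
# Hypothetical sketch: choose the highest ABR rung that fits the
# measured connection speed with ~20% headroom. The ladder values
# are illustrative, not an actual service's encoding ladder.
def pick_rung(ladder_mbps, measured_mbps, headroom=0.8):
    usable = measured_mbps * headroom
    fits = [r for r in ladder_mbps if r <= usable]
    return max(fits) if fits else min(ladder_mbps)

ladder = [2.5, 5, 8, 16, 25]   # 25 Mbps = the UHD rung from the text
print(pick_rung(ladder, 11))   # the average US connection lands well below UHD
```

With 11 Mbps measured and 20% headroom, only about 8.8 Mbps is usable, so the 25 Mbps rung is out of reach and the experience degrades accordingly.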

Most certainly, improvements in graphics processing and hardware will continue to drive forward the realism of immersive VR content, as the ability to render an image quickly becomes easier and cheaper. Just recently, Netflix jumped on the bandwagon and became the first of many streaming media apps to launch on Oculus’ virtual reality app store. As soon as all the VR display devices are able to integrate with these higher resolution screens, we will see another step change in the quality and realism of virtual environments. But whether the available bandwidth will be sufficient is a very real question.

To understand the applications for VR, you really have to see it to believe it

A heart-warming campaign from Expedia recently offered children at a research hospital in Memphis Tennessee the opportunity to be taken on a journey of their dreams through immersive, real-time virtual travel – all without getting on a plane:  https://www.youtube.com/watch?time_continue=179&v=2wQQh5tbSPw

The National Multiple Sclerosis Society also launched a VR campaign that inventively used the tech to give two people with MS the opportunity to experience their lifelong passions. These are the type of immersive experiences we hope will unlock a better future for mankind. We applaud the massive projects and time spent on developing meaningful VR content and programming such as this.

Frost & Sullivan estimates that $1.5 billion is the forecasted revenue from Pay TV operators delivering VR content by 2020. The adoption of VR in my estimation is only limited by the quality of the user experience, as consumer expectation will no doubt be high.

For VR to really take off, the industry needs to address some of these challenges making VR more accessible and most importantly with unique and meaningful content. But it’s hard to talk about VR without experiencing it. I suggest you try it – you will like it.

Applications for On-the-Fly Modification of Encoder Parameters

As video encoding workflows modernize to include content adaptive techniques, the ability to change encoder parameters “on-the-fly” will be required. With the ability to change encoder resolution, bitrate, and other key elements of the encoding profile, video distributors can achieve a significant advantage by creating recipes appropriate to each piece of content.

For VOD or file-based encoding workflows, the advantages of on-the-fly reconfigurability are to enable content specific encoding recipes without resetting the encoder and disrupting the workflow. At the same time, on-the-fly functionality is a necessary feature for supporting real-time encoding on a network with variable capacity.  This way the application can take appropriate steps to react to changing bandwidth, network congestion or other operational requirements.

Vanguard by Beamr V.264 AVC Encoder SDK and V.265 HEVC Encoder SDK have supported on-the-fly modification of the encoder settings for several years. Let’s take a look at a few of the more common applications where having the feature can be helpful.

On-the-fly control of Bitrate

Adjusting bitrate while the encoder is in operation is an obvious application. All Vanguard by Beamr codec SDKs allow for the maximum bitrate to be changed via a simple “C-style” API.  This will enable bitrate adjustments to be made based on the available bandwidth, dynamic channel lineups, or other network conditions.
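As a rough illustration of what driving such a control might look like from application code, here is a Python sketch. The `EncoderSession` class and its method names are hypothetical stand-ins for this post, not the actual Vanguard by Beamr API, which is a C API.

```python
# Illustrative only: a hypothetical wrapper showing the shape of an
# on-the-fly bitrate change. Class and method names are invented for
# this sketch; the real SDK exposes a "C-style" API.
class EncoderSession:
    def __init__(self, max_bitrate_kbps):
        self.max_bitrate_kbps = max_bitrate_kbps

    def set_max_bitrate(self, kbps):
        # In a real SDK this would take effect on the next encoded frame,
        # without resetting the encoder or disrupting the stream.
        self.max_bitrate_kbps = kbps
        return self.max_bitrate_kbps

session = EncoderSession(max_bitrate_kbps=6000)
session.set_max_bitrate(3500)   # e.g. reacting to network congestion
print(session.max_bitrate_kbps)
```

The key property the sketch illustrates is that the change is a single call against a live session rather than a teardown and restart of the encoder.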

On-the-fly control of Encoder Speed

Encoder speed control is an especially useful parameter that directly trades off video encoding quality against encoding processing time. Calling this function triggers a different set of encoding algorithms and internal codec presets. This scenario applies to unicast transmissions, where a service may need to adjust the encoder speed for ever-changing network conditions and client device capabilities.

On-the-fly control of Video Resolution

A useful parameter to access on the fly is video resolution. One use case is in telecommunications, where the end user may shift their viewing from a mobile device operating on a slow and congested cellular network to a broadband WiFi network or a hard-wired desktop computer. With control of video resolution, the encoder output can be changed during operation to accommodate the network speed or to match the display resolution, all without interrupting the video program stream.

On-the-fly control of HEVC SAO and De-blocking Filter

HEVC presents additional opportunities to enhance “on the fly” control of the encoder and the Vanguard by Beamr V.265 encoder leads the market with the capability to turn on or off SAO and De-blocking filters to adjust quality and performance in real-time.

On-the-fly control of HEVC multithreading

V.265 is recognized for having superior multithreading capability. The V.265 codec SDK provides access to add or remove encoding execution threads dynamically. This is an important feature for environments with a variable number of tasks running concurrently, such as encoding that operates alongside a content adaptive optimization process or the ABR packaging step.

Beamr’s implementation of on-the-fly controls in our V.264 Codec SDK and V.265 Codec SDK demonstrate the robust design and scalable performance of the Vanguard by Beamr encoder software.

For more information on Vanguard by Beamr Codec SDK’s, please visit the V.264 and V.265 pages.  Or visit http://beamr.com for more on the company and our technology.