It’s Getting Intense

Where two parties are fighting, can a third party win?

I ended my previous post on a positive note, suggesting that media optimization could relieve the network congestion caused by the proliferation of massive photo and video sharing – an integral part of today’s “capturing process”.  Let’s take the discussion a step further by looking at devices, streams, files and key industry players.

Much has been said about the Verizon-Netflix dispute, yet bickering about who is at fault, or why, simply extends the saga.  For context, check out Dan Rayburn’s post on the dispute.

Broadband providers and content distribution companies can continue their ping-pong spats with or without net neutrality as the ball, but as Conviva reported: “every consumer should get the best possible viewing experience – regardless of their device, network, platform, ISP, or any of the other myriad conditions that can have an impact.”  There is no dispute there; the question is what can be done.

The gap is growing

TV display resolution, pixel depth and refresh rates are advancing faster than the residential broadband capacity needed to carry all this new data.  To unleash the full potential of new UHD TVs, which support HDR and refresh rates up to 60 frames per second, the industry needs more broadband capacity than is available today.  ISPs and content providers must join forces to make better use of the shared resource called the Internet, and meet the challenge set by consumers and the electronics industry.

But back to the game: a Verizon sales representative told a customer that upgrading to 75Mbps would guarantee the smoothest Netflix experience.  OK, so he was exaggerating, but in principle he wasn’t far off.  Bandwidth will determine the viewer’s experience – especially with 4K, and even more so with newer and more advanced technologies to come, such as High Dynamic Range, otherwise known as HDR.

UHD has 4 times as many pixels as HD 1080p.  But the data complexity of UHD can reach as much as 8 times that of HD when we consider the additional bits per pixel and frames per second, all needed to maximize the user experience.
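
To put rough numbers on this, here is a back-of-the-envelope sketch in Python (the 8-bit samples and frame rates are illustrative assumptions; 10-bit HDR pushes the ratio even higher):

def raw_bits_per_second(width, height, fps, bits_per_sample=8, samples_per_pixel=3):
    """Raw (uncompressed) data rate of a video signal, in bits per second."""
    return width * height * fps * bits_per_sample * samples_per_pixel

hd = raw_bits_per_second(1920, 1080, fps=30)     # HD 1080p at 30 fps
uhd = raw_bits_per_second(3840, 2160, fps=60)    # UHD at 60 fps

print(f"Pixel ratio:    {(3840 * 2160) / (1920 * 1080):.0f}x")   # 4x
print(f"Raw data ratio: {uhd / hd:.0f}x")                        # 8x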

How to jump the gap

The only way today to win this game is to make sure there’s enough network capacity for delivering the video streams at the pace and quality users demand.  All players in the content delivery arena are faced with this challenge, but to guarantee their viewers a high-quality experience, they will likely need to “cut some kind of deal” with broadband providers that are carrying the bits.  But this can get messy…

So, in an effort to avoid future “sagas,” more and more companies in the content delivery industry are acknowledging the bandwidth (that is, capacity) gap.  More importantly, these companies have decided to take action.

Some companies settle for compression solutions, which can only go so far, while others have begun to apply more efficient packaging techniques such as JIT.  But the most progressive have adopted media optimization tools.  True, I have a “biased” opinion on the optimal way to overcome this growing tension between video delivery and broadband capacity.  But media optimization removes redundant bits from the network, just as hybrid cars do away with unwanted emissions, so as a citizen of this great industry I believe media optimization solutions should not be ignored.

In my next post I will provide an overview of various approaches to media optimization, so you can decide for yourself which solution is best for your service.  Thank you for reading – I appreciate the time you’ve given me to walk you through this problem and its solutions.

Is It Legit to JIT?

If you’re delivering OTT content or a TV Everywhere service, and looking for ways to reduce your expenses, Just-In-Time (JIT) encoding and packaging might just be the solution for you. There are pros and cons to the JIT encoding and packaging workflow, which I would like to share with you, but first let’s start with tradition.


The traditional architecture

The traditional video processing flow takes a video asset (the master file) and encodes it to various resolutions and bitrates.  This variety of resolutions and bitrates enables both ABR (Adaptive Bitrate – adapting the video bitrate to changing network conditions) and support for viewing the video across a variety of devices.  After encoding, the files are packaged in all the required protocols, which are dictated by the supported devices (browsers, mobile phones, tablets, TVs, media streamers, game consoles, etc.).  All encoded and packaged versions of the video are stored in the service provider’s data center, so when a video is streamed to a particular device, all that is needed is to select the right file and stream it.

Pros: No heavy-duty processing is required when the video is streamed to the user.

Cons: High storage cost due to storing many copies of the video at different bitrates, resolutions and protocol packages.  Furthermore, to support a new device, you would have to go back and re-encode and re-package your entire repository in the encoding and packaging formats supported by the new device.
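
To make this concrete, here is a minimal Python sketch of the traditional “encode and package everything” library (the ladder values, protocol names and file-naming scheme are illustrative assumptions, not any particular vendor’s):

LADDER = [          # (name, width, height, bitrate in kbps)
    ("1080p", 1920, 1080, 4500),
    ("720p",  1280,  720, 2500),
    ("480p",   854,  480, 1200),
    ("360p",   640,  360,  700),
]
PROTOCOLS = ["hls", "dash", "smooth"]   # dictated by the supported devices

def stored_files(asset_id):
    """All renditions are encoded and packaged ahead of time and stored."""
    return [f"{asset_id}_{name}_{kbps}k.{proto}"
            for (name, _w, _h, kbps) in LADDER
            for proto in PROTOCOLS]

# 4 renditions x 3 protocols = 12 stored copies per asset; supporting a new
# protocol means re-packaging the entire repository.
print(len(stored_files("movie")))   # 12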


Full JIT architecture

With JIT encoding and packaging, only a single master copy of the video is stored in the data center.  When a user requests to view that video, the master file is encoded and packaged on-the-fly (just-in-time) to the required resolution, bitrate and protocol supported by the user’s device.  The encoded and packaged file can then be cached for a certain period of time in case another user with the same device profile wants to view that same video, saving the need to encode and package it again.

Pros: Significantly lower storage cost. In addition, in order to support a new device, codec (e.g., HEVC) or protocol (e.g., DASH), all you need to do is to add support for that device profile in your encoder and packager, and you’re done – no need to process the entire file repository.

Cons: Encoding the file needs to be done very quickly, so even the first user who views it won’t notice a delay in delivery.  This means that higher processing resources are needed, similar to the resources required for live video encoding.  In addition, the file needs to be encoded and packaged again after the cache expires.
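
A minimal sketch of this flow, in the same spirit (all function and profile names are illustrative stand-ins, not a real transcoder API):

import time
from collections import namedtuple

Profile = namedtuple("Profile", "codec bitrate_kbps protocol")
CACHE_TTL = 3600        # seconds a prepared rendition stays cached
cache = {}              # (asset_id, profile) -> (expiry time, stream)

def encode(master, p):      # placeholder for a real-time encoder
    return f"{master}|{p.codec}@{p.bitrate_kbps}k"

def package(mezzanine, p):  # placeholder for a JIT packager
    return f"{mezzanine}.{p.protocol}"

def serve(asset_id, profile):
    """Return a stream for this device profile, encoding on-the-fly on a miss."""
    key = (asset_id, profile)
    hit = cache.get(key)
    if hit and hit[0] > time.time():
        return hit[1]       # cache hit: no processing needed
    stream = package(encode(f"{asset_id}_master.mxf", profile), profile)
    cache[key] = (time.time() + CACHE_TTL, stream)
    return stream

# The first request triggers a (costly) real-time encode; an identical
# request is then served from cache until the TTL expires.
print(serve("movie", Profile("hevc", 2500, "dash")))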


Interim architecture

An interim solution is to use only JIT packaging.  This way, the files are pre-encoded to all the required resolutions and bitrates, but are stored in a single format.  Then, when a user requests to view that asset, the encoded files are packaged on-the-fly according to the protocol required by the user’s device.  Packaging the files requires much less computing resources than encoding them, and some storage savings are still gained, as only a single format of each encode is stored.
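
In the same sketch style (again, hypothetical names only), just the last step changes: the renditions already exist, and only the lightweight packaging happens per request:

def serve_jit_packaged(asset_id, rendition, protocol):
    stored = f"{asset_id}_{rendition}.mp4"       # pre-encoded, stored once
    return f"{stored} packaged as {protocol}"    # cheap on-the-fly packaging

print(serve_jit_packaged("movie", "720p_2500k", "hls"))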


Recommendations

  • For large content libraries with a diverse subscriber base and a multitude of different devices, I recommend using JIT encoding and packaging.  As an interim solution you can use JIT packaging alone, or use JIT encoding for the long-tail content that has a very small number of views.
  • For small content libraries, I don’t recommend using JIT encoding and packaging, even if you have a large number of viewers, since the cost of storing all the different versions is not so high.

I also recommend reading Brian Santo’s great article, “Just in time transcoding is coming”.


Let’s recap

JIT encoding and packaging is an important workflow that brings efficiencies to large-scale video deployments:

  • Significant savings in storage cost
  • Simplified workflow
  • Reaches multiple devices
  • Easy support for new devices, codecs and protocols
  • A secure investment that scales with subscriber growth

Consider the case of a cloud DVR implementation for a cable or satellite operator: the content repository is huge, with new content every day, multiplied by the number of channels.  And sometimes, due to legal issues, there’s a requirement to keep a copy per user.  In such a case, keeping just one version of the recorded program and performing JIT encoding and packaging is not only legit – it’s absolutely crucial.

1. 2. 3. Cheese

We live in an amazing time, where with a smartphone alone, professional-quality photos and videos are no longer limited to the exclusive enclave of high-end studios.  And with Zogby Analytics reporting that 87% of Millennials confirmed in a national study that “My smartphone never leaves my side, night or day,” never before have we had the ability to capture lifetime memories as they are actually happening.

But what actually happens after this image is captured?  In the old days of film, the photographer would focus the camera on the subject and, at the right moment, click the shutter, exposing the film to light and thus writing the image onto it.  The film would then be removed from the camera and developed through a special chemical process that transferred the image onto photo paper, and the resulting photo could then be shared with the intended viewing audience.

Today the process of digital photography and video capture is quite different, in that there is no film.  From the moment the image sensor in the camera captures the photons in its viewing range and converts them to a series of zeros and ones, a complicated workflow (one that requires a network connection) for viewing, uploading, downloading, posting, sharing, sending and storing is executed.  Photography and videography today entail much more than “a click”.


If a tree falls in a forest, and no one is there to hear it, does it make a sound?

Put into today’s visual context: if a photo or video is taken but not shared with anyone, well, what’s the point?  Unfortunately, media traffic is growing at a more rapid rate than the network capacity needed to support today’s workflow of capturing and sharing photo and video sessions (see http://www.kpcb.com/internet-trends, slides 13, 24, 60, 61 and 70).

In the near future, more photo and video bits (ones and zeros) will be created than our current networks (wireline or wireless) can handle.  Fortunately, with new technologies being introduced that greatly improve encoding efficiencies for photo and video files, there will be solutions to address this impending gap.

Media optimization to the rescue

One of the technical solutions in the market today is media optimization.  Media optimization is employed by distributors of photo and video content to reduce the file size and bitrate of photo and video files.  In doing so, it has a positive effect on the network by adding immediate excess capacity, since the resulting bandwidth savings can be reutilized for other traffic.

“It is well established that video will constitute the bulk of data traffic both on wireline and wireless from here on.  From a consumer point of view, the access technology will just disappear in the background.  They will care less what access medium is carrying the bits of any particular content or traffic.” (http://www.mobilefutureforward.com/5G_Chetan_Sharma_Consulting.pdf, page 17)

As this quote suggests, many in the “photo and video capture industry” do not care about network capacity; instead they care only about enabling an effortless capture and sharing experience for the consumer.  After all, technology exists to serve a need, and for those in the business of capturing and sharing lifetime memories, this need is no more complex than capturing the first step of a child learning to walk.  And this describes the essence and magic of photos and videos: the moments we create and share today can be experienced for a lifetime, or perhaps for lifetimes to come.

Thank you for reading.  I will be sharing more specific insights on this topic, so stay tuned.

Introducing Beamr Blogger – Eliezer (Eli) Lubitch, President

Hi, my name is Eliezer Lubitch, but everyone calls me Eli.  As the President of Beamr, I basically think, eat, sleep, breathe and even dream about media optimization.  This passion started over 20 years ago…

At Scitex (a global leader in graphic arts imaging that was acquired by HP and Kodak), I filled various engineering, R&D and management positions.  I then served as Vice President of Business Development for Kodak Versamark Inc. and worked in Kodak’s corporate marketing.

I forgot to mention that in addition to my corporate executive experience, I have an entrepreneurial side: I was the seed investor of Tivella Inc., a world pioneer of IPTV that was acquired by Cisco.

Today I’m completely focused on media optimization, specifically the intersection of imaging and networking solutions for the graphic arts, publishing and professional content industries.  I hold an MSc with honors in Computer Science from Tel-Aviv University and an MBA with honors from Technion, Israel Institute of Technology.

I have a lot of ideas related to the exciting and rapidly growing industry of media optimization. I’m looking forward to sharing them with you in future posts and hearing back from you through Facebook, Twitter or LinkedIn.

Why The Future of Sports Broadcasting Must Keep Pace with Live Streaming Adoption

Today’s sports fans expect digital availability everywhere and their viewing expectations are constantly rising. As fans have more choices to access content outside the home, will broadcasters be able to keep pace with the technology necessary to support live streaming?

Will the Big Game Be Live Streamed or Televised?

OTT (over-the-top, Internet delivery) services are hot.  Yet with all the consumer buzz and soaring market valuations of companies offering OTT service delivery, traditional broadcast television viewing time is not under great pressure.  Nielsen, the industry standard of audience measurement, found in its latest report that 95% of video viewing in the US occurred on traditional broadcast platforms, with just 4% of views happening on the Internet and 1% on a smartphone.

So if chants of “we want our content now” can be heard in the streets, why is it that cord-cutting trends aren’t showing massive acceleration, and the earnings of pay TV operators are as strong as ever?

As a fierce supporter of consumer choice, I believe that all platforms should be free to proliferate, since there is a place for the quality and superior experience that can only be delivered by a pay TV service.  Likewise, the ability to view entertainment content outside the home is of such high interest that mobile devices such as tablets are now seen as equal to a 65-inch television hanging on a living room wall.

So what’s holding back the growth of these consumer services?  It doesn’t seem to be a lack of interest.  The answer, after all, is sports content.

Options for viewing professional sporting events are limited to pay TV services or network broadcasters.  This means if you choose to forego a pay TV package, or do not live in a market where the game is broadcast on a local channel, the only option available to watch the big game will be at your friend’s house (who is likely shelling out $80 to $100 a month for a pay TV package), or the neighborhood sports bar.  Otherwise, you will be stuck with updates via Twitter or the radio.  Hardly compelling alternatives, I would say!

But there’s good news!  On Sunday, October 25th, we’ll get a glimpse of the future when, for the first time, anyone with an Internet connection will be able to watch a live National Football League (NFL) game exclusively on Yahoo!, from anywhere in the world, for free, via smartphone, computer, game console or smart TV.  Which means if you find yourself no longer watching the “big screen” but opting for your phone or tablet instead, you now have a way to watch the game without your friends teasing you for cheering the wrong team, or some guy spilling his beer on your arm at the sports bar.

The NFL earns a significant portion of its revenues from selling the rights to televise games, which is why a shift to new delivery methods is a seismic one.  With license terms often spanning ten years or more, it is no surprise that the digital distribution rights to most NFL Sunday games are locked up until 2022 and 2023.  “The league is on a year-to-year contract with CBS for ‘Thursday Night Football’ and it is widely known that they are considering whether to open up streaming of those games to new partners,” according to The New York Times.

According to Accenture, by 2016 the overall market for sports, concerts and trade programming is estimated to reach $228 billion, and digital live events will account for at least 30% of the total market.  For sports, media and entertainment businesses, the ability to create and execute flawless digital delivery of live events – and successfully monetize the outputs – will be a key differentiator and future earnings driver.  This is why Major League Baseball has been in the live-streaming business longer than anyone.  The technology division built to deliver live streams of MLB games and MLB Advanced Media events also powers OTT streaming services from HBO, Sony, ESPN and others.  This gives MLB valuable monetization opportunities via its MLB.tv video service, the At Bat mobile app and the accompanying multiplatform Gameday pitch-tracking application.  With fans collectively consuming 71.35 million minutes each day, Major League Baseball is well positioned to benefit from offering access to games as fans want to see them.

Watching the Game, Uninterrupted

Today’s sports fans demand a broadcast quality experience, regardless of the mode of delivery or viewing device.  Technically speaking, the big issue with OTT delivered content remains whether crowded networks can handle the increased data that comes with streaming video to millions of devices simultaneously. OTT providers like Netflix and Hulu are always seeking more efficient ways to manage bandwidth for Video-on-Demand, but the biggest challenge in streaming a live event is anticipating the bandwidth that will be required, since it fluctuates dramatically from one minute to the next.

Managing the unpredictable and peaky demand of Internet video traffic is the burden of the network. Yet the consumer only knows the service provider as the source, and is unaware of the complex patchwork of vendors and technology providers needed to deliver video to their screen.

Hence, if the service provider is not delivering a broadcast quality feed, the consumer will blame the service provider and not the network.  For this reason, both Netflix and Google publish reports ranking ISPs from fastest to slowest.  Unfortunately, this is not sufficient, as most consumers are not aware of these reports, and even if they are, they can only switch to a “higher performing” ISP – that is, if one operates in their region.

With live sports remaining the one unconquered frontier for OTT, and perhaps the Holy Grail necessary to attract cord-cutters interested in sports, live streaming must rise to the challenge of providing video streams at a smaller file size without sacrificing viewing quality.

While football fans everywhere await the score of the game between the Buffalo Bills and the Jacksonville Jaguars on the 25th, I’ll be watching closely for the results of how Yahoo! and the NFL navigate the course for the future of sports broadcasting.

Whatever Happened to Mobile Broadcast TV?

Ten years ago I was working as an independent consultant, specializing in mobile video technologies and services.  I helped companies such as Samsung, Qualcomm, NEC, Comverse and Radvision with their product strategies, and presented at numerous conferences and events.  One of my main areas of expertise at the time was Mobile Broadcast TV, an array of technologies and standards that enabled over-the-air broadcast of TV signals to mobile devices over dedicated (non-cellular) frequencies.  I delivered a lot of training sessions on this topic, including a full 3-day course, but was always skeptical about the prospects of this market.  And indeed, Mobile Broadcast TV services have pretty much disappeared from the market, a trend which I identified in the last post of my Mobile TV blog.

So what happened to Mobile Broadcast TV?  Why did the immense investments in purchasing spectrum, licensing content, building infrastructure and designing compatible devices go down the drain?  Let’s try to identify the reasons for this failure, and perhaps learn a lesson or two for the future.  

Once upon a time

Just a decade ago 3G networks were deployed in most countries, and mobile operators launched “walled garden” video services on top of them, creating a large demand for mobile data services and new revenue opportunities from both data and video content.  But based on usage forecasts, operators realized that their mobile networks would soon be flooded with demand for data services, which they wouldn’t be able to supply even over their newly launched 3G networks.  

Usage patterns clearly showed that live TV services, and especially the channels that were popular on regular broadcast TV, were also popular when delivered over cellular.  Hence the following revelation: If everyone is tuning into the same channels on mobile, why do we need to burden our cellular network with thousands or millions of per-user connections for these channels?  Why not create a dedicated Mobile Broadcast TV service, which would broadcast these TV channels to mobile devices over separate frequencies?  This would solve two problems at once: Unloading these channels from the cellular networks, as well as using broadcast technology (vs. unicast over IP that is used for cellular video delivery) so everyone could tune into the same broadcast, with virtually unlimited capacity.  This revelation triggered the launch of several different standards and technologies for Mobile Broadcast TV.

Round world trip

Europe adopted the DVB-H standard driven by Nokia, which was a low-power derivative of the DVB-T standard, and also enabled reception when moving at high speed.  In Korea, two competing systems were launched: T-DMB using terrestrial antennas, based on the Digital Audio Broadcasting (DAB) system, and S-DMB using satellites.  In the US, Qualcomm developed a system called MediaFLO, and not only implemented it in its cellphone chips, but also created a subsidiary that bought spectrum and licensed content for the service.  In Japan, the ISDB-T terrestrial broadcast system was designed with built-in support for both regular TV reception and mobile TV, so no additional infrastructure was required for broadcasting to mobile devices.

Hopes were high and forecasts shot through the sky for adoption of this new service, which would create additional revenues for mobile operators, broadcasters, chip vendors and even governments (since new spectrum would have to be licensed).  But, it took only 2-3 years for this market to collapse completely, due to technical deployment issues and low user adoption.  

Not very happily ever after

What happened to Mobile Broadcast TV and why did it fail?  First of all, Mobile Broadcast TV was very expensive to launch.  It required licensing new frequencies for the broadcast service, deploying new dedicated antennas, and getting new handsets with chips that support Mobile Broadcast TV reception into the hands of users.  The cost was enormous – much higher, for example, than installing a new app for OTT delivery on a smartphone…

Another issue was a very complicated value chain, which required complex licensing agreements and revenue split between the different players: the TV broadcasters that own the content (and sometimes own the broadcasting antennas), and the operators who “owned” the customers and subsidized handsets.  Part of the grand vision was incremental revenues from services such as immediate purchase of products advertised on the Mobile Broadcast TV network, using the cellular IP network as a backchannel.  But the advertisers had relationships with the broadcasters, while the users had relationships with the operators, so who would manage the revenue split for these Mobile Broadcast-driven e-commerce transactions?

On top of that, the nature of regular TV was changing: users were shifting from viewing content when it was broadcast to viewing content when it was convenient for them, using Personal Video Recorders (such as the famous TiVo box) and Video On Demand (VOD) services offered by MSOs.  Finally, when the walls of the “walled garden” disappeared, and users started watching Over The Top (OTT) content from every channel through smartphone apps, it was clear that the economics of Mobile Broadcast TV services would never work.

Back to reality

So where are we today?  Back to square one…  Delivery of both live video and on-demand video to mobile devices is happening over cellular networks.  And although 4G networks with increased capacity have been deployed, they are again overloaded with demand for video services on-the-go.

So what can be done about network overload?  The answer is simple: Media Optimization.  It’s the best way to reduce the load from today’s networks and improve the UX for viewers without licensing additional spectrum, deploying new antennas or upgrading handsets…

What do you think happened to Mobile Broadcast TV?  I’ll be happy to hear from you on our Twitter, Facebook and LinkedIn channels.

Live Streaming: It’s a Whole Different Ball Game

The market for OTT live streaming is not only on the rise, but is also becoming a larger part of the overall OTT streaming market.  According to Conviva’s Industry Data, live streaming already accounts for 20% of streamed video.  The most popular programs watched are sports, news, live events (such as papal visits and The Oscars), and OTT delivery of linear TV channels such as CNN, ABC, CBS, etc.

On the face of it, live streaming and on-demand streaming seem to have a lot in common: They are viewed on the same devices and delivered using the same codecs and protocols. However, unlike on-demand streaming, live streaming happens in real-time, bringing with it many challenges related to video processing and delivery.

Timing is everything

For on-demand video, the timing of content availability is important, but not critical.  Episodic content is prepared months in advance, and even if content is relevant for that same day, there is still some time to prepare it beforehand.  When encoding video, having more time means that more complex encoding tools can be used to improve video quality, even if encoding takes longer. For example, two-pass encoding, which first analyzes a video and then sets the best encoding parameters based on the analysis, can be performed when doing offline encoding of on-demand streaming content.
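
Conceptually (a toy sketch of the idea, not any real encoder’s rate control), the benefit of the first pass is knowing the whole video before spending the bit budget:

def two_pass_allocate(complexities, total_bits):
    """Pass 2: spend a fixed bit budget proportionally to per-segment complexity."""
    total = sum(complexities)
    return [total_bits * c / total for c in complexities]

# Pass 1 (offline analysis) measured these per-segment complexities,
# e.g. talking heads (low) vs. fast-action scenes (high):
complexities = [1.0, 1.2, 5.0, 4.5, 1.1]
print(two_pass_allocate(complexities, total_bits=10_000_000))
# Live encoding can't do this: future segments haven't happened yet, so bits
# must be allocated with only the past in view.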

For live video, encoding has to take place on-the-fly, with minimal delay.  Nothing (at least in my eyes) is more annoying than watching a live football broadcast on the Internet, and seeing a touchdown only after hearing the (crazy loud) cheers from the next-door neighbors…

Simply put, since live video encoding needs to happen in real-time, the required encoding speed forces a tradeoff in video quality.  Two-pass encoding can’t be used, and some of the encoding tools need to be relaxed to meet the real-time constraints, resulting in lower video quality.

Multiple formats at once

Streaming applications typically require encoding content in various formats (both codecs and protocols), in order to support different devices such as web browsers, mobile phones, streaming boxes and game consoles. When creating content for on-demand streaming, you can initially create only a limited range of formats (based on device popularity), and after releasing the content, you can expand the range of supported devices further by creating more formats.

With live streaming, however, there is no second chance for releasing the content to more devices: content is viewed in real-time, so all formats have to be prepared at once.  This creates an additional processing burden, which requires extra computing resources.

Sometimes tradition is less dedicated

It’s interesting to compare live streaming with traditional live broadcast over terrestrial antennas.  With traditional broadcast, a single antenna can serve millions of viewers who are watching the same program at the same time.  Having more viewers does not change the viewing experience for anyone, and doesn’t require any additional dedicated transmission power from the antenna.

With live streaming, each viewer has a unicast connection to the streaming server, meaning that a dedicated stream is sent from the broadcaster to that viewer.  Now, multiply the typical bandwidth of a live stream (which can be around 1 megabit per second) by the millions of viewers watching an NBA game or a papal visit simultaneously, and the end result is an enormous bandwidth demand that needs to be supported over the (somewhat congested) network.
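
The arithmetic is simple but sobering (a sketch using the 1 Mbps figure above; the viewer count is an illustrative assumption):

# Aggregate bandwidth for unicast live streaming vs. one broadcast antenna.
stream_bitrate_bps = 1_000_000     # ~1 Mbps per viewer, per the text above
viewers = 5_000_000                # e.g. a big NBA game

aggregate_bps = stream_bitrate_bps * viewers
print(f"Unicast total: {aggregate_bps / 1e12:.1f} Tbps")   # 5.0 Tbps
# A broadcast antenna serves the same audience with a single transmission,
# so its load is independent of the viewer count.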

Clearly, processing and delivering live streams is technically more challenging than on-demand streaming.  And subscribers, who always expect a great viewing experience, do not care about any technical issues, so the challenge is the content providers’ alone to solve.

Being a big fan of media optimization, I believe that as the demand for live streaming increases, the bitrate of live streams delivered over the Internet needs to decrease.  Otherwise, we will all suffer from the outcome of congestion.

Beamr Video Cloud Service 101: How to Use the REST API

Introduction

Cloud-based video optimization offers a solution that is cost-efficient, scalable and seamlessly integrated into a video processing workflow. Naturally, I would recommend the Beamr Video Cloud service as the easiest way to get started with optimization for your videos. There is no installation or server management required.

Beamr Video Cloud service runs on Amazon Web Services (AWS) infrastructure, and is accessible via a REST API. We provide a simple interface, and handle all the heavy lifting for you.

API Overview

The Beamr Video API uses JSON over HTTP(S) and follows the standard REST design. Beamr Video users send optimization jobs to the service, and when a job is completed the service sends a notification (HTTP callback). The callback includes the URL to the optimized file, as well as statistics regarding the savings and processing costs. Users can query the API for the status of their current and past jobs at any time.

Following is a simple walkthrough of creating an optimization job and downloading the result.  For simplicity I chose to use the HTTPie command line utility; however, similar commands may be sent using cURL, or any REST-capable utility or programming language.

Creating a New Optimization Job

To create a new optimization job, use the following command to send a POST request to the Beamr Video Cloud service.

$ http -a username:password POST https://api.beamrvideo.com/v1/jobs source="http://example.com/movie.mp4" quality=high

  • -a is used to pass the user credentials to the service (http digest authentication)
  • The POST verb is used, since we are creating a new job (as opposed to querying existing jobs as seen below)
  • The base-url https://api.beamrvideo.com/v1/ is where all requests are sent, and in this case, is followed by the /jobs suffix, since we are creating a new job
  • Two parameters are passed to the service in the request body:
    • source – a url to the video file which is to be optimized
    • quality – the desired quality setting (we support high and best)

The HTTPie utility is great because it automatically formats the request with the JSON payload required by the Beamr Video Cloud Service, and automatically sets the Content-Type header to application/json as required:

{
  "source": "http://example.com/movie.mp4",
  "quality": "high"
}

The following JSON response from the server indicates the job has been scheduled for processing, and provides the job-id, which you will need later when querying the job result.

{
  "code": 201,
  "id": "C3uxnY5j7L93rWZpir5HVN",
  "info": {
    "created": 1443551274733,
    "source": "http://example.com/movie.mp4",
    "status": "pending"
  },
  "location": "https://api.beamrvideo.com/v1/jobs/C3uxnY5j7L93rWZpir5HVN",
  "status": "CREATED"
}

The response also includes a location for the job, which is the URL to the query for getting the job status.

Querying the Job and Retrieving the Optimized Video

The next step, after the service has finished processing our job, is to retrieve the resulting optimized file.  Users are sent a notification when the job status is updated; however, it is also possible to query the status using the API.

Using the HTTPie utility, you can issue the following GET request:

$ http -a username:password GET https://api.beamrvideo.com/v1/jobs/C3uxnY5j7L93rWZpir5HVN

As before, the credentials are passed to the service via the -a parameter.  The base-url for the API remains the same; however, the requested resource is now the job-id received in the previous response.  Note that the complete URL is also returned as the “location” value in the previous response, and it is best practice to follow this value.

The server response below indicates that the job is completed, and provides an HTTP-accessible URL from which users can download the optimized video.

{
  "code": 200,
  "job": {
    "id": "C3uxnY5j7L93rWZpir5HVN",
    "info": {
      "created": 1443551274733,
      "optimizedVideo": "s3://beamrvideo_results/danj-C3uxnY5j7L93rWZpir5HVN/bird_mini.mp4",
      "optimizedVideoUrl": "https://api.beamrvideo.com/v1/jobs/C3uxnY5j7L93rWZpir5HVN/optimized_video",
      "source": "http://icvt-tech-data.s3.amazonaws.com/dan/bird.mp4",
      "status": "completed"
    },
    "location": "https://api.beamrvideo.com/v1/jobs/C3uxnY5j7L93rWZpir5HVN"
  },
  "status": "OK"
}

Downloading the Result

Using HTTPie, you send a final GET request, this time to the optimizedVideoUrl from the previous response.  The --download flag tells HTTPie to save the file to disk (instead of printing the response body to the console).

$ http -a username:password --download https://api.beamrvideo.com/v1/jobs/C3uxnY5j7L93rWZpir5HVN/optimized_video

and the result looks something like this (with the file saved to disk):

Downloading to “movie_mini.mp4”

Done.

Summary

Cloud-based video optimization, specifically using the Beamr Video Cloud Service with its REST API, is the easiest and most cost-efficient way to get started with video optimization.  No installation is required; you pay as you go, and enjoy fast turnaround and unlimited scale.

Optimization of your first videos is just a few clicks away; complete integration with your workflow is simple and straightforward with any programming or scripting language (Python, Java, C#, Bash, etc.).
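
For example, here is a minimal Python sketch of the full flow (the endpoints and response fields match the walkthrough above; the polling loop and its interval are assumptions – in production you would rely on the HTTP callback notification instead):

import time
import requests
from requests.auth import HTTPDigestAuth

BASE = "https://api.beamrvideo.com/v1"
AUTH = HTTPDigestAuth("username", "password")   # HTTP digest, as noted above

# 1. Create the optimization job (POST /jobs)
job = requests.post(f"{BASE}/jobs", auth=AUTH,
                    json={"source": "http://example.com/movie.mp4",
                          "quality": "high"}).json()

# 2. Poll the job's "location" URL until processing completes
while True:
    info = requests.get(job["location"], auth=AUTH).json()["job"]["info"]
    if info["status"] == "completed":
        break
    time.sleep(10)   # arbitrary polling interval

# 3. Download the optimized video from optimizedVideoUrl
video = requests.get(info["optimizedVideoUrl"], auth=AUTH)
with open("movie_optimized.mp4", "wb") as f:
    f.write(video.content)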

To sign up for your evaluation, follow up with us at http://beamr.com/request_trial.

Driving the Mobile Visual Communication Revolution

Beamr to Present JPEGmini at Mobile Photo Connect on September 29

As image technology experts, the crux of our corporate mission has been to reduce the cost and improve the user experience of photo and video delivery, to any device over any network, while raising the bar on quality.

Since our inception in 2009, we have developed a strong related IP portfolio and established a stellar customer base within the world’s largest media companies. But what you may not know is that in addition to developing Beamr Video – aimed at better delivery of over-the-top streaming content to mobile and broadband connected consumers – the team behind Beamr is also responsible for JPEGmini, the highly acclaimed photo optimizer.

Currently in use by many social networks, photo sharing platforms and e-commerce websites, including Groupon Japan and Netflix, JPEGmini is capable of reducing the size of JPEG images by up to 80 percent without affecting image quality.

Recognizing the need to optimize the delivery and storage of photos in an era of exponential increase in visual social communication, our President, Eliezer (Eli) Lubitch, will be presenting our suite of media optimization tools, including JPEGmini and Beamr Video, at the annual Mobile Photo Connect conference on September 29 in San Francisco.

The key gathering for executives and entrepreneurs in the mobile photography ecosystem, Mobile Photo Connect attracts over 150 photo app developers, imaging companies, mobile vendors and other industry participants from Asia, Europe and the Americas.

Eli is scheduled to give a presentation during Session III of the “Show and Tell” portion of the program, running from 2:50 PM – 3:30 PM PST.  We’ll also have a stand in the exhibit hall where you can see live demos of JPEGmini and Beamr Video.  If you are attending, be sure to stop by and say hello, or tweet us @beamrvideo or @jpegmini.  Unable to attend, but curious to learn more?  Visit www.jpegmini.com or check out this video of professional photographer Paul McPherson of Shutterfreek.com as he tests JPEGmini optimization quality on screen and under the loupe.

Joining Forces to Create a Better Streaming Experience

Streaming terabytes of video is complicated – and as expectations for high-quality video delivery rise daily, the undertaking only gets more challenging.  It is especially problematic considering that the broadband networks carrying the burden of streaming all this video were originally designed for text and images – not bandwidth-intensive, high-quality video that only gets better (and better) with every generation.

It has taken the video industry a while to admit that streaming video is the future, and then to commit the financial resources to bolster the critical infrastructure necessary to support it.

Fresh off the heels of an exciting IBC Conference in Amsterdam, it is now more evident than ever that content providers and operators are making fiscal and technological investments to give consumers what they are demanding – a seamless video experience.

To address the rapid pace of industry transformation, leaders across the online video ecosystem, including us, have recently pledged to work together as part of the Streaming Video Alliance (SVA), a newly formed industry forum whose member companies are working to create better, faster digital experiences for today’s connected consumers.

We join a prestigious list of SVA members which now includes: Alcatel-Lucent, Beamr, CableLabs, Cedexis, Charter Communications, Cisco, Comcast, Conviva, EPIX, Ericsson, FOX Networks, Intel, Irdeto, Korea Telecom, Level 3 Communications, Liberty Global, Limelight Networks, MLB Advanced Media, NeuLion, Nominum, PeerApp, Qwilt, Sky, Telecom Italia, Telstra, Time Warner Cable, Ustream, Verizon, Wowza Media Systems and Yahoo!.

As a technology enabler, we are excited to join content providers, CDNs and service providers in working on the most pressing issues of streaming video delivery.  The alliance’s main focus is on the bits you’ll never see – like optimization and delivery techniques – all aimed at ensuring that online video flourishes, and that you never find yourself or your customers suffering from a buffering issue while trying to enjoy the vast world of online video unfolding before us.  Happy streaming.