Why You Should Also Be Excited About AWS Lambda

Websites like JPEGmini.com have a purpose. They tell a story. JPEGmini.com tells the story of image optimization through a web-based, easy-to-use demo. Users can experience the product online, hands-on, right away. This is the magic of JPEGmini.com.

But our web-based optimization is not limited to a single demo photo. JPEGmini also offers a free web service, where users upload large numbers of photos and download the optimized versions.

From the user’s point of view, optimizing their photos is very simple. They upload their photos to JPEGmini.com, and we send the optimized photos back, reduced in size by up to 80%, with no visible difference in quality.

Behind the scenes there is a lot going on in order to make this happen. Just like any other online service, we run clusters of servers hosted in multiple data centers in order to process millions of photos.

As a long-time fan of AWS, I found it an easy decision to build upon their infrastructure.

Using the AWS building blocks (EC2, ELB, etc.), it is relatively straightforward to set up multiple web servers behind a load balancer, queue tasks to multiple servers, scale the worker fleet dynamically, manage storage, and so on. We let Amazon handle the infrastructure, so we can focus on the user experience.

Almost. We still need to configure, manage and monitor all these services and components.

That got me thinking — how could we simplify the backend?

Static websites hosted on S3 are highly available, fault-tolerant and scalable, with literally zero DevOps required. By combining client-side processing with backend-hosted third-party services (e.g. accessing AWS services directly with the JavaScript SDK), it is possible to build many dynamic applications. Yet, in our case, one part was still missing: the ability to run our customized photo optimization algorithm on the backend.

Well, not anymore — thanks to AWS Lambda functions.

Briefly, AWS Lambda is a service that runs your code in response to events, managing the compute resources automatically. With AWS Lambda, there is no need for infrastructure management. Say goodbye to task queues, servers, load balancers, and autoscaling. There is not even a need to monitor servers anymore. It is essentially “serverless” processing. Very cool.

Contrary to my first impression, Lambda is not limited to just JavaScript and Java. Any native code that can run in a container can also be packaged into a Lambda function (more on this below).

This also means lower costs. Lambda is billed in 100-millisecond increments (as opposed to paying by the hour for EC2 instances), which is a much tighter fit to the pay-per-use model.
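To put rough numbers on it (using Lambda’s launch pricing of about $0.20 per million requests plus $0.00001667 per GB-second of compute): a function configured with 512 MB of memory that runs for 200 ms consumes 0.1 GB-seconds, so a million such invocations cost roughly $1.67 in compute plus $0.20 in request charges, and exactly nothing while the system sits idle.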

A Lambda-based backend means less effort for developers, IT and system engineers.

It is both serverless and simple. If your website requires backend processing, and that processing can be broken into small compute tasks, you should think about making use of AWS Lambda.

The Technical Details

The JPEGmini Lambda function is intended to replace the backend servers performing the actual image optimization. With the Lambda-based architecture, users upload their images directly to S3, which triggers our Lambda function for each new image. The function optimizes the image, and places the resulting image back on S3.

Out of the box, AWS Lambda supports the Node.js and Java 8 runtimes, and those are the only two options you get to choose from when defining the function in the AWS Console. A lesser-known fact is that you can bundle any code (including natively compiled binaries) and execute it from within the JavaScript or Java Lambda function.

When defining the Lambda function, you can either edit the code inline (on the website), which is probably good enough for small hello-world type functions, or upload a pre-packaged zip file with all the code. The latter makes a lot more sense when the code uses external dependencies, grows in size, or when you manage your code with git (or similar). Packaging a zip file also lets you include natively compiled binaries and then execute them from within your code.
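To illustrate, a deployment package for a function like ours could be laid out as follows (the file names are examples, not our actual package):

jpegmini-lambda.zip
├── index.js          the Node.js handler (the Lambda entry point)
├── node_modules/     npm dependencies bundled with the function (e.g. shelljs)
└── jpegmini          the statically linked native binary, built for Amazon Linux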

We used the AWS SDK for JavaScript in Node.js to handle moving files from S3 to the local file system and back. The running Lambda function has permission to write into /tmp. Execution of the pre-compiled JPEGmini binary is done with shelljs, which simplifies waiting for the subprocess to finish and handling errors.
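To make this concrete, here is a minimal sketch of what such a handler can look like. The binary name (jpegmini), its -f/-o flags, and the output bucket naming are illustrative assumptions, not our production code:

var AWS = require('aws-sdk');
var shell = require('shelljs');
var fs = require('fs');
var path = require('path');

var s3 = new AWS.S3();

exports.handler = function(event, context) {
  // The S3 event carries the bucket and key of the newly uploaded image
  var bucket = event.Records[0].s3.bucket.name;
  var key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  var input = path.join('/tmp', path.basename(key)); // /tmp is the only writable path
  var output = input + '.mini.jpg';

  // Download the uploaded image from S3 to the local file system
  s3.getObject({ Bucket: bucket, Key: key })
    .createReadStream()
    .on('error', function(err) { context.fail(err); })
    .pipe(fs.createWriteStream(input))
    .on('close', function() {
      // Run the bundled native binary; shell.exec() blocks until it exits
      var result = shell.exec('./jpegmini -f ' + input + ' -o ' + output);
      if (result.code !== 0) {
        return context.fail(new Error('optimizer exited with code ' + result.code));
      }
      // Upload the optimized image back to S3 (here, a parallel output bucket)
      s3.putObject({
        Bucket: bucket + '-optimized',
        Key: key,
        Body: fs.createReadStream(output)
      }, function(err) {
        if (err) return context.fail(err);
        context.succeed('optimized ' + key);
      });
    });
};

Note that shell.exec() runs synchronously by default, which is exactly what we want here: the handler must not signal completion before the binary has finished.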

To avoid dynamic dependency issues, we made sure that the JPEGmini binary was statically linked against all of its dependencies, and verified that it worked well on an Amazon Linux EC2 instance before trying to get it working within the Lambda context. During development, the console.log function proved to be a very useful debugging tool, which helped figure out how things were behaving on the file system.

Tying it all together, the resulting function downloads an image from S3 to /tmp, optimizes the image using the native JPEGmini binary, and uploads the result back to S3. We configured an S3 event to trigger our Lambda function when new images are uploaded to the bucket, and we monitor the process via CloudWatch — serverless processing.
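For completeness, the trigger itself is just bucket configuration, not code. A notification configuration along these lines (the function ARN below is a placeholder) asks S3 to invoke the function for every newly created .jpg object:

{
  "LambdaFunctionConfigurations": [
    {
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:jpegmini-optimize",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {
        "Key": {
          "FilterRules": [{ "Name": "suffix", "Value": ".jpg" }]
        }
      }
    }
  ]
}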

Introducing Beamr Blogger – Dan Julius

Hi, my name is Dan Julius and I’m the VP of R&D at Beamr – and what you would call a typical tech geek… I joined Beamr in 2011 and, together with our great team, have been building the world’s best media optimization tools.

I studied computer science and math at Tel Aviv University and then completed my MSc in Computer Science at the University of British Columbia. I’m a hands-on developer, both low level (C/C++/Linux) and high level (Python/Cloud/Web), with experience in software development and architecture on various platforms. I’m a big fan of cloud technologies, an early AWS adopter, and heavily into image and video processing.

The need for media optimization is definitely on the rise, and leading the software development and cloud operations at Beamr is a great opportunity for me.

I’m looking forward to sharing technical information, knowledge and insights that I’ve learned while developing Beamr’s products, and look forward to hearing from you as well!

Feel free to chat with me via our Facebook, Twitter or LinkedIn accounts.


The #1 Challenge Content Providers, Studios, Web Publishers and Media Companies Share

The first and foremost challenge today for anyone delivering video is meeting customer expectations for a perfect viewing experience.

I’ll explain why: As more and more cord-cutters are dropping their cable and satellite TV subscriptions in favor of OTT online streaming services, they expect the same high-quality image they used to get from their traditional TV service. Unlike in the past, when Internet video viewers were accustomed to watching streamed videos on small monitors and compromised on quality, today’s viewing is done on big TV screens. And naturally, viewers want the same TV-like experience from their OTT providers.

The recent Conviva report demonstrates the effect of user experience on monetization and churn. According to the report, “consumers no longer simply expect a service to work, they demand that it provide a high-end experience.” Conviva declares that “2015 is the year of the OTT consumer”. In other words, today’s consumers will not only demand high-quality content, but will quickly quit a service that doesn’t deliver.

Clearly, today’s consumers are not as loyal as they used to be. In fact, according to the Conviva report, if the streaming service is not good, 75% of them will try switching to a different service in less than five minutes.

On top of that, by 2018, 84% of Internet traffic will be video content. And as more and more consumers stream more and more content, network congestion is only getting worse. On the one hand, viewers are already annoyed by long start times, recurring buffering events and other types of interruptions. On the other hand, network congestion will not be resolved overnight – and the effect this can have on content owners may be beyond repair.

Imagine having angry subscribers calling customer support to complain about your service, when in reality your service is great and it’s the network capacity (or should I say incapacity) that is the source of the problem. Or, finding yourself offering time-consuming explanations about network operation, only to realize that the customer doesn’t understand, and/or doesn’t care, and in any case still blames you.

Here’s what can be done: Let’s look at this problem from a different angle and focus on the video file itself. What if we decreased the size of video files, without compromising the quality of those videos in any way? How would that affect the main user experience metrics – video start time and rebuffering events? Hold onto that thought; we will explore it further in our next posts.

Video streaming users simply assume they’ll get the best viewing experience – always, wherever they are, and on any device they choose. So to stay competitive, you need to understand what your customers already know: it’s all about the viewing experience. That’s what they care about, and that’s what will keep them around.

Introducing Beamr Blogger – Dror Gill

Hi, my name is Dror Gill, and in the coming months you’ll be hearing quite a lot from me on this blog. So I thought it would be a good idea to introduce myself. I’ve been with the company since day one, back in 2009, and I wear two hats: CTO and VP Marketing. The only thing I need to remember is to wear the right hat at the right time, otherwise our algorithms might include some marketing messages by mistake, or our website might be filled with formulas and code…

My background is technical: I studied electronics engineering, and my first job was at IBM Research, where back in the 90s we pioneered the fields of Voice over IP and video streaming.  Then I joined a startup called Zapex, which developed video compression chips, and was soon acquired by Emblaze – another pioneering company, this time in the field of mobile video.  During this period I chaired the technical committee of the WMF (Wireless Multimedia Forum), a consortium of companies that defined the standards for video streaming over cellular networks.

After Emblaze, I worked for a few years as an independent consultant on multimedia technologies and markets, advising firms such as NEC, Samsung, Comverse, Radvision and Zoran on their product and technology strategies.  I also had the pleasure of being Entrepreneur in Residence at Giza Venture Capital, which gave me a fascinating inside view of how VCs actually operate.

And then I joined Beamr, and began my third pioneering journey: Creating a way to remove unnecessary bits from already-compressed photos and videos, without altering their formats or compromising quality.  I knew that if we succeeded, such a technology would create huge value across the media value chain – and luckily we did!  More about that soon…

Nice to meet you, and I look forward to hearing back through Facebook, Twitter or LinkedIn.

Optimization Begins Here

It all started in March 2009. Beamr’s Founder and CEO, Sharon Carmel, was on a plane back from a meeting with a top executive at one of the world’s largest technology companies. In the meeting, Sharon outlined his vision for storing photos in the cloud and delivering them to any device. The executive thought it would be cost-prohibitive for a company to host all of a user’s photos, due to the huge amount of cloud storage required. Reflecting back on the meeting, Sharon wondered: how could it be that in 2009, storage requirements for photos were still so high? The JPEG standard dates back to the early 1990s; hadn’t technology advanced enough since then to reduce the file sizes of photos?

A few weeks later, I met Sharon at Yossi Vardi’s Kinnernet Unconference. “I have an idea related to images,” he said, “and I need your help.” Soon enough, we started prototyping various methods for reducing the file size of photos – from storing a series of photos as a video clip, to taking techniques developed originally for video compression and applying them to still images. During the course of these experiments, we realized that the missing piece of the puzzle was a quality measure: a reliable metric that can judge whether a file with reduced size has the same perceived quality as the original file. And since none of the existing quality measures out there were good enough, we invented one of our own…

It turned out that this quality measure became the key ingredient in our image and video optimization solutions. We began by optimizing JPEG images, making their file size as small as possible without lowering their perceptual quality. The process was performed by encoding the original JPEG image at different compression levels, checking the value of our quality measure, and then picking the deepest compression level that still produced a perceptually identical image. Using this method, we were able to reduce the size of high-resolution photos by up to 5 times (an 80% reduction) with no quality decrease, and JPEGmini was born.
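In (heavily simplified) form, the loop looks something like the sketch below, where encodeJpeg() and perceptualQuality() are hypothetical stand-ins for the encoder and for our proprietary quality measure:

// A minimal sketch of the optimization loop described above. The encoder
// and quality measure are passed in as functions, since the real quality
// measure is proprietary; the level range and threshold are assumptions.
function optimize(original, encodeJpeg, perceptualQuality, maxLevel, threshold) {
  var best = original;
  for (var level = 1; level <= maxLevel; level++) {
    // Deeper compression at each pass
    var candidate = encodeJpeg(original, level);
    // Stop as soon as the result is no longer perceptually identical
    if (perceptualQuality(original, candidate) < threshold) {
      break;
    }
    best = candidate;
  }
  // The deepest level that still looked identical to the source
  return best;
}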

We started a free web service, where users could upload an unlimited number of photos and download the optimized versions. Communicating with our users, we learned that over half of them were web developers, using JPEGmini to reduce the size of web images in order to make their web pages load faster, especially on mobile devices. Soon enough, we launched our first product: JPEGmini Server, a Linux command-line app that enables websites and online photo services to automatically optimize millions of photos on-premises, without uploading them to our web service. Shortly after, we launched JPEGmini desktop apps for PC and Mac, which enable users to free up valuable disk space on their computers, share photos much faster and store more photos on Dropbox and other cloud services. To cater to the specific requirements of professional photographers, we developed JPEGmini Pro, which features higher resolution and performance and includes a plug-in for Adobe® Lightroom®.

With the successful release of the JPEGmini products, we were able to concentrate on video. It was clear that optimizing video files would solve a much bigger problem: video files are huge, and only getting bigger with 4K UltraHD resolutions. Video already accounts for over 50% of Internet traffic, and by 2018, 84% of Internet traffic will be video content. Both fixed and mobile networks are struggling to deliver high-quality video to their users during peak viewing hours. So we decided it was time for action: we took the basic principles of our image optimization technology, modified some aspects to better fit video content, and created Beamr Video, a perceptual video optimizer that can reduce bitrate by up to 50% with no visible quality loss. In Beamr Video, we applied our perceptual quality measure on a frame-by-frame basis, ensuring that each video frame was compressed to the smallest size possible, while still retaining a quality that was indistinguishable from the original video by a human viewer.
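Conceptually (and again heavily simplified), the video optimizer applies the same per-image loop to every frame, along these lines:

// Frame-by-frame sketch: recompress each frame to the smallest size the
// quality measure still accepts, reusing the optimize() sketch above.
function optimizeVideo(frames, encodeFrame, perceptualQuality, maxLevel, threshold) {
  return frames.map(function(frame) {
    return optimize(frame, encodeFrame, perceptualQuality, maxLevel, threshold);
  });
}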

Fast-forwarding to 2015, Beamr Video is already being used by major over-the-top service providers, including Sony Crackle and M-GO, a joint venture of Technicolor and DreamWorks Animation. We are happy to see that our customers are reporting significant user experience improvements after deploying Beamr Video in their video processing workflow.

It looks quite simple when laid out in a short blog post, but naturally a lot of hard work and intensive research was required to get from the initial ideas we formed in 2009 to the mature technology and products we have today. During the development process we filed for 60 international patents, 5 of which have already been granted. Last year we raised $9.5M from Marker LLC, Innovation Endeavors and private investors, and we plan to move full steam ahead, continuing to build the world’s best media optimization tools.


Learn How Beamr Helped M-GO Enhance User Experience

It’s no secret that customers today demand the best possible viewing experience when streaming videos online. You can offer the best shows, but if the streaming isn’t smooth, customers are going to lose their patience and leave.

If there’s a company out there that understands this, it’s M-GO. Their goal is to provide a premium viewing experience. How can a company measure streaming video experience? Well, there are three key metrics:
1) Video start time
2) Rebuffer events
3) Quality of video

Would you like to know how M-GO improved all three key metrics? The full results of the M-GO case study analyzing the benefits of Beamr Video optimization can be downloaded here.