AddLive Blog

For platform updates and company news.

AddLive has been acquired by Snapchat

Kavan Seggie - May 05, 2014

We are very happy to announce that AddLive is joining Snapchat.

While we have no immediate plans to add new customers to the platform, we intend to continue providing our ongoing video chat services to some of the most innovative companies in the world.

Our special thanks go out to all our early customers and everyone who has helped us along the way. We look forward to continuing our journey with Snapchat.

AddLive - Stable channel update v3.0.1.X

Ted Kozak - March 11, 2014

We are proud to inform you that the AddLive SDK v3 has received its first update over the stable channel. This update does not bring any significant new features, but it improves the overall reliability of the platform.

For all the changes, please refer to the documentation provided:

In case of any questions or comments please contact us.

As always, happy coding!

New AddLive Logo

Kavan Seggie - February 25, 2014

We have been thinking about refreshing the AddLive logo for some time now.

With everyone else going all 'flat', we felt left out and so the plan was to lose the textures, drop the drop shadows and remove anything that references the analogue world.

But following the crowd is not what AddLive are about, and so Ted had an idea. He asked his 7-year-old daughter, Mary, to mock us up something sweet. The results are nothing short of spectacular!

AddLive New Logo

So which one is your favorite? The one with the most votes will become the AddLive logo!

Here's to the Crunchies Misfits

Kavan Seggie - February 9, 2014

Crunchies 2013

Last night some members of the AddLive team attended the Crunchies. It was a glam affair with many of tech's royalty in attendance: Marissa Mayer, Ron Conway, Travis Kalanick, Drew Houston, Evan Spiegel, Tom Preston-Werner and many others.

The night began with a hilarious musical opener, referencing everything geeky and startup from 2013 in the song 'Technical Issues' - from the Nest acquisition, to last year's Yahoo! acqui-hires of 'failing' startups, to Uber's surge pricing. It was my personal highlight and I have embedded it at the bottom of this post.

Throughout the night of self-congratulation, I couldn't help but think about the thousands of entrepreneurs who were hard at work on their startups and not at the awards. And not just trendy Silicon Valley types, but founders around the world of all ages and races: from the tech startup'er in Berlin to the African startup'er making shoes for her village. Those who don't do this for accolades but because it is what they love to do. Or perhaps, in an even purer form, they are doing it out of necessity.

Every week at AddLive I have the pleasure of speaking to entrepreneurs looking to do something exciting with RTC. It is a true privilege, and the best part of my job.

So to reference one of the greats:

"Here’s to the crazy ones, the misfits, the rebels, the troublemakers, the round pegs in the square holes… the ones who see things differently - they’re not fond of rules… You can quote them, disagree with them, glorify or vilify them, but the only thing you can’t do is ignore them because they change things… they push the human race forward, and while some may see them as the crazy ones, we see genius, because the ones who are crazy enough to think that they can change the world, are the ones who do."

WebRTC Developer Meetup - San Francisco

Kavan Seggie - February 4, 2014

WebRTC Developer Meetup - Justin Uberti

On Tuesday night the Google Chrome team hosted the first WebRTC Developer Meetup at the UberConference offices. Chrome team members in attendance included Harald Alvestrand, Justin Uberti, Serge Lachapelle and Vikas Marwaha.

The UberConference founders kicked the evening off with a short introduction on how UberConference is using WebRTC. Then Justin and Serge took us through a few slides, taking questions as they went.

Justin and Serge were very open and gave us a few real nuggets of information about the future of WebRTC. Unfortunately I can't say what they were on our blog - you'll have to come to the next meetup :)

One important takeaway was that the Google Chrome team appear to believe they have enough traction with WebRTC and are now taking a bit of a back seat with regard to standardization, focusing instead on the Chrome WebRTC implementation. Their goal is to be 'best-in-class' compared to the other browsers, which they will no doubt achieve.

WebRTC 2.0 was also discussed. This will be SDP-free WebRTC, something that we at AddLive look forward to helping shape. But it will take time, as it needs to go through further industry consultation. More info can be found on the W3C ORCA site.

If you did miss this one, I would highly recommend you attend the next one!

WebRTC Developer Meetup - Slides

AddLive v3 in Stable! - Stable channel update v3.0.0.31

Ted Kozak - February 3, 2014

We are extremely proud to inform you that the AddLive SDK v3 has finally reached a stable state. The main feature is interoperability with Chrome WebRTC, achieved through numerous changes in the AddLive RTC stack that make us WebRTC compliant. As WebRTC continues to change and mature, we will make sure we remain in line with the evolving standard. Support for Firefox WebRTC is coming soon.

Information about how to use the v3 web SDK, as well as download links for the native SDKs can be found here:

The sample applications and tutorials were updated to work with the v3 release by default (master branch). The v2-compatible versions are also available, in the v2 branch.

Please note that this release does not affect any existing applications powered by the v2 version of the AddLive SDK. We will maintain the v2 infrastructure for as long as it is needed.

That said, we strongly advise our customers to migrate to v3. The v3 API is fully backwards compatible with v2 for the Web SDK, and almost completely so for the native SDKs.

For more details about the migration, please refer to the following documents:

Important: As Chrome WebRTC is still maturing, there are numerous features that AddLive cannot support with native Chrome WebRTC. For change logs corresponding to the latest stable release, please check the API docs for each SDK:

In case of any questions or comments please contact us.

As always, happy coding!

Beta channel update v3.0.0.20

Ted Kozak - January 2, 2014

We are proud to announce that the AddLive SDK v3.0.0.20 is available for download via the beta channel.


  • Added support for speech activity reporting (via the monitorSpeechActivity API)
  • Restored the P2P communication (native SDKs only)
  • Updated libvpx to version 1.3
  • Several bug fixes and improvements.
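The new monitorSpeechActivity API reports which participants are currently speaking. A minimal sketch of how an application might consume such reports - the event shape and field names below are assumptions for illustration, not taken from the AddLive docs:

```javascript
// Pick out "active" speakers from a speech-activity event, e.g. to highlight
// their video tiles. The { activity: { userId: level } } shape is assumed.
function highlightSpeakers(evt) {
  return Object.keys(evt.activity)
    .filter(function (id) { return evt.activity[id] > 127; });
}

// A fake event standing in for what monitorSpeechActivity might deliver:
var speakers = highlightSpeakers({ activity: { '1001': 200, '1002': 40 } });
console.log(speakers); // → ['1001']
```

In a real integration, the handler would be registered via the SDK's service listener after enabling speech-activity monitoring.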

All the SDKs can be downloaded from here:

Happy coding with AddLive!

Happy 2014!

Kavan Seggie - December 31, 2013

Happy 2014!

Many thanks for all your support over 2013. It has been a great year for AddLive and we owe much of our success to you. We now have over 30 customers and are doing over 2MM streamed minutes per month on the platform.

In 2014 we will remain absolutely focused on the platform and supporting our customers. We will be bringing you great new features and improving the old ones, and also increasing our support coverage.

All the best for 2014, and we look forward to some great video and voice applications next year and beyond!

AddLive SDK v3 is out! - Beta channel update v3.0.0.0

Ted Kozak - December 13, 2013

We are proud to announce that we have just released v3 of the AddLive SDK through the beta channel.

This new release is a huge milestone for AddLive, as we have finally made our platform fully WebRTC compliant. This allows your application to leverage the plug-in free experience provided by Chrome and Firefox (soon), while still being able to offer your services to users who prefer Internet Explorer, Safari or mobile applications.

The number of changes the platform required to provide this interoperability was huge, and forced us to break backwards compatibility with the previous versions of the SDKs. It was a difficult decision, but a necessary step in order to provide you with an SDK that is easy to use and reliable.

It is important to know that the v3 Web SDK will _not_ be released through the existing URLs. We've provided a separate location for the JavaScript SDK, which allows you to choose an appropriate time to migrate your application to the new SDK.

We have also taken all the necessary steps to make the transition as seamless as possible. For details on the migration for each platform, please refer to the corresponding migration guide:

Also, to better understand the changes, please review the change logs:

Beyond the WebRTC interoperability, the platform also provides several new features:

  1. Automatic reconnects on all supported platforms. For the Web SDK this functionality is now mandatory (there is no enableReconnects flag)
  2. Support for a completely new platform: the native desktop OS X SDK. The OS X SDK was derived from the iOS one, so developers working with the iOS SDK will feel at home.
  3. The Obj-C SDK now uses blocks, which greatly eases development
  4. On the iOS SDK, the old ALVideoView was completely replaced by the ALVideoView2 interface, which now takes over the ALVideoView name. This is a breaking change at the API level: if you were using the already-deprecated ALVideoView, you need to migrate your code; if you were already using ALVideoView2, you just need to change your references from ALVideoView2 to ALVideoView
  5. The VideoView class in the Android SDK was updated to use the same API as ALVideoView on the iOS SDK
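With reconnects now automatic and mandatory on the Web SDK, an application mostly needs to reflect the connection state in its UI. A minimal sketch of such state tracking - the callback names here are hypothetical placeholders, not the SDK's actual listener names:

```javascript
// Track connection state so the UI can show a "reconnecting" indicator.
// onConnectionLost / onSessionReconnected are assumed names for illustration.
function makeConnectionStateTracker() {
  var state = 'connected';
  return {
    onConnectionLost: function () { state = 'reconnecting'; },
    onSessionReconnected: function () { state = 'connected'; },
    state: function () { return state; }
  };
}

var tracker = makeConnectionStateTracker();
tracker.onConnectionLost();
console.log(tracker.state()); // → 'reconnecting'
tracker.onSessionReconnected();
console.log(tracker.state()); // → 'connected'
```

In a real application these callbacks would be wired to whichever service-listener events the SDK exposes for connection loss and re-establishment.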

Please note that you can expect more updates to the beta channel before this release goes to stable.

We've been working extremely hard over the last months to provide the best possible experience on all fronts, for end users and developers alike. We truly hope that you'll enjoy working with the AddLive SDK v3.

AddLive Wins Best WebRTC Tool - WebRTC Expo Santa Clara

Kavan Seggie - November 22, 2013

We are happy to announce that AddLive has won the award for Best WebRTC Tool at the WebRTC Expo in Santa Clara. This award is a small validation of our goal to create the world's most advanced video and voice platform.

Thanks to all our customers and partners for supporting us. We look forward to helping you create some awesome products!

See a recording of our demo below.

Beta channel update v2.1.5.0

Ted Kozak - October 29, 2013

We'd like to inform you that the AddLive platform was updated on the beta channel to version v2.1.5.X.


Android SDK

  • Fixed issue with the setAllowedSenders API, limiting reception of both media types even though only video was requested.


iOS SDK

  • Fixed issue with the setAllowedSenders API, limiting reception of both media types even though only video was requested.
  • Fixed propagation of the quality index statistic; the value was always reported as 0.
  • Fixed a leak of ALServiceListener implementations passed to the [ALService addServiceListener] method. Releasing the platform left these objects leaked.

Browser plug-in

  • Fixed issue with the setAllowedSenders API, limiting reception of both media types even though only video was requested.
  • Fixed the installer on OS X 10.9 - the HiddenItem.localized folder was left visible.
  • Removed invalid audio devices from the list of available ones (e.g. "default (Built-in Microphone)").

The latest native SDKs can be downloaded from here:

Since this is just a maintenance release, it will be pushed through the stable release channel on Thursday.

Stable channel update v2.1.1.0

Ted Kozak - September 16, 2013

Over the weekend we released an update to the AddLive Platform through the stable release channel. Version v2.1.1.0 is a maintenance release fixing two issues: the first could lead to a crash on all supported platforms, and the other could crash the iOS SDK when entering background mode.

The updates on the plug-in side will roll out later this week, after we receive confirmation from our white-label partners.

Happy coding!

Platform update v2.1.3.0

Ted Kozak - July 17, 2013

We are pleased to announce that the beta release channel of the Platform was recently updated to version v2.1.3.0. This release brings major improvements to video streaming, the quality control infrastructure, and video device configuration. The video processing pipeline was almost completely redesigned, which allowed us to add a few useful features. For a complete changelog please refer to the points below.

  • Support for HD resolution.
    If the hardware allows, your application will be able to stream video feeds with a picture size of up to 720p. HD streams are enabled only in P2P mode or in broadcasting scenarios where only one peer publishes an HD stream.
  • More aggressive adaptation. With this update, we have completely changed how the low and high video layers are employed. Prior to this release, the client SDK initially published only the low video feed; the high-quality one was enabled after a short period during which the SDK ensured that it wouldn't cause network congestion. From now on, the AddLive SDK takes a more aggressive approach: the high-quality feed is published first, and the quality is then reduced if required, or improved if possible. This lets us provide a much better experience to the end user almost instantly.
  • Low-quality feed optimisation. Another improvement relates to how the Platform uses the low-quality video feed. Up until now, the low video feed was always published during the session. Starting from this version, it is enabled only when required - that is, when one of the peers experiences downlink congestion.
  • Simplified connection descriptor. The part of the connection descriptor used for video configuration was simplified considerably. The connection descriptor now uses a single videoStream attribute that describes the high-quality video feed; the configuration of the low layer is derived from the high one. The attributes of the videoStream attribute also changed, and now comprise just four parameters: maxWidth, maxHeight, maxFps and useAdaptation. The maxWidth, maxHeight and maxFps attributes have the same meaning as previously. The useAdaptation attribute is a boolean flag allowing applications to enable or disable the adaptation infrastructure, which is useful when the application vendor has complete control over the network environment in which the Platform will be used. For more information, please refer to the API docs section for the connect method:
  • New API: reconfigureVideo. Thanks to the new reconfigureVideo API, your application can update the configuration of the published video stream on-line, without restarting the session. This is crucial when implementing a dynamic video conference layout, where the size of video feeds changes with the number of participants on-line. Of course this is just a single use case; with this API the flexibility of the platform is pushed to another level, where your imagination is the only limit. There is also a new tutorial covering this, available in the main tutorials repository:
  • Screen sharing improvements. The approach to quality control of screen sharing feeds was fine-tuned and improved. This allows the platform to stream screen sharing feeds at even higher resolutions, and also solves an issue where the screen sharing feed was blank for remote peers.
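The simplified video configuration described above can be sketched as follows. Only the four videoStream parameters come from this post; the scopeId attribute and the deriveLowLayer helper are illustrative assumptions showing how a low layer might be derived from the high one:

```javascript
// Sketch of the simplified v2.1.3.0 connection descriptor (illustrative).
var descriptor = {
  scopeId: 'my-scope',        // assumed attribute, for illustration
  videoStream: {
    maxWidth: 1280,           // up to 720p when the hardware allows
    maxHeight: 720,
    maxFps: 24,
    useAdaptation: true       // let the platform adapt quality automatically
  }
};

// The post notes the low-quality layer is derived from the high one;
// a naive illustration of such a derivation (not the platform's real rule):
function deriveLowLayer(high) {
  return {
    maxWidth: Math.round(high.maxWidth / 4),
    maxHeight: Math.round(high.maxHeight / 4),
    maxFps: high.maxFps
  };
}

console.log(deriveLowLayer(descriptor.videoStream));
// → { maxWidth: 320, maxHeight: 180, maxFps: 24 }
```

The descriptor would be passed to the connect method as usual; the real low-layer derivation is internal to the platform.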

Quality sneak peek

Check out this screenshot for a quick peek at the new platform's capabilities (yip, that's me working 'till late to make sure the beta release is a huge success).

Next steps

Please note that since this release involved a lot of internal changes, it will have to be tested a bit more in the beta channel before being released into the stable one. The beta SDK is stable, but some features may require fine-tuning to ensure that the platform behaves properly across the wide range of hardware and network configurations found in the wild. You can expect more beta releases before the code reaches stable.

Once we are happy to release it through the stable channel, it will be announced in a separate post.

Hope you will enjoy working with the latest version of our SDK. Big kudos to the AddLive core development team - great work guys.

Happy coding with AddLive!

Beta channel updated to v2.0.2.7

Ted Kozak - July 5, 2013

We are proud to announce that the beta channel was recently updated to version v2.0.2.7 for the native SDKs and v2.2.10 for the JavaScript bindings.

This version brings minor API additions along with several quality control improvements and bug fixes.


1. Added support for audio streams to the setAllowedSenders API.
2. Added an option to forcibly disable proxy auto detection to the JavaScript SDK.
3. Added the scopeId attribute to the MessageEvent allowing application to identify from which active connection given message came.
4. Several small adaptation algorithms improvements and fixes.
5. Fixed how unhandled events are logged in the JavaScript API
6. Added support for OS X 10.9
7. Removed the bundled jQuery from the JS bindings
8. Added a function to safely dispose a renderer: disposeRenderer
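Item 3 above adds a scopeId to each MessageEvent, so an application holding several active connections can tell which connection a message came from. A minimal dispatcher sketch - the event shape here is an assumption based on this changelog entry, not the documented API:

```javascript
// Route incoming messages to per-scope handlers using the new scopeId field.
function makeDispatcher() {
  var handlers = {};
  return {
    on: function (scopeId, fn) { handlers[scopeId] = fn; },
    dispatch: function (evt) {
      // evt is assumed to look like { scopeId: '...', data: '...' }
      var fn = handlers[evt.scopeId];
      if (fn) fn(evt.data);
    }
  };
}

var received = [];
var dispatcher = makeDispatcher();
dispatcher.on('scope-a', function (msg) { received.push(msg); });
dispatcher.dispatch({ scopeId: 'scope-a', data: 'hello' });
dispatcher.dispatch({ scopeId: 'scope-b', data: 'ignored' }); // no handler
console.log(received); // → ['hello']
```

In practice, dispatch would be called from whatever message-event listener the SDK exposes.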

In addition to the above, this version puts another tool in your hands to help with the integration effort on the web platform: the addlive-ui library. AddLive UI is a widget gallery that will contain several easily embeddable widgets, allowing you to bootstrap your application quickly.

As of this version, the UI library offers a SetupAssistant widget that eases the user onboarding process. For more information on how to use it, please refer to our GitHub JS tutorials repository: Tutorial1_Platform_init_with_SA.

Since this release brings just minor improvements, we plan to push it to production before the 12th of July.

Happy coding with AddLive!

The Day the WebRTC Revolution Began

Kavan Seggie - March 26, 2013

EC2013 WebRTC Stream

Monday 18th March 2013 was, in my opinion, the biggest day in WebRTC's history. It was more important than the day WebRTC landed in Chrome, and the day that Chrome and Firefox interoperated. I say this because it was the first time that WebRTC entered the imagination of a large group of business decision makers who are outside of the tech industry.

For those of you who do not know, Monday 18th March was the WebRTC Conference-within-a-conference at Enterprise Connect (EC13).

The day was hosted by Brent Kelly and Irwin Lazar. The morning started with a highly informed talk by Jan Linden of Google and Cullen Jennings of Cisco, both pivotal and highly respected engineers working on WebRTC. Despite it being a fairly technical overview, the room was completely full. And as the talk continued, more and more Enterprise Connect attendees piled in.

About 45 minutes into the session the conference organizer, Eric Krapf, asked Cullen and Jan to pause. We were told that there were too many people and that they were going to extend the room size. In a few minutes a wall was removed to reveal 100 more seats that were all quickly taken.

This buzz continued throughout the day; the AddLive and other 'WebRTC Innovators' demos were delivered shortly after 1pm to a still-packed room.

The interest that day was a surprise to all of us involved in WebRTC, but we had no idea what was going to happen over the following days.

For those of you who don't know about Enterprise Connect, it is the single most important enterprise communications conference globally. It is attended by 5,000 enterprise professionals, including CIOs of large enterprises, CEOs of vendors, and also smaller disruptive companies like us at AddLive and our friends at Plivo.

The likes of Microsoft, Cisco and Avaya presented the keynotes. They shared their next generation products and their vision of the future and the panels discussed topics like the Cloud, BYOD, business models and the communication requirements of the enterprise.

But what was remarkable was that every single keynote, panel, and lecture had either a dedicated WebRTC segment or the conversation drifted to WebRTC. It became the hot topic of EC13. WebRTC was being talked about in the corridors, during lunch and even over evening drinks.

When I introduce AddLive, I introduce WebRTC: first the term, then the technology, then the implications. On Monday this was still the case, but by Wednesday things had changed. Although I still needed to explain the tech and what it means to businesses, when I mentioned WebRTC people's faces lit up and they began asking me questions.

So when we look back at the WebRTC journey, I believe that Monday 18th March was the day when the business world woke up to WebRTC. The day that WebRTC entered the imagination of the enterprise.

AddLive to Demo at the Enterprise Connect Conference in Orlando

Kavan Seggie - March 16, 2013

AddLive has been selected to demo at EC's WebRTC Conference-within-a-conference on Monday 18th March.

The session is 'Innovation within WebRTC' and runs from 1pm to 2pm EST in the Osceola 5 room.

More details can be found here:

Look forward to seeing you there!

“Real Time Communications Made Easy: AddLive API” - ProgrammableWeb

Kavan Seggie - February 1, 2013

A nice article by Candice McMillan on our video and voice APIs.

"You could describe AddLive as a supplement to WebRTC; it takes the technology a step further. WebRTC only supports web browsers, but AddLive expands on this and also allows for the development of native iOS, Android, Windows and Mac OS X applications. Where WebRTC is solely a peer-to-peer technology, AddLive extends this to enable multiparty conferencing. It also supports screen sharing, firewall traversal, usage and quality analytics, and enterprise level support."

The rest is here:

“AddLive offers peace in a crazy world of WebRTC” - TMCnet

Kavan Seggie - January 29, 2013

Our goal at AddLive is to enable the WebRTC community. When WebRTC started back in 2011, we realised that it was going to be a while before WebRTC was ready in all browsers, and that there were numerous features we wanted that WebRTC wasn't going to support.

Fast forward two years and we are now offering our clients easy access to this technology, not only in browsers but also in native WebRTC iOS and Android SDKs. We save them time and money, allowing them to focus on their products and their business.

Steve Anderson, a contributing writer for TMCNet, just wrote a post summarizing how we are helping the WebRTC community.

I have added the first paragraph below, the rest you can find at:

With constant movement going on in the Web-based real-time communications (WebRTC) market, it leaves a lot of room open for potential competitors to enter the field and show off just what they can do. The folks at AddLive are looking to not only provide a powerful new solution, but also throw in a little extra information about the idea of WebRTC as a whole, and what forces are impacting this constantly changing market.

AddLive Tutorial on

Kavan Seggie - January 22, 2013

Ted Kozak, our CTO, has just written a great AddLive JS API tutorial for

It is part of a two-part tutorial that teaches you how to create a 1-1 video room. You can have a look at a working prototype on JSFiddle here.

A big thanks to the guys there. If you are looking for an engineering job, or looking to employ an engineer, make sure you give them a shout!

About: it is where hackers find jobs. Every month, tens of thousands of the best independent software developers from the open source community use it to find high-quality freelance and full-time jobs.

Happy 2013 and Welcome to the New AddLive Site!

Kavan Seggie - January 17, 2013

Happy New Year to All!

We have been working super hard on getting the new website up, so apologies for the lack of posts. We are now back online and you can expect many great WebRTC articles and AddLive updates.

The new website will be changing quite a bit over the next few months. We have a lot of exciting additions we will be making, especially to the brand new AddLive Portal.

The AddLive Portal allows our customers to:

  • Check the usage of their application, including:
    • Dashboard with summary statistics
    • Individual session listings with date, length and number of participants
    • Minutes summary per day
  • Purchase additional minutes

All the best for 2013 from the AddLive Team!

Platform update to v1.18.0.2

Ted Kozak - November 23, 2012

We are really proud to announce that the beta channel of the AddLive platform has been updated on all fronts - JavaScript, Desktop SDK, iOS SDK and Android SDK. This is a major update that uses the AddLive brand naming convention throughout the API. To ensure backwards compatibility, this release uses a completely new distribution channel, working alongside the current Cloudeo SDK release endpoints; there is thus no need to worry that it will break your existing applications.

Starting from this release, we'll be using new release endpoints powered by Amazon CloudFront. All the SDKs and documentation resources will be available via the domain. The JavaScript SDK can be used by embedding scripts from the following locations:

The native SDKs can be downloaded from the following locations:

Additionally, for Apache Maven users, from now on we'll also be releasing our Android SDK via Maven dependency management. To use Maven with the beta channel, use the following repository:

The stable channel repository can be used with the following declaration:

After defining the AddLive repository, you can use the Android SDK by declaring the following dependencies:

The API itself changed slightly. On the JavaScript side, the CDO namespace was renamed to ADL, and the CloudeoService and CloudeoServiceListener classes are now AddLiveService and AddLiveServiceListener respectively. If you're not ready to do the migration right now, the updated JS SDK is backwards compatible with the Cloudeo SDK, so your existing application should keep working.
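During migration, a tiny compatibility shim can keep Cloudeo-era references working while you rename them. This is purely illustrative - the ADL object below is a stand-in for the real SDK, which is loaded from AddLive's hosted scripts:

```javascript
// Stand-in for the real ADL namespace (illustrative only; the actual
// AddLiveService / AddLiveServiceListener come from the hosted JS SDK).
var ADL = {
  AddLiveService: function AddLiveService() {},
  AddLiveServiceListener: function AddLiveServiceListener() {}
};

// A migration shim: alias the old CDO namespace and Cloudeo* names to the
// renamed ADL equivalents so legacy code keeps running.
var CDO = ADL;
CDO.CloudeoService = ADL.AddLiveService;
CDO.CloudeoServiceListener = ADL.AddLiveServiceListener;

var legacyService = new CDO.CloudeoService();
console.log(legacyService instanceof ADL.AddLiveService); // → true
```

Once all references are renamed, the shim can simply be deleted.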

Also please note that this update does not affect any existing applications; to benefit from it, one needs to switch to the new SDK. Since this is our definitive goodbye to the Cloudeo brand, this update brings a plug-in that uses a different install location, different labels and a different mime-type. This means that after switching to the new SDK, end users will actually be using a completely new plug-in, which will have to be installed.

The native binding APIs were also changed slightly (renamings); for more details please refer to the documentation:

Beyond the significant changes above, this update also brings a few smaller improvements:

  • We now can provide white labelled installers
  • Authentication details are now required. The previous mock-authentication scheme was removed from the JavaScript SDK. For backwards compatibility with the Cloudeo plug-in, the streamer side still allows unauthenticated connections, but this will change soon. For more details about AddLive authentication, please refer to the documentation:
  • Screen sharing sources listing contains only valid windows
  • A few JavaScript API simplifications:
    • the application ID can now also be defined during platform initialization
    • the connection descriptor now has sane defaults. The only required attributes are authDetails and scopeId
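To illustrate the sane defaults mentioned above, here is a sketch of how such defaulting might behave. Only authDetails and scopeId being required comes from this post; the default attribute names below are assumptions for illustration:

```javascript
// Apply assumed defaults to a connection descriptor, enforcing the two
// required attributes named in this release note.
function withDefaults(descriptor) {
  if (!descriptor.authDetails || !descriptor.scopeId) {
    throw new Error('authDetails and scopeId are required');
  }
  // autopublishVideo / autopublishAudio are hypothetical default names.
  return Object.assign(
    { autopublishVideo: true, autopublishAudio: true },
    descriptor
  );
}

var conn = withDefaults({ scopeId: 'demo', authDetails: { userId: 42 } });
console.log(conn.autopublishVideo); // → true
```

Everything beyond the two required attributes can then be overridden explicitly when the application needs non-default behaviour.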

Now that we finally have this painful but necessary step behind us, we can focus on adding more improvements to the platform. Stay tuned - we're currently working on some really cool features!

W3C’s position on RTCWEB mandatory to implement video codec

Kavan Seggie - November 13, 2012

An interesting email just came through on the WebRTC discussion list. It was written by Mr Internet himself, Tim Berners-Lee, and is titled "W3C's position on RTCWEB mandatory to implement video codec".

In the email, Tim Berners-Lee states that "W3C believes that there should be a royalty-free standard web infrastructure which should include Real Time Communications on the Web" and that "we encourage the Working Group to work toward technologies that implementers can be confident are available on a royalty-free basis and W3C is willing to work with the IETF in achieving this".

This is clearly a shot in the arm for VP8 and its supporters.

Email below:

We understand that the IETF rtcweb Working Group is expecting to select a mandatory-to-implement video codec.

W3C believes that there should be a royalty-free standard web infrastructure which should include Real Time Communications on the Web.

W3C is not expressing any preference among the codecs based on the technical merits of the proposals before the working group. We wish to bring a few background facts to participants' attention.

In 2011 W3C approached MPEG-LA, the licensing authority for the generally-known patent pool for H.264, with a proposal for royalty-free licensing of the H.264 baseline codec, to be referenced for use by the HTML5 video tag.  MPEG-LA was receptive to this proposal; however, the proposal was turned down by a narrow margin within the MPEG-LA membership.

Whatever codec the rtcweb Working Group might choose, we encourage the Working Group to work toward technologies that implementers can be confident are available on a royalty-free basis and W3C is willing to work with the IETF in achieving this.

For Tim Berners-Lee, Director, 
and Jeff Jaffe, CEO, Philippe Le Hégaret 
and Thomas Roessler, IETF Contacts for W3C

Microsoft’s CU-RTC-Web better at Calling Gateways

Kavan Seggie - November 12, 2012

Martin Thomson of Skype/Microsoft recently sent an email with an interesting use case: calling gateways. He talks about dialing into a call centre right from the comfort of your browser, which would be very cool!

He argues that Microsoft's WebRTC proposal, CU-RTC-Web, is better at doing this than the current WebRTC proposal.

It will be interesting to see where we end up with the two proposals. The worst-case scenario for us developers would be if Microsoft and Google don't agree and we end up with two very separate definitions of 'WebRTC'. But as Martin says, 'We [Microsoft] are still committed to the W3C process'.

Fingers crossed we'll have agreement between the major browser vendors soon!

Full post is below.

Since sharing CU-RTC-Web – our proposed design for the Web-RTC API – we have been experimenting with the API and its capabilities.  Today, we’d like to share a simple example application, to demonstrate the API capabilities.

We are still committed to the W3C process.  Our API proposal provides concrete recommendations for many of the open issues in the working group, even if it does still forgo the use of SDP.

Calling Gateways

Calling a legacy device using Web-RTC is a use case that we anticipate will be important to many users.  Dialing in to an existing call center through a gateway is likely to be a common scenario as companies Web-RTC-enable their sites.

In this simple configuration, an existing gateway services calls originating from the PSTN.  In order to interoperate with Web-RTC clients, this gateway needs to support ICE lite and G.711.

  • ICE or ICE lite is a critical part of Web-RTC security solution, even if this scenario does not require the NAT traversal capabilities it provides.  Full ICE is rarely implemented by gateway devices. This scenario assumes the gateway only supports ICE lite.  ICE lite was specifically designed for scenarios just like this.
  • Web-RTC implementations are required to support the G.711 audio codec.  It is expected that many legacy systems will not support the other audio codec that Web-RTC implementations require: Opus.
In this example deployment, the web server provides the following configuration to clients:
  • Transport details: IP and port
  • ICE details: username fragment and password
  • Secure RTP keys for both inbound and outbound media

The client then takes that information and establishes a bi-directional, secured RTP session with the gateway. The gateway forwards media between the call center and web client.
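To make the flow concrete, here is a hypothetical sketch of what such a /config payload could look like (the field names mirror the client code later in the post; all values are invented):

```javascript
// Hypothetical gateway configuration document (illustration only).
// Field names follow the client code in this post: port, ice,
// local.sdes and remote.sdes; keys and salts are base64 encoded.
var gatewayConfig = {
    port: { ip: '', port: 5004 },          // gateway transport details
    ice: { ufrag: 'gwUfrag', pwd: 'gwPassword' },     // ICE lite credentials
    local: {                                          // keys for outbound media
        sdes: { key: 'YWJjZGVmZ2hpamtsbW5vcA==', salt: 'c2FsdHNhbHRzYWx0c2E=' }
    },
    remote: {                                         // keys for inbound media
        sdes: { key: 'cXJzdHV2d3h5ejAxMjM0NQ==', salt: 'dGxhc3RsYXN0bGFzdGw=' }
    }
};
```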

JavaScript for this simple application is shown below.

API Refinements

We’ve also made some improvements to our API proposal.  As we continue to gain experience with its use, we discover small errors and omissions.  An updated proposal is attached to this email.

We’ve also listened to the feedback we’ve received on our initial proposal.  The most common concern voiced was that asking application developers to implement ICE, or something like it, was too difficult.

In response to concerns over ease of use we’ve added the RealtimeTransportBuilder interface. Our original view was that third party libraries would be created to support this capability.  This is still possible with the API provided – our prototype of RealtimeTransportBuilder is implemented using nothing more than the RealtimePort and the RealtimeTransport APIs.

RealtimeTransportBuilder is designed to make it as easy as possible to construct a peer-to-peer transport. Application developers who use this interface will benefit from a browser-based implementation of NAT traversal. The application needs to provide a channel for exchanging port (or candidate) information between peers.  RealtimeTransportBuilder will do all of the hard work, producing a RealtimeTransport.

The following example code demonstrates how easy it is to use a RealtimeTransportBuilder:

var options = { transport: transportOptions, stun: stunServer };
var builder = new RealtimeTransportBuilder(options);
builder.onport = function(e) {
    signaling.send('port', e.port);
};
signaling.onport = function(port) {
    builder.addRemote(port);  // hand ports received from the peer to the builder
};
builder.onconnect = function(e) {
    gotTransport(e.transport);  // at which point streams can be added, etc...
};

ICE Lite Gateway Client Code

The source code for our client is included below:
(function() {
    'use strict'; /*jshint browser:true*/
    var gatewayConfig;
    var localPorts;
    var localPort;
    var transport;
    var localMedia;

    function buildDescription(ssrc) {
        var g711Codec = {
            type: 'audio/PCMU',
            clockRate: 8000,
            packetType: 0
        };
        var stream = {
            ssrc: ssrc
        };
        return new RealtimeMediaDescription({
            streams: [stream],
            codecs: [g711Codec]
        });
    }

    function startOutgoingStream() {
        if (transport && localMedia) {
            var localDescription = buildDescription();
            var outgoingStream =
                    new LocalRealtimeMediaStream(localMedia.audioTracks.item(0),
                                                 localDescription, transport);
        }
    }

    function discoveredSsrc(e) {
        var remoteDescription = buildDescription(e.ssrc);
        var rtStream = new RemoteRealtimeMediaStream(remoteDescription, transport);
        var incomingStream = new MediaStream();
        document.getElementById('output').src = URL.createObjectURL(incomingStream);
    }

    function gotTransport(err, t) {
        transport = t;
        transport.addEventListener('unknownssrc', discoveredSsrc);
        startOutgoingStream();
    }

    function gotAudio(stream) {
        localMedia = stream;
        startOutgoingStream();
    }

    function portChecked(e) {
        var transportOptions;
        if (!localPort) {
            localPort =;
            transportOptions = {
                mode: "srtp",
                outboundSdes: gatewayConfig.local.sdes,
                inboundSdes: gatewayConfig.remote.sdes
            };
            RealtimeTransport.createTransport(localPort, gatewayConfig.port,
                                              transportOptions, gotTransport);
        }
    }

    function checkPorts() {
        if (localPorts && gatewayConfig) {
            localPorts.forEach(function(port) {
                port.addEventListener('checksuccess', portChecked);
                // attempt a connectivity check against the gateway endpoint
                port.check(gatewayConfig.port);
            });
        }
    }

    function gotPorts(err, ports) {
        localPorts = ports;
        checkPorts();
    }

    function gotGatewayConfig(e) {
        gatewayConfig = JSON.parse(;
        gatewayConfig.local.sdes.key = b64.Decode(gatewayConfig.local.sdes.key);
        gatewayConfig.local.sdes.salt = b64.Decode(gatewayConfig.local.sdes.salt);
        gatewayConfig.remote.sdes.key = b64.Decode(gatewayConfig.remote.sdes.key);
        gatewayConfig.remote.sdes.salt = b64.Decode(gatewayConfig.remote.sdes.salt);
        checkPorts();
    }

    function go() {
        var xhr = new XMLHttpRequest();'GET', '/config', true);
        xhr.addEventListener('load', gotGatewayConfig);
        xhr.send();

        RealtimePort.openLocalPorts(gotPorts);

        navigator.getUserMedia({
            audio: true
        }, gotAudio);
    }
    window.go = go;
}());
The sample code is embedded in an HTML page with an audio element (id 'output') that plays the incoming stream once go() is invoked.

This sample code relies on a b64 module that converts from the base64 encoded values above to the ArrayBuffer instances required by the CU-RTC-Web API.
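A helper like that is a few lines of JavaScript; this is a minimal sketch of what the assumed b64.Decode could look like (the actual module's interface may differ):

```javascript
// Minimal base64 decoder sketch: turns a base64 string into an ArrayBuffer.
// Illustration of the assumed b64 module; not the code used in the post.
var b64 = (function() {
    var ALPHABET =
        'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
    function Decode(str) {
        str = str.replace(/=+$/, '');   // strip padding
        var bytes = [];
        var buffer = 0, bits = 0;
        for (var i = 0; i < str.length; i++) {
            // shift each 6-bit group in, emit a byte whenever 8 bits are ready
            buffer = ((buffer << 6) | ALPHABET.indexOf(str.charAt(i))) & 0xffffff;
            bits += 6;
            if (bits >= 8) {
                bits -= 8;
                bytes.push((buffer >> bits) & 0xff);
            }
        }
        var ab = new ArrayBuffer(bytes.length);
        new Uint8Array(ab).set(bytes);
        return ab;
    }
    return { Decode: Decode };
}());
```

In a current browser the same could be done with atob() and a Uint8Array, but the explicit loop keeps the sketch self-contained.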

Using RealtimeTransportBuilder

An astute observer will notice that the example above is not robust in the presence of packet loss.  It is also not immediately obvious that it is necessary to attempt a connectivity check using all local ports.  These are the sorts of errors that are easy to miss.

Using the RealtimeTransportBuilder simplifies the code and removes these issues. The updated sample has no need for the gotPorts(), checkPorts() and portChecked() functions.  The call to RealtimePort.openLocalPorts() is removed and a RealtimeTransportBuilder is constructed using data from the gateway configuration once that is retrieved.

Here is the modified JavaScript:

(function() {
    'use strict'; /*jshint browser:true*/
    var gatewayConfig;
    var transport;
    var localMedia;

    function buildDescription(ssrc) {
        var g711Codec = {
            type: 'audio/PCMU',
            clockRate: 8000,
            packetType: 0
        };
        var stream = {
            ssrc: ssrc
        };
        return new RealtimeMediaDescription({
            streams: [stream],
            codecs: [g711Codec]
        });
    }

    function startOutgoingStream() {
        if (transport && localMedia) {
            var localDescription = buildDescription();
            var outgoingStream =
                    new LocalRealtimeMediaStream(localMedia.audioTracks.item(0),
                                                 localDescription, transport);
        }
    }

    function discoveredSsrc(e) {
        var remoteDescription = buildDescription(e.ssrc);
        var rtStream = new RemoteRealtimeMediaStream(remoteDescription, transport);
        var incomingStream = new MediaStream();
        document.getElementById('output').src = URL.createObjectURL(incomingStream);
    }

    function gotTransport(e) {
        // the builder's 'connect' event carries the established transport
        transport = e.transport;
        transport.addEventListener('unknownssrc', discoveredSsrc);
        startOutgoingStream();
    }

    function gotAudio(stream) {
        localMedia = stream;
        startOutgoingStream();
    }

    function buildTransport() {
        var options = {
            transport: {
                mode: 'srtp',
                outboundSdes: gatewayConfig.local.sdes,
                inboundSdes: gatewayConfig.remote.sdes
            }
        };
        var transportBuilder = new RealtimeTransportBuilder(options);
        transportBuilder.addEventListener('connect', gotTransport);
        // hand the gateway's (ICE lite) port details to the builder
        transportBuilder.addRemote(gatewayConfig.port);
    }

    function gotGatewayConfig(e) {
        gatewayConfig = JSON.parse(;
        gatewayConfig.local.sdes.key = b64.Decode(gatewayConfig.local.sdes.key);
        gatewayConfig.local.sdes.salt = b64.Decode(gatewayConfig.local.sdes.salt);
        gatewayConfig.remote.sdes.key = b64.Decode(gatewayConfig.remote.sdes.key);
        gatewayConfig.remote.sdes.salt = b64.Decode(gatewayConfig.remote.sdes.salt);
        buildTransport();
    }

    function go() {
        var xhr = new XMLHttpRequest();'GET', '/config', true);
        xhr.addEventListener('load', gotGatewayConfig);
        xhr.send();

        navigator.getUserMedia({
            audio: true
        }, gotAudio);
    }
    window.go = go;
}());

This example shows how this API can be used to create a simple client that talks to a VoIP gateway. This enables some very important use cases for sites deploying WebRTC.  We encourage people to replicate this example with their own implementations.

buzzumi uses AddLive to enable a world-renowned US medical specialist to present to a conference in Sydney

Kavan Seggie - November 9, 2012

Few things make the AddLive team feel better than making our customers happy. Sure this sounds like marketing speak, but it is absolutely true. It means that all our hard work in tackling a very difficult technology problem has been worth it.

So we were very happy when we received a thank you email from Richard Clark, the CTO of buzzumi, which I have included below. If you have any stories, please let us know!

Just thought I'd let you guys know, last week buzzumi software, with the AddLive plugin, enabled a world-renowned medical specialist in the US to present to a conference in Sydney, including the presenter being able to see and hear the audience while presenting, and perform a Q&A at the end. We had the opportunity to drive the bandwidth and frame settings right up and ended up with a full-screen projected image of very high quality. 

Due to your gear, we were able to have:

1) Rapid synchronisation of pre-rendered slide displays (so the presenter never ended up talking to the wrong slide)
2) A presenter who could see and hear his audience in realtime
3) Technical support who could use a "monitor" account that gave them text chat to the presenter (but not the projection screen), audio controls for the presenter and network/frame stats
4) Separate "camera" laptops that could be placed facing the audience to provide the presenter one or more audience views

Everything worked beautifully on the day. I figured you guys would like to know :)  
Richard Clark
Chief Technology Officer

WebRTC is now live in Chrome 23!

Kavan Seggie - November 7, 2012

This is a historic day: WebRTC is now live in Chrome 23! Well done to the Chrome team.

From Serge Lachapelle, "It's the biggest milestone yet. Our journey started with the open sourcing of key technologies in June 2011 and with the help of community driven workgroups at the W3C and IETF, we made these technologies available through a web API and ensured standardized protocols."

More here:, "Let the codec wars continue"

Kavan Seggie - November 3, 2012

Tsahi Levent-Levi, of, just published a post on the Ericsson mobile browser, Bowser.

In it he states, "Ericsson launched a new mobile browser with WebRTC support. Its main purpose is to push Google in their open codec wars."

He also includes slides from his talk at the WebRTC Conference in Paris, titled "Which codec for WebRTC?". They make for interesting reading. The general theme of the deck is that choosing the WebRTC mandatory codecs is not about technology but about business objectives. And because the WebRTC players have different business goals, it will be difficult to get everyone to agree.

The WebRTC video codec working group has been tasked with exactly this; let's hope they can find agreement in the not-too-distant future.

Firefox to take ‘the fight for unencumbered formats to the next battlefront, WebRTC’

Kavan Seggie - November 1, 2012

Mozilla CTO Brendan Eich vows to take the fight for an unencumbered video codec to WebRTC. This is particularly relevant as Cisco and Apple have just published a paper proposing that H.264 become the mandatory-to-implement video codec.

In a great post earlier this year, Eich outlines why Mozilla have had to back-track on one of their core ideals and use the encumbered H.264. The codec has simply become too ubiquitous; as Eich puts it, 'We carried the unencumbered HTML5 video torch even when it burned our hands'.

Even their VP8 ally, Google, has changed its position on H.264. In January 2011 Google stated it would drop H.264 support from Chrome 'in the next couple of months', but almost two years later it has yet to do so.

It will be interesting to see which video codec does become the mandatory-to-implement WebRTC video codec. The field seems clearly divided, with Google, Mozilla and Opera supporting VP8, and Cisco, Apple, Ericsson and probably Microsoft supporting H.264.

Cisco and Apple propose H.264 as the mandatory WebRTC video codec

Kavan Seggie - October 23, 2012

In a new working group memo, Cisco and Apple stated that they would like H.264 AVC (H.264) to be the mandatory-to-implement video codec in WebRTC.

The Cisco and Apple arguments can be distilled to three main points:

  1. H.264 is more broadly adopted than VP8
  2. H.264 has higher 'Quality-Power-Bandwidth' than VP8
  3. H.264 has a more clearly established IPR status

Let's go through these points individually.

1. The Adoption Advantage

Clearly H.264 has an adoption advantage over VP8. It is a standard and has been in use for almost 10 years now. It is ubiquitous on the web and importantly both iOS and Android devices have hardware support for H.264.

But VP8 is gaining traction with the WebM Project that Google is championing. Google has released 5 updates to the VP8 SDK, with its 6th update due soon.

The biggest hurdle for broad adoption of VP8 is iOS hardware support. Apple are unlikely to ever add VP8 hardware support to their devices.

2. H.264 is simply a better codec

Cisco and Apple state that when considering Quality, Power consumption and Bandwidth, H.264 is superior to VP8.

Here is a link to Cullen Jennings showing a side-by-side comparison on the iPhone:

The above seems to show that H.264 is far better, but my guess is that the VP8 version is not hardware accelerated.

That said, I think most video codec engineers agree that H.264 is superior to VP8. Here is Jason Garrett-Glaser's (aka Dark Shikari) blog post on it from a few years ago: And here is a further comparison by

3. Patents and licensing

In my opinion this is the most important point.

Cisco and Apple argue that although H.264 is not royalty free, at least its 300+ patents are publicly listed, and because it has a 'common patent policy requiring disclosure by participants and third parties of any known patents or patent applications that may be essential to implement specifications in development', it is far less likely than VP8 to have outstanding patent issues.

Interestingly they state that 'A dozen or so companies responded to MPEG LA's call for essential patents on VP8'.

They also point out that H.264 is free for developers with less than 100,000 users per annum.

Finally they ask Google to 'provide some assurance (such as indemnification)' that VP8 is not encumbered.


The mandatory-to-implement video codec will be the subject of intense debate over the next few months. If the various parties cannot agree, then perhaps WebRTC will end up without a mandatory video codec, or perhaps we'll end up with a CELT/Opus-style codec using entirely different principles for video coding.

Time will tell, but what is certain is that we are in a very interesting time for Real Time Communications.

Platform update v1.17.0.3

Ted Kozak - October 3, 2012

The stable and beta release channels have just received an update of the Cloudeo Plug-in to version v1.17.0.3. It is a minor, bug-fixing release improving the quality of the HTTP fallback and fixing possible crashes on OS X.

Please note that this is the last release of the Cloudeo Plug-in. All the subsequent releases will use the AddLive brand.

Version changes:
  • Improved the quality control algorithms used with the HTTP fallback, resulting in vastly improved video feed quality when data is transmitted over the fallback protocol.

Bug fixes:

  • Fixed a bug causing a crash on Mac OS X if a page reload was requested while the Cloudeo Plug-in was in the middle of the connection establishment process.

PeerConnection API now live in Chrome 23 beta!

Kavan Seggie - October 2, 2012

The PeerConnection API is now live in Chrome 23 beta :)

This, together with the getUserMedia API, will allow P2P video chat directly within Chrome without the need for any additional download.

You can read Justin Uberti's post on the Chromium Blog here: is now!

Kavan Seggie - September 28, 2012

AddLive logo
AddLive is here! We are very excited to introduce you to our new name and logo, Over the next few weeks we will be rolling the brand out, so you'll see the changes across all our properties.

Although we loved the name, and we had a great response from you guys, we have decided to change names. The reasons were:

1. We wanted a '.com', but no matter how hard we tried, we couldn't make contact with the Vietnamese owners of
2. There is a Finnish company called Cloudeo.
3. We wanted a more descriptive name.
4. If we weren't 100% happy with the name, then now was the time to change it.

The naming process was intense and lasted a few weeks. There are some great articles on the web on renaming, and we managed to short-list about 5 names. AddLive was the favorite across a wide cross-section of us, so we negotiated a very reasonable price and acquired it a few weeks ago. (A hat-tip to Brett Hellman at Hall for a great blog post on their purchase of

For those working with the Cloudeo APIs, all the class, namespace and method names that contain the word "Cloudeo" will stay intact for the next two months. We will provide name aliases for all of them to make sure that your applications won't be affected by the change. They will be marked as deprecated, and we will notify you before we stop supporting them.
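As an illustration of how such aliasing typically works (a hypothetical sketch, not the actual AddLive shim; the ADL namespace and initPlatform method are stand-ins), the old names can simply delegate to the new ones:

```javascript
// Hypothetical new namespace standing in for the renamed API.
var ADL = {
    initPlatform: function() { return 'platform ready'; }
};

// Deprecated alias layer: mirror every ADL method under the old CDO name.
var CDO = {};
Object.keys(ADL).forEach(function(name) {
    CDO[name] = function() {
        // a real shim would also log a deprecation warning here
        return ADL[name].apply(ADL, arguments);
    };
});
```

Existing code calling the old CDO names keeps working while it migrates to the new namespace.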

A Few Words on Networking

Ted Kozak - September 21, 2012

A lot of our customers have started to report issues related to the "2005 errorMsg: Failed to connect media links" error. In a single sentence, this error code indicates that the Cloudeo Service couldn't establish a media connection with the streaming server because a firewall device (or application) between the client and the streaming server blocks the connection. The problem is much broader than that, though, so I'll try to analyze it in more detail and explain its sources and all the surrounding repercussions.

I'll cover here:

  • How our current implementation transmits the media data.
  • Our network requirements and how they compare to requirements of similar service providers.
  • Fallback protocols consequences and use cases.
  • Brief insight into our development road map around the media data transmission.

Cloudeo streaming protocols now

At the time of writing, the stable release channel and the beta channel offer different versions of the Cloudeo Service; the beta brings a significant change in how media data may be transmitted. In short, with v1.17 we have replaced the RTP over TCP fallback mechanism with a similar but notably better RTP over HTTP over TCP fallback.

The current stable release uses 3 media data transport protocols: RTP over UDP in P2P mode (denoted by the CDO.ConnectionType.UDP_P2P constant), RTP over UDP relayed through our streaming server (CDO.ConnectionType.UDP_RELAY) and RTP over TCP, also relayed through our streaming server (CDO.ConnectionType.TCP_RELAY). With all of the above protocols, each media type has its own media transport instance which transmits the data using 2 separate ports: one for audio and one for video.

When a user connects to the Cloudeo Streaming Server, the management link is established first (we use it to track scope presence and for reliable messaging) and the connection request is authenticated. Once the management connection is ready, the streamer sends the client its own media endpoints.

With this data available, the client tries to establish the media connection. First it tries the UDP_RELAY protocol: the Cloudeo Service creates a UDP socket and starts to exchange ECHO packets. Using those packets, the client simply shouts at the streamer, "Hey, I'm here, can you hear me? If so, please respond." After sending a whole bunch of such UDP packets, the client waits for replies. If it receives at least a single response ("Yes, I can hear you, stop shouting!"), the media connection is assumed to be functional and a success result is returned to the SDK client.

If the UDP probing fails, we assume that UDP communication is blocked by some device on the path and try to establish TCP communication. Since TCP is reliable and stateful, we just try to connect the client's TCP socket to the server's media TCP endpoint. If this transport mechanism also fails, the Cloudeo Service returns an error result with the aforementioned 2005 error code.
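The probing logic above boils down to a short ladder. Here is a hedged sketch (not the actual Cloudeo source; tryUdpProbe and tryTcpConnect are placeholders for the real socket code):

```javascript
// Sketch of the connection-establishment ladder described above.
// tryUdpProbe / tryTcpConnect stand in for the real socket code and
// return true when the respective handshake succeeds.
function establishMediaConnection(tryUdpProbe, tryTcpConnect) {
    if (tryUdpProbe()) {
        return { connectionType: 'UDP_RELAY' };
    }
    // UDP appears blocked on the path; try the TCP fallback
    if (tryTcpConnect()) {
        return { connectionType: 'TCP_RELAY' };
    }
    // both transports failed: the 2005 error described above
    return { errCode: 2005, errMessage: 'Failed to connect media links' };
}
```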

Additionally, when the media connection is already established and there are only 2 participants in the scope, the streamer will push to both clients (using the management link) the external UDP endpoints of the other client. Upon this notification, the Cloudeo Service will try to establish a media UDP connection with the external endpoint of the remote client, using the same procedure as when establishing a UDP connection to the streaming server. If it succeeds, the clients simply switch the link to P2P mode and start exchanging media packets directly.

With the introduction of Cloudeo Service v1.17.0.0 we have completely replaced the RTP over TCP stack with RTP over HTTP over TCP. With this protocol, all media is transmitted over TCP, with a single connection to one remote port (80), and the start of the data flow is preceded by HTTP headers (POST for the sending stream and GET for the receiving one).

In short, this means that it is impossible for any service to distinguish our traffic from casual HTTP requests and responses. Furthermore, it means that if a given network configuration allows people inside to browse the Internet, our service will operate there.

Network requirements

Our service currently uses the following ports (and there are no plans to change this):

  • UDP 540
    For audio streaming
  • UDP 541
    For video streaming
  • TCP 540
    For audio streaming, TCP fallback
  • TCP 541
    For video streaming, TCP fallback
  • TCP 80 (HTTP)
    For the HTTP fallback introduced in v1.17.0.0, both media types
  • TCP 443 (HTTPS)
    For the management protocol

The above list indicates that for our service to run at the best quality, clients should be able to communicate with remote hosts listening on ports TCP 443 and UDP 540 and 541. The TCP ports 540, 541 and 80 will be used only if the UDP pair is unavailable.
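As an illustration only (this is not an AddLive API), the requirements above can be summarized as a small decision function mapping the reachable remote ports to the transport that would end up being used:

```javascript
// Hypothetical helper illustrating the port requirements listed above.
// open: { udp: [...], tcp: [...] } - remote ports the client can reach.
function pickTransport(open) {
    function has(list, p) { return list.indexOf(p) !== -1; }
    if (!has(open.tcp, 443)) { return null; }   // management link is mandatory
    if (has(open.udp, 540) && has(open.udp, 541)) { return 'UDP'; }
    if (has(open.tcp, 540) && has(open.tcp, 541)) { return 'TCP fallback'; }
    if (has(open.tcp, 80)) { return 'HTTP fallback'; }
    return null;   // no media path: the 2005 error territory
}
```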

The above configuration actually does not differ from other services in the field. Cisco WebEx uses UDP port 5101 and, if it isn't available, falls back to TCP 80 and 443 (source). Citrix GoToMeeting uses UDP port 8200 for audio and UDP port 1853 for video streaming, without any TCP fallback mentioned (source).
Finally, all the solutions based on the Adobe Flash Player use either the RTMP protocol, which requires TCP port 1935 to be open (or HTTP streaming as a fallback, but this one is used only when broadcasting media, not in real-time conferences), or RTMFP, which works only in P2P mode with random ports (which may be solved by use of a TURN proxy, but the network still needs to allow the user to communicate with the TURN proxy endpoint, source).

Fallback protocol, its use cases and consequences

Saying that the fallback protocol isn't the default protocol without a reason is a truism, but it's also the truth. In our case, we use protocols built on top of the UDP stack by default, as UDP communication is ideal for real-time use cases. UDP is an unreliable protocol, which means that it offers no congestion control, data retransmission, flow control or any other safeguard. But, in turn, it offers low latency, which is the holy grail of real-time communication.

The TCP protocol, on the other hand, offers all those features, but at the price of unbounded delay, a delay that may simply render any live conversation impossible.

I'm writing this to clearly state that while we design our service to operate within highly restricted network environments, that is not our main use case. The HTTP fallback shouldn't be treated as one of several equal options: using it will always have an impact on conversation quality. We designed this mechanism to cover cases where one of your users needs to jump into a conversation from a network outside of their control, like an internet cafe or an airport.

To provide your customers the best video experience possible, you need to inform them about our networking requirements; it is then up to your clients to decide how they would like to use the service, whether to allow the traffic directly or to configure a proxy server inside their network (more on this below). In the end, it's always an administrative decision of your clients. We could prepare any number of hacks for bypassing firewall restrictions, but in my opinion that's not a legitimate solution to the problem. I think that in a properly managed internal network, the IT department needs to be aware of all possible traffic sources and should explicitly allow a particular service. By taking other approaches, you're just cheating your clients.

Networking development road map

We are currently working on adding two features easing the firewall traversal and allowing your clients to better integrate with the service.
First of all, we'll be introducing proxy support. The proxy configuration will be used for streaming purposes (assuming that the client's network uses a SOCKS5-compliant proxy with UDP proxying enabled), management traffic and self-update (SOCKS4, SOCKS5 or HTTP proxy).

The second feature we'll start working on is a UDP fallback to well-known ports. In general, we'll modify the connection establishment routine so that it first tries to establish a connection using the current UDP port set (540, 541); if that fails, it will fall back to HTTP streaming and at the same time start checking whether well-known UDP ports are available (e.g. the ones used by the aforementioned services). If we detect that one of those ports allows us to communicate with the Cloudeo Streaming Server, we'll switch to the UDP protocol with both media types' data packets (audio and video) multiplexed over a single link.
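Sketched in code, the planned routine could look like this (hypothetical; the port numbers come from the list earlier in the post):

```javascript
// Hypothetical sketch of the planned fallback routine described above.
// Returns the ordered list of connection attempts the client would make.
function connectionPlan(wellKnownUdpPorts) {
    var attempts = [];
    attempts.push({ proto: 'UDP', ports: [540, 541] });  // current port set first
    attempts.push({ proto: 'HTTP', ports: [80] });       // immediate fallback
    // probed in parallel with HTTP streaming; on success the client switches
    // to UDP with audio and video multiplexed over a single link
    wellKnownUdpPorts.forEach(function(p) {
        attempts.push({ proto: 'UDP-multiplexed', ports: [p] });
    });
    return attempts;
}
```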

Open letter to the WebRTC Committees

Ted Kozak - August 31, 2012

Dear WebRTC standardization committees,

I work for a company that provides middleware, allowing developers to easily implement video conferencing across all major platforms. For web apps we’re currently using our own NPAPI/ActiveX plugin to do all the heavy lifting. But we’re closely watching the WebRTC movement. It’s a great opportunity for us to provide an excellent user experience without the need to install any native software.

Until now, we’ve been monitoring the movement rather passively, but the recent announcement of the CU-RTC-Web proposal led me to express some concerns that IMHO are common to the developer community.

Comments on Option 1

The main reason why I've always been passively following the API proposals was the simple assumption that we can deal with any API as long as the use cases defined in the initial IETF draft are covered. Whether it's PeerConnection00, DeprecatedPeerConnection, or the battery of interfaces proposed by CU-RTC-Web, it's really not a major issue. Once the API is available, we'll be able to implement it in our libraries easily.

That said, the PeerConnection API has somewhat narrow semantics, focusing mostly on P2P communication (we're more conference oriented), and thus would require some nasty hacks; but it's not a big deal, and we can live with it.

Also, I think that voters for Option 1 in the poll exaggerate when they say the API's ease of use is more important than its power. In my opinion, once you give developers those capabilities, there will be tons of libraries like jQuery, Prototype or MooTools that will simplify life for the community. I really don't see a problem with building the PeerConnection API on top of CU-RTC-Web. This will give developers more choice and features, and the best community-driven libraries will quickly surface.

The only meaningful difference from having it implemented in the browser is that it will be easier for the community to maintain such a library. This will be more efficient than waiting for browser vendors to meet and agree on how to update an API in order to provide a new feature or fix.

Comments on Option 2

You may think that if I could participate in the recent poll, I'd vote for CU-RTC-Web, but actually that's not the case. While I definitely think that a more powerful API is better, the MS proposal is simply too late for such a radical change. Why wasn't such a poll created before PeerConnection00 was introduced in Chrome canary builds? Why was the CU-RTC-Web proposal so late? I remember that there were negotiations at some stage, but I can't understand why MS didn't put forward their views months ago.

What really matters to developers

Ultimately, cross-browser interoperability is the most important factor here. Things like the API definition can only make our job slightly more or less difficult, but they won't affect the job's feasibility. For us, the worst outcome of the WebRTC movement would be a situation where the browsers cannot communicate with each other. Sure, we can smooth over the differences using MCUs or other bridges, but we need to be aware of the impact such an approach has on the ability to scale a service up. This problem is amplified by the general trend of NPAPI/ActiveX native browser plug-ins being disallowed, because "nowadays you can do everything in DOM".

I’m pretty sure that there won’t be significant problems with having consensus on the networking stack, but I’m not so confident with regards to the media codecs used. We’ve seen this previously, when the audio and video tags were standardized. The difference is that with HTML media tags it is still feasible to serve different static content depending on the browser used by the client. It is not convenient, but it still is feasible. On the other hand, with the WebRTC if different codecs are being employed by different browsers this whole facility will be simply useless.

Assuming large problems with reaching consensus here, I'd like to ask you to consider adding to the standard the ability to extend browser capabilities with native components acting as codecs and packetizers. With NPAPI going extinct, I don't see any other option for freeing developers from the browser vendor lock-in problem.

Summing up

WebRTC seems to be in a great place now that 4 out of the 5 major browser vendors are at the table. All we need is some kind of consensus, which we are not sure a poll will provide. Consensus is more important, at least IMHO, than whether Option 1, Option 2 or any combination of them is chosen.

To summarize, I’ve got few requests to the standards committees and browsers vendors working on WebRTC:
1. Please agree on having the same WebRTC API, so it will be easier for us to use the technology.
2. If that’s impossible, please at least ensure that you’ll be using the same networking stack and make the features detection simple and legit.
3. Please agree on having same, baseline codecs so the media streams do not have to be transcoded on the server side.
4. If that’s impossible, please introduce to the standard an extension API, allowing 3rd party developers to install and register their own codecs/packetizers.

Platform update v1.16.0.4

Ted Kozak - July 2, 2012

The Beta and Stable channels have been updated to version for all targeted web application platforms.

Since this is our first publicly announced release, I will allow myself to list all the features provided by the Platform, including those that predate the current release.

Platform features

For clarity, the complete feature set was split into 4 functionality areas.

Devices functionality area
  • Listing of all installed video capture devices (web cameras).
  • Listing of all installed audio capture devices (microphones).
  • Listing of all installed audio output devices (speakers, headphones).
  • Ability to change a device of a particular type while it is in use (while actively publishing a media stream or watching the local user's video preview).
  • Notifications of device availability changes (e.g. a camera being plugged in).
  • Devices hot-plug (ability to use a device plugged in by the user after platform initialization).
  • Monitoring of microphone activity level (aka speech level).
  • Ability to get and adjust the speaker volume level.
  • Ability to get and adjust the microphone gain level.
  • Playing sample audio file to test the speakers.
  • Automatic Gain Control.
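To make the microphone activity ("speech level") feature concrete, here is one way such a level can be derived from raw audio samples - root-mean-square amplitude scaled to 0..100. This is an illustrative sketch, not the SDK's internal implementation; the sample format (floats in [-1, 1]) is an assumption.

```javascript
// Compute a 0..100 activity level from a frame of audio samples using RMS.
function speechLevel(samples) {
  if (samples.length === 0) return 0; // no data, no activity
  var sumSquares = 0;
  for (var i = 0; i < samples.length; i++) {
    sumSquares += samples[i] * samples[i];
  }
  var rms = Math.sqrt(sumSquares / samples.length); // 0..1 for samples in [-1, 1]
  return Math.round(Math.min(1, rms) * 100);        // clamp and scale to 0..100
}
```

An application would poll such a value a few times per second to drive a microphone activity meter in the UI.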

Connectivity functionality area
  • Establish a connection to the Cloudeo Streaming Server.
  • Terminate a connection to the Cloudeo Streaming Server.
  • Publish a media stream (audio, video, screen sharing) to an already established connection.
  • Stop publishing a media stream of any type (audio, video, screen sharing) to an already established connection.
  • Transmit media data between peers using the P2P mode.
  • UDP hole punching that allows users behind NAT devices to transmit media data using the P2P mode.
  • Relay media data using the RTP over UDP protocol and the Cloudeo Streaming Server. This mode is used by connections to media scopes with more than 2 participants, or when UDP hole punching fails for at least one of the participants.
  • Relay media data using the RTP over TCP protocol for users with UDP communication blocked by a firewall.
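The three connectivity bullets above describe a fallback order. A sketch of that decision logic (the function and field names are mine, not the SDK's): P2P for two-party scopes when UDP hole punching succeeds, otherwise relay over UDP, falling back to RTP over TCP when UDP is blocked entirely.

```javascript
// Pick a transport mode for a media scope, following the fallback order
// described in the feature list above.
function selectTransport(scope) {
  if (scope.udpBlocked) {
    return 'RELAY_TCP'; // a firewall blocks UDP, so tunnel RTP over TCP
  }
  if (scope.participants <= 2 && scope.holePunchingOk) {
    return 'P2P'; // direct media path between the two peers
  }
  return 'RELAY_UDP'; // multi-party scope, or hole punching failed
}
```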

Media encoding functionality area
  • Video encoding and decoding done by the VP8 codec.
  • Audio encoding and decoding done by the iSAC wideband speech codec.
  • Acoustic Echo Cancelling.
  • Quality control of the video encoding, depending on the network conditions and local CPU utilization.
  • Adaptive quality of the incoming video stream depending on network conditions.

General functionality area
  • Ability to reliably broadcast any JavaScript string between peers connected to the same media scope.
  • Listing of all screen sharing sources (windows or desktops) with picture previews.

Version changes

  • This release brings a major update to video rendering. Internally, it was almost completely rewritten. First of all, the rendering code was moved from the Cloudeo Service Container component to the Cloudeo Service. This transition allows us to render more efficiently and to maintain the rendering code more easily - now we can publish an update to the rendering code without requiring a browser restart.
  • Added support for windowless rendering on the Windows platform.
  • Exposed video rendering fine-tuning settings via the CDO.renderSink function. Developers now have control over: whether the rendering should be mirrored (useful for the local preview), whether rendering is done in windowed or windowless mode, and which scaling filter is used (bilinear and bicubic filters are supported).
  • Implemented a new facility for handling video capture devices on OS X using the QTKit framework. Previously the Platform used the Sequence Grabber API.
  • Updated the signature of the CDO.renderSink function. After the update, the configuration can be passed as several parameters or as a single JavaScript object with only the required properties defined.
  • Added a new facility for initializing the Cloudeo SDK - the CDO.initPlatform function. Using the new API, complete initialization is as easy as a single function call and handling a few asynchronous events. This method also performs an initial devices configuration on the first run (the first device from the devices list is used) and reuses the previous devices configuration afterwards (see below).
  • Added devices configuration persistence. On browsers that support HTML5 localStorage (or for applications that use a shim), each successful call to the CDO.CloudeoService#setVideoCaptureDevice, CDO.CloudeoService#setAudioCaptureDevice or CDO.CloudeoService#setAudioOutputDevice methods stores the setting in local storage; the same devices configuration is then reused during platform initialization.
  • Created a completely new documentation section within the Cloudeo site. One may find it here: [deprecated]
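The devices-configuration persistence described above can be sketched as a small wrapper. This is a minimal illustration, not the SDK's internals: storage is injected so the idea works with HTML5 localStorage or any shim, and the key names and wrapper type are assumptions.

```javascript
// Persist per-kind device selections ('videoIn', 'audioIn', 'audioOut')
// in an injected localStorage-compatible store.
function DeviceConfig(storage) {
  this.storage = storage;
}
// Called only after the corresponding setXxxDevice call has succeeded.
DeviceConfig.prototype.saveDevice = function (kind, deviceId) {
  this.storage.setItem('cdo.device.' + kind, deviceId);
};
// On first run there is no stored value, so the caller's fallback
// (e.g. the first device from the devices list) is used instead.
DeviceConfig.prototype.loadDevice = function (kind, fallbackId) {
  var stored = this.storage.getItem('cdo.device.' + kind);
  return stored !== null && stored !== undefined ? stored : fallbackId;
};
```

In a browser you would pass `window.localStorage` as the store; platform initialization then simply loads each kind with the first listed device as the fallback.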

Bug fixes:

  • Fixes a bug that made it impossible to auto-update the SDK on Internet Explorer when prototype.js is also used by the web application.
  • Fixes an update bug appearing on hosts without the complete certificate chain of the GlobalSign CA (#5006: Unhandled exception in TaskProcessor: Failed to verify update bundle).
  • Fixes a deadlock in the TCP transport which, under certain conditions, caused the media stream to hang when using the TCP relay mode.

Auto updating

Any currently installed Cloudeo Plug-in can auto-update itself to this version. A browser restart is required to complete the update in all cases.