5 Interoperability Issues In WebRTC Projects For Web-Based Communication

As consumers continue to prefer web video and voice communication over traditional telephony, WebRTC (Web Real-Time Communication) has been growing rapidly since its inception five years ago. This set of communication protocols and APIs is already used by the big guns of Silicon Valley such as Snapchat, Facebook, Google, and Microsoft in flagship services such as Facebook Messenger and Google Hangouts. Yet, many are still in the dark about it.

According to reports by Google, WebRTC is supported by over 2 billion browsers, which covers every Chrome browser and supported Android device. Meanwhile, Facebook says Messenger, which uses WebRTC, has over 300 million monthly active users.

Just by adding up the users of Google Hangouts, Snapchat, Google Duo, and web-based Skype on certain platforms, you can easily count 500 million monthly active users on WebRTC-based platforms and web apps.

WebRTC has gained massive momentum, with more than 850 vendors and projects using it at the end of 2015, a 100% increase in two years. Gartner predicts that 15% of enterprise video and voice communication will use WebRTC by 2019.

The growing adoption comes down to a simple fact: real-time video is becoming the preferred way to communicate, whether between friends and family or between customer and company. Real-time video adds a human touch by combining visual and audio elements, and it can be implemented cost-effectively.

We’ve seen plenty of companies jump on the real-time video bandwagon by offering services that set them apart in the market, such as live video customer service.

However, as with any technology, there are drawbacks to using it. If you are planning to implement a WebRTC project or build a video and voice web app, then one of the biggest issues to consider is interoperability with other platforms, protocols, and legacy systems.

Compared to the telco industry, the World Wide Web is still young. We often forget the years of work and regulation that go into making a simple phone call. Today, you can pick up your phone on Verizon’s network and simply ring up someone on AT&T’s network, and the experience is seamless.

In the world of web-based video and voice calls, this is not the case. You can’t call or chat with someone on Google Hangouts from web-based Skype or Facebook Messenger. Each service only works within its own platform, which means you must register a new account on every platform.

Web communication services have largely grown up as silos that do not cooperate with one another. One reason companies choose not to cooperate is that it lets each of them capture user data on their own for advertising purposes.

Although the lack of standardization gives companies the freedom to innovate and offers consumers multiple choices of communication, it also means more work for developers and designers. Thus, interoperability is paramount when you take on a WebRTC project.

Outlined below are 5 crucial aspects to consider on WebRTC interoperability:

– Signaling

Signaling is the process of coordinating communication between two parties: session management, codec negotiation, security handling, and the exchange of network information.

WebRTC deliberately omits a signaling standard to give developers and vendors more freedom. You can use various protocols for WebRTC signaling, but legacy systems tend to use SIP (Session Initiation Protocol) or H.323.

However, it can get tricky. SIP is designed to be independent of the underlying transport layer, so you may still have to deal with different transports even when both endpoints speak SIP. One solution is to use gateways and proxies that translate between protocols.
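Because WebRTC leaves signaling entirely to the application, a common pattern is a small JSON envelope exchanged over a channel such as a WebSocket. The sketch below shows one such envelope; the field names (`kind`, `callId`, `payload`) are illustrative, since WebRTC standardizes none of this.

```typescript
// Minimal sketch of an application-defined signaling envelope.
// WebRTC does not prescribe this format; every field here is a design choice.
type SignalKind = "offer" | "answer" | "ice-candidate" | "bye";

interface SignalMessage {
  kind: SignalKind;
  callId: string;  // correlates all messages belonging to one session
  payload: string; // an SDP blob or a serialized ICE candidate
}

function makeOffer(callId: string, sdp: string): SignalMessage {
  return { kind: "offer", callId, payload: sdp };
}

function parseSignal(raw: string): SignalMessage {
  const msg = JSON.parse(raw) as SignalMessage;
  if (!msg.kind || !msg.callId) {
    throw new Error("malformed signaling message");
  }
  return msg;
}
```

A gateway to a SIP or H.323 system would translate messages like these into the corresponding legacy signaling on the other side.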

– Call Control

Legacy systems normally support multi-way calls with features such as hold, park, and transfer. To implement these with WebRTC, you need to use bandwidth efficiently through prudent management of voice and video. For example, you should pause the transmission of video and voice while a call is on hold, and this requires extra code.
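The hold behavior described above can be sketched as a small session wrapper. The `TrackLike` shape mirrors the `enabled` flag on a browser `MediaStreamTrack`; the class and method names are illustrative.

```typescript
// Sketch of hold handling for a WebRTC call. `TrackLike` stands in for
// the MediaStreamTrack objects the peer connection is sending.
interface TrackLike { kind: "audio" | "video"; enabled: boolean; }

class CallSession {
  onHold = false;
  constructor(private tracks: TrackLike[]) {}

  hold(): void {
    // Stop sending media so bandwidth is not wasted while the call is held.
    this.onHold = true;
    this.tracks.forEach(t => { t.enabled = false; });
  }

  resume(): void {
    this.onHold = false;
    this.tracks.forEach(t => { t.enabled = true; });
  }
}
```

In a browser you would flip `MediaStreamTrack.enabled` (or call `RTCRtpSender.replaceTrack(null)`) on the real tracks; a wrapper like this just centralizes that logic so hold, park, and transfer can share it.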

– Transcoding

Although WebRTC uses modern video codecs such as VP8 and VP9, many legacy platforms do not, and conversion is required. Transcoding video is not ideal, since it consumes significant bandwidth and CPU power, but it is sometimes unavoidable to achieve interoperability.

Transcoding extends to audio codecs. Older systems use codecs such as AMR, AMR-WB, and G.722.1 that are not supported by WebRTC. However, there are already discussions about adding these older codecs to the WebRTC specification.

Transcoding is also needed in conference calls and may require a centralized media server to keep communication smooth.
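As a rough sketch of the decision, you can scan the `a=rtpmap:` lines of an SDP offer and check whether any offered codec overlaps with what the legacy side supports; if not, a transcoding gateway is needed. This is deliberately simplified parsing, not a full SDP implementation.

```typescript
// Extract codec names from SDP rtpmap lines, e.g. "a=rtpmap:96 VP8/90000".
function listedCodecs(sdp: string): string[] {
  const codecs: string[] = [];
  for (const line of sdp.split(/\r?\n/)) {
    const m = line.match(/^a=rtpmap:\d+ ([A-Za-z0-9._-]+)\//);
    if (m) codecs.push(m[1].toUpperCase());
  }
  return codecs;
}

// True when the offer and the legacy system share no codec,
// meaning media must be transcoded to interoperate.
function needsTranscoding(offerSdp: string, legacyCodecs: string[]): boolean {
  const offered = listedCodecs(offerSdp);
  const legacy = legacyCodecs.map(c => c.toUpperCase());
  return !offered.some(c => legacy.includes(c));
}
```

For example, a pure-VP8 offer against a legacy H.264 endpoint would require transcoding, while a shared codec lets media flow through untouched.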

– Identity Management

Some legacy platforms manage and control their own subscriber identities for billing and security purposes. To achieve interoperability, WebRTC clients must register with the legacy system, which raises the need for a unified way to identify users.
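One hedged sketch of that mapping, assuming the legacy system is SIP-based: derive a SIP URI for each web user so a gateway can register them against the legacy platform. The domain and normalization rules here are purely illustrative.

```typescript
// Sketch: map a web app user id to a legacy SIP identity.
// The prefix/domain scheme is an assumption, not a standard.
interface SipIdentity { uri: string; displayName: string; }

function toSipIdentity(userId: string, displayName: string,
                       legacyDomain: string): SipIdentity {
  // Strip characters that are not valid in a SIP user part.
  const userPart = userId.toLowerCase().replace(/[^a-z0-9._-]/g, "");
  return { uri: `sip:${userPart}@${legacyDomain}`, displayName };
}
```

With a deterministic mapping like this, the same web user always resolves to the same legacy subscriber, which keeps billing and security records consistent on both sides.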

– Security

WebRTC mandates encryption, while many legacy platforms are unsecured and transmit media unencrypted. At times, the only feasible solution for interoperability is to encrypt and decrypt media at a gateway between the two systems.

Thankfully, there are many open source tools available to achieve WebRTC compatibility with older systems. But don’t forget the issue of scalability when you are coming up with interoperability solutions for WebRTC.

Your project should address the management of all components while allowing for expansion without growing pains. Until more standardization happens, WebRTC projects should be planned and designed for integration and interoperability. If you can handle interoperability, WebRTC is the next big thing.
