Suddenly, there are so many good WebRTC events you can attend.
My kids are still young and, for some reason, still consider me somewhat important in their lives. It is great, but also sad – I found myself this year needing to decline so many good events. Here’s a list of all the places I am not going to be at, but you should be if you’re interested in WebRTC.
BTW – Some of these events are still in their call for papers stage – why not go as a speaker?
AllThingsRTC
When? 13 June
Where? San Francisco
Call for speakers: https://www.papercall.io/allthingsrtc
AllThingsRTC is hosted by Agora.io. The event they did in China a few years back was great from what I hear (I haven’t attended it myself, but got good feedback about it), and this one is taking the right direction. They have room for more speakers – so be sure to add your name if you wish to present.
Sadly, I won’t be able to join this event as I am just finishing a family holiday in London.
CommCon
URL: https://2019.commcon.xyz/
When? 7-11 July
Where? Buckinghamshire, UK
CommCon was started last year by Dan Jenkins from Nimble Ape.
It takes a view of the communications market as a whole from the point of view of the developers in that market. The event runs in two tracks with a good deal of sessions around WebRTC.
I couldn’t attend last year’s event and can’t attend this year’s either (extended family trip to Eastern Europe). What I’ve heard from last year’s attendees is that the event was really good – and as a testament, the people I know are going to attend this year as well.
ClueCon
When? 5-8 August
Where? Downtown Chicago
Call for speakers: https://www.cluecon.com/speakers/
This is the 15th year that ClueCon will be held. The event is about open source VoIP projects, and the team behind it is the FreeSWITCH team.
This one is just after that extended family trip to Eastern Europe, and I’d rather not be on another airplane so soon.
Twilio Signal
URL: https://signal.twilio.com/
When? 6-7 August
Where? San Francisco
Call for speakers: https://eegeventsite.secure.force.com/twiliosignal/twiliosignalcfpreghome
Twilio Signal is a lot of fun. Twilio is the biggest CPaaS vendor out there and their event is quite large. I’ve been to two such events and found them really interesting. They deal a lot with Twilio products and new launches, which tend to define a lot of the industry, but they have technical and business sessions as well.
Can’t make it this year. Falls at roughly the same time as ClueCon which I am skipping as well.
JanusCon
When? 23-25 September
Where? Napoli, Italy
Call for papers: https://www.papercall.io/januscon2019
The Meetecho team behind Janus decided to create a conference around it.
Janus is one of the most popular open source WebRTC media servers today, and creating an event around it is a leap of faith – always a risky business.
I might end up attending it – for Janus (and for the food, obviously). The only challenge is that my daughter is starting a new school that month, so I need to see if and how that will fit.
IIT RTC
URL: https://www.rtc-conference.com/2019/
When? 14-16 October
Where? Chicago
Call for speakers: https://www.rtc-conference.com/2019/submit-presentation-for-conference/
IIT RTC is a mixture of an academic and an industry event around real time communications. I’ve taken part in it twice without really being there in person – through a video conference session. The event runs multiple tracks, with WebRTC in a track of its own. As with many of the other larger industry events, IIT RTC is preceded by a TADHack event, and one of its tracks is TAD Summit.
I’ll be skipping this one due to Sukkot holiday here in Israel.
Kranky Geek
URL: https://www.krankygeek.com/
When? 15 November
Where? San Francisco
Call for speakers: just contact me
That’s the event I am hosting with Chris Koehncke and Chad Hart. Our focus is WebRTC and ML/AI in real time communications. We’re still figuring out the sponsors and agenda for this year (just started planning the event).
Obviously, I’ll be attending this event…
Which event should you attend?
This is a question I’ve been asked quite a few times, and somehow, this year, there are just so many events that I want to attend but can’t. If you’re thinking of going to an event to learn about WebRTC and communications in general, then any of these will be great.
Go to a few – why settle for one?
Next Month
Next month, I’ll be hosting a webinar along with Chad Hart. We will be reviewing the changing domain of machine learning and artificial intelligence in real time communications. We published a report about it a few months back, and it is time to take another look at the topic. If you’re interested – join us.
The post Upcoming WebRTC events in 2019 appeared first on BlogGeek.me.
There are multiple ways to implement WebRTC multiparty sessions. These in turn are built around mesh, mixing and routing.
In the past few days I’ve been sick to the bone. Fever, headache, cough – the works. I couldn’t do much, which meant no writing an article either. Good thing I had to remove an appendix from my upcoming WebRTC API Platforms report to make room for a new one.
I wanted to touch on the topic of Flow and Embed in Communication APIs, and how they fit into the WebRTC space. This topic will replace an appendix in the report about multiparty architectures in WebRTC, which is what follows here – a copy+paste of that appendix:
Multiparty conferences of either voice or video can be supported in one of three ways: mesh, mixing and routing.
The quality of the solution will rely heavily on the type of architecture used. In routing, we see further refinement of video routing between multi-unicast, simulcast and SVC.
WebRTC API Platform vendors who offer multiparty conferencing will have different implementations of this technology. For those who need multiparty calling, make sure you know which technology is used by the vendor you choose.
Mesh
In a mesh architecture, all users are connected to all others directly and send their media to them. While there is no overhead on a media server, this option usually falls short of offering any meaningful media quality and starts breaking at 4 users or more.
For the most part, consider vendors offering mesh topology for their video service as limited at best.
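To make the cost of mesh concrete, here is a minimal sketch (TypeScript, standard browser WebRTC API) of what mesh wiring implies: every participant maintains a separate RTCPeerConnection – and a separate encode – per remote peer. The `signalOffer` callback is a hypothetical stand-in for whatever signaling channel your application uses.

```typescript
// Mesh sketch: one RTCPeerConnection per remote participant.
// signalOffer is hypothetical - it represents your own signaling channel.
const peers = new Map<string, RTCPeerConnection>();

async function connectToPeer(
  peerId: string,
  localStream: MediaStream,
  signalOffer: (to: string, sdp: RTCSessionDescriptionInit) => void
): Promise<void> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });
  // The same local tracks get encoded and sent separately to every peer,
  // which is why uplink bandwidth and CPU grow with the group size.
  localStream.getTracks().forEach((track) => pc.addTrack(track, localStream));
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signalOffer(peerId, offer);
  peers.set(peerId, pc);
}
```

With 5 participants that is already 4 outgoing video encodes per device, which is roughly where the quality tends to fall apart.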
Mixing
MCUs were quite common before WebRTC came into the market. MCU stands for Multipoint Conferencing Unit, and it acts as a mixing point.
An MCU receives the incoming media streams from all users, decodes them all, creates a new layout of everything and sends it out to all users as a single stream.
This has the added benefit of being easy on the user devices, which see a single entity they need to operate in front of; but it comes at a high compute cost and with inflexibility on the user side.
Routing
SFUs were new before WebRTC came along, but are now an extremely popular solution. SFU stands for Selective Forwarding Unit, and it acts like a router of media.
An SFU receives the incoming media streams from all users, and then decides which streams to send to which users.
This approach leaves flexibility on the user side while reducing the computational cost on the server side, making it the popular and cost effective choice in WebRTC deployments.
To route media, an SFU can employ one of three distinct approaches: multi-unicast, simulcast and SVC.
Multi-unicast
This is the naïve approach to routing media. Each user sends his video stream towards the SFU, which then decides who to route this stream to.
If there is a need to lower bitrates or resolutions, it is either done at the source, by forcing a user to change the stream he sends, or on the receiving end, by having the receiving user throw away data it has already received and processed.
This is also how most implementations of WebRTC SFUs were done until recently. [UPDATE: Since this article was originally written in 2017, that was true. In 2019, most are actually using simulcast]
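As an illustration of the “change the sent stream at the source” option, here is a hedged sketch using the standard RTCRtpSender.setParameters() API to cap the sender’s bitrate when the SFU (through your own signaling) asks for less. The trigger and the example bitrate value are assumptions.

```typescript
// Cap the encoder bitrate of an already-negotiated video sender.
// Call this when your SFU/signaling asks the client to send less.
async function capSenderBitrate(
  sender: RTCRtpSender,
  maxBitrateBps: number // e.g. 300_000 for roughly 300 kbps
): Promise<void> {
  const params = sender.getParameters();
  if (!params.encodings || params.encodings.length === 0) {
    return; // nothing negotiated yet - nothing to cap
  }
  params.encodings[0].maxBitrate = maxBitrateBps;
  await sender.setParameters(params);
}
```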
Simulcast
Simulcast is an approach where the user sends multiple video streams towards the SFU. These streams are compressed data of the exact same media, but in different quality levels – usually different resolutions and bitrates.
The SFU can then select which of the streams it received to send to each participant, based on their device capability, available network or screen layout.
Simulcast has started to crop up in commercial WebRTC SFUs only recently.
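For reference, this is roughly what enabling simulcast looks like from the browser side – a sketch assuming a single camera track and three layers; the rid names and bitrate caps are arbitrary choices, not values mandated by WebRTC.

```typescript
// Simulcast sketch: send three encodings of the same track, identified by "rid".
// The SFU forwards whichever layer best fits each receiver.
function addSimulcastVideo(
  pc: RTCPeerConnection,
  track: MediaStreamTrack,
  stream: MediaStream
): void {
  pc.addTransceiver(track, {
    direction: "sendonly",
    streams: [stream],
    sendEncodings: [
      { rid: "q", scaleResolutionDownBy: 4, maxBitrate: 150_000 }, // quarter resolution
      { rid: "h", scaleResolutionDownBy: 2, maxBitrate: 500_000 }, // half resolution
      { rid: "f", maxBitrate: 1_500_000 },                         // full resolution
    ],
  });
}
```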
SVC
SVC stands for Scalable Video Coding. It is a technique where a single encoded video stream is created in a layered fashion, where each layer adds to the quality of the previous layer.
When an SFU receives a media stream that uses SVC, it can peel off layers from that stream to fit the outgoing stream to the quality, device, network and UI expectations of the receiving user. It offers better performance than simulcast in both compute and network resources.
SVC has the added benefit of enabling higher resiliency to network impairments, by allowing error correction to be added only to the base layers. This works well over mobile networks, even for 1:1 calling.
SVC is very new to WebRTC and is only now being introduced as part of the VP9 video codec.
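There is no standard JavaScript knob for controlling SVC layers here – the layering is handled between the browser’s encoder and the SFU. About the most an application can do from the browser is prefer VP9, so that an SVC-capable SFU has a layered stream to work with. A hedged sketch:

```typescript
// Prefer VP9 on all video transceivers so an SVC-capable SFU can use layered video.
// Note: actual SVC layer control is not exposed through a standard browser API here.
function preferVp9(pc: RTCPeerConnection): void {
  const capabilities = RTCRtpSender.getCapabilities("video");
  if (!capabilities) return;
  const codecs = [...capabilities.codecs].sort(
    (a, b) =>
      Number(b.mimeType === "video/VP9") - Number(a.mimeType === "video/VP9")
  );
  pc.getTransceivers()
    .filter((t) => t.sender.track && t.sender.track.kind === "video")
    .forEach((t) => t.setCodecPreferences(codecs));
}
```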
The post WebRTC Multiparty Architectures appeared first on BlogGeek.me.
A while ago we looked at how Zoom was avoiding WebRTC by using WebAssembly to ship their own audio and video codecs instead of using the ones built into the browser’s WebRTC. I found an interesting branch in Google’s main (and sadly mostly abandoned) WebRTC sample application apprtc this past January. The branch is named wartc… a name which is going to stick as warts!
The repo contains a number of experiments related to compiling the webrtc.org library as WebAssembly and evaluating the performance.
Continue reading Finding the Warts in WebAssembly+WebRTC at webrtcHacks.
WebRTC disconnections are quite common, but you can “fix” many of them just by careful planning and proper development.
Years ago, I developed the H.323 Protocol Stack at RADVISION (later turned Avaya, turned Spirent, turned Softil). I was there as a developer, an R&D manager and then the product manager. My code is probably still in that codebase, lovingly causing products around the globe to crash from time to time – like any other developer, I have my share of bugs left behind.
Anyways, why am I mentioning this?
I had a client asking me recently about disconnections in WebRTC. And it kinda reminded me of a similar issue (or set of issues) we had with the H.323 stack and protocol years back.
If you bear with me a bit – I promise it will be worth your while.
This week I am starting the office hours for my WebRTC course. The next office hour (after the initial “hi everyone”) will cover WebRTC disconnections.
Check out the course – and maybe go over the first module for free:
A quick intro to H.323 signaling and transport
H.323 is like SIP, just better and more complex. At least for me, who started my way in VoIP with H.323 (I will always have a soft spot for it). For many years, the way H.323 worked was by opening two separate TCP connections for transporting its signaling: the first for passing what is called the Q.931 protocol and the second for passing the H.245 protocol.
If you would like to compare it to the way WebRTC handles things, then Q.931 is how you set up the connection – have the users find each other. H.245 is similar to what SDP and JSEP are for (I am blatantly ignoring H.225 here, another protocol in H.323, which takes care of registration and authentication).
Once Q.931 and H.245 get connected, you start adding the RTP/RTCP stuff over UDP, which gets you quite a lot of connections.
Add to that complexities like tunneling H.245 over Q.931, using something called faststart instead of H.245 (or before H.245), then sprinkle a dash of “parallel H.245” and then a bit of NAT traversal and/or security and you get a lot of places that require testing and a huge number of edge cases.
Where can H.323 get “stuck” or disconnected?
With so many connections, there are a lot of places where things can go wrong. There are multiple state machines (one for the Q.931 state, one for the H.245 state) and there are different connections that can get severed for one reason or another.
Oh – and in H.323 (at least in the earlier specifications I had the joy to work with), when the Q.931 or H.245 connection gets severed, the whole session is considered disconnected, so you go and kill the RTP/RTCP sessions.
At the time, we suffered a lot from zombie sessions due to different edge cases. We ended up with solutions that were either based on the H.323 specification itself or best practices we created along the way.
Here are a few of these:
H.323 existed before smartphones. Systems were usually tethered to an Ethernet cable, or at most connected over WiFi from a static location. There was no notion of roaming or moving between networks, which meant there was no need to ask yourself whether a connection got severed because of a switch in the network or because there’s a real issue.
Life was simple:
And if you were really insistent then maybe this:
(in real life scenarios, these two simplistic state machines were a lot bigger and more complicated, but their essence was based on these concepts)
Back to WebRTC signaling and transport
WebRTC is both simpler and more complicated than H.323 at the same time.
It is simpler, as there is only SRTP. There’s no signaling that is standardized or preselected for WebRTC, and for the most part, the one you use will probably require only a single connection (as opposed to the two in H.323). It also has a lot fewer alternatives built into the specification itself than H.323 has.
It is more complicated, as you own the signaling part. You make that selection, so you better make a good one. And while at it, implement it reasonably well and handle all of its edge cases. This is never a simple task even for simple signaling protocols. And it’s now on you.
Then there’s the fact that networks today are more complex. Users expect to move around while communicating, and you should expect scenarios where users switch networks mid-session.
If you use WebRTC in a browser, then you get these interesting aspects associated with your implementation:
There’s a lot of dying taking place in the browser, and the server (or the other client) will need to “sniff out” these scenarios – as they might not be gracefully disconnected – and decide what to do about them.
Where can WebRTC get “stuck” or disconnected?
We can split WebRTC disconnections into 3 broad categories: failure to connect at all, media disconnections and signaling disconnections.
In each, there will be multiple scenarios, defining the reasons for failure as well as how to handle and overcome such issues.
In broad strokes, here’s what I’d do in each of these 3 categories:
#1 – Failure to connect at all
There’s a decent amount of failures happening when trying to connect WebRTC sessions. They range from not being able to even send out an SDP, through interoperability issues across browsers and devices, to ICE negotiation failing to connect media.
In many of these cases, better configuration of the service as well as focus on edge cases would improve the situation.
If you experience connection failures for 10% or more of the sessions – you’re doing something wrong. Some can get it as low as 1% or less, but oftentimes that depends on the type of users your service attracts.
This leads to another very important aspect of using WebRTC:
Measure what you can if you want to be able to improve it in the future
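As a small illustration of that advice, here is a sketch that samples getStats() periodically and ships a few key numbers to your own analytics backend – `reportToAnalytics` is a hypothetical function, and the exact metrics you care about will differ per service.

```typescript
// Poll WebRTC statistics and forward a few of them for offline analysis.
// reportToAnalytics is hypothetical - wire it to your own backend.
async function sampleConnectionStats(
  pc: RTCPeerConnection,
  reportToAnalytics: (metrics: Record<string, number>) => void
): Promise<void> {
  const stats = await pc.getStats();
  stats.forEach((report) => {
    if (report.type === "inbound-rtp" && report.kind === "video") {
      reportToAnalytics({
        packetsLost: report.packetsLost ?? 0,
        jitter: report.jitter ?? 0,
      });
    }
  });
}

// Example: sample every 5 seconds for the lifetime of the call.
// setInterval(() => sampleConnectionStats(pc, sendToBackend), 5000);
```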
#2 – Media disconnections
Sometimes, your sessions will simply disconnect.
There are many reasons why that can happen:
Each of these requires different handling – some in the code, while others require manual handling (think of customer support working out the configuration with a customer to resolve a firewall issue).
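Here is a hedged sketch of the code side of that handling: watch the ICE connection state, allow a short grace period on “disconnected” (often just a network switch), then attempt an ICE restart before giving up. The timeout value and the `sendOffer`/`onGiveUp` callbacks are assumptions standing in for your own signaling and UX.

```typescript
// Watch media connectivity and attempt an ICE restart before declaring failure.
// sendOffer and onGiveUp are hypothetical hooks into your signaling and UI.
function watchMediaConnection(
  pc: RTCPeerConnection,
  sendOffer: (sdp: RTCSessionDescriptionInit) => void,
  onGiveUp: () => void
): void {
  let graceTimer: ReturnType<typeof setTimeout> | undefined;

  pc.oniceconnectionstatechange = () => {
    const state = pc.iceConnectionState;
    if (state === "disconnected") {
      // Often transient (network switch, brief WiFi drop) - wait before acting.
      graceTimer = setTimeout(async () => {
        const offer = await pc.createOffer({ iceRestart: true });
        await pc.setLocalDescription(offer);
        sendOffer(offer); // negotiate fresh ICE candidates via your signaling
      }, 3000);
    } else if (state === "failed") {
      onGiveUp();
    } else if (state === "connected" || state === "completed") {
      if (graceTimer) clearTimeout(graceTimer);
    }
  };
}
```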
#3 – Signaling disconnections
Unlike H.323, if signaling gets disconnected, WebRTC doesn’t even know about it, so it won’t immediately cause the session itself to disconnect.
The first thing you’ll need to do is decide how you want to proceed in such cases – do you treat this as a session failure/disconnection, or do you let the show go on?
If you treat these as failures, then I suggest killing peer connections based on the status of your WebSocket connection to the server. If you are on the server side, then once a connection is lost, you should probably go ahead and kill the media paths – either from your media server towards the “dead” session leg, or from the other participant on a P2P connection/session.
If you want to make sure the show goes on, you will need to try and reconnect the peer connection towards the same user/session somehow. In that case, additional signaling logic in your connection state machine, along with additional timers to manage it, will be necessary.
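A sketch of those two policies, assuming a WebSocket signaling channel: `connectSignaling` is a hypothetical factory that opens a new socket to your server. Either treat the closed socket as the end of the session, or retry with a backoff and only then tear the peer connection down.

```typescript
// Supervise the signaling WebSocket and apply one of the two policies above.
// connectSignaling is hypothetical - it opens a fresh socket to your server.
function superviseSignaling(
  pc: RTCPeerConnection,
  connectSignaling: () => WebSocket,
  keepTheShowGoing: boolean,
  maxRetries = 5
): void {
  let attempts = 0;

  const attach = (ws: WebSocket): void => {
    ws.onopen = () => {
      attempts = 0; // signaling is healthy again - reset the retry counter
    };
    ws.onclose = () => {
      if (!keepTheShowGoing || attempts >= maxRetries) {
        pc.close(); // treat signaling loss as a session failure
        return;
      }
      attempts += 1;
      setTimeout(() => attach(connectSignaling()), 2000 * attempts); // simple backoff
    };
  };

  attach(connectSignaling());
}
```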
Announcing the WebRTC course snippets module
Here’s the thing.
My online WebRTC training has everything in it already. Well… not everything, but it is rather complete. What I’ve noticed is that I get repeat questions from different students and clients on very specific topics. They are mostly covered within lessons of the course, but they sometimes feel “buried” within the hours and hours of content.
This is why I decided to start creating course snippets. These are “lessons” that are 3-5 minutes long (as opposed to 20-40 minutes long), whose purpose is to answer one specific question at a time. Most of the snippets will be actionable and may contain additional materials to assist you in your development. This library of snippets will make up a new course module.
Here are the first 3 snippets that will be added:
While we’re at it, office hours for the course start today. If you want to learn WebRTC, now is the best time to enroll.
The post Handling session disconnections in WebRTC appeared first on BlogGeek.me.
CPaaS differentiation seems to be revolving around tackling niches.
Time for another look at the world of CPaaS – Communication Platform as a Service. In January 2018, a bit over a year ago, I looked at CPaaS trends for 2018. The ones there were:
I’d like to look at what’s happening in CPaaS this time from a slightly different angle, which alludes to trends as well, but in a more nuanced way. From briefings I’ve been given these past few weeks and the announcements and stories coming out of Enterprise Connect 2019, it looks like different CPaaS vendors are settling on different target audiences and catering to different use cases and market niches.
Today CPaaS is almost synonymous with Twilio. Every player looks at what Twilio does in order to plot its own route in the market – at times with the intended aim of disrupting Twilio, mostly through lower price points; at other times by trying to offer something more/better.
Then there are external players who add APIs to their platform – usually a UCaaS (Unified Communications as a Service) platform. They don’t directly compete with CPaaS, but if you are purchasing a “phone system” for your enterprise from a UCaaS player, then why not use its APIs and services instead of opting for another vendor (a CPaaS vendor in this case)?
Planning on selecting a CPaaS vendor? Check out this shortlist of CPaaS vendor selection metrics:
Get the shortlist
Here is how some of the vendors in this space are trying to differentiate, pivot and/or find their niche within the CPaaS market.
Agora.io – Gaming
If you look at Agora’s blog, what you’ll find there is a slew of posts around gaming and gaming-related frameworks (Unity, to be exact):
Gaming is an untapped market for CPaaS.
There’s communications there of all kinds – collaboration or communications across gamers inside a game, talking before the game, streaming the game to viewers, etc.
All of this communication is either developed by the gaming companies themselves (not a lot of it), catered for by specialized gaming VoIP vendors, or done out of scope (using Discord, Skype, …). Rarely is it covered by a CPaaS vendor.
Somehow, cracking this market is really tough for CPaaS. Agora.io is trying to do just that, along with its other focus areas – live broadcast and social (two other tough nuts to crack).
ECLWebRTC – Media Pipeline
ECLWebRTC is the Japanese platform from NTT Communications.
Like many of the WebRTC-first/only platforms out there, ECLWebRTC had an SFU implementation and support for various devices and browsers.
When you get to that point, one approach is to go after voice and PSTN. Another one is to add more features and increase the sizes of meetings and live broadcasts that can be supported.
ECLWebRTC decided to go after machine learning here, with the intent of letting its customers integrate and connect its media paths directly to cloud APIs. This is done using what they call Media Pipeline Factory, which feels from the looks of it like a general purpose media server.
ECLWebRTC is less known in Europe and the US, and probably not widely known outside of Japan either. With the Japanese market’s focus on automation, it makes sense that media pipelines would be a focus area for ECLWebRTC. This type of capability is relevant elsewhere as well, but it doesn’t seem to be a priority for others yet.
Infobip – Omnichannel
I’ve had the opportunity to fiddle around with Infobip Flow recently, something that turned out to be a very pleasant experience. From Flow, it became apparent that Infobip is working hard on offering its customers an omnichannel experience. Compared to other CPaaS vendors, they seem to have the widest coverage of channels:
To the above, you can add SMS, RCS and email.
Infobip Flow has another nice quality – it is built for both inbound and outbound communications. Most of its competitors do inbound flows only.
In a world where competition may force price wars on CPaaS basic offerings of voice and SMS, adding support for omnichannel seems like a good way to limit attrition and churn and increase vendor lock-in.
RingCentral – Embeddables
RingCentral isn’t a CPaaS vendor. They offer a communication service for the enterprise. You’ve got a company and need a way to communicate? There’s RingCentral.
What they’ve done in the past couple of years was add an API layer to some of their services. Things like pushing messages into Glip, handling phone calls, etc.
The idea is that if you need something done in an automated fashion in RingCentral, you can use the API for it. In many simple cases, this might be used instead of adopting CPaaS APIs. In other cases, it is about using a single vendor or having specific integrations relevant to the RingCentral platform.
What RingCentral did was add what they call Embeddable:
“With RingCentral Embeddable, you can embed a full-featured softphone into your favorite web application for an integrated communications experience that drives productivity and ease of use without lengthy development time“
This concept of embedding a piece of code isn’t new – YouTube videos offer such a capability, as do a slew of other services out there. When it comes to communications, it is similar in nature to what TokBox has in the form of Video Chat Embeds, but done at the level of users and their user accounts on RingCentral.
This definitely makes integrations of RingCentral with CRM tools a lot easier to get done, and makes it easier for non-developers to engage with them – similar to how Flow type offerings make it easier for non-developers to handle communication flows.
SignalWire – Price and Flexibility
SignalWire is an interesting proposition. It comes from the team that created and is maintaining FreeSWITCH, the leading open source framework used today by many communication providers, including some of the CPaaS vendors.
The FreeSWITCH team decided to build their own managed service (=CPaaS in this case), calling it SignalWire. Here are a few examples of the punchy copy they have on their website:
What they seem to be aiming for are two things: price and flexibility
Price
They offer close to wholesale price points (at least based on the website – I haven’t done a price comparison on this one, though their sample pricing for the US does seem low).
To make things easier, they are targeting Twilio customers, doing that by offering TwiML support (similar to what Plivo did/is doing). TwiML is a markup language for Twilio, which can be used to control what happens on connected calls. Continuing with the blunt approach, SignalWire calls this LāML – Legacy Antiquated Markup Language.
While this may fit a certain type of Twilio customers, it certainly doesn’t cover the whole gamut of Twilio services today.
Flexibility
On the flexibility front, there are mostly marketing messages today and no real announced products on the SignalWire website.
Besides LāML, there’s a WebSocket based client API/SDK, not so different from what you’ll find elsewhere.
They can probably get away with it in the sales process by saying “we give you FreeSWITCH from the source”, but I am not sure what happens when developers want to configure that elastic cloud service the way they are used to doing with their own FreeSWITCH installation.
All in all, this is an interesting offering, with an interesting approach and go-to-market.
TeleSign – Security and Data Analytics
TeleSign is focused on SMS. And a bit of voice. As their website states: “APIs Delivering User Verification, Data Insights & Communications”
Since security, verification and fraud prevention these days rely heavily on analytics, TeleSign are “hoarding” data about phone numbers, using it for these use cases. It isn’t that others don’t do it (there’s Twilio Authy, Nexmo Number Insight and others), but this is what they are putting front and center.
Since their acquisition by BICS, a wholesale operator for wireline and wireless carriers, that has grown even further, as they gain access to more and more data.
It will be interesting to see whether TeleSign grows its business from security into additional communication domains, or focuses on security and expands from the telecom space to adjacent areas.
Twilio – Adjacencies
Talking about adjacencies, that’s what Twilio is doing. Now that they are a public company, there is an even more insatiable appetite for growth within Twilio, in an effort to find more revenue streams. So far, this has worked great for Twilio.
Here are two areas we’ve seen Twilio going into: email and wireless.
How email fits into the Twilio communication APIs is still an open question, though I can see a few interesting initiatives there.
And then there’s the wireless offering of Twilio, which resembles a more flexible M2M play.
But where would Twilio go next?
UCaaS, going after unified communications vendors and competing with them head to head?
Maybe try to jump towards an Intercom-like service of its own? Or purchase Intercom?
Or find another market of developers that is growing nicely – similar maybe to its recent Stripe integration for Twilio Pay.
Twilio in a way has been defining and redefining what CPaaS is for the past several years. They need to continue doing that to stay in the lead and well ahead of their competition.
VoIP Innovations – Marketplace
VoIP Innovations came out with what they call Showroom.
Here’s a short video of the explanation of what that is exactly:
Many of the CPaaS vendors offer a partner program of sorts. This is where vendors who develop stuff for others or build tooling and apps on top of the CPaaS vendor’s APIs can go and showcase their work. The programs vary from one CPaaS company to another.
Twilio has Showcase as well as an add-on marketplace of sorts. Nexmo has a partners directory. VoIP Innovations are banking on their showroom.
What makes it a bit different is the target audience associated with it:
While there isn’t much documentation to go on, I am assuming that the whole intent behind the marketplace is to offer direct monetization opportunities for developers and resellers, by taking care of customer acquisition as well as payment on behalf of the developer and reseller.
A concept taken from other marketplaces (think mobile app stores). It will be interesting to see how successful this will be.
Vonage – UCaaS+CPaaS
Vonage is interesting. It started as consumer VoIP, turned cloud UC vendor (=enterprise communications) through acquisitions, then acquired Nexmo and later TokBox to add CPaaS, and continued with the NewVoiceMedia acquisition to cover the contact center space.
With such a spread, how does one differentiate? Probably by leveraging synergies across its product offerings and markets.
What Vonage recently did was bring number programmability from its Nexmo/CPaaS offering to its VBC/UCaaS platform.
What do they gain?
Is this good for Nexmo customers and partners? Yap. They can now reach out to the Vonage business customers as an additional target market.
Is this good for Vonage customers and partners? Yap. They can now do more, and more customized communications solutions with this added flexibility.
Voximplant – Flow
Voximplant is one of the lesser known CPaaS vendors. Its whole platform is built on the concept of an App Engine, where you write the communications logic right on their platform using JavaScript. It is serverless from the ground up. A year or two ago, Voximplant added Smartcalls – a product that enables you to sketch out call flows for outbound interactions: marketing, sales, etc. These interactions can then be played out across a large number of phone numbers and automated, making it really easy and flexible to drive phone based campaigns.
Now? Voximplant took the next step of adding inbound interactions, covering the IVR and contact center types of scenarios.
Twilio, MessageBird and Plivo offer inbound visual flow products. These allow developers to drag and drop communication widgets to build a flow – a customer interaction through the system.
Voximplant and Infobip offer inbound and outbound flows, where you can also plot company/agent based initiatives with greater ease as well as the customer initiated interactions.
Why aren’t you listed here?
The CPaaS market is large and varied. It is hard to see everyone all the time. It is also hard to innovate and differentiate every year. The vendors here are the ones I had briefings with or ones who promoted their products in ways that were visible to me. But more than anything, these are the ones that I felt have changed their offerings in the past year in a differentiating manner.
BTW – if you think that differentiation here means functionality that other vendors don’t have, then you are wrong. Doing that is close to impossible today. Differentiation is simply where each vendor is putting its focus, trying to attract customers and carve out its niche within the broader market. It is the story each vendor tells about its product.
If you feel like a vendor needs to be here, or did something meaningful and interesting, just contact me. I am always happy to learn more about what is happening in the market.
Who is missing in my WebRTC PaaS report?
Later this month, I will be releasing my latest update of the WebRTC PaaS report.
There are changes taking place in the market, and what vendors are offering in the WebRTC space as a managed API service is also changing. This report is there to guide buyers and sellers in the market on what to do.
For buyers, it is about which platform to pick for their project – or in some cases, in which of the platform vendors to invest.
For sellers, it is about what to add to their roadmap – to understand how they are viewed from the outside and how they compare to their peers.
Here’s who’s been in the last update of the report:
Think you should be there? Contact me.
Want to purchase the report? There’s a 30% discount on it from today and until the update gets published (and yes – you will be receiving the update once it gets published for no additional fee).
There will be a new appendix in the report, covering the topic of Flow and Embeddable trends in the market. Something that will become more important as we move forward.
The post CPaaS differentiation in 2019 appeared first on BlogGeek.me.
This demo of the Microsoft Surface Hub 2 is pretty damn cool…
I don’t run a lot of Microsoft products anymore – I switched to Mac when the Intel chip landed and Apple moved to a Unix underpinning. That said, I have seen much better quality in products coming from Microsoft in the last few years, so maybe they deserve a second look.
Surface Hub 2 sort of reminds me of a product called Perch, built by a local Vancouver team, which was meant to serve as a portal into disparate global offices. Perch was way before its time. WebRTC was still in its infancy and personal device video conferencing had not really crossed the chasm, which is a shame considering where we are today.
Now there are many video conferencing companies and products, and plenty of alternatives/platforms for developers to build on. It certainly seems plausible now that we could see the Microsoft Surface Hub 2 in boardrooms across the globe. Apparently it will be interoperable with WebRTC endpoints as well, which could make this a powerful work tool indeed. That would enable collaboration with peers over IP on various endpoints including laptops, tablets and mobile, regardless of the OS. Sharing product ideas, riffing on concepts and polishing final features on a product release using the Microsoft Surface Hub 2 as a tool could be a refreshing new way to work.
It will be interesting to see what developments come about from the Microsoft press event in NYC in April, as reported by The Verge.
I haven’t blogged here in some time, so I figured that since the topic is relevant, this would be a good opportunity to dust off the old blog (webrtc.is / sipthat.com) and post something we have been working on at SignalWire. I am quite passionate about WebRTC and real-time communications, so it’s great to be helping bring it to life at SignalWire!
We all know and love <cough> SIP, so we decided we would enable the use of SIP over WebSockets at SignalWire. This new offer also enables functionality like WebRTC with SIP over WebSockets.
This means our customers can now use off-the-shelf JS libraries, like JsSIP, to create basic web experiences for their users, powered by SignalWire. It used to be a bit of a PITA to create services that provided users with seamless online communications. Now it’s a breeze, and when using SignalWire it’s also very affordable.
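For what it’s worth, here is a hedged sketch of what such an off-the-shelf JsSIP setup can look like against a SIP-over-WebSockets endpoint. The WebSocket URL, SIP URI and password are placeholders – use whatever your SignalWire space (or other SIP endpoint) gives you.

```typescript
// Minimal JsSIP setup over a SIP-over-WebSockets endpoint (placeholder credentials).
import * as JsSIP from "jssip";

const socket = new JsSIP.WebSocketInterface("wss://example.sip.signalwire.com");
const ua = new JsSIP.UA({
  sockets: [socket],
  uri: "sip:alice@example.sip.signalwire.com",
  password: "secret",
});

ua.on("registered", () => {
  // Once registered, place a basic audio/video call.
  ua.call("sip:bob@example.sip.signalwire.com", {
    mediaConstraints: { audio: true, video: true },
  });
});

ua.start();
```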
For now, we are enabling basic calling and video capabilities, the advanced functionality (including video conferencing) will come in conjunction with a future release of a SignalWire RELAY JS library.
Personally, I can’t wait to see what creative minds will build using this technology with SignalWire on the backend.
If you want to know more about SignalWire’s new WebRTC + SIP over WebSockets offer, you can read about it on the SignalWire product blog.
WebRTC doesn’t really connect people, but the way you think about signaling is important to your WebRTC application.
Here’s a comment left on one of my recent articles:
WebRTC is… still just a little confusing…
Tsahi, i’m reading the book recommended by Loreto & Romano but the examples are outdated. With regards to the SDP signal – if peer A is on a webRTC application, but peer B is surfing youtube – How does peer B get notified of an offer? It would have to go to peer B’s email address right? — because there is no way of knowing peer B’s IP address. Please help.
A few quick things before I dig deeper into this WebRTC connectivity thing:
How well do you know WebRTC? Check it out in my online WebRTC quiz.
Connecting, Signaling and WebRTC
I’ll use a kind of a bad comparison here to try to explain this.
Let’s say you are the proud owner of a Pilates studio. You’re the instructor there (#truestory – at least for my wife).
My wife gives Pilates lessons at different hours of the day. These are private lessons so it is rather flexible on both sides. But let me ask you this – how do people know when to come for a lesson?
This being Israel, they usually communicate with my wife via WhatsApp to decide together on the date and time. Usually, people stick to the same day of the week and time, and start communicating only if they can’t make it, want to reschedule or just want to make sure the lesson is still taking place.
Back to WebRTC.
WebRTC is that Pilates studio. It does one thing – enables live media to flow from one browser to another. Sometimes also non-browsers, but let’s stick to the basics here.
How do the people who need to share or receive that live media connect to each other? That’s not what WebRTC does – it happens somewhere else. And that somewhere is the signaling mechanism that you pick for your own application. I am calling it a mechanism and not a protocol, since it is going to be a tad more confusing in a second.
Or not.
Now let’s go back to WebRTC, signaling and connecting people and look at it from a point of view of different scenarios.
Scheduled Meeting
We’ll start with a scheduled meeting. At any given point in time, I have a few of those coming up – meetings with clients, partners and potential clients. Take one such calendar invitation as an example.
This one happens to take place using Google Meet. Who’s calling who? No one really. I’ll just click that link in the invite when the time comes and magically find myself in the same conference with the other participants.
In most scheduled conferences, you just join a WebRTC link
Where do you get that link to use?
Some of these services allow inviting people from inside the meeting. That ends up being sent to them via email or an SMS as a link or just dialing their phone (without WebRTC).
Ad-hoc “upgrade” of text chat to video conference
There are ad-hoc calls. These usually start from a chat message.
Oftentimes, I’d rather text chat than do a voice or video call. It has to do with the speed and asynchronous nature of text. Which means I’ll be chatting with someone over whatever instant messaging service we select, and at some point, I might want to switch mediums – move from text to something a bit more synchronous like video:
Like this example with Philipp – most of our conversations start in Hangouts (that’s where he is most reachable to me) and when needed, we’ll just jump on a call, without planning it first.
Who is calling whom here? Does it matter?
What happens here is that both of us are already “inside” the communications app, so we both have a direct link to the service. Passing that information from one side to the other is a no brainer at this point.
So how will that get signaled? However you see fit. Probably on top of a WebSocket or over HTTPS.
I am calling you on the “phone”
What if there’s nothing pre-planned, so it isn’t a scheduled meeting, and we haven’t really been on a text chat to warm things up towards a call? How do you reach me now?
How do you “dial”?
Puneet is one of our support/testing engineers at testRTC. While he will usually text me over Slack to start a call, he might just try calling directly from time to time.
What happens then?
I am not in front of my laptop with the Slack app open. My phone is in standby mode. How does it start ringing? What does WebRTC do to get my attention?
Nothing.
The phone starts ringing because it received a mobile push notification. I’ve got the Slack app installed, so it can receive push notifications. Slack invoked a push notification to wake up the app and make it “ring” for me.
The same can be done with web notifications. And there are probably other means to do similar things on IoT devices. The thing is – this is out of scope for WebRTC, but something that is doable with the signaling technologies available to you.
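On the web side, a hedged sketch of that “wake up and ring” flow could look like the service worker below – it assumes the page has already registered the worker and subscribed to push, and that your server sends a payload with a `caller` field (a made-up name, for illustration only).

```typescript
// Service worker sketch: a push message wakes the worker and shows an
// incoming-call notification, even when the app's tab is not in focus.
// Assumes this file is compiled with the "webworker" lib and registered as a SW.
declare const self: ServiceWorkerGlobalScope;

self.addEventListener("push", (event: PushEvent) => {
  const payload = event.data ? event.data.json() : { caller: "unknown" };
  event.waitUntil(
    self.registration.showNotification("Incoming call", {
      body: `Call from ${payload.caller}`, // "caller" is an assumed payload field
      tag: "incoming-call",
    })
  );
});
```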
Contact center agent answering calls
When a contact center adopts WebRTC in order to migrate its agents from desktop phones or installed softphones, calls will end up being received in the browser.
This happens by integrating callbars inside CRMs or just by having the CRM implement the contact center part of the equation as well.
What happens then? How do calls get dialed?
They go through the PSTN towards a PBX. More often than not, that PBX will be based on Asterisk or FreeSWITCH, though other alternatives exist. PBXs usually base themselves around the SIP protocol, which leads to two alternatives for the signaling protocol that will be used by WebRTC in the browser: running SIP over WebSockets directly, or using a proprietary signaling protocol that gets translated to SIP on the server side.
In both cases, the contact center agent is registered in advance. It is also marked as “available” in most contact center software logic – this means that incoming calls waiting in the call center queue can be routed to that agent. So it is sitting and waiting for incoming calls. In some ways, this is similar to the upgrade from text chat scenario.
Connecting? WebRTC?
When it comes to actual users, WebRTC doesn’t get them “connected”. At least not from a signaling point of view.
What WebRTC does is negotiate the paths that the media will use throughout the session. Those are the “offer-answer” (or JSEP) messages that pass from one WebRTC entity to another. And even those aren’t sent by WebRTC itself – WebRTC creates the blob of data it wants to send and lets your application deliver it in any way you see fit.
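A minimal sketch of that division of labor – WebRTC produces the offer blob, and the application ships it over whatever signaling channel it already has (a WebSocket here; the message format is made up):

```typescript
// WebRTC creates the SDP offer; the application decides how it travels.
async function startCall(
  pc: RTCPeerConnection,
  signaling: WebSocket,
  callee: string
): Promise<void> {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // The "send it any way you see fit" part - a WebSocket, HTTPS, or even a push message.
  signaling.send(JSON.stringify({ type: "call-offer", to: callee, sdp: offer.sdp }));
}
```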
Still confused? There’s a course for that – my online WebRTC training. The first module (out of eight modules) is free, so go learn about WebRTC.
The post How does WebRTC connect people? appeared first on BlogGeek.me.
WebRTC wins over competition because there is no competition – browsers offer only WebRTC as a technology for web developers.
It was raining and miserable this last Saturday. I had lots of ideas for articles to write for BlogGeek.me in my backlog, but none of them really inspired me to action. The 8yo went to his cousin. The wife had her own things to do. My 11yo daughter was bored to death. She comes to me and says: “Can we do a trip outside to the park? I need some fresh air.” How could I answer other than saying yes?
The rain stopped a bit, so we went outside. What she really wanted wasn’t fresh air, but a chaperone to the closest candy vending machine. They are having a game at school for Purim, where she needs to bring small presents and candies to another kid in her class without her knowing who is pampering her. She needed an extra candy.
How is this related to WebRTC? It isn’t.
When I asked her about her plans for this game, she mentioned the trinket she planned on giving today –
2 mechanical pencils.
And that’s definitely WebRTC related.
A quick conversation ensued between me and my daughter – are these 0.5 mm or 0.7 mm point type? My daughter went on to explain that it might even be 0.9 mm.
So many alternatives.
Competing standards
It got me thinking:
With analog video recording we had VHS and Betamax.
Paper size? A4 and Letter.
Power frequency? 50 Hz and 60 Hz.
With VoIP signaling we had H.323 and SIP. And also XMPP.
Audio and video codecs? A shopping mall of alternatives.
Web browser streaming? HLS and MPEG-DASH.
Inches and meters. Left-side vs right-side driving in cars.
The list is endless.
WebRTC standard
But browser based real time media communications?
WebRTC.
There. Is. No. Other. Alternative.
We had that short romance around ORTC, which ended with ORTC dead and its main concepts just wrapped back into WebRTC.
What other technology would you use or could you use inside a browser to do a video call?
Nothing.
Just WebRTC.
The other alternatives just don’t cut it (including what Zoom is presumably doing).
What does that mean exactly? It gives us a kind of a virtuous circle.
For the most part, there’s no question whether you should select WebRTC these days. There’s also no question about what the alternatives are (there usually are none). It isn’t a question of whether WebRTC is getting adopted, used, growing or popular.
When our window to the world is the browser, then WebRTC is what you use.
For mobile apps or other devices, the need for browser access – or just for an ecosystem around the technology picked – translates again to WebRTC.
Thinking of using real time media technology? That’s synonymous to WebRTC.
Want to learn more about WebRTC? Check out the first module of my online course – it is free.
The post Why is WebRTC winning over its (non)competition? appeared first on BlogGeek.me.