The beginning of the end of HEVC/H.265 video codec.
On September 1st the news got out: there's a new group called the Alliance for Open Media. There has been some interesting coverage about it in the media, along with some early ramifications that have already been published. The three most relevant pieces of news I found are these:
I wrote about the pending codec wars just a week ago on SearchUC, concluding that all roads lead to a future with royalty-free video codecs. That was before I had any knowledge of the announcement of the Alliance for Open Media. This announcement makes that future a lot more likely.
What I'd like to do here is cover where this is headed and what it tells us about the players in this alliance and the pending codec wars.
The Press Release
Let's start off with the alliance's initial press release:
This initial project will create a new, open royalty-free video codec specification based on the contributions of members, along with binding specifications for media format, content encryption and adaptive streaming, thereby creating opportunities for next-generation media experiences.
So the idea is to invent a new codec that is royalty-free. As Chris pointed out, this is somewhere between hard and impossible. Cisco, in the announcement of its new Thor codec, made it quite clear what the main challenge is. As Jonathan Rosenberg puts it:
We also hired patent lawyers and consultants familiar with this technology area. We created a new codec development process which would allow us to work through the long list of patents in this space, and continually evolve our codec to work around or avoid those patents.
The closest thing to a “finished good” here is VP9 at the moment.
Is the alliance planning to bank on VP9 and use it as the baseline for the specification of this new codec, or will it aim at VP10 and a clean slate? Mozilla, a member company in this alliance, stated that they "believe that Daala, Cisco's Thor, and Google's VP10 combine to form an excellent basis for a truly world-class royalty-free codec."
Daala takes a lot of its technologies from VP9. Thor is too new to count, and VP10 is just a thought compared to VP9. It makes more sense that VP9 would be used as the baseline, and Microsoft's adoption of VP9 in that same timeframe may indicate just that intent. Or not.
The other tidbit I found interesting is the initial focus in the statement:
The Alliance’s initial focus is to deliver a next-generation video format that is:
It would be easier to just bio-engineer Superman.
Jokes aside, the bulleted list above is just table stakes today:
High goals for a committee to work on.
It will require Cisco's "cookbook": a team made up of codec engineers and lawyers.
The Members
What can we learn from the 7 initial alliance members? That this looked like an impossible feat, and someone achieved it anyway. Getting these players around the same table while leaving the egos out of the room wasn't easy.
Amazon
Amazon is new to video codecs – or codecs and media. They have their own video streaming service, but that's about it.
Their addition into this group is interesting in several aspects:
Cisco
Cisco is a big player in network gear and in unified communications. It has backed H.264 to date, mainly due to its own deployed systems. That said, it is free to pick and choose next-generation codecs. While it supports H.265 in its high-end telepresence units, it probably saw the futility of continuing down this path.
Cisco, though, has very little say over future codec adoption.
Google
Google needs free codecs. This is why it acquired On2 in the first place – to have VP8, VP9 and now VP10 compete with H.26x. To some extent, you can trace the roots of this alliance back to the On2 acquisition, with the creation of WebM as the first turning point in this story.
For Google, this means ditching the VPx codec branding, but having what they want – a free video codec.
The main uses for Google here are first and foremost YouTube and later on WebRTC. Chrome is the obvious vehicle of delivery for both.
I don't see Google slowing down on their adoption of VP9 in WebRTC or reducing its use on YouTube – on the contrary. I assume the model played out here will be the same one Google used with SPDY and HTTP/2:
To that end, Google may well increase their team size to try and speed up their technology advancement here.
Intel
Intel has been trying for years now to conquer mobile, with little to show for its efforts. When it comes to mobile, ARM chipsets rule.
Intel can’t really help with the “any modern device” part of the alliance’s charter, but it is a good start. They are currently the only chipset vendor in the alliance, and until others join it, there’s a real risk of this being a futile effort.
The companies we need here are ARM, Qualcomm, Broadcom and Samsung to begin with.
Microsoft
Microsoft decided to leave the H.26x world here. This is great news. It is also making moves towards adopting WebRTC.
Having Google Chrome and Microsoft Edge behind this initiative is what is necessary to succeed. Apple is sorely missing, which will most definitely cause market challenges moving forward – if Apple doesn’t include hardware acceleration for this codec in their iOS devices, then a large (and wealthy) chunk of the consumer market will be missing.
With every day that passes, it seems that Microsoft is acting like a modern company ready for this day and age, as opposed to the dinosaur of the 90s.
Mozilla
Mozilla somehow manages to plug itself into every possible initiative. This alliance is an obvious fit for a company like Mozilla. It is also good for the alliance – 3 out of 4 major browser players behind this initiative is more than we've seen for many years in this area.
Netflix
Netflix started by adopting H.265 for their 4K video streaming. It seemed weird to me that they adopted H.265 and not VP9 at the time. I am sure the latest announcements coming out of HEVC Advance about licensing costs for content streaming have caused a lot of headaches at Netflix and tipped the scale towards joining this alliance.
If you are a content provider operating at Netflix scale, with their margins and business model, the greedy 0.5% of gross revenue licensing fee from HEVC Advance becomes debilitating.
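To put a hypothetical number on that: at, say, $5 billion a year in streaming revenue, a 0.5% royalty on gross revenue comes to $25 million, every year. The figure is invented purely for illustration, but it shows why a content fee structured this way scares streaming companies.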
With YouTube, Amazon and Netflix behind this alliance, you can safely say that web video streaming has voiced its opinion and placed itself behind this alliance and against HEVC/H.265.
Missing in Action
Who's missing?
We have 3 out of 4 browser vendors, so no Apple.
We have the web streaming vendors. No Facebook, but that is probably because Facebook isn’t as into the details of these things as either Netflix or Google. Yet.
We don’t have the traditional content providers – cable companies and IPTV companies.
We don’t have the large studios – the content creators.
We don’t have the chipset vendors.
Apple
Apple is an enigma. They make no announcements about their intent, but the little we know isn't promising.
Once this initiative and video codec comes to W3C and IETF for standardization, will they object? Join? Implement? Ignore? Adopt?
Content providers
Content providers are banking on H.265 for now. They are using the outdated MPEG-2 video codec or the current H.264 video codec. For them, migrating to H.265 seems reasonable. That is, until you look at the licensing costs for content providers (see Netflix above).
That said, some of them, in Korea and Japan, actually own patents around H.265.
Where will they be headed with this?
Content creators
Content creators couldn't care less. Or maybe they could, as some of them are now also becoming content providers, streaming their own content direct-to-consumer in trials around unbundling and cord cutting.
They should be counting themselves as part of the Alliance for Open Media if you ask me.
Chipset vendors
Chipset vendors are the real missing piece here. Some of them (Samsung) hold patents around H.265. Will they be happy to ditch those efforts and move to a new royalty-free codec? Hard to say.
The problem is that without the chipset vendors behind this initiative, it will not succeed. One of the main complaints about WebRTC is the lack of chipset support for its codecs. This will need to change for this codec to succeed. It is also where the alliance needs to put its political effort to increase its size.
The Beginning of the End for HEVC/H.265
This announcement came as a surprise to me. I had just finished writing my presentation for an upcoming TechTok with the same title as this post: WebRTC Codec Wars Rebooted. I will now need to rewrite that presentation.
This announcement, if played right, can mean the end of the line for the H.26x video codecs and the beginning of a new effort around royalty-free video codecs, making them the norm. The enormity of this can be compared to the creation of Linux and its effect on server operating systems and the Internet itself.
Making video codecs free is important for the future of our digital life.
Kudos to the people who dared to dream up this initiative and made it happen.
Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.
The post WebRTC Codec Wars: Rebooted appeared first on BlogGeek.me.
Hello, again. This past week in the FreeSWITCH master branch we had 116 commits! Wow, the team was really busy this week! Our features for this week are: a new getenv FSAPI added to mod_commands, many improvements to the verto communicator, and the beginnings of another new module: mod_redis is being deprecated in favor of mod_hiredis!
Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
And, this past week in the FreeSWITCH 1.4 branch we had 2 new commits merged in from master. And the FreeSWITCH 1.4.21 release is here! Go check it out!
The following bugs were fixed:
The company sets the pace of innovation in Unified Communications with new PUSH services for its smartphone clients, a new integrated cloud deployment environment and a redesigned user interface.
LONDON, UK, XX July 2015 – 3CX, developer of the next-generation PBX 3CX Phone System for Windows, today announces the launch of Version 14 of its award-winning communications solution. The latest release gives resellers the ability to deliver a cloud-based or on-premises PBX on the same platform; great news for partners looking to expand their offering to include hosted PBX services.
New clients for iOS, Android and Windows Phone
The new iOS and Android clients have been completely rewritten and take mobility to a new level thanks to advanced use of PUSH technology and the integrated SIP tunnel. Smartphone users can now use their company extension anywhere, as easily as a regular mobile phone. Thanks to PUSH, 3CX can wake the phone on an inbound call, and fast registration and activation of the extension allow the user to answer easily.
Nick Galea, CEO of 3CX, says:
"Version 14 sets new standards of operation, particularly for the smartphone clients. Now, with PUSH integrated into our softphone clients, giving 3CX messages priority over others, 3CX Phone System guarantees unbeatable mobility, keeping your company PBX at your fingertips wherever you are. All of this, combined with ease of use and management, makes 3CX Phone System unique on the market."
Integrated Hosted PBX Functionality
In addition, customers are now offered a new deployment mode that allows 3CX partners to easily offer a hosted PBX. 3CX Phone System v14 can be installed as a hosted PBX server capable of managing up to 25 instances on a single Windows Server, with exceptional ease of management and a highly competitive cost per instance. 3CX offers a complete hosted PBX, unlike the usual multi-tenant systems, in which each customer's PBX data and services are completely separate.
This capability arrives at a time when vendors are seeing growing demand for cloud-based business solutions. The hosted UC/PBX market is expected to exceed $18 billion in revenue by 2018, and already today, with 3CX Phone System Hosted PBX, 3CX partners can offer 3CX as a hosted PBX alongside the on-premises version.
Simplified management with new features and more supported VoIP Providers
On the system management side, several features have been added, such as schedulable backup & restore, more alert messages and a streamlined interface. 3CX also integrates a fault tolerance system. With the PRO edition it is now easy to keep a virtual standby copy of the PBX, ready to be activated in case of server failure. Beyond this, administrators can take advantage of a series of new features, such as scheduled reports sent by email, disk space management for voicemail and recordings, and several new VoIP Providers to choose from, including, among others, Broadvoice, AMcom, Deutsche Telekom, Time Warner Cable and, for Italy, CloudItalia.
Updated Integrated Video Conferencing
The integrated video conferencing solution, 3CX WebMeeting, has also benefited from several improvements, starting with higher video quality thanks to improvements in WebRTC technology. The software now includes several new features ideal for webinars and virtual classrooms, such as remote control, recording in a standard YouTube-compatible format and feedback polling. In addition, it offers the ability to hand over control of the meeting to other participants and to pre-upload PowerPoint presentations in XML format for lower bandwidth consumption. Finally, 3CX WebMeeting is free for the whole company for up to 10 simultaneous participants and, to guarantee a better user experience, 3CX has extended its worldwide server network for optimal handling of video streams.
Note to Editors
For more information about 3CX Phone System 14 please visit:
Price list here.
Download Links and Documentation
Download 3CX Phone System v14: http://downloads.3cx.com/downloads/3CXPhoneSystem14.exe
Download 3CXPhone for iOS Client
Download 3CXPhone for Android Client
Download 3CXPhone for Windows Client
Download 3CXPhone for Mac Client
Download 3CX Session Border Controller for Windows
Read the v14 Admin Manual
Read the v14 User Manual
It is hard to overstate how exciting this release is for our resellers, who are passionate about the end user experience. Some of the requests came through our community, who helped us beta test before launch and deserve a hand for kicking the tires! Some major highlights:
Our third KazooCon is rapidly approaching, and we have been reflecting on the successes and failures of our first two events. We have talked to past attendees, our partners, and sponsors about what is most important to them about our products. Over the past couple of months, we have been slowly building the KazooCon agenda and today we are proud to announce it to you. As you may already know, this year’s event will occur October 5th – 6th at Broadway Studios in San Francisco, CA.
So what will take place at KazooCon and why should you come?
Huge Announcements
Every year at KazooCon we want to show you all the incredible things we've been working on, and admittedly sometimes we over-promise and show an unfinished product. Back in January, the entire 2600Hz team discussed Kazoo, our common goals, and how we want to take the company forward. Over the past nine months we have been singularly focused on building a set of fully-featured products, and KazooCon is where we will be introducing them to the world. There is no question that this will be the most important KazooCon yet! We will announce SaaS offerings, including our brand new reseller platform, Cluster Manager, and Infrastructure as a Service (IaaS), and we will reveal our incredible new brand. We have been very quiet all year, but brace yourself because it's about to get loud.
Interactive Demos
We heard your feedback from KazooCon 2014, and the overwhelming favorite session was the breakout demo. So we are doubling down this year and we'll be hosting five hands-on demos (hint: bring your laptop). You will get a first-hand look at our new Kazoo APIs, and our engineers will take you through them step-by-step. Our interactive demos will include:
Wonderful Slate of Telephony Leaders
This year we have an exceptional slate of business leaders and telecom pioneers. Telco gurus from around the world are going to wow you with their talks, presenting their own stories of success, tools for enhancement in the telecom industry, and much more. Our speakers include:
So what are you waiting for? Join us for the Unified Communications Revolution!
Not convinced yet? Take a look at a presentation from last year!
The FreeSWITCH 1.4.21 release is here! This is a routine maintenance release and the resources are located here:
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
The fact that you can use WebRTC to implement a secure, reliable, and standards-based peer-to-peer network is a huge deal that is often overlooked. We have been notably light on the DataChannel here at webrtcHacks, so I asked Arin Sime if he would be interested in providing one of his great walkthroughs on this topic. He put together a very practical example of a multi-player game. You may recognize Arin from RealTime Weekly or from his company Agility Feat or his new webRTC.ventures brand. Check out this excellent step-by-step guide below and start lightening the load on your servers and reducing message latency with the DataChannel.
{“editor”: “chad hart“}
WebRTC is “all about video chat”, right? I’ve been guilty of saying things like that myself when explaining WebRTC to colleagues and clients, but it’s a drastic oversimplification that should never go beyond your first explanation of WebRTC to someone.
Of course, there's more to WebRTC than just video chat. WebRTC allows for peer-to-peer video, audio, and data channels. Data channels are a distinct part of that architecture and are often forgotten in the excitement of seeing your video pop up in the browser.
Being able to exchange data directly between two browsers, without any sort of intermediary web socket server, is very useful. The Data Channel carries the same advantages as WebRTC video and audio: it's fully peer-to-peer and encrypted. This means Data Channels are useful for things like text chat applications, file transfers, P2P file exchanges, gaming, and more.
In this post, I’m going to show you the basics of how to setup and use a WebRTC Data Channel.
First, let’s review the architecture of a WebRTC application.
You have to set up signaling code in order to establish the peer-to-peer connection between two peers. Once the signaling is complete (which takes place over a 3rd party server), you have a Peer to Peer (P2P) connection between two users which can contain video and audio streams, and a data channel.
The signaling for both processes is very similar, except that if you are building a Data Channel only application then you don’t need to call GetUserMedia or exchange streams with the other peer.
Data Channel Security
There are a couple of other differences about using the DataChannel. The most obvious one is that users don't need to give you their permission in order to establish a Data Channel over an RTCPeerConnection object. That's different than video and audio, which will prompt the browser to ask the user for permissions to turn on their camera and microphone.
Although it’s generating some debate right now, data channels don’t require explicit permission from users. That makes it similar to a web socket connection, which can be used in a website without the knowledge of users.
The Data Channel can be used for many different things. The most common examples are for implementing text chat to go with your video chat. If you’re already setting up an RTCPeerConnection for video chat, then you might as well use the same connection to supply a Data Channel for text chat instead of setting up a different socket connection for text chat.
Likewise, you can use the Data Channel for transferring files directly between your peers in the RTCPeerConnection. This is nicer than a normal socket style connection because just like WebRTC video, the Data Channel is completely peer-to-peer and encrypted in transit. So your file transfer is more secure than in other architectures.
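As a rough sketch of how that could look (the chunk size, the sendFile helper, and the end-of-file message format here are my own assumptions for illustration, not something the article or WebRTC itself defines):

// Sketch: sending a File/Blob in chunks over an already-open RTCDataChannel
var CHUNK_SIZE = 16384;   // 16 KB chunks keep each message comfortably small

function sendFile(channel, file) {
  var offset = 0;
  var reader = new FileReader();

  reader.onload = function () {
    channel.send(reader.result);              // send one ArrayBuffer chunk
    offset += reader.result.byteLength;
    if (offset < file.size) {
      readSlice();                            // keep reading until the whole file is sent
    } else {
      channel.send(JSON.stringify({ done: true, name: file.name, size: file.size }));
    }
  };

  function readSlice() {
    reader.readAsArrayBuffer(file.slice(offset, offset + CHUNK_SIZE));
  }

  readSlice();
}

On the receiving side you would buffer the incoming ArrayBuffer chunks and reassemble them into a Blob once the final message arrives.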
The game of "Memory"
Don't limit your Data Channel imagination to these common examples though. In this post, I'm going to show you how to use the Data Channel to build a very simple two-player game. You can use the Data Channel to transfer any type of data you like between two browsers, so in this case we'll use it to send commands and data between two players of a game you might remember called "Memory".
In the game of memory, you can flip over a card, and then flip a second card, and if they match, you win that round and the cards stay face up. If they didn’t match, you put both face down again, and it’s the next person’s turn. By trying to remember what you and your opponents have been flipping, and where those cards were, you can win the game by correctly flipping the most pairs.
Adam Khoury already built a JavaScript implementation of this game for a single player, and you can read his tutorial on how to build the Memory game for a single player. I won't explain the logic of his code for building the game; what I'm going to do instead is build on top of his code with a very simple WebRTC Data Channel implementation to keep the card flipping in sync across two browsers.
You can see my complete code on GitHub, and below I’m going to show you the relevant segments.
In this example view of my modified Memory game, the user has correctly flipped pairs of F, D, and B, so those cards will stay face up. The cards K and L were just flipped and did not match, so they will go back face down.
Setting up the Data Channel configuration
I started with a simple NodeJS application to serve up my code, and I added in Express to create a simple visual layer. My project structure looks like this:
The important files for you to look at are datachannel.js (where the majority of the WebRTC logic is), memorygame.js (where Adam’s game javascript is, and which I have modified slightly to accommodate the Data Channel communications), and index.ejs, which contains a very lightweight presentation layer.
In datachannel.js, I have included some logic to setup the Data Channel. Let’s take a look at that:
//Signaling Code Setup
var configuration = {
    'iceServers': [{ 'url': 'stun:stun.l.google.com:19302' }]
};
var rtcPeerConn;
var dataChannelOptions = {
    ordered: false, //no guaranteed delivery, unreliable but faster
    maxRetransmitTime: 1000, //milliseconds
};
var dataChannel;

The configuration variable is what we pass into the RTCPeerConnection object, and we're using a public STUN server from Google, which you often see used in WebRTC demos online. Google is kind enough to let people use this for demos, but remember that it is not suitable for production use. If you are building a real app, you should look into setting up your own servers or using a commercial service like Xirsys to provide production-ready STUN and TURN servers for you.
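For a production deployment, the same configuration object would simply carry your own STUN and TURN entries instead. A sketch of what that might look like (the URLs and credentials below are placeholders, not a working service):

// Hypothetical production-style ICE configuration with your own STUN/TURN servers
var configuration = {
  'iceServers': [
    { 'url': 'stun:stun.example.com:3478' },
    { 'url': 'turn:turn.example.com:3478',
      'username': 'demo-user',
      'credential': 'demo-password' }
  ]
};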
The next set of options we define are the data channel options. You can choose for “ordered” to be either true or false.
When you specify “ordered: true”, then you are specifying that you want a Reliable Data Channel. That means that the packets are guaranteed to all arrive in the correct order, without any loss, otherwise the whole transaction will fail. This is a good idea for applications where there is significant burden if packets are occasionally lost due to a poor connection. However, it can slow down your application a little bit.
We’ve set ordered to false, which means we are okay with an Unreliable Data Channel. Our commands are not guaranteed to all arrive, but they probably will unless we are experiencing poor connectivity. Unless you take the Memory game very seriously and have money on the line, it’s probably not a big deal if you have to click twice. Unreliable data channels are a little faster.
Finally, we set a maxRetransmitTime before the Data Channel will fail and give up on that packet. Alternatively, we could have specified a number for maxRetransmits, but we can’t specify both constraints together.
Those are the most common options for a data channel, but you can also specify the protocol if you want something other than the default SCTP, and you can set negotiated to true if you want to keep WebRTC from setting up a data channel on the other side. If you choose to do that, then you might also want to supply your own id for the data channel. Typically you won’t need to set any of these options, leave them at their defaults by not including them in the configuration variable.
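To make the trade-off concrete, here is a sketch of two alternative option sets side by side; the variable names are mine, and the reliable variant is what you might pick for something like file transfer rather than game commands:

// Unreliable, unordered: fine for game commands, where a lost packet just means an extra click
var gameChannelOptions = {
  ordered: false,
  maxRetransmitTime: 1000   // stop retrying a packet after one second
};

// Reliable, ordered: for file transfer or anything that must arrive complete and in order.
// Leaving out maxRetransmits/maxRetransmitTime means retransmission is not limited.
var fileChannelOptions = {
  ordered: true
};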
Set up your own Signaling layer
The next section of code may be different based on your favorite options, but I have chosen to use express.io in my project, which is a socket.io package for node that integrates nicely with the express templating engine.
So the next bit of code is how I’m using socket.io to signal to any others on the web page that I am here and ready to play a game. Again, none of this is specified by WebRTC. You can choose to kick off the WebRTC signaling process in a different way.
io = io.connect();
io.emit('ready', {"signal_room": SIGNAL_ROOM});

//Send a first signaling message to anyone listening
//In other apps this would be on a button click, we are just doing it on page load
io.emit('signal',{"type":"user_here", "message":"Would you like to play a game?", "room":SIGNAL_ROOM});

In the next segment of datachannel.js, I've set up the event handler for when a different visitor to the site sends out a socket.io message that they are ready to play.
io.on('signaling_message', function(data) {
    //Setup the RTC Peer Connection object
    if (!rtcPeerConn)
        startSignaling();

    if (data.type != "user_here") {
        var message = JSON.parse(data.message);
        if (message.sdp) {
            rtcPeerConn.setRemoteDescription(new RTCSessionDescription(message.sdp), function () {
                // if we received an offer, we need to answer
                if (rtcPeerConn.remoteDescription.type == 'offer') {
                    rtcPeerConn.createAnswer(sendLocalDesc, logError);
                }
            }, logError);
        } else {
            rtcPeerConn.addIceCandidate(new RTCIceCandidate(message.candidate));
        }
    }
});

There are several things going on here. The first one to be executed is that if the rtcPeerConn object has not been initialized yet, then we call a local function to start the signaling process. So when Visitor 2 announces themselves as here, they will cause Visitor 1 to receive that message and start the signaling process.
If the type of socket.io message is not “user_here”, which is something I arbitrarily defined in my socket.io layer and not part of WebRTC signaling, then the code goes into a couple of WebRTC specific signaling scenarios – handling an SDP “offer” that was sent and crafting the “answer” to send back, as well as handling ICE candidates that were sent.
The WebRTC part of Signaling
For a more detailed discussion of WebRTC signaling, I refer you to Sam Dutton's HTML5 Rocks tutorial (http://www.html5rocks.com/en/tutorials/webrtc/infrastructure/), which is what my signaling code here is based on.
For completeness’ sake, I’m including below the remainder of the signaling code, including the startSignaling method referred to previously.
function startSignaling() {
    rtcPeerConn = new webkitRTCPeerConnection(configuration, null);
    dataChannel = rtcPeerConn.createDataChannel('textMessages', dataChannelOptions);
    dataChannel.onopen = dataChannelStateChanged;
    rtcPeerConn.ondatachannel = receiveDataChannel;

    // send any ice candidates to the other peer
    rtcPeerConn.onicecandidate = function (evt) {
        if (evt.candidate)
            io.emit('signal',{"type":"ice candidate", "message": JSON.stringify({ 'candidate': evt.candidate }), "room":SIGNAL_ROOM});
    };

    // let the 'negotiationneeded' event trigger offer generation
    rtcPeerConn.onnegotiationneeded = function () {
        rtcPeerConn.createOffer(sendLocalDesc, logError);
    }
}

function sendLocalDesc(desc) {
    rtcPeerConn.setLocalDescription(desc, function () {
        io.emit('signal',{"type":"SDP", "message": JSON.stringify({ 'sdp': rtcPeerConn.localDescription }), "room":SIGNAL_ROOM});
    }, logError);
}

This code handles setting up the event handlers on the RTCPeerConnection object for dealing with ICE candidates to establish the Peer to Peer connection.
Adding DataChannel options to RTCPeerConnection
This blog post is focused on the DataChannel more than the signaling process, so the following lines in the above code are the most important thing for us to discuss here:
rtcPeerConn = new webkitRTCPeerConnection(configuration, null);
dataChannel = rtcPeerConn.createDataChannel('textMessages', dataChannelOptions);
dataChannel.onopen = dataChannelStateChanged;
rtcPeerConn.ondatachannel = receiveDataChannel;

In this code what you are seeing is that after an RTCPeerConnection object is created, we take a couple extra steps that are not needed in the more common WebRTC video chat use case.
First we ask the rtcPeerConn to also create a DataChannel, which I arbitrarily named ‘textMessages’, and I passed in those dataChannelOptions we defined previously.
Setting up Message Event Handlers
Then we just define where to send two important Data Channel events: onopen and ondatachannel. These do basically what the names imply, so let's look at those two events.
function dataChannelStateChanged() {
    if (dataChannel.readyState === 'open') {
        dataChannel.onmessage = receiveDataChannelMessage;
    }
}

function receiveDataChannel(event) {
    dataChannel = event.channel;
    dataChannel.onmessage = receiveDataChannelMessage;
}
When the data channel is opened, we’ve told the RTCPeerConnection to call dataChannelStateChanged, which in turn tells the dataChannel to call another method we’ve defined, receiveDataChannelMessage, whenever a data channel message is received.
The receiveDataChannel method gets called when we receive a data channel from our peer, so that both parties have a reference to the same data channel. Here again, we are also setting the onmessage event of the data channel to call our method receiveDataChannelMessage method.
Receiving a Data Channel Message
So let's look at that method for receiving a Data Channel message:
function receiveDataChannelMessage(event) {
    if (event.data.split(" ")[0] == "memoryFlipTile") {
        var tileToFlip = event.data.split(" ")[1];
        displayMessage("Flipping tile " + tileToFlip);
        var tile = document.querySelector("#" + tileToFlip);
        var index = tileToFlip.split("_")[1];
        var tile_value = memory_array[index];
        flipTheTile(tile,tile_value);
    } else if (event.data.split(" ")[0] == "newBoard") {
        displayMessage("Setting up new board");
        memory_array = event.data.split(" ")[1].split(",");
        newBoard();
    }
}

Depending on your application, this method might just print out a chat message to the screen. You can send any characters you want over the data channel, so how you parse and process them on the receiving end is up to you.
In our case, we’re sending a couple of specific commands about flipping tiles over the data channel. So my implementation is parsing out the string on spaces, and assuming the first item in the string is the command itself.
If the command is “memoryFlipTile”, then this is the command to flip the same tile on our screen that our peer just flipped on their screen.
If the command is “newBoard”, then that is the command from our peer to setup a new board on our screen with all the cards face down. The peer is also sending us a stringified array of values to go on each card so that our boards match. We split that back into an array and save it to a local variable.
Controlling the Memory game to flip tiles
The actual flipTheTile and newBoard methods that are called reside in the memorygame.js file, which is essentially the same code that we've modified from Adam.
I’m not going to step through all of Adam’s code to explain how he built the single player Memory game in javascript, but I do want to highlight two places where I refactored it to accommodate two players.
In memorygame.js, the following function tells the DataChannel to let our peer know which card to flip, as well as flips the card on our own screen:
function memoryFlipTile(tile,val){
    dataChannel.send("memoryFlipTile " + tile.id);
    flipTheTile(tile,val);
}

Notice how simple it is to send a message to our peers using the data channel – just call the send method and pass any string you want. A more sophisticated example might send well formatted XML or JSON in a message, in any format you specify. In my case, I just send a command followed by the id of the tile to flip, with a space between.
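As a sketch of that more structured approach (the JSON field names and the handler below are my own invention, not part of the game's actual protocol), the same command could be sent and parsed like this:

// Hypothetical JSON-based variant of the same flip command
dataChannel.send(JSON.stringify({ command: "memoryFlipTile", tileId: tile.id }));

// ...and on the receiving side, parse once and branch on the command field
function receiveDataChannelMessage(event) {
  var msg = JSON.parse(event.data);
  if (msg.command === "memoryFlipTile") {
    var tile = document.querySelector("#" + msg.tileId);
    var tile_value = memory_array[msg.tileId.split("_")[1]];
    flipTheTile(tile, tile_value);
  }
}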
Setting up a new game board
In Adam's single player memory game, a new board is set up whenever you load the page. In my two player adaptation, I decided to have a new board triggered by a button click instead:
var setupBoard = document.querySelector("#setupBoard");
setupBoard.addEventListener('click', function(ev){
    memory_array.memory_tile_shuffle();
    newBoard();
    dataChannel.send("newBoard " + memory_array.toString());
    ev.preventDefault();
}, false);
In this case, the only important thing to notice is that I’ve defined a “newBoard” string to send over the data channel, and in this case I want to send a stringified version of the array containing the values to put behind each card.
Next steps to make the game better
That's really all there is to it! There's a lot more we could do to make this a better game. I haven't built in any logic to limit the game to two players, keep score by players, or enforce the turns between the players. But it's enough to show you the basic idea behind using the WebRTC data channel to send commands in a multiplayer game.
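For instance, turn taking could be layered on top of the same channel with a little extra state. The sketch below is just one way to do it; the isInitiator flag, the tryFlipTile wrapper, and the "yourTurn" command are assumptions of mine, not part of the code above:

// Sketch: simple turn enforcement over the existing data channel
var myTurn = isInitiator;            // e.g. whoever set up the board goes first (assumption)

function tryFlipTile(tile, val) {
  if (!myTurn) return;               // ignore clicks while it is the other player's turn
  memoryFlipTile(tile, val);         // existing send + flip logic
}

function passTurn() {
  myTurn = false;
  dataChannel.send("yourTurn");      // tell the peer it is now their turn
}

// In receiveDataChannelMessage, a matching branch would hand the turn back:
//   else if (event.data == "yourTurn") { myTurn = true; }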
The nice thing about building a game like this that uses the WebRTC data channel is that it's very scalable. All my website had to do was help the two players get a connection set up, and after that, all the data they need to exchange with each other is done over an encrypted peer-to-peer channel and it won't burden my web server at all.
A completed multiplayer game using the Data Channel
Here's a video showing the game in action:
Demo of a simple two player game using the WebRTC Data Channel
As I hope this example shows you, the hard part of WebRTC data channels is really just in the signaling and configuration, and that’s not too hard. Once you have the data channel setup, sending messages back and forth is very simple. You can send messages that are as simple or complex as you like.
How are you using the Data Channel? What challenges have you run into? Feel free to contact me on Twitter or through my site to share your experiences too!
{“author”: “arin sime“}
Sources:
http://www.html5rocks.com/en/tutorials/webrtc/infrastructure/
http://www.w3.org/TR/webrtc/#simple-peer-to-peer-example
https://www.developphp.com/video/JavaScript/Memory-Game-Programming-Tutorial
The post Gaming with the WebRTC DataChannel – A Walkthrough with Arin Sime appeared first on webrtcHacks.
40 days until the Communications Revolution! Time to book your hotel. Join us at KazooCon for talks from FreeSWITCH, Kamailio, Mast Mobile, Virtual PBX, IBM Cloudant, Telnexus and many more! Register now and save: www.KazooCon.com
One of WebRTC's benefits is that its source code is completely open. Building WebRTC from source provides you the ultimate flexibility to do what you want with the code, but it is also crazy difficult for all but the small few VoIP stack developers who have been dedicated to doing this for years. What benefit does the open source code provide if you can't figure out how to build from it?
As WebRTC matures into mobile, native desktop apps, and now into embedded devices as part of the Internet of Things, working with the lower-level source code is becoming increasingly common.
Frequent webrtcHacks guest poster Dr. Alex Gouaillard has been trying to make this easier. Below he provides a review of building WebRTC from source, exposing many of the gears WebRTC developers take for granted when they leverage a browser or someone else's SDK. Alex also reviews the issues and complexities associated with this process and introduces the open source build process he developed to help ease it.
{“editor”: “chad hart“}
Building WebRTC from source
Most of the audience for WebRTC (and webrtcHacks) is made up of web developers, JavaScript and cloud ninjas who might not be familiar with handling external libraries from source. That process is painful. Let's make it clear: it's painful for everybody, not only web devs.
What are the cases where you need to build from source?
You basically need to build from source anytime you can't leverage a browser, a WebRTC-enabled node.js (for the sake of discussion), an SDK someone has put together for you, or anything else.
These main cases are illustrated below in the context of a comprehensive offering.
Usually, the project owners provide precompiled and tested libraries that you can use yourself (stable) and the most recent version that is compiled but not tested for those who are brave.
Pre-compiled libraries are usable out of the box, but do not allow you to modify anything. Sometimes there are build scripts that help you recompile the libs yourselves. This provides more flexibility in terms of what gets in the lib, and what optimizations/options you set, at the cost of now having to maintain a development environment.
Comparing industry approaches
For example, Cisco with its openH264 library provides both precompiled libraries and build scripts. In their case, using the precompiled library defers H264 royalty issues to them, but that's another subject. While the libwebrtc project includes build scripts, they are complex to use, do not provide a lot of flexibility for modifying the source, and make it difficult to test any modifications.
The great cordova plugin from eFace2Face uses a precompiled libWebRTC (here) (see our post on this too). Pristine.io were among the first to propose build scripts to make it easier (see here; more about that later).
Sarandogou/doubango’s webrtc-everywhere plugin for IE and Safari does NOT use automated build scripts, versioning or a standard headers layout, which causes them a lot of problems and slows their progress.
The pristine.io guys put together a drawing of what the process is, and noted that, conceptually, there is not a big difference between the android and iOS builds in terms of the steps you need to follow. Practically, there is a difference in the tools you use though.
My build process
Here is my build process:
Please also note that I mention testing explicitly, and there is a good reason for that, learned the hard way. I will come back to it in the next section.
You will see I have a “send to dashboard” step. I mean something slightly different than what people usually refer to as a dashboard. Usually, people want to report the results of the tests to a dashboard to show that a given revision is bug free (as much as possible) and that the corresponding binary can be used in production.
If you have performance tests, a dashboard can also help you spot performance regressions. In my case here, I also want to use a common public dashboard as a way to publish failing builds on different systems or with different configurations, and still provide full log access to anyone. It makes solving those problem easier. The one asking the question can point to the dashboard, and interesting parties have an easier time looking at the issue or reproducing it. More problems reported, more problems solved, everyone is happy.
Now that we have reviewed the build from source process a bit, let’s talk about what’s wrong with it.
Building from Source Sucks
Writing an entire WebRTC stack is insanely hard. That's why Google went out and bought GIPS, even though they have a lot of very, very good engineers at their disposal. Most devs and vendors use an existing stack.
For historical reasons most people use google’s contributed WebRTC stack based on the GIPS media engine, and Google’s libjingle for the network part.
Even Mozilla is using the same media engine, even though they originally went for a Cisco SIP soft phone code as the base (see here, under “list of components”, “SIPCC”) to implement the network part of WebRTC. Since then, Mozilla went on and rewrote almost all that part to support more advanced functionality such as multi-party. However, the point is, their network and signaling is different from Google’s while their media engine is almost identical. Furthermore, Mozilla does not attempt to provide a standalone version of their WebRTC implementation, which makes it hard for developers to make use of it right away.
Before Ericsson's OpenWebRTC announcement in October 2014, the Google standalone version was the only viable option out there for most. OpenWebRTC has advantages on some parts, like hardware support for H.264 on iOS for example, but lacks some features and Windows support, which can be a showstopper for some. It is admittedly less mature. It also uses GStreamer, which has its own conventions and its own build system (cerbero), which is also tough to learn.
The webrtc.org stack is not available in a precompiled library with an installer. This forces developers to compile WebRTC themselves, which is “not a picnic”.
One first needs to become accustomed to the Chrome dev tools, which are quite unique, adding a learning step to the process. The code changes quite often (4 commits a day), and the designs are poorly documented at best.
Even if you manage to compile the libs, either by yourself or using resources on the web, it is almost certain that you cannot test it before using it in your app, as most of the bug report, review, build, test and dashboard infrastructure is under the control of Google by default.
Don’t get me wrong, the bug report and review servers allow anybody to set up an account. What is done with your tickets or suggestions however is up to Google. You can end up with quite frustrating answers. If you dig deep enough in the Chrome infrastructure for developers, you will also find how to replicate their entire infrastructure, but the level you need to have to go through this path, and the amount of effort to get it right is prohibitive for most teams. You want to develop your product, not become a Chrome expert.
Finally, the contributing process at Google allows for bugs to get in. You can actually look at the logs and see a few "Revert" commits there.
From the reverted commits (see footnote [1]: 107 since January 2015), one can tell that revisions of WebRTC on HEAD are arbitrarily broken. Here again, this comment might be perceived as discriminatory against Google. It is not. There is nothing wrong there; it always happens in any project, and having only 107 reverts in 6 months while maintaining 4 commits a day is quite an achievement. However, it means that you, as a developer, cannot work with any given commit and expect the library to be stable. You at least have to test it yourself.
My small side project to help
My goals are:
More importantly I would like to lower the barrier of adoption / collaboration / contribution by providing:
We leveraged the CMake / CTest / CDash / CPack suite of tools instead of the usual shell scripts, to automate most of the fetch, configure, build, test, report and package processes.
CMake is cross platform from the ground up, and makes it very easy to deploy such processes. No need to maintain separate or different build scripts for each platform, or build-toolchain.
CTest helps you manage your test suites, and is also a client for CDash, which handles the dashboard part of the process.
Finally, CPack handles packaging your libs with headers and anything else you might want, and supports a lot of different packagers with a unified syntax.
This entire suite of tools has also been designed in such a way that "a gifted master's student could use it and contribute back in a matter of days", while being so flexible and powerful that big companies like Netflix or Canonical (Ubuntu) use it as the core of their engineering process.
Most of the posts at webrtcbydralex.com will take you through the process, step by step, of setting up this solution, in conjunction with a github repository holding all the corresponding source code.
The tool page provides installers for WebRTC for those in a hurry.
{“author”: “Alex Gouaillard“}
[1] git log --since=1.week --pretty=oneline | grep Revert | wc -l
The post Making WebRTC source building not suck (Alex Gouaillard) appeared first on webrtcHacks.
Hello, again. This past week in the FreeSWITCH master branch we had 56 commits. This week the features are: a wonderful perl script, filebug.pl, to help file bugs from the command line, and more work on improving verto communicator including integration of Gravatars, some filter options, and grunt.
Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
It is time for a quick vacation.
In the past, I tried publishing here while on vacation; I'll refrain from it this time.
Please do your best not to acquire anyone until the end of August.
See you all next month!
The post Quick vacation appeared first on BlogGeek.me.
I need your help to gain better visibility.
If you are developing something with WebRTC, there's a good chance you are using existing tools and frameworks already. Be it a signaling or messaging framework, a media engine in the backend, or a third-party mobile library.
As I work on my research around the tools enabling the vibrant ecosystem that is WebRTC, I find myself more than once wondering about a specific tool – how much is it used? What do people think about it? Are they happy with it? What are its limitations? While I know the answers in some cases, in others not so much. This is where you come in.
If you are willing to share your story with a third party tool – one you purchased or an open source one – I’d like to hear about it. Even if it is only the name of the tool or a one liner.
Feel free to comment below or just use my contact form if you wish this to stay private between us.
I really appreciate your help in this.
The post What WebRTC Tool are you using for your Service? appeared first on BlogGeek.me.
We’re officially less than 50 days until KazooCon! Buy your tickets, get a flight, and get out to San Francisco! http://goo.gl/SQqpO4
Hello, again. This past week in the FreeSWITCH master branch we had 7 commits. There were no new features this week, but you should go check out the Verto Communicator that was added last week!
Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.
The following bugs were squashed:
Hello, again. This past week in the FreeSWITCH master branch we had 17 commits. The new features this week are: new properties added to the amqp configuration, fixed the usage for enable_fallback_format_fields, a fix for a routing key issue in amqp, and the awesome new Verto Communicator!
Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
Hello, again. This past week in the FreeSWITCH master branch we had 41 commits. The new features this week are: improved mod_png to allow snapshot of single legged calls, added session UUID to lua error logs and added session UUID to embedded language (lua, javascript, etc) logs when session sanity check fails, improved the xml fetch lookup for channels on nightmare transfer, and added uuid_redirect API command to mod_commands.
Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.
New features that were added:
The following bugs were squashed: