There are a few side stories around Apple lately that relate to WebRTC. I wanted to share them here.
Apple TV ships with no HTML5 support
It seems that in the Apple TV reboot, there are going to be apps. But not ones that can make use of HTML5. Just native apps. John Gruber points to a post on that topic titled Everything but the Web, concluding that Web views won’t find their way to the TV screen if Apple has anything to do with it.
He doesn’t state the reason though. If you ask me, this has nothing to do with the Apple/Flash war of the past. It also has nothing to do with design or aesthetics. It has everything to do with ecosystem control. For Apple, cross-platform development is an aberration. Why on earth enable developers to write their code once and then run it elsewhere? There’s nothing outside the closed garden of Apple, so why bother?
Killing HTML5 on the Mac is impossible. Killing it on the iPhone or iPad is also rather hard – too many apps already use it, and there’s that pesky browser people use. But the TV? That’s greenfield! So why not just forget about HTML5 altogether?
If you ask me, the good people at Apple see only one reason for HTML5 to exist – for people to be able to go to the apple.com website. Other than that? Useless.
The future of WebKit
There has been a lot of back and forth lately about the future of the web. Should we run full steam ahead with it, or sit and wait? Some people prefer having it change and progress less. I can’t see why – when every year thousands of new APIs are rained down on us by Apple at WWDC and Google at I/O – why can’t the Web improve? Why should it stay static?
WebKit, on the other hand, is a rather dead rendering engine at this point in time. It might be fast and optimized, but it is getting a bit old when it comes to adopting and supporting standards. WebRTC isn’t there, and multiple other technologies aren’t either. It seems to be keeping up with HTML5 and CSS notation, but the programmatic JavaScript parts? Falling behind the other browsers.
I’ve written before on how Microsoft Edge is getting way better. Mozilla is getting their act together and modernizing the older parts of their Firefox browser (extensions for example) and Google are speeding up and optimizing Chrome now that it has become huge. But Safari? Microsoft Edge will keep Google and Mozilla on edge and get them to improve. I don’t think the other browser vendors care too much about Safari getting too good anytime soon.
I wonder how much care and affection the Safari/WebKit team gets inside Apple these days. Probably not that much.
This goes somewhat counter to the positive assertions Alex made here about Apple and WebRTC.
H.265 / VP9
Apple and H.265 have taken center stage in my video codec sessions lately. You can see the video codec wars session I gave at TokBox last week.
My usual spiel?
But then I get directed to this interesting post in 9to5Mac:
Another interesting detail: 4K videos are being recorded in H.264, and Apple is no longer making reference to H.265 support for any purpose, FaceTime or otherwise
Hmm…
Is it only me, or did Apple just drop H.265 support and shift camps? Or at the very least, sit on the fence. It might have something to do with the HEVC Advance stupidity that brought the gang to open up the Alliance for Open Media. They might be edging away from royalty-bearing codecs and moving to the free alternative. Or they might be using it as leverage over HEVC Advance to make its licensing terms more palatable.
How do you do 4K video resolutions with a camera if not by using H.265? Use H.264? Ridiculous. But that’s exactly what seems to be happening now with the new iPhone 6s.
Should they be moving to VP9 instead? Probably, but it will be hard on Apple. They rely heavily on hardware acceleration and they don’t seem to have it on their chipsets at the moment.
This is a loss to the H.26x camp at the moment.
–
Where is this all headed?
I am not sure, but here are a couple of things I’d plan if I had that task given to me.
Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.
The post Thoughts on Apple, WebRTC, HTML5, H.265 and VP9 appeared first on BlogGeek.me.
It turns out people like their smartphone apps, so native mobile is pretty important. For WebRTC that usually leads to venturing outside of JavaScript into the world of C++/Swift for iOS and Java for Android. You can try hybrid applications (see our post on this), but many modern web apps use JavaScript frameworks like AngularJS, Backbone.js, Ember.js, or others, and those don’t always mesh well with these hybrid app environments.
Can you have it all? Facebook is trying with React which includes the ReactJS framework and React Native for iOS and now Android too. There has been a lot of positive fanfare with this new framework, but will it help WebRTC developers? To find out I asked VoxImplant’s Alexey Aylarov to give us a walkthrough of using React Native for a native iOS app with WebRTC.
{“editor”: “chad hart“}
If you haven’t heard about ReactJS or React Native, then I can recommend checking them out. They already have a big influence on web development and have started to influence mobile app development with the React Native release for iOS, and an Android version just released. It sounds familiar, doesn’t it? We’ve heard the same about WebRTC, since it changes the way web and mobile developers implement real-time communication in their apps. So what is React Native after all?
“React Native enables you to build world-class application experiences on native platforms using a consistent developer experience based on JavaScript and React. The focus of React Native is on developer efficiency across all the platforms you care about — learn once, write anywhere. Facebook uses React Native in multiple production apps and will continue investing in React Native.”
https://facebook.github.io/react-native/
I can simplify it to “one of the best ways for web/JavaScript developers to build native mobile apps, using familiar tools like JavaScript, NodeJS, etc.”. If you are connected to the WebRTC world (like me), the first idea that comes to your mind when you play with React Native is “adding WebRTC there should be a big thing, how can I make it happen?” – and then from the React Native documentation you’ll find out that there is a way to create your own Native Modules:
Sometimes an app needs access to platform API, and React Native doesn’t have a corresponding module yet. Maybe you want to reuse some existing Objective-C, Swift or C++ code without having to reimplement it in JavaScript, or write some high performance, multi-threaded code such as for image processing, a database, or any number of advanced extensions.
That’s exactly what we needed! Our WebRTC module in this case is a low-level library that provides high-level Javascript API for React Native developers. Another good thing about React Native is that it’s an open source framework and you can find a lot of required info on GitHub. It’s very useful, since React Native is still very young and it’s not easy to find the details about native module development. You can always reach out to folks using Twitter (yes, it works! Look for #reactnative or https://twitter.com/Vjeux) or join their IRC channel to ask your questions, but checking examples from GitHub is a good option.
React Native’s module architecture
Native modules can have C/C++, Objective-C, and JavaScript code. This means you can put the native WebRTC libraries, signaling and some other libs written in C/C++ as the low-level part of your module, implement video element rendering in Objective-C, and offer a JavaScript/JSX API to React Native developers.
Technically low-level and high-level code is divided in the following way:
JavaScript sits at the top of the stack, while in Objective-C you can interact with the OS and C/C++ libs and even create iOS widgets. The ready-to-use native module(s) can be distributed in a number of different ways, the easiest one being via an npm package.
WebRTC module API
We’ve been implementing a React Native module for our own platform and already knew which of our API functions we would expose to JavaScript. Creating a WebRTC module that is independent of signaling, so that it can be used by any WebRTC developer, is a much more complicated problem.
We can divide the process into a few parts:
Integration with WebRTC
Since WebRTC does not dictate how developers discover user names and network connection information, this signaling can be done in multiple ways. Google’s WebRTC implementation is known as libwebrtc. libwebrtc has a built-in library called libjingle that provides “signaling” functionality.
There are three ways libwebrtc can be used to establish communication:
1. The simplest approach leverages libjingle. In this case signaling is implemented in libjingle via the XMPP protocol.
2. A more complicated approach, with signaling on the application side. In this case you need to implement the SDP and ICE candidate exchange yourself and pass the data to WebRTC. One popular method is to use a SIP library for signaling (see the browser-side sketch after this list).
3. For the hardcore, you can avoid signaling altogether. This means the application has to take care of all the RTP session parameters: RTP/RTCP ports, audio/video codecs, codec params, etc. An example of this type of integration can be found in the WebRTC sources, in the WebRTCDemo app for Objective-C (src/talk/app/webrtc).
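To make the second approach more concrete, here is a minimal sketch of what the application-side SDP and ICE exchange looks like in plain JavaScript in a browser (or any runtime exposing the standard WebRTC API). This is not code from our module; sendToPeer and onRemoteMessage are placeholders for whatever signaling transport you pick.

// Rough sketch of approach 2 in browser JavaScript.
// sendToPeer()/onRemoteMessage() stand in for your own signaling channel (SIP, WebSocket, MQTT...).
var pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });

// Trickle local ICE candidates to the remote side as they are gathered
pc.onicecandidate = function (event) {
  if (event.candidate) {
    sendToPeer({ type: 'candidate', candidate: event.candidate });
  }
};

// Grab local media, create an offer and ship the SDP over the signaling channel
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(function (stream) {
    pc.addStream(stream);
    return pc.createOffer();
  })
  .then(function (offer) { return pc.setLocalDescription(offer); })
  .then(function () { sendToPeer({ type: 'offer', sdp: pc.localDescription }); });

// Apply whatever the remote side sends back
onRemoteMessage(function (msg) {
  if (msg.type === 'answer') {
    pc.setRemoteDescription(new RTCSessionDescription(msg.sdp));
  } else if (msg.type === 'candidate') {
    pc.addIceCandidate(new RTCIceCandidate(msg.candidate));
  }
});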
Adding Signaling
We used the 2nd approach in our implementation. Here are some code examples for making/receiving calls:
First of all we need to create a React Native module (https://facebook.github.io/react-native/docs/native-modules-ios.html), where we describe the API and implement audio/video calling using WebRTC (Obj-C, iOS):
@interface YourVoipModule () {
}
@end

@implementation YourVoipModule

RCT_EXPORT_MODULE();

// Exposed to JavaScript as createCall(to, video, callback)
RCT_EXPORT_METHOD(createCall: (NSString *) to withVideo: (BOOL) video ResponseCallback: (RCTResponseSenderBlock)callback)
{
  NSString *callId = [self createVoipCall: to withVideo: video];
  callback(@[callId]);
}

If we want to support video calling, we will need an additional component to show the local camera (Preview) or the remote video stream (RemoteView):
@interface YourRendererView : RCTView
@end

Initialization and deinitialization can be implemented in the following methods:
- (void)removeFromSuperview {
  [videoTrack removeRenderer:self];
  [super removeFromSuperview];
}

- (void)didMoveToSuperview {
  [super didMoveToSuperview];
  [videoTrack addRenderer:self];
}

You can find the code examples on our GitHub page – just swap the references to our signaling with your own. We found the examples very useful while developing the module, so hopefully they will help you understand the whole idea much faster.
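For completeness, here is a sketch (assumed names from the Obj-C snippet above, not code from our repo) of how the exported module would then be called from the JavaScript side of a React Native app. RCT_EXPORT_MODULE() with no argument registers the class under its own name, so the method shows up on NativeModules:

// JavaScript side of the sketch above; YourVoipModule/createCall are the example names
// used in the Obj-C snippet, so adapt them to your own module.
var { NativeModules } = require('react-native');

NativeModules.YourVoipModule.createCall('alice@example.com', true, function (callId) {
  // The RCTResponseSenderBlock callback hands the generated call id back to JavaScript
  console.log('call started with id ' + callId);
});

The renderer view can similarly be exposed to JSX through React Native’s native UI component mechanism (a view manager wrapped with requireNativeComponent), but that is beyond this short sketch.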
Demo
The end result can look as follows:
Closing Thoughts
When the WebRTC community started working on the standard, one of the main ideas was to make real-time communications simpler for web developers and provide them with a convenient JavaScript API. React Native has a similar goal: it lets web developers build native apps using JavaScript. In our opinion, bringing WebRTC to the set of available React Native APIs makes a lot of sense – web app developers will be able to build their RTC apps for mobile platforms. The team behind React Native has just released it for Android at the Scale conference, so we will update this article or write a new one about building a module compatible with Android as soon as we know all the details.
{“author”: “Alexey Aylarov”}
Want to keep up on our latest posts? Please click here to subscribe to our mailing list if you have not already. We only email post updates. You can also follow us on twitter at @webrtcHacks for blog updates and news of technical WebRTC topics or our individual feeds @chadwallacehart, @victorpascual and @tsahil.
The post Reacting to React Native for native WebRTC apps (Alexey Aylarov) appeared first on webrtcHacks.
Hello, again. This past week in the FreeSWITCH master branch we had 35 commits. This week we had one feature: the handling of a=sendonly, a=sendrecv, a=recvonly to change who is sending video during a call.
Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
And, this past week in the FreeSWITCH 1.4 branch we had 9 new commits merged in from master. And the FreeSWITCH 1.4.21 release is here! Go check it out!
The following bugs were fixed:
We’re slightly more than 20 days away from KazooCon 2015, and we want you there to see the power and scalability of our new VoIP platform firsthand. If you are still on the fence about attending, we’re revealing the top ten reasons why you need to attend the Unified Communications Revolution. Early-bird pricing ends on Sunday, September 20th, so register now and save money. What are you waiting for? Go to www.KazooCon.com and register.
Number 10 - Demos, Demos, Demos
We heard your feedback from KazooCon 2014, and the overwhelming favorite session was the breakout demo. So we are doubling down this year and will be hosting five hands-on demos (hint: bring your laptop). You will get a first-hand look at our new Kazoo APIs, and our Engineering team will take you through them step-by-step. Our interactive demos will include:
Number 9 - Amazing Guest Speakers
Our staff is poised to give outstanding presentations, and we have been prepping demos for weeks. But we’re not the only ones who will be on the podium. Telco gurus from around the world are going to wow you with talks presenting their own success stories, tools for the telecom industry, and more. We have C-level speakers from Mast Mobile, Orange, Kamailio, FreeSWITCH, VirtualPBX, IBM Cloudant, Telnexus and SIPLABS. You don’t want to miss it!
Number 8 - Let’s make it a week in San Francisco
Join us for our official Kazoo Training directly following KazooCon from October 7-9. This three-day training will teach you about Kazoo and all of the third-party components that power the platform. You will deep-dive into Kazoo APIs, and learn how to set up a cluster, GUI, WhApps, FreeSWITCH, BigCouch and much more. Purchase both an Early-Bird KazooCon + Kazoo Training ticket and save money! Register Here!
Number 7 - The After-Party
No conference is complete without an after-party. Join us this year for some spirited libations and telco networking. If you attended last year, you bore witness to our alcohol-fueled Scrabble, Pictionary, and arcade games. Last year our Lead Designer Josh was undefeated at Mario Kart, so we’re all practicing to take him down this year. We’ve got plenty of fun and collaborative events planned for this year’s party, but we’re not spoiling the surprise.
Stay tuned as we count down to Number 1. But Register Now at www.KazooCon.com
[Interview info box – WebRTC on the big screen: Video Conf · Small · Voice, Video]
[If you are new around here, then you should know I’ve been writing about WebRTC lately. You can skim through the WebRTC post series or just read what WebRTC is all about.]
I am not a fan of video calling in the living room. Not because I have real issues with it, but because I think it is a steep mountain to climb – I am more of the low-hanging-fruit kind of a guy.
That’s not the case with Tellybean, a company focused on TV video calling and recently doing that using WebRTC.
Cami Hongell, CEO of Tellybean, found the time to chat with me and answer a few questions about what they are doing and the challenges they are facing.
What is Tellybean all about?
Tellybean is all about easy video calling on the TV. My two co-founders are Aussies living in Finland and they had a problem. A software update or a forgotten password too often got in the way of their weekly Skype call with grandma Down Under. Once audio and video were finally working both ways, there were four people fighting for a spot in front of the 13” screen.
We realised that modern life tends to separate families and our problem was far from unique. That’s when we decided to build an easy video calling service for the TV. It had to be so easy that even grandma could use it from the comfort of her couch. At the same time as we worked hard to eliminate complexity, we also needed to keep it affordable and build a channel which would provide users an easy way of getting the service.
Today we have an app which allows easy video calls on Android TV devices of our TV and set-top box partners. Currently you can make calls between selected Tellybean enabled Android TV devices and our web app on www.tellybean.com. To make it as easy as possible to call somebody from your TV, we will release apps for Android and iOS mobiles and tablets in the future.
You started by building your TV solution using Skype. What made you switch to WebRTC?
When we founded Tellybean four years ago the tech landscape looked very different from today. WebRTC wasn’t there. Android TV and Tizen weren’t there – the TV operating systems were all over the place. So initially we set out to build an easy service which would run on our own dedicated Linux box. Our intention was to allow our service to connect with other existing services by putting our own UI on top of headless clients developed using the SDKs provided by some of the existing services. We started with SkypeKit and had a first version of it ready a few years ago. We were going to continue by adding Gtalk.
However, Skype decided to wind down the support of 3rd party developers and Google stopped Gtalk development. This happened almost at the same time as WebRTC was starting to gain traction. Switching to WebRTC turned out to be an easy decision once we looked into it and moved over to working on Android and 3rd party hardware only.
What excites you about working in WebRTC?
Having tried different VoIP platforms in the past, we have learned to appreciate the fact that working with WebRTC has allowed us to focus our resources on the more important UX and UI development. Since WebRTC offers a plugin-free, no-download alternative for video calling in modern browsers, combining it with our TV and upcoming mobile device approach lets us reach a huge audience, with almost all entry barriers removed.
We are excited about having a great service which is getting a lot of interest from everybody in the Android TV value chain from the chip manufacturers to the TV and STB manufacturers as well as the operators. We’ve announced co-operation with TP Vision / Philips TVs and Nvidia and much more in the pipeline. The great support and resources available in the WebRTC community, coupled with the support from the hardware manufacturers means that WebRTC is truly becoming a compelling open source alternative for service developers, such as ourselves.
Can you tell me a bit about the challenges of getting WebRTC to operate properly in an embedded environment fit for the TV?
An overall problem has been that we are moving slightly ahead of the curve.
Firstly, we need access to a regular USB camera. Unfortunately the Android TV platform and most devices lack UVC camera support. So we have been pushing everybody, Google, the device manufacturers and the chip suppliers, to add camera support. The powerful Nvidia Shield Console has camera support and we already have a few of the other major players implementing it for us.
Secondly, there are still devices that are underpowered and/or lack support for VP8 HW encoding, meaning that it is hard for us to provide a satisfactory call quality. Luckily again, most of the devices launched this year can handle video calling and our app.
The third problem relates to fine tuning the audio for our use case where the distance between the USB camera’s mic and the TV’s speakers is not a constant. Third time lucky: WebRTC provides us pretty good echo cancellation and other tools to optimize this and produce good audio quality.
What signaling have you decided to integrate on top of WebRTC?
Wanting to support browsers for user convenience and to get going quickly, we started out building our own solution with Socket.IO, but we are transitioning to MQTT for two reasons. Firstly, we came to the conclusion that MQTT gives us much more efficient scalability. Secondly, MQTT is much easier on the battery for mobile devices.
Current implementations of MQTT also allow us to use websockets for persistent connections in browsers, so it suits our purposes well. Additionally, some transaction-like functionality is done using REST. We are writing our own custom protocol as we go, which allows us to grow the service organically instead of trying to match a specification set forth by another party that doesn’t match our requirements or introduces undue complexity in architecture or implementation.
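To make that concrete, here is a minimal sketch – not Tellybean’s actual code – of what MQTT-over-WebSockets signaling can look like with the open-source MQTT.js client. The broker URL, topic layout and message format below are made up for illustration.

// Illustrative MQTT signaling sketch (MQTT.js over secure WebSockets); all names are invented.
var mqtt = require('mqtt');

var myUserId = 'user-123';
var client = mqtt.connect('wss://broker.example.com/mqtt');

client.on('connect', function () {
  // Each user listens on a personal topic for incoming call messages
  client.subscribe('calls/' + myUserId);
});

// Deliver an SDP offer to the callee's topic
function sendOffer(calleeId, sdp) {
  client.publish('calls/' + calleeId, JSON.stringify({ type: 'offer', from: myUserId, sdp: sdp }));
}

client.on('message', function (topic, payload) {
  var msg = JSON.parse(payload.toString());
  if (msg.type === 'offer') {
    // hand msg.sdp to the local RTCPeerConnection and publish an answer back to msg.from
  }
});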
Backend. What technologies and architecture are you using there?
We have server instances on Amazon Web Services, running our MQTT brokers and REST API, as well as the TURN/STUN service required for WebRTC. We use Node.JS on the servers and MongoDB from a cloud service which allows us easy distributed access to shared data.
Where do you see WebRTC going in 2-5 years?
The recent inclusion of H.264 will lead to broader adoption of WebRTC in online services, and also in dedicated hardware devices since H.264 decoders are readily available. Microsoft is also starting to adopt WebRTC in their new Edge browser, so it seems like there’s a bright future for rich communication using WebRTC once all the players have started moving. Like everybody else, we would naturally like full WebRTC support from Microsoft and Apple sooner rather than later, and it will be hard for them to ignore it with all the support it is already receiving. In this timeframe, at least high-end mobile devices should have powerful enough hardware to support WebRTC in the native browsers without issues. With this kind of background infrastructure a lot of online services will be starting to use WebRTC in some form, instead of more isolated projects. With everyone moving towards a new infrastructure, hopefully any interoperability issues between different endpoints have been sorted out, which allows service developers to focus on their core ideas.
If you had one piece of advice for those thinking of adopting WebRTC, what would it be?
WebRTC is still an emerging technology that will surely have an impact on developers and businesses going forward, but it’s not completely mature yet. We’ve seen a lot of good development over time, so for a specific use case it might be a plug-and-play experience, or in a more advanced case you may need a lot of development work.
Given the opportunity, what would you change in WebRTC?
WebRTC has been improving a lot during the time that we’ve worked with it, so we believe that current issues will be improved on and disappear over time. The big issue right now on the browser side is obviously adoption, with Microsoft and especially Apple not up to speed yet. We would also like to see good support for all WebRTC codecs from involved parties, to avoid transcoding and to be able to use existing hardware components for a great user experience.
What’s next for Tellybean?
We’ve recently launched our Android TV app and are seeing the first users on the Nvidia Shield console, the first compatible device. We are now learning a lot and have a chance to fine tune our app. From a business point of view we currently have full focus on building a partner network which will provide us the platform for 100+ million TV installations in the coming years. Next we are starting development of mobile apps for Android and iOS. Later we will need to decide if moving to other TV operating systems or e.g. enabling other video calling services to connect to Tellybean TVs will be the next most important step towards achieving our aim of becoming THE video calling solution for the TV.
–
The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.
The post Tellybean and WebRTC: An Interview With Cami Hongell appeared first on BlogGeek.me.
Hello, again. This past week in the FreeSWITCH master branch we had 75 commits! The new features for this week include: improvements to the Verto Communicator, allowing FreeSWITCH to initiate late offer calls, the addition of CUSTOM ESL events menu::enter and menu::exit when a call enters and exits an IVR menu, and more work toward the new module, mod_hiredis!
Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
And, this past week in the FreeSWITCH 1.4 branch we had 9 new commits merged in from master. And the FreeSWITCH 1.4.21 release is here! Go check it out!
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were fixed:
The FreeSWITCH 1.6.0 release is here! This is the release of the Video/MCU branch!
Release files are located here:
And we’re dropping support in packaging for anything older than Debian 8.0 and anything older than CentOS 7.0 due to a number of dependency issues on older platforms.
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
There are indications out there that soon we won’t be needing plugins to support WebRTC in some of the browsers out there.
[Alexandre Gouaillard decided to drop by here at BlogGeek.me offering his analysis on recent news coming out of Apple and Microsoft – news that affects how these two players will end up supporting WebRTC real soon.]
Apple Safari news
It is still unknown when this (getUserMedia only) will find its way into Safari, and more specifically into Safari on iOS. Hopefully before the end of the year (I have high, but probably unrealistic, hopes for a Sept. 9 announcement).
We can also only hope that the WebView framework that apps are forced to use to display web pages according to the Apple App Store rules will be updated accordingly, which would open WebRTC to iOS apps directly, catching up a little bit with the WebView on Android.
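Until that happens, web apps will keep feature-detecting. A minimal sketch (standard API only, nothing Safari-specific assumed) of how a page decides whether getUserMedia is available:

// Simple capability check: only offer camera/mic features where getUserMedia exists.
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
  navigator.mediaDevices.getUserMedia({ audio: true, video: true })
    .then(function (stream) {
      // permission granted -- attach the stream to a <video> element or a peer connection
    })
    .catch(function (err) {
      console.error('getUserMedia failed:', err);
    });
} else {
  // No getUserMedia in this browser yet (Safari today) -- fall back to a plugin or hide the feature
  console.log('getUserMedia is not supported here');
}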
How much the webrtcinwebkit project helped make this happen is also open to debate, but I want to believe it did (yes, I am uber-biased). It is also possible that the Device API specifications reaching a stable state (Last Call status at the W3C) motivated Apple to go for it.
In any case, what is important is that it is now undeniable that Apple is bringing WebRTC to its browsers and devices, and that answers a question left open for almost 4 years now!
Microsoft Edge news
Here again, a lot of good news. For a long time it was unclear whether Microsoft would do anything at all, and even now nobody is clear on exactly which API they are going to implement and how compatible it will be with the WebRTC specs. The convergence of the WebRTC Working Group and the ORTC Community Group within the W3C has raised hopes of better interoperability, but it was not substantiated – until today, that is. There is no other web API that would use VP9 besides WebRTC. OK, it could be to provide a better YouTube experience, but Opus is WebRTC only.
So here again, while the exact date is unknown it is undeniable that Microsoft Edge is moving in the right direction. Moreover, it’s been moving faster than most expected lately.
All good news for the ecosystem, and surely more news to come at the Kranky Geek WebRTC Live event on September 11th, where three browser vendors will be present to make their announcements.
Toward a plugin free experience (finally)
During my original announcement of a plugin for IE and Safari I stated that the “goal [was] to remove the “what about IE and safari” question from the Devs and Investors’ table long enough for a native implementation to land.”
I also stated that “We hate plugins, with a passion. Some browser vendors put us in the situation where we do not have the luxury to wait for them to act on a critical and loudly expressed need from their user base. We sincerely hope that this is only a temporary solution, and we don’t want people to get the impression that plugins are the magical way to bypass what browser vendors do (or don’t do). Native implementation is always best.”
I truly believe that the day we can get rid of plugins for WebRTC is now very, very close. If I’m lucky, Santa Claus will bring it to me for Xmas (after all, I’ve been a good boy all year). There will still be a need for some help, but it will be in the form of a JS library and not a heavy-duty plugin. Of course, you still have to support some older versions of Windows here and there, especially for the enterprise market, but Microsoft – and I am writing this from Redmond, next to the Microsoft campus ;-) – is putting a lot of resources behind moving people to auto-updating versions of their software, be it Windows itself or the browsers. Nowadays, the OS does not bring much value by itself, but brings in a lot of maintenance burden. It is in everybody’s interest to have short update cycles, and MS knows that.
For those who need to support older versions of IE for some time (Apple users will never be seen with an old Apple device :-D), there are today several options, all converging toward the same feature set, and a zero price tag. You can see more about this here.
Tsahi and I have this in common: we hate plugins, especially for video communication. I think we are seeing the end of this problem.
The post WebRTC Plugin Free World is Almost Here: Apple and Microsoft joining the crowd appeared first on BlogGeek.me.
We’re now 30 days away from KazooCon! Want to tell your friends about KazooCon and save money at the same time? Refer a friend, you’ll receive 25% off your ticket for each person you refer. Here’s the referral link: http://www.eventbrite.com/r/kazoocon
A quick note.
Just wanted to list out the events and venues where you’ll be able to find me and meet with me in the next month or two.
Me in San Francisco
I’ll be in San Francisco 9-11 September, mainly for the Kranky Geek event. If you want to meet and have a chat with me – contact me and let’s see if we can schedule a time together.
WebRTC Codec Wars: Rebooted
When? Wednesday, September 9, 18:00
Where? TokBox’ office – 501 2nd Street, San Francisco
TokBox were kind enough to invite me to their upcoming TechTok meetup event. Codecs are becoming a hot topic now – to the point that I had to rearrange my writing schedule and find the time to write about the new Alliance for Open Media. It also meant I had to change my slides for this event.
Would be great to see you there, and if you can’t make it, I am assuming the video of the session will be available on YouTube later on.
Attendance is free, but you need to register.
Kranky Geek WebRTC Show
When? Friday, September 11, 12:00
Where? Google – 6th floor 345 Spear St, San Francisco
This is our second Kranky Geek event in San Francisco, and we’re trying to make it better than the successful event we had a year ago.
Check out our roster of speakers – while registration has closed, we do have a waiting list, so if you still want to join – register for the waiting list and you might just make it to our event.
Development Approaches of WebRTC Based Services
When? September 24, 14:00 EDT
Where? Online
It is becoming a yearly thing for me, having a webinar on the BrightTALK platform.
This time, I wanted to focus on the various development approaches companies take when building WebRTC based services. This has recently changed with one or two new techniques that I have seen.
The event is free and takes place online, so be sure to register and join.
Video+Conference 2015
When? Thursday, October 15, 11:00
Where? Congress Centre Hotel “Alfa”, Moscow
I have never been to Russia before, and I won’t be this time either – I will be joining this one remotely. TrueConf have asked me to give a presentation about WebRTC.
The topic selected for this event is WebRTC Extremes and how different vendors adopt and use WebRTC to fit their business needs.
If you happen to be in Moscow at that time, it would be great to virtually meet you on video.
Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.
The post Upcoming Sep-Oct Events appeared first on BlogGeek.me.
The W3C WebRTC working group chairs [Harald Alvestrand (Google), Stefan Håkansson (Ericsson), Erik Lagerway (Hookflash)] recently made a decision to add a new editor to the working group, as Peter Saint-Andre (&yet) has resigned as editor.
Bernard Aboba (Microsoft) has now been appointed as editor.
Bernard’s attention to detail and advocacy for transparency, fairness and community has been refreshing. It has been my pleasure (as chair of the W3C ORTC CG) to work with Bernard, who is also an author in the W3C ORTC CG alongside Justin Uberti and Robin Raymond (editor). I look forward to working more with him in the WG.
Congrats Bernard!
/Erik