News from Industry

Why an SDK is Critical to your API Offering

bloggeek - Tue, 06/30/2015 - 12:00

While you need to give direct access to your APIs, an SDK is a critical piece of your offering.

There was an article on ProgrammableWeb about Sending.io NOT offering an SDK for their service. I think that in most cases this approach is wrong.

Sending.io decided to offer only an API layer for its customers. You can access their REST APIs, but how you do it is your problem – even when what they give is designed and built for mobile devices.

API and SDK

I’ll start with a quick explanation of the two – at least within the scope of this post. Some will definitely object to my definitions here, but the idea is just to make the distinction I need – not to pontificate on the meaning of the two terms.

  • API – a set of operations you can use to access a backend service of sorts. The assumption is that this is a server-side API: there is a service running on some remote server (probably on AWS or another cloud), and that service offers access via APIs. You invoke the API by making a remote call from your machine or device to the cloud running the service. Usually these APIs will be REST based, though not always
  • SDK – a piece of code that gets embedded into the customer’s service. The customer is a developer who decided to use your API, so he downloads your SDK and puts it in his own code. The SDK itself calls the API when necessary to get things done. The result: the customer calls the SDK locally, the SDK calls the API remotely, and your service gets used
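In code, the distinction might look like this minimal JavaScript sketch (all names and the endpoint are hypothetical, not any particular vendor's actual API):

```javascript
// Minimal sketch of the API/SDK distinction. All names and the endpoint
// (api.example.com) are hypothetical.

// The "API": a remote REST endpoint the customer could call by hand.
async function rawApiCall(apiKey, roomId) {
  const res = await fetch(`https://api.example.com/v1/rooms/${roomId}`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  return res.json();
}

// The "SDK": local code embedded in the customer's app that hides the
// HTTP details. The customer calls the SDK locally; the SDK calls the
// API remotely.
class ExampleSdk {
  constructor(apiKey) {
    this.apiKey = apiKey;
  }
  joinRoom(roomId) {
    return rawApiCall(this.apiKey, roomId);
  }
}
```

The customer only ever sees `joinRoom()`; the transport, endpoint and authentication details stay inside the SDK.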
Why not an SDK?

Back to Sending.io and their reasons – from this article:

  • SDKs introduce performance issues
  • Reduces control of the customer using it
  • Crashing SDKs
  • Privacy issues

While this may work in the gaming industry, I think it is not workable in many other industries. Here are my thoughts on this one:

It all boils down to your execution

There are two ways to treat an SDK – as part of your offering or as an afterthought.

If you treat it as an afterthought, then performance issues, crashes and privacy issues will crop up more often than not.

With most SDKs today built as frontends to a backend REST API, it makes perfect sense that some of them just aren’t written well: backend developers are good at scaling a service to run in the cloud. For them, thinking about the memory and performance of a single session the way a native Android developer does is foreign.

If you really want to offer an SDK, have a pro build it for you.

The customer’s control

Assuming what you have on offer is a closed binary SDK that the customer ends up using, then control may be an issue.

It doesn’t have to be this way.

There are 3 options you can take here, each with its own control points for customers:

  1. Offer your SDK as a closed binary, but also give access to the backend API
    • Those who wish to use the SDK to shorten their time to market can do that
    • Those that wish to have more control can use the API directly
  2. Offer your SDK in source code format
    • This gives more control to your customer, who can now debug the code
    • The customer may modify the code, and in such cases, you should make it clear your support will be of the backend API only
  3. Offer a sample SDK client only
    • Provide a reference written in the native language of choice
    • Don’t offer support for it, but write it in a way that makes it easy to understand and modify
Why is an SDK needed?

There are several reasons that make an SDK so powerful:

  1. While REST APIs are simple enough, connecting to them can be quite a hassle
    • Which native library should be used? Have the APIs been tested with these libraries? Having this one decided, implemented and tested makes life easier for customers
    • What authentication mechanism is provided? How do you implement it on your own in the native language? This can eat up many hours, so having that done for customers reduces the friction and the chance of your customer moving to a competitor
    • There’s a flow issue – you need to call API A, then API B, then check something locally before running API C. Developers never read documentation. Give them a sample to work from in the SDK, and half your problems are solved
  2. It might not be REST…
    • There’s a shift towards WebSocket communications in some places. Documenting the spec and having customers follow it isn’t easy
    • Give an SDK instead, and the actual protocol you use for the WebSocket becomes irrelevant to the customer – AND allow you to easily update it in the future
  3. You might want to run things in the client side
    • WebRTC, for example, runs on the client side
    • You can’t really offer a backend API and just forget about the client side – there’s a lot of code that ends up there
    • That code has value – especially on mobile
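The flow and authentication points above can be sketched in a few lines of JavaScript (names are hypothetical; the transport function is injected so the ordering logic stays visible):

```javascript
// Hypothetical sketch of the flow problem an SDK hides: API A must be
// called before API B, and a local check must pass before API C may run.
class FlowSdk {
  constructor(transport) {
    this.transport = transport; // e.g. a function doing authenticated HTTP
  }
  async connect() {
    const session = await this.transport("A");       // call API A first
    const caps = await this.transport("B", session);  // then API B
    if (!caps.supported) {                            // local check...
      throw new Error("client not supported");
    }
    return this.transport("C", session);              // ...before API C
  }
}
```

A customer just calls `connect()`; the required ordering (and the authentication buried inside the transport) lives in the SDK, so nobody has to read the flow documentation to get it right.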

Plan on offering a backend API for your customers?

You shouldn’t just ignore an SDK – especially not if you plan on having developers integrate with your APIs inside mobile apps.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Why an SDK is Critical to your API Offering appeared first on BlogGeek.me.

FreeSWITCH Week in Review (Master Branch) June 20th-26th

FreeSWITCH - Tue, 06/30/2015 - 07:23

Hello, again. This past week in the FreeSWITCH master branch we had 37 commits. There was one feature this week: improvements to play_and_detect_speech to set the current_application_response channel variable.

Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.

New features that were added:

  • FS-7720 Improve play_and_detect_speech to set current_application_response channel variable as follows: “USAGE ERROR”: bad application arguments, “GRAMMAR ERROR”: speech recognizer failed to load grammar, “ASR INIT ERROR”: speech recognizer failed to allocate a session, and “ERROR”: any other errors

Improvements in build system, cross platform support, and packaging:

  • FS-7707 Fix build error on CentOS7
  • FS-7655 Fixed a build error when we have PNG but not YUV
  • FS-7723 Change RPMs to use -ncwait instead of -nc. This will cause the initscript to pause and wait for FS to be ready before continuing.
  • FS-7648 Added test cases for FS-7724 and FS-7687
  • FS-7726 Additional configurations for a QA test case
  • FS-7715 Updates to configure and spec files for next development branch and added images to spec file and fixed build/freeswitch.init.redhat since redhat likes to override settings in the script with TAGs in comments

The following bugs were squashed:

  • FS-7467 [mod_callcenter] Fixing stuck channels using uuid-standby agents
  • FS-7699 [mod_verto] Fixed for browser compatibility
  • FS-7722 Fixed an issue with record_session including params when creating path
  • FS-7489 [mod_unimrcp] Fixed a TTS Audio Queue Overflow
  • FS-7724 [mod_conference] Fixed a segfault when missing fonts when trying to render banner
  • FS-7519 [mod_av] Fixed a regression in the visual appearance of decode app output
  • FS-7703 Fixed a bug caused by answer_delay being set in the default configurations
  • FS-7679 [mod_verto] Fixed a bug causing one way audio on Chrome when video is enabled and when using a sip without video
  • FS-7729 [mod_verto] Fixed the formatting for IPv6 addresses

 

This past week in the FreeSWITCH 1.4 branch we had 30 commits merged in from master.

Security issues:

  • FS-7708 Fixed docs on enabling cert CN/SAN validation

New features that were added:

  • FS-7561 [mod_sofia] Add Perfect Forward Secrecy (DHE PFS)
  • FS-7564 [mod_rayo] Added new algorithms for offering calls to clients
  • FS-7623 [mod_amqp] Allow for custom exchange name and type for producers and fixed param name ordering bug caused by exposing these params
  • FS-7720 Improve play_and_detect_speech to set current_application_response channel variable as follows: “USAGE ERROR”: bad application arguments, “GRAMMAR ERROR”: speech recognizer failed to load grammar, “ASR INIT ERROR”: speech recognizer failed to allocate a session, and “ERROR”: any other errors
  • FS-7743 [mod_skinny] Updated SKINNY on-hook action to hang up all calls on a device, except those in a short list of call states (or perform a blind transfer) and added a hook after completing the hangup operation to start ringing if there is an inbound call active on the device.

Improvements in build system, cross platform support, and packaging:

  • FS-7610 Fixed a gcc5 compilation issue
  • FS-7426 Only disable mod_amqp on Debian Squeeze and Wheezy
  • FS-7297 g729 installer

The following bugs were squashed:

  • FS-7582 FS-7432 Fixed missing a=setup parameter from answering SDP
  • FS-7650 [mod_verto] Fixed crash when making a call from a verto user with profile-variables in their user profile
  • FS-7678 Fixed for fail_on_single_reject not working with | bridge
  • FS-7612 Fixed invalid json format for callflow key
  • FS-7621 [mod_shout] Fixed a slow interrupt
  • FS-7432 Fixed missing a=setup parameter from answering SDP
  • FS-7573 Fixed 80bit tag support for zrtp
  • FS-7636 Fixed an issue with transfer_after_bridge and park_after_bridge pre-empting transfers
  • FS-7654 Fixed an issue with eavesdrop audio not working correctly with a mixture of mono and stereo
  • FS-7579 [mod_conference] Fixed a bug not allowing suppression of play-file-done
  • FS-7593 [mod_skinny] Fixed a bug where skinny phones would stomp on each other in database when thundering herd occurs
  • FS-7597 [mod_codec2] Fixed encoded_data_len for MODE 2400, it should be 6 bytes. Also replaced 2550 bps bitrate (obsoleted operation mode) by 2400
  • FS-7604 [fs_cli] Fixed fs_cli tab completion concurrency issues on newer libedit
  • FS-7258 FS-7571 [mod_xml_cdr] Properly encode xml cdr for post to web server
  • FS-7607 Update URLs to reflect https protocol on freeswitch.org websites and update additional URLs to avoid 301 redirects.
  • FS-7479 Fixed a crash caused by large RTP/PCMA packets and resampling
  • FS-7524 [mod_callcenter] Fixing tiers, level and position should default to 1 instead of 0
  • FS-7622 [mod_amqp] Make sure to close the connections on destroy. Currently the connection is malloc’d from the module pool, so there is nothing to destroy.
  • FS-7689 [mod_lua] Fixed a bug with lua not loading directory configurations
  • FS-7489 [mod_unimrcp] Fixed a TTS Audio Queue Overflow
  • FS-7467 [mod_callcenter] Fixing stuck channels using uuid-standby agents

Video

2600hz - Tue, 06/30/2015 - 04:02


Developing mobile WebRTC hybrid applications

webrtchacks - Mon, 06/29/2015 - 15:30

There are a lot of notable exceptions, but most WebRTC developers start with the web because, well, WebRTC does start with “web” and development is much easier there. Market realities tell a very different story – there is more traffic on mobile than on desktop, and this trend is not going to change. So the next phase in most WebRTC deployments is inevitably figuring out how to support mobile. Unfortunately for WebRTC, that has often meant finding the relatively rare native iOS and Android developer.

The team at eFace2Face decided to take a different route and build a hybrid plugin. Hybrid apps allow web developers to use their HTML, CSS, and JavaScript skills to build native mobile apps. They also open sourced the project and verified its functionality with the webrtc.org AppRTC reference. We asked them to give us some background on hybrid apps and to walk us through their project.

 {“intro-by”, “chad“}

Hybrid apps for WebRTC (image source)

When deciding how to create a mobile application using WebRTC there is no obvious choice. Several items should be taken into consideration when facing this difficult decision: the existence of a previous code base, and the expertise, resources and knowledge available. Maintenance and support are also very important factors given the fragmentation of the mobile environment.

At eFace2Face we wanted to extend our service to mobile devices. We decided to choose our own path – exploring and filling in the gaps (developing new tools when needed) in order to create the solution that fitted us best. This post shares some of the knowledge and expertise we gained the hard way while doing so. We hope you find it useful!

Types of mobile apps (image source)

What’s a hybrid application?

There are two main approaches on how hybrid apps are built:

  • WebView: Put simply, this is an HTML5 web application that is bundled inside a native app and uses the device’s web browser to display it. The development framework for the application provides access to the device’s functions (camera, address book, accelerometers, etc.) in the form of JavaScript APIs through the use of plugins. It should also be totally responsive and use native-like resources to get a UX similar to a real app. Examples include Cordova/PhoneGap, Trigger.io, Ionic, and Sencha (the latter two being like Cordova with steroids).

Simple hybrid app example using PhoneGap (source)

Creating a hybrid HTML5 app is the alternative we prefer because it uses web-specific technologies. You can get a deeper overview of native vs. HTML5 (and hybrid applications) in a recent blog post at Android Authority.
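As a concrete illustration of the WebView/plugin pattern, here is a minimal sketch (the `getPicture` callback shape follows cordova-plugin-camera; the wrapper function and injected object are ours, so the pattern is testable outside a real device):

```javascript
// Sketch of the WebView/plugin pattern: the web app's JavaScript uses a
// device feature that native plugin code injects.
function takePicture(nav) {
  return new Promise((resolve, reject) => {
    // nav.camera is provided by a native plugin, not by the browser engine.
    nav.camera.getPicture(resolve, reject, { quality: 50 });
  });
}

// In a real app you would wait for the framework to finish injecting APIs:
// document.addEventListener("deviceready", () => takePicture(navigator));
```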

Hybrid App Pros & Cons Pros:
  • Hybrid apps are as portable as HTML5 apps. They allow code reuse across platforms, with the framework handling all platform-specific differences.
  • A hybrid app can be built at virtually the same speed at which an HTML5 app can be built. The underlying technology is the same.  
  • A hybrid app can be built for almost the same cost as an HTML5 app. However, most frameworks require a license, which adds an extra development cost.
  • Hybrid apps can be made available and distributed via the relevant app store, just like native apps.
  • Hybrid apps have greater access to native hardware resources than plain HTML5 apps, usually through the corresponding framework’s own APIs.
Cons:
  • Not all native hardware resources are available to hybrid apps. The available functionality depends on the framework used.
  • Hybrid apps appear to the end user as native apps, but run significantly slower than native apps. The same restriction on HTML5 apps being rejected for being too slow and not responsive on Apple’s App Store also applies to hybrid apps. Rendering complex CSS layouts will take longer than rendering a corresponding native layout.
  • Each framework has its own unique idiosyncrasies and ways of doing things that are not necessarily useful outside of the given framework.

From our point of view, a typical WebRTC application is not really graphic-intensive (i.e. it is not, for instance, a game with lots of animations and 3D effects). Most of the complex processes are done internally by the browser, not in JavaScript, so a graphical UX interface should be perfectly doable on a hybrid application and run without any significant perceptible slowdown. Instagram is a good example of a well-known hybrid app that uses web technologies in at least some of its components.

WebRTC on native mobile: current status

Native support in Android and iOS is a bit discouraging. Apple does not support it at all, and has published no information about when, or even whether, they are going to. On Android, the native WebView has supported WebRTC since version 4.4 (but be cautious, as that one is based on Chromium 36), and in 5.0 onwards.

Browser vendors fight (source)

Note that there are no “native WebRTC” APIs on Android or iOS yet, so you will have to use Google’s WebRTC library. Justin Uberti (@juberti) provides a very nice overview of how to do this (go here to see the slides).

Solutions

Let’s take a look at the conclusions of our research.

Android: Crosswalk

In Android, using the native WebView seems like a good approach; in fact we used it during our first attempt to create our application. But then we decided to switch to Intel’s Crosswalk, which includes what’s best described as a “full Chrome browser”. It allows us to use a fully updated version of Chromium instead of the system WebView.

These were our reasons for choosing Crosswalk:

  • Fully compatible source code: You only have to handle a single Chromium version across all Android devices. More importantly, it has the latest, regularly updated WebRTC APIs.
  • Backward compatibility: According to developer.android.com, approximately 48% of Android devices currently in use are running Android versions below 4.4. While most of them don’t have hardware powerful enough to run WebRTC (either native or hybrid), you shouldn’t exclude this market.
  • Fragmentation: Different versions of Android mean different versions of WebView. Given the speed at which WebRTC is evolving, you will have difficulties dealing with version fragmentation and supporting old versions of WebView.
  • Performance: It seems you can get up to a 10x improvement in both HTML/CSS rendering and JavaScript performance, along with better CSS correctness.

An advanced reader might think: “Ok, this is cool, but I need to use different console clients (Cordova and Crosswalk) to generate my project, and I don’t like the idea of that.” You’re right, it would be a hassle, but we also found another trick here. This project allows us to add Crosswalk support to a Cordova project; it uses a new Cordova feature that provides different engines like any other plugin. This way we don’t need different baselines in the source code.

iOS: Cordova plugin

As explained before, there are frameworks that provide hybrid applications with the device functionality code via plugins. You can use them in your JavaScript code but they are implemented using native code. So, it should be possible to add the missing WebRTC JavaScript APIs.

There are several options available, but most of them provide custom APIs or are tightly coupled to some proprietary signaling from a service provider. That’s why we released an open source WebRTC Cordova plugin for iOS.

The plugin is built on top of Google’s native WebRTC code and exposes the W3C WebRTC APIs. Also, as it is a Cordova plugin, it allows you to have the same Cordova application running on Android with Crosswalk, and on iOS with the WebRTC plugin. And both of them reuse all of the code base you are already using for your web application.
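Because the plugin exposes the standard W3C surface, the web and Cordova code paths can converge on the same call. A sketch (the feature test and the injected `win` object are ours; `registerGlobals()` is the entry point cordova-plugin-iosrtc documents for exposing its implementations on the usual globals):

```javascript
// Sketch of sharing one media path between web and Cordova builds.
function getMedia(win) {
  const iosrtc = win.cordova && win.cordova.plugins && win.cordova.plugins.iosrtc;
  if (iosrtc) {
    iosrtc.registerGlobals(); // maps RTCPeerConnection, getUserMedia, etc.
  }
  // From here on, the exact same call the web app already makes:
  return win.navigator.mediaDevices.getUserMedia({ audio: true, video: true });
}
```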

Show me the code!

“Yes, I have heard this already”, you might say, so let’s get some hands-on experience. To demonstrate that it’s trivial to reuse your current code and have your mobile application running in a matter of days (if not hours), we decided to take Google’s AppRTC HTML5 application and create a mobile application from the very same source code.

You can find the iOS code on GitHub. Here are the steps required to get everything we’re talking about working in minutes:

  • Get the source code: “git clone https://github.com/eface2face/iOSRTCApp; cd iOSRTCApp”
  • Add both platforms; all required plugins are installed automatically because of their inclusion in the “config.xml” file: “cordova platform add ios android”
  • Run as usual: “cordova run --device”
  • Once running, enter the same room as the one that’s already been created via web browser at https://apprtc.appspot.com/ and enjoy!

Call between iOSRTCApp on iPad and APPRTC on browser

We needed to make some minor changes to make it work properly in the Cordova environment. None of these changes required more than a couple of js/html/css lines:

  • Due to Cordova’s nature, we had to add its structure to the project. Some plugins are required to get native features and permissions. The scripts js/apprtc.debug.js and js/appwindow.js are loaded once Cordova’s deviceready event is fired. This is necessary since the first one relies on the existing window.webkitRTCPeerConnection and navigator.webkitGetUserMedia, which are not set by cordova-plugin-iosrtc until the event fires.
  • The webrtcDetectedVersion global variable is hardcoded to 43, as the AppRTC JavaScript code expects the browser to be Chrome or Chromium and fails otherwise.
  • In order to correctly place video views (iOS native UIView elements), the plugin function refreshVideos is called when local or remote video is actually displayed. This is because the CSS video elements use transition effects that modify their position and size for a duration of 1 second.
  • A new CSS file, css/main_overrides.css, changes the properties of the video elements. For example, it sets opacity to 0.85 on local-video and remote-video so the HTML call controls are shown even below the native UIView elements rendering the local and remote video.
  • Safari crashes when calling plugin methods from within WebSocket events (“onopen”, “onmessage”, etc.). Instead, you have to run a setTimeout within the WebSocket event if you need to call plugin methods from it. We loaded the provided ios-websocket-hack.js script into our Cordova iOS app and solved this.
  • A polyfill for window.performance.now(), used in the AppRTC code.
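The WebSocket workaround in the list above can be sketched as follows (the wrapper name is ours; ios-websocket-hack.js automates the same idea):

```javascript
// Sketch of the Safari/WebSocket workaround: never call plugin methods
// directly inside a WebSocket event handler; defer the work with
// setTimeout so it runs after the handler returns.
function attachSafeHandler(ws, onMessage) {
  ws.onmessage = (event) => {
    setTimeout(() => onMessage(event.data), 0);
  };
}
```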
Conclusion

Deciding whether to go hybrid or native for your WebRTC app is up to you. It depends on the kind of resources and relevant experience your company has, the kind of application that you want to implement, and the existing codebase and infrastructure you already have in place. The good news is our results show that using WebRTC is not a key factor in this decision, and you can have the mobile app version of your WebRTC web service ready in much less time than you probably expected.

References

 

{“authors”, [“Jesus Perez“,”Iñaki Baz“, “Sergio Garcia Murillo“]}

Want to keep up on our latest posts? Please click here to subscribe to our mailing list if you have not already. We only email post updates. You can also follow us on twitter at @webrtcHacks for blog updates and news of technical WebRTC topics or our individual feeds @chadwallacehart, @victorpascual and @tsahil.

The post Developing mobile WebRTC hybrid applications appeared first on webrtcHacks.

Why I Hate Video Conferencing Plugins and LOVE WebRTC Services

bloggeek - Mon, 06/29/2015 - 12:00

Friction.

A true story…

I had a meeting the other day. It was with a company that has been offering WebRTC video chat as part of its own services to their own customers for some time now, but internally, they used some other vendor for their own business meetings. My invitation was on that other vendor’s platform.

At the time of the meeting, I opened the calendar invitation, searching for the link to press.

Found it. Clicked it.

Got to the web page using my Chrome browser on my Ubuntu home desktop machine.

Clicked to join the meeting using my browser.

Was greeted with a message telling me Chrome isn’t supported due to a Chrome bug (with a link to a page detailing the issue on Chrome’s bug tracker) AND suggesting I use Firefox.

Good.

Opened up Firefox, pasted the link to it.

Clicked to join the meeting using my browser.

Was greeted with a message telling me that only Windows and Mac are supported.

Great.

Opened my laptop to join. It runs Windows 8, so no issues there (I hoped).

Clicked the link on the email there, just to get Chrome opened there.

Somehow, the system knew this time that I should be able to use Chrome, so it happily instructed me to wait while it downloaded and then ran the executable it was sending me.

Ok.

It took a minute or two to get that executable to run and start installing *something*.

But it got lost in all my windows. A bit of searching and I found the pesky window telling me to open the link yet again.

So I did.

It then went into this seemingly endless loop of trying to open up a meeting, failing and reopening.

This is when I noticed that the window being opened was an Internet Explorer one.

I cut the loop short and opened the link to the meeting on Internet Explorer.

It worked.

10 minutes later, frustrated, with another crappy installation of a client lurking around my Windows machine, I got to talk to the people who invited me.

Two of us were there with video – me being one of them – we had actually installed and executed that “plugin”.

Others joined by phone.

I am a technical person.

I worked in the video conferencing industry.

Why the hell should we use such broken tools and technologies in 2015?

I couldn’t care less if the video conferencing equipment that was purchased eons ago doesn’t support VP8, or requires conversion of SRTP to RTP, or requires translation from REST/WebSocket to H.323 signaling. I really don’t.

The only thing I want is to open a browser to a specific URL and have that URL just work.

On Ubuntu please.

The service in question?

Wasn’t a new one. They’ve been around for a decade or so.

They started with the desktop, so why can’t they get that experience to work well?

Yes. Internet Explorer and Safari are missing. I know. But I couldn’t care less.

If you want to provide a broken plugin experience for IE and Safari, then please do. But wherever possible make it easier for me to use.

It really isn’t hard. I attend a lot of video calls these days. The crushing majority of them are through WebRTC based services. Most of the services I used weren’t built by billion dollar companies.

Get your act together.

Start using WebRTC for your own business meetings.

The post Why I Hate Video Conferencing Plugins and LOVE WebRTC Services appeared first on BlogGeek.me.

Kamailio - TLSF – High Performance Memory Manager

miconda - Thu, 06/25/2015 - 19:53
In Kamailio v4.3.0, Camille Oudout from Orange/Libon, France, pushed a new memory manager (named tlsf) focused on high performance in handling memory operations. It is well known that Kamailio (from its very beginning as the SER project back in 2001) has its own memory manager, which simplifies especially the handling of shared memory on different operating systems. There were two available that can be enabled at compile time, called:
  • f_malloc (aka fast malloc) – the one mostly used as default for stable releases
  • q_malloc (aka quick malloc) – the one more suitable for memory operations troubleshooting
While these two memory managers were designed to be fast for multi-process applications such as Kamailio (e.g., avoiding thread locking for private memory) as well as for the patterns of routing SIP traffic, a few special cases could result in slowdowns – one of them being the need to free a lot of allocated chunks of the same size.

Worth mentioning that the system memory manager could be (and can still be) enabled for private memory needs. Some other attempts to add new memory managers were never completed, and are therefore not ready for use (e.g., the Doug Lea allocator or the Lock Less allocator – you can check the source code tree, inside the mem/ folder, for more details).

Camille implemented the Two Level Segregated Fit (TLSF) memory allocator, known to be O(1) for both malloc() and free() operations (no worst-case behavior). It has a 4 byte block overhead, but hardware memory is cheap these days. You can read more about it at:

It is not enabled by default, being rather young code for now, but it is a good candidate to become the default in the near future. To enable it, install Kamailio from sources and compile using:

make MEMDBG=1 MEMMNG=2 cfg
make all
make install

This will enable the debugging mechanism as well, which can be disabled by using MEMDBG=0 instead. If you start using it, do provide us feedback about how it performs, because that helps to assess its relevance and stability. Also, do not hesitate to start a discussion via the sr-dev mailing list if you have questions or suggestions.

Have a great summer!

My First @W3C #WebRTC Editor’s Call

webrtc.is - Thu, 06/25/2015 - 19:50

As newly appointed co-chair in the W3C WebRTC WG, I just participated in my first Editor’s Call, and I’m impressed.

We had dozens of Pull Requests and Issues to address on the associated GitHub repos. We managed to knock down quite a few – some ended up getting merged and a few were closed today – despite missing one co-chair and one editor.

There were some suggestions on how we could make the processes a bit more effective, allowing everyone to understand more what’s expected of them. It’s going to take a few meetings I suspect to get a real feel for how I can be adding the most value possible.

Overall, it feels like we are all trying our best to do what the new charter has set out: to get 1.0 done before getting on with the next chapter. I am excited to be part of it and look forward to continuing to help!

If you have any thoughts on how the WebRTC Working Group could be doing things differently to be more effective and efficient, I would like to hear your thoughts.


How the Politics of Standardization Plays in WebRTC, WebAssembly and Web Browsers

bloggeek - Thu, 06/25/2015 - 12:00

Companies care little about standards. Unless it serves their selfish objectives.

The main complaint around WebRTC? When is Apple/Microsoft going to support it.

How can that be when WebRTC is being defined by the IETF and W3C? When it is part of HTML5?

WebAssembly

We learned last week about a brand new initiative: WebAssembly. The concept? Have a binary format that replaces JavaScript and acts as a kind of byte-code. The result?

  1. Execute code on web pages faster
  2. Enable more languages to “run” on web pages, by compiling them to this new byte-code format

If the publication on TheNextWeb is accurate, then this WebAssembly thing is endorsed by all the relevant browser vendors (that’s Google, Apple, Microsoft & Mozilla).

WebAssembly is still just a thought. Nothing as substantiated as WebRTC is. And yet…

WebAssembly yes and WebRTC no. Why is that?

Why is that?

Decisions happen to be subjective and selfish. It isn’t about what’s good for the web and end users. Or rather, it is – as long as it fits our objectives and doesn’t give competitors an advantage or remove an advantage we have.

WebAssembly benefits almost everyone:

  • It makes pages smaller (binary code is smaller than text in general)
  • It makes interactive web pages run faster, allowing more sophisticated use cases to be supported
  • It works better on mobile than simple text

Google has no issue with this – they thrive on things running in browsers

Microsoft are switching towards the cloud, and are in a losing game with their dated IE – they switched to Microsoft Edge and are showing some real intent in modernizing the experience of their browser. So this fits them

Mozilla are trying to lead the pack, being the underdog. They will be all for such an initiative, especially when WebAssembly takes their asm.js efforts and builds from there. It validates their credibility and their innovation

Apple. TechCrunch failed to mention Apple in their article about WebAssembly. A mistake? On purpose? I am not sure. They seem to have the most to lose: a better web means less reliance on native apps, where they rule with the current iOS-first focus of most developers

All in all, browser vendors have little to lose from WebAssembly while users theoretically have a lot to gain from it.

WebRTC

With WebRTC this is different. What WebRTC has to offer for the most part:

  • Access to the camera and microphone within a web browser
  • Ability to conduct real time voice and video sessions in web pages
  • Ability to send arbitrary data directly between browsers

The problem stems from the voice and video capability.

Google has Hangouts, but makes money from people accessing web pages. Having ALL voice and video interactions happen on the web is an advantage to Google. No wonder they are so heavily invested in WebRTC

Mozilla has/had nothing to lose. They had no voice or video assets to speak of. At the time, most of their revenue also came from Google. Money explains a lot of decisions…

Microsoft has Skype and Lync. They sell Lync to enterprises and paid $8.5 billion for Skype. Why would they open up the door to competitors so fast? They are now headed there, making sure Skype supports it as well

Apple. They have FaceTime. They care about the Apple ecosystem. Having access to it from Android for anything that isn’t a Move to iOS app won’t make sense to them. Apple will wait until the last moment to support it, making sure everyone who wishes to develop anything remotely related to FaceTime (which was supposed to be standardized and open) has a hard time doing that

All in all, WebRTC doesn’t benefit all browser vendors the same way, so it hasn’t been adopted with the same zeal that WebAssembly seems to attract.

Why is it important?

Back to where I started: Companies care little about standards. Unless it serves their selfish objectives.

This is why getting WebRTC to all browser vendors will take time.

This is why federating VoIP/WebRTC isn’t on the table at this point in time – the successful vendors who you want to federate with wouldn’t like that to happen.

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

 

The post How the Politics of Standardization Plays in WebRTC, WebAssembly and Web Browsers appeared first on BlogGeek.me.

FreeSWITCH Week in Review (Master Branch) June 13th-19th

FreeSWITCH - Tue, 06/23/2015 - 19:28

Hello, again. This past week in the FreeSWITCH master branch we had 94 commits! We had a large amount of work done this week and a few of the highlights are: added mod_local_stream video support, added member status in json format to the conference live array, added a function to enable debug information about the Opus payload, and a security issue concerning enabling cert CN/SAN validation.

Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.

Security issues:
FS-7708 Fixed docs on enabling cert CN/SAN validation

New features that were added:
FS-7656 [mod_localstream] Added mod_local_stream video support, and make mod_conference move the video in and out of a layer when the stream has video or not, scan for relative file in art/eg.wav.png and display it as video when playing audio files, put video banner up if artist or title is set, and fixed a/v sync on first connection
FS-7629 [mod_conference] Added member status in json format to the conference live array, add livearray-json-status to conference-flags to enable
FS-7517 FS-7519 [mod_av] [mod_openh264] Added H264 STAP-A packeting support so it would work with FireFox
FS-7664 [mod_verto] Set ICE candidate timeout to wait for only 1 second to fix media delays
FS-7660 [mod_opus] Enabled with new API command “opus_debug” to show information about Opus payload for debugging.
FS-7519 [mod_av] Fixed bitrate and added some presets
FS-7693 [mod_conference] Lower the default energy level in sample configs to improve voice quality

Improvements in build system, cross platform support, and packaging:
FS-7648 More work toward setting up a QA testing configuration, add condition testing for regex all and xor cases, adding profile-variable for testing cases, add lipsync tests for playback and local stream, add stereo, and configuration for mcu test
FS-7338 Fixed bug in Debian packaging when trying to build against custom repo
FS-7609 Enable building of mod_sangoma_codec for Debian Wheezy/Jessie
FS-7667 [mod_java] Fixed include directory detection when using Debian java packages and use detected directory
FS-7655 Make libvpx and libyuv optional (none of the video features will work without them) The following modules require these libraries to be installed still: mod_av mod_cv mod_fsv mod_mp4v2 mod_openh264 mod_vpx mod_imagick mod_vpx mod_yuv mod_png mod_vlc, fix build issue w/ strict prototypes, and fix a few functions that need to be disabled without YUV
FS-7605 Fixed default configuration directory in Debian packages and fixed Debian packaging dependencies on libyuv and libvpx
FS-7669 When installing from Debian packaging if you don’t have the /etc/freeswitch directory, we will install the default packages for you. If you already have this directory, we’ll let you deal with your own configs.
FS-7297 [mod_com_g729] Updated the make target installer
FS-7644 Added a working windows build without video support for msvc 2013
FS-7666 [mod_managed] Fixed error building mod_managed on non windows platforms

The following bugs were squashed:
FS-7641 Fixed a segfault in eavesdrop video support
FS-7649 [mod_verto] Fixed issue with h264 codec not being configured in verto.conf.xml
FS-7657 [mod_verto] Fixed a bug with TURN not being used. Note, you can pass an array of stun servers, including TURN, to the verto when you start it up. (see verto.js where iceServers is passed)
FS-7665 [mod_conference] Fixed a bug with the video floor settings not giving the video floor to the speaker
FS-7650 [mod_verto] Fixed crash when making a call from a verto user with profile-variables in their user profile
FS-7710 [mod_conference] Added the ability to set bandwidth to “auto” for conference config
FS-7432 Fixed dtls/srtp, use correct a=setup parameter on recovering channels
FS-7678 Fixed for fail_on_single_reject not working with | bridge
FS-7709 [mod_verto] Verto compatibility fixes for Firefox
FS-7689 [mod_lua] Fixed a bug with lua not loading directory configurations
FS-7694 [mod_av] Fixed for leaking file handles when the file is closed.

Why Did Atlassian Switch Jitsi’s Open Source License from LGPL to Apache?

bloggeek - Tue, 06/23/2015 - 12:00

Jitsi switching to the Apache open source license is what the doctor ordered.

Blue Jimp, and with it Jitsi, was acquired by Atlassian in April this year. I wrote at the time about Jitsi’s open source license:

The problem with getting the Jitsi Videobridge to larger corporations was its open source license

  • Jitsi uses LGPL. A non-permissive license that is somewhat challenging for commercial use. While it is suitable for SaaS, many lawyers prefer not to deal with it
  • This reduces the Jitsi Videobridge’s chance to get adopted by enterprise developers who can pour more resources into it
  • This may limit Jitsi from building the ecosystem Atlassian wants (i.e – outsourcing some of the development effort to an external developers community)
  • Using BSD, MIT or Apache licenses would have been a better alternative. Will Atlassian choose that route? I am not sure
  • Did Atlassian leave the open source offering due to legal issues or real intent in becoming an open source powerhouse?

You can read my explanation on open source licenses. If you read the comments as well, you’ll see how complex and mired with landmines this domain is.

Last week, an announcement was made in the jitsi-dev mailing list: Jitsi is switching from LGPL to Apache license:

LGPL, our current license, allows everyone to integrate and ship our various jars. Once you start making changes and distributing them however, then you need to make sure these changes are also available under LGPL, AKA the LGPL reciprocity clause.

What I found interesting were the next two paragraphs:

As the copyright holder, in BlueJimp we have been exempt from this reciprocity clause. Even though we rarely use it, we had the liberty to modify our code without making our changes public. No one else had this option.

Switching to Apache ends our advantage in this regard, and allows everyone to use, integrate and distribute Jitsi with a lot less limitations.

Some things to notice here:

  • People who made changes to the Jitsi code base had to contribute them back to Blue Jimp, which was itself not bound by the same terms – Jitsi effectively maintained a different “license” for itself. This works well when your business model is consulting and customization of the open source project you maintain – not so good for a large enterprise
  • Atlassian took a different approach here by switching to Apache:
    • Atlassian internally has the same decision making processes as other large enterprises. LGPL is harder to adopt than Apache, making a switch to the Apache license for Jitsi a reasonable step to take – preferential treatment for Apache license in Atlassian and elsewhere played a key role here
    • It removed the possible nightmare of maintaining all of the existing CLAs (contributor license agreements) – they might have found them inaccurate, requiring a modification in their terms, needing a reassignment to Atlassian, etc – it was a good time to make the switch to Apache anyway
    • It gives a strong signal to the market, and especially to large enterprises that Jitsi is something they can use – if this turns out well, there will be additional contributors to this software package, as it is a popular one in the WebRTC industry
  • This switch from LGPL to the Apache license changes nothing in the ability of Blue Jimp and Atlassian to modify the base code and not contribute it back to the open source package
    • This kind of a thing has happened before during acquisitions of open source project teams
    • It also happens when competition starts using your own open source against you (think Google’s Android)
    • It is unlikely to happen in the short or medium term, based on the signals coming from Atlassian and their current focus
  • This opens up a powerful WebRTC media server (an SFU actually) to a larger number of vendors

All in all, this is a great move for our WebRTC ecosystem. Atlassian is making the right moves in keeping the Jitsi community happy and engaged while attracting the larger players in the market. I wouldn’t have done it any other way if I were in their shoes.

 

Want to make the best decision on the right WebRTC platform for your company? Now you can! Check out my WebRTC PaaS report, written specifically to assist you with this task.

 

The post Why Did Atlassian Switch Jitsi’s Open Source License from LGPL to Apache? appeared first on BlogGeek.me.

Kamailio Server Maintenance – Wed-Thu Night

miconda - Tue, 06/23/2015 - 10:16
During the night between Wed (June 24, 2015) and Thu (June 25, 2015), planned to start no earlier than 00:00 GMT+1, there will be some scheduled maintenance work on the infrastructure that is hosting some of the kamailio.org servers. The main affected services will be:
  • main website (www.kamailio.org)
  • wiki portals
  • mailing lists (lists.sip-router.org)
  • git mirror (git.kamailio.org)
It is expected to have short downtimes of a few minutes.

W3C ORTC CG – Editors Draft Update

webrtc.is - Mon, 06/22/2015 - 21:37

Big thanks to everyone (especially Bernard) for putting in the extra work required here for our next CG meeting:

Draft Community Group Report 22 June 2015

 

B.1 Changes since 7 May 2015
  1. Addressed Philipp Hancke’s review comments, as noted in: Issue 198
  2. Added the “failed” state to RTCIceTransportState, as noted in: Issue 199
  3. Added text relating to handling of incoming media packets prior to remote fingerprint verification, as noted in: Issue 200
  4. Added a complete attribute to the RTCIceCandidateComplete dictionary, as noted in: Issue 207
  5. Updated the description of RTCIceGatherer.close() and the “closed” state, as noted in: Issue 208
  6. Updated Statistics API error handling to reflect proposed changes to the WebRTC 1.0 API, as noted in: Issue 214
  7. Updated Section 10 (RTCDtmfSender) to reflect changes in the WebRTC 1.0 API, as noted in: Issue 215
  8. Clarified state transitions due to consent failure, as noted in: Issue 216
  9. Added a reference to [FEC], as noted in: Issue 217

How OTTs are Challenging VoLTE’s Prime Asset on Smartphones

bloggeek - Mon, 06/22/2015 - 12:00

While our smartphones aren’t phones anymore, their phone-calling real estate is still a prime asset.

VoLTE stands for Voice over LTE. It has been in the making for quite some time, but hasn’t made its grand public appearance yet. While carriers around the globe boast LTE adoption stats, this says NOTHING about the lag in the carrier’s once-main service – the humble voice call.

Today, in almost all cases where you pick up your smartphone and are greeted with an LTE network, if you make a phone call, it will go over 3G or GSM. Why? Because for voice to traverse LTE it requires VoLTE – or some other workaround. Once VoLTE makes it onto the scene, it will need to replace today’s voice calls – and be a part of the smartphone’s dialer.

But there are other means of making calls these days, and I am not talking about Skype buddy lists.

Here is how the different players on the market are redefining how we make calls, and trying to win the real estate of the phone’s dialer by… replacing it.

Apple

In some ways, Apple is dependent on carriers selling its smartphones through contract agreements, so it can’t piss off its channel to market too much. But they are treading a very fine line here.

It started with FaceTime, Apple’s video chat service, which was followed by iMessage, and later the introduction of FaceTime Audio.

Apple controls the iPhone’s UI, which means it decides what the dialer looks like and what functions it exposes to the user.

The end result?

  • When you want to send an SMS to someone, Apple will automatically “convert” it to an iMessage if possible
  • When you want to make a call to someone, if he uses an Apple device, you have the option to call him – voice or video – using FaceTime
Google

Google has Hangouts. You get it pre-installed in Android devices. Many never use it, but it is there.

Google tried making Hangouts sticky in the past, so they allowed it to receive and send SMS – similar in some ways to how Apple does iMessage, but different as the experience isn’t as seamless.

On a mobile phone, think of Hangouts as a step on the way. Google’s Project Fi, their new MVNO initiative, probably uses Hangouts internally – it does connect with Hangouts, as their website explains:

Connect any device that supports Google Hangouts (Android, iOS, Windows, Mac, or Chromebook) to your number. Then, talk and text with anyone—it doesn’t matter what device they’re using.

Google is bulking up its communication chops nicely these past few years, and Fi is the next step. I am certain that part of the tech and learnings that Google gains from Fi will find its way back to their general Hangouts service.

Facebook

Facebook had its share of romance with mobile. From rumors of Facebook smartphones, to a failed Facebook Home app.

For Facebook mobile is critical. Many of its customers use it exclusively on mobile. How do you increase your share in a digital life pie if you are Facebook? You try to control the smartphone experience.

Building a smartphone is hard (ask Amazon), so Facebook tried controlling the home screen by developing a Facebook centric Android launcher. This didn’t work, but wasn’t a failure at the scale of a smartphone launch.

Next up is their relatively new Hello app. It looks rather innocuous – you receive calls through their Hello app to get a “social ID”: Facebook will match the phone number to a person’s Facebook account and show it to you on incoming calls.

The end result?

  • Facebook Hello is used as your smartphone’s calling app
  • They didn’t miss the opportunity of adding their own dialer in – which enables you to call via Messenger
Whatsapp (still Facebook)

Whatsapp is a part of Facebook, but it took a very different approach. It simply added voice calling to its app.

If you are interested in understanding the size of Whatsapp, then there’s a good bulleted list on Mobile Industry Review.

Think of this move in the following context:

  • As of April 2015, WhatsApp has more than 800 million active users
  • Average amount of time spent by users on WhatsApp is 195 minutes a week
  • Teenagers use Whatsapp all the time. At least here in Israel. They don’t talk – they text. Faced with the need to escalate a text chat to a voice call – will they switch app and context or just press the phone icon on the Whatsapp page?

What is your dialer now? The traditional phone dialer with its contacts app or Whatsapp?

Why is it important?

Communication is being redefined. Switching from voice and video towards data access and messaging.

This brings with it a bigger change in what is considered prime real estate on a smartphone’s display, and there are non-telco vendors who are positioned nicely to displace the carriers from the dialer as well. Where would that leave the carriers’ efforts with VoLTE?

 

Kranky and I are planning the next Kranky Geek in San Francisco sometime during the fall. Interested in speaking? Just ping me through my contact page.

The post How OTTs are Challenging VoLTE’s Prime Asset on Smartphones appeared first on BlogGeek.me.

Changes in the W3C WebRTC Working Group

webrtc.is - Fri, 06/19/2015 - 20:40

With the forthcoming re-charter of the W3C WebRTC Working Group, there were also a few managerial changes:

  • Peter Saint Andre (@andyet fame), will be joining as co-editor
  • Erik Lagerway, yours truly (co-founder @hookflash), will be joining as co-chair
  • Vivien Lacourba, W3C staff, will be helping out Dominique Hazael-Massieux with increased W3C staff time in the WebRTC Working Group

I am personally flattered and over the moon excited to have been asked to co-chair the WebRTC Working Group and look forward to working with Harald and Stefan to help usher in the next era of WebRTC standards work.

/Erik


Why You Should Start Using WebRTC TODAY and Abandon Perfection?

bloggeek - Thu, 06/18/2015 - 12:00

To paraphrase Seth Godin, WebRTC is about breaking things.

Seth Godin (who you should definitely read) had an interesting post this week, titled Abandoning perfection. It is short so go over and read it. I’ll just put one of the paragraphs of this post here, to serve as my context:

Perfect is the ideal defense mechanism, the work of Pressfield’s Resistance, the lizard brain giving you an out. Perfect lets you stall, ask more questions, do more reviews, dumb it down, safe it up and generally avoid doing anything that might fail (or anything important).

Now that we have it here, why don’t we check on the excuses people (and companies) give for not using WebRTC?

  • “Microsoft and Apple don’t support it”
    • Do you have any better idea on how to do video calling in browsers? Because I don’t
    • And there are WebRTC plugins for those who want them in Safari and IE
    • There are also those who can live with Chrome and Firefox use cases only
  • “You can’t do multiparty calls with it”
    • This is true for any client side VoIP solution. They require a server
    • And since WebRTC is a technology, it is up to you to come up with the solution and implement server side multiparty
    • Join my webinar next week with TokBox on this subject while you’re at it…
  • “There’s no quality of service”
    • No VoIP service has quality of service
    • WebRTC changes nothing in this regard
    • And people are still happy to use Skype (!) for their business meetings
  • “Without signaling, it can’t interoperate with anything else”
    • True. WebRTC comes without signaling
    • Which means you can add your own – SIP, XMPP or anything you fancy. To fit your exact need and use case
    • In many cases, interoperability is overrated anyway, and building your own service silo is good enough
  • “Mobile First, iOS First. Apple not there, so no way I can use WebRTC”
    • You’ll be surprised how many commercial iOS production apps there are that use WebRTC
    • That’s why I even published a report on WebRTC adoption in mobile apps

Got a lizard brain? Make sure you use the excuses above in the next weekly meeting with your boss. Want to break things and be useful? Check out what WebRTC can do for you.

Oh, and when someone tells you that WebRTC isn’t ready for prime time yet, but will be in 2-3 years – and a lot sooner than you expect – tell him it is ready. Today.

I’ve seen companies using WebRTC daily – in ways that advance their business – adding more flexibility – enabling them to make better decisions – lowering their costs – or allowing them to exist in the first place.

Got a good use case that requires real time communications? First check if WebRTC fits your needs – REALLY check. 80% or more of the time – it will.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

Trying to understand how to get your service to mobile with WebRTC? Read my WebRTC Mobile Adoption report, written specifically to assist you with this task.

Want to make the best decision on the right WebRTC platform for your company? Now you can! Check out my WebRTC PaaS report, written specifically to assist you with this task.

Kranky and I are planning the next Kranky Geek in San Francisco sometime during the fall. Interested in speaking? Just ping me through my contact page.

Looking for a WebRTC related job? Need the best WebRTC developer out there? You should definitely try out the WebRTC Job Board - satisfaction guaranteed!

The post Why You Should Start Using WebRTC TODAY and Abandon Perfection? appeared first on BlogGeek.me.

The new Android M App Permissions – Dag-Inge Aas

webrtchacks - Wed, 06/17/2015 - 15:30

Android got a lot of WebRTC’s mobile development attention in the early days. As a result, much of the blogosphere’s attention has turned to the harder iOS problem, and Android is often overlooked by those who want to get started with WebRTC. Dag-Inge Aas of appear.in has not forgotten about the Android WebRTC developer. He recently published an awesome walkthrough post explaining how to get started with WebRTC on Android. (Dag’s colleague Thomas Bruun also put out an equally awesome getting started walkthrough for iOS.) Earlier this month Google also announced some updates on how WebRTC permissions interaction will work on the new Android. Dag-Inge provides another great walkthrough below, this time covering the new permission model.

{“editor”: “chad“}

 

At this year’s Google I/O, Google released the Android M Developer Preview with lots of new features. One of them is called App Permissions, and will have an impact on how you design your WebRTC powered applications. In this article, we will go through how you can design your applications to work with this new permissions model.

To give you the gist of App Permissions, they allow the user to explicitly grant access to certain high-profile permissions. These include permissions such as Calendar, Sensors, SMS, Camera and Microphone. The permissions are granted at runtime, for example, when the user has pressed the video call-button in your application. A user can also at any time, without the app being notified, revoke permissions through the app settings, as seen on the right. If the app requests access to the camera again after being revoked, the user is prompted to once again grant permission.

This model is very similar to how iOS has handled permissions for years. Users can feel safe that their camera and microphone are only used if they have given explicit consent at a time that is relevant for them and the action they are trying to perform.

However, this does mean that WebRTC apps built for Android now have to face the same challenges that developers on iOS have to face. What if the user does not grant access?

To get started, let’s make sure our application is built for the new Android M release. To do this, you have to edit your application’s build.gradle file with the following values:

targetSdkVersion "MNC"
compileSdkVersion "android-MNC"
minSdkVersion "MNC" // For testing on Android M Preview only.

Note that these values are prone to change once the finalized version of Android M is out.

In addition, I had to update my Android Studio to the Canary version (1.3 Preview 2) and add the following properties to my build.gradle to get my sources to compile successfully:

compileOptions {
    sourceCompatibility JavaVersion.VERSION_1_7
    targetCompatibility JavaVersion.VERSION_1_7
}

However, your mileage may vary. With all that said and done, and the M version SDK installed, you can compile your app to your Android device running Android M.

Checking and asking for permissions

If you start your application and enable its audio and video capabilities, you will notice that the camera stream is black, and that no audio is being recorded. This is because you haven’t asked for permission to use those APIs from your user yet. To do this, you have to call requestPermissions(permissions, yourRequestCode) in your activity, where permissions is a String[] of Android permission identifiers, and yourRequestCode is a unique integer to identify this specific request for permissions.

String[] permissions = {
    "android.permission.CAMERA",
    "android.permission.RECORD_AUDIO"
};
int yourRequestCode = 1;
requestPermissions(permissions, yourRequestCode);

Calling requestPermissions will spawn two dialogs to the user, as shown below.

When the user has denied or allowed access to the APIs you request, the Activity’s onRequestPermissionsResult(int requestCode, String permissions[], int[] grantResults) method is called. Here we can recognize the requestCode we sent when asking for permissions, the String[] of permissions we asked for access to, and an int[] of results from the grant permission dialog. We now need to inspect which permissions the user granted us, and act accordingly. To act on this data, your Activity needs to override this method.

@Override
public void onRequestPermissionsResult(int requestCode, String permissions[], int[] grantResults) {
    switch (requestCode) {
        case YOUR_REQUEST_CODE: {
            if (grantResults[0] == PackageManager.PERMISSION_GRANTED) {
                // permission was granted, woho!
            } else {
                // permission denied, boo! Disable the
                // functionality that depends on this permission.
            }
            return;
        }
    }
}

How you handle denied permissions is up to your app, but best practices dictate that you should disable any functions that rely on these permissions being granted. For example, if the user denies access to the camera but enables the microphone, the toggle video button should be disabled, or alternatively trigger the request again, should the user wish to add their camera stream at a later point in the conversation. Disabling access to video also means that you can avoid doing the VideoCapturer dance to get the camera stream – it will be black anyway.
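The grant-handling logic described above boils down to mapping each requested permission to whether its dependent UI feature should stay enabled. Here is a minimal, framework-free sketch of that mapping in plain Java (the class and method names are my own, and the integer constants stand in for PackageManager.PERMISSION_GRANTED / PERMISSION_DENIED):

```java
import java.util.HashMap;
import java.util.Map;

class PermissionUiState {
    static final int PERMISSION_GRANTED = 0;  // mirrors PackageManager.PERMISSION_GRANTED
    static final int PERMISSION_DENIED = -1;  // mirrors PackageManager.PERMISSION_DENIED

    // Map each requested permission to whether its dependent feature should be enabled.
    static Map<String, Boolean> featureToggles(String[] permissions, int[] grantResults) {
        Map<String, Boolean> toggles = new HashMap<>();
        for (int i = 0; i < permissions.length; i++) {
            toggles.put(permissions[i], grantResults[i] == PERMISSION_GRANTED);
        }
        return toggles;
    }

    public static void main(String[] args) {
        // Example: the user granted the microphone but denied the camera.
        String[] permissions = {"android.permission.CAMERA", "android.permission.RECORD_AUDIO"};
        int[] grantResults = {PERMISSION_DENIED, PERMISSION_GRANTED};
        Map<String, Boolean> toggles = featureToggles(permissions, grantResults);
        // Camera toggle disabled (skip the VideoCapturer setup), audio stays enabled.
        System.out.println(toggles.get("android.permission.CAMERA"));
        System.out.println(toggles.get("android.permission.RECORD_AUDIO"));
    }
}
```

Inside onRequestPermissionsResult you would feed it the permissions and grantResults arrays you receive, and enable or disable the corresponding buttons from the returned map.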

One thing to note is that you don’t always need to ask for permission. If the user has already granted access to the camera and microphone previously, you can skip this step entirely. To determine if you need to ask for permission, you can use checkSelfPermission(PERMISSION_STRING) in your Activity. This will return PackageManager.PERMISSION_GRANTED if the permission has been granted, and PackageManager.PERMISSION_DENIED if the request was denied. If the request was denied, you may ask for permission using requestPermissions.

if (checkSelfPermission(Manifest.permission.CAMERA) == PackageManager.PERMISSION_DENIED) {
    requestPermissions(new String[]{Manifest.permission.CAMERA}, YOUR_REQUEST_CODE);
}

When to ask for permission

The biggest question with this approach is when to ask for permission. When are users more likely to understand what they are agreeing to, and therefore more likely to accept your request for permissions?

To me, best practices really depend on what your application does. If your application’s primary purpose is to enable video communication – for example, a video chat application – then you should prime the user for setting up their permissions at initial app startup. However, you must at the same time make sure that permissions are still valid, and if necessary, re-prompt the user in whatever context is natural. For example, a video chat application based on rooms may prompt the user to enable video and audio before entering a video room. If the user has already granted access, the application should proceed to the next step. If not, the application should explain in clear terms why it needs audio and video access, and ask the user to press a button to get prompted. This makes the user much more likely to trust the application and grant access.

If your application’s secondary purpose is video communication, for example in a text messaging app with video capabilities, best practices dictate that the user should get prompted when clicking the video icon for starting a video conversation. This has a clear benefit, as the user knows their original intention, and will therefore be likely to grant access to their camera and microphone.

And remember, camera and microphone access can be revoked at any time without the app’s knowledge, so you have to run the permission check every time.
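Because access can be revoked silently, the check-then-request step has to run before every call. A minimal sketch of that decision in plain Java, with the Android side stubbed out (the helper name is my own; the set of granted permissions stands in for what checkSelfPermission would report at that moment):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

class PermissionCheck {
    // Given the permissions currently granted (as checkSelfPermission would report),
    // return the ones we still need to request before starting a call.
    static String[] missingPermissions(String[] required, Set<String> granted) {
        List<String> missing = new ArrayList<>();
        for (String permission : required) {
            if (!granted.contains(permission)) {
                missing.add(permission);
            }
        }
        return missing.toArray(new String[0]);
    }

    public static void main(String[] args) {
        String[] required = {"android.permission.CAMERA", "android.permission.RECORD_AUDIO"};
        // The camera permission was revoked through the app settings since the last call:
        Set<String> granted = Set.of("android.permission.RECORD_AUDIO");
        String[] missing = missingPermissions(required, granted);
        System.out.println(String.join(",", missing));
    }
}
```

In the Activity you would call requestPermissions(missing, YOUR_REQUEST_CODE) only when the returned array is non-empty, and proceed straight to the call otherwise.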

The new Android M permissions UI

But do I have to do it all now?

Yes and no. Android M is still a ways off, so there is no immediate rush. In addition, the new permission style only affects applications built for, or targeting, Android M or greater API levels. This means that your application with the old permission model will still work and be available to devices running Android M. The user will instead be asked to grant access at install time. Note that the user may still explicitly disable access through the app settings, at which point your application will show black video or no sound. So unless you wish to use any of the new features made available in Android M, you can safely hold off for a little while longer.

{“author”: “Dag-Inge Aas“}

Want to keep up on our latest posts? Please click here to subscribe to our mailing list if you have not already. We only email post updates. You can also follow us on twitter at @webrtcHacks for blog updates and news of technical WebRTC topics or our individual feeds @chadwallacehart, @reidstidolph, @victorpascual and @tsahil.

The post The new Android M App Permissions – Dag-Inge Aas appeared first on webrtcHacks.

Comverse Acquires Acision, Framing Digital and APIs Around WebRTC

bloggeek - Tue, 06/16/2015 - 12:00

Is Comverse becoming a serious WebRTC player?

Comverse is a company in transition. It has been catering to the world’s telcos for many years. In recent years, it has had its share of issues. Why are they important in the context of this blog?

  1. They acquired Solaiemes. But that was in August 2014. Almost a year ago
  2. Less than 2 months ago, Comverse sold its BSS business to Amdocs
  3. Yesterday, it acquired Acision, for around $210M
What does this say about Comverse?

Comverse is a company searching for its way. Its current focus is digital services, with telcos as its set of customers.

Digital focus means APIs and platforms that enable rapid creation of services.

The interesting part here is that Comverse is getting a sales team and an operation that knows how to sell to enterprises and not only to Telcos. I do hope they will be smart enough to keep that part of the business alive and leverage it.

Open questions include: Will Comverse merge Acision assets with Solaiemes? Try to build one on top of the other?

What does this say about Acision?

Acision got acquired for their SMS and voice business more than for their WebRTC or API platform components. No one gets acquired for that much money for WebRTC. Yet.

It is funny to note that the Acision Forge platform, which runs their WebRTC PaaS part, came from an acquisition of Crocodile RCS.

Comverse being focused on Telcos, how will they view the Forge platform?

  • As something to be sold to carriers or through carriers? This means taking the route that Tropo took in recent years
  • Would they try to leverage it and expand their offering to enterprises in other areas?
  • Will Comverse management understand the enterprise business enough to try and let it grow unhindered?
Why is this important?

This isn’t the first or last WebRTC related acquisition of the year. We had a few already.

If you are looking to use any vendor for your WebRTC technology, you need to consider the possibility of acquisition seriously.

It also led me to update my WebRTC dataset subscription service: as of today, its subscribers also receive an updated acquisitions table, detailing all acquisitions related to WebRTC since 2012.

 

Want to make the best decision on the right WebRTC platform for your company? Now you can! Check out my WebRTC PaaS report, written specifically to assist you with this task.

The post Comverse Acquires Acision, Framing Digital and APIs Around WebRTC appeared first on BlogGeek.me.

3CX Phone System v14: Technical Preview

Libera il VoIP - Tue, 06/16/2015 - 09:08

The new major release of 3CX Phone System is ready! Our R&D team has done it again, delivering an extraordinary version: cloud-ready and packed with a series of improvements and innovative features.

Of particular interest to partners and service providers are the new virtual PBX features in 3CX Phone System v14. The old 3CX Cloud Server is now a thing of the past: that functionality is integrated into the main setup, letting you choose whether to install v14 as an on-premise system or as a virtual PBX capable of handling up to 25 instances per machine. Installation and management of virtual instances can now be carried out through our ERP system or via API. Compete with other hosted PBX providers with a better, more feature-rich platform, while keeping control of your customers’ data and remaining free to choose the VoIP provider that suits you best!
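The announcement says virtual instances can be provisioned via API, but publishes no schema. As a purely illustrative sketch of what a provisioning client might enforce (the endpoint, field names, and function are all hypothetical; only the 25-instances-per-machine figure comes from the text above):

```python
import json

# Hypothetical sketch only: 3CX has not published the endpoint, field names, or
# auth scheme here, so everything below except the 25-instances-per-host figure
# is an assumption made up for illustration.
API_BASE = "https://pbx.example.com/api"  # hypothetical management endpoint
MAX_INSTANCES_PER_HOST = 25  # per-machine limit stated in the announcement

def build_provision_request(host_id: str, current_instances: int, customer: str) -> dict:
    """Build the JSON body for creating one virtual PBX instance on a host,
    refusing to exceed the per-machine instance limit."""
    if current_instances >= MAX_INSTANCES_PER_HOST:
        raise ValueError(
            f"host {host_id} is already at the {MAX_INSTANCES_PER_HOST}-instance limit"
        )
    return {
        "action": "create_instance",
        "host": host_id,
        "customer": customer,
    }

request_body = build_provision_request("host-01", 7, "acme-telecom")
print(json.dumps(request_body, sort_keys=True))
```

A real client would POST this body to the management endpoint; the point of the sketch is simply that capacity tracking belongs on the provisioning side.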

A quick list of the new features:

  • Virtual PBX option integrated into the setup
  • Completely redesigned Android client
  • New iPhone client with integrated tunnel (awaiting release on the App Store)
  • All clients: faster response thanks to improvements in the push architecture
  • All clients: reduced battery consumption
  • Integrated failover functionality
  • Schedulable Backup & Restore
  • New voicemail management options
  • Voicemail and recordings in compressed format
  • New reporting system with reports delivered via email
  • Support for Office 365 contacts
  • Many new SIP trunk providers added

Download Links and Documentation
Download the 3CX Phone System v14 Technical Preview: http://downloads.3cx.com/downloads/3CXPhoneSystem14.exe
Download 3CXPhone for Android
Download 3CXPhone for Windows
Download 3CXPhone for Mac – please note that the new Mac client will be released in the next build
Download 3CXPhone for iOS
Download 3CX Session Border Controller for Windows
Administrator Manual: Chapter 8: ‘Deploying 3CX Phone System as a Virtual PBX Server’
Demo Key: 3CXP-DEMO-EDIT-VEI4

Further Reading

FreeSWITCH Week in Review (Master Branch) June 6th-12th

FreeSWITCH - Tue, 06/16/2015 - 03:39

Hello, again. This past week in the FreeSWITCH master branch we had 51 commits! Some of the new commits this week include: the addition of a new reserve-agents param to mod_callcenter, custom exchange name and type for mod_amqp producers, a sample build system for a stand-alone (out-of-tree) FreeSWITCH module, and video support for eavesdrop.

Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.

New features that were added:

  • FS-7620 [ftmod_libpri] Correctly set calling number presentation and screening fields
  • FS-7138 [mod_callcenter] Added a new reserve-agents param
  • FS-7436  FS-7601 [mod_opus] FEC support
  • FS-7623 [mod_amqp] Allow for custom exchange name and type for producers and fixed param name ordering bug caused by exposing these params
  • FS-7638 Allow ipv4 mapped ipv6 address to pass ipv4 ACLs properly
  • FS-7643 [mod_opus] Added interpretation of maxplaybackrate and sprop-maxcapturerate
  • FS-7641 Added video support to eavesdrop
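Commit summaries like these usually map to small config changes. As an illustrative sketch, the new mod_callcenter reserve-agents param (FS-7138) would land in a queue definition in callcenter.conf.xml; the param name comes from the commit summary above, but the boolean value and the surrounding queue params shown here are assumptions:

```
<configuration name="callcenter.conf" description="CallCenter">
  <queues>
    <queue name="support@default">
      <!-- typical existing queue params -->
      <param name="strategy" value="longest-idle-agent"/>
      <param name="max-wait-time" value="0"/>
      <!-- new this week (FS-7138): value shown is an assumed boolean -->
      <param name="reserve-agents" value="true"/>
    </queue>
  </queues>
</configuration>
```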

Improvements in build system, cross platform support, and packaging:

  • FS-7635 Removed msvc 2005, 2008, and 2010 non working build systems
  • FS-7373 Expose the custom repo and key path to the build-all command too
  • FS-7648 Foundation for QA testing config: adding leave/check videomail test cases, a videomail voicemail profile, video record/playback test cases, set video on hold, a force pre-answer prefix, and an eavesdrop test case
  • FS-7338 Removed mod_shout dep libs in favor of system libs to continue cleaning up the libs for the 1.6 build process, added Debian packaging for several new modules, and handled the system lib change for a handful of modules
  • FS-7653 Sample build system for a stand alone(out of tree) FreeSWITCH module
  • FS-7601 [mod_opus] [mod_silk] Removed a bounds check that can never be true in opus fec code and modified jitterbuffer usage to match the API change

The following bugs were squashed:

  • FS-7612 Fixed invalid json format for callflow key
  • FS-7609 [mod_sangoma_codec] Now that libsngtc-dev and libsngtc are in the FS debian repo, enable mod_sangoma_codec
  • FS-7621 [mod_shout] Fixed a slow interrupt
  • FS-7432 Fixed missing a=setup parameter from answering SDP
  • FS-7622 [mod_amqp] Make sure to close the connections on destroy. Currently the connection is malloc’d from the module pool, so there is nothing to destroy.
  • FS-7586 [mod_vlc] A fix for failing to encode audio during the recording of video calls
  • FS-7573 Fixed 80bit tag support for zrtp
  • FS-7636 Fixed an issue with transfer_after_bridge and park_after_bridge pre-empting transfers
  • FS-7654 Fixed an issue with eavesdrop audio not working correctly with a mixture of mono and stereo

ORTC Lib – mini update #webrtc

webrtc.is - Mon, 06/15/2015 - 23:48

It’s been about a year since we uploaded the ORTC Lib presentation on slideshare …

We have been rather busy since then…

Good things are coming! :)

