News from Industry

New Kamailio module: app_python3

miconda - Tue, 03/20/2018 - 21:00
A while ago, the app_python3 module was added to Kamailio's GIT master branch (to be released as stable version 5.2.0 in several months), thanks to the development efforts of Anthony Alba. Although it started from the old app_python, besides being implemented to work with Python3, the new module adds a lot of improvements, leveraging the Python3 architecture for better performance, as well as support for reloading the Python script at runtime via an RPC command, so there is no need to restart Kamailio (the feature was meanwhile ported back to app_python). The readme of the module is available at:

Now all the KEMI interpreter modules can reload the SIP routing scripts without restarting Kamailio — it works for the Lua, JavaScript, Python2/3 and Squirrel languages.

Happy SIP routing in Python3! You can learn more about the KEMI scripting languages at Kamailio World Conference 2018 — a workshop is dedicated to this topic!

Thanks for flying Kamailio!

How WebRTC Statistics and Performance Monitoring Changed VoIP Monitoring

bloggeek - Mon, 03/19/2018 - 12:00

Monitoring focus is shifting from server-side to client-side in WebRTC statistics collection.

WebRTC happens to decentralize everything when it comes to VoIP. We’re on a journey here to shift the weight from the backend to the edge devices. While the technology in WebRTC isn’t any different than most other VoIP solutions, the way we end up using it and architecting our services around it is vastly different.

One of the prime examples here is how we shifted focus for group calling from an MCU mixing model to an SFU routing model. Suddenly, almost overnight, the notion of deploying an MCU started to seem ridiculous. And believe me – I should know – I worked at a company where 60%+ came from MCUs.

The shift towards SFU means we're leaning more on the capabilities and performance of the edge device, giving it more power in the interaction when it comes to how to lay out the display, instead of doing all the heavy lifting in the backend using an MCU. The next step here will be to build mesh networks, though I can't see that future materializing any time soon.

VoIP != WebRTC. Maybe not from a purely technical standpoint, but definitely from how we end up using it. If you need to learn more about WebRTC, then my WebRTC training is exactly what you need:

Enroll to course

What I wanted to mention here is something else that is happening, playing towards the same trend exactly – we are moving the collection of VoIP performance statistics (or more accurately WebRTC statistics) from the backend to the edge – we now prefer doing it directly from the browser/device.

VoIP Statistics Collection and Monitoring

If you are not familiar with VoIP statistics collecting and monitoring, then here’s a quick explainer for you:

VoIP is built out of the notion of interoperability. Developers build their products and then test them against the spec and in interoperability events. Then those deploying them integrate, install and run a service. Sometimes this ends up using a single vendor, but more often than not, multiple vendors' products run in the same deployment.

There is no real specification or standard for how monitoring needs to happen or what kind of statistics can, should or are collected. There are a few means of collecting that data, and one of the most common approaches is to employ HEP/EEP. As the specification states:

The Extensible Encapsulation protocol (“EEP”) provides a method to duplicate an IP datagram to a collector by encapsulating the original datagram and its relative header properties (as payload, in form of concatenated chunks) within a new IP datagram transmitted over UDP/TCP/SCTP connections for remote collection. Encapsulation allows for the original content to be transmitted without altering the original IP datagram and header contents and provides flexible allocation of additional chunks containing additional arbitrary data. The method is NOT designed or intended for “tunneling” of IP datagrams over network segments, and best serves as vector for passive duplication of packets intended for remote or centralized collection and long term storage and analysis.

Translating this to plain English: media packets are duplicated for the purpose of sending them off to be analyzed via a monitoring service.

The duplication of the packets happens in the backend, through the different media servers that can be found in a VoIP network. Here’s how it is depicted on HOMER/SIPCAPTURE’s website:

HOMER collects its data directly from the servers – OpenSIPS, FreeSWITCH, Asterisk, Kamailio – there’s no user devices here – just backend servers.

Other systems rely on the switches, routers and network devices that again reside in the backend infrastructure. Since in VoIP production networks we almost always route the media through the backend servers, the assumption is that it is easier to collect it there, where we have more control, than from the devices.

This works great, but it isn't really needed or all that helpful with WebRTC.

WebRTC Statistics Collection and Monitoring

With WebRTC, there are only a handful of browsers (4 to be exact), and they all adhere to the same API (that would be WebRTC). And they all have that thing called getStats() implemented in them. It exposes the same information you find in chrome://webrtc-internals.

Many deployments end up running peer-to-peer, having the media traverse directly through the internet and not through the backend of the service itself. Google Hangouts decided to take that route two years ago. Jitsi added this capability under the name Jitsi P2P4121. How do these services control and understand the quality of their users?

If you look at the media servers out there, most of them are only a few years old. WebRTC itself is just 6 years old now. So everyone is focused on features and stability right now; quality and monitoring are not in their focus area just yet.

Last, but not least, WebRTC is encrypted. Always. And everywhere. So sniffing packets and deducing quality from them isn’t that easy or accurate any longer.

This has led WebRTC applications to focus on gathering WebRTC statistics from the browsers and devices directly, rather than trying to get that information from the media servers.

The end result? Open source projects such as rtcstats and commercial services such as callstats.io. At the heart of these, WebRTC statistics get collected using the getStats() API at an interval of one or more seconds and sent over to a monitoring server, where they are stored, aggregated and analyzed. We use a similar mechanism at testRTC to collect, analyze and visualize the results of our own probes.
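To make this concrete, here is a minimal sketch of what such client-side collection might look like. It assumes an existing RTCPeerConnection named pc, and the collector URL is a made-up placeholder rather than the actual transport or payload format of rtcstats or callstats.io:

// Poll getStats() every few seconds and ship the snapshots to a monitoring backend.
// The endpoint is a placeholder; real services define their own transport and format.
function startStatsCollection(pc, intervalMs = 5000) {
  const timer = setInterval(async () => {
    if (pc.connectionState === 'closed') {
      clearInterval(timer);
      return;
    }
    const report = await pc.getStats();      // RTCStatsReport: a map-like object
    const snapshot = [];
    report.forEach((stat) => snapshot.push(stat));
    fetch('https://stats.example.com/collect', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ ts: Date.now(), stats: snapshot }),
    }).catch(() => {
      // Never let stats reporting interfere with the call itself.
    });
  }, intervalMs);
  return timer;
}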

What does that give us?

  1. The most accurate indication of performance for the end user – since the statistics are collected directly on the user’s device, there’s no loss of information from backend collection
  2. Easy access to the information – there’s a uniform means of data collection here taking place. One you can also implement inside native mobile and desktop apps that use WebRTC
  3. Increased reliance on the edge, a trend we see everywhere with WebRTC anyway
What’s Next?

WebRTC changes a lot of things when it comes to how we think about and architect VoIP networks. How and why this applies to statistics and monitoring is something I haven't seen discussed much, so I wanted to share it here.

The reason for that is threefold:

  1. Someone asked me a similar question on my contact page in the last couple of days, so it made sense to write a longform answer as well
  2. We’re contemplating at testRTC offering a passive monitoring product to use “on premise”. If you want to collect, store and analyze your own WebRTC statistics without giving it to any third party cloud service, then ping us at testRTC
  3. My online WebRTC training is getting a refresher and a new round of office hours. This all starts in April. Time to enroll if you want to educate yourself on WebRTC

 

The post How WebRTC Statistics and Performance Monitoring Changed VoIP Monitoring appeared first on BlogGeek.me.

Twilio Flex = Twilio Flexing its Flexibility (or the programmable contact centers)

bloggeek - Wed, 03/14/2018 - 12:00

Twilio Flex is a peek into the future of enterprise software.

This week, Twilio announced a new product called Flex. The name and the broad strokes about what Flex is found their way to TechCrunch some two weeks ago. I wanted to share my thoughts about Twilio Flex.

A few notes before I start
  • Twilio isn’t paying me for writing this
    • They are a customer in other areas, but this one is all me. I think Flex (as well as Studio, Engagement Cloud, Functions, etc.) are interesting products coming from Twilio, and they are worth a long form analysis and review
    • Articles on BlogGeek.me are never paid for. Neither are guest posts or interviews. If something interests me, I’ll write about it
  • The information here is based mainly on a briefing I received about Flex and what I found since then on other sites (and on Twilio’s website)
  • Flex is a departure from many things Twilio has been doing, making it an interesting initiative to analyze
What is Twilio Flex?

Twilio Flex is CCaaS (Contact Center as a Service). It isn't the first one. Twilio is touting it as a Programmable Contact Center, in line with how they refer to all of their products.

Here’s Jeff Lawson’s keynote from Enterprise Connect, as usual, Jeff’s keynotes are worth the time and attention:

Where Twilio tried to differentiate Flex from existing solutions is by making it a fully functional contact center solution that is Flexible enough to customize and modify. It has APIs, but the day-to-day users won’t see them, and a lot of the customizations needed don’t require digging deep into the API layer either. That’s at least the intent (I didn’t have the chance to see the integration and API layers of Flex yet).

Twilio highlights 5 main benefits with Flex:

  • Unlimited customization – through the lower layers of Twilio’s product portfolio, along with a new addition to it, the Flex UI (not a lot/enough was explained about it thus far)
  • Instant omnichannel – support for multiple communication channels. More on this later
  • Contextual intelligence – Twilio's ML/AI roadmap lies here
  • Trusted scale – due to its use of the Twilio infrastructure
  • 2 million developers – that’s the number of Twilio registered developers

Flex fits well into one of Twilio's largest market segments – the contact center. And there, Twilio is aiming for contact centers with 1,000+ seats. The big boyz.

As it was working to move up the food chain, offering ever larger components, migrating away from developers towards end users in the B2B space and in contact centers made sense.

Flex and the Twilio Portfolio

If I had to map the road Twilio is taking with its portfolio, it would end up being something like this (I’ve removed a lot of the products for simplicity):

Transactional: It started with SMS and Voice, adding VoIP services and later on expanding horizontally to other components and building blocks such as IP Messaging and others. In this layer, and to some extent in Omnichannel, Twilio’s focus is in a horizontal expansion towards “Best of Suite” offering.

Omnichannel: In 2017, Twilio added the Twilio Engagement Cloud. It placed a few existing products from its portfolio in that layer and added Notify and Proxy to them. They stated that these are “Declarative APIs” talking about general intent while including logic of their own. At the end of the day, many of the products/APIs in this layer are Omnichannel – they work across channels using the one available/preferred/whatever for the task at hand.

Visual: This is where the story became really interesting. Twilio added Studio to its portfolio. It went up the food chain again, this time, with a visual IDE and a message that Twilio is no longer a company that serves only developers, but one that can be used by others within the organization.

Programmable Enterprise Software: This is where Flex comes in, going up the food chain again. This time, offering a solution that doesn’t interact with the end users only as a consequence (a phone rings), but rather has a new set of users – people who aren’t developers or planners who sit in front of the tool every day and use it. The contact center agents and personnel.

Flex was defined to me as being in the domain of "Programmable Applications". Twilio is, in a way, trying to do two things with this definition:

  1. Programmable means it isn’t diverging from its roots completely, just taking the obvious next step in its evolution. All of its core products are Programmable X (X being SMS, Voice, Video, …)
  2. It allows it to position Flex not as another contact center, but rather as something new that is different

To me it is about the future of enterprise software and how to make it programmable and flexible in ways that are still impossible today. The closest to that we’ve got is probably having so many vendors integrate with Zapier.

I am sold on that kind of a future, but I am not sure others will be.

Flex Channels Proposition

Flex leans on a lot of other products in Twilio’s portfolio. One of its core values lies in omnichannel, and the fact that Twilio is already investing in a programmable layer that handles that (the Engagement Cloud). The proposition here is that whatever Twilio adds as a channel for developers, gets almost automatically added to Flex for its contact center customers.

Out the door, Flex comes with support for Voice, SMS, Chat, Video, Email, Fax, Twitter DM, Google RCS, Facebook Messenger and LINE. It also includes Screen Sharing and Co-Browsing as additional capabilities within the interactions. Developers can add additional channels to customize their contact center as well.

The list of channels is impressive, but somehow Apple Business Chat is missing from that list. Apple's launch partners in this case were contact center vendors (LivePerson, Nuance, Genesys and Salesforce). Twilio, which is still recognized solely as a CPaaS vendor, didn't make the cut. I am sure Twilio tried becoming a partner, so this is more likely a decision made by Apple. I am also sure that once Apple opens up Business Chat to more developers, Twilio will be adding support for it.

The biggest promise here? Twilio is already committed to omnichannel in its products, and Flex will benefit from that commitment, as will Flex's customers.

Think you know how WebRTC fits in a contact center? Check out The Complete WebRTC Contact Center Uses Swipefile

Get the swipefile

Machine Learning and Artificial Intelligence in Flex

A year or two ago, ML and AI in CPaaS were science fiction. Twilio, as well as its competitors, dealt in the real time: in transactional and transient communications. If any machine learning work was taking place, it was in the operational layers – in an effort to optimize the cost and deliverability of its service to its customers.

Last year, Twilio launched Understand, a layer built on top of Google's Natural Language Processing (NLP) capabilities. Understand is where Twilio started looking at ML and AI in the context of actual services for its customers. It looks at the problem domain of its customers (mainly contact centers) and tries to offer higher level APIs that are easier to use and are targeted at NLU (Natural Language Understanding). This then gets focused on the specific domain of the customer's needs, and you get something that is usable today (as opposed to building a general purpose AI such as Siri, Alexa or Google Assistant).

The result in Understand is a way to simplify the development processes and requirements for Twilio’s customers when it comes to NLU.

That also got wrapped into Flex, at least on slides.

My feelings? The AI story of Flex is built out of two parts:

  1. Collecting all the existing ML/AI/intelligent related capabilities of Twilio and wrapping them inside Flex. This is done through internal APIs as well as via partners
  2. Having a roadmap vision / story of what AI means in Flex moving forward

AI being the holy grail that it is, you can’t ignore it when launching a new service these days.

Flex Pricing is Key

Pricing for Flex hasn’t been announced, but one thing was made clear – it will be based on a per seat price and not usage based as other Twilio products.

This is where things get somewhat challenging for Twilio, and here’s why:

  • Twilio has so far been comfortable offering a usage-based model. Switching to a per-seat model will change how it calculates its revenue and margins
  • By opting for per seat pricing, Twilio falls into the contact center industry “comfort zone” – the model is known and accepted already
  • But this also makes comparing Twilio Flex pricing to other contact centers rather “easy”. It means I can now compare apples to apples when selecting between Flex and any other vendor
  • We don’t have price points, but if the price point will be based on the industry average or accepted standard, then many analysts and experts will end up saying that there’s no disruption or anything new in Twilio Flex. For the pundits, Flex may seem like an ordinary contact center and without price disruption there can be no disruption with that mindset
  • If the price points are too high, then Twilio will be going after its own contact center customers, who will see this as direct competition. Such a move can signal others that Twilio is willing to go into their turf as well. It will question the potential and attractiveness of joining the Flex marketplace
  • If the price points are lower, then where will the margins be for Twilio?

My guess is that Twilio is still looking for price validation and it is doing so this week at Enterprise Connect and planning to continue doing so in the coming weeks until it is ready to announce the price points publicly.

Who is Twilio Flex for?

This is the main question, and one I am not sure I have the answer to.

Twilio is saying the target audience is 1,000+ seat contact centers. It makes sense to go for the larger contact centers at a time when the transition towards the cloud and the digital transformation of contact centers are gaining momentum.

But would I be using it in my business or go through a third party?

Should a Twilio customer that built a contact center on its own on top of Twilio migrate to Flex?

Should a Twilio customer that built a contact center for others to use on top of Twilio see Flex as a threat or as an opportunity to improve its own contact center offering?

Twilio stated that 89% of contact centers today are still deployed on premise, and that the market is large enough. These statements were meant to answer two questions:

  1. The market is big enough for both its existing customers and for Flex, so it isn’t competing directly with its customers (I guess its customers will have to decide if that’s true for them or not)
  2. The market is big for Twilio to grow in. Twilio is relying on that to keep growing

Twilio was already trending upwards when word on Flex was leaked by TechCrunch on Feb 17, and it has kept increasing since:

source: Google

Whether that is related to Flex or not, I can't say. To me, going after contact centers as an adjacent market and eating up more of the pie there is a bold move. If it succeeds, then Twilio will be much bigger than it is today.

The Unknowns

There are things that are still unknown to me here. They are technical ones, but important for my own perspective and analysis. They are related to what wasn’t directly in the briefing or the materials I’ve seen since the official announcement.

Here are a few things I am really interested in:

  • What are the exact integration points for Flex?
  • How are developers expected to integrate with it?
  • Where do you use Twilio APIs? Where will you be making use of Twilio Studio? Where do you write a Twilio Function? How about Twilio Understand?
  • Flex UI is brand new. How does it fare as a standalone product enabler? What can developers do with it?
  • What will it mean to integrate Flex with a CRM? Does it make more sense to integrate the CRM into the Flex UI or does it make more sense to integrate Flex into the CRM UI?
  • What parts of “contextual intelligence” really exist in Flex today? How does it compare to existing market offerings?
  • What do contact center vendors using Twilio think about Flex? How will they react to it?
Is CPaaS Eating CCaaS?

Maybe.

Here’s one way to map the communications landscape:

And here’s another:

What’s your worldview here?

 

The post Twilio Flex = Twilio Flexing its Flexibility (or the programmable contact centers) appeared first on BlogGeek.me.

Kamailio At Fossasia Summit 2018

miconda - Wed, 03/14/2018 - 11:00
I, as a co-founder of Kamailio, will give a presentation at Fossasia Summit 2018, an event taking place in Singapore during March 22-25. It is the largest conference in Asia, gathering a consistent group of speakers from many projects and organisations developing or supporting open source software.

My presentation, titled "Kamailio – The open source framework to build your own VoIP service", is scheduled at 18:00 on Saturday, March 24, 2018. The focus is on highlighting how to easily build VoIP and realtime communication services with Kamailio on the server side and other open source applications for the client apps.

If you attend the event or just happen to be in the city during that time, get in touch via email (miconda [at] gmail.com) in case you want to chat more about Kamailio and open source RTC.

After Fossasia, the next event where you can meet many folks from our community is the Kamailio World Conference, May 14-16, 2018, in Berlin, Germany.

WebRTC 1.0 – What on earth is it anyway? (register to the webinar)

bloggeek - Mon, 03/12/2018 - 12:00

TL;DR – register to this webinar about WebRTC 1.0

As I am prepping to another launch of my Advanced WebRTC Architecture Course, I went through the content to make sure it is up to date. This is by far the hardest thing about a course about something like WebRTC – what was right on Chrome 63 might not be correct anymore for Chrome 64. Or is it 65 now?

I ended up spending time in updating and refreshing some of the lessons with some new material, but I ended up with one area that the course is weak at. And that’s WebRTC 1.0 information.

The problem there is that while I can tell some of the story, I definitely can't tell it at the level I wanted. It got me to partner again with Philipp Hancke, whom I love working with on lots of mini-projects. I asked Philipp if he would be willing to host such a lesson for me as a live webinar and he said yes (yippie).

What’s in the Webinar?

So here’s what we’re going to do:

Next month, right after Passover, and because Philipp asked for April, we’re going to host a lesson/webinar about WebRTC 1.0.

Philipp will skim quickly over the backstory of WebRTC 1.0, where we are today and more importantly where we’re headed with it. What we will cover in more detail will include answers to questions like:

  • What should you change in your app due to WebRTC 1.0?
  • What new tricks did 1.0 teach the “old” WebRTC dog?
  • Do you need to update your app to be compliant and work in Chrome next year?
  • How much effort is involved in this migration to WebRTC 1.0 anyway?
  • If you pick out a WebRTC project on github, how would you know if it supports WebRTC 1.0 or not?

What I want here is for you (and me) to really understand the impact WebRTC 1.0 is going to have on all of us in 2018 and on.

When?

This webinar/lesson will take place on

Tuesday, April 10

1-2PM EST (view in your timezone)

Save your seat →

The session’s recording will NOT be available after the event itself. While this lesson is free to attend live, the recording will become an integral part of the course’ lessons.

The post WebRTC 1.0 – What on earth is it anyway? (register to the webinar) appeared first on BlogGeek.me.

Kamailio At Asterisk Africa Conference 2018

miconda - Fri, 03/09/2018 - 12:30
Alex Balashov from Evariste Systems, one of our Kamailio management team members, went the long route from Atlanta, USA, to Johannesburg, South Africa, to participate at Asterisk Community Conference Africa 2018, an event happening during March 14-15. He is presenting two sessions:

The event is promoting Asterisk and open source VoIP technologies, with a selected group of local speakers and invited international guests. Besides Alex, one can meet there Matt Fredrikson (project lead of Asterisk), David Duffett (community manager of Asterisk) or Lorenzo Emilitri (QueueMetrics), and interact via remote video participation with Allison Smith (the Asterisk IVR voice) and Dan Jenkins (CommCon UK).

Should you be in the area and working with real time communications, try not to miss this conference. Catch Alex around and get more familiar with Kamailio and the latest project updates!

Also do not forget about the next Kamailio World Conference, May 14-16, 2018, in Berlin, Germany! Alex will be there as well, and the details for most of the sessions are published. There are still a few weeks left at the early registration price; however, be aware that the number of seats is limited: at past editions we were fully booked. Do not delay the registration in order to secure your participation!

Thanks for flying Kamailio!

Part 2: Building a AIY Vision Kit Web Server with UV4L

webrtchacks - Tue, 03/06/2018 - 12:36

In part 1 of this set, I showed how one can use UV4L with the AIY Vision Kit to send the camera stream and any of the default annotations to any point on the Web with WebRTC. In this post I will build on this by showing how to send image inference data over a WebRTC […]

The post Part 2: Building a AIY Vision Kit Web Server with UV4L appeared first on webrtcHacks.

AIY Vision Kit Part 1: TensorFlow Computer Vision on a Raspberry Pi Zero

webrtchacks - Tue, 03/06/2018 - 12:35

A couple of years ago I did a TADHack where I envisioned a cheap, low-powered camera that could run complex computer vision and stream remotely when needed. After considering what it would take to build something like this myself, I waited patiently for this tech to come. Today with Google's new AIY Vision kit, we are […]

The post AIY Vision Kit Part 1: TensorFlow Computer Vision on a Raspberry Pi Zero appeared first on webrtcHacks.

You Better Ignore the Default Protocol Ports You Implement

bloggeek - Mon, 03/05/2018 - 12:00

Default protocol ports are great, but ones that will work in the real world are better.

If you want something done properly, you should probably ignore the specification of the protocols you use every once in a while. When I worked years ago on implementing protocols directly, there was this notion – you need to send messages in the strictest format possible but be very lenient in how you receive them. The reason behind that is that by being strict on the sender side, you achieve higher interoperability (more devices will be able to "decipher" what you sent) and by being lenient on the receiving side, you achieve the same (being able to understand messages from more devices). Somehow, it isn't worth being right here – it just makes more sense to be smart.

The same applies to default protocol ports.

Assume for the sake of argument that we have a theoretical protocol that requires the use of port number 5349. You set up the server, configure it to listen on that port (after all, we want to be standard compliant), and run your service.

Will that work well for you?

For the most part, as the illustration above shows, yes it will.

The protocol is probably client-server based. A client somewhere from inside his private network is accessing the Internet, going to the public IP of your server to that specific port and connects. Life is good.

Only sometimes it isn’t.

Hmm… what’s going on here now? Someone in the IT department decided to block outgoing traffic to port 5349. Or maybe, just maybe, he decided to open outgoing traffic solely for ports 80 and 443. And why would he do that? Because that’s where HTTP and HTTPS traffic go to, which is web servers that our browsers connect to. And I don’t know any blue collar employee today who would be able to do his job without connecting the the Internet with his browser. Writing this draft of an article requires such a connection (I do it on Google Doc and then copy it to WordPress once done).

So the same scenario, with the same requirements won’t work if our server decides to use the default port 5349.

What if we decide to pass it through port 443?

Now it has a better chance of working. Why? Because port 443 is reserved for TLS traffic, which is encrypted. This means that beyond the destination of the data, the firewall we're dealing with can't know a thing about what's being sent or where, so it will usually treat it as "HTTPS" type traffic and just pass it along.

There are caveats here. If the enterprise is enforcing a local trusted web proxy, it actually acts as a man in the middle and opens all packets, which means it now sees the traffic and might decide not to pass it since it can't understand it.

What we’re aiming for is best coverage. And port 443 will give us that. It might get blocked, but there’s less of a chance for that to happen.

Here are a few examples where ignoring your protocol default ports is suggested:

TURN

The reason for this article is TURN. TURN is used by WebRTC (and other protocols) to get your media session connected in case you can’t send it directly peer-to-peer. It acts as a relay to the media that sits in the public internet with the sole purpose of punching holes in NATs and traversing firewalls.

TURN runs over UDP, TCP and TLS. And yes. You WANT to configure and run it on UDP, TCP and TLS (don’t be lazy – configure them all – it won’t cost you more).

Want to learn more about WebRTC in general and NAT traversal specifically? Enroll to my WebRTC training today to become a pro WebRTC developer.

Enroll to course

The default ports for your STUN and TURN servers (you’re most probably going to deploy them in the same process) are:

  • 3478 for STUN (over UDP)
  • 3478 for TURN over UDP – same as STUN
  • 3478 for TURN over TCP – same as STUN and as TURN over UDP
  • 5349 for TURN over TLS

A few things that come to mind from this list above:

  1. We’re listening to the same port for both UDP and TCP, and for both STUN and TURN – which is just fine
  2. Remember that 5349 from my story above?

Here’s the thing. If you deploy only STUN, then many WebRTC sessions won’t connect. If you deploy also with TURN/UDP then some sessions still won’t connect (mainly because of IT admins blocking UDP altogether). TURN/TCP might not connect either. And guess what – TURN/TLS on 5349 can still be blocked.

What is a developer to do in such a case?

Just point your WebRTC devices towards port 443 for ALL of your STUN/TURN traffic and be done with it. This approach has no real downsides versus deploying with the default ports and all the potential upsides.
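On the client side, that boils down to a configuration similar to the appear.in example further down. Here is a sketch with a placeholder hostname and credentials, assuming your TURN server is set up to listen on port 443 for UDP, TCP and TLS:

// Illustrative RTCPeerConnection configuration only.
// turn.example.com and the credentials are placeholders, not a real deployment.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'turn:turn.example.com:443?transport=udp', username: 'user', credential: 'secret' },
    { urls: 'turn:turn.example.com:443?transport=tcp', username: 'user', credential: 'secret' },
    { urls: 'turns:turn.example.com:443?transport=tcp', username: 'user', credential: 'secret' },
  ],
});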

Here’s how a couple of services I checked almost on random do this properly (I’ve used chrome://webrtc-internals to get them):

Hangouts Meet

Or Google Hangouts. Or Google Meet. Or whatever name it now has. I did use the Meet one:

https://meet.google.com/goe-nxxv-ryp?authuser=1, { iceServers: [stun:stun.l.google.com:19302, stun:stun1.l.google.com:19302, stun:stun2.l.google.com:19302, stun:stun3.l.google.com:19302, stun:stun4.l.google.com:19302], iceTransportPolicy: all, bundlePolicy: max-bundle, rtcpMuxPolicy: require, iceCandidatePoolSize: 0 }, {enableDtlsSrtp: {exact: false}, enableRtpDataChannels: {exact: true}, advanced: [{googHighStartBitrate: {exact: 0}}, {googPayloadPadding: {exact: true}}, {googScreencastMinBitrate: {exact: 400}}, {googCpuOveruseDetection: {exact: true}}, {googCpuOveruseEncodeUsage: {exact: true}}, {googCpuUnderuseThreshold: {exact: 55}}, {googCpuOveruseThreshold: {exact: 85}}]}

Google Meet comes with STUN:19302 with 5 different subdomain names for the server. There’s no TURN here because the service uses ICE-TCP directly from their media servers.

The selection of port 19302 is quaint. I couldn’t find any reference to that number or why it is interesting (not even a mathematical one).

Google AppRTC

You’d think Google’s showcase of WebRTC would be an exemplary citizen of a solid STUN/TURN configuration. Well… he’s what it got me:

https://appr.tc/r/986533821, { iceServers: [turn:74.125.140.127:19305?transport=udp, turn:[2a00:1450:400c:c08::7f]:19305?transport=udp, turn:74.125.140.127:443?transport=tcp, turn:[2a00:1450:400c:c08::7f]:443?transport=tcp, stun:stun.l.google.com:19302], iceTransportPolicy: all, bundlePolicy: max-bundle, rtcpMuxPolicy: require, iceCandidatePoolSize: 0 },

It had TURN/UDP at 19305, TURN/TCP at 443 and STUN at 19302. Unlike others, it had explicit IPv6 addresses. It had no TURN/TLS.

Jitsi Meet

https://meet.jit.si/RandomWerewolvesPierceAlone, { iceServers: [stun:all-eu-central-1-turn.jitsi.net:443, turn:all-eu-central-1-turn.jitsi.net:443, turn:all-eu-central-1-turn.jitsi.net:443?transport=tcp, stun:all-eu-west-1-turn.jitsi.net:443, turn:all-eu-west-1-turn.jitsi.net:443, turn:all-eu-west-1-turn.jitsi.net:443?transport=tcp, stun:all-eu-west-2-turn.jitsi.net:443, turn:all-eu-west-2-turn.jitsi.net:443, turn:all-eu-west-2-turn.jitsi.net:443?transport=tcp], iceTransportPolicy: all, bundlePolicy: balanced, rtcpMuxPolicy: require, iceCandidatePoolSize: 0 }, {advanced: [{googHighStartBitrate: {exact: 0}}, {googPayloadPadding: {exact: true}}, {googScreencastMinBitrate: {exact: 400}}, {googCpuOveruseDetection: {exact: true}}, {googCpuOveruseEncodeUsage: {exact: true}}, {googCpuUnderuseThreshold: {exact: 55}}, {googCpuOveruseThreshold: {exact: 85}}, {googEnableVideoSuspendBelowMinBitrate: {exact: true}}]}

Jitsi shows multiple locations for STUN and TURN – eu-central, eu-west with STUN:443, TURN/UDP:443 and TURN/TCP:443. No TURN/TLS.

appear.in

https://appear.in/bloggeek, { iceServers: [turn:turn.appear.in:443?transport=udp, turn:turn.appear.in:443?transport=tcp, turns:turn.appear.in:443?transport=tcp], iceTransportPolicy: all, bundlePolicy: balanced, rtcpMuxPolicy: require, iceCandidatePoolSize: 0 }, {advanced: [{googCpuOveruseDetection: {exact: true}}]}

appear.in went for TURN/UDP:443, TURN/TCP:443 and TURN/TLS:443. STUN is implicit here via the use of TURN.

Facebook Messenger

https://www.messenger.com/videocall/incall/?peer_id=100000919010117, { iceServers: [stun:stun.fbsbx.com:3478, turn:157.240.1.48:40002?transport=udp, turn:157.240.1.48:3478?transport=tcp, turn:157.240.1.48:443?transport=tcp], iceTransportPolicy: all, bundlePolicy: balanced, rtcpMuxPolicy: require, iceCandidatePoolSize: 0 }, {advanced: [{enableDtlsSrtp: {exact: true}}]}

Messenger uses port 3478 for STUN, TURN over UDP on port 40002, TURN over TCP on port 3478. It also uses TURN over TCP on port 443. No TURN/TLS for Messenger.

Here’s what I’ve learned here:

  • People don’t use the default STUN/TURN ports in their deployments
  • Even if they don’t use ports that make sense (443), they may not use the default ports (See Google Meet)
  • With seemingly something straightforward as STUN/TURN, everyone ends up implementing it differently
MQTT

We’ve looked at at NAT Traversal and its STUN and TURN server. But what about some signaling protocols? The first one that came to mind when I thought about other examples was MQTT.

MQTT is a messaging protocol that is used in the IoT and M2M space. Others use it as well – Facebook for example:

They explained how MQTT is used as part of their Messenger backend for the WebRTC signaling (and I guess all other messages they send over Messenger).

MQTT can run over TCP listening on port 1883 and over TLS on port 8883. But then when you look at the AWS documentation for AWS IoT, you find this:

There’s no port 1883 at all, and now port 443 can be used directly if needed.

 

It would be interesting to know whether Facebook Messenger's mobile app uses MQTT over port 443 or 8883 – and if it is port 443, whether it is MQTT over TLS or MQTT over WebSocket. If what they do with their STUN and TURN servers is any indication, any port number here is a good guess.

SIP

SIP is the most common VoIP signaling protocol out there. I didn't remember the details, so I checked Wikipedia:

SIP clients typically use TCP or UDP on port numbers 5060 or 5061 for SIP traffic to servers and other endpoints. Port 5060 is commonly used for non-encrypted signaling traffic whereas port 5061 is typically used for traffic encrypted with Transport Layer Security (TLS).

Port 5060 for UDP and TCP traffic. And port 5061 for TLS traffic.

Then I asked a friend who knows a thing or two about SIP (he’s built more than his share of production SIP networks). His immediate answer?

443.

He remembered 5060 was UDP, 5061 was TCP and 443 is for TLS.

When you want to deploy a production SIP network, you configure your servers to do SIP over TLS on port 443.

Next Steps

If you are looking at protocol implementations and you happen to see some default ports that are required, ask yourself if using them is in your best interest. To get past firewalls and other nasty devices along the route, you might want to consider using other ports.

While you’re at it, I’d avoid sending stuff in the clear if possible and opt for TLS on the connection, which brings us back to 443. Possibly the most important port on the Internet.

If you are serious about learning WebRTC, then check out my online WebRTC training:

Enroll to course

The post You Better Ignore the Default Protocol Ports You Implement appeared first on BlogGeek.me.

Kamailio v5.1.2 Released

miconda - Thu, 03/01/2018 - 21:00
Kamailio SIP Server v5.1.2 stable is out – a minor release including fixes in code and documentation since v5.1.1. The configuration file and database schema compatibility is preserved, which means you don't have to change anything to update.

Kamailio® v5.1.2 is based on the latest source code of GIT branch 5.1 and it represents the latest stable version. We recommend those running previous 5.1.x or older versions to upgrade. There is no change that has to be done to the configuration file or database structure compared with the previous releases of the v5.1 branch.

Resources for Kamailio version 5.1.2

Source tarballs are available at:

Detailed changelog:

Download via GIT:

# git clone https://github.com/kamailio/kamailio kamailio
# cd kamailio
# git checkout -b 5.1 origin/5.1

Relevant notes, binaries and packages will be uploaded at:

Modules' documentation:

What is new in the 5.1.x release series is summarized in the announcement of v5.1.0:

Do not forget about the next Kamailio World Conference, taking place in Berlin, Germany, during May 14-16, 2018. The first group of sessions and speakers were announced, registration is open!

Thanks for flying Kamailio!

Kamailio v5.0.6 Released

miconda - Tue, 02/27/2018 - 19:00
Kamailio SIP Server v5.0.6 stable is out – a minor release including fixes in code and documentation since v5.0.5. The configuration file and database schema compatibility is preserved, which means you don't have to change anything to update.

Kamailio v5.0.6 is based on the latest version of GIT branch 5.0. We recommend those running previous 5.0.x or older versions to upgrade. There is no change that has to be done to the configuration file or database structure compared with the previous release of the v5.0 branch.

Resources for Kamailio version 5.0.6

Source tarballs are available at:

Detailed changelog:

Download via GIT:

# git clone https://github.com/kamailio/kamailio kamailio
# cd kamailio
# git checkout -b 5.0 origin/5.0

Relevant notes, binaries and packages will be uploaded at:

Modules' documentation:

What is new in the 5.0.x release series is summarized in the announcement of v5.0.0:

Note: branch 5.0 is the previous stable branch. The latest stable branch is 5.1, at this time with v5.1.1 released out of it. Be aware that you may need to change the configuration files and database structures from 5.0.x to 5.1.x. See more details about it at:

Check also the details of the next Kamailio World Conference, taking place in Berlin, Germany, during May 14-16, 2018. Details with a selection of speakers and sessions have been published. The registration is open!

Thanks for flying Kamailio!

Kamailio v4.4.7 Released

miconda - Mon, 02/26/2018 - 18:00
Kamailio SIP Server v4.4.7 stable is out – a minor release including fixes in code and documentation since v4.4.6. The configuration file and database schema compatibility is preserved, which means you don't have to change anything to update.

Kamailio v4.4.7 is based on the latest version of GIT branch 4.4. We recommend those running previous 4.4.x versions to upgrade either to v4.4.7 or, even better, to the 5.0.x or 5.1.x series. When upgrading to v4.4.7, there is no change that has to be done to the configuration file or database structure compared with the previous release of the v4.4 branch.

Important: Kamailio v4.4.7 is the last planned release in the 4.4.x series. From this moment, the maintained stable release series are 5.0.x and 5.1.x.

Resources for Kamailio version 4.4.7

Source tarballs are available at:

Detailed changelog:

Download via GIT:

# git clone https://github.com/kamailio/kamailio kamailio
# cd kamailio
# git checkout -b 4.4 origin/4.4

Relevant notes, binaries and packages will be uploaded at:

Modules' documentation:

What is new in the 4.4.x release series is summarized in the announcement of v4.4.0:

Note: branch 4.4 is an old stable branch, going out of maintenance with the release of v4.4.7 – if no major regression is discovered, no future releases will be made out of branch 4.4. The latest stable branch is 5.1, at this time with v5.1.1 released out of it. The project officially maintains the last two stable branches, which are now 5.0 and 5.1. Therefore an alternative is to upgrade to the latest 5.1.x – be aware that you may need to change the configuration files and database structures from 4.4.x or 5.0.x to 5.1.x. See more details about it at:

We hope also to meet many of you at the next Kamailio World Conference, May 14-16, 2018, in Berlin, Germany. The details for a selection of speakers and sessions have already been published and the registration is open. See more on the website of the event at:

Thanks for flying Kamailio!

“Open Source” SDK for SaaS and CPaaS are… Meh

bloggeek - Mon, 02/26/2018 - 12:00

Open Source SDKs from SaaS vendors aren’t interesting.

Every once in a while, I see a SaaS vendor boasting about having open source SDKs. The assumption is that slapping "open source" on something you are doing immediately makes the thing free and open. The truth is far from it.

Planning on selecting a CPaaS vendor? Check out this shortlist of CPaaS vendor selection metrics:

Get the shortlist

Open Source Today

I want to start with an explanation of open source today.

Open source is a way for a vendor or a single developer to share his code with the “community” at large. There are many reasons why a vendor would do such a thing:

  1. To get others in the industry to assist in the effort of building and maintaining that code base (in most cases, such initiatives fail to meet their objective)
  2. To show technical savviness as a company. This is good for the brand’s name and when a company wants to attract top notch developers
  3. To showcase one’s technical abilities. An individual developer can use his github account to attract potential employers and projects
  4. To offer a reference implementation or a helper library for integrating with the company’s application

The above reasons are related to companies with proprietary software that they want protected. What they end up doing is sharing modules or parts of their codebase as open source. Usually these are ones they assume won't help a competitor copy and compete with them directly.

The other approach is to use open source as a full-fledged business model:

  1. Releasing a project as open source, then offering a non-open source license
  2. Or offering support and an SLA to it
  3. Or offering a hosted version of it
  4. Or offering customization work around it

A good example here is FreeSWITCH. They are offering support and customization work around this popular open source project. And now, there’s SignalWire, an upcoming hosted version of FreeSWITCH.

You see, for a company to employ open source, there needs to be an upside. Philanthropy isn’t a business model for most.

Cloud versus On-premise when Consuming Open Source

SaaS changes the equation a bit.

I tried placing different open source licenses on a kind of a graph, alongside different deployment models. Here’s what I got:

(if you’re interested here’s where to learn more about open source licenses)

CPaaS and SaaS in general are cloud deployments. They enable the company more leeway in the type of open source licenses it can consume. An on-premise type of business better beware of using GPL, whereas a cloud deployment one is just fine using GPL.

This isn’t to say that GPL can’t be used by on premise deployments – just that it complicates things to a point that oftentimes the risks of doing so outweighs the potential reward.

CPaaS / SaaS vendors and Interfaces

On the other end of the equation you’ll find how customers interact with CPaaS vendors.

Towards that goal, the main approach today is by way of an API. And APIs today are almost always defined using REST.

In the illustration above, we have a SaaS or CPaaS vendor exposing a REST API. On top of that API, customers can build their own applications. The vendor wants to make life easier for them, to increase adoption, so it ends up implementing helper libraries. The helper libraries can be official or unofficial ones, created either by third parties or by the vendor itself. They can be just reference implementations on top of the API, offered as starting points to customers with no real documentation or interface of their own.
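To illustrate, here is a toy sketch of what such a helper library usually boils down to: a thin wrapper around the vendor's REST API. The vendor name, base URL, resource path and auth scheme are all invented for the example:

// Toy helper library wrapping a (fictional) CPaaS REST API.
class AcmeCpaasClient {
  constructor(apiKey, baseUrl = 'https://api.acme-cpaas.example.com/v1') {
    this.apiKey = apiKey;
    this.baseUrl = baseUrl;
  }

  // Send an SMS-like message; hides the HTTP details from the customer's code.
  async sendMessage(to, body) {
    const res = await fetch(`${this.baseUrl}/messages`, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ to, body }),
    });
    if (!res.ok) throw new Error(`API error: ${res.status}`);
    return res.json();
  }
}

// Usage on the customer's server:
// const client = new AcmeCpaasClient(process.env.ACME_API_KEY);
// await client.sendMessage('+15551234567', 'hello');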

For the most part, helper libraries are something I’d expect customers to deploy and run on their servers, to make it easier for them to connect from whatever language and framework they want to use to the vendor’s service.

On a client device, we have SDKs. In some ways, SDKs are just like helper libraries. They connect to the backend REST API, though sometimes they may have a more direct/optimized connection to the platform (proprietary, undocumented WebSocket connection for example).

SDKs is something you’ll find with most of the services where a state machine needs to be maintained on the client side. In the context of most of the things I write here, this includes CPaaS platforms deciding to offer VoIP calling (voice or video) by way of WebRTC or by other means over non-browser implementations. In many of these cases, the developers never actually implement REST calls – they just use the SDK’s interface to get things done.

Which is where the notion of open source SDKs sometimes comes up.

The Open Source SDK

If we’re talking about a SaaS platform, then having the source code of the SDK has its benefits, but none of them relate to “open source”. There’s no ecosystem or adoption at play for the open source code.

The reasons why we’d like to have the source code of an SDK are varied:

  1. Reading the code can give us better understanding of how the service works
  2. Being able to run the code step by step in a debugger makes it easier to troubleshoot stuff
  3. Stack traces are more meaningful in crashes

Here’s the thing though –

Trying to market the SDK as open source is kinda misleading as to what you’re getting out of your end of the deal.

When it comes to CPaaS and WebRTC, there’s this added complexity: vendors will “open source” or give the source code of their JS SDK (because there’s no real alternative today, at least not until WebAssembly becomes commonplace). As for the Android and iOS SDKs, I don’t remember seeing one that is offered in source code form – probably because all vendors are tweaking and modifying the baseline WebRTC code.

SaaS and Open Source

In a way, SaaS has changed the models and uses of open source. When it was first introduced to the world, software was executed on premise only. There was no cloud, and SDKs and frameworks were commercially licensed. If you wanted something done, you either had to license it or build it yourself.

Open source came and changed all that by enabling vendors to build on top of open source code. Vendors came out with business models around dual licensing of code as well as support and customization models.

SaaS vendors today use open source in three different ways:

  1. They use it to build their platform. Due to their model, they are less restricted as to the type of open source licenses they can live with
  2. They open source code modules. Either by forking and sharing modified open source modules they use or by open sourcing specific modules
    1. Mostly because their developers push towards that goal
    2. And because they believe these modules won’t give away any of their competitive advantages
    3. Or to attract potential customers
  3. They may open source their whole platform. Not common, but it does happen. Idea here is to make revenue out of hosting the service at scale and giving away the baseline service for free (think WordPress for example)

 

Planning on selecting a CPaaS vendor? Check out this shortlist of CPaaS vendor selection metrics:

Get the shortlist

The post “Open Source” SDK for SaaS and CPaaS are… Meh appeared first on BlogGeek.me.

Do I Need a Media Server for a One-to-Many WebRTC Broadcast?

bloggeek - Tue, 02/20/2018 - 12:00

TL;DR – YES.

Do I need a media server for a one-to-many WebRTC broadcast?

That’s the question I was asked on my chat widget this week. The answer was simple enough – yes.

Decided you need a media server? Here are a few questions to ask yourself when selecting an open source media server alternative.

Get the Selection Sheet

Then I received a follow up question that I didn’t expect:

Why?

That caught me off-guard. Not because I don’t know the answer. Because I didn’t know how to explain it in a single sentence that fits nicely in the chat widget. I guess it isn’t such a simple question either.

The simple answer is a limit in resources, along with the fact that we don’t control most of these resources.

The Hard Upper Limit

Whenever we want to connect one browser to another with a direct stream, we need to create and use a peer connection.

Chrome 65 includes an upper limit on that, used for garbage collection purposes: Chrome is not going to allow more than 500 concurrent peer connections to exist.

500 is a really large number. If you plan on more than 10 concurrent peer connections, you should be one of those who know what they are doing (and don’t need this blog). Going above 50 seems like a bad idea for all use cases that I can remember taking part of.

Understand that resources are limited. Free and implemented in the browser doesn’t mean that there aren’t any costs associated with it or a need for you to implement stuff and sweat while doing so.

Bitrates, Speeds and Feeds

This is probably the main reason why you can’t broadcast with WebRTC, or with any other technology.

We are looking at a challenging domain with WebRTC. Media processing is hard. Real time media processing is harder.

Assume we want to broadcast a video at a low VGA resolution. We checked and decided that 500kbps of bitrate offers good results for our needs.

What happens if we want to broadcast our stream to 10 people?

 

Broadcasting our stream to 10 people requires an uplink bitrate of 5mbps.

If we’re on an ADSL connection, then we can find ourselves with 1-3mbps uplink only, so we won’t be able to broadcast the stream to our 10 viewers.

For the most part, we don’t control where our broadcasters are going to be. Over ADSL? WiFi? 3G network with poor connectivity? The moment we start dealing with broadcast we will need to make such assumptions.

That’s for 10 viewers. What if we’re looking for 100 viewers? A 1,000? A million?

With a media server, we decide the network connectivity, the machine type of the server, etc. We can decide to cascade media servers to grow our scale of the broadcast. We have more control over the situation.

Broadcasting a WebRTC stream requires a media server.

Sender Uniformity

I see this one a lot in the context of a mesh group call, but it is just as relevant towards broadcast.

When we use WebRTC for a broadcast type of a service, a lot of decisions end up taking place in the media server. If a viewer has a bad network, this will result in packet loss being reported to the media server. What should the media server do in such a case?

While there’s no simple answer to this question, the alternatives here include:

  • Asking the broadcaster to send a new I-frame, which will affect all viewers and increase bandwidth use for the near future (you don’t want to do it too much as a media server)
  • Asking the broadcaster to reduce bitrate and media quality to accommodate the packet losses, affecting all viewers and not only the one on the bad network
  • Ignoring the issue of packet loss, sacrificing the user for the “greater good” of the other viewers
  • Using Simulcast or SVC, and move the viewer to a lower “layer” with lower media quality, without affecting other users

You can’t do most of these in a browser. The browser will tend to use the same single encoded stream as is to send to all others, and it won’t do a good job at estimating bandwidth properly in front of multiple users. It is just not designed or implemented to do that.

You Need a Media Server

In most scenarios, you will need a media server in your implementation at some point.

If you are broadcasting, then a media server is mandatory. And no. Google doesn’t offer such a free service or even open source code that is geared towards that use case.

It doesn’t mean it is impossible – just that you’ll need to work harder to get there.

Looking to learn more about WebRTC? In the coming weeks, I’ll be refreshing my online WebRTC training. Join now so you don’t miss out.

Enroll to the WebRTC course

 

The post Do I Need a Media Server for a One-to-Many WebRTC Broadcast? appeared first on BlogGeek.me.

Kamailio World 2018: Preview With A Selection Of Sessions

miconda - Mon, 02/19/2018 - 22:00
Less than 3 months till the start of the 6th edition of the Kamailio World Conference, time is flying fast!

About one week ago we published the details for a group of accepted speakers; today we made available a selection of sessions at Kamailio World 2018. We had more proposals than we could accommodate, and we are trying hard to fit in as many as possible, also taking into consideration the feedback from participants at past editions.

For now you can head to the Schedule page and see the details of 15 sessions, from both the workshops and the conference days:

A very diverse range of topics, from using Kamailio for emergency services (112/911), scaling with a Redis backend, deploying in a containerized environment with Docker and Kubernetes, migrating the SIP routing logic to rich KEMI languages such as Lua, Python or JavaScript, unit testing for Kamailio and test driven deployments, to blockchains in telephony, using Kamailio and FreeSWITCH together, or the latest updates from the Asterisk PBX.

The IMS/VoLTE workshop is going to show what you can do with the latest Kamailio in mobile networks. And, of course, we have the two very popular sessions that never missed a Kamailio World edition: Dangerous Demos with James Body and VUC Visions with Randy Resnick.

The details for other speakers and sessions will be published in the near future, stay tuned!

Do not miss Kamailio World Conference 2018, it is going to be another great edition! You can register now!

Looking forward to meeting many of you at the next Kamailio World Conference, during May 14-16, 2018, in Berlin, Germany!

DB_REDIS – Kamailio Database Connector Module For Redis Server

miconda - Fri, 02/16/2018 - 21:18
Andreas Granig from Sipwise has recently pushed a new module for Kamailio, respectively db_redis, which implements the database connector API. The readme of the module can be found at:

Practically, it should be possible to use the db_redis module instead of any other database connector module, such as db_mysql or db_postgres, for modules like usrloc, auth_db, a.s.o.

Redis is known to be a very fast key-value storage system, with very good replication and redundancy options, already popular in the Kamailio ecosystem – see also the ndb_redis or topos_redis modules.

Andreas is testing the performance of Kamailio with db_redis versus other popular database connectors, and the results are very promising, showing a boost in performance.

As a matter of fact, Andreas will give a presentation about this topic at Kamailio World Conference 2018, a session you should not miss if scalability is important for your VoIP/RTC service! See you there!

Thanks for flying Kamailio!

Testing Kamailio On RaspberryPi 3

miconda - Thu, 02/15/2018 - 21:17
Stefan Mititelu has shared some statistics about stress-testing Kamailio on a Raspberry Pi 3 device. All the relevant details were made available at:

Here are the device’s characteristics: an over-clocked Raspberry Pi 3 running Raspbian Stretch with a U3 MicroSD card.

pi@raspberrypi:~ $ cat /etc/issue
Raspbian GNU/Linux 9 \n \l
pi@raspberrypi:~ $ uname -a
Linux raspberrypi 4.9.59-v7+ #1047 SMP Sun Oct 29 12:19:23 GMT 2017 armv7l GNU/Linux

pi@raspberrypi:~ $ cat /boot/config.txt
...
total_mem=1024
arm_freq=1300
core_freq=500
sdram_freq=500
sdram_schmoo=0x02000020
over_voltage=2
sdram_over_voltage=2

His remarks on Kamailio’s sr-users mailing list:

The tests ran for 60 seconds, repeated a couple of times, and they were done in a LAN, using the Pi’s Ethernet interface, running Kamailio 5.1.1.
  1. REGISTER/200, __with db_text__
    – at 900 cps test did finish: all UAC registered; pi htop threads were ~15-20%
    – at 950 cps test did NOT finish: got “Overload warning” on my UAC/UAS SIPp testing machine
  2. INVITE/180/200/PAUSE(3sec)/BYE/200, __with no media__
    – at 370 cps test did finish: all UAC->UAS calls completed; ~150 “180 Trying” Unexpected-Msg on UAC side; pi htop threads were ~50%
    – at 380 cps test did NOT finish: few(~5) UAC->UAS calls not completed; pi htop threads were ~50%
The results are really impressive (even if the testing configs used were really basic ones)!!! Moreover, I think that I’ve reached the limit of my current SIPp testing machine, but not of the Pi’s.

Should you have something interesting to share about using Kamailio, do not hesitate to contact us, we will gladly publish an article on our website. Thanks for flying Kamailio!

Transcoding With Kamailio And RTPEngine

miconda - Wed, 02/14/2018 - 21:11
The developers at Sipwise have been very engaged and creative lately, bringing major features to the Kamailio ecosystem:
  • audio transcoding support in RTPEngine by Richard Fuchs
  • database API connector implementation for Redis by Andreas Granig (expect a post here about it very soon as well as a presentation at Kamailio World Conference 2018)
Sipwise is one of the oldest companies involved in the Kamailio project, since SER/OpenSER times — there are likely very few out there in the community who used (or even heard of) the OpenSER Configuration Wizard published by Andreas Granig around 2006-2007, but it helped many to start building Kamailio-based VoIP platforms back in those days. Andreas, the CTO and one of the founders of Sipwise, has been a member of the Kamailio management team for more than 10 years now.

Back to the topic of this article: RTPEngine recently introduced the capability of transcoding the audio channel for SIP/VoIP calls. It relies on the ffmpeg project, therefore it supports the relevant codecs out there, respectively:
  • G.711 (a-Law and µ-Law)
  • G.722
  • G.723.1
  • G.729
  • Speex
  • GSM
  • iLBC
  • Opus
  • AMR (narrowband and wideband)
Another feature added along with the transcoding was support for repacketization of the RTP traffic, which can help improve QoS over long distance connections. These features are immediately available even on older releases of Kamailio (such as v5.0.x or v5.1.x), the control protocol for RTPEngine being flexible enough to support such new commands. The commands are not yet documented inside Kamailio’s rtpengine module, but you can read more about them in the README of the RTPEngine application:

It is no wonder that this topic became a hot discussion on Kamailio’s sr-users mailing list. Along with its popular older features, gatewaying between WebRTC DTLS-SRTP and plain RTP (decryption/encryption) as well as the high throughput capacity with in-kernel RTP packet forwarding (useful for NAT traversal or QoS), RTPEngine is nowadays a must-have component in modern Kamailio-based RTC platforms. Here we express our great appreciation for all these contributions by Sipwise and their continuous support for the Kamailio project over the years!

Exciting times ahead for the Kamailio project, a lot of new features are baking as you read this! Join us at the 6th edition of Kamailio World Conference, May 14-16, 2018, in Berlin, Germany, to meet the developers and learn more about using Kamailio and related projects. Registration is open! Thanks for flying Kamailio!

The Internet of Things or Things on the Internet?

bloggeek - Mon, 02/12/2018 - 12:00

Time to stop playing things on the internet and start building the internet of things.

We’ve been using that stupid IOT acronym for quite some time. Probably a decade. The idea and notion that every object can be network enabled, share its collected data and receive its commands remotely is quite exciting. I think we’re far from that vision.

It isn’t that we’re not making progress. We are. The apartment building I now live in is 3 years old. It is more automated than the previous apartment building I lived in, which was 15 years old. I wouldn’t call it IOT or a smart building quite yet. And I don’t think there’s a simple way to turn a dumb building into a smart one either.

When we moved to our new apartment we renovated a bit. There was this opportunity to add smart-home capabilities into the apartment. There were just a few teeny problems here:

  1. There’s no real business case for us yet. As a family, we really don’t need a smart-home, and frankly – I still haven’t seen one to appreciate the added benefit
  2. Since we’re in a highrise, the need for an apartment security/surveillance system seemed like overkill. The most we ended up with is a peephole camera for the door, mainly to empower our kids to see who’s knocking (no IOT or smarts in it)
  3. Talking to the electrician who ended up dealing with our power outlets at home, I understood that there aren’t enough electricians available here in Israel who know how to install a smart-home kit

And to top it all, it felt like a one-time undertaking that would be hard or impossible to upgrade or modify later on without a complete overhaul. That wasn’t what I was aiming for.

Mozilla just announced their Things Gateway that can be installed on a Raspberry Pi 3. It is a rather interesting project, especially since its learnings are then applied to the W3C Web of Things Interest Group with the intent of reducing the fragmentation of IOT. They’ve got their hands full.

IOT today is a patchwork of devices and companies, each trying to become a dominant player. The end result is that we’re living in a world where things can be placed on the internet, but they don’t amount to an internet of things.

Here are a few questions/hurdles that I think we’ll need to answer as an industry before we can reach that vision of IOT.

Security

I am putting security here first. Here’s why:

  1. We all know it is mandatory
  2. We all know it is left as a backlog item if it is considered at all

I’ve seen it happen with VoIP and it is definitely happening today with IOT.

Until this becomes a priority, IOT will not really happen.

Security has many different aspects to it:

  • Encryption of the communications, to maintain privacy and allow for authorization and authentication of it
  • Upgradability, which itself should be secure, straightforward and automated
  • Audit logs that are hard to tamper with, so we can investigate hacks

Most vendors won’t be able to get these done properly to begin with. And they don’t have any real incentive to do that either.

Standardization

There’s a need for standardization in this space. One that tackles all levels of the IOT food-chain.

Off the top of my head, here are a few areas:

  • Physical – Wi-Fi, Zigbee, Bluetooth – all are standards for the underlying network layer to be used. There are also RFID and other types of connections that can be used. And we need to factor in 5G at some point. We’ve got wireless ones and wireline ones. A total mess. Just look at the Mozilla Things Gateway announcement for the set of connectors they support and how these get supported. Too much information to get things done easily
  • Transport – once we get communications, and assume (naively) that we have IP communications going, do we then run our data over TCP? Or TLS? Or maybe UDP? Or should we go for QUIC? Or HTTP/2? Should we do it over MQTT maybe? Over a WebSocket? There are too many alternatives here (a sketch of just one of them follows this list)
  • Signaling – What are the types of messages we’re going to allow? What controls what sensor data? How do we describe it in a way that can be easily extendable and unambiguous? I’ve been there with VoIP and it was hard enough. Doing it for IOT is an order of magnitude harder (more players, more devices, more everything)
  • Processing – this relates to the next topic of automation. Once we can collect, control and make decisions over a single device, can we do it in aggregate, and in ways that won’t lock us in to a single vendor?
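
As a small illustration of just one of the transport options listed above (my own sketch, not tied to any particular product or gateway), here is a sensor publishing a single reading over MQTT using the Node.js mqtt package; the broker address and topic are made-up placeholders:

import mqtt from "mqtt";

// Connect to a broker (the address is a made-up placeholder).
const client = mqtt.connect("mqtt://broker.example.com:1883");

client.on("connect", () => {
  // Publish one temperature reading as JSON on a hypothetical topic.
  const reading = JSON.stringify({ sensor: "living-room", celsius: 22.5, ts: Date.now() });
  client.publish("home/living-room/temperature", reading, { qos: 1 }, (err) => {
    if (err) {
      console.error("publish failed", err);
    }
    client.end(); // close the connection after the single publish
  });
});

Every other option on that list (WebSocket, HTTP/2, QUIC and so on) would need its own, different client code, which is exactly the fragmentation problem.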

I don’t believe we’ll get this thing standardized properly in our industry for quite some time.

Automation

I’ve seen a lot of rules engines when it comes to IOT. You can program them to create sequences of events – if the density sensor indicates someone is at home, open the lights.
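
To show what “programming them” typically means in practice, here is a minimal sketch of such a rules engine (my own illustration, not any specific product): each rule pairs a condition over the latest sensor readings with an action to trigger.

type Readings = Record<string, number | boolean>;

interface Rule {
  name: string;
  when: (r: Readings) => boolean; // condition over the latest readings
  then: () => void;               // action to trigger when the condition holds
}

const rules: Rule[] = [
  {
    name: "lights on when someone is home",
    when: (r) => r["presence"] === true,
    then: () => console.log("turning the lights on"), // stand-in for a real device command
  },
];

// Evaluate every rule each time a new batch of readings arrives.
function onReadings(readings: Readings) {
  for (const rule of rules) {
    if (rule.when(readings)) {
      rule.then();
    }
  }
}

onReadings({ presence: true, temperature: 22 });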

The problem is that you need to program them. This can’t scale.

The other problem is what to do with all that sensor data. Someone needs to collect it, aggregate it, process it, analyze it and make decisions based on it.

Simple rule engines are nice, but they won’t get us far down the IOT path.

We also need to add machine learning and AI into the mix.

The end result? Probably similar in nature to AWS DeepLens. The only problem is that it needs to be really generic and flexible.

Different Industries, Different Requirements and Ecosystems

There are different markets in IOT. They have different needs and different customers. They will have different ecosystems around them.

In broad strokes, we can split it into consumer and enterprise. Enterprise here includes industrial, smart cities, etc. The consumer side is all about the home, the car and the self.

Who will be the players here?

From Smartphones to Smart Speakers

This is where I think we made the most progress.

Up until a year ago, IOT was something you ended up delivering to customers via apps on a smartphone. You purchase a lightbulb, you get an app. You get a new TV, there’s an app. Refrigerator? App.

Amazon Alexa did something miraculous. It moved the discussion over the home from an app towards a stationary home device with voice activation and control. No screen or touch screen needed.

Since then, Google and Apple have joined and voice assistants in the home are all the rage now.

In some ways, I expect this to find its way into the enterprise as well. First via conference rooms and later – who knows?

This is one more piece in the IOT puzzle.

Where do we go from here?

I have no clue.

To me, it seems that we’re still at the things on the internet stage, and we will be there for a lot longer.

The post The Internet of Things or Things on the Internet? appeared first on BlogGeek.me.

Kamailio World 2018 – First Group Of Speakers

miconda - Thu, 02/08/2018 - 14:06
The details for the first group of speakers at Kamailio World Conference 2018 have been published. So far they come from three continents (Europe, North America and Asia), many presenting for the first time at our event. The two sessions present at all editions so far will be there also in 2018, at our 6th edition, respectively Dangerous Demos with James Body and VUC Visions with Randy Resnick.

Besides covering various use cases for Kamailio, Asterisk or FreeSwitch, the sessions go into WebRTC, VoLTE/IMS, IoT, blockchains for telecommunications or scalability using NoSQL data storage systems. Definitely another edition with very interesting content – soon we will publish more details about the sessions as well.

See more about the speakers at:

You can register now to benefit from the early registration price:

Looking forward to meeting many of you at Kamailio World Conference, May 14-16, 2018, in Berlin, Germany! Thanks for flying Kamailio!
