News from Industry

Jeff Lawson on the Past, Present and Future of Programmable Communications

bloggeek - Thu, 11/16/2017 - 12:00

An interview with Jeff Lawson, Co-founder and CEO of Twilio.

After going to the Twilio Signal event in London in September, I was asked by Twilio’s analyst relations team about the event. I had already shared my thoughts in a lengthy article, so it was easy to send out a link.

I did one more thing.

I decided to ask her if I could interview Jeff Lawson in person the next time I was in San Francisco (which happened to be the following month, during Kranky Geek). My expectation was to be ignored, or simply declined.

But when she came back with an approval… I was clueless as to how to proceed.

We ended up deciding together on a recorded video interview. I was given free rein as to what questions to ask, with the request to share them before the interview if possible. No restrictions were placed. I reached out to a few friends asking for their suggestions for good questions, added a few of my own and prepared for the interview.

Jeff gave me his full attention for the better part of an hour. I ended up using everything we recorded – not removing any of the answers.

The result? A longish interview of around 37 minutes. I’ve added the transcript below the interview as well, if you’re more of a textual person.

I’d like to thank Jeff and the team at Twilio that made this one happen.

Transcript

Tsahi Levent-Levi: Good morning, Jeff.

Jeff Lawson: Good morning.

Tsahi: Okay. I’d like to start with something, a question that I was very interested in. You have two kids, right?

Jeff: Yeah.

Tsahi: Are they young?

Jeff: Yeah.

Tsahi: How do you explain to them what you do every day?

Jeff: That’s a great question. It’s hard to explain to a young kid what Twilio is, but here’s what I’ve found is they use their phones … They don’t use their phones. They steal our phones, but the only thing we really let them do is communicate. If you think about it, that’s the very first thing that a kid wants to do. Call Grandma, and I’ll FaceTime Grandma from the phone. I explain that Twilio … Twilio is a technology. We let everybody who wants to be able to build things that communicate, we let them do that.

Tsahi: Okay. So that’s CPaaS in a way, right?

Jeff: CPaaS. Yeah. In essence, we let companies call Grandma.

Tsahi: Yes. Okay. Letting companies call Grandma. I’ll tell that to my daughter.

Jeff: If Grandma is your customer and you need to engage with her.

Tsahi: Yes. When you started Twilio, like nine or 10 years ago, what was the original vision behind it? I guess it was slightly different than what it is today.

Jeff: It’s actually pretty similar to what it is today, I have to say. We started Twilio because I’m a software developer. I’ve been a developer for 20 years, and I also started multiple companies prior to Twilio. At each company, a common thread arose. At every single one of those companies, first of all, we were using the power of software to build a customer experience that was better than anything in the industry that had come before us.

I had started a variety of companies. An academic content company for college students online, StubHub, the online ticket exchange for secondhand tickets, and a brick and mortar retailer, of all things. The common thread among all of these was we were using software to build a great customer experience. We were using software to build amazing web applications, to represent the business, to enable us to touch customers. StubHub is the whole ability just to be able to connect folks together to buy and sell tickets. Software was key to that, and the key of software is agility. The ability to constantly iterate, constantly listen to your customers, put something out there in the world that you think solves a problem for them, get feedback and iterate. Sprint over sprint, every couple of weeks, you’re putting out something better, learning from your customers. That’s the super power of software. In every one of those companies, I had another problem. At some point or another, I had always needed to reach out and communicate with my customers. Just makes sense. Every time it happened, I said, “Well, that’s neat, but I’m a software developer. What do I know about making the phone ring?” That’s like magic. I have no idea how that works.

So I’d go to the industry, and I’d say, “How are we supposed to build this idea that we have?” We want to integrate with these systems. I have this idea for how I want to touch our customers, and the industry would say, “Oh, okay. Yeah, yeah. We think we can help you with that. First thing, let’s pull a bunch of copper wires from the carrier to your data center. Then we’re going to rack up a bunch of carrier gear in your data center, and then, let’s see. None of this was designed to do this idea you have, so we’re going to bring in this professional services army. They need to come integrate it, and they’re going to beat up all that equipment and get it to work and do exactly what you want. That will take about two million bucks, and it will take a couple of years to build. Sign here.”

Every time, I remember thinking, “Huh. First of all, millions of dollars for this one part of my customer experience? That’s a lot of money. I don’t think I have that, but if it’s not for the money, though, what’s much more important? The time.” Think about it. Two years before I get version one in front of my customers, before I get that prototype in front of my customers? Get any feedback whatsoever? That’s insane. To software people, to spend two years before you get anything in front of a customer? That’s crazy.

After having that experience at three companies in a row over the course of 10 years, I realized, “Huh. The ethos of communications is diametrically opposed to the ethos of software.” It kind of makes sense. If I was shooting satellites into the air and laying down millions of miles of wire everywhere, I would operate slowly and methodically, and that’s what I would do. That’s what the communications industry has done for 100 years. The thing is, how you and I, how individuals, how companies, get value out of these networks has shifted. It’s no longer about the physical networks. It’s about the software that’s running that defines how we get value out of that network, what we can do, what’s possible. That’s all about software.

So we started Twilio in 2008 to solve the problem of bringing communications out of its legacy in hardware and physical networks and into its future, which is software. Now, we do that with a powerful set of APIs that run in the cloud that let any software developer be able to start building that future.
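(As an aside for readers who have never touched those APIs: the sketch below is my own minimal illustration of what “building with Twilio” looks like from a developer’s seat, using Twilio’s Python helper library. The account credentials, phone numbers and TwiML URL are placeholders, not anything mentioned in the interview.)

    # Minimal sketch: send an SMS and place a phone call with Twilio's Python
    # helper library. All identifiers below are placeholders.
    from twilio.rest import Client

    client = Client("ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", "your_auth_token")

    # Send a text message from a Twilio number you own.
    message = client.messages.create(
        body="Hi Grandma!",
        from_="+15005550006",
        to="+15005550001",
    )
    print(message.sid)

    # Place an outbound call; Twilio fetches the call instructions (TwiML)
    # from the URL once the callee picks up.
    call = client.calls.create(
        url="https://example.com/voice.xml",
        from_="+15005550006",
        to="+15005550001",
    )
    print(call.sid)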

Tsahi: I’d say you succeeded in that.

Jeff: Oh, well, thank you. We feel like we’ve just started.

Tsahi: Okay. In all of these years, what would be one of the most surprising use cases that you’ve seen or come across, one that made you go, “Whoa. That’s cool. That’s neat”?

Jeff: There’s so many. We build the platform. We never know what people are going to build. In fact, one of the little Easter eggs in Twilio’s history is that in every press release when we launch a new product, my quote ends with the words, “We can’t wait to see what you build.” Every press release, year after year after year, that was always the line. Nobody ever caught on.

There’s so many use cases. There’s the obvious ones. The whole on demand economy. Things like Uber and Lyft and Airbnb, where Twilio is not only notifying you that your car is arriving, but also connecting drivers and riders together. That whole idea that I would use the internet and my phone to get a stranger to pull a car up and get in the car, I was always told to not get in stranger’s cars. But now, that’s what we do every day, and use cases around how communications, and Twilio has made that safe, made that convenient, made that easy. I never would have thought of those the day we launched Twilio, because really, mobile phones, their current incarnation, smart phones, were just getting started, and that whole idea of it; the applications of it were still completely unknown.

But then there’s the crazy use cases that I still can’t imagine. One of my favorite crazy use cases is there’s some researchers in the United States who study the migratory habits of bears.

Tsahi: Okay.

Jeff: Right? It turns out that if you study the migratory habits of bears, you spend your days in a helicopter flying around looking for bears with binoculars. When you see a bear, you land your helicopter. You shoot the bear with a tranquilizer, then you climb up on the bear. You hope it’s tranquilized, and you put a collar on its neck that’s going to track its location. Then you run away very quickly, hopefully before the bear wakes up. Then a year later, you’re circling in your helicopter. You spot the bear again. You land. You shoot it with a tranquilizer again. You climb up on the bear again, hoping it’s actually tranquilized. You pull the data card out of the collar. You put a new one in, and you run away before the bear wakes up.

They’re like, “There’s got to be a better way. We would love to stop shooting bears with tranquilizers.” So they built a collar that had a 2G radio in it that collects all the data. When the bear wanders into an area with some cell service … They don’t exactly walk around in shopping malls. When it wanders in, it picks up coverage, and it texts all that data off the collar to a receptor they built on Twilio. That was, I thought, such a cool use case, because they’re using this technology, 2G radios. They’re low power. They’ve got maximum range, and it is texting the data off to build an app. You’re like, “Who would have thought of this?” We call this the internet of bears. I’m like, this is a use case I never would have imagined that there were people whose days were spent doing this. They found a use case for Twilio to solve this problem.

Here’s another crazy use case I love. There’s a researcher in the UK who built an app that allows you to call a phone number, and based on taking a recording of your voice, can detect with a very high degree of accuracy whether you’re likely to be predisposed to Parkinson’s disease.

Tsahi: I should use that one.

Jeff: You’ve done it?

Tsahi: No, but do you have the number?

Jeff: It’s a medical trial. They ran this trial. They found it to be an incredibly accurate way of assessing whether or not you are likely to develop Parkinson’s just by calling a phone number on Twilio and recording your voice for about 30 seconds. What’s amazing, as a researcher, he said trials like this would have usually cost millions of dollars to set up and run, because you would have needed all this sort of expertise and specialization. The doctor and his staff built it in a couple of weeks using Twilio for less than $1,000. They ran the whole trial, so it’s amazing.

Tsahi: Yes it is. I want to talk to you a little bit about the market itself and the different players in that market. The main ones that you would have thought would lead it or be part of it are the actual telcos, the carriers, the ones that offer the phone service to the consumers. When you look at what they are doing in CPaaS and in APIs, they have services, but none of them are quite as successful as the other vendors out there. Why do you think that is?

Jeff: Well, I love the carriers. They have a very valuable product in that they are building out all the infrastructure that we all use every day to communicate in every way we can. I would say, though, that the carriers are not well situated to solve these software problems. Historically, carriers have not been software organizations. They’ve been very effective at ground operations, at getting infrastructure out in the field, repairing it, installing it. They’re very good at sales and marketing and servicing customers, but they historically have not been great software organizations, and that’s why I think a new type of company has been needed to come and solve this problem. A company that is a software company.

Twilio, half of our company is our software R&D group. That’s a different ethos. Building a world class software engineering organization, one that can ship and be agile and build resiliency with agility, which is what we call that process of having a high velocity of innovation but also achieving five nines of availability and things like that. That is a hard software problem, and so it takes a different kind of company to solve that.

Tsahi: Okay. What about all of the IaaS vendors? AWS, Google Cloud Platform, Microsoft Azure? They offer infrastructure. They give you compute and storage and databases today, and it’s like shouldn’t they also do communications? It’s the next step. Why do you think that they aren’t there yet or aren’t there today?

Jeff: I think two things. First is, these companies have been primarily focused in the communications for online consumers. A lot of them have a consumer play, whether it’s Microsoft with Skype or Google with Hangouts and things like that. Then on the infrastructure side, I think they’ve gone to the things that they do particularly well on the infrastructure to build, which is to say it’s compute and storage, the most common areas of software computation, which has been a huge meaty market to go after, which has meant that communications hasn’t been the focus of theirs.

I think companies like Twilio, we focus on communications all day every day. That’s what we wake up to do, and so I think we’re uniquely situated to be able to build out great services that target exactly the use cases of communications while the other platforms have been really focused more on compute and storage and the key areas of general purpose computation.

Tsahi: Okay. Another trend that I’ve seen in the last year or so is around UCaaS, Unified Communication as a Service. These companies that offer you desk phones, the video conferencing systems, the things that you need in order to run and operate your enterprise internally. Communication between people inside the enterprise. It seems that all or most of these vendors today start offering APIs. They bundle APIs on top of their service. When you go and talk to them, they usually say, “We’ve got APIs just like Twilio. When you use us, you don’t need to pay for blah, blah, blah, whatever.” It’s like they compare themselves and position themselves as direct competitors to Twilio. Where do you see these two markets going? UCaaS and CPaaS. Where do they meet?

Jeff: Yeah. It’s a very different thing. If you think about Unified Communications as a Service, you’ve got an application. When you build an application, you make all sorts of assumptions about how the world works. You have a domain. You’ve got models. You’ve got all the core components of unified communications. Then when you add APIs to it, which by the way, it makes a ton of sense. Every SaaS product has APIs. In fact, UCaaS has been a little late to that game, I actually believe. Most SaaS companies have had APIs for 10 years. But when you add APIs to a software application, those APIs bring with it all the assumptions that you made about that application. That’s both good for some things … If you want to extend the application in a certain way and you want APIs to do it, that’s what those kinds of APIs are good for.

Twilio is designed from the ground up to be a set of APIs, to be ultimate flexibility. To not make all those assumptions about the one application that the end user is going to use it for, but rather to say these APIs are designed like building blocks to be put together in any way you see fit. That’s why we can address a wide variety of use cases, whether it’s two-factor authentication, identity verification, call centers, anonymous communications, notifications, alerts, anything you can imagine, you can build with Twilio. That’s because we were created from the ground up for this recombination of these building blocks as opposed to taking something that’s already built and fixed in place and then saying, “We’re going to add APIs to it.” It’s just a different way of approaching the API problem. Both of them have merits, but I like our approach, because it gives us the ultimate flexibility to really enter any of these use cases in a really wide breadth of things.

Tsahi: Do you see a unified communications as a service vendor – a vendor that offers such a service – deciding not to build the whole communication infrastructure on its own, but instead using someone like Twilio, a communications platform as a service, to build what it is doing on top of?

Jeff: Yeah. I believe that companies whose primary business is communications can and definitely should and would get competitive advantage by using a platform like Twilio to build upon. The reason why is this. It used to be when those UC companies started, their core competency was making the phone ring. Then they’d add some software functionality on top of it, sure, but the vast majority of what they worried about was how do I make the phone ring? The problem is Twilio has democratized that ability.

Every developer … Every mobile developer, every web developer … now has the ability to make the phone ring in 100 countries around the world where we have phone numbers and touch every phone on the planet … Mobile, landline, et cetera … with an API that is reliable, that is scalable, that is global. Now, you’ve got developers out there who get to focus solely on customer experience, features, integration, UX, mobile. Build the things customers really care about and bring this core competency of focusing on user experience that software developers do so well. A one or two developer team can actually create a customer experience that is better than some large company that is focused purely on Unified Communications as a Service.

The existing UCaaS vendors, they would be wise to build on top of the same platform that any developer in the world can come and start to compete with them on. If they don’t, those independent software developers, they can actually start and build companies that are really compelling competitors, because they don’t have to focus on the low level bits. They’re focused on the things customers really care about, which is features, functionality, and the user experience that matters.

We have seen this play out, for example, in the call center market. We’ve seen … At our first conference back in 2011, Tiago was the founder of the company TalkDesk. One developer. Do you know Tiago?

Tsahi: Yes.

Jeff: Back in 2011, Tiago was the founder of TalkDesk. A single developer. He was a web developer. He knew web development really well and focused on building a product that he thought would be really compelling. Because of Twilio, he didn’t have to worry about any of the underlying infrastructure. Now, TalkDesk has hundreds of employees, has raised a lot of venture capital, and has Fortune 1000 companies running call centers on it, all because he was able to focus on the things customers really care about: the features and functionality of the application. He did not have to worry about making the phone ring. That’s a really powerful competitive dynamic, as new players come in fundamentally uplevelled, because they’re building on platforms.

Tsahi: When I look at the feature set that you have at Twilio, the different types of functions that you offer – at the end of the day, that is something that always comes up when people talk about Twilio and try to attack Twilio as a company. They say, “All of the money comes at the end of the day from SMS and voice. That’s what they do, and at the end of the day, that’s too competitive a market today.” If you actually look at all of the CPaaS vendors, all of the direct competitors that you have, almost all of them have the same type of characteristics. They make most of their revenue today from SMS and voice and a lot less from the IP based services that they have, from the new things that come out. How do you, as the leader in the CPaaS space, deal with that and meet that challenge?

Jeff: I think there’s two things. First of all, most mature products for any company are generally going to be the largest contributors of revenue. Especially with developer products. We have a very long commitment to developers, and that takes a little longer than other products to adopt, because you launch a product, then developers have to see that product, understand it, and build their product, and then bring their product to market. You’ve got a little bit of an extra delay as a developer-focused company before products become commercially viable.

That is a long commitment, and that, quite frankly, is why a lot of companies don’t have the stomach to serve developers, because it’s a long commitment to developers to get those products to grow and be large. But we have that commitment. The way we look at developer products is that they have a slower start but then a fantastic ramp up capability. So I wouldn’t worry about the short term. We’re planning for the long term. In the long term, it is blatantly obvious that the software APIs and software communications are going to win. We’re there with all the products that developers need to build it. We see developers building amazing things using our software products, our video SDKs, Twilio Clients for Voice Over IP, the rest of our software products.

The other thing I’ll point out is that our software products often drive usage and adoption of our voice and SMS products as well. They don’t exist in a vacuum. When a customer builds a call center using Twilio’s TaskRouter product, which is a globally scalable cloud-based ACD … When you use TaskRouter to build a call center, guess what? It drives more voice revenue. When you use Twilio Client as the basis of your call center, it drives more PSTN revenue, generally, as well, because you’ve got an inbound phone number.

What’s interesting is that these new technologies, software-based communications, are actual drivers of competitive advantage for our customers who adopt them. If you think about the customers of ours who’ve adopted Twilio Client to allow any computer with a web browser to now become a call center by just plugging in a headset and using our Twilio Client product that’s powered by WebRTC, that has leveled the playing field, because you no longer have to manufacture or sell hardware phones or PBXs in a closet. These new software technologies have been huge drivers of a new set of players arising in this industry who previously wouldn’t have been able to do it. That’s creating a new market dynamic here of new players entering the field and new products entering the field that wouldn’t have existed 10 years ago.

That’s really exciting, and it’s creating a huge market shift, but it also draws more usage of the PSTN right along with it. The same thing you can say for our Twilio Chat product. The same thing you can say for a number of our products, Twilio Studio. So all of these products together, you usually don’t use them in a vacuum. You use them together with other products. That’s part of the nature of APIs. But having them all together and being able to plug them in together to do these interesting things is fundamentally changing the landscape of the companies and the products that are out there that are really pushing the ball forward on communications.

Tsahi: I think I saw the first thing that you said when I worked at RADVISION years ago, but in the opposite sense. At RADVISION, you had two business units. One of them was a technology business unit. We sold SDKs to others to build their own products. The second business unit dealt with selling videoconferencing equipment. Whenever there was a downturn in the company because of the market, the CEO came out and said, “We have this business unit that sells videoconferencing. It’s now slow because of the market. Then the TBU, the technology business unit, we’re still going strong because we see that this will go upstream three years from now when developers actually launch it.”

There, the business model was flipped. We usually licensed the software in advance, so developers had to invest when they started, and not when they saw the revenue. What you are saying is that today, in order to be in the developer space, you don’t make the money up front from developers that build stuff for the future. You wait and you grow with them. That waiting for growth is what makes a company big in the end – being patient.

Jeff: Exactly right. It’s the combination of our usage-based revenue model that tightly aligns us with our customer’s success. This is key. When we think about what is the driver of innovation, what makes developers be successful in building their next idea, it is experimentation. Experimentation is the prerequisite to innovation. Everything that we do is about lowering the barriers to a developer getting started and running as many experiments as they can for an idea that they want to try out. That’s why we have such a low upfront. You get started … Every developer who has used Twilio started by spending their first penny to make that first phone call, send that first text message, fire up that first video session.

You never know which one of these ideas that developers are building is going to be the next great big idea. Our job is to make it so developers can try as many of these ideas and run as many experiments as they can until they find product market fit with the thing that they’re building. That’s why it’s a long commitment to developers, because you need to give them the runway. You need to have that patience, but you also need to have that attitude that it’s not about, “Hey, a developer came to our door. I’m here to get all the money from you today.” You’re like, “No. We’ll do well if you do well. I’m just here to make sure you do well. I’m here to do everything I can to make you successful in building your ideas.” Ultimately, that’s how I’m going to be successful, but it’s a long commitment.

We like to say, though, it is a compounding interest business, essentially. You invest in developers, and they build. With the usage-based model, as they grow, as they’re successful, that, then, turns into our success. For us, that means customer success is the very first thing. It’s the prerequisite to our own success. Everyone at Twilio is always focused on customer success first.

Tsahi: I’ve been to two Twilio SIGNAL events, both very interesting events. I really loved them. What I noticed is that you know exactly what the product does. When there is a product launch, you play with it. You do it on stage. You use it. You’re a developer yourself. How can you do that and still be the CEO of a company with more than 900 employees?

Jeff: I think as an API developer-first company, I have to do that. That’s how I can make sure that we’re building the right things, and that’s how I can make sure I’m close to our customers and I’m close to our products. I love playing around with the new Twilio products. I am the first person they give access to when we build stuff, or at least, I hope I am, because that’s how I love playing around. I just dive in there. I read the docs. I started building stuff. That’s really exciting.

Recently, I was building something for Halloween with my kids with some Arduinos. I love building internal things at Twilio. A few years ago, I built our goal-setting software that we were using at the time. I just dove in. They don’t let me touch production code anymore, which is probably a good thing, but I just love being a developer. Even though I’m a CEO, I love continuing to invest in that part of my life. Obviously, I don’t get to do it as much as I used to, but it would make me very sad if I had to stop. I’ve just arranged my schedule and arranged my life so that I always make sure I’ve got some time to stay current on new stuff, both inside Twilio and outside Twilio and build. I’ve always thought that just building, just having a project idea in mind and committing yourself to building it and picking even some new technologies you’ve never used before, that’s a great way to keep learning and keep building and keeping your skills up.

Tsahi: I can easily relate to that. Talking about products and what it is you do, over the last year it seems that you have shifted somewhat. Up until now, you could have said that when Twilio launches or introduces a new product, it would be yet another building block that you can use to do some kind of communication. A new communication service that you couldn’t build before. It seems that you’ve started moving upstream. There is the Engagement Cloud with Notify and Authy. Then there is even Twilio Studio, which for me goes even one level above that. Why did you make that move? Why the shift?

Jeff: Well, we don’t see it as a shift, because to us, it’s always about having the right API for a developer to get the job done. As a platform, you start off with a set of building blocks that provide maximum flexibility, because you don’t necessarily know what developers are going to want to build. As you learn from developers what are the most common things that they want to get done, but also what was really hard? What did they think would be easy to build and it turned out was very hard?

We view our job as making our customers successful. When we see the things that we can do to make their lives easier, help them get the job done faster or not have to reinvent the wheel because they’re trying to figure out, “Hey, how do I figure out how to distribute calls?” and I see every other customer trying to figure that out, too, as they’re building a call center, it becomes obvious. You say, “Wow. My job is to make my customer’s life easier and make them more successful. Why don’t I build a product that does that thing?” So you end up with Twilio TaskRouter, for example.

In the case of Studio, we view it as making the developer’s job even easier and allowing more people to participate in the development and the maintenance of these applications they’re building. Why? Because we saw developers build an application, and certain parts of it are really exciting, like how do I figure out the exact experience I want? How do I integrate all this stuff? Then parts of it are really boring and become a tax to the developer and to the whole organization, such as when folks are saying, “Hey.” Product manager says, “Hey, can we update the text? We’re going to run an A/B test. Can you try 50% on this and 50% on that? Can you change the SMS text? Can you change how the call center greets the people coming in?”

The developers don’t see that as exciting. They see that as, “Oh, it’s continual maintenance. It keeps pulling story points off of me every week, because I’ve got to keep maintaining the thing.” We said, “Isn’t there a way that we can allow the developer to do the really important parts, the parts that are about integrating systems and things like that, and then take the other parts that are a little more standard and make it so not only the developer doesn’t have to write it … They can just drag and drop and build it easily … but they can also hand some of that off to other people in the organization.” Maybe the marketing people have ideas about how they want the content to work. Maybe the ops people want to change how the IVR call flow works. There’s all sorts of different people who are invested in these communications applications, because customer engagement touches so many parts of the company.

If we can offload a bunch of that work from the developer, that ultimately will accelerate our customer’s roadmap and make them more successful. Again, you go back. That’s our goal. By the way, when we make our customer successful, that makes us successful, so we’re all aligned in this. Studio is a great way to do that. So we keep listening to customers, hearing the things that they love about the API approach, the flexibility it gives them, the fact that they can now build things that they were never able to do in the past because pre-built software applications weren’t flexible enough. But then we say, “Great. How do I make it so that you can get that flexibility faster and easier than ever before?” You do that by listening to your customers and solving the most common pain points.

Tsahi: I really love Studio. I’ve played with it. It’s a great tool. Really.

Jeff: Awesome.

Tsahi: How did you come up with the definition of it? Building a UI tool, an IDE that can mix and match stuff and do this logic, is never easy. I’ve used similar tools before. Some of them are good. Most of them not that good. How did you nail that experience in a way that, at least for me, was just spot on?

Jeff: I think there have been fits and starts in the history of computation around visual designing of programming. Sometimes they work. Sometimes they don’t. To us, there were two things that were involved in that. Number one is working with a lot of customers and a lot of users. We actually started with paper and sticky notes and starting to design with them how they would want to design something like an IVR or an SMS bot or a chat bot, things like that. We actually did it with sticky notes before we wrote a single line of code. To us, that was the equivalent of for APIs, it’s writing the API docs first, putting them in front of a user and saying, “Hey, is this the API you would want?” We do that before we build the product. We did the same. We applied the same logic to building a user interface for drag and drop development.

Then the second thing was I think we constrained it down a bit to say, “This isn’t about general purpose computation,” because you get in all sorts of hairy things. We’re focused on the customer engagement. If we scope it down and we say, “We want to make the very best visual designer for Twilio for customer engagement. What are the things it should encompass?” I think that the key of building both power and simplicity is really understanding your domain that your customers are operating in and then designing the perfect thing for that domain.

I think that obviously, we’re just at the very beginning. We launched it just over a month ago, and so we’re continuing to learn from customers and get that feedback, but that’s our approach that I think has helped us to build something that customers find both powerful but also easy to adopt and easy to use. That comes from the same approach we’ve used to design APIs that I think customers would articulate in the same way. They’re powerful and easy to use.

Tsahi: What’s the feedback that you get about the engagement cloud? It’s out there for what, half a year now?

Jeff: Mm-hmm (affirmative). Look, when we talk to customers and we take a step back and we say, “What is Twilio all about? Why is Twilio important to you, ING Bank? Why is Twilio important to you, Morgan Stanley bank?” Some of these are very large organizations, so obviously they have a lot of options and a lot of legacy systems they could have kept using. The answer we get is, first of all, flexibility. With Twilio, we get this unprecedented flexibility.

When you think about the importance of customer engagement to a company, almost nothing is more important. When I talk to a CEO of a bank, and you ask them, “What’s important?” they are so concerned about, “How can I maintain my relationship with my customer?” That’s the biggest fear that C-level executives have. That is done with customer engagement. How do you keep up? If you think about the problem space here, it’s insane.

As consumers, the technology that we use has advanced incredibly rapidly in the last five to 10 years. We’ve got a wide variety of new applications that we use. We use video. I use video almost daily. I would have thought that was crazy 10 years ago. I would have thought that was stupid, and now here we are. We use video on a daily basis. We’ve got great chat applications. We’ve got apps in our chat and chat in our apps. It’s amazing. Yet, for companies to communicate to their customers, it is incredibly broken. Why? Because companies can’t keep up with the pace at which our expectations are changing for how communications is going to work and how great of an experience it’s going to be.

We’re still stuck in the days where you essentially call an IVR of a company and they don’t know who you are. You enter your 40-digit account number and then you talk to an agent. They’re still asking your name five times. You’re like, if I had that experience with a friend, if I called my friend and they asked me my name five times during the call, I would think there was something medically wrong with them. Yet when you call a company, that’s the experience you expect. Nothing is more broken about communications than how companies talk to their customers. We want to fix that.

When you talk to executives at companies and you say, “What keeps you up at night?” It’s, “Yeah. I’m worried about losing my connection to my customer. Being disintermediated by all these other technologies that are coming out. I need to keep the connection in order to stay top of mind and stay relevant to my customer.” When I think about how that works, it’s like, “Well, you’ve got rapidly proliferating ways in which you need to reach your customer.”

10, 15 years ago, talking to your customer generally meant you had a phone number and customers could call it. Now, you’ve got not just phone calls. You’ve got text messaging, you’ve got chat, you’ve got mobile apps with push notifications. You’ve got WeChat, WhatsApp, Facebook Messenger. You’ve got so many different … Now Alexa, Google Home, personal assistants. You have so many ways and very finite development resources to keep up with this changing world. By the way, it’s not just the ways in which you need to communicate that is proliferating. Think about all the departments in a company that need to actually keep up. You’ve got sales, marketing, customer support, onboarding, product teams. Every part of the company is trying to keep up with every part of this changing technology landscape. It is an unsolvable problem for most companies.

That’s what the engagement cloud is here to solve. We want to provide one system that allows companies to keep building, keep iterating, but to reduce the barriers, reduce the time to do that and give one tool to all these different teams who need to touch customers, to be able to keep up with this rapidly changing landscape and constantly iterating on those customer experiences with easy to use tools and infrastructure that they don’t have to worry about scaling. They don’t have to worry about reliability. They don’t have to worry about onboarding new platforms. We’re going to do that for them as the world is changing. They get all that stuff from us, and so they focus on, “Okay, what’s my special sauce? What’s the thing that makes my brand and my company engaging to my customer?” I’m going to focus on that last bit, and we’re going to iterate on that constantly, and I’m going to empower all these different teams inside the company to be able to have that at their fingertips. That’s what the engagement cloud vision is all about.

Tsahi: Thank you for your time, Jeff.

Jeff: Thank you, Tsahi.

Tsahi: I thoroughly enjoyed it.

The post Jeff Lawson on the Past, Present and Future of Programmable Communications appeared first on BlogGeek.me.

DID Routing Solution With Kamailio

miconda - Wed, 11/15/2017 - 16:27
Time for sharing details of another tutorial and configurations for a common Kamailio use case shared by community members, this time by Surendra Tiwari. He has used Kamailio and Redis to create a DID routing solution with the following features:
  • Inbound Termination with Carrier IP validation
  • Carrier LCR for DID/TFN to PSTN forwarding
  • Inbound Abuse Block
  • CDR in MongoDB
  • IPTables Block for SIP Scanners
  • Integration with RTPEngine
  • RedisDB for quick DB Access
Guidelines and configuration files are shared via a Github repository:

Enjoy!

Thanks for flying Kamailio!

Vidyo and RTC in 2018

bloggeek - Mon, 11/13/2017 - 12:00

Vidyo has made several announcements in the past couple of weeks. Time to see why the time is right for RTC across markets.

It has been a busy month for Vidyo. It has made two interesting announcements:

  1. The introduction of VP9 into its products
  2. Streamlining its product line

Vidyo has been known for its video routing technologies for many years, well before WebRTC came into the ring. It is great to see how it has gone about merging the two, along with how it is trying to fit its business model to the realities of WebRTC.

Vidyo, WebRTC, VP9 and SVC

How do you compete in a world where WebRTC is becoming the dominant media engine? Especially when the baseline implementation is dictated by what you get by default in the browser?

Vidyo has always had its own proprietary codec implementations. Ones that are optimized for SVC – Scalable Video Coding. Alex Eleftheriadis guest posted here last year with an explanation of SVC. To simplify, SVC gives two big advantages:

  1. Better error resiliency on poor network conditions
  2. Better support for multiparty and broadcast interactions

In many cases, you can get these things done without SVC and the end result would be good enough. But there are times when this extra kick to quality and optimization of how the network gets used makes all the difference.

When it comes to current browser implementations of WebRTC, the only video codec that has any kind of SVC support is VP9 and that takes place in Chrome. To take advantage of SVC, there are only two routes a company can take:

  1. Rely on the browser implementation and exposure of VP9/SVC features, and then implement these capabilities in its application
  2. Build its own XXX/SVC implementation into a non-browser application

Option (1) is great, but it assumes that:

  • Browsers prioritize VP9/SVC over other features. The challenge here is that things like aligning with the upcoming WebRTC 1.0 spec is most likely a lot more important
  • VP9/SVC will be implemented soon, and controlling its SVC capabilities will be exposed to the developers via JS APIs or additional SDP parameters
  • The existence of media servers that support SVC and optimize and fine-tune well for it

Reality is that on Chrome, the VP9 implementation in WebRTC supports SVC on the decoder side, but it doesn’t yet support it on the encoder side.

Vidyo took the middle ground here, trying to enjoy both worlds: It always had its own SVC implementation in H.264 but allowed using WebRTC. Now, with its VP9/SVC implementation, it gets the freedom to improve video quality of its sessions in ways that others can’t.

If you use Vidyo.io today (and its other products in the near future), then Vidyo will try to prioritize the use of VP9 over other video codecs. And if some of the users in the session are making use of Vidyo’s SDKs instead of the native browser WebRTC implementation (i.e., joining from mobile or a desktop app), they will encode VP9 with SVC capabilities, and Chrome will be able to decode the bitstream – though the browser’s own encoded bitstream won’t be using SVC (at least not for now).

This places Vidyo ahead of the pack in SVC support that plays well with WebRTC.

Vidyo’s Product Line

Here’s the gist of the new product line view from Vidyo:

Vidyo has taken the approach of offering a single technical infrastructure to host and run all of its products. This is the right move forward and an embrace of the cloud. In a way, Vidyo is continuing its shift from on premise deployments towards a Vidyo hosted and managed cloud platform.

Vidyo.io can be defined as CPaaS, a Communications Platform as a Service, while VidyoCloud can be defined as UCaaS, Unified Communications as a Service.

Vidyo started life in the UC business, moving to the cloud and then adding an API platform. In many other cases, UC / UCaaS vendors take the approach of adding an API on top of their UCaaS product and then just calling it CPaaS. Vidyo decided on “separating” the two, which feels to me like the better approach. It casts a wider net over the potential target market and the types of use cases that Vidyo can now cater to.

Earlier this year, Vidyo added to this product line VidyoEngage, its answer to video based contact centers.

The end result? Vidyo can now be used in the 3 biggest domains for visual communications:

  1. Unified Communications, with its VidyoCloud offering; providing a complete video communications platform
  2. Contact Centers, with VidyoEngage; providing a higher level abstraction of the call center model to its customers
  3. All the rest, through its Vidyo.io platform for developers

You can use Vidyo.io to build a UC or a CC application if that’s your need, or you can just pick up VidyoCloud or VidyoEngage to get there.

What’s Next?

The challenge for Vidyo will be in competing on 3 different fronts at the same time, and the threat of losing focus. I am guessing this is one of the reasons for this streamlining – it is meant to simplify the internal infrastructure that is used in these 3 products on the technical level.

Managing these separate businesses and keeping abreast in all 3 markets will be hard, but Vidyo is off to a good start here.

When it comes to Vidyo.io, the addition of VP9/SVC support positions Vidyo as the technology leader in its space, with the ability to offer the best media quality. Its competitors will need to catch up.

The post Vidyo and RTC in 2018 appeared first on BlogGeek.me.

Kamailio Git Branch 5.1 Created

miconda - Sun, 11/12/2017 - 16:26
The GIT branch 5.1 for the Kamailio project has just been created; it will host the release series v5.1.x. To get this branch from GIT, you can use:

git clone https://github.com/kamailio/kamailio.git kamailio
cd kamailio
git checkout -b 5.1 origin/5.1

Hopefully the full release of Kamailio v5.1.0 will be out in a two to three week time frame.

From now on, any corresponding fix has to be pushed first to the master branch and then cherry-picked to branch 5.1. No new features can get into branch 5.1. Enhancements to documentation or helper tools, as well as kemi exports, are still allowed. If you are not sure about doing a backport or not, ask on the sr-dev mailing list.

The master branch can now get new features, which will be part of the future 5.2.x release series.

Thanks for flying Kamailio!

ClueCon Weekly With Fred Posner

miconda - Mon, 11/06/2017 - 18:09
The FreeSWITCH project has been running a weekly video conference call for many years now, named ClueCon Weekly. The Kamailio project has been present several times in the past.

The edition from Wednesday, Nov 8, 2017, has Fred Posner as a guest. A long time Kamailio community member and advocate, Fred invites you to attend the session and be part of the discussions about Kamailio and FreeSWITCH, and how to use them together for building modern real time communication systems.

How to connect to the conference call is detailed at:

For video you need a WebRTC capable browser, but there are good options to connect only for audio, via SIP or even from PSTN or a mobile phone – very convenient ways to listen while still working.

Thanks for flying Kamailio!

What’s New With the Jitsi Videobridge?

bloggeek - Mon, 11/06/2017 - 12:00

Jitsi is getting a boost in its development.

When a developer-focused company gets acquired, it is time to start worrying.

Was the acquisition due to the technology, the customers or the business model?

Will the product continue to grow and flourish in the new regime?

Are the current signed agreements going to be renewed?

For open source, there are even more questions.

How will the community that was created around the open source project be treated?

Will existing business models around support, customization and dual licensing be maintained or will they be killed?

Two and a half years ago or so we had 3 popular open source media servers for WebRTC: Janus, Jitsi and Kurento.

Kurento got acquired by Twilio and Jitsi got acquired by Atlassian. Janus is still independent.

The progress made around Kurento since its acquisition was minimal at best. My guess is that Twilio is just too busy getting its own multiparty video ready for GA to focus on the Kurento open source project itself. It also hasn’t quite acquired everything that is Kurento – parts of it were left to the community and the original parent company Naevatec. The time that has passed is making a lot of Kurento adopters frustrated and in search of alternatives.

Best time to join my WebRTC Course? Today. Office hours are starting next week, and there’s a great bonus ebook of how meet.jit.si built its scalable infrastructure.

Enroll now

So time to ask –

How has Jitsi fared since its acquisition?

Surprisingly well.

And it seems to be getting a lot more interesting lately.

In the past 4 months, I’ve been adding a post about Jitsi to the WebRTC Weekly almost every week. The team there has been continuously churning out new features for the project.

Here’s what was announced on the Jitsi blog since June when it comes to new features:

June

July

August

September

October

There’s a mix of announcements here. They range from the addition of UX features to some deep optimizations of the media server itself. And part of it is due to GSoC, Google Summer of Code, a project started by Google some years ago where university students can join open source projects as interns. Jitsi has been part of this project for some time now.

UX Improvements

In a way, these are the least interesting features when it comes to a media server, but they are the ones that make it easier to use.

What Jitsi did in this round was tweak the UI to be a bit more modern and easier to use. For video layouts, there was a decision to better cater for 1:1 scenarios and to move video thumbnails from the bottom of the page to the right side of the page. This is also what Google decided to do once they shifted away from Hangouts to Meet. This makes for a more modern approach that sits well with the wider displays we have in recent years.

An audio only button was added to the UI. I am assuming it is just a shortcut to muting incoming and outgoing video. Having this UI element there makes it easier for users to operate (and easier for adopters of the Jitsi Videobridge to customize).

The interesting addition to me is the speaker times one.

I am intrigued in this case to know how easy it would be for an application to get that information from the Jitsi Videobridge – is this supported via the signaling offered by Jitsi towards the web client, or is it also available as a backend-to-backend REST API? I can see this being used later in various ways, assuming the API is detailed enough and easy to use.

Integrations

A WebRTC media server is but a part of what you need to run a full application. While central and important, there are other aspects to it. In recent months, Jitsi has added a few additional integrations, making it easier to use and connect to.

Three such integration points were announced:

1. Mobile SDK

Jitsi has had mobile applications for quite some time. While nice, that is different from having a mobile SDK.

Something I’ve been telling media server vendors for a few years now, is that they should offer a mobile SDK as part of their media server. In WebRTC, it is an important part of their offering and one that is hard to ignore.

In the case of Jitsi, users had to use the mobile application as a reference and modify it to their heart’s content. The problem with this approach starts when you need to maintain the codebase in the long run. When a new version of the mobile app comes out – how do you know which parts are critical to upgrade (without them the app will break with the new Jitsi Videobridge) and which ones are just UI fixes that you can ignore or pass on, since you’ve created your own UI experience already?

This is exactly why an SDK is such an important aspect of the solution:

With a mobile SDK, application developers can now just use the Jitsi Meet mobile application as a reference or even write something from scratch on top of the mobile SDK itself. Each is independently updated and maintained, making it easier to upgrade to newer releases.

2. Speech to text

Translation and NLP seem to be all the rage these days.

The way you get these things connected to WebRTC varies, but follows a similar approach for media servers:

You somehow collect the audio streams on the media server, mix and process them to the format supported by a 3rd party speech-to-text engine (Google Cloud speech-to-text seems quite popular these days), and once you get the resulting text, you do something with it.
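To make that pipeline a little more concrete, here is a minimal sketch of the last hop only – handing a short, already mixed-down audio clip to Google Cloud Speech-to-Text from Python. This is my own illustration, assuming a recent google-cloud-speech client, not Jitsi’s actual code; the file name is hypothetical.

    # Sketch only: transcribe a short LINEAR16 clip that the media server has
    # already captured and mixed down. Not Jitsi's implementation.
    from google.cloud import speech

    def transcribe_clip(path: str) -> str:
        client = speech.SpeechClient()
        with open(path, "rb") as f:
            audio = speech.RecognitionAudio(content=f.read())
        config = speech.RecognitionConfig(
            encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
            sample_rate_hertz=16000,
            language_code="en-US",
        )
        response = client.recognize(config=config, audio=audio)
        # Take the top alternative of every recognized segment.
        return " ".join(r.alternatives[0].transcript for r in response.results)

    print(transcribe_clip("mixed_speaker_audio.wav"))  # hypothetical file name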

In the case of Jitsi, this was a GSoC project. Information about its current status can be found on the developer’s website – Nik Vaessen.

This probably requires some more improvements and polish, but offers a good starting point for developers.

I’d wager that in GSoC 2018, the Jitsi team is planning on adding translation and text-to-speech to it.

3. Telephony

Telephony was already available in Jitsi before. It is implemented via a Jigasi server (JItsi GAteway to SIP). Now Atlassian is eating its own dogfood, not only with its internal HipChat service but also in its free meet.jit.si showcase service.

In the case of meet.jit.si, the length of calls was limited to 2 minutes, enough to hunt down meeting participants who haven’t joined the session yet.

This serves two purposes:

  1. Show that Jigasi works and showcase its use
  2. Work out the kinks of getting this into the UX
Media Server Optimizations

At the heart of Jitsi is the media server itself. This is what developers aim for to begin with and the additions there are quite interesting.

The first one is that Jitsi now supports peer-to-peer media traversal for 1:1 sessions – in effect, no media server. The reasoning is that many of the calls end up being 1:1, and it is far easier and more cost effective to share media directly between the participants.

In the past, supporting such a thing with Jitsi required running a separate signaling mechanism for 1:1 sessions and then, once the need arose to grow, shifting and renegotiating everything in front of Jitsi. It was tedious at best.

The other work effort is way more interesting.

Bandwidth estimation is nasty. Network conditions are varying and dynamic. You can start a session with 2Mbps and have it considerably drop throughout the session, coming back up again and changing characteristics.

To get that right, WebRTC (and any other VoIP alternative) needs to use bandwidth estimation. This is a process where the device tries to understand how much bandwidth is available to it at any given point in time. The algorithm can be naive, smart, complex, whatever. And a lot of the perceived quality of a call relies on the quality of the algorithm used for bandwidth estimation.
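To make the idea concrete, here is a toy, loss-based estimator in Python. It is only an illustration of the feedback loop such algorithms implement – not the actual algorithm WebRTC or Jitsi uses, which also looks at packet delay, jitter and more.

    # Toy illustration of loss-based bandwidth estimation; not WebRTC's or
    # Jitsi's real algorithm.
    def update_estimate(current_bps: float, loss_fraction: float) -> float:
        """Adjust the sending-rate estimate based on reported packet loss."""
        if loss_fraction < 0.02:
            return current_bps * 1.05  # almost no loss: probe upwards
        if loss_fraction > 0.10:
            return current_bps * (1.0 - 0.5 * loss_fraction)  # heavy loss: back off
        return current_bps  # in between: hold the current estimate

    estimate = 2_000_000.0  # start the session assuming 2 Mbps is available
    for loss in [0.0, 0.01, 0.15, 0.20, 0.05, 0.0]:  # loss reports over time
        estimate = update_estimate(estimate, loss)
        print(f"loss={loss:.2f} -> estimate={estimate / 1e6:.2f} Mbps")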

WebRTC has its own built in bandwidth estimation mechanism. It works. But you need your own algorithm in a media server. Jitsi has its algorithm, and it is work in progress.

The Jitsi team are now taking it to the next level, trying to not only understand availability of bandwidth but also what the best course of action should be – it is trying to discern if it is better to reduce bitrate or add forward error correction instead.

It also does that with the coolest set of tech tools available to us today – TensorFlow and machine learning.

Here’s what Emil Ivov shared during our Kranky Geek event last month:

Where to Next?

Looking for an open source alternative for your media server?

The most popular approaches out there for you are Janus and Jitsi.

Which one to pick out of the two seems to be based on personal taste more than anything else.


The post What’s New With the Jitsi Videobridge? appeared first on BlogGeek.me.

OSS IRIS Broadcast Project Launched

miconda - Thu, 11/02/2017 - 18:06
Olle E. Johansson, a very long time community member and developer of Kamailio, has announced the launch of the IRIS Broadcast Project, open source radio broadcast software:

“IRIS Broadcast is a project founded in Sweden to publish Open Source software for professional radio broadcast. Our solutions are based on the EBU and IETF standards and are built for national public radio to manage our external contribution platform.”

Olle had a related presentation at Kamailio World Conference 2017 (video), talking about using Kamailio in the radio broadcasting industry.

The code of the IRIS project is published on Github at:

It includes the repository that shows how to configure Kamailio to use it as part of the IRIS project:

With the inevitable phaseout of ISDN lines in the coming years, SIP has been adopted more and more in the broadcasting industry, allowing open source RTC software to enter this market more easily.

Wishing all the best to the IRIS project and looking forward to more usage of Kamailio beyond IP telephony.

Thanks for flying Kamailio!

FOSDEM 2018 – Call For Papers

miconda - Wed, 11/01/2017 - 18:03
FOSDEM is one of the world’s premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2018 takes place 3-4 February 2018 in Brussels, Belgium.

This email contains information about:
  • Real-Time Communications dev-room and lounge
  • speaking opportunities
  • volunteering in the dev-room and lounge
  • related events around FOSDEM, including the XMPP summit
  • social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities)
  • the Planet aggregation sites for RTC blogs

Call for participation – Real Time Communications (RTC)

The Real-Time dev-room and Real-Time lounge is about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms.

We are looking for speakers for the dev-room and volunteers and participants for the tables in the Real-Time lounge.

The dev-room is only on Sunday, 4th of February 2018. The lounge will be present for both days.

To discuss the dev-room and lounge, please join the FSFE-sponsored Free RTC mailing list: https://lists.fsfe.org/mailman/listinfo/free-rtc

To be kept aware of major developments in Free RTC, without being on the discussion list, please join the Free-RTC Announce list:

Speaking Opportunities

Note: if you used FOSDEM Pentabarf before, please use the same account/username.

Real-Time Communications dev-room: deadline 23:59 UTC on 30th of November. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the “General” tab, please look for the “Track” option and choose “Real Time Communications devroom”.

Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. It is encouraged to apply to more than one dev-room and also consider proposing a lightning talk, but please be kind enough to tell us if you do this by filling out the notes in the form. You can find the full list of dev-rooms at: and apply for a lightning talk at:

Main track: the deadline for main track presentations is 23:59 UTC 3rd of November. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track at:

First-time Speaking?

FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

Submission Guidelines

The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one.

In the “Submission notes”, please tell us about:
  • the purpose of your talk
  • any other talk applications (dev-rooms, lightning talks, main track)
  • availability constraints and special needs

You can use HTML and links in your bio, abstract and description.

If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.

We will be looking for relevance to the conference and dev-room themes, presentations aimed at developers of free and open source software about RTC-related topics.

Please feel free to suggest a duration between 20 minutes and 55 minutes but note that the final decision on talk durations will be made by the dev-room administrators based on the number of received proposals. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.

Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

Volunteers Needed

To make the dev-room and lounge run successfully, we are looking for volunteers:
  • FOSDEM provides video recording equipment and live streaming, volunteers are needed to assist in this
  • organizing one or more restaurant bookings (depending upon number of participants) for the evening of Saturday, 3rd of February
  • participation in the Real-Time lounge
  • helping attract sponsorship funds for the dev-room to pay for the Saturday night dinner and any other expenses
  • circulating this Call for Participation to other mailing lists

Related Events – XMPP And RTC Summits

The XMPP Standards Foundation (XSF) has traditionally held a summit in the days before FOSDEM. There is discussion about a similar summit taking place on the 2nd of February 2018 http://wiki.xmpp.org/web/Summit_22 – please join the mailing list for details: http://mail.jabber.org/mailman/listinfo/summit

Social Events And Dinners

The traditional FOSDEM beer night occurs on Friday, 2nd of February.

On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not so large so these dinners have space constraints and reservations are essential. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat:

Spread The Word And Discuss

If you know of any mailing lists where this CfP would be relevant, please forward this email. If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.

If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:
  • All projects: http://planet.freertc.org – planet@freertc.org
  • XMPP: http://planet.jabber.org – ralphm@ik.nu
  • SIP: http://planet.sip5060.net – planet@sip5060.net
  • SIP (Spanish): http://planet.sip5060.net/es/ – planet@sip5060.net

Please also link to the Planet sites from your own blog or web site as this helps everybody in the free real-time communications community.

Contact

For any private queries, contact us directly using the address fosdem-rtc-admin@freertc.org and for any other queries please ask on the Free-RTC mailing list:

The dev-room administration team:
Saúl Ibarra Corretgé
Iain R. Learmonth
Ralph Meijer
Daniel-Constantin Mierla
Daniel Pocock

FUSECO Forum 2017

miconda - Tue, 10/31/2017 - 18:01
The 8th edition of the FUSECO Forum conference is organized by the Fraunhofer Fokus Institute in Berlin, during Nov 9-10, 2017:

The event’s chairman is once again Prof. Dr. Thomas Magedanz, from TU Berlin and Fraunhofer Fokus.

This year’s event will again feature three dedicated tracks consisting of tutorials and interactive workshops on the first day, namely:
  1. Multi-access Network Technologies in 5G-Ready Networks
  2. 5G Edge and Core Software Networks and Emerging 5G Applications
  3. Network Virtualization and Network Slicing for 5G-Ready Networks
The second day features a full-day conference uniting these topics under one umbrella: “The 5G Reality Check: 5G-Ready Applications and Technological Enablers for 5G Implementation”.

Daniel-Constantin Mierla, co-founder of the Kamailio project, will participate in the event, being part of the panel “Practical Experiences in Moving to NFV Infrastructures and Open Challenges” during the first day. The agenda of the two days is available at:

Fraunhofer Fokus, a research institute in next generation communication technologies, is the place where the Kamailio project was started back in 2001 (as SIP Express Router, aka SER). If you want to learn about what’s going to happen next in RTC, this is a must-attend event.

Besides using it in research projects, the Fraunhofer Fokus Institute keeps close to the Kamailio project, hosting and co-organizing all the editions so far of the Kamailio World Conference.

Thanks for flying Kamailio!

Kranky Geek 2017: What Does the Pulse of WebRTC Tell Us?

bloggeek - Mon, 10/30/2017 - 12:00

Kranky Geek 2017 has been a roller coaster event for me. Time to discuss what I learned about WebRTC last week.

Yap. We had a full room.

Well… More like 2 full rooms.

When talking to Lawrence some time in the afternoon, he joked with me, saying that apparently we have a problem – the overflow room is overflowing.

The best problem an event organizer could ever ask for.

If you are looking for the event videos, then they are already on YouTube.

I want to share some of my thoughts prior to the event and during to the event. And if possible, try and shed some light on where we’re headed from here.

Want to keep abreast of the WebRTC ecosystem? Join the WebRTC Weekly

Challenges Abound

Putting up an event is a stressful undertaking. There are a lot of aspects that need to be covered, with the constant worry that you’ll end up forgetting something or that something will screw you over. Both are guaranteed to happen no matter how much planning and effort you put into it.

This time, our challenges started early on. It was somewhat harder than usual to decide how to price the event to make it worthwhile doing. Kranky Geek events are expensive to run. From the beginning, we’ve aimed for events that are free to attend (I consider a $10 admission fee that gets donated as a free to attend event). This left covering our expenses – and making some revenue out of it – as something that relies on sponsors.

Kranky Geek is all about quality content. High quality content. Top notch. The best you can find.

Which means that we select the topics we want. We then hunt for the speakers that fit into that. And we work with our speakers to make them shine.

This process doesn’t always work with sponsors… it is sometimes hard to explain how we operate and why. And at times, sponsors can focus on hard selling their warez, which doesn’t fit into the Kranky Geek spirit (and definitely not to our audience).

This time, it took us slightly longer than usual to get the sponsors onboard and to be certain that we can pull off the event.

It also caused some more stress than usual among us partners. Kranky Geek is a joint effort of 3 people: Chris Koehncke (aka Chris Kranky), Chad Hart (the living spirit behind webrtcHacks) and me.

We don’t always agree, but somehow we fit well together, each one covering the other one’s shortcomings. We make a good team for getting these events done – I hope.

Why am I sharing all this?

To set the stage for what comes next for Kranky Geek, but also to explain the amount of work, effort, time, stress, pain and love that has been put into the Kranky Geek events in general and into this one in particular.

It hasn’t been all happy, but I am proud of the result and happy that we did this.

We Had a Fire Drill!

During the day, we’ve had our share of technical challenges.

The projectors in the main room didn’t work at the beginning (that was before we started the day), and then a few other issues cropped up on us.

Doing this event in Google’s San Francisco office meant we had the best A/V team in the world on site to help us. The crew Google is working with there is top notch. The best I’ve worked with. They made the problems seem easy to solve.

We had this to deal with…

Great @KrankyGeek schedule at #webrtclive this year includes exercise and fresh air, with @Google providing simulated earthquakes & flames! pic.twitter.com/r0QHATG5Wj

— Lawrence Byrd (@LawrenceByrd) October 27, 2017

A week before the event we were told we will have a fire drill in the building on the day of the event. The time kept moving around, settling at 2pm. We’ve scheduled our breaks and sessions around it, with a huge worry of having people leave once the fire drill started.

(that’s Kranky going down the staircase during the drill)

We decided to embrace the fire drill and tried to celebrate it with our audience, and I hope we succeeded. Back from the fire drill, we had almost everyone back.

We should probably make fire drills an integral part of Kranky Geek events.

Time to stop rambling.

The Event Recordings

The recordings are available online.

You can find them here.

We’ve had to reorder the sessions from our original agenda due to constraints we had with some of our speakers – late arrivals and early exits.

So I’ve reordered the sessions here. Following this are the 13 sessions we had, in the original order we wanted (not that it really mattered).

I added some of my commentary on what I liked and learned in each of the sessions.

Kranky Geek Team

Nothing to say here really, besides the fact that I envy Chad’s ability to create slides and present them.

Facebook

This is the first time we had Facebook join us and share a story at Kranky Geek. We had the pleasure to have Li-Tal Mashiach an Engineering Manager at Facebook do the talk.

The numbers there are impressive as hell. 400 million monthly active users doing voice and video calls on Facebook Messenger using WebRTC. 400 million.

The next one who asks me if WebRTC is being adopted – I’ll just say 400 million. And then he’ll complain that this isn’t an enterprise application…

Anyways, what I found really interesting is how Facebook is dealing with optimization. The effort placed in the decision making process around video codecs, bitrates, etc.

WebRTC comes in a neat open source package that anyone can use. But it needs a lot more love and care when it comes to making it work at scale – just like any other technology.

TokBox

Badri Rajasekar, CTO of TokBox, shared an experiment that TokBox has been running recently. It was about using head tracking technology to improve video quality.

The idea behind it is that you can scale up a region of interest in an image while sacrificing other regions, which ends up devoting more of the encoded pixels to that region of interest.

The great thing here is that you do it without touching the encoder or the decoder. Why do we want that? Because the more generic you can make an encoder, the easier it is to implement it in hardware.
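To illustrate the idea (my own sketch, not TokBox’s code): crop the region of interest onto a canvas and send the canvas stream instead of the raw camera track, so the unmodified encoder spends its bits on the part of the frame that matters.

  // Illustration only: devote the encoded pixels to a region of interest
  // without touching the encoder or decoder.
  function streamRegionOfInterest(video: HTMLVideoElement): MediaStream {
    const canvas = document.createElement('canvas');
    canvas.width = 640;
    canvas.height = 480;
    const ctx = canvas.getContext('2d')!;

    function draw() {
      // Hypothetical fixed region; a real system would derive this from head tracking.
      const roi = { x: 200, y: 100, w: 320, h: 240 };
      ctx.drawImage(video, roi.x, roi.y, roi.w, roi.h, 0, 0, canvas.width, canvas.height);
      requestAnimationFrame(draw);
    }
    draw();

    return canvas.captureStream(30); // attach this stream's track to the RTCPeerConnection
  }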

VoiceBase

Walter Bachtiger, Co-founder and CEO of VoiceBase talked about NLP (Natural Language Processing), and how great insights can be derived out of voice.

It was a bit creepy, understanding how accurate machine learning can be at scale in a contact center.

The part I liked best in this one was how a contact center can decide within 30 seconds how likely you are to buy – if only the people who call me would have used it… it would have saved me a lot of time as a customer.

Atlassian

Emil Ivov, Chief Video Architect at Atlassian, and a serial speaker at Kranky Geek gave a very interesting talk about machine learning and bandwidth estimation.

The team at Jitsi now use TensorFlow to sift through the metadata they have on calls, to try and understand how the network behaves and what strategy would work best in improving network quality.

It seems like reducing bitrate doesn’t always have the necessary effect on things, and FEC might end up working better.

Vidyo

Roi Sasson, CTO of Vidyo, talked about scale.

This wasn’t about how to scale a service, but rather how to scale a single call. Want 10 people on a call? You may not need to worry, but if you go to a 100 or a 1,000 – you need to think differently about it.

Which is where taking SFUs and cascading them, both within a single data center and geographically, starts making a lot of sense.

WebKit

For the first time, we had a representative from Safari. We got to hear from Youenn Fablet, a contributor to WebKit, what Apple’s default browser does with WebRTC and how.

It was great to have WebKit join us at Kranky Geek, and to hear their fresh thinking about privacy in WebRTC and how they’ve taken care of that in Safari.

Peer5

Hadar Weiss, Co-founder and CEO of Peer5 talked about P2P CDN and using the WebRTC data channel.

We never did have a focused talk at the data channel in Kranky Geek, so this was a first.

I found it really interesting how Peer5 does things differently than the rest of the WebRTC community. Mostly because they care less about call setup times and TURN connectivity and a lot more about throughput.

Hadar showed a few techniques I really liked, like the simple compression of SDP messages (which starts to make sense when you process and send millions of these a day).
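As a rough illustration of that last trick – assuming a deflate library such as pako, and certainly not Peer5’s actual code – shrinking an SDP blob before it hits the signaling channel is only a few lines. SDP is verbose, repetitive text, so it compresses well, which adds up when you handle millions of offers and answers a day.

  import { deflate, inflate } from 'pako'; // assumed dependency; any deflate implementation would do

  function packSdp(sdp: string): Uint8Array {
    return deflate(sdp);                      // compress before sending over signaling
  }

  function unpackSdp(packed: Uint8Array): string {
    return inflate(packed, { to: 'string' }); // expand back on the receiving side
  }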

Slack

From Slack we had Lynsey Haynes and Andrew MacDonald.

Two things interesting about this session:

  1. The shift they made from a custom WebRTC implementation towards the use of Electron with a vanilla WebRTC implementation in Chromium – all due to maintenance costs
  2. Switching from a custom Janus media server towards a self-developed one written in Elixir

During the Q&A (which didn’t make it to the recording), Slack were asked about their support of Firefox. Andrew answered that support for Firefox is unlikely to come due to the shift of Slack towards focusing on fewer browsers and on their Electron-based desktop application. I see this thought process taking place elsewhere as well – it doesn’t bode well for the future of browsers.

Twilio

Rob Brazier from Twilio showed an AR (Augmented Reality) use case.

I’ve never been a fan of these acronyms such as IOT, AR, VR. Marrying them with WebRTC always seemed to me somewhat forced.

That said, Rob did a great job in making a case for AR in communication interactions. I am sure more exist.

Frozen Mountain

Anton Venema, CTO of Frozen Mountain was there to give an interesting demo.

He cobbled together text-to-speech, translation and speech-to-text on their media server platform, doing a demo of live language translation taking place in a WebRTC session.

Google

Niklas Blum, Huib Kleinhout and Justin Uberti from Google shared the progress made in WebRTC towards WebRTC 1.0.

This one had a lot of details for developers about things they need to know with the latest versions of Chrome and what to prepare for moving forward.

Appear.in

This year’s closing session was given by Philipp Hancke of appear.in. He’s a repeat speaker at Kranky Geek.

Philipp delved into NSFW (Not Safe For Work) related technologies, experimenting with recognizing such content and deciding what to do with it.

It was an interesting mix of technologies, human behavior and compromises.

Our Event Sponsors

Did I already say that Kranky Geek relies on its sponsors?

This year we had 6 of them:

I’d like to again thank our sponsors.

Diversity and Kranky Geek

For the first time, we had female speakers. Great female speakers.

I want more of this.

If you are a woman, or know of a woman. One that has technical WebRTC chops. And a desire to share your experiences. Contact me…

What’s Next for Kranky Geek?

We weren’t sure if we would have another Kranky Geek event. But due to the success of the one we just had, there’s a high probability that we will do another one next year.

So…

Get ready for Kranky Geek 2018.

With more great content, and maybe – a fire drill.

And while we’re at it – if you want to increase your visibility in the market, know that sponsoring a Kranky Geek is a great way to go about it. So put some budget aside for it. Q3/Q4 2018 is when it will take place.

Want to keep abreast of the WebRTC ecosystem? Join the WebRTC Weekly

The post Kranky Geek 2017: What Does the Pulse of WebRTC Tell Us? appeared first on BlogGeek.me.

Kamailio v5.0.4 Released

miconda - Wed, 10/25/2017 - 19:39
Kamailio SIP Server v5.0.4 stable is out – a minor release including fixes in code and documentation since v5.0.3. The configuration file and database schema compatibility is preserved, which means you don’t have to change anything to update.

Kamailio v5.0.4 is based on the latest version of GIT branch 5.0. We recommend those running previous 5.0.x or older versions to upgrade. There is no change that has to be done to the configuration file or database structure compared with the previous releases of the v5.0 branch.

Resources for Kamailio version 5.0.4

Source tarballs are available at:

Detailed changelog:

Download via GIT:

# git clone https://github.com/kamailio/kamailio kamailio
# cd kamailio
# git checkout -b 5.0 origin/5.0

Relevant notes, binaries and packages will be uploaded at:

Modules’ documentation:

What is new in the 5.0.x release series is summarized in the announcement of v5.0.0:

Thanks for flying Kamailio!

Kamailio Autumn Events Summary

miconda - Mon, 10/23/2017 - 14:33
Time flies – it feels like just returning from summer vacation, but it’s already past mid-autumn and Kamailio members and community members have been present at several events worldwide.

After ClueCon in Chicago, USA, which ended the summer events in August, it was time for TADHack Global 2017, running two rounds during Sep 22-24 and Sep 29-Oct 1.

IIT RTC Conference happened during Sep 25-28, 2017, in Chicago, hosted as usual by the Illinois Institute of Technology.

Astricon 2017 (Orlando, FL, Oct 3-5) was a big hit for the Kamailio project – a good location and a great time meeting many kamailians and friends from the VoIP world.

November is rather busy; next are the events where you can meet people from the Kamailio project:

New events may be added to the list, keep an eye on our website!

Should you participate in a local or global event and involve Kamailio in some way, contact us – we are more than happy to publish an article about the event.

Thanks for flying Kamailio!

6 Ways Vendors Sell WebRTC Developer Tools

bloggeek - Mon, 10/23/2017 - 12:30

How can you make a living from WebRTC? You offer WebRTC developer tools.

One of the interesting questions is around monetizing WebRTC. The truth is, it is hard to monetize a concept, or a piece of technology. Kranky said it well over 3 years ago – WebRTC Market Size (is 0).

What does this mean? That you can either make money by selling tools to developers who need WebRTC. Or you make money by offering a service that makes use of WebRTC, but we can now debate if that’s WebRTC or not.

Anything that isn’t WebRTC developer tools falls into other market niches – healthcare, education, gaming, … all these compete and create business far from the WebRTC core itself.

Want to learn who’s offering WebRTC Developer Tools? Check out my WebRTC Developer Tools Landscape infographic.

WebRTC developer tools though – that’s where a small WebRTC market niche exists. And there are several ways to make money in this market. Here are 6 different types of services you can offer to sell WebRTC to developers – some vendors will offer multiple services.

#1 – Sell a Managed Service (SaaS)

You can sell a managed service.

Find something that developers need.

Create a service that offers that solution.

Sell it in an XaaS model.

  • We do it at testRTC for testing and monitoring WebRTC services.
  • Callstats.io does that for monitoring.
  • XirSys and a few others offer a managed service for NAT Traversal (=someone else hosts the TURN and STUN servers that your application uses)
  • Mobilinq and others offer a customized hosted offering
  • And then there are CPaaS vendors. Many of them offering WebRTC as well (check out this report on WebRTC CPaaS)

This market is rather challenging, as the name of the game is scale, and getting there is hard. For some reason, this is also where most customers end up penny pinching.

#2 – License Software

You can develop a product that others need and offer it under a commercial license.

There are those who want or need to run their own service, not relying on managed services. And at times, they are happy to pay for a commercial license that comes with an SLA and someone you can shout at and threaten.

The best thing about most commercially licensed software is that the people behind it work on that software. And once they have paying customers, they are bound by contracts to support and maintain it, usually for long periods of time.

In this category, you can find companies such as Dialogic, Frozen Mountain and SwitchRTC.

#3 – Support and Customization of Open Source

Open Source doesn’t mean free.

People need to be able to make money out of their work – even if they are idealists who are just contributing to the community as a whole.

The way to go about doing that is by writing software that then gets distributed freely under an open source license. This allows anyone to take that software, use it, modify it and even try and contribute back to it and improve upon it.

For popular open source projects, this creates a nice feedback loop that everyone enjoys. For the most obscure projects, it remains the work of a single maintainer.

So how can someone make a living out of open source? By offering one of three different alternatives (usually a mix of them):

  1. Support contracts – if you’re the owner and main maintainer of the open source, then you can sell support contracts. Those who use your open source project may have questions, and giving them priority support can be an income source. For companies, having support available on the open source projects they use can be an important aspect of choosing one open source project over another
  2. Customization work – companies who adopt open source projects sometimes need modifications to these projects. They can attempt to do it on their own, or they can just have the main maintainer of the project do it for them at a price
  3. Commercial license – LGPL, GPL, AGPL and other open source licenses are often considered as cancerous licenses for commercial products. The reason for that is that they “contaminate” the code written around them forcing their license terms on that code as well. There are other open source licenses that are more tolerable to companies (more about it here). Which is why in many cases, a company would prefer paying to get a commercial license instead of using the free open source licenses of a project. Dual licensing is another way of making a living

Jitsi, for example, was distributed under an LGPL license. This allowed the team behind it to make a living through all 3 approaches: support contracts, customization work and offering commercial licenses. After its acquisition by Atlassian, it switched from LGPL to a more lenient APL license. The main reason? Atlassian had other objectives for Jitsi and they weren’t about deriving direct monetary value from it. The Jitsi team no longer offers paid support or customization – it doesn’t mean they don’t support the code base, it just means that you can’t pay them for priority support.

Kurento got acquired by Twilio. Naevatec, the company behind Kurento made most of its direct revenue from Kurento by offering support and customization work. After the acquisition, Naevatec was left without its engineers that were experienced with Kurento and has since been struggling to maintain the Kurento codebase.

Janus is still an open source project. The company behind it offers support and customization work if someone needs it.

To be able to make a living out of an open source project, it needs to be one that is mission critical to the companies who use it, and it needs to be popular enough. If you plan on taking that route, remember that maintaining such a project can make you proud at the number of companies that end up adopting it, but may well frustrate you if you look at how many of these companies won’t be willing to pay for it at all.

#4 – Conduct Analysis

This is something I wasn’t aware of up until several months ago.

There’s this interesting market niche in WebRTC, and I am not sure how prevalent it is with other technologies.

It is of companies and entrepreneurs who set out building a product without enough knowledge and experience in WebRTC. They try to learn as they go along, floundering while at it. There are many reasons why this happens:

  • They are doing it with an internal team that doesn’t have the skill set
  • They outsourced the project to an outsourcing vendor who knows nothing about WebRTC, but knows how to build a mobile app, a website or even a VoIP service
  • They outsource the project but don’t scope it properly, getting a product that isn’t what they really wanted – and then blaming the outsourcing company about it
Need to beef up your WebRTC experience? Enroll your developers in the Advanced WebRTC Architecture course.

Enroll to the WebRTC course

When this happens, companies start looking for alternatives. And there really are only 4 things to do here:

  1. Close shop and go home. Consider this a failure and just move on to other projects
  2. Reboot. Look at all of it as sunk costs and start from scratch
  3. Fix. Get your team or pay the outsourcing vendor (or other outsourcing vendors) to continue working on the project until it is working
  4. Salvage. Get an expert to look at the existing codebase, analyse it, offer his advice and even let him do the fixing

Salvage is somewhat different from fixing, as it focuses on analyzing the whole architecture along with the implementation instead of just diving right in and continuing with the same approach that brought you to where you are in the first place.

And there are companies who offer such packaged services. Look at Blacc Spot Media and WebRTC.ventures for that if this is what you’re after.

#5 – Outsource Your R&D Skills

You’re good with coding and know WebRTC?

Great.

Outsource it to others.

Many of the people who contact me are after developers with WebRTC experience. Some of them want to have these developers work as freelancers. Others want to outsource to a company. Others still are looking to recruit skilled workers, but understand they may end up outsourcing anyway.

There are quite a few companies and individuals who offer their outsourcing services around WebRTC.

The known freelancers who do WebRTC work are usually fully booked. It is hard to get their attention and time for new projects, but it is worth a try.

The outsourcing companies come in different shapes and sizes. Many don’t have the relevant skillset. Some will place inexperienced developers on your project. Some will do the best work for you.

Quality here varies greatly, so you should take the time to pick the right outsourcing vendor to work with.

In many cases, my role in such projects is to assist in deciding on the exact requirements, selecting the outsourcing vendor and “translating” the requirements between the company and the outsourcing vendor.

#6 – Consult

There are those who simply offer consulting (I do that by the way).

Their role is to assist in the thought processes – be it the initial phases of helping in fleshing out the product’s roadmap and differentiation, assisting in the competitive analysis, in writing down the RFPs (or the response to an RFP), selecting vendors, suggesting architecture, etc.

Many of the experienced outsourcing vendors will usually add a consulting component into their service, and their customers will usually benefit from that consulting.

What’s Next?

Looking to start a WebRTC project? Trying to understand how to get that done? Know that the market is dynamic and always changes.

Which is why I am in the process of updating two resources on my site:

  1. Choosing a WebRTC API Platform report
    1. If you think a vendor that isn’t in the report needs to be added to it – tell me
    2. If you plan on purchasing this report, then the best time would be from now until the publication of the update (see below)
  2. WebRTC Developer Tools Landscape will be updated soon – if you miss vendors here – tell me
My WebRTC API Report is getting an update and you’re getting a discount. From now, until the report gets updated during December, there’s a 20% discount. The discount will include the upcoming update (and a full year of updates).

Get your discounted report

 

The post 6 Ways Vendors Sell WebRTC Developer Tools appeared first on BlogGeek.me.

AstriCon 2017 Remarks

miconda - Wed, 10/18/2017 - 14:31
The 2017 edition of AstriCon was very intense, or at least it was for me (Daniel-Constantin Mierla, Asipto) and the Kamailio presence at the event. Three days without any time to rest!

Before summarising the event from a personal perspective, I want to give credit to the people that helped at the Kamailio booth and around it. Big thanks to Fred Posner (The Palner Group / LOD) – he did the heavy lifting on all booth logistics, from preparing the required things in advance, to setting up the space, banners and rollups, stickers, a.s.o. Of course, Yeni from DreamDayCakes again baked the famous Kamailio and Asterisk cookies, very delicious bits that people could taste at our booth.

Carsten Bock from NG-Voice was there with his Kamailio-VoLTE demos and devices. Torrey Searle’s giveaways from Voxbone were very popular again. Alex Balashov from Evariste Systems ensured that anyone in doubt understood properly the role of Kamailio in a VoIP network and the benefits of using it along with PBX systems. Joran Vinzens from Sipgate completed our team, being around wherever we needed him.

It was a great time to catch up with many friends, VoIP projects and companies in the expo area, sharing the space with Dan Bogos from the CGRateS project, chatting with the guys from Obihai, IssabelPBX, FreeSwitch, Janus WebRTC Gateway, Simwood, Bicom, Homer Sipcapture, Telnyx, Greenfield…

There were four presentations by the people at the booth — Carsten, Fred, Joran and I had talks on Wednesday or Thursday. On Tuesday, Fred, Torrey and I participated in AstriDevCon, as always a very good full-day session with technical debates, with Matthew Fredrickson and Matt Jordan coordinating and talking about what’s expected next in Asterisk.

Close to the end of the event on Thursday, it was the open source project management panel, with me among the panelists. Being completely warmed up and with some pressure from James Body, I also did the Dangerous Demos, where the Ubuntu Phone decided to reboot as I was on stage, leading me towards the Riskiest Demo Prize (aka Crash & Burn). Carsten and Torrey did dangerous demos as well, with Carsten being the runner-up on one of the tracks, which secured him a nice prize as well.

Kamailio related presentations will be collected at:

I expect that recordings of the sessions will become available in the near future from the organizers of AstriCon.

During the breaks and evenings, I enjoyed an amazing time with friends and kamailians from around the world. There is no time to get bored with people such as James Body, Simon Woodhead, Susanne Bowen, David Duffett, Nir Simionovich, Lorenzo Miniero … and many others that I fail to remember at this moment…

It was definitely one of the best AstriCons ever – credits to Digium and the organizing team! Kamailio had a great time there, see you at the next editions!

Thanks for flying Kamailio!

Development For Kamailio v5.1 Series Is Frozen

miconda - Tue, 10/17/2017 - 14:30
A short note to mark the freezing of development for the Kamailio v5.1 series.

For a few weeks, no new features will be pushed to the master branch. Once branch 5.1 is created (expected to happen 3-4 weeks from now), the master branch becomes open again for new features. Meanwhile the focus is going to be on testing the current code.

Work on related tools (e.g., kamctl) or documentation can still be done, as well as getting the new modules in 5.1 in good shape, plus adding exports to the kemi interface (which should not interfere with old code).

The entire testing phase is expected to be 4 to 6 weeks, and then v5.1.0 should be out – likely by the end of November.

What is new in the current master branch compared with the previous stable series (v5.0.x) will be collected at:

Changes required to do the update from v5.0.x will be made available at:

Helping with testing is always very appreciated; should you find any problem in the current master branch, just open an issue on the bug tracker at:

Thanks for flying Kamailio!

Do We Need WebRTC Events?

bloggeek - Mon, 10/16/2017 - 12:00

Yes. We do need WebRTC events. Which is why you should join us at Kranky Geek next week.

I’ve been asked a few times in the past several months by people about events to go to.

Should I go to that event? Will it help me with my current WebRTC project?

What event should I go to, considering I am in need of WebRTC technology?

Where can I travel to learn about WebRTC? Is there a specific event?

Which event will guide me towards what I need with WebRTC? Have me understand the market dynamics? Be a place to mingle with the industry?

Register for a Kranky Geek AMA webinar – a week ahead of our event, Chad Hart will be joining me to discuss WebRTC statistics and what to expect from this year’s Kranky Geek event

Register to the pre-event AMA webinar

The problem with events and WebRTC

If you’re in telecom, then this is how you see WebRTC:

For telecom, WebRTC is just a piece of telecom. An evolution of it. Some way of getting the telecom and VoIP infrastructure into a web browser.

If you’re in web development, then this is how you see WebRTC:

For web developers, WebRTC seems just like another piece of the HTML5 technology stack. You learn a few JS APIs. Maybe some nifty CSS and a few HTML5 tags and you’re done.
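And to be fair, the first taste really is that small. A minimal sketch of the “few JS APIs” view – grab a camera and show it locally:

  // The deceptively small API surface of WebRTC for a web developer.
  // Everything beyond this – signaling, NAT traversal, media servers, scaling – is where the real work hides.
  async function showLocalCamera(videoEl: HTMLVideoElement): Promise<void> {
    const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
    videoEl.srcObject = stream;
    await videoEl.play();
  }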

And this is how I see WebRTC:

Now, most WebRTC related events so far have been initiated by people in the telecom industry. The end result is usually a very narrow prism of what WebRTC is and what it is capable of achieving. And the side tracks done in the web related events? Most of them end up explaining what WebRTC is, not going nearly deep enough.

The end result has been unsatisfying. At least for me.

This was one of the reasons I started Kranky Geek along with the help of Chris Koehncke some 4 years ago. We’ve since had Chad Hart join.

4 years into it, the question starts to crop up – do we still need WebRTC events?

Why do we still need WebRTC events?

Is there still room for an event with a WebRTC-centric theme to it?

Shouldn’t WebRTC just be wrapped into all the telecom, communications and web events out there and be done with it?

I mean, we’ve got enough meetup groups around the world for this technology, but who wants to attend a longer event on WebRTC?

I think it boils down to that illustration up there – the one where WebRTC is smack in the middle of VoIP (telecom) and the web (internet). In a way, we’re still figuring out what that means exactly. How does the infrastructure of such a thing need to be designed; how do you scale it; what kind of monitoring mechanisms do you need to have in place; what are the team sizes, resources and time needed to get something from a proof of concept to production.

WebRTC might not be new, but the fact that it relies on a mix of technologies and disciplines makes for a rather complex and interesting ecosystem.

Join us at Kranky Geek SF 2017

Our next Kranky Geek event takes place on October 27 in San Francisco.

Kranky Geek is about WebRTC developers. Our role is to educate and share the experience coming from developers to developers.

The theme we’ve selected this time is twofold: implementation and beyond RTC.

  1. Implementation: Production ready systems. Those that have battle scars and live to tell their story. We have companies who’ve been running WebRTC in production, at scale for quite some time, and now they are here to explain what they are doing – the challenges they faced and the solutions they came up with
  2. Beyond RTC: You’ve probably heard a word or two about VR, AR, NLP, AI – acronyms that seem to be capturing the news and the imagination lately. We’ve decided to bring in a few experts in this field to explain how that fits into the story of WebRTC

We reached out to Youenn Fablet, who works on the WebKit WebRTC implementation. He will be speaking about iOS and Safari support of WebRTC.

Google will talk about their progress and roadmap of WebRTC.

Talking about implementations, we will have Atlassian, Facebook, Peer5, Slack and Vidyo – each talking about different aspects of implementations and scaling.

Affectiva, TokBox, Twilio and VoiceBase will cover issues beyond RTC.

For our end-of-day session, we will have a repeat speaker at Kranky Geek – Philipp Hancke from appear.in – working his way around NSFW. Knowing Philipp (and seeing his draft slides), you definitely want to stick around for this one.

Register for a Kranky Geek AMA webinar – a week ahead of our event, Chad Hart will be joining me to discuss WebRTC statistics and what to expect from this year’s Kranky Geek event

Register to the pre-event AMA webinar

There’s a token admission fee in place, to control headcount and showups (free events tend to be under-attended, and we’re shifting away from that). The way this event ends up being funded is by our sponsors, who make this thing happen at all. They are part of our speakers and play an important role in the event itself.

This time, we’ve got Frozen Mountain, Google, Tokbox, Twilio, Vidyo and VoiceBase as our sponsors.

See you at Kranky Geek.

 

The post Do We Need WebRTC Events? appeared first on BlogGeek.me.

Freezing Development For Kamailio v5.1

miconda - Mon, 10/09/2017 - 23:25
The development of new features for the next major release, Kamailio v5.1, is going to be frozen on Monday, October 16, 2017. The master branch has received plenty of new features since the release of v5.0, which was out by the end of February 2017.

The next release will bring at least 7 new modules (although we are expecting one or more to make it in during the next days). The list of new features, not fully up to date, is collected in the wiki page at:

After the freeze date, we start the testing phase, which is expected to last for 4 to 6 weeks; then we will have the first release in the 5.1 series, respectively version 5.1.0.

Should you have plans to include new features in v5.1, it is time to hurry up and have the commit or pull request ready by the end of next Monday.

Thanks for flying Kamailio!

Thoughts about Twilio Studio and the Future of CPaaS

bloggeek - Mon, 10/09/2017 - 12:00

How does Twilio Studio fit into Twilio’s Ask Your Developer campaign?

Last month I participated in Twilio’s Signal event that took place in London. I was invited to speak there on test automation in WebRTC. You can watch my video session on YouTube. That isn’t the point of this article though.

Signal is where Twilio announces most of its major new releases. Last time, earlier this year, it was all about the engagement cloud – a restructuring of how Twilio explains its services – and a migration from a single channel world into an omnichannel one. I’ve written at length about it in Is Twilio Redefining CPaaS (hint: it is). I wrote there:

Twilio has introduced a new paradigm for the way it is layering its product offerings.

In the process, it repositioned all of its higher level APIs as the Engagement Cloud. It stitched these APIs to use its lower Programmable Communications APIs, adding business logic and best practices. And it is now looking into machine learning as well.

It is a powerful package with nothing comparable on the market.

Twilio are the best of suite approach of CPaaS – offering the largest breadth of support across this space. And it is making sure to offer powerful building blocks to make developers think twice before going for an alternative.

I think that at Signal London 2017, they outdid that with the introduction of Twilio Studio.

Trying to figure out the best approach for developing your application? Check out this free WebRTC Development Paths Matrix to understand your alternatives

Get your WebRTC Development Paths Matrix

Before We Begin

You might want to take the time to watch Signal London 2017 keynote by Jeff Lawson.

A large part of the London keynote was a rehash of what was said in San Francisco earlier this year. It was about the shift towards omnichannel and the engagement cloud. The words that stuck with me when explaining the engagement cloud were BEST PRACTICES, BUSINESS PROCESSES, REINVENT THE WHEEL (=what not to do).

I’d like to touch in this article on a few main themes and approaches that Twilio is taking, which are shaping its vision and execution at the moment.

“Ask Your Developer” is The Wrong Approach

I’ll start with where I think Twilio is missing the mark.

Ask Your Developer took center stage. Jeff Lawson wanted companies and the business people inside them to go ask their developers what they can do. How they can improve the business.

It gives us developers a great feeling of being in control. Of being valued. But for the most part, and for most developers, this is probably the wrong approach.

Most developers would be happy to work by spec.

The few that aren’t will be promoted quite fast to system architects, managerial roles in development or god forbid to product managers. Why? Because they can see the big picture.

They are the people that get asked. Or the people that answer without asking.

We should be asking our developers, but it should not be our strategy.

Which is where the miss came.

Twilio announced later on in the keynote Twilio Studio. A tool that takes some of that control from developers, putting it in the hands of decision makers.

You no longer have to ask your developer. You can work with him. Together.

More about this later.

The Code that Counts

Some 20 minutes into the keynote, Jeff Lawson invited Patrick Malatack. He started with this:

It was core to how Twilio approaches its customers. Patrick explained that this is the most important code – it is the code that counts.

The idea being that your life as a developer should be made easy, so Twilio is adding not only APIs that serve the functions you need, but also a runtime behind it to facilitate rapid development and deployment – from helper libraries, to logging and debugging facilities, the new Twilio Functions, etc.

I think the code that counts here is developers focusing on their specific business problem – abstracting everything else.
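To make “the code that counts” concrete, here is roughly what it looks like with Twilio’s Node.js helper library – a hedged sketch, with the phone number and environment variable names as placeholders:

  import twilio from 'twilio'; // Twilio's Node.js helper library

  // A few lines of business logic; the carrier connectivity, queuing, retries and
  // logging behind them are what the runtime is meant to abstract away.
  const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);

  async function notifyCustomer(to: string): Promise<void> {
    await client.messages.create({
      from: '+15551234567', // hypothetical Twilio number
      to,
      body: 'Your order has shipped.',
    });
  }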

It ended up being a concept of what Twilio Runtime is:

The yellow parts in that screenshot above are the newest announcements. The rest were there earlier. Twilio isn’t only adding more features to its platform – it is beefing up its runtime, making it another competitive advantage over many others when it comes to pure SMS and voice capabilities.

The message here is an interesting one, but it wasn’t polished enough. I think this is where we will see more in future Signal events from Twilio.

Twilio Studio

At about 1:24:00 of the keynote, Jeff Lawson introduces Twilio Studio.

It starts by explaining that building is fun but maintaining isn’t (he is correct).

The goal, based on Jeff Lawson, is to massively accelerate roadmaps of Twilio’s customers.

I think it is a lot more than that.

Because this is so new and fresh, still in developer preview (and something I’ve started playing with a bit), it is hard to write this in an ordered fashion. Which means I’ll be going for a bulleted list instead

  • This is a really cool tool. From the demos and the time I’ve spent with Twilio Studio, it is really powerful
  • Getting UI tools that handle state machines for developers is not easy. The Twilio Studio experience has a nice feel to it – I liked the experience
  • Twilio Studio reminds me of Zapier. But where Zapier has a 1D linear approach to tooling and integration, Studio is its big brother, offering 2D visualization to communication state machines
  • There’s no support for the visible communication parts in Twilio Studio. Yet
    • You can send and receive programmable SMS and voice with it
    • A bit of messaging as well
    • But you can’t connect it to the voice in your SDK or manage a video chat room with it
    • This will need to be added later at some point to complete the puzzle
  • Is Twilio Studio the centerpoint of a customer’s flow or a corner piece of it?
    • Twilio Studio can be used to express your whole business process, fleshing out the important parts and branching away to your integrations
    • It can also be used to solve a minor piece of your bigger puzzle
    • It is up to you to decide how you use it
  • In the hands of an experienced architect, Twilio Studio will offer super powers
    • There are many ways to define and template what you need
    • Some approaches will work better, offering more flexibility
    • The focus should be around inclusion of as many stakeholders in the company as possible – being able to show them and interact with them by looking at a Twilio Studio Flow
  • Here’s a question: Is Twilio Studio a tool for Developers? Designers? Implementers? Analysts?
    • Twilio Studio today is fit for developers, but it won’t stay that way long
    • It can be used by implementers that know a bit about code but aren’t developers
    • It can be used to open a discussion between a developer and a business analyst
    • This is a way for expanding the target market within a Twilio customer from solely one of developers towards a larger audience. The motto is no longer “Ask your developer”
  • Twilio Studio can be enhanced
    • It is a great first step, but the next ones are a lot more interesting
    • They are also a lot more threatening to competitors
    • If Twilio succeeds here, it will dominate this space with the companies that matter the most
  • Twilio Studio is the ultimate vendor lock-in
    • Enterprises will adopt it, due to its many benefits
    • They will find it hard to switch because of these benefits
    • Enterprises won’t want to switch… Twilio Studio will be too valuable. Too transformative

This tool can do to contact centers what marketing automation is doing to email newsletters. If I were a contact center vendor… I’d consider Twilio Studio my biggest threat moving forward.

Pricing

There were 3 price points for Studio:

  1. FREE – up to 1,000 Engagements. To get developers hooked up to this tool and make them not bother with actually “developing” using “code”. It is also a great way of getting developers to NOT look at other competing vendors
  2. The minimal plan, at +$100/month price point. Covers up to 20,000 Engagements. This is probably where most small companies will be “living”, which is just fine
  3. The enterprise, unlimited plan, at $10,000/month or more. Expensive, but it depends how much traffic you’re handling

Then there’s the question of what an Engagement is exactly. Is it a flow of a single event in a Flow? Is it a widget being accessed inside a Flow? In a 2-way bot conversation, each message exchange is probably an Engagement, I am assuming – the more talkative your app, the more Engagements it will eat up.

Not sure if I am missing a tier between PLUS and ENTERPRISE here. There seems to be too big of a gap in there.

Positioning

One last thing – Twilio Studio has been positioned by Jeff Lawson inside the Engagement Cloud, below all of its current logical components:

I’d place it as a vertical bar next to the whole Twilio stack. Probably adding Functions right next to it:

My guess? Product management had a lot of internal discussions on this one, trying to decide where to place Studio – inside the engagement cloud, above it, right next to it. They ended up picking inside it.

A Word About GDPR

GDPR stands for General Data Protection Regulation. It is a piece of legislation that will become effective May 2018, in less than a year. A period of two years of grace has been given to reach that date.

It deals with the protection and processing of private information of citizens of the EU, which practically covers any global player out there, and even many who aren’t.

In a nutshell, it is a headache. Especially if you’re making use of analytics, personalization, automation, chat bots, AI or any other big data related technology. It is also relevant if you just hold an SQL database of your customers.

If you were working in a specific regulated vertical, such as healthcare or finance, then you might be used to such things. If you’re not, then you should start paying attention. Especially with the communication part of whatever it is that you do – this is where personal information gets passed along with the metadata that needs to be handled with care.

Twilio pushing GDPR this early on means two things to me:

  1. They are looking at the enterprise, and making sure their platform is fit for their purpose (large multinational enterprises will be the first to adopt and adhere to something like GDPR)
  2. They are making sure that they are leading the CPaaS pack here. I am unaware of any other CPaaS vendor who has been pushing GDPR besides stating that they will be ready by May 2018. Twilio is trying to make sure it is synonymous with “GDPR compliant CPaaS”.

It also means that communication – telecom or IP based – is becoming slightly harder to handle. Something that works well for a vendor like Twilio whose purpose in life is simplifying complexity (=the more complexity the more value derived by Twilio).

Where do we go from here?

Twilio was and still is the undisputed CPaaS king. They are bigger than anyone else by a large margin and they are working hard on maintaining a technology edge on everyone else.

Twilio’s stock has been somewhat volatile lately with Uber’s announcement and later Amazon’s text messaging announcement (which ended up being about Amazon using Twilio). Twilio seems vulnerable.

The two main announcements here were Studio and GDPR. Studio brings Twilio to a larger audience and increases their vendor lock-in, thereby reducing the effectiveness of their competition. GDPR is put in place as another headache Twilio solves for its customers – the more regulation and bureaucracy like GDPR, the better for a company like Twilio – it reduces the competition from in-house developers – which is doubly important now.

These two announcements are there to deal with its perceived vulnerability. They make developing using Twilio easier than ever – almost risk-free. And it makes it harder for competition to succeed in future land grabs trying to go after Twilio’s bigger accounts.

It will be interesting to see how competitors would react to this in the long run, and even more interesting to see what will Twilio Studio grow into.

Trying to figure out the best approach for developing your application? Check out this free WebRTC Development Paths Matrix to understand your alternatives

Get your WebRTC Development Paths Matrix

The post Thoughts about Twilio Studio and the Future of CPaaS appeared first on BlogGeek.me.

H.264 or VP8 in Your WebRTC Application?

bloggeek - Mon, 10/02/2017 - 12:00

No simple answer.

Apple recently announced that Safari will be supporting WebRTC. That support isn’t there yet to the point where it is stable enough, but we already know one thing:

Safari supports only the H.264 video codec.

Codec wars are over? 2 MTI (mandatory to implement) codecs in the form of VP8 and H.264?

Who cares?

Reality is that Apple decided at this stage not to support VP8 – and it hasn’t said anything about plans to support or not support VP8 in the future. That said, all signals indicate that support for VP8 in Safari is unlikely to happen.

This brings us to a simple yet challenging question:

When writing a WebRTC application, should you make use of VP8 or H.264?

The answer isn’t a simple one. Choosing VP8 will leave you without Safari. Choosing H.264 will leave you without other important features and capabilities, as well as create a potential legal headache.
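One practical starting point is to check what the browser in front of you can actually encode before committing – a minimal sketch, assuming a browser that implements RTCRtpSender.getCapabilities (not all do):

  // Rough feature check: does this browser's sender advertise H.264 and/or VP8?
  function supportedVideoCodecs(): { h264: boolean; vp8: boolean } {
    const caps = RTCRtpSender.getCapabilities ? RTCRtpSender.getCapabilities('video') : null;
    const codecs = caps?.codecs ?? [];
    return {
      h264: codecs.some((c) => c.mimeType.toLowerCase() === 'video/h264'),
      vp8: codecs.some((c) => c.mimeType.toLowerCase() === 'video/vp8'),
    };
  }

Runtime detection helps, but it doesn’t answer the product question of which codec to standardize on – which is what the mini course below walks through.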

This is why I decided to create a new free video mini course – to guide you through the process and help you make the best decision here.

This video course, Picking a WebRTC Video Codec, is free and includes 4 lessons and a cheat sheet.

Find out which codec to use: VP8 or H.264

The post H.264 or VP8 in Your WebRTC Application? appeared first on BlogGeek.me.

What’s in my Online WebRTC Course?

bloggeek - Mon, 09/25/2017 - 12:00

Looking for a WebRTC training? Search no more. My online WebRTC course is here.

I will be relaunching my Advanced WebRTC Architecture Course next week, so it is time to see what you’ll find in this WebRTC training program I’ve created and fine-tuned for over a year now.

Prefer watching and listening more than reading? Join my free webinar on Wednesday for a quick lesson on WebRTC architecture related topics, where I’ll also be explaining the WebRTC training course and its contents.

Register and Grok media in WebRTC

The sections below explain the various parts of this unique WebRTC training. These are decidedly focused on delivering the best learning experience possible.

WebRTC Training Main Modules

The course is designed and built around 7 main modules:

Each module includes multiple lessons, and each lesson is a recorded video session of anywhere between 10-40 minutes in length. Most lessons also include additional links and some written content in them.

Module 1 gives you the baseline information about what WebRTC is. Consider it your introduction to the topic.

Modules 2-3 focus on signaling. They’ll take you from an understanding of UDP and TCP up to deciding what signaling protocol to use in each case and why.

Modules 4-5 are all about media. They explain voice and video codecs – in the context of their relevance to WebRTC. They also deal with the various media architectures available in group calling and recording scenarios.

Module 6 is all about the ecosystem. It lists the different strategies developers have in front of them when designing a WebRTC application, and then goes into the details of each one.

Module 7 brings it all together. It takes different scenarios and use cases, analyzes them and builds the necessary architectures to support each use case. This is where the theory comes into practice.

The total length of the recordings in all modules and lessons? Over 15 hours.

You progress through the material at your own pace, jumping between lessons as you see fit, or following the original order in which they were laid out.

If you’re looking for something to print and share, there’s a PDF version of the WebRTC course syllabus available.

Get Your WebRTC Questions Answered in the Course Forum

The course itself is supplemented with an online forum.

I’ve been contemplating making that forum a Slack channel or a Facebook group, but decided against it. While that may change with time, the course currently has a forum built into it.

When you enroll in the course, you also gain access to the forum, where you can ask questions and get answers to them.

At any point in time.

Be it about a specific lesson, or a challenge you have in what you’re currently doing at work with WebRTC.

And if sharing openly isn’t your thing, you can always just email me directly.

WebRTC Training Office Hours

Twice a year, a series of office hours is provided for the course.

There are 12 such live sessions, taking place roughly weekly. They happen at 2 different times of day, to fit different timezones.

These office hours include two parts in them:

  1. Me rambling about a topic. Call it a live lesson. It can be something from the actual course, or just thoughts and updates on what’s been going on lately with WebRTC out there
  2. Q&A. In this part, those enrolled in the course can ask anything they want. It is a part of the course that not many use, but those who do seem to enjoy it and derive benefit from it

The office hours are recorded and available for playback as well, so if you miss a session – you can always return to it and play it back.

WebRTC Course Bonus Materials

Besides the 7 course modules, I’ve added a bonus module.

This one contains some extra lessons, as well as the cheat sheets and templates that are spread all over my site, collected in one easy-to-reach location.

What lessons are in the bonus materials?

4 recorded lessons

  1. WebRTC standardization
  2. Writing RFP requirements for WebRTC
  3. Media algorithms
  4. Using testRTC

The media algorithms lesson is really important. It covers topics that I touch on only lightly during the course, such as echo cancellation and the jitter buffer.

2 recorded guest lessons

In my last round of the course, appear.in, who took the corporate plan, were also kind enough to share two new guest lessons:

  1. Video Quality in WebRTC
  2. Deploying (co)TURN on AWS

Philipp Hancke and Bradley T. Hughes were the instructors for these two and I found myself learning a lot in these lessons as well. Now, they are part of the course bonus materials.

What’s New in This Round of the WebRTC Course?

This is the third time I am running this course, and the second round of updates to it.

  1. I’ve updated some of the materials where appropriate (someone told me recently that Apple is doing something with WebRTC, so it had to find its way into the course)
  2. I also re-recorded one session from scratch because the audio of the original recording wasn’t the best
  3. The bonus materials (described above), are going to go away. They will be available only during course launch periods (=this week) or for corporate plans
  4. There’s a new eBook that is going to be added as a bonus to the course. It is called “Built to Scale”, and it is a look behind the scenes of how meet.jit.si is… built to scale

A Few Questions Answered About the WebRTC Course

I now offer the option of taking my WebRTC training as part of every consulting project I take on. Sometimes the customer takes me up on the offer, and other times they don’t. These customers ask certain questions about the course almost every time, so I decided to answer the most common ones here.

How long will it take to work through the WebRTC course?

It is entirely up to you.

There’s over 15 hours of recorded content in the course. More if you start going through the links, external slide decks and videos that I share in the course lessons.

But at the end of the day:

  1. You decide on the pace of your WebRTC studies
  2. You decide which lessons to start with first
  3. You decide if there are lessons you prefer skipping
  4. You decide if you want to watch a specific lesson again

If you take a lesson each working day, then 2 months is approximately what you’ll need to get from start to end.

Is there any prerequisite to taking this WebRTC training?

This WebRTC training program assumes you have a good understanding of technology. It fills in the rest with the various modules of the course.

You don’t need prior knowledge of VoIP to take this course, and you don’t need to be a web developer either. What you do need is some technical grasp and understanding.

If you already have prior knowledge, that’s fine – this WebRTC course doesn’t force you to take its modules and lessons in order, so you can skip to the topics that interest you.

Is there a certificate?

As with most online learning courses, the WebRTC course offers a certificate.

Once you’ve completed the course, you will receive a WebRTC certificate indicating that you’ve passed it.

For companies, there’s a separate plan, which enables them to hold a badge of the WebRTC course. You can find the vendors that have taken this plan on the corporate partners page.

What’s Next?

Want to learn more about media in WebRTC? Join this free webinar to see an analysis of a real case study I came across recently: what the company had in mind to build, and how they botched their architecture along the way.

Register and Grok media in WebRTC

And if you’re really serious, enroll in my Advanced WebRTC Architecture Course.

The post What’s in my Online WebRTC Course? appeared first on BlogGeek.me.
