News from Industry

Kamailio v4.2.7 Released

miconda - Thu, 12/17/2015 - 22:59
Kamailio SIP Server v4.2.7 stable is out! This is a minor release including fixes in code and documentation since v4.2.6. Kamailio v4.2.7 is based on the latest version of GIT branch 4.2. Those running previous 4.2.x versions are advised to upgrade to 4.2.7 (or to the 4.3.x series). If you upgrade from an older 4.2.x to 4.2.7, no changes to the configuration file or database structure are needed compared with older v4.2.x.

Resources for Kamailio version 4.2.7

Source tarballs are available at:

Detailed changelog:

Download via GIT:

# git clone git://git.kamailio.org/kamailio kamailio
# cd kamailio
# git checkout -b 4.2 origin/4.2

Binaries and packages will be uploaded at:

Modules’ documentation:

What is new in the 4.2.x release series is summarized in the announcement of v4.2.0:

Note: branch 4.2 is the previous stable branch. The latest stable branch is 4.3, with v4.3.1 being the most recent release out of it. The project officially maintains the last two stable branches, currently 4.3 and 4.2. An alternative is therefore to upgrade to the latest 4.3.x – be aware that you may need to change the configuration files and database structures from 4.2.x to 4.3.x. See more details about it at:

Twilio and WebRTC: An Interview with Al Cook

bloggeek - Thu, 12/17/2015 - 20:55

Twilio: Al Cook

December 2015

Communication API

Cloud Communication APIs.

[If you are new around here, then you should know I’ve been writing about WebRTC lately. You can skim through the WebRTC post series or just read what WebRTC is all about.]

API platforms fascinate me. Especially communication API platforms. You can’t get any bigger than Twilio these days. This year, they’ve announced and launched a slew of new capabilities – task routing, video calling, IP messaging and a lot of enhancements to their existing services.

I’ve been wanting to land an interview with Twilio for quite some time. I was happy when Al Cook, Director of Product Marketing at Twilio, obliged. Here’s what he had to say.

 

What is Twilio all about?

Twilio is a cloud communications platform. We provide programmable building blocks that developers use to embed communications into their mobile and web apps – from voice, messaging, and video to authentication. So when you are communicating with your Uber driver via text or anonymous phone call, calling Hulu customer support, or shopping via text with the help of your Nordstrom personal shopper, that’s Twilio. Or to give a WebRTC example – when you call a customer support team powered by Zendesk, the agent is talking to you over a WebRTC connection powered by Twilio. We have over 700,000 developers generating over 50 billion API transactions a year. In WebRTC we’ve powered over half a billion minutes of WebRTC to date.

 

Twilio Video went to public beta today. You’ve been in private beta for a while. How is it going? What have you learned?

That’s right, the private beta started in May and we collaborated with developers to build the right solution, with the right developer experience. Video is in public beta as of now. Now anyone can sign up for immediate access to our WebRTC-powered web and mobile SDKs, and the cloud-based signaling/media services that power them.

During the private beta we onboarded several thousand developers from our base. This group size was critical for gaining useful feedback and insights, while still allowing meaningful interactions.

Interesting. Did you check what users do during the private beta?

During the private beta onboarding, we asked participants to tell us about their use cases. I read every single entry and categorized the use cases. The top categories break out as follows:

  • 21% healthcare
  • 14% support (in-app enterprise customer support, visual customer support)
  • 12% tutoring
  • 10% collaboration
  • 5% recruiting
  • 5% call an expert
  • 4% marketplace / sharing economy
  • 4% interpretation services (including assistive deaf/blind services)

Two of the big areas we spent considerable time refining during the beta were improving the mobile media stack performance, and building a signaling model that allows us to continue to add new capabilities for multi-party, multi-endpoint IP and carrier communications.

 

I have to ask. These developers in the private beta – how many of them were existing Twilio developers who just added video versus new ones?

It’s a mix. A lot of folks are with us because they want multiple channels of communication, and so video is a natural extension for them. But we’ve also had a lot of people who were new to Twilio, and excited to have a better alternative than their current video solution.

 

How is your video offering different from other alternatives that are out there today?

We believe this solution is not available anywhere else. Here’s some insight on the areas where we invested the most time to ensure we were building the right solution for needs that had not been addressed.

  • Without this, each communication capability would either have to be built from scratch or individually purchased and pieced together, if possible. And that’s just the beginning. Our SDKs are designed as a platform to add more communication channels over time.
  • We designed a conversation model that scales in volume, use case and breadth of different endpoint types. Conversations can be either call-based or room-based; start peer-to-peer and move to network-mixed; and interoperate with SIP endpoints and carrier endpoints. Our signaling model is built to fulfill this vision. Some features are enabled today; others are coming. The important thing is we’ve laid the foundation for one platform that can power all communications needs.
  • Our pricing makes it accessible to everyone, and lets it scale to the very largest deployments. Most video services require per-user fees, which are expensive when starting up and when scaling. Twilio Video is aimed at infrastructure-level pricing, where it’s faster and cheaper than building and operating your own service at any scale. And users get the benefit of our ongoing work to deliver high quality and resiliency.

 

What excites you about working in WebRTC?

To me, the most exciting aspect of WebRTC – and really programmable real-time communications more generally – is that it stands to fundamentally change the way we communicate. Through every iteration of the phone, the basic interaction hasn’t really changed. Historically, there has been little-to-no ability to gain immediate context of why the caller is calling, what they were doing beforehand, and what they may need. Embedding communications into applications allows for a far more meaningful and relevant communication. Imagine calling your car insurance company from your car insurance app following an accident, and instantly the call is routed with the right prioritization, based on the GPS of your phone, to an agent who speaks your preferred language. The app enables you to instantly share a video feed of the accident scene and collaboratively annotate the video using the app. All this while the agent captures the information in their record system to avoid a separate visit from a damage appraiser.

We believe every single app will have communications built into it. Every. Single. App.

 

Where do you see WebRTC going in 2-5 years?

WebRTC/ORTC is moving at such a velocity that 5 years out is pretty hard to forecast. But we believe:

  • In this timeframe, browser support should be ubiquitous. We’ve seen Microsoft Edge get there already (barring video codec support), and we know Apple is working on it for Safari.
  • Ubiquitous doesn’t mean standardized or non-contentious. We expect to continue to see differences in implementation of particular features that the developer will either have to keep track of and deal with directly, or use an SDK such as Twilio Video.
  • Media quality requires continuous improvement. We’ll continue to make it better and more resilient to bad networks.  However, in this timeframe, there will remain some networks that are not viable for real-time video.
  • Mobile in-app usage will be the most important use case for consumers. This means that most consumers won’t be using Google’s latest WebRTC engine off the shelf, but rather a version that has been packaged – and often modified and enhanced – along the way.
  • B2C Communications will focus on high-value, contextual interactions. Low-value B2C interactions will be increasingly handled through self-service channels. WebRTC will be one of the core technologies powering the high value segment.

 

If you had one piece of advice for those thinking of adopting WebRTC, what would it be?

Experiment – and think about how you scale the experiments that find success. It’s relatively simple to get a basic WebRTC call working. But plan for what happens if your new service finds success. Consider how you will scale, maintain and operate your TURN media relay. How will you collect and analyze voice quality diagnostics from all your endpoints? How will you interoperate with SIP networks and PSTN networks?
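To make the TURN point concrete: the relay is handed to the browser through the standard RTCPeerConnection iceServers configuration. A minimal sketch with placeholder server names and credentials (nothing here is Twilio-specific):

// Hypothetical TURN/STUN settings – replace with servers and credentials you operate or rent.
var config = {
  iceServers: [
    { urls: 'stun:stun.example.com:3478' },
    {
      urls: 'turn:turn.example.com:3478?transport=udp',
      username: 'demo-user',
      credential: 'demo-secret'
    }
  ]
};

var pc = new RTCPeerConnection(config);
// Media is relayed through the TURN server only when a direct path fails,
// so relay capacity (and its monitoring) has to grow along with your success.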

 

Given the opportunity, what would you change in WebRTC?

Some improvements have been addressed by ORTC. We’re big fans of these improvements and we look forward to the standards combining.

We would like more control over the media stack in a browser environment, if the browser makers could figure out a secure way to enable this. We spend a considerable amount of time testing and measuring voice quality in impaired networks. In fact, we open-sourced the testing tool we use. On the mobile side, we operate the media stack and we do a lot of fine tuning to constantly improve the media quality, taking into account the performance of different networks and hardware configurations – whether it’s adding codecs to use in particular scenarios, adding Forward Error Correction (FEC) techniques, or working in other areas. But when our endpoints call a browser-based endpoint, they have to fall back to the default media stack and it is not possible to layer on additional media enhancements, which is why we’d like more control in the browser environment.

In the more immediate time frame, the subject of handling QoS in WebRTC is tricky, and far from standardized. Plus, QoS behavior, like with much of WebRTC, tends to require significant reverse engineering to establish the exact behavior in different scenarios. We’re happy we can provide this capability on behalf of our customers – but we’d like more control over the experience.

 

What’s next for Twilio?

We’ve talked about a few of them – interoperability with SIP endpoints and PSTN endpoints for example. Of course we’re also working on SFU functionality for large scale video conferences – that should be no surprise to our customers. But we want to provide this capability in such a way that a developer doesn’t have to choose between peer-to-peer routing and SFU mixing. The solution should intelligently move from one to the other as the call topology requires. We also want a solution that scales beyond any existing solutions. And then, well…that’s enough to keep us busy for now, Tsahi.

The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.

 

The post Twilio and WebRTC: An Interview with Al Cook appeared first on BlogGeek.me.

Surviving Mandatory HTTPS in Chrome (Xander Dumaine)

webrtchacks - Thu, 12/17/2015 - 13:11

Xander Dumaine provides some strategies and code for dealing with the new secure origin only policy in Chrome 47+ that forces the use of HTTPS.

The post Surviving Mandatory HTTPS in Chrome (Xander Dumaine) appeared first on webrtcHacks.

New Logo for Kamailio Project

miconda - Wed, 12/16/2015 - 22:58
As of today, December 16, 2015, the Kamailio Project is officially using a new logotype.

Different formats and 3D artwork of the new logo can be found at:

If you do any artwork based on the new logo and you want to share it with the community, we will gladly host it. Please do not hesitate to contact us.

We encourage everyone displaying an old logo of Kamailio to update to the new one at the earliest convenience – it will help propagate the logo. Posting about it on blogs, forums, social media channels and so on is very much appreciated.

The community of the project liked the new logo; we hope everyone else will find it nice as well!

Enjoy the winter holidays!

FreeSWITCH Week in Review (Master Branch) December 5th – December 12th

FreeSWITCH - Wed, 12/16/2015 - 18:49

This week the verto communicator had some new updates to the administrator menu and the core added a new origination_audio_mode variable. Join us Wednesdays at 12:00 CT for some more FreeSWITCH fun! This week we have Italo Rossi and the Evolux call center team! And, head over to freeswitch.com to learn more about FreeSWITCH support.

New features that were added:

  • FS-8616 [verto_communicator] A new menu for moderator, added gain buttons, and removed the 3-dot-button, moving its behavior to member-name div
  • FS-8632 [core] Add origination_audio_mode originate variable with options for sendonly, recvonly or sendrecv

Improvements in build system, cross platform support, and packaging:

  • FS-8293 [verto] Add quality level 0 to conference (default is 1) and fix some logic in auto bandwidth

The following bugs were squashed:

  • FS-8625 [core] Fixed a segfault caused by an external incoming call from Google Voice.
  • FS-8642 [core] Fixed CF_VIDEO_READY being set on non-video calls
  • FS-8603 [verto_communicator] Added device validation to prevent lost microphones after reset
  • FS-8640 [verto_communicator] Don’t clear conference member reservation id on members that don’t have a reservation ID
  • FS-8633 [mod_verto] Fixed the first verto endpoint to join a conference not getting the “conference-livearray-join” event
  • FS-8621 [mod_av] Fixed H264 HD1080P video quality issues
  • FS-8631 [mod_db] Updated the regex to allow DSN to match the rest of FS code
  • FS-8643 [mod_sofia] Fixed some memory leaks

 

New Kamailio Module: cfgt

miconda - Tue, 12/15/2015 - 22:56
Victor Seva from Sipwise has published a new module for Kamailio, named cfgt. The module is to be part of the next major release of Kamailio – v4.4.0, expected to be out in early spring 2016.

The module is useful for unit testing, to compare the results of the Kamailio configuration file routing logic. The report of the execution is in JSON format, making it easy to analyse. Among the contents of the report can be the values of variables used in the configuration file.

You have to build your test scenario (e.g., using sipp), send the traffic to Kamailio and then check whether the report contains the expected results.

The documentation of the module is available at:

The functionality of the module is rather small for now, but Victor is committed to adding more in the near future, including examples of how to use it. Stay tuned and keep an eye on kamailio.org for updates!

    The public perception of open-source software

    FreeSWITCH - Tue, 12/15/2015 - 01:40

These days “free” software seems to be a scary prospect to the general public. The association between open-source software and malicious “click here for free stuff” ads is strong, and the fear of unknown “hackers” runs rampant. The old adage that “nothing good in life comes for free” has ingrained the idea that free is synonymous with scams. Why would anyone in their right mind give away a great product for free? This thought process is why most of the general public limits itself to costly, proprietary services.

The tech industry is huge and understanding it all is impossible, but buying trust isn’t the answer to guaranteed safety. There is plenty of fantastic open-source software available and it shouldn’t only be accessible to experienced, tech-savvy individuals. And, as we move toward a more tech-based culture, the up-and-coming generations can have an especially difficult time trying to correct this misconception among their older peers.

Jim Salter from Opensource.com addressed this issue with an open letter to all parents whose kids want to use open-source software. He writes that free open-source software (FOSS) “is not ‘stolen’ software. Free software licenses like the GPL and the BSD and Apache licenses allow users the ability to freely use, and developers the ability to freely develop, the software placed under those licenses. Another important thing to understand about FOSS is that it is not merely ‘free’ in the sense of ‘free in every box of cereal.’ Making a new copy of a piece of software literally costs nothing at all—this has made it possible for community efforts to produce world-class products in a way material goods never could be.”

Helping the general public understand the definition and motivation behind open source will bring it out of the shadows of the industry and help it become mainstream. You can read his letter here: https://opensource.com/life/15/12/dear-parents-let-your-kids-use-open-source-software

    Shut up! Monitoring audio volume in getUserMedia

    webrtchacks - Thu, 12/10/2015 - 13:14

A few days back my old friend Chris Koehnke, better known as “Kranky”, asked me how hard it would be to implement a wild idea he had: monitoring what percentage of the time you spend talking instead of listening on a call when using WebRTC. When I said “one day”, that made him wonder whether he could offshore it to save money. Well… good luck!

    A week later Kranky showed me some code. Wait, he is writing code? It was not bad – it was using the WebAudio API so going in the right direction. It was enough to prod me to finish writing the app for him.

The audio stream volume sample application from Google calculates the root mean square (RMS) of the audio signal, which is extracted from the input stream using a script processor every 200ms. There are a lot of tuning options here, of course.
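For illustration, here is a rough sketch of that RMS approach using a ScriptProcessorNode – not Google's exact sample code, and the buffer size is an arbitrary choice:

var audioContext = new (window.AudioContext || window.webkitAudioContext)();

navigator.mediaDevices.getUserMedia({ audio: true }).then(function(stream) {
  var source = audioContext.createMediaStreamSource(stream);
  // 2048-sample buffers; mono in, mono out.
  var processor = audioContext.createScriptProcessor(2048, 1, 1);

  processor.onaudioprocess = function(event) {
    var samples = event.inputBuffer.getChannelData(0);
    var sum = 0;
    for (var i = 0; i < samples.length; i++) {
      sum += samples[i] * samples[i];
    }
    var rms = Math.sqrt(sum / samples.length); // 0.0 (silence) .. ~1.0 (full scale)
    console.log('volume (rms)', rms);
  };

  source.connect(processor);
  processor.connect(audioContext.destination); // Chrome only fires onaudioprocess when connected
});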

Instead of starting from scratch, I decided to use hark, a small open-source module for this task that my coworker Philip Roberts had built in mid-2013, when the WebAudio API first became available.

    Instead of the RMS, hark uses the Fast Fourier Transformation to obtain a frequency domain representation of the input signal. Then, hark picks the maximum amplitude as an indication for the volume of the signal. Let’s try this (full code here):

var hark = require('../hark.js');
var getUserMedia = require('getusermedia');

getUserMedia(function(err, stream) {
  if (err) throw err;
  var options = {};
  var speechEvents = hark(stream, options);
  speechEvents.on('volume_change', function(volume) {
    console.log('current volume', volume);
  });
});

    On top of this, hark uses a simple speech detection algorithm that considers speech to be started when the maximum amplitude stays above a threshold for a number of milliseconds. Much less complicated than typical voice activity detection algorithms but pretty effective. And easy to use as well, just subscribe to two additional events:

speechEvents.on('speaking', function() {
  console.log('speaking');
});
speechEvents.on('stopped_speaking', function() {
  console.log('stopped_speaking');
});
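Under the hood, the detection logic boils down to something like the following – a simplified sketch of the idea, not hark's actual source:

// Speech starts once the level has stayed above `threshold` for `requiredDuration` ms,
// and stops when it drops back below the threshold.
function createSpeechDetector(threshold, requiredDuration) {
  var aboveSince = null;
  var speaking = false;
  return function update(level, now) {
    if (level > threshold) {
      if (aboveSince === null) aboveSince = now;
      if (!speaking && now - aboveSince >= requiredDuration) {
        speaking = true;
        console.log('speaking');
      }
    } else {
      aboveSince = null;
      if (speaking) {
        speaking = false;
        console.log('stopped_speaking');
      }
    }
  };
}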

    Tuning the threshold for accurate speech detection is pretty tricky. So I needed visualization (and just requiring hark only took five minutes so I had plenty of time). Using the awesome Highcharts graph library I quickly added plot bands to the graph I was generating:

With the visualization I could easily see that the speech detection events happened a bit later than I expected, since hark requires a certain history over the threshold for the trigger to work (say 400ms). To adjust for this in the graph I had to subtract this trigger time from my x-axis (now() – 400ms, for example).

That graph is still visible on the more techie variant of the website, so if you think the results are not accurate… it might help you figure out what is going on. I am happy with the current behavior.

The percentage of speech is then calculated as the sum of the intervals in which speech is detected, divided by the duration of the call. For display, a gauge chart is used with three different colors (a small sketch of the calculation follows the list below):

    • up to 65% speech time: green
    • up to 79%: yellow
    • more than 80%: red
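A minimal sketch of that calculation in plain JavaScript (thresholds copied from the list above; the variable names are mine):

// speakingIntervals: array of { start, stop } timestamps in ms, collected from
// hark's 'speaking' / 'stopped_speaking' events; callStart/callEnd bound the call.
function talkPercentage(speakingIntervals, callStart, callEnd) {
  var spoken = speakingIntervals.reduce(function(total, interval) {
    return total + (interval.stop - interval.start);
  }, 0);
  return 100 * spoken / (callEnd - callStart);
}

function gaugeColor(percentage) {
  if (percentage <= 65) return 'green';
  if (percentage <= 79) return 'yellow';
  return 'red';
}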

    Adding remote audio to this would be awesome. However, while the WebAudio API is supported for local media streams in Chrome, Firefox and Edge, it is only supported for remote streams in Firefox. Hooking this up with the getStats API (in Chrome) to get the audio level would certainly be possible, but would require calling getStats at a very high frequency to get proper averages.
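For the curious, the spec-style getStats variant of that idea would look roughly like this. Stat types and field names differ between browsers and versions (older Chrome exposed non-standard goog* stats through a callback API), so treat this as a sketch rather than working Chrome code from that era:

// pc is an existing RTCPeerConnection carrying the remote audio.
function pollRemoteAudioLevel(pc) {
  setInterval(function() {
    pc.getStats().then(function(report) {
      report.forEach(function(stat) {
        if (stat.type === 'inbound-rtp' && stat.kind === 'audio' && stat.audioLevel !== undefined) {
          console.log('remote audio level', stat.audioLevel); // 0.0 .. 1.0
        }
      });
    });
  }, 200); // frequent polling is needed to approximate a continuous volume curve
}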

    Check out the app in action at talklessnow and let us know what you think.

{"author": "Philipp Hancke"}

    Want to keep up on our latest posts? Please click here to subscribe to our mailing list if you have not already. We only email post updates. You can also follow us on twitter at @webrtcHacks for blog updates and news of technical WebRTC topics or our individual feeds @chadwallacehart, @victorpascual and @tsahil.

    The post Shut up! Monitoring audio volume in getUserMedia appeared first on webrtcHacks.

    SaferMobility and WebRTC: An Interview With Matthew Mah

    bloggeek - Thu, 12/10/2015 - 12:00

    Your private 911 system.

    [If you are new around here, then you should know I’ve been writing about WebRTC lately. You can skim through the WebRTC post series or just read what WebRTC is all about.]

    I have seen a lot of applications lately that target public safety. Some offer you a “ghost” partner to “walk” with you home, while others focus on the reporting aspects.

SaferMobility targets the authorities as the owners of the system (college campuses, municipalities, business zones, etc.) and provides a mobile application to the users. It is reimagining how a 911 service would look if it were specified today.

    Matthew Mah, CTO of SaferMobility, was kind enough to answer my questions on what role WebRTC plays in their service.

     

    What is SaferMobility all about?

    SaferMobility focuses on using the capabilities of modern smartphones for enhancing safety. The public safety system in the United States is built around wired telephones, and it is more difficult for authorities to respond to mobile phones because they are harder to locate than fixed telephones. The modern smartphone has audio, video, location, and text capability that just are not being used efficiently yet.

     

    There are many other safety related apps out there. What differentiates you from the rest of the pack?

    Our systems focus on real-time interaction with authorities. Authorities receive enhanced calls with audio, video, location, and text information in real-time without it having to filter through friends or storage systems.

     

    You told me you launched your service using Flash. Why did you migrate to WebRTC?

    WebRTC is a huge improvement over Flash in terms of security, support, and capability. Adobe is not really interested in supporting Flash for mobile devices, so capabilities like acoustic echo suppression are not available. This makes a huge difference in communication quality.

     

    What signaling have you decided to integrate on top of WebRTC?

    We use a proprietary message system built on websockets.
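Their protocol is proprietary, but for readers new to the idea, browser-side signaling over WebSockets generally boils down to something like this. The endpoint URL and message format below are made up for illustration and are not SaferMobility's:

// Illustrative only – a WebSocket carrying SDP offers/answers and ICE candidates.
var ws = new WebSocket('wss://signaling.example.com/call');
var pc = new RTCPeerConnection();
// (local audio/video tracks would be added to pc here, before creating the offer)

pc.onicecandidate = function(event) {
  if (event.candidate) {
    ws.send(JSON.stringify({ type: 'candidate', candidate: event.candidate }));
  }
};

ws.onopen = function() {
  pc.createOffer().then(function(offer) {
    return pc.setLocalDescription(offer);
  }).then(function() {
    ws.send(JSON.stringify({ type: 'offer', description: pc.localDescription }));
  });
};

ws.onmessage = function(event) {
  var msg = JSON.parse(event.data);
  if (msg.type === 'answer') {
    pc.setRemoteDescription(msg.description);
  } else if (msg.type === 'candidate') {
    pc.addIceCandidate(msg.candidate);
  }
};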

     

    Backend. What technologies and architecture are you using there?

    Our Java application server runs Tomcat with a PostgreSQL database. It handles the signaling and issues commands to a media server for recording capabilities. We currently run on Dialogic’s Extended Media Server (XMS).

    Mobile. You decided to port WebRTC to iOS and Android on your own. How was the experience?

Porting was difficult because of compatibility issues between our WebRTC media server and the web, iOS, and Android clients. We would get two clients to work with the server, then upgrade the server and have two different clients work.

    For stability on the web side, the nwjs project has been very helpful for producing an application that works even while the web browser updates are racing ahead and frequently breaking things.

     

    Where do you see WebRTC going in 2-5 years?

    WebRTC will replace stagnant technologies like Flash. The ability to communicate through the browser will also lower the barrier for application development.

     

    If you had one piece of advice for those thinking of adopting WebRTC, what would it be?

    Be prepared for things to change quickly because WebRTC is still growing and maturing.

     

    Given the opportunity, what would you change in WebRTC?

    Aside from the expected growing pains, I am pleased with WebRTC.

     

    What’s next for SaferMobility?

    There’s a huge opportunity to improve public safety, security services, and general communication with modern mobile devices, and SaferMobility will be part of making those improvements.

    The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.

    The post SaferMobility and WebRTC: An Interview With Matthew Mah appeared first on BlogGeek.me.

    The Hidden Gems of WebRTC Goodness May Well Lie Within GetUserMedia Itself

    bloggeek - Wed, 12/09/2015 - 12:00

    WebRTC GetUserMedia is more important than the rest of this communication stack.

    Who would have believed? With all the magic and distraction that video calling from a browser brings with it, the real treasure trove resides in the basics – WebRTC GetUserMedia.

    Simplifying things, WebRTC has 3 distinct areas/APIs to it:

    1. GetUserMedia, allowing access to camera and microphone inside the browser
    2. PeerConnection, taking care of all the mess that is a voice/video call
    3. Data Channel, making it possible to send any arbitrary message across browsers directly

I’ve pointed out in the past how WebRTC GetUserMedia gets used by Mailchimp and WhatsApp. Taking a camera snapshot is nice, but what else can we achieve with this access we’ve been given?
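For reference, a camera snapshot of that kind needs nothing beyond getUserMedia, a video element and a canvas. A minimal sketch using today's standard APIs (srcObject; older browsers needed URL.createObjectURL instead), not any particular vendor's code:

navigator.mediaDevices.getUserMedia({ video: true }).then(function(stream) {
  var video = document.createElement('video');
  video.srcObject = stream;
  video.onloadedmetadata = function() {
    video.play();
    // Give the camera a moment to deliver a real frame before grabbing it.
    setTimeout(function() {
      var canvas = document.createElement('canvas');
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      canvas.getContext('2d').drawImage(video, 0, 0);
      var snapshot = canvas.toDataURL('image/png'); // data: URL, ready to upload or preview
      console.log('snapshot size (approx chars)', snapshot.length);
    }, 200);
  };
});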

    TalkLessNow

Chris Kranky had an idea a few weeks ago: measuring how much you’re yapping in a call as opposed to listening. So he made it happen. With a shoestring budget, some connections and a bit of time, TalkLessNow was born.

How does it work?

    The website is quite spartan. When you go on a phone call (not a WebRTC one), you just press the green Call button on talklessnow.com.

The code on the site “listens” through the machine’s microphone to your call. Whenever it hears enough volume, it assumes you’re talking. If the volume is lower than its configured threshold, you’re listening.

    Just WebRTC GetUserMedia. No PeerConnection or any other fuss.

    Will it work?

    Here in Israel, I am sure the results won’t be good. We’re used to talking over each other and interrupting. Efficiency at its best. If in a call between Israelis it shows less than 70% of talk time per participant, I’ll crown that session a success.

    Seriously though, we should be listening a lot more than we’re talking.

    Same but different

    The now defunct Guitar Tuner works the same way. It doesn’t work anymore because the site is served on HTTP and WebRTC GetUserMedia now requires HTTPS to work with the latest Chrome release (progress, you know).

    Ziggeo

    Here’s another example.

    Ziggeo is making use of WebRTC to record videos. They do that by employing WebRTC GetUserMedia, storing the resulting media locally and at the end of the recording sending it to their servers. The sending part doesn’t occur via WebRTC.

    There’s an interesting interview with Susan Danziger, CEO of Ziggeo from last week that you should read.

    Is this Real Time Communications?

    WHO CARES?

    It works. It gives business value – and in ways that weren’t really possible up until today.

    There’s a lot more to WebRTC than classic VoIP.

     

    Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

    The post The Hidden Gems of WebRTC Goodness May Well Lie Within GetUserMedia Itself appeared first on BlogGeek.me.

    The FreeSWITCH project and Static Analysis

    FreeSWITCH - Tue, 12/08/2015 - 19:11

    The FreeSWITCH project is nearly ten years old, and the FreeSWITCH git repo has commits from about 214 different authors and over 3.2 million lines of code with 875k of those lines under the src directory. Some of the maintenance challenges associated with such a large software project include: detecting and resolving human errors such as typos, logic inversions, and dangerous formatting. Implementing code review is a must, and there are different techniques common to the industry used to reduce the defect density and standardize the code format: autobuilding against multiple compilers, routine testing, and static code analysis. The core development team at FreeSWITCH uses all three techniques.

Both autobuilding and routine testing can be applied with in-house workflows. Routinely building the packages against different compilers makes sure additional commits won’t break existing code in any of the prepackaged builds, and it also allows for consistent handling of packages for multiple operating systems. Routine testing, combined with a bug tracking system, allows community members to report bugs found in unique environmental circumstances. Open-source software relies on many different eyes to keep bugs shallow, and this practice exposes different configurations and applications of the software to more thorough testing. Each year hundreds of tickets are opened on the FreeSWITCH project JIRA, and the developers work tirelessly to address all of them.

    Static analyzers can scan thousands to millions of lines of code without getting tired and usually don’t require many manual steps to run. The relationship between a project’s developers and the creators of a static code analyzer can be a symbiotic one. The analyzer works by using a database of multiple tiers of positive and negative heuristics. First, it runs the low cost patterns against the entire code base to generate a large list of possible issues, then runs more accurate and higher cost patterns against the bug candidates to reduce the number of false positives, and finally evaluates the severity and more accurately classifies the issues. Once the analyzer has completed its run, it requires an experienced software developer familiar with the code base to review each issue reported.

    Most static analyzers are built to report possible candidates in the first pass, and thus immature analyzers are perceived to red flag everything. They tend to create a lot of noise by reporting a large number of false positives and misclassifying the severity of issues. After the developers for the software being analyzed have reviewed the results of the analysis, they can give specific examples of why they determined it to be a false positive which can be used to improve the static analyzer’s heuristics. As the database matures, the quality of the negative heuristics improves and reduces the volume of false positives. The advantage here is that each report triaged leads to a commit resolving a bug or an improvement to the analyzer.

The team over at Program Verification Systems have built a static analyzer for C/C++ code that integrates into Microsoft Visual Studio. According to their website, the program allows the user to scan lines of code to locate various typos and other errors. Their analyzer supports C/C++, C++/CLI, and C++/CX, with support for the C# language coming soon. PVS-Studio is also available as a standalone utility through the distribution packages, which allows for viewing the analysis logs on a machine without Visual Studio. It can also be used to track multiple sub-builds and analyze non-standard build systems. The reports for the open-source projects that have been analyzed with this software can be found on their website in the Checked Projects section.

The FreeSWITCH team ran the open-source FreeSWITCH project through the PVS analyzer. A decent majority of the issues reviewed were determined to be minor Windows-specific bugs not previously flagged by the compilers the team currently uses. The team is continuing to review and resolve the alerts from the analysis and has integrated this analyzer into the code review workflow. They look forward to continuing this symbiotic relationship with the goal of improving the quality of the software.

    If you would like to replicate the results you can use the following steps.

    • Set up an instance of Microsoft Windows 10, install Microsoft Visual Studio 2015, and install the analyzer from the http://www.viva64.com/en/ website.
    • Create a new FreeSWITCH project.
    • Clone the FreeSWITCH repo into your project.
    • Open the FreeSWITCH project.
    • Set the debug to ‘x64’ if not already set.
    • Click the PVS studio drop down box and select “check solution” to run the analyzer.
    • Settle in and wait for the results.

    FreeSWITCH Week in Review (Master Branch) November 28th – December 5th

    FreeSWITCH - Tue, 12/08/2015 - 19:11

    Our features this week include: improvements to the auto bitrate features in mod_conference, the addition of the Debian install script for the verto communicator, and separate controls for gain and volume for verto. Join us Wednesdays at 12:00 CT for some more FreeSWITCH fun! This week we have Tsahi Levent-Levi talking about WebRTC! And head over to freeswitch.com to learn more about FreeSWITCH support.

    New features that were added:

    • FS-8595 [mod_conference] Improve auto bitrate in personal canvas mode and do not let auto bitrate exceed native picture size

    Improvements in build system, cross platform support, and packaging:

    • FS-8614 [verto_communicator] Add Debian developers install script and update README.md to reference it

    The following bugs were squashed:

    • FS-8585 [mod_commands] Expanded {} and <> to [] for each dial string in group_call to allow for multiple device registrations for the same user
    • FS-8589 [mod_conference] Fixed using conference playback with full-screen=true not working correctly
    • FS-8354 [mod_conference] Fixed G722 audio issues with mod_conference caused by previous commit fab43547
    • FS-8602 [mod_conference] Fixed conference not auto-generating layouts properly when callers with no camera are present
    • FS-8615 [mod_conference] Fixed a crash when quickly changing layouts and setting reservation ids
    • FS-8588 [mod_httapi] Fixed a crash found while fixing unreliable digit collection
    • FS-8599 [verto] Removed a workaround for Mozilla that is no longer needed for video size
    • FS-8590 [verto_communicator] Fixed sending malformed vid-res-id command when changing layouts by treating no res-id the same as clear
    • FS-8612 [core] Fixed a rare IVR originated calls crash due to read codec leak
    • FS-8619 [mod_rayo] Reply with conflict stanza error if bind is attempted with duplicate JID. Improve error handling when ‘ready’ callback fails.

    The FreeSWITCH 1.4 branch had a couple of bug fixes back ported. And again, keep in mind that 1.4 is quickly moving toward end of life and won’t be supported any longer except for high level security issues.

    The following bugs were squashed:

    • FS-8582 [mod_httapi] Fixed a crash caused by a null URL being passed

     

    ClueCon Weekly – Nov 18, 2015 – David Taht

    FreeSWITCH - Mon, 12/07/2015 - 18:58

    Links: http://www.dslreports.com/speedtest 

    ClueCon Weekly – Flowroute Justin Grow – November 11, 2015

    FreeSWITCH - Mon, 12/07/2015 - 18:53

Links: https://support.flowroute.com/customer/en/portal/articles/2205573-freeswitch—add-flowroute-as-sip-gateway and https://developer.flowroute.com/

    The First WebRTC Earthquake in Video Conferencing: Acano vs Polycom

    bloggeek - Mon, 12/07/2015 - 12:00

    The future isn’t what it used to be.

I’ve been babbling here a lot about the enterprise video conferencing market and WebRTC’s role in disrupting it. When it first came out, I believed the existing companies were going to struggle with it. I was mostly ignored by these companies – it is hard to see what’s just around the corner when you’re stuck in the echo chamber of your company and its immediate industry.

When I meet old colleagues of mine from the video conferencing industry and see them working in the same companies, I suggest they leave. Find another company or industry, because the outcome is known – only the timing is missing. They dismiss it, probably thinking that I am saying it out of a grudge against the company. I am not.

    What happened in November should hit home.

    We had two separate news items that in some cosmic way happened in the same week:

    1. Cisco acquired Acano. For $700M USD. A company with around 350 employees (that’s $2M per employee)
    2. Polycom announced closing its Israeli office. Moving the operations to India. That’s 200 employees + 80 contractors

    Dumbing things down a bit:

    • Acano was about building a cloud MCU. Polycom Israel was about building an on-premise MCU
    • Acano started life in 2012, making immediate use of WebRTC. Polycom just launched their first MCU to support WebRTC this year (2015)

It isn’t that WebRTC is the reason why Acano succeeded and Polycom Israel failed. It is that the mindset of these two companies was different. Acano looked into what can be done in this modern age and made use of WebRTC to get there. Polycom looked at how to slowly evolve their product offering. I am sure people at Polycom knew about WebRTC. It had probably been on roadmaps and in discussions since 2012, never given priority, because who needs it? It can’t compete with Polycom’s high end systems. But then the basis of competition changed. What customers care about changed. It isn’t about resolutions and frame rates anymore. It’s about utility and usability – something most video conferencing companies never knew how to handle.

    Polycom Israel didn’t have the foresight to make themselves attractive enough to their corporate overlords in San Jose. Probably because they weren’t given the opportunity to do so. The end result? They just weren’t important. Their technology and architecture is now stable and understood enough to move it to countries with lower salaries.

I remember running a training for developers about WebRTC in 2014. I asked the people in the room what they do. There were media engineers and signaling protocol developers. I told them that they are going to be out of work. They saw it as a joke. Some of them are now updating their resumes.

    What is it that you are doing for a living? What is your company developing? Does it make sense? Do you take the effect WebRTC (and other technologies) have on your job seriously?

     

    Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

    The post The First WebRTC Earthquake in Video Conferencing: Acano vs Polycom appeared first on BlogGeek.me.

    Next Kamailio World – May 18-20, 2016, in Berlin

    miconda - Mon, 12/07/2015 - 10:48
The Kamailio project is pleased to announce that the date and location for the next Kamailio World Conference and Exhibition have been decided: May 18-20, 2016, in Berlin, Germany.

The Kamailio project celebrates 15 years of development in 2016, so we plan a special edition, with many guests who have impacted the evolution of the project since its start in 2001 at the FhG Fokus Institute.

The website of the event and the call for presentations will be launched very soon. Meanwhile, if you haven’t participated in a past edition, you can check the previous edition’s website to get an idea about the structure and content of the event:

Keep an eye on this news feed for updates in the near future!

    Proposing a New Logo for the Kamailio Project

    miconda - Thu, 12/03/2015 - 13:56
During the Kamailio IRC meeting this summer, the need for refreshing the logotype of the project was discussed. The current one (embedded in the upper right corner of the kamailio.org main page) is based on the logo used during the former OpenSER name of the project, with changes to the text to reflect the SIP Router and Kamailio names – it is no longer very well balanced and lacks good quality, high resolution graphics. The participants agreed that a refresh would be better than keeping that version.

One option was to reuse the graphics from the Kamailio World Conference logo, simply with the Kamailio name. It has been used before as an alternative logo by various people and companies.

We now want to finish this process, and we also considered the possibility of a new logo design. Thanks to Asipto and their deal with 99Designs, we ran a design contest to see if someone would propose an interesting logotype. Based on the results of the contest, followed by discussions in the management group and with the people interested in updating the logotype, we are proposing a new logo for the project.

During the next days we are expecting feedback from the community, especially on whether it looks too similar to other logos they know, and whether they like it or not. Based on that, a final decision will be taken and we will either switch to the newly proposed logo or keep looking for another one.

Join the discussion about the new logo on the users mailing list: sr-users@lists.sip-router.org

2D and 3D variants in different formats, as well as some combinations with a few pictures, can be found at:

As a preview, a few variants are embedded here:

      Ziggeo and WebRTC: An Interview With Susan Danziger

      bloggeek - Thu, 12/03/2015 - 12:00

      Ziggeo: Susan Danziger

      December 2015

      Video recording

      Asynchronous video meets WebRTC.

      [If you are new around here, then you should know I’ve been writing about WebRTC lately. You can skim through the WebRTC post series or just read what WebRTC is all about.]

      One area where WebRTC is making strides recently is video streaming. Some of the hyped use cases today are those that enable broadcasting in real time, but there’s another interesting approach – one where WebRTC is employed when the video consumption is asynchronous from its creation.

      Ziggeo is an API provider in this specific niche. I met with Susan Danziger, CEO of Ziggeo, and asked her to share a bit of what it is they do with WebRTC and how it is being adopted by their customers.

       

      What is Ziggeo all about?

      Ziggeo is the leader in asynchronous (recorded) video offering a programmable video recorder/player through our API/native SDKs.

       

      You started by working on an HR interviews platform. What made you pivot towards a video recording API platform instead?

In building our own video recording/playback solution for the platform, we realized what a complicated and time-consuming process building our own solution was. We had to make sure that videos could be recorded and played across all devices and browsers (even as new ones were released) and build a permissions-based security solution that would withstand hackers. We were surprised there were no off-the-shelf solutions available, so we decided a bigger opportunity would be to release our technology as an API — and then as native SDKs (and shortly thereafter we closed our B2C platform).

       

      On the same token – you have Flash there. Why did you add WebRTC? Wasn’t Flash enough for your needs?

      For the most part our customers hate Flash.  And no wonder: browsers that support Flash have an awful user experience in which you need to basically hit 3 different buttons before you can begin recording from your web camera (once to resume the suspended Flash applet and twice to access the camera).

      We added WebRTC to avoid Flash whenever possible.  That said, for certain browsers, e.g. Safari and Internet Explorer we need to default to Flash as they don’t yet support WebRTC.
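That kind of fallback typically hinges on a simple capability check. A hedged sketch of the idea – the recorder helpers here are hypothetical placeholders, not Ziggeo's actual API:

function supportsWebRtcRecording() {
  // Modern promise-based API, or the older prefixed callback variants.
  return !!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia) ||
         !!(navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia);
}

if (supportsWebRtcRecording()) {
  startWebRtcRecorder();   // hypothetical helper built on getUserMedia
} else {
  startFlashRecorder();    // hypothetical helper that embeds the Flash applet
}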

       

      How are customers reacting to the introduction of WebRTC to Ziggeo?

      Customers love it!  In fact, our customers seek us out in part because we’re the only API for asynchronous video recording that supports WebRTC.

       

      Can you share a few ways customers are using Ziggeo?

      In addition to recruiting (where candidates introduce themselves on video), we’ve seen Ziggeo used for training (e.g. trainees record video sales pitches for feedback); dating (potential dates exchange video messages); “Ask Me Anything” (both questions and responses on video); e-commerce (products introduced on video and video reviews recorded); advertising (user-generated videos submitted for contests or for use in commercials); and journalism (crowd-sourcing videos for news from around the world).  I’m still waiting for someone to create a video version of Wikipedia where pieces of knowledge are recorded on video from around the world — that would be the most amazing use case of all.

       

      A video version of Wikipedia. Have it in Hebrew and I’ll sign up my daughter on it.

You don’t use the Peer Connection APIs at all – just getUserMedia. Why did you make the decision to record locally and not use the Peer Connection and record on the server?

      Folks like to re-record locally so we chose not to use unnecessary resources.  We pride ourselves on making our technology as efficient and seamless as possible.

      How do you store the file locally and how do you then get it to your data centers?

      We use IndexedDB to store the file locally and then push it using chunked http.
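The general shape of that approach with standard browser APIs looks roughly like this. The object store name, key and upload endpoint are made up for illustration; Ziggeo's real implementation surely differs:

// Save a recorded Blob locally, then upload it in fixed-size chunks.
var CHUNK_SIZE = 256 * 1024; // 256 KB per request – an arbitrary choice

function saveLocally(blob, done) {
  var open = indexedDB.open('recordings', 1);
  open.onupgradeneeded = function() {
    open.result.createObjectStore('videos');
  };
  open.onsuccess = function() {
    var tx = open.result.transaction('videos', 'readwrite');
    tx.objectStore('videos').put(blob, 'current-recording');
    tx.oncomplete = function() { done(); };
  };
}

function uploadChunked(blob, offset) {
  offset = offset || 0;
  if (offset >= blob.size) return; // finished
  var chunk = blob.slice(offset, offset + CHUNK_SIZE);
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/upload?offset=' + offset); // hypothetical endpoint
  xhr.onload = function() { uploadChunked(blob, offset + CHUNK_SIZE); };
  xhr.send(chunk);
}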

       

      Viewing. Over what protocols do you do it, and how do you handle the different codecs and file formats?

Protocols: HTTP pseudo-streaming, HLS, RTMP, RTSP

Formats: we transcode videos to different formats (MP4, WebM) and resolutions

       

      Where do you see WebRTC going in 2-5 years?

      We imagine there will be full support of WebRTC across all browsers and devices as well as better support for client-side encoding of video data.

       

      Given the opportunity, what would you change in WebRTC?

We’d like to see improved support for consistent resolution settings, as well as for encoding.

       

      What’s next?

We’re planning the 2nd Annual Video Hack Day in NYC for this coming May. You can find more information at videohackday.com or follow @videohacknyc on Twitter.

      The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.

      The post Ziggeo and WebRTC: An Interview With Susan Danziger appeared first on BlogGeek.me.

      FreeSWITCH Week in Review (Master Branch) November 21st – November 28th

      FreeSWITCH - Tue, 12/01/2015 - 20:03

      This week we had a few features including: allowing building with OpenSSL without EC support, a video quality parameter to allow for conference configuration for verto, and some improvements to conference layouts for verto as well. If you haven’t already, it is highly recommended that you upgrade to the newest 1.6 release as soon as possible to avoid the vulnerability from last week. Join us Wednesdays at 12:00 CT for some more FreeSWITCH fun! This week we have James Tagg! And head over to freeswitch.com to learn more about FreeSWITCH support.

      New features that were added:

      • FS-8568 [core] Allow building using system OpenSSL without EC support
      • FS-8293 [verto][mod_conference] Made sanity level based on 1080p and added a video-quality conference profile parameter for specifying the motion factor when calculating video bitrate, defaulting to 1.
      • FS-8264 [verto_communicator][verto]  Adapted the layout select to new response, added a separated menu in members list to set its reservation id, and added all the reservation IDs in the return of “list-videoLayouts” command
      • FS-8433 [mod_sofia] Allow hangup cause to be set inside redirect data

      Improvements in build system, cross platform support, and packaging:

      • FS-8592 [Windows] Fixed some simple compiler errors
      • FS-8578 [mod_verto] Fixed build error for missing __bswap_64 on osx
      • FS-8152 [Debian] Make sure to package the image directories too
      • FS-8576 [Debian] Fixed a package upgrade issue related to the fonts being installed in multiple packages

      The following bugs were squashed:

      • FS-8569 [mod_conference] Fixed undefined symbol conference_cdr_test_mflag
      • FS-8574 [mod_conference] Fixed a read write lock issue
      • FS-8566 [core] Fixed calls failing when put on hold in bypass media mode with inbound late negotiation set to false
      • FS-8573 [core] Fixed one way audio after resuming from hold in bypass media mode
      • FS-8575 [core] Fixed DTMF not being passed from a to b during rfc 2833 events
      • FS-8582 [mod_httapi] Fixed a crash caused by a null URL being passed

       

      The FreeSWITCH 1.4 branch had a couple of bug fixes back ported as well as the release of 1.4.26. And again, keep in mind that 1.4 is quickly moving toward end of life and won’t be supported any longer except for high level security issues.

      New features that were added:

      • FS-8547 [core] Add error log into stats to log when quality impacting events begin and end

      The following bugs were squashed:

      • FS-8537 [mod_lua] Fixed a segfault caused by passing nil to various lua functions
