WebRTC Broadcast will be all the rage in 2016.
As I am working my way in analyzing the various use case categories for WebRTC, I decided to check what’s been important in 2015. The “winner” in attention was a relatively new category of WebRTC broadcast – one in which WebRTC is being used when what one is trying to achieve is sending a video stream to many viewers. These viewers can be passive, or they can interact with the creator of the broadcast.
Up until 2014, I had 4 such vendors in my list. 2015 brought 15 new vendors to it – call it “the fastest growing category”. And this is predominantly a US phenomenon – only 3 of the new vendors aren’t US based startups.
Periscope and Meerkat are partly to “blame” here. The noise they made in the market stirred others to join the fray – especially if you consider many of them are based in San Francisco as well.
TokBox just introduced Spotlight – their own live broadcast APIs – for those who need it. At its heart, Spotlight enables the types of interactions that we see on the market today for these kinds of solutions:
Here are some of my thoughts on this new emerging category:
2016 will be a continuation of what we’ve seen during 2015. More companies trying to define what live WebRTC broadcast looks like and aiming for different types of architectures to support it. In most cases, these architectures will incorporate WebRTC.
Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.
The post The Rise of WebRTC Broadcast and Live Streaming appeared first on BlogGeek.me.
Is it just me or are browsers fun again?
Who would have believed? Microsoft releasing their JavaScript engine as open source. And under a permissive MIT license.
While there are many browsers and vendors out there, there are probably only 4 that matter: Chrome (Google), Firefox (Mozilla), Edge (Microsoft) and Safari (Apple).
Who haven’t I included?
What should we expect in 2016 from the browsers? A lot.
Google Chrome
For Google, Chrome is an important piece of the puzzle. It lives in the web and the more control points it has over access to information the better positioned it is.
The ongoing activity of Google in WebRTC is part of the picture, and probably not the biggest one.
Google is the company with the least regard for legacy code that I know of. When something requires fixing, Google developers are not afraid to rewrite and refactor large components, and management allows and probably even encourages this behavior – something I haven’t seen anywhere else.
A few examples from recent years:
That said, it seems that Google has been somewhat complacent in the area of speed and size with Chrome. I am sure the Chrome team is aware of it and working hard to fix it, but the results haven’t been encouraging enough. This will change – mostly because of the actions of the other browser vendors.
Mozilla Firefox
Mozilla is in transition. From relying on Google as its main benefactor to spreading the risks.
In the past few months though, Mozilla has started trimming down its projects:
These changes indicate that Mozilla understood it can’t just try and replicate every cool new Google project and open source it – it will now focus on making Firefox better. This is a much needed focus, with Firefox slipping in market share for quite some time now.
On the browser front, the notable changes Firefox is making are around privacy and the private browsing mode.
Microsoft Edge
Edge is new. It is a complete rewrite of what a browser is. It is speedy, clean and with huge potential. It has its own adoption challenges to overcome (mainly people comfortable enough with Chrome and not caring to try out Edge).
What to do? Microsoft just open sourced the JavaScript engine in Edge – Chakra. It shows some interesting performance results that seem to rival Chrome’s V8. The more interesting aspect of it is the clear intent in getting Chakra into Node.js as a V8 alternative. Not sure if it will work, but it does have merit. It shows me that:
I am sure there’s an engineer at Google already tasked with reviewing the code of Chakra once it gets a public git repository.
Edge is trying to push the envelope. This will challenge Google further with Chrome – always a good thing.
Apple Safari
Safari seems to take second place at Apple. It is working, but not much is said or done about it.
We hear a lot of rumblings about WebRTC in Safari lately. How this will make its way into Safari on iOS and Mac is anyone’s guess. The bigger question: will this be the only significant browser change to be introduced by Apple, or part of a larger overhaul?
Why is this important?
The web isn’t standing still. It is evolving and changing. Earlier this year, WebAssembly was announced – an effort to speed up the interactive web.
While many believe that apps have won over the web when it comes to development, we need to remember two things:
An interesting road ahead of us.
Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.
The post The Browser Wars are Back appeared first on BlogGeek.me.
It’s easy, as long as you know where to look for it.
This was published yesterday. Oftentimes, the things I read out there about WebRTC sound just like this conversation from Dilbert’s life.
WebRTC is elusive. It is located in the cracks between VoIP and the web – a place where most people are just clueless. My own pedigree is VoIP. About 6 years ago, as an “aging” CTO trying to build a cloud service with an API for developers that runs a VoIP service, I was given an important lesson – there’s much to be learned from a 24 year old kid with milk teeth. In the span of a year and a half I got introduced to agile methodologies, internet scale, continuous deployment and a slew of other techniques – none of them went by the terms we use today, but they were all there. It helped me later in understanding how and why WebRTC is so transformative.
As we head into 2016, I guess it is time to state a few of the great resources out there for WebRTC – the places I rely on in my own reading about WebRTC.
The Bloggers
Out of the people out there that cover WebRTC, there are 3 that I make it a point to read. All of them are good friends of mine:
Most company blogs suck. Big time. They are boring, and usually read like brochures or press releases. There are a few decent corporate blogs covering WebRTC – some of them can be considered mandatory reading.
TokBox
TokBox has the best corporate blog all around if what you are looking for is WebRTC related information. Now that they have recruited Philipp Hancke they probably will improve further.
Between their new offering and feature announcements are gems of information, in the form of whitepapers on certain verticals and insights on WebRTC drawn from the service they operate. They also run TechToks that get recorded and published on YouTube.
callstats.io
The callstats.io blog is another great resource, especially when it comes to covering getstats() related stuff and media quality. Highly recommended.
AT&T
I’ve written my own guest post on the AT&T Developers blog once or twice, so I know how they operate. While being a large corporation has a lot of limitations, when they publish content about WebRTC or adjacent technologies – it is worth the time to read.
A testament to that is the recent series of WebRTC UX/UI posts they have commissioned from &yet – mandatory reading for anyone who delves into web apps for WebRTC.
Sinch
While Sinch’s blog hasn’t been too interesting when it comes to WebRTC lately, earlier this year they had great content to share. Lately, it tends to be around use cases of their customers – totally interesting, but from a different angle.
I’d subscribe to their blog if I were you, to stay posted. I am sure they’ll have interesting articles for us next year as well.
WebRTC Digest & Blacc Spot Media
Blacc Spot Media started WebRTC Digest, and they also run their own Blacc Spot Media blog. Both are great resources with good content.
The digest site is all about acquisitions and money raising in the space, while Blacc Spot Media tries to cover the industry and the ecosystem.
At times, there needs to be some further validation of the vendors being written about there (some aren’t really doing WebRTC but are in the real time space), but all in all, it is one of the better resources out there.
webrtcHacks
By far the best place for WebRTC developers to go.
In-depth and timely content.
If you aren’t subscribed – then please do.
WebRTC Weekly
If you don’t want to subscribe to too many resources and are in need of a single source, then Chris Kranky and I run the WebRTC Weekly. Subscribe by email to receive one email a week with links to the relevant articles and posts from all over the web related to WebRTC.
There are three reasons why something doesn’t get included in the WebRTC Weekly:
The post Where to find Quality WebRTC Resources appeared first on BlogGeek.me.
Company profile: Communication API / API Platform | Large | Voice, Video | Cloud Communication APIs.
[If you are new around here, then you should know I’ve been writing about WebRTC lately. You can skim through the WebRTC post series or just read what WebRTC is all about.]
API platforms fascinate me. Especially communication API platforms. You can’t get any bigger than Twilio these days. This year, they’ve announced and launched a slew of new capabilities – task routing, video calling, IP messaging and a lot of enhancements to their existing services.
I’ve been wanting to land an interview with Twilio for quite some time. I was happy when Al Cook, Director of Product Marketing at Twilio, obliged. Here’s what he had to say.
What is Twilio all about?
Twilio is a cloud communications platform. We provide programmable building blocks that developers use to embed communications into their mobile and web apps – from voice, messaging, and video to authentication. So when you are communicating with your Uber driver via text or anonymous phone call, calling Hulu customer support, or shopping via text with the help of your Nordstrom personal shopper, that’s Twilio. Or to give a WebRTC example – when you call a customer support team powered by Zendesk, the agent is talking to you over a WebRTC connection powered by Twilio. We have over 700,000 developers generating over 50 billion API transactions a year. In WebRTC we’ve powered over half a billion minutes of WebRTC to date.
Twilio Video went to public beta today. You’ve been in private beta for a while. How is it going? What have you learned?
That’s right, the private beta started in May and we collaborated with developers to build the right solution, with the right developer experience. Video is in public beta as of now. Now anyone can sign up for immediate access to our WebRTC-powered web and mobile SDKs, and the cloud-based signaling/media services that power them.
During the private beta we onboarded several thousand developers from our base. This group size was critical for gaining useful feedback and insights, while still allowing meaningful interactions.
Interesting. Did you check what users do during the private beta?
During the private beta onboarding, we asked participants to tell us about their use cases. I read every single entry and categorized the use cases. The top categories break out as follows:
Two of the big areas we spent considerable time refining during the beta were improving the mobile media stack performance, and building a signaling model that allows us to continue to add new capabilities for multi-party, multi-endpoint IP and carrier communications.
I have to ask. These developers in the private beta – how many of them were existing Twilio developers who just added video versus new ones?
It’s a mix. A lot of folks are with us because they want multiple channels of communication, and so video is a natural extension for them. But we’ve also had a lot of people who were new to Twilio, and excited to have a better alternative than their current video solution.
How is your video offering different from other alternatives that are out there today?
We believe this solution is not available anywhere else. Here’s some insight on the areas where we invested the most time to ensure we were building the right solution for needs that had not been addressed.
What excites you about working in WebRTC?
To me, the most exciting aspect of WebRTC – and really programmable real-time communications more generally – is that it stands to fundamentally change the way we communicate. Through every iteration of the phone, the basic interaction hasn’t really changed. Historically, there has been little-to-no ability to gain immediate context of why the caller is calling, what they were doing beforehand, and what they may need. Embedding communications into applications allows for a far more meaningful and relevant communication. Imagine calling your car insurance company from your car insurance app following an accident, and instantly the call is routed with the right prioritization based on the GPS of your phone to an agent who speaks your preferred language. The app enables you to instantly share a video feed of the accident scene and collaboratively annotate the video using the app. All this while the agent captures the information in their record system to avoid a separate visit from a damage appraiser.
We believe every single app will have communications built into it. Every. Single. App.
Where do you see WebRTC going in 2-5 years?
WebRTC/ORTC is moving at such a velocity that 5 years out is pretty hard to forecast. But we believe:
If you had one piece of advice for those thinking of adopting WebRTC, what would it be?
Experiment – and think about how you scale the experiments that find success. It’s relatively simple to get a basic WebRTC call working. But plan for what happens if your new service finds success. Consider how you will scale, maintain and operate your TURN media relay. How will you collect and analyze voice quality diagnostics from all your endpoints? How will you interoperate with SIP and PSTN networks?
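A minimal sketch of what pointing the browser at your own TURN relay looks like – the server URLs and credentials below are placeholders, not real endpoints:

    // Placeholder STUN/TURN configuration – swap in your own relay and credentials.
    var configuration = {
      iceServers: [
        { urls: 'stun:stun.example.com:3478' },
        {
          urls: 'turn:turn.example.com:443?transport=tcp',
          username: 'demo-user',
          credential: 'demo-secret'
        }
      ]
    };

    var pc = new RTCPeerConnection(configuration);

    // Logging the gathered candidates shows whether relay candidates are actually produced.
    pc.onicecandidate = function (event) {
      if (event.candidate) {
        console.log('candidate:', event.candidate.candidate);
      }
    };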
Given the opportunity, what would you change in WebRTC?
Some improvements have been addressed by ORTC. We’re big fans of these improvements and we look forward to the standards combining.
We would like more control over the media stack in a browser environment, if the browser makers could figure out a secure way to enable this. We spend a considerable amount of time testing and measuring voice quality in impaired networks. In fact, we open-sourced the testing tool we use. On the mobile side, we operate the media stack and we do a lot of fine tuning to constantly improve the media quality. This includes taking into account the performance of different networks and hardware configurations – whether it’s adding codecs to use in particular scenarios, adding Forward Error Correction (FEC) techniques, or other areas we are working on. But when our endpoints call a browser-based endpoint, they have to fall back to the default media stack and it is not possible to layer on additional media enhancements, which is why we’d like more control in the browser environment.
In the more immediate time frame, the subject of handling QoS in WebRTC is tricky, and far from standardized. Plus, QoS behavior, like with much of WebRTC, tends to require significant reverse engineering to establish the exact behavior in different scenarios. We’re happy we can provide this capability on behalf of our customers – but we’d like more control over the experience.
What’s next for Twilio?
We’ve talked about a few of them – interoperability with SIP endpoints and PSTN endpoints for example. Of course we’re also working on SFU functionality for large scale video conferences – that should be no surprise to our customers. But we want to provide this capability in such a way that a developer doesn’t have to choose between either peer-to-peer routing or an SFU mix. The solution should intelligently move from one to the other as the call topology requires. We also want a solution that scales beyond any existing solutions. And then, well… that’s enough to keep us busy for now, Tsahi.
–
The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.
The post Twilio and WebRTC: An Interview with Al Cook appeared first on BlogGeek.me.
Xander Dumaine provides some strategies and code for dealing with the new secure origin only policy in Chrome 47+ that forces the use of HTTPS.
The post Surviving Mandatory HTTPS in Chrome (Xander Dumaine) appeared first on webrtcHacks.
This week the verto communicator had some new updates to the administrator menu and the core added a new origination_audio_mode variable. Join us Wednesdays at 12:00 CT for some more FreeSWITCH fun! This week we have Italo Rossi and the Evolux call center team! And, head over to freeswitch.com to learn more about FreeSWITCH support.
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
These days “free” software seems to be a scary prospect to the general public. The association between open-source software and malicious “click here for free stuff” ads is strong, and the fear of unknown “hackers” runs rampant. The old adage that “nothing good in life comes for free” has ingrained the idea that free is synonymous with scams. Why would anyone in their right mind give away a great product for free? This thought process is why most of the general public limits itself to costly, proprietary services.

The tech industry is huge and understanding it all is impossible, but buying trust isn’t the answer to guaranteed safety. There is plenty of fantastic open-source software available, and it shouldn’t only be accessible to experienced, tech savvy individuals. As we move toward a more tech based culture, the up and coming generations can have an especially difficult time trying to correct this misconception for their older peers.

Jim Salter from Opensource.com addressed this issue with an open letter to all parents with kids that want to use open-source software. He writes that free open-source software (FOSS) “is not ‘stolen’ software. Free software licenses like the GPL and the BSD and Apache licenses allow users the ability to freely use, and developers the ability to freely develop, the software placed under those licenses. Another important thing to understand about FOSS is that it is not merely ‘free’ in the sense of ‘free in every box of cereal.’ Making a new copy of a piece of software literally costs nothing at all—this has made it possible for community efforts to produce world-class products in a way material goods never could be.”

Helping the general public understand the definition and motivation behind open-source will bring it out of the shadows of the industry and help it become mainstream. You can read his letter here: https://opensource.com/life/15/12/dear-parents-let-your-kids-use-open-source-software
A few days back my old friend Chris Koehnke, better known as “Kranky” asked me how hard it would be to implement a wild idea he had to monitor what percentage of the time you spent talking instead of listening on a call when using WebRTC. When I said “one day” that made him wonder whether he could offshore it to save money. Well… good luck!
A week later Kranky showed me some code. Wait, he is writing code? It was not bad – it was using the WebAudio API so going in the right direction. It was enough to prod me to finish writing the app for him.
The audio stream volume sample application from Google calculates the root mean square (RMS) of the audio signal, which is extracted from the input stream using a script processor every 200ms. There are a lot of tuning options here of course.
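A rough sketch of that approach – not the actual Google sample code, and the buffer size here is an arbitrary choice – looks roughly like this:

    // Compute the root mean square of the microphone signal with a ScriptProcessorNode.
    // Note: the callback rate is driven by the buffer size, not a fixed 200ms interval.
    var audioContext = new AudioContext();

    navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
      var source = audioContext.createMediaStreamSource(stream);
      var processor = audioContext.createScriptProcessor(2048, 1, 1);

      processor.onaudioprocess = function (event) {
        var samples = event.inputBuffer.getChannelData(0);
        var sum = 0;
        for (var i = 0; i < samples.length; i++) {
          sum += samples[i] * samples[i];
        }
        console.log('instant volume (RMS):', Math.sqrt(sum / samples.length).toFixed(4));
      };

      source.connect(processor);
      processor.connect(audioContext.destination);
    });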
Instead of starting from scratch, I decided to use hark, a small open source module for this task that my coworker Philip Roberts had built in mid-2013 when the WebAudio API became first available.
Instead of the RMS, hark uses a Fast Fourier Transform to obtain a frequency domain representation of the input signal. Then, hark picks the maximum amplitude as an indication of the volume of the signal. Let’s try this (full code here):
    var hark = require('../hark.js')
    var getUserMedia = require('getusermedia')

    getUserMedia(function(err, stream) {
      if (err) throw err

      var options = {};
      var speechEvents = hark(stream, options);

      speechEvents.on('volume_change', function(volume) {
        console.log('current volume', volume);
      });
    });

On top of this, hark uses a simple speech detection algorithm that considers speech to be started when the maximum amplitude stays above a threshold for a number of milliseconds. Much less complicated than typical voice activity detection algorithms but pretty effective. And easy to use as well, just subscribe to two additional events:
    speechEvents.on('speaking', function() {
      console.log('speaking');
    });

    speechEvents.on('stopped_speaking', function() {
      console.log('stopped_speaking');
    });

Tuning the threshold for accurate speech detection is pretty tricky. So I needed visualization (and just requiring hark only took five minutes so I had plenty of time). Using the awesome Highcharts graph library I quickly added plot bands to the graph I was generating:
With the visualization I could easily see that the speech detection events happened a bit later than I expected, since hark requires a certain history over the threshold for the trigger to work (say: 400ms). To adjust for this in the graph, I had to subtract this trigger time from my x-axis (now() – 400ms, for example).
That graph is still visible on the more techie variant of the website, so if you think the results are not accurate… it might help you figure out what is going on. I am happy with the current behavior.
The percentage of speech is then calculated as the sum of the intervals in which speech is detected, divided by the duration of the call. For display, a gauge chart with three different colors is used:
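As for the percentage itself, a hypothetical sketch of the bookkeeping, driven by hark’s speaking/stopped_speaking events (the variable names are mine, not from the actual app):

    // Accumulate speaking time and divide by call duration to get the talk percentage.
    var callStart = Date.now();
    var speakingMs = 0;
    var speakingSince = null;

    speechEvents.on('speaking', function () {
      speakingSince = Date.now();
    });

    speechEvents.on('stopped_speaking', function () {
      if (speakingSince) {
        speakingMs += Date.now() - speakingSince;
        speakingSince = null;
      }
    });

    function talkPercentage() {
      return Math.round(100 * speakingMs / (Date.now() - callStart));
    }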
Adding remote audio to this would be awesome. However, while the WebAudio API is supported for local media streams in Chrome, Firefox and Edge, it is only supported for remote streams in Firefox. Hooking this up with the getStats API (in Chrome) to get the audio level would certainly be possible, but would require calling getStats at a very high frequency to get proper averages.
Check out the app in action at talklessnow and let us know what you think.
{“author”: “Philipp Hancke“}
Want to keep up on our latest posts? Please click here to subscribe to our mailing list if you have not already. We only email post updates. You can also follow us on twitter at @webrtcHacks for blog updates and news of technical WebRTC topics or our individual feeds @chadwallacehart, @victorpascual and @tsahil.
The post Shut up! Monitoring audio volume in getUserMedia appeared first on webrtcHacks.
Your private 911 system.
[If you are new around here, then you should know I’ve been writing about WebRTC lately. You can skim through the WebRTC post series or just read what WebRTC is all about.]
I have seen a lot of applications lately that target public safety. Some offer you a “ghost” partner to “walk” with you home, while others focus on the reporting aspects.
SaferMobility targets the authorities as the owners of the system (college campuses, municipalities, business zones, etc.) and provides a mobile application to the users. It is reimagining how a 911 service would look if it were specified today.
Matthew Mah, CTO of SaferMobility, was kind enough to answer my questions on what role WebRTC plays in their service.
What is SaferMobility all about?
SaferMobility focuses on using the capabilities of modern smartphones for enhancing safety. The public safety system in the United States is built around wired telephones, and it is more difficult for authorities to respond to mobile phones because they are harder to locate than fixed telephones. The modern smartphone has audio, video, location, and text capabilities that just are not being used efficiently yet.
There are many other safety related apps out there. What differentiates you from the rest of the pack?
Our systems focus on real-time interaction with authorities. Authorities receive enhanced calls with audio, video, location, and text information in real-time without it having to filter through friends or storage systems.
You told me you launched your service using Flash. Why did you migrate to WebRTC?
WebRTC is a huge improvement over Flash in terms of security, support, and capability. Adobe is not really interested in supporting Flash for mobile devices, so capabilities like acoustic echo suppression are not available. This makes a huge difference in communication quality.
What signaling have you decided to integrate on top of WebRTC?
We use a proprietary message system built on websockets.
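For illustration only – the actual SaferMobility message format is proprietary – signaling over WebSockets generally boils down to exchanging JSON blobs along these lines (the URL and field names here are made up):

    // Hypothetical signaling channel: JSON messages over a WebSocket.
    var signaling = new WebSocket('wss://signaling.example.com/ws');

    signaling.onmessage = function (event) {
      var message = JSON.parse(event.data);
      if (message.type === 'offer') {
        // Hand the SDP offer to the local RTCPeerConnection, create an answer, send it back.
      } else if (message.type === 'candidate') {
        // pc.addIceCandidate(new RTCIceCandidate(message.candidate));
      }
    };

    function sendSignal(message) {
      signaling.send(JSON.stringify(message));
    }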
Backend. What technologies and architecture are you using there?
Our Java application server runs Tomcat with a PostgreSQL database. It handles the signaling and issues commands to a media server for recording capabilities. We currently run on Dialogic’s Extended Media Server (XMS).
Mobile. You decided to port WebRTC to iOS and Android on your own. How was the experience?
Porting was difficult because of compatibility issues between our WebRTC media server and the web, iOS, and Android clients. We would get two clients to work with the server, then upgrade the server and have two different clients work.
For stability on the web side, the nwjs project has been very helpful for producing an application that works even while the web browser updates are racing ahead and frequently breaking things.
Where do you see WebRTC going in 2-5 years?
WebRTC will replace stagnant technologies like Flash. The ability to communicate through the browser will also lower the barrier for application development.
If you had one piece of advice for those thinking of adopting WebRTC, what would it be?
Be prepared for things to change quickly because WebRTC is still growing and maturing.
Given the opportunity, what would you change in WebRTC?
Aside from the expected growing pains, I am pleased with WebRTC.
What’s next for SaferMobility?
There’s a huge opportunity to improve public safety, security services, and general communication with modern mobile devices, and SaferMobility will be part of making those improvements.
–
The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.
The post SaferMobility and WebRTC: An Interview With Matthew Mah appeared first on BlogGeek.me.
WebRTC GetUserMedia is more important than the rest of this communication stack.
Who would have believed? With all the magic and distraction that video calling from a browser brings with it, the real treasure trove resides in the basics – WebRTC GetUserMedia.
Simplifying things, WebRTC has 3 distinct areas/APIs to it: getUserMedia, PeerConnection and the DataChannel.
I’ve pointed out in the past how WebRTC GetUserMedia gets used by Mailchimp and WhatsApp. Taking a camera snapshot is nice, but what else can we achieve with this access we’ve been given?
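For reference, a camera snapshot takes only a few lines – a minimal sketch, assuming a video element, a canvas and a button already exist on the page (older Chrome versions needed URL.createObjectURL instead of srcObject):

    // Show the camera in a <video id="preview"> and copy a frame to <canvas id="snapshot">.
    navigator.mediaDevices.getUserMedia({ video: true }).then(function (stream) {
      var video = document.getElementById('preview');
      video.srcObject = stream;
      video.play();

      document.getElementById('snap').onclick = function () {
        var canvas = document.getElementById('snapshot');
        canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);
        var dataUrl = canvas.toDataURL('image/png'); // ready to display or upload
      };
    });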
TalkLessNow
Chris Kranky had an idea a few weeks ago: measuring how much you’re yapping in a call as opposed to listening. So he made it happen. On a shoestring budget, some connections and a bit of time, TalkLessNow was born.
How it works?
The website is quite spartan. When you go on a phone call (not a WebRTC one), you just press the green Call button on talklessnow.com.
The code on the site “listens” through the machine’s microphone to your call. Whenever it hears enough volume – it assumes you’re talking. If the volume is lower than its configured threshold – you’re listening.
Just WebRTC GetUserMedia. No PeerConnection or any other fuss.
Will it work?
Here in Israel, I am sure the results won’t be good. We’re used to talking over each other and interrupting. Efficiency at its best. If in a call between Israelis it shows less than 70% of talk time per participant, I’ll crown that session a success.
Seriously though, we should be listening a lot more than we’re talking.
Same but different
The now defunct Guitar Tuner works the same way. It doesn’t work anymore because the site is served on HTTP and WebRTC GetUserMedia now requires HTTPS to work with the latest Chrome release (progress, you know).
Ziggeo
Here’s another example.
Ziggeo is making use of WebRTC to record videos. They do that by employing WebRTC GetUserMedia, storing the resulting media locally and at the end of the recording sending it to their servers. The sending part doesn’t occur via WebRTC.
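One way to implement that record-locally-then-upload pattern is the MediaRecorder API, where the browser supports it – a sketch of the idea, not necessarily how Ziggeo does it, and with a placeholder upload URL:

    // Record the camera/mic locally, then POST the resulting blob over plain HTTP.
    navigator.mediaDevices.getUserMedia({ audio: true, video: true }).then(function (stream) {
      var chunks = [];
      var recorder = new MediaRecorder(stream);

      recorder.ondataavailable = function (event) {
        chunks.push(event.data);
      };

      recorder.onstop = function () {
        var blob = new Blob(chunks, { type: 'video/webm' });
        var xhr = new XMLHttpRequest();
        xhr.open('POST', 'https://example.com/upload'); // placeholder endpoint
        xhr.send(blob); // no PeerConnection involved in the upload
      };

      recorder.start();
      setTimeout(function () { recorder.stop(); }, 10000); // record ~10 seconds
    });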
There’s an interesting interview with Susan Danziger, CEO of Ziggeo from last week that you should read.
Is this Real Time Communications?
WHO CARES?
It works. It gives business value – and in ways that weren’t really possible up until today.
There’s a lot more to WebRTC than classic VoIP.
Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.
The post The Hidden Gems of WebRTC Goodness May Well Lie Within GetUserMedia Itself appeared first on BlogGeek.me.
The FreeSWITCH project is nearly ten years old, and the FreeSWITCH git repo has commits from about 214 different authors and over 3.2 million lines of code, with 875k of those lines under the src directory. Some of the maintenance challenges associated with such a large software project include detecting and resolving human errors such as typos, logic inversions, and dangerous formatting. Implementing code review is a must, and there are different techniques common to the industry used to reduce the defect density and standardize the code format: autobuilding against multiple compilers, routine testing, and static code analysis. The core development team at FreeSWITCH uses all three techniques.
Both autobuilding and routine testing can be applied with in-house workflows. Routinely building the packages against different compilers allows for consistent tracking to make sure additional commits won’t break existing code in any of the prepackaged builds. This also allows for consistent handling of packages for multiple operating systems. By autobuilding against different compilers, we can make sure that a commit for one set of packages doesn’t break the builds for the others. Routine testing is another viable form of code review: combined with a bug tracking system, it allows community members to report bugs found in unique environmental circumstances. Open-source software relies on many different eyes to keep bugs shallow, and this practice opens up different configurations and applications of the software to more thorough testing. Each year hundreds of tickets are opened on the FreeSWITCH project JIRA, and the developers work tirelessly to address all of them.
Static analyzers can scan thousands to millions of lines of code without getting tired and usually don’t require many manual steps to run. The relationship between a project’s developers and the creators of a static code analyzer can be a symbiotic one. The analyzer works by using a database of multiple tiers of positive and negative heuristics. First, it runs the low cost patterns against the entire code base to generate a large list of possible issues, then runs more accurate and higher cost patterns against the bug candidates to reduce the number of false positives, and finally evaluates the severity and more accurately classifies the issues. Once the analyzer has completed its run, it requires an experienced software developer familiar with the code base to review each issue reported.
Most static analyzers are built to report possible candidates in the first pass, and thus immature analyzers are perceived to red flag everything. They tend to create a lot of noise by reporting a large number of false positives and misclassifying the severity of issues. After the developers for the software being analyzed have reviewed the results of the analysis, they can give specific examples of why they determined it to be a false positive which can be used to improve the static analyzer’s heuristics. As the database matures, the quality of the negative heuristics improves and reduces the volume of false positives. The advantage here is that each report triaged leads to a commit resolving a bug or an improvement to the analyzer.
The team over at Program Verification Systems have built a static analyzer for C/C++ code that integrates into Microsoft Visual Studio. According to their website, the program allows the user to scan lines of code to locate various typos and other errors. Their analyzer supports C/C++, C++/CLI, and C++/CX, with support for the C# language coming soon. PVS-Studio is also available as a standalone utility through the distribution packages, which allows for viewing the analysis logs on a machine without Visual Studio. It can also be used to track multiple sub-builds and analyze non-standard build systems. The reports for the open-source projects that have been analyzed with this software can be found on their website in the Checked Projects section.
The FreeSWITCH team ran the open-source FreeSWITCH project through the PVS analyzer. A decent majority of the issues reviewed were determined to be minor Windows-specific bugs not previously flagged by the compilers the team currently uses. The team is continuing to review and resolve the alerts from the analysis and has integrated this analyzer into the code review workflow. They look forward to continuing this symbiotic relationship with the goal of improving the quality of the software.
If you would like to replicate the results you can use the following steps.
Our features this week include: improvements to the auto bitrate features in mod_conference, the addition of the Debian install script for the verto communicator, and separate controls for gain and volume for verto. Join us Wednesdays at 12:00 CT for some more FreeSWITCH fun! This week we have Tsahi Levent-Levi talking about WebRTC! And head over to freeswitch.com to learn more about FreeSWITCH support.
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
The FreeSWITCH 1.4 branch had a couple of bug fixes backported. And again, keep in mind that 1.4 is quickly moving toward end of life and will no longer be supported except for high-level security issues.
The following bugs were squashed:
Links: http://www.dslreports.com/speedtest
Links:
https://support.flowroute.com/customer/en/portal/articles/2205573-freeswitch—add-flowroute-as-sip-gateway
https://developer.flowroute.com/