Cloud Communication APIs.
[If you are new around here, then you should know I’ve been writing about WebRTC lately. You can skim through the WebRTC post series or just read what WebRTC is all about.]

API platforms fascinate me. Especially communication API platforms. You can’t get any bigger than Twilio these days. This year, they’ve announced and launched a slew of new capabilities – task routing, video calling, IP messaging and a lot of enhancements to their existing services.
I’ve been wanting to land an interview with Twilio for quite some time. I was happy when Al Cook, Director of Product Marketing at Twilio, obliged. Here’s what he had to say.
What is Twilio all about?
Twilio is a cloud communications platform. We provide programmable building blocks that developers use to embed communications into their mobile and web apps – from voice, messaging, and video to authentication. So when you are communicating with your Uber driver via text or anonymous phone call, calling Hulu customer support, or shopping via text with the help of your Nordstrom personal shopper, that’s Twilio. Or to give a WebRTC example – when you call a customer support team powered by Zendesk, the agent is talking to you over a WebRTC connection powered by Twilio. We have over 700,000 developers generating over 50 billion API transactions a year. In WebRTC we’ve powered over half a billion minutes of WebRTC to date.
Twilio Video went to public beta today. You’ve been in private beta for a while. How is it going? What have you learned?
That’s right, the private beta started in May and we collaborated with developers to build the right solution, with the right developer experience. Video is in public beta as of today. Now anyone can sign up for immediate access to our WebRTC-powered web and mobile SDKs, and the cloud-based signaling/media services that power them.
During the private beta we onboarded several thousand developers from our base. This group size was critical for gaining useful feedback and insights, while still allowing meaningful interactions.
Interesting. Did you check what users do during the private beta?
During the private beta onboarding, we asked participants to tell us about their use cases. I read every single entry and categorized the use cases. The top categories break out as follows:
Two of the big areas we spent considerable time refining during the beta were improving the mobile media stack performance, and building a signaling model that allows us to continue to add new capabilities for multi-party, multi-endpoint IP and carrier communications.
I have to ask. These developers in the private beta – how many of them were existing Twilio developers who just added video versus new ones?
It’s a mix. A lot of folks are with us because they want multiple channels of communication, and so video is a natural extension for them. But we’ve also had a lot of people who were new to Twilio, and excited to have a better alternative than their current video solution.
How is your video offering different from other alternatives that are out there today?
We believe this solution is not available anywhere else. Here’s some insight on the areas where we invested the most time to ensure we were building the right solution for needs that had not been addressed.
What excites you about working in WebRTC?
To me, the most exciting aspect of WebRTC – and really programmable real-time communications more generally – is that it stands to fundamentally change the way we communicate. Through every iteration of the phone, the basic interaction hasn’t really changed. Historically, there has been little-to-no ability to gain immediate context of why the caller is calling, what they were doing beforehand, and what they may need. Embedding communications into applications allows for far more meaningful and relevant communication. Imagine calling your car insurance company from your car insurance app following an accident, and instantly the call is routed with the right prioritization, based on the GPS of your phone, to an agent who speaks your preferred language. The app enables you to instantly share a video feed of the accident scene and collaboratively annotate the video using the app. All this while the agent captures the information in their record system to avoid a separate visit from a damage appraiser.
We believe every single app will have communications built into it. Every. Single. App.
Where do you see WebRTC going in 2-5 years?
WebRTC/ORTC is moving at such a velocity that 5 years out is pretty hard to forecast. But we believe:
If you had one piece of advice for those thinking of adopting WebRTC, what would it be?
Experiment – and think about how you scale the experiments that find success. It’s relatively simple to get a basic WebRTC call working, but plan for what happens if your new service finds success. How will you scale, maintain and operate your TURN media relays? How will you collect and analyze voice quality diagnostics from all your endpoints? How will you interoperate with SIP and PSTN networks?
Given the opportunity, what would you change in WebRTC?
Some improvements have been addressed by ORTC. We’re big fans of these improvements and we look forward to the standards combining.
We would like more control over the media stack in a browser environment, if the browser makers could figure out a secure way to enable this. We spend a considerable amount of time testing and measuring voice quality in impaired networks. In fact, we open-sourced the testing tool we use. On the mobile side, we operate the media stack and we do a lot of fine tuning to constantly improve the media quality, taking into account the performance of different networks and hardware configurations – whether it’s adding codecs to use in particular scenarios, adding Forward Error Correction (FEC) techniques, or other areas we are working on. But when our endpoints call a browser-based endpoint, they have to fall back to the default media stack, and it is not possible to layer on additional media enhancements, which is why we’d like more control in the browser environment.
In the more immediate time frame, the subject of handling QoS in WebRTC is tricky, and far from standardized. Plus, QoS behavior, like with much of WebRTC, tends to require significant reverse engineering to establish the exact behavior in different scenarios. We’re happy we can provide this capability on behalf of our customers – but we’d like more control over the experience.
What’s next for Twilio?
We’ve talked about a few of them – interoperability with SIP endpoints and PSTN endpoints for example. Of course we’re also working on SFU functionality for large scale video conferences – that should be no surprise to our customers. But we want to provide this capability in such a way that a developer doesn’t have to choose between peer-to-peer routing and SFU-mixed media. The solution should intelligently move from one to the other as the call topology requires. We also want a solution that scales beyond any existing solutions. And then, well… that’s enough to keep us busy for now, Tsahi.
–
The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.
The post Twilio and WebRTC: An Interview with Al Cook appeared first on BlogGeek.me.
Xander Dumaine provides some strategies and code for dealing with the new secure origin only policy in Chrome 47+ that forces the use of HTTPS.
The post Surviving Mandatory HTTPS in Chrome (Xander Dumaine) appeared first on webrtcHacks.
This week the verto communicator had some new updates to the administrator menu and the core added a new origination_audio_mode variable. Join us Wednesdays at 12:00 CT for some more FreeSWITCH fun! This week we have Italo Rossi and the Evolux call center team! And, head over to freeswitch.com to learn more about FreeSWITCH support.
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
These days “free” software seems to be a scary prospect to the general public. The association between open-source software and malicious “click here for free stuff” ads is strong, and the fear of unknown “hackers” runs rampant. The old adage that “nothing good in life comes for free” has ingrained the idea that free is synonymous with scams. Why would anyone in their right mind give away a great product for free? This thought process is why most of the general public limits itself to costly, proprietary services.

The tech industry is huge and understanding all of it is impossible, but buying trust isn’t the answer to guaranteed safety. There is plenty of fantastic open-source software available, and it shouldn’t only be accessible to experienced, tech-savvy individuals. And as we move toward a more tech-based culture, the up-and-coming generations can have an especially difficult time correcting this misconception for their older peers.

Jim Salter from Opensource.com addressed this issue with an open letter to all parents whose kids want to use open-source software. He goes on to say free open-source software (FOSS) “is not “stolen” software. Free software licenses like the GPL and the BSD and Apache licenses allow users the ability to freely use, and developers the ability to freely develop, the software placed under those licenses. Another important thing to understand about FOSS is that it is not merely “free” in the sense of “free in every box of cereal.” Making a new copy of a piece of software literally costs nothing at all—this has made it possible for community efforts to produce world-class products in a way material goods never could be.”

Helping the general public understand the definition and motivation behind open source will bring it out of the shadows of the industry and help it become mainstream. You can read his letter here: https://opensource.com/life/15/12/dear-parents-let-your-kids-use-open-source-software
A few days back my old friend Chris Koehnke, better known as “Kranky”, asked me how hard it would be to implement a wild idea he had: monitoring what percentage of the time you spend talking instead of listening on a call when using WebRTC. When I said “one day”, he wondered whether he could offshore it to save money. Well… good luck!

A week later Kranky showed me some code. Wait, he is writing code? It was not bad – it was using the WebAudio API, so it was going in the right direction. It was enough to prod me to finish writing the app for him.
The audio stream volume sample application from Google calculates the root mean square (RMS) of the audio signal, which is extracted from the input stream using a script processor every 200ms. There are a lot of tuning options here, of course.
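For reference, here is a minimal sketch of that approach, assuming a ScriptProcessorNode that reads raw PCM samples; the buffer size and logging are my illustrative choices, not the exact values from Google’s sample:

```javascript
// Minimal RMS volume meter sketch (illustrative, not Google's exact sample).
navigator.mediaDevices.getUserMedia({ audio: true }).then(function(stream) {
  var audioContext = new AudioContext();
  var source = audioContext.createMediaStreamSource(stream);
  // 4096 samples per callback; at 48kHz that is roughly 85ms of audio.
  var processor = audioContext.createScriptProcessor(4096, 1, 1);

  processor.onaudioprocess = function(event) {
    var samples = event.inputBuffer.getChannelData(0);
    var sumOfSquares = 0;
    for (var i = 0; i < samples.length; i++) {
      sumOfSquares += samples[i] * samples[i];
    }
    // The RMS of each buffer is a rough measure of the current volume.
    console.log('volume (RMS):', Math.sqrt(sumOfSquares / samples.length));
  };

  source.connect(processor);
  // A ScriptProcessorNode needs a destination for onaudioprocess to fire.
  processor.connect(audioContext.destination);
});
```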
Instead of starting from scratch, I decided to use hark, a small open source module for this task that my coworker Philip Roberts had built in mid-2013 when the WebAudio API became first available.
Instead of the RMS, hark uses the Fast Fourier Transformation to obtain a frequency domain representation of the input signal. Then, hark picks the maximum amplitude as an indication for the volume of the signal. Let’s try this (full code here):
```javascript
var hark = require('../hark.js')
var getUserMedia = require('getusermedia')

getUserMedia(function(err, stream) {
  if (err) throw err

  var options = {};
  var speechEvents = hark(stream, options);

  speechEvents.on('volume_change', function(volume) {
    console.log('current volume', volume);
  });
});
```

On top of this, hark uses a simple speech detection algorithm that considers speech to be started when the maximum amplitude stays above a threshold for a number of milliseconds. Much less complicated than typical voice activity detection algorithms, but pretty effective. And easy to use as well – just subscribe to two additional events:
```javascript
speechEvents.on('speaking', function() {
  console.log('speaking');
});

speechEvents.on('stopped_speaking', function() {
  console.log('stopped_speaking');
});
```

Tuning the threshold for accurate speech detection is pretty tricky. So I needed visualization (and since requiring hark only took five minutes, I had plenty of time). Using the awesome Highcharts graph library I quickly added plot bands to the graph I was generating:
With the visualization I could easily see that the speech detection events happened a bit later than I expected, since hark requires a certain history above the threshold before the trigger fires (say 400ms). To adjust for this in the graph, I had to subtract this trigger time from my x-axis (now() – 400ms, for example).

That graph is still visible on the more techie variant of the website, so if you think the results are not accurate… it might help you figure out what is going on. I am happy with the current behavior.

The percentage of speech is then calculated as the sum of the intervals in which speech is detected, divided by the duration of the call. As a display, a gauge chart is used with three different colors.
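The bookkeeping behind that percentage is simple; here is a hypothetical sketch (the variable names are mine, not the app’s), building on the hark events shown above:

```javascript
// Accumulate total speaking time from hark's events, then divide by the
// call duration to get the talk percentage. Hypothetical sketch.
var callStart = Date.now();
var speakingSince = null;
var speakingTotal = 0;

speechEvents.on('speaking', function() {
  speakingSince = Date.now();
});

speechEvents.on('stopped_speaking', function() {
  if (speakingSince !== null) {
    speakingTotal += Date.now() - speakingSince;
    speakingSince = null;
  }
});

function talkPercentage() {
  return Math.round(100 * speakingTotal / (Date.now() - callStart));
}
```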
Adding remote audio to this would be awesome. However, while the WebAudio API is supported for local media streams in Chrome, Firefox and Edge, it is only supported for remote streams in Firefox. Hooking this up with the getStats API (in Chrome) to get the audio level would certainly be possible, but would require calling getStats at a very high frequency to get proper averages.
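For the curious, polling Chrome’s legacy callback-based getStats might look roughly like the sketch below; the stat names changed between Chrome versions, so treat them as assumptions:

```javascript
// Poll Chrome's legacy getStats for the remote audio level (assumed names).
function pollRemoteAudioLevel(peerConnection) {
  setInterval(function() {
    peerConnection.getStats(function(response) {
      response.result().forEach(function(report) {
        // Received audio shows up as an 'ssrc' report with audioOutputLevel.
        if (report.type === 'ssrc' && report.stat('audioOutputLevel')) {
          console.log('remote audio level:', report.stat('audioOutputLevel'));
        }
      });
    });
  }, 100); // the "very high frequency" polling mentioned above
}
```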
Check out the app in action at talklessnow and let us know what you think.
{“author”: “Philipp Hancke“}
The post Shut up! Monitoring audio volume in getUserMedia appeared first on webrtcHacks.
Your private 911 system.
[If you are new around here, then you should know I’ve been writing about WebRTC lately. You can skim through the WebRTC post series or just read what WebRTC is all about.]

I have seen a lot of applications lately that target public safety. Some offer you a “ghost” partner to “walk” home with you, while others focus on the reporting aspects.
SaferMobility targets the authorities as the owners of the system (college campuses, municipalities, business zones, etc.) and provides a mobile application to the users. It is reimagining what a 911 service would look like if it were specified today.
Matthew Mah, CTO of SaferMobility, was kind enough to answer my questions on what role WebRTC plays in their service.
What is SaferMobility all about?
SaferMobility focuses on using the capabilities of modern smartphones for enhancing safety. The public safety system in the United States is built around wired telephones, and it is more difficult for authorities to respond to mobile phones because they are harder to locate than fixed telephones. The modern smartphone has audio, video, location, and text capabilities that just are not being used efficiently yet.
There are many other safety related apps out there. What differentiates you from the rest of the pack?
Our systems focus on real-time interaction with authorities. Authorities receive enhanced calls with audio, video, location, and text information in real-time without it having to filter through friends or storage systems.
You told me you launched your service using Flash. Why did you migrate to WebRTC?
WebRTC is a huge improvement over Flash in terms of security, support, and capability. Adobe is not really interested in supporting Flash for mobile devices, so capabilities like acoustic echo suppression are not available. This makes a huge difference in communication quality.
What signaling have you decided to integrate on top of WebRTC?
We use a proprietary message system built on WebSockets.
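For illustration only – SaferMobility’s message format is proprietary, so the message shape and URL below are invented – WebRTC signaling over a WebSocket generally boils down to something like this:

```javascript
// Invented example of WebSocket signaling; not SaferMobility's protocol.
var signaling = new WebSocket('wss://example.com/signaling');

signaling.onmessage = function(event) {
  var message = JSON.parse(event.data);
  if (message.type === 'offer') {
    // Hand the SDP offer to the local RTCPeerConnection, answer, etc.
  }
};

function sendOffer(offer) {
  signaling.send(JSON.stringify({ type: 'offer', sdp: offer.sdp }));
}
```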
Backend. What technologies and architecture are you using there?
Our Java application server runs on Tomcat with a PostgreSQL database. It handles the signaling and issues commands to a media server for recording capabilities. We currently run on Dialogic’s Extended Media Server (XMS).
Mobile. You decided to port WebRTC to iOS and Android on your own. How was the experience?
Porting was difficult because of compatibility issues between our WebRTC media server with web, iOS, and Android clients. We would get two clients to work with the server, then upgrade the server and have two different clients work.
For stability on the web side, the nwjs project has been very helpful for producing an application that works even while the web browser updates are racing ahead and frequently breaking things.
Where do you see WebRTC going in 2-5 years?
WebRTC will replace stagnant technologies like Flash. The ability to communicate through the browser will also lower the barrier for application development.
If you had one piece of advice for those thinking of adopting WebRTC, what would it be?
Be prepared for things to change quickly because WebRTC is still growing and maturing.
Given the opportunity, what would you change in WebRTC?
Aside from the expected growing pains, I am pleased with WebRTC.
What’s next for SaferMobility?
There’s a huge opportunity to improve public safety, security services, and general communication with modern mobile devices, and SaferMobility will be part of making those improvements.
–
The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.
The post SaferMobility and WebRTC: An Interview With Matthew Mah appeared first on BlogGeek.me.
WebRTC GetUserMedia is more important than the rest of this communication stack.
Who would have believed? With all the magic and distraction that video calling from a browser brings with it, the real treasure trove resides in the basics – WebRTC GetUserMedia.
Simplifying things, WebRTC has 3 distinct areas/APIs to it:

1. GetUserMedia, which gives access to the camera and the microphone
2. PeerConnection, which handles the actual real time media sessions
3. DataChannel, which sends arbitrary data directly between browsers
I’ve pointed out in the past how WebRTC GetUserMedia gets used by Mailchimp and WhatsApp. Taking a camera snapshot is nice, but what else can we achieve with this access we’ve been given?
TalkLessNow

Chris Kranky had an idea a few weeks ago. Measuring how much you’re yapping in a call as opposed to listening. So he made it happen. On a shoestring budget, some connections and a bit of time and TalkLessNow was born.
How it works?

The website is quite spartan. When you go on a phone call (not a WebRTC one), you just press the green Call button on talklessnow.com.
The code on the site “listens” through the machine’s microphone to your call. Whenever it hears enough of a volume – it assumes you’re talking. If the volume is lower than its configured threshold – you’re listening.
Just WebRTC GetUserMedia. No PeerConnection or any other fuss.
Will it work?

Here in Israel, I am sure the results won’t be good. We’re used to talking over each other and interrupting. Efficiency at its best. If in a call between Israelis it shows less than 70% of talk time per participant, I’ll crown that session a success.
Seriously though, we should be listening a lot more than we’re talking.
Same but different

The now defunct Guitar Tuner works the same way. It doesn’t work anymore because the site is served on HTTP and WebRTC GetUserMedia now requires HTTPS to work with the latest Chrome release (progress, you know).
Ziggeo

Here’s another example.
Ziggeo is making use of WebRTC to record videos. They do that by employing WebRTC GetUserMedia, storing the resulting media locally and at the end of the recording sending it to their servers. The sending part doesn’t occur via WebRTC.
There’s an interesting interview with Susan Danziger, CEO of Ziggeo from last week that you should read.
Is this Real Time Communications?

WHO CARES?
It works. It gives business value – and in ways that weren’t really possible up until today.
There’s a lot more to WebRTC than classic VoIP.
Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.
The post The Hidden Gems of WebRTC Goodness May Well Lie Within GetUserMedia Itself appeared first on BlogGeek.me.
The FreeSWITCH project is nearly ten years old, and the FreeSWITCH git repo has commits from about 214 different authors and over 3.2 million lines of code with 875k of those lines under the src directory. Some of the maintenance challenges associated with such a large software project include: detecting and resolving human errors such as typos, logic inversions, and dangerous formatting. Implementing code review is a must, and there are different techniques common to the industry used to reduce the defect density and standardize the code format: autobuilding against multiple compilers, routine testing, and static code analysis. The core development team at FreeSWITCH uses all three techniques.
Both autobuilding and routine testing can be applied with in-house workflows. Routinely building the packages against different compilers makes sure that additional commits won’t break existing code in any of the prepackaged builds, and allows for consistent handling of packages across multiple operating systems. Routine testing, together with a bug tracking system, allows community members to report bugs found in unique environmental circumstances. Open-source software relies on many different eyes to keep bugs shallow, and this practice exposes different configurations and applications of the software to more thorough testing. Each year hundreds of tickets are opened on the FreeSWITCH project JIRA, and the developers work tirelessly to address all of them.
Static analyzers can scan thousands to millions of lines of code without getting tired and usually don’t require many manual steps to run. The relationship between a project’s developers and the creators of a static code analyzer can be a symbiotic one. The analyzer works by using a database of multiple tiers of positive and negative heuristics. First, it runs the low cost patterns against the entire code base to generate a large list of possible issues, then runs more accurate and higher cost patterns against the bug candidates to reduce the number of false positives, and finally evaluates the severity and more accurately classifies the issues. Once the analyzer has completed its run, it requires an experienced software developer familiar with the code base to review each issue reported.
Most static analyzers are built to report possible candidates in the first pass, and thus immature analyzers are perceived to red flag everything. They tend to create a lot of noise by reporting a large number of false positives and misclassifying the severity of issues. After the developers for the software being analyzed have reviewed the results of the analysis, they can give specific examples of why they determined it to be a false positive which can be used to improve the static analyzer’s heuristics. As the database matures, the quality of the negative heuristics improves and reduces the volume of false positives. The advantage here is that each report triaged leads to a commit resolving a bug or an improvement to the analyzer.
The team over at Program Verification Systems has built a static analyzer for C/C++ code that integrates into Microsoft Visual Studio. According to their website, the program allows the user to scan lines of code to locate various typos and other errors. Their analyzer supports C/C++, C++/CLI, and C++/CX, with support for the C# language coming soon. PVS-Studio is also available as a standalone utility through the distribution packages, which allows for viewing the analysis logs on a machine without Visual Studio. It can also be used to track multiple sub-builds and analyze non-standard build systems. The reports for the open-source projects that have been analyzed with this software can be found on their website in the Checked Projects section.
The FreeSWITCH team ran the open-source FreeSWITCH project through the PVS analyzer. A decent majority of the issues reviewed were determined to be minor Windows-specific bugs not previously flagged by the compilers currently used by the team. The team is continuing to review and resolve the alerts from the analysis and has integrated this analyzer into the code review workflow. They look forward to continuing this symbiotic relationship with the goal of improving the quality of the software.
If you would like to replicate the results you can use the following steps.
Our features this week include: improvements to the auto bitrate features in mod_conference, the addition of the Debian install script for the verto communicator, and separate controls for gain and volume for verto. Join us Wednesdays at 12:00 CT for some more FreeSWITCH fun! This week we have Tsahi Levent-Levi talking about WebRTC! And head over to freeswitch.com to learn more about FreeSWITCH support.
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
The FreeSWITCH 1.4 branch had a couple of bug fixes backported. And again, keep in mind that 1.4 is quickly moving toward end of life and won’t be supported any longer except for high-level security issues.
The following bugs were squashed:
Links: http://www.dslreports.com/speedtest
Links:
https://support.flowroute.com/customer/en/portal/articles/2205573-freeswitch—add-flowroute-as-sip-gateway
https://developer.flowroute.com/
The future isn’t what it used to be.
I’ve been babbling here a lot about the enterprise video conferencing market and WebRTC’s role in disrupting it. When it first came out, I believed the existing companies were going to struggle with it. I was mostly ignored by these companies – it is hard to see what’s just around the corner when you’re stuck in the echo chamber of your company and its immediate industry.
When I meet old colleagues of mine from the video conferencing industry and see them working in the same companies, I suggest they leave. Find another company or industry, because the outcome is known – only the timing is missing. They dismiss it, probably thinking that I am saying it out of a grudge against the company. I am not.
What happened in November should hit home.
We had two separate news items that in some cosmic way happened in the same week:

1. Cisco announced it is acquiring Acano
2. Polycom announced it is shutting down its Israeli development center
Dumbing things down a bit:
It isn’t that WebRTC is the reason why Acano succeeded and Polycom Israel failed. It is that the mindset of these two companies was different. Acano looked into what could be done in this modern age and made use of WebRTC to get there. Polycom looked at how to slowly evolve its product offering. I am sure people in Polycom knew about WebRTC. It had probably been on roadmaps and in discussions since 2012, never to be given priority, because who needs it? It can’t compete with the high end systems of Polycom. But then the basis of competition changed. What customers care about changed. It isn’t about resolutions and frame rates anymore. It’s about utility and usability – something most video conferencing companies never knew how to handle.
Polycom Israel didn’t have the foresight to make themselves attractive enough to their corporate overlords in San Jose. Probably because they weren’t given the opportunity to do so. The end result? They just weren’t important. Their technology and architecture are now stable and understood well enough to be moved to countries with lower salaries.
—
I remember giving a WebRTC training to developers in 2014. I asked the people in the room what they do. There were media engineers and signaling protocol developers. I told them that they were going to be out of work. They saw it as a joke. Some of them are now updating their resumes.
What is it that you are doing for a living? What is your company developing? Does it make sense? Do you take the effect WebRTC (and other technologies) have on your job seriously?
Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.
The post The First WebRTC Earthquake in Video Conferencing: Acano vs Polycom appeared first on BlogGeek.me.
Asynchronous video meets WebRTC.
[If you are new around here, then you should know I’ve been writing about WebRTC lately. You can skim through the WebRTC post series or just read what WebRTC is all about.]

One area where WebRTC is making strides recently is video streaming. Some of the hyped use cases today are those that enable broadcasting in real time, but there’s another interesting approach – one where WebRTC is employed when the video consumption is asynchronous from its creation.
Ziggeo is an API provider in this specific niche. I met with Susan Danziger, CEO of Ziggeo, and asked her to share a bit of what it is they do with WebRTC and how it is being adopted by their customers.
What is Ziggeo all about?
Ziggeo is the leader in asynchronous (recorded) video offering a programmable video recorder/player through our API/native SDKs.
You started by working on an HR interviews platform. What made you pivot towards a video recording API platform instead?
In building our own video recording/playback solution for the platform, we realized what a complicated and time-consuming process it was. We had to make sure that videos could be recorded and played across all devices and browsers (even as new ones were released) and build a permissions-based security solution that would withstand hackers. We were surprised there were no off-the-shelf solutions available, so we decided a bigger opportunity would be to release our technology as an API – and then native SDKs (and shortly thereafter closed our B2C platform).
On the same token – you have Flash there. Why did you add WebRTC? Wasn’t Flash enough for your needs?
For the most part our customers hate Flash. And no wonder: browsers that support Flash have an awful user experience in which you need to basically hit 3 different buttons before you can begin recording from your web camera (once to resume the suspended Flash applet and twice to access the camera).
We added WebRTC to avoid Flash whenever possible. That said, for certain browsers, e.g. Safari and Internet Explorer, we need to default to Flash as they don’t yet support WebRTC.
How are customers reacting to the introduction of WebRTC to Ziggeo?
Customers love it! In fact, our customers seek us out in part because we’re the only API for asynchronous video recording that supports WebRTC.
Can you share a few ways customers are using Ziggeo?
In addition to recruiting (where candidates introduce themselves on video), we’ve seen Ziggeo used for training (e.g. trainees record video sales pitches for feedback); dating (potential dates exchange video messages); “Ask Me Anything” (both questions and responses on video); e-commerce (products introduced on video and video reviews recorded); advertising (user-generated videos submitted for contests or for use in commercials); and journalism (crowd-sourcing videos for news from around the world). I’m still waiting for someone to create a video version of Wikipedia where pieces of knowledge are recorded on video from around the world — that would be the most amazing use case of all.
A video version of Wikipedia. Have it in Hebrew and I’ll sign up my daughter on it.
You don’t use the Peer Connection APIs at all – Just getUserMedia. Why did you make the decision to record locally and not use the Peer Connection and record on the server?
Folks like to re-record locally so we chose not to use unnecessary resources. We pride ourselves on making our technology as efficient and seamless as possible.
How do you store the file locally and how do you then get it to your data centers?
We use IndexedDB to store the file locally and then push it using chunked http.
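Neither the exact storage schema nor the upload endpoint is public, so treat the following as a rough sketch of that pattern; MediaRecorder, the database layout, and the URL are all my assumptions:

```javascript
// Rough sketch: record locally, persist chunks in IndexedDB, upload later.
// Everything here (names, endpoint, schema) is invented for illustration.
var recorder = new MediaRecorder(stream);
var db; // assume indexedDB.open('recordings', 1) was handled elsewhere

recorder.ondataavailable = function(event) {
  // Persist each chunk so a dropped connection never loses captured media.
  db.transaction('chunks', 'readwrite').objectStore('chunks').add(event.data);
};

function uploadChunks(videoId) {
  var chunks = [];
  db.transaction('chunks').objectStore('chunks').openCursor().onsuccess = function(event) {
    var cursor = event.target.result;
    if (cursor) {
      chunks.push(cursor.value);
      cursor.continue();
    } else {
      // All chunks read: push them to the server one at a time.
      chunks.reduce(function(previous, chunk, index) {
        return previous.then(function() {
          return fetch('/upload/' + videoId + '/' + index, {
            method: 'POST',
            body: chunk
          });
        });
      }, Promise.resolve());
    }
  };
}
```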
Viewing. Over what protocols do you do it, and how do you handle the different codecs and file formats?
Protocols: HTTP pseudo-streaming, HLS, RTMP, RTSP
Formats: we transcode videos to different formats (MP4, WebM) and resolutions
Where do you see WebRTC going in 2-5 years?
We imagine there will be full support of WebRTC across all browsers and devices as well as better support for client-side encoding of video data.
Given the opportunity, what would you change in WebRTC?
We’d like to see improved support for consistent resolution settings, as well as for encoding.
What’s next?
We’re planning the 2nd Annual Video Hack Day in NYC for this coming May. You can find more information at videohackday.com or follow @videohacknyc on Twitter.
–
The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.
The post Ziggeo and WebRTC: An Interview With Susan Danziger appeared first on BlogGeek.me.
This week we had a few features including: allowing building with OpenSSL without EC support, a video quality parameter to allow for conference configuration for verto, and some improvements to conference layouts for verto as well. If you haven’t already, it is highly recommended that you upgrade to the newest 1.6 release as soon as possible to avoid the vulnerability from last week. Join us Wednesdays at 12:00 CT for some more FreeSWITCH fun! This week we have James Tagg! And head over to freeswitch.com to learn more about FreeSWITCH support.
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
The FreeSWITCH 1.4 branch had a couple of bug fixes backported, as well as the release of 1.4.26. And again, keep in mind that 1.4 is quickly moving toward end of life and won’t be supported any longer except for high-level security issues.
New features that were added:
The following bugs were squashed: