March 05, 2015

This is post #1 of 1 in the series “Text-to-speech technology for struggling readers”

The first in our series based on the work of Dr Michelann Parr, a researcher in the use of text-to-speech technology in education, discusses whether text-to-speech technology is truly an aid for struggling readers, or just more educational technology hype.

Text-to-speech technology, which gives students the ability to listen to an audio version of any written content, fits well within a literacy environment of multiple intelligences, multi-modalities and multiple literacies.

Adhering to Universal Design for Learning (UDL) principles, the presentation of texts in different formats (auditory and visual) provides learners with a variety of ways to access the content, allowing each individual student to learn in the way that is personally most effective. This bimodal presentation of content improves comprehension and academic results.

Wise, Ring, and Olson (2000) found text-to-speech to support decoding, which frees the listener to focus on the meaning of the content rather than just the act of reading itself. This in turn encourages comprehension of larger concepts, student dialogue and writing.

But as Dr Parr points out, this assistive technology has even more important implications: increasing motivation and self-confidence for all kinds of students. Research shows that belief in oneself and choice in what and when to read are powerful motivators for children to learn. Text-to-speech technology provides just that: it fosters independence, since students can read on their own; choice of what to read; and self-esteem, as they succeed not only in reading the text but in understanding grade-level content alongside their peers.

“For those students who are frustrated because of a lack of decoding skills and fluency,” observes Dr Parr, “text to speech is a confident internal voice, a support for comprehension and a valuable lifelong tool.”

Dr. Michelann Parr, Schulich School of Education at Nipissing University, Canada

Michelann Parr taught Kindergarten to Grade 6 for over ten years. Her experience includes early literacy intervention and working with struggling students. She teaches language, literacy, and special education, at both graduate and undergraduate levels, in the Schulich School of Education at Nipissing University. She holds workshops on successful approaches to teaching literacy, poetry, writing, drama, and using technology as literacy support.



Generating leads from social media can be a challenge for many businesses. Marketing teams and the C-level executives that manage them don’t always see the value in allocating budget for social media. But with the explosion of mobile Internet usage and its impact on social media, it’s more important than ever for marketing teams to take their social lead generation strategy seriously – and that means allocating budget, time, and resources to these channels in 2015. According to Salesforce, 66% of marketers now rate social media as core to their business strategy in 2015 – up from 24% in 2014.

But what is most interesting about social media right now is the hold mobile devices have over it. Americans now officially spend more time on social media than on any other Internet activity (including email), and 60% of that social media activity takes place on smartphones and tablets (Business Insider). Social media and mobile go hand in hand, so optimizing all of your social marketing for mobile devices is the surest way to improve your lead generation strategy on the channel. Here are some tips to get you started.

Build Mobile-Optimized Landing Pages

Since the goal of most of your social media posts is to drive traffic to your website, building mobile-optimized landing pages is the first thing you can do to improve lead conversion rates. 31% of all website referral traffic is now driven by social media (Forbes), so the pages that traffic lands on deserve priority. If you don’t have a mobile-specific or responsive website yet, most modern CMS platforms let you build out mobile-specific landing pages. Mobile-friendly landing pages will improve the mobile user experience and help keep visitors on your website longer.

Build Custom Landing Pages for Social Media

Building custom landing pages for each social media network you promote content on is another way to improve conversion rates and generate more leads. If you are promoting a white paper, eBook, or other piece of content, building a landing page specific to the social media platform you’re promoting on will also help convert users into leads.

This holds true especially for paid advertising on social media. If a user clicks on the Facebook ad you are running and the landing page content is specific to Facebook and the ad they saw, they are more likely to convert on that page.

Staff Social Media Managers on the Weekend

The Salesforce 2015 State of Marketing report found that consumers are most engaged on social media on the weekends – yet Saturdays and Sundays are the days businesses are the least engaged on social channels. Staffing a social media manager on weekends (or more than one, depending on your engagement rates) is key to keeping lead flow steady. Not only should these managers post new content throughout the weekend, they should also engage with users who are liking, commenting on, and sharing your brand’s posts. This builds trust in your brand while giving customers the nurturing and support they need to complete a purchase.

Staffing social media managers to engage customers with your brand over the weekend also means staffing sales and customer support staff to handle inbound calls. Since social media activity is primarily mobile, users referred to your website from social media are more likely to call your business (whether to complete a purchase or to ask a support question) than to fill out a form.

Spend More on Paid Social Media Advertising

A 2014 study by Kenshoo found that the more money advertisers spent on Facebook advertising, the more conversions they generated. In other words, the more you spend on social advertising, the more you get in return.

The first places to start spending more are Facebook and LinkedIn – the two most popular social networks, with the highest numbers of active users and active mobile users. Facebook has more than 1 billion users accessing the site from mobile devices every month (eMarketer), and 47% of all LinkedIn visits are via mobile devices (DMR). The tip here is simple: optimize your paid social ads and landing pages for mobile, and bid higher on clicks or impressions from those devices to see an increase in social media leads.

2015 really is the year for mobile marketing optimization – across all channels. Social need not be forgotten in the mix as it continues to drive more and more traffic to websites each year. To learn more about generating leads from social media, download this free webinar about boosting call conversions from Facebook and LinkedIn.

The post How to Generate More Leads from Social Media appeared first on DialogTech.



This year’s Signal Conference brings together the people who are building the future of communications. You might be surprised how familiar you already are with them. If you use Netflix, you might owe a hat tip to Mike. If you live in Minneapolis and read the MinnPost, you can thank Alan for keeping your data on point.

We’re excited to have a crew of inventive speakers on the Signal calendar, and we’d like to introduce you to a few of them.

Meet The Speakers

Kate Heddleston was unsure of what to major in at Stanford. After nailing an “Introduction to CS” exam she got an inkling she was pretty good at this whole engineering thing. As a Software Engineer at Runscope, Kate builds customer-facing products, mostly in Python, Flask, and AngularJS. When she’s not committing code, she’s coaching swim teams and teaching women to program at Hackbright Academy.


You might know Mike McGarr’s employer, Netflix. Netflix certainly knows you, and that you watched Weekend at Bernie’s on Sunday night. Who can blame you?! Mike’s an Engineering Manager at Netflix, making sure that your movies, shows, and series are ready for you to stream whenever, wherever. Working on the Netflix Build Tools Team, Mike strives to build quality software through automation. You can hear him talk about all things engineering on The Ship Show podcast.


Alan Palazzolo writes in a newsroom at MinnPost, a small non-profit newspaper in Minneapolis. Alan doesn’t just write stories; he also writes the code that informs those stories. As an open source advocate, Alan manages MinnPost’s code base and uses his technical savvy to comb through data, model it, and create visualizations for stories. As a former Code for America fellow, he’s focused on how code can help build a better world, from government, to civic life, to the newsroom.


Prabode Webadde has had a whirlwind of a year. Shortly after founding CarCodeSMS, a platform that lets car dealers communicate with customers directly via SMS, he won a hackathon. The company that ran the hackathon then acquired CarCodeSMS. Now, Prabode is focusing on delivering awesome customer experiences at scale for Edmunds, less than a year after winning that fateful hackathon.


Grab your tickets to Signal right here

Meet Signal Speakers: From Coding in the Newsroom, to Making Sure You Can Stream House of Cards



March 04, 2015

I have to fill out a “forgot password” page at least once a week. That’s probably why I’m so excited about passwordless authentication. Passwordless authentication is a system where the application you’re logging into generates a one-time-use token and delivers it via SMS or some other means. You then verify that token from the device you were attempting to log in with.

In January we started a series of blog posts building out our own application that supports passwordless authentication. For me, this series is about trying and learning new things. In part 1 of the series we walked through building the API we’ll use for our passwordless SMS authentication application using Laravel and Twilio. Today we’re going to move on to building the iOS front end. This seemed like the perfect opportunity to dive deeper into using Apple’s new-ish programming language, Swift.

Our Tools

We’ll be using the following tools to build our app. We’ll introduce them each throughout the post:

  • Xcode and Swift
  • SwiftRequest, an open source library for making HTTP requests in Swift
  • The Laravel API we built in part 1 of this series

Hi, I’m An iPhone

We last left off with a simple server side application with three API endpoints:

  • /user/validate/ – POST requests to this endpoint with a phone number validate that a user account is associated with that phone number and send a verification token.
  • /user/auth/ – POST requests to this endpoint with a verification token authenticate the user if the token is valid.
  • /profile – GET requests to this endpoint will return some super secret data. This endpoint can only be accessed after a user has authenticated.
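
For reference, the Swift code later in this post keys off a success flag in the JSON these endpoints return, so a successful response looks something along these lines (the exact payload depends on the Laravel code from part 1):

{
    "success": 1
}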

Today we’re going to build out our iOS front end that interacts with these endpoints using Swift. Let’s launch Xcode and get started. We’ll be creating a new Single View Application:

Give your project a name (feeling uninspired? “PasswordlessAuth” will work), make sure the language is set to Swift and the device is set to iPhone.

In an attempt to do things the Apple way, we’ll be building out our basic interface using Storyboards instead of nib files. Open Main.storyboard and start by adding a label with the text ‘Phone Number’, a text field and a button with the text ‘Next’ to our view. Our view should now look something like this:

Now that we have our first view setup we can start writing our Swift code. Open up ViewController.swift. We need to add one outlet and one action to our class:

class ViewController: UIViewController {

  @IBOutlet weak var phoneNumber: UITextField?

  @IBAction func buttonClicked(sender: UIButton!) {
    println(phoneNumber!.text)
  }
}

This will allow us to access the information in our UITextField and to detect when someone taps our UIButton. Before this will work properly we need to wire them up in our Storyboard. To connect our UITextField, control+click on the View Controller in our Storyboard, drag to the UITextField and select the phoneNumber outlet:

To connect our UIButton, control+click on it, drag to the View Controller and select buttonClicked from Sent Events like this:

Let’s make sure everything is working as we expect. Run your app (command+r), enter a phone number and press the next button. We now see our phone number logged to our Xcode console. It’s ok to do a little dance, I am.

Of course, when we press our button we want to do a little more than output to our console. Let’s update our buttonClicked code to make a request to the Laravel API we built. We’re going to make this request using a library called SwiftRequest that I’ve been building as I learn Swift. See anything in SwiftRequest that’s missing or broken? I accept pull requests! Follow the instructions on the SwiftRequest GitHub repo to install the library.

Let’s update our buttonClicked method to make a request to our API using SwiftRequest:

@IBAction func buttonClicked(sender: UIButton!) {
  let url = "http://localhost:8000"
  var swiftRequest = SwiftRequest()
  var params:[String:String] = [
    "phone_num" : phoneNumber!.text
  ]

  swiftRequest.post(url + "/user/validate/", data: params, callback: {err, response, body in
    if( err == nil && response!.statusCode == 200) {
      println(body)
    } else {
      println("something went wrong!")
    }
  })
}

If you’re running your app in the simulator you can keep running your API on localhost. If you want to test this on a device or move into production, make sure to move your API to a publicly accessible location and update your code to use that URL. Run your application, enter the phone number you have associated with your user and press next. You should see our JSON response in the Xcode console:

Now that we’re making the request to our API we can add the code that takes the appropriate action based on our response. Let’s start with handling the error case. First, we’re going to add a new function to our view controller called showAlert that helps us avoid writing duplicate code:

func showAlert(message: String) {
  dispatch_async(dispatch_get_main_queue()) {
    let alertController = UIAlertController(title: "Passwordless Auth", message:
      message, preferredStyle: UIAlertControllerStyle.Alert)

    alertController.addAction(UIAlertAction(title: "Dismiss", style: UIAlertActionStyle.Default,handler: nil))

    self.presentViewController(alertController, animated: true, completion: nil)
  }
}

Now let’s add the code to show an alert whenever we don’t have a successful response back from our API:

swiftRequest.post(url + "/user/validate/", data: params, callback: {err, response, body in
  if( err == nil && response!.statusCode == 200) {
    if((body as NSDictionary)["success"] as Int == 1) {
      println(body)
    } else {
      self.showAlert("We're sorry, something went wrong")
    }
  } else {
    self.showAlert("We're sorry, something went wrong")
  }
})

Give this a try by running the app and passing an invalid phone number.

In order to handle the success condition for this view we need to update our application to no longer be a single view application. First, let’s embed our current view into a navigation controller. This will allow us to travel between each of our views easily. In your storyboard, select your view and then in the toolbar pick Editor -> Embed In -> Navigation Controller.

Now add a new view controller to your Storyboard. We’ll worry about setting this up in the next section, but for now we just need to make sure that it exists. Now drag the Manual reference under Triggered Segues on our current view controller to the new view controller we just created and then select Show:

Finally select the segue we just created and call it Verify:

Head back to ViewController.swift and we can add the code within our SwiftRequest callback that performs the segue:

if((body as NSDictionary)["success"] as Int == 1) {
  println(body)
  dispatch_async(dispatch_get_main_queue()) {
    self.performSegueWithIdentifier("Verify", sender: self)
  }
}


You’re probably wondering what this dispatch code is all about. Because we’re performing this segue from within a callback, our code is no longer running on the main thread. For our segue to run properly, we use dispatch_async to run the code back on the main thread. Now that we’ve got our segue set up, run your application again and enter your phone number. The application will then segue to our new empty view controller.

Turning The Page

We now have an application that lets a user enter their phone number, sends them a verification token, and moves them to a new view where they can verify that token. The first step in making that possible is creating a new Swift file for our AuthViewController. Select File -> New -> File… -> Swift File and call it AuthViewController.swift. We can then set up our AuthViewController class to inherit from UIViewController:

import UIKit

class AuthViewController: UIViewController {
}

Like our previous ViewController, we’re going to have two connections to our view, a UITextField which will house our token and a buttonClicked function that will be called when the user wants to submit their token:

class AuthViewController: UIViewController {

  @IBOutlet weak var token: UITextField?

  @IBAction func buttonClicked(sender: UIButton!) {
  }
}

Now that we have the foundation of our Swift ViewController, let’s head back to our Storyboard and set up our view. Right now our view is empty, so let’s add a new label with the word “Token”, a text field and a button. The end result should look something like this:

Before we can wire up our text field and button, we need to set this ViewController to be our class AuthViewController:

Now we can connect our UITextField with the token variable in our AuthViewController:

And our button to our buttonClicked function:

Now that we have everything wired up, let’s jump back to our AuthViewController and write the code to make the request to our auth method:

@IBAction func buttonClicked(sender: UIButton!) {
  let url = "http://localhost:8000"
  var swiftRequest = SwiftRequest()
  var params:[String:String] = [
    "token" : token!.text
  ]

  swiftRequest.post(url + "/user/auth/", data: params, callback: {err, response, body in
    if( err == nil && response!.statusCode == 200) {
      if((body as NSDictionary)["success"] as Int == 1) {
        self.showAlert("User successfully authenticated!");
      } else {
        self.showAlert("That token isn't valid");
      }
    } else {
      self.showAlert("We're sorry, something went wrong");
    }
  })
}

This code is almost identical to what we did to send our verification token. The only differences are that we’re passing the token (not the phone number) and the endpoint we’re making the request to. You’ll notice we’re using the showAlert function again. In our final application we would want to abstract this function out so we can share it between our ViewControllers and not repeat ourselves. But in an effort to keep this post simple and focused we’ll just add showAlert directly to AuthViewController:

func showAlert(message: String) {
  dispatch_async(dispatch_get_main_queue()) {
    let alertController = UIAlertController(title: "Passwordless Auth", message:
      message, preferredStyle: UIAlertControllerStyle.Alert)

    alertController.addAction(UIAlertAction(title: "Dismiss", style: UIAlertActionStyle.Default,handler: nil))

    self.presentViewController(alertController, animated: true, completion: nil)
  }
}
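
As an aside, here’s a sketch of what that shared abstraction might look like: a small extension on UIViewController that both of our ViewControllers could use (you’d delete the per-controller copies first, since Swift won’t let a subclass override a method declared in an extension):

extension UIViewController {

  // Shared helper: present a simple alert from any view controller,
  // hopping back to the main thread since we call this from network callbacks
  func showAlert(message: String) {
    dispatch_async(dispatch_get_main_queue()) {
      let alertController = UIAlertController(title: "Passwordless Auth", message:
        message, preferredStyle: UIAlertControllerStyle.Alert)

      alertController.addAction(UIAlertAction(title: "Dismiss", style: UIAlertActionStyle.Default, handler: nil))

      self.presentViewController(alertController, animated: true, completion: nil)
    }
  }
}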

Run your application, submit your phone number and see that your token verifies correctly. You can find the final code for our application on GitHub. There’s one last thing missing in this application: accessing the secure data. Can you add the code that accesses this data after the token is verified? Spoiler alert: the code is going to look a lot like what you’ve already written twice in this post.
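
If you’d like a head start, here’s a rough sketch of what that request might look like, assuming SwiftRequest exposes a get method with the same callback shape as the post method we’ve been using (and that your Laravel API tracks the authenticated session):

// Hypothetical sketch: fetch the protected /profile data after a successful auth
swiftRequest.get(url + "/profile", callback: {err, response, body in
  if( err == nil && response!.statusCode == 200) {
    println(body) // our super secret data
  } else {
    self.showAlert("We're sorry, something went wrong")
  }
})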

Hope You’re Ready for the Next Episode

In part 1 of this series we explored how to build the API for our passwordless authentication system using Laravel and Twilio. Today, in part 2, we built the iOS front end for our passwordless authentication system using Swift. What do you think we’ll build in part 3? I’ll give you a hint – expect to get your daily dose of Java in the next post. Have questions or want to show off what you’ve built? Holler at me on Twitter (@rickyrobinett) or e-mail (ricky@twilio.com).

Passwordless SMS Authentication: Part 2 – Building iOS Front End With Swift



In Part 1 of this series, we learned how to use an SMS sent to your Twilio number to create a note in Evernote. Text is great for most note taking use cases but sometimes you need to be more expressive. Some information is best captured visually so we will add support for MMS to add photos to notes.

I also thought it would be incredibly cool if people could leave me notes in my Evernote notebook simply by calling my Twilio number. To enable this scenario we’ll use Twilio’s voice recording and transcription capabilities. The end result is a Twilio+Evernote-powered voicemail system complete with original call audio and a transcription.

This post adds functionality on top of what was created in Part 1. You’ll want to complete that tutorial before starting this one if you haven’t already. It will walk you through getting started with the Evernote API and teach you how to create a note in Evernote using SMS. I’ll wait here while you work on that and we’ll resume when you get back.

All done? Great, let’s get started with the MMS use case.

If you want to grab the code for this tutorial to follow along with, I created a GitHub repo for you.

Adding Pictures to Evernote Using Twilio MMS

They say a picture is worth a thousand words. Sometimes ideas need to be captured visually. In the context of note taking this can be especially powerful. Maybe you’re doing research at the local hardware store for some renovations you’re making on your home. The words “antique white ceiling tile” might mean something to you while you’re staring at it in the store. However, a picture will be much better in two weeks when you’re trying to make a decision. Let’s extend the /message route in the application to allow us to send a picture along with text when we create a note.

In the Evernote API there is a concept called Resources. You can think of a resource as very similar to an email attachment. To create a resource we will need the contents of the file the resource represents as well as the MIME type for the resource. From those we can create a resource which can then be embedded into the note we create in Evernote.
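
To make that concrete, here’s a rough sketch of what building a resource by hand looks like with the EDAM types in the Ruby gem. Note that image_contents is a hypothetical variable holding the downloaded bytes, and that we’ll actually let the gem’s add_resource convenience method assemble all of this for us later:

require 'digest/md5'

# EDAM's Data type wraps the raw bytes plus their size and MD5 hash...
data = Evernote::EDAM::Type::Data.new(
  body: image_contents,
  size: image_contents.size,
  bodyHash: Digest::MD5.digest(image_contents)
)

# ...and a Resource pairs that data with its MIME type
resource = Evernote::EDAM::Type::Resource.new(data: data, mime: 'image/jpeg')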

The first thing we’ll need to do to create a resource from our MMS message is download the image that was sent. We’ll create a method that opens the file at the URL given to us by Twilio and returns the contents as a string. Add the following code to app.rb:

require 'open-uri'

def download_file(file_url)
  # Read content of file into memory and return it
  open(file_url).read
end

Now in the /message route add a line to download the image using our new method:

# If there's an attached image, download it
image = download_file(params[:MediaUrl0]) if params[:NumMedia].to_i > 0

Next we need to modify the make_note method we created in Part 1 such that it handles adding a picture to the note. We’ll pass in the image we just downloaded as well as the MIME type specified by Twilio for the image. Modify the call to make_note so that it looks like this:

make_note note_store, 'From MMS', params[:Body], image, params[:MediaContentType0], "From Evernote-Twilio"

Now we’ll modify make_note to handle these new parameters. Update it with the following:

def make_note(note_store, note_title, note_text, resource, mime_type, notebook_name)
  notebook_guid = find_or_create_notebook(notebook_name)

  # Create note object
  new_note = Evernote::EDAM::Type::Note.new(
    title: note_title,
    notebookGuid: notebook_guid
  )

  hexdigest = new_note.add_resource('Attachment', resource, mime_type) if resource
  note_body = generate_note_body(note_text, hexdigest, mime_type)

  new_note.content = note_body

  create_note(new_note)
end

Working with resources changes how we construct our note object. We need to create the note before specifying the content for the note since we will use the note object to create the resource for our image. The method add_resource is a convenience method in the evernote_oauth gem that handles embedding the resource into the note for us. It returns a hexdigest which is a pointer we can use to refer to our resource from the ENML that defines our note body. We pass this hexdigest along with the mime_type into generate_note_body which we will update shortly. Then we assign note_body to our note’s content property and create the note using the unchanged create_note method from Part 1.

Let’s update generate_note_body to add the image to our note:

def generate_note_body(note_text, resource_hexdigest, mime_type)
  note_body = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
  note_body += "<!DOCTYPE en-note SYSTEM \"http://xml.evernote.com/pub/enml2.dtd\">"
  note_body += "<en-note>#{note_text} "
  note_body += "<en-media type='#{mime_type}' hash='#{resource_hexdigest}'/>" if resource_hexdigest
  note_body += "</en-note>"
end

The key piece here is the line that adds an <en-media> tag to the note body referencing the resource_hexdigest that we passed in. This will embed the image after the text in the note.
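
Put together, the ENML our method generates for a note with text and a single image comes out looking something like this (the hash value here is purely illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE en-note SYSTEM "http://xml.evernote.com/pub/enml2.dtd">
<en-note>antique white ceiling tile <en-media type='image/jpeg' hash='d41d8cd98f00b204e9800998ecf8427e'/></en-note>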

That’s it, you can now create a note in Evernote that contains an image. Send an MMS to the Twilio number you set up in Part 1 and a note with a picture should be created in your Evernote Sandbox account.

[Screenshot: the resulting note in Evernote, complete with the ceiling tile photo]

Hopefully now I’ll make sure to get the right ceiling tiles.

Creating Evernote Voicemail

Text and pictures are great but the spoken word can be pretty powerful too. Complex ideas with complicated terminology might be too tedious to capture by typing on a phone keyboard. Also, there’s great value in giving someone a phone number that allows them to leave a note in my Evernote notebook in their own voice. I’ll get the convenience of a text transcription while also being able to listen to the tone in which they left the message. You can think of this as voicemail for Evernote, powered by Twilio voice recording and transcription.

The first thing we’ll need to do is create a /voice route in app.rb for Twilio to use as a webhook for incoming voice calls. This method will tell Twilio what to do when someone calls using TwiML generated by the twilio-ruby helper library:

post '/voice' do
  content_type :xml

  Twilio::TwiML::Response.new do |r|
    r.Say 'Record a message to put in your default notebook.'
    r.Record(transcribeCallback: "http://brent.ngrok.com/transcription", playBeep: "true")
  end.to_xml
end

The TwiML returned by this method looks like this (with the ngrok URL above swapped for your own publicly accessible host):

<Response>
  <Say>Record a message to put in your default notebook.</Say>
  <Record transcribeCallback="http://your-host.com/transcription" playBeep="true"/>
</Response>

When someone calls your Twilio number, they will be prompted to record a message followed by a beep. When the call is completed Twilio will transcribe the audio and make a POST request to http://your-host.com/transcription with the resulting text and a link to the audio. To handle this POST request let’s add a /transcription route to our Sinatra app:

post '/transcription' do
  sound = download_file(params[:RecordingUrl])

  make_note note_store, 'From voice call', params[:TranscriptionText], sound, "audio/mpeg", "From Evernote-Twilio"
end

This code should look fairly familiar since it’s the same structure as the /message code we wrote earlier. We first download the audio from the RecordingUrl Twilio provided us in the request parameters. Then we pass that audio to make_note along with the TranscriptionText (also from the request parameters) and the MIME type of audio/mpeg. That’s all there is to it, let’s test it out. Head to your Twilio number and change the Voice URL to point at our shiny new /voice endpoint:

[Screenshot: updating the Voice URL for your Twilio number]

Hit Save and then call your Twilio number and leave yourself a message. After the transcription is complete a new note should appear in your Evernote notebook. You can click on the audio file to hear yourself in case the text doesn’t quite match up with what you said. The result will look like this in your Evernote Sandbox notebook:

[Screenshot: the transcribed voicemail note in Evernote]

Guess I really need to make sure to get the right ceiling tiles. Thank goodness I have that picture I added via MMS earlier!

Next Steps

Evernote is a great system for taking notes and it opens up the door for some really interesting collaboration when combined with Twilio. In this post we took what we built in Part 1 and added the ability to attach an image to our note using MMS. Then we built an Evernote voicemail system in less than 15 lines of code! I’m really stoked to see what you can build with Twilio and the Evernote API. Try a few of these ideas to extend the system we have built:

  • Support more than one image in the MMS workflow (see the sketch after this list)
  • Create an IVR system for the voice workflow that provides the ability to pick a target notebook
  • Add security (this thing is ripe for trolling)
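
For the first idea, here’s a rough sketch of how the /message route could collect every attached image instead of just the first one (make_note would also need updating to accept a list of resources rather than a single one):

# Twilio sends NumMedia along with MediaUrl0..N and MediaContentType0..N,
# so we can loop over every attachment instead of grabbing only the first
images = (0...params[:NumMedia].to_i).map do |i|
  {
    contents: download_file(params["MediaUrl#{i}"]),
    mime_type: params["MediaContentType#{i}"]
  }
end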

Let me know when you build something awesome by hitting me up on Twitter @brentschooley or emailing me at brent@twilio.com.

Note Taking on the Go With Evernote and Twilio – Part 2



As a sales manager at DialogTech, I answer a lot of questions about voice-based marketing automation—especially call tracking. (I actually addressed the top 5 questions I receive in another blog.) With all the questions I field, from mid-market businesses to enterprise-level companies, I’ve arrived at the conclusion that there are three levels that marketers can achieve when it comes to PPC call tracking. Do any of these sound familiar?

Beginner

The beginner level is very common. Many businesses are running PPC ads—often dedicating a big chunk of budget to them—but not tracking the calls those ads generate. Lately, though, I’ve been noticing a pattern: these marketers are having epiphanies, and although they may not be tracking calls yet, they’ve become aware of its importance. 6 in 10 CMOs are on the hook to prove the value of marketing, and as a result they’re looking for a solution. Beginners usually don’t stay beginners for long.

Intermediate

The intermediate level of marketers is a little more advanced than the beginners: they are running paid campaigns and tracking call conversions at a basic level. However, despite this slight edge, there is an overall similarity: marketers at both levels are unable to track call conversions in the tools where they manage their bids. This means that they might be paying a lot of money for a campaign that is driving very few calls, and a small amount of money for campaigns that are actually high call converters. So while intermediate marketers have their eye on the right prize, they’re falling short in what they’re actually able to do with call tracking for PPC.

Advanced

Marketers at the advanced level have learned how to get an edge on the competition. They understand the importance of integrating with Google Analytics (or their preferred bid management solution, e.g. Kenshoo, Marin, DoubleClick, or Acquisio) and their CRM, and they see the power in being able to track keywords and caller location. They understand the value of tracking more than just leads: they want to track qualified leads by source; they want to track opportunities by source; they want to track revenue by source. They want to compare revenue from PPC to total spend for a given period, and so much more. Marketers at the advanced level are advanced because they see the way the world is changing—the increase in mobile traffic, for example: calls in the U.S. from mobile search are projected to increase from 30 billion in 2013 to 73 billion in 2018—and they want to do something to save their business from oblivion. They may be unable to do all of those things just yet, but that’s why they get on the phone with me.

Like anything else, success is often a mindset. With that—and the right set of tools—I’ve seen marketers go straight from beginner to advanced in one step.

What level is your understanding of PPC call tracking? Leave a comment below. And if you want to learn more about becoming a Hero, download this free eBook: The Marketer’s Guide to Call Tracking for Google SEO and PPC.

The post There Are 3 Levels of Marketers When It Comes to PPC Call Tracking appeared first on DialogTech.



March 03, 2015

Researchers from Australia’s RMIT University have developed a talking drone that can converse with air traffic controllers just like a normal pilot. View the video: http://bit.ly/talkingdrone. The drones can communicate using a synthesized voice to respond to information requests and act on clearances issued by air traffic controllers. The drones are equipped with ATVoice Automated Voice […]



Marketing has gone mobile, and smartphones are having a huge impact on the way consumers want to do business. From a consumer’s initial experience with a company, to the sales process, to all the interactions between client and company that happen after a consumer becomes a customer, studies show that conversations are critical to the customer journey – no matter the stage.

To help illustrate the role calls and conversations play in today’s business world, we have created this infographic. There are some powerful stats here. Your customers want to talk: are you listening?

[Infographic: why calls are critical to the customer journey]

The post Why Calls Have Become So Critical to the Customer Journey (Infographic) appeared first on DialogTech.



March 02, 2015

A picture is worth a thousand words, and Twilio customers are making the most out of each pixel they send using Twilio MMS. Customers use MMS to do everything from ensuring you get your packages, to creating collages, to fighting crime.

Today, in order to expand the reach of those innovative campaigns, we’re happy to announce MMS Converter. This is a new tool that enables you to reach audiences beyond the US and Canada, where MMS isn’t supported yet.

Here’s how it works: when Twilio identifies a number in your MMS campaign as international or otherwise MMS-incompatible, Twilio will store the media, generate a link to it, and send that link to the user via SMS. (This means you can now also reach non-smartphones with MMS content.) There is no additional charge for this — it costs the same as any SMS you’d send. You can learn more about our pricing and deliverability here.

To enable this feature:

  • Log in at Twilio.com
  • Click on your email in the top right hand corner of your screen
  • Click “Account” from the drop down menu
  • Scroll down to “MMS to SMS Failback”
  • Click enable

Here’s a GIF just to make sure you got it.

An API Request with MMS Converter Enabled

The following is an example of the new default behavior for customers sending MMS. In this particular case, the “To” recipient is not supported via Twilio MMS, so while the ‘Body’ of the request does not include a shortened URL, one is included in the response.

$client->account->messages->sendMessage("+14158141829", "+15558675309", "Jenny please?! I love you <3", "http://www.example.com/hearts.png");
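
For context, here is that call in a fuller, runnable form. This is a sketch assuming the twilio-php helper library of the time (Services_Twilio), whose sendMessage method takes the from number, to number, body, and media URL in that order:

<?php
require 'Services/Twilio.php';

// Account SID and Auth Token from your Twilio dashboard
$client = new Services_Twilio('YOUR_ACCOUNT_SID', 'YOUR_AUTH_TOKEN');

// With MMS Converter enabled, if the "To" number can't receive MMS,
// Twilio stores the media and texts the recipient a link to it instead
$client->account->messages->sendMessage(
    "+14158141829",                      // From: your Twilio number
    "+15558675309",                      // To
    "Jenny please?! I love you <3",      // Body
    "http://www.example.com/hearts.png"  // Media URL
);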


Want to know more about how MMS can benefit your business? View our webinar here.

Making The Most Out Of Each Pixel: Reach More of Your Target Audience with MMS Converter



It’s time to start a revolution. For millions of years, the difference between humans and every other species on this planet has been our ability to talk to each other, to have a conversation; to have a dialog. Yet for close to 15 years, e-commerce technologists have been attempting to steal our humanity. They’ve told us that anything we ever want to buy can be dropped into a shopping cart and paid for with a credit card. No human interaction required.

Guess what? They’re wrong.

Here in Chicago this week, it’s freezing cold. If a pipe bursts in your basement, you’re not dropping a plumber into a shopping cart and then sitting by the door to wait. You’re picking up the phone and calling. You might do a local search (either on your smartphone or on another device) but to get a plumber into your home to fix your burst pipe, you’re going to want to call. You need to have a conversation–a dialog–with the plumber. You need to ask things like, “How do I turn off the water? What should I do until you get here?”

The same thing goes for other kinds of transactions. When I was growing up, the life insurance salesman used to come and sit on the couch and talk to my father. Nowadays, most of that process has changed, but one important part has not: no one buys life insurance without having a conversation, an exchange of questions and answers. A dialog.

Even businesses that aren’t driving leads for themselves (agencies, for example) understand the importance of conversation. It’s been proven that these voice interactions drive revenue, and as a result marketing firms are hot on the trail of how to master (and measure) one of the most basic parts of business: dialogs.

We are rebranding our company because we recognize the power of voice, the power of dialog, and the uniqueness of humans. As a company, we further recognize that voice-based dialogs no longer occur exclusively on phones, and we want to emphasize that we’re in the business of transforming commerce by helping businesses unleash the power of conversation to optimize marketing, grow revenue, and delight customers.

It used to be that companies could differentiate by building great websites and fantastic e-commerce experiences. Or, more recently, with wonderful apps. Today, that’s nothing unique. Every business has a website. Every business has an e-commerce site. Everyone has an app. As we move deeper into the Digital Age, our tools become more sophisticated and our toys become more advanced. But if you want to differentiate your company today, you have to do it the old-fashioned way: you have to talk to your customers.

At DialogTech, we make it easy to utilize voice conversations, and we see a bright future in the integration of technology and humanity. We aspire to lay to rest the idea that technology must be devoid of humanity, and we are ready to lead the charge in changing the world. Join us in the revolution.

The post Ifbyphone Is Now DialogTech, and We’re Starting a Revolution appeared first on DialogTech.



February 28, 2015

The big event is back. We’re headed to LeadsCon West with our fellow Twilio Partners Voicebase, CallerReady, WhitePagesPro and Datalot. LeadsCon focuses on the leading marketing technology and tactics, and how you can use them to grow your business.

Our partners Voicebase, CallerReady, WhitePagesPro and Datalot specialize in all things real-time marketing, so you can make the most out of every phone call, every SMS lead, and every email campaign. From voice recording and logging, to call tracking, to detailed lead analytics, real-time marketing is critical to running your company’s communications. We’ll be on the scene to talk about all things communications and how you can easily integrate Twilio into your marketing software. Find us at the Mirage Hotel and Casino March 3rd and 4th.

Datalot – Booth 601 – Datalot is the leader in the pay-per-call advertising space. With its advanced call marketplaces, Datalot delivers live, qualified customers to your sales agents, making your phone ring with new customers.

Learn more about Datalot’s DialDrive and Twilio here

CallerReady – Booth 610 – CallerReady’s best-in-class call tracking and call distribution platform, along with multi-buyer call auctions, provides the tools businesses need to maximize lead generation efforts and optimize advertising ROI.

Voicebase – Booth 713 – Built on the world’s most accurate speech engine, VoiceBase provides easy-to-use APIs that automatically transcribe audio and video, extract relevant keywords and topics, and enable the instant search and discovery of spoken information.

Learn how Voicebase makes call recordings searchable with Twilio

WhitePagesPRO – Booth 712 – WhitePagesPRO provides API access to the largest database of phone numbers in North America, each number associated with unique data points like location and phone type. WhitePages PRO enables business users to find consumer contact information.

Learn how WhitePagesPro optimizes marketing with Twilio

Win A Twilio-Powered Weekend

Together with Twilio partners Datalot, Voicebase, WhitePagesPro and CallerReady, we will be giving away three Twilio-Powered Weekends at LeadsCon – your transportation, hotel, dining, and shopping needs are covered!

The Twilio-powered weekend includes $500 of Uber credit, $600 of AirBnB credit, a $400 OpenTable gift card, and a $350 GILT gift card. You pick a destination then ride in style, stay in a swanky local flat, make reservations at the best restaurants in town, and splurge on some new clothes for the weekend. Stop by the Twilio booth and enter to win.

Terms and conditions apply

Find Team Twilio at LeadsCon West



February 26, 2015

We’re very excited to announce the launch of Tropo Connect, the world’s first In-Call apps platform.  What is an In-Call app, you ask?  Excellent question!

In-Call Apps with Tropo Connect

Hundreds of thousands of developers have launched millions of apps since we launched Tropo.com in 2009. Traditional Tropo apps involve a virtual phone number that users can call (or text) to interact with a Tropo app, or that the app can use to call (or text) end users. Examples of traditional Tropo apps include two-factor authentication services, robo-dialers, survey apps, and conference call apps.

In-Call Apps take a different approach, using existing phone numbers (instead of virtual numbers), opening up the full power of the telephone network and allowing developers to do all kinds of fancy things before, during and after a phone call is connected.

In-Call Apps are location aware and integrate nicely with your favorite APIs using JavaScript. For example, let’s say you’re sitting at home blasting some White Stripes on Spotify and your phone rings. A simple Tropo Connect app could 1) know that you’re home, 2) know that you’ve got an incoming phone call, and 3) know that you likely want to turn the volume down on your music in order to answer the phone. So before your phone even rings, the Tropo Connect app is already lowering your volume.

Another example could be a Tropo Connect application that is connected to the Salesforce.com API. One of your important clients calls your cell phone to place an order. A Tropo Connect app could hit the Salesforce API, recognize the caller ID as one of your clients, record the call, attach the recording to a “To-do” item in Salesforce and you didn’t even have to walk off the green.

Internet of Things (IoT) with Tropo Connect

When paired with the new wave of IoT consumer devices that have recently been taking the world by storm, Tropo Connect becomes a powerful tool to automate your home or business. Because Tropo Connect apps are location aware, you can turn on your lights automatically when you arrive home, connect with your smart thermostat to save money when you are away, or even lock your doors from down the block. The options are limited only by your imagination.

Making Money with Tropo Connect

Tropo Connect is completely free for developers. When you’re ready to launch your Tropo Connect app to the world, just let us know because we’d love to help you monetize it.

Sign up for the Tropo Connect Beta

The Tropo Connect Beta is open for existing Tropo developers. You can sign up to join the beta today at http://tropo.com/connect.


Tropo Connect Press Coverage:

Full Press Release: Tropo Partners With Apcera and IBM’s SoftLayer to Launch Industry’s First In-Call Apps Platform

Opus Research:  Tropo Connect: InCall Apps Fulfill on the “Telco in Legoland” Promise

Forbes: Apcera and Tropo Launch Telco Platform

Programmable Web: Tropo Connect brings IoT, Cloud Apps to Live Phone Calls

Converge Digest: Tropo Debuts Connect Platform for Network Services on Clouds

The post Announcing Tropo Connect appeared first on Tropo.



The Arduino Yun has built-in WiFi and a second microprocessor which runs Linux. That means that you can write programs in your favorite scripting language and interact with APIs directly from your Arduino.

In this tutorial, we’ll learn how to send SMS and MMS from our Arduino Yun using Python and Twilio. By the end we will:

  • Install pip and the Twilio Python helper library on the Yun
  • Write a Python script to send an SMS and MMS
  • Send a text message from an Arduino program

Getting Started

This tutorial builds on our Getting Started with the Arduino Yun tutorial, which covers:

  • How to format your SD card for use in the Yun
  • How to upgrade to the latest version of OpenWRT-Yun
  • How to install the Arduino IDE
  • How to run your first sketch on the Yun

Assuming you’ve completed those steps, this tutorial on sending text messages should take about 15 minutes. The ingredients you’ll need are:

  • An Arduino Yun with a properly formatted SD card
  • A Twilio account (the free trial works fine for this tutorial)
  • A cellphone to receive your messages

Install pip on the Arduino Yun

In order to install the Twilio Python helper library, we’ll need to install a Python package manager called pip. SSH into your Yun (if you don’t know how to do this, check out our getting started guide).

From there:

opkg install distribute 
opkg install python-openssl 
easy_install pip

And that’s it — now pip’s installed.

By default, pip would install Python packages to the Yun’s onboard memory. However, the official Arduino Yun docs have an ominous warning that says, “You’re discouraged from using the Yún’s built-in non-volatile memory, because it has a limited number of writes.” Let’s install the Twilio library to the SD card instead.

This requires three steps:

  • Create a new directory on the SD card for our Python packages
  • Set a PYTHONPATH to tell Python to look for packages there
  • Force pip to install the Twilio library in our new directory

Plug a properly formatted SD Card into your Arduino (again, check out the Yun getting started guide if you need help with this), then create a new directory:

mkdir /mnt/sda1/python-packages

Now let’s edit our /etc/profile and set a PYTHONPATH so that Python will check our new directory for packages:

vim /etc/profile

You’ll see several lines that start with export. Below those lines, press i to enter insert mode. Then paste this:

export PYTHONPATH=/mnt/sda1/python-packages

Press esc to go back to command mode. Type :wq to save the file and quit vim. Then, from the command line, reload your environment by typing:

source /etc/profile

Finally, install the Twilio library to our python-packages directory:

pip install --target /mnt/sda1/python-packages twilio

Send SMS from your Arduino Yun

If you don’t already have a Twilio account, sign up now. You can do everything in this tutorial with the free trial.

We’re going to write a simple Python script to send a text message. There are two ways we could get this code on our Arduino:

  • Plug the SD card into our computer, write the code there using our favorite text editor, save it to the card and plug it back into the Yun
  • Write the code directly on the Yun using Vim

For this tutorial, we’ll go with the latter.

First we need to make a directory for our Python code and navigate there:

mkdir /mnt/sda1/arduino
cd /mnt/sda1/arduino

Then create a new python script using Vim:

vim send-sms.py

Once in Vim, press i to enter insert mode. Then include the Twilio Python library:

from twilio.rest import TwilioRestClient

Next we need to set our Twilio credentials. In a browser, navigate over to your Twilio Dashboard, and click Show Credentials.

[Screenshot: the Account SID and Auth Token on the Twilio dashboard]

Copy these values into variables:

account_sid = "YOURACCOUNTSID"
auth_token = "YOURAUTHTOKEN"

Then create variables for both the Twilio phone number you’ll be using for this project and your personal cellphone:

twilio_phone_number = "YOURTWILIONUMBER"
cellphone = "YOURCELLPHONE"

Create a REST client to connect with the Twilio API.

client = TwilioRestClient(account_sid, auth_token)

Once we’ve got our client, sending an SMS is a simple exercise of passing a to, a from, and a body to client.messages.create():

client.messages.create(to=cellphone, from_=twilio_phone_number, body="COMING TO YOU LIVE FROM THE ARDUINO YUN!")

Type :wq and press enter to save and exit. Then run our script:

python send-sms.py

If all goes well your phone should light up with a text message. If all doesn’t go well but Python didn’t give you an error message, check the Dev Tools under your Twilio Dashboard.

Chances are, you don’t want to edit the Python script every time you want to change the message body. Let’s modify the script to accept a message body as a command line parameter.

Open send-sms.py again and add this line to the top of your file:

import sys

Then change the line that sends the SMS to replace the hardcoded message body with the first command line argument:

client.messages.create(to=cellphone, from_=twilio_phone_number, body=sys.argv[1])

Save and quit vim, then run your script again with the message body in quotes:

python send-sms.py "Coming to you live from the Arduino command line!"

Send MMS from your Arduino Yun

Last year Twilio launched the ability to send picture messages, a.k.a. MMS. To send an MMS we need only pass one additional parameter to the client.messages.create method: a media_url to tell Twilio where our picture is located.

For the sake of accurate file names, let’s make a copy of our script. Then open the file in vim:

cp send-sms.py send-mms.py
vim send-mms.py

We can use a couple of vim’s keyboard shortcuts to navigate to our special spot.

  • Press shift-g to move to the end of the file
  • Press $ to move to the end of the line
  • Press i to insert text at the spot prior to the cursor
  • Paste this code (make sure you get the preceding comma!):

, media_url=sys.argv[2]

So that whole line should look like:

client.messages.create(to=cellphone, from_=twilio_phone_number, body=sys.argv[1], media_url=sys.argv[2])

Save and quit vim, and run your script with two parameters: one for the message body, the other a URL of an image you’d like to send. Here’s one of our puppy on the day we brought her home:

python send-mms.py "Here’s my puppy." https://s3.amazonaws.com/baugues/kaira-puppy.jpg

While you’re waiting for that picture to arrive on your phone, let’s chat about MMS.

First, MMS is a pretty slow way to send data and an image is a few orders of magnitude more data than 140 characters of text. It could take up to 60 seconds before you receive your picture.

Second, because MMS requires a publicly accessible url, it’s a non-trivial exercise to send an MMS with a picture that’s residing on your Yun. Two options are:

  • Open a tunnel through your router to give your Arduino Yun a publicly accessible IP using a service such as Yaler.
  • Upload your file to the cloud.

If that last method interests you, check out our tutorial on how to build a photobooth with an Arduino Yun, where we demonstrate how to upload pictures to Dropbox from your Yun.

Alright, hopefully by now your picture has arrived on your phone. Let’s play with some Arduino code.

Send an SMS from an Arduino Sketch

If all you wanted to do was send an SMS, you wouldn’t need an Arduino. The reason we’re doing this on the Yun is so that we can do some hardware hacking along with our software writing. Let’s write an Arduino sketch that will run our text-message-sending Python script.

Open the Arduino IDE and create a new sketch.

The Arduino Yun has two processors: the “Arduino chip” which controls the pins and interfaces with the hardware, and the “Linux and WiFi chip.” These two chips communicate with one another via the Bridge library. The Process library is how we execute Linux commands from our Arduino code.

At the top of the sketch, include the bridge and process libraries:

#include <Bridge.h>
#include <Process.h>

Your sketch comes pre-populated with setup() and loop() functions. We’ll come back to those in a minute. First, let’s write the function to call our Python script. Add this to the bottom of your sketch:

void sendSms() {
  Process p; // Create a process and call it "p"
  p.begin("python"); // Launch the "python" command
  p.addParameter("/mnt/sda1/arduino/send-sms.py"); // Path to the script we wrote earlier
  p.addParameter("\"Coming to you from the sketch\""); // The message body
  p.run(); // Run the process and wait for its termination
}

Now back to our setup() and loop(). The setup() runs one time after you upload the sketch to your Arduino. Ours is pretty simple — we’re just going to initiate the Bridge.

void setup() {
  Bridge.begin();
}

Our loop is pretty simple too. We’ll call our sendSms() function, then wait for 10 seconds.

void loop() {
  sendSms();
  delay(10000);
}

Click the checkmark in the top left corner to verify your script. Then click the upload button to send your script to the Arduino Yun.

[Screenshot: the verify and upload buttons in the Arduino IDE]

Shortly after you do that, your phone will light up with a text message. Then another. Then another. Once you’ve had enough, comment out the sendSms() line in the sketch and re-upload it to your Yun.

Next Steps

Now you’ve got an Arduino sketch that can trigger a Python script that can send a message to the 6.8 billion cellphones of the world. What does one do with that kind of power?

Perhaps you could:

  • Use environmental sensors to alert you when the temperature in the fridge rises above a certain threshold (see the sketch after this list)
  • Build a security system that texts you when motion is detected in your house
  • Hook up a webcam and have your dog send you selfies by hitting a big red button
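
As a starting point for the first of those ideas, here’s a rough sketch that assumes a TMP36 temperature sensor wired to analog pin A0, reusing the sendSms() function and the Bridge setup() from this post:

// Hypothetical fridge monitor: text yourself when it gets too warm.
// Assumes a TMP36 on A0 plus the setup() and sendSms() from above.
const float THRESHOLD_C = 8.0;

float readTemperatureC() {
  int reading = analogRead(A0);
  float voltage = reading * (5.0 / 1023.0); // ADC counts to volts (5V reference)
  return (voltage - 0.5) * 100.0;           // TMP36: 500 mV offset, 10 mV per degree C
}

void loop() {
  if (readTemperatureC() > THRESHOLD_C) {
    sendSms();     // in a real sketch you'd put the reading in the message body
    delay(600000); // back off for 10 minutes so you don't flood your phone
  }
  delay(1000);
}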

Whatever you build with your texting Arduino, I’d love to hear about it. Leave me a message in the comments below, drop me an email at gb@twilio.com or hit me up on Twitter.

Happy Hacking!

Send SMS And MMS From Your Arduino Yun



We joined the Executives’ Club of Chicago for breakfast this morning to hear Irene Rosenfeld, Chairman and CEO of the global snacking powerhouse Mondelēz International, discuss leadership in turbulent times. And Rosenfeld certainly offers a wise perspective on change: it was under her leadership that Mondelēz was launched as a snack spin-off of Kraft Foods, establishing itself as a world leader in chocolate, biscuits, gum, candy, and more. But it hasn’t been all sunshine and rainbows: growth happened slowly at first due to things like rising costs of supplies, climate change, explosions in world population, and all the other uncontrollable factors that any business owner knows all too well.

Asked to give a speech on how to lead during turbulent times, Rosenfeld laughed and said, “Times are always turbulent, so get over it!” and offered these 4 lessons she’s learned and wants to pass on to CEOs and leaders facing change.

1) Control the Controllable

Rosenfeld quoted Jimmy Dean for this lesson: “I can’t change the direction of the wind, but I can adjust my sails.” Whether the winds that buffet your business are rising costs or a shortage of much-needed resources, you can’t control the challenges you will face. But you can control how you meet them. Rather than sitting tight and hoping the future will be better, adjust your course and navigate on.

2) 95% of Success Is Execution

According to Rosenfeld, we should spend 5% of our time developing our strategy, and 95% of our time executing it. Once Rosenfeld decided to make Mondelēz a snack spin-off of Kraft, execution took a lot of time and attention: changing the organizational structure to mesh with the company’s aspirations, simplifying processes, focusing investment on high-performance regions. New positions were created within the company to ensure that they weren’t attempting to do something new while approaching it the same old way. This is a particularly interesting way of thinking about and embracing change: it’s an opportunity to approach old processes and structures with a fresh set of eyes and goals.

3) Get the Right People on the Bus

A good CEO understands that changing a company isn’t like hopping into a fast car and speeding off into the sunset, waving for everyone to follow. You need a bus (sometimes a lot of buses), and you need to get the right people on board with you. This means finding agile leaders with a global mindset, both internally and externally, and mentoring and nurturing them to be inspired and invigorated by change. You can’t do it alone, and the right leaders with the right levels of excitement can guide the rest of the company on to greatness.

4) Manage Change Constantly and with Consistent Communication

Most CEOs have a clear and powerful vision of their company’s future; this is how they became CEOs to begin with. But communicating that vision, in ways that are effective and timely, is just as important as the vision itself. Rosenfeld described global meetings with conversations led by her, but also videos, social media initiatives, and internal community meetings, all with the goal of transparently discussing upcoming changes. Alignment can mean the difference between collapse and success.

It’s true that things are always turbulent when you’re running a business, and the bigger the business, the more there is at stake. But when Rosenfeld says “get over it,” she doesn’t mean resign yourself to a bumpy road. She means put on your climbing gear and tackle the mountain ahead.

The post How CEOs Can Lead When the Winds of Change Are Blowing appeared first on DialogTech.



February 25, 2015

This is post #3 of 3 in the series “How the Digital Revolution is Reshaping Education”

In the first two parts of this series, we focused on how digital textbooks increase learning while reducing costs, and the benefits of the Virtual Learning Environment (VLE) for both the online-based and traditional classroom. Part three of this series focuses on Internet connectivity both on- and off-campus.

Gone are the days of running to the computer lab between lectures, hoping to snag an available computer to do some research, check e-mail, or type up an assignment. Today’s students are connected nearly every minute of their waking life, and the majority are even connected whilst catching some z’s.

The majority of campuses today still have computer labs, though these labs often have a very specialised purpose; for example, the University of California Los Angeles has both generalised computer labs as well as special laboratories for statistics, psychology, computer programming, and chemistry (Source: UCLA). The University of Iowa has more than 26 computer labs on campus, comprising over 1000 workstations available for students (Source: UIowa).

Students can still drop in between classes or reserve a computer ahead of time to take advantage of high-speed Internet connections. On campus, Ethernet connections remain the fastest way to get online, with ping times of less than 10ms and download speeds ranging from 70–970 Mb/s (Source: OOKLA). However, as more and more students bring their own devices to campus, wireless connectivity is becoming a major part of the learning experience, as students are able to get online anywhere, anytime, throughout campus without being figuratively chained to a computer lab table.

WiFi connectivity has become a huge selling point for higher education. As early as 2008, 73% of students stated that WiFi helped them achieve better grades, and in 2011, 60% of students reported that they would not attend a college without WiFi services (Source: Wi-Fi.org; DigitalTrends). By the start of the 2014 academic year, universities reported that new students expected WiFi to be readily available throughout the campus. 97% of student housing in the United States had wireless access available to students by 2013, an increase of 16% from the previous year (Source: EdTechMagazine). At California Polytechnic State University, more than 40,000 unique devices connected to the WiFi network per day in 2014, up from about 8,000 in 2012 (Source: EdTechMagazine). This huge increase cannot be overstated: WiFi, and connectivity in general, is hugely important to today’s student.

Why is Internet connectivity so important? Today’s student is intricately connected to the world around him, constantly checking email, course information, and grades. Pertinent information is often sent out via the Internet, whether by email or via a virtual learning environment. Additionally, many technologies students use on a regular basis are cloud-based, and can therefore only be accessed with an Internet connection. These cloud-based technologies will be discussed further in the next part of this series.

Whatever the future may bring for education, one thing is certain: the Internet will remain of utmost importance, as more and more students have more and more devices and require more and more of what the Internet has to offer in order to achieve success.

This is the third part of our series: How the Digital Revolution is Reshaping Education. The next post will discuss computer labs and specialized software, with a focus on newer, cloud-based software.



In Google’s most recent nod to the importance of mobile marketing, they’ve introduced a new campaign type for AdWords users – the ‘Call-Only’ campaign. The ‘Call-Only’ campaign feature is geared toward mobile-centric marketers who know that generating call conversions and leads from mobile users is much easier than generating web form submissions. In fact, 70% of mobile searchers call a business directly from search results (Google).

The Call-Only campaign type is designed to help AdWords advertisers reach customers who are actively searching for a phone number to call on their mobile device. Since calling is the preferred contact method of mobile users, the Call-Only campaign type will be essential in driving more calls – and sales – to businesses everywhere.

However, if you aren’t tracking the calls your mobile marketing generates, you’ll have a hole in the ROI data you report on. By using a call tracking number in the ads or landing pages within your Call-Only campaigns, you’ll have a true view of the number of website call conversions and web form conversions your AdWords campaigns are generating.

View Call Conversions Within the AdWords Interface

Also, in December 2014, Google quietly released custom columns support in AdWords. This enables advertisers using call tracking numbers to add a custom column for calls within the AdWords reporting interface.

Setting up custom columns is easy for customers who are already tracking calls as conversions.

Step 1: Click Customize columns and select the Custom columns tab.

Custom Columns Tab in AdWords

Step 2: Create a new column with a name and description, and select your Calls segment.

custom-columns

Step 3: View website call conversions alongside all other AdWords data.

Website Call Conversions in AdWords

This new feature gives advertisers a view of how many total conversions their AdWords campaigns received, along with how many of those conversions were phone calls. You can set up custom columns down to the keyword level to see which campaigns, ad groups, and keywords are driving the most calls and conversions.

In the image below you can see that of 12 total conversions for this campaign, 8 were phone calls. Having this capability in the AdWords interface makes it even easier for agencies, advertisers, lead gen firms, and the like to show the value of driving phone calls to prove that their marketing is effective.

call conversions in adwords

Questions about tracking call conversions from AdWords? Request a demo of Ifbyphone today.

The post Google AdWords Introduces ‘Call Only’ Campaigns appeared first on DialogTech.



February 24, 2015

Communication at its core is just a conversation between two parties. We love good conversation, and we want to make it as easy as possible for you to build apps that power those connections – and to do it securely. Today, we welcomed Authy to the Twilio tribe to make it even easier for you to integrate strong user identification into anything you’re building.

Sign up for Authy, and learn how to implement two factor authentication in Node Express right here.

Back in 2011, Daniel Palacio was getting Authy off the ground in a security landscape that’s virtually unrecognizable now. Those key fobs that used to dangle from our keychains have been replaced by our phones. Today, instead of reading a code off a fob, we wait for a text with an authentication code. Two factor authentication is an essential layer of security for every business, but it hasn’t always been easy to implement.

In this video, Patrick Malatack and Daniel Palacio talk about how Authy helps developers save time building out essential security layers, reducing their development cycle from a few months to a few days.


 

Learn more about Authy joining the Twilio tribe here. #AhoyAuthy

Building With Authy And Twilio: Keeping Conversations Open And Security Gaps Closed



Comcast isn’t exactly known for its exceptional customer service. Just the opposite, in fact: some very serious gaffes in the past six months or so have put the provider of TV, Internet, and phone services in a negative spotlight more than it would like. I’m not sure if it was an attempt to repair a badly […]



We recently wrote about digital ad benchmarks and what they reveal about mobile vs. desktop conversions. There’s a lot to keep up with regarding mobile in a marketing landscape changing as quickly as this one, but the businesses seeing the most success in their embrace of mobile marketing understand these 3 things – and if you don’t want to get left in the dust, you should too.

#1) Smartphones Are Digital Portals, But They’re Also Phones

When mobile really got on its feet and became the portal to email, the Internet, and social media that we all know and (usually) love, the running joke was, “Smartphones are amazing! People use them for everything, even the occasional phone call! Ha ha!” But that mode of thinking is not only outdated, it’s dangerous, as it risks being blind to the needs of a huge range of consumers and would-be customers. The numbers speak for themselves:

  • 79% of consumers would prefer to contact a customer service center over the phone
  • 68% of consumers on smartphones used the call button on a smartphone local listing
  • 70% of smartphone users think a call button is important
  • 67% of online shoppers will call a business directly for any purchase greater than $100

#2) Smartphones Are a Tool on the Path to Purchase, but They Don’t Do the Work for You

Consumers increasingly use their smartphones to help make a purchase decision, and often this means good things for you. After all, people using smartphones are 30% more likely to visit a retailer website, 39% more likely to call a business, and 51% more likely to make a purchase. But the mere existence of smartphones and your business in the same universe is not enough to truly benefit from what is possible as a result of their integration. You must do just that: integrate, and optimize. At this point, a responsive website is not optional. I repeat, not optional. Prospects are using their smartphones to access your site right this second, and if you’re not optimized for that user experience, they’re going to surf right on down the virtual road. As mentioned above, 70% of smartphone users think a call button is important, so ensure you have phone numbers on every page for easy connection between them and you.

#3) You’re Not the Only One Struggling to Measure Cross-Channel Performance

According to this CMO Club study, 82.2% of marketers don’t have the ability to measure cross-channel performance or return on investment. With 4.7 billion people worldwide having access to a mobile phone by the end of this year, the mobile market is only getting bigger, and that means digital marketers have ever more activity and behavior to track and differentiate in their reporting. You’re not alone in your struggle, but don’t get comfortable: the tools are out there to bridge the gap, and you definitely don’t want to be the only one not taking advantage of them.

Surviving the mobile tsunami isn’t as hard as you think. This webinar can be your lifejacket: Riding the Impending Wave of Smartphone-Driven Calls.

The post 3 Things You Must Understand About Mobile Marketing Today appeared first on DialogTech.



It’s 2015, and every week there’s another security breach. We’ve learned that retailers aren’t safe from their HVAC vendors, that Seth Rogen can stir up an international cybersecurity incident, and that not even the venerable OpenSSL can be trusted. The only strategy is multiple layers of security, and so every login box on the Internet needs to be secured and secured again.

That’s non-controversial, but doing so has traditionally introduced substantial user friction. Five years ago, two factor authentication required hardware fobs that were expensive – and truthfully, who wants more hardware on their keychain? And what about email-based verification? No one wants to abandon the app and refresh their spam folder. Security is only as good as its usability – the most secure scheme is rendered useless if users don’t adopt it.

Mobile has provided a great solution to create strong identity verification with reduced friction. For the past five years, Twilio customers have been using our voice and SMS APIs to verify a user’s identity via their phone number. In fact, this year Twilio will perform strong identity verification for over half a billion people across apps like Box, Intuit, Github, and more.
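
To make the idea concrete, here is a minimal sketch of SMS-based verification using the twilio Node helper (the same client used in the tutorial later in this collection). The phone number and the storage step are placeholders, and this illustrates the general pattern, not Authy’s API:

var twilio = require('twilio');
var client = new twilio.RestClient(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);

// Generate a six-digit one-time code
var code = String(Math.floor(100000 + Math.random() * 900000));

client.sendSms({
  to: '+15551234567',              // placeholder: the number the user claims to own
  from: process.env.TWILIO_NUMBER, // a Twilio number you control
  body: 'Your verification code is ' + code
}, function(err, message) {
  if (err) return console.log(err);
  // Store `code` server-side (session, database, etc.) and compare it
  // against what the user types back to complete the verification.
  console.log('Verification SMS sent: ' + message.sid);
});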

But SMS and voice are only part of the picture – and developers building two-factor authentication and phone verification with Twilio have historically had to re-invent their own TFA implementation using our APIs. Beyond voice and SMS, TOTP (time-based one time passwords) is the new(ish) player on the block, but existing mobile solutions to enable TOTP kinda suck (we’re being truthful, right?). Developers love inventing – but re-inventing is just grunt-work.

At Twilio, our goal is to let you build more with less code. And one of our customers has a great solution for developers seeking easy to implement strong identity verification: Authy.

Authy was born because security only works when end-users choose to adopt it. Since 2012, Authy has been laser-focused not only on the usability of its APIs, but also on the end-user experience – making an API that helps you help your users be more secure.

That is why we’re excited today to announce that the Authy team is joining Twilio, bringing a world-class Strong User Authentication API into the Twilio product mix. Authy’s REST APIs and SDKs implement Two Factor Authentication and Phone Verification as a service – so as a developer looking to verify customer identity, you get to git commit faster than ever before. Their APIs let you incorporate two factor authentication, like an SMS or TOTP verification, into your login flow, as well as phone number verification APIs for your mobile app signup/sign-in experience. You can also finally offer TOTP that doesn’t suck – via the Authy mobile app.

Authy is already built on top of Twilio, and their APIs complement ours fantastically, so you can get up and running in no time. Try it out here. In the coming weeks, we will incorporate Authy’s developer portal into our Twilio account portal and integrate their billing into our own – making the developer experience seamless for you.

Authy already protects over 7,000 web and mobile applications – and we hope to grow that number substantially now that we’ve joined forces. And to address the burning question on every M&A announcement: This isn’t a typical acquisition where the Authy team members will be absorbed into the borg and the product slowly forgotten. Nope. Just the opposite – we love the Authy product and are investing massively in expanding its footprint with developers of all kinds. For strong identity verification use-cases, Authy is a more complete solution than Twilio is today – and we’re excited to get it in your hands.

Please join me in welcoming Authy to the Twilio team, #ahoyauthy!

We can’t wait to see what you’ll build!

Ahoy Authy: Welcome Authy To The Twilio Family



February 23, 2015

Being a developer evangelist means I get to work from anywhere. Oftentimes I find myself on the road going to wicked events and having great conversations with developers from all around the world.

The flipside is that most of my colleagues are also remote and are usually doing the same things around the communities they serve. It can be difficult to have meetings with them, or with anyone else who is not physically located in the same city, country, or continent as I am.

We usually resolve this situation by holding meetings over the phone, but one of the problems that affects me is that my calendar reminder will pop up 15 minutes before the meeting and I will snooze it until 5 minutes before it starts.

I will also sometimes hit dismiss accidentally, only to be reminded via instant messenger that I was meant to be in a meeting that started about eleven minutes ago.

One morning I was thinking about this situation, and the fact that there had to be a better way of solving this problem other than adding multiple reminders to my calendar or phone.

My first solution was simple: “I need to get myself a Personal Assistant!”

Now let’s take a step back – Twilio is all about making my life easier, but I don’t think I have enough appointments to justify hiring my own PA.

Being a resourceful developer I thought about the next best thing: build myself my own software PA. What if I could build a script that keeps tabs on my Google Calendar and whenever it found I’m meant to have a call with someone it would connect me with that person automatically by dialing out to both of us?

“That sounds like something I can knock out in an afternoon!” I said.

To build our very own PA, we will need the following:

  • A free Twilio account and a voice- and SMS-capable Twilio phone number
  • A Google account with a calendar we can monitor
  • Node.js and npm installed
  • A running MongoDB instance
  • ngrok, to expose our local application to Twilio

If you don’t want to go through the entire build, feel free to check out the code repository here.

Finding your Google Calendar Credentials

Google Calendar credentials can be tricky to get if you’ve not handled them before. In the steps below I will walk you through the process of getting those credentials.

Start by heading to the Google Developer Console and clicking Create Project.

twilio-pa_0.png

The subsequent screen will ask you to give this project a name. We will call it Twilio PA and leave the Project ID with its default value.

After the project is created, click on APIs & Auth in the left-hand side menu, then click APIs. You should see a big list of all the APIs available to that project. The one we’re interested in is Google Calendar, so let’s turn that on.

twilio-pa_1.png

On the left-hand side menu click on Credentials under APIs & Auth and click Create new Client ID under OAuth. Because we are creating a Web Application, we will choose this option.

Click on Configure consent screen and you will be redirected to a page that requires your email address and a Product Name. Fill that in with Twilio PA as the product name and click save.

twilio-pa_8.png

You will then be presented with a modal screen that lets you fill in a couple of URLs. For the purposes of this article our project will run on localhost on port 2002. Use the following values before clicking Create Client ID:
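
For reference, here are the two values spelled out. The redirect URI matches the /auth route we build later in app.js; the origin is assumed to be our local host and port:

Authorized JavaScript origins: http://localhost:2002
Authorized redirect URIs: http://localhost:2002/auth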

twilio-pa_2.png

This page will set up our application and give us the credentials. If you are seeing a screen similar to the one below, you have gone through the process successfully. Take note of the CLIENT ID and CLIENT SECRET.

twilio-pa_3.png

Choosing the calendar you want to monitor

I have many different calendars on my account. Usually I have personal appointments separated from my work ones to make sure I’m giving work appointments higher priority.

Because I want to make sure I never miss any of my work calls, I will now make a note of the ID for my “work” calendar. The easiest way to find out this ID is by going to your Google Calendar and clicking on Settings. Settings is represented by a cog icon on the right-hand side. You should now be on a screen called Calendar Settings. Clicking on the Calendars icon on the top menu will take you to a list with all the calendars you own. Click on your chosen calendar and on the subsequent screen you will see a category called Calendar Address.

twilio-pa_4.png

Make a note of the Calendar ID. This is the identifier of the calendar we want to use.

Setting up a phone number

Go to the Twilio numbers page and click Buy a Number if you don’t already have one available.

twilio-pa_0_1.png

Pick a number with voice and SMS capabilities and click Buy.

twilio-pa_0_2.png

Click Buy This Number and you will see a confirmation screen.

twilio-pa_0_3.png

How is this going to work?

We have now successfully collected all the pieces we need to build this application. Every time the application runs, it will collect all the meetings scheduled for that day and create scheduled tasks in our MongoDB instance, as the diagram below shows.

twilio-pa (2) (1).png

As you can see in the diagram, two scheduled tasks are created for every event. Those tasks consist of an SMS message sent to us 5 minutes prior to the meeting, and a task triggered exactly at the start of the meeting that places a call to our telephone number and then chains it with another call to the number of the person we are meeting with.

Setting it all up

Let’s start by putting together a small Node.js application that uses Express for routing. You can do this anywhere you like, but in my case I’ve created a directory called twilio-pa under ~/Projects/JavaScript/.

Create a file under that directory called package.json. This file will contain the definitions of this project and have all the following dependencies:

  • Express as our web application framework and for route management.
  • Googleapis for authentication and interaction with the Calendar API.
  • Agenda for our scheduled tasks.
  • DateJS to help us deal with dates and times.
  • Twilio for interaction with Twilio’s API.
  • MongoDB for database interaction.

{
  "name": "twilio-pa",
  "version": "0.0.1",
  "description": "A Google calendar personal assistant powered by Twilio",
  "author": "Marcos Placona",
  "dependencies": {
    "express": "~4.11.1",
    "googleapis": "~1.1.1",
    "agenda": "~0.6.26",
    "datejs": "~1.0.0-rc3",
    "twilio": "~1.10.0",
    "mongodb": "~1.4.30"
  }
}

 

If you want to read more about what each of these attributes means, you can find their definitions here.

We need to ensure these dependencies are installed in our project folder. We do so by going into the terminal and running:

$ npm install

Notice your project folder now has a new directory called node_modules, and under it a bunch of other directories for each of the dependencies specified in package.json.

The next thing we’ll do is create two new files called config.js and app.js in the root of our project, which in our example is ~/Projects/JavaScript/twilio-pa. We now have three files in that directory: config.js and app.js, which are currently empty, and package.json.

In the config.js file we will add the configuration for our application: the Google credentials we collected earlier, plus our Twilio telephone number and credentials. You can grab your Twilio credentials from the dashboard.

var config = {};

// HTTP Port to run our web application
config.port = process.env.PORT || 2002;

// My own telephone number for notifications and calls
config.ownNumber = process.env.TELEPHONE_NUMBER;

// Your Twilio account SID and auth token, both found at:
// https://www.twilio.com/user/account
// A good practice is to store these string values as system environment variables, and load them from there as we are doing below. 
// Alternately, you could hard code these values here as strings.
config.twilioConfig = {
    accountSid: process.env.TWILIO_ACCOUNT_SID,
    authToken: process.env.TWILIO_AUTH_TOKEN,
    // A Twilio number you control - choose one from:
    // https://www.twilio.com/user/account/phone-numbers/incoming
    number: process.env.TWILIO_NUMBER
  }
// Google OAuth Configuration
config.googleConfig = {
  clientID: process.env.GOOGLE_CLIENT_ID,
  clientSecret: process.env.GOOGLE_CLIENT_SECRET,
  calendarId: process.env.GOOGLE_CALENDAR_ID,
  // same as configured at the Developer Console
  redirectURL: 'http://localhost:2002/auth'
};
// MongoDB Settings
config.mongoConfig = {
    ip: '127.0.0.1',
    port: 27017,
    name: 'twilio-pa'
  }
// Export configuration object
module.exports = config;

We have configured two variables here that we have not talked about yet. One of them is ownNumber, which is your own telephone number. This is the number our application will text and dial every time before a call starts.

The other configuration is mongoConfig with information about the MongoDB instance.
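
Since all of these values are read from environment variables, one way to provide them (shown here with placeholder values) is to export them in the terminal session you will later start the app from:

$ export TWILIO_ACCOUNT_SID=ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
$ export TWILIO_AUTH_TOKEN=your_auth_token
$ export TWILIO_NUMBER=+15005550006
$ export TELEPHONE_NUMBER=+15551234567
$ export GOOGLE_CLIENT_ID=your_client_id
$ export GOOGLE_CLIENT_SECRET=your_client_secret
$ export GOOGLE_CALENDAR_ID=your_calendar_id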

Edit app.js to make sure our setup is correct and all dependencies are properly installed.

var config = require('./config');

// Dependency setup
var express = require('express'),
  google = require('googleapis'),
  date = require('datejs'),
  twilio = require('twilio');

// Initialization
var app = express();

app.get('/', function(req, res) {
  res.send('Hello World!');
});

var server = app.listen(config.port, function() {
  var host = server.address().address;
  var port = server.address().port;

  console.log('Listening at http://%s:%s', host, port);
});

Save that, and make sure your MongoDB instance is started.

In a terminal window, type:

$ node app.js

This will start up your application.

In the browser, navigate to http://127.0.0.1:2002. If you see “Hello World!” on the screen, you’ve done it all right. We’re just about to get serious.

twilio-pa_5.png

As previously mentioned, we will be doing some database interaction in this app to store some of the authentication information. Connecting to MongoDB is straightforward, but opening and closing connections every time is not cool, so we will create a new file called connection.js right in the root of our application.

$ touch ~/Projects/JavaScript/twilio-pa/connection.js

This file will deal with our connection and also make sure we only ever open one connection, no matter how many times we access the database.

var config = require('./config');
var MongoClient = require('mongodb').MongoClient;

var dbSingleton = null;

var getConnection = function getConnection(callback) {
  if (dbSingleton) {
    callback(null, dbSingleton);
  } else {
    var connURL = 'mongodb://' + config.mongoConfig.ip + ':' + config.mongoConfig.port + '/' + config.mongoConfig.name;
    MongoClient.connect(connURL, function(err, db) {

      if (err)
        console.log("Error creating new connection " + err);
      else {
        dbSingleton = db;
        console.log("created new connection");

      }
      callback(err, dbSingleton);
      return;
    });
  }
}

module.exports = getConnection;

We now have a connection manager, so let’s do a bit of refactoring in app.js around the initialization. We need to authenticate via OAuth2 in order to get access to our Google Calendar, and we also need to add a dependency on our new connection manager. We do so by changing our code as follows:

var config = require('./config');
var getConnection = require('./connection');

// Dependency setup
var express = require('express'),
  google = require('googleapis'),
  date = require('datejs'),
  twilio = require('twilio');

// Initialization
var app = express(),
  calendar = google.calendar('v3'),
  oAuthClient = new google.auth.OAuth2(config.googleConfig.clientID, config.googleConfig.clientSecret, config.googleConfig.redirectURL);

All the information under the config scope is already available to us from the moment we include config.js in our file.

Let’s also create a CalendarEvent class under the code above. This class will be helpful when we need to pass event information around.

// Event object
var CalendarEvent = function(id, description, location, startTime) {
  this.id = id;
  this.eventName = description;
  this.number = location;
  this.eventTime = Date.parse(startTime);
  this.smsTime = Date.parse(startTime).addMinutes(-5);
};

Our next task is to modify our existing route so that instead of returning “Hello World!” it does something more meaningful.

app.get('/', function(req, res) {
  getConnection(function(err, db) {
    var collection = db.collection("tokens");
    collection.findOne({}, function(err, tokens) {
      // Check for results
      if (tokens) {
        // If going through here always refresh
        tokenUtils.refreshToken(tokens.refresh_token);
        res.send('authenticated');
      } else {
        tokenUtils.requestToken(res);
      }
    });
  });
});

A few things are going on here. We start off by checking whether there are already any tokens in the database. They won’t exist the first time we run this script after starting our Node server, so we will fall into the not-authenticated category. This calls a function named requestToken, which we haven’t implemented yet.

The next time we run this same script, the logic at the top of the function will tell us we already have tokens in the database, so we only need to refresh them by calling refreshToken, which we also haven’t implemented yet.

The reason we need to refresh the token is that, for security reasons, Google only gives us tokens that are valid for 60 minutes. If you try to use one after that, your authentication will fail since the token is no longer valid. Refreshing it tells Google we need access to that account for a bit longer, and prompts Google to issue us a new access token valid for another 60 minutes.

Let’s implement some of these utility functions in a new file called ~/Projects/JavaScript/twilio-pa/token-utils.js. Add the following to it.

var getConnection = require('./connection');

/* 
  Receives a token object and stores it for the first time. 
  This includes the refresh token
*/
var storeToken = function(token) {
    getConnection(function(err, db) {
      // Store our credentials in the database
      var collection = db.collection("tokens");
      var settings = {};
      settings._id = 'token';
      settings.access_token = token.access_token;
      settings.expires_at = new Date(token.expiry_date);
      settings.refresh_token = token.refresh_token;

      collection.save(settings, {
        w: 0
      });
    });
  }
/* 
  Updates an existing token taking care of only updating necessary 
  information. We want to preserve our refresh_token
 */
var updateToken = function(token, db) {
  getConnection(function(err, db) {
    var collection = db.collection("tokens");
    collection.update({
      _id: 'token'
    }, {
      $set: {
        access_token: token.access_token,
        expires_at: new Date(token.expiry_date)
      }
    }, {
      w: 0
    });
  });
}

The above are a couple of utility functions that we use to store and update our tokens. After the first authentication there will always be a single entry in our database, and it should always contain a refresh token.

The updateToken function has one caveat: it only updates the data we want it to update instead of updating the entire document. This guarantees our refresh token is always present and never gets overwritten.
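
Based on the fields storeToken saves, the single document in the tokens collection will look roughly like this (values invented for illustration):

{
  "_id" : "token",
  "access_token" : "ya29.example-access-token",
  "expires_at" : ISODate("2015-02-23T15:30:00Z"),
  "refresh_token" : "1/example-refresh-token"
}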

Google only returns a refresh token the first time we authenticate, so we have to make sure we store it at that time. If you fail to store it for some reason, you can always head to Google Account Permissions and revoke access for Twilio PA. The next time you authenticate, you will be given a new refresh token.

twilio-pa_9.png

Moving on, we will now create two authentication functions that deal with Google’s authentication servers. These functions cover both the situation where we haven’t yet authenticated and need a full set of tokens, and the one where we have already authenticated and just need to retrieve the tokens from the database.

/* 
  When authenticating for the first time this will generate 
  a token including the refresh token using the code returned by
  Google's authentication page
*/
var authenticateWithCode = function(code, callback, oAuthClient) {
  oAuthClient.getToken(code, function(err, tokens) {
    if (err) {
      console.log('Error authenticating');
      console.log(err);
      return callback(err);
    } else {
      console.log('Successfully authenticated!');
      // Save that token
      storeToken(tokens);

      setCredentials(tokens.access_token, tokens.refresh_token, oAuthClient);
      return callback(null, tokens);
    }
  });
}

/* 
  When authenticating at any other time this will try to 
  authenticate the user with the tokens stored on the DB.
  Failing that (i.e. the token has expired), it will
  refresh that token and store a new one.
*/
var authenticateWithDB = function(oAuthClient) {
  getConnection(function(err, db) {
    var collection = db.collection("tokens");
    collection.findOne({}, function(err, tokens) {
      // if current time < what's saved
      if (Date.compare(Date.today().setTimeToNow(), Date.parse(tokens.expires_at)) == -1) {
        console.log('using existing tokens');
        setCredentials(tokens.access_token, tokens.refresh_token, oAuthClient);
      } else {
        // Token is expired, so needs a refresh
        console.log('getting new tokens');
        setCredentials(tokens.access_token, tokens.refresh_token, oAuthClient);
        refreshToken(tokens.refresh_token, oAuthClient);
      }
    });
  });
}

The function authenticateWithCode is only used when we’re authenticating for the first time. It takes a code we received back from Google’s authentication servers and generates a set of tokens, which we store in the database. This function also tells any calling functions when it has completed, via a callback. This callback is especially useful when we call this function for the first time in app.js and need to make sure the authentication has occurred before we redirect the user.

When we authenticate from the database using authenticateWithDB, we have to check whether the access token is still valid so we can use it. If it has already expired, we use the refreshToken function, as you can see below:

// Refreshes the tokens and gives a new access token
var refreshToken = function(refresh_token, oAuthClient) {
  oAuthClient.refreshAccessToken(function(err, tokens) {
    updateToken(tokens);

    setCredentials(tokens.access_token, refresh_token, oAuthClient);
    console.log('access token refreshed');
  });
}

var setCredentials = function(access_token, refresh_token, oAuthClient) {
  oAuthClient.setCredentials({
    access_token: access_token,
    refresh_token: refresh_token
  });
}

var requestToken = function(res, oAuthClient) {
  // Generate an OAuth URL and redirect there
  var url = oAuthClient.generateAuthUrl({
    access_type: 'offline',
    scope: 'https://www.googleapis.com/auth/calendar.readonly'
  });
  res.redirect(url);
}

The refreshToken and requestToken functions take care of requesting a new token from Google, or refreshing one that has already expired. These are the only two functions that actually communicate with Google’s authentication servers in order to retrieve or refresh a token.

Finally, we need to make sure some of these functions are available outside the scope of this file. We do so by exporting them as follows:

module.exports = function(oAuthClient){
  var module = {};

  module.refreshToken = function(refresh_token){
    refreshToken(refresh_token, oAuthClient);
  };

  module.requestToken = function(res){
    requestToken(res, oAuthClient);
  };

  module.authenticateWithCode = function(code, callback){
    authenticateWithCode(code, function(err, data){
      if(err){
        return callback(err)
      }
      callback(null, data);
    }, oAuthClient);
  };

  module.authenticateWithDB = function(){
    authenticateWithDB(oAuthClient);
  };

  return module;
};

Open up app.js and add the dependency on our new file right after we initialise oAuthClient. We need to do it at this stage since we will pass a reference to oAuthClient into our new file so it can reuse it.

// Initialization
var app = express(),
  calendar = google.calendar('v3'),
  oAuthClient = new google.auth.OAuth2(config.googleConfig.clientID, config.googleConfig.clientSecret, config.googleConfig.redirectURL),
  tokenUtils = require('./token-utils')(oAuthClient);

Google’s authentication server needs to know where to redirect us once it has validated that we are authenticated. We already told the authentication servers where to redirect by setting the Authorized Redirect URIs in the developer console to http://localhost:2002/auth, so all we need to do now is create the auth route in app.js underneath the “/” route.

// Return point for oAuth flow
app.get('/auth', function(req, res) {

  var code = req.query.code;

  if (code) {
    tokenUtils.authenticateWithCode(code, function(err, data) {
      if (err) {
        console.log(err);
      } else {
        res.redirect('/');
      }
    });
  }
});

We receive a code back from the request and then get an authentication token generated by passing it to the authenticateWithCode function. We call authenticateWithCode with a callback, as this allows us to know when the authentication cycle has completed.

If the authentication is successful the user is redirected to our entry route “/”.

If the authentication failed, we show an error on the console describing why. This failure could be because you typed your password wrong, or failed to give the correct permissions to view that calendar. Google has extensive documentation about their OAuth flow in case you want to learn more.

There is one last thing we need to do here: change our server initialisation so it tries to authenticate the user upon start-up. This initialisation flow guarantees we always have a fresh access token immediately after we finish loading the database.

var server = app.listen(config.port, function() {
  var host = server.address().address;
  var port = server.address().port;

  // make sure user is authenticated but check for existing tokens first
  getConnection(function(err, db) {
    var collection = db.collection("tokens");
    collection.findOne({}, function(err, tokens) {
      if (tokens) {
        tokenUtils.authenticateWithDB();
      }
    });
  });

  console.log('Listening at http://%s:%s', host, port);
});

Start your Node server again and try to hit the application at http://127.0.0.1:2002. You should see the following screen:

twilio-pa_6.png

If you see a screen that asks you to select which account you would like to use, chances are you are already logged in. Clicking on the user that owns the calendar you chose will redirect you to a screen that says “authenticated”.

Congratulations! We have gone through the most excruciating part of this post and everything after here will be a lot more fun.

Schedules

Our scheduled tasks will be created using the Agenda module. Agenda is a great module for task management as it persists your tasks to a database, so even if you restart your application your tasks will still be runnable when the application starts again.

On your terminal, create a new file called job-schedule.js in the root directory of the application. In my case it is ~/Projects/JavaScript/twilio-pa.

$ touch ~/Projects/JavaScript/twilio-pa/job-schedule.js

Open this file up and add the following code to it.

var Agenda = require("agenda");
var config = require('./config');
var agenda = new Agenda({
  db: {
    address: config.mongoConfig.ip + ':' + config.mongoConfig.port + '/' + config.mongoConfig.name
  }
});
exports.agenda = agenda;

All the information needed to initialize the module is read from our configuration file. If later on we decide to change one of the settings, like the port MongoDB runs on, all we have to do is change the configuration file.

Create a new directory in the root of our application called jobs. This directory will contain the definitions for the two scheduled tasks we want to create – Send SMS and Start Call.

You can create the directory from your terminal like so:

$ mkdir ~/Projects/JavaScript/twilio-pa/jobs

In this directory, create a file called send-sms.js. This file is one of our scheduled task definitions and contains all the logic to send an outbound SMS.

var config = require('../config');
var twilio = require('twilio');
// Create a new REST API client to make authenticated requests against the
// twilio back end
var client = new twilio.RestClient(config.twilioConfig.accountSid, config.twilioConfig.authToken);

exports.send = function(agenda, event, task, number) {
  agenda.define(task, function(job, done) {
    client.sendSms({
      to: number,
      from: config.twilioConfig.number,
      body: 'Your call ('+ event.eventName +') will start in 5 minutes. Make sure you\'re in a quiet place'
    }, function(error, message) {
      if (!error) {
        console.log('Success! The SID for this SMS message is:');
        console.log(message.sid);
        console.log('Message sent on:');
        console.log(message.dateCreated);
        console.log(message.to);
      } else {
        console.log(error);
        console.log('Oops! There was an error.');
      }
    });
    done();
  });
  agenda.create(task).schedule(event.smsTime).unique({'id': event.id}).save();
}

We are again using our configuration file, but we are also now importing the Twilio library, which makes it much easier for us to interact with the Twilio API.

Everything inside the agenda.define callback is what we want executed when our scheduled task runs.

The send method takes four arguments: the Agenda object initialized by job-schedule.js, a CalendarEvent object, a task name, and a telephone number.

In the last line, we make sure our tasks are scheduled at the time we want them to run, as defined in the CalendarEvent object, and that they are created uniquely. This step is very important, as it guarantees the tasks aren’t created over and over again every time the script runs. It also takes care of any updates to that event, such as a different telephone number or time changes.

Create another job called start-call.js. This job is similar to the one we just created, but as the name says, it is responsible for starting telephone calls. This is the job that runs at the time of the meeting and connects us with the person we are meant to be on a call with.

For this step Twilio needs to be able to connect to our local application. We could handle this in two ways: by deploying it to a public web server, or by using ngrok to expose our local environment via tunneling. We will go with option two here to make things easier. My colleague Kevin Whinnery wrote a great blog post on getting up and running with ngrok.

Once you have ngrok installed, get it connected to your app by opening up another terminal screen and running:

$ ngrok 2002

Once it is running, it will acquire a unique URL for our application, and this is what we will use as the url attribute for the makeCall method.
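
With the classic ngrok client, the console output includes a Forwarding line roughly like the one below; the subdomain is randomly assigned, so yours will differ:

Forwarding    http://300dcd5b.ngrok.com -> 127.0.0.1:2002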

twilio-pa_7.png

Make a note of this Forwarding URL, as you will need to use it in the code below.

var config = require('../config');
var twilio = require('twilio');
// Create a new REST API client to make authenticated requests against the
// twilio back end
var client = new twilio.RestClient(config.twilioConfig.accountSid, config.twilioConfig.authToken);

exports.call = function(agenda, event, task, number) {
  agenda.define(task, function(job, done) {
    // Place a phone call, and respond with TwiML instructions from the given URL
    client.makeCall({

      to: number, // Any number Twilio can call
      from: config.twilioConfig.number, // A number you bought from Twilio and can use for outbound communication
      url: 'http://300dcd5b.ngrok.com/call/?number=' + event.number + '&eventName=' + encodeURIComponent(event.eventName) // A URL that produces an XML document (TwiML) with instructions for the call - replace the host with your own ngrok Forwarding URL

    }, function(err, responseData) {
      if (err) {
        console.log(err);
      } else {
        // executed when the call has been initiated.
        console.log(responseData.from);
      }
    });
    done();
  });
  agenda.create(task).schedule(event.eventTime).unique({
    'id': event.id
  }).save();
}

Open up app.js in the project root again and include the scheduled task definitions we’ve just created. These should go between our initialization and the Event object:

// Initialization
var app = express(),
  calendar = google.calendar('v3'),
  oAuthClient = new google.auth.OAuth2(config.googleConfig.clientID, config.googleConfig.clientSecret, config.googleConfig.redirectURL),
  tokenUtils = require('./token-utils')(oAuthClient);

// Schedule setup
var jobSchedule = require('./job-schedule.js'),
  smsJob = require('./jobs/send-sms.js'),
  callJob = require('./jobs/start-call.js');
// Event object
var CalendarEvent = function(id, description, location, startTime) {
  this.id = id;
  this.eventName = description;
  this.number = location;
  this.eventTime = Date.parse(startTime);
  this.smsTime = Date.parse(startTime).addMinutes(-5);
};

In our server initialization, we will now add another recurring scheduled task that fetches any calendar events for us every 10 minutes. That way, you don’t need to run the script manually every time.

var server = app.listen(config.port, function() {
  var host = server.address().address;
  var port = server.address().port;

  // make sure user is authenticated but check for existing tokens first
  getConnection(function(err, db) {
    var collection = db.collection("tokens");
    collection.findOne({}, function(err, tokens) {
      if (tokens) {
        tokenUtils.authenticateWithDB();
      }
    });
  });

  jobSchedule.agenda.define('fetch events', function(job, done) {
    fetchAndSchedule();
    done();
  });

  jobSchedule.agenda.every('10 minutes', 'fetch events');

  // Initialize the task scheduler
  jobSchedule.agenda.start();

  console.log('Listening at http://%s:%s', host, port);
});

The job definition, the agenda.every call, and the call to agenda.start are the parts we have just added for the recurring task.

You will notice we are making a call to a function called fetchAndSchedule. This function is responsible for fetching all our calendar events every time it’s called by the scheduled task. Under the CalendarEvent function place the following code:

function fetchAndSchedule() {
  // Set obj variables
  var id, eventName, number, start;

  // Call google to fetch events for today on our calendar
  calendar.events.list({
    calendarId: config.googleConfig.calendarId,
    maxResults: 20,
    timeMax: Date.parse('tomorrow').addSeconds(-1).toISOString(), // any entries until the end of today
    updatedMin: new Date().clearTime().toISOString(), // that have been created today
    auth: oAuthClient
  }, function(err, events) {
    if (err) {
      console.log('Error fetching events');
      console.log(err);
    } else {
      // Send our JSON response back to the browser
      console.log('Successfully fetched events');

      for (var i = 0; i < events.items.length; i++) {
        // populate CalendarEvent object with the event info
        event = new CalendarEvent(events.items[i].id, events.items[i].summary, events.items[i].location, events.items[i].start.dateTime);

        // Filter results 
        // ones with telephone numbers in them 
        // that are happening in the future (current time < event time)
        if (event.number.match(/\+[0-9 ]+/) && (Date.compare(Date.today().setTimeToNow(), Date.parse(event.eventTime)) == -1)) {

          // SMS Job
          smsJob.send(jobSchedule.agenda, event, 'sms#1', config.ownNumber);

          // Call Job
          callJob.call(jobSchedule.agenda, event, "call#1", config.ownNumber);
        }
      }

    }
  });
}

Once today’s events return we loop through them to get their information.

We then filter these events with a regular expression to keep only the ones that have a telephone number in the location field, and we also check that the event is happening after the current time. Each match then populates a new CalendarEvent object with the information from the event.
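
As a quick sanity check, here is how a couple of hypothetical location values behave against that regular expression:

// Hypothetical location values run through the same filter used above
var pattern = /\+[0-9 ]+/;
console.log(pattern.test('+44 7700 900123'));   // true  -> event gets scheduled
console.log(pattern.test('Conference Room B')); // false -> event is skipped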

Lastly, we set up the scheduled tasks by passing event information through to each of our task definitions. We do that once for SMS and once for a call.

Making the call

Now is a good time to create our /call route. Still in app.js, add the following just below the fetchAndSchedule function.

app.post('/call', function(req, res) {
  var number = req.query.number;
  var eventName = req.query.eventName;
  var resp = new twilio.TwimlResponse();
  resp.say('Your meeting ' + eventName + ' is starting.', {
    voice: 'alice',
    language: 'en-gb'
  }).dial(number);

  res.writeHead(200, {
    'Content-Type': 'text/xml'
  });
  res.end(resp.toString());
});

The above code plays a message and then dials out to the person you’re supposed to have a call with.
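
For a hypothetical meeting named “Sprint planning” with the location +442071234567, the route would respond with TwiML along these lines:

<Response>
  <Say voice="alice" language="en-gb">Your meeting Sprint planning is starting.</Say>
  <Dial>+442071234567</Dial>
</Response>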

Running our app

At this point our application is ready, and we should be able to run it and authenticate.

Make sure you have a calendar entry for today and that the entry has a telephone number in the location field.

Start your Node server again.

$ node app.js

Fire up your browser to http://127.0.0.1:2002 and you should see the word “authenticated” show up on the screen. If you have calendar entries for today, the script will pick them up and add them to the database.

If you would like to confirm whether any new entries have been added as scheduled tasks, you can query MongoDB from a new terminal screen:

$ mongo twilio-pa
> db.agendaJobs.find()

All the entries you have, with names and timestamps, will now be listed. Sure enough, you will notice an SMS message coming in 5 minutes prior to the meeting starting, and then a call from your Twilio number at the time of the meeting.
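
The exact document shape depends on the Agenda version, but with the bookkeeping fields trimmed away, each job will look something like this (values invented for illustration):

{ "name" : "sms#1", "nextRunAt" : ISODate("2015-02-23T13:55:00Z") }
{ "name" : "call#1", "nextRunAt" : ISODate("2015-02-23T14:00:00Z") }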

You can do the same to query your token information by running:

> db.tokens.find()

Your Twilio PA is alive!

And there you go: within only a few steps we have built our own Twilio PA, which will handle our outgoing calls for us and guarantee we will never again be eleven minutes late to that call.

Ideally, I’d like my meeting reminders to be more forceful and also trigger a siren at the time the SMS is sent. That would for sure get me out of the room and ready for that call. How about adding an interface that lets you see all future scheduled tasks? Well, maybe that’s an idea for Twilio PA v2.0.

I would love to hear about the wonderful ways you can make your Twilio PA do more work for you and let you focus on other work that can’t be easily automated.

I look forward to seeing your hacks. Reach out to me on Twitter @marcos_placona, by email at marcos@twilio.com, or as MarcosPlacona on G+ to tell me more about it.

Building your own personal assistant with Twilio and Google Calendar



Do you have an efficient process in place to connect callers with the right agent right away? If the answer is no, then it’s about time. But why does speed matter? Leads contacted within the first hour of a query are 60 times more likely to convert than leads contacted 24 hours or more after the query (HBR). To establish this speed within your business, it’s not only important to make sure mobile callers are qualified and sales-ready (though this does help reduce time wasted by sales), it’s also critical to have a seamless process in place to respond to their calls – especially when 61% of them are calling because they are ready to convert.

A growing driver of these calls is mobile search, which is sending quality calls to your business: 3 out of 4 calls that come from mobile search last longer than 30 seconds, which means these are engaged callers looking for information and looking to make a purchase. That’s why, in a mobile world, if you want to boost your call conversions, it’s more important than ever to make sure these quality callers are quickly vetted and then quickly routed to the most appropriate sales rep to be converted.

Click-to-Call is a Driving Factor

I touched on it above, but mobile search is generating a lot of calls to businesses. In fact, BIA/Kelsey reports that mobile search will drive 75 billion inbound calls to businesses by 2018. And a key factor in driving those calls? Click-to-call. 70% of mobile searchers have reported they click to call directly from search results to connect with a business (Google). They make these calls because they want answers fast.

So the first step in quickly connecting with callers is simple enough: give them a number to call (especially as more and more people access these numbers on their smartphones and are thus already one step closer to calling). As straightforward as this is, some businesses still don’t realize that nearly half of mobile searchers report they would explore other brands if they couldn’t easily find a phone number to contact a business (Google). That said, the businesses that do provide mobile searchers with easy access to their phone numbers may still encounter challenges in converting these callers. That’s where step two comes in.

Speed Matters: Tools to Help Drive Business

After the call comes in, there are tools to help marketers quickly understand the intent of the caller, and if the caller is sales-ready, there are tools to properly route them. Using a hosted IVR (interactive voice response) system gives you the power to qualify phone leads automatically by asking callers questions relevant to specific marketing goals or campaigns. An IVR helps weed out misdials, solicitations, and basic inquiries, and helps callers with things such as account information lookup and call routing. Callers that score high enough can then be passed to a sales rep to engage in conversation. Your sales team will love you: they no longer have to waste time with mundane inquiries or wrong numbers, and can instead spend their time converting the callers who are ready.

An IVR can help determine the initial route of an inbound call, but intelligent call routing can help you set parameters for how calls are directed. Whether you want top-performing agents to get the most calls, your calls to be routed based on certain hours of the day, or to be routed based on the marketing source that drove the call, it can all be done with call routing technology. Marketers using call routing help connect their sales team quickly with callers ready to convert.

Align Marketing and Sales for Max Speed

Strong communication between marketing and sales is the third step in making sure marketing quickly connects sales-ready callers and sales is prepared to take these calls. Communication ensures your sales organization is always aware of current marketing campaigns. On the other side, marketing needs to confer with sales to understand what a quality lead looks like if they're going to drive more of them, and then qualify and route them appropriately, with speed.

When one of the main reasons mobile searchers call a business from their smartphones is to get an answer fast, you need to be ready to talk to them. And the tools for making this happen are at your doorstep. Learn more about how marketers are making the most of their inbound calls in our free white paper, Marketer's Guide to Qualifying, Routing, and Scoring Inbound Calls to Optimize Sales.

The post Callers Are Ready to Convert, Make Sure You're Ready to Talk appeared first on DialogTech.



February 21, 2015

Here’s a little exercise. When you get off of an important call, try to remember the last thing you said. Your words have a funny way of escaping you. VoiceBase makes it incredibly easy to programmatically pull up call recordings, powered by Twilio, to find the information you need. In this blog series they’ll walk you through integrating and managing VoiceBase in your app.

The following is a guest post from VoiceBase

Your Twilio app is up and running, thousands of connections are being made daily, and your library of recordings is growing fast. For your customers, knowing what is spoken inside these recordings is absolutely critical for monitoring agent compliance, keyword spotting, trend detection, and much more. By taking a comprehensive look through this content, VoiceBase provides tools for Twilio users to identify and distinguish between hot leads, opportunities, complaints, and more. Wouldn't it be great to deliver this valuable information straight into your customers' laps? The VoiceBase solution adds value for any Twilio user, whether you are a contact center, CRM platform, sales organization, or conferencing solution.

In the past, the only way to gather spoken information was to have a human physically listen to every recording, write descriptions, and manually tag calls or fill out scorecards. This is an expensive and time-consuming process. Fortunately, we have a solution: Twilio's simple integration with VoiceBase allows users to incorporate speech analytics into multiple layers of a business at disruptively low cost.

VoiceBase fully indexes calls, making all of your content searchable and discoverable within minutes of being uploaded. Users can search within the timeline of a recording to play back the precise part of any audio or video file. VoiceBase also provides a Web SDK that makes it easy to set up a slick end-user display.

VoiceBase's powerful API is a cloud-based solution with zero upfront costs. Simple API calls allow VoiceBase to grab .wav files from Twilio recording URLs. VoiceBase uses parallel processing to quickly index these individual recordings. The machine transcripts are then made available, with time-stamped keywords and search capabilities, within minutes. All of this data is accessible as JSON responses.

We caught up with GreenRope CEO Lars Helgeson to talk about his own experiences with Twilio's VoiceBase integration.

In this post we will focus on how to efficiently use the VoiceBase API with Twilio to index a recording. Check out the VoiceBase landing page for more info on retrieving keywords, topics, and transcripts, as well as building end-user displays using VoiceBase and Twilio.

Indexing Content with TwiML

In part 1 we will specify an action URL in the TwiML so we can be notified when a recording is complete. We will then write some code to receive the end-of-call event and make an API call to VoiceBase to upload and index the recording. We simply pass along the URL of the recording we get from the Twilio event as a parameter to the VoiceBase API. We also specify another callback in the VoiceBase API so we can be notified when the indexing is complete.

Often in your TwiML you use the dial verb, and Twilio lets you record the call with the record flag and specify the action to take when the call completes. For example, you can do something like this to record a conference call:

<Response>
    <!-- The action URL is illustrative; point it at your own handler,
         like the indexConferenceRecording.php script below. -->
    <Dial record="true" action="http://example.com/indexConferenceRecording.php">
        <Conference>LoveTwilio</Conference>
    </Dial>
</Response>

In the callback, you will have access to a link to the recording. The callback is where you can initiate the indexing request to VoiceBase, which we will show below. Alternatively, you can find links to Twilio recordings through the Twilio API: https://www.twilio.com/docs/API/rest/recording
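For instance, here is a minimal sketch of pulling recording links with the twilio-php helper library. The credentials are placeholders, and the .wav URL construction is an assumption based on the REST docs linked above, so verify the field names there:

# Sketch: list recordings via the Twilio REST API (twilio-php helper)
require_once 'Services/Twilio.php';

$accountSid = 'ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX';   # placeholder
$authToken = 'your_auth_token';                        # placeholder
$client = new Services_Twilio($accountSid, $authToken);

# Each recording resource carries the CallSid it belongs to and a URI
# we can turn into a .wav link for indexing
foreach ($client->account->recordings as $recording) {
    $wavUrl = "https://api.twilio.com".str_replace(".json", "", $recording->uri).".wav";
    echo $recording->call_sid." => ".$wavUrl."\n";
}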

Let’s take a look at the code in indexConferenceRecording.php, which initiates the request to VoiceBase that eventually creates a transcript of your recording, indexes it so it is searchable and also extracts keywords from the recording.

First we will extract some important values that Twilio provides as POST form data.

# Collect metadata from Twilio
      $rec = $_POST['RecordingUrl'];              # Location (for indexing, use .wav only)
      $recStream = $_POST['RecordingUrl'].".mp3"; # Location (for streaming)

      $callSid = $_POST['CallSid'];
      $from = $_POST['From'];                     # Can be used as Title in VoiceBase
      $recSid = $_POST['RecordingSid'];
      $externalId = $recSid;                      # Map recordings by setting
                                                  # VoiceBase externalId = RecordingSid

Then we set some of the values we will want to include in the API call to the VoiceBase upload method that will initiate the processing on VoiceBase servers.

# set some of the VoiceBase API call parameters
      $action = "uploadMedia";
      $transcriptType = "machine";    # Human transcription is also available
      $public = "false";
      $APIkey = "";                   # your VoiceBase API key
      $password = "";                 # your VoiceBase password
      $version = "1.1";
      $machineReadyCallBack = "http://www.example.com";

      # and the VoiceBase API base URL
      $baseurl = "https://API.VoiceBase.com/services";

Next we do the HTTP GET (POST is also supported).

# uploadMedia allows an HTTP GET to the VoiceBase API to upload the audio file
      #   note the link to the mediaUrl (the audio file that was recorded)
      #   using the RecordingSid as the unique externalId
      #   using the From field as the title
         $url = $baseurl;
         $url .= "?version=".$version;
         $url .= "&APIkey=".$APIkey;
         $url .= "&password=".$password;
         $url .= "&machineReadyCallBack=".$machineReadyCallBack;
         $url .= "&action=".$action;
         $url .= "&mediaURL=".$rec;
         $url .= "&sourceURL=".$recStream;
         $url .= "&transcriptType=".$transcriptType;
         $url .= "&public=".$public;
         $url .= "&externalId=".$externalId;
         $url .= "&searchHitUrl=".makeSearchHitUrl($externalId, $appUrl); # see part 4
         $url .= "&title=".$from;
         $ch = curl_init();
         curl_setopt($ch, CURLOPT_VERBOSE, 1);
         curl_setopt($ch, CURLOPT_TIMEOUT, 300);
         curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 300);
         curl_setopt($ch, CURLOPT_URL, $url);
         curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
         $response = curl_exec($ch);
         curl_close($ch);

And finally we get the JSON response from the API call.

# get the response from VoiceBase
        #   check the status
        #   get the unique mediaId for the file
        #   get the fileUrl (can be used for links to a branded page)
        $jsonDoc = json_decode($response, true);
        $statusMessage = $jsonDoc['statusMessage'];
        $fileUrl = $jsonDoc['fileUrl'];
        $mediaId = $jsonDoc['mediaId'];

A few words about the VoiceBase API's upload method above. Every method has four required parameters in common:

  • APIkey: your VoiceBase API key
  • password: your VoiceBase account password
  • version: the current version is 1.1
  • action: in this case, uploadMedia

For uploadMedia, we are passing in the following additional parameters:

  • mediaUrl: Location of the recording, used at upload time for building the transcript, keyword extraction, and indexing. Use the .wav file.
  • sourceUrl: Use this setting if you plan on storing the .mp3 yourself. VoiceBase will use this URL for streaming and will not transcode the audio to .mp4. If not set, VoiceBase will transcode the audio into a standard .mp4 format and store it for later streaming. If you plan on using Twilio to store and stream (using the .mp3), we recommend that you set this.
  • transcriptType: The transcript can be either machine or human generated; it is machine by default. Human transcripts are more costly but can be useful for important calls.
  • externalId: This ID is external from the point of view of VoiceBase. It allows you to use your own unique IDs to reference the recording, its transcript, and other metadata. It is your responsibility to make sure the externalIds are unique.
  • searchHitUrl: By default the search method returns a link to the recording's player page on the VoiceBase server. You can use this field to override that URL, which is useful if you plan on using the VoiceBase Web SDK with its search and player components. More about this in a future post or at VoiceBase.com.
  • machineReadyCallBack: Similar to the Twilio action parameter, this is a way to get notified when the machine transcript and associated processing have been completed.

The response from VoiceBase includes a mediaId, which you can store and use to reference recordings, transcripts, and keywords later. The externalId allows you to use your own unique ID and can be used in place of the mediaId in any VoiceBase call. In this example we set the VoiceBase externalId to Twilio's RecordingSid, so we do not have to store the mediaId or maintain a mapping between the Twilio and VoiceBase IDs. You can do the same, use another unique identifier (it is your responsibility to ensure identifiers are unique), or just use the VoiceBase mediaIds if you like.
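For example, once the machineReadyCallBack fires, you can pull the transcript back without ever storing the mediaId. This is a minimal sketch that assumes the 1.1 API exposes a getTranscript action keyed by the same externalId; check the VoiceBase docs for the exact action and response field names.

      # Sketch: fetch the machine transcript by externalId
      #   (action name and response fields assumed; verify against the docs)
      $url = $baseurl;
      $url .= "?version=".$version;
      $url .= "&APIkey=".$APIkey;
      $url .= "&password=".$password;
      $url .= "&action=getTranscript";
      $url .= "&externalId=".$externalId;
      $ch = curl_init();
      curl_setopt($ch, CURLOPT_URL, $url);
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
      $response = curl_exec($ch);
      curl_close($ch);
      $jsonDoc = json_decode($response, true);
      $transcript = $jsonDoc['transcript'];    # field name assumed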
     

Transcripts, Search, and Keyword Spotting With Twilio And VoiceBase



February 20, 2015

There are millions of devices hooked up to the Internet and generating data, from refrigerators monitoring their contents to webcams tracking intruders. These connected devices, collectively referred to as the "Internet of Things," often create work that needs to be done by humans. How do we keep track of all that work? How do we assign that work to an appropriately skilled and available worker? How do we know when it's done?

To solve problems like these, Twilio just launched TaskRouter, a service to distribute work that needs to be done to workers who can do it.

In a recent post, Greg Baugues showed how to build an Arduino Yún powered photobooth. In this post we’re going to take the images generated from the photobooth and create a task that only humans can do — coming up with funny captions.

How Does It Work?

The photobooth tutorial culminated in uploading photos to Dropbox. This tutorial picks up where it left off.

  • When a new file is uploaded to Dropbox, a task gets created
  • Volunteers text into the system and are added as workers
  • Workers are set to "Idle," indicating that they are available to do work
  • TaskRouter matches a picture that needs captioning with an idle worker, and our app sends them the photo via MMS
  • The worker is marked as "Busy" until they reply with a caption
  • Once a caption is received, the worker is changed back to "Idle" and waits for their next assignment

Getting Started

We’re going to build our distributed photo captioning app using Ruby and Sinatra.

You don't need a fully functional Arduino-powered photobooth to follow along with this post. You do, however, need to set up a Dropbox app. You can find those instructions in the "Arduinos and Dropbox" section of the photobooth tutorial. Once you have your Dropbox app set up, you can mimic the photobooth by manually uploading files to Dropbox.

In addition to a Dropbox app, you’ll need:

  • a free Twilio account
  • an MMS-enabled Twilio number (only available on US and Canadian numbers)
  • ngrok, free IP tunneling software

A Note on Ngrok

Your development machine is most likely hiding behind a router and lacks a publicly accessible IP address. However, both Dropbox and Twilio need to make HTTP requests to this app, so you’ll need to create a tunnel from the public internet to your local server.

Our favorite way to do this is ngrok. If you haven't already, download ngrok and move it to your home directory. Also sign up for a free ngrok account and follow the instructions on how to set up custom domains. This way you won't have to change your webhook URLs on the Twilio and Dropbox dashboards every time you restart ngrok. If you'd like to learn more about ngrok, check out Kevin Whinnery's great tutorial on ngrok.

Once you’ve got ngrok installed, start it up with a custom subdomain (your name perhaps) and point it at port 9292:

./ngrok -subdomain=example 9292

Leave ngrok open in a terminal window for the rest of this tutorial.

Setting Up TaskRouter

The best place to start building a TaskRouter application is the TaskRouter dashboard. TaskRouter applications are scoped to a Workspace. Let’s make one:

  • Click Create Workspace
  • Give your workspace a friendly name of “Photobooth Captions”
  • Leave Template set to None
  • Click Save

Once your workspace is created, change the Default Activity from “Offline” to “Idle.” We’ll discuss why in a few minutes but the short answer is that we want our workers ready to receive work as soon as they enter the system.

[Screenshot: creating the "Photobooth Captions" workspace with Default Activity set to "Idle"]

Next we need to create a Task Queue. Click Task Queues at the top of the dashboard, then click Create Task Queue and configure it with the following properties:

[Screenshot: the Create Task Queue form, with a Friendly Name of "Caption Queue" and Target Workers requiring the "caption" skill]

The key property here is Target Workers, which states that workers eligible to complete Tasks in this Task Queue must have a skill of "caption". For the purposes of this tutorial we'll only have one kind of worker, but Task Queues really start to shine when you have a multitude of task types requiring a multitude of skillsets. Once you've completed this tutorial you'll be in a great position to create something more complex.

Once you’ve configured your Task Queue, click Save.

Next we need to create a Workflow which will route Tasks into our Task Queue. Click Workflows at the top of the dashboard, then click Create Workflow. Configure it with these properties:

  • Friendly Name: Photobooth Workflow
  • Assignment Callback: http://example.ngrok.com/assignment (replace example with your ngrok subdomain)
  • Leave Fallback Assignment Callback URL and Task Reservation Timeout blank
  • Leave “Caption Queue” as the default task queue
  • Click Save

[Screenshot: the Create Workflow form with the settings above]

By default, the Workflow will place Tasks into the Caption Queue because of the Default Task Queue setting. If we wanted to be more explicit about this to prepare for a more robust system, we could create a Filter in the Routing Configuration section. Let’s configure a filter for our captioning tasks. Click the Add Filter button and set the following properties:

  • Filter Label: Caption Filter
  • Expression: required_skill = "caption"
  • Target Task Queue: Caption Queue
  • Priority: 1

With this filter in place, a Task with required_skill set to “caption” in its attributes will be routed to the Caption Queue. Your Routing Configuration should look like this:

[Screenshot: the Routing Configuration with the Caption Filter]

Click Save to complete the Workflow creation. This is all the setup we need to do on our dashboard. Let’s get into the code.

Creating the Sinatra App

Our application will be built in Ruby using Sinatra. Let’s create a directory for our app and a few of the files we’ll need to get started:

mkdir photobooth-taskrouter
cd photobooth-taskrouter
touch app.rb Gemfile config.ru

Then edit the Gemfile:

source "https://rubygems.org"

ruby '2.2.0'

gem 'sinatra'
gem 'thin'
gem 'twilio-ruby', '~> 3.15.1'
gem 'dropbox-sdk'
gem 'envyable'

Install bundler if you haven’t already:

gem install bundler

Then install your gems:

bundle install

Along with the gems for Dropbox and Twilio, we've included Envyable, a gem to manage environment variables. (For more on this, read Phil Nash's excellent post on managing environment variables in Ruby.)

To use Envyable we need to create a config directory and an env.yml file:

mkdir config
touch config/env.yml

Open env.yml and add the following YAML:

development: 
  TWILIO_ACCOUNT_SID: 
  TWILIO_AUTH_TOKEN: 
  TWILIO_WORKSPACE_SID: 
  TWILIO_WORKFLOW_SID:
  TWILIO_PHONE_NUMBER:
  DROPBOX_ACCESS_TOKEN:

Copy in the values for your Twilio Account SID and Auth Token (you can find these by clicking "Show credentials" in the top right of the Workspace dashboard). Then copy in the Workspace SID and Workflow SID, which you can find on their respective pages. Finally, paste in one of your MMS-enabled Twilio phone numbers.

For the Dropbox token, visit the Dropbox App Console and click into the app you created earlier. In the OAuth 2 section, click Generate under “Generated access token” and copy the resulting token into the YAML.

With our env.yml in place, our environment variables will now be accessible via ENV['NAME_OF_VARIABLE'].

Now let's start on our Sinatra app. Open app.rb, paste in these lines, and save the file.

require 'dropbox_sdk'
require 'json'

Envyable.load('./config/env.yml', 'development')

Finally, edit config.ru, which tells our server what to do when we run rackup.

require 'bundler'
Bundler.require

require './app.rb'
run Sinatra::Application

If you want to test that this works so far, see if you can start your server without getting any errors:

bundle exec rackup

Configuring the Dropbox Webhook

Our application will utilize Dropbox’s webhook to receive notifications when files are uploaded. This allows us to create Tasks for our app as the photos come in. Before we use the webhook though, we have to verify our app with Dropbox.

For the verification process, Dropbox will make a GET request to our webhook with a challenge parameter. Our HTTP response must simply include the text of that challenge.

Create a new route in app.rb to handle this request:

get '/dropbox' do
  params[:challenge]
end

Restart the app.  Then visit the Dropbox App Console and add http://<your_ngrok_subdomain>.ngrok.com/dropbox to the Webhook URIs field.

dropbox-webhook.png

Once you click Add, Dropbox will verify our domain. We could delete the GET /dropbox route after that, but if we ever change domains (e.g., deploy to production) we'll need to verify again, so we might as well leave it there.

If you’d like to learn more about this authorization process or about interacting with the Dropbox API in general, check out their well-written API docs.

Using the Dropbox API’s /delta Endpoint

When a photo is uploaded, Dropbox will make a POST request to our /dropbox webhook (this is in addition to the GET /dropbox we used to verify our app). The information provided in the POST request is pretty limited: it contains an array of user IDs that have new file changes in the Dropbox app we configured, but no additional information about the actual file upload itself.

Since the webhook request doesn't tell us which files were added, we need to request a list of recent Dropbox changes via their delta method. To make sure we're not getting duplicate changes, we need to save a "cursor" returned to us by Dropbox and pass it back in on subsequent delta calls. For the sake of moving fast in this tutorial, we're going to do this the wrong way and store the cursor in a global variable. Please use a proper datastore in a real app.

Below Envyable.load('./config/env.yml', 'development') in app.rb, add this:

$cursor = nil

Now we’re going to create a post /dropbox route which will:

  • create a REST client using our Dropbox access token
  • retrieve a list of changes to our Dropbox folder since our last cursor
  • save the new cursor

Then it will iterate through each file in the list of changes and:

  • grab its filename
  • request a publicly accessible URL from Dropbox using our REST client
  • create a new task in TaskRouter (we'll leave a placeholder for this for the moment)

And finally, it will return a 200; otherwise Dropbox will keep retrying the request over and over again.

Here’s the code:

post '/dropbox' do
  dropbox_client = DropboxClient.new(ENV['DROPBOX_ACCESS_TOKEN'])
  changes = dropbox_client.delta($cursor)
  $cursor = changes['cursor']

  changes['entries'].each do |entry|
    file_name = entry[0]
    media_hash = dropbox_client.media(file_name)
    image_url = media_hash['url']
    # create task 
  end
  200
end

If you’d like to learn more about what we’ve done here, check out Dropbox’s core API docs.

Create a Task with TaskRouter

We’re going to be doing a lot of work with Twilio, so let’s create a twilio_helpers.rb file to keep our code clean:

touch twilio_helpers.rb

Now let’s create a helper method in twilio_helpers.rb to instantiate a TaskRouter REST API client:

def task_router_client
  Twilio::REST::TaskRouterClient.new ENV['TWILIO_ACCOUNT_SID'], ENV['TWILIO_AUTH_TOKEN'], ENV['TWILIO_WORKSPACE_SID']
end

Then let’s require the twilio helpers in our app.rb:

require './twilio_helpers.rb'

We’ll use our client helper to create a new task with the image_url as an attribute. Replace the # create task comment with:

attributes = { image_url: image_url, required_skill: 'caption' }.to_json
task_router_client.tasks.create(
  attributes: attributes,
  workflow_sid: ENV['TWILIO_WORKFLOW_SID']
)

Let's test what we've built so far. Restart your Sinatra server and upload a file to Dropbox, either via your photobooth or by simply dragging an image into the folder of your Dropbox app.

Once the file uploads, the webhook will fire and hit the /dropbox route, which will then create a task in TaskRouter. Open the TaskRouter dashboard and go to the Tasks page. You should see a new Task. If you click on the Task, you’ll see the image_url.
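If you'd rather verify from code than from the dashboard, a quick sketch like this should also work. It assumes the client's tasks resource supports list the way the workers resource does (we use workers.list later), so treat it as a convenience, not gospel:

# Sketch: print every task in the workspace with its attributes;
# the task we just created should show its image_url.
# Run this inside the app (or after Envyable.load) so ENV is populated.
require './twilio_helpers.rb'

task_router_client.tasks.list.each do |task|
  puts "#{task.sid}: #{task.attributes}"
end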

Create a Worker in TaskRouter

Now that we can create tasks, we need workers who can complete those tasks.

Workers will join the system by texting our Twilio number, so we need to configure the webhook that Twilio will use when it receives a new text. Open the numbers list on your Twilio dashboard, click on the phone number you entered earlier in env.yml, and configure the number by setting the Messaging Request URL to http://<your_ngrok_subdomain>.ngrok.com/message.

[Screenshot: the phone number configuration page with the Messaging Request URL set]

For the sake of this post, we’re going to concern ourselves with two scenarios when someone texts in:

  1. They’re texting in for the first time. We’ll create a worker using their phone number as a friendly name.
  2. They’re providing a caption. We’ll save it, then set the worker as ready to receive more tasks.

Before we create the route to handle the webhook, let’s create two more helper methods in twilio_helpers.rb.

First, a method to check if a worker exists for a given phone number:

def worker_exists?(phone_number)
  task_router_client.workers.list(friendly_name: phone_number).size > 0
end

Second, a method to simplify the generation of TwiML responses which we’ll use to reply to people when they text into the system:

def twiml_response(body)
  content_type 'text/xml'
  Twilio::TwiML::Response.new do |r|
    r.Message body
  end.to_xml
end

Now let's head back to app.rb and create a /message endpoint.

For now we'll focus on the first use case: someone texts in, and a worker with that number does not yet exist.

In that case we will create a new worker with:

  • an attribute defining their phone number
  • the friendly name set to their phone number to make them easier to identify

We’ll also reply with a text message telling them to hold tight and wait for their photo.

post '/message' do
  phone_number = params['From']

  if worker_exists?(phone_number)
    # we'll come back to this soon
  else
    attributes = {phone_number: phone_number, skill: 'caption'}.to_json
    task_router_client.workers.create(
      attributes: attributes,
      friendly_name: phone_number
    )

    twiml_response("Hold tight! We'll be sending you photos to caption as soon as they become available.")
  end
end

Let’s test this out. Restart your server, then send a text message to your Twilio number. Once you get a reply, check the workers tab on the TaskRouter dashboard. You should see a new worker that has your phone number as a friendly name.

Something else is afoot though. If you look at your server logs, you'll see that TaskRouter tried to make an HTTP request to /assignment, but we haven't defined that route yet. Let's do that now.

Assign Work

When we have a task in the system and an idle worker who’s qualified to do the work, TaskRouter starts to perform its magic. When TaskRouter sees a potential match, it makes an HTTP request to the assignment webhook defined on our Workflow dashboard. This HTTP request sends information about the task and asks if you’d like the worker to accept it.

In that request, we have everything we need to send a worker their task: the image_url and worker’s phone number.

Let’s create a route that will:

  • respond to a POST request at /assignment
  • extract the phone_number from worker_attributes
  • extract the image_url from task_attributes
  • store the image_url for later
  • call a twilio_helper named send_photo which we will define in just a second
  • return JSON instructions to TaskRouter to tell it that the worker accepts the task

We also need to store data about our image URLs and captions. We're not going to tell you how to do that in this post; feel free to use MySQL, DynamoDB, or the storage engine of your choice. For the purposes of this post, we'll just leave a comment where you would save the pieces of data you want to persist (though a throwaway in-memory sketch follows below).
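If you want something runnable while prototyping, here is a throwaway in-memory store in the same spirit as the $cursor global above. The helper names are ours, and like $cursor this is emphatically not for production:

# Throwaway in-memory store, keyed by worker phone number.
# Prototyping only; swap in a real datastore for anything serious.
$captions = Hash.new { |hash, key| hash[key] = [] }

# Remember which image was sent to which worker.
def save_assignment(phone_number, image_url)
  $captions[phone_number] << { image_url: image_url, caption: nil }
end

# Attach a caption to the oldest uncaptioned image for this worker.
def save_caption(phone_number, caption)
  entry = $captions[phone_number].find { |e| e[:caption].nil? }
  entry[:caption] = caption if entry
end

You could call save_assignment where the comment sits in the /assignment route below, and save_caption later in the /message route.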

Create your route to handle assignment:

post '/assignment' do
  worker_attributes = JSON.parse(params['WorkerAttributes'])
  phone_number = worker_attributes['phone_number']

  task_attributes = JSON.parse(params['TaskAttributes'])
  image_url = task_attributes['image_url']

  # save the image_url and phone_number pair

  send_photo(phone_number, image_url)

  content_type :json
  {instruction: 'accept'}.to_json
end

The first four lines extract the image_url and phone_number from the parameters sent to us by TaskRouter. Then we send a photo using a Twilio helper we’ll define in a second. The last two lines return JSON telling TaskRouter that our worker accepts the task.

Now let's create our send_photo method in twilio_helpers.rb:

def send_photo(phone_number, image_url)
  twilio_client = Twilio::REST::Client.new ENV['TWILIO_ACCOUNT_SID'], ENV['TWILIO_AUTH_TOKEN']
  twilio_client.messages.create(
    from: ENV['TWILIO_PHONE_NUMBER'],
    to: phone_number,
    body: "What's the funniest caption you can come up with?",
    media_url: image_url
  )
end

We’ve got everything in place to assign a task to a worker and to send them an image to caption. Let’s try it out.

We need your phone number to be a “new” worker for this to work, so go back into your dashboard, click on the worker you created previously, toggle their Activity to “Offline” and then delete it.

Then restart your server to load the changes we just made. After that, send a text to your Twilio number again, and our app will respond with the introductory text like last time.

Now TaskRouter makes a POST request to your newly created /assignment route. You can watch this happen by visiting localhost:4040 (ngrok's request inspector) in a browser. That route will fire off the MMS with the Dropbox picture to your phone.

Responding to the Worker’s Message

We’ve created a worker in the ‘Idle’ state and they’ve just received their first captioning task. What happens when they text back? After we’ve saved the worker’s caption, we’ll transition them back to the ‘Idle’ Activity so that they will receive more photos to caption.

Let's create a Twilio helper to retrieve a worker based on their phone number. In twilio_helpers.rb:

def get_worker(phone_number)
  task_router_client.workers.list(friendly_name: phone_number).first
end

Let’s create another helper to retrieve the SID for the ‘Idle’ activity:

def get_activity_sid(friendly_name)
  task_router_client.activities.list(friendly_name: friendly_name).first.sid
end

And then we’ll use those two methods to change the worker’s activity back to “Idle”:

def update_worker_activity(phone_number, activity_friendly_name)
  worker = get_worker(phone_number)
  activity_sid = get_activity_sid(activity_friendly_name)
  worker.update(activity: activity_sid)
end

With these helpers in place we can respond to the existing worker’s incoming message. In the /message endpoint of app.rb let’s add the following code to the if worker_exists? block that we said we’d come back to:

if worker_exists?(phone_number)
  caption = params['Body']
  # save the caption in the same place you stored the image_url and phone_number pair
  update_worker_activity(phone_number, 'Idle')
  twiml_response("Thanks for the caption! We'll send you more photos as they become available.")
else
  # ...

That's all the code for this app. Restart your server to reload the changes, then send a hilarious text to your Twilio number. You'll get a thank-you back, and your activity in TaskRouter will be switched back to Idle. If there are more tasks waiting in the Task Queue, TaskRouter will make another POST request to the /assignment route and your phone will light up with another picture. You'll respond with a funny caption, and so it goes.

Next Steps

Let’s recap. In this post we:

  • Created a new workspace, workflow and task queue in TaskRouter
  • Created tasks in response to a Dropbox upload
  • Allowed volunteers to sign up as workers via text message
  • Assigned photos to be captioned by workers
  • Updated a worker’s status once the task was completed

TaskRouter has given us a solid foundation for our application, one that is easily extendable to an even more diverse set of tasks across workers with varying skills. Consider extending what we built in this post with the following suggestions:

  • Create specialized captioners (for instance, some people might be better at captioning wedding photobooth pictures while others are better at office party photos).
  • Create a second Task Queue for people who can rate captions (the Internet is great at quantity but we might want some quality control); see the sketch after this list.
  • Build a website to show off these hilarious captions.
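The second suggestion, for example, mostly comes down to creating tasks with a different required_skill. Here is a sketch, assuming you have set up a "rate" skill, filter, and Task Queue the same way we set up the caption ones:

# Sketch: after a caption comes in, queue it up for rating.
# Assumes a second filter matching required_skill = "rate" and workers
# whose attributes include that skill; image_url and caption are in scope.
attributes = { image_url: image_url, caption: caption, required_skill: 'rate' }.to_json
task_router_client.tasks.create(
  attributes: attributes,
  workflow_sid: ENV['TWILIO_WORKFLOW_SID']
)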

I’m really excited to see what you build with TaskRouter. If you have any questions while you’re building your application, please reach out to me via email at brent@twilio.com or hit me up on Twitter @brentschooley.

TaskRouter and the Internet of Things



Last night DialogTech partied Mardi Gras style, hosting 150 people at our headquarters in Chicago's Loop for Built In Chicago's monthly Built In Brews event. Built In Brews is hosted by a different high-growth tech company in Chicago every month, and we were excited to offer a space where the best and brightest in the Windy City's tech scene could come to schmooze and booze.

And it's not only the talent that's local; so are the beer and the food. The crowd couldn't have been happier with awesome brews from Half Acre and Metropolitan and tasty jambalaya, etouffee, and king cake from Le Divas of Chicago. The only thing better than the food and drinks was the conversation, in which employees, founders, and tech-minded entrepreneurs were able to share ideas, insights, and a few laughs too.

We're happy to be part of such a vibrant, engaged local culture that knows how to collaborate as well as compete. Big thanks to Built In Chicago for making it happen, and we can't wait until next month's event.


The post Mardi Gras Comes to the Windy City: Beads + Brews with DialogTech and Built In Chicago appeared first on DialogTech.

Bookmark this post:
Ma.gnolia DiggIt! Del.icio.us Blinklist Yahoo Furl Technorati Simpy Spurl Reddit Google


Bands are like startups. When you first start out, you're flying by the seat of your pants into the unknown with nothing but drive and a plan you hope will work out. Jeff Lawson wrote out the plan for Twilio on the back of a pizza box in 2007. They Might Be Giants founders John Flansburgh and John Linnell wrote the plan for their band as they went along. Thirty years later, they're still making new music, and new fans. We could not be more excited to have They Might Be Giants playing on the last day of Signal, May 20th.

Grab your tickets here.

When you call They Might Be Giants, they'll pick up. Throughout the band's long and storied career, they've kept communicating with their fans a priority. In the days of the tape-powered answering machine (R.I.P.), TMBG recorded songs to a tape, popped it into their answering machine, and took out ads in local newspapers like The Village Voice advertising a phone number and their new invention: Dial-A-Song.

For 23 years, TMBG published new content to Dial-A-Song. When the technology wave left the answering machine washed up on the shore, TMBG launched a Dial-A-Song website and put out a Dial-A-Song record.

John and John will perform a set on the final day of Signal, May 20th. Stay tuned to the blog for updates and more information.

They Might Be Giants Take The Stage At Signal’s After Party



Last updated: March 05, 2015 08:01 PM All times are UTC.
Powered by: Planet

Speech Connection and Logos are Trademark of Nu Echo Inc. Copyright 2006-2008.