December 22, 2014

Worse than a lump of coal in your stocking is the terrifying experience of discovering your clothes in a wet pile on the floor at your local laundromat or the shared laundry room in your dorm or apartment. Many of us have been there – maybe we forgot to set an alarm or simply fell asleep. Instructables user MakerBee knew there had to be a way to solve this problem and built an Arduino+Twilio powered SMS laundry alert system. We couldn’t think of a better hack to highlight day 11 of #12HacksOfChristmas.

Head over to Instructables for full instructions on building your own version of this hack. And until tomorrow, Happy Hacking and Happy Holidays to all!

12 Hacks of Christmas – Day 11: SMS Laundry Alerts



It’s that time of year when the holiday classics are on every channel. Movies like It’s A Wonderful Life are filled with inspiration for how to improve your relationships with your family… and with yourself. But when it comes to improving relationships with your customers and prospects, there’s another film that you can learn something from: The Grinch.

First: Don’t Be A Grinch

It’s really just that simple. Don’t be a grinch. No one likes a grinch—not even the Grinch likes a grinch. If that grinchiness invades your company culture, it can cause high turnover, which is damaging, not to mention expensive. And when it invades your company culture, it’s only a matter of time until it invades your relationships with your customers. In the movie, the Grinch’s attitude earned him a pretty nasty reputation. You don’t want that for your business.

Use Your Imagination

Max wasn’t a reindeer. He was a dog. But when the Grinch tied an antler to his head, Max became the very thing the Grinch needed to accomplish his goals. Look around you: what is the thing in your business that, given a little bit of imaginative finagling, can become the solution to your problems? Very few growing businesses have unlimited resources, so we have to get creative with what we have to make what we need. What is your reindeer in disguise?

A Strong Community Makes a Huge Difference

Even without all their presents and feasts, the Whos still found something to sing about. When you are surrounded by a community that can weather the storm, dig in, and move forward, it makes a huge difference in the strength and longevity of your business. Keep this in mind when you’re hiring. Who you let into your community can have a big impact on your future.

Last: Be A Grinch

I know, this contradicts the first point. But while you don’t want to be a grinch, it’s not so bad to be the Grinch. After all, in the end, he changes his point of view and embraces the thing he worked so hard to avoid. This brings to mind a lot of marketers’ perspectives on digital and mobile trends. You can try as hard as you like to ignore them or say they’re a passing trend, but the fact of the matter is, the businesses that embrace these things will be happier (and more successful) in the end. So be the Grinch, but learn the Grinch’s lessons a lot sooner.

There you have it. The Grinch’s lessons don’t just resonate during the holidays: take his grinchy lessons and keep them in mind all year long.

The post What the Grinch Can Teach You about Running a Business appeared first on Ifbyphone.



December 21, 2014

It turns out to be very difficult to work Mortal Kombat into a Christmas reference, so I’m not going to try (slay bells ring?). Instead I will remind you that even though ’tis the season to be jolly, you may nonetheless find yourself frustrated, annoyed or just plain exhausted. In those times there is one sure way to work off the holiday jitters and get back into the spirit… by virtually beating up a family member.

Earlier this year Matthew Kaufer extended mk.js to create a version of Mortal Kombat that two players can play over SMS. No need for anything other than a warm beverage, a phone and the internet.
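Matthew’s actual implementation lives in his repo, but to get a feel for the plumbing, here’s a hedged Ruby sketch of the SMS side: the command names, move symbols and helper below are mine, not his, and the real game maps inputs to mk.js key events.

```ruby
# Toy mapping from SMS commands to fighter moves.
# These command names and move symbols are illustrative only --
# see Matthew Kaufer's repo for the real mk.js integration.
MOVES = {
  'punch' => :high_punch,
  'kick'  => :high_kick,
  'left'  => :walk_left,
  'right' => :walk_right,
  'block' => :block
}.freeze

# Normalize an incoming SMS body ("  KICK! " => :high_kick);
# unknown commands return nil so the game can ignore them.
def move_for(sms_body)
  MOVES[sms_body.to_s.strip.downcase.gsub(/[^a-z]/, '')]
end
```

Wired into a Twilio SMS webhook, each inbound message body would be run through a function like this and the resulting move fed to that player’s fighter.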

To play it yourself, go to this live demo. To see the code and tweak it go here. Until tomorrow, Happy Holidays and Happy Hacking to all!

12 Hacks of Christmas – Day 10: Mortal Kombat



December 20, 2014

Frosty the congressman was a busy, hard-to-get-a-hold-of soul… well, that certainly doesn’t roll off the tongue, but at least it’s topical. Welcome to another installment of the #12HacksOfChristmas. I hope by now you are listening to festive tunes, sipping festive beverages, and relaxing next to a warm fireplace with your pet ferret snuggled on your shoulders. But if you, like me, live in a crazy state where ferrets are illegal, then you should probably call your congressman immediately!

Fortunately for all of us, taskforce, a very bright group of hackers, created Call Congress, an open-source tool for implementing very easy click-to-call solutions for campaigns aimed at Capitol Hill. The tool can easily be deployed to any website and automatically connects voters with their local congressperson based on zip code. So rise up, put your Python-typing gloves on, and make it illegal to shoot whales from a moving vehicle.
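Call Congress itself is a Python app, but the core flow is easy to sketch in a few lines: look up a representative by zip code, then create an outbound call through Twilio’s REST API that bridges the voter to that office. Everything below is a hedged Ruby illustration — the two-entry lookup table, phone numbers and TwiML URL are placeholders, not Call Congress’s real data or code.

```ruby
require 'net/http'
require 'uri'

# Tiny stand-in for a real zip -> representative database.
# Names and numbers are made up for illustration.
REPS = {
  '10001' => { name: 'Rep. Example A', phone: '+12025550101' },
  '94103' => { name: 'Rep. Example B', phone: '+12025550102' }
}.freeze

def representative_for(zip)
  REPS[zip]
end

# Bridge a voter to their representative by creating an outbound
# call via Twilio's REST API (credentials are placeholders).
def connect_voter(voter_phone, zip, account_sid, auth_token)
  rep = representative_for(zip) or return nil
  uri = URI("https://api.twilio.com/2010-04-01/Accounts/#{account_sid}/Calls.json")
  req = Net::HTTP::Post.new(uri)
  req.basic_auth(account_sid, auth_token)
  req.set_form_data(
    'From' => voter_phone,
    'To'   => rep[:phone],
    # TwiML at this (hypothetical) URL tells Twilio what to do
    # once the call connects.
    'Url'  => 'https://example.com/bridge-call'
  )
  Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
end
```

The click-to-call widget on a campaign site would collect the voter’s phone number and zip, then hit a server endpoint that calls something like `connect_voter`.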

Until tomorrow, remember, Happy Hacking and Happy Holidays to all.

 

12 Hacks of Christmas – Day 9: Call Congress



December 19, 2014

Today, we’re happy to announce that v1.2 of our Twilio Client Android SDK is now available to all. This release brings our Global Low Latency (GLL) technology to the Android SDK, rounding out 2014 for Twilio Client nicely: GLL is now available on each of Twilio Client’s supported platforms — Web, iOS and now Android.

To take advantage of the new features, all you need to do is make sure you’re using the 1.2 version of the Android SDK, available here. There are no code changes required.

Latency is the enemy of real-time communications, causing users on a call to talk over each other. GLL makes dynamic media routing decisions to ensure the best calling experience possible with minimal latency, no matter where in the world the endpoints are. Your media is always routed via the Twilio data center closest to each user – any one of our seven data centers spread across five continents. We pick the best path for your media in order to minimize both the geographic distance and the number of hops over the public internet. This means wherever your users are, and wherever they’re calling, they get the best possible call experience.
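To make the routing idea concrete, here’s a toy Ruby sketch of the core decision — pick the data center nearest to a user by great-circle distance. This is emphatically not Twilio’s actual GLL algorithm (which also weighs network hops and live conditions), and the data center names and coordinates below are made up for illustration.

```ruby
# Toy illustration of latency-aware routing: choose the data
# center closest to a user by great-circle (haversine) distance.
# Names and coordinates are invented, not real Twilio sites.
DATA_CENTERS = {
  'us-east'  => [39.0, -77.5],
  'eu-west'  => [53.3, -6.3],
  'ap-south' => [1.35, 103.8]
}.freeze

EARTH_RADIUS_KM = 6371.0

# Great-circle distance between two lat/lon points, in km.
def haversine_km(lat1, lon1, lat2, lon2)
  to_rad = ->(d) { d * Math::PI / 180 }
  dlat = to_rad.(lat2 - lat1)
  dlon = to_rad.(lon2 - lon1)
  a = Math.sin(dlat / 2)**2 +
      Math.cos(to_rad.(lat1)) * Math.cos(to_rad.(lat2)) * Math.sin(dlon / 2)**2
  2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a))
end

# Name of the data center closest to the given point.
def nearest_data_center(lat, lon)
  DATA_CENTERS.min_by { |_name, (dc_lat, dc_lon)|
    haversine_km(lat, lon, dc_lat, dc_lon)
  }.first
end
```

A user in New York would land on the east-coast site, a Londoner on the Irish one, and so on — each call leg anchored to its nearest entry point.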

Finally, Android 1.2 brings a couple of other enhancements: you can tweak the verbosity of debug logging, and you can use the SDK with Android Studio. You can also use the new Mobile Quick Start, which we released earlier this month, with Android 1.2.

We invest in building our GLL infrastructure so that you can build a global service without worrying about the global communications infrastructure. Twilio customers are using Twilio Client to build services that offer a different type of communication, where the interaction is embedded right within the context of what you’re working in. We’d love to hear what you’re building.

Twilio Client Android Update Completes Automated Low Latency Routing Availability Across All Twilio Client Platforms



My wife and I took our newborn daughter out shopping on Michigan Ave last weekend. Walking out of Macy’s, I saw a bunch of carolers holding sheets of paper with song lyrics. I thought, “Who has printers and copiers anymore?”

But most everyone’s got a cellphone these days. Maybe they don’t want to install an app. Maybe they don’t have a data plan. But most folks these days can get a text.

And so I spent last night building the Christmas Carol Lyric Line. If you’d like to see how it works, text “carols” to 907.312.1412.

I wrote this app using Ruby and Sinatra. It’s quite straightforward, and below I’ll show you how I did it. Before we get to that, though, I want to get something off my chest.

The finished code I’m about to walk you through is fairly clean and modular but, as you can see from the GitHub history, it didn’t start off that way. Lots of stuff hardcoded, lots of long methods and messy code. It feels a bit disingenuous to write a post saying, “Here’s how I would build this thing” when, in reality, the way I would build this thing is to throw together a bunch of ugly code until I got it working, then refactor to make it easier to maintain, easier to extend and easier to talk about.

With that said…

The idea was that someone could text in, get a menu of songs, reply back with their menu selection and get the lyrics for that song.

I decided to build a Sinatra app. I hadn’t done a lot of Sinatra before starting at Twilio, but it turns out Sinatra’s pretty useful for building quick Twilio apps, since it’s so stripped down — you can often fit all your code in a single file.

Well, almost one file. In my new project directory, I did create a Gemfile which included:

source 'https://rubygems.org'
ruby '2.1.2'
gem 'rack'
gem 'sinatra'

Then I created twilio.rb, where the rest of the code in this post goes. I started it off by requiring Sinatra:

require 'sinatra'

Getting the Lyrics

Next step was to get some song lyrics. That was pretty easy as there are 3.5MM results on Google for “Christmas carol lyrics.” I created a lyrics directory and dropped the songs in there, one text file per song.

On my first iteration I hardcoded the song menu. But then I wanted to add some songs and found it to be a pain in the ass to change the hardcoded menu every time. So I figured, why not just automatically create the menu based off the files in the directory?

I renamed the files in such a way that it’d be easy to extract a pretty name — e.g., Away-In-A-Manger.txt instead of away-in-a-manger.txt. Then I wrote a method to do the transformation:

def song_name(filename)
  filename.gsub(/\-/, ' ').gsub('.txt', '')
end

Then I needed a method that would get all the filenames in the lyrics directory, delete the system filenames that start with a period, and sort the rest in alphabetical order:

def filenames
  names = Dir.entries('lyrics')
  names.delete_if { |name| name[0] == '.' }
  names.sort
end

Then I used those two methods to create the text for a menu:

def menu_text
  string = "Welcome to the Christmas Carol Lyric Line! What song number would you like lyrics for?"
  filenames.each_with_index do |filename, i| 
    string << "\n#{i + 1} #{song_name(filename)}"
  end
  string
end

Once I had my menu to display when someone texted in, I needed to deal with their input. The first step was to figure out the list of valid menu options ('1', '2', etc.):

def valid_options
  (1..filenames.size).collect { |i| i.to_s }
end

Then I needed to convert a valid menu option into a filename. For instance, the user replies with '1' and I return the first filename from the directory array. You’ll notice that I subtracted one from whatever the user texted — muggles don’t do zero indexing:

def filename(input)
  filenames[input.to_i - 1]
end

Once I had a filename, I needed to pull the lyrics:

def lyrics(filename)
  File.read("lyrics/#{filename}")
end

Alright! So now I have my output for each state: the user texted in a valid menu option (I return the lyrics) or the user texted in something else (I return the menu). Now I just needed to deal with the whole texting thing.

Texting the North Pole

First I did a quick Google search to find the area code for the North Pole. Very important.

Then I opened up my Twilio dashboard and searched for a number in the 907 area code. (Fun fact: the North Pole and Anchorage share an area code.)

And wouldn’t you know it, not only do we have North Pole area codes but we’ve got North Pole area code numbers that include 312 — the area code for Chicago! It really is Christmas.

I snagged that third phone number as it seemed easiest to remember. Then I set the messaging webhook to http://baugues.ngrok.com/message. (If you haven’t used ngrok before, you should. It lets you expose your local development server to the Internet, which makes development cycles a lot faster. Kevin Whinnery wrote about it here.)

When someone texts my Twilio number, Twilio makes a POST request to that URL and passes along information in the params, just as if someone had submitted a form. The bit that I’m interested in, the message body, is found in params['Body'].

I wrote a route to handle the POST request:

post '/message' do
  if valid_options.include?(params['Body'])
    message = lyrics(filename(params['Body']))
  else
    message = menu_text
  end

  content_type 'text/xml'
  twiml(message)
end

The if statement checks if the body of the incoming text is included in the list of valid menu options. If it is, then we use the lyrics from the filename pulled from the files array. If it’s not, then we generate the menu.

Then I needed to convert that message into the kind of specialized XML Twilio expects, called TwiML.

def twiml(message)
  %Q{
<Response>
  <Message>
    #{message}
    \n--\nPowered by Twilio.com
  </Message>
</Response> }
end

And that’s it. I fired up the Sinatra app by running ruby twilio.rb from the terminal. To make sure things were working right, I tested my webhook with Postman:

Then I sent off a text to my shiny new Twilio number, chose a song, and got my lyrics.

Putting a bow on it

Once I knew it was working, I needed to get it off my development machine. There are a hundred ways to deploy these days. Recently I’ve been deploying to my VPS on Digital Ocean using Dokku, which gives you Heroku-like deployments via Git. Here’s a great tutorial from Digital Ocean on using Ruby and Dokku together.

One thing you’ll read about in that tutorial is the necessity of a config.ru to tell your server what exactly it’s supposed to do with the repository you just gave it. In my case, I want it to include twilio.rb and start Sinatra. So I created a new config.ru and wrote:

require './twilio.rb'
run Sinatra::Application

Since I already had a git repo going, I just had to make sure everything was up to date, create a new Dokku instance and push it real good.

git remote add dokku dokku@apps.baugues.com:carols
git add -A
git commit -m 'Deploying'
git push dokku master

Last step was to update the webhook in my Twilio dashboard to the production URL.

And that’s it!

I’m trying to get the word out about the Christmas Carol Lyric Line so that people know about it as they head out to spread holiday cheer this weekend. Would you be up for retweeting this tweet or simply sharing the phone number: 907.312.1412?

Merry Christmas,
Greg B.
gb@twilio.com

 

12 Hacks of Christmas – Day 8: Christmas Carol Lyrics by Text Message



Sao Paulo, Brazil 08:30am
It was a sunny Friday morning. The Macromedia meetup on the previous night had been a blast. I was super pleased with the feedback I had on my presentation and was already planning the next meeting. Suddenly the telephone rang…

The HR lady on the other side of the line was asking me to pop into the boardroom. She finished the call awkwardly by asking me to bring all my belongings with me. Eek!

Long story short, I get into the room and the company’s director announces that I, along with everyone else in the company, am being made redundant.

Boom, just like that!

It’s time for a change

A wise “man” once said, “the fear of loss is a path to the Dark Side”.

This was bad news, man! But it turned out to be a great new challenge for me, because this marks the point when I decided to move from my native country of Brazil to England and start a whole new chapter in my life.

This would obviously mean leaving the two user groups I for so long co-organised and took pride in, but would also take me on a wicked new journey.

The road was bumpy — I once shared a four-bedroom house in North London with 13 other people who ate pasta and bread as their “balanced diet”. I called it “surviving”.

Because I didn’t really know that many people in the UK, I figured the best way for me to keep myself in check and motivated during hard times was to try and meet people who were in the same situation as I was.

It had already worked for me in the past when I first got involved with communities so I knew this was a tried and tested recipe.

That’s when I got involved with many tech communities around London, some of which had people who, like me, had just emigrated from another country to the UK. These communities gave me the opportunity to meet as many people as I could, and even to call some of them “friends” at a later stage.

Tech communities have undoubtedly played a key part throughout my journey, but to give you a better understanding of this, I should tell you a bit about how I got to where I am now.

Where it all started

My name is Marcos Placona. I live in Bedfordshire, UK, and am your friendly evangelist for the UK and the rest of Europe, along with my other developer evangelist colleagues.

I’ve now spent exactly half of my life as a software engineer, having worked in agencies, software houses, private companies and also in the public sector at the EU quarters in Brussels.

My journey started way back in the ’90s, when I developed a passion for computers — precisely when my dad gave me a Commodore 64. I remember getting it all wired up and being presented with a blue screen with white characters saying I had 64k of RAM available.

Seeing that blue screen for the first time got me so excited I skipped dinner that night and started to learn BASIC instead. From that moment I knew life was no longer going to be the same… My mum just thought my new “typewriter” was broken.

10 PRINT "Hello World!"
20 GOTO 10

After many late nights and early mornings before school, one gets pretty tired of having to manually add line numbers before every statement, so it was time to move on to something a bit more graphical. This introduced me to Visual Basic 3.0, which became my gateway to the web when I found ASP Classic.

<%
    Class Level
        Public status
    End Class

    Set programmingSkills = New Level
    programmingSkills.status = "Hello Future!"
    Response.Write(programmingSkills.status)
%>

From this point onwards I began building ASP websites as a freelance developer and soon after learned JavaScript amongst other web technologies.

Even individuals stand together

The freelance world got me to meet a lot of people in the industry, most of whom had one thing in common regardless of their technology “flavour” or their individuality — they were all very active in technical communities.

When you start learning something new, you very quickly realise your resourcefulness runs dry, and you need interaction with other people to keep your brain juices flowing with ideas. So I started to look for local communities that shared the same interests as I did.

Getting involved in those communities (Java and Microsoft at the time) was breathtaking and also got me motivated to start writing articles for web portals and my own personal blog.

At this point I had a true appreciation for the value communities brought me by increasing my network and providing me with a lot of skills, both technical and, most importantly, interpersonal.

Being able to collaborate with people was pretty awesome and really helped me grow. It also made me realise I had to branch out and start focusing on different communities in order to expand my reach.

Because it’s all just syntax

The web is all about using the right tool for the job, and understanding this challenged me to learn more contemporary languages such as Ruby, Dart and Node.js.

such syntax
	plz console.loge with 'such syntax, much productivity'
wow
plz syntax

For every new language I learned, there was always a community there to support me, and what once started off with only a couple of communities has now turned into about a dozen.

While I worked professionally with Sun and Microsoft products for years, communities have always played a major part in my learning process.

All because community matters

Communities are like an addiction — a good one. Once you get involved, you just want more and more of it in your life. No matter where you are, they will always be there for you, and you will always find someone awesome who’ll be willing to offer you some insight in one way or another.

I have now been involved with many different communities in London and the surrounding areas, and like to think I have both helped and been helped by people who, just like me, are very enthusiastic about sharing their experience and life lessons with others.

Working as a Developer Evangelist puts me in a very special position, as I will be able to focus even more on my community work as well as working directly with developers to help them build killer applications using Twilio or other platforms.

If you are in the UK or anywhere in Europe and would like me to help you use Twilio, feel free to reach out and I will be happy to talk to you about how to make your applications even better.

Can’t wait to see what you will build with Twilio.

Introducing Twilio Developer Evangelist Marcos Placona



The following guest post was authored by Melissa Ridgley, Marketing Coordinator at Survey Analytics.

Survey Analytics is an enterprise research platform that gives companies the ability to listen, improve, grow, and collect feedback from a multichannel world. However, when we were faced with the proposition of providing phone-based attitudinal surveys for one of our clients, DonorVoice, we knew that we needed something more. We set out to find a dependable provider of Interactive Voice Response (IVR) that was easy to set up, low in cost, and had the technical bells and whistles necessary to collect feedback over the phone. So when we came across Ifbyphone, partnering with them was a no-brainer.

DonorVoice helps non-profits to retain their individual donors. They needed a way to capture the voice of the customer and bridge this gap before it is too late. They do this by combining advanced analytics that merge attitudinal data with transactional data and bringing the voice of the customer into a “voice of the donor” offering that is all their own.

DonorVoice found that the majority of feedback from donors comes through direct mail. And since response rates for the direct mail channel (commercial or non-profit) are extremely low, there is a huge gap in the contact that occurs between a given charity and its supporters. DonorVoice approached us in an attempt to help close the loop on donor feedback and listen to the voice of the customer… literally.

Ifbyphone gave us the capability to provide DonorVoice with voice-based surveys in a way that was very easy to set up and get running. Once a call with DonorVoice ends, the caller is transferred to a number hosted by Ifbyphone, where they are taken directly into the survey. The survey questions can be multiple choice, true or false, or open-ended. Ifbyphone’s powerful platform can transcribe these open-ended voice responses into text-based analytics that are pushed back into the Survey Analytics database via a post-call action API.

The results have been amazing. Thanks to Ifbyphone, DonorVoice is now able to collect IVR feedback in the direct mail channel and all the response data is stored seamlessly in their central Survey Analytics account with the rest of their survey feedback. DonorVoice found that the key to improvement starts with listening. When you give your customers (or in this case, donors) a chance to leave honest and open feedback, you can close the communication loop in order to move forward in the right direction.

You can read the rest of the case study here.

The post Guest Blog: The Key to Improving Relationships with Customers Is Listening appeared first on Ifbyphone.

        



December 18, 2014

My mom’s favorite Christmas song is “I’ll be home for Christmas” which means I’m in major trouble since I won’t be making the trip to KC to visit her this year. I imagine many of you, like me, may not be able to spend the holidays with your families. I wanted to build a hack that helps people feel connected to the ones they love no matter how many miles separate them. My creation is a hardware device that will play a clip of a Christmas song whenever someone sends a text to a Twilio number. I can share this number with my family, they can text whenever they want and I’ll have a festive way of knowing they’re thinking of me. Check out this video to see it in action:

You can deploy the web part of this application to Heroku with one click by clicking here. But you’ll have a lot more fun if you read this whole post, I promise. Let me show you how to build your very own version of this hack.

Our Tools

I opted to build this hack using littleBits, which are a collection of electronic modules that snap together with magnets. They’re super easy to get started with and approved for ages 8+. That means this hack is great for the whole family!

We’ll be building our device using the following littleBits modules:

  • cloudBit – This will allow us to interact with our littleBits via the cloud using the cloudBit API.
  • USB Power – Power for our device is a necessity!
  • mp3 Player – This module will let us load mp3s onto an SD card and play them back on demand.
  • Synth Speaker – This module will actually let us hear the mp3s as they play.

For our web server, we’ll be creating a very basic Laravel application that receives data from Twilio. Make sure you have PHP and Composer installed.

Just a Little Bit of Hardware

As I mentioned, littleBits are great pieces of hardware for hackers of all ages. They magnetically snap together so there’s no wiring required. If you want, you can even use them without writing any code – but this is a Twilio blog post, so you know we’ll be writing some code.

First let’s get our littleBits connected to the internet. For this task we’ll be using our cloudBit. If you haven’t set up a cloudBit before, it’s super easy. Connect your cloudBit to the USB power module and then head over to littlebits.cc/cloudsetup. The instructions there will walk you through getting connected to your local network and making sure everything works as expected.

Now that we’re connected to the internet, we can start adding modules to play our holiday mp3s. But before we connect our mp3 module let’s add some Christmas tunes to the SD card. On the bottom of your mp3 module you’ll find a memory card:

Put this card in an SD card adapter:

Now you can access this data with a memory card reader. Some computers have a card reader built in, which makes it even easier. On this card you’ll see a handful of sample audio files. The mp3 module loops through any mp3s stored on the card. Since we’re only using one song, we can delete the sample mp3s and just include the Christmas song we want to play. I felt there was nothing more fitting for my hack than Death Cab for Cutie’s rendition of “Christmas (Baby Please Come Home)”.

After you’ve loaded up some tunes, connect your mp3 player module to your cloudBit. To make sure our hack works continuously, toggle the switch on the mp3 player to loop:

Then connect your synth speaker to your mp3 player. The end result should look like this:

Before we go any further we need to test out our device to make sure it’s working as expected. Let’s use the littleBits cloud control to send a signal to our device and make sure a song starts playing. In the cloud control select your cloudBit and make sure you’re on the send screen. Here you’ll see a giant purple button:

Clicking this button will send a signal to your cloudBit and you’ll hear beautiful music. Victory! I hope you’re getting as excited as I am about this hack.

Thinking of You

Now that we’ve built and tested our littleBits device, we need to build a web server to communicate between Twilio and the device. In this case we’ll be building a small Laravel app, but if you’d prefer, you should be able to apply these concepts to the littleBits API in the language or framework of your choice.

Let’s start a new Laravel application:

laravel new christmas-cheer
cd christmas-cheer

Now we can add a new /message route in our app/routes.php file:

Route::post('/message', function()
{
});

We need to make a request to the littleBits API, but first let’s set up some environment variables we’ll be using in our code. Create a new file called .env.php:

<?php
return array(
'CLOUDBIT_ID' => 'YOUR_CLOUDBIT_DEVICE_ID',
'CLOUDBIT_TOKEN' => 'YOUR_CLOUDBIT_TOKEN'
);

Here we’ll be storing our cloudBit device ID and access token. You can find this data in the settings section of your littleBits cloud control dashboard:

In case we run into any issues in our code during this process, let’s turn on debugging. Open app/config/app.php and look for the debug value. Make sure its value is set to true:

'debug' => true,

Now we can add the code to our /message route that will make a request to the littleBits API to trigger our device to play music:

Route::post('/message', function()
{
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "https://api-http.littlebitscloud.cc/devices/".$_ENV['CLOUDBIT_ID']."/output");
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, "duration_ms=30000");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Authorization: Bearer '. $_ENV['CLOUDBIT_TOKEN']));
$response = curl_exec($ch);
curl_close($ch);
});

We’re making a request to the /output endpoint which will output voltage to our device (triggering our mp3 to play). We want to have the music play for 30 seconds so we’re setting the duration_ms to 30,000. We’re using the token we got earlier to properly authenticate our request.

Start up your server so we can test this out:

php artisan serve

Now we can make a curl request to make sure everything is working as we expect:

curl -X POST http://localhost:8000/message

Hearing beautiful music? Perfect. Let’s add the code we need to make this work with Twilio. When an SMS message comes into our application, we want to respond to whoever sent it. We can use TwiML to accomplish this. Let’s add the code to our /message route that will take care of this for us:

Route::post('/message', function()
{
  $ch = curl_init();
  curl_setopt($ch, CURLOPT_URL, "https://api-http.littlebitscloud.cc/devices/".$_ENV['CLOUDBIT_ID']."/output");
  curl_setopt($ch, CURLOPT_POST, 1);
  curl_setopt($ch, CURLOPT_POSTFIELDS, "duration_ms=30000");
  curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
  curl_setopt($ch, CURLOPT_HTTPHEADER, array('Authorization: Bearer '. $_ENV['CLOUDBIT_TOKEN']));
  $response = curl_exec($ch);
  curl_close($ch);

  $resp = Response::make('<Response><Message>Merry Christmas</Message></Response>', 200);
  $resp->header('Content-Type', 'text/xml');
  return $resp;

});

If you’re running your Laravel application locally, you need to make sure Twilio can access your web server at a publicly accessible URL. ngrok is a great tool for accomplishing this. You can find a great tutorial for setting up ngrok here. Or you can deploy this code to Heroku by clicking here.

Once you have your server exposed to the outside world, head into your Twilio Dashboard and set the Messaging Request URL for your number to point at your /message route.

Now test it out by sending an SMS to your Twilio phone number and hearing the beautiful music come from your littleBits device! Once you’re sure it’s working, share the number with your friends and family so you can know when they’re thinking of you. You can find the full code for this project on GitHub.

I (won’t) Be Home For Christmas

Nothing beats being home for Christmas, but when we can’t make the trip, technology can help us keep a unique connection to those we love. What improvements could you make to this hack to make it even better? Maybe you could forward the messages people send to your phone, so you hear the song and get the message. Are you doing any hacking for the holidays? I’d love to see it: @rickyrobinett or ricky@twilio.com

12 Hacks of Christmas – Day 7: Spread Christmas Cheer With littleBits, Laravel and Twilio



Oh, the joys of automating nagging tasks. For example: that pipe is leaking, and you’ve known for a while that you need to find a plumber. But the existential dread of sifting through plumbers’ Yelp reviews might make you put off the task. TalkLocal has a solution. Submit a short request with what you need done, where you need it done, and when you need it done, and they’ll find you a vetted plumber (or any other household service provider) quickly. Blam: your to-do list just got automated.

TalkLocal uses Twilio to automate outbound calls to vendors, from plumbers to dentists. After TalkLocal does the legwork, you can pick which dentist, plumber, or electrician you’d like to go with.

After notching $2.6 million in funding, TalkLocal is full steam ahead, gathering new customers on both the consumer and vendor sides. “A lot of consumers want to talk to vendors before they make a decision of who they work with,” Manpreet Singh, Co-Founder and President of TalkLocal, told The Washington Post. “There are very few services where you’ll be happy with whoever just shows up.”

TalkLocal uses Twilio’s call tracking data to log which vendors’ calls convert into business, and charges them a fee for that connection. To make things even easier for customers, TalkLocal plans to roll out another way to log service requests without having to type: Singh plans to ship a feature that will let customers record their voice, transcribe that speech to text, and submit the request to vendors. His team is also working on a new suite of mobile and Android apps.

Learn more about TalkLocal here.

TalkLocal Automates Your Search For Local Vendors Using Twilio



Here’s a number for you: 70% of mobile professionals will conduct their work on personal devices by 2017. BYOD, or bring your own device, is the practice of having an organization’s employees use their own computers, smartphones, or other devices for work purposes, and it’s part of the foundation of our new, 21st-century mobile office.

The concept of a mobile office has in part emerged in response to the ideas, experiences, and practices that stem from a new generation – Gen Y. Their growing level of technology sophistication and outlook on the workplace have helped shape the mobile office and expanded BYOD. And now, as sales practices continue to evolve, we’ve seen greater success of inside sales, and businesses have shifted to support that success by implementing cost-saving practices such as BYOD.

The flexibility of this new generation and the evolution of business structures to include remote workers are bringing BYOD to the forefront of the workplace.

A Flexible Workplace for a Flexible Generation

Generation Y has helped spur the transformation of the workplace from a place where they are paid, to a place where they can build personal experiences. And because they identify themselves as digital natives, they easily adapt to new technologies, making them especially suited for the flexibility and personalized experience that a mobile office offers.

This generation already represents 25% of the workforce in the United States, with that figure expected to reach 50% by 2020. It’s a growing, mobile-first generation driving a mobile-first work style: 1 in 4 Americans already use a smartphone or tablet as their primary work device, and this number will only grow as Gen Y continues to expand in the workforce.

The Physical Office Is Becoming an Abstract Concept

As more and more companies begin to adopt an office structure that is mobile, establishing it as a business strategy (versus a perk of employment) is key. This will help build respect toward the structure and minimize exploitation of the privilege. And while it can help promote a positive workplace culture, it also has practical cost-saving implications for your business. Take sales structure: inside sales is growing 300% faster than field sales and helps businesses reduce cost-of-sales by 40%-90% (while revenue stays the same or grows). BYOD is becoming a key factor in the growth of inside sales.

The Growing Presence of BYOD

As mentioned above, businesses are seeing a shift in the structure of their sales teams, from field sales to inside sales – and now to BYOD. With growing personal technology adoption, and as more companies acknowledge its advantages, we’ll continue to see an increase in BYOD:

  • 80% of U.S. businesses with contact centers currently employ work-at-home agents.
  • 34% of U.S. businesses with contact centers expanded their work-at-home agent pool in 2014.
  • 66% of companies will adopt a BYOD solution by 2017.
  • Work-at-home agents in the U.S. are expected to grow from 100,000 to 160,000 by 2017.

How Can You Support Your BYOD Strategy?

Offering flexibility and giving your employees the option to use their own technology to conduct business is all well and good, but how can you effectively manage inbound calls, know they are quickly reaching the right sales and support agents, and ensure service levels are being met? Here are just a few pieces essential to making BYOD work harder for your business:

  • Call Routing: Ensure your important calls never go unanswered and are directed to the correct agent based on caller location, time of day, or marketing source – and to any type of phone (cell phone, work phone, home phone, Skype).
  • IVR (Interactive Voice Response): Automatically filter and qualify callers upfront to ensure high-quality, sales-ready leads are being routed to the best agent to convert.
  • Automated Call Distributor: Operate seamlessly as one team regardless of location or device by using a call management system to easily monitor call activity, provide agents with caller information, and integrate with your CRM or help desk systems.
  • Call Queuing: Customize the call queue experience (audio, maximum time callers have to wait, number of callers in queue, etc.) to provide a seamless experience for callers – you can even let them request a callback without losing their place in line, automatically triggering a call to them when it’s their turn.

To learn more about BYOD and the above tips, check out our on-demand webinar, “7 Ways Inside Sales Teams Optimize Performance in a BYOD World.”

The post BYOD: A Building Block of the New Mobile Office appeared first on Ifbyphone.



December 17, 2014

Today for the #12HacksOfChristmas we’re featuring a game that was built at the Hack the North hackathon in Waterloo, ON. Tejas Manohar, Jocelyn Lee and Cheng Wanli Peng won the Twilio prize, and our hearts, with their Twilio Plays 2048 project. If you can’t tell from the name, this hack lets you play the highly addictive game 2048 with the whole world using text messages.

Not only is this a clever hack – the team behind it posted the code on GitHub. It’s like a digital stocking stuffer where you can see how they built the game and even fork the code to build your own version!

Until tomorrow, Happy Hacking and Happy Holidays to all!

12 Hacks of Christmas – Day 6: Twilio Plays 2048



Buying a new pair of jeans should be simple. Bring them to the cash register, pay, and leave. When you hand the person behind the counter cash, and they ask you “what’s a good phone number to reach you?” something’s gone awry. There’s tremendous pressure on the everyday consumer to provide private personal information in the most basic transactions.

The folks at Abine believe that everyone should feel comfortable giving out phone numbers, credit card details, and email addresses to anyone they please — as long as they’re masked. Abine lets users mask all their personal digital information so they can get through day to day shopping and business without putting their personal information or finances at risk.

“We wanted Abine to be a one stop shop for privacy needs,” says Thomas Lessard, Quality Operations Officer at Abine. In 2012, Abine rolled out their masked phone numbers service which assigns each user a Twilio phone number that forwards to their real phone number. Abine users can give that number out without the fear of jeopardizing their personal number. If you get calls from a number you don’t recognize and don’t want to follow up using your real number, Abine also offers masked SMS numbers you can use to follow up with little risk.

It’s rare that a brick-and-mortar store asks for your phone number, credit card, address, email, and name all at once, but that’s practically the standard for e-commerce. Abine lets users auto-fill information forms with masked credit cards, emails, etc., so they don’t have to worry about remembering their masked phone numbers. However, that doesn’t mean that Abine customers won’t call to check in.

Lessard recalls a phone call he got a year ago when an Abine user was asked for her phone number at Kohl’s. She called Lessard to essentially ask “I can do this right? I can just give them this number and not my real number?” The answer was so simple she had to check. Lessard simply replied, “Yes.”

Abine is working on new privacy tools and launching new features rapidly. Learn more about them here.

Abine Puts Privacy Back In Your Hands With Twilio SMS and Voice



Today Ifbyphone announced the availability of the Winter 2014 Release of our Voice360™ solution. The release includes exciting new functionality that enables marketing, sales, and support teams to optimize the value of the sales and support calls that play such a critical role in the customer journey.

Highlights from the Winter 2014 release include:

  • A completely redesigned SourceTrak interface that modernizes and simplifies call analytics
  • Support for cross-domain/sub-domain and multi-location call tracking
  • Big enhancements to our Salesforce app, including new support for Salesforce Service Cloud
  • New integrations with Act-On and DoubleClick to help you optimize calls and sales from your marketing
  • A new user access control model that makes it easy to control how others access your Ifbyphone account

I go into more detail on the new functionality below, but if you would like to see any of it for yourself, please consider:

Calls Play a Critical Role in Revenue Generation and Customer Support

Phone calls drive revenue. They are the most effective tool for acquiring new customers, and they are the best way to support current customers to increase their lifetime value. Consumers today routinely engage with companies by calling them while researching products, responding to marketing campaigns and advertisements, making purchases and requesting post-sale support.

Companies that analyze, qualify, route, and manage these sales and support calls optimally and efficiently are turning more leads into sales and more customers into loyal, repeat business. Ifbyphone’s new Winter 2014 Release provides the 360-degree call analytics and automation technology to help you do it.

SourceTrak Call Tracking Redesigned and Enhanced

Ifbyphone’s SourceTrak, the industry standard in call tracking and analytics, is redesigned with an intuitive flat-design, card-based interface that modernizes and simplifies how marketers track, analyze, and manage phone leads from every channel. New call analytics dashboards provide a snapshot of how your campaigns are performing in a way that is simple to understand. Campaigns in SourceTrak are extremely easy to set up and update, and complex tasks such as adding new web domains or number pools, finding and replacing numbers, and configuring call routing can be done in seconds.

SourceTrak also adds new cross-domain and sub-domain tracking and multi-location support to its call analytics capabilities. Now as leads navigate from your landing pages and microsites to your main web site, the same SourceTrak phone number moves with them, regardless of domain, and ties calls back to the correct source. This is particularly useful for marketers using marketing automation technology that require the use of sub-domains for landing pages.

And for businesses, directories, and lead gen services that list many phone numbers for different stores, offices, or services on the same web page, SourceTrak can display unique trackable numbers for each one. Calls to those numbers are attributed to the right marketing source and routed to the correct location.

Ifbyphone Salesforce App Enhanced with New Service Cloud Support

The award-winning Ifbyphone Salesforce app, which marketing and sales teams already use to optimize revenue by tracking, routing, and managing phone leads, has added new support for the Salesforce Service Cloud. It enables customer service teams using the Service Cloud to provide superior phone-based support from anywhere using any phone system or device by simply logging in to their Salesforce account.

It also includes numerous additional enhancements, including the abilities to:

  • Analyze both inbound and outbound call activity by agent
  • Store data on calls from phone numbers with no assigned contact in a queue and automatically sync it to the right record once that person is known
  • Use a new custom filter to display only calls in Salesforce records that exceed a minimum duration

New Integrations with Act-On and DoubleClick

Ifbyphone has partnered with Act-On Software to integrate our call analytics data with Act-On’s marketing automation solution. Now Act-On users can track calls from website visitors, improve lead scoring accuracy and understand the impact of calls on marketing campaigns to improve performance and ROI.

A new integration with DoubleClick adds to Ifbyphone’s existing integrations with leading bid management solutions from Marin Software, Acquisio and Kenshoo. It provides marketers and agencies using DoubleClick with highly sought-after data on call conversions needed to optimize performance across digital campaigns and channels.

New User Access Control

Ifbyphone now makes it easy for you to control how users access your Voice360™ account. You can assign each user their own Ifbyphone login and control their level of access to the system. It’s a great feature for agencies with multiple clients or businesses with a large number of Ifbyphone users.

For more information about Ifbyphone’s Winter 2014 Release or to request a demo, visit our New Features page. Existing Ifbyphone customers can also register to attend a webinar on January 8th to learn more about the new features.

The post Ifbyphone’s New Winter 2014 Release Provides Call Analytics and Automation for Every Stage of the Customer Journey appeared first on Ifbyphone.



December 16, 2014

Deck the halls with Raspberry Pis, fa la la la la la LEDs! Today’s hack is another brilliant hardware hack that solves one of the more mundane and cumbersome interactions many car owners face every day… the garage door. In this detailed tutorial, Instructables user AkiraFist explains how to harness a Raspberry Pi, a webcam, and Twilio to create a video-monitored garage door “butler”. AkiraFist even explains how you can enable access for friends, family, and house-sitters.

Thanks for the stellar hack Akira! And until tomorrow, Happy Hacking and Happy Holidays!

12 Hacks of Christmas – Day 5: The SMS Garage Door Butler



I turned off Gmail notifications on my phone a while ago, mostly because this tweet accurately sums up my email situation:

always-have-email

But there are some emails that you want to know about right now. For instance, say The Doggfather wants to invest in your startup. That’s an email worth interrupting dinner for. Text messages are great for alerts like this. With Twilio and the Gmail API you can send SMS alerts when urgent emails hit your inbox.

This is a two part tutorial. In Part 1 we looked at how to use Omniauth to authenticate a Rails app with the Gmail API. In Part 2 we’ll retrieve labels, messages and message details with the Gmail API, then send SMS alerts using Twilio.

Before we get started, we’ll need:

  • A Gmail account
  • A Twilio account
  • An SMS-enabled phone number from Twilio
  • ngrok

The label that pays me

Which emails deserve SMS alerts? My first inclination was to hard-code that logic, but Gmail already gives us filters and labels. So before we write any code, let’s create a new filter that applies an sms label to certain messages. In Gmail:

  • Click the Settings gear in the top right
  • Click Filters
  • Click Add New Filter on the bottom of the page
  • Enter search criteria that’s important to you — in our case emails from “Snoop.”
  • Click Create Filter with This Search
  • Check Apply the Label and create a new label called sms

snoop-filter

Once we’ve sent an SMS alert, we’ll automatically remove the label so that we send only one alert per email. Sadly, Snoop hasn’t emailed us yet, so go ahead and apply the sms label to any of the messages in your inbox — that’ll do until our big day inevitably comes.

Get yo token, man

To access the Gmail API we need to authenticate our Rails app and retrieve an OAuth access token. This is what Part 1 of this post was all about so we’ll use it as the foundation for this post.

Clone the repo from Part 1:

git clone git@github.com:GregBaugues/gmail-alerts.git
cd gmail-alerts

If you use RVM or rbenv, set up your .ruby-version and .ruby-gemset now. (And if you don’t, you should.) Then add the Google API Client and Twilio gems to the Gemfile:

gem 'google-api-client', :require => 'google/api_client'
gem 'twilio-ruby'

Google didn’t play by Ruby conventions when it named its gem (one of many things it did to make working with its APIs difficult), so if you don’t include that require bit you’re going to get an error that looks like: 

NameError (uninitialized constant SessionsController::Google)

Install the gems, set up the database and start the server:

bundle install
rake db:create
rake db:migrate
rails s

In a different terminal, start ngrok pointed at port 3000 (ngrok is covered in more detail in Part 1):

./ngrok -subdomain=example 3000

Once our server has started, visit example.ngrok.com in the browser, click the Authenticate with Google link, and authorize your account. (I’ve made a change since I originally wrote Part 1 to request a modify token instead of a readonly token. If you get an insufficient permissions error later in this post, revoke your access token and repeat this process.) If we look in our database via the Rails console, we’ll find an access token that will allow us to make Gmail API calls:

rails c
Token.last.access_token

Now that we’ve got an access token, let’s roll.

The biggest star on your label

Let’s find the ID of our sms label. Unfortunately, the only way to do that is via the API. But that’s cool: it gives us a simple foray into making API requests against Gmail.

The Google API docs for Ruby aren’t great, but if you’re looking for more information on where all this code came from, here are the starting points I used:

From here on out, all the code we write will be in rake tasks so that we can run them from the command line. Create a new rake file:

touch lib/tasks/list_labels.rake

Then paste this into that file:

require 'pp'
task :list_labels => :environment do
  client = Google::APIClient.new
  client.authorization.access_token = Token.last.fresh_token
  service = client.discovered_api('gmail')
  result = client.execute(
    :api_method => service.users.labels.list,
    :parameters => {'userId' => 'me'},
    :headers => {'Content-Type' => 'application/json'})
  pp JSON.parse(result.body)
end

For better or worse, this is as simple as a Google API request gets. We create an authenticated API client using our OAuth token. We tell Google which API to use, which method to call, and which user to access. Google returns some JSON, which we turn into a hash using the Ruby JSON library.
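As a quick illustration of that last step, here’s a self-contained run of JSON.parse on a response body shaped like the one users.labels.list returns (the sample body is hand-made, not real API output):

```ruby
require 'json'

# A hand-made body shaped like a users.labels.list response.
body = '{"labels":[{"id":"Label_29","name":"sms","type":"user"}]}'

data = JSON.parse(body)  # JSON string -> plain Ruby hash
sms  = data['labels'].find { |l| l['name'] == 'sms' }
puts sms['id']           # prints the label id we're after
```

In the real task, pp simply prints the whole parsed hash and you pick the id out by eye.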

You’re going to see this pattern repeated over the next few examples. If we were doing anything more complicated than a tutorial, we’d want to DRY out our code. For the sake of brevity, I’ll leave that as an exercise to the reader, though Phil Nash wrote a sample of what an abstracted Gmail class might look like.

Let’s run our task in the terminal:

rake list_labels

My sms label id is Label_29. What’s yours?

label

I got a little message

We’ve got our label ID, let’s get the emails that match it. Create another rake file:

touch lib/tasks/check_inbox.rake

Paste in this code (and make sure you change your LABEL_ID value):

require 'pp'
LABEL_ID = 'Label_29'
task :check_inbox => :environment do
  client = Google::APIClient.new
  client.authorization.access_token = Token.last.fresh_token
  service = client.discovered_api('gmail')
  result = client.execute(
    :api_method => service.users.messages.list,
    :parameters => {'userId' => 'me', 'labelIds' => ['INBOX', LABEL_ID]},
    :headers => {'Content-Type' => 'application/json'})
  pp JSON.parse(result.body)
end

This code looks awfully similar to the code we used to pull down labels, except that we are:

  • accessing messages.list instead of labels.list
  • adding a filter for messages with inbox AND sms labels

Run our new task:

rake check_inbox

You’ll notice that we get minimal information about each email — only the message and thread IDs. Gmail doesn’t even tell us the subject and sender of each message! We’ll have to make another API call for that.

Peep out the manuscript

Let’s add two methods to this rake file to:

  • request message details
  • parse the convoluted JSON returned by Google

Add this to the end of check_inbox.rake:

def get_details(id)
  client = Google::APIClient.new
  client.authorization.access_token = Token.last.fresh_token
  service = client.discovered_api('gmail')
  result = client.execute(
    :api_method => service.users.messages.get,
    :parameters => {'userId' => 'me', 'id' => id},
    :headers => {'Content-Type' => 'application/json'})
  data = JSON.parse(result.body)

  { subject: get_gmail_attribute(data, 'Subject'),
    from: get_gmail_attribute(data, 'From') }
end

def get_gmail_attribute(gmail_data, attribute)
  headers = gmail_data['payload']['headers']
  array = headers.reject { |hash| hash['name'] != attribute }
  array.first['value']
end
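To see what get_gmail_attribute is doing, here’s a self-contained run against a hand-made hash shaped like Gmail’s messages.get response (the method is repeated from above so the snippet runs on its own; the sample values are mine, not real API output):

```ruby
# Gmail buries From/Subject in an array of name/value header hashes
# under payload -> headers; this digs out the one we want.
def get_gmail_attribute(gmail_data, attribute)
  headers = gmail_data['payload']['headers']
  array = headers.reject { |hash| hash['name'] != attribute }
  array.first['value']
end

# A hand-made hash shaped like a messages.get response.
sample = {
  'payload' => {
    'headers' => [
      { 'name' => 'From',    'value' => 'Snoop <snoop@example.com>' },
      { 'name' => 'Subject', 'value' => 'About that investment' }
    ]
  }
}

puts get_gmail_attribute(sample, 'Subject')  # => About that investment
puts get_gmail_attribute(sample, 'From')     # => Snoop <snoop@example.com>
```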

Our check_inbox task was simply printing our JSON data:

pp JSON.parse(result.body)

Replace that one line with code to iterate through and print the details for each message:

messages = JSON.parse(result.body)['messages'] || []
messages.each do |msg|
  pp get_details(msg['id'])
end

Run rake check_inbox and you’ll see a list of the sender and subject of each sms-labeled message in your inbox. Boom!

There’ll be a text from you on my phone

We’re getting close! We’ve got a hash with the sender and subject of our important email, now we just gotta shoot off that text message. Head over to your Twilio dashboard and take note of your Account SID and Auth Token — we’re going to need those to initiate an outbound SMS from our app.

credentials

You could set these values as constants in your rake file, but they:

  • might change between your development and production environments
  • shouldn’t be committed to a GitHub repo

Instead we’ll set them as environment variables. We’ll also set variables for your Twilio phone number and personal cellphone.

From the same terminal in which you run your rake tasks, run:

export TWILIO_ACCOUNT_SID=xxxxx
export TWILIO_AUTH_TOKEN=xxxxx
export TWILIO_NUMBER=13125555555
export CELLPHONE=13126666666

Then add a method to the end of check_inbox.rake to send that SMS:

def send_sms(details)
  client = Twilio::REST::Client.new ENV['TWILIO_ACCOUNT_SID'], ENV['TWILIO_AUTH_TOKEN']
  client.account.messages.create(
    to: ENV['CELLPHONE'],
    from: ENV['TWILIO_NUMBER'],
    body: "#{details[:from]}:\n#{details[:subject]}")
end

Finally, update the iterator to send the sms for each message:

messages = JSON.parse(result.body)['messages'] || []
messages.each do |msg|
  details = get_details(msg['id'])
  send_sms(details)
end

Run rake check_inbox again. Did your phone light up?

Got some changin’ to do

Almost finished. We just need to remove the sms label so that we’re not blowing up our phone every minute after Snoop reaches out (though that might be appropriate). All of our requests so far have been read-only, but now we’re going to modify data.

(Note: when I first published Part 1, I requested a readonly token. I’ve since changed it and the GitHub repo to request a gmail.modify token, but if you’re working from a project you started when Part 1 originally came out back in August of 2014, you’ll need to make the change in the Omniauth initializer; otherwise you’ll get an insufficient permissions error.)

Add this method to the end of check_inbox.rake:

def remove_label(id)
  client = Google::APIClient.new
  client.authorization.access_token = Token.last.fresh_token
  service = client.discovered_api('gmail')
  client.execute(
    :api_method => service.users.messages.modify,
    :parameters => { 'userId' => 'me', 'id' => id },
    :body_object => { 'removeLabelIds' => [LABEL_ID] }, 
    :headers => {'Content-Type' => 'application/json'})
end

Then we’ll add the remove_label method to our iterator:

messages = JSON.parse(result.body)['messages'] || []
messages.each do |msg|
  details = get_details(msg['id'])
  send_sms(details)
  remove_label(msg['id'])
end

Great! Now we’ll send one and only one alert for each tagged email.

Put it high in the clouds

In order for this script to be useful, we need to deploy and automate it. I’m not going to dive into that here, as I’m sure you have your own preferred method of deployment. If you’re on Heroku, you may want to check out the Heroku Scheduler. If you run your own VPS, you’ll probably want to add a line to your crontab that looks like:

* * * * * cd /path/to/app && rake check_inbox

You may also want to check out the whenever gem, which lets you write your cron schedule in plain Ruby.
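With whenever, the crontab entry above could be sketched as a config fragment like this (assuming the whenever gem’s DSL; the one-minute interval just mirrors the cron example):

```ruby
# config/schedule.rb -- run `whenever --update-crontab` to install
every 1.minute do
  rake 'check_inbox'
end
```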

The next episode

It’s been a long two posts digging into a complex concept (OAuth) and a not-very-friendly API (Gmail). But now that you’ve gotten those two technologies under your belt, the digital world is your oyster. With OAuth you can connect to thousands of APIs, dozens of which fall under the Google umbrella — now just a copy/paste/tweak away from what you just wrote. And of course, with Twilio you can send and receive text messages to billions of devices with just a few lines of code.

If you found this post useful, I’d love to hear about it on Twitter at @greggyb. And if you have any questions, drop me an email: gb@twilio.com.

Happy Hacking!

SMS alerts for urgent emails with Twilio and the Gmail API



As a Chicago company, we were excited for the chance to hear another Chicago company’s insights on building for success. We joined Potbelly CEO Aylwin Lewis at the Executives’ Club of Chicago this morning to hear how the little sandwich shop that started on Lincoln Avenue grew into a multimillion-dollar company with almost three hundred locations nationwide and internationally. Lewis, the former CEO and President of Sears, had humble beginnings in his home state of Texas, but those modest roots have contributed to building a distinctive company culture that just keeps on growing. Based on his (very inspiring) talk this morning, we have compiled Lewis’s top 6 tips for creating a company that engages both employees and customers.

Remember: People, Place, and Product

According to Lewis, Potbelly’s goal is to be the best place for lunch, with the best people, and the best product. This means tuning into performance during peak hours to ensure their much-adored 8-minutes-or-less wait times; training their employees to embody positive energy; and focusing on creating a delicious product that customers can count on.

You may not be in the restaurant industry, but if you can take those values and apply them to your company—whatever it is—you are building for success. No matter what you’re selling; people, place, and product matter. (This reminded me of one of Ifbyphone’s core values: “We will be the easiest company you have ever worked with, whether you are an employee, a customer, or an investor.”) 

Create Values and Live By Them

The Potbelly Advantage is the core set of values that Lewis instated when he became CEO of Potbelly. The focus is on passion, integrity, and positive energy, and instilling that vision in Potbelly employees. In fact, when employees are evaluated, their performance review is based on two things: overall performance, and how well they are living/exuding the values set forth in the Potbelly Advantage. “How do you train for culture?” Lewis asked us. By living it, he says, from the top down. Ensuring that management—even the CEO—is living the values put forth by the company creates what Lewis calls an “internal vibe” that radiates from the business. It becomes part of your image, your fabric. That’s when you know you’re doing it right—but it also depends on hiring the right people. So, next:

Hire Nice People

Ifbyphone is known for hiring SWANs: people who are Smart, Work hard, are Ambitious, and Nice. Lewis says that the emphasis on “niceness” is invaluable and worth the wait when recruiting. “Sometimes you have to go through one hundred just to hire five,” Lewis lamented. “But when you hire the wrong person, it’s like injecting a cancer into your positive workplace.” Hiring the right people can make all the difference. There are some people Lewis calls “grump-bunnies,” and Potbelly avoids them at all costs. Grump-bunnies, he says, can’t be turned into a ray of sunshine, no matter what perks you throw at them. Don’t waste your time. Be patient and only hire the sunshine.

Let Employees Bring Themselves to Work

One thing that makes Potbelly’s culture so fantastic, Lewis said, is the fact that employees can be who they are. Hair, clothing, shoes, race, sexual orientation, educational background—all unimportant, he insists. What matters is the values and how well they can live them. When you let people bring their “whole, authentic self” to work, there will be joy in what they offer the company. There aren’t too many barriers in place for those looking to enter the fast food industry, and Lewis is proud that a high school dropout can work at Potbelly and absorb the work ethic and values he needs to go back and get his GED. Moving up through the ranks at Potbelly to a salaried role is as simple as putting in the work.

Embrace Diversity

Potbelly boasts three women on its board, as well as three female senior leaders (marketing, HR, and development). Lewis insists that diversity must be more than words: “you have to live it,” just like your company’s other values. Lewis, who is African-American, recalls telling the board: “I don’t count. I’m one person. There has to be others.”

Without a hint of humor, he added, “I don’t have a woman’s sensibility. Having these women present with their thoughts and ideas adds a richness to the conversation. When you embrace diversity, your company will be better and your customers will be better served.”

Embrace Digital and Mobile

“The digital age is here,” Lewis declared. “You can ignore it, but it’s here.” But his advice for keeping up is simple.

“We’re facing the Uberization of business.” In Potbelly’s case, he says, people are always going to want a sandwich. But, like with taxicabs, how people want to get their sandwiches might change. To stay successful, you have to be ready for change in whatever form it comes, and be ready to meet your customers where they are—including on their smartphones.

The post 6 Tips from Potbelly’s CEO on How to Grow a Successful Business appeared first on Ifbyphone.



At Twilio, we care a great deal about your success. It’s been ingrained in our culture from day one: every new hire, regardless of title, works side by side with our support team during their first week. This gives every Twilio employee a firsthand look at how you’re using the Twilio platform and what we can do to make your experience better.

We also make your success a priority by bundling free support with every single Twilio account. That way you always have somewhere to turn if you get stuck, or if you just want someone to high-five after you finish your Twilio-powered app. But what we’ve found is that support is not one-size-fits-all: different Twilio use cases require different services and response times. That’s why we offer more than the Free plan, providing you with a range of support services to meet your needs.

Today, we are excited to give you a new Personalized support plan that proactively anticipates your needs and provides more personal support. You’ll have your own named support engineer: someone who knows your apps, knows your deployment, and is familiar with your programming languages.

Not only are we offering this new Personalized support plan, but we’ve also taken this opportunity to upgrade our entire support offering with across-the-board service improvements. We’ve also added notifications for status.twilio.com, so you can subscribe to updates and always have the latest information on the status of the Twilio platform.
Whether you’re a developer working on your next app or an enterprise integrating voice and messaging into a complex workflow, Twilio is the only cloud communications platform that comes with a full breadth of support services to meet your exact requirements.

Here is an overview of the plans:

  • Personalized Support: The Personalized support plan is our new top-tier plan, targeted at enterprises and large software companies seeking a closer relationship with Twilio support engineers for planning and problem resolution. This plan includes:
    • 24×7 Response: Round-the-clock email and phone support with a 1-hour SLA for business-critical issues.
    • Named Support Engineer: A designated contact in Customer Support who understands your Twilio-based application.
    • Support Manager Escalation Line: Direct access to the acting customer support manager for escalation of tickets and status updates.
    • Quarterly Checkups: We meet with you quarterly to review your support tickets and use cases and to assist with capacity planning and optimizing your Twilio deployment.
  • Premium Support: The new Premium support plan is perfect for mission-critical Twilio deployments that require 24×7 email and phone support. The Premium support plan guarantees that for high-severity problems someone will get back to you in one hour or less, even if you call at 2am on a Sunday.
  • Bootstrap Support: The new Bootstrap support plan gets a major upgrade. We’ve cut the guaranteed response time in half: instead of the 4-hour SLA in our previous iteration of Bootstrap, you are now guaranteed a response in 2 hours or less. This is a great plan for anyone who wants email support and needs predictable response times.
  • Free Support: This free support plan is automatically assigned to every new Twilio customer at sign-up and includes access to our real-time status dashboard. This plan includes email support, and although we don’t specify a guaranteed response time, you can expect a prompt response from a Twilio support engineer.

  Together, these plans give you a full set of support options, from free email support to your own named support engineer, not to mention access to some of the best documentation and API helper libraries out there.

    It’s everything you’ll need as you bring Twilio cloud communications to the forefront of your business, updating internal telecom infrastructure and modernizing the way you communicate with your customers.

    For more detailed information and a side-by-side comparison of the different plans, please visit our Support Website. Use the account portal to sign up for the Bootstrap or Premium plans, or contact us if you’re interested in the Personalized plan.

    Introducing Personalized Support



    December 15, 2014

    For many families the holidays are a time to gather and partake in some of their favorite activities together. For Twilions and their families, those activities often involve writing code. Developer Evangelist Brent Schooley and his family are no exception. In October, Brent showed how to build a meme generator using Twilio. Brent’s brother Aaron Schooley is a software developer at Lockheed Martin and did us all the favor of forking his brother’s code to make some improvements.

    In celebration of family and #12HacksOfChristmas we want to share Aaron’s blog post showing you how to update his brother’s meme generator to send memes to your friends and family. Try it out yourself now. Send the word “list” to:

    United States: (215) 240-7664
    Canada: (587) 410-6363

     

    If you want to meme your friends, send a text with the following format (use your friend’s number instead of the 555 number):

    Meme All The Things, to: 555-555-5555

    If you have a hack you’d like us to highlight, just send it to @twilio or use #12HacksOfChristmas. Until tomorrow, Happy Hacking and Happy Holidays!

    12 Hacks of Christmas – Day 4: Meme-o-Gram



    A couple of weeks ago we announced the public beta of Elastic SIP Trunking, a new way to connect your SIP gear to the world through Twilio. With Elastic SIP Trunks, you can say sayonara to artificial constraints on scaling and to pricing shenanigans. We think this new way of consuming SIP connectivity offers longtime VoIP network administrators and engineers an instantly provisioned, powerfully resilient alternative to traditional SIP trunks.

    One of the more frequent requests we received after opening the Elastic SIP Trunking beta was for guidance on connecting an Elastic SIP Trunk from Twilio to FreeSWITCH, the popular open source SIP platform. Let’s find out how to do that together.

    How It Works

    A Twilio SIP Trunk provides a domain to connect your SIP switching equipment to Twilio. During our public beta, Elastic SIP Trunking provides only termination service, which is to say you can only place outbound calls.

    Two big differences will stand out to those of you who have purchased SIP trunks before. First, Twilio Elastic SIP Trunks are available immediately when you need them: no waiting for price quotes and contracts. As we’ll walk through shortly, you can provision a SIP trunk whenever you need it, with transparent, volume-based pricing. Second, Elastic SIP Trunks scale with you: no dealing with channels or concurrency planning. With Elastic SIP Trunks you can make as many concurrent calls as you want, without paying for any minimum number of concurrent calls. Instead of paying for what you might use, Twilio Elastic SIP Trunks bill you for what you do use.
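To make the billing difference concrete, here is a back-of-the-envelope sketch in Ruby. The per-minute rate and the usage numbers are purely hypothetical, not Twilio's actual pricing; the point is only that with elastic trunking the bill is a function of minutes used, not of peak concurrency:

```ruby
# Hypothetical numbers for illustration only -- not actual Twilio pricing.
rate_per_minute       = 0.01    # assumed termination rate, $/min
minutes_this_month    = 1_000   # assumed usage
peak_concurrent_calls = 50      # with elastic trunking this never enters the bill

bill = minutes_this_month * rate_per_minute
puts format('$%.2f', bill)      # prints $10.00
```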

    With those concepts in mind, let’s look at what we need to get started.

    What We’ll Need

    Let’s walk through what we need for this project.

    With our requirements taken care of, let’s get some information to prepare the project.

    A Word About Security

    If you are installing FreeSWITCH for the first time, a quick note about securing your installation. By default, the vanilla package comes configured with “1234” as the password for all users, defined in your vars.xml. It is very important that you change this default password and also remove any default users in the directory that you are not going to use.
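For reference, in the vanilla configuration that password is set by a preprocessor directive in vars.xml; changing it looks something like the following (the exact file location varies by install):

```xml
<!-- vars.xml: replace the stock "1234" with a strong, unique password -->
<X-PRE-PROCESS cmd="set" data="default_password=REPLACE_WITH_A_STRONG_PASSWORD"/>
```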

    There are many bots that scour the Internet looking for insecure SIP PBXs in order to pump calls to high cost premium routes (also known as “traffic pumping”). All of these bots look for default passwords for popular open source switches like FreeSWITCH or Asterisk. Taking the precaution of securing your host is not difficult, but these steps are necessary to ensure your Twilio account is not abused.

    Before leaving your host on the public Internet, be sure to secure your host by following this helpful guide.

    Prepping The Project

    For this tutorial, we’ll assume you have FreeSWITCH installed and are looking to add Twilio as an external SIP profile.  With our installation ready, we can begin gathering the information we need to connect FreeSWITCH to a Twilio Elastic SIP Trunk.

    First, let’s look at how communication between our FreeSWITCH install and Twilio is secured.  Elastic SIP Trunks use a combination of both network security and authentication.  To satisfy the latter requirement, we need to create a new username and password that we’ll register with our Twilio SIP trunk to verify our secrets are shared.  We’ll create that username and password in the next step.  To satisfy the network security requirement, we’ll need to identify the external IP address of the host for our installation of FreeSWITCH.

    Let’s start with the network security piece. We need to find the external IP address that our FreeSWITCH installation will be using to connect to Twilio. If you don’t know the external IP address, here’s an easy trick using the handy IPEcho service.

    For Linux users, you can find the IP address on the host by opening a terminal and entering the following command:

    curl http://ipecho.net/plain

    For Windows users, you can find the IP address on the host by opening a web browser and visiting IP Echo.

    Once we have a public IP address for the FreeSWITCH host, we’ll keep it handy as we provision our Twilio SIP trunk next.

    Provisioning A Twilio Elastic SIP Trunk

    With our homework done, let’s light up a new Elastic SIP Trunk instantly from our Twilio Dashboard.  First, we’ll log in to Twilio and navigate our Dashboard to Elastic SIP Trunking by clicking the dropdown menu in the upper left hand corner.

    Then we’ll click the SIP Trunks tab.

    Now, click Create a SIP Trunk in the upper right hand corner below our account dropdown to start provisioning our SIP trunk.

    There are a few things going on here as we provision our first SIP trunk – let’s tick them off row by row.  First, we need to create a SIP URI for our Twilio SIP domain. This URI is going to be the domain that we connect to in our FreeSWITCH configuration and, as such, needs to be unique across Twilio. Second, we need to give our trunk a friendly name that we’ll recognize later. I used “My First SIP Trunk” – you can choose whatever human readable name you prefer.

    Lastly, we need to secure our trunk with an IP Whitelist and a Credentials List.  This will be a two step process of registering an IP Whitelist and Credentials with our trunk.

    First, we’ll create a new IP Whitelist by clicking “Create IP Whitelist.”

    The dialog box that appears takes two values: a Friendly Name (hint: a good practice is to use the same name as your trunk) and an IP address or address block in CIDR notation. We’ll use the public IP address we identified in the previous section.

    Next, let’s create a new set of Credentials for this trunk.  We can do that by clicking “Create Credentials.”

    By default, this menu will autocomplete your Twilio master account’s email address and password.  Note you will need to change the username to something without an @ sign in order to authenticate successfully.

    In this example, let’s set a username that matches the name of our host, which we’ll call pbx.myhost.tld. The password should be a strong one – this SIP domain will be able to place toll calls.  Keeping these credentials and IP lists secure is important to prevent exploitation like traffic pumping.

    Be sure to remember these values for our credentials – we will need them in the next step when we configure FreeSWITCH to connect to our Twilio Elastic SIP Trunk.

    Connect FreeSWITCH To Our Elastic SIP Trunk

    The first step to connecting our FreeSWITCH install to our newly provisioned Elastic SIP Trunk is to create a new external SIP profile in our FreeSWITCH configuration.  FreeSWITCH is a highly featured platform with a large number of configuration files, the location of which will differ from platform to platform and from distro to distro.  We’ll need to find the location of the root configuration directory before we can configure FreeSWITCH to use our Elastic SIP Trunk.

    For Windows, the location of the root configuration directory is:

    • C:\Program Files\FreeSWITCH\conf

    For Linux, it can be in a couple places depending on your installation. You can look in these spots for the configuration directory for your install:

    • /etc/freeswitch
    • /usr/local/freeswitch/conf
    • /opt/freeswitch/conf

    With our FreeSWITCH configuration directory located, we’ll need to complete two configuration steps to place outbound calls with our Elastic SIP Trunk – first we’ll need to create a new SIP profile, then a Dial Plan to instruct FreeSWITCH to use our Twilio SIP profile to connect the call.

    To begin, let’s create a SIP profile for our Twilio Elastic SIP Trunk. Following the conventions of the vanilla FreeSWITCH configuration structure, we’ll create a file called twilio.xml in the sip_profiles/external directory inside the root of our configuration.

    In twilio.xml, we’ll define a new gateway that we’ll call Twilio-outbound with the user credentials and SIP Uri we defined when we first provisioned our SIP trunk in the Twilio dashboard.

    <include>
     <gateway name="Twilio-outbound">
       <param name="username" value="pbx.myhost.tld"/>
       <param name="password" value="Keep this password secret1" />
       <param name="proxy" value="myexample.pstn.twilio.com"/>
       <param name="register" value="false"/>
     </gateway>
    </include>

    Next, we’ll set up a Dial Plan to use the Twilio SIP profile we just created to place any outbound calls. A Dial Plan in FreeSWITCH is a configured extension that matches a regular expression to the action that you specify. In our case, we’re going to take all domestic and international phone numbers and route them to the Twilio-outbound SIP profile we defined in the previous step. We can do that by creating a new dialplan file called 02_twilio.xml in the dialplan/default directory. If you already have a dialplan prefixed by 02, pick a number after the last one you see in the directory (e.g. 09_twilio.xml would be fine).

    <include>
     <extension name="domestic.twilio.short">
       <condition field="${toll_allow}" expression="domestic"/>
       <condition field="destination_number" expression="^(\d{10})$">
         <action application="set" data="effective_caller_id_number=${outbound_caller_id_number}"/>
         <action application="set" data="effective_caller_id_name=${outbound_caller_id_name}"/>
         <action application="bridge" data="sofia/gateway/Twilio-outbound/+1$1"/>
       </condition>
     </extension>
     <extension name="domestic.twilio">
       <condition field="${toll_allow}" expression="domestic"/>
       <condition field="destination_number" expression="^(\d{11})$">
         <action application="set" data="effective_caller_id_number=${outbound_caller_id_number}"/>
         <action application="set" data="effective_caller_id_name=${outbound_caller_id_name}"/>
        <action application="bridge" data="sofia/gateway/Twilio-outbound/+$1"/>
       </condition>
     </extension>
     <extension name="international.twilio">
       <condition field="${toll_allow}" expression="international"/>
       <condition field="destination_number" expression="^\+(1|2[1-689]\d|2[07]|3[0-469]|3[578]\d|4[0-13-9]|42\d|5[09]\d|5[1-8]|6[0-6]|6[7-9]\d|7|8[035789]\d|8[1246]|9[0-58]|9[679]\d)(\d+)">
        <action application="set" data="country_code=$1"/>
        <action application="set" data="national_number=$2"/>
        <action application="set" data="effective_caller_id_number=${outbound_caller_id_number}"/>
        <action application="set" data="effective_caller_id_name=${outbound_caller_id_name}"/>
        <action application="bridge" data="sofia/gateway/Twilio-outbound/+${country_code}${national_number}"/>
       </condition>
     </extension>
    </include>

    Let’s dive into what is going on here. We have defined three new extensions: one for ten-digit US numbers (e.g. 555-555-5555), one for eleven-digit US numbers (e.g. +1-555-555-5555), and one with a large regex for international phone numbers (e.g., for the UK: +44 55 5555 5555). Each of these respects the toll_allow setting for the registered endpoints, meaning only users with specific permissions to call domestically and internationally will match the dial plan. We also inherit the outbound_caller_id_number and outbound_caller_id_name defined in vars.xml. If you are setting up FreeSWITCH for the first time, I’d recommend setting these two caller_id values to a Twilio phone number. Finally, we set each extension to bridge to our SIP profile using the Twilio-outbound settings we defined earlier.
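If you want to sanity-check that big international pattern before loading it into FreeSWITCH, you can run the exact same regex through Ruby, whose engine is close enough to PCRE for this purpose. This is just a verification aid, not part of the FreeSWITCH configuration:

```ruby
# The international extension's pattern from 02_twilio.xml, copied verbatim.
INTL = /^\+(1|2[1-689]\d|2[07]|3[0-469]|3[578]\d|4[0-13-9]|42\d|5[09]\d|5[1-8]|6[0-6]|6[7-9]\d|7|8[035789]\d|8[1246]|9[0-58]|9[679]\d)(\d+)/

m = INTL.match('+445555555555')  # a UK number
puts "country_code=#{m[1]}, national_number=#{m[2]}"
```

Here the two capture groups split the E.164 number into the country code ("44") and the national number, exactly as the `country_code` and `national_number` channel variables do in the dial plan.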

    Next, we’ll need to reload our configurations.  The least intrusive way of doing this is to load the fs_cli utility.

    For Windows it is located here:

    C:\Program Files\FreeSWITCH\fs_cli

    For Linux, you can access it by running:

    fs_cli

    You will then need to reload your dialplans with this command:

    reloadxml

    Finally, we can add our new SIP profile with the following command:

    sofia profile external rescan

    Placing Our First Call

    Using our registered SIP endpoint, let’s place a call with our newly configured SIP profile connecting FreeSWITCH to our Elastic SIP Trunk.  If you don’t have a phone number handy, give this one a shot: +1 510.229.4252.

    If we are successful, we should hear a congratulatory voice and see something similar to this in our FreeSWITCH log:

    freeswitch@internal> 2014-12-08 11:54:49.494770 [NOTICE] switch_channel.c:1055 New Channel sofia/internal/rspectre@x.x.x.x [c010b077-646b-4a9d-a1e4-d67c4adc80dd]
    
    2014-12-08 11:54:49.774764 [NOTICE] switch_channel.c:1055 New Channel sofia/external/+15102294252 [50123df5-8d77-4802-a5d8-a6d675494eaa]
    
    2014-12-08 11:54:50.114762 [NOTICE] sofia.c:6716 Ring-Ready sofia/external/+15102294252!
    
    2014-12-08 11:54:50.134734 [NOTICE] mod_sofia.c:2086 Ring-Ready sofia/internal/rspectre@x.x.x.x!
    
    2014-12-08 11:54:50.134734 [NOTICE] switch_ivr_originate.c:527 Ring Ready sofia/internal/rspectre@x.x.x.x!
    
    2014-12-08 11:54:50.374769 [NOTICE] sofia.c:7475 Channel [sofia/external/+15102294252] has been answered
    
    2014-12-08 11:54:50.394783 [NOTICE] sofia_media.c:92 Pre-Answer sofia/internal/rspectre@x.x.x.x!
    
    2014-12-08 11:54:50.394783 [NOTICE] switch_ivr_originate.c:3494 Channel [sofia/internal/rspectre@x.x.x.x] has been answered

    If we weren’t successful, read up on the tips in the next section for some common gotchas I encountered as a FreeSWITCH newbie setting up my first Elastic SIP Trunk.

    Common Gotchas

    There are a lot of moving parts in a FreeSWITCH install – it is an incredibly powerful piece of software with a lot to configure.  Here are some errors you might check if you’re working with FreeSWITCH for the first time.

    • Firewall settings: FreeSWITCH operates over quite a few ports – just opening 5060 on TCP and UDP won’t be enough.  For a full list, check this wiki page. If you are using iptables, it also includes a script for the appropriate rule chains.
    • Dial Plan location: depending on your installation, the location of your dial plan may be incorrect if you do not see sofia attempting to connect the call to the Twilio SIP profile. For the “vanilla” install of FreeSWITCH, this will be the dialplan/default directory, but it can differ depending on your installation. Running fs_cli at the DEBUG log level can show you which extensions your numbers match.
    • Inbound calls: Origination is not yet supported by Elastic SIP Trunking while the product is in beta.  If you are interested in setting this up, you can sign up for access when this feature is available here.
    • ACLs: In addition to the firewall your host is likely behind, FreeSWITCH maintains its own Access Control Lists (ACLs).  If you’re unable to receive calls from a Twilio <Dial> verb, you may want to make sure your FreeSWITCH install has added these IP addresses to its ACL set.

    Wrapping Up

    Awesome – we now have an Elastic SIP Trunk working on our FreeSWITCH install.  This is just the beginning of the journey getting your FreeSWITCH installation to speak SIP to Twilio.  For next steps, try checking out these features:

    • Turn on “Record” for easy, single-click recordings of all your outbound calls.
    • Set up your Dial Plans with different Twilio numbers set as the Caller ID for your international calls, so calls to the UK or any of the many other countries we support appear to come from a local number.
    • Set up a Twilio SIP domain for easy Conferencing or Queueing.

    This was my first spin getting FreeSWITCH connected to Twilio – I’m interested to hear more about your experience using this new product.  Please reach out with your FreeSWITCH installation questions and stories to rob@twilio.com or on Twitter at @dn0t.

    Getting Started Placing Outbound Calls with Twilio Elastic SIP Trunking and FreeSWITCH



    Recent studies show that customers across all industries are increasingly frustrated with the level of service they receive. In fact, 84% of consumers began doing business with a competitor following a poor customer service experience. In the age of the Internet, competitors are a dime a dozen, and it’s easy for customers to find them. So what is the key to good customer service, and how do you use it to keep customers loyal? Check out our brand new infographic below, and if you want more stats, feel free to download our free white paper, Beyond the Cloud: The Next Generation of Virtual Call Centers.

    [Infographic: Talking the Talk]

    The post Infographic: The Key to World-Class Customer Service appeared first on Ifbyphone.



    December 14, 2014

    Hi folks! Today we on the devangel crew wanted to sprinkle the 12 Hacks of Christmas with some holiday cheer. The following hack started out as a Santa location hack. Then we found out that there is no such thing as a ‘Santa API’, which means first we’re going to have to build that API, and then maybe next year we’ll build a tool for tracking Santa in real time via Twilio.

    Instead, Jarod, one of the parents on our team, put together a fun little command-line tool for sending secret messages to festive little recipients. Let us know what you think on Twitter with the hashtag #12HacksOfChristmas.

    Santa’s Helper – Command Line tool using Ruby-gem, Thor and Twilio

    Growing up in my house during Christmas was a delight. Every year we’d put out a tray of cookies on Christmas Eve, hang the stockings and try to sleep quietly upstairs as Santa visited our house and delivered gifts. It has always been a special night but now that I am a new parent I appreciate the magic of Santa even more.

    When I began thinking about how parents like myself might want to interact with the North Pole I started by making a list of requirements and concerns:

    • It needed to seem magical (simply texting a phone number might not be magical)
    • It needed to be hard for a child to find (i.e., not a website)
    • It shouldn’t be a phone-to-phone hack (since we don’t want Mom’s phone chiming when little Johnny thinks he is talking to the North Pole)
    • It needed to be extendable by the community

    In the end I decided the best implementation would be a ruby-gem command-line tool where hacker parents could interact discreetly with the North Pole. This would allow for magical experiences, would be pretty hard for a child to stumble upon, and would give us a tool that is easily extendable by the community. Luckily I had been looking for an excuse to build something using the Thor gem… and a command line interface to Santa’s helpers seemed the perfect console for creating magic :)

    In this post I’ll show you how to build the SantaCLI from start to finish, but that’s not my real goal. The real goal of this post is to equip you with enough information to inspire you to build an experience customized for your own family. So check out the code and start tweaking!

    Demo:

    Text +1 860-947-HOHO(4646) to ping the North Pole and see what Santa is up to.

    The Code:

    santas_twilio_helper on github
    santas_twilio_helper on ruby-gems
    Twilio: Ruby helper library, REST API, Twilio Number, Thor

    Sign Up for Twilio

    The North Pole is listening: command list

    When I sat down to create my command line interface to the North Pole I wanted to make sure it was a two way channel. I wanted to make sure the elves had a way to communicate with my kids while also giving me a way of sending messages as an ambassador of Santa. I also wanted to give all of you hacker Moms and Dads a foundation that you could build on. Here’s the list of commands I came up with:

    • $ santa begin: this command should register you as a parent, along with your children’s names and your zip code. This gives you and the elves access to certain details (variables) down the road.
    • $ santa ping: ping the North Pole, which should send a random status message (SMS) of what Santa is up to. This could be extended to give location updates on Christmas Day, or to send MMS pics of Santa selfies.
    • $ santa telegraph: this allows Mom or Dad to send a message as one of Santa’s helpers. This should have an option for a time delay.
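To give a feel for how the `santa ping` idea could be wired up, here is a minimal sketch. The status messages are invented for illustration and `PARENT_PHONE` is a made-up environment variable; the gem's real implementation is on GitHub. Picking the random update needs only the standard library, and the twilio-ruby send is guarded so the sketch runs without credentials:

```ruby
# Sketch of the `santa ping` idea -- not the gem's actual code.
SANTA_UPDATES = [
  'Santa is feeding the reindeer.',
  'The elves are loading the sleigh.',
  'Santa is checking the list (twice).'
].freeze

def random_update
  SANTA_UPDATES.sample
end

if ENV['TWILIO_ACCOUNT_SID'] && ENV['TWILIO_AUTH_TOKEN'] && ENV['TWILIO_NUMBER']
  require 'twilio-ruby'
  # Send the status update as an SMS via the Twilio REST API.
  client = Twilio::REST::Client.new(ENV['TWILIO_ACCOUNT_SID'], ENV['TWILIO_AUTH_TOKEN'])
  client.account.messages.create(
    from: ENV['TWILIO_NUMBER'],
    to:   ENV['PARENT_PHONE'],  # hypothetical: the parent's phone number
    body: random_update
  )
else
  # No credentials configured: just print what would have been sent.
  puts random_update
end
```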

    Great, now let me show you how I built this beauty.

    Before we move on, feel free to grab the code from github and follow along: https://github.com/jarodreyes/santas-twilio-helper

    Creating a Ruby gem that used Twilio

    Feel free to skip this section if you just want to install my santa gem and play around with it. For the rest of us, here is a quick rundown of how I chose to create the gem.

    There are a few ways to create a ruby gem, including following the ruby-gem docs. I decided to use Bundler to bundle up the gem. The biggest benefit of using Bundler is that it generates all of the files needed for a gem, with a few TODO notes to help you fill them out. Additionally, you get some handy rake commands for building and releasing the gem when you’re finished.

    First you need to install Bundler:

    $ gem install bundler

    Next you can generate the gem:

    $ bundle gem santa_helper
        create santa_helper/Gemfile
        create santa_helper/Rakefile
        create santa_helper/LICENSE.txt
        create santa_helper/README.md
        create santa_helper/.gitignore
        create santa_helper/santa_helper.gemspec
        create santa_helper/lib/santa_helper.rb
        create santa_helper/lib/santa_helper/version.rb
        Initializating git repo in <path-to-gem>/santa_helper

    Next you can open the directory you just created and fill out all of the TODOs in the .gemspec file.

    # coding: utf-8
    lib = File.expand_path('../lib', __FILE__)
    $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
    require 'santa_helper/version'
    
    Gem::Specification.new do |spec|
      spec.name          = "santa_helper"
      spec.version       = SantaHelper::VERSION
      spec.authors       = ["Jarod Reyes"]
      spec.email         = ["jarodreyes@gmail.com"]
      spec.summary       = %q{TODO: Write a short summary. Required.}
      spec.description   = %q{TODO: Write a longer description. Optional.}
      spec.homepage      = ""
      spec.license       = "MIT"
    
      spec.files         = `git ls-files -z`.split("\x0")
      spec.executables   = spec.files.grep(%r{^bin/}) { |f| File.basename(f) }
      spec.test_files    = spec.files.grep(%r{^(test|spec|features)/})
      spec.require_paths = ["lib"]
    
      spec.add_development_dependency "bundler", "~> 1.7"
      spec.add_development_dependency "rake", "~> 10.0"
    end

    Finally, you need to make sure Twilio and Thor are available in your gem. This means requiring them in two places: the .gemspec and your main application. First, add them to the .gemspec file:

    spec.add_dependency 'thor', '~> 0.18'
    spec.add_dependency 'twilio-ruby', '~> 3.11'

    Eventually you’ll need to require them in the application.rb file but that’s getting ahead of ourselves.

    In the santas_twilio_helper gem this has all been done for you so you can just install the gem now and build on top of it.

    $ gem install santas_twilio_helper

    Putting the fruit in the fruitcake: building the CLI using Thor.

    Thor is a nifty little library. By including it we get a lot of built-in functionality that makes our minimal code act like a full-fledged CLI. To give you an example of what I mean, type this into the santas_twilio_helper.rb file:

    desc 'hohoho', 'Wake up the big red man'
    def hohoho
      puts 'Ho Ho Ho! Merry Christmas to all and to all a good night!'
    end

    Now because we are using Bundler we can execute the command line tool using the bundle command:

    $ bundle exec bin/santa hohoho

    That’s all it takes to create a CLI using Thor: a set of descriptions and methods that execute code in the console.

    If you’ve installed the santas_twilio_helper gem you can see a bit of Thor’s magic by typing santa help:

    [Screenshot: output of santa help]

    Using Thor

    The simplest way to use Thor is to create commands within a Thor class: each public method, preceded by its description, becomes a command. However, Thor also includes Actions, modules that interact with our local environment to give us some cool functionality.

    To illustrate this, here is the begin command, which takes some input from the console and writes it to a file.

    require 'thor'
    require 'paint'
    require 'json'
    require 'twilio-ruby'
    
    module SantasTwilioHelper
      module Cli
        class Application < Thor
    
          # Class constants
          @@twilio_number = ENV['TWILIO_NUMBER']
          @@client = Twilio::REST::Client.new ENV['TWILIO_ACCOUNT_SID'], ENV['TWILIO_AUTH_TOKEN']
    
          include Thor::Actions
    
          desc 'begin', 'Register yourself as one of Santas helpers'
          def begin
            say("#{Paint["Hi I'm one of Santa's Twilio Elves, and I'm about to deputize you as an ambassador to Santa. To get started I need your name.", :red]}")
            santa_helper = ask("Parent Name: ")
    
            children = []
            say("Great Gumdrops. We also need your child's name to verify they are on Santa's list. ")
            child = ask("Child Name: ")
            children.push(child)
    
            say("Fantastic. You can always add more children by running add_child later.")
            say("Next I need to know your telephone number so Santa's helpers can get in touch with you.")
            telephone = ask("#{Paint['Telephone Number: ', :red]}")
    
            say("The last thing I need is your city so we can verify we have the correct location for #{child}.")
            zip_code = ask("#{Paint['Zip Code: ', :blue]}")
    
            data = {
              'santa_helper' => santa_helper,
              'children' => children,
              'telephone' => telephone,
              'zip_code'=> zip_code
            }
    
            write_file(data)
    
            say("#{Paint["Okay you're off to the races. You can type `santa help` at any time to see the list of available commands.", "#55C4C2"]}")
          end
        end
      end
    end

    Right away you can see some cool Thor functionality: using the methods say and ask, Thor will pause the shell session to prompt the user for input, wait for a response and then store the response in a variable that our script can reference.

    I mentioned earlier that Thor includes Actions, which allow us to save input to disk, among other things. One of these Actions is create_file(), which I use in the function below to save the user input from begin to the filesystem.

    def write_file(data_hash)
       create_file "santarc.json", "// Your Santas Helper configuration.\n #{data_hash.to_json}", :force => true
    end

    I chose to write the data to a santarc file as opposed to writing to a database. If you would like a more secure data store you should probably change this.
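    Because the file starts with that comment line, anything reading the profile back has to skip it before parsing the JSON. A hypothetical reader (the method name and default path are illustrative assumptions, not the gem’s code) might look like:

```ruby
require 'json'

# Hypothetical counterpart to write_file: load the saved profile,
# dropping the "// ..." comment line that create_file wrote first.
def read_profile(path = 'santarc.json')
  lines = File.readlines(path)
  JSON.parse(lines.drop(1).join)
end
```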

    Next let’s take a look at the ping command, since I believe it has a ton of potential to be extended to do amazing things.

    # PING will tell us what Santa is up to.
    desc 'ping', 'See where Santa is right now'
    def ping
      file = File.read('messages.json')
      messages = JSON.parse(file)
      # TODO: if it is Dec. 25 we should pull from a different set of messages. Location specific
      # TODO: We should use the zip code in the profile to make sure Santa arrives here last.
    
      # For now Santa messages are all non-location based
      santaMs = messages['SANTA_SNIPPETS']
      a = rand(0..(santaMs.length-1))
      msg = santaMs[a]
      puts "sending message..."
      sendMessage(msg)
    end

    This command is pretty straightforward as it stands, but you may want to extend it based on the TODO comments. For now it reads messages from the ‘messages.json’ file and pulls a random message. Then we call sendMessage(), which actually does the job of sending the SMS with Twilio.

    Before we can send a message in Ruby using Twilio, all we need to do is add require 'twilio-ruby' to the top of application.rb:

    def sendMessage(msg)
      # `phone` holds the recipient's number, read from the saved
      # profile elsewhere in the gem
      message = @@client.account.messages.create(
        :from => @@twilio_number,
        :to => phone,
        :body => msg
      )
      puts "message sent: #{msg}"
    end

    This version of sendMessage() is the simplest way we can send an SMS using Twilio. To send a picture you would simply add a :media_url parameter pointing to the URL of some media. However, on GitHub you will find a more robust version of sendMessage() that appends a greeting and a sign-off from the elves.
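    To make the difference concrete, here is a hypothetical helper (not part of the gem) that builds the parameter hash for messages.create; the only change for a picture message is the extra :media_url key:

```ruby
# Hypothetical helper: build the parameters for a Twilio message.
# Passing a media_url turns the plain SMS into an MMS.
def message_params(from, to, body, media_url = nil)
  params = { :from => from, :to => to, :body => body }
  params[:media_url] = media_url if media_url
  params
end
```

    You would then pass the resulting hash straight to @@client.account.messages.create.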

    I’ll quickly mention the ‘telegraph’ command, which allows me to send a message as Santa’s Helper with a time delay (in seconds). This means I can create scenarios where I am in the kitchen cooking with the family when the ‘Santa Phone’ mysteriously delivers a custom message. Pretty sneaky ;) Try it yourself (assuming you installed the gem) with the command:

    $ santa telegraph "Santa just checked his list for the 3rd time, hope you were nice today!" 240
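    Under the hood the delay can be as simple as sleeping before dispatch. A minimal sketch (an assumed implementation; the real command lives in the gem on GitHub) would be:

```ruby
# Assumed sketch of a telegraph-style delayed send: block for the given
# number of seconds, then hand the message on. In the real command the
# message would be passed to sendMessage(msg) rather than returned.
def telegraph(msg, delay_seconds)
  sleep(delay_seconds.to_i)
  msg
end
```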

    The last bit of know-how I picked up on this journey to a gem was how to build and release the gem.

    Building and releasing the ruby gem.

    Before we can use the handy Rake tasks, we need to save our RubyGems credentials to the ~/.gem/credentials file. So run:

    $ curl -u your-rubygems-username https://rubygems.org/api/v1/api_key.yaml > ~/.gem/credentials

    Once we have done this we can run:

    $ rake build && rake release
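    Those build and release tasks come from Bundler’s gem helpers; a Bundler-generated gem skeleton ships with a Rakefile that loads them in one line:

```ruby
# Rakefile -- Bundler's gem_tasks define rake build, rake install and
# rake release for the gem described by the .gemspec.
require 'bundler/gem_tasks'
```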

    Grok’in around the Christmas tree

    So now we have a working CLI to interact with the North Pole, but there are a few things I would challenge you all to do. First, this CLI could use a location-specific hack. Since neither Google nor NORAD (Bing) offers an API for Santa’s location, this is going to require some work on your end. But if you want to collaborate on a Santa API, hit me up and we’ll get started.

    I would also recommend adding a Christmas day switch to the ‘ping’ command that maybe kicks off an hourly update via SMS of where Santa is located.

    Hopefully learning how I built this tool has given you a plethora of ideas for making it even more magical. I look forward to all of the pull requests on GitHub, but until then feel free to ping me on Twitter or email and we can talk about how The Santa Clause movie with Tim Allen is actually a cultural gem.

    12 Hacks of Christmas – Day 3: Santa CLI



    December 13, 2014

    We’ve seen people combine Twilio with a bunch of unexpected things – like bears, beers and babies. For today’s #12HacksOfChristmas hack we wanted to highlight one from the community that features the pairing of Twilio and man’s best friend. This holiday season, learn how to train your dog using JavaScript in this brilliant tutorial by Chad Hart.

    How does it work? Chad gets text notifications when his dog is on the couch and then drives a robot to help encourage his dog to go back on the floor. Check out this video to see it in action:

    Yes, this hack is cute. But it’s also full of great technologies like WebRTC, Node.js and Tessel. Throughout this post Chad takes you on a journey that makes building this project seem straightforward even if you’ve never worked with this stack before. Gather your puppy, Tessel and Twilio account, then check out Chad’s tutorial on WebRTCHacks.com. And until tomorrow, Happy Hacking and Happy Holidays!

    12 Hacks of Christmas – Day 2: Training Your Dog with WebRTC, Tessel and Twilio



    December 12, 2014

    Hello Twilio friends! In honor of that age-old tradition of giving gifts on the holidays we here at Twilio wanted to share with you some of our favorite hacks from the past year. We’re calling it the #12HacksOfChristmas. Please share with us your favorite hacks on Twitter, and happy hacking!

    Oh Christmas tree…that beautifully disguised fire-starter that slowly dries up until January 1st, when it finally gets thrown out on the street to become public-restroom toilet paper. If only the darn thing did more than just sit there! Well, I am here to bring good news, because for today’s #12HacksOfChristmas Will Dages explains how you can build your very own texting Christmas tree.

    What does it do? When you text this tree a color, all of the lights change in a cascading fashion. So stop whittling away your time and start hacking away on your Christmas tree (phrasing?).

    Here is a demo of the tree in action:

    (Animated GIF: the tree’s lights changing color after receiving a text.)

    When we asked Will what inspired this hack he wrote:

    I wanted to be able to control some Christmas lights in a novel way. I thought if I could figure it out, it’d be something fun to take to work and leave in the cafe, or have around as a party trick for the holidays. I started by controlling it with tweets, but ran into a few problems so I switched over to using Twilio and text messages. That ended up being a great idea because it made the tree way more accessible. Not everyone has a Twitter account, but almost everyone is comfortable sending a text message. It’s been a lot of fun to see people’s reactions to the tree — lots of smiles.

    Thanks for the awesome tutorial Will! And to the rest of you, until tomorrow happy hacking and happy holidays!

    12 Hacks of Christmas – Day 1: The Texting Tree



    Yesterday we joined HubSpot’s exclusive webinar, The Art of Visual Marketing, to hear experts Guy Kawasaki and Peg Fitzpatrick offer their tips for turning visual marketing—combined with social media—into a definitive asset for businesses. Kawasaki’s 1.4 million Twitter followers speak to his expertise on the subject, so we thought we’d round up some of his tips here, along with a few of our own.

    1. Create or Curate Great Content

    It all starts with content. If you don’t have great content, you have nothing to share, and social media marketing is all about sharing. Kawasaki urges businesses to create and share content every single day—whether it’s a blog, a video, an infographic, or something else—that provides a benefit to your audience, so that when the day comes to ask them to do something (fill out a form, check out your new website), you have earned the right to ask.

    2. Design (Or Find) Great Visuals

    Kawasaki shared a secret to great engagement on social media: use an image every single time. Yes, every time: on every post. In his own humble experience (remember: 1.4 million followers) he claims that using an image doubles his engagement more often than not. He recommends Canva as a great tool to design and create attractive marketing images. We’ve had a good experience with PicMonkey. Pick your poison.

    3. Optimize Your Visuals

    Remember that the images you use should be optimized for where you’re sharing them. Enlarging visuals rather than leaving them thumbnail-sized makes a big difference too. Kawasaki was kind enough to provide the optimal size for images per platform, so we’ll share them here:

    • Twitter: 1024 x 512
    • Facebook: 940 x 788
    • Instagram: 640 x 640
    • Pinterest: 735 x 1102
    • Google+: Any of the above!

    4. Use Some Nifty Tools

    Eye Dropper is an open source extension that lets you pick colors from web pages and grab the color code so you can match fonts, images, and more in a nice uniform color selection. Uniformity and pleasing colors are essential to successful visual marketing. Hyperlapse is another tool Kawasaki recommended; it enables users to shoot polished time-lapse videos that were previously impossible without bulky tripods and expensive equipment. A good video goes a long way, and Hyperlapse is a great way to take video of lots of faces at events without them being blurred by speed. WordSwag, suggested by Fitzpatrick, is an easy-to-use app for adding text to your visuals.

    5. Use Phone Numbers

    One important part of visual marketing that often gets overlooked is the use of phone numbers on web pages and assets. Not only is a phone number a trustmark, but it gives your audience a quick, easy way to connect with you if they like your content enough to want to learn more. After all, 61% of customers believe it’s important that a business provide a phone number to call. The eye naturally goes to the upper corner of a site, a blog, or an asset to find a phone number…make sure there’s something there when they look.

    6. Get Creative With Video and Presentations

    Your video itself should be creative, of course, but Kawasaki recommends uploading the file natively to Facebook rather than pasting in a link from YouTube. The video looks much more attractive in the feed, and even though its views won’t contribute to the view count on YouTube, it takes advantage of viewers who may be on one platform but not the other.

    In addition, Kawasaki recommends taking advantage of SlideShare. If you have a particularly long presentation or even a long-form post, create a SlideShare presentation to go with it and take advantage of the audience that watches SlideShare for good content. Plus it gives you another piece of visual marketing content to add to your arsenal to share on other networks. You can’t lose.

    Want more tips on improving your content marketing performance? Sign up for our webinar: Eliminate the Biggest Blind Spot in Your Content Marketing ROI Data.


    The post 6 Steps to Improving Your Visual Marketing, with Guy Kawasaki appeared first on Ifbyphone.



    December 11, 2014

    It’s been an exciting few weeks of launches for Twilio. My favourite was the launch of our Network Traversal Service. Whilst that may sound a bit dry, it’s an important service for WebRTC applications as it removes the overhead of deploying your own network of STUN and TURN servers. I’ve been dying to find an excuse to get playing with WebRTC and this was a great reason to do so.

    It would, of course, be remiss of me to keep the code and the process of putting together a WebRTC application to myself. Throughout this post I will share how I got started with WebRTC by building out a video chat application. Then you can spend fewer late nights wondering which callback you missed or which message you haven’t implemented yet, and more time waving at your friends and thinking of cool applications for this technology.

    Let’s make some WebRTC happen!

    What is WebRTC?

    Let’s start with a few definitions just to make sure we all know what we’re talking about.

    WebRTC is a set of JavaScript APIs that enable plugin-free, real time, peer to peer video, audio and data communication between two browsers. Simple, right? We’ll see the JavaScript APIs in the code later.

    What isn’t WebRTC?

    It is also important to talk about what WebRTC doesn’t do for us, since that is the part of the application we actually need to build. Whilst a WebRTC connection between two browsers is peer to peer, we still require servers to do some work for us. The three parts of the application that are required are as follows:

    Network configuration

    This refers to information about the public IP address and port number on which a browser can be reached. This is where the Twilio Network Traversal Service comes in. As explained in the announcement when firewalls and NAT get involved it is not trivial to discover how to access an endpoint publicly. STUN and TURN servers can be used to discover this information. The browser does a lot of the work here but we’ll see how to set it up with access to Twilio’s service later.

    Presence

    Browsers usually live a solitary life, blissfully unaware of other browsers that may want to contact them. In order to connect one browser to another, we are going to have to discover the presence of another browser somehow. It is up to us to build a way for the browsers to discover other browsers that are ready to take a video call.

    Signalling

    Finally, once a browser decides to contact another peer it needs to send and receive both the network information received from the STUN/TURN servers, as well as information about its own media capabilities. This is known as signalling and is the majority of the work that we need to do in this application.

    For a much more in depth view on WebRTC and the surrounding technologies I highly recommend the HTML5 Rocks introduction to WebRTC and their more detailed article on STUN, TURN and signalling.

    Tools

    To build out our WebRTC “Hello World!” (which, excitingly enough, is a video chat application) we need a few tools. Since we are speaking JavaScript on the front end I decided to use JavaScript for the back end too, so we will be using Node.js. We need something to serve our application too and for this project I picked Hapi.js (though it doesn’t really matter, you could easily use Express or even node-static). For the presence and signalling any two way communication channel can be used. I picked WebSockets using Socket.io for the simplicity of the API.

    All we need to get started is a Twilio account, a computer with a webcam and Node.js installed. Oh, and a browser that supports WebRTC, right now that is Firefox, Chrome or Opera. Got that? Good, let’s write some code.

    Getting started

    On the command line, prepare your app:

    $ mkdir video-chat
    $ cd video-chat
    $ npm init

    Enter the information that npm init asks for (you can mostly press Enter here). Now, install your dependencies:

    $ npm install hapi socket.io twilio --save

    Create the files and directories you’re going to need too.

    $ mkdir public
    $ touch index.js public/index.html public/adapter.js public/app.js

    For adapter.js we’re going to use Google’s adapter.js library, which they maintain to normalise different implementations and vendor-prefixed versions of the JavaScript APIs. Copy it into your public/adapter.js file. If you have curl installed, you could do so with the following command:

    $ curl https://webrtc.googlecode.com/svn-history/r4259/trunk/samples/js/base/adapter.js > public/adapter.js

    Then open up public/index.html and enter the following bare-bones HTML page:

    <!doctype html>
    <html>
    <head>
      <meta charset="UTF-8">
      <title>Video Chat</title>
    </head>
    <body>
      <h1>Video Chat</h1>
      <video id="local-video" height="150" autoplay></video>
      <video id="remote-video" height="150" autoplay></video>
    
      <div>
        <button id="get-video">Get Video</button>
        <button id="call" disabled="disabled">Call</button>
      </div>
    
      <script src="https://www.twilio.com/blog/socket.io/socket.io.js"></script>
      <script src="https://www.twilio.com/blog/adapter.js"></script>
      <script src="https://www.twilio.com/blog/app.js"></script>
    </body>
    </html>

    As you can see, this includes two empty <video> elements, some buttons that we will be using to control our calls, and the JavaScript files we defined earlier alongside the Socket.io client library.

    Finally, we’ll set up our server. Open up index.js and enter the following:

    // index.js
    var Hapi = require('hapi');
    var server = new Hapi.Server()
    server.connection({
      'host': 'localhost',
      'port': 3000
    });
    var socketio = require("socket.io");
    var io = socketio(server.listener);
    var twilio = require('twilio')(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);
    
    // Serve static assets
    server.route({
      method: 'GET',
      path: '/{path*}',
      handler: {
        directory: { path: './public', listing: false, index: true }
      }
    });
    
    // Start the server
    server.start(function () {
      console.log('Server running at:', server.info.uri);
    });

    This is a basic setup for Hapi; we’re not really doing anything special here except attaching the Socket.io process to the Hapi server object.

    We’ve loaded the Twilio node library here too and you can see that I’m including the API credentials from the environment. Before we run the server, we should make sure we have those credentials in the environment.

    $ export TWILIO_ACCOUNT_SID=ACXXXXXXXXXX
    $ export TWILIO_AUTH_TOKEN=YYYYYYYYY

    Now run the server and make sure everything is looking ok.

    $ node index.js

    Open up http://localhost:3000 and check to see that you have a title, some empty video elements and two buttons. Is that all there? Let’s continue.

    Video and Audio Streams

    We’re all set up, so the first thing we need to do to start the video calling process is get hold of the user’s video and audio streams. For this we will use the navigator.getUserMedia API. It’s vendor-prefixed in Chrome, Opera and Firefox, so this is where adapter.js helps us out for the first time.

    We’re going to listen for a click on the first <button> element we added to the page and request the streams from the user’s webcam and microphone. Open up public/app.js and enter the following:

    // app.js
    var VideoChat = {
      requestMediaStream: function(event){
        getUserMedia(
          {video: true, audio: true},
          VideoChat.onMediaStream,
          VideoChat.noMediaStream
        );
      },
    
      onMediaStream: function(stream){
        VideoChat.localVideo = document.getElementById('local-video');
        VideoChat.localVideo.volume = 0;
        VideoChat.localStream = stream;
        VideoChat.videoButton.setAttribute('disabled', 'disabled');
        var streamUrl = window.URL.createObjectURL(stream);
        VideoChat.localVideo.src = streamUrl;
      },
    
      noMediaStream: function(){
        console.log("No media stream for us.");
        // Sad trombone.
      }
    };
    
    VideoChat.videoButton = document.getElementById('get-video');
    
    VideoChat.videoButton.addEventListener(
      'click',
      VideoChat.requestMediaStream,
      false
    );

    The code above does a few things, so let’s talk through it. I first set up a VideoChat object to store a few objects and functions that we will be defining throughout the process. The first object we grab hold of is the video button, to which we attach a click event listener (no jQuery here I’m afraid, this is all vanilla DOM APIs). When the button is clicked we make the request to access the video and audio streams through the getUserMedia function. Usually this would be called on the navigator object, but adapter.js makes it available globally as getUserMedia.

    The call to getUserMedia causes the browser to prompt the user to accept or deny the page’s request to use their media. In Firefox this looks like this:

    getUserMedia permissions in Firefox

    And in Chrome it looks like this:

    getUserMedia permissions in Chrome

    If you accept, the first callback to the getUserMedia method is called with the stream as an argument. If you deny the permissions, the second callback gets called. When we receive the stream we save it to our VideoChat object and add it to the video element so you can see yourself (we also turn the volume down to 0 to avoid echoes). To do so we need to turn the stream into a URL, which we do with the window.URL.createObjectURL function. We also disable the “Get Video” button as we don’t need it anymore.

    Save that, reload the page and click “Get Video”. You should see the permissions popup; accept it and you should see yourself!

    Me waving at the camera!

    User presence

    Next we need to build a way of knowing we have another user on the other end ready to make a call. By the end of this section we will have enabled the “Call” button when we know there is someone on the other end.

    In order to start passing messages between browsers as part of our signalling we need to start using our WebSockets. Open up index.js again and copy and paste the following code before the server.start function.

    // index.js
    io.on('connection', function(socket){
      socket.on('join', function(room){
        var clients = io.sockets.adapter.rooms[room];
        var numClients = (typeof clients !== 'undefined') ? Object.keys(clients).length : 0;
        if(numClients == 0){
          socket.join(room);
        }else if(numClients == 1){
          socket.join(room);
          socket.emit('ready', room);
          socket.broadcast.emit('ready', room);
        }else{
          socket.emit('full', room);
        }
      });
    });

    This is a very basic idea of a room and presence. Only two users can join the room at any one time. When a client tries to join a room we count how many clients are in the room right now. If it is zero they can join; if it is one they join and the socket emits to both clients that they are ready. If there are already two clients in the room then it is full and no further clients can join for now.
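    The join logic boils down to a tiny decision on the room’s current occupancy, which you could factor out as a pure function (a restructuring for illustration, not code from the post):

```javascript
// Decide what happens when a client tries to join a two-person room:
// 0 occupants -> join quietly, 1 -> join and signal both sides ready,
// 2 (or more) -> the room is full.
function joinDecision(numClients) {
  if (numClients === 0) return 'join';
  if (numClients === 1) return 'join-and-ready';
  return 'full';
}
```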

    Now we need to join the room from the client. We need to start a connection to the socket server, which we can do by simply calling io(). Assign that to our VideoChat object so we can use it later. Then at the end of the onMediaStream function add two more lines: one to join the room and one to listen for the ready event. We then need a function to call back to once we hear that the room is ready. In that callback we will enable the “Call” button.

    // app.js
    var VideoChat = {
      socket: io(),
      //...
      onMediaStream: function(stream){
        VideoChat.localVideo = document.getElementById('local-video');
        VideoChat.localStream = stream;
        VideoChat.videoButton.setAttribute('disabled', 'disabled');
        VideoChat.localVideo.src = window.URL.createObjectURL(stream);
        VideoChat.socket.emit('join', 'test');
        VideoChat.socket.on('ready', VideoChat.readyToCall);
      },
    
      readyToCall: function(event){
        VideoChat.callButton.removeAttribute('disabled');
      },
      //...
    };

    We’d better get hold of that “Call” button too. At the bottom of the file where we grabbed the “Get Video” button, we’ll do the same for the “Call” button.

    // app.js
    VideoChat.callButton = document.getElementById('call');
    
    VideoChat.callButton.addEventListener(
      'click',
      VideoChat.startCall,
      false
    );

    Let’s create a dummy startCall method in the VideoChat object to make sure things are going as planned.

    // app.js
    var VideoChat = {
      //...
      startCall: function(event){
        console.log("Things are going as planned!");
      }
    };

    Now, restart the node server (Ctrl + C to stop the process and node index.js to start again), open two browser windows to http://localhost:3000 and click “Get Video” in both. Once both videos are playing the “Call” buttons in each window should be live. And clicking on the “Call” button should log a nice message to your browser’s console.

    Start the signalling

    Our “Call” button is very important as this is going to kick off the rest of the WebRTC process. It’s the last bit of interaction the user needs to do to get the call started.

    The “Call” button is going to set up a number of processes. It is going to create the RTCPeerConnection object that will manage creating the connection between the two browsers. This consists of producing information on the media capabilities of the browser and the network configuration. It is our job to send those to the other browser.

    Signalling the network configuration

    To set up the RTCPeerConnection object we need to give it details of the STUN and TURN servers that it will use to discover the network configuration. For this we will use the new Twilio STUN/TURN servers. The simplest method is to just use the STUN servers; they are free and don’t require any authorisation. iceServers (and the iceCandidates that you will see later) refer to the overall Interactive Connectivity Establishment protocol that makes use of STUN and TURN servers.

    // app.js
    var VideoChat = {
      //...
      startCall: function(event){
        VideoChat.peerConnection = new RTCPeerConnection({
          iceServers: [{url: "stun:global.stun.twilio.com:3478?transport=udp" }]
        });
      }
    };

    To get the best possible chance of a connection we will want to use the TURN servers as well. To do this, we need to request an ephemeral token from Twilio using the new Tokens endpoint, which will give us access to the TURN servers from our front-end JavaScript. We’ll have to request this token from our server and deliver the results back to the browser. Since we already have a WebSocket connection set up, we’ll use that. Here’s the flow we’ll be using in this next section:

    The browser requests the token from the server over WebSockets, the server requests it from Twilio and when it gets it sends it back to the browser over the WebSocket.

    Return to index.js and, within the callback to the socket’s connection event, place the following code:

    // index.js
    io.on('connection', function(socket){
      //...
      socket.on('token', function(){
        twilio.tokens.create(function(err, response){
          if(err){
            console.log(err);
          }else{
            socket.emit('token', response);
          }
        });
      });
    });

    Here, when the socket receives a token message it makes a request to the Twilio REST API. When it receives the token back in the callback to the request it emits the token back to the front end. Let’s build the front end part of that now.

    Our startCall function now needs to use the socket to get a token, so we simply set up a listener for a token message from the server and emit one ourselves.

    // app.js
    var VideoChat = {
      //...
      startCall: function(event){
        VideoChat.socket.on('token', VideoChat.onToken);
        VideoChat.socket.emit('token');
      },
      //...
    };

    And now we need to define the onToken method to initialise our RTCPeerConnection with the iceServers returned from the API. This kicks off the process to get the network configuration, so we need to add a callback function to the peerConnection to deal with the results. This is the onicecandidate callback and it is called every time the peerConnection generates a potential way of connecting to it from the outside world. As the developer, it is our job to share that candidate with the other browser, so right now we’ll send it down the WebSocket connection.

    The callback receives a candidate and the caller shares the candidate with the other browser over the WebSocket.

    // app.js
    var VideoChat = {
      //...
      onToken: function(token){
        VideoChat.peerConnection = new RTCPeerConnection({
          iceServers: token.iceServers
        });
    
        VideoChat.peerConnection.onicecandidate = VideoChat.onIceCandidate;
      },
    
      onIceCandidate: function(event){
        if(event.candidate){
          console.log('Generated candidate!');
          VideoChat.socket.emit('candidate', JSON.stringify(event.candidate));
        }
      }
    };

    On the server, we need to send that candidate straight on to the other browser:

    // index.js
    io.on('connection', function(socket){
      //...
      socket.on('candidate', function(candidate){
        socket.broadcast.emit('candidate', candidate);
      });  
    });

    Then we need to be able to receive those messages in the front end, this time on behalf of the other browser. We set up the listener for the socket within the onToken function, since that is when we create the peerConnection and will be ready to deal with candidates.

    // app.js
    var VideoChat = {
      //...
      onToken: function(token){
        VideoChat.peerConnection = new RTCPeerConnection({
          iceServers: token.iceServers
        });
        VideoChat.peerConnection.onicecandidate = VideoChat.onIceCandidate;
        VideoChat.socket.on('candidate', VideoChat.onCandidate);
      },
      //...
      onCandidate: function(candidate){
        var rtcCandidate = new RTCIceCandidate(JSON.parse(candidate));
        VideoChat.peerConnection.addIceCandidate(rtcCandidate);
      }
    };

    The onCandidate method receives the stringified candidate over the socket, turns it into an RTCIceCandidate and adds it to the browser’s peerConnection. You may be wondering where the second browser got a peerConnection object from, since we only created that object when the user clicked the “Call” button in the first browser. You’re right to wonder, but don’t worry: that is coming up very soon.

    If you want to test for now, restart the app and refresh the page in both the browsers you had open. I left a console.log in the VideoChat.onIceCandidate callback above, so get the video streams in both browsers and then click “Call”. Instead of the “Things are going as planned!” message we saw before, you should now see a bunch of “Generated candidate!” messages in the console. We’re doing well, but there’s more information we need to share between browsers before we can complete the connection.

    Sharing media configuration

    In the last section we set up how the call initiator starts sharing their network config. Now we need to sort out sharing media information. The peerConnection objects in each browser will need to generate descriptions of their media capabilities. The caller will create an offer detailing those capabilities and send it over the WebSocket connection. The other browser takes that offer, creates an answer containing its own capabilities and sends it back to the caller. We will implement this below, but here’s a diagram to show what should happen.

    The caller creates an offer and sends over the WebSocket, the receiver creates an answer and sends it back.
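    To make that flow concrete before we wire up the real thing, here's a toy model of the round trip: plain objects and an in-memory relay stand in for Socket.IO and the peerConnection objects, and all of the names are illustrative.

```javascript
// A stand-in for the WebSocket relay: 'on' registers a handler,
// 'emit' delivers a message straight to it.
function makeRelay() {
  var handlers = {};
  return {
    on: function (event, fn) { handlers[event] = fn; },
    emit: function (event, payload) {
      if (handlers[event]) handlers[event](payload);
    }
  };
}

var relay = makeRelay();
var log = [];

// The receiver turns an incoming offer into an answer.
relay.on('offer', function (offer) {
  log.push('receiver got ' + offer.type);
  relay.emit('answer', { type: 'answer' });
});

// The caller notes the answer coming back.
relay.on('answer', function (answer) {
  log.push('caller got ' + answer.type);
});

// The caller kicks things off by sending an offer.
relay.emit('offer', { type: 'offer' });
console.log(log.join(', ')); // prints: receiver got offer, caller got answer
```

    The real version we build below works the same way, except the relay is our Socket.IO server and the offers and answers are generated by the peerConnection objects.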

    Making the offer

    To start this process, we begin with the offer. Once we have created the peerConnection object, we add our localStream to it. Then we call createOffer on the peerConnection. This generates the media configuration and calls back to the function passed in. In the callback, we call setLocalDescription with the offer on the peerConnection and send the offer over the socket to the other browser. We also need an error callback in case createOffer isn’t successful.

    // app.js
    var VideoChat = {
      //...
      onToken: function(token){
        VideoChat.peerConnection = new RTCPeerConnection({
          iceServers: token.iceServers
        });
        VideoChat.peerConnection.onicecandidate = VideoChat.onIceCandidate;
        VideoChat.socket.on('candidate', VideoChat.onCandidate);
        VideoChat.peerConnection.addStream(VideoChat.localStream);
        VideoChat.peerConnection.createOffer(
          function(offer){
            VideoChat.peerConnection.setLocalDescription(offer);
            VideoChat.socket.emit('offer', JSON.stringify(offer));
          },
          function(err){
            console.log(err);
          }
        );
      },
      //...
    };

    On the server, we need to pass this message along again.

    // index.js
    io.on('connection', function(socket){
      //...
      socket.on('offer', function(offer){
        socket.broadcast.emit('offer', offer);
      });
    });

    Receiving the offer

    Then in the front end we need to receive the offer. We’re setting up the listener in the onMediaStream function this time, as it will trigger the creation of the peerConnection in the other browser.

    // app.js
    var VideoChat = {
      //...
      onMediaStream: function(stream){
        VideoChat.localVideo = document.getElementById('local-video');
        VideoChat.localStream = stream;
        VideoChat.videoButton.setAttribute('disabled', 'disabled');
        VideoChat.localVideo.src = window.URL.createObjectURL(stream);
        VideoChat.socket.emit('join', 'test');
        VideoChat.socket.on('ready', VideoChat.readyToCall);
        VideoChat.socket.on('offer', VideoChat.onOffer);
      },
    
      onOffer: function(offer){
        console.log('Got an offer')
        console.log(offer);
      },    
      //...
    };

    Let’s run this to make sure we’re on track so far. Restart the server and go back to your open browser windows. Refresh both, click “Get Video” in both and accept the permissions request. Open a developer console in one window and click “Call” in the other browser. You should see “Got an offer” printed to the console followed by a JSON string of the offer that was sent. One side of our signalling is working!

    There’s a lot of information in the offer, but thankfully we don’t need to look deeply into that right now. It just needs to be passed between the peerConnection objects in each browser. Let’s carry on building.

    At this point, we could make a call to VideoChat.startCall, but that would eventually create an offer and send it over the socket to the first browser, which would then go through the whole process again in a loop. What we actually want to do here is create an answer and return it to the first browser. I think we need a refactor at this point.

    Refactoring

    What we need is a way to create a peerConnection object for ourselves and set up the listeners, but decide whether we create an offer or an answer to send to the other browser.

    To do this, I’m going to update the onToken function to take a callback function that will allow us to describe what happens once the peerConnection is set up. Since onToken is also used as a callback, the function definition will now return a function that will become the callback:

    // app.js
    var VideoChat = {
      //...
      onToken: function(callback){
        return function(token){
          VideoChat.peerConnection = new RTCPeerConnection({
            iceServers: token.iceServers
          });
          VideoChat.peerConnection.addStream(VideoChat.localStream);
          VideoChat.peerConnection.onicecandidate = VideoChat.onIceCandidate;
          VideoChat.socket.on('candidate', VideoChat.onCandidate);
          callback();
        }
      },
      //...
    };

    So the callback function replaces our original way of creating the offer, for which we will need a new function:

    // app.js
    var VideoChat = {
      //...
      createOffer: function(){
        VideoChat.peerConnection.createOffer(
          function(offer){
            VideoChat.peerConnection.setLocalDescription(offer);
            VideoChat.socket.emit('offer', JSON.stringify(offer));
          },
          function(err){
            console.log(err);
          }
        );
      },
      //...
    };

    Then we change startCall to set up the callbacks like this:

    // app.js
    var VideoChat = {
      //...
      startCall: function(event){
        VideoChat.socket.on('token', VideoChat.onToken(VideoChat.createOffer));
        VideoChat.socket.emit('token');
      },
      //…
    };

    Now we can start defining the functions for creating an answer.

    // app.js
    var VideoChat = {
      //...
      createAnswer: function(offer){
        return function(){
          var rtcOffer = new RTCSessionDescription(JSON.parse(offer));
          VideoChat.peerConnection.setRemoteDescription(rtcOffer);
          VideoChat.peerConnection.createAnswer(
            function(answer){
              VideoChat.peerConnection.setLocalDescription(answer);
              VideoChat.socket.emit('answer', JSON.stringify(answer));
            },
            function(err){
              console.log(err);
            }
          );
        }
      },
      //...
    };

    In this case we want to use createAnswer as the callback to the creation of the peerConnection, but we also need to use the offer to set the remote description on the peerConnection. This time, we create a closure by calling the function with the offer and returning a function to use as the callback. Now when the peerConnection is created we return to the inner function, turn the offer we received over the socket into an RTCSessionDescription object and set it as the remote description. We then create the answer on the peerConnection object, very much as we created the offer in the first place, and send it back over the socket.
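    If that closure trick feels unfamiliar, here's a minimal standalone sketch of the same pattern (the names and the 'dummy-offer' value are just for illustration):

```javascript
// The outer function captures a value; the inner function it returns
// can still read that value whenever it is eventually called.
function makeCallback(offer) {
  return function () {
    return 'answering: ' + offer;
  };
}

// Create the callback now, while we have the offer in hand...
var onPeerConnectionReady = makeCallback('dummy-offer');

// ...and invoke it later: the captured offer is still available.
console.log(onPeerConnectionReady()); // prints: answering: dummy-offer
```

    createAnswer does exactly this, holding on to the offer until the peerConnection exists and the inner function finally runs.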

    This is how we set up our onOffer function now:

    // app.js
    var VideoChat = {
      //...
      onOffer: function(offer){
        VideoChat.socket.on('token', VideoChat.onToken(VideoChat.createAnswer(offer)));
        VideoChat.socket.emit('token');
      },
      //...    
    };

    Making the final connection

    Now that we are sending an answer back over the socket, all we need to do is pass that on to the original caller and then wait for the browser to do its magic.

    Back in index.js, let’s set up the relay for the answer.

    // index.js
    io.on('connection', function(socket){
      //...
      socket.on('answer', function(answer){
        socket.broadcast.emit('answer', answer);
      });
    });

    Then, we need to set up receiving the answer in the browser. We’ll add one more listener to the socket when the peerConnection is created, and build the callback function to save the answer as the remote description of the peerConnection.

    // app.js
    var VideoChat = {
      //...
      onToken: function(callback){
        return function(token){
          VideoChat.peerConnection = new RTCPeerConnection({
            iceServers: token.iceServers
          });
          VideoChat.peerConnection.addStream(VideoChat.localStream);
          VideoChat.peerConnection.onicecandidate = VideoChat.onIceCandidate;
          VideoChat.peerConnection.onaddstream = VideoChat.onAddStream;
          VideoChat.socket.on('candidate', VideoChat.onCandidate);
          VideoChat.socket.on('answer', VideoChat.onAnswer);
          callback();
        }
      },
    
      onAnswer: function(answer){
        var rtcAnswer = new RTCSessionDescription(JSON.parse(answer));
        VideoChat.peerConnection.setRemoteDescription(rtcAnswer);
      },
      //…
    };

    The browsers are now passing media capabilities and connection information between them, leaving one more thing to do. When there is a successful connection, the peerConnection will receive an onaddstream event with the stream of the peer’s media. We just need to connect that to our other <video> element and the video chat will be on. We’ll add the onaddstream callback where we create the peerConnection.

    // app.js
    var VideoChat = {
      //...
      onToken: function(callback){
        return function(token){
          VideoChat.peerConnection = new RTCPeerConnection({
            iceServers: token.iceServers
          });
          VideoChat.peerConnection.addStream(VideoChat.localStream);
          VideoChat.peerConnection.onicecandidate = VideoChat.onIceCandidate;
          VideoChat.peerConnection.onaddstream = VideoChat.onAddStream;
          VideoChat.socket.on('candidate', VideoChat.onCandidate);
          VideoChat.socket.on('answer', VideoChat.onAnswer);      
          callback();
        }
      },
    
      onAddStream: function(event){
        VideoChat.remoteVideo = document.getElementById('remote-video');
        VideoChat.remoteVideo.src = window.URL.createObjectURL(event.stream);
      },
      //...
    };

    And that should be it! Load up a couple of browsers next to each other, open up your development URL, get the video stream in both browsers and then click “Call” from one of them. You should find yourself looking at yourself. Four times!

    Me waving at myself, twice!

    This is just the beginning

    This is just the first step towards building all sorts of potential WebRTC applications. Once you get your head around the process required to set up the connection between two browsers, what you do with that connection is up to you. In this instance, creating a way for users to hang up might be a start, or building a lobby area with much better presence controls.

    Then there’s more fun stuff you could try out. You can alter the video streams by passing them to a canvas and playing about with them there, you could use the Web Audio API to change the sound, and with the data channel (which I haven’t covered in this post) you could pass any data you wanted between peers.

    You can see all the code from this post, fully commented, on GitHub.

    I’d love to hear about the sorts of things you want to do or are already doing with WebRTC. Give me a shout on Twitter or drop me an email at philnash@twilio.com.

    Set Phasers to STUN/TURN: Getting Started with WebRTC using Node.js, Socket.io and Twilio’s NAT Traversal Service



    Attribution is the heart and soul of marketing analytics. Being able to determine which leads, opportunities, and revenue come from your marketing efforts is crucial. Search marketing now accounts for 49% of digital marketing budgets (MarketingProfs), and being able to prove the ROI driven by those online sources is the only way to keep that budget growing. For many marketers, web forms are the primary way to capture online leads. This has become standard practice in the industry, and tracking the lead source through forms is fairly simple.

    But what if that lead doesn’t want to take the time to fill out a form? What happens to them? They still came in through one of your online sources, but do you have a way to capture that if they don’t fill out a form? When a lead decides to pick up the phone instead of filling out a form, they usually slip through the cracks of marketing attribution. With the rapid adoption of mobile phones, it has become easier than ever to call a business instead of staying online. In order to close that hole in your attribution and capture those leads that go offline, you need call tracking.

    By adding call tracking to your array of attribution tools, you gain the ability to claim leads that came in over the phone as marketing leads. Typically, people who pick up the phone are further along in the buying process and are much more likely to turn into a customer. In addition, 61% of SMBs rate telephone leads as excellent or good, higher than any other lead type (BIA/Kelsey report, 2012). Missing out on attributing phone leads to marketing can be a deadly mistake. In order to accentuate this point, we performed an internal review of our own search marketing efforts. The following is the result of our analysis.

    Findings

    Conversion Type    Leads Per Year    Deals Per Year    Revenue Per Year
    Web Forms          66%               34%               37%
    Inbound Calls      34%               66%               63%

    Key Takeaways

    Throughout a one-year period, search marketing generated a higher number of leads from web forms, but those leads don’t convert anywhere near the rate at which phone leads convert. Without the ability to attribute phone leads to marketing sources, these numbers would have been drastically different. Here are a few important insights from the data:

    • Web forms account for only 66% of online lead capture
    • 23% of inbound calls convert to sales compared to 6% of web forms
    • 66% of won deals came from inbound calls
    • Without call tracking, only 37% of revenue can be attributed to marketing

    Being able to claim almost 3x more revenue as being generated from marketing can have a tremendous impact on your organization. By only taking credit for leads that are generated through web forms, you miss a huge opportunity to prove the importance of your marketing and increase your budget. Phone leads are 4x more likely to convert into a sale, so taking credit for those leads is huge. If you still aren’t convinced, take a look at our white paper, Tracking Phone Leads: The Missing Piece of Marketing Automation.
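    As a quick sanity check, the "almost 3x" figure follows directly from the revenue split in the table above: web forms alone capture 37% of attributable revenue, while adding inbound calls brings the share to 100%.

```javascript
// Revenue attributable to marketing, per the table above (percentages).
var formsOnly = 37;             // web forms alone
var withCallTracking = 37 + 63; // web forms plus inbound calls
var multiplier = withCallTracking / formsOnly;
console.log(multiplier.toFixed(1) + 'x'); // prints: 2.7x
```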

    The post Ifbyphone’s Own Analysis of Search Marketing Leads Captured by Web Forms vs. Inbound Calls appeared first on Ifbyphone.



    This is post #1 of 1 in the series “How the Digital Revolution is Reshaping Education”

    The digital revolution has had a huge impact on education. While university students of fifteen years ago may have been lucky enough to have their own desktop computer, today’s university student is often equipped with a laptop, smartphone, and sometimes even a tablet. This extends to primary and secondary students as well; the seemingly intuitive ability of today’s children to operate their digital devices is astounding.

    Students have adopted mobile devices throughout the world. In the US, more than 90% of college students owned a smartphone as of 2014, a figure the rest of the world is expected to match by 2016 (Source: eMarketer). However, college students have been slower to adopt tablets, because tablets are often seen as devices exclusively ‘for entertainment purposes, not for writing papers and doing class projects’ (Source: Ball State University). In 2014, only about 29% of college students in the US reported owning a tablet.

    With the advent of digital textbooks, more and more students could be adopting tablets sooner. In the United Kingdom, the University of East London provided all first-year students as of fall 2014 with Samsung Note devices. These devices are meant to replace costly and heavy textbooks, so students have more flexibility when studying and researching. Devices come equipped with core textbooks, links to the online library resources, and the virtual learning environment. (Source: UEL). Textbooks are provided by Kortext, an aggregated platform that offers fully functional on- and offline reading.

    The University of East London is one of the first universities worldwide to undertake such an initiative. As more and more textbooks are digitized, the tablet becomes the logical device on which to read: like an eReader, the tablet mimics a physical book by allowing the user to ‘turn’ pages. The digital nature of the book makes searching for key words or subjects easy, and taking notes in the margins no longer decreases the value of a physical book.

    Digital textbooks can also be interactive, containing audio, videos, hyperlinks, and more. Interactivity, as opposed to passive reading, increases students’ comprehension and retention rates. Additionally, digital textbooks save resources and ultimately lower the cost of expensive college textbooks. E-textbooks (digitized alternatives to printed textbooks, without interactivity) cost about 40-50% of the print retail price, although access expires after 180 days. Right now, e-textbooks and digital textbooks make up about 3% of textbook sales in the United States (Source: AASCU). However, as more schools and students recognize the value of digitized textbooks, this number is sure to rise.

    This is the first post in our series: How the Digital Revolution is Reshaping Education. The next post will discuss the pros and cons of Virtual Learning Environments.

    Get Your Free eBook:
    Enhancing the Learning Experience: The Future of Education

    Download Your Free Ebook



    December 10, 2014

    Learn more about Twilio’s local German numbers here

    Running errands is much more fun in a BMW. DriveNow is revolutionizing car-sharing by offering customers the freedom to pick up a BMW or Mini Cooper wherever is convenient for them, and drop it off wherever they please.

    DriveNow is committed to delivering their customers a seamless experience from sign-up to car drop-off. To keep in touch with customers, they use SMS to confirm rentals and manage reservations. Timing is everything when it comes to car rentals: if a customer is waiting around for a text message with the details of their rental, they’re losing valuable time.

    “We have to have a partner for a 24/7 business that’s reliable and fast,” said Katrin Lippold, Operational Business Manager at DriveNow. To ensure that their customers were getting critical texts immediately and reliably, DriveNow switched to Twilio to power their SMS communications.

    When a customer reserves a car, they instantly get a text with the car’s location, the color of the car, and the license plate. DriveNow’s communication strategy is tailored to make each customer’s rental experience easy and hyper-local. “If you want to come to the customer’s heart, you have to be local,” says Lippold. Using Twilio, DriveNow can use local German numbers, familiar to their customers. They can be sure that anyone using DriveNow, whether they’re visiting from Munich or San Francisco, will get the mission-critical texts they need every time.

    “Twilio helps us a lot to improve the customers experiences by delivering fast and reliable SMS to a worldwide network,” says Lippold. DriveNow is currently expanding their coverage worldwide, and relying on Twilio to help them give every customer a fantastic rental experience.

    DriveNow Uses Twilio To Connect To Customers and Revolutionize Car Sharing



    We hinted at its existence before…but today we’re taking customer support to the next level with the official launch of the Ifbyphone Support Community.

    With a central location that combines tutorials, jumpstarts, and user-submitted questions and answers, Ifbyphone is working harder than ever at making us the easiest company you’ve ever done business with. The Help Center and Customer Service buttons in the portal now direct you to the Support Community, where we’re centralizing our support efforts with FAQs, tutorial videos, jumpstarts and user guides, and a customer forum for you to share ideas and issues alike. There’s even a section for specific categories based on applications that allows you to ask questions and post ideas about product enhancements.

    This helps keep our world of support transparent and open, while also offering you a faster way to resolve common issues!

    Go ahead: create a user for yourself. If you need assistance, we’re one step ahead of you: check out the video below. It explains everything. Now dive in! We’re looking forward to seeing you (and supporting you) in this exciting new community.

    The post The Brand New Ifbyphone Support Community Has Officially Launched appeared first on Ifbyphone.



    “Global reach. Local experience.” It’s a standard we hold ourselves to, so that your calls and messages to customers around the world deliver a familiar experience with recognizable phone numbers and with local language and call quality. Starting today, you can deliver this experience in Germany with both local and mobile German Twilio phone numbers.

    We’re fielding a lot of interest in global Twilio numbers these days, especially from Germany where there’s a vibrant developer community and a business climate that’s driven by constant innovation. In fact, use of Twilio by non-US businesses grew 240% this year. So we’ve been upping our coverage to meet this demand.

    Here’s what you need to know about Twilio’s new German phone numbers:

    • Instantly-available local German numbers: We now offer voice-capable local numbers across Germany. With a new Addresses API endpoint, we streamline the process that many countries, like Germany, require to associate a local phone number with a physical address. This means your new German number is instantly available with just a simple request from your web or mobile app or a few clicks in the Twilio account portal.
    • Dual-function voice and SMS mobile numbers: We are the first to offer virtual German phone numbers that support both voice and SMS. These are the dual-function mobile phone numbers you’ve been asking for to provide your customers with one consistent number for calling and messaging.

    If you already operate in Germany or plan to soon, then German phone numbers are a no-brainer. They give your communication a familiar local identity, leading to much higher response rates. Customers are also more likely to return your calls and messages because they pay lower local rates. Another advantage is that Twilio customers are already seeing higher SMS delivery rates in Germany from messages originating from German mobile numbers.

    But offering phone numbers in countries like Germany is just a start. Giving your customers a true local communications experience in Germany requires even more. That’s why today’s announcement is just one step in a series of new capabilities for providing the best in-country experience possible. Here are a few examples:

    • Local Quality: Direct carrier connectivity with Telefonica in Germany lets us offer you high quality voice calls and predictable messaging with real-time SMS delivery status. We also provide Global Low Latency (GLL), so your calls use local routes and deliver clear, uninterrupted voice.
    • Local Language: Convert text to speech (TTS) in German and 25 other languages for your automated voice greetings and commands. Send text messages in German and almost every other language with support for 116,000 characters through dynamic Unicode encoding.
    • Local Regulation: Twilio is compliant with the European Safe Harbor data privacy regulation. We also give you the control to delete call and message data using the Twilio API or the account portal.
    • Local Cost Controls: Prevent voice and messaging overuse in Germany or any country, with an API for usage-based triggers and explicit geographic permissions.

    You can start provisioning German numbers and taking advantage of these new localization features right away.

    Go ahead and get started. You’re just a few clicks away from provisioning your German phone number. We can’t wait to see what you build!

    Ahoy Deutschland: Introducing New Local Numbers In Germany



    December 09, 2014

    Today, the Prime Minister of Estonia, Taavi Rõivas, made the 5,500 mile trek from Estonia to visit Twilio HQ. While Estonia might seem like a world away, Prime Minister Rõivas is eliminating the geographic barriers that prevent entrepreneurs in the US from establishing companies abroad. At Twilio, Rõivas discussed Estonia’s E-Residency card program that gives anyone the ability to establish Estonian citizenship virtually, via a digital identification card.

    Along with 60 state officials and Estonian entrepreneurs, Prime Minister Rõivas met with Ott Kaukver, Twilio’s Vice President of Engineering.

    Ott talked with the group of state officials and Estonian entrepreneurs about the parallels between shipping new features and gathering customer feedback at Twilio, and managing the E-Residency program. “For them, it’s about how we work. What’s the philosophy? What do we do well with our small teams?” says Ott. Twilio R&D is built around small, empowered, self-contained teams that are responsible for delivering great experiences to customers and gathering feedback to iterate on their work. This structure isn’t common in large companies, but it’s essential to the way Twilio operates.