Building Multichannel Transit Apps with Tropo

This post is the third in a series about building an open source transit data application using GTFS data from the Delaware Transit Corporation.

In the first post, I described how to download the State of Delaware’s transit data and populate a MySQL database with it.

In the previous post, I walked through a process of setting up stored procedures for querying the transit data and setting up a LAMP application environment.

Now we’re ready to write code for our transit app!

Choosing a Platform

One of the most underappreciated developments accompanying the increasing amount of government data available in open formats is the vast array of new tools now available to developers. I’ve talked about this a lot in the past, but it bears repeating – it has never been easier to build sophisticated, multi-channel communication applications than it is now.
The number of options open to developers is truly exciting, but there are some platforms that rise above the rest in terms of ease of use and in what they enable developers to do. For this project, I will use the Tropo WebAPI platform.

The Tropo WebAPI has a number of advantages that will come in handy for our transit app project (and any other projects you’ve got in the works). You can write a Tropo app in one of several popular scripting and web development languages – Ruby, Python, PHP, C# and JavaScript (Node.js). There are libraries available for each language that make it easy to build Tropo apps and to integrate with the Tropo API. (Disclaimer – I’ve worked on several of these libraries.)

In addition, the real magic that Tropo brings to the table is the ability to serve users on multiple communication channels (phone, IM, SMS, Twitter) from a single code base. This is especially important for an application meant to service transit riders. These users may not have the luxury of sitting in front of a desktop computer in order to look up information on a bus route or schedule. They are much more likely to be traveling and using some sort of phone or mobile device. The Tropo WebAPI is perfect for our needs.
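Under the hood, a Tropo WebAPI application is just a web script that answers Tropo’s HTTP POST with a JSON payload. As a rough sketch (the language libraries mentioned above build this structure for you), a minimal “say” response might be assembled like this:

```php
<?php
// Sketch of the JSON a Tropo WebAPI app returns. The official PHP
// library wraps this structure; it is built by hand here for clarity.
function buildSayResponse($message) {
    return json_encode(array(
        "tropo" => array(
            array("say" => array("value" => $message))
        )
    ));
}

// A script would echo this back in response to Tropo's HTTP POST.
echo buildSayResponse("Welcome to the transit info line.");
```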

Vivek Kundra, the former CIO of the District of Columbia and current CIO of the United States, has described the effort by governments to release data in open formats as “the democratization of data” – these efforts make previously hard-to-get or hard-to-use data available to everyone.

I like to describe platforms like Tropo and the various libraries that are available to use with it as “the democratization of application development” – these tools make building powerful communication apps simple for anyone who understands web development.

Building our Transit App

Before we can build our application, we need to decide what it will do.

For our purposes, this has already been determined by the stored procedures we built in the last post. Our transitdata database has two stored procedures – one to return the nearest bus stops to a specific address or location, and one to return the next bus departure times from a specific bus stop.

However, this series of posts is meant to inspire readers to build their own applications – now that you have transit data in a powerful relational database like MySQL you can query it any way you like. In addition, the SQL scripts and steps developed for this series of posts can certainly be used with the data from any other transit agency that uses the GTFS format. There are lots. Use your imagination – build whatever you find useful.

So now that we have some idea of what we want our application to do, we need to select a development language. It will probably come as no surprise that for this example I’m going to use the PHP scripting language and the PHP Library for the Tropo WebAPI. PHP is a good match for Linux, Apache and MySQL – all technologies we used in the previous entries in this series of blog posts.

If you want some more detailed information on building PHP applications that run on the Tropo WebAPI platform, you can review a separate series of blog posts on this issue here.

To get the PHP Library for the Tropo WebAPI, you can download it and unpack it on your web server, or simply clone the GitHub repo.

Once you do that, you can grab the code for our demo application from GitHub as well.

In order to test this application, you’ll need to sign up for a free Tropo account – you can do that here. Once you are signed up, go to the Applications section in your Tropo account and set up a new WebAPI application that points to the location of our PHP script on your web server. You can see more detailed information on setting up a Tropo account here.


Note – You’ll also need an API key from Google Maps for geocoding addresses – get one here. Change the following line in the application to include your Google API key:

define("MAPS_API_KEY", "your-api-key-goes-here");
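For context, the script uses this key to geocode the address a user sends in. As an illustrative sketch only – the exact endpoint and parameters depend on which version of the Google geocoding service the app targets, and are assumptions here – a request URL might be built like this:

```php
<?php
// Illustrative only: builds a geocoding request URL using the
// MAPS_API_KEY constant defined in the application. The endpoint and
// parameters here are assumptions, not taken from the demo app.
define("MAPS_API_KEY", "your-api-key-goes-here");

function buildGeocodeUrl($address) {
    return "http://maps.google.com/maps/geo?q=" . urlencode($address) .
           "&output=csv&key=" . MAPS_API_KEY;
}

echo buildGeocodeUrl("100 W 10th St, Wilmington, DE");
```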

Once your Tropo account and application are set up, you can add as many different contact methods as you like – your Tropo application is automatically provisioned a Skype number, a SIP number and an iNUM.

To illustrate how our transit app will work, I’ve gone ahead and assigned a Jabber IM name to my app – add it to your friends/user list in Google Chat and you can use the app I’ve set up. Here’s what it looks like in my IM client:


As you can see, my first IM sends the address of a building in downtown Wilmington (actually, a building I used to work in). The app responds with the three closest bus stops and the distance (in miles) to each.

I then send the number of the bus stop I am interested in. The app responds with the next three buses to leave that stop, the route served by each and the number of minutes before each departs.

How cool is that!
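For the curious, the “minutes before each departs” figure boils down to simple arithmetic on GTFS departure_time strings, which are HH:MM:SS values that can run past 24:00:00 for after-midnight trips. The helper below is illustrative and not taken from the demo app’s code:

```php
<?php
// Illustrative helper (not from the demo app): minutes until a GTFS
// departure. departure_time is an HH:MM:SS string and may exceed
// 24:00:00 for trips that run past midnight.
function minutesUntilDeparture($departureTime, $nowSecondsIntoDay) {
    list($h, $m, $s) = explode(":", $departureTime);
    $depSeconds = $h * 3600 + $m * 60 + $s;
    return (int) floor(($depSeconds - $nowSecondsIntoDay) / 60);
}

// A 17:45:00 departure when it is currently 17:30:00
echo minutesUntilDeparture("17:45:00", 17 * 3600 + 30 * 60);
```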

I could very easily make this application more sophisticated, so that it delivers content tailored to specific channels (e.g., IM vs. phone), but I want to keep things simple for now.

In the next blog post of this series, we will introduce some additional tools, including Google Maps and the new hotness in cloud telephony – Phono.

Stay tuned!

Democratizing Transit Data with Open Source Software

Democratizing government data will help change how government operates—and give citizens the ability to participate in making government services more effective, accessible, and transparent.

Peter Orszag, OMB Director

This post is a continuation in a series on building a transit data application using GTFS data recently released by the State of Delaware.

If you missed my first post, go back and check it out. You can get a MySQL database loaded up with all of the Delaware GTFS data in just a couple of minutes. Once you do that, you’ll be ready to follow along.
Continuing our work from the last post, we’ll finish building out our database and set up an environment to run a web application – for the purposes of the demo app I’m building for this series, I’ll assume you have a standard LAMP setup to work with.

Finish the Database Setup

In the last post, we downloaded the GTFS data from the State of Delaware, unzipped it and loaded it into a MySQL database. Now, we need to set up some stored procedures so that we can extract data from our MySQL database and present it to an end user.

You can see the stored procedures I created for this demo application on GitHub. To load them into our shiny new database, simply run:

  ~$ wget
  ~$ mysql -u user_name -p transitdata < dartfirststate_de_us_procs.sql

That’s it!

If you look at these procedures, you’ll see that they are set up to answer two different questions from users. The first one – getDepartureTimesAndRoutesByStopID – will query our database and get a set of routes and departure times by the ID of a transit stop. The other – GetClosestStopsByLocation – accepts a lat/lon and returns the stop ID and name of the transit stops closest to the requesting location.

In practice, you can see these two procedures working in tandem – the latter procedure would be used by someone wishing to find the transit stop closest to their present location. The former would provide information on the next buses to reach that stop, the routes they serve and the scheduled departure times from that location.
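For a sense of what a “closest stops” procedure has to compute, here is the standard haversine great-circle distance formula in PHP. This is a sketch of the math only – the actual stored procedure may calculate distance differently:

```php
<?php
// Sketch of the math only -- the stored procedure may differ.
// Standard haversine great-circle distance between two points, in miles.
function distanceInMiles($lat1, $lon1, $lat2, $lon2) {
    $earthRadiusMiles = 3959;
    $dLat = deg2rad($lat2 - $lat1);
    $dLon = deg2rad($lon2 - $lon1);
    $a = pow(sin($dLat / 2), 2) +
         cos(deg2rad($lat1)) * cos(deg2rad($lat2)) * pow(sin($dLon / 2), 2);
    return $earthRadiusMiles * 2 * atan2(sqrt($a), sqrt(1 - $a));
}

// Two points a few blocks apart in downtown Wilmington, DE
echo distanceInMiles(39.7459, -75.5466, 39.7392, -75.5398);
```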

There are certainly many more potential queries that could be used to extract valuable information from the GTFS data in our database, but these two should suffice for our demo application. Also, both are pretty well suited for use from a text messaging (SMS) application, which is what we’ll build in the last post in this series.

Setting up the Application Environment

I assume for this series of posts that you have access to a LAMP server. This server should be hosted somewhere where it can receive HTTP posts from a third party platform (this is required in order to build an SMS application).

While it is not a requirement that you code your transit application in PHP, I will do so in this series. Feel free to use the development language of your choice in building your own application – just about every web development language can work with MySQL.

Before we start writing code, let’s finish a few last items. First, let’s create a user for our web application – remember to give this user only the privileges they need. For our demo application, the web app user only needs to EXECUTE stored procedures. So, we want to do this at the MySQL shell:

mysql> GRANT EXECUTE ON transitdata.* TO username@'localhost' IDENTIFIED BY 'password'; 

Be sure to replace the ‘username’ and ‘password’ above with values of your choosing. Now, let’s put our database access credentials in a safe and convenient place.

When writing a web application, I prefer not to store this information in my code (as a config item or declared constant). Instead, I like to keep this information in my Apache configuration.

If you’re using Apache on Ubuntu, you can typically just store this information in your VirtualHost file (located in /etc/apache2/sites-available/). Use the Apache SetEnv directive to set the values you want to store:

SetEnv TRANSIT_DB_HOST localhost
SetEnv TRANSIT_DB_USER username
SetEnv TRANSIT_DB_PASS password
SetEnv TRANSIT_DB_NAME transitdata

Again, be sure to replace the ‘username’ and ‘password’ above with the values used when creating your MySQL user. Once you have entered these values into your VirtualHost file, save it and reload Apache:

 ~$ sudo /etc/init.d/apache2 reload
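Under mod_php, values set with SetEnv are visible to your scripts via getenv() (and in the $_SERVER array), so the application can pick up its credentials without hard-coding them. A minimal sketch:

```php
<?php
// Sketch: reading the credentials set with Apache's SetEnv directive.
function getDbConfig() {
    return array(
        "host" => getenv("TRANSIT_DB_HOST"),
        "user" => getenv("TRANSIT_DB_USER"),
        "pass" => getenv("TRANSIT_DB_PASS"),
        "name" => getenv("TRANSIT_DB_NAME"),
    );
}

$config = getDbConfig();
// e.g. $db = new mysqli($config["host"], $config["user"],
//                       $config["pass"], $config["name"]);
```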

Now we’re all set to start writing code!

In the next post we’ll build a simple, yet powerful PHP-based SMS application that anyone with a cell phone can use to find a transit location nearest to them in the State of Delaware, and find out the departure times / routes from that location.

Stay tuned!

How to Build an Open Transit Data Application

Earlier this year, I had the chance to work with one of my state’s Senators to draft and pass a bill requiring the state’s transit agency to publish all of its route, schedule and fare information in an open format for use by third parties.

This bill was signed into law by the Governor a few months ago, and the data is now available (in GTFS format) on the Delaware Transit Agency’s web site.

My primary goal in working to get this law enacted was to raise awareness within my state about the potential for open government data to spur civic coding and the development of useful applications at little or no cost to the government. Now that my state actually publishes some open data (Hells to the yeah!), I think the next step for me is to provide some guidance on how to get started using it to build civic applications.

Hopefully, this will show others how easy it is and get them to try their hand at building a civic application.

(Note, transit data is an especially rich source for developing civic applications. For some background and more detail on this, see this post.)

In the next several posts, I’ll document one process for developing an open source transit data application using GTFS data from the Delaware Transit Agency. I’ll be sharing code and some examples that will help you get started if you feel like trying your hand at building a civic application.

Let’s get started!

Getting the Data

Now that the Delaware Transit Agency has published all of their route and schedule information, anyone that wants to use it can simply download it.

This zip file contains a collection of text files that conform to the GTFS specification – for a detailed description of file contents, go here. If you want to build a transit app with GTFS data, I recommend spending a little time becoming familiar with the layout of these files, and getting a sense of what the data represents.
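If you want to poke at the files from PHP before loading them into a database, each one is plain CSV with a header row of column names. A small sketch using fgetcsv (the sample row below is made up for illustration; real rows come from the downloaded zip):

```php
<?php
// Sketch: reading a GTFS file with fgetcsv. The sample row below is
// made up for illustration; real files come from the downloaded zip.
$sample = "stop_id,stop_name,stop_lat,stop_lon\n" .
          "1,MLK BLVD & ORANGE ST,39.7447,-75.5539\n";
file_put_contents("/tmp/stops_sample.txt", $sample);

$handle = fopen("/tmp/stops_sample.txt", "r");
$header = fgetcsv($handle);                // first row holds column names
while (($row = fgetcsv($handle)) !== false) {
    $stop = array_combine($header, $row);  // e.g. $stop["stop_name"]
    echo $stop["stop_name"], "\n";
}
fclose($handle);
```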

Setting up a Database

In order to use this data as part of an application, we’re probably going to need to get it into a database so that we can manipulate it and run queries against it. An easy way to do this is to import it into a MySQL database instance.

MySQL is a powerful open source database used in scores of different web applications, and it’s a solid choice for building a transit data application. In addition, the MySQL LOAD DATA INFILE statement is a powerful and easy way to populate a database with information from a text file (or multiple files).
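To give a sense of what such a script contains, here is a hypothetical LOAD DATA INFILE statement for stops.txt – the actual table and column definitions live in the script on GitHub:

```sql
-- Hypothetical example for stops.txt; the actual table and column
-- definitions are in the script on GitHub.
LOAD DATA LOCAL INFILE '/tmp/dartfirst_de_us/stops.txt'
INTO TABLE stops
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(stop_id, stop_name, stop_lat, stop_lon);
```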

I’ve created a SQL script to load Delaware transit data into a MySQL database. You can get this script from GitHub – it’s pretty simple, and you should feel free to modify it as your own personal preferences or requirements dictate. Just fork the Gist.

Combining this script with a couple of minutes on the command line will give you a MySQL database with all of the transit data loaded up and ready to use. The steps below assume that you have MySQL installed and running.

To install MySQL:
~$ sudo apt-get install mysql-server

To see if MySQL is running:
~$ pgrep mysql

Create a temporary location for the GTFS files:
~$ mkdir /tmp/dartfirst_de_us

Download the GTFS files from the Delaware Transit Agency website:
~$ wget

Unzip the individual text files to our temporary location:
~$ unzip -d /tmp/dartfirst_de_us/

Get the SQL script for loading GTFS files into MySQL from GitHub:
~$ wget

Invoke MySQL and pass in the SQL script (make sure you change ‘user_name’ to a valid MySQL user name):
~$ mysql -u user_name -p < dartfirststate_de_us.sql

That’s it!

Now, all of the data from the original text files has been loaded into a MySQL database called transitdata. You can start to construct queries to retrieve information from these tables to support the functionality for your application.

In the next post, I’ll walk through a few basic queries that can extract useful information from these tables. We’ll also lay the groundwork for a really cool mobile application that I will deploy for use by the public when this series of posts is complete.

Stay tuned!

A ‘Glass Half Full’ View of Government App Contests

An increasing number of people are starting to suggest that the concept of the “app contest” (where governments challenge developers to build civic applications) is getting a bit long in the tooth.

There have been lots of musings lately about the payoff for governments that hold such contests and the long term viability of individual entries developed for these contests. Even Washington DC – the birthplace of the current government app contest craze – seems to be moving beyond the framework it has employed not once, but twice to engage local developers:

“I don’t think we’re going to be running any more Apps for Democracy competitions quite in that way,” says Bryan Sivak, who became the district’s chief technology officer in 2009. Sivak calls Apps for Democracy a “great idea” for getting citizen software developers involved with government, but he also hints that the applications spun up by these contests tend to be more “cool” than useful to the average city resident.

App Contests Abound

This view is starting to crystallize against the backdrop of an ever greater number of app contests being held. At the recent Gov 2.0 Expo in Washington DC, Peter Corbett of iStrategy Labs (who helped launch the first government app contest in DC) gave a presentation that listed several dozen governments around the globe that had recently completed an app contest or were scheduled to soon start one.

And the biggest app contest to date – being sponsored by the State of California – is slated to begin soon. (Two fringe technology companies that you’ve probably never heard of – Google and Microsoft – are set to partner with the Golden State for this 800 pound gorilla of government app contests.)

So if app contests are being used in more and more places, and the size and scope of these contests keeps growing, what’s with all the hand wringing of late?

Lessons Learned from App Contests

My take on app contests is not an unbiased one. I’ve been a competitor in three different app contests (the original Apps for Democracy, the original Apps for America, and the NYC Big Apps competition) and was recognized for my work in them. Outside of contests, I’ve built applications using open government data and APIs for the cities of Toronto and San Francisco, and for the New York State Senate.

Clearly I am a supporter of the concept of the government app contest.

Having said that, though, I do think that those taking a more skeptical view of app contests are asking some important questions. The government app contest has come a long way since Vivek Kundra was in the driver’s seat in the DC technology office. It’s time to start asking how app contests can be improved.

But before we move on to that discussion, it is worth noting the lessons that have been learned over the last two years or so from government app contests.

First, governments and citizens benefit when governments release high value, high quality data sets in machine readable formats that are easily consumed by third party applications. Believe it or not, there is still debate in many places on this point. App contests prove the theory that publishing open government data provides tangible benefits.

Second, app contests prove that it is possible to engage and excite both developers and high level elected officials about open government data. The cause of open government can’t be anything but well served when these two groups are excited about it, and appealing to both successfully in equal measure is usually very challenging.

Third, and maybe most importantly, government app contests provide a sort of “petri dish” for government officials to see how government data might be used. They let governments solicit ideas from the private sector about the different ways that open data can be used in a manner that is low risk and low cost. Some of the proposed uses of government data that emerge from these contests – whether it’s tweeting a recorded message to your Congressman, or using an IM client to browse campaign finance data – might never be considered by governments but for them running an app contest.

These lessons aside, there are those who contend that the existence of app contest entries that have languished (or even been abandoned altogether) after a contest is over suggests that an app contest didn’t work well (or as well as it should have). I don’t think this is necessarily the case.

Look at it this way: once a government has decided to publish open data sets and enable the development of one single app by an outside developer, the marginal cost of the next app (from the perspective of government) is essentially zero.

Once a data set has been put into a machine readable format and staged for download so that it can be used by a developer or third party, what is the cost of the next download? Or the next 50, or 100? Essentially nothing.

The road to tech startup profitability and success is a long and hard one, and it’s littered with the hollowed out husks of ideas (some very bad, some very good) that for one reason or another just don’t make it.

Should we be overly concerned that the dynamic of government app contest entries is essentially the same as it is for any other sort of technology startup project? Personally, I don’t think so.

Making App Contests Better

I do, however, think there are some things that government app contest organizers can do a better job on.

Most notably, government engagement with app developers over the long-term has proved to be somewhat challenging. Gunnar Hellekson of Red Hat has observed the same phenomenon:

“..I would think that one of the desired outcomes [of an app contest] was an ongoing community of developers that are producing and maintaining applications like this — whether it’s for love, money, or fame. It would be a shame to see hard work like this die on the vine because we’ve lost the carrot of a cash prize.”

I don’t think this is an issue with developers necessarily – I know there is still lots of excitement around the data sets that have served as the foundation for app contests that are now over. I think the issue is that governments do not always have a plan for post-contest developer engagement.

Once the prizes are given out, and the award ceremony is over, there are no plans or strategies in place to keep developers engaged over the long haul. I do not believe this is an issue of money – not every developer is looking for a cash prize, and there are some good examples of government agencies (MassDOT and BART among them) who do a pretty good job of keeping developers engaged without contests.

I also think that a greater emphasis could be placed in app contests on developing reusable components (as opposed to user-facing solutions) that can be released as open source software and used by anyone to consume data or interact with a government API. I’m talking specifically about things like open source libraries for interacting with the Open311 API – tools and libraries specifically designed to make it easier to use open government data.

The easier it is to use government data and APIs, the more people will do it; and the more reusable components that come out of app contests as a by-product, the less angst there will be about projects that don’t remain viable long-term. If one of the requirements of entry is the use (or reuse) of common components, even contest entries that fizzle out down the road will have made a tangible contribution to the open data effort.

I think with a few simple changes, app contests can continue to be used as an effective tool by governments to encourage the development of cutting edge applications powered by “democratized” government data.

Building an Open311 Application

Earlier this year, I had an idea to build a Twitter application that would allow a citizen to start a 311 service request with their city.

At the time, there was no way to build such an application as no municipality had yet adopted a 311 API that would support it (although the District of Columbia did have a 311 API in place, it did not – at the time – support the type of application I envisioned).

That changed recently, when San Francisco announced the deployment of their Open311 API. I quickly requested an API key and began trying to turn my idea into reality.

My idea resulted in an application that I soft launched last week. TweetMy311 is now live and can be used in the City/County of San Francisco to report 311 service requests. The project website has a detailed description of how it works, but it’s very close to my original idea.
More good news on the Open311 front came recently when it was announced that San Francisco and the District of Columbia had come to agreement on a shared Open311 standard. This means that apps built to work with the San Francisco 311 API will also work with the 311 API in Washington DC. I’m working on enabling TweetMy311 for Washington DC now, and hope to have this service live there in a few weeks.

Ultimately, I hope people use my application, that they like it, and that it makes it easier to report an issue to their municipality. I did, however, have some other motives in developing this application that I think are equally important.

Are You Experienced?

Since 311 APIs are rare, and (right now) applications that use 311 APIs are also rare, I think there is value in being able to capture the experience of developing an Open311 application from scratch. This information can provide tremendous value to the governments that deploy 311 APIs (what works, what doesn’t, what can be improved, etc.), and for developers thinking about building an Open311 application.

I hope to use TweetMy311 to provide feedback to governments that deploy 311 APIs (and to those thinking about deploying one) so that they can get a sense of how the experience works from a developer that has used one. At the end of the day the ease of use of an API, the quality of documentation, the ability to test applications in a meaningful way and a number of other factors will determine how many developers decide to take the step and become a “civic coder” by building an Open311 application.

Getting to Open

For me, the use of open source technologies in TweetMy311 was important. This project provided a great opportunity to learn more about a technology that I have become fascinated with of late – CouchDB. TweetMy311 is a NoSQL application that uses CouchDB at its core. It runs on Ubuntu Linux with Apache and was built with the PHP scripting language (I guess that makes it the CLAP stack – CouchDB, Linux, Apache, PHP).

Building with open source technologies was important because I hope to be able to share the code I have developed with interested governments that want to learn how an Open311 application is put together. I also believe it’s important because I think the Open311 initiative can be a great mechanism for encouraging the use of open source technologies.

Leading up to this project, I developed a small PHP library for interacting with the San Francisco Open311 API. I make use of this library in TweetMy311 and any other developer that wants to use it in their project is free to do so. I plan on branching this library soon so that it can work with the new version of the Open311 standard.
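To sketch what talking to an Open311 endpoint involves: a new service request is an HTTP POST of form-encoded parameters. The endpoint URL and service code below are placeholders, not San Francisco’s actual values – consult the API documentation for the city you are targeting:

```php
<?php
// Sketch of an Open311 service request body. The service code and
// endpoint below are placeholders, not San Francisco's actual values.
function buildServiceRequest($apiKey, $serviceCode, $lat, $lon, $description) {
    return http_build_query(array(
        "api_key"      => $apiKey,
        "service_code" => $serviceCode,
        "lat"          => $lat,
        "long"         => $lon,
        "description"  => $description,
    ));
}

$postBody = buildServiceRequest("your-key", "006", 37.76, -122.42,
                                "Pothole on Market St");
// $ch = curl_init("https://open311.example.gov/v2/requests.xml");
// curl_setopt($ch, CURLOPT_POST, true);
// curl_setopt($ch, CURLOPT_POSTFIELDS, $postBody);
echo $postBody;
```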

Give it a twhirl

So if you live in San Francisco and you want to give TweetMy311 a twhirl, check out the description on the project website. I’d appreciate any feedback – positive or negative – because ultimately I think it will make the project better.

I had a great experience developing TweetMy311, and I learned a lot. I’m looking forward to sharing my experience with interested governments and other developers.

Making Democratic Participation Frictionless

This week, I had the pleasure of presenting at the Emerging Communications Conference & Awards (eComm) event in San Francisco.

I gave a presentation on the convergence of two powerful trends that promise to deliver more and more choices to people in how to communicate, interact and transact with their governments. The first is the growing trend toward more transparent government. By this I mean the efforts by governments around the globe and across this country to release meaningful, high-value data sets in formats that are designed to be used by third-party developers and applications.

Photo by James Duncan Davidson

The second is the proliferation of more and more powerful, easy to use tools for developers to build mobile applications – applications that use telephony, SMS, IM and social networking interfaces for interacting with users. I’ve discussed both of these trends at length on this blog in the past, but it was nice to bring them together in one succinct, focused presentation. These two trends are powering disruptive changes, and will have a significant effect on how citizens communicate with their governments in the future.

By enabling the development of new, more sophisticated, more powerful applications, these two trends will enable a greater array of options for citizens in how they interact with government. This choice is central (to my way of thinking) to a vibrant democracy. Healthy democracies are those where participation becomes “frictionless.” Frictionless participation requires choices – the more options people have in how they participate in their democracy, the more likely they are to participate and the more active they will become.

I think this is an important discussion, and I hope to have it with more people. You can view the presentation I gave below, or follow this link to SlideShare.

I’m looking forward to watching these two trends develop, and to seeing the good things that come out of their convergence.

OpenGov APIs: Interfacing with Open Government

There has been lots of good talk (and a good deal of action) lately around open government APIs at events like Transparency Camp, Where 2.0 and on the Twitters.

So, as a prelude to a talk I’ll be giving at eComm next month, I wanted to write a post surveying the landscape of recent government API developments, and also to describe evolving efforts to construct standards for government APIs.

A Rundown of Recent State and Local API Developments

At Transparency Camp in DC last weekend, Socrata – a firm that hosts open data sets for governments – open sourced their API for accessing and querying public data. The Socrata Open Data API (or SODA) is a specification for running queries against public data sets. Currently, Socrata hosts data sets for the City of Seattle and others – code samples for working with the SODA spec can be found on Github.

The Open311 API recently implemented by the City of San Francisco (and being implemented by others) got some well deserved attention at the recent Where 2.0 conference. Other cities are starting to take note, and some (like Edmonton and Boston) look set to be next in line.

One of the early adopters of government APIs – the NY Senate – recently announced a new release of their OpenLeg API, which includes some important changes. Today the NY Senate remains one of the few (if not the only) state legislative bodies to adopt an API to open up access to legislative information and proceedings, but others will hopefully soon follow. (Certainly the work done in Albany by NY Senate CIO Andrew Hoppin and his team has opened the door for work on other government APIs.)

That’s a lot of good stuff in just the last few weeks – I’ve probably missed some stuff, but I’m sure there is more to come in the weeks and months ahead.

Towards API Standards

The work being done on the Open311 API, the OpenMuni Project, and certainly the move by Socrata to open source the SODA spec will have significant implications for the open government data movement.

Standards for open data and APIs will make it easier for developers to build things – an app that works for one municipality can work for others if both adhere to a common standard that the app can run against. But they’ll also make it easier for governments to open up their data – standards will offer governments assurance that the time and effort they expend to maintain and publish data or stand up APIs will provide the most return on investment.

The move towards open data and government API standards is an important one that may influence the long-term success of the open government movement.

What’s Next?

As these standards develop, and as more and more municipalities start to embrace open data, we’ll move closer to the idea of government as a platform.

More and more open data will be published by governments in this country and others. These newly opened data sets may be hosted on infrastructure maintained by governments, or by third parties like Socrata. Enterprising governments in different regions or states may decide to team up and jointly host data that is of interest or value to constituents served by multiple governments or jurisdictions.

The applications that allow citizens to communicate with governments and consume public services will increasingly be built outside of government. (By outside, I mean outside the control of government and the government procurement framework.) Governments will increasingly become the collectors and maintainers of data and information and will focus less on building applications that use such data (or contracting for such applications to be built).

The applications built to consume public data and communicate with government will increasingly be designed as multitenant applications, able to service constituents in multiple jurisdictions that adhere to common data or API standards. They will also be built using more open source components and Web 2.0 technologies.

And (hopefully) the ranks of civic coders will continue to swell, as technologists looking to “scratch their own itch” are empowered to make a difference far beyond their own wants or needs.

All hail the transformative power of standards!

NoSQL Telephony with Tropo and CouchDB

In the last two posts, I’ve provided a basic overview of how to create cloud telephony applications using the Tropo platform and CouchDB.

Apache CouchDB Logo

In the first post of this series, I walked through a quick install of CouchDB and provided information on getting a Tropo account set up. In the second post, we created a simple auto attendant Tropo script in PHP that populates a CouchDB database with a call record for each inbound call that is transferred.

I’ll conclude the series by showing how to retrieve information from a CouchDB instance for use in a cloud telephony application, and by talking about design documents. This post will also introduce the reader to the concepts of CouchDB Views and Show Functions – powerful tools that can be harnessed to create truly cutting-edge cloud phone apps.

First, let’s create a CouchDB database to hold our call settings.

Creating a Call Settings Database

As mentioned in the previous CouchDB posts, you can create a new call settings database using curl from the command line, or using the Futon GUI.

$ curl -X PUT http://your_new_couchdb_ip:5984/call_settings

You should see a response from CouchDB like this:

{"ok":true}
You can add a record to the call settings database the same way. This time, however, we’ll append the URL for our CouchDB database with a document ID – in this case ‘1000’ – the extension that a caller to our cloud telephony app will dial. We’ll use the document ID and the CouchDB REST API to get all of the settings we’ll need to conduct the transfer – these settings can be seen in the document structure below (feel free to add others to meet your needs or preferences).

$ curl -X PUT http://your_new_couchdb_ip:5984/call_settings/1000 -d '{"first_name":"Joe","last_name":"Blow","phone":"17777777777","title":"Master of Disaster","ring_tone":"audio/ring.wav"}'

You should see a response from CouchDB like this (the revision value will differ):

{"ok":true,"id":"1000","rev":"1-..."}
Let’s add a few more documents to our call settings database (replacing the telephone numbers below with real ones that you want callers to transfer to) and then view all of the documents that we have created.

$ curl -X PUT http://your_new_couchdb_ip:5984/call_settings/2000 -d '{"first_name":"Harry","last_name":"Smith","phone":"18888888888","title":"President of the World","ring_tone":"audio/ring.wav"}'

$ curl -X PUT http://your_new_couchdb_ip:5984/call_settings/3000 -d '{"first_name":"Martin","last_name":"Scorsese","phone":"19999999999","title":"The Departed","ring_tone":"audio/ring.wav"}'

You can view all of the documents in a CouchDB database using the HTTP GET method:

$ curl -X GET http://your_new_couchdb_ip:5984/call_settings/_all_docs

You should see a response from CouchDB like this (revision values will differ):

{"total_rows":3,"offset":0,"rows":[
{"id":"1000","key":"1000","value":{"rev":"1-..."}},
{"id":"2000","key":"2000","value":{"rev":"1-..."}},
{"id":"3000","key":"3000","value":{"rev":"1-..."}}
]}
Now we need to modify our Tropo PHP script to retrieve the settings we want to use with each transferred call.

Note: for now, we’ll keep the logic simple – if a caller enters an extension that does not exist, we’ll get a specific HTTP response back from CouchDB – something in the 400 class of responses. If this happens, we’ll just end the call. In the real world you’d want to do something a little friendlier, but you can sort that out when you build your own cloud telephony application. 😉

Modifying the Tropo Script

So, our new Tropo script looks like this:

Note that the getPhoneNumberByExtension() method no longer returns a hard-coded phone number – it uses the 4-digit extension entered by the caller to access our CouchDB database via the REST API. The response from CouchDB is a document in JSON format that we can easily parse using PHP’s handy json_decode() function.
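For readers who prefer JavaScript (remember, Tropo also supports Node.js), the lookup boils down to an HTTP GET against /call_settings/{extension} followed by a JSON parse. Here is a hypothetical sketch – the function name mirrors the PHP helper, and the status and body arguments stand in for the HTTP call:

```javascript
// Hypothetical sketch: parse a CouchDB response for a call settings document.
// In a live app, `status` and `body` would come from an HTTP GET against
// http://your_new_couchdb_ip:5984/call_settings/{extension};
// a 4xx status means the extension does not exist.
function getPhoneNumberByExtension(status, body) {
  if (status >= 400) {
    return null; // unknown extension - the app should end or redirect the call
  }
  var settings = JSON.parse(body); // CouchDB returns the document as JSON
  return settings.phone;
}

// Example with a document shaped like the ones PUT into call_settings above.
var body = JSON.stringify({
  _id: "1000",
  first_name: "Joe",
  last_name: "Blow",
  phone: "17777777777",
  title: "Master of Disaster",
  ring_tone: "audio/ring.wav"
});
console.log(getPhoneNumberByExtension(200, body)); // "17777777777"
console.log(getPhoneNumberByExtension(404, "{}")); // null
```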

I’ve also modified the value of the $callLog variable to correctly capture some of the variables exposed in the Tropo environment (i.e., the session ID of the call, and the caller ID – see this thread for more information).

So now we have a working cloud telephony application built on Tropo that uses CouchDB to get its call settings, and also to write a call record for billing, reconciliation, etc.

As cool as this is, there is still a lot more we can do with CouchDB in our cloud telephony apps. Note the constants declared at the top of the Tropo script – the last two are blank; one for a design document name, and one for a show function.


Let’s talk about those concepts now, and explore how they could be used in a cloud telephony application.

Getting more out of CouchDB – Design Documents, Map/Reduce and Show Functions

As the title of this post suggests, we’re building cloud-based phone applications without SQL. CouchDB doesn’t use SQL – instead, it uses a Map/Reduce framework to index documents in a database.

Map functions can be used to emit a key-value listing of documents in a CouchDB database. Reduce functions are used to aggregate the key-value pairs emitted by a Map function. Map/Reduce functions (or Views) live inside of a special document in a CouchDB database called a “design document”, which has a document ID prefixed with “_design/”.

For example, suppose we have a special design document in our database called “_design/extensions” with a View called “getExtensions” – our View is made up of a Map function and (optionally) a Reduce function. Let’s assume our View has only a Map function to return data on extensions with valid phone numbers to transfer a caller to.

function(doc) {
  if(doc.phone.length == 11 && doc.phone.substr(0,1) == '1') {
    emit(doc._id, doc.phone);
  }
}

Our Map function (which is written in JavaScript, and stored in our design document) has one parameter – doc. This function is called for each document in our database, and the doc parameter represents the document itself. As can be seen, we simply examine each document in the database to see if it has a valid phone number (11 digits, starting with 1).
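As noted above, a View can optionally include a Reduce function to aggregate whatever the Map function emits. A minimal (assumed) Reduce that simply counts valid extensions might look like the following – CouchDB calls it first over the emitted values, and then possibly again over partial results with rereduce set to true. (In a design document the function appears unnamed as the “reduce” member of the view; it is assigned to a variable here only so it can be exercised directly.)

```javascript
// Hypothetical Reduce function: count the extensions emitted by the Map.
// On the first pass, `values` holds the emitted phone numbers; on a
// rereduce pass, `values` holds the partial counts from earlier passes.
var countExtensions = function(keys, values, rereduce) {
  if (rereduce) {
    // Combine partial counts from previous reduce passes.
    return values.reduce(function(a, b) { return a + b; }, 0);
  }
  return values.length;
};

console.log(countExtensions(null, ["17777777777", "18888888888"], false)); // 2
console.log(countExtensions(null, [2, 1], true)); // 3
```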

Views are accessed using a specific URI structure (do note, however, that the REST API for querying Views can change significantly between CouchDB versions), and the response is a set of key-value pairs formatted as JSON.


$ curl -X GET http://your_new_couchdb_ip:5984/call_settings/_design/extensions/_view/getExtensions

You should see a response from CouchDB like this (the exact shape depends on what your Map function emits):

{"total_rows":3,"offset":0,"rows":[
{"id":"1000","key":"1000","value":"17777777777"},
{"id":"2000","key":"2000","value":"18888888888"},
{"id":"3000","key":"3000","value":"19999999999"}
]}
You can check to see if your Map function is working properly by adding a document with an invalid phone number.

$ curl -X PUT http://your_new_couchdb_ip:5984/call_settings/4000 -d '{"first_name":"Richard","last_name":"Kimble","phone":"4444444","title":"The Fugitive","ring_tone":"audio/ring.wav"}'

Accessing the getExtensions view will return the same results as before, as the phone number for the new document does not pass validation. Using design documents and Views, cloud telephony developers can use CouchDB to build grammars for user input which will significantly enhance the usability of the sample application we’ve used during the last few posts.

But there is even more potential with another piece of functionality in CouchDB – Show Functions. Show Functions also live in design documents, alongside Views. They allow a developer to return specifically formatted content from a CouchDB instance, not just data in JSON format.

A basic Show function that can be used to return information from our CouchDB database in the format of an SRGS grammar might look like this:

function(doc, req) {
  var grammar = '<?xml version="1.0"?><grammar xmlns="http://www.w3.org/2001/06/grammar">';
  grammar += '<rule id="R_1"><one-of>';
  grammar += '<item>' + doc._id + '</item>';
  grammar += '</one-of></rule></grammar>';
  return grammar;
}

Like Views, Show Functions are accessed using a specific URI structure:

$ curl -X GET http://your_new_couchdb_ip:5984/call_settings/_design/extensions/_show/{show_function}/{doc_id}
Note that the Show function above is different from the Map function discussed earlier in that it takes two parameters – doc and req. As before, the doc parameter represents the document the function is called against. The req parameter represents the HTTP request itself, including any query string parameters, which can be used inside the function to render output. So a Show function can be accessed using the above URI structure with an optional query string parameter appended as well.
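To illustrate how req can shape the output, here is an assumed variation that lets the requester choose between plain text and XML via a query string parameter (?format=text). The function name, parameter name, and output formats are all hypothetical; the document fields mirror the call settings documents created earlier:

```javascript
// Hypothetical Show function: `req` carries the HTTP request, and any
// query string parameters are available as req.query.
var showContact = function(doc, req) {
  if (req.query.format === "text") {
    // e.g. GET .../_show/contact/1000?format=text
    return doc.first_name + " " + doc.last_name + ": " + doc.phone;
  }
  // Default: a minimal XML rendering of the document.
  return "<contact><name>" + doc.first_name + " " + doc.last_name +
         "</name><phone>" + doc.phone + "</phone></contact>";
};

var contactDoc = { first_name: "Joe", last_name: "Blow", phone: "17777777777" };
console.log(showContact(contactDoc, { query: { format: "text" } }));
// Joe Blow: 17777777777
```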



I hope this series of posts has provided a helpful overview of CouchDB, with an emphasis on how it can be used to build cloud telephony applications.

Cloud telephony platforms like Tropo, CloudVox, CallFire and others provide enormous flexibility to developers in building and deploying sophisticated cloud telephony applications.

Pair these tools with CouchDB and you’ve got a powerful combination for building full featured, easy to maintain cloud-based phone apps.

Toronto Opens Government Data

The City of Toronto recently joined a growing fraternity of governments that release public data sets for developers and other interested parties to use to create interesting and useful mashups.

It’s gratifying to see more and more cities place an emphasis on releasing open data sets to the public. Toronto’s open data web site is still new, and while there aren’t a ton of data sets yet, some of those that are available are very interesting.

One data set provides the location of licensed child care centers and provides information on the number of spaces available for children of different ages. Since I recently joined the ranks of child care consumers in my own city, I thought it would be interesting to build a small app that lets a person search for child care centers within a specific postal code.

The app is quite simple, and is still rough in many ways, but it was completed within several hours and demonstrates how governments that release interesting and valuable information empower developers to build useful things. All of the source code for the app is up on GitHub.

The app can (currently) be accessed in any one of three ways:

  • Jabber Instant Message client (you can use GTalk for this) – simply add to your contacts list.
  • SMS – you can text a search request to (773) 273-9982.
  • Twitter – you can tweet a search request by sending a @reply to childcareto (e.g., @childcareto).

In order to use the app, you have to search using the 3 character forward sortation area (FSA) prefix from Toronto postal codes.

For example, if you send a tweet like this:

@childcareto M1N

You’ll get back the location of the first child care provider found, along with some instructions. If you send another tweet with the hashtag #next, you’ll get the next listing (if there is one). You can start a new search by simply tweeting #reset.

Same thing works with a Jabber IM client or SMS – just send the 3 character FSA you want to search in the body to start the search, and then use #next or #reset as needed.
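The routing logic behind those commands is simple. A sketch in JavaScript (with assumed function and field names – the app’s actual source is on GitHub) might look like this:

```javascript
// Hypothetical sketch of the message routing described above.
// A Toronto FSA is a letter, a digit, then a letter (e.g. M1N).
var FSA_PATTERN = /^[A-Z][0-9][A-Z]$/;

function handleMessage(text, session) {
  var msg = text.trim().toUpperCase();
  if (msg === "#NEXT") {
    session.index += 1;               // advance to the next listing
    return "next:" + session.fsa + ":" + session.index;
  }
  if (msg === "#RESET") {
    session.fsa = null;               // clear the current search
    session.index = 0;
    return "reset";
  }
  if (FSA_PATTERN.test(msg)) {
    session.fsa = msg;                // start a new search in this FSA
    session.index = 0;
    return "search:" + msg;
  }
  return "help";                      // anything else gets usage instructions
}

var session = { fsa: null, index: 0 };
console.log(handleMessage("M1N", session));   // search:M1N
console.log(handleMessage("#next", session)); // next:M1N:1
```

The same handler can sit behind the Jabber, SMS, and Twitter channels, which is exactly the multichannel advantage Tropo provides.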

I’m admittedly a little ignorant about Canadian postal codes, so if anyone up north checks this out and thinks there is a better way to do it, I would love to chat.

When I built this app, I imagined someone who might be moving to a new job or a new house in Toronto and being interested in nearby child care services. If you know the FSA of the new home or job location, it’s easy to do with this nifty little app.

While this first iteration is relatively simple, there are a lot of possibilities because the information in the data set is very compact. Compact data lends itself to a range of different user agents – I have phone-enabled similar types of information on other projects and provided multilingual support. If there is any interest in taking this further, I may pursue other communication channels for this app.

Once again, hats off to the City of Toronto for making this data set (and others) available. Any feedback on this app is welcome.

Lots of Gov 2.0 Potential in Twitter Geolocation

So the new Twitter hotness will be the ability to add locational data to individual Tweets. I’m not sure exactly when this new feature will go live, but it will require someone wishing to add locational data to their tweets to:

  1. Explicitly opt in to this feature by changing their Twitter account settings.
  2. Utilize a Twitter client that is location aware, and can add lat/long to specific Tweets.

Twitter currently has some limited geolocation support that utilizes the account-level location field, but there is no validation on what is entered, so it is not terribly reliable.
The imminent support for “geo-Tweets” holds enormous potential for governments if you think of Twitter as another communications channel that citizens can use to interact with government. (Clearly, I do.)

Many government services are tied to a specific location – parks, libraries, motor vehicle offices, unemployment offices, etc. – and there are lots of good examples of information that governments generate that are location-specific – road closures and construction delays, pollution sites, crime incidents, etc.

As the application I built to query legislative information from the NY Senate Open Leg API demonstrates, Twitter can be used as a powerful application interface. It’s easy to use, available to people on a variety of devices, and relatively easy for governments to set up. With the addition of locational data, Twitter will become an even more powerful interface for citizens to use when interacting with governments.

Now, if a citizen wants to use Twitter to find out the hours of operation of libraries in their city or town, they can get an answer that is specifically tailored to their location – a response from a government application telling them the hours (and the address) of the library closest to their current location.
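Under the hood, a location-aware responder like that only needs the lat/long from the geo-Tweet and a list of branch coordinates. A sketch of the nearest-branch lookup (the branch data here is made up purely for illustration):

```javascript
// Great-circle distance between two lat/long points (haversine formula).
function distanceKm(lat1, lon1, lat2, lon2) {
  var toRad = function(d) { return d * Math.PI / 180; };
  var R = 6371; // mean Earth radius in km
  var dLat = toRad(lat2 - lat1), dLon = toRad(lon2 - lon1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Made-up branch data for illustration only.
var branches = [
  { name: "Main Branch",  lat: 43.6532, lon: -79.3832, hours: "9-8" },
  { name: "North Branch", lat: 43.7615, lon: -79.4111, hours: "10-6" }
];

// Return the branch closest to the location attached to the Tweet.
function nearestBranch(lat, lon) {
  return branches.reduce(function(best, b) {
    return distanceKm(lat, lon, b.lat, b.lon) <
           distanceKm(lat, lon, best.lat, best.lon) ? b : best;
  });
}

var b = nearestBranch(43.65, -79.38);
console.log(b.name + " (" + b.hours + ")"); // Main Branch (9-8)
```

The reply the application tweets back would then just be the name, address, and hours of that nearest branch.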

Governments need to think about Twitter as an interface to their services and applications – one that will soon be able to support location-specific data and responses. There is a lot of potential here for those interested in advancing Gov 2.0.