Driving Innovation Beyond The Big City

In late 2014, I had a chance to present on the main stage at the annual Code for America Summit in San Francisco. To the surprise of very few people, I was there to talk about cities and data.

Earlier that year, I had finished up my term as the first Chief Data Officer for the City of Philadelphia, one of the largest cities in the country. But my focus that day was not on big cities like Philadelphia; it was on smaller cities that had not yet started down the road of leveraging data to spur innovation and inform better policy decisions.

In 2014, the delta between what large cities were doing with data and what small and mid-sized cities were doing was pretty stark.

The Future of Civic Tech


Photo courtesy of Flickr user r2hox.

Data is the lifeblood of civic technology.

It is the source of all innovation and advancement in civic tech, and it is the basis for developing new ways of engaging with voters and taxpayers so that they may be informed about how government works and – hopefully – help make it operate more effectively.

Without data, civic technology doesn’t work.

The good news is that more and more governments are opening up their data – adopting new open data policies and standing up new portals to share their data with the public (and themselves). This moves the needle on civic tech but, increasingly, it also highlights the single biggest challenge facing the civic tech community.

We lack the well-developed, widely adopted data standards that would enable civic tech solutions to scale.

The Bomb, the Pill, and the Shot

Image courtesy of Flickr user Joe Flintham.

A few days ago, Tom Steinberg – the founder and former director of mySociety – wrote a fascinating piece on power that was meant for people developing civic technology.

In a post on Medium, Tom clearly describes the nature of power as it relates to technology and implores civic technologists to think more directly about the power shifts that can be caused by the development and adoption of new technologies. He uses the analogy of a nuclear bomb and the contraceptive pill to describe the different kinds of power shifts that can occur when technology becomes more broadly adopted.

Thinking Small on Civic Tech

Designing simple systems is one of the great challenges of Government 2.0. It means the end of grand, feature-filled programs, and their replacement by minimal services extensible by others.

— Tim O’Reilly, Open Government

The original idea of Government as a Platform is now almost a decade old. In the world of technology, that’s a long time.

In that time, people working inside and outside of government to implement this idea have learned a lot about what works well, and what does not. In addition, we’ve seen some significant changes in the world of technology over the past decade or so, and the way we develop solutions (both in the world of civic tech and outside of it) has changed fairly dramatically.

The power of the original idea for Government as a Platform continues to echo in the world of civic tech and open data. I have no doubt that it will for a long time to come.

But in 2015 what does Government as a Platform actually look like, and what should it look like going forward into the future? What are its component parts? How does it manifest in terms of actual infrastructure, both inside and outside government?

And, most importantly, who controls this infrastructure and has a say in how it is shaped and used?

Density and Destiny

Perhaps no process of government with such a significant impact on people’s lives is as opaque and as poorly understood as establishing the rules for land use.

Maybe the redistricting process. Maybe.

How land is zoned – the setting of specific requirements for how land may be used, and even how buildings and structures on land may be designed – is a complex process because discretion for setting zoning rules is generally delegated to local governments. My home state of New York provides a very comprehensive (though somewhat dated) guide for local communities that want to institute zoning rules. It’s a fascinating read.

Land use rules can fundamentally alter the character of communities, and there is an increasingly robust body of research that suggests that where you live – where you are born, grow up, access educational opportunities and job opportunities – helps determine your lot in life. There is also an abundance of information available that details how land use rules have added to the very serious problem of segregation in many communities in this country.


In a time when our collective attention is focused on higher offices at the state and federal level, it’s easy to forget about local government officials – particularly at the town and village level – and the work that they do. County legislatures, city councils, town councils and village boards all have a part to play in deciding how land gets used – and, by extension, where people get to live.

Enabling the Enterprise

It’s not often that I run across posts about enterprise architecture that get me excited. This one – by Tariq Rashid – did. Very much so.

This issue interests me because it’s one that, as a former state IT executive and policy advisor, I have personal history with. I also believe it’s an issue that will have a great impact on how successful governments are at redesigning services around users, and at embracing civic technology and open data.

On Data Standards for Cities

Creating open data standards for cities is really, really hard. It’s also really, really important.

Data standardization across cities is a critical milestone for the open data movement – one that must be reached to fully realize the potential benefits of openly publishing government data. More and more people are starting to recognize the importance of this milestone, and more and more energy will be devoted to creating new standards for city data in the months and years ahead.


The best example of what is possible when governments publish open data that conforms to a specific standard is the General Transit Feed Specification (GTFS). Developed by Google in partnership with the Tri-County Metropolitan Transportation District of Oregon (TriMet), GTFS is a data specification that is used by dozens of transit and transportation authorities across the country, and it has all of the qualities that open data advocates hope to replicate in other data standards for cities.

Transit authorities that publish GTFS data see an immediate tangible benefit because their transit information is available in Google Transit. Making this information more widely available benefits both transit agencies and transit riders, but the immediacy with which transit agencies can see this benefit makes GTFS particularly valuable. Data standardization is an easier sell to government officials when tangible benefits are quickly realized.

The GTFS standard is relatively easy to use – it’s a collection of zipped, comma-delimited text files. This is a pretty low bar for transit agencies being asked to produce GTFS data, and it’s an eminently usable format for consumers of GTFS data. In fact, the ease of use of GTFS has spawned a cottage industry of transit applications in cities across the country, and the specification continues to serve as the bedrock set of information for transit app developers.
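Because GTFS is just comma-delimited text, consuming it takes very little code. Here is a minimal sketch in Node.js – the column names (stop_id, stop_name, stop_lat, stop_lon) come from the GTFS stops.txt file, but the sample rows are made up for illustration, and a real feed would call for a proper CSV parser that handles quoted fields:

```javascript
// Parse a couple of rows from a GTFS stops.txt file into plain objects.
// Note: a naive split on commas works here because the sample values
// contain no embedded commas; production code should use a CSV library.
function parseStops(csvText) {
  const lines = csvText.trim().split('\n');
  const headers = lines[0].split(',');
  return lines.slice(1).map((line) => {
    const values = line.split(',');
    const stop = {};
    headers.forEach((header, i) => { stop[header] = values[i]; });
    return stop;
  });
}

// Illustrative sample data in the shape of a GTFS stops.txt file.
const sample = [
  'stop_id,stop_name,stop_lat,stop_lon',
  '1001,Main St & 1st Ave,39.9526,-75.1652',
  '1002,Main St & 2nd Ave,39.9530,-75.1660',
].join('\n');

const stops = parseStops(sample);
console.log(stops[0].stop_name); // prints "Main St & 1st Ave"
```

That low barrier to entry – a flat file a developer can pick apart in a dozen lines – is a big part of why GTFS spread the way it did.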

And perhaps most importantly, GTFS has given open data advocates a benchmark to use to advance other data standardization efforts. In many ways, GTFS made standards like Open311 possible.

So if data standardization is the future, and we’ve got at least one really good example to demonstrate the benefits to stakeholders and advance the concept, then what’s next? What’s the next data standard that will be adopted by multiple governments?

For the past year or so, there has been widespread interest in developing a shared data standard for food safety inspection data. On its face, this seems like a good candidate data source to standardize across cities. Most cities (certainly all large cities) conduct regular inspections of establishments that serve food to the public. The resulting information can be (but is not always) fairly succinct – usually a letter grade or numerical score – and can easily be delivered to an end user on a number of different platforms and channels. For many reasons, focusing on food safety inspection data as the next best data set to standardize across cities makes a lot of sense.
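To make that concrete, here is a sketch of what a standardized inspection record might look like and how simply it could be rendered for an end user. The field names here are illustrative assumptions, not a published specification:

```javascript
// Render a standardized food safety inspection record for display.
// The record shape (business_id, business_name, score, date) is an
// assumption for illustration, not an actual adopted standard.
function formatInspection(record) {
  return `${record.business_name}: ${record.score}/100 (inspected ${record.date})`;
}

const sample = {
  business_id: '12345',          // stable identifier assigned by the city
  business_name: 'Example Cafe', // made-up establishment
  score: 92,                     // numerical score, 0-100
  date: '2015-01-15',
};

console.log(formatInspection(sample));
```

The appeal is obvious: a record this compact can ride along on almost any platform or channel. The complications, as discussed below, come from what a score like 92 actually means in different cities.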

Just recently, the joint efforts of several different groups culminated in an announcement by the City of San Francisco and Yelp to deliver standardized food safety inspection data through the Yelp platform.

I was involved in the discussions about a data standard for food safety inspections, though the City I work for will not be adopting the newly developed standard (at least not yet). The process of developing the new food safety inspections data standard was illuminating. There are some important lessons we can take away from this work – lessons we can put to use as we work to identify additional municipal data sets for standardization.

For me, the biggest lesson learned from the work that went into standardizing food safety inspection data is understanding when applying a data standard might obscure important differences in how data is collected, or in what data means. By way of example, a data standard like GTFS does not obscure differences in the underlying data across different jurisdictions. A transit schedule broken down to its essence is about location and time – when will my bus be at a specific stop on a specific route. There is nothing inherently different about this information from jurisdiction to jurisdiction. Time and place mean the same thing everywhere.

But this is not always the case with food safety inspection data – particularly when this data is distilled into digestible (pun intended) scores or rankings. The methods for conducting food safety inspections can vary widely from city to city, and these differences mean that similar-looking scores can reflect very different things depending on where the data comes from.

Daniel E. Ho, a professor at Stanford University, conducted an in-depth study of the restaurant inspection systems in New York City and San Diego and found that the way in which inspection regimes are implemented can result in data that is often very different when compared across cities.

“While San Diego, for example, has a single violation for vermin, New York records separate violations for evidence of rats or live rats; evidence of mice or live mice; live roaches; and flies — each scored at 5, 6, 7, 8 or 28 points, depending on the evidence. Thirty ‘fresh mice droppings in one area’ result in 6 points, but 31 droppings result in 7 points.”

There also appears to be some debate in the medical community about the effectiveness of simplified grading for food establishments – i.e., using a letter grade or a numerical score. As noted in Professor Ho’s report – “…a single indicator has not been developed that summarizes all the relevant factors into one measure of [food] safety.”

All that said, if we’re going to advance the work of creating data standards across cities, we need to identify the right data sets to standardize. These candidate data sets should share the qualities that make GTFS work – immediate, tangible benefits for data producers and data users, and ease of use – while avoiding the less desirable qualities of food safety inspection data, which can obscure differences in data collection and data quality across jurisdictions.

Lately, I’ve been trying to advance the idea that data about the locations where flu shots are administered (or any other form of inoculation) could be standardized across cities. I’ve gotten some great input from data advocates and from other cities, like the cities of Chicago and Baltimore.

I’m hoping to continue pushing this idea in the months ahead, leading up to the next flu season. If this most recent flu season has shown us anything, it’s that data matters – I think there could be enormous benefit in having cities use a standard data format for this information before the onset of the next really bad flu season.

But whether it’s flu shot locations or some other data set, the future of open data lies in building standards that multiple cities and governments can adhere to. This is the next great milestone in the open data movement.

Advancing the movement toward this goal will be the most important work of the open data community in the months and years ahead.

[Note – photo courtesy of the San Diego International Airport]

Building an Open311 Application with Node.js and CouchDB

Lots of work is being done to finalize the next version of the Open311 API spec (officially referred to as GeoReport V2).

Almost a year ago I launched TweetMy311 – a service that lets people report non-emergency service requests using a smart phone and Twitter. Since then, a lot has changed – not only with the Open311 specification but with the tools available to build powerful Twitter-based applications.

In the last several months, I’ve spent a lot of time learning about and working with Node.js. Some of the things I did in the initial version of TweetMy311 (written in PHP) are so much easier to do in Node.js that I’ve decided to completely rewrite the application to use Node. In addition, since I initially launched TweetMy311, CouchDB (the NoSQL database on which the app is built) has also seen a lot of enhancements.

I’m expecting the overhaul I’m currently working on to make the application code a lot more efficient and easier to understand. Once this overhaul is complete, I intend to release a big chunk of it as open source software, so that anyone who wants to build a powerful Node.js/CouchDB-based civic app can do so.

It’s also exciting to see new cities get on board the Open311 bandwagon. The City of Boston is now supporting Open311 and has started to issue API keys to developers.

As part of my work to overhaul TweetMy311, I’ve developed a neat little Node.js library for interacting with the Open311 API. Since I just started to work with the Boston implementation, I thought it would be helpful to others interested in doing so to walk through a quick example.

If you want to run this example for yourself, you’ll need to have Node.js installed, specifically the latest version – v0.4.2. If you have the Node Package Manager installed, you can simply do:

npm install open311

Once you’ve done this, you can start making calls against a live Open311 endpoint. Even the most basic request – asking a city which service types it accepts – takes just a few lines of code, and you can use this module to build fully featured Open311 applications.

I’ll be doing some more blogging in the weeks ahead as the rewrite of TweetMy311 continues, and work on this phase of the GeoReport V2 spec is concluded.

Stay tuned!

The Key to Open Gov Success: Common Standards

There is a really good post on the state of open government in Canada and the use of specific data licenses by Canadian cities over on David Eaves’ blog.

His post raises an important issue for the open government movement, one that I believe will ultimately determine its success or failure – the adoption of common standards by multiple governments in support of open government. This is something I’ve touched on before.

Eaves’ recent post discusses the importance of common licensing standards for open data. Equally important, in my mind, are other standards like those being developed for Open311, and standards for data formats (like GTFS).

One of the intended outcomes of the open government movement is the development of applications built on top of open data and open APIs. One of the primary advantages for governments from this type of “civic development” stems from the fact that (with rare exception) governments are not in direct competition with each other, and face common challenges.

This means that solutions built to address issues in one jurisdiction or municipality can potentially provide a benefit in other municipalities. That is the theoretical underpinning for efforts like Civic Commons.

But for this to work, there must be mutually agreed upon standards for things like data formats, APIs and data licensure to name just a few. Crafting and adopting these standards is work. Hard work. And making this even more difficult is the fact that there are those who would benefit from the absence of such standards – software vendors and other service providers.

Without painting all such vendors with the same brush (there are some notable exceptions), the absence of standards allows vendors to lock customers into their particular solutions, and provides an opportunity for them to sell the same solution over and over to different governments.

I’m not against capitalism (far from it), but governments need to get wise to the fact that common standards for data and APIs are what will ultimately help deliver on the promise of open government.

And also that there are those that do not wish such standards to be adopted or for open government to succeed.