Civic Innovations

Technology, Government Innovation, and Open Data

The Future of Civic Tech

Photo courtesy of flickr user r2hox.

Data is the lifeblood of civic technology.

It is the source of all innovation and advancement in civic tech, and is the basis for developing new ways of engaging with voters and taxpayers so that they may be informed about how government works and – hopefully – help make it operate more effectively.

Without data, civic technology doesn’t work.

The good news is that more and more governments are opening up their data – adopting new open data policies and standing up new portals to share their data with the public (and themselves). This moves the needle on civic tech but, increasingly, it also highlights the single biggest challenge facing the civic tech community.

We lack well-developed, widely adopted standards for data that will enable civic tech solutions to scale.

Almost 10 years on from the meeting of open government leaders in Sebastopol, CA that generated the core principles of open data, we still don’t have a robust collection of standards that allows new civic tech solutions to easily and efficiently scale across different jurisdictions.

To be fair, there are some standards for open data. And I can say – from personal experience – that a lot of work has gone into developing the standards we have today.

I was an early supporter of the Open311 standard and wrote several client libraries for using Open311 data – some going all the way back to V1 of the standard. I helped develop an open data standard for flu shot locations when I worked for the City of Philadelphia, and a standard for building permit data in my current job. I recently led a discussion on developing data standards at the 2015 Code for America Summit.
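For readers unfamiliar with Open311, a GeoReport v2 service request is just a small JSON document, which is part of why client libraries were straightforward to write. The sketch below parses a sample payload; the field names follow the GeoReport v2 spec, but the values are invented for illustration.

```python
import json

# A sample GeoReport v2 service request payload. Field names are from the
# Open311 spec; the values here are made up for illustration.
sample = """[
  {
    "service_request_id": "638344",
    "status": "open",
    "service_name": "Pothole Repair",
    "requested_datetime": "2015-04-14T06:37:38-08:00",
    "lat": 39.9526,
    "long": -75.1652
  }
]"""

def open_requests(payload):
    """Return (id, service_name) pairs for requests still marked open."""
    return [(r["service_request_id"], r["service_name"])
            for r in json.loads(payload)
            if r["status"] == "open"]

print(open_requests(sample))  # -> [('638344', 'Pothole Repair')]
```

Because the payload shape is standardized, the same few lines work against any compliant Open311 endpoint, which is exactly the scaling property the post is arguing for.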

And yet, despite all that, I don’t think that we’re any closer to solving the data standards riddle that will allow civic tech solutions to scale across towns, cities and states in a lasting and impactful way.

The Open311 standard hasn’t been updated in a while, and the lead organization behind it no longer exists. The Open Civic Data project also appears stagnant, and the lead organization behind it is currently in flux. The LIVES data standard has seen some adoption, but now appears to be driven exclusively by Yelp with little or no outside input. The GTFS standard – widely viewed as the high water mark for open data standards – is alive and healthy, and a fair amount of the current work on data standards seems focused on replicating the GTFS approach for new standards.
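Part of what makes GTFS such an attractive model is its simplicity: a feed is just a zip archive of plain CSV files. A minimal sketch of reading one of them, stops.txt, using only the Python standard library; the column names come from the GTFS reference, while the rows are invented for illustration.

```python
import csv
import io

# A snippet of a GTFS stops.txt file. Column names are from the GTFS
# reference; the rows are invented for illustration.
stops_txt = """stop_id,stop_name,stop_lat,stop_lon
1001,Market St & 5th St,39.9510,-75.1500
1002,Broad St & Spruce St,39.9467,-75.1650
"""

def load_stops(text):
    """Parse a stops.txt body into a dict keyed by stop_id."""
    reader = csv.DictReader(io.StringIO(text))
    return {row["stop_id"]: (row["stop_name"],
                             float(row["stop_lat"]),
                             float(row["stop_lon"]))
            for row in reader}

stops = load_stops(stops_txt)
print(stops["1001"][0])  # -> Market St & 5th St
```

Any transit agency’s feed can be consumed with the same loader, which is why a single app can serve hundreds of jurisdictions without custom parsing code.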

As a community, I don’t think we’re struggling with what kinds of standards to develop – I see some pretty broad consensus around the kinds of data standards that people are looking for. Instead, I think the largest open question on data standards is how they get developed. What organization(s) have the clout, impartiality and durability to bring together disparate interests and help craft a new data standard?

Governments, non-profits and private companies all have a stake in how data standards get developed – and each of these kinds of organizations has spearheaded at least one effort to develop a data standard. But I still feel like we’re searching around for the right model – one that can be used to craft standards that will be widely adopted, and that will be repeatable going forward.

If data really is the lifeblood of civic technology, then data standards are the key to scaling out civic tech solutions. I would argue very strongly that the ultimate success of the civic tech movement rests on our ability to develop widely adopted standards for open data.

Coming up on almost 10 years from that watershed meeting in Sebastopol, we’ve still got a lot of work to do.

3 responses to “The Future of Civic Tech”

  1. An interesting point that was not included in this post, but that surfaced in a subsequent Twitter conversation, is that the primary driver of the need for civic data standards (differences in the same kinds of data between different governments) is often a reflection of differences in an underlying government process.

    Different jurisdictions use different processes to inspect restaurants, issue permits, enforce parking regulations, license rental properties, etc. And these differences bubble up to the data that these processes throw off. The process of developing a data standard then can be seen as a way to try and reconcile (obscure?) the differences between the data generated by the same kind of process in various jurisdictions, allowing for easier interoperability and comparability.

    It strikes me that the difference in process we see between different jurisdictions in the operation of government can be viewed as a good thing – even a desirable one – in our federalist system. Variations in how local governments operate demonstrate responsiveness to local needs and values, and can provide choice for citizens and taxpayers. If you don’t like how jurisdiction X does something, you have the option to move to jurisdiction Y where they do it differently.

    But from a technology standpoint, this is suboptimal. Differences in data need to be reconciled, and different data formats need to be parsed to use in a single app that may serve multiple jurisdictions. In the world of technology, differences – in operating systems, browser versions or data formats – can add complexity and increase the risk of a software issue.

    It’s worth noting this issue as something that might make developing open data standards more complicated than developing other kinds of standards. A sort of civic paradox – a scenario that produces an outcome that can be viewed as favorable from a democratic governance perspective, but lousy from a technology perspective.

  2. GTFS is something of an awkward poster child for civic data standards. It was originally the “Google Transit Feed Specification”, and was renamed to “General” after the fact. While it’s widespread, it’s idiosyncratic enough that using it feels a lot like imitating Portland, not using a well-designed standard. That said, it was a tremendously successful collaboration between Google and TriMet, and is very useful.

    The role of standards bodies seems to have transitioned from ‘where standards come from’ to a place of mediation and semi-democracy between the orgs/cities/companies/people who write standards. In this pattern the question is less “why aren’t standards bodies making something” and more “how do we crystallize what we’re doing and bring it to a standards body to synchronize it with everyone else’s technique?”

    Finally, I think that requirements for usage or creation of standards can paralyze open data efforts. Publishing in Excel, CSV, or JSON without a domain-specific standard on top makes data less convenient to use, but never impossible. If the information is there and the format is either simple or standardized on some level, it can be transformed into any other format. The question of how closely the standard should match the domain is one of degrees – JSON is general, a 311 standard is specific, a health report standard is even more specific. The more specific, the less general, and the more chance you have of overfitting the data with a strict standard based on a different city’s needs.

  3. I don’t know what the solution is to getting more standards. There seem to be a relatively small number of people with the skills and experience to develop them, and an even smaller number of candidate organizations to host, curate, promote and enforce the standards. Because the pool is so small, it’s a challenge for those governments and funders who understand the importance of standards to choose initiatives to invest in. Standards also fail really easily, and really slowly, so both funders and initiatives are less likely to try a lot of things, get quick feedback, and learn from what sticks; the timeline is too long and expensive. As a result, you often see the same people involved in standards initiatives (myself included), which limits the number of people who will eventually acquire the skills and experience to lead initiatives. I’ve been thinking about what a curriculum for standards development would look like, as this may be one way out of this problem.

    I know Andrew Nicklin at the Center for Government Excellence (to which you linked) is interested in exploring how we can lower the barrier to standards development and reduce the time required. I’m concerned about how such a process could maintain good quality and durability for the standards it produces, but it’s an avenue worth exploring – especially in the absence of any compelling alternative.

    By the way – Open Civic Data may be dormant at the moment, but the standard it largely adopted, Popolo (which I lead), is still actively used and developed, by governments, corporations and especially civil society.
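The point in the second reply above – that data published in a simple format can always be mechanically transformed into another – can be sketched in a few lines of Python using only the standard library. The column names here are invented for illustration.

```python
import csv
import io
import json

# Plain CSV with no domain-specific standard on top of it. The column
# names are invented for illustration.
raw_csv = """permit_id,issued_date,address
P-100,2015-06-01,123 Main St
P-101,2015-06-02,456 Elm St
"""

def csv_to_json(text):
    """Convert a CSV string into a JSON array of row objects."""
    rows = list(csv.DictReader(io.StringIO(text)))
    return json.dumps(rows)

records = json.loads(csv_to_json(raw_csv))
print(records[0]["permit_id"])  # -> P-100
```

The format conversion is trivial; what no amount of code can do automatically is reconcile the *semantic* differences between jurisdictions, which is the harder problem the post and the comments are circling.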


About Me

I am the former Chief Data Officer for the City of Philadelphia. I also served as Director of Government Relations at Code for America, and as Director of the State of Delaware’s Government Information Center. For about six years, I served in the General Services Administration’s Technology Transformation Services (TTS), and helped pioneer their work with state and local governments. I also led platform evangelism efforts for TTS’ cloud platform, which supports over 30 critical federal agency systems.
