On Barriers to Adopting New Technologies

Interesting story in the Washington Post describing a survey of federal government technology managers.

The big takeaway from this survey seems to be that the majority of IT managers are enthusiastic about new technology and can see how it helps them do their jobs more effectively, but they question the government’s ability to keep pace with the private sector.

Hacking the RFI Process

The Seattle Police Department recently held a hackathon.

When the event was initially announced, there was a fair bit of skepticism in the civic technology community with more than a few people stating that the event would likely not be a productive one, for either the Seattle Police or those that chose to attend. I was one of those skeptics – I thought the event was too narrowly focused and that the problem that attendees would be working to help resolve wouldn’t appeal to a broad enough audience for it to work as the organizers probably hoped.

Data is Law

“…[U]nless we understand how cyberspace can embed, or displace, values from our constitutional tradition, we will lose control over those values. The law in cyberspace – code – will displace them.”
— Lawrence Lessig (Code is Law)

In his famous essay on the importance of the technological underpinnings of the Internet, Lawrence Lessig described the potential threat if the architecture of cyberspace was built on values that diverged from those we believe are important to the proper functioning of our democracy. The central point of this seminal work seems to grow in importance each day as technology and the Internet become more deeply embedded into our daily lives.

But increasingly, another kind of architecture is becoming central to the way we live and interact with each other – and to the way in which we are governed and how we interact with those that govern us. This architecture is used by governments at the federal, state and local level to share data with the public.

Unexpected Satisfaction from Falling Short

When 2013 closed out, I made a bold prediction.

As it turned out, I came nowhere near writing and publishing my targeted number of blog posts, though I did write more on this site in 2014 than the year before (17 posts in 2013 vs. 25 in 2014). Adding up the number of posts for all of the other sites that I have written for this year (there are several), I’d say my total is around 50 original posts. Not bad, but well short of my original goal.

Realtime Open Data

I’ve been thinking a lot lately about data being collected about cities through remote sensor networks.

It’s never been easier to build DIY sensors, and some cities are starting to look seriously at how sensor data can inform better policy decisions and better investment of public resources.

It strikes me that this is a very relevant issue for those in the open data movement, as the data generated by urban sensor networks is likely to be mashed up with publicly available data from cities on crime, land use, service requests and a host of other things to drive better decision making. There’s a natural connection between the kinds of data we find in open data portals and the kind of data that is generated by emerging sensor networks.

Open Data Beyond the Big City

This is an expanded version of a talk I gave last week at the Code for America Summit.

An uneven future

“The future is already here – it’s just not evenly distributed.”
— William Gibson (The Economist, December 4, 2003)

The last time I heard Tim O’Reilly speak was at the Accela Engage conference in San Diego earlier this year. In his remarks, Tim used the above quote from William Gibson – it struck me as a pretty accurate way to describe the current state of open data in this country.

Open data is the future – of how we govern, of how public services are delivered, of how governments engage with those that they serve. And right now, it is unevenly distributed. I think there is a strong argument to be made that data standards can provide a number of benefits to small and mid-sized municipal governments and could provide a powerful incentive for these governments to adopt open data.

One way we can use standards to drive the adoption of open data is to partner with companies like Yelp, Zillow, Google and others that can use open data to enhance their services. But how do we get companies with tens and hundreds of millions of users to take an interest in data from smaller municipal governments?

In a word – standards.

Why do we care about cities?

When we talk about open data, it’s important to keep in mind that there is a lot of good work happening at the federal, state and local levels all over the country. Plenty of states and even counties are doing good things on the open data front, but for me it’s important to evaluate where we are on open data with respect to cities.

States typically occupy a different space in the service delivery ecosystem than cities, and the kinds of data that they typically make available can be vastly different from city data. State capitols are often far removed from our daily lives and we may hear about them only when a budget is adopted or when the state legislature takes up a controversial issue.

In cities, the people who represent and serve us can be our neighbors – the guy behind you at the car wash, or the woman whose child is in your son’s preschool class. Cities matter.

As cities go, we need to consider carefully the importance of smaller cities – there are a lot more of them than large cities, and a non-trivial number of people live in them.

If we think about small to mid-sized cities, these governments are central to providing a core set of services that we all rely on. They run police forces and fire services. They collect our garbage. They’re intimately involved in how our children are educated. Some of them operate transit systems and airports. Small cities matter too.

Big cities vs. small cities on open data

So if cities are important – big and small – how are they doing on open data? It turns out that big cities have adopted open data with much more regularity than smaller cities.

If we look at data from the Census Bureau on incorporated places in the U.S., along with information from a variety of sources on governments that have adopted open data policies and made open data available on a public website, we see the following:

Big Cities:

  • 9 of the 10 largest US cities have adopted open data.
  • 19 of the top 25 most populous cities have adopted open data.
  • Of cities with populations > 500k, 71% have adopted open data.

Small Cities:

  • There are 256 incorporated places in the U.S. with populations between 100k and 500k.
  • Only 39 have an open data policy or make open data available.
  • A mere 15% of smaller cities have adopted open data.

The data behind this analysis is here. As we can see, it shows a markedly different adoption rate for open data between large cities (those with populations of 500,000 or more) and smaller cities (those with populations between 100,000 and 500,000).
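The comparison above boils down to a simple calculation: bucket cities by population band and compute the share in each band that has adopted open data. Here is a minimal sketch of that analysis in Python – the city records are illustrative placeholders, not the actual Census/open data dataset referenced in this post.

```python
def adoption_rate(cities, lo, hi):
    """Share of cities in the population band [lo, hi) that have adopted open data."""
    band = [c for c in cities if lo <= c["pop"] < hi]
    if not band:
        return 0.0
    adopted = sum(1 for c in band if c["open_data"])
    return adopted / len(band)

# Hypothetical records: name, population, and whether the city has adopted open data.
cities = [
    {"name": "A", "pop": 850_000, "open_data": True},
    {"name": "B", "pop": 600_000, "open_data": True},
    {"name": "C", "pop": 510_000, "open_data": False},
    {"name": "D", "pop": 300_000, "open_data": True},
    {"name": "E", "pop": 150_000, "open_data": False},
    {"name": "F", "pop": 120_000, "open_data": False},
]

big = adoption_rate(cities, 500_000, float("inf"))   # 2 of 3 big cities
small = adoption_rate(cities, 100_000, 500_000)      # 1 of 3 small cities
print(f"big-city adoption: {big:.0%}, small-city adoption: {small:.0%}")
```

Run against the real dataset, this same calculation produces the 71% vs. 15% gap described above.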

Why is this important?

We could chalk up this difference to the fact that big cities simply have more data. They may have more people asking for information, which can drive the release of open data. They have larger pools of technologists, startups and civic hackers to use the data. They may have more resources to publish open data, and to manage communities of users around that data.

I don’t know that there is one definitive answer here – there’s ample room for discussion on this point.

We should care about this because – quite simply – a lot of people call smaller cities home. If we add up the populations of the 256 places noted above with populations between 100,000 and 500,000, it actually exceeds the combined population of the 34 largest cities (with populations of 500,000 or more) – 46,640,592 and 41,155,553 respectively. Right now these people are potentially missing out on the many benefits of open data.

But more than simple math, if one of the virtues of our approach to democracy in this country is that we have lots of governments below the federal level to act as “laboratories of democracy” then we’re missing an opportunity here. If we can get more small cities to embrace open data, we can encourage more experimentation, we can evaluate the kinds of data that these cities release and what people do with it. We can learn more about what works – and what doesn’t.

In addition, we now know that open data is one tool that can be used to help address historically low trust in government institutions. It’s not hard to find smaller governments in this country that could use all the help they can get in repairing relations with those they serve.

How do we fix this?

There’s at least a few things we can do to address this problem.

First, we need more options for smaller governments to release open data. We’re not going to make progress in getting smaller governments to adopt open data if the cost of standing up a data portal has the same budget impact as the salary for a teacher, or a cop, or a firefighter, or a building inspector – I just don’t think that’s sustainable.

Equally important, we need to work on developing useful new data standards. This won’t always be easy, but it’s important work and we need to do it.

For smaller cities without the deep technology, journalism and research communities that can help drive open data adoption, data standards are a way to export civic technology needs to larger cities. I believe they are critical to driving adoption of open data in the many small and midsized cities in this country.

We’ve already seen what open data looks like in big cities, and they are already moving to take the next steps in the evolution of their open data programs – but smaller cities risk getting left behind.

The next frontier in open data is in small and mid-sized cities.

What if We’re Doing it Wrong?

Ever since the botched launch of Healthcare.gov, procurement reform has become the rallying cry of the civic technology community.

There is now considerable effort being expended to reimagine the ways that governments obtain technology services from private sector vendors, with an emphasis being placed on new methods that make it easier for governments to engage with firms that offer new ideas and better solutions at lower prices. I’ve worked on some of these new approaches myself.

The biggest danger in all of this is that these efforts will ultimately fail to take hold – that after a few promising prototypes and experiments, governments will revert to the time-honored approach of issuing bloated RFPs through protracted, expensive processes that crowd out smaller firms with better ideas and smaller price tags.

I worry that this is eventually what will happen because far too much time, energy and attention is focused on the procurement process while other, more fundamental government processes with a more intimate effect on how government agencies behave are being largely ignored. The procurement process is just one piece of the puzzle that needs to be fixed if technology acquisition is to be improved.

Right now, the focus in the world of civic technology is on fixing the procurement process. But what if we’re doing it wrong?

Things Better Left Unsaid

During the eGovernment wave that hit the public sector in the late 1990s to early 2000s, tax and revenue collection agencies were among the first state agencies to see the potential benefits of putting services online. I had the good fortune to work for a state revenue agency around this time. My experience there, when the revenue department was aggressively moving its processes online and placing the internet at the center of its interactions with citizens, permanently impacted how I view technology innovation in government.

It’s hard for people to appreciate now, but prior to online tax filing state tax agencies would get reams and reams of paper returns from taxpayers that needed to be entered into tax processing systems, often by hand. Standard practice at the time was to bring on seasonal employees to do nothing but data entry – manually entering information from paper returns into the system used to process returns and issue refunds.

The state I worked for at the time had a visionary director that embraced the internet as a game changer in how people would file and pay taxes. Under his direction, the revenue department rolled out innovative programs to fundamentally change the way that taxpayers filed – online filing was implemented for personal and business taxpayers, and the department worked with tax preparers to implement a new system that would generate a 3D bar code on paper returns (allowing an entire tax return and accompanying schedules to be instantly captured using a cheap scanning device).

When these new filing options were in place, the time to issue refunds plummeted from weeks to days, and most personal income taxpayers saw their refunds issued from the state in just a couple of days. By this time, I had moved to the Governor’s office as a technology advisor and was leading an effort to help state departments move more and more services online. I wanted to use the experience of the revenue department to inspire others in state government – to tout the time and cost savings of moving existing paper processes to the internet, making them faster and cheaper.

When I asked the revenue director for some specifics on cost savings that I could share more broadly, his response could not have been further from what I expected.

He told me rather bluntly that he didn’t want to share cost saving estimates from implementing web-based services with me (or anyone else for that matter). Touting cost savings meant an eventual conversation with the state budget office, or questions in front of a legislative committee, about reducing allocations to support tax filing. The logic would go something like this – if the revenue department was reducing costs by using web-based filing and other programs, then the savings could be shifted to other departments and policy areas where costs were going up – entitlement programs, contributions to cover the cost of employee pensions, etc.

All too often, agencies that implement innovative new practices that create efficiencies and reduce costs see the savings they generate shifted to other, less efficient areas where costs are on the rise. This is just one aspect of the standard government budgeting process that works against finding new, innovative ways for doing the business of government.

Time to Get Our Hands Dirty

A fairly common observation after the launch of Healthcare.gov is that governments need to think smaller when implementing new technology projects. But at the state and local level, there are actually some fairly practical reasons for technology project advocates to “think big,” and try and get as big a piece of the budget pie as they can.

There is the potential that funding for the next phase of a “small” project might not be there when a prototype is completed and ready for the next step. From a pure self-interest standpoint, there are strong incentives pushing technology project advocates to get as much funding allocated for their project as possible, or run the risk that their request will get crowded out by competing initiatives. Better to get the biggest allocation possible and, ideally, get it encumbered so that there are assurances that the funding is there if things get tight in the next budget cycle.

In addition, there are a number of actors in the budget process at all levels of government (most specifically – legislators) who equate the size of a budget allocation for a project with its importance. This can provide another strong incentive for project advocates to think big – in many cities and states, funding for IT projects is going to compete with things like funding for schools, pension funding, tax relief and a host of other things that will resonate more viscerally with elected officials and the constituencies they serve. This can put a lot of pressure on project advocates to push for as much funding as they can. There’s just too much uncertainty about what will happen in the next budget cycle.

It’s for all of these reasons that I think it’s time for advocates of technology innovation in government to get their hands dirty – to roll up our sleeves and work directly with elected officials and legislators to educate them on the realities of technology implementation and how traditional pressures in the budget process can work to stifle innovation. There are some notable examples of legislators that “get it” – but we’ve got yeoman’s work to do to raise the technology IQ of most elected officials.

Procurement reform is one piece of the puzzle, but we’ll never get all the way there unless we address the built-in disincentives for government innovation – those that are enforced by the standard way we budget public money for technology projects (and everything else). We’re having conversations in state houses and city halls across the country about the future costs of underfunding pensions, but I don’t think we’re having conversations about the dangers of underfunding technology with the same degree of passion.

Time for us to wade into the morass and come back with a few converts. We’ve got work to do.