Civic Innovations

Technology, Government Innovation, and Open Data


From Sharing Code to Sharing Knowledge

How AI Tools Are Changing Government Technology Collaboration Forever

The world of software development is changing rapidly, and the implications for how governments share technology are only beginning to come into focus.

For two decades, the vision for reducing waste in government technology has been straightforward: build once, share widely. Fifty states all need unemployment insurance systems. They all need licensing systems, benefits administration, tax processing. Why should each one spend the effort to build essentially the same thing? The legacy contractors who currently dominate this space have built their moats around proprietary code and switching costs, charging full price to build similar systems in state after state. The logic of sharing drove open source policies like the Federal Source Code Policy and the SHARE IT Act, which require agencies to make custom code available for reuse.

A recent piece in the Law and Political Economy Project blog, “The Means-Testing Industrial Complex”, offers a thoughtful examination of this dysfunction. The author – Luke Farrell, a veteran federal technology leader – shines a bright light on the waste of the current contractor-dominated model and the extraction enabled by private equity consolidation. Luke’s piece is an important read. But it also suggests, almost in passing, an idea that deserves more scrutiny – that CMS could build an open source version of Medicaid eligibility software for states to use and build on, following the model of IRS Direct File.

This idea encapsulates some assumptions that deserve closer examination, assumptions that may no longer hold as AI-assisted development tools mature.

The Friction Points of Code Sharing

There is room for debate about the success of government open source initiatives, at least in terms of driving reuse of code built by one agency or government in another. There are some clear examples of open source software and code sharing working well: shared platforms like cloud.gov and Login.gov, and shared standards like the U.S. Web Design System, are notable examples. But leaving aside questions about the efficacy of this approach, the traditional model of sharing code between agencies and between governments faces inherent friction.

Different jurisdictions have genuinely different requirements. A licensing system for professional engineers encodes specific requirements about education, examination, and experience that can vary (sometimes widely) by state. A benefits eligibility system implements rules that reflect each jurisdiction’s specific policy decisions. Adopting another jurisdiction’s code means either accepting their policy choices or doing significant adaptation work.

A reasonable counterargument is that well-designed software can separate jurisdiction-specific policy logic from common infrastructure through modular architecture and configuration. In theory, you could build a licensing system where the core workflows are shared while the specific requirements for each profession in each state are configuration parameters. But building software this way is harder and more expensive than building for a single use case. It requires anticipating variation, designing clean abstractions, and maintaining flexibility that may never be used. Government contracts typically reward delivering specific functionality on time and within budget, not elegant architecture or modularity that hypothetical future adopters might appreciate. The agency paying for the system bears the cost of building for reuse while the benefits accrue to other agencies that may or may not ever adopt it. The existing incentive structure works against modularity even when everyone agrees it would be valuable in the abstract.
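To make the configuration-driven idea concrete, here is a minimal sketch in Python. All of the names and parameter values are hypothetical, invented for illustration rather than drawn from any real licensing system:

```python
from dataclasses import dataclass

# Hypothetical policy parameters a jurisdiction would supply as configuration.
@dataclass
class LicensingPolicy:
    required_exam: str
    min_experience_years: int
    application_fee: float

# Shared core workflow: the same logic runs in every state, with
# jurisdiction-specific choices injected rather than hard-coded.
def review_application(exams_passed: set[str],
                       experience_years: int,
                       policy: LicensingPolicy) -> bool:
    return (policy.required_exam in exams_passed
            and experience_years >= policy.min_experience_years)

# Two states, one codebase: only the configuration differs.
state_a = LicensingPolicy("PE Civil", 4, 150.00)
state_b = LicensingPolicy("PE Civil", 5, 175.00)

print(review_application({"PE Civil"}, 4, state_a))  # True
print(review_application({"PE Civil"}, 4, state_b))  # False
```

The sketch makes the tradeoff visible: every policy choice that becomes a configuration parameter is an abstraction someone had to anticipate, design, and maintain up front.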

Technology stack fragmentation also creates barriers to traditional software sharing. One state may use Ruby on Rails, another may use Python and Django, and a third may favor Java or the .NET Framework. Even functionally similar systems can become difficult to share when the adopting jurisdiction lacks expertise in the originating jurisdiction’s technology choices.

Adoption requires capacity the adopter simply may not have. Understanding someone else’s codebase, evaluating whether it fits your needs, identifying what must change, making those changes without breaking things, and maintaining the modified version over time all demand technical expertise. For agencies that struggle to maintain their existing systems, taking on a complex external codebase may not reduce their burden at all.

Maintenance obligations can persist after adoption. Once an agency has adopted open source code, someone has to keep it running, patch security vulnerabilities, update dependencies, and adapt to changing requirements. The original developers may have moved on; the adopting jurisdiction owns the burden now.

How AI Changes the Equation

AI-assisted coding tools are shifting the economics of software development in ways that change what’s worth sharing.

The cost of writing code is dropping. Tasks that once required days of developer time can now be accomplished in hours with AI assistance. Generating a working implementation from a clear description of requirements is becoming routine. Code is becoming way less expensive to produce.

The cost of understanding existing code is also dropping. Large language models excel at code comprehension. They can explain what unfamiliar code does, trace through logic, identify where specific functionality lives, and answer questions that would previously have required access to the original developers.

But while the economics of writing and understanding code have changed, humans still need to specify what the code should do. An AI assistant can build a benefits eligibility system, but only if a person provides clear requirements about eligibility rules, edge cases, integration points, and workflow logic. The knowledge of how government programs actually work remains essential.

This changes what is scarce and what is valuable – and ultimately, what governments should focus on sharing and reusing. If code is cheap to generate and adapt, but domain knowledge remains hard-won, then the knowledge becomes the asset worth sharing rather than the code itself.

What Knowledge Sharing Looks Like

Consider what an AI coding assistant needs if it is to successfully build a government software system. It needs to understand the domain: what the system does, what workflows it supports, what edge cases exist. It needs to understand the regulatory context: accessibility requirements, security standards, compliance obligations. It needs to understand integration patterns: how this system connects to others, what data flows where, what APIs exist.

This knowledge can be captured and shared in ways that sidestep the friction points of traditional code sharing. The key is recognizing that some layers of knowledge are more portable than others.

The things that vary by jurisdiction: specific eligibility thresholds, fee structures, examination requirements, documentation rules, reciprocity agreements. These reflect policy choices that differ from state to state. The things that don’t vary much: the general workflow of receiving an application, verifying credentials, checking against requirements, issuing or denying a license, handling renewals, managing complaints. The structural patterns of how licensing systems work are common even when the specific parameters differ from jurisdiction to jurisdiction.

Shareable software specifications can describe these common workflow patterns while leaving room for jurisdiction-specific policy parameters. The specification captures the general shape of the system, the decisions that need to be made at each step, the data that needs to be captured, the integrations that need to exist. Each jurisdiction supplies their own rules to fill in the blanks. This is similar to how OpenFisca, the open source rules-as-code engine, operates: the engine and the structure for expressing rules are shared across countries, while each country models its own tax and benefit policies using the common framework.
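A shared specification of this kind might look something like the following sketch. The structure here is invented for illustration, not taken from OpenFisca or any real system; the point is that the workflow shape is shared while each jurisdiction fills in its own blanks:

```python
# A minimal sketch of a shared workflow specification with jurisdiction
# "blanks". All field names are hypothetical.
LICENSING_SPEC = {
    "workflow": ["receive_application", "verify_credentials",
                 "check_requirements", "issue_or_deny", "handle_renewals"],
    "decision_points": {
        "check_requirements": ["education", "examination", "experience"],
    },
    # Each jurisdiction supplies its own values for these policy choices.
    "jurisdiction_parameters": ["exam_required", "min_experience_years",
                                "renewal_period_months", "reciprocity_states"],
}

def missing_parameters(spec: dict, jurisdiction_config: dict) -> list[str]:
    """Report which jurisdiction-specific blanks are still unfilled."""
    return [p for p in spec["jurisdiction_parameters"]
            if p not in jurisdiction_config]

# A state adopting the spec learns immediately what it must decide for itself.
print(missing_parameters(LICENSING_SPEC, {"exam_required": "PE Civil"}))
```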

Instruction sets can inform AI assistants about government domains, legacy platforms, and common patterns. An instruction set for understanding COBOL has value whether you’re modernizing benefits systems in California or tax systems in New York. An instruction set describing the common patterns in professional licensing workflows can help an AI assistant generate better code regardless of which state’s specific requirements it’s implementing. The knowledge is portable even when the policy details are not.

Reference architectures can describe how systems should connect, what security boundaries matter, and what integration patterns work. This structural knowledge helps any implementation fit into the broader ecosystem.

In this model, jurisdictions build their own systems, generating much of the code with AI assistance. But they build from shared specifications of common patterns, informed by shared domain knowledge, validated against shared structural tests. The knowledge of how these systems generally work transfers even when the specific policy logic doesn’t.
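A shared structural test might be as simple as the following sketch, which assumes a hypothetical licensing-system interface. Any jurisdiction’s implementation, however its code was generated, would have to satisfy the same contract:

```python
from typing import Protocol

# Hypothetical interface that a shared specification would define.
class LicensingSystem(Protocol):
    def receive_application(self, applicant: dict) -> str: ...
    def check_requirements(self, application_id: str) -> bool: ...
    def issue_or_deny(self, application_id: str) -> str: ...

def test_workflow_contract(system: LicensingSystem) -> None:
    """Structural test shared across jurisdictions: the workflow must hold
    regardless of which state's policy parameters are in effect."""
    app_id = system.receive_application({"name": "Test Applicant"})
    assert isinstance(app_id, str)
    system.check_requirements(app_id)
    assert system.issue_or_deny(app_id) in {"issued", "denied"}
```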

The Maintenance Advantage

Knowledge sharing also addresses the maintenance burden differently than traditional code sharing.

When you adopt someone else’s code, you own the obligation to maintain it. When you generate code from shared specifications using AI tools, you can regenerate or modify it as needs change. The specification remains the source of truth; the code becomes more disposable.
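A minimal, hypothetical sketch of what that looks like in practice: when a rule changes, the specification is edited and the implementation is produced fresh from it, rather than the old code being patched by hand.

```python
# The specification, not the generated code, is the durable artifact.
# Values here are invented for illustration.
SPEC_V1 = {"income_limit": 30_000, "household_max": 8}
SPEC_V2 = {"income_limit": 32_500, "household_max": 8}  # policy change

def generate_eligibility_check(spec: dict):
    """Produce a fresh implementation from the current specification."""
    def is_eligible(income: float, household_size: int) -> bool:
        return (income <= spec["income_limit"]
                and household_size <= spec["household_max"])
    return is_eligible

check = generate_eligibility_check(SPEC_V2)  # regenerate, don't patch
print(check(31_000, 4))  # True under v2; would have been False under v1
```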

Maintaining specifications and instruction sets still requires effort. Rules change, edge cases get discovered, better patterns emerge. But the burden differs from maintaining a running codebase. And updates to shared knowledge resources benefit everyone building from them, without requiring coordinated deployments or version management across independent instances.

Looking Ahead

The case for traditional government code sharing rests on the assumption that code is expensive to write and therefore should be reused. AI-assisted software development is eroding that assumption. Code is becoming cheaper to produce. The scarce resource is no longer working code, but the knowledge required to specify what code should do.

This suggests the future of government technology collaboration may look less like code sharing and more like a system for sharing open source software specifications. Specifications, test suites, domain models, and instruction sets that encode an understanding of how government actually works can all be shared, and they are becoming more valuable than the code itself.

I explore this shift in detail in The SpecOps Methodology, which describes an approach to legacy system modernization built around capturing and sharing institutional knowledge. In a world where AI tools can generate implementations from specifications, the specifications themselves become the durable, shareable asset.

The people who have spent years advocating for government technology sharing aren’t wrong that collaboration should reduce waste and improve outcomes. But the question of what governments share, and what they reuse, is changing. As the economics of software development shift, the answer may be shifting too.


About Me

I am the former Chief Data Officer for the City of Philadelphia. I also served as Director of Government Relations at Code for America, and as Director of the State of Delaware’s Government Information Center. For about six years, I served in the General Services Administration’s Technology Transformation Services (TTS), and helped pioneer their work with state and local governments. I also led platform evangelism efforts for TTS’ cloud platform, which supports over 30 critical federal agency systems.