Civic Innovations

Technology, Government Innovation, and Open Data


Building the Foundations for the AI Era

We’re living through another one of those moments. The kind where the infrastructure being built today will shape how we work, communicate, and interact with technology for decades to come. But unlike the dot-com boom or the mobile revolution, this moment is harder to see clearly. It’s happening in the background, in the foundation.

I’m referring to the protocols that are emerging to support AI applications.

If you were around in the early days of the internet, you remember when protocols like TCP/IP, HTTP, DNS, and SNMP were being developed and refined. Most people didn’t pay attention to them. They weren’t sexy. They were infrastructure. But they were also the foundation that made everything else possible—the websites, the applications, the services we now take for granted.

We’re seeing something similar happen now with AI. New protocols like Model Context Protocol (MCP), Agent Communication Protocol (ACP), and Agent-to-Agent (A2A) protocol are being developed to help AI systems communicate, share context, and coordinate with each other. These aren’t consumer-facing products. They’re infrastructure. And like those early internet protocols, they’re going to matter a lot.

Why Protocols Matter

Protocols are agreements about how things should work. They’re the standards that allow different systems to talk to each other without needing custom integrations for every possible combination. When Tim Berners-Lee developed HTTP in the late 1980s and early 1990s, he created a simple, universal way for computers to request and share documents. That simplicity made the web possible.
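To make that simplicity concrete, here is a minimal sketch of the request-and-share pattern HTTP standardized, using only Python’s standard library. The host example.com is just a placeholder for any web server.

```python
# One standard request format that any web server understands.
# example.com is a placeholder host used for illustration.
import http.client

conn = http.client.HTTPSConnection("example.com")
conn.request("GET", "/")                    # the universal "please send me this document"
response = conn.getresponse()
print(response.status, response.reason)     # e.g. "200 OK"
print(response.read(200))                   # first bytes of the returned document
conn.close()
```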

The same principle applies to AI systems today. As more organizations build AI applications, we need standardized ways for these systems to share information, maintain context across interactions, and coordinate actions. Without these protocols, every integration becomes a custom build. Every new capability requires rework. Progress slows.

Think about what DNS did for the internet. Before DNS, you had to remember IP addresses to connect to other computers. DNS created a human-readable naming system that made the internet usable for regular people. It was infrastructure, but it was infrastructure that fundamentally changed who could participate.
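The whole job of DNS fits in a single lookup: turn a name a person can remember into an address a computer can connect to. A sketch in Python’s standard library (example.org is a placeholder name):

```python
# DNS in one line: resolve a human-readable name to the numeric
# address computers actually use to connect.
import socket

hostname = "example.org"
ip_address = socket.gethostbyname(hostname)   # performs a DNS lookup
print(f"{hostname} resolves to {ip_address}")
```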

The protocols emerging around AI serve a similar purpose. They’re making it possible for AI systems to work together in ways that would otherwise require enormous engineering effort. They’re creating a foundation that others can build on.

What These New Protocols Do

Model Context Protocol gives AI applications a standard way to connect to the tools and data sources that hold the context they need. Instead of rebuilding that context from scratch every time you switch tools or applications, MCP lets systems plug into it through a common interface. It’s like having a conversation that doesn’t reset every time you open a new window.
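As a rough sketch of what this looks like in practice: MCP is built on JSON-RPC 2.0, and a client discovers and invokes a server’s tools through messages like the ones below. The method names follow the published specification as I understand it; the tool name and arguments here are hypothetical, for illustration only.

```python
# A sketch of the style of message MCP defines (JSON-RPC 2.0 framing).
# The tool name and arguments are hypothetical.
import json

# A client asking an MCP server what tools it exposes...
list_tools = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# ...and then invoking one of them, passing along the relevant context.
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_permit_records",            # hypothetical tool
        "arguments": {"address": "1234 Market St"},  # hypothetical input
    },
}

print(json.dumps(call_tool, indent=2))
```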

Agent Communication Protocol and A2A are focused on how AI agents coordinate with each other. As we move toward systems where multiple AI agents work together on complex tasks, we need standardized ways for them to communicate, delegate work, and share results. These protocols are the infrastructure that makes multi-agent systems practical.
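The value of that standardization is easier to see with a toy example. The sketch below is not the actual ACP or A2A message schema; it’s a hypothetical illustration, in Python, of the kind of structured hand-off these protocols define: one agent delegates a task to another and gets a structured result back, without either side needing custom integration code.

```python
# Hypothetical illustration (not the actual A2A or ACP schema) of a
# structured task hand-off between two agents.
from dataclasses import dataclass, field
import uuid


@dataclass
class TaskRequest:
    """A task one agent delegates to another."""
    skill: str      # what the receiving agent should do
    payload: dict   # the inputs it needs
    task_id: str = field(default_factory=lambda: str(uuid.uuid4()))


@dataclass
class TaskResult:
    """The structured response the delegating agent gets back."""
    task_id: str
    status: str     # e.g. "completed" or "failed"
    output: dict


def summarizer_agent(request: TaskRequest) -> TaskResult:
    """A stand-in for a remote agent reachable over an agent protocol."""
    text = request.payload.get("text", "")
    return TaskResult(
        task_id=request.task_id,
        status="completed",
        output={"summary": text[:80] + "..."},
    )


# A coordinating agent delegates work and consumes the structured result.
request = TaskRequest(skill="summarize", payload={"text": "A long report..."})
result = summarizer_agent(request)
print(result.status, result.output["summary"])
```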

None of this is glamorous work. These are standards, specifications, and technical agreements. But this is exactly the kind of basic infrastructure that enables the next generation of innovation.

The Pattern of Infrastructure

What strikes me most about this moment is how familiar the pattern feels. When the internet was being built, there was a lot of attention on the flashy stuff—the first websites, early e-commerce, the browser wars. But the real work that made everything possible was happening in the background, in working groups and standards bodies, where engineers were hammering out the protocols that would become the foundation of everything else.

Not every standard survives, of course. Remember WAP and Wireless Markup Language from the early days of mobile devices? They seemed important at the time, but they didn’t make it in the long run. The same will likely be true for some of the AI protocols being developed now. Some will become foundational infrastructure. Others will fade away as better approaches emerge.

We’re seeing the same thing now. There’s enormous attention on large language models, agentic applications, and AI-generated content. Those are the visible parts. But the protocols being developed now—the standards for how AI systems will communicate and coordinate—are quietly becoming the foundation for what comes next.

This matters for people who work in or around government for a few reasons.

First, government agencies are beginning to adopt AI systems at scale. As they do, the ability of these systems to work together across different departments and jurisdictions will depend on shared protocols. The more standardized these protocols become, the easier it will be for governments to integrate AI into their operations without creating fragmented, incompatible systems.

Second, these protocols will shape what’s possible. Just as HTTP made the web accessible and DNS made it usable, the protocols emerging around AI will determine how easily we can build sophisticated, multi-system applications. They’ll determine whether we end up with a few dominant platforms or a more open, interoperable ecosystem.

Third, protocols create leverage points for policy. Once standards are established, they become places where governments can focus their efforts to ensure systems are secure, transparent, and accountable. But that window of opportunity is narrow. By the time protocols are fully baked, it’s much harder to influence them.

Building Together

One of the most important lessons from the early internet is that the protocols that succeeded were the ones built collaboratively, with input from diverse stakeholders. TCP/IP emerged from a collaborative effort across universities and research institutions, with leaders like Bob Kahn and Vint Cerf working with teams at Stanford, BBN Technologies, and University College London. They recognized that interoperability was more valuable than control, and they built the protocols in public, incorporating ideas from researchers around the world.

We’re at a similar inflection point with AI protocols. The choices being made now about how AI systems communicate and coordinate will shape what’s possible for years to come. And just like with the early internet, the best outcomes will come from bringing diverse perspectives to the table—including voices from government, which has unique requirements around transparency, accountability, and equity.

This isn’t a moment for passivity. It’s a moment to pay attention, to engage, and to help shape the infrastructure being built. The foundation matters. It always has.


About Me

I am the former Chief Data Officer for the City of Philadelphia. I also served as Director of Government Relations at Code for America, and as Director of the State of Delaware’s Government Information Center. For about six years, I served in the General Services Administration’s Technology Transformation Services (TTS), and helped pioneer their work with state and local governments. I also led platform evangelism efforts for TTS’ cloud platform, which supports over 30 critical federal agency systems.