March 13, 2026
This article is part of the AI, Cybersecurity, and Risk Series →
Christopher Yoo’s recent piece in the Business Times—“AI governance: The summit stage is necessary but it isn’t sufficient”—captures a moment that many in the AI policy world are feeling but few have articulated so clearly.
The India AI Impact Summit was historic. Nearly 300,000 people from more than 100 countries gathered in New Delhi—the largest AI summit ever held, and the first hosted in the Global South. Heads of state, frontier lab CEOs, and civil society shared the same stage. The New Delhi Frontier AI Impact Commitments produced concrete pledges from Anthropic, Google, OpenAI, and others. The will to connect was visible at scale.
But as Yoo writes:
“The summit stage is where political will gets declared and assessed. Governance gets built in the harder, less visible work that follows.”
That harder work is where I have spent my career.
The Gap Between Evidence and Action
Yoo identifies three specific gaps that demand urgent attention: incident reporting, verification, and evaluation standards that work across languages and contexts. He calls for “trusted spaces—Track 1.5 forums where researchers, civil society, policy-makers, industry practitioners and investors can have the substantive and curated conversation that the formal summit stage does not allow.”
He is right. But there is a deeper question embedded in his analysis: Who builds those spaces?
The UN has taken a historic step with its Independent International Scientific Panel on AI—40 experts from around the world who will provide rigorous, independent assessments. The Global Dialogue on AI Governance will create space for discussion. A $3 billion capacity fund has been proposed.
These are essential pieces of an emerging architecture. But they are not yet a governance system.
A Precedent from Internet Governance
I have seen this moment before.
When the internet’s domain name system was liberalized in the 2012 new gTLD round, the world faced a similar dynamic. Technical expertise was essential. Standards were set. Panels convened. But what made the system actually work for the world—across 21 language communities, from Arabic to Chinese to Cyrillic—wasn’t just science.
It was patient diplomacy. Trust-building across cultures. Institutional design that made inclusion operational, not aspirational. It required people who could sit with governments, private sector actors, and civil society, and find common ground.
That work did not happen in scientific panels. It happened in rooms where practitioners negotiated, compromised, and built.
What AI Governance Needs Now
Yoo points to the Pugwash conferences as a precedent: spaces where understanding was built before treaties were possible. Today, we need similar spaces for AI—and we need the people who know how to build them.
Who has:
- Facilitated consensus across continents?
- Designed capacity programs that actually reached those left behind?
- Navigated both Fortune 500 boardrooms and multilateral negotiations?
- Championed inclusion not as a slogan, but as a deliverable—across countries, languages, and decades of work?
These are not rhetorical questions. They point to a specific kind of practitioner whose voice is largely absent from the current architecture.
The good news is that such people exist. They have done this work before—in internet governance, in digital development, in the long, slow craft of turning principles into practice.
A Constructive Path Forward
The UN has laid a strong foundation. The scientific panel gives the world evidence. The Global Dialogue gives the world a table.
What comes next is the hardest part: building the bridge between them. That will require people who understand both science and politics, both evidence and implementation, both the global stage and the local ground.
The scientists have spoken. The summit stage has done its work. Christopher Yoo has named the gap; filling it is the task ahead.
Now the real work begins.
#DigitalSovereignty #AIGovernance #GlobalDialoguePlatform #ScientistsHaveSpoken #NowWhoWillBuild #ContinentalStewardship #Multilateralism #GlobalSouth #AIforHumanity
Sophia Bekele
Digital Sovereignty Architect | AI Governance Strategist
Founder of DotConnectAfrica Group and CBSegroup, a former advisor to multiple UN bodies, and a founding advisor to the EurAfrican Forum. She is recognized as a pioneer of internet governance, having chaired the IDN Committee, which unites 21 language communities, and championed the .africa domain. She was named a Global Champion of Digital Sovereignty in 2025.