CHIA Team Members Attend AI Impact Summit 2026
February 2026
Umang Bhatt and Ariella Shulman, a member of his Trustworthy AI Lab (TRACE), attended the 2026 AI Impact Summit in New Delhi, India. They co-wrote a piece for the new TRACE Substack, “We Need to Talk About Institutions,” reflecting on their time at the Summit. You can read it on Substack, or below.
………………………………………………………………………………………………………………………………………
We Need to Talk About Institutions
By: Umang Bhatt and Ariella Shulman
Back from the AI Impact Summit in Delhi, we are left with the sense that the global conversation around AI sovereignty is advancing but remains incomplete.
Amid turbulent traffic, Delhi offered many of the familiar debates about scaling laws and frontier benchmarks. What stood out immediately, however, was the infrastructure: biometric scanners at the airport, integrated digital public rails, technology visibly embedded into civic life. Facial recognition cameras marked the entrances to heritage buildings, and street vendors asked for digital payments. The message was clear: India is accelerating into the future, and AI will be the catalyst for national prosperity.
Inside the Summit, the language of sovereignty dominated. We heard calls for domestic compute, multilingual LLMs, and reduced dependence on Silicon Valley hyperscalers. One speaker put it bluntly: there are two AI superpowers, and there is everyone else. The question for “everyone else” is whether to attempt frontier competition or to invest in smaller, locally grounded systems that serve specific use cases.
Many attendees advocated for the latter. Build small language models attuned to India’s thousands of distinct regional dialects. Invest in digital infrastructure and novel applications rather than a single sovereign AGI. Yet many others warned that such a strategy amounts to economic self-sabotage. A common refrain was that without frontier capability, states like India risk permanent economic dependence on the superpowers.
This framework clarifies the geopolitical stakes. But it is incomplete.
The Missing Institutional Question
At Cambridge’s Trustworthy AI Lab (TRACE), our research has focused less on national compute capacity and more on what happens inside institutions as AI systems are adopted. During our satellite dialogue at the BAPS Swaminarayan Akshardham with frontier researchers and leading government officials, a recurring tension emerged: institutions are designed to be reliable, norm-governed, and accountable, while AI systems are probabilistic, brittle, and fallible. The friction concerns how knowledge itself is produced and trusted.

Participants described internal fractures as staff adopted generative systems at radically different rates. Disagreements surfaced about authorship, originality, and professional responsibility. Institutional memory, long sustained through person-to-person transmission, had already been strained by digitisation; AI threatened to accelerate that erosion. The central concern was not simply whether AI improved workflow. It was whether the institution could continue to define and enforce its own epistemic norms.
We are deeply concerned that this institutional layer is largely absent from sovereignty debates.
The Value of Many (Competing) Voices
National compute capacity does not automatically translate into what we call institutional epistemic resilience: the ability of an institution to autonomously regulate its knowledge practices, articulate and enforce red lines on AI use, equip its members with the skills necessary for critical engagement, and sustain coherent standards of evidence and authorship over time.
This capacity is not abstract. It is observable, albeit imperfectly, through indicators such as membership retention and intergenerational continuity, the stability of internal review and accountability mechanisms, and the persistence of shared professional standards. It can fail severely: when externally procured model assumptions displace internal norms, or when knowledge production is increasingly outsourced to general-purpose AI systems. The threat of institutional convergence toward a single platform-mediated epistemic template is real, and should be taken seriously.
Delhi’s sovereignty debate mirrors this institutional struggle. Many argued that India should prioritise smaller, localised models attuned to linguistic diversity and domestic law rather than pursue frontier competition. Others warned that abandoning frontier ambitions risks permanent strategic dependence. The choice is somewhat artificial, given that orchestration layers will likely sit atop large foundation models in either case. Yet this sharpens the underlying question: is sovereignty about scaling compute or retaining normative control?
It is worth pausing on a prior assumption. As generative AI disrupts traditional social frameworks, are institutions themselves normatively entitled to preservation? Or would the reduction of intermediaries democratise access to knowledge?
The Future of Knowledge Production
Institutions have historically enabled epistemic pluralism. They protect minority languages, sustain non-normative traditions, and create structured spaces for deliberation that platform architectures do not easily replicate. Without them, we risk collapsing into a flattened informational ecosystem governed primarily by engagement incentives and infrastructural defaults: a social monoculture. If institutions are to retain social authority and trust in the age of AI, they must actively govern AI integration to avoid the gradual, uncritical absorption of generative outputs: an algorithmic monoculture.
India’s position complicates any simple east-versus-west narrative. Its embrace of digital public infrastructure reflects distinct developmental priorities and state capacity; we observed a level of comfort with surveillance technology that may surprise participants in Western AI ethics discourse. At the same time, sovereign AI in India must navigate domestic norms and legal particularities. When local law diverges from assumptions embedded in Western-trained models (e.g., a chatbot that allows Indian citizens to determine the sex of a fetus, in violation of long-standing legal and medical norms), what is the path forward? Cases like this suggest that institutions may benefit from more granular control over AI outputs to preserve local ethical standards. The challenge is how to embed such controls without inadvertently reinforcing harmful social structures.
Whether smaller, curated systems ultimately enhance institutional self-governance remains an open empirical question. Our preliminary work suggests that AI adoption can destabilise internal trust and accelerate epistemic fragmentation within institutions. Systematic longitudinal research is needed to assess which deployment models strengthen institutional capacity, and where limits should be placed.
Compute sovereignty matters. But without institutional capacity to govern knowledge production, sovereignty risks becoming symbolic rather than substantive. If you are working on curated LLMs, public-facing AI systems, or governance mechanisms for civic institutions, we would welcome collaboration. The future of AI safety will not be secured by superpowers alone. It will be negotiated in the spaces where knowledge is produced, contested, and trusted.