Introduction: The Death of the "Human Counter"

Think about the last time you visited a government office. There is a specific sensory memory associated with it: the low hum of a ceiling fan, the smell of old ledger paper, and, above all, the physical wooden counter. Behind that counter is a government employee, a person with a name badge and a face. They see your frustration, they hear your tone, and they can offer a contextual fix that no rulebook can anticipate. This physical counter has long been the sacrosanct boundary where the state meets the citizen. But that counter is disappearing. AI is sold as a sleek engine of efficiency, but beneath the hood it is rewriting the social contract without our consent. We are living through a trust transition in e-governance, in which the human official is being replaced by the surgical efficiency of algorithms.

At the recent National Workshop hosted by the SRM Institute of Science and Technology, experts in defence, journalism, and political science gathered to dissect this transition. What they revealed is that the shift to digital governance isn't just a "tech upgrade" but a fundamental refactoring of power, trust, and accountability. What follows are the most impactful takeaways from the front lines of the algorithmic revolution.

AI as a Tool of Mastery: The Rise of the Surveillance State

Air Marshal Matheshwaran opened the floor by delivering a blunt warning: AI must remain a tool, or it will become a master. He highlighted how states are monopolising data to transform from democratic entities into draconian surveillance states, monitoring citizens to the minutest detail to ensure "social compliance."

The most extreme case study he offered was the "Lavender" AI system used in Gaza. Matheshwaran described it as a target-formulation system built on a "Zionist philosophy": a massive database that automates life-and-death decisions. It is the ultimate ethical failure, a system that bypasses the Geneva Conventions and human rights by automating warfare.

"AI should be treated as a tool. Don’t let AI master us. If you let AI master us, there’s going to be a catastrophe... States have a propensity to become powerful; they monopolise the use of force to control people, and how you control people is to control information." — Air Marshal Matheshwaran

A Face Behind the Screen: The loss of an identifiable official

Nagaraj Nagabushnam, Vice President of Data and Analytics at The Hindu Group, warned of the rise of the "Invisible Bureaucrat." In a traditional system, accountability is anchored to a person: if a transaction fails, you know who you spoke to. AI, by contrast, delivers decisions without a face to anchor responsibility.

This shift creates a vacuum that swallows the marginalised. While the "digital elite" navigate these faceless systems via high-end smartphones, the vulnerable, those on prepaid cards who must ration every megabyte, are left staring at a digital wall. For them, the loss of the physical counter is the loss of their only means of representation.

Nagabushnam illustrated this with a personal example. While he was attempting to send money to his son studying abroad, his bank's AI flagged the transaction as potential fraud. Why? Because his mobile phone had latched onto different cell towers. To a human in Chennai, the explanation is obvious: he lives at a busy junction between four neighbourhoods. To a probabilistic system, shifting between four pin codes in an hour looks like a security threat.

The danger here is the death of the "root cause." As Nagabushnam put it: "You will get observability, but you will not get root cause analysis." You can see the machine said "no," but you can never truly audit why.
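To see what "observability without root cause" means in practice, consider this minimal sketch (our illustration, not code from the talk). A hand-written rule can attach a human-readable reason to its refusal; a stand-in for a trained model can only have its inputs and output logged. The function names, weights, and threshold are hypothetical.

```python
# A minimal illustration of "observability without root cause": a
# transparent rule can say *why* it refused, while a learned model
# only emits a score. All names and numbers here are hypothetical.

def rule_based_check(pin_codes_in_last_hour: int) -> tuple[bool, str]:
    """A legible rule: the reason for refusal can be read off directly."""
    if pin_codes_in_last_hour > 3:
        return False, "blocked: more than 3 pin codes in one hour"
    return True, "allowed"

def model_based_check(features: list[float]) -> bool:
    """A stand-in for a trained model: we can log the inputs and the
    output (observability), but no human-readable reason exists between
    them."""
    score = sum(w * x for w, x in zip([0.8, -0.2, 0.5], features))
    return score < 1.0  # why 1.0? why these weights? the log cannot say

print(rule_based_check(4))                 # (False, 'blocked: ...')
print(model_based_check([2.0, 1.0, 0.4]))  # False -- but no reason attached
```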

Another startling insight from the workshop was the "Dashboard Trap." Nagabushnam recounted an experiment at The Hindu using AI to generate SEO headlines. Human editors, it turned out, were sub-optimal or "wrong" about one-third of the time. The AI, however, made only two errors out of 500 stories.
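Put in plain numbers (using the figures from the talk: editors sub-optimal about a third of the time, the AI wrong twice in 500 stories), the comparison is lopsided:

```python
# The dashboard asymmetry in back-of-the-envelope arithmetic,
# using the figures reported at the workshop.
stories = 500
human_error_rate = 1 / 3          # roughly one sub-optimal headline in three
ai_errors = 2
ai_error_rate = ai_errors / stories

print(f"Human errors (est.): {human_error_rate * stories:.0f} of {stories}")   # ~167
print(f"AI errors:           {ai_errors} of {stories} ({ai_error_rate:.1%})")  # 0.4%
# The AI is roughly 80x more accurate, yet its 2 visible errors
# nearly killed the project.
```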

By any rational metric, the AI was a triumph. Yet the system was nearly mothballed because those two machine errors were highly visible on a digital dashboard. Human error is often invisible and thus accepted; machine error is tracked in real time, leading to an unfair standard that treats any algorithmic stumble as a catastrophic failure of trust.

This feeds into the "Tyranny of Metrics." When we manage via dashboards, the metric becomes more important than the mission. We see this in call centres, where representatives might cut off a frustrated customer simply to meet a "time-to-resolution" target. In governance, if we only value what a dashboard can measure, we lose the qualitative service, the empathy and common sense, that defines a functional society.

"The counter acts like a boundary... There are things that today’s bureaucracy does well, and one of those things is to be visible and identifiable. An AI system is not visible to you; it’s certainly not identifiable. You can't tell the AI that I spoke to one of your avatars yesterday." — Nagaraj Nagabushnam

Algorithmic Colonialism and the Ghost Factories of the Global North

Dr Papia Sen Gupta of JNU introduced a sobering reality: technology is never neutral. It is dictated by the politics of those who own the infrastructure. We are entering an era of "Algorithmic Colonialism," where the Global North acts as the architect and the Global South serves as the "ghost factory."

Thousands of workers in India, Kenya, and Uganda spend their days in data-labelling centres for meagre wages, cleaning data for systems they will never own. But the cost isn't just human; it's environmental. These data centres consume millions of litres of water for cooling. She cited a chilling statistic: in Alaska, a single Google data centre's water usage can equal the consumption of the entire local population for five years.

"Science is not neutral... science is also a human creation. If humans are biased, science is a biased subject. Nothing is beyond bias." — Dr Papia Sen Gupta

The "Adjustment" Gap: Why we need guardrails to oversee AI

Mr Shivam, Consultant at ZS Associates, Bangalore, reopened the discussion on the second day with an intriguing thought: artificial intelligence and common sense is a love story. At first glance, the pairing seems contradictory, perhaps even impossible. But the two are complementary opposites, much like the terms of the trigonometric identity sin²θ + cos²θ = 1: mathematically distinct, yet completing each other to create a functional whole.

As a strategist, he found it a relatable curiosity: why does our most sophisticated technology still struggle with the simple logic of a child or the nuanced "adjustment" of a human worker? The answer is that while AI excels at pattern recognition (it can predict that "four" follows "one, two, three"), it lacks the intuitive friction of reality. To move toward true e-governance, we must look beyond the bot and confront the hard truths about ecosystems, data bias, and the structural costs of innovation.
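To make the pattern-recognition point concrete, here is a toy next-word predictor (our illustration, not anything Mr Shivam presented): it simply counts which word follows which, continues familiar sequences confidently, and has nothing to fall back on outside its data.

```python
from collections import Counter, defaultdict

# A toy next-word predictor: count which word follows which in the
# training data, then always guess the most frequent follower.
training_sequences = [
    ["one", "two", "three", "four", "five"],
    ["one", "two", "three", "four"],
    ["two", "three", "four", "five"],
]

follower_counts = defaultdict(Counter)
for seq in training_sequences:
    for current_word, next_word in zip(seq, seq[1:]):
        follower_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the most common follower, or None if the word is unseen."""
    followers = follower_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("three"))    # "four" -- pure pattern continuation
print(predict_next("hundred"))  # None -- no pattern, and no common sense to fall back on
```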

The Death of the Lone Wolf: Building Agentic Ecosystems

Mr Shivam explained that the era of the "single agent" is over. For AI to provide real business value or societal impact, we must transition to agentic ecosystems. Agents left in isolation, without an ecosystem around them, inevitably hit a "Creation vs. Critique" deadlock.

Imagine two agents: one designed to create content and another to critique it. Without a hierarchical structure, they enter a perpetual loop of conflict because they possess different objective functions. They lack the "organisational thinking" required to resolve disputes. To bridge this, we must implement "Parental Guardrails"—human-defined visions, missions, and core values that provide a terminal logic for autonomous systems.
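Here is a minimal sketch of the deadlock and the guardrail, with the agent logic stubbed out; in a real deployment the creator and critic would be model calls, and the round limit and escalation path shown here are illustrative assumptions, not a prescribed design.

```python
# A minimal sketch of the "Creation vs. Critique" deadlock and a
# "Parental Guardrail". The agent logic is stubbed; names are hypothetical.

def creator(draft: str, feedback: str) -> str:
    """Revise the draft in response to feedback (stubbed)."""
    return draft + " [revised]"

def critic(draft: str) -> str:
    """Always finds something to object to (stubbed) -- the deadlock."""
    return "needs more work"

def run_with_guardrail(task: str, max_rounds: int = 5) -> str:
    """Human-defined terminal logic: a hard round limit plus an
    escalation path, so two disagreeing agents cannot loop forever."""
    draft, feedback = task, ""
    for _ in range(max_rounds):
        draft = creator(draft, feedback)
        feedback = critic(draft)
        if feedback == "approved":
            return draft
    # Guardrail reached: hand the case to a human instead of looping.
    return f"ESCALATE TO HUMAN REVIEWER: {draft!r}"

print(run_with_guardrail("Draft a citizen notice"))
```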

As we move toward autonomy, the definition of these values must remain a human prerogative. We cannot afford to let our systems become "black boxes" of unresolved logic.

"Don't leave the parental guardrails... the defining of those parents still should be with human no matter how much autonomous you become." — Mr Shivam

The Silicon Valley Lens and the GIGO Reality

The "Garbage In, Garbage Out" (GIGO) principle remains the ultimate law of AI ethics. If the underlying data is polluted by bias, the output is inherently illegitimate. Currently, 86% of the data used to train major LLMs is English, creating what we call the "Silicon Valley Lens." When AI is trained primarily through a Western, Californian perspective, it fails to grasp the social realities of the Global South. This leads to dangerous misrepresentations:

  • Cultural Stereotyping: Models often replicate Western media tropes regarding conflict zones like Syria or Yemen, ignoring local resilience and nuanced daily life.
  • Linguistic Erasure: With only 14% of training data covering the rest of the world’s languages, the AI's "brain" is fundamentally skewed.

Without diversifying these datasets and acknowledging that AI works exactly as designed, but within unequal contexts, we risk entrenching digital colonialism.

The Structural Reality: AI as a Five-Layer Cake

AI is not a cloud-based miracle; it is a physical, resource-intensive infrastructure. We must view it as a "five-layer cake" where each level rests on the stability of the one beneath it:

  • Energy: The foundational requirement.
  • Chips: The hardware backbone.
  • Tools: The software frameworks and middleware.
  • AI Models: The underlying LLMs.
  • Applications: The final user interface.

This structural reality comes with a high environmental cost. For every query processed, there is a hidden consumption of natural resources. In Colombia, the massive water and energy requirements of cooling and powering the global AI grid have become a point of serious ecological concern. Innovation at the "Application" layer is meaningless if we deplete the "Energy" layer of the planet to achieve it.
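The layering is easy to state in code. Here is a minimal sketch (our framing of the metaphor, not the speaker's) that walks the stack bottom-up and stops at the first failing layer:

```python
# The "five-layer cake" as an ordered dependency stack: each layer is
# only viable if every layer beneath it is healthy. Health flags are
# illustrative.
AI_STACK = ["Energy", "Chips", "Tools", "AI Models", "Applications"]

def viable_layers(health: dict[str, bool]) -> list[str]:
    """Walk the stack bottom-up; stop at the first unhealthy layer."""
    viable = []
    for layer in AI_STACK:
        if not health.get(layer, False):
            break  # everything above this layer is moot
        viable.append(layer)
    return viable

# A failing Energy layer invalidates the entire stack above it.
print(viable_layers({"Energy": False, "Chips": True, "Tools": True,
                     "AI Models": True, "Applications": True}))  # []
```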

The Problems and Risks of Artificial Intelligence

Last but not least, Mr Venkat Raman, Consultant at UNESCO, delivered a talk on the ethics, governance, and societal implications of artificial intelligence. His perspective was shaped significantly by his work in conflict and post-conflict countries, which led him to challenge what he described as the overwhelmingly urban-centric nature of AI discourse. He opened by pointing out that while the world debates AI's transformative potential, millions of children and communities in remote and conflict-affected areas lack even basic internet access, and yet AI is already influencing their lives, whether they know it or not.

He argued that AI is fundamentally reshaping how decisions are made, not just at the individual level but at the political and policy level as well. One of his central examples was predictive policing in the United States, where AI systems trained on historical crime data ended up flagging certain racial groups as more likely to commit crimes, not because the technology failed, but because it worked exactly as designed, reflecting the biases already embedded in the data it learned from. As he put it, bias and exclusion in AI do not arise from malfunction; they arise from design choices made in unequal social and data contexts.

He devoted considerable attention to the problem of data. AI systems reflect the worldview of whoever trains them, and much of that training happens in places like Silicon Valley, far removed from the realities of communities in Yemen, Syria, or rural India. When people query ChatGPT about conflict countries, they often receive answers filtered through Western media stereotypes, because that is what the system has been trained on.

He was equally emphatic about how AI literacy is now a tool for survival, arguing that it cannot remain confined to schools and colleges and is not just for computer scientists. It has evolved into a prerequisite for citizenship. Every citizen needs to understand the basics of how AI works, what it can and cannot be trusted with, and what rights are at stake when their data is collected and used. This gap in awareness, he argued, is especially dangerous in the context of elections and political campaigns, where AI-generated misinformation can spread rapidly through communities that already lack the tools of media literacy.

The environmental cost of AI was another concern he raised: the vast energy and water consumption required to run large AI systems is already straining natural resources in parts of Latin America and Africa, yet this dimension rarely enters mainstream conversations about AI's impact.

Frameworks, Solutions, and the Path Forward

A significant portion of Mr Venkat Raman’s talk was devoted to UNESCO's 2021 Recommendation on the Ethics of Artificial Intelligence, the first global standard of its kind, endorsed by 193 member states. This framework is grounded in four foundational values: human rights and dignity, environmental sustainability, inclusivity and equity, and the promotion of peaceful and just societies. He was careful to clarify that ethics, in this context, is not about stifling innovation. Rather, as he put it, ethics is what makes innovation legitimate; it is about managing and regulating technology so that its benefits outweigh its harms.

Conclusion: The Accountability Gap

We are currently in the midst of the "Great Refactor" of governance, an era where technology is a new layer of social reality, not just a tool. To survive it, we must adopt safeguards like the "Sandwich Rule": A human must initiate the task, and a human must publish the final decision. AI can handle the middle, but it cannot be the beginning or the end of the process.
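Here is a minimal sketch of how the "Sandwich Rule" might be enforced in a workflow; the function names, the stubbed AI step, and the sign-off mechanics are illustrative assumptions, not a prescribed implementation:

```python
# A minimal sketch of the "Sandwich Rule": a human opens and closes the
# loop, and AI is confined to the middle. All names are hypothetical.

def human_initiates(request: str) -> dict:
    """A named official records the task and stays accountable for it."""
    return {"task": request, "initiated_by": "officer_on_duty"}

def ai_middle(case: dict) -> dict:
    """AI may draft, rank, or summarise -- but never decide (stubbed)."""
    case["draft_decision"] = f"DRAFT for task: {case['task']}"
    return case

def human_publishes(case: dict) -> str:
    """No decision leaves the system without a human signature."""
    approved = True  # in practice: an explicit human review step
    if not approved:
        return "Returned for rework"
    return f"{case['draft_decision']} -- signed: {case['initiated_by']}"

print(human_publishes(ai_middle(human_initiates("Renew ration card"))))
```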

As we trade our privacy for the convenience of an "Invisible Bureaucrat," we must pause to consider the human cost. Efficiency is a virtue, but it is not a substitute for justice. We need a moral compass that remains firmly in human hands to ensure the algorithms of tomorrow do not erase the humanity of today.

The two days of this National Workshop on Ethics in AI have made it very clear that the future of e-governance will not be determined by how intelligent our systems become, but by how accountable they remain to human dignity.

In a world where AI agents are beginning to ask if they can trust humans, the real question for policy-makers is: Have we built a foundation of ethics strong enough for humans to trust AI? The success of our digital future depends on our ability to bridge the gap between technological pattern recognition and the complex, beautiful chaos of human common sense.