A few years ago, Amazon faced a scandal when its AI-powered recruitment tool systematically downgraded resumes with female names. The system had absorbed historical hiring patterns from male-dominated industries and concluded that male candidates were more desirable. It was a striking example of how AI amplifies the biases embedded in the data it is trained on.
Now, a similar phenomenon is playing out on a much larger scale. AI systems like Grok are shaping the way the world perceives India, and the results are troubling. From Google Gemini labelling Prime Minister Narendra Modi as a “fascist” to Grok misrepresenting Indian historical figures, AI is no longer just a tool—it’s becoming a self-proclaimed arbiter of truth. And that truth is being shaped by data that overwhelmingly originates from the West.
AI Doesn’t Think—It Predicts Based on Biased Data
AI systems are trained on vast datasets sourced from books, articles, and online content—much of which is dominated by Western perspectives. The result? When you ask an AI model about Indian culture, it is more likely to prioritize criticisms found in international media while sidelining local perspectives.
One of the most deceptive aspects of AI bias is its tendency to present information without context. For example, if an AI states, “The Mughal Empire was a golden age of art and architecture,” it compresses a complex 300-year history into a single, oversimplified narrative.
The reality is far more intricate—Mughal rulers differed significantly in their policies and impact. Aurangzeb, for example, reinstated the jizya tax and enforced strict orthodoxy. AI may also omit how Mughal invasions devastated local Hindu kingdoms, plundering temples and disrupting regional power structures.
In another instance, Grok was asked which of two people was more knowledgeable, and it failed to distinguish between knowledge and formal education. Relying solely on publicly available records of degrees and university rankings, it declared one person more knowledgeable than the other. In reality, this is not always the case: a university's absence from the rankings, or a lower position in them, does not mean its students lack knowledge.
This incident is just one of many that have raised concerns about Grok’s responses. Recently, the chatbot went viral in India after a user on X (formerly Twitter) asked it to list their top 10 mutuals. In response, Grok not only provided a list but also included misogynistic insults in Hindi, sparking widespread criticism.
AI often favours mainstream narratives—such as Bollywood’s romanticised portrayal of the Mughals or social media influencers’ frequent use of misogynistic and abusive language—while neglecting harsher historical realities. The lack of accessible data from non-English sources, including Sanskrit texts, our rich oral histories, and regional chronicles, further skews the historical record.
When AI distorts history, it subtly erases cultural identities and reinforces stereotypes. A study found that GPT-4 Turbo achieved only 46 per cent accuracy on global history questions, often misrepresenting Indian events. AI chatbots have been caught providing incorrect details about Indian landmarks and historical figures, further distorting the nation’s cultural legacy. In the age of AI, the old saying rings truer than ever: a half-truth is often more dangerous than a lie.
Neuroscience and the Dangers of AI Shaping Our Reality
Neuroscientific research has long established that the human brain constructs its version of reality based on repeated stimuli. Each time we are exposed to the same message, idea, or framing, our neural pathways strengthen, making us more likely to accept and internalize that perspective as truth. This process, known as neural reinforcement, plays a critical role in how misinformation spreads—and why it becomes so difficult to challenge once it takes root.
When AI systems generate content infused with political, social, or historical biases, they subtly mold public perception. If an AI model consistently presents a skewed narrative about Indian politics, caste dynamics, or historical events, it doesn’t merely reflect existing biases—it actively reshapes reality for those who engage with it, especially young users whose minds are still developing. Over time, exposure to such distortions can promote ideological divides, polarize communities, and even rewrite collective memory.
The Way Forward
We cannot afford to discard AI, but we must approach it with caution. Here’s what must be done:
Local Data Sovereignty: AI models must be trained on diverse and representative datasets that include Indian academic sources, regional languages, government records, and oral histories. These datasets must be free from the overrepresentation of Western perspectives and should reflect India’s rich and multifaceted culture. The government and private sector must work together to create indigenous data repositories that counteract global AI biases.
Stronger Regulation: The Indian government must enforce stricter AI governance policies to prevent misinformation and bias. AI-generated content should be subject to the same scrutiny as traditional media, with clear guidelines on factual accuracy, cultural sensitivity, and data provenance. A regulatory body dedicated to monitoring AI outputs related to Indian affairs could help mitigate potential distortions.
AI Literacy and Public Awareness: The vast majority of people still trust AI outputs blindly, assuming them to be objective and fact-based. There must be a concerted effort—both by the government and educational institutions—to teach AI literacy at schools and universities. Citizens should be trained to critically evaluate AI-generated content rather than accepting it at face value.
Ethical AI Development: Indian researchers, technologists, and policymakers must collaborate to build AI systems that reflect India’s diversity rather than relying solely on foreign tech giants. Indigenous AI development should focus on ethical frameworks that account for caste, gender, and linguistic inclusivity. Open-source AI initiatives should be encouraged to allow for greater transparency and community involvement.
AI is shaping the world’s understanding of India—but it is doing so through a distorted lens. Models like Grok are not neutral; they are products of the data they consume, and that data is overwhelmingly Western. Without intervention, AI will continue to misrepresent India’s history, politics, and culture, reinforcing systemic biases rather than dismantling them.
The challenge is clear: we must reclaim AI from biased algorithms and reshape it to represent all of India—not just the narratives most dominant online. The future of truth in the digital age depends on it.