Artificial Intelligence (AI) holds a transformative place in world history as a groundbreaking advancement that will redefine how societies function, innovate, and interact.
Emerging in the mid-20th century, AI has accelerated technological progress, mirroring the revolutionary impact of the Industrial Revolution. It has enabled unprecedented efficiency in industries such as healthcare, manufacturing, finance, and education, fostering innovations like personalised medicine, predictive analytics and automation.
Influencing National Security
AI’s role in shaping global connectivity through advancements in communication and data analysis has fundamentally altered economies and geopolitics, influencing everything from trade to national security.
As a catalyst for both opportunities and challenges, AI symbolises humanity’s capacity to create tools that not only solve complex problems but also raise profound ethical and societal questions about the future of work, equity and human identity. Bharat as a civilisational entity must pay close attention to how structural forces, human actions and scaled-up technology intersect in global affairs. Technological advancement outlasts the rise and fall of superpowers and is the substrate on which the economy operates, social interactions happen and culture is created.
How Do Social Media Platforms Shape Reality?
Questions that need our constant attention and scrutiny: how does truth get corroborated, what is real or fake, and how does cultural influence take place? Where do “global cultural trends” emerge from, who are the biggest influencers, and how do informational cascades happen?
With near saturation in reach and possibly infinite repetition, the internet has created an environment where protecting reputation from malefic intent may seem nearly impossible. Whether it is ‘malicious information’, ‘deliberate misinformation’ or ‘informed disinformation’, the permutations are many, but the core underlying issue is that it is increasingly impossible to tell the real from the fake.
Artificial Intelligence (AI) has significantly amplified the spread of disinformation by enabling the creation and dissemination of false or misleading content at unprecedented speed and scale.
Familiarity with Major Issues
Deepfakes: AI-generated videos or audio recordings make it possible to fabricate events or statements by public figures, misleading audiences and eroding trust.
Bots and Automation: AI-powered bots can flood social media platforms with false information, manipulate trending topics and create the illusion of widespread consensus, which can influence public opinion and decision-making.
Content Generation: AI language models can generate convincing articles, social media posts or comments that mimic human writing, making it harder to discern fact from fiction.
Targeted Misinformation Campaigns: AI-driven algorithms analyse user data to identify individuals or groups susceptible to specific narratives, allowing disinformation campaigns to be highly targeted and effective in spreading propaganda.
Algorithmic Amplification: Social media platforms use AI algorithms to prioritise content that drives engagement, often amplifying sensational or polarising information, regardless of its accuracy.
Scaling Misinformation: AI tools enable the creation of large volumes of false content quickly and cheaply, overwhelming fact-checking efforts and making it difficult for the public to keep up with corrections.
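The amplification dynamic described in the list above can be reduced to a toy sketch: a ranker that scores posts solely on predicted engagement will surface a sensational falsehood ahead of accurate but dull content, because accuracy never enters the score. The post texts, engagement numbers and ranking rule below are illustrative assumptions, not any platform’s actual algorithm.

```python
# Toy sketch of engagement-only feed ranking (all data hypothetical).
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    accurate: bool               # ground truth, invisible to the ranker
    predicted_engagement: float  # clicks/shares the model expects

def rank_feed(posts):
    """Order posts purely by predicted engagement.

    Accuracy plays no part in the score, so outrage-bait that
    draws clicks outranks a sober correction.
    """
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Measured, accurate report", accurate=True,  predicted_engagement=0.02),
    Post("Outrage-bait falsehood",    accurate=False, predicted_engagement=0.31),
    Post("Careful fact-check",        accurate=True,  predicted_engagement=0.04),
])

for post in feed:
    print(f"{post.predicted_engagement:.2f}  {post.text}")
```

Even this crude sketch shows the incentive problem: the inaccurate post lands at the top of the feed not because anyone chose to promote falsehood, but because the objective being optimised never asks whether a post is true.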
While AI offers immense potential for good, its misuse in spreading disinformation underscores the urgent need for robust ethical frameworks, better detection tools, and public awareness to mitigate these risks.
The internet as a creative ecosystem is inefficiently democratic, chaotic and unequal. However, social media is brutally efficient as a distribution channel. Billions of content pieces from millions of creators compete for attention; a happy few will go big, and a microscopic minority will get fantastically rich. Even then, success will be short-lived. Hence, manipulating voices for agendas has become easier. Creators jump onto the trends bandwagon.
The era of social media is accelerating into an era of AI-enabled content. Today, every internet-enabled phone owner is a walking studio. Everyone has a shot at being a star. Governments across the world have long complained about disinformation undermining elections, inciting unrest and deepening societal divides. There is outright disinformation — the deliberate spread of falsehoods to deceive. Next is misinformation, which is accidental, and malinformation, which exaggerates truths or changes their context to cause harm.
Threats can take many forms — fabricated news, phoney social media accounts and fake text, audio or video content. Online superspreaders, often aided by AI, intensify these attacks, allowing disinformation to ricochet unpredictably. After all, lies travel faster than the truth. Over the past five years, American companies have roughly tripled the amount of marketing spend they lavish annually on influencers, to $7bn, according to eMarketer, a research firm. Goldman Sachs estimates that as of last year there were more than 50 million influencers globally, from fashionistas on Instagram and comedians on TikTok to gamers on YouTube. Political commentators are a subgroup in this ecosystem. The influencer community is growing by between 10 and 20 per cent annually.
There are myriad types of influencers
Some influencers view the work as a pastime; others aspire to make it their vocation, lured by stories of superstars making tens of thousands of dollars for a post and harnessing their legions of followers to launch businesses of their own. Fifty-seven per cent of Gen Z in America would like to be a social-media influencer, according to Morning Consult, a pollster; 53 per cent describe it as a “reputable career choice”. In Bharat, 40 crore youth are less than 20 years of age. This cohort is going to create and consume social media content only. The influencer industry as it exists today is a complex and far-reaching ‘informational-commercial-personal’ communication apparatus whose incentive system is suboptimal and tightly controlled by algorithmic social media platforms.
As the influencer industry has developed, there has been an overarching trend of power shifting away from individuals and individual ownership (such as with blogs) and toward social media companies, as well as toward companies offering various technologies of self-commercialisation.
Perhaps the most significant factor in social media companies’ ability to accumulate power was industry stakeholders’ desire to maximise efficiency and minimise risk: individual participants, particularly influencers, wanted to gain income and visibility; platforms wanted consistency and predictability in content; and marketers sought to make these processes efficient and profitable.
This macro shift in power toward media and technology companies is one-way traffic. The drive toward data-driven identification of smaller and smaller subsets of influencers has led to a growing chasm between the various classes of influencers. Power has tilted so decidedly toward the influencer industry’s technological gatekeepers that it is their agendas that are most clearly observable in the industry’s continued evolution. The message format and platform rules matter more than influencer style, inspiration or content. To be famous, creators have to be seen. To be seen, they need an algorithmic push. For that recommendation, they must conform to the platform template. Hence there is no freedom to create as they like.
Regulatory attention has grown. However, it is focused on the veracity of content and the imbalance of power between the major platform companies and those whose content they use, but not as much on the gross imbalance of power and lack of transparency between platform companies and their users. What began as “social networks” have become “social media” and profoundly influence culture by shaping content, creating trends and deciding societal norms.
There is a positive element to how rapid information exchange enables global cultural exchanges and the swift spread of ideas. But the bigger consequence is cultural homogenisation, where diverse cultures adopt similar behaviours and preferences, diminishing unique cultural identities. These platforms are Western in sense and spirit. Additionally, social media platforms often promote consumerism through influencers and targeted advertising, impacting lifestyle choices and societal values. It leads to a sort of bondage: one is reduced to being one’s clicking self, and recommendations limit exposure to diverse perspectives and reduce the richness of cultural experiences. In the name of information sorting and user experience, these social media algorithms limit individual choice and creativity.

Social media companies, brands and influencers have increasingly adopted the term “creator” to encapsulate the many forms of social media content producers thriving today, from vloggers and livestreamers to Instagrammers and TikTok stars. But there is a key difference. “Influencer”, as the term has evolved in recent years, requires some proof: that people act based on the content. Not everyone influences in these terms, but anyone can “create”. Every influencer is a creator, but every creator is not an influencer. This is something to watch in 2025.
At the level of a civilisational narrative, we have to build awareness of external influence and the capabilities to defend our culture. The scale of virality is such that conventional responses can seem like using a bucket of water to put out a forest fire.
Government must make disinformation a priority, at a higher status of importance than even cyber security. Any security breach is a known act, but the polarising dynamics of digital world conversations operate silently. Finding so-called “tripwires” and testing crisis responses can highlight weaknesses. Calling out falsehoods quickly after an attack makes sense, but this can also amplify the fabricated message. Mapping and tracking disinformation sources must become strategic imperatives. We must understand online communities because this is the space where origination occurs.
After the release of ChatGPT in November 2022 highlighted the growing power of AI, public debate was dominated by AI-safety concerns. In March 2023, a group of tech grandees, including Elon Musk, called for a moratorium of at least six months on AI development. The following November a group of 100 world leaders and tech executives met at an AI-safety summit at Bletchley Park in England, declaring that the most advanced AI models have the “potential for serious, even catastrophic, harm”. Others argue that, instead of worrying about theoretical, long-term risks posed by AI, the focus should be on the real risks that exist today, such as bias, discrimination, AI-generated disinformation and violation of intellectual-property rights.
Prominent advocates of this position, known as the “AI ethics” camp, include Emily Bender of the University of Washington and Timnit Gebru, who was fired from Google after she co-wrote a paper about such dangers. A grand global experiment is therefore under way, as different governments take different approaches to regulating AI.
Besides introducing new rules, this also involves setting up some new institutions. The EU has created an AI Office to ensure that big model-makers comply with its new law. By contrast, America and Britain will rely on existing agencies in areas where AI is deployed, such as in health care or the legal profession. But both countries have created AI-safety institutes. Other countries, including Japan and Singapore, intend to set up similar bodies.
Meanwhile, separate efforts are under way to devise global rules and a body to oversee them. One is the AI-safety summits and the various national AI-safety institutes, which are meant to collaborate. Another is the “Hiroshima Process”, launched in the Japanese city of that name in May 2023 by the G7. These initiatives will probably converge and give rise to a new international organisation.
There are many views on what form it should take. OpenAI, the startup behind ChatGPT, says it wants something like the International Atomic Energy Agency, the world’s nuclear watchdog, to monitor existential risks (“x-risks”).
Microsoft, a tech giant and OpenAI’s biggest shareholder, prefers a less imposing body modelled on the International Civil Aviation Organisation, which sets rules for aviation. Academic researchers argue for an AI equivalent of the European Organisation for Nuclear Research, or CERN. A compromise, supported by the EU, would create something akin to the Intergovernmental Panel on Climate Change, which keeps the world abreast of research into global warming and its impact.
As courts deliberate over existing laws, legislatures will debate new ones, in particular on deepfakes, which use AI to insert a person’s likeness into an existing photo or video, often of a pornographic nature. This worries parents (whose children are being harassed with “nudifying” apps), celebrities (whose likenesses are being stolen by con artists) and politicians (who have found themselves the targets of AI-powered disinformation).
In March, the American state of Tennessee passed the Ensuring Likeness Voice and Image Security (ELVIS) Act to protect performers from having their image or voice used illegally. California has passed laws to stop political deepfakes.
Ensuring that humanity reaps the benefits of AI needs a framework that balances innovation with ethical considerations, accountability, and public trust. Thoughtful regulations can mitigate risks such as bias, privacy breaches and misuse while promoting transparency and inclusivity in AI development. By setting global standards for safety, data governance and human oversight, regulation ensures that AI serves as a tool for social good, driving advancements in healthcare, education, and sustainability.
Ultimately, well-crafted policies can create an environment where AI enhances human potential, safeguards rights and aligns with the broader values of society. Bharat and its civilisation have a unique place in this world, and we must pick what serves us best.