On March 28 this year, more than 1,100 individuals, ranging from global tech leaders to eminent citizens, signed an open letter posted online calling on “all Artificial Intelligence (AI) labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4”. The timing is significant: across the globe, a debate has begun on the reach and impact of generative, AI-driven natural language processing tools.
It all started with the launch of ChatGPT by OpenAI in November last year, a chatbot that enabled human-like conversations and much more, and could assist with tasks like composing emails, essays, and code in no time. While GPT-4 is a more comprehensive multimodal large language model, broader in scope than the one behind ChatGPT, the fact remains that the output of both these models, and of Google’s Bard, is causing alarm across the world.
These fears are premised on the concerns the open letter raises: should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk the loss of control of our civilization?
These questions definitely need answers, and stakeholders cannot delay a global regime to address issues of technology and its management as a global commons. The leading nations of the world have spent almost two decades examining the impact of technology on the international order.
While space and nuclear technologies are governed by binding agreements, cyber technology has none, nor is one on the horizon. The first attempt, the Council of Europe’s draft convention on cyber crimes in 2001, was a failure, and the more recent United Nations-fostered Group of Governmental Experts on advancing responsible State behaviour in cyberspace in the context of international security has made some headway, prescribing 11 non-binding norms in July 2021, but there is still a long way to go before an agreement is reached.
Meanwhile, the march of technology continues. It is clear that while an overarching agreement has to be in place, sectoral areas like AI need to be addressed more proactively, with some arrangement arrived at so that product and solution developers, regulators and sovereign authorities know how to deal with the impact of technological advances. Clearly, generative AI based on large language models poses a bigger challenge than anticipated, both in its timing and in the pace at which it is progressing.
This technology is becoming increasingly sophisticated, with systems that can generate images, music, text, and even videos. While these capabilities are impressive, they also raise concerns about their potential misuse. There are fears that these AI systems could be used to spread fake news, create malicious content, or even impersonate individuals. One of the challenges of regulating generative AI is that it is difficult to determine the intent of these systems. Unlike human creators, AI systems do not have a moral compass or conscience.
Therefore, it is essential to ensure that these systems are designed in a way that aligns with ethical standards. For instance, they should not be programmed to generate content that is racist, sexist, or discriminatory in any way. Another challenge of regulating generative AI is that it can be difficult to distinguish between content generated by humans and content generated by AI. This makes it difficult to hold individuals or organizations accountable for the content they publish. It is essential, therefore, to develop methods, such as watermarking and provenance standards, that can verify the source of generated content.
To regulate generative AI, it is necessary to have a clear set of guidelines and standards governing its development and use. These guidelines have to be developed in consultation with stakeholders, including industry experts, policymakers, and the general public.
One approach to regulating generative AI is to introduce legal frameworks that govern its development and use. This would require policymakers to work closely with industry experts to develop regulations that balance the potential benefits of generative AI with the need to protect individuals and society as a whole.
Such regulations could include rules on the use of personal data, the transparency of AI systems, and the accountability of those who develop and use them. Another approach to regulating generative AI is to promote self-regulation within the industry. This would involve industry experts developing their own codes of conduct that govern the development and use of generative AI.
These codes could be enforced by industry bodies or through peer review. However, governments and regulatory bodies have to address these challenges more proactively than they do today.
In this context, it will be very pertinent for India to take a major role in the global order to address the concerns listed in the open letter. In November last year, India assumed the Chair of the Global Partnership on Artificial Intelligence (GPAI), an international initiative of 25 major nations to support the responsible and human-centric development and use of AI.
In accepting the Chair, the Indian government indicated that it would work in close cooperation with member states to put in place a framework around which the power of AI can be harnessed for the good of citizens and consumers across the globe, with adequate guardrails to prevent misuse and user harm. The opportunity could not be better.
India has successfully applied AI in many areas, including agriculture, and is ranked second in the Asia-Pacific region in terms of AI applicability and adaptability. India’s start-up ecosystem has built many AI-related projects that have been applied effectively to real-world problems. India’s leadership in fostering globally responsible AI is thus very pertinent, and in line with the country’s presidency of the G20, wherein it has espoused the reach of technology for the greater public good and the fostering of responsible AI in development and usage.
While Big Tech has been on the radar of governments and regulators across the world, chiefly for its significant influence and reach, the push for responsible technology development and deployment has not been impactful.
Competition in generative AI has already shown that there is no industry-wide consensus on where to stop. The 23 Asilomar AI Principles, guidelines for the ethical research and development of beneficial AI drawn up in January 2017 by a group of AI researchers, technology experts and legal scholars from different universities and organizations, have not been followed by developers.
Possibly these principles are a guiding tool that GPAI, under India’s leadership, could deliberate on and fine-tune into a binding agreement on responsible AI for all stakeholders. The race is moving too fast, and only India’s leadership can handle this march in a sustainable way.