By Gaurav Sahay, Partner, SNG & Partners, Advocates & Solicitors
Released in November 2022, ChatGPT is a sophisticated computer program powered by artificial intelligence (AI), designed to engage in natural-language conversations with users. It runs on a language-model architecture created by OpenAI called the Generative Pre-trained Transformer (GPT). ChatGPT has been trained on a vast amount of text data from the internet, books, and other sources, which enables it to generate human-like responses to a wide range of questions and prompts. It uses advanced language-processing techniques to understand and generate text-based interactions, making it a powerful tool for communication, information retrieval, and assistance with various tasks. It can be used for a wide range of applications, such as answering questions, providing explanations, generating creative content, and assisting with language translation. With its ability to understand and respond to human-like queries, ChatGPT aims to provide a helpful and interactive experience for its users.
Convenience or methodical hard work?
Every individual had the same concerns when the calculator reached the hands of the masses. Though far ahead in technological advancement, ChatGPT invites scrutiny on a similar rationale.
ChatGPT is not accessible in a number of countries, including China, Iran, North Korea and Russia, but Italy was the first government to ban ChatGPT over privacy concerns.
Privacy, however, is only a peripheral reason. A ban on ChatGPT in Italy or any other country may be attributable to concerns about privacy, data protection, ethical considerations, potential misuse, or regulatory compliance.
The Indian context
The cardinal questions concerning its accountability, jurisdiction, privacy, intellectual property, and social, ethnic, racial and cultural prejudice are being debated across nationalities. Until recently, the effortless approach of Indian regulators was to ban and limit whatever did not fall within the purview of archaic law(s). The unyielding regulator, however, has had to unclasp its tight fist and shed its totalitarian persona. The adoption of a regulatory sandbox methodology is an effort by the Indian legislature to keep pace with technological disruption rather than shun it. The remedy is to use it to our advantage: tame it with utility, and regulate it while understanding it.
The point at issue is also addressing social, ethnic, racial and cultural prejudice. AI is essentially subject to the prejudices we as individuals absorb inadvertently and unwittingly. It has been observed that, when translating from one language to another, AI suffers from social and cultural imperfections, such as gendered assumptions about professions, skill-sets and human intelligence, that can be termed medieval. Social and cultural biases in human society can still be rectified through social grooming, education and a change of ideologies, but correcting AI by a similar approach is arduous. Law can affix liabilities and obligations on real persons, but how does it arrest these biases in AI?
The proposition of opportunities brought forth by AI is enormous, yet it remains incumbent on us to re-evaluate and weigh the potential challenges that this new way of working, and the technological transformation it brings, will present.