
Risks associated with ChatGPT


By Rishi Kant, Joint Director, Directorate of Economics and Statistics (DES)

ChatGPT, a chatbot developed by OpenAI, has taken the world by storm. Within two months of launch, it was estimated to have more than 100 million active users, clearly reflecting its massive popularity. ChatGPT, an AI large language model (LLM), is the latest manifestation of AI powered by vast quantities of data, algorithms capable of correcting themselves, and massive computational power. It uses a deep learning architecture that allows it to understand and generate human-like text and responses, drawing on a mix of vast datasets and powerful computing. It can take a user’s input (a prompt or question) and generate a relevant, coherent response in natural language.
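To make this prompt-and-response loop concrete, the short sketch below shows how a program might send a prompt to a model of this kind and read back the generated text. It assumes the OpenAI Python client (version 1.x) with an API key set in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative only, not a prescribed usage.

    from openai import OpenAI

    # Minimal sketch: send a user prompt and print the generated reply.
    # Assumes the openai>=1.0 Python client and an OPENAI_API_KEY env variable.
    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[
            {"role": "user",
             "content": "Summarise the risks of AI chatbots in two sentences."},
        ],
    )

    # The model returns a relevant, coherent natural-language response.
    print(response.choices[0].message.content)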

Disruptive technology
As an AI language model, ChatGPT can be considered disruptive because it demonstrates remarkable natural language understanding and generation capabilities. It can comprehend context, follow conversational threads, and provide relevant responses, which has the potential to revolutionize human-computer interaction and to automate tasks that previously required human intervention. AI models like ChatGPT can continually learn and adapt from new data: as more users interact with the system, it gains insights and improves its responses over time, leading to a constantly evolving and self-improving AI.

Potential for Misuse
While ChatGPT has numerous positive applications, it also has the potential for misuse. Besides concerns about data breaches and surveillance, the use of AI has raised concerns about its potentially harmful effects on the accuracy and integrity of the information it produces. ChatGPT can be manipulated into generating misinformation, because the model is trained on a large volume of data and finds associations and patterns without making counterfactual inferences about what is good or bad. As a result, spreading misinformation, hate speech, and deepfake content becomes much easier, while tackling it becomes even more difficult.

Another risk is the potential for generating and perpetuating bias, whether in relation to particular religions, genders, races, ideologies, or castes. AI systems learn from data without making any ethical or moral judgments or reasoning. AI lacks emotional intelligence, empathy, and compassion, which are crucial components of moral decision-making and rest only with the conscious human mind. Devoid of causal reasoning, an AI system trained on data that advertently or inadvertently contains biases may perpetuate and even amplify those biases. For example, if historical data contains discriminatory patterns or reflects societal prejudices, the AI system may produce biased outcomes. Treating such outcomes as the holy grail may lead to inaccurate or unfair conclusions, which can prove disastrous, especially when such biases creep into the policy-making domain.
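The point about biased historical data can be illustrated with a small, entirely synthetic sketch: a classifier fitted to skewed historical labels reproduces the skew between two groups even though their underlying ability is identical. The dataset, the 0.8 group penalty, and the hiring framing below are all hypothetical, chosen only to make the effect visible.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Two groups with identical underlying skill distributions.
    group = rng.integers(0, 2, n)
    skill = rng.normal(0.0, 1.0, n)

    # Hypothetical biased historical labels: group 1 was hired less often
    # than group 0 at the same skill level (the 0.8 penalty is arbitrary).
    hired = (skill + rng.normal(0.0, 0.5, n) - 0.8 * group) > 0

    # Train on the observable features, including group membership.
    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, hired)
    pred = model.predict(X)

    for g in (0, 1):
        print(f"group {g}: historical hire rate {hired[group == g].mean():.2f}, "
              f"model hire rate {pred[group == g].mean():.2f}")

On a typical run, the printed hire rates show a similar gap between the two groups for both the historical labels and the model’s predictions, which is exactly the perpetuation effect described above.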

Besides, teaching AI to make ethical and moral decisions presents its own set of challenges. The ethical rules, principles, and priorities can vary across cultures, societies and individuals, making it difficult to program a universally “ethical” AI. What is considered ethical in one context may be seen as immoral in another. Moreover, societal values and moral norms can change over time. What is considered ethical today may not be so in the future. Keeping AI aligned with evolving ethical standards requires continuous adaptation.

In certain situations, AI may face ethical dilemmas where no option is entirely ethical, and it has to make trade-offs. For instance, what may sound ethical or moral from the utilitarian perspective may not be so from a deontological perspective. A utilitarian might argue that sometimes it’s necessary to temporarily set aside certain rules in order to achieve the greatest overall well-being, while a deontologist might contend that certain principles are inviolable, regardless of potential positive outcomes.

Overall, ChatGPT and similar AI language models have the potential to bring about significant changes in how we interact with technology and each other. While this disruption holds great promise for innovation and efficiency, it also demands careful consideration of the ethical and social implications to ensure responsible and beneficial use.
