By Dr. Vamsidhar Yendapalli, Head of Department, CSE, GITAM University
Artificial Intelligence is now used in healthcare diagnostics, financial markets, transport systems, and defence applications. Development cycles are accelerating, with systems designed and deployed in weeks. Ethical safeguards and regulatory mechanisms have not advanced at the same pace, placing new responsibilities on engineers.
Bias is a central risk. In 2019, researchers found that a widely used healthcare algorithm in U.S. hospitals assigned lower treatment priority to Black patients than to white patients with similar conditions, because it used past healthcare spending as a proxy for medical need, and spending patterns reflected unequal access to care. Similar risks arise wherever training data encodes historical inequities. In India, where public datasets are often incomplete and skewed, deploying such systems without safeguards could reinforce existing inequalities in health, credit, or employment.
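To make this concrete, the sketch below shows one pre-deployment check an engineer might run: comparing positive-decision rates across demographic groups and flagging a large gap. The data, group labels, and the 0.8 rule-of-thumb threshold are illustrative assumptions, not a complete fairness audit.

```python
# A minimal sketch of a pre-deployment bias audit, assuming a binary
# decision (1 = prioritised for treatment) and hypothetical records of
# (group, decision). Real audits use richer metrics and statistical tests.
from collections import defaultdict

def selection_rates(records):
    """Return the rate of positive decisions for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, decision in records:
        counts[group][0] += decision
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Hypothetical audit data, not drawn from any real system.
records = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(records)
ratio = min(rates.values()) / max(rates.values())
print(rates)
print(f"disparate impact ratio = {ratio:.2f}")  # here 0.50
# A common rule of thumb flags ratios below 0.8 for review before deployment.
```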
Opacity is another concern. Complex models, particularly deep neural networks, often produce outcomes that are difficult to interpret. If a loan is denied, a parole decision is influenced, or an autonomous vehicle causes harm, the people affected must know why. Without transparency, accountability cannot be established, and accountability is essential in any system that influences rights and opportunities.
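As a minimal illustration of the kind of explanation affected people might receive, the sketch below scores a hypothetical loan application with a simple linear model and reports the factors that drove the decision. The feature names, weights, and threshold are invented for illustration; opaque production models need dedicated explanation techniques rather than this direct read-off.

```python
# A minimal sketch of a human-readable explanation for a loan decision,
# assuming a simple linear scoring model with hypothetical weights.
WEIGHTS = {"income": 0.5, "existing_debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def explain_decision(applicant: dict) -> str:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    verdict = "approved" if score >= THRESHOLD else "denied"
    # Rank features by how strongly they pushed the score either way.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    reasons = ", ".join(f"{name} ({value:+.2f})" for name, value in ranked)
    return f"Loan {verdict} (score {score:.2f}). Main factors: {reasons}."

print(explain_decision({"income": 3.0, "existing_debt": 1.5, "years_employed": 2.0}))
# -> Loan denied (score 0.90). Main factors: income (+1.50), existing_debt (-1.20), ...
```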
Privacy is also at stake. AI depends on large-scale personal data, often gathered continuously. While this can improve medical diagnosis or service delivery, it can also enable surveillance or manipulation. India’s Digital Personal Data Protection Act of 2023 has created a baseline legal framework, but engineers must design systems that embed privacy protections in practice. Legal compliance is not sufficient without engineering safeguards in data collection, storage, and use.
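One way such safeguards can appear in code is sketched below: collecting only the fields a system needs and pseudonymising identifiers at the point of intake. The field names and in-code salt are illustrative assumptions; a real system would keep the salt in a secret store and enforce retention limits.

```python
# A minimal sketch of privacy safeguards at the point of data collection:
# keep only the fields the model needs and pseudonymise the identifier.
import hashlib

SALT = b"rotate-me-and-store-securely"   # assumption: managed outside the code
NEEDED_FIELDS = {"age_band", "diagnosis_code"}  # data minimisation

def pseudonymise(patient_id: str) -> str:
    """Replace a direct identifier with a salted, truncated hash."""
    return hashlib.sha256(SALT + patient_id.encode()).hexdigest()[:16]

def collect(raw_record: dict) -> dict:
    """Store only the needed fields plus a pseudonymous ID."""
    minimal = {k: v for k, v in raw_record.items() if k in NEEDED_FIELDS}
    minimal["pid"] = pseudonymise(raw_record["patient_id"])
    return minimal

print(collect({"patient_id": "P-1001", "name": "A. Kumar",
               "age_band": "40-49", "diagnosis_code": "E11"}))
# The name never enters storage; the raw patient_id is not recoverable
# from "pid" without the salt.
```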
Ethical standards must therefore be integrated into engineering practice. Systems should undergo rigorous testing before deployment, especially in healthcare, transport, finance, and defence. Audit trails of datasets, design choices, and parameters should be maintained to ensure traceability. Algorithms should be designed to provide explanations that are understandable to non-specialists, particularly in sectors where decisions directly affect livelihoods or legal rights.
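A hedged sketch of the audit-trail idea follows: each training run appends a timestamped record of the dataset fingerprint and parameters to a log. The log format and field names are assumptions for illustration; production systems would add signed, append-only storage and link each entry to a released model.

```python
# A minimal sketch of an audit-trail entry, assuming a JSON-lines log file.
import hashlib
import json
from datetime import datetime, timezone

def dataset_fingerprint(path: str) -> str:
    """Hash the training data so the exact dataset can be traced later."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def log_training_run(dataset_path: str, params: dict,
                     logfile: str = "audit_log.jsonl") -> dict:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset_sha256": dataset_fingerprint(dataset_path),
        "parameters": params,  # design choices recorded alongside the data
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example: log_training_run("train.csv", {"model": "gbm", "max_depth": 4})
```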
Meeting these standards requires contributions from multiple disciplines. Sociologists and psychologists study how bias appears in social contexts. Lawyers provide clarity on liability and responsibility. Ethicists define frameworks for reasoning about choices. Engineers bring these inputs into practice by embedding them into system design. Without such integration, principles remain theoretical and unimplemented.
Universities must take the lead in preparing engineers for these responsibilities. Courses on programming and algorithms should be accompanied by training in ethics and governance. Case-based learning can present students with dilemmas that combine technical, legal, and social issues. For example, debates on the deployment of facial recognition in public spaces force students to weigh efficiency against privacy and democratic accountability. Research projects should be evaluated not only for technical performance but also for ethical impact, with dedicated funding and recognition for work that advances responsible design.
The urgency of such preparation is clear. AI systems are already assisting radiologists in detecting cancers, driving high-frequency trading that moves financial markets, and being tested in autonomous defence applications. Each of these contexts carries risks if deployed without clear ethical safeguards.
International organisations have issued guidance. UNESCO and IEEE have published frameworks for responsible AI, and the European Union's AI Act introduces mandatory risk assessments for high-risk systems. India has begun policy discussions through the NITI Aayog AI strategy and through the Bureau of Indian Standards, which is drafting AI standards. These initiatives provide direction, but their success depends on implementation within engineering education and practice.
Ethical AI requires enforceable standards, professional accountability, and institutional support. Engineers must design systems to minimise harm, prevent bias, protect privacy, and provide transparency. Ethics is not separate from engineering but part of design, testing, and deployment.
The trajectory of AI will depend on choices made by engineers today. Those choices will determine how systems influence healthcare, finance, education, and governance for decades. Every system reflects design decisions, and those decisions must align with legal standards, human rights, and social values. Ethics in AI is not an external requirement but an engineering standard, as integral as safety codes in other fields.