Need for a Strong Regulatory Framework for Artificial Intelligence

By Vijeth Shivappa

After the internet and mobile internet triggered the Third Industrial Revolution, artificial intelligence (AI) technologies, driven by big data, are fueling a Fourth Industrial Revolution. Today, AI adoption is rapidly increasing across all industries, including healthcare, transportation, retail, financial services, education, and public safety. The establishment of regulatory frameworks for artificial intelligence is of paramount importance in our rapidly evolving digital ecosystem. These regulations are essential for ensuring ethical and fair AI, protecting privacy and data security, mitigating risks, and promoting accountability.

Governments across the globe must propose legal frameworks for AI that address its risks and support international collaboration. A sound regulatory framework, paired with a collaborative plan, would safeguard the fundamental rights of people and businesses and help ensure the safety of AI systems. This would, in turn, strengthen uptake, investment, and innovation in AI across the globe. The regulatory framework must be aimed at providing AI developers, deployers, and users with clear requirements and obligations regarding specific uses of AI. At the same time, the framework should reduce administrative and financial burdens for businesses, particularly small and medium-sized enterprises (SMEs).

The 2nd Global Forum on the Ethics of AI, “Changing the Landscape of AI Governance,” will be organized under the patronage of UNESCO on 5 and 6 February 2024. The Forum will also feature the launch of several UNESCO initiatives, including the Global AI Ethics Observatory and the UNESCO AI Ethics Experts without Borders Network. UNESCO has made a seminal contribution to the goal of effective and ethical AI governance by adopting an ambitious global standard.

Why do we need a robust regulatory framework for AI?

AI governance is necessary wherever machine-learning algorithms are used to make decisions about people. Machine-learning bias, particularly in personal identity profiling, can misclassify basic information about users. This can result in individuals being unfairly denied access to healthcare or loans, and can mislead law enforcement agencies in identifying criminal suspects. An AI regulatory governance framework determines how best to handle scenarios where AI-based decisions could be unjust or violate human rights.
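
To make this concern concrete, consider how an auditor might quantify one kind of bias in an automated decision system. The short Python sketch below computes a demographic-parity gap, the difference in approval rates between two groups; all data, names, and the review threshold are hypothetical illustrations, not requirements drawn from any regulation.

```python
# Illustrative only: a simple group-fairness check an auditor might run
# on hypothetical loan-approval decisions made by a model.

def demographic_parity_difference(decisions, groups, group_a, group_b):
    """Absolute difference in approval rates between two groups."""
    def rate(g):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(members) / max(1, len(members))
    return abs(rate(group_a) - rate(group_b))

# Hypothetical decisions (1 = approved, 0 = denied) and group labels.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, groups, "A", "B")
print(f"Demographic parity gap: {gap:.2f}")  # |0.60 - 0.40| = 0.20 here
if gap > 0.10:  # the 0.10 threshold is a hypothetical policy choice
    print("Gap exceeds tolerance: flag the system for human review.")
```

A real audit would use statistically meaningful sample sizes and several complementary metrics, but even a check this simple shows how an abstract fairness obligation can become a measurable, enforceable test.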

Although existing legal systems in individual nations provide some protection, they are insufficient to address the specific challenges AI systems may bring. An AI-specific regulatory framework ensures that citizens can trust what AI technology has to offer. While most AI systems pose limited to no risk and can help solve many societal challenges, certain AI systems create risks that must be addressed to avoid undesirable outcomes.

The regulatory framework must have the following:

  • A governance structure at the national and global levels;
  • A clear list of high-risk areas and applications;
  • Clear requirements for AI systems used in high-risk applications;
  • A compulsory conformity assessment before an AI system is put into service or placed on the market (a sketch of such a gate follows this list);
  • Specific obligations for users and providers of high-risk AI applications;
  • Regulations governing an AI system after it has been placed on the market.
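
As a thought experiment, the conformity-assessment item above could be operationalized as an automated pre-deployment gate in a provider's release pipeline. The sketch below encodes the obligations discussed later in this article as a simple checklist; the class, field names, and pass criterion are illustrative assumptions, not provisions of any enacted law.

```python
# Illustrative sketch of a pre-deployment conformity gate.
# The checks and their names are hypothetical, mirroring the
# obligations discussed in this article rather than any statute.

from dataclasses import dataclass

@dataclass
class ConformityReport:
    risk_assessment_done: bool      # risk assessment and mitigation in place
    dataset_quality_reviewed: bool  # training data audited for bias and quality
    activity_logging_enabled: bool  # results are traceable
    documentation_complete: bool    # purpose and design documented for authorities
    human_oversight_defined: bool   # oversight measures assigned
    robustness_tested: bool         # security and accuracy targets met

def may_deploy(report: ConformityReport) -> bool:
    """A high-risk system may only ship if every obligation is satisfied."""
    return all(vars(report).values())

report = ConformityReport(True, True, True, True, True, False)
if not may_deploy(report):
    failed = [name for name, ok in vars(report).items() if not ok]
    print("Blocked from market:", ", ".join(failed))
```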

Identifying high-risk areas for the use of AI technology

  • Critical infrastructure (e.g., transport, telecommunications, power grids, nuclear installations) that could put the life and health of citizens at risk;
  • Safety components of products (e.g., AI applications in robot-assisted surgery);
  • Administration of justice and democratic processes (e.g., applying the law to a concrete set of facts);
  • Law enforcement that may interfere with people’s fundamental rights (e.g., evaluation of the reliability of evidence);
  • Migration, asylum, and border control management (e.g., verification of the authenticity of travel documents);
  • Employment, management of workers, and access to self-employment (e.g., CV-sorting software for recruitment procedures);
  • Educational or vocational training that may determine access to education and the professional course of someone’s life (e.g., scoring of exams).

Keeping checks & balances

  • High-risk AI systems must be subject to strict obligations before they can be made available for general use on the market, including:
  • Adequate risk assessment and mitigation systems;
  • High-quality datasets feeding the system, to minimize risks and discriminatory outcomes;
  • Logging of activity to ensure traceability of results (see the logging sketch below);
  • Detailed documentation providing all information on the system and its purpose that authorities need to assess its compliance;
  • Clear and adequate information for the user;
  • Appropriate human oversight measures to minimize risk;
  • A high level of robustness, security, and accuracy.
  • All remote biometric identification systems are considered high-risk and subject to strict requirements. The use of remote biometric identification in publicly accessible spaces for law enforcement purposes must, in principle, be prohibited.

All exceptions must be clearly outlined and regulated, such as when it is necessary to prevent a specific and imminent terrorist threat or to detect, locate, identify, or prosecute a perpetrator or suspect of a serious criminal offense. Such uses must be authorized by a judicial or other competent authority and subject to appropriate limits in time, geographic reach, and the databases searched.
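
Among the obligations listed above, activity logging for traceability lends itself to a brief illustration. The sketch below appends one structured record per automated decision; the field names, file format, and example values are assumptions made for illustration, and a real deployment would need tamper-evident, access-controlled storage.

```python
# Minimal, illustrative audit log for AI decision traceability.
# Field names and storage format are assumptions for this sketch.

import json
import time
import uuid

def log_decision(log_file, model_id, model_version, inputs_digest, decision, operator):
    """Append one traceable record per automated decision."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_id": model_id,
        "model_version": model_version,
        "inputs_digest": inputs_digest,   # hash of inputs, not raw personal data
        "decision": decision,
        "human_overseer": operator,       # supports the human-oversight obligation
    }
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["event_id"]

# Hypothetical usage: record a credit-decision event.
event = log_decision("audit.log", "credit-scorer", "2.3.1",
                     "sha256:9f2c...", "denied", "analyst-042")
print("Logged event", event)
```

Logging a digest of the inputs rather than the inputs themselves is one way to reconcile the traceability obligation with the data-protection goals discussed earlier.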

Concerted Collaboration

Government policymakers, high-level decision-makers, industry leaders, representatives of scientific and research institutions, and non-governmental organizations must collaborate and share their insights and good practices on the governance of AI at the global, regional, and national levels. If an AI system is designed well and complies with the regulatory framework, the product will be more convenient to use, more accurate, and therefore more useful. More users mean more data, which in turn makes the AI system better: a mutually strengthening relationship exists between AI systems and data. Big data and AI may thus converge into a new kind of AI, sometimes called data intelligence.

The future of AI depends on collaboration among governments, organizations, and stakeholders. AI applications must remain trustworthy even after they have been made generally available on the market, which requires continuous quality and risk management by providers. Because AI is a rapidly evolving technology, the regulatory framework must take a future-proof approach, allowing regulations to adapt to technological advancement. Success ultimately depends on developing a comprehensive AI regulatory framework that protects the public while fostering innovation and transparency.
