Express Computer

SEBI Prepares for Next Lap of Regulatory Measures for AI


Artificial Intelligence and Machine Learning (AI/ML) have become a dominant trend in the financial domain. Even a person without financial expertise can easily generate customised investment plans using prompts in ChatGPT or Gemini, and a trained financial professional can make more sophisticated use of AI/ML technologies, enabling their commercialisation. As the scope of AI-based securities market transactions expanded drastically, the Securities and Exchange Board of India (SEBI) was among the first regulators to recognise the opportunities and risks associated with the use of AI by intermediaries. 

Let’s have a quick recap of how SEBI began its journey of AI regulation in the securities market. SEBI initiated its policy preparations with a circular issued in January 2019 that required market infrastructure institutions to disclose their use of AI. The goal was to understand how AI technology was being adopted across the various functions performed by key institutions, such as stock exchanges and depositories. SEBI did not precisely define the AI technologies covered under the disclosure obligations, providing only an illustrative list for guidance. 

Based on data collected from these institutions over a five-year period, SEBI indicated its inclination to regulate the use of AI by the intermediaries it oversees. A consultation paper was floated by SEBI in November 2024, proposing to assign responsibility for the use of AI tools in investor-facing financial products to the intermediaries. The proposals in the consultation paper were implemented through amendments to regulations governing various intermediaries, especially the SEBI (Intermediaries) Regulations, 2008. Now, the regulations hold the intermediaries responsible for the use of ‘AI Tools’ in any form while providing services to their customers or reporting their compliance obligations. For this purpose, for the first time, SEBI broadly defined AI Tools. 

Where is SEBI heading from here to regulate the use of AI in the securities market? SEBI has now chosen the guidance path. It has issued a new consultation paper for public comments on guidelines for the responsible use of AI/ML in the Indian securities market. This consultation paper offers hints about the regulatory and policy directions that SEBI may adopt in the future regarding the use of AI/ML. 

Here’s a peek into five significant implications of these proposals:

  1. AI advisors will have to wait: Human accountability for AI-driven decisions is a key principle of the proposed guidelines. They recommend implementing “human-in-the-loop” or “human-around-the-loop” mechanisms to prevent over-reliance on AI systems. Additionally, they mandate that senior management with appropriate technical knowledge oversee AI/ML models and that market participants ensure responsible and ethical outcomes in the use of AI. 
  2. Tiered approach to implementation: The tiered approach proposed by SEBI involves adopting a “regulatory-lite framework” for uses of AI/ML in securities markets that do not directly impact clients. For internal purposes, including compliance, surveillance, advanced cybersecurity tools, and similar applications, lighter guidelines will initially apply. This approach aims to simplify regulatory requirements for less critical applications while maintaining stricter oversight for AI/ML usage that directly impacts customers or clients.
  3. No data black holes: Market participants using AI will have to establish a robust testing framework that can validate the results of the underlying technology. Market participants are required to maintain logs with full verbosity to chronologically reconstruct events and ensure transparency. Additionally, the guidelines mandate independent auditing of AI/ML systems and periodic reviews to monitor their behaviour, ensuring that data usage remains traceable and explainable. These measures are designed to avoid opaque or inaccessible data practices.
  4. Governance is fairness: SEBI emphasises data governance norms that include data ownership, access controls, encryption mechanisms, and proper documentation. SEBI might mandate an arrangement akin to an ‘ethical review board’ model at the intermediary level to oversee the performance, controls, testing, efficacy, and security of the AI technology deployed. Intermediaries will be required to ensure the quality of input data and the absence of bias in data processing for AI model development. These measures aim to ensure that AI/ML models are fair and do not favour or discriminate against specific groups of clients.
  5. Competition is essential: SEBI will foster competition among AI-tech providers to mitigate any concentration risk. Market participants will have reporting obligations to disclose the names of third-party vendors or service providers involved in AI services. This allows regulators to monitor any buildup of concentration. SEBI will designate dominant AI providers as critical service providers and subject their AI applications to enhanced monitoring to ensure resilience. Additionally, market participants will be encouraged to work with multiple AI vendors to decrease reliance on a single provider. 

It will also be interesting to see how SEBI implements these proposals after the consultation process. SEBI could either amend its regulations to include a code of conduct for intermediaries or introduce a novel and distinct charter of rights for investors who access financial services involving AI Tools.
