The rise of shadow AI: Why businesses need stronger AI governance

By Dr. Kanishk Agrawal, Chief Technology Officer at Judge Group India

Indian companies have embraced artificial intelligence (AI) at a remarkable pace. Over the past two years, organizations across sectors such as banking, healthcare and information technology have adopted AI tools to work faster, automate routine tasks and make better decisions. According to a report by Nasscom and EY, 60% of large Indian companies now use AI across multiple parts of their business, and 90% of knowledge workers use some form of AI tool every day. Yet while adoption has surged, governance has not kept pace: many organizations are still drafting the policies and security controls needed to manage AI use. The result is a category of risk that companies are only beginning to understand, known as shadow AI.

What is shadow AI?
Shadow AI refers to employees using AI tools without the knowledge or approval of their company's information technology, security or compliance teams. These tools range from publicly available AI platforms and AI-powered coding assistants to data analysis tools and automation agents. The trend is driven by accessibility: unlike enterprise software, many AI tools can be used immediately through a web browser or application programming interface, so employees can start using them without ever informing company leadership.

Surveys of Indian companies suggest the problem is widespread. More than 70% of organizations believe their employees are using AI tools without the knowledge of the IT team, and 45% of workers say they have uploaded company documents or data into AI systems for tasks such as writing reports or analyzing data.

The risks of shadow AI

The most immediate concern is the exposure of confidential information. When employees feed company data into external AI platforms, that data may be stored, reused or even shared with third parties. This is especially serious in sectors such as banking, healthcare and government services, where data privacy rules are strict. India's Digital Personal Data Protection Act holds companies responsible for safeguarding sensitive data; if employees inadvertently share such data with external AI platforms, companies can face legal consequences, reputational damage and financial penalties. Security experts also warn of AI-specific cyber threats: attackers are attempting to manipulate AI systems into revealing confidential information or performing actions they should not. When AI usage goes unmonitored, these risks are far harder to detect and contain.

Why do employees use AI tools without permission?

Employees turn to AI tools because the tools make them more productive. AI can summarize documents quickly, draft marketing content, analyze data faster and help developers write or debug code. In fast-paced industries, employees feel pressure to deliver quickly; if the company's approved systems lack AI capabilities or are hard to use, employees look for other ways to get their work done. The pattern mirrors the earlier rise of shadow IT, when employees adopted cloud tools and productivity apps outside official company systems.

In India's digital economy, companies are under constant pressure to be more productive and innovative, so employees are tempted to adopt AI tools on their own. Rather than banning AI outright, companies should create rules that let employees use it responsibly while retaining control over how it is used.

Why companies need AI governance

As AI becomes part of company operations, governance can no longer be an afterthought. Companies need policies that define how AI tools may be used, what kinds of data may be shared with AI systems and which platforms are approved.

Some companies are forming AI governance committees that bring together leaders from IT, legal, compliance, cybersecurity and business teams. These committees evaluate AI tools, set guidelines and ensure responsible adoption across the company. Organizations also need to monitor how AI tools are actually being used, with systems that track AI usage and flag potential risks before they become serious problems.

Employee education is equally important. Many workers simply do not understand the risks of sharing company information with AI systems. Training programs can reduce data exposure while still encouraging innovation in a safe way.

The goal is to balance innovation with responsibility. The rise of shadow AI shows that innovation often comes from employees themselves: they find ways to use AI to improve productivity, streamline workflows and uncover new insights. Rather than stifling that innovation, companies should find ways to enable it while keeping AI use under control. Good governance lets companies capture the benefits of AI while keeping data safe, following the rules and using the technology ethically.
