Generative AI, with its potential to revolutionise industries, has captured the attention of executives worldwide. As organisations explore the possibilities of this disruptive technology, they are faced with a myriad of questions and challenges. Muthumari S, Global Head of Data & AI Studio at Brillio, sheds light on the utilisation, adoption, and implications of generative AI in an exclusive interview.
Some edited excerpts:
How are enterprises in different industries utilising Generative AI to enhance their operations?
The ability of Generative AI to create new text, images, and audio makes it a far more disruptive technology than anything businesses have previously experienced. Microchips, cloud computing, 5G: all pale in comparison to GenAI’s potential to change our world and how we live and work. For enterprises, the use cases are impressive, showing GenAI’s capacity to increase efficiency and productivity, reduce costs, and open new growth areas. Across industries, I have seen many use cases in content creation, engineering excellence (including coding assistants and testing), and intelligent automation for search and knowledge management.
In Banking, for example, one popular use case for GenAI is algorithm training: creating synthetic data to train the machine learning algorithms used in KYC and building more accurate natural language models for virtual assistants. In interpreting loan applications, it can be used to assess small business applications that contain non-numeric data, such as business plans. It can also help with real-time customer analysis by speeding up commercial banking tasks, such as answering questions in real time about a customer’s financial performance in complex scenarios.
For Healthcare and Life Sciences organisations, GenAI can facilitate drug discovery and development by helping generate new compounds for drug molecules and optimising drug candidates. It can accelerate clinical trials by generating summaries, Q&A, translations, and knowledge graphs from massive volumes of unstructured data. It can serve as a medical insurance assistant that interacts with customers and assists with inquiries regarding health plans and other issues.
GenAI can also help optimise component placement in semiconductor chip design. In gaming, it can be used for procedural content generation, producing content such as levels, maps, and quests based on predefined rules and criteria. It can also support the analysis of player data, such as gameplay patterns and preferences, to provide personalised experiences and help developers increase player engagement and retention.
Quick Service Restaurants can use GenAI for self-service through speech- and video-enabled kiosks to identify customers and retrieve their profiles and previous transactions to offer personalised menu items. They can also generate receipts and kitchen orders that note personal preferences and raise the bar of personalisation by creating unique combos for customers to deliver customised experiences.
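The synthetic-data idea mentioned for KYC above can be illustrated with a minimal sketch. Everything here is invented for illustration — the field names, name lists, and risk labels are placeholders, not a real KYC schema:

```python
import random
import string

def synthetic_kyc_records(n, seed=42):
    """Generate fabricated customer records for training KYC models.
    Because every value is synthetic, no real PII is exposed."""
    rng = random.Random(seed)  # seeded for reproducible output
    first = ["Asha", "Ravi", "Meera", "John", "Li", "Sara"]
    last = ["Patel", "Kumar", "Smith", "Chen", "Khan", "Rao"]
    records = []
    for _ in range(n):
        records.append({
            "name": f"{rng.choice(first)} {rng.choice(last)}",
            "account_id": "".join(rng.choices(string.digits, k=10)),
            "risk_label": rng.choice(["low", "medium", "high"]),
        })
    return records

# Usage: three fake records that a KYC classifier could train on.
sample = synthetic_kyc_records(3)
print(len(sample), sample[0]["risk_label"])
```

In practice, a generative model would produce richer and more realistic records than this random sampler, but the privacy benefit is the same: the training set contains no real customer data.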
What challenges do organisations face when adopting an emerging technology such as Generative AI?
Despite the anticipation — or perhaps because of it — leaders struggle with approaching Large Language Models (LLMs). In the race to capitalise on ChatGPT’s potential, they devote substantial resources to evaluating the language models and algorithms — and short-change the assessment of GenAI’s ability to solve current challenges and meet their business objectives. In other words, they over-index on the science of the technology and overlook the problem-solving and engineering components that are equally necessary for successful GenAI initiatives.
Infrastructure remains a surprisingly overlooked aspect of GenAI. Maintaining LLMs requires businesses to be at the right place on the maturity curve and to design their architectures with appropriate techniques; because LLMs are resource-intensive and can drive high latency and costs, building and scaling them depends on getting that architecture right.
Hallucinations are another big concern for enterprises that are looking to scale LLMs. Finally, data security concerns with LLMs arise from the need to protect sensitive information and ensure privacy. Risks include data breaches, biases in outputs, and adversarial attacks. Safeguarding data, implementing robust security measures, and addressing biases are essential for mitigating these concerns and maintaining the integrity and trustworthiness of LLM systems.
Could you explain the concept of navigating LLM hallucinations and its impact on Generative AI?
LLM hallucinations occur when LLMs generate outputs that are factually incorrect, nonsensical, or biased, or that contain invented information. These hallucinations stem from limitations in the training data and the complexity of language understanding in AI models. They can lead to misinformation, reduced trust in LLMs, reinforcement of biases, and confusion among users. Mitigating hallucinations involves improving data quality, detecting and addressing biases, refining training procedures, and integrating human validation. Ongoing research aims to enhance LLM reliability and accuracy and to address ethical implications, minimising hallucinations and improving the overall performance of GenAI models. Addressing LLM hallucinations requires a multi-faceted approach.
The first aspect is data pre-processing and training. Careful data pre-processing, including data cleaning, normalisation, and bias detection, is crucial to minimise the presence of misleading or biased patterns in the training data. Robust training procedures, including diverse datasets and appropriate regularisation techniques, can help mitigate hallucination risks.
The second is fine-tuning and conditioning. Fine-tuning the LLM on specific domains or applying conditioning techniques can improve the accuracy and reliability of the generated outputs. By narrowing the scope and providing explicit guidelines or constraints during generation, the risk of hallucinations can be reduced.
The third involves integrating human validation and feedback loops to aid in detecting and addressing hallucinations. Human reviewers or validators can assess the generated outputs, identify inaccuracies or biases, and provide feedback to improve the model’s performance and minimise hallucination risks.
The fourth revolves around continual improvement and monitoring. Ongoing monitoring and iteration are essential to address emerging hallucination patterns. Regular evaluation, user feedback analysis, and model updates based on new data can help refine the LLM and enhance its reliability and trustworthiness over time.
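As a toy illustration of the first, third, and fourth aspects above, here is a minimal Python sketch. The word-overlap heuristic is an invented stand-in for a real grounding check, and the queue stands in for a human-review workflow:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Collects outputs flagged for human validation (third aspect)."""
    items: list = field(default_factory=list)

def clean_training_text(records):
    """First aspect: minimal pre-processing - normalise whitespace,
    lowercase, drop blanks, and de-duplicate records."""
    seen, cleaned = set(), []
    for text in records:
        norm = " ".join(text.split()).lower()
        if norm and norm not in seen:
            seen.add(norm)
            cleaned.append(norm)
    return cleaned

def validate_output(answer, source_terms, queue, threshold=0.5):
    """Third and fourth aspects: score the answer's overlap with known
    source terms; anything below the threshold is routed to a human."""
    words = set(answer.lower().split())
    overlap = len(words & source_terms) / max(len(words), 1)
    if overlap < threshold:
        queue.items.append(answer)  # flag for human review and monitoring
        return False
    return True

# Usage: a low-overlap (possibly hallucinated) answer gets queued.
queue = ReviewQueue()
corpus = clean_training_text(["Loan  terms are 5 years.", "loan terms are 5 years."])
source_terms = set(" ".join(corpus).split())
ok = validate_output("The loan runs for 50 years on Mars.", source_terms, queue)
print(ok, len(queue.items))
```

A production system would replace the overlap heuristic with retrieval-grounded fact checking, but the control flow — clean the data, score the output, escalate low-confidence results to people, and monitor the queue over time — mirrors the steps described above.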
Navigating LLM hallucinations is crucial to ensuring the responsible and effective use of GenAI. By actively addressing these challenges, organisations can improve the quality, accuracy, and ethical soundness of generated content, fostering trust and enabling the broader adoption of GenAI technologies.
What caveats or considerations should businesses keep in mind when embracing Generative AI?
The current ubiquity of ChatGPT makes GenAI seem like a ready-set-go technology. The reality is far more nuanced. GenAI is an emerging technology that requires business leaders to proceed with an abundance of caution. Not only are technologists still learning about it, but many practical and ethical issues also remain unresolved.
One among them is data ownership. While ChatGPT’s provider has denied storing data, many enterprises remain apprehensive about how their data is used. To ensure that sensitive data is not leaked, more than a few enterprises have blocked access to ChatGPT altogether.
The second issue is accuracy of results. Like people, GenAI can offer inaccurate information in response to users’ questions. Worse, there is no built-in mechanism to signal inaccuracies to the user or to challenge the result being provided.
The third issue involves the tagging of inappropriate content. Current models’ built-in filters are ineffective at catching inappropriate content, such as profanity.
Then there is the issue of systemic biases. GenAI systems draw from massive amounts of data that can include unwanted biases. For organisations, a key task is adapting the technology to incorporate their ethics and values.
Another key issue is that of intellectual property claims and ownership. For results generated from public data, plagiarism is a potential risk that can lead to questions of legal claims and ownership.
How can organisations identify the customer touchpoints or use cases that would benefit the most from Generative AI?
Identifying the correct use case or customer touchpoint is critical for any AI project, not just GenAI. However, it is especially important here because the build cost is remarkably high. To identify the customer touchpoints or use cases that would benefit the most, we need a quantification framework to define and measure value.
Organisations must begin by formulating the business problem with objectives and measurable business outcomes. They should then use an LLM cost estimator to estimate the cost of build and maintenance, weighing it against the expected outcomes identified in the previous step.
If the ROI makes sense, organisations must evaluate the feasibility and potential impact of applying GenAI to the identified touchpoints or use cases. They must prioritise the touchpoints or use cases based on their strategic importance, potential impact, and feasibility. It is advisable to start with a minimum viable product (MVP) to validate the effectiveness of GenAI in addressing the identified challenges or opportunities. They should then measure and gather feedback to refine and scale the solution.
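A back-of-the-envelope version of the cost-versus-outcome check described above might look like the following sketch. All prices, volumes, and the expected-value figure are invented placeholders, not real vendor rates:

```python
def estimate_monthly_llm_cost(requests_per_month, avg_input_tokens, avg_output_tokens,
                              price_in_per_1k, price_out_per_1k, infra_fixed=0.0):
    """Rough monthly run cost: token usage at assumed per-1k-token prices
    plus a fixed infrastructure/maintenance line item."""
    token_cost = requests_per_month * (
        avg_input_tokens / 1000 * price_in_per_1k +
        avg_output_tokens / 1000 * price_out_per_1k
    )
    return token_cost + infra_fixed

def simple_roi(expected_monthly_value, monthly_cost):
    """ROI as (value - cost) / cost; positive means the use case may pay off."""
    return (expected_monthly_value - monthly_cost) / monthly_cost

# Usage with illustrative, assumed numbers (not real pricing):
cost = estimate_monthly_llm_cost(
    requests_per_month=100_000, avg_input_tokens=500, avg_output_tokens=200,
    price_in_per_1k=0.001, price_out_per_1k=0.002, infra_fixed=2_000,
)
print(round(cost, 2), round(simple_roi(10_000, cost), 2))
```

Even this crude model makes the point in the interview concrete: the estimate feeds the go/no-go decision before any MVP is built, and the same numbers become the baseline against which the MVP's measured outcomes are compared.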