By Vivek Sonny Abraham, Senior Director, Policy – India & South Asia, Salesforce
Technology serves as a yardstick for societal progress, reflecting our ability to enhance the quality of life and elevate the human experience by applying and democratizing technological advances.
We are on the brink of the Fifth Industrial Revolution, driven by significant advancements in artificial intelligence (AI). AI has revolutionized our ability to perform tasks with increased speed and efficiency. While there is immense potential for positive impact, we must also acknowledge the challenges associated with AI. Instances of bias in voice recognition software favoring male voices and discriminatory policing reinforced by crime-prediction tools serve as reminders of the problems AI can present.
Lacking a solid ethical foundation, AI advancements can lead to undesired consequences and perpetuate biases. To establish reliable AI systems, organizations must prioritize accountability, transparency, and fairness.
When considering the positive contributions of AI to society, there are three key pillars that form the foundation for building trusted AI within any organization:
Cultivate an Ethics-By-Design Mindset
Within the business context, integrating ethics into AI involves fostering and maintaining a culture of critical thinking among employees. It is unrealistic to expect a single group to identify every ethical risk throughout the development process. Instead, ethics by design requires a collaborative approach that draws on diverse perspectives from different cultures, ethnicities, genders, and areas of expertise to address and mitigate the ethical concerns associated with AI.
A “consequence scanning” framework, which involves anticipating unintended outcomes of new features and finding ways to mitigate harm, has dual benefits. It not only enhances the final product but also stimulates creative thinking among teams during the development phase. By considering the potential impact on various stakeholders, as well as anticipating other problems, this framework promotes a holistic approach to product development.
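In practice, such a scan can be kept as simple as a shared record of intended outcomes, unintended outcomes, and planned mitigations for each new feature. The sketch below is purely illustrative; the class and field names are our own, not part of any formal consequence-scanning standard:

```python
from dataclasses import dataclass, field

@dataclass
class ConsequenceScan:
    """One consequence-scanning session for a new feature (illustrative)."""
    feature: str                                     # the capability under review
    intended: list = field(default_factory=list)     # outcomes the team wants
    unintended: list = field(default_factory=list)   # outcomes nobody planned for
    mitigations: dict = field(default_factory=dict)  # unintended outcome -> response

    def unmitigated(self):
        """Unintended consequences that still lack a mitigation."""
        return [u for u in self.unintended if u not in self.mitigations]

# Hypothetical example, echoing the voice-recognition bias mentioned above.
scan = ConsequenceScan(
    feature="voice-activated assistant",
    intended=["hands-free access"],
    unintended=["lower accuracy for higher-pitched voices"],
)
scan.mitigations["lower accuracy for higher-pitched voices"] = "broaden training data"
print(scan.unmitigated())  # empty once every identified risk has a response
```

Keeping the record explicit gives every stakeholder the same view of which risks have been raised and which still lack an owner.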
Creating an environment that embraces input from a broader audience can help organizations eliminate the blind spots where bias takes root. By offering training programs that put ethics at the core of each workflow, organizations can also empower their workforce to identify potential risks. An effective approach is to provide comprehensive training to new employees from the beginning, enabling them to grasp their responsibilities in the development process and cultivate an ethics-by-design mindset.
Apply Best Practice Through Transparency
Transparency plays a crucial role in capturing diversified perspectives and avoiding unintended consequences. Collaborating with external experts, including academics, industry leaders, and government representatives, can provide valuable feedback and insights.
Sharing information about data quality, bias mitigation efforts, and the development process with the relevant audiences fosters trust. Publishing model cards, which work like nutrition labels, helps users understand an AI system’s intended use, performance metrics, and ethical considerations.
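As a rough illustration, a model card can be as simple as structured data that travels with the model. Every name and value below is hypothetical, and real model cards typically carry far more detail:

```python
# A minimal, illustrative model card; the model and its metrics are invented.
model_card = {
    "model_name": "lead-scoring-v2",
    "intended_use": "rank inbound sales leads; not for credit or hiring decisions",
    "performance": {"auc": 0.87, "evaluated_on": "held-out 2023 sample"},
    "ethical_considerations": [
        "scores may reflect historical sales bias",
        "sensitive fields excluded from training data",
    ],
    "limitations": "trained on one region's data; accuracy elsewhere unverified",
}

def summarize(card: dict) -> str:
    """Render the short 'nutrition label' line a non-technical user would read."""
    return f"{card['model_name']}: {card['intended_use']} (AUC {card['performance']['auc']})"

print(summarize(model_card))
```

Keeping the card machine-readable makes it easy to surface the same facts in documentation, product UIs, and audits.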
To establish trust in AI, the intended audience must be able to comprehend the rationale behind AI’s recommendations or predictions. Different users engage with AI technologies with varying levels of knowledge and expertise. Data scientists, for instance, may need access to every factor a model uses, while sales representatives without a background in data science or statistics may find that level of detail overwhelming. To instill confidence and prevent confusion, teams must be able to communicate these concepts and explanations in a manner suited to the understanding and needs of each audience.
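One way to picture audience-aware explanation: the same (hypothetical) feature weights are rendered in full for a data scientist and as a one-line plain-language summary for everyone else. The weights and audience labels below are invented for illustration:

```python
# Illustrative feature weights from a hypothetical model; in practice these
# would come from the model itself or an explanation tool.
weights = {"past_purchases": 0.42, "email_opens": 0.31,
           "region": 0.15, "account_age": 0.12}

def explain(weights: dict, audience: str) -> str:
    """Tailor the same underlying explanation to different audiences."""
    ranked = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)
    if audience == "data_scientist":
        # Full factor list with weights, for expert inspection.
        return ", ".join(f"{name}={w:.2f}" for name, w in ranked)
    # Plain-language summary of the top factor for non-technical users.
    top, _ = ranked[0]
    return f"This recommendation is driven mainly by {top.replace('_', ' ')}."

print(explain(weights, "sales_rep"))
print(explain(weights, "data_scientist"))
```

The design point is that the explanation, not the model, changes per audience: one source of truth, several presentations.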
Empower Customers to Make Ethical Choices
Ethics doesn’t stop at development. Developers serve as providers of AI platforms, but AI users ultimately own and bear responsibility for their data. Although developers can offer training and resources to help customers identify bias and minimize harm, inadequately trained or unmonitored algorithms can still perpetuate harmful stereotypes. Organizations must therefore give customers and users the right tools to use these technologies safely and responsibly, and to identify and address problems.
Establishing trusted AI systems requires a comprehensive approach that encompasses an ethics-by-design mindset, transparency throughout the development process, and empowering customers to make ethical choices. By adhering to these three pillars, organizations can foster a culture of accountability, transparency, and fairness in AI development.
By providing proper guidance and training, organizations can also help customers understand the repercussions of including or excluding sensitive fields and “proxies” from their AI models. Embracing these principles will democratize the benefits of AI and ensure its positive contribution to society. As we navigate the future of AI, prioritizing ethics will pave the way for responsible technological progress and a more inclusive human experience.
Embedding values in AI is a complex and multifaceted endeavor. It involves a transformation in culture, the refinement of processes, enhanced engagement with employees, customers, and stakeholders, and empowering users with the necessary tools and understanding to wield technology responsibly. By collectively advancing these three fundamental pillars, we can ensure that AI is developed and implemented with accountability and transparency, thereby democratizing the advantages of AI for the broader society.