You know AI has officially gone mainstream when it becomes a hot topic in government circles. To cite a few instances: the Central Board of Secondary Education (CBSE) has partnered with Microsoft to enable tech skills development, the Indian army is seeking intuitive AI assistance in its weapon systems and surveillance programmes, and more recently, the Housing Ministry has commissioned AI algorithms to reduce instances of bookkeeping corruption. But the most prominent development has been NITI Aayog's proposed Rs 7,500-crore plan to create an institutional framework for AI in the country.
And while the public sector is poised to play a crucial role in shaping India's future, the growing adoption of AI in the corporate world is calling on experts to create responsible AI technologies.
As can be seen, AI is demonstrating enormous potential to create value and help build a better future for global citizens.
But as with other technologies, AI has revealed a darker side: an inadvertent detrimental impact on people, communities, companies and society at large. Algorithms shaped by biased data have produced adverse outcomes in some cases, which calls for deeper introspection and a closer look at the ethical aspects of employing AI in society.
AI solutions must be unbiased, dependable, comprehensive, transparent and accountable. For this to happen, they must be designed with humans at the centre.
Responsible AI solutions must be used to improve and amplify human potential.
And this is where human-centred design for AI comes into the picture. Through human-centred design, industries can examine crucial ethical challenges, gauge the effectiveness of policies and programmes, and develop a set of value-driven specifications for AI solutions.
Integrating human-centred design
Integrating a human-centred design process infuses humanity and empathy into an AI solution. Good AI design must be born out of empathy: an intrinsic understanding of customer needs.
Why is empathy important?
Empathy is vital to human-centred design. Within design thinking, empathy helps design thinkers gain insight into the needs of their users.
Take, for instance, Google's first wearable product, Google Glass. Launched in 2013 amidst much fanfare, the head-mounted device was technologically advanced but failed due to a lack of empathy towards its consumers: while it could perform many tasks, it did not fulfil the actual needs of its users.
On the other hand, a team of postgraduate students at Stanford was assigned to build a new type of incubator for developing nations. By meeting mothers in remote villages and rural areas, they transformed the concept of a mere incubator into a warming device, which they christened the Embrace Warmer. The new creation was an immediate winner thanks to its portability and negligible production costs. Through empathy, the design team built a successful human-centred solution for impoverished mothers in developing countries.
Human-centred design is about channelling empathy for the benefit of your users. It means placing people at the heart of your solution and arriving at an accepted solution through research and collaborative problem solving.
The journey so far
For over two decades, researchers have put forward various frameworks and proposals for designing powerful human interactions with AI-embedded systems.
One recommendation advised building in precautions, such as verification steps or regulated levels of autonomy, to guard against undesired actions by intelligent systems. Another proposed managing expectations so as not to mislead or disappoint users interacting with unpredictable adaptive agents.
Today, a growing body of researchers is exploring how to improve transparency, build trust with users and interpret the behaviour of AI systems, while also automatically adapting or personalising interfaces for diverse scenarios.
Others are studying how to design for specific human-AI interaction scenarios and how users can connect effectively with intelligent agents. Here, intense research and progress in natural-language processing and embedded devices is driving the expansion of conversational agents.
Researchers are also delving into human interactions with context-aware sensing systems: how to design for intelligibility and control of the underlying sensors, and how to resolve ambiguities.
For instance, sensing technologies and the wide availability of commercial fitness and activity trackers are guiding interaction research that can be applied in other settings too.
Human-Centred AI Design guidelines
Because AI-powered systems are dynamic, continuous user feedback and empathy can help deliver personalised content and improve the user experience.
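As a rough illustration of the feedback loop described above, here is a toy preference learner in Python. The class name, scoring rule and topic labels are hypothetical assumptions for this sketch, not taken from any particular product; a real system would weight, decay and audit such signals far more carefully.

```python
from collections import defaultdict

class FeedbackPersonaliser:
    """Toy sketch: rank content topics by accumulated user feedback."""

    def __init__(self):
        # topic -> learned preference score, defaulting to neutral (0.0)
        self.scores = defaultdict(float)

    def record_feedback(self, topic, liked):
        # Simple additive update; a production system would decay old signals.
        self.scores[topic] += 1.0 if liked else -1.0

    def rank(self, topics):
        # Order candidate topics by learned preference, best first.
        return sorted(topics, key=lambda t: self.scores[t], reverse=True)

p = FeedbackPersonaliser()
p.record_feedback("cricket", liked=True)
p.record_feedback("cricket", liked=True)
p.record_feedback("politics", liked=False)
print(p.rank(["politics", "cricket", "weather"]))  # ['cricket', 'weather', 'politics']
```

The point of the sketch is the loop itself: every interaction feeds back into the model, so the experience keeps adapting to the user rather than staying static.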
When designing a responsible AI solution in a human-centred manner, here are some crucial points to consider:
- The users
- Their values
- The problem to be solved
- How the solution answers that problem
- When the experience is complete
To address the above, human-centred AI design guidelines come to the fore. These are:
- Make clear what the solution can do. Help the user understand what the AI system is capable of.
- Make clear how well the system can do what it does. Help the user understand that the AI system can make mistakes.
- Time services based on context. Decide when to act or interrupt based on the user's current task and surroundings.
- Show contextually relevant information. Display information relevant to the user's current task and surroundings.
- Match relevant social norms. Ensure the experience is delivered in a way users would expect, given their social and cultural context.
- Support efficient invocation. Make it easy to request the AI solution's services when they are needed.
- Support efficient correction. Make it easy to edit, refine or recover when the AI solution is wrong.
- Support efficient dismissal. Design so that unwanted AI services are easy to dismiss or ignore.
- Scope services when in doubt. Engage in disambiguation or gracefully degrade the AI solution's services when uncertain about a user's goals.
- Make clear why the system did what it did. Give the user access to an explanation of why the AI solution behaved as it did.
- Remember recent interactions. Maintain short-term memory and allow the user to make efficient references to that memory.
- Learn from user behaviour. Personalise the user's experience by learning from their actions over time.
- Update and adapt cautiously. Limit disruptive changes when updating and adapting the AI solution's behaviours.
- Encourage granular feedback. Enable the user to provide feedback indicating their preferences during regular interaction with the AI solution.
- Convey the consequences of user actions. Immediately communicate how user actions will affect the AI solution's future behaviour.
- Provide global controls. Allow the user to globally customise what the AI solution monitors and how it behaves.
- Notify users about changes. Inform the user when the AI solution adds or updates its capabilities.
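A few of these guidelines, in particular scoping services when in doubt, engaging in disambiguation, and degrading gracefully, can be sketched in a few lines of Python. The intent names and confidence thresholds below are hypothetical, chosen only to illustrate the pattern:

```python
def respond(intent, confidence, act_threshold=0.75, clarify_threshold=0.4):
    """Scope services when in doubt: act only on high-confidence intents,
    ask a clarifying question at middling confidence, defer otherwise."""
    if confidence >= act_threshold:
        return f"ACT:{intent}"
    if confidence >= clarify_threshold:
        return f"CLARIFY:Did you mean '{intent}'?"
    # Gracefully reduce the service rather than guess at the user's goal.
    return "DEFER"

print(respond("set_alarm", 0.9))   # high confidence: act
print(respond("set_alarm", 0.5))   # uncertain: ask for clarification
print(respond("set_alarm", 0.1))   # very unsure: defer
```

The design choice worth noting is the middle branch: instead of a binary act-or-fail, the solution asks the user to resolve the ambiguity, which keeps the human in control of the interaction.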
The way forward
Experts are paving the way with new guidance, fresh learning and advice for creating better products with AI. A collaborative AI solution built with human-centred design can lead to new products, enhanced services and, more importantly, niches accurately aligned with specific consumer requirements and demands.
Rather than creating patchwork solutions that mask or ease symptoms only briefly, companies have the power to effect radical change and create a positive impact on society, all through human-centred design. Industries and organisations can develop new markets and move whole communities closer to better standards of living.
And for an AI solution to be productive, successful and responsible, it must work for its users, not compel them to work around it. There is promising progress around the corner, but still a long way to go and much to learn. Every company tasked with developing a responsible AI system has an opportunity and an obligation to build a 'user-first' AI solution. And for that to succeed, human-centred design must be at its very core.
Authored by Debprotim Roy, Founder and CEO, Canvs.in