AI may have found its way into many aspects of a person's life, but the debate over AI bias is far from settled. Alongside concerns about gender and racial bias, another incident has sparked controversy: researchers at Harrisburg University claim to have developed facial recognition software that detects whether a person is a criminal.
Based on nothing more than a photograph, the software purportedly determines whether a person is a criminal. The researchers cite helping police and law enforcement officials prevent crime as the motivation for the work, and they have also claimed that their AI exhibits no racial bias.
However, an open letter organised by the Coalition for Critical Technology has refuted these claims, arguing that they rest on unsound scientific premises. The signatories have demanded that the research paper not be published anywhere.
The researchers have continued to maintain that the software would eliminate some biases, but the university has since taken down its press release.
According to a report by the New York Times, a Black man was falsely arrested because of an error in the algorithm of a facial recognition system. There have been many documented cases of bias in facial recognition software, particularly affecting women and people from diverse backgrounds.
Linking appearance with criminality
The world has evolved considerably in terms of intelligence, emotional intelligence and behavioural standards. The past decade has seen a consistent push for society to let go of prejudicial and stereotypical ideas in favour of a more accepting stance. The battle for gender and racial equality is ongoing, and there is hope we may get there someday.
Judging people based on their appearance is an outdated concept, yet shockingly it persists. Even as we fight against gender and racial bias, people continue to be judged on their looks and wrongly categorised because of them.
This research paper has reignited that debate, because once again individuals would be judged on their appearance to decide whether they are criminals. If an AI is fed data suggesting that a long beard, facial scars or a particular gait marks someone as a criminal, it would amount to a terrible generalisation. Fashion, too, would take a massive hit if certain clothes and accessories came to be treated as what a potential criminal would wear.
The adage 'Don't judge a book by its cover' rings true in this scenario. However, if more data on a person's background were considered alongside appearance, the paper might stand a chance.
Feeding inefficient datasets into AI algorithms
Technology is merely a reflection of what people want and how they want it. When using AI, one must remember that it is artificial at the end of the day. Its datasets are prepared by humans, and so rooting out the bias that exists in human minds is essential if an AI system is to operate without bias.
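To see how human bias in a dataset passes straight through to a model, consider this minimal sketch. The data, feature names and labels here are entirely hypothetical, invented for illustration: an irrelevant appearance trait ("beard") happens to be correlated with the "criminal" label in the human-assigned training data, and even a trivially simple classifier learns that spurious association.

```python
from collections import defaultdict

# Hypothetical toy dataset of (appearance_feature, human_assigned_label) pairs.
# The labels were assigned by people, so a spurious correlation between the
# "beard" feature and the "criminal" label is baked in at collection time.
training_data = [
    ("beard", "criminal"), ("beard", "criminal"), ("beard", "not_criminal"),
    ("no_beard", "not_criminal"), ("no_beard", "not_criminal"),
    ("no_beard", "criminal"),
]

def train_majority_classifier(data):
    """For each feature value, predict the label seen most often in training."""
    counts = defaultdict(lambda: defaultdict(int))
    for feature, label in data:
        counts[feature][label] += 1
    return {f: max(labels, key=labels.get) for f, labels in counts.items()}

model = train_majority_classifier(training_data)
# The model simply reproduces the bias present in the human labelling:
# anyone with a beard is now flagged as a criminal.
print(model["beard"], model["no_beard"])
```

The point is not the classifier's simplicity: a far more sophisticated model trained on the same skewed labels would internalise the same prejudice, only less transparently.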