
When most people think of AI, they imagine a powerful tool set to transform the future of humanity for the better. While that promise is real, AI has an ugly side that could overshadow the beautiful one if left unaddressed. The problem is not the technology itself but how it is designed and used. “In this bleak depiction of our future, decades of fights for civil rights and equality have been unwritten in a few lines of code,” lamented Miriam Vogel, the Executive Director of EqualAI. The problem, however, goes beyond the algorithms. It is something much bigger: the social challenges that come with the technology.
In January 2020, Robert Julian-Borchak Williams, a Black American, was wrongfully arrested on a felony larceny charge after facial recognition software falsely matched his face to an image from surveillance footage. Robert spent the night in detention, lost his freedom, was humiliated, and lost an irreplaceable part of his life, all because he was misidentified by software that, in effect, thought all Black men looked alike. We assume AI is smart and intelligent, but in many cases it misses basic context and fails to comprehend very simple information even while handling complex tasks accurately. Robert is just one of many people who have been treated unfairly as a result of bias in AI.
In Detroit, facial recognition software fails to correctly identify people “96% of the time”, according to the city’s police chief, a figure that gives a sense of the scale of the problem. The social problem of AI goes beyond the algorithms and can be traced to a host of factors, including the dataset used to train the system. An AI trained strictly to identify images of cats and dogs cannot identify any animal outside these two. In the same way, a model trained on a skewed, non-diverse dataset, one with very few images of people of a particular race or gender, will perform poorly on faces from that group. The reality is that these algorithms subtly reflect the inequalities in our society.
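To see the mechanism concretely, here is a minimal sketch in Python (using NumPy and scikit-learn) of how a skewed training set produces unequal error rates. The data is synthetic and the two “groups” are hypothetical; the point is the mechanism, not any real system.

```python
# A minimal sketch, assuming synthetic data and hypothetical groups:
# a model trained on a skewed sample performs worse on the
# under-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """n samples for one group; `shift` moves the group's feature
    distribution so one linear boundary cannot fit both groups."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Skewed training set: 95% group A, only 5% group B.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on balanced held-out sets, one per group.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(1000, shift)
    print(f"{name} accuracy: {model.score(X_test, y_test):.2f}")
```

The model scores far better on the over-represented group, not because the algorithm is malicious, but simply because it was shown too few examples of the other group to learn a boundary that fits them.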
An MIT study has shown that commercial facial recognition software from major technology companies exhibits significant gender and skin-type bias, with error rates as high as 34.7% for darker-skinned women, compared with a negligible 0.8% for light-skinned men. The harm caused by such computer vision systems is becoming increasingly alarming and has pushed major technology companies like Amazon, IBM and Microsoft to halt the commercialization of their facial recognition software. Portland has gone further and banned the use of facial recognition technology in the city outright.
The social dilemma
The social dilemma of AI goes beyond images; it is an all-encompassing problem. This year, Netflix released a documentary titled The Social Dilemma. Featuring prominent figures from Silicon Valley, it set out to expose how social media applications exploit human emotions and psychology. On the surface, these applications appear to pose no threat, and that appearance is itself a cause for serious concern. Social media is built on the attention economy: advertisers pay the giant tech companies to mine the attention of users. If you are not paying for the product, then you are the product. This manipulation of human fears and weaknesses is powered by highly intelligent AI systems that understand the behavioural patterns of users better than the users themselves. Now, more than ever, the world needs to share the concern Yuval Noah Harari expresses in the concluding words of his book, Homo Deus: “What will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves?”
Suppose a machine learning algorithm is trained to determine eligibility for loans or other financial services, using a dataset that is significantly biased against a particular gender or class of people. The resulting system can only be expected to be biased in deciding who gets access to those services. Machine learning models make their predictions from many features rather than a single one, but underlying issues in how those features are constructed can still amplify biased outcomes. Imagine the divisions these biased systems could create in our world.
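As a hedged illustration, the sketch below (Python with scikit-learn; every name, feature, and the bias mechanism itself are hypothetical) shows how historical bias can survive even when the sensitive attribute is excluded from the inputs, leaking in through a correlated proxy feature:

```python
# A minimal sketch of historical bias carrying over into a
# loan-approval model. All data is synthetic; only the mechanism
# matters, not any real lender's system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 10_000

income = rng.normal(50, 15, n)      # identical distribution for both groups
group = rng.integers(0, 2, n)       # 0 = group A, 1 = group B

# Past approvals were driven by income, but also penalized group B.
# The model never sees `group` directly, yet a correlated proxy
# (think postcode) leaks it into the training data.
postcode = group + rng.normal(0, 0.3, n)
historical_approved = (income + rng.normal(0, 5, n) - 10 * group) > 45

X = np.column_stack([income, postcode])   # `group` itself is excluded
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, historical_approved)

pred = model.predict(X)
for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"group {'AB'[g]} predicted approval rate: {rate:.2f}")
```

Even though the two groups have identical income distributions, the predicted approval rates diverge: the proxy feature lets the model reconstruct the group and reproduce the historical penalty.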
A few months ago, Carmen Arroyo asked the management company of her apartment building in Connecticut whether her son could move in with her after he survived a brutal accident. Her request was denied after an AI-powered tenant-screening background check flagged a minor shoplifting charge from her son’s past, a record that was irrelevant given his present circumstances, and the denial also appeared to violate the state’s housing laws. With AI at the core of major decision systems, laws can be violated and people can be unfairly excluded from a better life simply because a piece of software says so. Worse, where the system involves machine learning, as is usually the case today, even an AI expert may find it nearly impossible to understand how a pattern was recognized or why it was identified at all. This further exacerbates the situation: victims of AI injustice like Carmen Arroyo’s son may never be able to contest these biased decisions, because the reason behind them remains unknown.
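The opacity problem can itself be sketched in a few lines. In the hypothetical example below (again synthetic data, not any real screening product), a trained ensemble delivers a verdict spread across thousands of decision nodes, so the most an auditor can do is probe it from the outside:

```python
# A minimal sketch of why a machine-learned decision is hard to
# contest: the "reasoning" is spread across thousands of branches.
# Data and features are synthetic and hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(5000, 10))   # 10 anonymous applicant features
y = (X @ rng.normal(size=10) + rng.normal(0, 0.5, 5000)) > 0

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# No short, citable rule exists for any single verdict.
total_nodes = sum(est.tree_.node_count for est in model.estimators_)
print(f"{len(model.estimators_)} trees, {total_nodes} decision nodes")

# External probing is the best an auditor can do: perturb one input
# and observe whether the verdict changes.
applicant = X[0].copy()
print("decision:", model.predict([applicant])[0])
applicant[3] += 1.0
print("after perturbing feature 3:", model.predict([applicant])[0])
```

There is no short rule to cite for either outcome, which is exactly what makes such decisions so difficult for their subjects to challenge.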
AI is more powerful than we can imagine, and that is a major concern, because when its applications deviate from what we intend, it can grow into a destructive beast. It can thwart our justice system. It can cripple the human experience. The bias that comes with AI is a serious concern, but it can be addressed. As Sandra Wachter, associate professor in law and AI ethics at the University of Oxford, put it: “It’s unrealistic to assume that we’ll ever have a neutral system. But with the right systems in place, we can mitigate some of the biases.”