Discriminatory AI algorithms can destroy the lives of minorities



Bad data does not just produce bad results; it can also become an instrument for oppressing vulnerable groups, such as women and minorities. That is the argument of my new book on the connection between artificial intelligence (AI) and various forms of racism and gender discrimination. The problem is serious. AI systems are usually trained on data taken from the Internet to improve their performance at tasks such as screening job applications, and that training data carries the many kinds of discrimination found in the real world. An algorithm may learn from its training data, for example, that most employees in a particular field are men, and therefore come to prefer male applicants for jobs in that field.
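To make the mechanism concrete, here is a minimal sketch of the dynamic described above. It is not any real screening system; all names and numbers are invented. A naive model that scores applicants by their group's historical hire rate simply reproduces the discrimination already present in its training data.

```python
# Hypothetical sketch: how biased historical data yields a biased
# screening model. All records and numbers below are invented.

# Historical hiring records used as training data: in this field,
# past hires were overwhelmingly men -- a real-world bias, not merit.
training_data = [
    ("male", True), ("male", True), ("male", True), ("male", True),
    ("male", False),
    ("female", True), ("female", False), ("female", False),
    ("female", False), ("female", False),
]

def hire_rate(records, gender):
    """Fraction of past applicants of this gender who were hired."""
    outcomes = [hired for g, hired in records if g == gender]
    return sum(outcomes) / len(outcomes)

# A naive model that scores new applicants by the historical hire
# rate of their group encodes the past bias as a "prediction".
scores = {g: hire_rate(training_data, g) for g in ("male", "female")}
print(scores)  # men score higher purely because of who was hired before
```

The point of the sketch is that no one wrote "prefer men" anywhere; the preference emerges entirely from what the historical data records.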

Given the history of societies around the world in which racism has shaped social and political systems that privilege white men, for example in Europe, North America and Australia, it is easy to accept that remnants of racial discrimination may be embedded in our technology as well. In the research for my book I documented some prime examples. I found that 'facial recognition' software used in criminal justice systems operates on the assumption that Black and Asian minorities commit crime at higher rates than whites. As a result, Black and Asian people in the US and other countries often face arrest on false charges. Wrong decisions about health care have also come to light.

One study found that an algorithm used to manage health care in the US assigned the same risk scores to Black and White patients even when the Black patients were considerably sicker. Because less money is spent on treating Black patients with the same health and care needs as White patients, the algorithm erroneously concluded that Black patients were healthier than equally sick White patients. This cut the number of Black patients flagged for additional care by more than half.

Do machines not lie?

Such oppressive algorithms pervade almost every area of our lives.
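The failure described above is a proxy problem: the model predicts past spending, not sickness. The sketch below is hypothetical, with invented numbers, but it shows how a spending-based score ranks two equally sick patients differently.

```python
# Hypothetical sketch of the proxy problem: a "health risk" model
# that uses past health-care spending as a stand-in for illness.
# All figures below are invented for illustration.

def risk_score(annual_spending_usd):
    """Toy model: predicted risk scales with past spending alone."""
    return annual_spending_usd / 1000.0

# Two equally sick patients (same number of chronic conditions).
# Historically less money was spent on the Black patient's care,
# so the spending-based model ranks that patient as lower risk.
white_patient = {"chronic_conditions": 4, "annual_spending_usd": 8000}
black_patient = {"chronic_conditions": 4, "annual_spending_usd": 5000}

print(risk_score(white_patient["annual_spending_usd"]))  # 8.0
print(risk_score(black_patient["annual_spending_usd"]))  # 5.0
# Equal sickness, unequal scores: the proxy, not health, drives the gap.
```

If care is then allocated by score, the patient with the lower score, despite identical needs, is less likely to be flagged for additional help, which is exactly the effect the study reported.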

AI is making matters worse, because it is sold to us as essentially fair and objective. We are told that machines never lie, and so, it is argued, no one can be blamed. This pseudo-objectivity is at the heart of the tall claims Silicon Valley giants make about AI. It can be heard in the speeches of Elon Musk, Mark Zuckerberg and Bill Gates, even when they warn us about the very projects for which they themselves are responsible. Many legal and ethical issues remain unresolved. Who, for example, is responsible for mistakes? Can a person denied parole by an algorithm on the basis of his ethnic background claim damages, as he might for a toaster that exploded in his kitchen?

The opaque nature of AI technology poses serious challenges to legal systems built around individual or human accountability. Basic human rights are at risk, because technology's web blurs legal accountability for various forms of discrimination and makes it easy to blame the machine.

Moral and legal vacuum

In a world where it is increasingly difficult to distinguish truth from falsehood, our privacy needs legal protection. The right to privacy, and with it ownership of the data generated by our virtual and real lives, needs to be codified as a human right, not least so that we can seize the real opportunities good AI software offers for human security.

But the technology world is far ahead of us. Technology has overtaken the law, and criminals easily exploit the moral and legal vacuum this creates, because the brave new world of AI remains largely lawless. By turning a blind eye to the mistakes of the past, we have entered an anarchic era in which no authority or mechanism checks the violence of the digital world as it spills into everyday life. The time has come to address these moral, political and social issues through a concerted social movement in support of legislation. The first step is to make ourselves aware of what is happening right now, because our lives will never be the same again.

Disclaimer: IndiaTheNews has not edited this news. This news has been published from PTI-language feed.


