Why trusting AI blindly can be dangerous


Flawed data not only produces flawed results; it can also become an instrument of oppression against vulnerable sections of society, such as women and minorities. This is what my new book on the relationship between various forms of racism, gender discrimination, and artificial intelligence (AI) argues. The problem is very serious. AI systems are typically trained on data taken from the Internet to improve their performance at tasks such as screening job applications. But that training data carries the many forms of discrimination seen in the real world. For example, from its training data an algorithm may learn that most employees in a particular field are men, and therefore favour male applicants for jobs in that field.
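The mechanism described above can be sketched in a few lines. This is a toy illustration with entirely hypothetical data, not any real hiring system: a naive model that scores applicants by historical hire rates simply reproduces the skew in its training records.

```python
# Hypothetical past hiring records: (gender, hired).
# These encode past human decisions, not actual merit.
history = [
    ("M", True), ("M", True), ("M", True),
    ("F", False), ("F", True), ("F", False),
]

def hire_rate(records, gender):
    """Fraction of applicants of a given gender who were hired historically."""
    outcomes = [hired for g, hired in records if g == gender]
    return sum(outcomes) / len(outcomes)

def naive_score(gender):
    """A naive 'hireability' score learned purely from historical base rates."""
    return hire_rate(history, gender)

# The model ranks male applicants higher only because past hires skewed male:
# the bias is inherited from the data, not discovered as a fact about people.
assert naive_score("M") > naive_score("F")
```

A real system would use many more features, but the failure mode is the same: any feature correlated with gender (or race) lets the model reconstruct and repeat the historical pattern.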

Black and Asian minorities often face wrongful arrest

Looking at the history of societies where racism has played a central role in establishing social and political systems, for example in Europe, North America, and Australia, where white men were privileged, it is easy to accept that remnants of racial discrimination may be embedded in our technology as well. In the research for the book I documented some prime examples. I found that the facial recognition software used in the criminal justice system operates on the assumption that Black and Asian minorities have higher crime rates than white people. As a result, Black and Asian people in America and other countries often face wrongful arrest.

Flawed decisions in health care, too

Wrong decisions about health care have also come to light. One study found that an algorithm used to manage health care in the US assigned equal risk scores to white and Black patients even when the Black patients were considerably sicker. This cut the number of Black patients flagged for additional care by more than half. Because less money is spent on treating Black patients than on white patients with similar health and care needs, the algorithm wrongly concluded that Black patients were healthier than equally sick white patients.
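The flaw described above is a proxy problem, and it can be shown with a minimal sketch. The numbers and threshold here are invented for illustration; the point is only the design choice: when a model uses past *spending* as a stand-in for *health need*, patients on whom less money was historically spent receive lower risk scores despite identical illness.

```python
# Two hypothetical patients with identical health needs but unequal
# historical spending on their care.
patients = [
    {"name": "A", "true_need": 8, "past_spending": 8000},
    {"name": "B", "true_need": 8, "past_spending": 4000},  # same illness, less spent
]

def risk_score(patient):
    """Risk proxied by past spending -- the flawed design choice."""
    return patient["past_spending"] / 1000

# An illustrative cutoff for enrolling a patient in extra care.
THRESHOLD = 6

flagged = [p["name"] for p in patients if risk_score(p) >= THRESHOLD]

# Patient B has the same true need as A but half the score,
# so B is never flagged for the additional care A receives.
assert flagged == ["A"]
```

The fix reported in such studies is to predict a direct measure of health (for example, number of active conditions) rather than cost, which removes the spending gap from the score.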

Do machines not lie?

Such oppressive algorithms interfere in almost every area of our lives, and AI makes matters worse because it is sold to us as essentially unbiased. We are told that machines never lie, and so, the argument goes, no one can be blamed. This pseudo-objectivity lies at the heart of the tall claims that Silicon Valley giants make about AI. It can be seen in the speeches of Elon Musk, Mark Zuckerberg and Bill Gates, even when they occasionally warn us about the very projects they are responsible for. Many legal and ethical issues remain unresolved here. For example, who is responsible for mistakes? Can a person denied parole by an algorithm on the basis of his ethnic background claim damages, as he might for a toaster that exploded in his kitchen? The opaque nature of AI technology poses serious challenges to legal systems built around individual, human accountability. Basic human rights are at risk: because the technology entangles perpetrators with the various forms of discrimination they exploit, legal accountability blurs and blame is easily shifted onto the machine.

A moral and legal vacuum

In a world where it is extremely difficult to distinguish truth from falsehood, our privacy needs legal protection. The right to privacy, and with it ownership of the data tied to our virtual and real lives, needs to be codified as a human right, not least so that we can benefit from the real opportunities that good AI software offers for human security. But technology has outpaced the law. Criminals readily exploit the moral and legal vacuum this creates, because this brave new world of AI is largely lawless. By turning a blind eye to the mistakes of the past, we have entered an anarchic era in which no authority or mechanism checks how the violence of the digital world affects everyday life. The time has come to address these moral, political and social issues through a concerted social movement in support of the law. The first step is to make ourselves aware of what is happening right now, because our lives will never be the same again.
