
Why it can be dangerous to let an algorithm decide whether to hire you or grant you credit




In 2014, Amazon developed a recruitment artificial intelligence that learned that men were to be preferred, and began to discriminate against women. A year later, a Google Photos user discovered that the program was labeling his Black friends as gorillas. In 2018 it emerged that an algorithm analyzing the likelihood of reoffending for a million convicts in the United States performed no better than a person with no special legal or forensic knowledge. Decisions that used to be made by humans are now made by artificial intelligence systems: hiring, the granting of credit, medical diagnoses, even court rulings. But the use of such systems carries a risk, because the data the algorithms are trained on are shaped by our knowledge and our prejudices.

"The data are a reflection of reality. If reality has a bias, the data will too," Richard Benjamins, big data and artificial intelligence ambassador at Telefónica, explains to EL PAÍS. To prevent an algorithm from discriminating against certain groups, he argues, it is necessary to verify that the training data contain no bias and, when testing the algorithm, to analyze the rates of false positives and false negatives. "An algorithm that discriminates in an unintended way is much more serious in domains such as justice, lending, or admission to education than in domains such as movie recommendation or advertising," says Benjamins.
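The check Benjamins describes can be sketched in a few lines: compute the false-positive and false-negative rates of a model separately for each group and compare them. This is a minimal illustration, not any company's actual tool; the data and group names are invented.

```python
# Sketch: comparing false-positive and false-negative rates across groups
# when testing a model, as Benjamins suggests. All data here are invented
# for illustration.

def error_rates(y_true, y_pred):
    """Return (false_positive_rate, false_negative_rate) for binary labels."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

# Hypothetical test-set outcomes (1 = credit should be / was granted)
groups = {
    "group_a": ([1, 0, 1, 1, 0, 0], [1, 0, 1, 1, 0, 0]),  # model errs rarely
    "group_b": ([1, 0, 1, 1, 0, 0], [0, 0, 0, 1, 1, 0]),  # errors cluster here
}

for name, (y_true, y_pred) in groups.items():
    fpr, fnr = error_rates(y_true, y_pred)
    print(f"{name}: FPR={fpr:.2f}, FNR={fnr:.2f}")
```

If the error rates differ sharply between groups, as in `group_b` above, the model is making its mistakes disproportionately at one group's expense even when overall accuracy looks acceptable.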

Isabel Fernández, managing director of applied intelligence at Accenture, gives automated mortgage approval as an example: "Imagine that in the past most applicants were men, and the few women who were granted a mortgage had to pass criteria so demanding that all of them met their payment obligations. If we used these data as they are, the system would conclude that women today are better payers than men, which is only a reflection of a prejudice from the past."

Women, however, are in many cases harmed by these biases. "Algorithms are usually developed because mostly white men between 25 and 50 have decided so during a meeting. On that basis, it is difficult for the opinion or perception of minority groups, or of the other 50% of the population who are women, to be heard," explains Nerea Luis Mingueza, a researcher in robotics and artificial intelligence at Universidad Carlos III. She maintains that under-represented groups will always be the most affected by technological products: "For example, female or children's voices fail more often in speech-recognition systems."

"The data are a reflection of reality. If reality has a bias, the data will too"

Minorities are more likely to be affected by these biases as a matter of statistics, according to José María Lucía, partner in charge of the artificial intelligence and data analysis center at EY Wavespace: "The number of cases available for training will be smaller. In addition, any group that has suffered discrimination of any kind in the past may be susceptible, because by using historical data we may be including that bias in the training without realizing it," he explains.


This is the case for the Black population in the United States, notes Accenture senior manager Juan Alonso: "It has been shown that, faced with the same kind of offense, such as smoking a joint in public or possessing small amounts of marijuana, a white person will not be arrested but a person of color will." As a result, he argues, there is a higher percentage of Black people in the database, and an algorithm trained on this information would have a racial bias.

Google sources explained that it is essential to "be very careful" when granting an artificial intelligence system the power to make any decision on its own: "Artificial intelligence produces responses based on existing data, so humans must recognize that it does not necessarily give flawless results." For that reason, the company's bet is that in the majority of applications the final decision is made by a person.

The black box

In many cases the machines end up being a black box full of secrets, even to their own developers, who are unable to understand which path the model followed to arrive at a specific conclusion. Alonso points out that "normally, when you are judged, you are given an explanation in a ruling. But the problem is that this type of algorithm is opaque. You face a kind of oracle that is going to deliver a verdict."

"People have a right to ask how an intelligent system suggests certain decisions and not others, and companies have the duty to help people understand the decision-making process"

"Imagine that you go to an outdoor festival and, when you reach the first row, the security staff move you along without giving any explanation. You are going to feel indignant. But if they explain that the first row is reserved for people in wheelchairs, you will step back without getting angry. The same thing happens with these algorithms: if we do not know what is going on, it can produce a feeling of dissatisfaction," explains Alonso.

To resolve this dilemma, artificial intelligence researchers call for transparency and for explainable training models. Large technology companies such as Microsoft defend a set of principles for the responsible use of artificial intelligence and are driving initiatives that try to open the black box of algorithms and explain the why of their decisions.

Telefónica is hosting a challenge within LUCA, its data unit, with the aim of creating new tools to detect unwanted bias in data. Accenture has developed its AI Fairness tool, and IBM has also developed its own tool that detects bias and explains how an artificial intelligence makes certain decisions. For Francesca Rossi, IBM's director of artificial intelligence ethics, the key is for artificial intelligence systems to be transparent and reliable: "People have a right to ask how an intelligent system suggests certain decisions and not others, and companies have the duty to help people understand the decision-making process."
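One simple check that bias-detection tools of this kind commonly perform is the "disparate impact" ratio: how often a favorable outcome (a granted loan, a hired candidate) goes to a protected group compared with everyone else. The sketch below is an illustration of that general idea, not the actual implementation of any of the tools named above; the 0.8 threshold is the conventional "four-fifths rule", and the data are invented.

```python
# Sketch of a disparate-impact check on a set of binary decisions.
# All data below are invented for illustration.

def disparate_impact(outcomes, is_protected):
    """Ratio of favorable-outcome rates: protected group vs. the rest."""
    prot = [o for o, g in zip(outcomes, is_protected) if g]
    rest = [o for o, g in zip(outcomes, is_protected) if not g]
    return (sum(prot) / len(prot)) / (sum(rest) / len(rest))

# 1 = favorable decision; True = applicant belongs to the protected group
outcomes     = [1, 0, 0, 0, 1, 1, 1, 0]
is_protected = [True, True, True, True, False, False, False, False]

ratio = disparate_impact(outcomes, is_protected)
print(f"disparate impact ratio: {ratio:.2f}")
print("possible bias" if ratio < 0.8 else "passes four-fifths rule")
```

Here the protected group receives the favorable outcome 25% of the time versus 75% for the rest, a ratio of about 0.33, well below the 0.8 threshold, so the check flags possible bias.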
