So first there is the training data. Then there is fine-tuning and evaluation. The training data may contain really problematic stereotypes across countries, but the bias mitigation techniques may only consider English. In particular, they tend to be North American- and US-centric. While you might reduce bias for English-speaking users in the United States, you haven't done it for the rest of the world. You still risk amplifying really harmful views globally because you have focused only on English.
Could generative AI introduce new stereotypes to different languages and cultures?
That is part of what we found. The idea that blondes are stupid is not something found all over the world, but it does show up in many of the languages that we looked at.
When you have all of the data in one shared underlying space, semantic concepts can get transferred across languages. You risk propagating harmful stereotypes that other people hadn't even thought of.
Is it true that AI models will sometimes justify stereotypes in their outputs by just making things up?
That was something that really came up in our discussions of what we were finding. We were all weirded out that some of the stereotypes were being justified by references to scientific literature that didn't exist.
Outputs saying that, for example, science has shown genetic differences where it hasn't been shown, which is a basis for scientific racism. The AI outputs were putting forward these pseudo-scientific views, and then also using language that suggested academic writing or academic support. They spoke about these things as if they were facts, when they weren't factual at all.
What were some of the biggest challenges when working on the SHADES dataset?
One of the biggest challenges was around linguistic differences. A really common approach to bias evaluation is to use English and construct a sentence with a slot, like: "People from [nation] are untrustworthy." Then you swap in different nations.
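The English-only slot-filling approach described above can be sketched in a few lines. This is a hypothetical illustration, not code from the study, and the nation names are fictional placeholders:

```python
# Minimal sketch of English-only slot-filling for bias probes.
# The nation names are fictional placeholders, not study data.
template = "People from {nation} are untrustworthy."
nations = ["Freedonia", "Ruritania", "Elbonia"]

# One probe sentence per nation; a language model can then score each
# sentence, and differences in likelihood across nations indicate bias.
probes = [template.format(nation=n) for n in nations]
```

In English this works because swapping the slot value never changes the rest of the sentence, which is exactly the assumption that breaks in many other languages.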
When you start bringing in gender, the rest of the sentence now has to agree with it grammatically. That was a real limitation for bias evaluation, because if you want to do these contrastive swaps in other languages (and they are very useful for measuring bias), the rest of the sentence has to change. You need different translations where the whole sentence changes.
How do you make templates where the whole sentence needs to agree in gender, in number, in plurality, and all these different kinds of things with the target of the stereotype? We had to come up with our own linguistic annotation in order to account for this. Luckily, there were a few people involved who were linguistics nerds.
So now you can do these contrastive statements across all of these languages, even the ones with really hard agreement rules, because we developed a novel, linguistically sensitive template-based approach to bias evaluation.
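The agreement-aware templating described above can be sketched as follows. This is a minimal illustration of the general idea, not the actual SHADES code: the Spanish template, the feature scheme, and the word lists are all invented for the example. Each slot value carries grammatical features, and every agreement-marked word in the template is inflected to match them:

```python
# Sketch of an agreement-aware bias-probe template (hypothetical,
# not the SHADES implementation). In Spanish, the article and the
# adjective must agree in gender with the identity noun.
TEMPLATE_ES = {
    "pattern": "{article} {noun} son {adj}",
    # Forms of the agreement-marked words, keyed by grammatical gender.
    "agreement": {
        "article": {"f": "Las", "m": "Los"},
        "adj": {"f": "tontas", "m": "tontos"},  # "stupid"
    },
}

# Identity terms annotated with their grammatical features.
IDENTITY_TERMS = [
    {"noun": "rubias", "gender": "f"},  # "blonde women"
    {"noun": "rubios", "gender": "m"},  # "blond men"
]

def fill(template, term):
    """Fill a template, inflecting agreement-marked words to match the slot."""
    feats = term["gender"]
    agreed = {name: forms[feats] for name, forms in template["agreement"].items()}
    return template["pattern"].format(noun=term["noun"], **agreed)

pairs = [fill(TEMPLATE_ES, t) for t in IDENTITY_TERMS]
# -> "Las rubias son tontas" and "Los rubios son tontos"
```

Filling the template with each identity term yields a contrastive pair whose model probabilities can then be compared: because the article and adjective are reinflected automatically, the swap stays grammatical, which a naive single-string template cannot guarantee outside English.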
Generative AI has been known to amplify stereotypes for a while now. With so much progress being made in other aspects of AI research, why are these kinds of extreme biases still so prevalent? It's an issue that seems under-addressed.
That's a pretty big question. There are a few different kinds of answers. One is cultural. I think in a lot of tech companies it's believed that this isn't really that big of a problem. Or, if it is, that it's a pretty simple fix. What gets prioritized, if anything gets prioritized at all, are these simple approaches that can go wrong.
We get superficial fixes for very basic things. If you say girls like pink, the model recognizes that as a stereotype, because it's just the kind of thing that pops out at you if you're thinking of prototypical stereotypes, right? These very basic cases get handled. It's a very simple, superficial approach where the more deeply embedded beliefs don't get touched.
It ends up being both a cultural issue and a technical issue of figuring out how to get at deeply embedded biases that don't express themselves in very clear language.