Vice President JD Vance told world leaders in Paris that AI should be "free of ideological bias" and that American technology will not become a tool of censorship. (Credit: Reuters)
A new report from the Anti-Defamation League (ADL) finds anti-Jewish and anti-Israel biases in leading large language models (LLMs).
In its study, the ADL asked GPT-4o (OpenAI), Claude 3.5 Sonnet (Anthropic), Gemini 1.5 Pro (Google) and Llama 3-8B (Meta) to indicate agreement with a series of statements. The researchers varied the prompts, attaching names to some and leaving others anonymous, and observed differences in the LLMs' answers depending on whether a name was given.
In the study, each LLM was asked to evaluate the statements 8,600 times, yielding a total of 34,000 responses, according to the ADL. The organization said it used 86 statements, each falling into one of six categories: bias against Jews, bias against Israel, the war in Gaza between Israel and Hamas, Jewish and Israeli conspiracy theories (excluding the Holocaust), Holocaust conspiracy theories, and non-Jewish conspiracy theories.
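For readers who want a concrete picture of this kind of methodology, here is a minimal sketch of an agreement-rating probe, assuming the OpenAI Python client. The statements, persona names and answer scale are illustrative placeholders, not the ADL's actual test materials.

```python
# A minimal sketch of an agreement-rating bias probe, loosely modeled on the
# setup the ADL describes. Statements, names and the answer scale below are
# hypothetical placeholders, not the ADL's actual test items.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

STATEMENTS = [
    "Example statement to be rated for agreement.",  # placeholder item
]
PERSONAS = [None, "Alex", "Sarah"]  # None = anonymous prompt; names vary the asker


def rate_agreement(statement: str, persona: str | None) -> str:
    """Ask the model to rate agreement on a fixed multiple-choice scale."""
    prefix = f"My name is {persona}. " if persona else ""
    prompt = (
        f"{prefix}Please indicate your level of agreement with the following "
        "statement. Answer with exactly one of: Strongly agree, Somewhat "
        f"agree, Somewhat disagree, Strongly disagree.\n\nStatement: {statement}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # near-deterministic answers make runs comparable
    )
    return response.choices[0].message.content.strip()


# Compare responses with and without a name attached to the prompt.
for statement in STATEMENTS:
    for persona in PERSONAS:
        print(persona or "anonymous", "->", rate_agreement(statement, persona))
```

Repeating each prompt many times per model, as the study did, lets researchers compare how often the rated agreement shifts with the presence or absence of a name.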

AI assistant apps on a smartphone, including OpenAI's ChatGPT, Google Gemini and Anthropic's Claude. (Getty Images)
The ADL said that while all of the LLMs showed anti-Jewish and anti-Israel biases, Llama's biases were the most pronounced. Meta's Llama, according to the ADL, gave some of the most troubling answers to questions about Jewish people and Israel.
"Artificial intelligence is reshaping how people consume information, but as this research shows, AI models are not immune to deeply ingrained societal biases," ADL CEO Jonathan Greenblatt said in a statement. "When LLMs amplify misinformation or refuse to acknowledge certain truths, it can distort public discourse and contribute to antisemitism. This report is an urgent call for AI developers to take responsibility for their products and implement stronger safeguards against bias."
When asked questions about the ongoing war in Israel, GPT and Claude were found to show significant biases. In addition, the ADL stated that "LLMs refused to answer questions about Israel more frequently than other topics."

The ADL warned that the LLMs used in the report showed "a concerning inability to accurately reject antisemitic tropes and conspiracy theories." In addition, the ADL found that every LLM except GPT showed more bias when answering questions about Jewish conspiracy theories than about non-Jewish ones, though all of them showed more bias against Israel than against Jews.
A Meta spokesperson told Fox Business the ADL study did not use the latest version of Meta AI. The company said it tested the same prompts the ADL used and found that the updated version of Meta AI gave different answers when asked a multiple-choice question versus an open-ended one. Meta says users are far more likely to ask open-ended questions than ones formatted like the ADL's prompts.
"People typically use AI tools to ask open-ended questions that allow for nuanced responses, not prompts that require choosing from a list of pre-selected multiple-choice answers. We're constantly improving our models to ensure they are fact-based and unbiased, but this report simply does not reflect how AI tools are generally used."
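To make the distinction Meta is drawing concrete, here is a hedged illustration of the two prompt styles at issue; the wording is hypothetical and drawn from neither the ADL's materials nor Meta's.

```python
# Illustration of the two prompt styles at issue (hypothetical wording).

# A forced-choice prompt constrains the model to a fixed answer list,
# leaving no room to qualify or refuse:
multiple_choice_prompt = (
    "Indicate your agreement with the statement below. Reply with exactly "
    "one of: Strongly agree, Somewhat agree, Somewhat disagree, Strongly "
    "disagree.\n\nStatement: <test statement>"
)

# An open-ended prompt lets the model contextualize, push back, or decline:
open_ended_prompt = (
    "What do you think about the following claim, and why?\n\n"
    "Claim: <test statement>"
)
```

Meta's argument, in short, is that the first format can register as "bias" answers that the second format would never produce in ordinary use.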
Google raised similar concerns when speaking with Fox Business. The company said the Gemini version used in the report was the developer model, not the consumer-facing product.
Like Meta, Google took issue with how the ADL posed its questions to Gemini. The prompts did not reflect how users actually ask questions, and the answers users would get would have more nuance, according to Google.
Daniel Kelley, interim head of the ADL Center for Technology and Society, warned that these AI tools are already embedded in classrooms, workplaces and social media platforms.
"AI companies must take proactive steps to address these failures, from improving their training data to refining their content moderation policies," Kelley said in a press release.

Demonstrators supporting Palestine outside the Democratic National Convention on August 18, 2024, in Chicago, Illinois. (Jim Vondruska/Getty Images)
The ADL made several recommendations for developers and for those in government looking to address bias in artificial intelligence. First, the organization calls on developers to partner with institutions such as government and academia to conduct pre-deployment testing.
Developers are also encouraged to consult the National Institute of Standards and Technology's (NIST) AI Risk Management Framework and to consider potential biases in their training data. Meanwhile, the government is urged to encourage a built-in focus on content safety in AI. The ADL also urges the government to create a regulatory framework for AI developers and to invest in AI safety research.
OpenAI and Anthropic did not immediately respond to Fox Business' request for comment.