Meta says its latest AI model is less woke, more like Elon's Grok



Meta says its latest AI model, Llama 4, is less politically biased than its predecessors. The company says it accomplished this in part by allowing the model to answer more politically divisive questions, and added that Llama 4 now compares favorably with the lack of political lean in Grok, the "non-woke" chatbot from Elon Musk's xAI.

"Our goal is to remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue," Meta continues. "As part of this work, we're continuing to make Llama more responsive so that it answers questions, can respond to a variety of different viewpoints without passing judgment, and doesn't favor some views over others."

One concern skeptics have raised about large models developed by a handful of companies is the kind of control over information they can exert. Whoever controls the AI models can essentially control the information people receive, turning the dials in whichever direction they please. This is nothing new, of course. Internet platforms have long used algorithms to decide which content to surface. That is why Meta is still being attacked by conservatives, many of whom insist the company has suppressed right-leaning views despite the fact that conservative content has historically been among the most popular on Facebook. CEO Mark Zuckerberg has been working overtime to curry favor with the administration in hopes of avoiding regulatory headaches.

In its blog post, Meta emphasized that its changes to Llama 4 are specifically intended to make the model less liberal. "It's well known that all leading LLMs have had issues with bias; specifically, they historically have leaned left when it comes to debated political and social topics," it wrote. "This is due to the types of training data available on the internet." The company has not disclosed the data it used to train Llama 4, but it is well known that Meta and other model companies rely on pirated books and scrape websites without authorization.

One of the problems with optimizing for "balance" is that it can create a false equivalence and lend credibility to bad-faith arguments that are not grounded in empirical, scientific data. Known colloquially as "bothsidesism," some in the media feel a responsibility to give equal weight to opposing viewpoints, even when one side is making a data-based argument and the other is spouting conspiracy theories. A group like QAnon is interesting, but it represents a fringe movement that never reflected the views of very many Americans, and it was arguably given more airtime than it deserved.

Leading AI models still have a pernicious problem with producing factually accurate information; to this day they routinely make things up and lie about it. AI has plenty of useful applications, but as an information retrieval system it remains risky to use. Large language models confidently spout incorrect information, and all the old ways of using intuition to gauge whether a website is legitimate go out the window.

AI models do have a problem with bias; image recognition models have notoriously had trouble identifying people of color, for example, and women are often depicted in sexualized ways, such as wearing skimpy clothing. Bias also shows up in more innocuous forms: AI-generated text can be easy to spot by the frequent appearance of em dashes, a punctuation mark favored by journalists and other writers who produce much of the content the models are trained on. Models reflect the prevailing, popular views of the general public.

But Zuckerberg sees an opportunity to cozy up to President Trump and is doing what is politically expedient, so Meta is specifically signaling that its model will be less liberal. So the next time you use one of Meta's AI products, it may be willing to argue in favor of curing Covid-19 by taking horse tranquilizers.


