X users treating Grok like a fact-checker spark concerns over misinformation



Some users on Elon Musk's X are turning to Musk's AI bot Grok for fact-checking, raising concerns among human fact-checkers that this could fuel misinformation.

Earlier this month, X enabled users to call on xAI's Grok and ask it questions about different things. The move was similar to Perplexity, which has been running an automated account on X to offer a similar experience.

Soon after xAI created Grok's automated account on X, users started experimenting with asking it questions. Some people in markets including India began asking Grok to fact-check comments and questions that target particular political beliefs.

Fact-checkers are concerned about Grok, or any other AI assistant of this sort, being used in this way because the bots can frame their answers to sound convincing even when they are not factually correct. Instances of Grok spreading fake news and misinformation have been seen in the past.

In August last year, five secretaries of state urged Musk to implement critical changes to Grok after misleading information generated by the assistant surfaced on social networks ahead of the US election.

Other chatbots, including OpenAI's ChatGPT and Google's Gemini, were also seen generating inaccurate information about last year's election. Separately, disinformation researchers found in 2023 that AI chatbots, including ChatGPT, could easily be used to produce convincing text carrying misleading narratives.

“AI assistants, like Grok, are really good at using natural language and giving an answer that sounds like a human said it,” said Angie Holan, director of the International Fact-Checking Network (IFCN).

Grok was asked by an X user to fact-check claims made by another user.

Unlike AI assistants, human fact-checkers use multiple, credible sources to verify information. They also take full accountability for their findings, with their names and organizations attached, to ensure credibility.

Pratik Sinha, co-founder of Alt News, a fact-checking website in India, said that although Grok appears to have convincing answers, it is only as good as the data it is supplied with.

“Who is going to decide what data it gets supplied with, and that is where government interference, etc., will come into the picture.”

“There is no transparency. Anything that lacks transparency will cause harm, because anything that lacks transparency can be molded in any way.”

“Could be misused – to spread misinformation”

In one of its responses posted earlier this week, the Grok account on X acknowledged that it “could be misused – to spread misinformation and violate privacy.”

However, the automated account does not display any disclaimers to users when they receive its answers, which could mislead them if it has, for example, hallucinated the answer, a potential shortcoming of AI.

Grok's response on whether it can spread misinformation (translated from Hinglish)

“It may make up information to provide a response,” said Anushka Jain, a research associate at the research collective Digital Futures Lab.

There is also some question about how much Grok uses posts on X as training data, and what quality-control measures it uses to fact-check such posts. Last summer, it pushed out a change that appeared to allow Grok to consume X user data by default.

Another concerning aspect of AI assistants like Grok being accessible through social media platforms is that they deliver information publicly, unlike ChatGPT or other chatbots that are used privately.

Even if a user is well aware that the information they get from the assistant could be misleading or not completely correct, others on the platform might still believe it.

This could cause serious social harm. Instances of that were seen earlier in India, when misinformation circulated on WhatsApp led to mob lynchings. However, those severe incidents occurred before the arrival of GenAI, which has made the generation of synthetic content even easier and more realistic-seeming.

“If you see a lot of these Grok answers, you're going to say, hey, well, most of them are right, and that may be so, but there are going to be some that are wrong. And how many? It's not a small fraction. Some research studies have shown that AI models are subject to 20% error rates,” said Holan.

AI vs. real fact-checkers

While AI companies, including xAI, are refining their AI models to make them communicate more like humans, they still are not, and cannot, replace humans.

Over the past few months, tech companies have been exploring ways to reduce reliance on human fact-checkers. Platforms including X and Meta have started embracing the new concept of crowdsourced fact-checking through so-called Community Notes.

Naturally, such changes also cause concern among fact-checkers.

Sinha of Alt News optimistically believes that people will learn to differentiate between machines and human fact-checkers and will come to value the accuracy of humans.

“We're going to see the pendulum eventually swing back toward more fact-checking,” said IFCN's Holan.

However, she noted that in the meantime, fact-checkers will likely have more work to do in dealing with AI-generated information that spreads swiftly.

“A lot of this issue depends on whether you really care about what is actually true or not,” she said.

X and xAI did not respond to our request for comment.


