Google's AI Overviews Explain Made-Up Idioms With Confident Nonsense


By [email protected]


Language can seem almost impenetrable from the outside. Inside jokes and idioms carry meaning only for a small group of people and look like nonsense to the rest of us. Thanks to generative AI, even the meaningless found meaning this week, as the internet blew up like a brook trout over the ability of Google Search's AI Overviews to define phrases nobody had ever uttered before.

What, you've never heard the phrase "blew up like a brook trout"? Sure, I just made it up, but Google's AI Overview told me it's "a colloquial way of saying something exploded or became a sensation quickly," likely referring to the fish's striking colors and markings. No, it doesn't make sense.


The trend seems to have started on Threads, where author and screenwriter Meaghan Wilson Anastasios shared what happened when she searched for "peanut butter platform heels." Google returned a result referencing a (not real) scientific experiment in which peanut butter was used to demonstrate the creation of diamonds under high pressure.

It moved to other social media, like Bluesky, where people shared Google's explanations of phrases like "you can't lick a badger twice." The game: Search for a novel, nonsensical phrase with "meaning" at the end.

Things escalated from there.

Bluesky screenshot captioned "wait this is amazing," showing a Google search for "you can't sculpt a pretzel out of good intentions." Google's AI Overview says: The saying "you can't sculpt a pretzel out of good intentions" is an example highlighting that even with the best intentions, the end result can be unexpected or even negative, especially in situations involving intricate or delicate tasks. The pretzel, with its twisted and complex shape, represents a task that requires precision and skill, not just goodwill. Here's a breakdown of the saying: "Sculpting a pretzel": This refers to the act of making or shaping a pretzel, a task that requires careful handling and technique.

Screenshot by Jon Reed/CNET

Bluesky post by leviagershon.bsky.social, captioned "just amazing," with a screenshot of a Google Search AI Overview that says: The phrase "you can't catch a camel to London" is a humorous way of saying that something is impossible or very difficult. It's a comparison, meaning that trying to catch camels and transport them to London is so ridiculous or impractical that it serves as a metaphor for a nearly impossible or pointless task.

Screenshot by Jon Reed/CNET

This meme is interesting for more reasons than comic relief. It shows how large language models will strain to provide an answer that sounds correct, not one that actually is correct.

"They are designed to generate fluent, plausible-sounding responses, even when the input is completely nonsensical," said Yafang Li, assistant professor at the Fogelman College of Business and Economics at the University of Memphis. "They are not trained to verify the truth. They are trained to complete the sentence."

Like glue on pizza

The fake meanings of made-up sayings bring back memories of the all-too-real stories about AI Overviews giving hilariously wrong answers to basic questions, such as when it suggested putting glue on pizza to help the cheese stick.

This trend seems at least a bit more harmless because it doesn't center on actionable advice. I mean, I hope nobody tries to lick a badger once, much less twice. The problem behind it, however, is the same: A large language model, like Google's Gemini behind AI Overviews, tries to answer your question and offer a plausible response, even if that response is nonsense.

A Google spokesperson said AI Overviews are designed to display information supported by top web results, and that they have an accuracy rate comparable to other search features.

"When people do nonsensical or 'false premise' searches, our systems will try to find the most relevant results based on the limited web content available," the Google spokesperson said. "This is true of search overall, and in some cases, AI Overviews will also trigger in an effort to provide helpful context."

This particular situation is what's called a "data void," where there isn't much relevant information available for the search query. The spokesperson said Google is working on limiting when AI Overviews appear on searches without enough information, and on preventing them from providing misleading, satirical or unhelpful content. Google uses information about queries like these to better understand when AI Overviews should and should not appear.

You won't always get a made-up definition if you ask for the meaning of a fake phrase. When drafting the heading of this section, I searched for "like glue on pizza meaning," and it didn't trigger an AI Overview.

The problem doesn't appear to be universal across LLMs. I asked ChatGPT for the meaning of "you can't lick a badger twice," and it told me the phrase "isn't a standard idiom, but it definitely sounds like the kind of quirky, rustic proverb someone might use." It did, however, try to offer a definition anyway, essentially: If you do something reckless or provoke someone once, you may not survive to do it again.

Read more: AI Essentials: 27 Ways to Make Gen AI Work for You, According to Our Experts

Pulling meaning out of nothing

This phenomenon is an entertaining example of LLMs' tendency to make things up, what the AI world calls "hallucinating." When a gen AI model hallucinates, it produces information that sounds like it could be plausible or accurate but isn't rooted in reality.

LLMs are "not truth generators," Li said. They just predict the next logical bits of language based on their training data.
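To make that mechanism concrete, here's a minimal sketch of next-token prediction using the small, openly available GPT-2 model through Hugging Face's transformers library. It's an illustration under my own assumptions, not the Gemini model behind AI Overviews, but it shows the point: the model assigns probabilities to continuations whether or not the idiom in the prompt is real.

```python
# A minimal sketch of the next-token prediction Li describes, using the small
# open GPT-2 model from Hugging Face's transformers library (an illustration,
# not the Gemini model behind AI Overviews).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The saying 'you can't lick a badger twice' means"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The model always produces a probability distribution over the next token,
# whether or not the idiom in the prompt actually exists.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>12}  p={prob.item():.3f}")
```

Whatever tokens come out on top, the model keeps completing the sentence; nothing in this process checks whether the badger proverb exists.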

Most AI researchers in a recent survey reported that they doubt AI's accuracy and trustworthiness issues will be solved anytime soon.

The fake definitions show not just the inaccuracy of LLMs but also how confident they are. If you asked a person the meaning of a phrase like "you can't get a turkey from a Cybertruck," you'd probably expect them to say they'd never heard it and that it doesn't mean anything. LLMs often react with the same confidence as if you'd asked about a real idiom.

In this case, Google's AI Overview said the phrase suggests that Tesla's Cybertruck "is not designed or capable of delivering Thanksgiving turkeys or other similar items" and highlighted "its distinctive, futuristic design that is not conducive to carrying bulky goods." Burn.

This humorous trend carries an ominous lesson: Don't trust everything a chatbot says. It might be making things up out of thin air, and it won't necessarily indicate that it's uncertain.

"This is a perfect moment for educators and researchers to use these scenarios to teach people how meaning is generated, how AI works and why it matters," Li said. "Users should always stay skeptical and verify claims."

Be careful what you search for

Since you can't trust an LLM to be skeptical on your behalf, you need to encourage it to take what you say with a grain of salt.

"When users enter a prompt, the model assumes it's valid and then proceeds to generate the most likely accurate answer for it," Li said.

The solution is to introduce skepticism into your prompts. Don't ask for the meaning of an unfamiliar phrase or idiom. Ask whether it's real. Li suggested asking, "Is this a real idiom?"

"That may help the model recognize the phrase instead of just guessing," she said.
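As a rough illustration of that advice, here's a minimal sketch of the two prompt styles using the official OpenAI Python client. The model name, wording and variable names are placeholders I've chosen for the example, not anything Li or Google prescribes, and other chatbots' APIs would work the same way.

```python
# A rough sketch of the "introduce skepticism" prompting pattern described above.
# Assumes the official OpenAI Python client and an OPENAI_API_KEY in the
# environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
phrase = "you can't lick a badger twice"

# Naive prompt: asking for a meaning nudges the model toward inventing one.
naive = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": f"What does '{phrase}' mean?"}],
)

# Skeptical prompt: ask whether the phrase is real before explaining it.
skeptical = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"Is '{phrase}' a real idiom? If it isn't, say so "
                   "instead of inventing a meaning.",
    }],
)

print("Naive answer:\n", naive.choices[0].message.content)
print("Skeptical answer:\n", skeptical.choices[0].message.content)
```

The only difference is the question being asked, which is the point: framing the prompt as a yes-or-no check gives the model room to admit the phrase isn't real instead of confidently defining it.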






