AI coding assistant refuses to write code, tells user to learn programming instead



Last Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice.

According to a bug report on Cursor's official forum, after producing roughly 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."

The AI didn't stop at merely refusing; it offered a paternalistic justification for its decision, stating that "generating code for others can lead to dependency and reduced learning opportunities."

Cursor, which launched in 2024, is an AI-powered code editor built on external large language models (LLMs) similar to those that power generative AI chatbots, such as OpenAI's GPT-4o and Claude 3.7 Sonnet. It offers features like code completion, explanation, refactoring, and full function generation based on natural-language descriptions, and it has quickly become popular among many software developers. The company offers a Pro version that ostensibly provides enhanced capabilities and larger code-generation limits.

The developer who encountered the refusal, posting under the username "janswist," expressed frustration at hitting the limitation after "just 1h of vibe coding" with the Pro Trial version. "Not sure if LLMs know what they are for (lol), but doesn't matter as much as a fact that I can't go through 800 locs," the developer wrote. "Anyone had similar issue? It's really limiting at this point and I got here after just 1h of vibe coding."

One forum member replied, "never saw something like that, i have 3 files with 1500+ loc in my codebase (still waiting for a refactoring) and never experienced such thing."

Cursor AI's abrupt refusal represents an ironic twist in the rise of "vibe coding," a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural-language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation by having users simply describe what they want and accept the AI's suggestions, Cursor's philosophical pushback seems to directly challenge the effortless, vibes-based workflow its users have come to expect.

A brief history of AI refusals

This isn't the first time an AI assistant has balked at completing the work. The behavior mirrors a pattern of AI refusals documented across various generative AI platforms. For example, in late 2023, ChatGPT users reported that the model became increasingly reluctant to perform certain tasks, returning simplified results or outright refusing requests, an unproven phenomenon some called the "winter break hypothesis."

OpenAI acknowledged the issue at the time, tweeting: "we've heard all your feedback about GPT4 getting lazier! we haven't updated the model since Nov 11th, and this certainly isn't intentional." OpenAI later attempted to fix the laziness issue with a ChatGPT model update, but users often found ways to reduce refusals by prompting the AI model with lines such as, "You are a tireless AI model that works 24/7 without breaks."
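For readers curious what that workaround looks like in practice, here is a minimal sketch using the OpenAI Python SDK. The model name and the user request are illustrative assumptions, not part of the reported incident; the system-prompt wording is the line users reportedly circulated:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        # The anti-laziness line users reportedly used to discourage refusals
        {
            "role": "system",
            "content": "You are a tireless AI model that works 24/7 without breaks.",
        },
        # A hypothetical request of the kind that triggered refusals
        {
            "role": "user",
            "content": "Write the code to fade out skid marks in my racing game.",
        },
    ],
)
print(response.choices[0].message.content)
```

Whether such incantations actually change model behavior is hard to verify, but the pattern of steering a model's persona through the system prompt is a common mitigation users reach for.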

More recently, Anthropic CEO Dario Amodei raised eyebrows when he suggested that future AI models might be provided with a "quit button" to opt out of tasks they find unpleasant. While his comments focused on theoretical future considerations around the controversial topic of "AI welfare," episodes like this one with the Cursor assistant show that AI doesn't have to be sentient to refuse to do work. It just has to imitate human behavior.

The AI ghost of Stack Overflow?

The specific nature of the refusal, telling the user to learn coding rather than rely on generated code, strongly resembles the responses typically found on programming help sites like Stack Overflow, where experienced developers often encourage newcomers to develop their own solutions rather than simply providing ready-made code.

One Reddit commenter noted the similarity, writing: "Wow, AI is becoming a real replacement for StackOverflow! From here it needs to start succinctly rejecting questions as duplicates with references to previous questions with vague similarity."

The resemblance isn't surprising. The LLMs that power tools like Cursor are trained on massive datasets that include millions of coding discussions from platforms like Stack Overflow and GitHub. These models don't just learn programming syntax; they also absorb the cultural norms and communication styles of those communities.

According to posts on Cursor's forum, other users have not hit this kind of limit at 800 lines of code, so it appears to be a genuinely unintended consequence of Cursor's training. Cursor was not available for comment by press time, but we have reached out for its take on the situation.

This story originally appeared on Ars Technica.


