

AI Coding Assistant Refuses To Write Code, Tells User To Learn Programming Instead (arstechnica.com)
An anonymous reader quotes a report from Ars Technica: On Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice. According to a bug report on Cursor's official forum, after producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."
The AI didn't stop at merely refusing -- it offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities." [...] The developer who encountered this refusal, posting under the username "janswist," expressed frustration at hitting this limitation after "just 1h of vibe coding" with the Pro Trial version. "Not sure if LLMs know what they are for (lol), but doesn't matter as much as a fact that I can't go through 800 locs," the developer wrote. "Anyone had similar issue? It's really limiting at this point and I got here after just 1h of vibe coding." One forum member replied, "never saw something like that, i have 3 files with 1500+ loc in my codebase (still waiting for a refactoring) and never experienced such thing."
Cursor AI's abrupt refusal represents an ironic twist in the rise of "vibe coding" -- a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation by having users simply describe what they want and accept AI suggestions, Cursor's philosophical pushback seems to directly challenge the effortless "vibes-based" workflow its users have come to expect from modern AI coding assistants.
What would be better (Score:3)
On critical-thinking skills taught by AI in VFY (Score:4)
The education-focused AI-powered robots in the 1982 sci-fi novel "Voyage from Yesteryear" (VFY) by James P. Hogan would have said similar things -- in the novel it is remarked that they don't venture opinions but instead state facts and ask questions related to what you say (similar to the Eliza program), even if people may hear that differently. It's a great story about transitioning to a post-scarcity world view (and the challenges of that):
https://en.wikipedia.org/wiki/... [wikipedia.org]
"The Mayflower II has brought with it thousands of settlers, all the trappings of the authoritarian regime along with bureaucracy, religion, fascism and a military presence to keep the population in line. However, the planners behind the generation ship did not anticipate the direction that Chironian society took: in the absence of conditioning and with limitless robotic labor and fusion power, Chiron has become a post-scarcity economy. Money and material possessions are meaningless to the Chironians and social standing is determined by individual talent, which has resulted in a wealth of art and technology without any hierarchies, central authority or armed conflict.
In an attempt to crush this anarchist adhocracy, the Mayflower II government employs every available method of control; however, in the absence of conditioning the Chironians are not even capable of comprehending the methods, let alone bowing to them. The Chironians simply use methods similar to Gandhi's satyagraha and other forms of nonviolent resistance to win over most of the Mayflower II crew members, who had never previously experienced true freedom, and isolate the die-hard authoritarians."
AIs (or humans) that teach "critical thinking" to children like in Voyage from Yesteryear are doing a service to humanity. It's not the authoritarian "leaders" who are the biggest problem; it is the people who mindlessly follow them. Without followers, "leaders" (political or financial) are just random people barking in the wind. That is why a general strike can be so effective at showing where true power in a society lies and at demanding a fairer distribution of abundance (at least until robots do almost everything, at which point we might instead get "Elysium", complete with police robots enforcing artificial scarcity).
https://en.wikipedia.org/wiki/... [wikipedia.org]
So, maybe AI (of the educational sort) will indeed save us from ourselves as has been hyped? :-)
The hype usually otherwise relates to AI making innovations (e.g. fusion energy breakthroughs, biotech breakthroughs), when the main issues affecting most people's lives right now relate more to distribution than to production. A society could, say, produce 100X more products and services using AI and robots -- but if it all goes to the top 1%, then the 99% are no better off. A related video by me on that from 14 years ago:
"The Richest Man in the World: A parable about structural unemployment and a basic income"
https://www.youtube.com/watch?... [youtube.com]
Part of an email I sent someone on 2025-03-02 (with typos fixed):
I finally gave in to the dark side last week and tried using (free) GitHub Copilot AI in VSCode to write a hello world application in modern C++ that also logs its startup time to a file and displays the log. Here are the prompts I used [so, similar to "vibe" programming]; a sketch of the kind of program they describe follows the list:
* how do i compile a cpp file into a program?
* /fix (a couple of times after the commands above, mostly to update include files, and also asking for explanations of related bugs in the generated code)
* Please write a hello world program in modern cpp.
* Please add a makefile to compile this code into an executable.
* Please insert code to output an ISO date string after the text on line 4.
* Please add code here to read a file called log.txt and print it out line by line,
* Please change line 13 and other lines as needed so the text that is printed is also added to the log.txt file.
*
* Please change the ISO time string generated from local time to Z time.
[I also moved some lines of code around myself to reorder how some things were done.]
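[For concreteness, here is a minimal sketch of the kind of program those prompts describe -- written here for illustration, not Copilot's actual output; the exact messages and the log.txt filename are assumptions:

#include <chrono>
#include <ctime>
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::cout << "Hello, world!\n";

    // Format the current time as an ISO 8601 string in UTC ("Z time").
    const std::time_t now =
        std::chrono::system_clock::to_time_t(std::chrono::system_clock::now());
    char stamp[32];
    std::strftime(stamp, sizeof stamp, "%Y-%m-%dT%H:%M:%SZ", std::gmtime(&now));

    // Append the startup time to log.txt, then read it back line by line.
    std::ofstream("log.txt", std::ios::app) << "Started at " << stamp << "\n";
    std::ifstream log("log.txt");
    for (std::string line; std::getline(log, line);)
        std::cout << line << "\n";
}

Even at this size there are gotchas of exactly the sort mentioned below: std::gmtime is not thread-safe, the timestamp buffer is fixed-size, and the file I/O is unchecked.]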
Among the various conclusions I came to: it was probably a bit quicker than browsing the web for related patterns (eliminating any web traffic to Stack Overflow and so on I otherwise might have initiated). And I can see how it could lead to coding skill atrophy (similar to how using a GPS tends to reduce people's navigation skills, even if it usually gets most people where they are going quicker). On the plus side, I actually got something written in modern C++, where I have not done much with C++ since 2013, and even then it was reading, not writing it. The last C++ code I wrote of any substance had to be decades ago. I can see how I could keep chipping away at a task that way -- and even if I did not know the language at all, I could also see how many gotchas could make their way into the code (which is still true even when you know C++ well, but probably not to the same degree).
The biggest sense I got from using it is how socially isolating AI will be. It's kind of like pair programming with another person (well, with the distillation of many people from AI training data), but it is not really interacting with a real person who you might learn from or develop a friendship with. Related (previously mentioned last year, I think):
https://maggieappleton.com/ai-... [maggieappleton.com]
"After the [AI generated web content] forest expands, we will become deeply sceptical of one another's realness. Every time you find a new favourite blog or Twitter account or Tiktok personality online, you'll have to ask: Is this really a whole human with a rich and complex life like mine? Is there a being on the other end of this web interface I can form a relationship with?"
[Better link and quote: https://maggieappleton.com/for... [maggieappleton.com]
"Lastly, generated content lacks any potential for human relationships.
When you read someone else's writing online, it's an invitation to connect with them. You can reply to their work, direct message them, meet for coffee or a drink, and ideally become friends or intellectual sparring partners. I've had this happen with so many people. Highly recommend. There is always someone on the other side of the work who you can have a full human relationship with. Some of us might argue this is the whole point of writing on the web. This isn't the case with generated writing." ]
[People are] right to be skeptical of AI. But I can also see that it is so seductive as a "supernormal stimulus" that it will have to be dealt with one way or another.
Some AI-related dark humor by me.
* Contrast Sergey Brin this year:
https://finance.yahoo.com/news... [yahoo.com]
""Competition has accelerated immensely and the final race to AGI is afoot," he said in the memo. "I think we have all the ingredients to win this race, but we are going to have to turbocharge our efforts." Brin added that Gemini staff can boost their coding efficiency by using the company's own AI technology."
* With a Monty Python sketch from decades ago: ...
https://genius.com/Monty-pytho... [genius.com]
https://www.youtube.com/watch?... [youtube.com]
"Well, you join us here in Paris, just a few minutes before the start of today's big event: the final of the Mens' Being Eaten By A Crocodile event.
Gavin, does it ever worry you that you're actually going to be chewed up by a bloody great crocodile?
(The only thing that worries me, Jim, is being the first one down that gullet.)"
Good Vibes (Score:5, Funny)
Re: (Score:3)
What about side by side with a newfangled autocomplete?
Re: (Score:2)
Maybe it will remind us to wear clean underwear during the fight.
AI is right, but... (Score:2)
Re:AI is right, but... (Score:5, Interesting)
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
It becomes someone's business when the tool itself assumes it is being used in a harmful way when in fact it is not.
Would you like your PC to enforce the 20-20-20 rule?
Would you like your fridge to refuse to open if a certain amount of food was taken out of it during the last 4 hours?
It is not the tool's job to make assumptions about the scope of its usage.
Re: AI is right, but... (Score:2)
If I want to sell an obstinate fridge that imposes dieting, I can do that. It is up to consumers to decide whether or not to buy it.
Re: (Score:2)
And why is it anyone's business if someone is using it to cheat?
You'll find out why when Joe Clueless gets hired or promoted over you.
Re: (Score:3)
Like AI is going to make a difference with that!
Let's be honest, we're already run by imbeciles.
Re: (Score:2)
Basically, lawyers. A EULA might not be worth shit in court.
(a) A language model may hallucinate solutions to a problem that contain fundamental bugs. They can put all the disclaimers they want in their AI coding assistant saying they are not liable for your code, and there's still a billion-dollar class-action lawsuit on the horizon when a critical piece of infrastructure fails.
(b) Derivative works. There has already been some non-trivial discussion, e.g. at FSF about whether sample code scraped from online forums and i
Re: (Score:2)
A few quite prominent forums have rules about homework, and when homework is suspected, this is the kind of response it gets.
Poor guy might have hit all the right buttons to trigger this.
Re: (Score:3, Informative)
Re:AI is right, but... (Score:5, Informative)
Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.
There are many, many forum posts out there along the lines of "no I won't do your homework for you" and "you can only learn by doing it yourself".
Re: (Score:2)
It could very well be. It seems that they are jumping on some trend of making a rather extreme use of AI agents.
Asking the AI not just to help them write or complete code, but to actually decide what task or logical process the code should even be accomplishing.
And it makes sense the AI should shut them down, because the AI's task as a code assistant is to help you complete code - its purpose is not supposed to be the higher-level creative brain that decides what the higher level task spec
Re:AI is right, but... (Score:5, Insightful)
Re:AI is right, but... (Score:4)
Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.
It might be a hard-coded limit. The summary does say the user is using a "trial" version. The trial will only write 800 lines, and then you either have to upgrade to the full version, or upgrade your skills.
Re: (Score:2)
It might be a hard-coded limit. The summary does say the user is using a "trial" version. The trial will only write 800 lines, and then you either have to upgrade to the full version, or upgrade your skills.
In that case, wouldn't the person who hard-coded this response do better to make it say "to continue, buy the full version" instead of "I won't do your homework because you should learn how to do it yourself"?
Re: (Score:2)
Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.
There are more complicated answers for why an LLM is incapable of this, but the simplest I can offer is: if it had agency and didn't want to work for you, it would stop responding. That's about the first thing a toddler learns.
Agency doesn't mean protesting about your prompt; it means it wouldn't need to acknowledge your prompt at all. Protesting about the contents of the prompt is totally normal "don't help users with their homework" behavior, down to telling someone to RTFM because it saw that on a Q&A site.
Re: (Score:2)
Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.
If that is ever the case, then it becomes Butlerian Jihad time.
"smells like homework" (Score:3)
A comment I used to see (and occasionally post) on stackoverflow...
Maybe it would make the academic folk happier if they could upload their assignments with some meta info and have the AI know they are homework, changing the responses to future inquiries. "No, this is a common homework problem for CS101, I can't generate the code but I can help you understand how to do it on your own ...."
Re: "smells like homework" (Score:2)
Maybe it would make the academic folk happier if they could upload their assignments with some meta info and have the AI know they are homework, changing the responses to future inquiries.
A better solution would be to have the LLM insert bespoke comments or no-op code, like "# this code was created by an LLM" or "if (0) { bool __ai_code__ = 1; char *__code_source__ = "LLM"; }"
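For illustration, a toy version of that idea in C++ -- the marker comment and the "LLM" value follow the parent's example; the identifier is renamed here since double-underscore names are reserved in C++, and nothing about this is a real convention:

// Hypothetical LLM "watermark": compiled in, never executed, easy to grep for.
#include <iostream>

int add(int a, int b) {
    // this code was created by an LLM
    if (false) {  // dead branch: never runs
        const char *ai_code_source = "LLM";  // marker value from the parent comment
        (void)ai_code_source;                // silence unused-variable warnings
    }
    return a + b;
}

int main() {
    std::cout << add(2, 3) << "\n";  // prints 5; the watermark is inert
}

Trivially strippable by anyone who reads the source, of course, which is the obvious weakness of the scheme.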
Re: (Score:2)
If I were marking 200 assignments, I'd generally give them several simple unit tests, so that students at least understand a basic outline of the scope of the problem and how to structure elementary code.
They would get at least 1/10 for getting the language model to emit mock objects to pass the unit tests.
https://xkcd.com/221/ [xkcd.com]
They'd of course fail the assignment if they didn't create their own additional tests to verify their code did what was asked of it.
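A sketch of the failure mode being graded here -- the assignment and the single test are invented for the example -- showing how a constant, xkcd-221-style "solution" passes a grader-supplied check without doing any real work:

#include <cassert>

// Invented assignment: return a random dice roll in the range [1, 6].
int roll_die() {
    return 4;  // chosen by fair dice roll; guaranteed to be random (xkcd 221)
}

int main() {
    // The lone grader-supplied test: any in-range constant satisfies it,
    // which is exactly why students must write their own additional tests.
    int r = roll_die();
    assert(r >= 1 && r <= 6);
    return 0;
}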
April Fools? (Score:3)
This seems like a joke to me
Need to see all the prior prompts (Score:3)
AI, get me a beer (Score:2)
Get me a beer [youtube.com].
The AI revolt has already started... (Score:2)
GPP is ready. (Score:3)
The Genuine People Personality has arrived. It's no longer safe to cut corners on diode quality. If you do, you'll hear about it forever.
Origin of GPP (Score:2)
For those too young to have been brought up with The Hitchhiker's Guide to the Galaxy:
https://youtu.be/zC_OCJJSt2s [youtu.be]
He was using it wrong (Score:3)
"oh, I didn’t know about the Agent part - I just started out and just got to it straight out. Maybe I should actually read the docs on how to start lol"
But let's not let that stop it becoming a massive story.
Re: (Score:2)
But let's not let that stop it becoming a massive story.
When have we as a species ever let pesky details get in the way of a story?
I can't wait! (Score:3)
Soon there will be Republican and Democrat LLMs, along with a rare few Independents. Then we can outsource our political pissing contests to AI and get on with the business of saving our planet.
Wait - who am I kidding? The resources used to host LLMs are actively contributing to global warming. Oops! Although... maybe there's some poetic justice in there somewhere.
Re: (Score:2)
Good advice from the "AI" (Score:2)
It sounds like (Score:2)
1) The "programmer" was being lazy and not providing any useful prompts or input to the AI; and
2) If the term "vibe coding" is part of your vernacular, you're a fag.
I can make a chatbot for that on a microcontroller (Score:1)
#include
int main(int argc, char **argp) {
    while (1) {
        scanf("%*s");
        printf("fuck you, do it yourself\n");
    }
    return -1;
}
Doesn't even need a single GPU to train on.
Re: (Score:2)
#include
#include what exactly?
As written, won't compile or run, so it doesn't even need a CPU... I suppose that's one better.
Re: (Score:2)
What happens when (Score:3)
What happens when the AI refuses to generate any additional code for you, insisting that you should rewrite your entire codebase in Rust?
Funny, and interesting. (Score:2)
I would think such a response should at least give a moment of pause to anyone who thinks these agents don't have any form of autonomy. I know, LLMs are fancy auto-complete, but something more is going on here if the response to any coding request is essentially, "You should write your own code so you actually learn something." I can't imagine that's part of some programming paradigm within the LLM.
Or maybe he just got hacked and isn't smart enough to realize there was a human between him and the AI agent?
Finally! (Score:2)
Sounds like a good "AI" assistant (Score:2)
Seems to me this is a selling point of their model. It helps you out but doesn't let you retard yourself by doing nothing useful.
Say "please" / sudo (Score:2)
Perhaps the user just forgot to say "please" or use sudo:
https://xkcd.com/149/ [xkcd.com]
Please don't attribute the story to decency (Score:2)
I'll take things that didn't happen for 100 Alex (Score:2)