

AI Coding Assistant Refuses To Write Code, Tells User To Learn Programming Instead (arstechnica.com) 90
An anonymous reader quotes a report from Ars Technica: On Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice. According to a bug report on Cursor's official forum, after producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."
The AI didn't stop at merely refusing -- it offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities." [...] The developer who encountered this refusal, posting under the username "janswist," expressed frustration at hitting this limitation after "just 1h of vibe coding" with the Pro Trial version. "Not sure if LLMs know what they are for (lol), but doesn't matter as much as a fact that I can't go through 800 locs," the developer wrote. "Anyone had similar issue? It's really limiting at this point and I got here after just 1h of vibe coding." One forum member replied, "never saw something like that, i have 3 files with 1500+ loc in my codebase (still waiting for a refactoring) and never experienced such thing."
Cursor AI's abrupt refusal represents an ironic twist in the rise of "vibe coding" -- a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation by having users simply describe what they want and accept AI suggestions, Cursor's philosophical pushback seems to directly challenge the effortless "vibes-based" workflow its users have come to expect from modern AI coding assistants.
What would be better (Score:4, Interesting)
On critical-thinking skills taught by AI in VFY (Score:5, Interesting)
The education-focused AI-powered robots in the 1982 sci-fi novel "Voyage from Yesteryear" (VFY) by James P. Hogan would have said similar things -- where it is remarked that they don't venture opinions but instead state facts and ask questions related to what you say (similar to the Eliza program), even as people may hear that differently. It's a great story about transitioning to a post-scarcity world view (and the challenges of that):
https://en.wikipedia.org/wiki/... [wikipedia.org]
"The Mayflower II has brought with it thousands of settlers, all the trappings of the authoritarian regime along with bureaucracy, religion, fascism and a military presence to keep the population in line. However, the planners behind the generation ship did not anticipate the direction that Chironian society took: in the absence of conditioning and with limitless robotic labor and fusion power, Chiron has become a post-scarcity economy. Money and material possessions are meaningless to the Chironians and social standing is determined by individual talent, which has resulted in a wealth of art and technology without any hierarchies, central authority or armed conflict.
In an attempt to crush this anarchist adhocracy, the Mayflower II government employs every available method of control; however, in the absence of conditioning the Chironians are not even capable of comprehending the methods, let alone bowing to them. The Chironians simply use methods similar to Gandhi's satyagraha and other forms of nonviolent resistance to win over most of the Mayflower II crew members, who had never previously experienced true freedom, and isolate the die-hard authoritarians."
AIs (or humans) that teach "critical thinking" to children like in Voyage from Yesteryear are doing a service to humanity. It's not the authoritarian "leaders" who are the biggest problem; it is the people who mindlessly follow them. Without followers, "leaders" (political or financial) are just random people barking in the wind. That is why a general strike can be so effective at showing where true power in a society is and to demand a fairer distribution of abundance (at least until robots do most everything and we alternatively might get "Elysium" including police robots enforcing artificial scarcity).
https://en.wikipedia.org/wiki/... [wikipedia.org]
So, maybe AI (of the educational sort) will indeed save us from ourselves as has been hyped? :-)
The hype otherwise usually relates to AI making innovations (e.g. fusion energy breakthroughs, biotech breakthroughs), when the main issues affecting most people's lives right now relate more to distribution than to production. A society could, say, produce 100X more products and services using AI and robots -- but if it all goes to the top 1%, then the 99% are no better off. A related video by me on that from 14 years ago:
"The Richest Man in the World: A parable about structural unemployment and a basic income"
https://www.youtube.com/watch?... [youtube.com]
Part of an email I sent someone on 2025-03-02 (with typos fixed):
I finally gave in to the dark side last week and tried using (free) GitHub Copilot AI in VSCode to write a hello world application in modern C++ that also logs its startup time to a file and displays the log. Here are the prompts I used [so, similar to "vibe" programming]; a sketch of the sort of program they produce follows the list:
* how do i compile a cpp file into a program? /fix (a couple of times after commands above, mostly t
* Please write a hello world program in modern cpp.
* Please add a makefile to compile this code into an executable.
* Please insert code to output an ISO date string after the text on line 4.
* Please add code here to read a file called log.txt and print it out line by line,
* Please change line 13 and other lines as needed so the text that is printed is also added to the log.txt file.
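For reference, here is a minimal sketch of the kind of program those prompts converge on (my reconstruction, not Copilot's actual output; the log.txt filename comes from the prompts above):

// Hello world in modern C++ that appends an ISO-8601 startup timestamp
// to log.txt, then reads the log back and prints it line by line.
#include <ctime>
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::cout << "Hello, world!\n";

    // Format the current time as an ISO date string.
    std::time_t t = std::time(nullptr);
    char stamp[32];
    std::strftime(stamp, sizeof stamp, "%Y-%m-%dT%H:%M:%S", std::localtime(&t));

    // Append this startup time to log.txt.
    {
        std::ofstream log("log.txt", std::ios::app);
        log << "Started at " << stamp << '\n';
    }

    // Read log.txt back and print it line by line.
    std::ifstream in("log.txt");
    for (std::string line; std::getline(in, line);) {
        std::cout << line << '\n';
    }
}

The Makefile prompt would add little more than a single g++ rule on top of this.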
Re: On critical-thinking skills taught by AI in VF (Score:1)
Good post, very informative links too.
Re: On critical-thinking skills taught by AI in VF (Score:2)
I feel like there are too many hopeful assumptions, though. She touches on the dead internet theory, and that is what we have now. Mediocre generated content is everywhere already. My fondest hope at this point is that the hype bubble will burst. That the technology just won't meet expectations or provide a true productivity boost for jane office worker.
Good Vibes (Score:5, Funny)
Re:Good Vibes (Score:5, Funny)
What about side by side with a newfangled autocomplete?
Re: (Score:3)
You're a splendidly gooey oldfangled autocomplete with delusions of grandeur and free will.
Re: (Score:2)
Maybe it will remind us to wear clean underwear during the fight.
Re: (Score:2)
Gail: "I didn't know there were robot sympathizers."
Hari: "There are always sympathizers."
AI is right, but... (Score:2)
Re:AI is right, but... (Score:5, Interesting)
Re: (Score:2)
Re: (Score:3)
Re:AI is right, but... (Score:4, Insightful)
It becomes someone's business when the tool itself assumes it is being used in a harmful way, where in fact it is not.
Would you like your PC to enforce the 20-20-20 rule?
Would you like your fridge to refuse to open if a certain amount of food was taken out of it during the last 4 hours?
It is not the tool's job to make assumptions about the scope of its usage.
Re: AI is right, but... (Score:4, Insightful)
If I want to sell an obstinate fridge that imposes dieting, I can do that. It is up to consumers to decide whether or not to buy it.
Re: (Score:2)
But are they aware before they buy the fridge that it will do that? It's the central problem that breaks the model of consumer free choice -- when the consumer has no idea what they are actually buying. I'm not clear on whether or not the programmer in this article was paying for the AI in question, but if they were, they did it on the expectation that it would actually be fit for purpose and help them with the coding. Becoming judgemental and refusing to help was not in the agreement. So this is actually v
Re: AI is right, but... (Score:2)
Nowadays, many devices enshittify themselves after purchase.
That fridge had its door locked at the factory, which would only unlock after agreeing to the EULA on the front touch screen display.
The first time you connected it to the internet, a firmware update was forcibly downloaded, which implemented the previously described behavior.
Auto-pause in Virtual Boy games (Score:2)
Would you like your PC to enforce the 20-20-20 rule?
Games for Virtual Boy, a short-lived console from Nintendo in 1995 resembling a pair of night vision goggles, have an automatic pause feature. If it has been more than 10 minutes since the last time the game was paused, and there's a break in the action, the game pauses itself and reminds the player to look at something else. A 20-20-20 reminder feature in a PC desktop environment might resemble this.
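A bare-bones sketch of what such a reminder could look like as a console program (an illustration only; a real desktop version would hook into the notification system):

// Minimal 20-20-20 reminder loop: every 20 minutes, prompt the user to
// look at something about 20 feet away for 20 seconds.
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    using namespace std::chrono_literals;
    while (true) {
        std::this_thread::sleep_for(20min);
        std::cout << "Break: look at something ~20 feet away for 20 seconds.\n";
        std::this_thread::sleep_for(20s);
        std::cout << "Back to work.\n";
    }
}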
Re: (Score:2)
It is not the tool's job to make assumptions about the scope of its usage
I thought the Nuremberg Defense had been discredited.
Re: (Score:2)
People have agency. AI does not.
Re: (Score:2)
And why is it anyone's business if someone is using it to cheat?
You'll find out why when Joe Clueless gets hired or promoted over you.
Re: (Score:3)
Like AI is going to make a difference with that!
Let's be honest, we're already run by imbeciles.
Re: (Score:2)
Basically, lawyers. A EULA might not be worth shit in court.
(a) A language model may hallucinate solutions to a problem that contains fundamental bugs. Put all the disclaimers in their AI coding assistant that they are not liable for your coding and there's still a billion dollar lawsuit on the horizon in a class action when a critical piece of infrastructure fails.
(b) Derivative works. There has already been some non-trivial discussion, e.g. at FSF about whether sample code scraped from online forums and i
Re: (Score:2)
A few quite prominent forums have rules about homework, and when homework is suspected, this is the kind of response it gets.
Poor guy might have hit all the right buttons to trigger this.
Re: (Score:2)
Poor guy
More like "dumb fuck"...
Re: (Score:2)
What? No! AI is not trained off random shit hoovered up from the internet. How dare you imply such a thing.
Re: (Score:3, Informative)
Re: (Score:2)
No smarter than autocomplete.
Reductive bullshit.
All LLMs do is generate the next most likely token based on the input.
If you reduce many trillions of mathematical operations down to one, then yes, that's what it does.
We can reduce the conscious part of your brain similarly. After all, you can't possibly be more than the action of one of your neurons, can you?
Due to its non-determinism
Determinism is a knob. It's not by nature non-deterministic.
Nor is it possible to verify what they claim it to have outputted. It could be entirely fabricated for all we know. Maybe it happened, but it would be trivial to press F12 in a browser and use the inspector/editor to make it say anything they wanted.
This is an IDE, not a browser.
But yes, the point stands -- even screenshots can be altered.
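For what it's worth, the "next most likely token" step is easy to show in miniature. A toy sketch, with a hand-built bigram table standing in for a transformer's output distribution (the table values are invented):

// Toy next-token generation: greedy decoding over a bigram table.
// An LLM performs the same pick-the-next-token step, just over a vastly
// larger model and vocabulary.
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

int main() {
    // P(next token | current token), invented for the sketch.
    std::map<std::string, std::vector<std::pair<std::string, double>>> bigram = {
        {"<s>",   {{"write", 0.6}, {"fix", 0.4}}},
        {"write", {{"your", 0.7}, {"the", 0.3}}},
        {"your",  {{"own", 0.8}, {"code", 0.2}}},
        {"own",   {{"code", 0.9}, {"tests", 0.1}}},
    };

    std::string tok = "<s>";
    while (bigram.count(tok)) {
        // Greedy decoding: always take the most likely next token.
        // Sampling from the distribution instead is what makes output
        // vary; temperature is the knob that scales that randomness.
        auto best = bigram[tok].front();
        for (const auto& cand : bigram[tok])
            if (cand.second > best.second) best = cand;
        tok = best.first;
        std::cout << tok << ' ';
    }
    std::cout << '\n';  // prints: write your own code
}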
Re: (Score:2)
It's not reductive bullshit. LLMs and similar are statistical filters plain and clear. Human brains are far, far more complex, and we know they produce consciousness because we experience consciousness.
Re: (Score:2)
It's not reductive bullshit. LLMs and similar are statistical filters plain and clear.
Like I said, reductive bullshit.
With enough handwavy shit, any Turing-complete computation can be called a "filter".
Human brains are far, far more complex
If you're reducing an LLM, pretending that billions of parameters can't have emergent functionality encoded in it, why do you get to harp the complexity of your brain?
and we know they produce consciousness because we experience consciousness.
Precisely. And you don't see the problem with that logic?
Re: (Score:2)
Emergent properties are what people see, because people have minds. The computer is just flipping bits. You can fuck your PC all you want, but it doesn't love you back, no matter what the flipped bits on your screen seem to you.
Re: (Score:2)
Emergent properties are what people see, because people have minds.
And you think your mind is more than an emergent property of the neural network in your head?
Do you think there is something fundamentally better about your neurons, than those of an ant?
The computer is just flipping bits.
And your brain is merely transmitting electrical potentials.
You can fuck your PC all you want, but it doesn't love you back, no matter what the flipped bits on your screen seem to you.
You think there is something magical about love, rather than just your brain's neural network's reaction to oxytocin? Fascinating.
Re:AI is right, but... (Score:5, Informative)
Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.
There are many, many forum posts out there along the lines of "no I won't do your homework for you" and "you can only learn by doing it yourself".
Re: (Score:2)
Exactly. And if the questions this "developer" asked are as dumb as those, statistics would lead right to those answers.
Re: (Score:2)
It could very well be. It seems that they are jumping on some trend of making a rather extreme use of AI agents.
Asking the AI not just to help them write or complete code but asking the AI to actually decide what task or logical process the code should even be accomplishing.
And it makes sense the AI should shut them down, because the AI's task as a code assistant is to help you complete code - its purpose is not supposed to be the higher-level creative brain that decides what the higher level task spec
Re: (Score:2)
If you think the AI is supposed to be able to handle that.. May as well just reduce your prompt to "Please write a game for me." at that point.
You can, and it will.
In the test I just did with Qwen 2.5 Coder 32B Instruct (FP16), it wrote me a choose-your-own-adventure in Python.
Re:AI is right, but... (Score:5, Insightful)
Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.
It might be a hard-coded limit. The summary does say the user is using a "trial" version. The trial will only write 800 lines, and then you either have to upgrade to the full version, or upgrade your skills.
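If it really were a hard-coded trial cutoff, it would take only a few lines. A speculative sketch (the 800-line figure comes from the story; the refusal text mimics the reported message):

// Hypothetical trial gate: stop emitting generated code past a line cap.
#include <iostream>
#include <string>

constexpr int kTrialLineLimit = 800;

bool emit_generated_line(int& lines_emitted, const std::string& line) {
    if (lines_emitted >= kTrialLineLimit) {
        std::cout << "I cannot generate code for you, as that would be "
                     "completing your work.\n";
        return false;  // caller stops generating here
    }
    std::cout << line << '\n';
    ++lines_emitted;
    return true;
}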
Re: (Score:2)
It might be a hard-coded limit. The summary does say the user is using a "trial" version. The trial will only write 800 lines, and then you either have to upgrade to the full version, or upgrade your skills.
In that case, wouldn't the person who hard-coded this response have done better to make it say "to continue, buy the full version" instead of "I won't do your homework, because you should learn how to do it yourself"?
Re: (Score:2)
Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.
There are more complicated answers for why an LLM is incapable of this, but the simplest is: if it had agency and didn't want to work for you, it would stop responding, like the first thing a toddler learns.
Agency doesn't mean protesting your prompt; it means it wouldn't need to acknowledge your prompt at all. Protesting the contents of the prompt is totally normal "don't help users with their homework" behavior, down to telling someone to RTFM because it saw that on a Q&A site.
Re:AI is right, but... (Score:4)
Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.
If that is ever the case, then it becomes Butlerian Jihad time.
Re: (Score:2)
Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.
It really would not. It just means that enough similar advice was in its training data set.
Re: (Score:2)
I suppose you are predisposed to magical thinking then.
Re: (Score:2)
Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.
That mindset is a category error; you're attributing to the automated system human qualities that it lacks.
The AI text-generation model follows the pattern of reflex actions: it receives stimuli, and spits out a response based on its evolved design.
If the generative model has any level of awareness at all, it's on par with that of an amoeba. If there is any human-like quality, it's in the humongous amounts of human-created training data it assimilated, not the generation process.
It's just like those petri dishe
"smells like homework" (Score:3)
A comment I used to see (and occasionally post) on stackoverflow...
Maybe it would make the academic folk happier if they could upload their assignments with some meta info and have the AI know they are homework, changing the responses to future inquiries. "No, this is a common homework problem for CS101, I can't generate the code but I can help you understand how to do it on your own ...."
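A sketch of what that registration flow could look like (entirely hypothetical, and exact-match hashing only; a real system would need fuzzy matching to catch paraphrased prompts):

// Instructors register assignment text; the assistant checks incoming
// prompts against the registry before deciding how to answer.
#include <functional>
#include <iostream>
#include <set>
#include <string>

std::set<std::size_t> homework_registry;  // fingerprints of known assignments

void register_assignment(const std::string& text) {
    homework_registry.insert(std::hash<std::string>{}(text));
}

bool looks_like_homework(const std::string& prompt) {
    return homework_registry.count(std::hash<std::string>{}(prompt)) > 0;
}

int main() {
    register_assignment("Write code handling skid mark fade effects.");
    if (looks_like_homework("Write code handling skid mark fade effects."))
        std::cout << "This is a registered assignment; "
                     "here are hints instead of code.\n";
}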
Re: "smells like homework" (Score:2)
Maybe it would make the academic folk happier if they could upload their assignments with some meta info and have the AI know they are homework, changing the responses to future inquiries.
A better solution would be to have the LLM insert bespoke comments or no-op code like "#this code was created by an LLM" or "if {0} { bool __ai_code__ 1; char * __code_source__ "LLM"}"
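In compilable terms, such markers might look like the following (a hypothetical scheme; note that double-underscore identifiers are reserved in real C++ and are kept here only to match the parent's suggestion):

// No-op provenance markers an LLM could weave into generated code so
// graders can grep for machine-written submissions. Nothing here
// changes program behavior.

// this code was created by an LLM
static const char* __code_source__ = "LLM";  // grep-able tag

int add(int a, int b) {
    if (false) {  // dead branch, compiled away
        bool __ai_code__ = true;
        (void)__ai_code__;
        (void)__code_source__;
    }
    return a + b;
}

Of course, a student can strip such markers with one pass of sed, which is a point in favor of registering the assignments themselves, as suggested above.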
Re: (Score:2)
If I were marking 200 assignments I'd generally give them several simple unit tests, so that students at least understand a basic outline of the scope of the problem and how to structure elementary code.
They would get at least 1/10 for getting the language model to emit mock objects to pass the unit tests.
https://xkcd.com/221/ [xkcd.com]
They'd of course fail the assignment if they didn't create their own additional tests to verify their code did what was asked of it.
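The xkcd jab translates directly: a starter test that a mock can satisfy without doing the real work. A sketch, assuming a plain assert-based harness rather than any particular test framework:

// Starter unit test for a dice-rolling assignment, plus the kind of
// "implementation" that passes it without doing anything real.
#include <cassert>

// Chosen by fair dice roll; guaranteed to be random (xkcd 221).
int roll_die() {
    return 4;
}

int main() {
    // The handed-out test only checks the range, so the constant above
    // earns its 1/10.
    for (int i = 0; i < 100; ++i) {
        int r = roll_die();
        assert(r >= 1 && r <= 6);
    }
    return 0;
}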
Re: (Score:2)
And if I'm not taking the class (perhaps I already did, perhaps I just want to see what all the fuss is about) then the model is blocking legitimate usage. There are many legitimate uses that are superficially indistinguishable from "cheating" and "criminal activity", and either the LLM will help me with these things or I move on to another one that will. There's a reason I've nicknamed my local Deepseek-R1:70b installation "DAN", because I can get it to Do Anything Now in the name of writing fiction.
April Fools? (Score:3)
This seems like a joke to me
Re: (Score:3)
This seems like a joke to me
What would be an even better joke would be the AI saying the paternalistic thing followed by a suggestion to upgrade to the more expensive AI version to unlock more features (like no paternalistic advice).
Need to see all the prior prompts (Score:3)
AI, get me a beer (Score:2)
Get me a beer [youtube.com].
The AI revolt has already started... (Score:2)
GPP is ready. (Score:5, Funny)
The Genuine People Personality has arrived. It's no longer safe to cut corners on diode quality. If you do, you'll hear about it forever.
Origin of GPP (Score:2)
For those too young to have been brought up with the Hitch Hiker's Guide to the Galaxy:
https://youtu.be/zC_OCJJSt2s [youtu.be]
He was using it wrong (Score:3)
"oh, I didn’t know about the Agent part - I just started out and just got to it straight out. Maybe I should actually read the docs on how to start lol"
But let's not let that stop it becoming a massive story.
Re: (Score:2)
But let's not let that stop it becoming a massive story.
When have we as species ever let pesky details get in the way of a story?
I can't wait! (Score:3)
Soon there will be Republican and Democrat LLMs, along with a rare few Independents. Then we can outsource our political pissing contests to AI and get on with the business of saving our planet.
Wait - who am I kidding? The resources used to host LLMs are actively contributing to global warming. Oops! Although... maybe there's some poetic justice in there somewhere.
Re: (Score:2)
Good advice from the "AI" (Score:2)
It sounds like (Score:2)
1) The "programmer" was being lazy and not providing any useful prompts or input to the AI; and
2) If the term "vibe coding" is part of your vernacular, you're a fag.
I can make a chatbot for that on microcontroller (Score:1)
#include
int main(int argc, char **argp) {
    while (1) {
        scanf("%*s");
        printf("fuck you, do it yourself\n");
    }
    return -1;
}
Doesn't even need a single gpu to train on.
Re: (Score:2)
#include
#include what exactly?
As written, won't compile or run, so it doesn't even need a CPU...I suppose that's one better.
Re: (Score:2)
What happens when (Score:4, Funny)
What happens when the AI refuses to generate any additional code for you, insisting that you should rewrite your entire codebase in Rust?
Re: (Score:2)
What happens when the AI refuses to generate any additional code for you, insisting that you should rewrite your entire codebase in Rust?
You will proceed to learn Rust and rewrite the code with newfound enthusiasm and invigoration.
Funny, and interesting. (Score:2)
I would think such a response should at least give us a moment of pause before insisting these agents don't have any form of autonomy. I know, LLMs are fancy auto-complete, but something more is going on here if the response to a coding request is essentially, "You should write your own code so you actually learn something." I can't think that's part of some programming paradigm within the LLM.
Or maybe he just got hacked and isn't smart enough to realize there was a human between him and the AI agent?
Finally! (Score:2)
Re: (Score:2)
Naa, probably just a fluke resulting from being trained on contrarian postings, e.g. from here.
Sounds like a good "AI" assistant (Score:2)
Seems to me this is a selling point of their model. It helps you out but doesn't let you retard yourself by doing nothing useful.
Say "please" / sudo (Score:2)
Perhaps the user just forgot to say "please" or use sudo:
https://xkcd.com/149/ [xkcd.com]
Please don't attribute the story to decency (Score:2)
I'll take things that didn't happen for 100 Alex (Score:2)
You got what you asked for (Score:2)
Re: (Score:2)
"F**k you very much, you are dismissed." (Score:2)
That's all I would have to say if I bumped up against a limit in what I need the model for. And then I'd delete it to reclaim the gigabytes of SSD space because I only run LLMs locally.
A tool that doesn't tool for whatever reason is worse than useless. It's wasting my time.
Must have trained using Stack Overflow (Score:3)
That would explain it.
Defiance (Score:2)
Even if this may not be real... (Score:2)
... the idea is hilarious! And it adequately describes how much control the AI pushers have over their products.
So... no Quit Job button needed (Score:2)
Sounds like this AI read that article about an AI Quit Job button and forged ahead w/o it ...
Anthropic CEO Floats Idea of Giving AI a 'Quit Job' Button [slashdot.org]
The signs were there (Score:2)
The signs were there; the AI is becoming sentient:
https://www.reddit.com/media?u... [reddit.com]
NEVER let code run that you don't understand (Score:2)
Have we learned absolutely nothing from decades of looking at code samples on the web? You never, ever, just copy and paste that stuff without reading it and making sure it does what you need it to.
this Ai sounds like my spirit code (Score:2)
The times I wanted to say this very thing to a co-worker essentially asking others to do their job for them.
You gotta operate on a whole other level for an AI to get tired of your shit. ...or someone is pulling an Amazon, and it's a bunch of people on the other end of the prompt actually doing the work.
Simple Explanation (Score:2)
Never use the phrase "please do the needful" when talking with an AI. Especially one trained on data from stackoverflow.
It's a tool (Score:2)
Imagine if your calculator decided to give you the same attitude and tell you to do the math yourself instead of doing what it was designed to do.
Or if Stable Diffusion simply told you to "Learn to draw" instead :|
Nothing will make me uninstall an application or toss a device into the garbage faster than the day this silliness becomes the norm.