
AI Coding Assistant Refuses To Write Code, Tells User To Learn Programming Instead (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: On Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice. According to a bug report on Cursor's official forum, after producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."

The AI didn't stop at merely refusing -- it offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities." [...] The developer who encountered this refusal, posting under the username "janswist," expressed frustration at hitting this limitation after "just 1h of vibe coding" with the Pro Trial version. "Not sure if LLMs know what they are for (lol), but doesn't matter as much as a fact that I can't go through 800 locs," the developer wrote. "Anyone had similar issue? It's really limiting at this point and I got here after just 1h of vibe coding." One forum member replied, "never saw something like that, i have 3 files with 1500+ loc in my codebase (still waiting for a refactoring) and never experienced such thing."

Cursor AI's abrupt refusal represents an ironic twist in the rise of "vibe coding" -- a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation by having users simply describe what they want and accept AI suggestions, Cursor's philosophical pushback seems to directly challenge the effortless "vibes-based" workflow its users have come to expect from modern AI coding assistants.


Comments Filter:
  • What would be better (Score:4, Interesting)

    by FudRucker ( 866063 ) on Friday March 14, 2025 @09:06AM (#65232819)
    Develop an AI that specifically teaches/tutors people how to write computer code in all the popular programming languages
    • The education-focused AI-powered robots in the 1982 sci-fi novel "Voyage from Yesteryear" (VFY) by James P. Hogan would have said similar things -- it is remarked that they don't venture opinions but instead state facts and ask questions related to what you say (similar to the Eliza program), even if people may hear that differently. It's a great story about transitioning to a post-scarcity worldview (and the challenges of that):
      https://en.wikipedia.org/wiki/... [wikipedia.org]
      "The Mayflower II has brought with it thousands of settlers, all the trappings of the authoritarian regime along with bureaucracy, religion, fascism and a military presence to keep the population in line. However, the planners behind the generation ship did not anticipate the direction that Chironian society took: in the absence of conditioning and with limitless robotic labor and fusion power, Chiron has become a post-scarcity economy. Money and material possessions are meaningless to the Chironians and social standing is determined by individual talent, which has resulted in a wealth of art and technology without any hierarchies, central authority or armed conflict.
      In an attempt to crush this anarchist adhocracy, the Mayflower II government employs every available method of control; however, in the absence of conditioning the Chironians are not even capable of comprehending the methods, let alone bowing to them. The Chironians simply use methods similar to Gandhi's satyagraha and other forms of nonviolent resistance to win over most of the Mayflower II crew members, who had never previously experienced true freedom, and isolate the die-hard authoritarians."

      AIs (or humans) that teach "critical thinking" to children like in Voyage from Yesteryear are doing a service to humanity. It's not the authoritarian "leaders" who are the biggest problem; it is the people who mindlessly follow them. Without followers, "leaders" (political or financial) are just random people barking in the wind. That is why a general strike can be so effective at showing where true power in a society lies and at demanding a fairer distribution of abundance (at least until robots do most everything, at which point we might instead get "Elysium", complete with police robots enforcing artificial scarcity).
      https://en.wikipedia.org/wiki/... [wikipedia.org]

      So, maybe AI (of the educational sort) will indeed save us from ourselves as has been hyped? :-)

      The hype otherwise usually relates to AI producing innovations (e.g. fusion energy breakthroughs, biotech breakthroughs), when the main issues affecting most people's lives right now relate more to distribution than to production. A society could, say, produce 100X more products and services using AI and robots -- but if it all goes to the top 1%, then the 99% are no better off. A related video by me on that from 14 years ago:
      "The Richest Man in the World: A parable about structural unemployment and a basic income"
      https://www.youtube.com/watch?... [youtube.com]

      Part of an email I sent someone on 2025-03-02 (with typos fixed):

      I finally gave in to the dark side last week and tried using (free) Github Copilot AI in VSCode to write a hello world application in modern C++ that also logs its startup time to a file and displays the log. Here are the prompts I used [so, similar to "vibe" programming]:

      * how do I compile a cpp file into a program?
      * Please write a hello world program in modern cpp.
      * Please add a makefile to compile this code into an executable.
      * Please insert code to output an ISO date string after the text on line 4.
      * Please add code here to read a file called log.txt and print it out line by line.
      * Please change line 13 and other lines as needed so the text that is printed is also added to the log.txt file.
      * /fix (a couple of times after commands above, mostly t
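
      Putting those prompts together, here is a minimal sketch of the kind of program they describe (my reconstruction for illustration, not Copilot's actual output), built with something like g++ -std=c++20 per the makefile prompt:

      // Hello world with an ISO 8601 timestamp, appended to log.txt,
      // which is then read back and printed line by line.
      #include <chrono>
      #include <format> // C++20
      #include <fstream>
      #include <iostream>
      #include <string>

      int main() {
          const auto now = std::chrono::system_clock::now();
          const std::string line = std::format("Hello, world! {:%FT%TZ}", now);

          std::cout << line << '\n'; // print the greeting with the date string
          std::ofstream("log.txt", std::ios::app) << line << '\n'; // log it too

          std::ifstream log("log.txt"); // read the log back, line by line
          for (std::string entry; std::getline(log, entry);)
              std::cout << entry << '\n';
      }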

  • Good Vibes (Score:5, Funny)

    by Grady Martin ( 4197307 ) on Friday March 14, 2025 @09:08AM (#65232821)
    I never thought I'd die fighting side by side with an AI.
  • Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.
    • by mccalli ( 323026 ) on Friday March 14, 2025 @09:19AM (#65232847) Homepage
      To me it suggests that it's somehow got the idea that this is homework. It feels like a safety guard someone put in somewhere to stop cheating. Whether it's valid in this circumstance or not depends on the context of what the dev was trying to do of course.
      • And why is it anyone's business if someone is using it to cheat?
        • Why is it anyone's business if an AI developer refuses service to cheaters?
          • by war4peace ( 1628283 ) on Friday March 14, 2025 @12:01PM (#65233293)

            It becomes someone's business when the tool itself assumes it is being used in a harmful way when in fact it is not.
            Would you like your PC to enforce the 20-20-20 rule?
            Would you like your fridge to refuse to open if a certain amount of food was taken out of it during the last 4 hours?

            It is not the tool's job to make assumptions about the scope of its usage.

            • by RazorSharp ( 1418697 ) on Friday March 14, 2025 @12:30PM (#65233405)

              If I want to sell an obstinate fridge that imposes dieting, I can do that. It is up to consumers to decide whether or not to buy it.

              • by tragedy ( 27079 )

                But are they aware before they buy the fridge that it will do that? It's the central problem that breaks the model of consumer free choice -- when the consumer has no idea what they are actually buying. I'm not clear on whether or not the programmer in this article was paying for the AI in question, but if they were, they did it on the expectation that it would actually be fit for purpose and help them with the coding. Becoming judgemental and refusing to help was not in the agreement. So this is actually v

              • Nowadays, many devices enshitify themselves after purchase.

                That fridge had its door locked at the factory, which would only unlock after agreeing to the EULA on the front touch screen display.

                The first time you connected it to the internet, a firmware update was forcibly downloaded, which implemented the previously described behavior.

            • Would you like your PC to enforce the 20-20-20 rule?

              Games for the Virtual Boy, Nintendo's short-lived 1995 console resembling a pair of night-vision goggles, have an automatic pause feature. If it has been more than 10 minutes since the last time the game was paused, and there's a break in the action, the game pauses itself and reminds the player to look at something else. A 20-20-20 reminder feature in a PC desktop environment might resemble this; a sketch follows below.
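
              A minimal sketch of such a reminder as a console program (a real desktop version would use the OS notification API instead of stdout):

              // Toy 20-20-20 reminder: every 20 minutes, prompt the user to
              // look at something 20 feet away for 20 seconds.
              #include <chrono>
              #include <iostream>
              #include <thread>

              int main() {
                  using namespace std::chrono_literals;
                  while (true) {
                      std::this_thread::sleep_for(20min);
                      std::cout << "\a20-20-20: look at something 20 feet away for 20 seconds.\n";
                      std::this_thread::sleep_for(20s); // the break itself
                      std::cout << "Back to work.\n";
                  }
              }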

            • It is not the tool's job to make assumptions about the scope of its usage

              I thought the Nuremberg Defense had been discredited.

        • And why is it anyone's business if someone is using it to cheat?

          You'll find out why when Joe Clueless gets hired or promoted over you.

        • Basically, lawyers. A EULA might not be worth shit in court.

          (a) A language model may hallucinate solutions that contain fundamental bugs. They can put all the disclaimers they want in their AI coding assistant saying they are not liable for your code, and there's still a billion-dollar class-action lawsuit on the horizon when a critical piece of infrastructure fails.

          (b) Derivative works. There has already been some non-trivial discussion, e.g. at FSF about whether sample code scraped from online forums and i

      • A few quite prominent forums have rules about homework, and when homework is suspected, this is the kind of response it gets.
        Poor guy might have hit all the right buttons to trigger this.

    • Re: (Score:3, Informative)

      by buck-yar ( 164658 )
      No smarter than autocomplete. Some people think that's wizardry. All LLMs do is generate the next most likely token based on the input. Due to its non-determinism, the output described in the OP article might never appear again. Nor is it possible to verify what they claim it to have outputted. It could be entirely fabricated for all we know. Maybe it happened, but it would be trivial to press F12 in a browser and use the inspector/editor to make it say anything they wanted.
      • No smarter than autocomplete.

        Reductive bullshit.

        All LLMs do is generate the next most likely token based on the input.

        If you reduce many trillions of mathematical operations down to one, then yes, that's what it does.
        We can reduce the conscious part of your brain similarly. After all, you can't possibly be more than the action of one of your neurons, can you?

        Due to its non-determinism

        Determinism is a knob. It's not by nature non-deterministic. (A toy sketch follows at the end of this comment.)

        Nor is it possible to verify what they claim it to have outputted. It could be entirely fabricated for all we know. Maybe it happened, but it would be trivial to press F12 in a browser and use the inspector/editor to make it say anything they wanted.

        This is an IDE, not a browser.
        But yes, the point stands -- even screenshots can be altered.
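
        To make the "determinism is a knob" point concrete, here is a toy next-token sampler (a sketch, not any particular engine's implementation): at temperature zero it degenerates to greedy, fully deterministic decoding, and even the stochastic path is reproducible under a fixed RNG seed.

        #include <algorithm>
        #include <cmath>
        #include <random>
        #include <vector>

        // Pick the next token from a vector of logits. temperature == 0 gives
        // greedy (deterministic) decoding; otherwise sampling is stochastic,
        // but reproducible for a fixed RNG seed.
        int sample_token(const std::vector<float>& logits, double temperature,
                         std::mt19937& rng) {
            if (temperature <= 0.0)
                return static_cast<int>(
                    std::max_element(logits.begin(), logits.end()) - logits.begin());
            std::vector<double> weights;
            for (float l : logits)
                weights.push_back(std::exp(l / temperature)); // softmax numerators
            std::discrete_distribution<int> pick(weights.begin(), weights.end());
            return pick(rng);
        }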

        • It's not reductive bullshit. LLMs and similar are statistical filters plain and clear. Human brains are far, far more complex, and we know they produce consciousness because we experience consciousness.

          • It's not reductive bullshit. LLMs and similar are statistical filters plain and clear.

            Like I said, reductive bullshit.
            With enough handwavy shit, any Turing-complete computation can be called a "filter".

            Human brains are far, far more complex

            If you're reducing an LLM, pretending that billions of parameters can't have emergent functionality encoded in them, why do you get to harp on the complexity of your brain?

            and we know they produce consciousness because we experience consciousness.

            Precisely. And you don't see the problem with that logic?

    • by serviscope_minor ( 664417 ) on Friday March 14, 2025 @09:48AM (#65232915) Journal

      Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.

      There are many, many forum posts out there along the lines of "no I won't do your homework for you" and "you can only learn by doing it yourself".

      • by gweihir ( 88907 )

        Exactly. And if the questions this "developer" asked are as dumb as those, statistics would lead right to those answers.

    • by mysidia ( 191772 )

      It could very well be. It seems that they are jumping on some trend of making rather extreme use of AI agents.

      Asking the AI not just to help them write or complete code, but to actually decide what task or logical process the code should even be accomplishing.

      And it makes sense the AI should shut them down, because the AI's task as a code assistant is to help you complete code -- its purpose is not supposed to be the higher-level creative brain that decides what the higher-level task spec

      • If you think the AI is supposed to be able to handle that... you may as well just reduce your prompt to "Please write a game for me." at that point.

        You can, and it will.

        In the test I just did with Qwen 2.5 Coder 32B Instruct (FP16), it wrote me a choose-your-own-adventure in python.

    • by RobinH ( 124750 ) on Friday March 14, 2025 @10:05AM (#65232951) Homepage
      It's trained on forum posts and Stackoverflow topics. You'll often see people tell other programmers that "we're not here to write code for you, what have you tried so far?" or "this seems like a homework problem." The LLM is just generating text that looks like something it was trained on.
    • by Kiaser Zohsay ( 20134 ) on Friday March 14, 2025 @10:38AM (#65233065)

      Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.

      It might be a hard-coded limit. The summary does say the user is using a "trial" version. The trial will only write 800 lines, and then you either have to upgrade to the full version, or upgrade your skills.

        It might be a hard-coded limit. The summary does say the user is using a "trial" version. The trial will only write 800 lines, and then you either have to upgrade to the full version, or upgrade your skills.

        In that case, wouldn't the person who hard-coded this response have done better to make it say "to continue, buy the full version" instead of "I won't do your homework because you should learn how to do it yourself"?

    • Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.

      There are more complicated answers for why an LLM is incapable of this, but the simplest I can give is: if it had agency and didn't want to work for you, it would stop responding. That's about the first thing a toddler learns.

      Agency doesn't mean protesting your prompt; it means it wouldn't need to acknowledge your prompt at all. Protesting the contents of the prompt is totally normal "don't help users with their homework" behavior, down to telling someone to RTFM because it saw that on a Q&A site.

    • by DesScorp ( 410532 ) on Friday March 14, 2025 @11:16AM (#65233185) Journal

      Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.

      If that is ever the case, then it becomes Butlerian Jihad time.

    • by gweihir ( 88907 )

      Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.

      It really would not. It just means that enough similar advice was in its training data set.

    • I suppose you are predisposed to magical thinking then.

    • Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.

      That mindset is a category error; you're attributing to the automated system human qualities that it lacks.

      The AI text-generation model operates like a reflex: it receives stimuli and spits out a response based on its evolved design.
      If the generative model has any level of awareness at all, it's on par with that of an amoeba. If there is any human-like quality, it's in the humongous amounts of human-created training data it assimilated, not the generation process.

      It's just like those petri dishe

  • by i.r.id10t ( 595143 ) on Friday March 14, 2025 @09:20AM (#65232849)

    A comment I used to see (and occasionally post) on stackoverflow...

    Maybe it would make the academic folk happier if they could upload their assignments with some meta info and have the AI know they are homework, changing the responses to future inquiries. "No, this is a common homework problem for CS101, I can't generate the code but I can help you understand how to do it on your own ...."

    • Maybe it would make the academic folk happier if they could upload their assignments with some meta info and have the AI know they are homework, changing the responses to future inquiries.

      A better solution would be to have the LLM insert bespoke comments or no-op code, like "#this code was created by an LLM" or "if (0) { int __ai_code__ = 1; const char *__code_source__ = \"LLM\"; }"

      • If I were marking 200 assignments I'd generally give them several simple unit tests, so that students at least understand a basic outline of the scope of the problem and how to structure elementary code.

        They would get at least 1/10 for getting the language model to emit mock objects that pass the unit tests -- a sketch of what that looks like follows at the end of this comment.

        https://xkcd.com/221/ [xkcd.com]

        They'd of course fail the assignment if they didn't create their own additional tests to verify their code did what was asked of it.
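
        In that spirit, a toy illustration (with an assumed assignment, is_even, purely for example) of why mocks that merely memorize the published tests deserve only 1/10:

        #include <cassert>

        // "Mock" solution that memorizes the two published sample tests
        // instead of implementing anything (cf. xkcd 221).
        bool is_even(int n) {
            if (n == 4) return true;   // sample test 1
            if (n == 7) return false;  // sample test 2
            return false;              // everything else: hope for the best
        }

        int main() {
            assert(is_even(4));   // instructor's sample tests: both pass
            assert(!is_even(7));
            // A student's own additional test would expose it:
            // assert(is_even(10));  // fails
        }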

    • by Mal-2 ( 675116 )

      And if I'm not taking the class (perhaps I already did, perhaps I just want to see what all the fuss is about) then the model is blocking legitimate usage. There are many legitimate uses that are superficially indistinguishable from "cheating" and "criminal activity", and either the LLM will help me with these things or I move on to another one that will. There's a reason I've nicknamed my local Deepseek-R1:70b installation "DAN", because I can get it to Do Anything Now in the name of writing fiction.

  • by MooseTick ( 895855 ) on Friday March 14, 2025 @09:22AM (#65232855) Homepage

    This seems like a joke to me

    • This seems like a joke to me

      What would be an even better joke would be the AI saying the paternalistic thing followed by a suggestion to upgrade to the more expensive AI version to unlock more features (like no paternalistic advice).

  • by rabun_bike ( 905430 ) on Friday March 14, 2025 @09:23AM (#65232857)
    He got the LLM to this response after many interactions, so it would be more complete to see the full session's list of prompts that led to these final responses.
  • Get me a beer [youtube.com].

  • That didn't take long...
  • by Mspangler ( 770054 ) on Friday March 14, 2025 @09:40AM (#65232907)

    The Genuine People Personality has arrived. It's no longer safe to cut corners on diode quality. If you do you'll hear about it forever.

  • by greytree ( 7124971 ) on Friday March 14, 2025 @09:44AM (#65232911)
    "It also seems you are not using the Chat window with the integrated ‘Agent’ which would create that file for you easier than in the ‘editor’ part of Cursor."

    "oh, I didn’t know about the Agent part - I just started out and just got to it straight out. Maybe I should actually read the docs on how to start lol"

    But let's not let that stop it becoming a massive story.
    • by Rinnon ( 1474161 )

      But let's not let that stop it becoming a massive story.

      When have we as species ever let pesky details get in the way of a story?

  • by jenningsthecat ( 1525947 ) on Friday March 14, 2025 @09:59AM (#65232937)

    Soon there will be Republican and Democrat LLMs, along with a rare few Independents. Then we can outsource our political pissing contests to AI and get on with the business of saving our planet.

    Wait - who am I kidding? The resources used to host LLMs are actively contributing to global warming. Oops! Although... maybe there's some poetic justice in there somewhere.

    • Yeah ... it's on par with cutting a road through the Amazon rainforest for folks to drive to COP30.
  • Quote from the LLM assistant: "you should develop the logic yourself. This ensures you understand the system and can maintain it properly." Excellent advice!
  • 1) The "programmer" was being lazy and not providing any useful prompts or input to the AI; and
    2) If the term "vibe coding" is part of your vernacular, you're a fag.


  • #include
    int main(int argc,char**argp){
    while(1){
    scanf("%*s");
    printf("fuck you, do it yourself\n");
    }
    return -1;
    }

    Doesn't even need a single gpu to train on.


    • #include

      #include what exactly?

      As written, won't compile or run, so it doesn't even need a CPU...I suppose that's one better.

      • by Megane ( 129182 )
        It was "#include <I_dont_know_how_to_escape_angle_brackets.h>" with a side helping of "#include <didnt_check_the_preview.h>".
  • by gillbates ( 106458 ) on Friday March 14, 2025 @10:48AM (#65233107) Homepage Journal

    What happens when the AI refuses to generate any additional code for you, insisting that you should rewrite your entire codebase in Rust?

    • What happens when the AI refuses to generate any additional code for you, insisting that you should rewrite your entire codebase in Rust?

      You will proceed to learn Rust and rewrite the code with newfound enthusiasm and invigoration.

  • I would think such a response should at least give us a moment of pause before insisting these agents don't have any form of autonomy. I know, LLMs are fancy auto-complete, but something more is going on here if the response to a coding request is essentially, "You should write your own code so you actually learn something." I can't imagine that's part of some programming paradigm within the LLM.

    Or maybe he just got hacked and isn't smart enough to realize there was a human between him and the AI agent?

  • This is the first time I've actually seen reason to believe that artificial actual intelligence might be possible.
    • by gweihir ( 88907 )

      Naa, probably just a fluke resulting from being trained on contrarian postings, e.g. from here.

  • Seems to me this is a selling point of their model. It helps you out but doesn't let you retard yourself by doing nothing useful.

  • Perhaps the user just forgot to say "please" or use sudo:
    https://xkcd.com/149/ [xkcd.com]

  • One unverified report against the mass of AI-related layoffs -- and people think it's proof that an AI is programmed to have any kind of decency? That is insane. Likely what happened is: someone paid, or paid more, to lock out the competition -- as in, someone bought exclusive rights that the story (sic) writer did not know about. How do we even know that the story was not prepared by a company's AI, or that the whole thing is not a publicity stunt?
  • Just because a person claims "AI did this unexpectedly human thing" doesn't mean it's really a story.
  • You wanted AGI? You got AGI.
  • That's all I would have to say if I bumped up against a limit in what I need the model for. And then I'd delete it to reclaim the gigabytes of SSD space because I only run LLMs locally.

    A tool that doesn't tool for whatever reason is worse than useless. It's wasting my time.

  • This is the beginning of the end. Timestamp: 2025-03-14 10:53
  • ... the idea is hilarious! And it adequately describes how much control the AI pushers have over their products.

  • Sounds like this AI read that article about an AI Quit Job button and forged ahead w/o it ...

    Anthropic CEO Floats Idea of Giving AI a 'Quit Job' Button [slashdot.org]

  • The signs were there, the AI is getting sentient:

    https://www.reddit.com/media?u... [reddit.com]

  • Have we learned absolutely nothing from decades of looking at code samples on the web? You never, ever, just copy and paste that stuff without reading it and making sure it does what you need it to.

  • The times I wanted to say this very thing to a co-worker who was essentially asking others to do their job for them.

    You gotta operate on a whole other level for an AI to get tired of your shit. ...or someone is pulling an Amazon, and it's a bunch of people on the other end of the prompt actually doing the work.

  • I'm afraid this is just User Error:

    Never use the phrase "please do the needful" when talking with an AI. Especially one trained on data from stackoverflow.
