AI Programming

GitHub Users Angry at the Prospect of AI-Written Issues From Copilot (github.com) 46

Earlier this month the "Create New Issue" page on GitHub got a new option. "Save time by creating issues with Copilot" (next to a link labeled "Get started.") Though the option later disappeared, they'd seemed very committed to the feature. "With Copilot, creating issues...is now faster and easier," GitHub's blog announced May 19. (And "all without sacrificing quality.")

Describe the issue you want and watch as Copilot fills in your issue form... Skip lengthy descriptions — just upload an image with a few words of context.... We hope these changes transform issue creation from a chore into a breeze.
But in the GitHub Community discussion, these announcements prompted a request. "Allow us to block Copilot-generated issues (and Pull Requests) from our own repositories." This says to me that GitHub will soon start allowing GitHub users to submit issues which they did not write themselves and were machine-generated. I would consider these issues/PRs to be both a waste of my time and a violation of my projects' code of conduct. Filtering out AI-generated issues/PRs will become an additional burden for me as a maintainer, wasting not only my time, but also the time of the issue submitters (who generated "AI" content I will not respond to), as well as the time of your server (which had to prepare a response I will close without response).

As I am not the only person on this website with "AI"-hostile beliefs, the most straightforward way to avoid wasting a lot of effort by literally everyone is if Github allowed accounts/repositories to have a checkbox or something blocking use of built-in Copilot tools on designated repos/all repos on the account.

1,239 GitHub users upvoted the comment — and 125 comments followed.
  • "I have now started migrating repos off of github..."
  • "Disabling AI generated issues on a repository should not only be an option, it should be the default."
  • "I do not want any AI in my life, especially in my code."
  • "I am not against AI necessarily but giving it write-access to most of the world's mission-critical code-bases including building-blocks of the entire web... is an extremely tone-deaf move at this early-stage of AI. "

One user complained there was no "visible indication" of the fact that an issue was AI-generated "in either the UI or API." Someone suggested a Copilot-blocking Captcha test to prevent AI-generated slop. Another commenter even suggested naming it "Sloptcha".

And after more than 10 days, someone noticed the "Create New Issue" page seemed to no longer have the option to "Save time by creating issues with Copilot."

Thanks to long-time Slashdot reader jddj for sharing the news.



Comments Filter:
  • I have not looked into this myself yet, but in order to write anything, GPT needs a direction to go in... so I assume the way it would work is you'd write a summary of the issue and ChatGPT would fill out the details?

    Maybe an indicator that the issue was written by ChatGPT along with a link to the prompt that generated the summary. Then you'd have original intent of the issue.

    I can kind of see the point where people might want to block it altogether but it does come off as a bit luddite. But I can see

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Re:Wouldn't it be at least human-directed?

      One would hope. However, lawyers have been filing false legal documents with the courts, citing made-up court cases, so I'm guessing not even lawyers check what's spewed up by AI.

      • by Anonymous Coward

        People do not understand how bad AI is. It's at best fit for factchecking a slashdot post, or asking random questions like when the National League adopted the designated hitter.

        If you have expertise in an area (or in situations like these idiot lawyers relying on ChatGPT, if you're EXPECTED to have expertise in an area), you cannot rely on AI because the issues will be far too subtle for it to understand.

        If you're at a top-tier professional firm, you will be told nearly at the beginning of your time there

        • That's only mostly true. An LLM on its own is like a decently intelligent guy at Starbucks: able to talk, but you have no way of knowing which of the things he says are true.

          Now, if you use an LLM with a huge context window and drop a technical manual into that context, it's like you handed that guy at Starbucks the book and asked him to look stuff up in it. You can have a lot more confidence in his answers as they pertain to that book....but he's still going to be imperfect for anything that isn't in
          • by dfghjk ( 711126 )

            "Able to talk but you have no way of knowing which of the things he says are true."
            No way of knowing? I've never experienced that. I very often have ways of knowing, and you do too.

            "Now, if you use an LLM with a huge context window and drop a technical manual into that context, it's like you handed that guy at Starbucks the book and asked him to look stuff up in it."
            You know that an LLM is just a computer application, right? Do you think you can just speculate features in Word and they come into existenc

            • I'm not fabricating anything. I have described to a first approximation how today's LLMs function. The key, in case you didn't get it, is that facts don't come from the model. They come from the context. In the metaphor, the facts are from the book, not the guy. He's just someone who knows how to read.
        • "asking random questions like when the National League adopted the designated hitter"

          But only if you don't care if the answer is wrong. I constantly notice factual errors in AI responses and I can't even imagine how many I'm not noticing.

          Honestly, I think it's much better at summarizing the mainstream sanitized zeitgeist. Like if you just want to get the gist of how the average person thinks about some topic, the AI can give you that.

          But facts? Complete bullshit half the time. Worthless except for entertain

    • A screenshot and a few words constitutes an issue? Sure, I do understand that there is something as too many useless words which takes too much time of the developers to ingest. But with edge-cases, which I tend to find, I sometimes need more than a few words to explain all the steps.

      And I have seen very undescriptive/unclear issue reports that take as much or more time of developers to investigate than a "wordy" one. Which tend to be quickly ignored as well by developers.

      So my issue reports remain "wordy".

      • I can see that if AI is used to find potential dupes, then it's good.
        But if it's used to generate tickets from insufficient data, it's going to generate too much noise.

        The other side of the problem with problem reports is that users who write them well describe the observable behavior, but then the developers mark it as a dupe of some internal fault that no end user could figure out without access to the code.

        • by dfghjk ( 711126 )

          Just have an LLM process the tickets too, you know that's coming.

          Everything that's done either removes humans writing code, testing code or maintaining code. The end result will be that humans can't use the code. It is very common now to experience software with features that do not work and cannot possibly work.

          I was recently required to switch to a third party app that maintains a profile of me for uninteresting reasons. I occasionally have to update the profile using a web form, as is common. But to

      • I used to have a coworker who would send me screen shots of his terminal window to ask questions instead of copy/pasting text.

        I wanted to punch him in his stupid face. It's like the web developer equivalent of attaching a screenshot to a Word document.

        If the AI could at least transcribe a terminal window, that could have some value.

        • by pjt33 ( 739471 )

          Screenshots? Count yourself lucky. The best I ever get attached to bug reports is a photo of the screen that the user took with their phone.

    • by dfghjk ( 711126 )

      "I can kind of see the point where people might want to block it altogether but it does come off as a bit luddite."

      Based on what? The facts you literally just made up?

      "But I can see a very real danger of number of issues rising dramatically if they are easier to generate."

      Can you see AI doing what you've done for your entire life, just make up lies to suit your arguments?

  • by will4 ( 7250692 ) on Saturday May 31, 2025 @09:58PM (#65419779)

    If there are 10,000 fully automated AI-generated pull requests sent to GitHub repository owners and enough of those pull requests are accepted, there will be a commercial marketing blitz claiming that AI can find and 'fix' X percent of your code issues.

    Ultimately, there will be an 'AI linter as a feature' used to block pull requests when they don't 'fix' something that AI flags as 'not good enough'.

    • The accepted or rejected AI generated code change pull requests will go into a bucket of AI training data to help make 'better' AI based code suggestions next time.

      Free training data for AI models....

      Now consider chapter books.....

      1. AI writes the first chapter and the next 6 chapters
      2. Measure which books get readers to engage longer by reading more chapters in a single reading session
      3. Rank up those books
      4. Rank down other books which did not have repeat readers or readers reading multiple chapters in a d

      • There's a flaw with your idea. You're assuming that readers will read just any slop from any source forever. Try it sometime. Ask your mom to read 3 chapters of an AI generated quantum theory of everything text book mashup with motorcycle repair instructions "written by John P. Sirgig.". See if she serves you dinner after that.

        Human customers can't be used for AI test-driven slop generation for long without complaints, or worse.

    • AIs arguing with each other on the internet with no positive outcome is a win only for the AI company. Same with emails. If you send me 20 pages of AI slop generated from bullet points and I use AI to reduce it back to bullet points, then nothing of value was gained, except we both paid OpenAI for burning some coal, and that's all.
    • by dfghjk ( 711126 )

      What percent of that X percent of your code issues were caused by AI in the first place?

      "Ultimately, there will be a 'AI linter as a feature' used to block code pull requests when they don't 'fix' something that AI flags as 'not good enough'."

      The world's worst programmer is now the final authority on code reviews. That sounds exactly like what Silicon Valley geniuses have in mind.

      Yep, we definitely need AI inserted into the decision-making process everywhere we can imagine.

  • This is really just the latest warning sign. First and foremost, leave, because Microsoft *never* has your best interests in mind. Go to GitLab, or host your own. Or pick any of a zillion other services out there.
  • I will start thinking about how to auto-close Copilot-created issues once they start showing up in my projects. I probably have time, because my main project is tiny and most of our activity is outside of GitHub, with us just mirroring our tagged branches to GH/GL.
    I'm hoping it can just be a Perl script. But I am prepared to train up Llama to do the dirty work of fighting AI with AI.
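    The auto-close idea above could be sketched roughly as below, using GitHub's real REST endpoint for updating issues (`PATCH /repos/{owner}/{repo}/issues/{number}`). The bot login names are guesses, not something GitHub documents for this feature; check what actually appears as the author in your repo's issue feed before relying on them:

```python
"""Hypothetical sketch: auto-close issues opened by Copilot-style bot
accounts via the GitHub REST API. The SUSPECT_LOGINS set is an
assumption about how such issues might be attributed."""
import json
import urllib.request

SUSPECT_LOGINS = {"copilot", "github-copilot[bot]"}  # assumed bot names


def is_bot_issue(issue: dict) -> bool:
    """Return True if the issue's author login looks like a Copilot bot."""
    login = issue.get("user", {}).get("login", "").lower()
    return login in SUSPECT_LOGINS


def close_issue(owner: str, repo: str, number: int, token: str):
    """Close the issue as 'not planned' (real GitHub REST endpoint)."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/issues/{number}",
        data=json.dumps({"state": "closed",
                         "state_reason": "not_planned"}).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.github+json"},
        method="PATCH",
    )
    return urllib.request.urlopen(req)
```

    A cron job could page through `GET /repos/{owner}/{repo}/issues`, run each item through `is_bot_issue`, and call `close_issue` on matches.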

    • by rta ( 559125 )

      Is this an ideological stance because they're AI-generated (and e.g. taking jobs, or were trained on stolen IP), or are you concerned the issues will be crap?

      My _guess_ is that they'll be fairly bad in many cases. Especially that they'll have a low signal-to-noise ratio... like corporate marketing speak, and probably a good number of errors.

      That said... maybe they'll be ok.

      • "maybe they'll be ok"

        It's already happening to some projects and is most definitely not ok. Just constant dupes and meaningless drivel.

      • The quality is low on all AI stuff. It's not worth my time interacting with it.

        Maybe in 20 years it will be ready for prime time. For now, I'm going to hop off the hype train and drive myself to where I need to go.

  • As a developer, I feel that some folks don't want to fix their bugs and don't want to deal with issues. Anything that makes it easier for users to report bugs angers these people.
    • We don't mind bug reports. In fact, we do want bug reports. But we want reports that are valuable, coherent and that actually tell us what's wrong so we can fix it instead of being sent on an infertile goode chase of an imaginary issue.

      • goose chase*

        (apologies, only having my first coffee now and my glasses are too far away from the sofa)

      • > But we want reports that are valuable, coherent and that actually tell us what's wrong so we can fix it

        You don't get that now 98% of the time. It's going to be difficult for the AI reports to be WORSE than almost all of the reports I've had to read through on the projects I follow.

        It's just plain fact that the vast majority of bug reports suck ass, even from the users that bother to report shit back instead of just saying "it dun werk, this thing sucks" and use something else. If AI can help guide / ask

    • I wish more people reported bugs. They're more likely to just immediately abandon the tool and go to the next one.

  • by msauve ( 701917 ) on Sunday June 01, 2025 @03:41AM (#65420057)
    > "With Copilot, creating issues...is now faster and easier,"

    I believe that, and it's a good reason to avoid Copilot.
  • by cygnusvis ( 6168614 ) on Sunday June 01, 2025 @03:47AM (#65420067)
    Don't send or post any AI to me on any platform. I can run the AI myself when I feel it's needed.
  • by _merlin ( 160982 ) on Sunday June 01, 2025 @04:23AM (#65420091) Homepage Journal

    I've already had to summarily close AI-generated pull requests attempting to "correct grammar" but actually changing the meaning. People spam them against dozens of projects attempting to get their contribution count up. AI-generated issues will be just as annoying even if GitHub doesn't provide built-in tools to facilitate opening them.

  • I really don't get it. Isn't AI supposed to save everybody time? Someone has gone to the trouble of describing their issue to AI in sufficient detail that the AI can create an issue for a human to understand it and do the work, so why doesn't AI just create the pull request?

    Even better, if the project maintainer doesn't like it, the requester can fork the project and have their feature/bugfix/whatever, and the world is a better place.

    When I create a bug (even a wishlist bug) I often/usually have a patch too

    • > so why doesn't AI just create the pull request

      It does, but that's the problem.

      Have a listen to this week's Security Now, they do a segment on exactly this with a Microsoft engineer 'arguing' with a chatbot on a .NET issue.

      The chatbot identifies the filed problem as improper memory allocation in a regex parser (backtracking), but then instead of fixing the parser, it patches the parser to fail silently but without triggering the memory violation.

      The engineer suggests that it should be fixed instead and

    • by butlerm ( 3112 )

      As a general rule, AI pull requests, and AI output generally, are garbage. There is no there there - contemporary AI just stochastically regurgitates whatever is fed to it, like a particularly bad case of Garbage In, Garbage Out. Worse than that, modern AIs tend to be delusional and are prone to making things up - like legal citations to cases that do not exist. That means you not only need to be smarter than the AI, you need to review anything the AI generates with a much greater degree of care than something

      • It already changed, about 3 months ago, when reasoning models came out. I have not seen Phi4-Reasoning-Plus hallucinate yet, and it is a wonderful code reviewer. It's a 14B model too! So go grab Ollama and give it a go. Go see for yourself.

  • I've been leading a project to perform AI code reviews on pull requests to ensure conformance to a Jira story along with a review. Models like Llama, QwQ, etc. tend to rubber-stamp code reviews, which is not so helpful, but models like Phi4-reasoning-plus (from Microsoft) are actually pretty stellar at code reviews, finding real bugs and rarely hallucinating. My team started off just like most of the Slashdot crowd, really unhappy with AI being thrown at them... But then they saw that reviews they got were
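  A pipeline like the one described above could be sketched as a prompt builder plus a call to a locally hosted model. The `/api/generate` route is Ollama's real endpoint, but the model tag, host, and prompt wording here are assumptions for illustration:

```python
"""Hypothetical sketch of an AI code-review step: pair a Jira story with a
PR diff and ask a local Ollama-hosted model to review the change."""
import json
import urllib.request


def build_review_prompt(story: str, diff: str) -> str:
    """Assemble a review prompt pairing the Jira story with the PR diff."""
    return (
        "You are a strict code reviewer. Check that the following diff "
        "actually implements the story, and list any bugs you find.\n\n"
        f"Story:\n{story}\n\nDiff:\n{diff}\n"
    )


def review_with_ollama(story: str, diff: str,
                       model: str = "phi4-reasoning:plus",  # assumed tag
                       host: str = "http://localhost:11434") -> str:
    """Send the prompt to a local Ollama server and return its reply."""
    payload = json.dumps({
        "model": model,
        "prompt": build_review_prompt(story, diff),
        "stream": False,  # ask for one complete JSON response
    }).encode()
    req = urllib.request.Request(f"{host}/api/generate", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

  Wiring this into CI would just mean fetching the diff and story text, calling `review_with_ollama`, and posting the reply as a PR comment; whether the model rubber-stamps or genuinely reviews depends entirely on the model chosen.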

  • by chmod a+x mojo ( 965286 ) on Sunday June 01, 2025 @10:40AM (#65420461)

    Jesus fucking christ...

    Devs whine that users don't fill out bug reports with enough info. Some are downright hostile to the people ATTEMPTING to fill out reports properly, even those who ask how to get the info that would help the devs out.

    Devs ALSO whine "we are anti-AI... for uh reasons" like the Thailand meme guy, even though it's designed to, oh I don't know, get the fucking information the fuckers need for a good bug report. Or at least present the information better than bubba's basic "It dun werk" filled out 50 times in the mandatory reporting template.

    I mean god damn, this is one of the things that AI is a perfect fit for doing - taking the average moron, and making them intelligible in a useful way to people much smarter than them.

  • I suspect that the misunderstanding is an old one; but 'AI' tools really bring into stark relief how poor people are at distinguishing between genuine friction (inefficiency because parts of the system are rubbing against one another in undesired ways) and 'friction' the noble phenomenon, which improves the signal-to-noise ratio by making noise just inconvenient enough that you usually do it only after you've already thought about it for a minute on your own.

    It's the difference between being able to tap a colle
