Programming AI

Are AI Coding Assistants Really Saving Developers Time? (cio.com) 142

Uplevel provides insights from coding and collaboration data, according to a recent report from CIO magazine — and recently they measured "the time to merge code into a repository [and] the number of pull requests merged" for about 800 developers over a three-month period (comparing the statistics to the previous three months).

Their study "found no significant improvements for developers" using Microsoft's AI-powered coding assistant tool Copilot, according to the article (shared by Slashdot reader snydeq): Use of GitHub Copilot also introduced 41% more bugs, according to the study...

In addition to measuring productivity, the Uplevel study looked at factors in developer burnout, and it found that GitHub Copilot hasn't helped there, either. The amount of working time spent outside of standard hours decreased for both the control group and the test group using the coding tool, but it decreased more when the developers weren't using Copilot.

An Uplevel product manager/data analyst acknowledged to the magazine that there may be other ways to measure developer productivity — but they still consider their metrics solid. "We heard that people are ending up being more reviewers for this code than in the past... You just have to keep a close eye on what is being generated; does it do the thing that you're expecting it to do?"

The article also quotes the CEO of software development firm Gehtsoft, who says they didn't see major productivity gains from LLM-based coding assistants — but did see them introducing errors into code. With different prompts generating different code sections, "It becomes increasingly more challenging to understand and debug the AI-generated code, and troubleshooting becomes so resource-intensive that it is easier to rewrite the code from scratch than fix it."

On the other hand, cloud services provider Innovative Solutions saw significant productivity gains from coding assistants like Claude Dev and GitHub Copilot. And Slashdot reader destined2fail1990 says that while large/complex code bases may not see big gains, "I have seen a notable increase in productivity from using Cursor, the AI-powered IDE." Yes, you have to review all the code that it generates; why wouldn't you? But oftentimes it just works. It removes the tedious tasks like querying databases, writing model code, writing forms and processing forms, and a lot more. Some forms can have hundreds of fields, and processing those fields along with doing checks for valid input is time-consuming, but it can be automated effectively using AI.
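
To make the tedium concrete, here is a minimal Java sketch of the kind of per-field validation boilerplate being described. The FormValidator class and the field names are hypothetical, invented purely for illustration; multiply this by hundreds of fields and it is clear both why drafting it is worth automating and why the output still needs review:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    // Hypothetical example: hand-rolled validation for a few form fields.
    public class FormValidator {
        public static List<String> validate(Map<String, String> form) {
            List<String> errors = new ArrayList<>();

            String email = form.getOrDefault("email", "");
            if (!email.matches("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$")) {
                errors.add("email: not a valid address");
            }

            String age = form.getOrDefault("age", "");
            try {
                int n = Integer.parseInt(age);
                if (n < 0 || n > 150) errors.add("age: out of range");
            } catch (NumberFormatException e) {
                errors.add("age: not a number");
            }

            String zip = form.getOrDefault("zip", "");
            if (!zip.matches("\\d{5}")) {
                errors.add("zip: expected five digits");
            }

            return errors;
        }

        public static void main(String[] args) {
            // Prints the age and zip errors for this sample submission.
            System.out.println(validate(Map.of("email", "user@example.com", "age", "abc", "zip", "123")));
        }
    }
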
This prompted an interesting discussion on the original story submission. Slashdot reader bleedingobvious responded:

Cursor/Claude are great BUT the code produced is almost never great quality. Even given these tools, the junior/intern teams still cannot outpace the senior devs. Great for learning, maybe, but the productivity angle not quite there.... yet.

It's damned close, though. Give it 3-6 months.

And Slashdot reader abEeyore posted:

I suspect that the results are quite a bit more nuanced than that. I expect that it is, even outside of the mentioned code review, a shift in where and how the time is spent, and not necessarily in how much time is spent.
Agree? Disagree? Share your own experiences in the comments.

And are developers really saving time with AI coding assistants?

Comments Filter:
  • It's a tool. (Score:5, Interesting)

    by MrNaz ( 730548 ) on Sunday September 29, 2024 @06:36AM (#64825793) Homepage

    If you try to build a ship with nothing but a welding torch, it won't go well.

    Copilot is excellent. But if you try to make it write ALL your code for you, that code will suck.

    • Re: (Score:2, Interesting)

      by echo123 ( 1266692 )

      If you try to build a ship with nothing but a welding torch, it won't go well.

      Copilot is excellent. But if you try to make it write ALL your code for you, that code will suck.

      I am an open-source CMS developer. There's a vast amount of relevant open-source code that the LLM Borg has trained itself on, and the feedback is pretty good. There's no irrelevant open-source code that's been published to the web, Github, or GitLab, it all is pretty much vetted and is valid. Coding by prompt is akin to critically reviewing another developer's code IMHO.

      Technology changes and the market expects developers to keep up in order to compete.

      • There's no irrelevant open-source code that's been published to the web, Github, or GitLab, it all is pretty much vetted and is valid.

        ...in the Framework I use -- I meant to write.

        • by dgatwood ( 11270 )

          There's no irrelevant open-source code that's been published to the web, Github, or GitLab, it all is pretty much vetted and is valid.

          ...in the Framework I use -- I meant to write.

          Yeah, for a minute there, I thought your post was dripping with more sarcasm than I ever thought possible.

        • Also, just like discrete Git commits, coding by prompt tends to be very much issue-oriented. By issue-oriented I mean: ideally each critical Git commit closes a documented issue, which it links to directly for reference.

          Coding by prompt means small, focused changes to the current codebase. Not: 'this is the description of the castle of my dreams, give it to me now'.

          And you still gotta know how to code and what is acceptable and what isn't. Still, it saves a lot of time and it's a useful new tool in the frame

      • With a vast amount of code, the result runs the risk of being a soup that contains parts of all that code, with the potential for bugs caused by that mix.

        • With a vast amount of code, the result runs the risk of being a soup that contains parts of all that code, with the potential for bugs caused by that mix.

          This has not been my experience at all. I believe the LLM is well-trained because the framework I use is so API-driven, so all the published code is well structured by default. Also the formatting rules in this community are strictly enforced and automated tools and processes exist and are well documented.

          Because everything is open-source anyway, including the modules I contribute, I've concluded there's no point fighting the Borg and I might as well compete as best I can with what resources I have at my di

        • With a vast amount of code, the result runs the risk of being a soup that contains parts of all that code, with the potential for bugs caused by that mix.

          The LLMs spit out buggy, imperfect code for sure! Almost every time initially. But it's not difficult to drive the results forward rather efficiently via prompt, not unlike reviewing code prior to accepting a Git commit from a colleague. Especially if you know what you're doing and what qualifies as 'good'.

          My point is, if the LLM is well-trained the results can be good (and I get better results with some AIs than others). AI is just a new tool to use to get the job done competitively.

    • Re:It's a tool. (Score:5, Insightful)

      by AmiMoJo ( 196126 ) on Sunday September 29, 2024 @08:25AM (#64825981) Homepage Journal

      I tried them out a couple of times and was not all that impressed with the results. Both times the code did at least work, but wasn't particularly good. StackExchange quality stuff, functional but far from ideal.

      In both cases I'd have preferred to re-write it from scratch myself. That would give me a chance to really think through the algorithm and the potential issues with every line, something I find easier when writing code than when reviewing it.

    • Re: (Score:2, Interesting)

      by q_e_t ( 5104099 )

      If you try to build a ship with nothing but a welding torch, it won't go well.

      Seemed to work OK for Liberty Ships in WW2...

      • What a great example!
        An era where ships were built quickly so that they could be fielded in quantity without much concern over longevity, reliability, or consistency.

    • Re: It's a tool. (Score:4, Interesting)

      by ahoffer0 ( 1372847 ) on Sunday September 29, 2024 @11:16AM (#64826331)

      A typical day will see me writing Java, Typescript, maybe a bash script, some docker compose files, and something random like ffmpeg command line. LLM can help me with Java only occasionally. But when it comes to things I don't do frequently like ffmpeg, or things that are a little arcane like bash, LLMs are a godsend.

      • Re: It's a tool. (Score:4, Insightful)

        by erice ( 13380 ) on Sunday September 29, 2024 @09:08PM (#64827313) Homepage

        A typical day will see me writing Java, Typescript, maybe a bash script, some docker compose files, and something random like ffmpeg command line. LLM can help me with Java only occasionally. But when it comes to things I don't do frequently like ffmpeg, or things that are a little arcane like bash, LLMs are a godsend.

        But isn't that where the danger lies? If it is code that you don't do frequently, how good are you at catching machine generated bugs?

        • I agree, and I think in a few years code written by AI will be a maintenance nightmare.
        • Indeed. I saw someone extoll the virtues of LLM generated FFMPEG and I certainly understand the desire to have someone else write its command line, but in the example of how awesome it was it just contained fragments of common parameters wholly unrelated to the query prompt. At best they'd be benign, at worst someone will have to re-encode all your videos once they figure out where the errors came from. If you don't know ffmpeg how could you possibly know they were malignant and if you do you probably don't
    • Every time I've asked Google "how do I do this in ChaiScript," its AI provided code that (A) didn't work, and (B) wasn't even ChaiScript.
  • by Moblaster ( 521614 ) on Sunday September 29, 2024 @06:51AM (#64825811)

    The AI coding assistants are very powerful, but right now in 2024, because code itself is very complex, typically with hundreds of files in an app, AI is not quite there yet with holistic code base (application-level) training and output.

    The most productive AI users right now are senior developers who can use the AI to both (1) iterate code sections insanely fast and (2) actually read the code to guide the AI in the next iterations.

    So TODAY you still have to know what you are doing to leverage AI tools for actually-better quality x speed output.

    3-6 months you won't have to know as much.

    • by VeryFluffyBunny ( 5037285 ) on Sunday September 29, 2024 @07:16AM (#64825855)
      I reckon there'll be worse problems in the longer term, e.g. AI tools may save time for developers if used appropriately, & for experienced developers, it's probably a good idea for routine work that they know inside-out. However, for inexperienced developers, who don't yet have the mastery, in-depth knowledge, & higher-level, more abstract understandings of coding, having a machine do the nitty-gritty for them may inhibit their development since they're not getting the hands-on experience & developing the working knowledge of coding features & strategies that are necessary. We may end up with a lot of coders who stay at a basic level & never progress into more competent coders. Then the older, more competent coders start retiring or moving on, & then... well, things might get a bit problematic since the remaining coders don't really understand the bigger picture.
      • by scrib ( 1277042 ) on Sunday September 29, 2024 @08:15AM (#64825963)

        The question of "mastery" is one that requires perspective.
        I'm over 50 and learned data structures and memory management but barely touched assembly in college. 10 years ago, I was told that understanding the difference between "pass by reference" and "pass by value" was rare.
        The point is that "mastery" is having the skills to be productive with the best tools of the time, and that changes. Learning how to get the best results out of AI but not understanding how its output works is just a different layer of abstraction from using console.log but not having any idea how that makes different pixels appear on the screen.
        I haven't worked with AI coders yet, but I have no doubt it is another technology I'll work with before my career is done; another thing I'll have to "master."

        • ...I was told that understanding the difference between "pass by reference" and "pass by value" was rare.

          So that's the reason right there why people have so many problems with memory leaks that they are developing a language that forces you to do contortions to do anything. Seems it would be easier to just teach people the difference.

      • Which is the same problem we have today. There are plenty of people that have no interest in advancing their careers. They are happy where they are and with what they make and some literally do first level helpdesk stuff until they retire.

        The thing I see AI doing right now if it has access to the codebase is the stuff junior frontend developers do. The so-called designers and artists are what is going to go, if you find out you needed a different or additional field somewhere in the middle of the project, y

    • by phantomfive ( 622387 ) on Sunday September 29, 2024 @07:37AM (#64825893) Journal

      The AI coding assistants are very powerful, but right now in 2024, because code itself is very complex, typically with hundreds of files in an app, AI is not quite there yet with holistic code base (application-level) training and output.

      This is a problem because the context window of AIs is still very small. A large codebase will overwhelm it. To address that problem, we're going to need new algorithms.
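
      (One common workaround today, sketched naively below, is to send the model only the slices of the codebase most relevant to the prompt. The keyword-overlap ranking and the rough four-characters-per-token estimate are assumptions made purely for illustration; this is not from the study or from any particular tool.)

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.stream.Stream;

        // Naive sketch: instead of pasting a whole codebase into a small context
        // window, rank files by keyword overlap with the prompt and keep the
        // highest-ranked ones that fit a rough token budget.
        public class ContextPacker {

            record Scored(Path path, String text, int score) {}

            public static List<Path> pickFiles(Path root, String prompt, int tokenBudget) throws IOException {
                List<String> promptWords = List.of(prompt.toLowerCase().split("\\W+"));

                List<Scored> candidates = new ArrayList<>();
                try (Stream<Path> paths = Files.walk(root)) {
                    for (Path file : paths.filter(f -> f.toString().endsWith(".java")).toList()) {
                        String text = Files.readString(file);
                        int score = 0;
                        for (String word : text.toLowerCase().split("\\W+")) {
                            if (promptWords.contains(word)) score++;
                        }
                        candidates.add(new Scored(file, text, score));
                    }
                }

                // Highest keyword overlap first, then greedily fill the budget.
                candidates.sort((a, b) -> Integer.compare(b.score(), a.score()));
                List<Path> chosen = new ArrayList<>();
                int used = 0;
                for (Scored c : candidates) {
                    int cost = c.text().length() / 4;   // crude ~4 chars per token
                    if (used + cost <= tokenBudget) {
                        chosen.add(c.path());
                        used += cost;
                    }
                }
                return chosen;
            }
        }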

    • by gweihir ( 88907 )

      3-6 months you won't have to know as much.

      Make that 30-60 years and it becomes a possibility.

    • 3-6 months you won't have to know as much.

      Why? What will happen in 3-6 months?

  • by Anonymous Coward on Sunday September 29, 2024 @06:55AM (#64825815)

    Companies have been trying to sell us Rapid Application Development and No-Code Solutions for decades already; this is just the latest incarnation of that.

    Yes, "AI" tools may provide benefits in some areas but it's not a silver bullet, a go-fast button, for the majority of tasks yet.

    • >Yes, "AI" tools may provide benefits in some areas but it's not a silver bullet, a go-fast button, for the majority of tasks yet.

      Hmm, it's almost like it is in the name - "assistant".

      There's a reason it's not called a "coding slave" or "under / unpaid intern" code writer.

      • 40% snake oil, 20% helpful and 40% misdirection

        AI does not help in the fundamental aspect of programming for most systems:

        Implementing business requirements into a computer system.

        AI is miles below that in code generation. It may help on things like "using C++, write a program to query Oracle table ABC, with columns W, E, Q, and print out the results to the console." It does not help in much more complicated processes where a) understanding the business is critical, b) requirements will not be detailed en

    • The big lesson here is that new technology sounds awesome... until someone tests it.
      But no one got promoted in the marketing department by checking their facts.

  • by Casandro ( 751346 ) on Sunday September 29, 2024 @07:03AM (#64825825)

    ... it kinda works for things people have done over and over again. Writing a CRUD application in PHP probably works just fine... but then again, why on earth are we doing the same thing over and over again; shouldn't the software environment deal with such trivialities? The far bigger productivity gain would be in using environments that are tailored for the job you are trying to solve. If you have an application with 20 database tables... you shouldn't have to write your CRUD code for each one of them. Then again, productivity isn't an important measure in software development.
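
    A minimal sketch of what "the environment dealing with it" could mean in practice: one generic, metadata-driven reader over JDBC instead of 20 hand-written per-table variants. This is an illustration of the idea (in Java rather than the PHP mentioned above), not code from any particular framework; the table name is assumed to come from trusted configuration.

      import java.sql.Connection;
      import java.sql.ResultSet;
      import java.sql.ResultSetMetaData;
      import java.sql.SQLException;
      import java.sql.Statement;
      import java.util.ArrayList;
      import java.util.LinkedHashMap;
      import java.util.List;
      import java.util.Map;

      // Generic read: works for any of the 20 tables without per-table code.
      public class GenericCrud {
          private final Connection conn;

          public GenericCrud(Connection conn) {
              this.conn = conn;
          }

          // 'table' must come from trusted configuration, never from user input,
          // because it is concatenated into the SQL text.
          public List<Map<String, Object>> selectAll(String table) throws SQLException {
              List<Map<String, Object>> rows = new ArrayList<>();
              try (Statement st = conn.createStatement();
                   ResultSet rs = st.executeQuery("SELECT * FROM " + table)) {
                  ResultSetMetaData md = rs.getMetaData();
                  while (rs.next()) {
                      Map<String, Object> row = new LinkedHashMap<>();
                      for (int i = 1; i <= md.getColumnCount(); i++) {
                          row.put(md.getColumnLabel(i), rs.getObject(i));
                      }
                      rows.add(row);
                  }
              }
              return rows;
          }
      }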

    Otherwise it's a bit better than autocomplete used to be in Delphi in the late 1990s.

    Where it really becomes a problem is when there are automatically generated websites drowning out important information. For example, in my line of work you need to write software as Kamailio applications. That's essentially a programming language with C-like syntax, but somewhat different semantics. More and more we are seeing websites full of utter garbage: websites which, at first glance, look as if they contain Kamailio code, but in reality it doesn't make sense or at least wouldn't parse.

  • by devslash0 ( 4203435 ) on Sunday September 29, 2024 @07:16AM (#64825849)

    Teaching people how to write crap code is not the quality we should be striving for!

    • Why not? They can then generate the code the next AI will learn from! Circular learning!

      • There is some truth to this. Asking for code in a comparative vacuum is going to render code without context, until the context evolves. This means organizational model training will eventually yield better code because of training and feedback loops into the blackbox which makes code.

        This also permits developers generating their own code-making models to have a companion for their generating efforts over a period of time, then understanding how code relates to a larger model. Isolating lib models to functi

    • by dvice ( 6309704 )

      That depends. Bad code can be better than no code, but it can also be worse than no code at all.
      If you have a problem that requires 100 man years to do manually, and you manage to write bad code that does that in 1 day, crashing randomly all the time, but still managing to do it and you can verify the work to be correct, this is a good solution.

      But if you add flashing lights to Linux kernel that only one person wants to use and this causes all Linux servers around the world to crash, it would have been bett

    • Thank you!

    • by gweihir ( 88907 ) on Sunday September 29, 2024 @12:58PM (#64826535)

      It is what the business world wants though. Engineering has to put its foot down or this crap will continue.

      Ah, well, who am I kidding. This insanity will continue until a rather large number of people gets killed by it. That was how other engineering disciplines finally got quality standards.

      • by strikethree ( 811449 ) on Monday September 30, 2024 @09:08AM (#64828241) Journal

        Engineering has to put its foot down or this crap will continue.

        Ask the engineer who was fired by Oceangate how that worked out for him. It will work the same for every other engineer. The one who owns is the one who needs to be convinced. Everyone else is completely powerless.

        Ah, well, who am I kidding. This insanity will continue until a rather large number of people gets killed by it.

        We frequently find that even that is not enough.

  • Yes (Score:5, Insightful)

    by cascadingstylesheet ( 140919 ) on Sunday September 29, 2024 @07:21AM (#64825861) Journal

    It's a tool. Used properly, it saves tons of time.

    "Rewrite this whole section of code to use OpenStreetMaps instead of Google Maps"

    Could I have done it myself? Sure. In 30 seconds? No ...

    • Re:Yes (Score:5, Insightful)

      by phantomfive ( 622387 ) on Sunday September 29, 2024 @07:41AM (#64825899) Journal

      Could I have done it myself? Sure. In 30 seconds? No ...

      If you think it only took 30 seconds, then you're one of those people introducing 41% more bugs. You need to make sure you understand any code generated by AI...

      • Could I have done it myself? Sure. In 30 seconds? No ...

        If you think it only took 30 seconds, then you're one of those people introducing 41% more bugs. You need to make sure you understand any code generated by AI...

        Gee, we could just assume I'm stupid ... or we could assume that I was talking about the initial writing part, leaving the review and QA as a given.

        In any case, the answer to the titular question is still "yes".

        • Re:Yes (Score:4, Insightful)

          by phantomfive ( 622387 ) on Sunday September 29, 2024 @08:31AM (#64825995) Journal
          Yeah, but you need to add extra time for understanding what the LLM gave you, because it could (and often does) have subtle errors.
          • by gweihir ( 88907 )

            Reading code is much harder than writing code. This person either got worse code or did not save time, maybe both. The capacity of coders for lying to themselves is legendary.

            • by leptons ( 891340 )
              About all I trust copilot to do is write console.log statements, and even then it doesn't always do what I want it to do. Sometimes it does surprisingly well at guessing what I want done next, but it still takes me as long to read through it as it would have taken me to type it out myself, so in lots of cases the time it "saves" is a wash. Overall, it does save me time writing console.log statements though, but that is barely moving the needle that measures time saved.
              • by gweihir ( 88907 )

                It can mess up console.log statements? Impressive. The stupid is strong with this one. To be fair, even getting things as simple as that is difficult when you have no insight or understanding.

                What I am wondering is how much things like copilot hinder and sabotage coders learning their skills. Obviously, bad coders without potential will be bad coders, no matter what. But what about those with potential? Will "assistants" delay them getting good at things?

            • Yeah, but sometimes looking up the proper API calls is the hard part. Then the LLM can give you all that and you can verify it. In this case, the LLM replaces a stack overflow lookup (which also needs to be verified).
              • by gweihir ( 88907 )

                Well, yes. The only thing LLMs are somewhat solid on is "better search", but only really when they include references to the sources. Looking up an API, library function or ioctl, etc. is not "coding". It is literally looking up information. This is also the only use so far that I do not see "dumbing down" users, because they still have to understand that API call, at least the competent ones do. The incompetent ones are lost anyways.

                Hence LLMs are not completely useless. But they cannot replace skill or in

                • And "better search" does not justify the current investments or the hype.

                  It might, if you think that it could replace Google and steal all their profits.

                  • by gweihir ( 88907 )

                    Well maybe, for the profits that is. Until some FOSS LLM gets good at ad filtering. That may be the 2nd thing LLMs can actually do. Although I do not see many ads with the default settings of my preferred browser (Vivaldi) anyways. They even do a good job skipping or filtering YouTube ads. Not that I care that much anymore about YouTube. On the other hand, ads seem to be mostly targeted at stupid people anyways and many of them will not have ad blocking.

                    So, yes, good point and that may be the actual reason

          • Yeah, but you need to add extra time for understanding what the LLM gave you, because it could (and often does) have subtle errors.

            Unlike our own handwritten code, which is always flawless in the first draft and never has subtle errors ...

            It's almost like both should get review and QA. Oh wait, they do.

        • Could I have done it myself? Sure. In 30 seconds? No ...

          If you think it only took 30 seconds, then you're one of those people introducing 41% more bugs. You need to make sure you understand any code generated by AI...

          Gee, we could just assume I'm stupid ... or we could assume that I was talking about the initial writing part, leaving the review and QA as a given.

          In any case, the answer to the titular question is still "yes".

          I'm totally with you. My concern is that as the models improve--which they have tremendously in just the past few weeks--we're going to be asking them to write far more code of far greater complexity and we'll not be able to vet it sufficiently before deployment.

      • Could I have done it myself? Sure. In 30 seconds? No ...

        If you think it only took 30 seconds...

        I think the point is the suggestion helped the skilled coder progress, rather efficiently.

        • If you are just copying and pasting the result (as opposed to using the output to understand the API), then you are not a skilled coder. You are a copy/paste coder. You MUST use AI to increase your skill at this point, since it's still not good enough to do it by itself.
          • If you are just copying and pasting the result (as opposed to using the output to understand the API), then you are not a skilled coder. You are a copy/paste coder. You MUST use AI to increase your skill at this point, since it's still not good enough to do it by itself.

            You've never even met me, and you think I don't understand? Ok.

            Well, you guys keep on not using new tools, that's fine. I don't know why I bother, lol. I guess I just have an instinctive devotion to reality.

            I hear people yammer on about these tools, and it's obvious that they just don't know how to use them effectively. That's not a crime, of course, but it does grate a bit to hear them confidently yammer on about it as if they know what they are talking about. So I say something. Worth a try, I guess.

    • It's a tool. Used properly, it saves tons of time.

      "Rewrite this whole section of code to use OpenStreetMaps instead of Google Maps"

      Could I have done it myself? Sure. In 30 seconds? No ...

      That in particular is an easy way to get bugs.

      Remember how the LLM works, predicting the pattern.

      That means it's great at turning out iterations on standard patterns, ie, this is a similar section of code using OpenStreetMaps in a fairly standard way.

      The problem is those things you do for your application don't really fit the standard pattern, and those bits the LLM will tend to screw up.

      Another way to understand this is a project I was working on where I was using an LLM to format some text. The problem ca

  • well, you can't complain about creativity. =/

  • by allo ( 1728082 ) on Sunday September 29, 2024 @07:35AM (#64825889)

    They do not replace the person typing the code; they replace Stack Overflow for looking up (possibly trivial) questions. You have a sidebar where you can just type ("What is the C++ idiom to do ...") and get the answer instantly. Yes, it might be able to apply it to your code, but the answer itself is the important part.

    That's why Stack Exchange is trying to lock down their exports (to the displeasure of the community): they would rather keep their users' content (which is CC-licensed under their ToS) as their capital and not let anybody train free-to-use models on it.

    • Yes, yes, yes, this is the real benefit. Easy access to an expert (mostly) in coding. AI knows every language, all the syntax and tons of examples.
      Those who use the current generation of AI and complain it makes mistakes aren't using it correctly. One day they may be virtually infallible but for now use your own noodle to qualify any results.

  • It's damned close, though. Give it 3-6 months.

    I really wonder how they are able to estimate timelines like this. What inside information do they have that we don't? 3 to 6 months? [xkcd.com]

  • It really depends - sometimes there is something relatively simple (but complex to write) that it can spit out correctly. Other times it is just creating new rabbit holes. A good developer should be able to spot if it helps or hurts pretty quickly. An inexperienced one will probably struggle more. Kind of like life before these chatbots.
    • by Hodr ( 219920 )

      But we don't learn by writing perfect code, we learn by fixing problems. If an inexperienced coder leans heavily on AI generated code, but then has to deal with fixing all of the issues generated by that AI, eventually they will no longer be an inexperienced coder.

  • I seem to get that with rabbit... https://leaflessca.wordpress.c... [wordpress.com]
  • by bradley13 ( 1118935 ) on Sunday September 29, 2024 @08:07AM (#64825953) Homepage

    As a teacher, and someone who supervises a wide variety of student projects: I do lots of random bits of coding in different languages and using different frameworks and APIs. I cannot possibly keep the details of all of them in my head. ChatGPT is great for reminding me how to do X in language Y or with framework Z. Basically, it is a single source for reference material.

    AIs are not yet very useful at actually writing code, at least not beyond a trivial level. Just as an example, I had a student last week who was writing a web service in Java. When he closed his program, the ServerSocket was not always being properly released. ChatGPT came up with all sorts of overly-complicated solutions, none of which helped. All he actually needed to do was declare the thread to be a daemon thread, but ChatGPT never suggested that. I gave him the hint, ChatGPT gave him the syntax, and the problem was solved.
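
    For anyone who hasn't run into that particular failure mode: the fix really is a one-liner. A minimal sketch of the idea (the accept loop is invented for illustration; it is not the student's code):

      import java.io.IOException;
      import java.net.ServerSocket;
      import java.net.Socket;

      public class DaemonServer {
          public static void main(String[] args) throws IOException {
              ServerSocket server = new ServerSocket(8080);

              Thread acceptLoop = new Thread(() -> {
                  try {
                      while (true) {
                          Socket client = server.accept();
                          client.close(); // real request handling omitted
                      }
                  } catch (IOException ignored) {
                      // socket closed; the loop simply ends
                  }
              });

              // The one-line fix: a daemon thread does not keep the JVM alive,
              // so when the rest of the program finishes, the process can exit
              // and the listening socket is released.
              acceptLoop.setDaemon(true);
              acceptLoop.start();

              // ... rest of the application runs here ...
          }
      }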

  • by Eunomion ( 8640039 ) on Sunday September 29, 2024 @08:17AM (#64825967)
    Make tools for tasks, not the other way around. Can't stand how much IT has turned the basic concept of technology on its ass.
    • by gweihir ( 88907 )

      Indeed. Same here. Too many people have this baseless deep belief that "doing it with IT" makes things universally better. That is very much not true.

      • It's cargo cult. A tool is found to be useful for something, therefore "usefulness" somehow becomes part of the definition of the thing rather than a strictly contingent statement. How much money have we lit on fire chasing this kind of bullshit since the IT revolution?
        • by gweihir ( 88907 )

          It is indeed cargo cult.

          How much money have we lit on fire chasing this kind of bullshit since the IT revolution?

          A _lot_. And there are other negative effects in similar areas. For example, I recently read an estimation that Microsoft has delayed technological progress in the IT space by 10-20 years. That is very credible and the cost of that will be astronomical. Obviously, we have other villains in this space as well, like Google, Amazon, Oracle and many others. The desire to get rich and/or accumulate power does incredible damage and in something as new as the IT field, there are little lega

  • by Tony Isaac ( 1301187 ) on Sunday September 29, 2024 @08:23AM (#64825979) Homepage

    Headline says AI didn't save developers time.

    The story itself describes the experience of two companies: Uplevel and Innovative Solutions. Uplevel says they didn't see any gains (and worse bug counts), Innovative Solutions says AI helped them achieve a 2x-3x increase in productivity.

    So the real headline should be "Mixed results" from AI coding assistants.

    This makes me wonder how the study methodologies of the two companies differ, and how their practice--use of AI--differs.

    • So the real headline should be "Mixed results" from AI coding assistants.

      They probably used AI to write the headline.

      • You might be right. Here's what I got when I asked Copilot to provide an alternative headline for the article:

        How about this: “AI Coding Assistants: Promises Unfulfilled as Productivity Gains Remain Elusive”?

        It did miss the "mixed results."

  • 41% more bugs (Score:4, Insightful)

    by Tony Isaac ( 1301187 ) on Sunday September 29, 2024 @08:29AM (#64825991) Homepage

    What does that mean, exactly? Not all bugs are created equal. Some are serious and consequential, others are more a matter of opinion. Does this higher bug count stem from new (AI) scanning tools?

    This reminds me of why I don't run Lint or ReSharper. Many of the bugs or flaws reported by these tools are accurate, but they are drowned out by a forest of inconsequential (though technically accurate) reported issues, that might or might not be necessary to fix. Many of these are more coding style preferences than actual code issues.

    In this study, was the same process used to scan or count bugs before the AI tools were introduced? Or were the old, lower bug counts the result of a more manual process?

  • I find Copilot quite useful and it definitely saves time: things like auto-completing comments, getting descriptions of a block of code you can't quite get a grip on, and getting advice on how to make a certain change. It's still a long way from "write me a Doom clone in Rust," but I hope it gets there one day.
  • Use of GitHub Copilot also introduced 41% more bugs, according to the study...

    Let me guess, at least half of those bugs are from bad example code posted on the Internet, or from QUESTIONS rather than answers, you know the "why does this not work?" questions.

    LLMs are an excellent mirror of our world. They will reflect back what we communicated amongst ourselves. If flat earthers weren't a fringe group, the LLM would gladly tell you that the Earth is flat.

    • Yeah, the summary definitely buried the lede. If the use of GitHub Copilot "also introduced 41% more bugs, according to the study", I don't think the important part of the story is how fast it isn't.

  • Yes, that's very specific. Has anybody had success with AI in these areas? I tried to get clues and solutions to problems with LwIP on STM32 with ChatGPT.
    All I got were:
    1) banalities, small talk.
    2) things I already knew, but not helpful, not providing answers to things I didn't know.
    3) errors, wrong answers.

    I gave up trying.
  • I'm hopeful that it will eventually be developed into something useful
    Unfortunately, investors want profits NOW, so they demand that companies release half-baked, kinda useless crap
    Meanwhile financial journalists write articles about how AI is failing to meet expectations
    In software development, I don't see the value in using crappy AI to help mediocre programmers more quickly develop mediocre code
    I'm hopeful that the systems of the future will allow expert programmers to manage the complexity of large syst

  • Almost every new software-related idea is initially overdone and misused. Over time people figure out where and how to use it effectively instead of mostly making messes as gestures to the Fad Gods. But there will be fucked up systems left in their wake. Pity the poor maintainers.

    OOP, microservices, crypto, 80's AI, distributed, Bootstrap, etc. etc. went thru a hype stage.

    Thus, I expect the initial stages will be fucked up.

  • I run a software company which uses MS SQL Server and .NET Core for our app development. My lead developer had a problem with a SQL script which was stubbornly slow - he was querying a huge table with an index that really wasn't helping the performance. He created a prompt to ChatGPT to ask how to optimize the query and it came back - instantly - with a subquery approach that moved the filtering logic from the index into memory. It ran significantly faster.

    I have a feeling that GPT had somehow scraped some

  • I can code... like shit. It's a tiny percentage of my job so while I can do it, I don't do it often enough to keep everything in my memory perfectly. I know what can be done, I know the general syntax, and I know lots of theory, and I know the terms.

    So for me, an AI assistant turns me from someone who is great in theory but very, very wasteful in practice due to how frequently I have to confirm things by checking reference materials (thank you, Google...) into someone who can do some light coding almost a

  • I've been using GitHub Copilot for a few months now and I can say that it definitely saves a little bit of time. I've seen reports that it makes developers 33% faster and it really does not come anywhere close to that. Where it shines is being able to fill in some boilerplate code faster than I can type it. When what I'm writing is something where the next few lines are something that an undergraduate student could easily predict, it can fill it in and I can tab complete it faster than I can type it. (E

  • Unfortunately these days, very few of us are coding in a vacuum. When's the last time you started coding and whatever you were creating was 100% standalone and did not interface with anything in any way? No APIs, SDKs, libraries, web services, etc. The only time I find myself doing that (and still there is at least a tiny amount of interfacing) is coding in C++ for microcontrollers (like ESP32 type embedded stuff).

    The rest of the time we have to continuously interface with 3rd party stuff, and that, more of

  • Or otherwise we would see significant savings by now. But what about code quality? As not all developers catch what AI screws up, it has to be going down. So then we would be at no time savings, while quality got worse.

  • In a nutshell, so far I am spending at least as much time fixing bugs in the AI generated code as I would writing the code myself in the first place without those bugs.
  • Blanket copy and paste without vetting the results first is a sign of a bad coder, not a bad tool.
  • I'm a firm believer that it saves me time, or at least I am a happier programmer when I can get suggestions realtime.

    I'm always skeptical when it suggests something I don't understand. I don't blindly trust it. I have to take some time to understand what it generates sometimes, but I'm still in a better spot than without aid.

  • First of all, and not entirely on topic, but I simply hate Gemini. Every interaction I have with it, regardless of coding or not, feels like a total waste of time, and annoying because I keep having to correct it and argue with it. I'm not sure how Google could create something that bad. Regarding coding, it tends to hallucinate quite a bit, offering functions from one language in another, inventing parameters, ... It's quite imaginative, really.

    ChatGPT on the other hand often offers good advice, especially

  • Just use AI as a search engine. I'm able to save a ton of time by referring to ChatGPT instead of Google for questions that tend to have ambiguous terms. Does threading refer to sewing or programming? Or if I'm not exactly sure what key terms to use, I can do a contextualized negative search: related to this, like that, but not these.
  • the dumber the operator.

    That's self evident. That is the exact reason "we" make tools, why we automate.

    Sadly the next self evident outcome of this is, after the slashdot cohort dies, there will be no one to vet the code, because 100% of everyone who graduates from "higher education" will be balls deep in the cult of AI. The only people left standing will not be able to evaluate the output of their tools.

    The MBAs work is done here.
  • ...are bits that you cut & paste. Even if you think you "carefully reviewed" it, it's many times more bug-prone than fresh, real code. I've been a freak about reviewing code ever since I worked on life-critical systems, but have to admit it's just too easy to get misled down the garden path by pasted-in code.
  • Recently I wanted to try reverse engineering the PSVR2 to implement a Linux driver. ChatGPT has massively accelerated the effort. By asking the AI about libUSB and how the protocol works, and integrating examples into a custom application I've made a lot of progress very rapidly. I would never have been able to do this without AI. The research would have taken ages and ADHD brain doesn't want to read/sort that much documentation.
  • by Jeremi ( 14640 ) on Sunday September 29, 2024 @08:44PM (#64827279) Homepage

    ... is that when you're done writing it and debugging it, you are extremely familiar with how the code works -- and also with your reasons for why you wrote the code that way and not some other way.

    Code that was written by someone else, OTOH, you have to read and review thoroughly to get an idea for how it works, and unless you sit down with that person and go over the code with them, you may never really understand why they chose that approach over the various alternative approaches they could have used.

    In some cases, that's okay, but in other cases, that is the difference between catching problems early and missing subtle design flaws that come back to bite you later on. And if an AI wrote your code, you're in the worst position, because you're not going to get any good explanation for the AI's design intent, as it didn't have any.

    • by DMJC ( 682799 )
      If the AI is literally telling you URB_TRANSFER_OUT is this call in libUSB, and URB_CONTROL_OUT is this call in libUSB, it's a powerful reference that explains how to setup USB communication. When you're trying to reverse engineer using wireshark, packet captures from Windows and libusb and Wireshark on Linux, it's actually more important to get the traffic flow working. You can sort out the design later once you've figured out what all the calls are doing.
