AI Programming

Does AI Really Make Coders Faster? (technologyreview.com)

One developer tells MIT Technology Review that AI tools weaken the coding instincts he used to have. And beyond that, "It's just not fun sitting there with my work being done for me."

But is AI making coders faster? "After speaking to more than 30 developers, technology executives, analysts, and researchers, MIT Technology Review found that the picture is not as straightforward as it might seem..." For some developers on the front lines, initial enthusiasm is waning as they bump up against the technology's limitations. And as a growing body of research suggests that the claimed productivity gains may be illusory, some are questioning whether the emperor is wearing any clothes.... Data from the developer analytics firm GitClear shows that most engineers are producing roughly 10% more durable code — code that isn't deleted or rewritten within weeks — since 2022, likely thanks to AI. But that gain has come with sharp declines in several measures of code quality. Stack Overflow's survey also found trust and positive sentiment toward AI tools falling significantly for the first time. And most provocatively, a July study by the nonprofit research organization Model Evaluation & Threat Research (METR) showed that while experienced developers believed AI made them 20% faster, objective tests showed they were actually 19% slower...

Developers interviewed by MIT Technology Review generally agree on where AI tools excel: producing "boilerplate code" (reusable chunks of code repeated in multiple places with little modification), writing tests, fixing bugs, and explaining unfamiliar code to new developers. Several noted that AI helps overcome the "blank page problem" by offering an imperfect first stab to get a developer's creative juices flowing. It can also let nontechnical colleagues quickly prototype software features, easing the load on already overworked engineers. These tasks can be tedious, and developers are typically glad to hand them off. But they represent only a small part of an experienced engineer's workload. For the more complex problems where engineers really earn their bread, many developers told MIT Technology Review, the tools face significant hurdles...

The models also just get things wrong. Like all LLMs, coding models are prone to "hallucinating" — it's an issue built into how they work. But because the code they output looks so polished, errors can be difficult to detect, says James Liu, director of software engineering at the advertising technology company Mediaocean. Put all these flaws together, and using these tools can feel a lot like pulling a lever on a one-armed bandit. "Some projects you get a 20x improvement in terms of speed or efficiency," says Liu. "On other things, it just falls flat on its face, and you spend all this time trying to coax it into granting you the wish that you wanted and it's just not going to..." There are also more specific security concerns. Researchers have discovered a worrying class of hallucinations where models reference nonexistent software packages in their code. Attackers can exploit this by creating packages with those names that harbor vulnerabilities, which the model or developer may then unwittingly incorporate into software.
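One common defense against this package-hallucination attack is to allowlist dependencies, so that a hallucinated package name fails the build instead of being installed. A minimal sketch of such a check (the file names and the build-step wiring are illustrative assumptions, not details from the article):

    #include <fstream>
    #include <iostream>
    #include <set>
    #include <string>

    int main() {
        // Vetted package names, one per line (file name is hypothetical).
        std::set<std::string> allowed;
        std::ifstream allowFile("allowlist.txt");
        for (std::string name; std::getline(allowFile, name); )
            allowed.insert(name);

        // Dependencies the code (or the model) wants, one per line.
        std::ifstream deps("dependencies.txt");
        int unvetted = 0;
        for (std::string name; std::getline(deps, name); ) {
            if (allowed.count(name) == 0) {
                std::cerr << "unvetted dependency: " << name << "\n";
                ++unvetted;
            }
        }
        // A nonzero exit fails whatever build step runs this check,
        // so a hallucinated package name never reaches the installer.
        return unvetted == 0 ? 0 : 1;
    }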

Other key points from the article:
  • LLMs can only hold limited amounts of information in context windows, so "they struggle to parse large code bases and are prone to forgetting what they're doing on longer tasks."
  • "While an LLM-generated response to a problem may work in isolation, software is made up of hundreds of interconnected modules. If these aren't built with consideration for other parts of the software, it can quickly lead to a tangled, inconsistent code base that's hard for humans to parse and, more important, to maintain."
  • "Accumulating technical debt is inevitable in most projects, but AI tools make it much easier for time-pressured engineers to cut corners, says GitClear's Harding. And GitClear's data suggests this is happening at scale..."
  • "As models improve, the code they produce is becoming increasingly verbose and complex, says Tariq Shaukat, CEO of Sonar, which makes tools for checking code quality. This is driving down the number of obvious bugs and security vulnerabilities, he says, but at the cost of increasing the number of 'code smells' — harder-to-pinpoint flaws that lead to maintenance problems and technical debt."

Yet the article cites a recent Stanford University study that found employment among software developers aged 22 to 25 dropped nearly 20% between 2022 and 2025, "coinciding with the rise of AI-powered coding tools."

The story is part of MIT Technology Review's new Hype Correction series of articles about AI.


Comments:
  • by ls671 ( 1122017 ) on Saturday December 20, 2025 @06:50PM (#65872083) Homepage

    It depends on your skill level. For trivial beginner stuff it's OK, but then again...

    For anything outside the mainstream, for which no or very few examples are available for the model to train on, it's pretty much useless.

    • Watch for the AI bubble crash in 2026.

      • by Entrope ( 68843 ) on Saturday December 20, 2025 @09:34PM (#65872293) Homepage

        As much as I hate seeing a brute-force approach burn huge amounts of electricity to enable morally dubious applications based on inputs that are often corrupt or illegal, I think the AI bubble is as likely to pop as the Bitcoin bubble.

        (You might ask: "Do you mean that AI is a brute-force approach that burns huge amounts of electricity, etc., or that Bitcoin is?" To which I answer: "Yes.")

        • by Pieroxy ( 222434 )

          Bitcoin doesn't lie about what it is and what it does. AI companies, on the other hand ...

        • by tlhIngan ( 30335 )

          As much as I hate seeing a brute-force approach burn huge amounts of electricity to enable morally dubious applications based on inputs that are often corrupt or illegal, I think the AI bubble is as likely to pop as the Bitcoin bubble.

          The Bitcoin bubble popped nearly a decade ago. You might remember that time when an iced tea beverage company added "Blockchain" to its name and shot up in value.

          Just because Bitcoin's still around doesn't mean it hasn't popped. Does the existence of Amazon, Microsoft, Google mean the dot-com bubble never popped?

      • by Moof123 ( 1292134 ) on Saturday December 20, 2025 @11:06PM (#65872395)

        Right now we are in the “first hit is free” phase of trying to get everyone hooked on this AI crap. All these upstarts are trying to grab market share and get some spaghetti to stick to the wall, and the usage is heavily subsidized by all the startup money gushing in. Once the influx peters out and these places have to pay their own rent, we will see the reality of which companies are able to actually survive and which of the horde are houses of cards.

        I fully expect there to be plenty of actual applications, but it will peter out to something much more mundane than currently advertised.

      • I am ready with my popcorn. You have to give it to the tech giants for holding this party for this long. I was thinking it would pop more like 2024.
    • by evanh ( 627108 )

      That's just what the LLM is trained on, rather than anyone's skill at using the LLM.

    • And that's where you're wrong. You simply think AI is a copy-paste out of a large library, but it's not (at least the good ones aren't). It uses that library to learn how to code much as a human does, and it is capable of writing code for which there are no real examples, just like some developers can (while a lot of developers can't do anything themselves without examples).
    • by dvice ( 6309704 )

      I am a professional with decades of experience. I think that AI is best at prototyping. If I have an idea, I can ask AI to write a program that does that. That program does not work properly, it is full of bugs, but it will instantly tell me a lot of things. The most valuable thing it can tell me is that "this idea does not work". That alone can save hours of my work. It will also reveal things that I didn't even think about, but which are essential for the idea to work.

      Because the code is just used as a prototype...

  • by SlashbotAgent ( 6477336 ) on Saturday December 20, 2025 @06:51PM (#65872087)

    Here is what keeps happening to me. I keep falling into the same trap.

    I throw various simple things at the AI, sort of a Google replacement, and it gives me what I want. I'll have several successes. And then it will lead me down the wrong rabbit hole, misdirect me, refuse to break out of its mistaken path, and waste way, way too much of my time chasing my tail.

    Eventually, I'll arrive at my destination and I'll ask it why the fuck it took such a circuitous route full of errors and straight up lies to get to the simple and correct answer. It'll respond saying that it's complicated and that it doesn't have all the answers, sorry.

    I'll then swear not to use it anymore.

    Tomorrow, I'll start with it all over again like a crack addict.

    • Yep, this is when the context is full. Nuke the chat and start again.

      My current favourite is "Oh, now I understand completely what's happening" (for the seventeenth time in a row, all of which were too hasty).

      • by Rei ( 128717 )

        Yeah, one of the things I like about Claude (and Gemini 3 as opposed to 2.5) is that they really clamped down on the use of "Oh, now I've got it! This is absolutely the FINAL fix to the problem, we've totally solved it now! Here, let me write out FIX_FINAL_SOLVED.md" with some half-arse solution. And yep, the answer to going in circles is usually either "nuke the chat" or "switch models".

    • I call it the coding LLM Doom Loop.
      A good bit of my effort with using LLMs has been in trying to avoid and correct it.
      I've found it gets easier when you start to treat the LLM and its entire context window as a single unit rather than thinking about prompts.
      Coding agents are variably successful at this.

      For my own agentic tests, I've had good results "context engineering" the LLM to reliably solve tasks that it previously couldn't.
      In the end, I'm not sure it's worth the effort, but hey, it keeps me entertained...
    • by Tailhook ( 98486 )

      The key here is that it helps, but it can't replace you. Not that I care whether you get replaced, but there are a couple trillion bubble bux riding on whether you can be replaced, so it's a big deal.

    • After a few failures, I generally give up using an LLM and look elsewhere.
    • Most of my experience lately is with a large app that I have been building. I use VSC with Claude. I have some background in coding, but I do not do it for a living. As a tool, AI works best when there is a plan to follow and a master document for it to update to record progress. Starting from a proof of concept first and then expanding from there provides clarity. Sometimes I have used one AI to create a refined method to be implemented by another AI.

      What I do not like about AI coding: the intellectual...

      • I heard similar stories about managers who haven't coded in years suddenly being able to do a little bit of coding. That is nice, but it doesn't help seasoned devs; probably the contrary, giving the managers false impressions of how simple it is.
  • Quick! (Score:4, Funny)

    by usedtobestine ( 7476084 ) on Saturday December 20, 2025 @06:55PM (#65872095)

    AI companies should pivot to predicting Anthropogenic Global Warming, I'm sure it will be perfect for that.

    • All that matters is they have everyone's money, can influence elections and are too big to fail. Give them a break - AI is hard.

  • At first (Score:5, Interesting)

    by jrnvk ( 4197967 ) on Saturday December 20, 2025 @06:59PM (#65872099)

    Professional dev in my third decade of experience speaking here. At first, these products really did assist quite a bit. In 2023 and 2024, I found the tools to be pretty decent at offering suggestions for small to medium snippets of code.

    Something changed late last year. It may just be that the shine is wearing off, but I find most of the AI products producing lower-quality results than they did previously.

    I rarely ever reach for them anymore. I sure would not rely on them over even an inexperienced junior dev, either.

    • by DamnOregonian ( 963763 ) on Saturday December 20, 2025 @07:41PM (#65872155)

      Professional dev in my third decade of experience speaking here.

      Only second decade, here.

      I rarely ever reach for them anymore. I sure would not rely on them over even an inexperienced junior dev, either.

      I find them comparable, unfortunately. But my new hires may not be as good as yours.

    • Something changed late last year. It may just be that the shine is wearing off, but I find most of the AI products producing lower-quality results than they did previously.

      Empirically, speaking to a few people, yeah, something now somehow feels not quite as good as it used to be. I think the yes-man problem has got worse. If you're trying to find the API/argument/etc. to do X, it will always tell you what a great idea it is and give you the code, even if there is no way to do it. I think it's got more sycophantic...

    • by chthon ( 580889 )

      In my fourth decade, but I've been in C# since 2013 and had a need to learn Angular and .NET 8 at the beginning of 2024. Professional engineer since 2012.

      Where ChatGPT is an enormous help is in the following:

      • Asking questions to get documented answers. Much better than Stack Overflow etc., where many answers leave out context.
      • I mostly know what I want, but the details are sometimes difficult to find. E.g., much of the MS .NET class documentation is just downright uninformative. Here AI helps to get simple examples.
  • It has its uses (Score:5, Interesting)

    by LindleyF ( 9395567 ) on Saturday December 20, 2025 @07:01PM (#65872101)
    Asking natural language questions is a fantastic way to search documentation.

    If you can describe exactly what you want, it can do a fine job accelerating that.

    If you are stuck, asking it to try to solve your problem can at least be entertaining.
  • The fact is, some guesses are wrong! The question is whether they can recover, by post-guess guessing and conditioning the output.
  • by Fly Swatter ( 30498 ) on Saturday December 20, 2025 @07:04PM (#65872105) Homepage
    Those bots that auto-expire valid bug reports after only a week of inactivity cut way down on wasting time fixing bugs and let me review^^^^^^H use and fix^^^H release all the bad code the other AI bots are creating for me.
  • by oldgraybeard ( 2939809 ) on Saturday December 20, 2025 @07:06PM (#65872107)
    And don't bother to double-check things. And if you do check, there goes the speed gain.
  • by jjaa ( 2041170 ) on Saturday December 20, 2025 @07:22PM (#65872119)
    Hear me out: the way all companies are pushing AI makes for additional, unwanted distraction. And I mean both my company insisting on using the new fad just to stay hip, and all the service and tool providers pushing the product. It all adds overhead to daily work. I'm fine with AI being an improvement on search engines, but having to tell stuff to an AI agent is like babysitting an intern. An intern who majored in human science, at that.
    • Oh well, I exaggerated that last bit. An AI agent at least will try to correct itself and knows where to look for answers when I point out the mistakes it made. Still, it's the same as leading a junior. The only benefit is the company's hope of having fewer paid employees, I guess.
  • by Tablizer ( 95088 ) on Saturday December 20, 2025 @07:26PM (#65872125) Journal

    AI seems to be feeding the bloat habit instead of trimming it. It's becoming an auto-bloater.

    Very few in the industry are interested in parsimony. Devs would rather collect buzzwords for their resumes than try to trim out layers and eye-candy toys. It's kind of like letting surgeons also be your general doctor: they'd recommend surgery more often than you really need it.

    The principles of typical biz/admin CRUD haven't really changed much since client/server came on the scene in the early 90's. Yet the layers and verbosity seem to keep growing. An ever smaller portion of time is spent on domain issues and ever more on the tech layers and parts to support the domain. Something is wrong but nobody is motivated to do anything about it because bloat is job security.

    YAGNI and KISS are still important, but they are dismissed because they reduce one's resume buzzword count. The obsession with scaling for normal apps is an example of such insanity: there's only about a 1 in 50k chance your app or company will ever become FANG-sized, yet too many devs want to use a "webscale" stack. You're almost as likely to get struck by lightning while coding it. The patients are running the asylum.

    Humans, you are doing CRUD wrong!

    • You gotta be cynical. I agree this is a dog chasing its tail. Increasing consumption is the purpose of the machine. I do nearly the same things online as I did 10 years ago, but my bandwidth consumption is up by probably 3x. It's a self fulfilling prophecy until it's a bursting bubble.
    • Very few in the industry are interested in parsimony.

      I've come to accept this as true, and further conjecture that bloat is often a corporate/institutional goal.

      This seems to be a joke [zerobugsan...faster.net], but in reality corporate incentives are aligned to make things more bloated. If you're a manager, then the more people you have under you, the more power you have. This means you want your people to go slower so you have to hire more of them.

      I don't have a solution but there must be one.

  • by rsilvergun ( 571051 ) on Saturday December 20, 2025 @07:29PM (#65872129)
    It's about lowering the skill ceiling so that they can pay substantially less. If there are productivity gains that's just a bonus.

    The goal here for AI is to eliminate wages. There is more than one way to skin a cat.

    Remember, good enough is always good enough, especially when monopolies exist and we refuse to vote for politicians who will enforce antitrust law, because we're busy freaking out about whatever petty moral panic or culture-war bullshit the boob tube tells us to today.
    • good enough is always good enough

      Yeah, the only outcome to a constantly lowering average standard is a downward spiral.
      Trying to achieve high quality is too much effort, so let's set aside pride in whatever's done.

    • by evanh ( 627108 )

      Eliminating wages, assuming the result still works, would be classed as a productivity gain.

  • In this it is decent: the result needs to be something you actually understand, so you can proofread it.
    But letting it loose on the big code at large is pretty dumb.

    • But letting it loose on the big code at large is pretty dumb.

      I do this frequently, with both packaged agents and my modifications to them, just to see what comes out the other side.
      Sometimes it's great, sometimes it's pretty bad.
      I do it as a side job, not my regular work, so the consequences of the failure are minimal- I just throw it away and try again with another modification.
      If it were my actual main workflow... I think that would stress me the fuck out- each failure being significant wasted time and money.

  • No (but also yes).

    If you let it write code freely, you can spend hours or days trying to figure out what the f**k it's done and why it hasn't done what you asked, or why it randomly introduces a complete rewrite for a single-line fix.

    Heavily constrained yes.

  • It's really good at things I'm bad at. Making things look nice mostly. I can agonize over padding and fonts for hours and end up with something that looks terrible.
  • Do the productivity estimates include the tedious additional overhead at the start of every meeting talking about AI meeting summaries, during every meeting talking about AI, after the meeting talking about the AI-generated meeting summary notes, in workplace chat groups talking about the efficiency or otherwise of AI, etc.?
  • The Flow State (Score:4, Interesting)

    by devslash0 ( 4203435 ) on Saturday December 20, 2025 @07:53PM (#65872175)

    Quite frankly, I don't want to talk to anyone all day, including an agent. And it's not really about being anti-social. As most people know, we have two primary brains: logical and emotional. I find that trying to explain to the model what needs to be done engages my emotional/social brain too much, and the logical part can't focus on getting the job done. In order to do quality work, I need to completely ignore the emotional side of my brain and let the logical brain get into the state of flow, where I can accurately track the state of what I'm working on and follow logical workflows only. Talking to an agent all day engages my social side too much to do quality work.

    • This is a very refreshing new aspect, never thought of it!
    • by allo ( 1728082 )

      I wonder what you think about pair programming. Sounds like that would not match your style either. And some agent/human interactions are thought to be similar to that, either the agent monitors if you mistype something, or you review each step of the agent before letting it continue.

  • Tried to use Gemini to just hack my way through a scientific Python package recently. In the end it almost all turned out to be worthless, as a requirement couldn't be installed at all (yay Assimulo) and no amount of AI was helping. I had to scrap multiple days' worth of work that relied partially on AI and just MacGyver a different way to get the results I wanted. If I hadn't had AI at all, I might've started there and ended up going faster.

    I'm sure this is all worth trillions of dollars or whatever
  • The prompt: "Write a C++ program which calculates a 'stardate' where each date is a 24 hour Earth day, and the origin (0) is January 1st, 2001 at midnight."

    The response:
    To calculate a "stardate" based on Earth days since January 1, 2001 at midnight, you can use the C++ <chrono> library to compute the number of days (and fractional days) that have passed since the epoch.

    Here's a complete C++ program that calculates the stardate:

    #include <iostream>
    #include <chrono>
    ...
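
    The rest of the output got cut off above. For reference, a minimal sketch of what the remainder of such a program might look like (a reconstruction assuming C++20 <chrono> calendar support, not the LLM's verbatim output):

        #include <chrono>
        #include <iostream>

        int main() {
            using namespace std::chrono;
            // Origin (stardate 0): January 1st, 2001 at midnight.
            constexpr auto epoch = sys_days{January / 1 / 2001};
            // Fractional 24-hour Earth days elapsed since the origin.
            const double stardate =
                duration<double, days::period>(system_clock::now() - epoch).count();
            std::cout << "Stardate: " << stardate << "\n";
        }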
    • What exactly does this tell us? That the LLM was trained on this example / code fed by tens of thousands of trekkies and was therefore able to regurgitate it. Would be interesting to see where the code snippets are originating from
      • Right.. it's "as near as he can tell"... and you know what? I think it probably works too. But if it doesn't the bug will be slippery as shit and tracking it down is as much work as rewriting most of the program.

        and there's also the case it cranks out something broken and when you ask it to fix line 17 it's a coin flip if it goes "oh silly me!" and fixes it or goes "oh wow silly me!" and spits out an identical line.

        I've been saying this since, oh, ChatGPT 3: you gotta give it small problems, the same as if you...

  • by A nonymous Coward ( 7548 ) on Saturday December 20, 2025 @08:26PM (#65872225)

    I came across some Emacs elisp code I'd written about 25 years ago, and it looked pretty useful. Emacs didn't like it. I researched the functions and variables and they apparently had been rejiggered about 5 years later. I said to myself, Self, sez I, this could be an interesting AI test. I could probably make this do what I want in a few minutes now if I did it from scratch, but that wouldn't help me understand why it was written that way 25 years ago.

    So I asked Grok. I was pleasantly surprised to find it understood 25-year-old elisp code just fine, explained when and how the functions had been rejiggered, and rewrote the code for the current standards. That was more than I had expected and well worth the time invested.

    One other time Grok surprised me was asking how much of FDR's New Deal legislation would have passed if it had required 2/3 passage instead of just 1/2. Not only did it name the legislation which would not have passed, it also named all the legislation which had passed by voice vote and there was no way to know if 2/3 had voted for it. The couple of bills I checked did match and were not hallucinations. The voice vote business was a nice surprise.

    I program now for fun, not professionally. The idea of "offshoring" the fun to AI doesn't interest me. But trying to find 25-year-old documentation and when it changed doesn't sound like fun, and I'm glad to know I can offshore at least some of the dreary parts.

  • Huh, what are the odds that MIT releases yet another paper with subjective contrarian views on productivity with AI?

    There is a MASSIVE conflict of interest with these MIT papers here, and nobody's calling it out.
    So yeah, okay, sure, MIT thinks:

    - AI makes you dumber (with methodology nobody without a dedicated lab can duplicate)
    - 95% of AI projects fail (using extremely rigid metrics and ignoring norms in the larger industry to reach conclusions, while including prototypes and showboat projects...

  • by seoras ( 147590 ) on Saturday December 20, 2025 @09:12PM (#65872271)

    I've worked for myself as an independent developer for more than a decade now.
    Apps and websites, and I do well working on my own.
    I'm getting old enough, though, that the saying "you can't teach an old dog new tricks" is starting to make sense.
    AI couldn't have come at a better time in my life.
    As I've always warned youth thinking of getting into tech in higher education: the older you are, the less valuable you become. The complete opposite of other white-collar grad workers. You want the old, experienced doctor, lawyer, accountant, etc., not the fresh-faced grad; unless you are hiring software devs.
    Since I started using AI, I've found I'm an order of magnitude more productive in my output and my overall success.
    It's such a time saver my home looks fab this summer (I'm in NZ) as I've had so much spare time to enjoy gardening.
    AI has knowledge. What it doesn't have is wisdom.
    As long as you remember that and have the wisdom and intuition to know when it is wrong you can't lose.
    Which gives me hope as an old timer in this game.
    Maybe, after all, I have what the other white-collar grad workers have that is most valuable:
    experience and wisdom. Which is why AI is no threat to any of us right now.

  • As a developer, AI workflows still rub me the wrong way. If I was dedicated to the task, I'd produce better code.
    As a human, AI workflows let me have a life. I can let the agents knock out the easy things while I'm working on other tasks. I still need to design out what's to be worked on, review the code, fix the bonehead mistakes they make, etc. It's basically like having a junior developer assigned to you.

    Which brings up an important point: junior developers need clear instructions/requirements, and so do AIs.
    • Yeah, only in the one case you're helping a fellow human being learn something and become good in the profession. In the other, you're blowing money and resources away for absolutely no other gain but to increase a company's bottom line. If you kill off all junior devs because "you can replace them with AI", who are going to be the senior developers of tomorrow?
      • If all code is written by AI, then no metric can exist to define/qualify a (human) "senior" developer. Perhaps the biggest/fastest LLM will retain the "developer" title. One "positive" thought: perhaps the entirety of digital memes is toxic to humans, and should be isolated from their presence like radioactive waste, rabies virus, or black mamba venom.
    • As a human, AI workflows let me have a life. I can let the agents knock out the easy things while I'm working on other tasks. I still need to design out what's to be worked on, review the code, fix the bonehead mistakes they make, etc. It's basically like having a junior developer assigned to you.

      Every time I see someone talking about AI being a junior developer, I am quite certain they have never worked with a junior developer.

  • I really do think coding using AI tools is a bit faster; at least it seems that way to me, as most of the boring but lengthy work can be done faster by AI.

    But I am also pretty sure it's VERY easy to rapidly incur technical debt, especially if you are telling AI to review its own work. Yeah, it will fix some stuff, but who is to say the post-review result is really better?

    More than ever I think the right approach to coding with AI is to build up carefully crafted frameworks that are solid (maybe use AI to help...

  • AI might make newbies faster at producing... something. Probably something full of bugs and security holes.

    But it won't help non-newbies with software development, of which "coding" is a relatively minor part.

    • by azouhr ( 8526607 )
    Thanks. You are the first to recognize the main issue with this discussion. I really like to go with Fred Brooks's "The Mythical Man-Month" (it can be found in the Google Books library), which states that for a project, 1/3 is design, 1/6 is coding, and the rest (1/2) is QA. Now, while AI can help with each of these topics, it will not do so on its own. If you do not know exactly what to ask, and if your requests are not small enough, you will go bust.

      Still, people like to concentrate on coding -- probably because software matures a...

  • by godrik ( 1287354 ) on Saturday December 20, 2025 @10:29PM (#65872355)

    I've been playing with these genAI systems both as code producers and as helpers on various tasks.
    And overall, I find the models quite brittle unless they are fine-tuned on the precise task that you want.

    The main problem that I see is that the tool is fundamentally string in, string out. But the strings could be absolutely anything, including completely insane things, without proper fine-tuning.

    Today, I am writing a simple automatic typo-correction tool. The difficult bit is making sure that the tool didn't crap out. I mean, it is easy to check that you actually got an answer from the tool. The problem is that sometimes the tool will tell you: "Sure, I can fix typos. Here is your text corrected: ". And so you probably have to toss that output out. But how do you figure out that it shat the bed? Well, you can't really; it is just as hard as the original task in some cases. So you bake in various heuristics, or you get a different LLM to check the work of the first one.
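
    For what it's worth, the kind of cheap heuristic this means in practice might look like the following sketch (the marker strings and length-ratio bounds are made up for illustration):

        #include <algorithm>
        #include <array>
        #include <cstddef>
        #include <string_view>

        // Reject replies that kept a conversational preamble instead of
        // returning only the corrected text (marker list is illustrative).
        bool looksLikePreamble(std::string_view reply) {
            constexpr std::array<std::string_view, 3> markers = {
                "Sure, I can", "Here is your text", "As an AI"};
            for (std::string_view m : markers)
                if (reply.find(m) != std::string_view::npos) return true;
            return false;
        }

        // A pure typo fix should barely change the text's length; replies
        // outside these (arbitrary) bounds get tossed and retried.
        bool plausibleCorrection(std::string_view input, std::string_view output) {
            const double ratio = static_cast<double>(output.size()) /
                static_cast<double>(std::max<std::size_t>(input.size(), 1));
            return ratio > 0.8 && ratio < 1.25 && !looksLikePreamble(output);
        }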

    At the end of the day, you really can't trust anything these tools do. They are way too erratic and unpredictable, and you should treat any of these tools as being possibly adversarial. It's exhausting to use, really.

  • These wild swings between AI thinning out the workforce and making all our jerbs obsolete to not being sure if AI is even useful is giving me a headache.

    • by dskoll ( 99328 )

      It'll do both... dumb management and bean-counter types will replace people with AI, and the AI will suck at actually getting work done.

      Lose-lose!

    • They can both be true. A lot of companies have short-sighted leaders. The LLM may produce substandard code that will be a nightmare down the road, but who cares, I've saved the cost of 10 developers. And when non-programming people are expected to judge the quality of LLM code, it's no wonder "works on my machine" is the new baseline of quality.
  • Been using the Claude CLI the last few weeks and it has definitely been a great assistant in working with Qt 6, C++, and QML. The CLI interface is one of the best interfaces I have ever seen, and its native use of Markdown is ideal. I am still writing 90% of the code, but Claude's a great way to get information on some aspects of the library that I'm not as familiar with. I'm not ready to set it loose with nothing but a specifications document yet.

    I've had it port some code from OpenGL to QRhi (similar t...

  • That METR study has been doing the rounds on the web, but it is misleading. The study showed that 56% of the developers had never used Cursor before, and crucially, the one developer with over 50 hours of Cursor experience actually did see a positive speedup.
  • When using AI to quickly mock up small chunks of code, I find it an accelerator. And I do mean small.

    But when code scales up beyond simple systems or APIs, it falls apart pretty quickly. As the scale of the system grows, the requirements grow even faster. GDPR, PII, FIRB, NIST: they all start to pile up as code bases grow. AI lacks an understanding of the "business" need. So you get this blob of code out, and then you have to spend large amounts of time understanding it so you can refactor it, because the AI engine...

  • I use AI regularly, at least once or twice a week. It's a real productivity boost. It's completely replaced searching for me. It's basically an API expert I can talk to and get answers from in 20 seconds. Good stuff.

    Example: I'm working on a bad code base of a legacy application. The backend is quite a mess which I don't really like to touch, so I push a lot of my new logic into our Postgres DB. I don't really like SQL, and anything beyond one or two joins I'd usually avoid. With progbuddy AI I'm doing triggers...

"A mind is a terrible thing to have leaking out your ears." -- The League of Sadistic Telepaths

Working...