Does AI Really Make Coders Faster? (technologyreview.com) 97
One developer tells MIT Technology Review that AI tools weaken the coding instincts he used to have. And beyond that, "It's just not fun sitting there with my work being done for me."
But is AI making coders faster? "After speaking to more than 30 developers, technology executives, analysts, and researchers, MIT Technology Review found that the picture is not as straightforward as it might seem..." For some developers on the front lines, initial enthusiasm is waning as they bump up against the technology's limitations. And as a growing body of research suggests that the claimed productivity gains may be illusory, some are questioning whether the emperor is wearing any clothes.... Data from the developer analytics firm GitClear shows that most engineers are producing roughly 10% more durable code — code that isn't deleted or rewritten within weeks — since 2022, likely thanks to AI. But that gain has come with sharp declines in several measures of code quality. Stack Overflow's survey also found trust and positive sentiment toward AI tools falling significantly for the first time. And most provocatively, a July study by the nonprofit research organization Model Evaluation & Threat Research (METR) showed that while experienced developers believed AI made them 20% faster, objective tests showed they were actually 19% slower...
Developers interviewed by MIT Technology Review generally agree on where AI tools excel: producing "boilerplate code" (reusable chunks of code repeated in multiple places with little modification), writing tests, fixing bugs, and explaining unfamiliar code to new developers. Several noted that AI helps overcome the "blank page problem" by offering an imperfect first stab to get a developer's creative juices flowing. It can also let nontechnical colleagues quickly prototype software features, easing the load on already overworked engineers. These tasks can be tedious, and developers are typically glad to hand them off. But they represent only a small part of an experienced engineer's workload. For the more complex problems where engineers really earn their bread, many developers told MIT Technology Review, the tools face significant hurdles...
The models also just get things wrong. Like all LLMs, coding models are prone to "hallucinating" — it's an issue built into how they work. But because the code they output looks so polished, errors can be difficult to detect, says James Liu, director of software engineering at the advertising technology company Mediaocean. Put all these flaws together, and using these tools can feel a lot like pulling a lever on a one-armed bandit. "Some projects you get a 20x improvement in terms of speed or efficiency," says Liu. "On other things, it just falls flat on its face, and you spend all this time trying to coax it into granting you the wish that you wanted and it's just not going to..." There are also more specific security concerns, he says. Researchers have discovered a worrying class of hallucinations where models reference nonexistent software packages in their code. Attackers can exploit this by creating packages with those names that harbor vulnerabilities, which the model or developer may then unwittingly incorporate into software.
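One possible mitigation is to gate generated dependencies behind a human-vetted allowlist rather than installing whatever name appears in the model's output. A minimal sketch of the idea (the package names and the is_approved helper below are made up for illustration, not part of any real tool):

#include <iostream>
#include <set>
#include <string>
#include <vector>

// Hypothetical allowlist check: only dependencies your team has vetted may
// enter the build, so a hallucinated package name fails fast instead of
// being silently installed from a public registry.
bool is_approved(const std::string& pkg, const std::set<std::string>& approved) {
    return approved.count(pkg) > 0;
}

int main() {
    const std::set<std::string> approved = {"fmt", "spdlog", "nlohmann-json"};
    // Imagine these came from an LLM-generated build file:
    const std::vector<std::string> suggested = {"fmt", "fastjsonx-utils"};

    for (const auto& pkg : suggested) {
        if (!is_approved(pkg, approved)) {
            std::cerr << "REJECTED: '" << pkg
                      << "' is not on the vetted dependency allowlist\n";
        } else {
            std::cout << "ok: " << pkg << "\n";
        }
    }
    return 0;
}

Nothing about this is specific to AI-generated code, but it turns a silent supply-chain risk into a loud build failure.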
Other key points from the article:
- LLMs can only hold limited amounts of information in context windows, so "they struggle to parse large code bases and are prone to forgetting what they're doing on longer tasks."
- "While an LLM-generated response to a problem may work in isolation, software is made up of hundreds of interconnected modules. If these aren't built with consideration for other parts of the software, it can quickly lead to a tangled, inconsistent code base that's hard for humans to parse and, more important, to maintain."
- "Accumulating technical debt is inevitable in most projects, but AI tools make it much easier for time-pressured engineers to cut corners, says GitClear's Harding. And GitClear's data suggests this is happening at scale..."
- "As models improve, the code they produce is becoming increasingly verbose and complex, says Tariq Shaukat, CEO of Sonar, which makes tools for checking code quality. This is driving down the number of obvious bugs and security vulnerabilities, he says, but at the cost of increasing the number of 'code smells' — harder-to-pinpoint flaws that lead to maintenance problems and technical debt."
Yet the article cites a recent Stanford University study that found employment among software developers aged 22 to 25 dropped nearly 20% between 2022 and 2025, "coinciding with the rise of AI-powered coding tools."
The story is part of MIT Technology Review's new Hype Correction series of articles about AI.
It depends on your skills level (Score:5, Insightful)
It depends on your skill level. For trivial beginner stuff, it's OK, but then again...
For anything out of the mainstream, for which no or very few examples were available for the model to train on, it's pretty much useless.
Re: (Score:3)
Watch for the AI bubble crash in 2026.
Re:It depends on your skills level (Score:5, Insightful)
As much as I hate seeing a brute-force approach burn huge amounts of electricity to enable morally dubious applications based on inputs that are often corrupt or illegal, I think the AI bubble is as likely to pop as the Bitcoin bubble.
(You might ask: "Do you mean that AI is a brute-force approach that burns huge amounts of electricity, etc., or that Bitcoin is?" To which I answer: "Yes.")
Re: (Score:3)
Bitcoin doesn't lie about what it is and what it does. AI companies, on the other hand ...
Re: (Score:2)
Well, plenty of blockchain proponents and companies do/did.
Re: (Score:3)
The bitcoin bubble popped nearly a decade ago. You might remember that time when an iced tea beverage company changed its name to include "blockchain" and shot up in value.
Just because bitcoin's still around doesn't mean it hasn't popped - does the existence of Amazon, Microsoft, Google mean the dot-com bubble never popped?
Re:It depends on your skills level (Score:5, Interesting)
Right now we are in the “first hit is free” phase of trying to get everyone hooked on this AI crap. All these upstarts are trying to get marketshare and get some spaghetti to stick to the wall and the usage is heavily subsidized by all the startup money gushing in. Once the influx peters out and these places have to pay their own rent we will see the reality of which companies are able to actually survive and which of the horde are houses of cards.
I fully expect there to be plenty of actual applications, but it will settle into something much more mundane than currently advertised.
Re: It depends on your skills level (Score:1)
Re: (Score:2)
That's just what the LLM is trained on, rather than anyone's skill at using the LLM.
Re: It depends on your skills level (Score:2)
Re: (Score:2)
I am a professional with decades of experience. I think that AI is best at prototyping. If I have an idea, I can ask AI to write a program that does that. That program does not work properly, it is full of bugs, but it will instantly tell me a lot of things. Most valuable thing it can tell me is that "this idea does not work". That alone can save hours of my work. It will also reveal things that I didn't even think about, but which are essential for the idea to work.
Because the code is just used as a protot
Here's What Happens To Me (Score:5, Informative)
Here is what keeps happening to me. I keep falling into the same trap.
I throw various simple things at the AI, sort of a Google replacement, and it gives me what I want. I'll have several successes. And then it will lead me down the wrong rabbit hole, misdirect me, refuse to break out of its mistaken path, and waste way way way too much of my time chasing my tail.
Eventually, I'll arrive at my destination and I'll ask it why the fuck it took such a circuitous route full of errors and straight up lies to get to the simple and correct answer. It'll respond saying that it's complicated and that it doesn't have all the answers, sorry.
I'll then swear not to use it anymore.
Tomorrow, I'll start with it all over again like a crack addict.
Re: (Score:1)
Yep, this is when the context is full. Nuke the chat and start again.
My current favourite is "Oh, now I understand completely what's happening" (for the seventeenth time in a row - all of which were too hasty).
Re: (Score:2)
Yeah, one of the things I like about Claude (and Gemini 3 as opposed to 2.5) is that they really clamped down on the use of "Oh, now I've got it! This is absolutely the FINAL fix to the problem, we've totally solved it now! Here, let me write out FIX_FINAL_SOLVED.md" with some half-arse solution. And yep, the answer to going in circles is usually either "nuke the chat" or "switch models".
Re: (Score:3)
A good bit of my effort with using LLMs has been in trying to avoid and correct it.
I've found it gets easier when you start to treat the LLM and its entire context window as a single unit rather than thinking about prompts.
Coding agents are variably successful at this.
For my own agentic tests, I've had good results "context engineering" the LLM to solve tasks reliably that it previously couldn't.
In the end - I'm not sure it's worth the effort, but hey, it keeps me entertained.
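A minimal sketch of the pattern I mean by treating the context window as a unit you rebuild per task, rather than a chat you keep appending to (call_llm is a made-up stand-in for whatever model API you actually use):

#include <iostream>
#include <string>
#include <vector>

// Hypothetical stand-in for a real model API call; a real version would
// POST full_context to whatever backend you use.
std::string call_llm(const std::string& full_context) {
    return "[model response to " + std::to_string(full_context.size()) +
           " chars of context]";
}

// Rebuild the entire context from curated pieces on every attempt,
// instead of appending to a long, drifting chat history.
std::string solve_task(const std::string& task,
                       const std::vector<std::string>& curated_facts) {
    std::string context = "You are solving exactly one task.\n\n";
    for (const auto& fact : curated_facts) {
        context += "FACT: " + fact + "\n";  // only vetted, relevant context
    }
    context += "\nTASK: " + task + "\n";
    return call_llm(context);  // one self-contained unit in, one answer out
}

int main() {
    std::cout << solve_task("Rename struct Foo to Bar across the module.",
                            {"The module lives in src/module/",
                             "Tests are in tests/module_test.cpp"})
              << "\n";
    return 0;
}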
Re: (Score:2)
Re: (Score:2)
The key here is that it helps, but it can't replace you. Not that I care whether you get replaced, but there are a couple trillion bubble bux riding on whether you can be replaced, so it's a big deal.
Re: (Score:2)
Re: Here's What Happens To Me (Score:2)
I mostly have experience with a large app that I have been building lately. I use VSC with Claude. I have some background in coding but I do not do it for a living. As a tool, AI works best when there is a plan to follow and a master document for it to update to record progress. Starting from a proof of concept first and then expanding from there provides clarity. Sometimes I have used one AI to create a refined method to be implemented by another AI.
What I do not like about AI coding: the intellectual
Re: Here's What Happens To Me (Score:2)
Quick! (Score:4, Funny)
AI companies should pivot to predicting Anthropogenic Global Warming, I'm sure it will be perfect for that.
Re: (Score:1)
All that matters is they have everyone's money, can influence elections and are too big to fail. Give them a break - AI is hard.
At first (Score:5, Interesting)
Professional dev in my third decade of experience speaking here. At first, these products really did assist quite a bit. In 2023 and 2024, I found the tools to be pretty decent at offering suggestions for small to medium snippets of code.
Something changed late last year. It may just be that the shine is wearing off, but I find most of the AI products producing lower-quality results than they did previously.
I rarely ever reach for them anymore. I sure would not rely on them over even an inexperienced junior dev, either.
Re:At first (Score:4)
Professional dev in my third decade of experience speaking here.
Only second decade, here.
I rarely ever reach for them anymore. I sure would not rely on them over even an inexperienced junior dev, either.
I find them comparable, unfortunately. But my new hires may not be as good as yours.
Re: (Score:2)
Something changed late last year. It may just be that the shine is wearing off, but I find most of the AI products producing lower-quality results than they did previously.
Empirically, speaking to a few people around me: yeah, something now somehow feels not quite as good as it used to be. I think the yes-man problem has got worse. If you're trying to find the API/argument/etc. to do X, it will always tell you what a great idea it is and give you the code, even if there is no way to do it. I think it's got more sycophantic.
Re: (Score:2)
In my fourth decade, but been in C# since 2013, had a need to learn Angular and .NET 8 beginning of 2024. Professional engineer since 2012.
Where ChatGPT is an enormous help is in the following:
It has its uses (Score:5, Interesting)
If you can describe exactly what you want, it can do a fine job accelerating that.
If you are stuck, asking it to try to solve your problem can at least be entertaining.
Today's AI may suffer from a critical flaw! (Score:2)
Of course it does. (Score:3)
Only if you're a true believer! (Score:3)
Re: Only if you're a true believer! (Score:2)
Re: (Score:1)
Nope! (Score:3)
Re: Nope! (Score:2)
Bloat Industrial Complex (Score:3)
AI seems to be feeding the bloat habit instead of trimming it. It's becoming an auto-bloater.
Very few in the industry are interested in parsimony. Devs would rather collect buzzwords for their resume than try to trim out layers and eye-candy toys. It's kind of like letting surgeons also be your general doctor: they'd recommend surgery more often than you really need it.
The principles of typical biz/admin CRUD haven't really changed much since client/server came on the scene in the early 90's. Yet the layers and verbosity seem to keep growing. An ever smaller portion of time is spent on domain issues and ever more on the tech layers and parts to support the domain. Something is wrong but nobody is motivated to do anything about it because bloat is job security.
YAGNI and KISS are still important, but are dismissed because they reduce one's resume buzzword count. The obsession with scaling for normal apps is an example of such insanity: there's only like a 1 in 50k chance your app or company will ever become FANG-sized, yet too many devs want to use a "webscale" stack. You're almost as likely to get struck by lightning while coding it. The patients are running the asylum.
Humans, you are doing CRUD wrong!
Re: Bloat Industrial Complex (Score:2)
Re: (Score:2)
Very few in the industry are interested in parsimony.
I've come to accept this as true, and further conjecture that bloat is often a corporate/institutional goal.
This seems to be a joke [zerobugsan...faster.net], but in reality corporate incentives are aligned to make things more bloated. If you're a manager, then the more people you have under you, the more power you have. This means you want your people to go slower so you have to hire more of them.
I don't have a solution but there must be one.
It's not about being faster (Score:5, Insightful)
The goal here for AI is to eliminate wages. There is more than one way to skin a cat.
Remember, good enough is always good enough, especially when monopolies exist and we refuse to vote for politicians who will enforce antitrust law, because we're busy freaking out about whatever petty moral panic or culture-war bullshit the boob tube tells us to today.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
I have met tons of h1bs who are great at what they do.
I also know the h1b msp ecosystem that hovers around the redmond campus like flies on shit and 90% of these "gold double platinum preferred msft vendor" companies are an outright scam.
But since it's other companies being dead weight and not a human being, the grift is completely acceptable. I know some of you might wanna say "a business's job is to make money! why would they allow that?!?!" and to you gents I suggest getting a job and using your eyes.
Re: (Score:2)
good enough is always good enough
Yeah, the only outcome of a constantly lowering average standard is a downward spiral.
Trying to achieve high quality is too much effort, so let's set aside pride in whatever's done.
Re: (Score:2)
Eliminating wages, assuming the result still works, would be classed as a productivity gain.
"i need that small annoying snippet that does..." (Score:2)
In this, it is decent, but the result needs to be something you actually understand so you can proofread it.
But letting it loose on the big code at large is pretty dumb.
Re: (Score:3)
But letting it loose on the big code at large is pretty dumb.
I do this frequently, with both packaged agents and my modifications to them, just to see what comes out the other side.
Sometimes it's great, sometimes it's pretty bad.
I do it as a side job, not my regular work, so the consequences of the failure are minimal- I just throw it away and try again with another modification.
If it were my actual main workflow... I think that would stress me the fuck out- each failure being significant wasted time and money.
Re: (Score:2)
Re: "i need that small annoying snippet that does. (Score:2)
Re: (Score:2)
As I said, if it did- I'd have other feelings about it.
\o/ (Score:2)
No (but also yes).
If you let it write code freely, you can spend hours or days trying to figure out what the f**k it's done and why it hasn't done what you asked, or why it randomly introduces a complete re-write for a single-line fix.
Heavily constrained, yes.
It complements me perfectly. (Score:2)
Hidden cost (Score:2)
The Flow State (Score:4, Interesting)
Quite frankly, I don't want to talk to anyone all day, including an agent. And it's not really about being anti-social. As most people know, we have two primary brains - logical and emotional. I find that trying to explain to the model what needs to be done engages my emotional/social brain too much, and the logical part can't focus on getting the job done. In order to do quality work, I need to completely ignore the emotional side of my brain and let the logical brain get into the state of flow, where I can accurately track the state of what I'm working on and follow logical workflows only. Talking to an agent all day engages my social side too much to do quality work.
Re: The Flow State (Score:2)
Re: (Score:2)
I wonder what you think about pair programming. Sounds like that would not match your style either. And some agent/human interactions are thought to be similar to that, either the agent monitors if you mistype something, or you review each step of the agent before letting it continue.
Yeah... (Score:2)
I'm sure this is all worth trillions of dollars or whatever
A simple, but lame, example. (Score:2)
Write a C++ program which calculates a "stardate" where each date is a 24 hour Earth day, and the origin (0) is January 1st, 2001 at midnight.
Finished
To calculate a "stardate" based on Earth days since January 1, 2001 at midnight, you can use the C++ <chrono> library to compute the number of days (and fractional days) that have passed since the epoch.
Here's a complete C++ program that calculates the stardate:
#include <iostream>
#include <chrono>
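For reference, a complete minimal version along those lines (assuming C++20 calendar support, and taking "stardate" to mean fractional 24-hour days since 2001-01-01 00:00 UTC) might look like:

#include <chrono>
#include <iostream>

int main() {
    using namespace std::chrono;

    // Epoch: January 1st, 2001 at midnight (UTC, per system_clock).
    constexpr auto epoch = sys_days{January / 1 / 2001};

    // Fractional 24-hour days elapsed since the epoch.
    const auto now = system_clock::now();
    const double stardate =
        duration<double, days::period>(now - epoch).count();

    std::cout << "Stardate: " << stardate << "\n";
    return 0;
}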
Re: A simple, but lame, example. (Score:2)
Re: (Score:1)
Right.. it's "as near as he can tell"... and you know what? I think it probably works too. But if it doesn't the bug will be slippery as shit and tracking it down is as much work as rewriting most of the program.
and there's also the case it cranks out something broken and when you ask it to fix line 17 it's a coin flip if it goes "oh silly me!" and fixes it or goes "oh wow silly me!" and spits out an identical line.
I've been saying this since oh chatgpt3, you gotta give it small problems the same as if yo
It helped research some 25-year-old code (Score:5, Insightful)
I came across some Emacs elisp code I'd written about 25 years ago, and it looked pretty useful. Emacs didn't like it. I researched the functions and variables and they apparently had been rejiggered about 5 years later. I said to myself, Self, sez I, this could be an interesting AI test. I could probably make this do what I want in a few minutes now if I did it from scratch, but that wouldn't help me understand why it was written that way 25 years ago.
So I asked Grok. I was pleasantly surprised to find it understood 25-year-old elisp code just fine, explained when and how the functions had been rejiggered, and rewrote the code for the current standards. That was more than I had expected and well worth the time invested.
One other time Grok surprised me was asking how much of FDR's New Deal legislation would have passed if it had required 2/3 passage instead of just 1/2. Not only did it name the legislation which would not have passed, it also named all the legislation which had passed by voice vote and there was no way to know if 2/3 had voted for it. The couple of bills I checked did match and were not hallucinations. The voice vote business was a nice surprise.
I program now for fun, not professionally. The idea of "offshoring" the fun to AI doesn't interest me. But trying to find 25-year-old documentation and when it changed doesn't sound like fun, and I'm glad to know I can offshore at least some of the dreary parts.
Re: (Score:3)
I program now for fun, not professionally.
UID checks out.
Didn't see that one coming (Score:1)
Huh, what are the odds that MIT releases yet another paper with subjective contrarian views on productivity with AI?
There is a MASSIVE conflict of interest with these MIT papers here, and nobody's calling it out.
So yeah, okay, sure, MIT thinks:
- AI makes you dumber (with methodology nobody without a dedicated lab can duplicate)
- 95% of AI projects fail (using extremely rigid metrics and ignoring norms in the larger industry to reach conclusions, while including prototypes and showboat projects)
Re: Didn't see that one coming (Score:2)
Re: (Score:2)
But you don't. You didn't even read the paper. You're just an asshole.
Re: (Score:1)
Dude his whole post is like a guy in the middle of losing his life savings to a meme stock.
Deranged MBA in our midst (Score:1)
Dude, the only one with an agenda is you. Now take your monkey NFTs, goatseus coins, LLMs, and all your GPUs and go PHB somewhere else.
Gardening time (Score:3)
I've worked for myself as an independent developer for more than a decade now.
Apps and websites and I do well working on my own.
I'm getting old enough that the saying "you can't teach an old dog new tricks" is starting to make sense.
AI couldn't have come at a better time in my life.
As I've always warned young people thinking of getting into tech via higher education: the older you are, the less valuable you become. It's the complete opposite for other white-collar grad workers. You want the old experienced doctor, lawyer, accountant, etc., not the fresh-faced grad; unless you are hiring software devs.
Since I started using AI I've found I'm an order of magnitude more productive in my output and my overall success.
It's such a time saver my home looks fab this summer (I'm in NZ) as I've had so much spare time to enjoy gardening.
AI has knowledge. What it doesn't have is wisdom.
As long as you remember that and have the wisdom and intuition to know when it is wrong you can't lose.
Which gives me hope as an old-timer in this game.
Maybe, after all, I have what those other white-collar grad workers have that is most valuable:
experience and wisdom. Which is why AI is no threat to any of us right now.
Faster, no. Multi-tasking yes. (Score:2)
As a human, AI workflows let me have a life. I can let the agents knock out the easy things while I'm working on other tasks. I still need to design what's to be worked on, review the code, fix the boneheaded mistakes they make, etc. It's basically like having a junior developer assigned to you.
Which brings up an important point: junior developers need clear instructions/requirements, and so do AIs.
Re: Faster, no. Multi-tasking yes. (Score:2)
Re: (Score:2)
Re: (Score:2)
As a human, AI workflows let me have a life. I can let the agents knock out the easy things while I'm working on other tasks. I still need to design what's to be worked on, review the code, fix the boneheaded mistakes they make, etc. It's basically like having a junior developer assigned to you.
Every time I see someone talking about AI being a junior developer, I am quite certain they have never worked with a junior developer.
One thing is faster - increase of technical debt (Score:1)
I really do think coding using AI tools is a bit faster, or at least it seems that way to me, as most of the boring but lengthy work can be done faster by AI.
But I am also pretty sure it's VERY easy to rapidly incur technical debt, especially if you are telling AI to review its own work. Yeah, it will do some stuff, but who is to say the post-review fixes really make it better?
More than ever I think the right approach to coding with AI is to build up carefully crafted frameworks that are solid (maybe use AI to help
Re: (Score:1)
Hey you'll never replace an entire department with 1 junior dev and 5 managers talking like that buckaroo.
"Coding" is not software development (Score:2)
AI might make newbies faster at producing... something. Probably something full of bugs and security holes.
But it won't help non-newbies with software development, of which "coding" is a relatively minor part.
Re: (Score:1)
Still, people like to concentrate on coding -- probably because software matures a
Brittle tech (Score:3)
I've been playing with these genAI systems both as a code producer and as a helper on various tasks.
And overall, I find the models quite brittle unless they are fine-tuned on the precise task that you want.
The main problem that I see is that the tool is fundamentally string in, string out. But the strings could be absolutely anything, including completely insane things, without proper fine-tuning.
Today, I am writing a simple automatic typo correction tool. The difficult bits are making sure that the tool didn't crap out. I mean, it is easy to check that you actually got an answer from the tool. The problem is that sometimes the tool will tell you: "Sure, I can fix typos. Here is your text corrected: ". And so you probably have to toss that output out. But how do you figure out that it shat the bed? Well, you can't really; it is just as hard as the original task in some cases. So you bake in various heuristics, or you get a different LLM to check the work of the first one.
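A minimal sketch of the kind of heuristic I mean (the preamble list is illustrative, not exhaustive, and it assumes the model was told to return only the corrected text):

#include <array>
#include <iostream>
#include <string>

// Cheap sanity checks on an LLM "fix the typos" response: reject obvious
// chat preambles, and reject outputs whose length is wildly different from
// the input, since a pure typo fix should barely change the size.
bool looks_like_valid_correction(const std::string& input,
                                 const std::string& output) {
    static const std::array<std::string, 3> preambles = {
        "Sure, I can", "Here is your", "As an AI"};
    if (input.empty() || output.empty()) return false;
    for (const auto& p : preambles) {
        if (output.starts_with(p)) return false;  // model echoed chat filler
    }
    const double ratio = static_cast<double>(output.size()) /
                         static_cast<double>(input.size());
    return ratio > 0.8 && ratio < 1.25;
}

int main() {
    std::cout << looks_like_valid_correction(
                     "Teh quick brown fox", "The quick brown fox")
              << "\n";  // 1: plausible correction
    std::cout << looks_like_valid_correction(
                     "Teh quick brown fox",
                     "Sure, I can fix typos. Here is your text corrected: ")
              << "\n";  // 0: chatty preamble, not the corrected text
    return 0;
}

Of course, a check like this only catches the obvious failure modes; the subtle ones are exactly the cases where verification is as hard as the original task.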
At the end of the day, you really can't trust anything these tools do. They are way too erratic and unpredictable. And you should treat any of these tools as being possibly adversarial. It's exhausting to use, really.
Make up your minds (Score:2)
These wild swings, between AI thinning out the workforce and making all our jerbs obsolete on the one hand, and not being sure if AI is even useful on the other, are giving me a headache.
Re: (Score:2)
It'll do both... dumb management and bean-counter types will replace people with AI, and the AI will suck at actually getting work done.
Lose-lose!
Re: Make up your minds (Score:2)
Been using Claude.ai CLI (Score:2)
Been using the Claude CLI the last few weeks and it has definitely been a great assistant in working with Qt 6, C++, and QML. The CLI interface is one of the best interfaces I have ever seen, and its native use of markdown is ideal. I am still writing 90% of the code but Claude's a great way to get information on some aspects of the library that I'm not as familiar with. I'm not ready to set it loose with nothing but a specifications document yet.
I've had it port some code from OpenGL to QRhi (similar t
Bad study (Score:1)
AI fails the detail problem. (Score:2)
When using AI to quickly mock up small chunks of code I find it an accelerator. And I do mean small.
But when code scales up beyond simple systems or APIs, it falls apart pretty quickly. As the scale of the system grows, the requirements grow even faster. GDPR, PII, FIRB, NIST all start to pile up as code bases grow. AI lacks an understanding of the "business" need. So you get this blob of code out and then you have to spend large amounts of time understanding it so you can re-factor it because the AI engine
Excellent concern! (Score:1)
Look buddy that's something we'll figure out after the corpo-feudalist business plot is finished.
My productivity is up 5x. At least. (Score:2)
I use AI regularly, at least once or twice a week. It's a real productivity boost. It's completely replaced searching for me. It's basically an API expert I can talk to and get answers from in 20 seconds. Good stuff.
Example: I'm working on a bad code base of a legacy application. The backend is quite a mess which I don't really like to touch, so I push a lot of my new logic into our Postgres DB. I don't really like SQL, and anything beyond one or two joins I'd usually avoid. With progbuddy AI I'm doing triggers