Will Productivity Gains from AI-Generated Code Be Offset by the Need to Maintain and Review It? (zdnet.com) 95
ZDNet asks the million-dollar question. "Despite the potential for vast productivity gains from generative AI tools such as ChatGPT or GitHub Copilot, will technology professionals' jobs actually grow more complicated?"
People can now pump out code on demand in an abundance of languages, from Java to Python, along with helpful recommendations. Already, 95% of developers in a recent survey from Sourcegraph report they use Copilot, ChatGPT, and other gen AI tools this way.
But auto-generating new code only addresses part of the problem in enterprises that already maintain unwieldy codebases, and require high levels of cohesion, accountability, and security.
For starters, security and quality assurance tasks associated with software jobs aren't going to go away anytime soon. "For programmers and software engineers, ChatGPT and other large language models help create code in almost any language," says Andy Thurai, analyst with Constellation Research, before talking about security concerns. "However, most of the code that is generated is security-vulnerable and might not pass enterprise-grade code. So, while AI can help accelerate coding, care should be taken to analyze the code, find vulnerabilities, and fix it, which would take away some of the productivity increase that AI vendors tout about."
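An illustration of the pattern Thurai describes (a sketch of my own, not from the article): generated code frequently builds SQL queries by string interpolation, and the review step consists of catching that and switching to a parameterized query.

    import sqlite3

    # The kind of vulnerability commonly flagged in generated code:
    # a query assembled by string interpolation, open to SQL injection.
    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        query = f"SELECT id, name FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()  # crafted input can rewrite the query

    # The reviewed fix: a parameterized query, so input is data, never SQL.
    def find_user_safe(conn: sqlite3.Connection, username: str):
        return conn.execute(
            "SELECT id, name FROM users WHERE name = ?", (username,)
        ).fetchall()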
Then there's code sprawl. An analogy to the rollout of generative AI in coding is the introduction of cloud computing, which seemed to simplify application acquisition when first rolled out, and now means a tangle of services to be managed. The relative ease of generating code via AI will contribute to an ever-expanding codebase — what the Sourcegraph survey authors refer to as "Big Code". A majority of the 500 developers in the survey are concerned about managing all this new code, along with code sprawl, and its contribution to technical debt. Even before generative AI, close to eight in 10 say their codebase grew five times over the last three years, and a similar number struggle with understanding existing code generated by others.
So, the productivity prospects for generative AI in programming are a mixed bag.
Same question (Score:5, Insightful)
Same question asked when outsourcing coding to India. The answer is Yes. What you gain in cheap labor is lost when having to review and fix it.
Great analogy (Score:1)
Same question asked when outsourcing coding to India.
That seems like a really good comparison to me - having ChatGPT write code is a lot like offshoring code development, just without the lag in communications.
But the result you get has to be checked so much that its use seems limited to more confined areas - not so much "write me a whole website" as "write me a form for input with these values" or "write a bunch of test cases".
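For illustration, the "write a bunch of test cases" request typically yields something like this (a hypothetical example; slugify stands in for whatever function you pasted into the prompt):

    import pytest

    # Hypothetical function under test; in practice, whatever code you
    # pasted into the prompt.
    def slugify(title: str) -> str:
        return "-".join(title.lower().split())

    # The kind of table-driven test batch these tools produce reliably.
    @pytest.mark.parametrize("title, expected", [
        ("Hello World", "hello-world"),
        ("  leading and trailing  ", "leading-and-trailing"),
        ("ALL CAPS", "all-caps"),
        ("single", "single"),
    ])
    def test_slugify(title, expected):
        assert slugify(title) == expected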
Re:Same question (Score:5, Insightful)
I would never use AI generated code in production and I don't think anyone is. Code is arrived at iteratively in a process of discovery, and this built-in history that is critical is missing here.
It can't even be used for POC code for the same reason: by writing the POC you discover where the needs and the possibilities are.
AI generated code is great for making pong in Javascript and sharing on Twitter what generative AI can do.
Re: (Score:2)
I would, and have, used AI-generated code in production, in the same way that we all have used code snippets from Stack Overflow. All AI basically does is search Stack Overflow and regurgitate code samples it finds there (or in other similar code sites). That code is rarely production-ready, it has to be at least tweaked to fit within your own code base. It's useless to someone who is not a real programmer, but in the hands of a skilled developer, it can lead to big time savings.
Requests like "Write a C# fu
Re: (Score:2)
I would never use AI generated code in production and I don't think anyone is.
Then you're doing it wrong.
Lots of people are using AI generated code with great success.
Code is arrived at iteratively in a process of discovery, and this built-in history that is critical is missing here.
I'm not sure what you're getting at here, git history of past revisions? I rarely ever touch that.
The iterative process of making the feature? You still do that with AI.
It can't even be used for POC code for the same reason: by writing the POC you discover where the needs and the possibilities are.
AI generated code is great for making pong in Javascript and sharing on Twitter what generative AI can do.
It's not like you give it the JIRA ticket, paste in the result, and move on.
You figure out what the AI did, evaluate it for intent and correctness, see if it works, see if it's what you wanted after all, and move on.
Maybe it can do some relatively compli
Re: (Score:2)
That's a great point. Which language do you use it for?
Re: (Score:2)
Thanks, I'll check it out. I have only tried AI with C++ and wasn't happy with the results, but I do use Go from time to time and I'm intermediate there at best, so that kind of assistance might help.
Re: (Score:1, Insightful)
What you gain in cheap labor is lost when having to review and fix it.
And nobody cares. Which is why everything is shit now.
Nobody cares that the code written by Indian monkeys is shit. Nobody cares that products made by Chinese monkeys are shit. They saved a lot of money by outsourcing to third world monkeys, and the short term benefit of that is the only thing they care about.
Re: (Score:2)
Why would you "maintain" AI-generated code? Wouldn't you just regenerate it from scratch with a new model each time and run it through the automated test infrastructure?
It wouldn't need to be the entire project, but libraries or modules shouldn't be edited; they should be built anew.
Re: (Score:2)
That's OK. The AI-powered optimization tools will clean it up.
Re: (Score:2)
You create a test plan with test cases. It only passes if there are no failed tests.
If you find new failure criteria you add that to the list of tests.
The system should always be generating code based on the latest learning models. It should become more efficient over time, too.
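A minimal sketch of that loop (all names hypothetical; pytest stands in for whatever test runner the project uses):

    import subprocess

    # Gate: freshly regenerated code is accepted only if the accumulated
    # regression suite passes; every new failure criterion becomes a test.
    def accept_regenerated_module(test_dir: str = "tests/") -> bool:
        result = subprocess.run(["pytest", test_dir])
        return result.returncode == 0

    # A new failure criterion found in the field is added as a permanent
    # test case instead of hand-patching the generated code, e.g.:
    #
    #   def test_rejects_negative_quantities():
    #       with pytest.raises(ValueError):
    #           place_order(quantity=-1)   # place_order is hypothetical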
Re: (Score:2)
> So each time you will have new bugs and performance problems different from the previous.
To be fair that's been Microsoft's MO for years and they weren't even using AI.
Re: (Score:1)
Same question asked when outsourcing coding to India. The answer is Yes. What you gain in cheap labor is lost when having to review and fix it.
Nah you're just missing the bigger picture. All you have to do is say the magic word and all the problems go away. "Agile". Shit code from India isn't shit code from India. It's the "minimum viable product". A product that doesn't work isn't a poor release, it's a "live service".
It's all about perspective.
Someone has to (Score:2)
The software ain't gonna review itself.
Oh wait . . .
Re: (Score:3)
The software ain't gonna review itself. Oh wait . . .
I expect that to work about as well as the software verifying its own cited legal precedents.
Re: (Score:2)
I expect it will work better. If the only reason you are writing tests is that the customer (Google) insists on 90% coverage by unit tests, the AI can write the tests, generate the test coverage report, and then the contract can be signed and the garbage code is properly documented for compliance with the contract.
I'm waiting for an AI to write a test to verify a security flaw exists and for the test to be discovered in litigation after a security incident.
Broken as per the spec.
Re: (Score:2)
The software ain't gonna review itself.
Oh wait . . .
Especially if the only reason people are doing code reviews is to hit compliance checkboxes. They are definitely going to automate those reviews.
No (Score:5, Insightful)
I use those tools, and as an experienced developer, it is as if I really have a "high school level" assistant.
Writing documentation? In a snap:
"Can you document this piece of code"
Writing tests? Again:
"Can you write simple tests for this code"
Updating configs, writing basic imports, writing repetitive code...
It makes me several times faster.
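For instance (my own illustration, not the poster's code), "Can you document this piece of code" turns the first helper below into the second:

    # Before: the sort of repetitive, undocumented helper you paste in.
    def chunk(items, size):
        return [items[i:i + size] for i in range(0, len(items), size)]

    # After: the assistant's documented version, which you then skim and verify.
    def chunk_documented(items: list, size: int) -> list:
        """Split `items` into consecutive sublists of at most `size` elements.

        The final chunk is shorter when len(items) is not a multiple of size.
        Raises ValueError if size is not positive.
        """
        if size <= 0:
            raise ValueError("size must be positive")
        return [items[i:i + size] for i in range(0, len(items), size)]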
It can even introduce new libraries, but this is where AI starts falling short. That code only works some of the time, and requires a lot of tinkering. Still, it usually forms a good template to start from. And again, I know my way around fixing it.
So, AI is a tool like any other. Know its strengths and limits, and it will work for you.
Trust it blindly, and it will cause more pain than you can imagine.
Re:No (Score:5, Insightful)
Management will trust it blindly, therefore it will cause you more pain than you can possibly imagine.
I've been through a couple of Panaceas To The Great Problem, AI seems oddly familiar. It will work out to a useful tool, but along the way expectations will be a bit unrealistic.
Just wait for McKinsey to get rolling.
Re: (Score:2)
Management will trust it blindly, therefore it will cause you more pain than you can possibly imagine.
How?
You still need a dev to drive the tool, it's not like management can say "insert all the ChatGPT code without testing because it's so awesome" as your code won't even compile/run.
I can see management setting unrealistic deadlines, or pushing the tool when it doesn't make sense, but that's hardly a change from the status quo.
Re: (Score:2)
Management will trust it blindly, therefore it will cause you more pain than you can possibly imagine.
How?
You still need a dev to drive the tool, it's not like management can say "insert all the ChatGPT code without testing because it's so awesome" as your code won't even compile/run.
I can see management setting unrealistic deadlines, or pushing the tool when it doesn't make sense, but that's hardly a change from the status quo.
Management will assign the task to an intern who doesn't know any better.
Re: (Score:3)
> Trust it blindly, and it will cause more pain than you can imagine.
1 out of 10?
https://xkcd.com/883/ [xkcd.com]
This IS my experience (Score:5, Insightful)
I'm always so impressed when ChatGPT etc. churns out a bunch of code:
- At first it just FEELS like it does exactly what I want! I'm so happy! ...and then I notice that it's not quite working
- Then I look through the code and think to myself that I need to change some small things like var names, put in my content, etc
- Then I realize that some parts I don't understand
- I ask ChatGPT for some kind of explanation, and it apologizes and gives me a rewrite of part of the code
- This new part doesn't completely work with the old part, so I have to figure that out
- In the course of figuring it out, I realize that the part I don't understand is some bizarrely convoluted crap that, now that I've wrapped my mind around the issue, should be a simple one-liner
- I ask ChatGPT about this and it apologizes and gives me my one-liner
- I realize that all the formatting conventions and patterns ChatGPT recommended are not what I usually do, and that there are all kinds of subtle and hidden costs and assumptions -- there were a million hard-earned reasons I was coding MY way, and I had got carried away and forgot
- I realize that I just wasted a bunch of time
StackOverflow usually helps me wrap my mind around issues, ChatGPT makes me take a long, long route to get there
Re: (Score:2)
20-year industry veteran here.
> I realize that the part I don't understand is some bizarrely convoluted crap that, now that I've wrapped my mind around the issue, should be a simple one-liner
> I realize that I just wasted a bunch of time
I've been trying to use generative AI for the things I look up every time: dealing with dates/times, location, string encoding etc. The above is why I gave up and went back to just looking it up again.
If this kind of needlessly convoluted, not-quite-right code starts m
Re: (Score:2)
So would you say it will lead to larger amounts of code, produced more quickly, and at a lower quality?
Would you say that has any similarity to general trends in programming prior to this year?
Re: (Score:1)
> So would you say it will lead to larger amounts of code, produced more quickly, and at a lower quality?
'zactly.
> Would you say that has any similarity to general trends in programming prior to this year?
The only thing I can think of is in the early 2000s when outsourcing blew up and there was an 'IT Training Institute' on every other block in India's major cities (for example). So many enterprise orgs tried to save a few bucks, then paid them right back again when they had to call in consultants to
Re: (Score:2)
To be fair it doesn't *usually* choke so bad on a simple, bread and butter util
Re: (Score:2)
It's hard enough to keep tech debt at bay with a team of senior devs, I predict heavy LLM use will make it 10x more difficult and unpleasant.
Well, that is the best case. The worst case is that it will make stuff convoluted enough to be unmaintainable, because nobody can really understand anymore what it does. At that point you cannot even really rewrite it.
Re: (Score:2)
Same.
ChatGPT doesn't know the answers to hard questions.
It knows what's common on the internet: the answers to college homework and interview questions.
Re: (Score:2)
Same here, and even if the code did "work" -- to start with the best case -- I still wouldn't use it. It may pass, in the best case, a few tests I can think of, but there are still too many edge cases, which normally you guard against with logic and reasoning, and that is absent in its code. ChatGPT's logic is similar to the logic you experience in a dream: it feels right, but when you look deeper it is bizarre.
In my experience ChatGPT's only strength -- though an important one -- is it can bring to my awaren
Re: (Score:1)
What do you think about AI for automated testing? It seems like the AI could be trained on both existing tests and user behavior (from logs), and look for clear errors.
The AI would have much more patience than humans to check for defects that require a combination of factors to trigger.
Developers could then focus on the main scenarios, knowing that the AI safety net will check the outliers.
Simpsons predicted (Score:2)
"The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In either case, most of the actual fighting will be done by small robots. And as you go forth today remember always your duty is clear: To build and maintain those robots."
"Productivity gains" (Score:2)
There's no doubt that if you just want to throw together something simple and straightforward, that AI is a handy tool. But the more complex the thing is you want to do the more dangerously it fucks it up, often in subtle ways. So unless you're competent and capable of reviewing the code it emits, you're better off not using it at all.
That was the AI-friend way of putting the question (Score:2)
Will LAYOFFS from AI-generated code be DELAYED by the need to maintain and review it?
is what people really want to know.
With ~25yr of experience, I used ChatGPT... (Score:5, Interesting)
I had a conversation with ChatGPT to help me learn some new technologies.
It started out with a simple "I'm unfamiliar with this project structure - explain it to me". Then "Give me an example of code that does X". And "In this platform I'm an expert in, I'd do X with Y. How does Z accomplish Y?"
Much of the code it wrote was about 80% correct, enough for a veteran dev to quickly see the bugs and chat back with more questions to clarify, and to know what to look up.
In about 4 hours I had a working non-trivial implementation that *I* wrote. I never used GPT's code, but instead used it as a tutor to fire any questions I had at. It impressed the hell out of me for this purpose.
Re: (Score:2)
Even when it is (inevitably) wrong, the poorly-done part helps me think through what a good implementation looks like. I've also found it useful for certain types of boilerplate code, provided I am very precise with my prompt.
Like any tool, it is much more useful for a master than a novice.
Re: (Score:2)
Yes - this is the correct way to use ChatGPT: as a debugger for your mental process.
Basically, it's a more elegant Rubber Duck: https://en.wikipedia.org/wiki/Rubber_duck_debugging
Try to use it for anything else and it will fall short every time.
early days (Score:2)
There are a lot of valid criticisms of the code written by generative AI. But what will happen is that human-enhanced versions of that code will be fed back into the models, and the generated code will incrementally improve. This may not take very long depending on how much human effort is invested. Eventually the code will be hardened and (one would hope) reasonably easy to understand. Well and good I guess, programmers will write specifications and evaluate results. More work gets done and people's jobs a
Re: (Score:2)
There are a lot of valid criticisms of the code written by generative AI. But what will happen is that human-enhanced versions of that code will be fed back into the models, and the generated code will incrementally improve.
Unless they start paying good coders to produce this human-enhanced code, any incremental improvement will be very slow... if it happens at all.
Re: (Score:2)
The employers of coders are paying to get the un-enhanced code, so it would be to their benefit to up-submit the human-enhanced version in hopes of future improvement. Or there could be a quid pro quo. My AI will write code for you at no charge if you will submit the fixed versions back to me.
Re: (Score:2)
That seems to be OpenAI's strategy. They are currently paying lots (thousands) of coders to write high quality code to train on.
It's going to be an interesting experiment
Re: (Score:2)
Unlikely. You are basically postulating that coding can be done by only selecting elements from a catalog. If that were possible, we would already have it.
Re: (Score:2)
As I understand it, the generative AIs are being trained up with open source code and things like Stack Overflow. All this would do is provide increasingly better examples.
Re: (Score:2)
That would also assume people know how to improve what ChatAI offers to them. Observable evidence seems to not support that expectation in the general case. And ChatAI needs a _lot_ of training data to "lock on" to something. Hence GIGO would just continue.
Re: (Score:2)
I think you are overly pessimistic, these are early days. Code quality will improve and the current set of flaws will gradually be ironed out. And obviously there could be more focus on guardrails in the code and more appropriate testing before humans see it.
Also there is a lot of opportunity for 'cookbook' code that's similar across multiple applications and therefore more likely to get properly refined.
Re: (Score:2)
I think you are overly pessimistic, these are early days.
They are really not. The first failed attempt at something like this I witnessed was the 5GL project about 35 years back and I am pretty sure that was not the first attempt to generate code by some form of automation from user input. This has failed and failed and failed again and this time will not be different. Also, statistical approaches are the absolute _worst_ ones to code generation.
Re: (Score:2)
Things do change as time passes.
This appears to be on a different level from what happened decades or just a few years ago. People appear to be very impressed with what they are seeing even if you aren't.
Re: (Score:2)
Established Mathematics does not change. The only thing that has changed is that statistical approaches now can do very simple stuff but fail at things that a smart human would have needed a week or two to pick up. And actually, a lot of people are very _unimpressed_ with what they see, incidentally. You have to look for those that tried more than very simple generic stuff.
Re: (Score:2)
As I understand it, the generative AIs are being trained up with open source code and things like Stack Overflow. All this would do is provide increasingly better examples.
That was GPT-3, GPT-3.5, and GPT-4. OpenAI has hired thousands of programmers to write high-quality code for future AIs to train on. It will be interesting to see how that turns out.
Many have theorized that the poor quality of ChatGPT-generated code is because of the poor quality of the training data.
Re: (Score:2)
Easy to imagine, yes but if you take a hard look around all the AI tasks you will hardly find any that reach this level of autonomy. If we were to be conservative about our expectations we should conclude that AI has never been proven to work without human assistance on critical tasks and expect the future to be similar. It would be rather a mi
Re: (Score:2)
But what will happen is that human-enhanced versions of that code will be fed back into the models, and the generated code will incrementally improve.
Only if the generative algorithms are changed to allow for such, which comes with its own downside - less 'creativity'.
Essentially there are two competing goals at play when it comes to using LLM's for code generation: Slight randomisation from the 'most common' answer of a small selection of 'next words' in order to both create something novel and to avoid 'gibberish loops', versus complying with a programming language's syntax / rules.
Novel structure and syntax in informal language text can be 'a good thi
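A minimal sketch of the trade-off being described, using standard top-k sampling with temperature (illustrative only; not any vendor's actual decoder):

    import math
    import random

    def sample_next_token(logits: dict, k: int = 5, temperature: float = 0.8) -> str:
        """Pick the next token from the k highest-scoring candidates.

        Temperature near 0 almost always picks the single most likely token
        (predictable, loop-prone); higher values add the randomisation that
        yields novelty but can also break strict syntax rules.
        """
        top_k = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:k]
        weights = [math.exp(score / temperature) for _, score in top_k]
        return random.choices([tok for tok, _ in top_k], weights=weights)[0]

    # Example: ")" is by far the likeliest continuation, but a high
    # temperature still occasionally emits something else.
    print(sample_next_token({")": 9.0, ",": 6.5, "]": 3.0, "+": 1.0}))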
Reviewing someone else's code harder (Score:2)
I find it harder to review and understand someone else's code than to understand the workings of code I designed and coded myself. That's me -- since I have more of a vested interest in the code I designed and coded being correct than in looking for nits in an AI's code.
Thinking about that -- I'd have less interest in reviewing an AI's code than I would a fellow team member's code. It's a bit like the difference between playing a chess game against a person vs. a computer. Somehow the outcome against a computer
Re: (Score:2)
Generally, reviewing code for security is significantly harder than producing secure code with trusted people. And it takes longer. So not only is the person doing it more expensive, they also need to work on it longer. When clients ask for a general code security review because they do not trust the devs that wrote their code, I tell them to throw it away and rewrite it with trusted devs, because that is cheaper.
Re: (Score:2)
So, along the lines of saying the same thing with different words... If I trust myself, it's cheaper/easier to write trusted code (code that I trust) than it is to review code from an untrusted source. Seems like you are giving a more general rule about the difficulty of reviewing one's own code vs. someone else's?
I know when one uses the phrase 'trusted code' as in a TCB (Trusted Computing Base), there are differences in meaning vs. saying one trusts one's own code and thus regards it as trusted code (of some level).
Re: (Score:2)
Test cases do not make code trustworthy. Security problems often hide in border-cases that may not even happen at all in normal operation and typical test-cases, but that attackers can produce. Like a 1 in 100'000'000 timing condition that attackers can get down to 1 in 1000. Or in using the one special case missed in input validation that nobody thought to test for.
What makes code trustworthy is a) coders that know how to write secure code and that are careful and b) coders that are motivated to not attack
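A toy example of the border case being described (hypothetical code): a validator that passes every test anyone thought to write, yet accepts an input class only an attacker will try.

    # A filename check that passes all the obvious tests...
    def is_safe_filename(name: str) -> bool:
        return ".." not in name and not name.startswith("/")

    assert is_safe_filename("report.txt")            # normal case: accepted
    assert not is_safe_filename("../etc/passwd")     # known attack: rejected
    # ...but the one case nobody tested: Windows-style absolute paths
    # contain no ".." and no leading "/", so they sail straight through.
    assert is_safe_filename("C:\\Windows\\system32\\config")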
Can confirm (Score:4, Insightful)
I expect to see more security breaches in the future.
Re: (Score:2)
No surprise, really. The model used is just not fit to produce anything that needs any level of insight because it has none.
I also expect more security breaches coming from this. All it takes is for attackers to identify some general patterns in the security mistakes ChatAI makes and they are golden. Next stage is then to seed subtly insecure but good looking code to the net so the next generation of ChatAI eats it up and uses it.
Using ChatAI to generate production code is just a really abysmally bad idea.
Re: (Score:2)
> Using ChatAI to generate production code is just a really abysmally bad idea. It just shows the ones doing it have no clue how producing secure, reliable and maintainable code works.
That'll be the PHBs & CEOs who'll decide "we can save money doing this, which will get me a bonus, and, when it all goes horribly wrong, I'll be at another company anyway."
Betteridge's law (of headlines) strikes again (Score:2)
Re: (Score:2)
Oh crap.... I misread the question.
Slashdot has broken my brain. I don't know what to think anymore. Why are we even here? What is the meaning of life?
auto-generating new code... (Score:2)
Is the wrong use of the new AI tools
We need tools to help us manage complexity
We need tools that can help us find tricky bugs and unintended interactions
We need tools to help us visualize the operation of systems that are too complex to fit into one mind
We need tools to clean up the really, really old code that still performs important functions
We need tools that allow us to make better software
We need tools that allow us to create much more powerful and complex, bug-free code
We don't need tools that auto-g
No, not yet (Score:2)
Current AI code generation is still too weak. I've tested it with embedded C code (my background) and it appears to be happily generating code that does not work. I suspect it comes from analysing the typical stackexchange question of "here is my solution to problem X, why does it not work?" without parsing point 2.
Re: (Score:2)
That is more like a "not ever", because this thing is incapable of understanding. It just parrots the average of what seems statistically relevant.
Re: (Score:1)
That is more like a "not ever", because this thing is incapable of understanding. It just parrots the average of what seems statistically relevant.
Understanding was encoded during the training of the neural network. GANs used to minimize the loss function effectively cause a conceptual model representing the meaning of the training dataset to be compiled.
GPT4 is able to answer new questions it has not seen before for this very reason. It is exploiting knowledge of similar concepts. What's holding back the technology is business / computational cost of the service. It costs nearly a million dollars a day in computer time alone to execute the pre-trai
Douglas Adams got it right (Score:2)
This problem will expand exponentially when a machine starts to come up with new realizations, or even simple-looking new formulas, about the world we live in. Everything needs to be verified and the reasoning needs to be backtraced. Otherwise, we would not gain new understanding but blindly trust a black box.
Sure, the machine can walk us through its process, but we would only be on its leash. An option would be to let go of anthropocentrism and accept the presence of new intelligence (when it actually arises
Unless the "AI" starts providing references (Score:2)
you don't gain genuine knowledge from it that you don't have to review first.
Let me know this. (Score:1)
Please put a written disclaimer on ANY and ALL websites, programs, applications, etc that use this crap. I think that the people will demand it as we don't want to risk our money or privacy on a site that is 100% sure to be hacked by this unmonitored system.
Actually, it is worse (Score:3)
For code, not only will reviewing it cost you essentially more effort than producing it manually would have; you will also still get less quality, especially with regards to security, architecture, performance and other aspects. There really is no shortcut when it comes to work that requires understanding; get over it and do it right. Also, ChatAI can only do things it has seen often enough. Anything a bit more rare or more specialized, it cannot do at all. Example: ChatGPT was a complete failure for a simple firewall configuration with NAT when my students tried it.
For low-skill, no-insight white-collar work, ChatAI could work well. But this work is typically not "productive" in the first place, but rather consists of bureaucratic hassles that benefit nobody besides the bureaucrats doing it.
Re: (Score:2)
I still see value even in the code made by ChatGPT. It is good for prototyping and experimenting.
Re: (Score:2)
That probably comes from you not having enough experience with it yet. Give it some time.
No, of course not (Score:2)
Yes, a few will waste money on boondoggles, but we're all missing the forest for the trees.
What AI has done is convince pointy haired managers that automation works. They're now going top to bottom through their entire enterprise automating everything they can. Even before that we'd seen more job losses to automation than outsourcing [businessinsider.com] but now it's going to accelerate into
Re: (Score:2)
To paraphrase an old saying:
Adding an AI to a late software project will only make it later.
Also, when working with an AI, you are depending on the work of a "three year old child".
Re: (Score:2)
this is just silly. If it costs more to run a machine than it saves in labor you don't buy that machine. That's how business works.
Wow, now I think you're a fake. The way business works is the salesdroid convinces your boss to buy it, and then you get crucified on it.
AI is trash (Score:2)
Not a coder, but I have read a lot of undeclared AI-generated articles (masquerading as human-written blog posts) and they are garbage. At first I thought they were written by people from a non-English-speaking background with poor English skills, but once ChatGPT hit the mainstream, it all made sense: it didn't understand context or flow.
Ditto for AI-generated audio transcripts. You have to spend so much time with them, it's worth paying the real price to have a human do it.
AI Code is maintenance free (Score:2)
Ah, A Rare Post. (Score:1)
Don't forget 'rewrite' (Score:2)
Yes, like with all trainees (Score:2)
Until they have proven their value.
Just trust the magic 8-ball (Score:2)
What could possibly go wrong?
Amdahl's Law (Score:2)
The overall improvement from optimizing a part of a system is limited by the fraction of the system that the part represents.
Validating code is ALREADY harder than generating code. Optimizing the generation of code will have limited effect on overall productivity.
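With illustrative numbers (the 30% fraction is an assumption, not survey data), the arithmetic looks like this:

    def amdahl_speedup(fraction: float, factor: float) -> float:
        """Overall speedup when only `fraction` of the work gets `factor`x faster."""
        return 1 / ((1 - fraction) + fraction / factor)

    # If writing new code is 30% of a programmer's job and AI makes that
    # part 3x faster, the job as a whole speeds up by only ~25%.
    print(round(amdahl_speedup(0.30, 3.0), 2))  # 1.25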
If I didn't write it, I don't really understand it (Score:2)
When I've written a codebase I can fix bugs and add new features really quickly because I know what the code does and how all the pieces interact. Things slow down when I have to modify somebody else's code - including the AI-generated kind.
When it comes to productivity, you need to ask: What fraction of a programmer's time is spent writing new code vs modifying existing code? And what fraction of the code they're modifying was written by them in the first place?
If programmers are mostly modifying code that
Just used it last night (Score:2)
I was toying around with an idea for a personal project last night, so I asked for some boilerplate code to hit a particular API using Python. Out popped a decent-at-a-glance result.
Impressed, I asked it to use the 1Password CLI to populate the credentials, rather than baking them into the code. It again spat out something that looked reasonable to me (a person unfamiliar with any of these specific tools) at a glance, but it included a pipe to a command I didn’t immediately recognize that seemed to be
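For context, the boilerplate described probably looked something like this sketch (the API URL and secret reference are invented; "op read" is the 1Password CLI's secret-retrieval command):

    import subprocess
    import urllib.request

    # Resolve the API token at runtime via the 1Password CLI instead of
    # hardcoding it; the op:// vault/item/field path is a made-up example.
    def get_api_token() -> str:
        result = subprocess.run(
            ["op", "read", "op://personal/example-api/credential"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    def call_api(url: str = "https://api.example.com/v1/items") -> bytes:
        req = urllib.request.Request(
            url, headers={"Authorization": f"Bearer {get_api_token()}"}
        )
        with urllib.request.urlopen(req) as resp:
            return resp.read()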
Hallucinations (Score:2)
At this point, I would most definitely agree you need it. If you use these generative AI for creating/generating content, you most definitely need someone to fact check EVERY LITTLE THING. Even something as simple as giving a model a "PDF" to read, i.e. "To Kill a Mockingbird", if you ask how many children does "Tom Robinson" have, depending on the model and the LLM, you could get very different answers, even though you literally ask it to read the PDF and yet it will generate an answer sourced from its pre
Has any *experienced* programmer audited the code? (Score:2)
Has anyone here, with more than five years as a programmer, who is familiar with and USES modular code, audited any major code generated by the AI?
Or is it all spaghetti code?
AI is a fad for stupid people (*kind of...) (Score:2)
ChatGPT and CoPilot are those tapes, and well they can certainly walk you through simple tasks, when you have to start grilling Elk, or Pheasant, they won't help you. The moment the problem b