AI Programming

Will Productivity Gains from AI-Generated Code Be Offset by the Need to Maintain and Review It? (zdnet.com) 95

ZDNet asks the million-dollar question. "Despite the potential for vast productivity gains from generative AI tools such as ChatGPT or GitHub Copilot, will technology professionals' jobs actually grow more complicated?" People can now pump out code on demand in an abundance of languages, from Java to Python, along with helpful recommendations. Already, 95% of developers in a recent survey from Sourcegraph report they use Copilot, ChatGPT, and other gen AI tools this way.

But auto-generating new code only addresses part of the problem in enterprises that already maintain unwieldy codebases, and require high levels of cohesion, accountability, and security.

For starters, security and quality assurance tasks associated with software jobs aren't going to go away anytime soon. "For programmers and software engineers, ChatGPT and other large language models help create code in almost any language," says Andy Thurai, analyst with Constellation Research, before talking about security concerns. "However, most of the code that is generated is security-vulnerable and might not pass enterprise-grade code. So, while AI can help accelerate coding, care should be taken to analyze the code, find vulnerabilities, and fix it, which would take away some of the productivity increase that AI vendors tout about."

Then there's code sprawl. An analogy to the rollout of generative AI in coding is the introduction of cloud computing, which seemed to simplify application acquisition when first rolled out, and now means a tangle of services to be managed. The relative ease of generating code via AI will contribute to an ever-expanding codebase — what the Sourcegraph survey authors refer to as "Big Code". A majority of the 500 developers in the survey are concerned about managing all this new code, along with code sprawl, and its contribution to technical debt. Even before generative AI, close to eight in 10 say their codebase grew five times over the last three years, and a similar number struggle with understanding existing code generated by others.

So, the productivity prospects for generative AI in programming are a mixed bag.

  • Same question (Score:5, Insightful)

    by Anonymous Coward on Sunday June 11, 2023 @12:41PM (#63593388)

    Same question asked when outsourcing coding to India. The answer is Yes. What you gain in cheap labor is lost when having to review and fix it.

    • Same question asked when outsourcing coding to India.

      That seems like a really good comparison to me - having ChatGPT write code is a lot like offshoring code development, just without the lag in communications.

      But the result you get has to be checked so much that its use seems limited to more confined areas - not so much "write me a whole website" as "Write me a form for input with these values" or "write a bunch of test cases".

    • Re:Same question (Score:5, Insightful)

      by iMadeGhostzilla ( 1851560 ) on Sunday June 11, 2023 @12:55PM (#63593406)

      I would never use AI generated code in production and I don't think anyone is. Code is arrived at iteratively in a process of discovery, and that critical built-in history is missing here.

      It can't even be used for POC code for the same reason: by writing the POC you discover where the needs and the possibilities are.

      AI generated code is great for making pong in Javascript and sharing on Twitter what generative AI can do.

      • I would, and have, used AI-generated code in production, in the same way that we all have used code snippets from Stack Overflow. All AI basically does is search Stack Overflow and regurgitate code samples it finds there (or in other similar code sites). That code is rarely production-ready, it has to be at least tweaked to fit within your own code base. It's useless to someone who is not a real programmer, but in the hands of a skilled developer, it can lead to big time savings.

        Requests like "Write a C# fu

      • I would never use AI generated code in production and I don't think anyone is.

        Then you're doing it wrong.

        Lots of people are using AI generated code with great success.

        Code is arrived at iteratively in a process of discovery, and this built-in history that is critical is missing here.

        I'm not sure what you're getting at here, git history of past revisions? I rarely ever touch that.

        The iterative process of making the feature? You still do that with AI.

        It can't even be used for POC code for the same reason: by writing the POC you discover where the needs and the possibilities are.

        AI generated code is great for making pong in Javascript and sharing on Twitter what generative AI can do.

        It's not like you give it the JIRA ticket, paste in the result, and move on.

        You figure out what the AI did, evaluate it for intent and correctness, see if it works, see if it's what you wanted after all, and move on.

        Maybe it can do some relatively compli

      • What degree of code, though? Because intelligent autocomplete like Copilot is AI generated code. Short fragments of code like tiny functions or boilerplate, error handling, generically formatted error messages, etc. are all cases that such AI does *really* well. It generates such a tiny amount of code that reviewing it is *very* quick. I consider it very different from the often imagined hypothetical of telling an AI "please implement this feature for me". My experience is that such small fragments of AI
        • That's a great point. Which language do you use it for?

          • Mostly Go. If you've seen Go code, it has quite verbose error handling, as you must explicitly check if an error variable is nil and typically return if so (often wrapping with a message). AI is good at trivially completing stuff like that. Go also likes to have table driven unit tests that have a rather common structure. I've also found AI to be good at completing parameters to commonly used functions, logging strings, Go's "assertions" (which are actually usually just regular if statements followed by `t.
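For readers who haven't worked in Go, here is a minimal sketch of the two patterns described in the comment above: the explicit check-and-wrap error handling and a table-driven test. The function and test cases are hypothetical, invented purely for illustration (the whole thing compiles as a single `_test.go` file):

```go
package calc

import (
	"fmt"
	"testing"
)

// ParsePercent turns a string such as "42" into a fraction between 0 and 1.
// The explicit error check below, wrapping the error with context and
// returning early, is the verbose pattern that completion tools fill in well.
func ParsePercent(s string) (float64, error) {
	var n int
	if _, err := fmt.Sscanf(s, "%d", &n); err != nil {
		return 0, fmt.Errorf("parsing percent %q: %w", s, err)
	}
	return float64(n) / 100, nil
}

// TestParsePercent is a table-driven test: a slice of named cases walked in a
// loop, the repetitive structure common in Go projects.
func TestParsePercent(t *testing.T) {
	cases := []struct {
		name    string
		in      string
		want    float64
		wantErr bool
	}{
		{"simple", "42", 0.42, false},
		{"zero", "0", 0, false},
		{"garbage", "abc", 0, true},
	}
	for _, tc := range cases {
		got, err := ParsePercent(tc.in)
		if (err != nil) != tc.wantErr {
			t.Errorf("%s: unexpected error: %v", tc.name, err)
			continue
		}
		if !tc.wantErr && got != tc.want {
			t.Errorf("%s: got %v, want %v", tc.name, got, tc.want)
		}
	}
}
```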
            • Thanks, I'll check it out. I have only tried AI with C++ and wasn't happy with the results, but I do use Go from time to time and I'm intermediate there at best, so that kind of assistance might help.

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      What you gain in cheap labor is lost when having to review and fix it.

      And nobody cares. Which is why everything is shit now.

      Nobody cares that the code written by Indian monkeys is shit. Nobody cares that products made by Chinese monkeys are shit. They saved a lot of money by outsourcing to third world monkeys, and the short term benefit of that is the only thing they care about.

    • by NFN_NLN ( 633283 )

      Why would you "maintain" AI generated code? Wouldn't you just regenerate it from scratch each time and run it through the automated test infrastructure?

      It wouldn't need to be the entire project, but libraries or modules shouldn't be edited; they should be built new.

      • So each time you will have new bugs and performance problems different from the previous ones.
        • That's OK. The AI-powered optimization tools will clean it up.

        • by NFN_NLN ( 633283 )

          You create a test plan with test cases. It only passes if there are no failed tests.
          If you find new failure criteria, you add them to the list of tests.

          The system should always be generating code based on the latest learning models. It should become more efficient over time too.

        • by NFN_NLN ( 633283 )

          > So each time you will have new bugs and performance problems different from the previous.

          To be fair that's been Microsoft's MO for years and they weren't even using AI.

    • Same question asked when outsourcing coding to India. The answer is Yes. What you gain in cheap labor is lost when having to review and fix it.

      Nah you're just missing the bigger picture. All you have to do is say the magic word and all the problems go away. "Agile". Shit code from India isn't shit code from India. It's the "minimum viable product". A product that doesn't work isn't a poor release, it's a "live service".

      It's all about perspective.

  • The software ain't gonna review itself.

    Oh wait . . .

    • The software ain't gonna review itself. Oh wait . . .

      I expect that to work about as well as the software verifying its own cited legal precedents.

      • by micheas ( 231635 )

        I expect it will work better. If the only reason you are writing tests is that the customer (Google) insists on 90% coverage by unit tests, the AI can write the tests, generate the test coverage report, and then the contract can be signed and the garbage code is properly documented for compliance with the contract.

        I'm waiting for an AI to write a test to verify a security flaw exists and for the test to be discovered in litigation after a security incident.

        Broken as per the spec.

    • by micheas ( 231635 )

      The software ain't gonna review itself.

      Oh wait . . .

      Especially if the only reason people are doing code reviews is to hit compliance checkboxes. They are definitely going to automate those reviews.

  • No (Score:5, Insightful)

    by stikves ( 127823 ) on Sunday June 11, 2023 @01:00PM (#63593412) Homepage

    I use those tools, and as an experienced developer, it is as if I really have a "high school level" assistant.

    Writing documentation? In a snap:
    "Can you document this piece of code"

    Writing tests? Again:
    "Can you write simple tests for this code"

    Updating configs, writing basic imports, writing repetitive code...

    It makes me several times faster.

    It can even introduce new libraries, but this is where AI starts falling short. That code only works some of the time and requires a lot of tinkering. Still, it usually forms a good template to start from. And again, I know my way around fixing it.

    So, AI is a tool like any other. Know its strengths and limits, and it will work for you.
    Trust it blindly, and it will cause more pain than you can imagine.

    • Re:No (Score:5, Insightful)

      by Mspangler ( 770054 ) on Sunday June 11, 2023 @01:45PM (#63593480)

      Management will trust it blindly, therefore it will cause you more pain than you can possibly imagine.

      I've been through a couple of Panaceas To The Great Problem; AI seems oddly familiar. It will work out to be a useful tool, but along the way expectations will be a bit unrealistic.

      Just wait for McKinsey to get rolling.

      • Management will trust it blindly, therefore it will cause you more pain than you can possibly imagine.

        How?

        You still need a dev to drive the tool, it's not like management can say "insert all the ChatGPT code without testing because it's so awesome" as your code won't even compile/run.

        I can see management asking for unrealistic deadlines, or pushing the tool when it doesn't make sense, but that's hardly a change from the status quo.

        • by micheas ( 231635 )

          Management will trust it blindly, therefore it will cause you more pain than you can possibly imagine.

          How?

          You still need a dev to drive the tool, it's not like management can say "insert all the ChatGPT code without testing because it's so awesome" as your code won't even compile/run.

          I can see management asking for unrealistic deadlines, or pushing the tool when it doesn't make sense, but that's hardly a change from the status quo.

          Management will assign the task to an intern who doesn't know any better.

        It seems the 7 months' worth of ChatGPT experience we have under our belts has calibrated our expectations pretty well. As an ML engineer I was already aware of many AI issues even before, but it's been fascinating to see everyone catching up fast.
    • by dvice ( 6309704 )

      > Trust it blindly, and it will cause more pain than you can imagine.

      1 out of 10?
      https://xkcd.com/883/ [xkcd.com]

    • Strongly agreed. I've used such tooling and love it. The kinds of things it generates are mostly straightforward stuff that I can instantly recognize as being what I want or not. It usually just takes away the boring stuff for me. And if it suggests the wrong thing, I just don't accept the suggestion and keep on typing. I'm very confident that I've saved much more time than it's cost me, and it also makes the job more fun by letting me focus on interesting stuff.
  • by jamienk ( 62492 ) on Sunday June 11, 2023 @01:01PM (#63593416)

    I'm always so impressed when ChatGPT etc. churns out a bunch of code:

    - At first it just FEELS like it does exactly what I want! I'm so happy!
    - Then I look through the code and think to myself that I need to change some small things like var names, put in my content, etc
    - Then I realize that some parts I don't understand
    - ...and that it's not quite working
    - I ask ChatGPT for some kind of explanation, and it apologizes and gives me a rewrite of part of the code
    - This new part doesn't completely work with the old part, so I have to figure that out
    - In the course of figuring it out, I realize that the part I don't understand is some bizarrely convoluted crap that, now that I've wrapped my mind around the issue, should be a simple one-liner
    - I ask ChatGPT about this and it apologizes and gives me my one-liner
    - I realize that all the formatting conventions and patterns ChatGPT recommended are not what I usually do, and that there are all kinds of subtle and hidden costs and assumptions -- there were a million hard-earned reasons I was coding MY way, and I had got carried away and forgot
    - I realize that I just wasted a bunch of time

    StackOverflow usually helps me wrap my mind around issues; ChatGPT makes me take a long, long route to get there.

    • 20-year industry veteran here.

      > I realize that the part I don't understand is some bizarrely convoluted crap that, now that I've wrapped my mind around the issue, should be a simple one-liner
      > I realize that I just wasted a bunch of time

      I've been trying to use generative AI for the things I look up every time: dealing with dates/times, location, string encoding etc. The above is why I gave up and went back to just looking it up again.

      If this kind of needlessly convoluted, not-quite-right code starts m

      • So would you say it will lead to larger amounts of code, produced more quickly, and at a lower quality?

        Would you say that has any similarity to general trends in programming prior to this year?

        • > So would you say it will lead to larger amounts of code, produced more quickly, and at a lower quality?

          'zactly.

          > Would you say that has any similarity to general trends in programming prior to this year?

          The only thing I can think of is in the early 2000s when outsourcing blew up and there was an 'IT Training Institute' on every other block in India's major cities (for example). So many enterprise orgs tried to save a few bucks, then paid them right back again when they had to call in consultants to

      • When asked for a simple routine to convert a date to a Windows FILETIME, I had to check the URL and make sure it wasn't CrackGPT, because smoking crack is the only way I could think of it coming up with this batshit xor and bitshifting solution involving the normal Unix epoch offset format and some obscure non-Unix epoch starting in 1600. It wouldn't even compile, much less work. Several parts weren't even close to valid syntax.
        To be fair it doesn't *usually* choke so bad on a simple, bread and butter util
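For context, the conversion the commenter asked for is just an offset and a scale, with no XOR or bit-shifting involved. A minimal sketch in Go, using the well-known constants (FILETIME counts 100-nanosecond ticks since January 1, 1601 UTC, which is 11644473600 seconds before the Unix epoch):

```go
package main

import (
	"fmt"
	"time"
)

// unixToFiletime converts a Unix timestamp in seconds to a Windows FILETIME
// value: the number of 100-nanosecond intervals since 1601-01-01 UTC.
func unixToFiletime(unixSeconds int64) uint64 {
	const epochDiffSeconds = 11644473600 // seconds from 1601-01-01 to 1970-01-01
	const ticksPerSecond = 10_000_000    // one FILETIME tick is 100 ns
	return uint64(unixSeconds+epochDiffSeconds) * ticksPerSecond
}

func main() {
	t := time.Date(2023, time.June, 11, 0, 0, 0, 0, time.UTC)
	fmt.Printf("FILETIME for %s: %d\n", t, unixToFiletime(t.Unix()))
}
```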
      • by gweihir ( 88907 )

        It's hard enough to keep tech debt at bay with a team of senior devs, I predict heavy LLM use will make it 10x more difficult and unpleasant.

        Well, that is the best case. The worst case is that it will make stuff convoluted enough as to be unmaintainable because nobody really can understand anymore what it does. At that point you can not even really rewrite it.

      • Same.

        ChatGPT doesn't know the answers to hard questions.

        It knows what's common on the internet: the answers to college homework and interview questions.

    • Same here, and even if the code did "work" -- to start with the best case -- I still wouldn't use it. It may pass, in the best case, a few tests I can think of, but there are still too many edge cases, which you would normally guard against with logic and reasoning, and that is absent in its code. ChatGPT's logic is similar to the logic you experience in a dream: it feels right, but when you look deeper it is bizarre.

      In my experience ChatGPT's only strength -- though an important one -- is it can bring to my awaren

    • What do you think about AI for automated testing? It seems like the AI could be trained on both existing tests and user behavior (from logs), and look for clear errors.

      The AI would have much more patience than humans to check for defects that require a combination of factors to trigger.

      Developers could then focus on the main scenarios, knowing that the AI safety net will check the outliers.

  • "The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In either case, most of the actual fighting will be done by small robots. And as you go forth today remember always your duty is clear: To build and maintain those robots."

  • There's no doubt that if you just want to throw together something simple and straightforward, AI is a handy tool. But the more complex the thing you want to do, the more dangerously it fucks it up, often in subtle ways. So unless you're competent and capable of reviewing the code it emits, you're better off not using it at all.

  • Will LAYOFFS from AI-generated code be DELAYED by the need to maintain and review it

    is what people really want to know.

  • I had a conversation with ChatGPT to help me learn some new technologies.

    It started out with a simple "I'm unfamiliar with this project structure - explain it to me". Then "Give me an example of code that does X". And "In this platform I'm an expert in, I'd do X with Y. How does Z accomplish Y?"

    Much of the code it wrote was about 80% correct, enough for a veteran dev to quickly see the bugs and chat back with more questions to clarify, and to know what to look up.

    In about 4 hours I had a working non-trivial implementation that *I* wrote. I never used GPT's code, but instead used it as a tutor to fire any questions I had at. It impressed the hell out of me for this purpose.

    • My experience has been very similar. LLMs like GitHub Copilot are great for getting 80% of the way there. It's like a writing prompt; not the finished product, but it helps me think about how to tackle the problem.

      Even when it is (inevitably) wrong, the poorly-done part helps me think through what a good implementation looks like. I've also found it useful for certain types of boilerplate code, providing I am very precise with my prompt.

      Like any tool, it is much more useful for a master than a novice.
    • by CAIMLAS ( 41445 )

      Yes - this is the correct way to use ChatGPT: as a debugger for your mental process.

      Basically, it's a more elegant Rubber Duck: https://en.wikipedia.org/wiki/Rubber_duck_debugging

      Try to use it for anything else and it will fall short every time.

  • There are a lot of valid criticisms of the code written by generative AI. But what will happen is that human-enhanced versions of that code will be fed back into the models, and the generated code will incrementally improve. This may not take very long depending on how much human effort is invested. Eventually the code will be hardened and (one would hope) reasonably easy to understand. Well and good I guess, programmers will write specifications and evaluate results. More work gets done and people's jobs a

    • There are a lot of valid criticisms of the code written by generative AI. But what will happen is that human-enhanced versions of that code will be fed back into the models, and the generated code will incrementally improve.

      Unless they start paying good coders to produce this human-enhanced code, any incremental improvement will be very slow... if it happens at all.

      • The employers of coders are paying to get the un-enhanced code, so it would be to their benefit to up-submit the human-enhanced version in hopes of future improvement. Or there could be a quid pro quo. My AI will write code for you at no charge if you will submit the fixed versions back to me.

        • There is a new trend for companies to train their own copilot models on the internal codebase and support issues. This makes suggestions much better and creates a situation where junior employees learn indirectly from their seniors. Companies love that because it reduces the cost of continual churn and speeds up onboarding. This trickle-down process is much more efficient when focused on a single company.
      • by micheas ( 231635 )

        That seems to be OpenAI's strategy. They are currently paying lots (thousands) of coders to write high quality code to train on.

        It's going to be an interesting experiment

    • by gweihir ( 88907 )

      Unlikely. You are basically postulating that coding can be done by only selecting elements from a catalog. If that were possible, we would already have it.

      • As I understand it, the generative AI's are being trained up with open source code and things like Stack Overflow. All this would do is provide increasingly better examples.

        • by gweihir ( 88907 )

          That would also assume people know how to improve what ChatAI offers to them. Observable evidence seems to not support that expectation in the general case. And ChatAI needs a _lot_ of training data to "lock on" to something. Hence GIGO would just continue.

          • I think you are overly pessimistic, these are early days. Code quality will improve and the current set of flaws will gradually be ironed out. And obviously there could be more focus on guardrails in the code and more appropriate testing before humans see it.

            Also there is a lot of opportunity for 'cookbook' code that's similar across multiple applications and therefore more likely to get properly refined.

            • by gweihir ( 88907 )

              I think you are overly pessimistic, these are early days.

              They are really not. The first failed attempt at something like this I witnessed was the 5GL project about 35 years back and I am pretty sure that was not the first attempt to generate code by some form of automation from user input. This has failed and failed and failed again and this time will not be different. Also, statistical approaches are the absolute _worst_ ones to code generation.

              • Things do change as time passes.

                This appears to be on a different level from what happened decades or just a few years ago. People appear to be very impressed with what they are seeing even if you aren't.

                • by gweihir ( 88907 )

                  Established Mathematics does not change. The only thing that has changed is that statistical approaches now can do very simple stuff but fail at things that a smart human would have needed a week or two to pick up. And actually, a lot of people are very _unimpressed_ with what they see, incidentally. You have to look for those that tried more than very simple generic stuff.

        • by micheas ( 231635 )

          As I understand it, the generative AI's are being trained up with open source code and things like Stack Overflow. All this would do is provide increasingly better examples.

          That was the case for GPT-3, GPT-3.5 and GPT-4. OpenAI has hired thousands of programmers to write high-quality code for future AIs to train on. It will be interesting to see how that turns out.

          Many have theorized that the poor quality of ChatGPT-generated code is because of the poor quality of the training data.

    • > It's easy to imagine that the generative AI's will eventually write highly optimized code that works exceptionally well but is hard for humans to understand.

      Easy to imagine, yes but if you take a hard look around all the AI tasks you will hardly find any that reach this level of autonomy. If we were to be conservative about our expectations we should conclude that AI has never been proven to work without human assistance on critical tasks and expect the future to be similar. It would be rather a mi
    • by Whibla ( 210729 )

      But what will happen is that human-enhanced versions of that code will be fed back into the models, and the generated code will incrementally improve.

      Only if the generative algorithms are changed to allow for such, which comes with its own downside - less 'creativity'.

      Essentially there are two competing goals at play when it comes to using LLMs for code generation: slight randomisation from the 'most common' answer among a small selection of 'next words', in order both to create something novel and to avoid 'gibberish loops', versus complying with a programming language's syntax / rules.

      Novel structure and syntax in informal language text can be 'a good thi
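To make the first of those two competing goals concrete, here is a toy sketch of temperature-based sampling over a handful of next-token candidates. It only illustrates the "slight randomisation from the most common answer" idea; it is not how any particular model or product is implemented, and all names and numbers are invented:

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// candidate is a possible next token together with the model's raw score.
type candidate struct {
	token string
	logit float64
}

// sampleNext picks one candidate at random, weighted by a softmax over the
// temperature-scaled scores. A low temperature almost always picks the most
// common token; a high temperature flattens the distribution and makes the
// less common (more "novel") choices more likely.
func sampleNext(cands []candidate, temperature float64, rng *rand.Rand) string {
	weights := make([]float64, len(cands))
	var sum float64
	for i, c := range cands {
		weights[i] = math.Exp(c.logit / temperature)
		sum += weights[i]
	}
	r := rng.Float64() * sum
	for i, w := range weights {
		if r < w {
			return cands[i].token
		}
		r -= w
	}
	return cands[len(cands)-1].token
}

func main() {
	rng := rand.New(rand.NewSource(1))
	cands := []candidate{{"err", 3.0}, {"return", 1.5}, {"banana", 0.1}}
	for _, temp := range []float64{0.2, 1.0, 2.0} {
		fmt.Printf("T=%.1f: %s %s %s\n", temp,
			sampleNext(cands, temp, rng), sampleNext(cands, temp, rng), sampleNext(cands, temp, rng))
	}
}
```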

  • I find it harder to review and understand someone else's code than to understand the workings of code I designed and coded. That's me -- since I have a more vested interest in the code I designed and coded being correct than in looking for nits in an AI's code.

    Thinking about that -- I'd have less interest in reviewing an AI's code than I would a fellow team member's code. It's a bit like the difference between playing a chess game against a person vs. a computer. Somehow the outcome against a computer

    • by gweihir ( 88907 )

      Generally, reviewing code for security is significantly harder than producing secure code with trusted people. And it takes longer. So not only is the person doing it more expensive, they also need to work on it longer. When clients ask for a general code security review because they do not trust the devs that wrote their code, I tell them to throw it away and rewrite it with trusted devs, because that is cheaper.

      • by lpq ( 583377 )

        So, along the lines of saying the same thing w/different words... If I trust myself, it's cheaper/easier to write trusted code (code that I trust) than it is to review code from an untrusted source. Seems like you are giving a more general rule about the difficulty of reviewing one's own code vs. someone else's?

        I know when one uses the phrase 'trusted code' as in a TCB (Trusted Code Base), there are differences in meaning vs. saying one trusts one's own code and thus regards it as trusted code (of some level).

        • by gweihir ( 88907 )

          Test cases do not make code trustworthy. Security problems often hide in border-cases that may not even happen at all in normal operation and typical test-cases, but that attackers can produce. Like a 1 in 100'000'000 timing condition that attackers can get down to 1 in 1000. Or in using the one special case missed in input validation that nobody thought to test for.

          What makes code trustworthy is a) coders that know how to write secure code and that are careful and b) coders that are motivated to not attack
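As an illustration of "the one special case missed in input validation", here is a classic, deliberately flawed example in Go (the function is hypothetical): a sanitizer that strips "../" in a single pass, passes the obvious test, and is bypassed by the input nobody thought to try:

```go
package main

import (
	"fmt"
	"strings"
)

// naiveSanitize strips "../" once, on the assumption that this removes any
// path traversal. That assumption is the missed special case.
func naiveSanitize(p string) string {
	return strings.ReplaceAll(p, "../", "")
}

func main() {
	// The obvious case passes review and testing.
	fmt.Println(naiveSanitize("../etc/passwd")) // prints "etc/passwd"

	// The case nobody tested: removing "../" from "....//" leaves "../"
	// behind, so the traversal survives the "sanitization".
	fmt.Println(naiveSanitize("....//etc/passwd")) // prints "../etc/passwd"
}
```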

  • Can confirm (Score:4, Insightful)

    by memory_register ( 6248354 ) on Sunday June 11, 2023 @02:12PM (#63593508)
    I am already seeing this problem manifest with some clients. Easy code generation is leading clients to expect code faster, not realizing that the devil is in the details and AI leaves vulnerabilities all over the place because it is not intelligent - it is just very fancy statistics.

    I expect to see more security breaches in the future.
    • by gweihir ( 88907 )

      No surprise, really. The model used is just not fit to produce anything that needs any level of insight because it has none.

      I also expect more security breaches coming from this. All it takes is for attackers to identify some general patterns in the security mistakes ChatAI makes and they are golden. Next stage is then to seed subtly insecure but good looking code to the net so the next generation of ChatAI eats it up and uses it.

      Using ChatAI to generate production code is just a really abysmally bad idea.

      • >Using ChatAI to generate production code is just a really abysmally bad idea. It just shows the ones doing it have no clue how producing secure, reliable and maintainable code works.

        That'll be the PHBs & CEOs who'll decide "we can save money doing this, which will get me a bonus, and, when it all goes horribly wrong, I'll be at another company anyway.".

  • You know, amid the turmoils of life, it's actually kind of comforting that one can still rely on some things in it, even simple ones, to remain reliably constant.
    • by mark-t ( 151149 )

      Oh crap.... I misread the question.

      Slashdot has broken my brain. I don't know what to think anymore. Why are we even here? What is the meaning of life?

  • Is the wrong use of the new AI tools
    We need tools to help us manage complexity
    We need tools that can help us find tricky bugs and unintended interactions
    We need tools to help us visualize the operation of systems that are too complex to fit into one mind
    We need tools to clean up the really, really old code that still performs important functions
    We need tools that allow us to make better software
    We need tools that allow us to create much more powerful and complex, bug-free code
    We don't need tools that auto-g

  • Current AI code generation is still too weak. I've tested it with embedded C code (my background) and it appears to happily generate code that does not work. I suspect this comes from analysing the typical stackexchange question of "here is my solution to problem X, why does it not work?" without parsing point 2.

    • by gweihir ( 88907 )

      That is more like a "not ever", because this thing is incapable of understanding. It just parrots the average of what seems statistically relevant.

      • by Anonymous Coward

        That is more like a "not ever", because this thing is incapable of understanding. It just parrots the average of what seems statistically relevant.

        Understanding was encoded during the training of the neural network. The techniques used to minimize the loss function effectively cause a conceptual model representing the meaning of the training dataset to be compiled.

        GPT4 is able to answer new questions it has not seen before for this very reason. It is exploiting knowledge of similar concepts. What's holding back the technology is business / computational cost of the service. It costs nearly a million dollars a day in computer time alone to execute the pre-trai

  • This problem will expand exponentially when a machine starts to come up with new realizations or even simple-looking new formulas of the world we live in. Everything needs to be verified and the reasoning needs to be backtraced. Otherwise, we would not gain new understanding but blindly trust in a black box.

    Sure, the machine can walk us through its process, but we would only be on its leash. An option would be to let go of anthropocentrism and accept the presence of new intelligence (when it actually arises

  • You don't gain genuine knowledge from it that you don't have to review first.

  • Please put a written disclaimer on ANY and ALL websites, programs, applications, etc. that use this crap. I think people will demand it, as we don't want to risk our money or privacy on a site that is 100% sure to be hacked by this unmonitored system.

  • by gweihir ( 88907 ) on Sunday June 11, 2023 @02:36PM (#63593544)

    For code, not only will reviewing it take essentially more effort than producing it manually would have cost you; you will also still get lower quality, especially with regard to security, architecture, performance and other aspects. There really is no shortcut when it comes to work that requires understanding; get over it and do it right. Also, ChatAI can only do things it has seen often enough. Anything a bit rarer or more specialized, it cannot do at all. Example: ChatGPT was a complete failure for a simple firewall configuration with NAT when my students tried it.

    For low-skill, no-insight white-collar work, ChatAI could work well. But this work is typically not "productive" in the first place, but rather consists of bureaucratic hassles that benefit nobody besides the bureaucrats doing it.
     

    • by dvice ( 6309704 )

      I still see value even in the code made by ChatGPT. It is good for prototyping and experimenting.

      • by gweihir ( 88907 )

        That probably comes from you not having enough experience with it yet. Give it some time.

  • this is just silly. If it costs more to run a machine than it saves in labor you don't buy that machine. That's how business works.

    Yes, a few will waste money on boondoggles, but we're all missing the forest for the trees.

    What AI has done is convince pointy haired managers that automation works. They're now going top to bottom through their entire enterprise automating everything they can. Even before that we'd seen more job losses to automation than outsourcing [businessinsider.com] but now it's going to accelerate into
    • To paraphrase an old saying:
      Adding an AI to a late software project will only make it later.

      Also, when working with an AI, you are depending on the work of a "three year old child".

    • this is just silly. If it costs more to run a machine than it saves in labor you don't buy that machine. That's how business works.

      Wow, now I think you're a fake. The way business works is the salesdroid convinces your boss to buy it, and then you get crucified on it.

  • Not a coder, but I have read a lot of undeclared AI-generated articles (masquerading as human-written blog posts) and they are garbage. At first I thought they were written by people with a NESB and poor English skills, but once chatGPT hit the mainstream, it all made sense: it didn't understand context or flow.

    Ditto for AI-generated audio transcripts. You have to spend so much time with them, it's worth paying the real price to have a human do it.

  • If it breaks, or you want to change it, you throw it away and start over.
  • Betteridge's Law *doesn't* apply here. Anything generated by AI needs to be reviewed, edited, fine-tuned. I haven't tried code generation, but if code generation goes anything like generating poetry, the amount of work needed for refinement will be on par with just doing it myself.
  • So review, attempt to maintain, realize that 'close' is not good enough when it comes to machine instructions, and finally rewrite it at a higher cost.
  • Until they have proven their value.

  • Really. The makers of the magic 8-Ball, aka any LLM system, asked the LLM if it worked and it answered:

    It is certain

    It is decidedly so

    Without a doubt

    Yes definitely

    You may rely on it

    As I see it, yes

    Most likely

    Outlook good

    Yes

    Signs point to yes

    What could possibly go wrong?

  • The overall improvement from optimizing a part of a system is limited by the fraction of the system that the part represents.

    Validating code is ALREADY harder than generating code. Optimizing the generation of code will have limited effect on overall productivity.
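The parent comment is essentially Amdahl's law. If generating new code is a fraction p of total software effort and a tool speeds that part up by a factor s, the overall speedup is bounded by the untouched fraction:

```latex
\text{overall speedup} = \frac{1}{(1 - p) + p/s} \le \frac{1}{1 - p}
```

For example, if writing new code is 30% of the job (p = 0.3), then even an infinitely fast generator (s approaching infinity) improves overall productivity by at most 1/0.7, roughly 1.43x; review, validation, and maintenance account for the rest.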

  • When I've written a codebase I can fix bugs and add new features really quickly because I know what the code does and how all the pieces interact. Things slow down when I have to modify somebody else's code - including the AI-generated kind.

    When it comes to productivity, you need to ask: What fraction of a programmer's time is spent writing new code vs modifying existing code? And what fraction of the code they're modifying was written by them in the first place?

    If programmers are mostly modifying code that

  • I was toying around with an idea for a personal project last night, so I asked for some boilerplate code to hit a particular API using Python. Out popped a decent-at-a-glance result.

    Impressed, I asked it to use the 1Password CLI to populate the credentials, rather than baking them into the code. It again spat out something that looked reasonable to me (a person unfamiliar with any of these specific tools) at a glance, but it included a pipe to a command I didn’t immediately recognize that seemed to be

  • At this point, I would most definitely agree you need it. If you use these generative AIs for creating/generating content, you most definitely need someone to fact check EVERY LITTLE THING. Even with something as simple as giving a model a "PDF" to read, e.g. "To Kill a Mockingbird", if you ask how many children "Tom Robinson" has, depending on the model and the LLM you could get very different answers, even though you literally ask it to read the PDF and yet it will generate an answer sourced from its pre

  • Has anyone here, with more than five years as a programmer, who is familiar and USES modular code, audited any major code generated by the AI?

    Or is it all spaghetti code?

  • Programming has turned from a specialized field into a "fast food" field, without removing the need for experience, expertise, knowledge, skill, or quality. It's no different from taking a Michelin Star eatery, placing a McDonald's worker on the grill, and claiming they'll do fine because they have cookbooks on tape.

    ChatGPT and CoPilot are those tapes, and while they can certainly walk you through simple tasks, when you have to start grilling Elk, or Pheasant, they won't help you. The moment the problem b
