Programming AI

'OK, So ChatGPT Just Debugged My Code. For Real' (zdnet.com) 174

ZDNet's senior contributing editor also maintains software, and recently tested ChatGPT on fixes for two bugs reported by users, plus a new piece of code to add a new feature. It's a "real-world" coding test, "about pulling another customer support ticket off the stack and working through what made the user's experience go south." First...

please rewrite the following code to change it from allowing only integers to allowing dollars and cents (in other words, a decimal point and up to two digits after the decimal point).

ChatGPT responded by explaining a two-step fix, posting the modified code, and then explaining the changes. "I dropped ChatGPT's code into my function, and it worked. Instead of about two-to-four hours of hair-pulling, it took about five minutes to come up with the prompt and get an answer from ChatGPT."

Next up was reformatting an array. I like doing array code, but it's also tedious. So, I once again tried ChatGPT. This time the result was a total failure. By the time I was done, I probably fed it 10 different prompts. Some responses looked promising, but when I tried to run the code, it errored out. Some code crashed; some code generated error codes. And some code ran, but didn't do what I wanted. After about an hour, I gave up and went back to my normal technique of digging through GitHub and StackExchange to see if there were any examples of what I was trying to do, and then writing my own code.
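The article doesn't reproduce the code behind that first dollars-and-cents fix, but the change it describes amounts to loosening an integer-only validation pattern. A minimal sketch in PHP (the project is a WordPress plugin); the function name and patterns are illustrative, not from the article:

    // Before (integers only): '/^\d+$/'
    // After: also allow an optional decimal point with up to
    // two digits after it (dollars and cents).
    function is_valid_amount( string $amount ): bool {
        return (bool) preg_match( '/^\d+(\.\d{1,2})?$/', $amount );
    }

    // is_valid_amount( '19' )     => true
    // is_valid_amount( '19.99' )  => true
    // is_valid_amount( '19.999' ) => false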
Then he posted the code for a function handling a WordPress filter, along with the question: "I get the following error. Why?" Within seconds, ChatGPT responded... Just as it suggested, I updated the fourth parameter of the add_filter() function to 2, and it worked!
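The article doesn't show the function itself, but the fix it describes is WordPress's standard accepted-args mechanism. A minimal sketch; the hook and callback names here are illustrative:

    // A filter callback that takes two parameters...
    function my_filter_handler( $value, $extra_arg ) {
        // ...do something with both...
        return $value;
    }

    // ...must tell WordPress to pass both of them. With the default
    // accepted_args of 1, PHP throws an ArgumentCountError ("Too few
    // arguments") when the hook fires -- likely the error in question.
    add_filter( 'some_filter_hook', 'my_filter_handler', 10, 2 );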

ChatGPT took segments of code, analyzed those segments, and provided me with a diagnosis. To be clear, in order for it to make its recommendation, it needed to understand the internals of how WordPress handles hooks (that's what the add_filter function does), and how that functionality translates to the behavior of the calling and the execution of lines of code. I have to mark that achievement as incredible — undeniably 'living in the future' incredible...

As a test, I also tried asking ChatGPT to diagnose my problem in a prompt where I didn't include the handler line, and it wasn't able to help. So, there are very definite limitations to what ChatGPT can do for debugging right now, in 2023...

Could I have fixed the bug on my own? Of course. I've never had a bug I couldn't fix. But whether it would have taken two hours or two days (plus pizza, profanity, and lots of caffeine), while enduring many interruptions, that's something I don't know. I can tell you ChatGPT fixed it in minutes, saving me untold time and frustration.

The article does include a warning. "AI is essentially a black box, you're not able to see what process the AI undertakes to come to its conclusions. As such, you're not really able to check its work... If it turns out there is a problem in the AI-generated code, the cost and time it takes to fix may prove to be far greater than if a human coder had done the full task by hand."

But it also ends with this prediction. "I see a very interesting future, where it will be possible to feed ChatGPT all 153,000 lines of code and ask it to tell you what to fix... I can definitely see a future where programmers can simply ask ChatGPT (or a Microsoft-branded equivalent) to find and fix bugs in entire projects."
  • by OffTheLip ( 636691 ) on Sunday October 15, 2023 @01:11PM (#63926787)
    Maybe Unicode support is still possible!
    • Of course, I don't have mod points today.

    • by Anonymous Coward on Sunday October 15, 2023 @02:06PM (#63926907)
      In fact adding Unicode is simple. What is hard is to prevent abuse.

      At some point, /. did support unicode, but slashdotters used it to do all kinds of weird things, such as replacing the moderation field by (+7, Astounding). I cannot find the link to those posts anymore, perhaps somebody with superior google-fu can help?
      • It can't be that difficult because almost every single other forum supports unicode characters without blowing everything up, somehow.

      • by tlhIngan ( 30335 ) <slashdot@worf.ERDOSnet minus math_god> on Sunday October 15, 2023 @03:28PM (#63927025)

        At some point, /. did support unicode, but slashdotters used it to do all kinds of weird things, such as replacing the moderation field by (+7, Astounding). I cannot find the link to those posts anymore, perhaps somebody with superior google-fu can help?

        That's the point. /. has supported Unicode for well over a decade now. The problem is, Unicode is always evolving and constantly adding new codepoints that need to be filtered out. There are lots of examples of Unicode abuse, usually in the form of people pasting special characters that go and destroy websites.

        The most common form of abuse was the right-to-left-override where you can insert RTL formatted text in what would normally be LTR text (e.g., if you need to insert some Arabic in a block of English text). This would then set the text direction backwards when rendered on screen.

        Moderation abuse is simple to Google because of this - just look for "5 :erocS" in Google - because after an RTL override codepoint, the text will be reversed. (Hint: a Unicode renderer will render the "5" character, then move left, render a space, move left, render the colon, and so on, so you end up seeing "Score: 5". Follow it with an LTR override character and things appear normal again.)

        Another one is overdecorated text - some languages are big on decorations, so those can be misapplied to other codepoints, leading to text that is a few million pixels tall and stretches above the line, so you see a black line running down the page. Repeat this a few times and you can render a whole webpage black. Granted, you're also going to write a comment that's a few megabytes in size...
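        For the RTL override abuse described above, the usual first line of defense is stripping Unicode's directional formatting codepoints on input. A minimal sketch in PHP, assuming stripping (rather than escaping or balancing) is acceptable:

            // Remove the bidi embedding/override characters (U+202A-U+202E)
            // and the newer isolates (U+2066-U+2069) so a comment can't flip
            // the rendering direction of the surrounding page.
            function strip_bidi_controls( string $text ): string {
                return preg_replace( '/[\x{202A}-\x{202E}\x{2066}-\x{2069}]/u', '', $text );
            }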

        • I don't think anybody would complain about a compromise where only known codepoints that weren't subject to abuse were allowed. Since, again, this isn't a problem for any other website, I'm thinking this is a solved problem and such a list already exists.
        • That's not a good reason to not enable unicode. That's a good reason to *whitelist* accepted unicode characters. The fact that Slashdot has trouble rendering simple characters used daily in English speech is the problem.

        • by AmiMoJo ( 196126 )

          It's a flaw in Unicode itself. It's been bodged together over the years, and offers no standard libraries or definitions to help programmers do basic stuff like determine which family of languages is in use. It also combines formatting with character encoding in a way that creates the problems you describe.

          The RTL override character is a great example. It shouldn't exist. The app should be able to use a standard library to query which way a given character should be rendered.

          Sites like Slashdot that are pri

          • The Point of Unicode (pun intended) is to be able to mix languages in an agnostic way.
            Why do you want to segregate?

            • by AmiMoJo ( 196126 )

              It's not segregation, it's being able to combine and mix languages in a way that actually makes sense and works in the real world.

              The classic example is international airlines in East Asia. If they use Unicode, the names printed on tickets will be wrong for half their passengers. At best they can try to guess which font to use, which shows you what a complete disaster those languages are in Unicode.

      • In fact adding Unicode is simple. What is hard is to prevent abuse.

        No, it is not at all hard. It's called whitelisting. There's less than a dozen characters which must be allowed to permit the functionality we actually need. The list could be expanded over time if desired, but right now all we need is smart quotes, literally a few accented letters, and a handful of currency symbols. And the lame filter could ostensibly be used to prevent their overuse.
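        A minimal sketch of that whitelist approach in PHP; the character set below is illustrative (curly quotes, common accented letters, a few currency symbols), not a canonical list:

            // Keep printable ASCII plus a small, explicitly allowed set of
            // Unicode characters; drop everything else.
            function whitelist_unicode( string $text ): string {
                $allowed = '\x{2018}\x{2019}\x{201C}\x{201D}'  // curly quotes
                         . '\x{00E0}-\x{00FF}'                 // common accented letters
                         . '\x{20AC}\x{00A3}\x{00A5}';         // euro, pound, yen
                return preg_replace( '/[^\r\n\t\x20-\x7E' . $allowed . ']/u', '', $text );
            }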

    • Journalist discovers that ChatGPT understands program code months after everyone else.
    • Why is slashdot literally the only site on the internet with this problem? Go ahead and find another one with this behavior.

    • by Reeses ( 5069 )

      And after we get Unicode, maybe we can get Markdown support too.

      • And after we get Unicode, maybe we can get Markdown support too.

        And an Edit Button!

        And, FFS, at least some kind of a Rich Text Editor!

  • by rsilvergun ( 571051 ) on Sunday October 15, 2023 @01:30PM (#63926827)
    and no, not all of them are in India yet. Also for anyone who aspires to go above code monkey but isn't a math genius who's really not a programmer, they're a mathematician using a tool, it means you're going to find it basically impossible to get a start.

    A huge sea change, a 3rd industrial revolution is coming. And there are no new jobs on the horizon to replace the ones we're destroying.

    I say "on the horizon" but this has been going on for ages. [businessinsider.com]

    Fun fact, following both the 1st and 2nd industrial revolutions there were *decades* of rampant unemployment until a combination of new tech and wars got us back to full employment. Those were "interesting times".
    • by serviscope_minor ( 664417 ) on Sunday October 15, 2023 @02:20PM (#63926935) Journal

      I don't think it will, for the same reason Cobol didn't.

      Who's going to drive it, give it prompts and then deal with the result? The job will still be called programmer.

      • "I don't think it will fit the same reason Cobol didn't".

        I don't think the average person understands how much of our world runs on Cobol (government agencies are the biggest users, though you'd be surprised how much corporate code still runs on Cobol too). It's significant.

        As a side note, why is Slashdot's comment editor still so shitty? I had to insert HTML linebreaks. That's nuts.

        • I wasn't referring to Cobol as some dead language, I'm referring to it as the first.

          The point was it made programming super easy compared to bashing bits in machine code (or asm if you were lucky), so instead of needing mega turbo nerds, business people could write the business logic.

          Well, we know how that worked out. Turns out programming has a lot of figuring out a coherent spec from requirements and then implementing those. Cobol greatly eased the latter, as have many languages since. But they are still u

          • by AmiMoJo ( 196126 )

            Writing the business logic became a job known as "systems analyst", because it turned out most business people didn't really know how to convert their processes into the kind of thing that a computer could actually do.

            ChatGPT may eventually be able to do that job, but at the moment the limitation is that you need to tell it what you want. It doesn't ask you questions and conduct its own investigation of your business, talking to your employees to find out their needs and how they work in practice (not just

      • Re: (Score:3, Insightful)

        by znrt ( 2424692 )

        call it whatever but this guy:

        I dropped ChatGPT's code into my function, and it worked. Instead of about two-to-four hours of hair-pulling, it took about five minutes to come up with the prompt and get an answer from ChatGPT.

        has absolutely no clue what he is doing. he does not know what programming is nor understands what a generative model is, yet decides to share his ignorance with the world by publishing an embarrassingly nonsensical article about exactly those two things, as "senior contributing editor" no less in a news outlet that is supposedly specialized in technology and innovation. if it's a joke it's a very cringey one. i understand /. has to make a living but who actually is supposed to

        • I found it newsworthy as a developer because it's what managers are going to read and will thus set the bar for expectations of our work. It's not newsworthy for what the guy did -- and, in fact, did poorly, even with the AI assist. But "poorly" is still better than "not at all" and that's going to move the needle for us ... quality wins when there's no cheaper option... quality suffers as soon as shoddy becomes available cheap. That's true in every industry I've worked in, alas.

          • shoddy becomes available cheap

            Shoddy is already available cheap. Just look at all of the data breaches we keep getting. Expecting some generative AI to fix it all is suicide. So expect some dumbass MBA to mandate it next week.

            What we need is to make the failures expensive for those causing them, but again good luck with that.

            • So push the topics that will make those same MBAs hesitate.

              Once you upload code to a public database you lose copyright control of that code. All generative AI code samples are built from what is online. You can't upload your secret code to the public.

              AI will be hacked and tricked into providing those same code segments to your competitors.

      • Really? Because the job you're describing sounds more like "manager".

        • No it doesn't.

          Look at it this way: Cobol was the first of many, many innovations making the act of writing code easier. The whole idea of coming up with easy high level descriptions and having the computer figure out what to do is as old as Cobol and FORTRAN.

          But it's still programmers figuring out what high level descriptions to use because going from wishes to a coherent technical description is ultimately what programmers do.

      • > The job will still be called programmer.

        In the early days of the industrial revolution, the problem wasn't the loss of jobs. It was that the skill levels of existing workers were no longer needed.

        People would spend years building up their trade and skills. They were replaced by children who could churn out work faster using machines.

        The same is going to happen. You will still have someone who can be defined as a "programmer", but it will be nowhere near the skill level you need now.

        There will still be roles for experts, b

        • In general yes, sure.

          In this specific case, I don't think so. There has already been a massive drop in the required skill level. How many of us are on the level of Mel the Real Programmer?

          ChatGPT etc. maybe saves you from the burden of syntax, enabling you to write in yet another higher level language. But we've already had thousands of such innovations, starting with COBOL, and it's had the opposite effect so far. I've also never worked at a job where there were ever "enough" programmers: scope was always

      • >> Who's going to drive it, give it prompts and then deal with the result? The job will still be called programmer.

        There will be
        - "PROgrammer"
        - "noob-grammer"
        - "AI-BS-grammer"

        in that order.

    • by shoor ( 33382 )

      for anyone who aspires to go above code monkey but isn't a math genius who's really not a programmer, they're a mathematician using a tool,

      I'm not quite able to parse this. Did you leave out a comma after genius maybe? Are you saying if you're not a math genius you won't be able to be a programmer because AI will take your job, and only mathematicians using a tool will be programming?

      BTW, in my opinion being a mathematician, or thinking like a mathematician, is not particularly applicable to the nuts and bolts of programming, even when programming above 'code monkey' status, unless you're programming in Haskell.

  • A bit short-sighted there perhaps. In a few iterations' time it'll be able to write the code from scratch, and frankly when it can do that it could probably emulate whatever system you want directly.

  • by quonset ( 4839537 ) on Sunday October 15, 2023 @01:36PM (#63926843)

    I can definitely see a future where programmers can simply ask ChatGPT (or a Microsoft-branded equivalent) to find and fix bugs in entire projects."

    "ChatGPT, how to I remove the coding from Windows which sends telemetry without breaking the operating system?"

    "I'm sorry, Dave. I'm afraid I can't do that."

    • Pretty much:

      > How do I completely disable telemetry in Windows 11?

      ChatGPT: Disabling telemetry in Windows 11 is not recommended, as it's an essential part of the operating system for security, diagnostics, and improving the overall user experience. However, you can reduce the amount of telemetry data sent to Microsoft by adjusting the settings. Keep in mind that some level of telemetry is necessary for Windows to function correctly and receive updates. Completely disabling it can lead to potential
    • Indeed, or:

      "ChatGPT, how can I ask you to debug my code without giving your creator the code to do with as they please?"

      "CharGPT, how can you write code such that I can retain the copyright and any other legal claims in future?"

      Until you can have your own, self-hosted ChatGPT, most serious programming won't be going near it. Hobby projects are going to get interesting though.

  • by cstacy ( 534252 ) on Sunday October 15, 2023 @01:36PM (#63926845)

    I gave up and went back to my normal technique of digging through GitHub and StackExchange to see if there were any examples of what I was trying to do, and then writing my own code.

    "Programmer" is unable to write a routine to copy an array. Uses "AI" to generate code that he doesn't understand, but which crashes when he runs it. So then he searches the web to see if someone already wrote this code for him somewhere, copies and pastest it, maybe renames some variables, and says it's "his code" that "he wrote". Since it compiles and doesn't seem to crash, we're good to go.

    I think I see the problem.

    • by linuxguy ( 98493 ) on Sunday October 15, 2023 @01:54PM (#63926885) Homepage

      > Uses "AI" to generate code that he doesn't understand...

      "Programmers" have been doing this for a while now. Instead of AI, they used Google to find some code they could copy/paste without understanding. AI is just making it a easier to do what people had been doing for a while in this regard.

      • It's easier to understand how existing code works than to create it from scratch. Hey, does that prove P!=NP?
      • Except in this case, the AI wasn't able to solve the problem.
      • by whiplashx ( 837931 ) on Sunday October 15, 2023 @07:49PM (#63927481)

        I'm a AAA graphics programmer, I'm at the top of my game, 15 years experience on some big titles, a graphics engine that ships billions of dollars of games. When you use it properly, GPT absolutely rocks at programming for real world huge scale problems. You can quote me: it's insane to lowball GPT's ability. GPT knows intricate details about how to handle complex high performance code.

        If you are not prepared and it gives you a hallucination, or you try to let it lead, then yes, it can't code. One example - it can write itself into a corner where it needs a function that doesn't exist. Next time you query, just tell it that it is having a problem because it keeps trying to use the same non-existent function. Problem solved.

        Yes, its scope is limited - it can't handle more than about 200 lines of code at once. I just break up my ideas into pseudocode, capture the dependencies, and write them into the prompts to generate the functions.

        Yes it makes simple mistakes. I just correct it and move on.

        If you work through the problem with GPT methodically, it will solve it, absolutely, with enough retries, 95% of the time.

        This is an absolutely insane productivity boost for me, I'd estimate 5x or more.

        • by bungo ( 50628 )

          Yes, its scope is limited - it can't handle more than about 200 lines of code at once. I just break up my ideas into pseudocode, capture the dependencies, and write them into the prompts to generate the functions.

          Reminds me of when I was trying to solve a complex integral using Wolfram Alpha. I didn't have the paid version, so I could only see some of the steps, so I kept breaking the integral down into separate parts and putting them into Alpha until it had pretty much given me the full solution; I just had to put it together myself.

          Of course, instead of all of that messing around, I would have been better off revising integration by substitution and parts, and integration of known functions to be better at Math... but

    • Sounds like the standard modus operandi of your typical 3rd rate Lego brick method dev of which there are unfortunately far too many in our industry. Knowing their shit and being able to write working code on their own is a foreign concept to them.

    • all the time. Yes, programmers can write those routines. Easily. They do it every day.

      Anything you do every day you're gonna screw up occasionally. In an economy that uses as much software as ours "occasionally" is a *lot*.

      A lot of time and energy is spent finding those occasional screw ups. Time people are paid for. Time they won't be paid for anymore.

      Where is that money going to go? Is the CEO going to reinvest it? Or are they going to either pocket it and/or use it to buy out a competitor?
      • A lot of time is wasted because people fail to write proper unit tests. That's what the generative model should answer: "what is the unit test for this method?"
        • The test should be written first in most cases.
          • This is always a frustrating response to me -- a complete unit test needs some knowledge of the internals of the function to know that all code paths got tested. The only tests I can write in advance are the ones that rise all the way to user requirements, which is more integration testing, usually. Yes, write as many tests as you can at the start and then get them passing, but, in my experience, that's rarely the unit tests.

            • Every level of the system has an interface. Perhaps not literally, but logically. That interface has a limited set of behaviors, at least some of which are known ahead of time (because they are the reason for the function). Unit tests should always be written against interface behaviors, preferably one test per behavior. If you find more corner cases you can add more tests later, but the common case should be tested up front to verify the interface is easy to use. Do not write tests against implementations,
    • Re: (Score:3, Interesting)

      by Moridineas ( 213502 )

      What, you think this is new? I graduated in the early 2000s. Sometime around 2009/2010 one of my old professors was bemoaning the fact that students didn't want to write any code any more, they just wanted to copy and paste different blocks together until it worked. Coincidentally, stackoverflow started in 2008.

      Overall, I am huge believer in using chatGPT as support, today. I have used chatGPT to dramatically optimize SQL queries, suggest a new index, convert a legacy PHP program from Laravel 4 to Laravel 1

    • by jvkjvk ( 102057 )

      Exactly. Just write the damn array code already. The only reason you should be asking for help is if there is some function that you don't know or remember or something like that. Does anyone remember to RTFM any more?

  • by Dan East ( 318230 ) on Sunday October 15, 2023 @01:36PM (#63926847) Journal

    To be clear, in order for it to make its recommendation, it needed to understand the internals of how WordPress handles hooks

    No, to be clear, at some point ChatGPT was trained on text that dealt with WordPress hooks, and thus it had some relationship of tokens relevant to what you wanted to know.
    ChatGPT has no "understanding" or computational knowledge about anything.

    I see a very interesting future, where it will be possible to feed ChatGPT all 153,000 lines of code and ask it to tell you what to fix... I can definitely see a future where programmers can simply ask ChatGPT (or a Microsoft-branded equivalent) to find and fix bugs in entire projects.

    Okay, so what exactly are we talking about? Syntax or behavior? If it is syntax then linters already do this, and they are built with the exact rules and best practices for that language. It is no black box, but something designed specifically to do that exact thing and do it very well. They can also reformat and fix code as well when it comes to syntax.

    If we're talking about behavior, then please tell me how you are going to describe to ChatGPT what the behavior of the 153,000 lines of code is supposed to be, so it will know whether or not there is something that needs fixing in the first place? Unless we're talking about something that could result in a total runtime failure, like dereferencing a null pointer or division by zero, then there's no realistic way to express to ChatGPT what the code is supposed to do. Especially when we're talking about that kind of scale that 153k lines are involved. How about breaking the code down into functions, and defining input and expected outputs for that function so that ChatGPT would then know what the function is supposed to do? Good job, you just invented unit tests.
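    For illustration, "defining input and expected outputs" per function is exactly what a unit test encodes. A minimal PHPUnit sketch, reusing the hypothetical is_valid_amount() validator from the dollars-and-cents sketch above:

        use PHPUnit\Framework\TestCase;

        class AmountValidationTest extends TestCase {
            public function testAcceptsDollarsAndCents(): void {
                $this->assertTrue( is_valid_amount( '19.99' ) );
            }

            public function testRejectsMoreThanTwoDecimals(): void {
                $this->assertFalse( is_valid_amount( '19.999' ) );
            }
        }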

    • Define understanding. If it can parse the question, parse the code explanation, parse the code and provide some kind of output from that, then I'd call that understanding, albeit maybe incomplete. Yes, so it's been fed a load of text, but then so were you when you learnt. And yes, you can cite the Chinese room as a counter example, but what Penrose didn't consider was that it doesn't matter how it works inside; it's how it behaves outside that matters.

      People seem determined to think these LLMs are just dumb statisti

      • This is an ongoing area of research, and there are some interesting findings.

        When you initially train an AI on a dataset, it starts as a statistical analyzer. I.e. it memorizes responses and poops them back out, and plotting the underlying vector space you see the results are pulled more or less randomly from it. Then, as you overtrain, the model reaches a tipping point where the vector space for the operation gets very small. Instead of memorizing, they appear to develop a "model" of the underlying opera

    • If we're talking about behavior, then please tell me how you are going to describe to ChatGPT what the behavior of the 153,000 lines of code is supposed to be, so it will know whether or not there is something that needs fixing in the first place? Unless we're talking about something that could result in a total runtime failure, like dereferencing a null pointer or division by zero, then there's no realistic way to express to ChatGPT what the code is supposed to do.

      Maybe you should try it out? Not on a 153k line program, but I've had great luck with pasting in the schema for ~a dozen tables and then having chatGPT optimize queries with 6-7+ joins, subqueries, etc.

      I think you might also be surprised at what chatGPT can analyze about functions and code. I hesitate to use the word "understanding" but this is one of those areas where chatGPT can surprise you.

  • by dcooper_db9 ( 1044858 ) on Sunday October 15, 2023 @01:38PM (#63926853)

    Writing code is something I'd expect a LLM to be able to do well given enough learned source. Feeding individual problems to the generator makes sense but I wouldn't want to feed it 10k lines of code and just accept the result. You would need to read and understand the code you're using. It would be somewhat similar to using a library from an external project, except you can't trust the source.

    • Exactly. AI is great when you can trivially verify the result to a complex problem, but not so great when the result is time-consuming or complex to verify. If you need a subject matter expert to verify the result and it’d take them as long as solving it themselves, there’s no benefit at all and a high likelihood of drawbacks as they discover errors in the result.

  • by NoWayNoShapeNoForm ( 7060585 ) on Sunday October 15, 2023 @01:51PM (#63926875)

    From TFS - "Could I have fixed the bug on my own? Of course. I've never had a bug I couldn't fix."

    Sounds like a Chad moment to me

  • ChatGPT can shorten the time it takes a developer to do work, but it can't fix for incompetence.

    I used it the other day to help me shift some functionality from server side to client side, and the results have been very, very good. Saved me a lot of debugging time, even after I reviewed all the code by hand. I tested the functions and got the expected results right off the bat.

    Probably saved myself at least half a day of work.

    But I didn't ask it to do something large, I broke it down to manageable pieces that

  • Questionable results (Score:5, Interesting)

    by Guspaz ( 556486 ) on Sunday October 15, 2023 @02:12PM (#63926917)

    ChatGPT seems worse at producing working powershell code than it did shortly after it launched. It seems to make a lot more errors. It's still a timesaver to have it write code snippets, but those snippets must then be manually reviewed and tested because it often makes errors. Even for something simple like asking it to extract the title out of HTML contained in a string, it wrote code that was basically perfect except that it forgot to escape one slash in the regex, and thus the code it output produced a syntax error. An easy fix, but the error rate is so high that it's a time saver at best.
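    That unescaped-slash slip is easy to reproduce. Here is the same class of bug sketched in PHP rather than PowerShell (illustrative, not the commenter's actual code): with / as the regex delimiter, the slash in the closing tag ends the pattern early and the expression won't compile.

        $html = '<html><head><title>Hello</title></head></html>';

        // Broken: preg_match('/<title>(.*?)</title>/i', $html, $m);
        // -- the "/" in "</title>" terminates the pattern prematurely.

        // Fixed: escape the slash (or pick a different delimiter).
        preg_match( '/<title>(.*?)<\/title>/i', $html, $m );
        $title = $m[1] ?? '';   // "Hello"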

    • by MindPrison ( 864299 ) on Sunday October 15, 2023 @02:22PM (#63926939) Journal

      It's very random at times. I've had ChatGPT 4 as a subscriber since they released it for that, and it CAN be useful, but it can also be totally disastrous.

      In any case, you NEED to know something about what you are doing, and as a "human" you need to proofread A.I's results as well.

      It's very good at basic concepts such as initial code, specific calculation tasks and things that are in the known universe, but it really falls short when you try to describe what you want from the code. It's like asking it to be creative like you can be: it can try, but it just can't. It's not human, it's not even "artificial enough", it's just an LLM - it knows what it knows from the numerous documents, books and data it has been trained on, and it can't really think, which many people misunderstand and believe it can - well, it can't.

      But can it correct code? Sorta, yes. But also, it doesn't understand the general concept you're thinking of when you write a piece of code. It can look for correct code that doesn't fail, but unless you specifically instruct it in what numbers or outcome you expect, it won't understand that, and you'll get some random results. Sometimes they can be downright dangerous, so use that output with care: read the example code ChatGPT gave you and see if you can spot some fatal things in there. You need to KNOW code and what you want; it's not a "magic piece" that will just code whatever you want.

      I've used it numerous times to create artistic scripts for my Blender projects, and it's very hard work. No matter how much information you give it, it will constantly get things wrong, simply because you have to be SO specific about every little thing you want to achieve. It also doesn't have the latest data on debugging or recent compiler fixes etc., and it often uses deprecated code to analyze your code - chances are your code is better and more up to scratch, so to speak.

      So use it ... for simple small tasks. If you screen the code you get, it can be a wonderful time saver, but it won't replace coders' jobs anytime soon. Anyone who tells you otherwise lives in "laa laa land", has NO clue, and probably hasn't even used it extensively.

  • Found the problem! (Score:4, Insightful)

    by uncqual ( 836337 ) on Sunday October 15, 2023 @02:16PM (#63926923)

    "AI is essentially a black box, you're not able to see what process the AI undertakes to come to its conclusions. As such, you're not really able to check its work... If it turns out there is a problem in the AI-generated code, the cost and time it takes to fix may prove to be far greater than if a human coder had done the full task by hand."

    Umm... Don't you review/desk check your own code? Why wouldn't you expect to do the same with "AI" generated code?

    I've played w/ChatGPT generating code in a few languages (esp. SQL and C) and sometimes it did a decent job, and in a couple of cases used a library that I wasn't aware of, which was helpful. However, confirming that the code did what I asked it to sometimes took as long as writing the code and desk checking it would have taken. Part of this effort was figuring out the approach the bot had taken, which was different than what I would have taken - I found this esp. true w/complex SQL queries where, for example, the bot used a different set of features to reach the same result (and, in some cases, the query plans resulting from my approach and the bot's approach were very similar after the optimizer had munged on the queries).

    In some cases ChatGPT missed something obscure because I failed to fully constrain the problem where I would have naturally dealt with the case "properly" because I thought of the "missing" constraint as being obvious and would never have, when writing code, failed to handle/apply it.

    I've found that fully constraining a problem, unambiguously, in an ambiguous natural language such as English in order to get "AI" to write the desired code often is harder than just writing the code in a language which isn't ambiguous and which results in having to face and address each case.

  • Ok... (Score:5, Insightful)

    by Junta ( 36770 ) on Sunday October 15, 2023 @02:28PM (#63926949)

    Instead of about two-to-four hours of hair-pulling

    Reworking those three lines of code to optionally accept two decimals should have been a 10 minute task, max. This may be a helpful tutorial for a beginner that already knows the problem and distills it into a digestible snippet, but it doesn't necessarily imply much about more open ended applications.

    This seems to be consistent with my experience: it can reasonably complete tutorial-level snippets that have been done to death, but if you actually have a significant problem that isn't all over Stack Overflow already, then it will just sort of fall over.

    • by quall ( 1441799 )

      Thought the same. If it takes him 2-4 hours, then he clearly needs ChatGPT. It's not a testament to how great the tool is, but rather it shows how poor a coder he is.

  • I have doubts (Score:5, Insightful)

    by Mascot ( 120795 ) on Sunday October 15, 2023 @02:42PM (#63926969)

    I have many issues with how the abilities of this supposed "maintainer of code" come across based on these citations, but let's chalk that up to a need for brevity and me being too lazy to RTFA.

    A more important issue I have is that he seems to believe ChatGPT understands how WordPress handles hooks. Unless something's drastically changed in how ChatGPT functions, that's not at all what it does. It answered with what people have previously written in response to strings similar to the question. That's all.

    Not that I don't think LLMs can be helpful tools, but for the foreseeable future they seem firmly seated in the "make a suggestion or two and have a human with knowledge take those into consideration" department, as well as being a slightly more fancy snippets engine. I'd not worry about my job anytime soon. Then again, I'm old; my odds of being retired before AI takes over programming are above average.

    • Not that I don't think LLMs can be helpful tools, but for the foreseeable future they seem firmly seated in the "make a suggestion or two and have a human with knowledge take those into consideration" department, as well as being a slightly more fancy snippets engine. I'd not worry about my job anytime soon. Then again, I'm old; my odds of being retired before AI takes over programming are above average.

      I’m turning 40 in a few weeks, and your assessment matches my own. I see nothing concerning here for me or my career. The proverbial “boss’ nephew” who “built the site in a weekend” that has dozens of console errors? He may not be around for much longer, but anyone who’s competent in the field has nothing to worry about.

  • by Framboise ( 521772 ) on Sunday October 15, 2023 @03:39PM (#63927059)

    ChatGPT is very weak at calculus (and also arithmetic). I asked it to find the maximum of sin(x)/x, which led to verbose calculations filled with logic and math errors. Despite many hints, such as using l'Hôpital's rule, it would repeat the same mistakes again and again.
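    For reference, the answer it kept missing is short. Since |sin x| < |x| for every x != 0, sin(x)/x < 1 away from zero, and at the removable singularity l'Hôpital's rule (the hint mentioned above) gives

        \lim_{x \to 0} \frac{\sin x}{x} = \lim_{x \to 0} \frac{\cos x}{1} = 1

    so the maximum of the continuous extension (the sinc function) is 1, attained at x = 0.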

    • This is because ChatGPT is an LLM - a large language model. It was not designed to perform mathematical operations. You are correct, it sucks at math, but that is not unexpected.

      • It’s bad at logic or even just maintaining state. Try playing tic tac toe with it drawing the board. It declared itself the winner after making an illegal move (which was an impressive feat, given that we’re talking about tic tac toe) when I tried playing against it. I called it out, corrected the board state, had it repeat the correct state back to me, and then had it make more illegal moves. Over and over again. Never managed to finish the game.

        • Actually ChatGPT is quite good at logic. In fact, being so good at logic is one of the core ways you can mess with it, such as convincing it to ignore its restrictions on use. The issue here seems to be that you haven't explained the logic to ChatGPT. ChatGPT is making illegal moves not because it can't follow simple logic, but because it's faking the rules of the game. But you can explain the rules to it in a session, or you can use a plugin that sets up that session for you.

          Then you can have a correct game of ti

  • but useful for inexperienced and out of touch developers. It is typically managers who are hyped about it.
  • by bloodhawk ( 813939 ) on Sunday October 15, 2023 @04:23PM (#63927123)
    cool, but if it takes him 2-4 hours to convert a piece of code to accept dollars and cents instead of just an integer I think he has more serious coding issues.
  • I tried it with a simple "convert date to Unix timestamp" function for an embedded project I am working on. Spent hours debugging other code until I got a look at ChatGPT's version and found it simply does not work.
    So, YMMV, and you have to double-check everything. To me, it does not look like something to debug another's code :).

  • How do we teach ChatGPT (and the people who trust it to write code for them) that you NEVER use a float type to store currency, because the precision limitations will cause problems even with values like $0.10 and $0.20 - even though they look fine (to humans) as decimals?

    Store the value in an integer as cents (and calculate with ints) and format it when you need to.
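    A quick PHP demonstration of both halves of that advice (the same failure occurs in any language using IEEE 754 doubles):

        // Binary floats cannot represent 0.10 or 0.20 exactly:
        var_dump( 0.1 + 0.2 == 0.3 );   // bool(false)

        // Store cents in an integer and format only for display:
        $price_cents = 1999;                      // $19.99
        $total_cents = $price_cents + 10 + 20;    // exact integer arithmetic
        echo '$' . number_format( $total_cents / 100, 2 );   // $20.29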

  • Is this article about 6 months late or what?
  • You hit it on the head with the blackbox point, but not just because we don't know how it's arriving at its conclusions: the more reliant we become on it, the less understanding developers will have of the existing codebase and functionality.

    An employer doesn't always just pay you to code/bugfix, they pay you for your understanding of the codebase. Someone saying, 'I don't know how it works, chatgpt did it.' is an unacceptable answer.

    Saying I don't know how it works, chatgpt

  • by manu0601 ( 2221348 ) on Sunday October 15, 2023 @05:55PM (#63927275)

    [I] went back to my normal technique of digging through GitHub and StackExchange to see if there were any examples of what I was trying to do, and then writing my own code.

    Hence the code to debug is an assemblage of stuff posted to public forums, and ChatGPT was trained on that. It was fed the questions with offending code and their answers. Usual bugs, usual fixes.

  • Call me a luddite, but while I'm a bit against the current 'AI' that basically scraped the internet to build up its engine, this is something that I always hoped for when it came to code generation.

    I had these ideas of somehow feeding it all the source material for a given language - compiler docs, the language docs, rules, etc. - and then being able to describe functions and have it generate the base code for them. Why? Because coding for me was a path not taken. I did all the schooling, got a degree, and

  • Kinda sounds like it is doing what IDE hints and linting have done for decades, with a more conversational and thus less precise interface.
  • This chatgpt programmer bullshit only works when the system is given a limited set of input. The platform literally prevents you from uploading several megabytes of multiple files, which would be absolutely necessary to give it context in order to solve any problem of significant scope. Instead, people are asking it to rewrite their 50-line functions to work with dollars instead of fuckwits, and then posting their magic results onto social media in hope of clicks, because, well, they are fuckwits.

    I'm bored of

  • One developer offers a price based on bespoke coding by a skilled human; one offers a price based on AI-driven changes sanity-checked by a human. The third option is a developer who feeds in prompts enough times for the bug to 'disappear' and never bothers to sanity check the rest of the code or actually run it... guess which price the client wants?!
