Coders Don't Fear AI, Reports Stack Overflow's Massive 2024 Survey (thenewstack.io) 134

Stack Overflow says over 65,000 developers took their annual survey — and "For the first time this year, we asked if developers felt AI was a threat to their job..."

Some analysis from The New Stack: Unsurprisingly, only 12% of surveyed developers believe AI is a threat to their current job. In fact, 70% are favorably inclined to use AI tools as part of their development workflow... Among those who use AI tools in their development workflow, 81% said productivity is one of its top benefits, followed by an ability to learn new skills quickly (62%). Far fewer (30%) said improved accuracy is a benefit. Professional developers' adoption of AI tools in the development process has risen rapidly, going from 44% in 2023 to 62% in 2024...

Seventy-one percent of developers with less than five years of experience reported using AI tools in their development process, as compared to just 49% of developers with 20 years of experience coding... At 82%, [ChatGPT] is twice as likely to have been used as GitHub Copilot. Among ChatGPT users, 74% want to continue using it.

But "only 43% said they trust the accuracy of AI tools," according to Stack Overflow's blog post, "and 45% believe AI tools struggle to handle complex tasks."

More analysis from The New Stack: The latest edition of the global annual survey found full-time employment is holding steady, with over 80% reporting that they have full-time jobs. The percentage of unemployed developers has more than doubled since 2019 but is still at a modest 4.4% worldwide... Median annual salaries of survey respondents declined significantly. For example, the median 2024 salary for full-stack developers fell 11% from the previous year, to $63,333... Wage pressure may be the result of more competition from an increase in freelancing.

Eighteen percent of professional developers in the 2024 survey said they are independent contractors or self-employed, which is up from 9.5% in 2020. Part-time employment has also risen, presenting even more pressure on full-time salaries... Job losses at tech companies have contributed to a large influx of talent into the freelance market, noted Stack Overflow CEO Prashanth Chandrasekar in an interview with The New Stack. Since COVID-19, he added, the emphasis on remote work means more people value job flexibility. In the 2024 survey, only 20% have returned to full-time in-person work, 38% are full-time remote, while the remainder are in a hybrid situation. Anticipation of future productivity growth due to AI may also be creating uncertainty about how much to pay developers.

Two stats jumped out for Visual Studio Magazine: In this year's big Stack Overflow developer survey, things are much the same for Microsoft-centric data points: VS Code and Visual Studio still rule the IDE roost, while .NET maintains its No. 1 position among non-web frameworks. It's been this way for years, though in 2021 it was .NET Framework at No. 1 among frameworks, while the new .NET Core/.NET 5 entry was No. 3. Among IDEs, there has been less change. "Visual Studio Code is used by more than twice as many developers as its nearest (and related) alternative, Visual Studio," said the 2024 Stack Overflow Developer Survey, the 14th in the series of massive reports.
Stack Overflow shared some other interesting statistics:
  • "Javascript (62%), HTML/CSS (53%), and Python (51%) top the list of most used languages for the second year in a row... [JavaScript] has been the most popular language every year since the inception of the Developer Survey in 2011."
  • "Python is the most desired language this year (users that did not indicate using this year but did indicate wanting to use next year), overtaking JavaScript."
  • "The language that most developers used and want to use again is Rust for the second year in a row with an 83% admiration rate. "
  • "Python is most popular for those learning to code..."
  • "Technical debt is a problem for 62% of developers, twice as much as the second- and third-most frustrating problems for developers: complex tech stacks for building and deployment."

This discussion has been archived. No new comments can be posted.

  • by mccalli ( 323026 ) on Sunday August 04, 2024 @02:49AM (#64679326) Homepage
    That leaves a disappointingly high number of gullible, credulous developers.
    • by jhoegl ( 638955 )
      And that is what everyone, in every job, fears the most.

      Competent at your job? Incompetent person comes along and messes it all up.

      Incompetent at your job? Hope for a competent person to come along and hide your incompetency.
    • That leaves a disappointingly high number of gullible, credulous developers.

      It's the question that is kind of meaningless. "Trust the accuracy"?

      LLMs are a tool. The accuracy of the final product depends on the person wielding the tool.

      That a doofus can get bad results by just swinging a tool around doesn't mean that the smart approach is "tools bad", lol

      • LLMs are a tool, and like all tools, they are either a benefit or a hazard.

        When they're a benefit they're not my problem.

      • by narcc ( 412956 )

        LLMs are toys. The sooner we realize that the better.

        • LLMs are toys. The sooner we realize that the better.

          I don't know why I bother. I mean, just the instinctual urge to counter nonsense, I guess.

          So yes, pivoting to my own self interest, y'all just keep believing that these tools that I use to great effect every day are useless. Yup, sure are!

          • by narcc ( 412956 )

            If you find a silly toy helpful, perhaps you're not as capable as you believe yourself to be...

            • by jma05 ( 897351 )

              Or perhaps you are incapable of leveraging new tools like the rest of us and adapting your old workflows.
              Getting old now, are we?

              • by narcc ( 412956 )

                Again, they're not tools. They're toys. You'll figure that out eventually.

                • by jma05 ( 897351 )

                  They are tools, not toys any more. They were toys (research prototypes) until GPT-2 or so.
                  You will figure that out eventually, everyone else has. It's... inevitable.

                  • by narcc ( 412956 )

                    Like I said, you'll figure it out eventually.

                    Even the most delusional realize that fussing with the things is a waste of time, even if they're not ready to acknowledge it. You'll soon find yourself using it less and less, making excuses like "I'll just do this thing real quick" or "I don't need it for something this simple". You'll start to get annoyed by the endless failures and get tired of constantly cleaning up after it. Eventually, you'll realize you haven't bothered with it for a few hours, then a

                    • by jma05 ( 897351 )

                      The point was about whether they were tools, not a panacea.

                      I am already at a point where I will almost (except in some secure environments) never again write code the way I have in the past. It changed the way I engage with documents.

                      > "I don't need it for something this simple"

                      It costs fractions of cents. I use it for the simplest tasks; they are already integrated into my workflows.

                      They will disrupt every industry. Of course, some more than others, some sooner than others. And people will adapt to use

                    • by jma05 ( 897351 )

                      * mass unemployment

                    • by narcc ( 412956 )

                      They will disrupt every industry.

                      Everyone but you stopped believing that bullshit more than a year ago.

                      You're just a little behind the times. You'll catch up to the rest of us eventually. Have fun playing with your toys. You'll come around when you're ready.

                    • by jma05 ( 897351 )

                      > Everyone but you stopped believing that bullshit more than a year ago.

                      Oh, they did?

                      IMF warns of massive labour disruption from AI (June 17, 2024)
                      https://www.ft.com/content/3d7... [ft.com]

                      Want more?

                    • by jma05 ( 897351 )

                      > Everyone but you stopped

                      Here are a few more (2.8M) folk besides me who don't think it's just a toy.
                      https://www.reddit.com/r/singu... [reddit.com]

                      > You'll catch up to the rest of us eventually

                      You are literally the only guy (I am sure there are a few more) I have seen so far in any tech forum with that viewpoint in the last 2 years. So it's hilarious when you insinuate that I am the eccentric one. Did the recommendation algorithms create a special bubble for you? It can happen to all of us.

                    • by narcc ( 412956 )

                      More examples of stupid people believing stupid things? Your silly nonsense is quite enough stupid for today, thanks.

    • This is like asking if you trust the accuracy of Stack Overflow answers.

      Does a yes imply you believe in naively copying and pasting SO answers into your code? No, because adapting close-enough examples is part of the job, and a yes can mean they're on the right track.

      So the obvious problem with the question is what "accuracy" means: it can do the job for me, or it's on the right track, or it's a useful answer. People answering that question would be well aware of how you're trying to take it because nimrods keep talkin

    • by gweihir ( 88907 )

      Yep, pretty much. Explains nicely why there is so much crappy code out there.

  • by BadDreamer ( 196188 ) on Sunday August 04, 2024 @02:58AM (#64679330) Homepage

    I'm surprised at the number of developers who trust the accuracy of the tools. That could be because they are mainly used for trivial "fill in the blanks" tasks, where they actually are really helpful, and not actual design problems.

    In my experience, even having an AI assistant help with designing a SQL query will lead to so much extra verification work it ends up taking a lot more time and effort than just writing it from scratch. For more complex design problems it ends up nearly impossible to even verify the mess it makes.

    • by Hodr ( 219920 )

      I don't know anyone who directly integrates a tool's generated code. Usually they use the tool to provide an outline of the algorithm/model/procedure that they want to build and then use that like a rough draft, or to see if there's an approach they didn't consider. Also useful for identifying errors in existing code. You can't trust that it will find any/all, but anything it does find is beneficial and easy to check for "accuracy".

      So I agree, if used in an appropriate manner there's no reason to consider t

    • Given the need to check everything, there's a simple rule for deciding when to use an LLM: the same set of places you'd be more comfortable fixing existing code than writing new code. For contained, easily verified snippets, it's great. For big complex things or especially subtle stuff, give it a miss.
    • It's shitty reporting. Most of that percentage is from the answer "somewhat trust", which is a far cry from "trust".

      Actual data here: https://survey.stackoverflow.c... [stackoverflow.co]

  • by war4peace ( 1628283 ) on Sunday August 04, 2024 @03:15AM (#64679352)

    One reason I like AI-generated code is that it has ample comments and explanations.
    For someone who only occasionally dabbles in development and scripting, it's a godsend.

    My use cases might be niche, but still maybe worth mentioning.
    I occasionally need to create a certain bash script, or a YAML config, or a PowerShell script, or a Python script, etc. Now, I really don't have time to learn all of them properly, save for basic stuff, and asking around in forums or communities has had disappointing results, because my requirements are usually a bit too complex for a simple answer there, but not complex (and not often) enough to be worth learning the whole scripting / development language(s).

    For example, I needed a PowerShell script which would recursively parse several folders with lots of surveillance camera stills, then rename all images from "yesterday" using a counter, then pass them to ffmpeg to generate timelapses and finally delete all images older than 50 hours (could have gone with 48, but I wanted a small buffer). The application generating the images is Windows-based (Blue Iris), hence the need to use PowerShell.
    ChatGPT generated code with explanations and comments, and after about an hour of iterating, I ended up with a perfectly usable script which has been running daily for months, with zero errors and a log output which I check every now and then. And I learned a bit more PowerShell scripting in the process.

    Another time, I needed a bash script which would recursively parse through hundreds and hundreds of folders with various archives (rar, zip, 7z, etc.), and orderly extract them all into predefined folders, with error output in a log file. I am not skilled at bash scripting (I know the basics, though), so I asked ChatGPT. Took a couple of hours of iterating, but I got what I wanted in the end.
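
    The shape of that loop, sketched here in Python rather than bash for brevity (paths are placeholders, and it assumes the 7-Zip CLI is installed, which handles zip, rar, and 7z alike); not the actual script:

      #!/usr/bin/env python3
      # Rough sketch of the extraction loop described above.
      import subprocess
      from pathlib import Path

      SRC = Path("/srv/archives")           # placeholder source tree
      DEST = Path("/srv/extracted")         # placeholder output root
      LOG = Path("/srv/extract-errors.log")

      with LOG.open("a") as log:
          for archive in SRC.rglob("*"):
              if archive.suffix.lower() not in {".zip", ".rar", ".7z"}:
                  continue
              outdir = DEST / archive.stem  # one predefined folder per archive
              outdir.mkdir(parents=True, exist_ok=True)
              result = subprocess.run(
                  ["7z", "x", str(archive), f"-o{outdir}", "-y"],
                  capture_output=True, text=True,
              )
              if result.returncode != 0:    # log failures instead of stopping
                  log.write(f"{archive}: {result.stderr}\n")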

    It also generates excellent YAML files for my Home Assistant configuration.

    • by serviscope_minor ( 664417 ) on Sunday August 04, 2024 @03:28AM (#64679362) Journal

      I occasionally need to create a certain bash script, or a YAML config,

      It can be really useless here, I've tried.

      Doing Python dev recently, and Python project config is a complete mess and badly documented, especially if you're trying to do it in the new style with a single pyproject.toml. After struggling with the docs for a while I tried ChatGPT and it spits out convincing-looking garbage or just old-style ini configs.

      It has not learned how to map old configs to pyproject.toml, nor has it managed to extract the relevant data from the internet.
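
      (For reference, the minimal new-style file in question looks roughly like this; a sketch assuming setuptools as the build backend, with placeholder names:)

        [build-system]
        requires = ["setuptools>=61"]
        build-backend = "setuptools.build_meta"

        [project]
        name = "example-package"      # placeholder project name
        version = "0.1.0"
        requires-python = ">=3.8"
        dependencies = ["requests"]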

      • Doing Python dev recently, and Python project config is a complete mess and badly documented, especially if you're trying to do it in the new style with a single pyproject.toml. After struggling with the docs for a while I tried ChatGPT and it spits out convincing-looking garbage or just old-style ini configs.

        As always, YMMV.

        After not really using Python for ... er, 20 years or so, I had to tweak a report generator Python thing. I had to add some new filter options that were a bit complex and not like any of the existing filters. ChatGPT happily spat out some new code for this, with fancy lambda functions that would have taken me forever to figure out. They worked though (and I had enough chops at least to see that they would work, even before testing). Saved me hours.
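
        The shape of it was roughly a list of predicate lambdas combined over the rows; illustrative only, with hypothetical names, not the actual report code:

          # Each filter is a predicate; a row passes if every active filter accepts it.
          rows = [
              {"name": "build", "status": "ok", "duration": 42},
              {"name": "deploy", "status": "failed", "duration": 7},
          ]

          filters = [
              lambda r: r["status"] == "failed",   # keep only failures
              lambda r: r["duration"] < 30,        # keep only quick runs
          ]

          filtered = [r for r in rows if all(f(r) for f in filters)]
          print(filtered)  # -> [{'name': 'deploy', 'status': 'failed', 'duration': 7}]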

        It has not learned how to map old configs to pyproject.toml, nor has it managed to extract the relevant data from the internet.

        Again, YMMV. I've had it work multiple tim

      • Yes, for very new and badly documented stuff, of course it would fail. It's not a Holy Grail.

    • This sounds like a job for Linux. In bash you would not need an hour to write this kind of workflow script. One reason I am so productive at work is that we have mounted all the file shares on Linux workstations as well as Windows workstations. So I can have Windows boxes writing their own files and then I do all the heavy lifting using the best scripting tools. Finally, the Windows apps can just read back the results from the filter. Moreover, the scripts are usually very easy to parallelize in bash, which o
      • In my case, you don't want to copy 300K images a day from one network server to another.
        Also, the Windows box does have an RTX 4000 GPU; my Linux server does not - therefore converting images to AV1 videos must be performed locally on the Windows machine.

        So, while it may be a Linux job too, the scope varies case by case.

        • You don't need to copy images to access them from Linux. You need to expose the storage they are on to Linux.

          If you're in a situation where you don't even get the authority to move a GPU to the machine where it will do its job best, it would seem this is not a job you will be keeping for long.

          • Dude.
            Those are my personal servers.
            What the hell are you talking about?
            I use the GPU on the Blue Iris machine because it allows me to use CodeProject.AI for person and car detection and several automations based on that.
            The Linux server doesn't need a GPU. OK, well, it does, but I first need to upgrade it to an EPYC system, because I am out of PCI Express lanes there.

            And exposing the storage is easy; working with 100+ GB of files on remote storage is a bottleneck. No, I don't have a 10G NIC on the Blue Iris m

            • No, that isn't a bottleneck either. But, you were talking about this as if it's a production critical system with hard requirements, when it's really just a little hobby project and you've made bad choices in tools you have to deal with. You could have solved all this cheaper and easier without the need for AI to code up stuff, but instead you did it the hard way.

              • Mate, you are rambling.
                Making assumptions without even understanding what this is about doesn't help your argument.

                • Probably the best thing about LLMs is that they'll give you a solution while a human would still be telling you how stupid you are for having a problem to solve.

                  The LLM has no need to stroke its own ego by looking down on everyone else and smugly doubling down on their insults. The LLM just tirelessly revises and re-revises as you refine the requirements until you've got something that works and you can move on with your day.

                  You'll never see an LLM return insults and criticism in response to a request for c

                  • Probably the best thing about LLMs is that they'll give you a solution while a human would still be telling you how stupid you are for having a problem to solve

                    This. I hate talking to my co-workers because they all have opinions despite ample evidence that literally nobody knows what's going on. But if you ask them how A & B works they're going to tell you about C & D and how they'd do it and you're going to have to get code review from them later so have fun refactoring your code.

                    Oh A & B? Yeah man that's tricky I dunno. When it comes time for code review if they've never seen it they will just say looks good and approve it... unless they've alre

                  • You know what? You are spot on.
                    Of course, when it comes to very complex coding, humans can arguably churn out better code, if they know what they are doing.
                    But a 2-page script that needs to run daily for 1h on my hobby server? Yeah, I couldn't care less if it could be made to finish in 55 minutes, or if it would "look better", or if it could be "more elegant".

                    My bash script example was actually first presented as a question on a very well known community, and the question was closed because it was "too complex"

    • by m00sh ( 2538182 )

      I agree. It's also amazing at Docker Compose YAML config. I hadn't been able to set up my NAS the way I wanted for a long time, but with ChatGPT it took a week and everything was set up.

      Even for things without proper documentation, I was able to make my own, bundle it up, and run it. I could have done it myself, but documentation is sparse and getting to a working state is a very, very slow process with Docker. It still had errors, but with a few iterations I had everything ironed out.
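
      (For scale, the kind of thing meant here is a short Compose file per service; the image and host paths below are just placeholders:)

        # Minimal sketch of a NAS-style service; image and host paths are placeholders.
        services:
          jellyfin:
            image: jellyfin/jellyfin
            ports:
              - "8096:8096"                       # web UI
            volumes:
              - /srv/nas/media:/media:ro          # media library, read-only
              - /srv/nas/jellyfin-config:/config  # persistent app config
            restart: unless-stopped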

    • Here is the code:

      # Set the source folder where your images are located
      $sourceFolder = "C:\Path\To\Your\Images"

      # Set the destination folder for processed images
      $destinationFolder = "C:\Path\To\Processed\Images"

      # Get all image files recursively from the source folder
      $imageFiles = Get-ChildItem -Path $sourceFolder -Include *.jpg,*.png -Recurse

      # Get the current date minus one day
      $yesterday = (Get-Date).AddDays(-1)

      # Initialize a counter for renaming
      $counter = 1

      # Process each image
      foreach ($image in $imageFiles) {
          # Keep it simple: files modified in the last day, renamed with a bare counter
          if ($image.LastWriteTime -ge $yesterday) {
              $newName = "$($image.BaseName)$counter$($image.Extension)"
              Move-Item -Path $image.FullName -Destination (Join-Path $destinationFolder $newName)
              $counter++
          }
      }
      • Yeah, that's very basic and limited code.
        1. The images generated use a special naming format, containing camera name and timestamp, and they need to be arranged chronologically from their filename timestamp. Creation timestamp might be off at the upper bound limits (image from 23:59:59:318 might have a creation timestamp of 00:00:00:118 next day).
        2. Counter needs to specifically have enough leading zeros (pad the counter) to cover all images, e.g. "filename.000001.jpg" instead of "filename1.jpg", otherwis

        • by Osgeld ( 1900440 )

          yea sorry, that doesn't seem that much more complicated than the example framework; filename manipulation, sending arguments to outside processes, and path direction and basic logging doesn't seem that outrageous to even this non-programmer.

          I did most of this with a batch file gathering CSV files 10 years ago, and the hard drive had 980GB of sorted and timestamped CSV files pumping them to a SQL server daily

    • by Osgeld ( 1900440 )

      yes I find it handy on occasion, but I am not a coder. I occasionally need to make widgets or scripts to glue shit to other shit to make my life slightly more convenient when faced with a large repetitive task.

  • by ihadafivedigituid ( 8391795 ) on Sunday August 04, 2024 @03:31AM (#64679364)
    Interesting survey, and the headline results feel pretty in line with my own observations ... up to a point.

    The focus seems to be on code generation, however--which isn't the best use of current LLMs, in my experience. It might do the right thing, and it might not. I'd guess part of the problem is that it's really hard to precisely describe what you want to do in English.

    But damned if it isn't one of the best code reviewers/critics I've ever encountered--and it's by far the most responsive and patient! It's also great for spitballing stuff or for playing devil's advocate when you need to have your opinions tested.

    And oh wow is it great in sysadmin work for looking at error messages and coming up with possible solutions. The savings in time and frustration make the $20/month feel like the best deal in tech history.

    Which brings me to the point a lot of people are missing: "AI" doesn't need to be a robot programmer in order to destroy tech jobs--it only needs to make some of us a lot more productive.

    It's a brilliant interactive cookbook too.
    • The problem with the "destroy tech jobs" narrative is that, just like robots didn't destroy manufacturing jobs, higher productivity leads to more production, not stable production numbers with lower head count.

      And yes, there have been local reductions in head count, but globally, manufacturing jobs are steadily increasing. So are tech jobs. And AI will not reverse this, with the current ability and scaling.

      What would reverse it would be a real breakthrough, but LLM on its own ain't it.

      • by quonset ( 4839537 ) on Sunday August 04, 2024 @07:11AM (#64679522)

        just like robots didn't destroy manufacturing jobs,

        Would you like to try again [mit.edu]?

        The researchers found that for every robot added per 1,000 workers in the U.S., wages decline by 0.42% and the employment-to-population ratio goes down by 0.2 percentage points — to date, this means the loss of about 400,000 jobs. The impact is more sizable within the areas where robots are deployed: adding one more robot in a commuting zone (geographic areas used for economic analysis) reduces employment by six workers in that area.
        . . .
        Improvements in technology adversely affect wages and employment through the displacement effect, in which robots or other automation complete tasks formerly done by workers. Technology also has more positive productivity effects by making tasks easier to complete or creating new jobs and tasks for workers. The researchers said automation technologies always create both displacement and productivity effects, but robots create a stronger displacement effect.

        • Would you like to re-read my comment?

          World wide, manufacturing jobs are on the rise, and always have been. And the main reason people were laid off in the US (and other first world economies) is because of outsourcing, not automation. For every such position lost in the US, several have cropped up across the world.

          The amount of production has vastly increased from automation. That lifts the need for workers, even with the multiplier of higher productivity per worker.

          • The number of manufacturing jobs has gone up, but have you looked at where they have shifted and the kind of 19th century horrorshows they really are?

            https://data.worldbank.org/ind... [worldbank.org]

            Look at the map version of the data. You think India, Algeria(!), Iran, and Russia are great places to work? My god, the medieval conditions in the numerous Indian factory videos are heartbreaking. The number of jobs has gone up, sure, but is that what we really want?
            • It is what is happening, is my point. Rather than vanishing, manufacturing is growing. And would you rather be unemployed in those places than having a manufacturing job?

              What happens over time is that the productivity multipliers start moving into lower cost countries as well, eventually pushing the average living standards up. That can be held back for political reasons, of course, but it's the overall trend.

              • This is zero help for people in the USA who have built careers on software. How will they feed their kids? Will they build 19th century foundries in pits in their back yards and manufacture crap-quality parts in competition with people in India?
                • If they want to work, they move to where the work is. If they don't, they better change the political system. Nobody has a right to a specific career. We all take our chances when we pick one.

                  And if your country leaves them no choice but to build foundries in pits in their back yard, that's nobody's fault but your own.

      • by m00sh ( 2538182 )

        Yep.

        Managers are much less scared of letting people go now.

        We used to have experts who knew a lot about a topic, whom managers were afraid to let go because there would be a hole in the team.

        Now no more. They let go of people so much more easily. Those who have questions on that topic can ask LLMs. It won't be as quick, but it gives you answers. You don't really need an expert around.

        Also, teams are much smaller. There was always hiring with excess capacity, but not anymore.

        • That seems to be the way things go in the US. It's not how it looks in Europe. The demand for skilled tech workers is higher than ever.

          Teams are smaller here as well, this is correct. But instead there are a lot more teams, doing a lot more tasks.

      • Programming is not manufacturing.

        Do you have data to support the notion that the number of jobs in software in particular is increasing? All I see is a layoff bloodbath. Apparently the BLS thinks this is going to continue:

        Employment of computer programmers is projected to decline 11 percent from 2022 to 2032. [emphasis mine]
        https://www.bls.gov/ooh/comput... [bls.gov]

        Remember that the government moves slowly so this almost certainly doesn't take into consideration the effects of AI. My own experience from having worked on the web side of software at a senior level for 25 years is that far fewer people can get a ton more done in way less time now--an

        • No, programming is growing a lot faster than manufacturing.

          https://kinsta.com/software-en... [kinsta.com]

          The US might be in a losing position, for various reasons. But world wide, software jobs are growing, and will keep growing, especially if productivity per programmer increases. The demand for software is enormous. Most businesses around the world have a shortage of software solutions for their business problems.

    • I'd guess part of the problem is that it's really hard to precisely describe what you want to do in English.

      Right. Most of the critics are really just saying "I don't know how to use this effectively".

      • Most of the critics are really saying "if I put in the work to tell this tool how to do a proper job, I have already solved the problem, so it only gave me more work".

        Some are saying they don't know how to use it, for sure. But for anything but trivial tasks the current generation of generative tools only get in the way.

    • I work in mostly C++, and when I've asked it for a code review it mostly spits out nonsense that looks like some generic advice that was scraped from a best practices blog, often not even being applicable to the code in question. It does do pretty well with very specific questions regarding how to do something very focused and well defined. The code it gives will seldom work but the ideas are often useful. But that's a pretty rare use case.

      • Which LLM and version have you tried?

        Every time I have asked this question, the answer has been something in the GPT-3.5 class.
        • Claude Sonnet and ChatGPT-4o

          • Stipulating that we are both intelligent and competent in our respective fields, what do you suppose explains our drastically different results?
            • I'm not sure, but it might be the domains we work in, the tools we use, perhaps things like that. I'm doing mostly "Modern C++" in low-level system service code. When I've asked LLMs to do simpler tasks, for instance solutions to leetcode problems, they do OK. On one that I spent some time on early on, basically to evaluate LLMs for this, the AI gave a working solution to the leetcode question, and the solution was dead average.

              It was also not at all the way I would have done it, so I followed up with a serie

    • > I'd guess part of the problem is that it's really hard to precisely describe what you want to do in English.

      No - the REAL "problem" with ChatGPT/etc is that these are simply statistical language models...

      When a trained LLM looks at its input and decides what word to generate next, all it is doing is looking at the statistics of the training data and determining what word those statistics predict would most likely come next, or in other words what word would the people who created the training data (which has
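
      For what "statistical language model" concretely means, here is a deliberately tiny toy of that next-word loop: a bigram table, nothing like a real transformer (which conditions on the whole context via attention), but it shows the sample-append-repeat mechanic being described:

        # Toy autoregressive text generator: made-up bigram counts, not a real LLM.
        import random

        bigram_counts = {
            "the": {"cat": 3, "dog": 2},
            "cat": {"sat": 4},
            "dog": {"ran": 1},
            "sat": {"down": 2},
            "ran": {"away": 1},
        }

        def next_word(prev):
            counts = bigram_counts.get(prev)
            if not counts:
                return None                   # no statistics for this word: stop
            words, weights = zip(*counts.items())
            return random.choices(words, weights=weights)[0]  # frequency-weighted pick

        text = ["the"]
        while (word := next_word(text[-1])) is not None:
            text.append(word)
        print(" ".join(text))                 # e.g. "the cat sat down"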

      • ChatGPT-4o disagrees with you:

        The response you received contains some misconceptions about how large language models (LLMs) like ChatGPT work. Firstly, while it's true that LLMs are based on statistical patterns, they do not operate by simulating a "majority vote" from a "stadium full of people." Instead, these models use complex algorithms that consider context and patterns across vast amounts of data to predict likely continuations of text. The notion that there is "literally no plan" is misleading; while LLMs do not have goals or plans like humans, they use context windows and attention mechanisms to maintain coherence and relevance across input. The models analyze patterns at multiple levels, not merely word-by-word voting, allowing for more sophisticated and coherent responses than the analogy suggests. Lastly, these models do not simply reflect the majority; they are designed to generate text that is contextually and syntactically appropriate based on a learned representation of language.

        • Right, and the reason ChatGPT is generating that response is because it reflects the average opinion of those whose data it was trained on.

          It should be no surprise that there are tons of people who think that ChatGPT has a mind of its own and is more than the statistical word generator that it actually is.

          If you realize how Transformer models work in detail, then you'd also realize that ChatGPT is lying to you (on behalf of those misguided souls whose training data it is generating from). The whole

          • GPT-4o disagrees again, and you do realize that you have described your own opinion as being in the minority compared to the training data, right?

            The response you received continues to misunderstand how large language models like ChatGPT function. While it's true that LLMs process input sequences and generate text based on learned patterns, the assertion that they merely reflect "the average opinion" is oversimplified. These models do not "lie" or "bullshit"; they generate text based on complex patterns in the data they were trained on, aiming to provide coherent and contextually relevant responses. The idea that LLMs start "from scratch" with each new word is misleading. Instead, they use mechanisms like attention layers to maintain context within a conversation, ensuring responses remain relevant to the input. The model's outputs are not about following a "plan" in a human sense but rather about leveraging learned patterns to produce coherent text across sequences. While they don't have memory in the way humans do, they efficiently use context within conversations to generate appropriate responses.

            Prior to GPT-4, I would have agreed with you based on just using it and seeing what others were producing. But GPT-4, especially in its original "unsafe" and jailbroken form, is spooky sometimes--and I am a jaded senior tech industry guy who has seen hype going back to the 70s. Some of the smartest people in the world, who actually work on this stuff and know

            • For sure they can do some very impressive things (but also fail on some incredibly simple things, in ways that may seem counter-intuitive, but aren't really surprising).

              I think many people get fooled by seeing the intelligent responses from ChatGPT and assuming that therefore it is both intelligent and has a mind of its own, when in reality the reason it *appears* to be thinking, and *appears* to be intelligent (some of the time) is because it is essentially regurgitating human output, with the "stadium of

              • I believe you are technically incorrect about the current SOTA LLMs. We've moved beyond that, and the leap in functionality shows it.

                But supposing your stadium analogy was correct, I for one would expect worse results as the training data sets grew larger. I strongly doubt human knowledge, as posted to the interwebs, has a Gaussian distribution of quality.
                • The improvements in functionality are down to more and better quality data, and different "post-training" steps. The core architecture of the models remains the same, as evidenced by "open weights" models such as the largest Llama ones, close to SOTA, which can be run via the open source llama.cpp (which reflects the standard Transformer architecture).

                  • I'm not drawing distinctions between training and inference--I'm after results by whatever means. We had clear breakthroughs like 3D games in the 90s running on moldy DOS operating systems, so pointing to one part of the system and saying there has been no change can be misleading. llama.cpp has a fair bit of activity on Github, in any case.

                    Inference is not the big horsepower part of the equation even in human terms, or at least that's how it feels: learning stuff, and learning it well, is difficult and
                    • > Inference is not the big horsepower part of the equation even in human terms, or at least that's how it feels: learning stuff, and learning it well, is difficult and time consuming. Applying that knowledge is by comparison pretty easy.

                      If applying knowledge is so easy, and these models are so powerful, then how do you explain their failure on many simple reasoning tasks?

                      Did the N-trillion tokens of data used to train GPT-4 really not include the world knowledge needed to help the farmer cross the river

            • GPT-4o does not agree or disagree with anything. It generates the statistically most likely response to the prompt it is given. So what is disagreeing is the average writing it has been trained on.

              "Think of how stupid the average person is, and realize half of them are stupider than that."
              -- George Carlin

              That is what it has been trained on. The average. The people who are impressed by them generally are not that well versed in the subjects they ask about. Like in this case. You blindly copypasta a hallucina

              • Eh, I could join the Triple Nine Society so I'm looking at this from the right end of the bell curve.

                My observation is that there is a nonlinear positive correlation between intelligence and success with LLMs. I don't have any data to back that up, but I know a lot of people and have seen their output over time.

                Also, you are being a bit literal minded and missed the obvious humor in my choice of the word "disagrees". ;-)
                • by narcc ( 412956 )

                  You believe silly nonsense. I feel bad for you.

                  • Coming from someone who thinks one's feelings about a subject are all that matters (my rhetoric professor and indeed my whole debate team would have laughed at your .sig line link), I have to take that as a compliment. :-)

                    Have a great day!
          • by narcc ( 412956 )

            I applaud the effort, but you can't reason with religious nuts. They're only interested in confirming their existing beliefs and converting others so that their beliefs aren't regularly challenged.

            • Au contraire, I love a good challenge! That's the only way to really grow: either I successfully defend my own beliefs against vigorous and smart opponents, or I update my position and give my opponent(s) my thanks.

              Maybe I missed the part where you offered a challenge, though. Could you restate your argument?
  • ...about people's feelings, & that's all it is, i.e. how developers feel about AI tools. I wonder how they'll feel if AI does indeed reduce the need for human developers? (I'm not saying it will, just "what if")
  • by khchung ( 462899 ) on Sunday August 04, 2024 @05:08AM (#64679440) Journal

    Most code is crap, so an LLM built using existing code as input will produce crap.

    The only result of using AI to write code is more crap that will need more human coders to maintain. What's there to worry about?

    • Sturgeon's law applies, indeed. There are vast amounts of churned out, horrible, barely working and underdocumented code in critical systems all over the world.

      That is one thing I really have hopes for when it comes to generative AI. It could make it possible to go through a lot of such code and document and/or simplify it. It has the potential to help clean out a lot of technical debt. And the confidence C level people have in AI means it may be approved, which isn't usually the case with refactoring to in

      • by narcc ( 412956 )

        It could make it possible to go through a lot of such code and document and/or simplify it. It has the potential to help clean out a lot of technical debt.

        That's the fantasy, but it's not realistic.

        the training on the kind of code it would need to clean out is a possible stumbling block for this.

        That's like saying the biggest problem with riding your bicycle across the Pacific is carrying enough gasoline. It doesn't matter how much you have, it's fundamentally incapable of the task.

  • by Rei ( 128717 ) on Sunday August 04, 2024 @06:57AM (#64679500) Homepage

    All our jobs have come
    Here but now they're gone
    Coders don't fear the AI
    For queries, for code and for scripts
    We can be like they are
    Come on, baby (don't fear the AI)
    Baby, let it code (don't fear the AI)
    We'll be able to slack (don't fear the AI)
    Baby, it's your dev
    La, la, la, la, la
    La, la, la, la, la

    • Love it, needs more cowbell.

      Prompt:
      "Please write a satirical song in the style of a popular rock and roll song (you pick the song) about AI taking programmer jobs." (Yes, I say "please", and it's a running joke with myself.)

      ChatGPT-4o Response:
      Here's a satirical song about AI taking programmer jobs, inspired by the classic rock and roll tune "Johnny B. Goode" by Chuck Berry.

      AI Be Good

      Verse 1:
      Way down in Silicon Valley, in a cubicle so tight,
      There worked a lonely coder, burning through the
    • Comment removed based on user account deletion
  • AI lets below-average people hope that it can spoon-feed things to them so that one day they can be average. But the execs think it is going to turn a person off the street into a superhero AND help them understand what everyone else is doing, and it can never do that.
  • by billybob2001 ( 234675 ) on Sunday August 04, 2024 @07:49AM (#64679566)

    Seventy-one percent of developers with less than five years of experience reported using AI tools in their development process,
    as compared to just 49% of developers with 20% years of experience coding

    Of course, coders don't fear AI threatening their jobs, because coders understand AI's limitations and know that human coders are still necessary.

    PHBs and hirers however, see AI as cheaper than people.

    FEAR more, coders.

  • Having used several LLMs for testing, I wouldn't trust one to write "hello world" without bugs.

    What they are fairly good at is code review. In the same sense that static code analysis is - pointing out parts that you probably want to look at again and verify that it indeed does what you think it does.

    For code generation, it's fairly obvious that stuff like stackoverflow goes into the training data - places where a lot of WRONG code is published by people asking for help finding the issue, as well as EXAMPLE

  • I have little doubt that one day an AI will be able to write better software than me. However, I think AI will replace a lot of other jobs before mine and society will have already had to start tackling how we handle this fundamental shift in our economy by then. As someone else once put it: in order for AI to write better software than me, the client has to be able to accurately specify the details of what they want the software to do. As developers, we know that specifying clear and complete requiremen
    • Software developer will probably be one of the last jobs to fall rather than one of the first, since it's utterly dependent on reasoning (one of the biggest weaknesses of "AI", aka LLMs), as well as on-the-job learning at a variety of timescales from seconds to decades.

      In fact, lack of post-training learning ability is the elephant in the room that makes this current "AI" tech completely unsuitable for many things other than basic automation of rote tasks (incl. auto-complete), and as an NLP tool.

      Companies that

  • Seventy-one percent of developers with less than five years of experience reported using AI tools in their development process, as compared to just 49% of developers with 20% years of experience coding

    um... not sure if I have 20% years of experience or not. Past that, spelling out "Seventy-one percent" and then later in the same sentence using (vastly typeable/readable) "49%" notation... nice.

    • That's a journalism thing. (According to AP Stylebook, you're supposed to always spell out numbers when they're at the beginning of a sentence.) Here's a tweet about it from the official AP Stylebook account [x.com].

      But yeah, "20% years" has gotta be one of those typos where you're typing a lot of numbers with percent signs, and then have one number that doesn't need it. (It was in the original article -- but it looks like they fixed it.)

      Slashdot has the best commenters -- even proofreading the rest of t
  • "Seventy-one percent of developers with less than five years of experience reported using AI tools"
  • I use these LLM-based tools to help me write code. But they are clearly not good.
    I used it yesterday again to edit a BibTeX style file. I had never looked at that programming model, so I thought it would be useful.

    Honestly, I don't think it was more useful than googling. What it suggested to me did not work. It did help me learn how the language worked enough for me to write it correctly. But I am not sure it helped me more than googling would have.

    And what I am sure of is that if I had never seen a stack b

  • I didn't read the article but it would be fascinating to hear what juniors or people just entering the job market this year think. Guys with 20 years of experience and up-to-date skills probably don't need to worry about anything... in climate change terms, they're the ones living at the top of the hill. Sure, the tech sector is on a major downswing at the moment but that's a global/Western economy problem rather than specifically tech
