92% of Programmers Are Using AI Tools, Says GitHub Developer Survey 67
An anonymous reader quotes a report from ZDNet: [A]ccording to a new GitHub programmer survey, "92% of US-based developers are already using AI coding tools both in and outside of work." GitHub partnered with Wakefield Research to survey 500 US-based enterprise developers. They found that 70% of programmers believe AI is providing significant benefits to their code. Specifically, developers said AI coding tools can help them meet existing performance standards with improved code quality, faster outputs, and fewer production-level incidents.
This is more than just people working on external open-source projects or just fooling around. Only 6% of developers said they solely use these tools outside of work. In other words, today, AI programming tools are part and parcel of modern business IT. Why has this happened so quickly? It's all about the programmers' bottom line. Developers say AI coding tools help them meet existing performance standards with improved code quality, faster outputs, and fewer production-level incidents. It's also all about simply producing more lines of code. "Engineering leaders will need to ask whether measuring code volume is still the best way to measure productivity and output," added Inbal Shani, GitHub's chief product officer. "Ultimately, the way to innovate at scale is to empower developers by improving their productivity, increasing their satisfaction, and enabling them to do their best work -- every day."
According to the survey, "Developers want to upskill, design solutions, get feedback from end users, and be evaluated on their communication skills."
"In other words, generating code with AI is a means to an end, not an end in itself," writes ZDNet's Steven Vaughan-Nichols. "Developers believe they should be judged on how they handle those bugs and issues, which is more important to performance than just lines of code. [...] Yes, you can have ChatGPT write a program for you, but if you don't understand what you're doing in the first place or the code you're 'writing,' the code will still be garbage. So, don't think for a minute that just because you can use ChatGPT to write a Rust bubble-sort routine, it means you're a programmer now. You're not."
I admit I use it (Score:2)
chatgpt do this another way.
It never works but I get the gist of what it tries to do. If anything it makes me feel better I'm not spewing out the worst total nonsense.
Re: (Score:3)
I use it a lot for "Write a function that takes X and produces Y". Add a couple examples and it usually produces something good. It's great for some of those annoying transformations that would just require me to look up documentation for the correct function names and do some string splitting/combining or whatever.
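A minimal sketch of the kind of "takes X, produces Y" string transformation the parent describes; the name format and function name here are invented for illustration, not from the original comment:

```javascript
// Hypothetical example of a small, easily verified transformation:
// collapse "LAST, first" names into "First Last".
function normalizeName(name) {
  // Split on the comma and strip surrounding whitespace from each part.
  const [last, first] = name.split(",").map((s) => s.trim());
  // Capitalize the first letter, lowercase the rest.
  const cap = (s) => s.charAt(0).toUpperCase() + s.slice(1).toLowerCase();
  return `${cap(first)} ${cap(last)}`;
}
```

With a couple of input/output examples in the prompt (e.g. `"DOE, jane"` should become `"Jane Doe"`), this is exactly the shape of task where a generated answer is quick to eyeball and fix up.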
Like you said, there are often cases where it produces something and I think "eh, close enough" and fix it up for my use. In its current state it's certainly not a drop-in replacement for a programmer.
Re: (Score:2)
I don't do a ton of programming, most of it is as a hobby, and very intermittently. But I did have a task at work that "needed" some VBS, which I haven't done in a LONG time. I forget exactly what the task was, but generally it was a for-loop and a means to pull an indexed value out of an array based on a value it got from the loop. I could do that in my sleep in PHP, but VBS was a struggle. I spent a few minutes doing the usual Stack Overflow thing, but it wasn't getting me what I needed so I asked Cha
Re: (Score:2)
I don’t write a ton of Javascript and I always have to look up either old code or docs for fetch(). Some variation of this has worked great to come up with a skeleton function:
“Write a javascript function, using fetch(), with full error handling, that POSTs to xyz with parameters a, b, c, and retrieves a json result. If the result attribute def is -1, log to error console, otherwise . etc”
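For illustration, here is a sketch of the kind of skeleton that prompt might produce. The endpoint URL, parameter names, and the `def` attribute come from the prompt above; the injectable `fetchImpl` parameter is my own addition so the sketch can be exercised without a live server:

```javascript
// Hypothetical skeleton for: POST to /xyz with a, b, c, parse the JSON
// result, and treat result.def === -1 as an application-level error.
async function postXyz(a, b, c, fetchImpl = fetch) {
  try {
    const resp = await fetchImpl("https://example.com/xyz", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ a, b, c }),
    });
    // fetch() only rejects on network failure; HTTP errors must be checked.
    if (!resp.ok) {
      throw new Error(`HTTP ${resp.status}`);
    }
    const result = await resp.json();
    if (result.def === -1) {
      console.error("server reported def === -1");
      return null;
    }
    return result;
  } catch (err) {
    console.error("postXyz failed:", err);
    return null;
  }
}
```

The explicit `resp.ok` check is the detail such skeletons are most useful for remembering: `fetch()` does not reject on HTTP error statuses.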
Re: (Score:1)
Right, because when your job requires 100 unique workflows you better be an expert with all of them or why even bother?
Re: (Score:2)
ChatGPT won't help you with work specific tasks, only with general dev and if someone isn't good enough at general dev they should go do something else. Simple.
Re: (Score:1)
Or, in other words.. (Score:5, Insightful)
'Use' is ambiguous here. You can 'use' a lot of stuff. ChatGPT is not doing all of the work. How do they know that it's 'higher quality, faster outputs, and fewer production-level incidents'? They don't, unless they already know how to write code. Non-story, more AI Hype that I am so ready to stop reading about.
Re:Or, in other words.. (Score:5, Insightful)
Every single GitHub-sponsored piece of research on this topic has been hyperbolic in its claims while exhibiting clear indications of selection bias. The independent research coming out in this space has not been nearly as kind (e.g. 40% rate for Copilot generating insecure code, surveyed developers report that bugs in the generated code are harder to resolve, etc.).
Like you, I keyed on the word “use” as well. Our field is filled with tech enthusiasts, so of course we’ve “used” these tools, but we’re telling our devs to steer clear of them professionally because of all the risks they pose, not just to security or maintainability, but legally as well if it’s discovered someone accidentally included someone else’s code.
Re: (Score:2)
Re: (Score:3)
I’ve seen mixed results being reported for productivity gains. It seems to speed some parts of development for some developers, but many of those gains are lost in the vetting and correcting process. The studies I’ve seen with the most promising results don’t seem to mention the standard they were holding the resulting code to, suggesting that it may have simply been when the dev declared it “done”. If that’s the case, that might explain why more junior devs are seeing bi
Re: (Score:2)
So far, from internal testing and some of the papers I have read, tools like Copilot are about a 30% improvement for experienced developers and much less for novices. It has to do with the fact that these tools work best when you write a function that has a narrowly defined single purpose. Novices tend to write giant master functions, while experienced devs tend to write many functions with a narrower scope that are easier to test.
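The narrow-scope point can be sketched with an example of my own (not from the poster's internal testing): a single-purpose function like this gives a completion tool a tightly defined target and is trivial to unit-test, which a giant do-everything routine is not:

```javascript
// A narrowly scoped, single-purpose function: compute the median of a
// numeric array. Easy to specify, easy to verify.
function median(values) {
  // Copy before sorting so the caller's array is not mutated.
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}
```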
Re: (Score:2)
surveyed developers report that bugs in the generated code are harder to resolve,
This is not surprising. With code written by humans, trying to get into the head of the (often no longer at the company) programmer is often a lot harder than tracking down your own bugs. With your own code you immediately know the answer to the question, "What were they thinking when they wrote that?" unless it's something you wrote a long time ago, in which case you're probably no longer at the company. You know how th
Re: (Score:2)
Maybe let them at /. (Score:4, Insightful)
Re:Maybe let them at /. (Score:4, Insightful)
Not surprising (Score:3)
Re: (Score:2)
92% of everyone is shit at their jobs (Score:3)
But every year the expectations and workloads get higher so we can squeeze just a little more productivity and profit projections out of a shrinking workforce, as people give up on having relationships, let alone kids, around their 60+ hour work weeks at minimum wag
Wait, what? (Score:4, Interesting)
Such insane bias reported as fact. WTF.
Re: (Score:2)
I wonder whether they're counting everyone who uses Visual Studio on the basis that its autocomplete now uses "AI", etc.
Re: (Score:2)
Re: (Score:2)
If I caught someone doing this at my organization, they'd get one warning. There wouldn't be a second.
92% of devs use AI, but 5% use it in coding (Score:2)
Corrected headline using BS stats, but likely more accurate.
Re: (Score:2)
Exactly. I'm an architect but still do some coding, and have used AI to help with writing newsletters and presentations, but not in my coding. Not yet anyway.
Which is more fulfilling and less time consuming? (Score:2)
Re:Which is more fulfilling and less time consumin (Score:5, Interesting)
Re: (Score:2, Troll)
I'd find it much more fulfilling to have more of an AI "mentor", where I write the code, and the AI finds bugs and suggests better solutions.
The current state of the art is that the AI finds bugs and suggests better bugs.
Good for easily verified snippets, bad for large c (Score:3)
Personally, I find AI tools great for small tasks that are repetitive and easily verified. Helping write out a bunch of DAMP test cases? Great! Better ranking of autocomplete suggestions? Fantastic!
LLMs attempting to generate more than a line or two of production code? No thanks, code review of gibberish is way harder and prone to dumb mistakes than writing it myself.
Re: (Score:2)
Great! Better ranking of autocomplete suggestions? Fantastic!
I wonder why the AI-generated ranking code has a call to lunch_nukes(). Oh well, who cares, it seems to work anyway and I am on a deadline.
Re: (Score:2)
Re: (Score:2)
Apparently the AI is getting hungry for Hot Pockets.
Re: (Score:2)
Personally, I find AI tools great for small tasks that are repetitive and easily verified
Any small task that is truly repetitive and easily verified should have been automated long ago.
Dubious of sampling.. (Score:5, Insightful)
Anecdotally, this doesn't match up with what I see. 92% is an insane adoption rate for anything in programming, a market notorious for no two people agreeing on pretty much anything.
I see they asked 500 developers, I'm *supremely* skeptical about the sampling method here.
Most surveys suck (Score:4, Interesting)
I had a friend in the 90s who had been a NYT reporter and moved to a local paper on the Jersey Shore as their legal/political editor. Anywho, Gary hired me to write some software to generate random dial lists for their phone polling of elections, in a format that the dialing machine that the newspaper had for rustling up subscriptions could consume. He got the use of it a few days every election season. I wrote what he wanted, but part of the exchange (the money was shit) was that he got me into local debates and I got a question in here and there. Also, he had gone to the Roper polling school and gave me a broad brush view of how it was done and why so many polls sucked. In a nutshell, biased, leading questions. You'll never get the right answer that way, and it may be more of an art than a science to design questions that don't skew the result. He seemed pretty good at it - when major polls of NJ politics, like the Newark Star-Ledger polls, would be 3-6% off, he would get pretty close to on the money with his stuff.
A few years later I was living with a gf who was going through a major college's psych program. I recognized what I saw - the vast majority of presented work (papers) were composed of questionnaires, in essence polls of the participants and finding out stuff about them that way. So, so many biased, leading questions. So I don't doubt that the skew in this is strong. Perhaps they didn't take into account people might want to advertise they were using the latest, greatest thing?
Most reputable poll crosstabs will show the questions asked, btw. You can read them for yourself.
Re: (Score:2)
Like the question could have been "Do you incorporate best of breed technology into your work, such as AI?"
Adding to the mess is that this was funded PR "research" by GitHub, with an ulterior motive to get people to pay for GitHub Copilot. I once participated in another "independent research study" that was funded by a big company for PR purposes, and they obviously knew the answer they wanted to get ahead of time.
In my case, the study was about productivity of users of one platforms product against the other, as m
Survey a monkey, get monkey answers (Score:3, Interesting)
Steven has been around for a long long time shilling for Windows everything.
When you read his writing (and he is a great writer) just remember that. So if
it makes it easier on you pretend any sentence he uses ends with "for Windows."
It's like fortune cookies, but no crumbs "in bed." So Windows Devs want to:
> "Developers want to upskill, design solutions, get feedback from end users, and be evaluated on their communication skills."
Bwahahahha!!! OH HAIL NO.
Devs don't want to "upskill" "dehome" "jawbone" or deal with "solutions" or "end users"
or "be evaluated on their communication skills."
Devs want to have their code speak for them. Devs don't want to talk to end-users...
that's what the support monkeys do.
As with any form of statistics or surveys, there are lies, damned lies, and statistics
(Benjamin Disraeli). The same holds true of purported surveys.
E
P.S. Just to clarify for the Windows devs jockeying for that communication skill,
the monkey reference relates to SurveyMonkey and the Infinite Monkey Theorem.
Re: (Score:2)
Re: (Score:2)
I literally don't know what "upskill" means.
It's what people who can't learn anything complex call learning.
what style AI IMO would help programmers (Score:1)
What would be good to have is an AI tool that takes something programmed in one language, say C code, and converts it to another language, like object-oriented C++, Java, Rust, Python, etc. When I think through a problem, I can come up with a solution in procedural-style C, but it seems like companies or open source projects want something in another language.
Did they talk to many Developers? (Score:2)
Re: (Score:2)
Re: (Score:2)
"Developers want to upskill, design solutions, get feedback from end users, and be evaluated on their communication skills."
Good thing they didn't talk to D.E.I. department, or developers would be asking for a full rack surgeries.
whether measuring code volume is still the best (Score:2)
Um, ok, so I was part of a team that used the Agile process to manage our workflow. Not programming, but similar, since tasks could be assigned, tracked, granularized, and reported on.
If measuring code volume (which I assume means at least lines of code) is common practice to judge performance, wow.
You are doing it very wrong.
Very Wrong.
No wonder programmers take to AI so quickly, it's a pretty good tool to create lines of code. Full stop.
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
Somebody counts lines of code as productivity? (Score:3)
Sample Size? (Score:2)
Let me fix the title for you all.
92% of Programmers who took the GitHub 'Are you using AI' survey are using AI.
SQL optimization (Score:5, Interesting)
I’ve used it both to optimize some very complicated MySQL queries and to suggest schema changes and index additions. On a recent project out of about 15 queries I ran through chatGPT-4, two of them were dramatically improved (as in execution times cut by 90%). Several more had smaller improvements, and then one suggested new index improved the times further. About half the queries had no changes.
Interesting use (Score:1)
I've used it both to optimize some very complicated MySQL queries
That seems like one of the better uses I've heard about, I guess as long as you verify the query produces the desired results then it would be a pretty solid use.
Re: (Score:2)
Re: (Score:1)
Those queries might be faster, but there are likely some nasty surprises hidden in them when you give them data that should be valid but that they don't handle.
Have to say I am pretty suspicious of a query that is magically 90% faster! But SQL being SQL, not impossible... I just would want to feed a lot of different sample data through it and make sure output was the same.
Re: (Score:2)
Re: (Score:3)
Sure, I changed a few names, and slashdot was just spinning when I tried to preview, so I put the original query and the chatGPT-4 output in a pastebin:
https://pastebin.com/QwxSkWqV
I didn't write the original query, and it was in some fairly specific report, but it was clocking in at over 20 seconds in our MySQL slow log. I ran it through chatgpt on a lark. Execution time after the chatGPT solution was 45ms. I sent the revised query over to the programming team and they did a good bit of testing, and output
I tried but I stopped (Score:3)
I tried using ChatGPT for coding, for converting code to a different programming language, even for writing a blog post or telling me where there are sandy beaches in Wilhelmshaven (Germany).
It failed on all occasions:
The generated code was faulty, e.g. the variable declarations were of the wrong type, it referenced library functions that didn't exist, and the "solution" didn't even match the question. The blog post claimed a lot of things that it made up (you can find that post if you google "Simplify Your Delphi Projects with GExperts’ Uses Clause Manager Expert", which is the title it came up with). It even claimed that there were several sandy beaches in WHV (which is wrong, there isn't even one, and one of these made-up locations wasn't even near the coast).
Yes, it all looked convincing at first glance, that's a major part of the problem, and it's fun to play with it for a while.
I stopped wasting my time with it and turned to Slashdot instead.
define "programmers" (Score:2)
Does "programmers" include every person adding something to a website? Also, who responds to surveys? Not busy people, for sure.
I call bullshit (Score:2)
I think they're playing with the numbers and generously interpreting the data to come to that conclusion. I think there's a difference between "trying out AI" and "using AI".
Does it include people who tried and disabled it? (Score:2)
I tried copilot among others and was so annoyed by suggestions that I uninstalled it.
Sure, it's good at writing code against typical interview questions and boilerplate code (which was made by wizards), and it's nice in concept, but I found it got in the way of my coding so much that I disabled it... it kept distracting me by going in directions I didn't necessarily want to go in
Newsflash! (Score:2)
Title: 92% of Programmers Are Using AI Tools, Says GitHub Developer Survey
First line: [A]ccording to a new GitHub programmer survey, "92% of US-based developers are already using AI coding tools both in and outside of work."
Dear Slashdot! There are developers outside the USA.
Re: (Score:2)
It's ok, they are way off base for the ones in the USA anyway, I'm sure. It's garbage journalism. Feel lucky to be left out.
8% of programmers are talented (Score:2)
Actually, probably less.
100% of GitHub respondents responded to survey... (Score:2)
... But how big is that survey sample size, and who didn't respond?
Basically the survey can only look at those people who bothered to respond to the survey, and it's tough to say how big the sample pool size is/was, compared to the real count of developers. Sort of a self-selection bias.
I'm aiming along the lines of the survey being skewed towards those already using (or needing to use) AI in their workflow, vs the hidden majority(?) for whom AI either isn't applicable or who don't want to bother answering the qu
Not AI (Score:2)
They're not using AI tools because those aren't things that actually exist. "AI" is a marketing term to separate rubes from their money. They're using heuristic algorithms and procedural code generation tools. Those are things that genuinely exist. But calling them that reveals them to be the mundane tools that they are.
Lazy cowards!!! (Score:1)
Programmers: get your asses up and THINKING, or LOSE IT!
Lazy bastards!