Maybe ChatGPT Isn't Coming for Your Coding Job (wired.com) 99
Today Wired published an opinion piece by software engineer Zeb Larson headlined "ChatGPT Isn't Coming for Your Coding Job."
Firing engineers and throwing AI at blocked feature development would probably result in disaster, followed by the rehiring of those engineers in short order.
More reasonable suggestions show that large language models (LLMs) can replace some of the duller work of engineering. They can offer autocomplete suggestions or methods to sort data, if they're prompted correctly. As an engineer, I can imagine using an LLM to "rubber duck" a problem, giving it prompts for potential solutions that I can review. It wouldn't replace conferring with another engineer, because LLMs still don't understand the actual requirements of a feature or the interconnections within a code base, but it would speed up those conversations by getting rid of the busy work...
[C]omputing history has already demonstrated that attempts to reduce the presence of developers or streamline their role only end up adding complexity to the work and making those workers even more necessary. If anything, ChatGPT stands to eliminate the duller work of coding much the same way that compilers ended the drudgery of having to work in binary, which would make it easier for developers to focus more on building out the actual architecture of their creations... We've introduced more and more complexity to computers in the hopes of making them so simple that they don't need to be programmed at all. Unsurprisingly, throwing complexity at complexity has only made it worse, and we're no closer to letting managers cut out the software engineers.
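To make the "duller work" claim concrete, here is a minimal sketch (the helper and data below are invented for illustration, not taken from the Wired piece) of the kind of small, fully specified routine an LLM autocomplete tends to handle well:

# Illustrative only: a tiny, well-specified helper of the sort an LLM
# autocomplete can reliably fill in -- sort records by a field, keeping
# records that lack the field at the end.
def sort_records(records, key, descending=False):
    present = [r for r in records if key in r]
    missing = [r for r in records if key not in r]
    return sorted(present, key=lambda r: r[key], reverse=descending) + missing

rows = [{"name": "b", "age": 30}, {"name": "a"}, {"name": "c", "age": 25}]
print(sort_records(rows, "age"))  # age 25, then age 30, then the row with no age

Anything less mechanical than this still needs the engineering judgment the piece describes.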
"Engineer" ... You keep using this word. (Score:4, Insightful)
Most software is the opposite of engineering, which is unsurprising in an industry with greater than a 50% project failure rate by many estimates. I write code, but I also have an electronics engineering background so I know the difference.
The authors of this piece miss the same point many others miss, namely that LLMs or whatever don't have to completely replicate what a human can do in order to be extremely disruptive. They merely have to increase the efficiency of a certain number of people to accelerate the winner-take-all trends in tech.
Re:"Engineer" ... You keep using this word. (Score:5, Insightful)
So cut out the managers (Score:1)
Re: So cut out the managers (Score:1)
Re: (Score:3)
While there are good managers who make a positive contribution to the effort, there ARE enough deadweight managers out there that the AC may not have seen the good ones in action.
Re: (Score:2)
If the static is just loud enough, it is really, really hard to find a signal, ya know...
Re: (Score:2)
You should try switching from AM to FM if loud static is your problem. Use frequency to transmit data instead of amplitude.
So if you want to find a good boss, don't listen to what they say; instead, send them specific questions and see how they reply. I've seen plenty of bad bosses who can make a good first impression, but I have never seen a bad boss who can consistently give good answers to my questions once they are pushed out of their comfort zone.
Re: (Score:2)
"Write it down in a memo and put it on my desk, I'll get to it when I have time"
Sorry, they know how to deal with this kind of curveball that could expose that they know jack shit.
Re: (Score:2)
You need something to get the team focused in the same direction, and hopefully working towards the same goal. Agile doesn't cut it. Left to their own devices, developers will do whatever they want to. I've seen plenty who will go off and do bizarre and unnecessary tasks. A good manager sets the direction and keeps the team on track, while being a buffer between the people doing the work and the project and product managers who keep wanting to interfere in the process.
Re:"Engineer" ... You keep using this word. (Score:5, Interesting)
A good way to help people further understand software development jobs is to be up-front and clear about the difference between software engineering and less-structured software development.
Software engineering is a subset of software development that involves serious consideration of user needs, requirement capture, error modes during development, runtime failures, verification, validation, and similar topics.
In contrast, software development just means you write code that runs (and hopefully it passes some tests). Sometimes that is all that is needed, or at least all that management or a customer is willing to pay for. Other times, one needs to engineer software with more care.
Re: (Score:2)
Right, but as you describe it, "software engineering" doesn't involve "engineering". This is a misnomer that's stuck around far too long. What you describe is really "software management".
Re: (Score:2)
You sound like you have never done serious software or systems engineering. Would you know how to spell ISO/IEC 12207 or 15288 if one of them bit you on the face? Have you ever used ARP 4754A and the RTCA DO-178/254/278 suite of guidance documents? ISO 26262? I did not touch on engineering- or software-management topics! Contrast what I listed with what ISO 12207 calls "technical management processes" and what ISO 26262 calls "supporting processes".
If you do a good job at the things I mentioned, develop
Re: (Score:3)
What none of us need is an arrogant few...
I was once a graduate with a keen desire to build projects and learn the art, nothing wrong with that.
Exactly. Too many fucking nerds with no self-awareness about how they progressed in their careers. As if they were perfect the day they started programming.
They lack the intelligence to develop people (they think teaching someone is simply regurgitating definitions at them), so they assume people can't be developed in their careers.
Re: (Score:1, Informative)
No amount of self-congratulation from the inflated job title crowd will erase that stigma.
Re:"Engineer" ... You keep using this word. (Score:5, Insightful)
Re: (Score:1)
Re: "Engineer" ... You keep using this word. (Score:2)
Re: (Score:2)
You can estimate (not perfectly) if you first know what you're building. The problem is that often this isn't known. Sure, the high level view may be known but there will be a lot of stuff that no one thought about up front. Writing software is the easy part, the hard part comes when it needs to be integrated with other stuff (software, hardware, network), and when it needs to be tested, and so forth. The snag is that the project tends to be specified inexactly and a deadline is set before asking the te
Re: (Score:3)
Again: at least 50% of software projects completely fail
No. 50% fail in some way, such as being over budget, behind schedule, or not meeting all of the original goals.
That is not "complete" failure, and isn't so different than failure rates in other industries. How many construction projects are completed on schedule and under budget?
according to widely reported industry averages.
In other words, numbers were pulled from someone's butt.
Re: (Score:2)
No. 50% fail in some way, such as being over budget, behind schedule, or not meeting all of the original goals.
This would mean that 50% of software projects are delivered on budget, on schedule, and meet the original goals. If only that were true :-)
My 35 years of experience in the domain tells me that it is rather:
YMMV depending on the business you're in.
Re: (Score:3)
there are also plenty of electronics engineers I have met and worked with in my career who do electronics design a disservice
This rings so true. I was in embedded solutions long enough to know that at no point should I overestimate the quality of my work. I think the folks who have large egos in embedded must live in a bubble of sorts. I've found the field nothing but humbling, to say the least.
Re: (Score:2)
There are a lot of people who are the opposite - to them the end result isn't nearly as important as the process. I see some who are hung up on the "framework". Months spent drawing out UML diagrams and in-depth design docs that don't really convey the actual work that needs to be done. Engineering is not just snapping together lego pieces with a lot of blobs of glue for the places where the pieces don't fit just right.
There's a big push to be like engineering a bridge, but they fail to notice that bridg
Re: (Score:3)
Writing code and engineering software are two separate tasks. Most software developers are just code monkeys playing in an already designed system. Successful software (think any OS) is absolutely engineered and it is the same level of engineering that an electrical engineer would do in designing a power system for an automobile.
Re: (Score:1)
"Most software is the opposite of engineering
Re: (Score:3)
I deal with real time systems. You have to think low level, you have to worry about optimization, you can't just slap stuff together. Recently some people who were pre-building components in parallel were annoyed that I was avoiding both their code and the chip's Hardware Abstraction Layer (intended for fast prototyping rather than production). The answer was that it was too slow, the operation was taking several seconds, and it took me a day to just talk directly to hardware and the code was 10 times fa
Re:"Engineer" ... You keep using this word. (Score:5, Insightful)
The difference between engineering, and coding, is scale.
Any handyman can build a backyard shed, but it takes engineers to build a high-rise.
Any road grader driver can make a dirt road through a field, but it takes engineers to build a freeway.
Any coder can make a command line tool or a simple web site, but it takes engineers to build something as large as Wikipedia.
In between those extremes there is a lot of gray, but that's essentially how I see the difference.
Re: (Score:1, Insightful)
- An engineer could design a shed that had predictable performance in an earthquake, for instance.
- An engineer could specify the grade and crown appropriate to local conditions and plan a route that minimized work required. My stepfather was a heavy equipment operator, no way would I want him designing a dirt road.
- You picked a terrible example with command line tools: some of the best code around is found in Unix utilities.
Re:"Engineer" ... You keep using this word. (Score:5, Insightful)
An engineer would only be needed for your earthquake-resistant shed if there weren't already codes available that specify how to build one. Engineers develop the building codes, but a builder who follows the codes isn't an engineer.
Same is true for your road example.
For a freeway, it's necessary to have custom engineering done, because of the scale. There's no book that covers every situation a freeway will need to handle to be successful.
As for command line tools, engineering has nothing to do with "good" or "best" code. You can have an excellent shed that didn't involve any engineering. You can also have an engineered skyscraper that's a piece of junk.
I think the command line tool example is valid. These generally don't have sufficient scale to qualify as "engineered." Some possible exceptions might include 7-zip, where the compression algorithm is carefully engineered, or perhaps RabbitMQ, which is engineered to be fault-tolerant and supports clustering.
Re: (Score:2)
Yeah so have I. So where did that get us?
I'd say this definition helps clarify things: https://en.wikipedia.org/wiki/... [wikipedia.org]
The key phrase that stands out to me is that an engineer "applies the scientific method." The shed builder, or the road grader, aren't doing that. Nor is the command-line utility developer. They are all just building stuff.
Re: (Score:3)
But it's fun to be stupid!
Generally, when someone stops debating the topic at hand, and starts insulting the person they are debating, they've run out of actual logical arguments.
Re: (Score:3)
All right, great, so now I'm:
- a kindergartener
- aggressively ignorant
- won't follow a line of thought
But you're the one attacking me, instead of debating the actual line of thought.
Got it.
Re: (Score:1)
Re: (Score:2)
Agreed, so broadly characterizing command-line apps as not being engineered would be unfair.
Git, while it is command line, certainly required significant engineering when it comes to revision management and the merge process.
Re: (Score:1)
Re: (Score:2)
Other threads on this topic have led me to a bit of clarification of that boundary. Wikipedia has a good definition:
https://en.wikipedia.org/wiki/... [wikipedia.org]
So to me the key to that definition is that engineering involves applying the scientific method to a project. That certainly relates to scale, as small projects generally just require following a pre-engineered "building code." When your project is large or complex enough to require applying the scientific method to come up with the specifications, that's engin
Re: (Score:1)
Re: (Score:1)
Re: (Score:2)
Reading between the lines, I take it you believe in the lump of labour fallacy. Do you see AI's impact as a zero-sum game?
Jevons' paradox comes in here: technological advances make resource use more efficient, but lower costs drive up demand, often increasing overall consumption. This contradicts the common assumption that efficiency will reduce resource use. See also wider roads and induced traffic.
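A toy calculation (all numbers invented, assuming a simple constant-elasticity demand curve) shows the mechanism:

# Toy Jevons-paradox arithmetic with made-up numbers: if demand is elastic
# enough, halving the effective cost per unit of work raises total resource use.
def total_resource_use(cost_per_unit, efficiency, elasticity=1.5, k=1000.0):
    demand = k * cost_per_unit ** -elasticity  # units of useful work demanded
    return demand / efficiency                 # raw resource needed to supply it

before = total_resource_use(cost_per_unit=1.0, efficiency=1.0)  # ~1000
after = total_resource_use(cost_per_unit=0.5, efficiency=2.0)   # ~1414
print(before, after)  # with elasticity > 1, total use goes up, not down

Whether coding assistants sit on the elastic part of the demand curve is exactly the open question.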
Re: (Score:1)
This is my concluding line: "They merely have to increase the efficiency of a certain number of people to accelerate the winner-take-all trends in tech." Emphasis added.
We're seeing the effects of this trend in the "gig economy", delayed household formation, and other widely-observed disruptions in society. Some people are doing extremely well, sure, but this is not a rising tide that lifts all boats equally.
Re: (Score:2)
I'm a software engineer who is also a real engineer (in software technology). Our education also included some electronics and electrical engineering, but was focused on software.
I agree with you that software is not like other engineering, but in a way it is. A customer gives you a problem and you need to solve it using existing tools.
You said that software projects fail often. That is mostly just because you are looking at it from the wrong angle. When you are doing a software project, it is almost a fact that specif
Work has increased (Score:3)
So far every "technology" that was supposed to make life easier and less work has only increased work. TV was supposed to eliminate acting jobs by eliminating local theater. Phones were supposed to eliminate delivery jobs. Cell phones were supposed to make life convenient; instead it means your boss can call you on Sunday and make you work from home. Factory automation was supposed to eliminate factory jobs. Instead we have more humans (worldwide) working in factories than at any time in history. Sure, we outsourced it to China, but the fact is there are humans doing jobs making crap.
Re: (Score:2)
Re: (Score:2)
I don't know; a lot of people used to spend 12 hours a day tilling fields; now a lot of people spend 12 hours a day watching streaming videos. That seems like a decrease in work to me.
But will PHBs understand this? (Score:5, Insightful)
Every boss I've ever had wanted me to set things up so that non-developers could do some or all of the work developers were doing. Especially when it didn't make sense.
Re: (Score:2)
Nope. They won't. But that's all right, we'll replace them with ChatGPT sooner or later...
Re: (Score:3)
Hell, that's almost an optimal use case.
The number of roles where 'baffling with bullshit' was the primary skillset...
Wait a tick, how is Sales doing these days!
Re: (Score:2)
We sold that.
Re: (Score:2)
Classic Dogbert: “Next week a doctor with a flashlight will show us where sales projections come from.”
Re: (Score:2)
Creating complex systems is hard (Score:4, Funny)
It doesn't matter if it's described in a programming language, legal jargon, specifications, or a text prompt. The description of the system must capture ALL of the complexity, including special cases, edge cases, and unexpected asynchronous events.
My hope is that some future AI will help us manage complexity and deal with the rare or odd cases
Re:Creating complex systems is hard (Score:5, Insightful)
...My hope is that some future AI will help us manage complexity and deal with the rare or odd cases[.]
LLMs would be useful to analyze code and find those corner cases that I missed (though I would be surprised if they were capable of even that little bit of analytical success), but my experience with trying to get one to write code for me has been an exercise in frustration. As so many others have pointed out, it will confidently present an answer that is just plain wrong. By the time I have revised my instructions enough to get the generated code to look as error-free as possible (leaving me to manually correct the remaining errors), I probably could have written the code from scratch more quickly.
To put it bluntly, AI-generated code sucks. Any company that would fire its developers and replace them with LLMs is destined for a quick dissolution.
Re: (Score:3)
Indeed. All LLMs can do is make an expert faster. Under some circumstances. Unless crappy results are all you need. Incidentally, AI is unsuitable for finding missed "corner cases". Doing that is one of the hardest deduction problems and cannot generally be solved by machines. Human experts can do it reasonably well if they have a very keen sense of what they understand and what they do not understand.
Re: (Score:1)
Re: (Score:2)
That strategy works great for human languages because that's the way we are. We have multi-billion-dollar industries, PR & marketing, based solely on that characteristic of human interaction & communication.
Re: (Score:2)
Re: (Score:2)
Indeed. Anybody who can just turn complete specs into simple business logic will have their job threatened. But how many coders are actually that limited?
Re: (Score:3)
It can't be that many, since I have never seen business logic so well defined that it can actually be coded without analysis, Q&A with the client, and finally just making a few best guesses when the client is unable to articulate further.
Re: (Score:2)
Exactly.
Re:Creating complex systems is hard (Score:5, Insightful)
Well, in this case, no job is being threatened.
Have you EVER gotten complete specs?
Re: (Score:2)
For something non-trivial? Obviously not.
Re: (Score:2)
Re: (Score:2)
Then again, performance is no longer mission critical. For most applications, whether they run a second longer or not is moot and nothing worth throwing talent at. There are of course a select few where performance is still key because costs go up exponentially with a linear increase of runtime, but these cases are few and far between. It usually is the other way around, you could get a linear increase in performance for an exponential increase in cost, and for most applications, this simply isn't warranted
Re: (Score:3)
That's exactly right, but it's not enough to just have that data -- it has to be correct, which it never is. The biggest challenge is not in writing the code; it's in understanding the problem space, and the expected outcomes, and what expectations are reasonable, re
If money is on the line, ChatGPT won't cut it. (Score:5, Insightful)
Take Hibernate. Nearly my entire career has been helping companies put their DB code in JPA... and then take out several pieces and put them into native SQL when JPA performs poorly. If you ever turn on show SQL, you see it generates awful SQL that works in all scenarios... just poorly. If you knew you were going to save 3 object rows and 12 child rows, you'd write 2 insert statements because you know the domain. You know the object being input and the bounds for the child objects and the specific type. ORMs, just like generative AI or machine learning tools, don't know this. They have to write 15 statements because they don't know the real relationship between these rows or constraints. All the AI-generated code was either broken or incredibly inefficient.
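A rough sketch of that point (table names, columns, and sqlite3 are stand-ins invented here, not the poster's actual stack): knowing the shape of the data up front lets you batch the writes that a generic ORM, lacking that knowledge, issues row by row.

# Illustrative only: "3 object rows and 12 child rows" written with two batched
# statements, because the relationship between the tables is known up front.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT);
    CREATE TABLE order_items (id INTEGER PRIMARY KEY, order_id INTEGER, sku TEXT);
""")

orders = [(1, "alice"), (2, "bob"), (3, "carol")]
items = [(None, order_id, f"sku-{n}") for order_id, _ in orders for n in range(4)]

conn.executemany("INSERT INTO orders (id, customer) VALUES (?, ?)", orders)
conn.executemany("INSERT INTO order_items (id, order_id, sku) VALUES (?, ?, ?)", items)
conn.commit()

# A generic ORM that cannot assume the relationship instead issues one INSERT
# per entity -- 15 round trips for the same 15 rows.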
A well run business doesn't write a TON of code. They write a moderate amount and keep editing it frequently. They tune their codebase based on evolving requirements of their business...it's very bespoke and custom.
However, these are my explanations as to why. I am more confident that I am correct than I am that I fully understand why. Why am I so certain I am correct? AI has been funded by the richest companies in human history sitting on mountains of cash, getting the top brains in the world. They have the talent, experience, funding, manpower, and most importantly the incentive to make it happen. They hire armies of expensive engineers and never seem to have enough talent. If it could be done, they wouldn't be selling you tools to do it, they'd be reorganizing their own companies to use it. Google wouldn't sell you tools, but a ChatGPT prompt to generate a working Android app. Microsoft would be selling Generative AI game builders. You'd tell it "I want a shooter set in the 1930s with Aliens and a rainbow palette set in the South Pole where my character does parkour on icebergs" and the next day, you'd have a working game.
Or...they'd use Generative AI in their runtimes to increase efficiency. That's an even more straightforward test of their abilities. If ChatGPT was so good, why not incorporate it in the C# CLR to make C# consistently faster than nearly all C code written out there? Why not have ChatGPT translate C# to assembly code customized for your specific processor? If you could increase efficiency with it, they'd save a ton of money in Azure electricity costs as well as be able to charge a massive premium for this more efficient runtime.
We'll know Generative AI can generate code when it's used to build useful applications. The press release won't be telling us about the tools, but demonstrating impressive applications that we could never have imagined a human being writing.
Re: (Score:2)
It's good for bullshitting (actual language, not code), if no one is going to call you out on it.
So I've seen management types using it to write apologies and status-page postmortem promises that will never actually be carried out and were only posted to mollify a user base after some server incident.
Re: (Score:2)
If you're paying someone today to write code, or even documentation or advertising copy, it has to be correct, not close to correct. Today's Generative AI has no clue what it's doing and no clue if it's correct. It is fancy autocomplete with a super-high carbon footprint. Algorithms are always superior to machine learning. Algorithmic code generation is as old as day 1.
[...]
Take Hibernate. Nearly my entire career has been helping companies put their DB code in JPA... and then take out several pieces and put them into native SQL when JPA performs poorly. If you ever turn on show SQL, you see it generates awful SQL that works in all scenarios... just poorly. If you knew you were going to save 3 object rows and 12 child rows, you'd write 2 insert statements because you know the domain. You know the object being input and the bounds for the child objects and the specific type. ORMs, just like generative AI or machine learning tools, don't know this. They have to write 15 statements because they don't know the real relationship between these rows or constraints. All the AI-generated code was either broken or incredibly inefficient.
So you're writing in assembly then? Because everything else is algorithmically generated code, even more so for "native SQL". Not only is the DB itself a big abstraction you had nothing to do with, but the DB is doing a bunch of optimizations.
The difference between languages like C & SQL and what we usually think of as algorithmically generated code is whether they're complete enough that you never need to work at a lower level. Either way, traditional generative code is typically a translation from a high
ORM + Time Travel is impossible (Score:2)
Btw, comparing SQL generated by JPA to AI-generated code is really, really, wrong (for reasons I'll mention at the end).
Java to SQL is a far easier application than generative AI is attempting to solve. No machine can create a good solution unless it knows the intent. This is a major reason why ORM gives slow results and Ruby never really took off. If writing code is running, generating SQL from objects is crawling.
I don't think it's out of the realm of machine learning to pick optimal algorithms based on pattern matching. Most runtimes, from DBA optimizers to JVMs and CLRs and JavaScript engines do that now via diffe
Re: (Score:2)
Btw, comparing SQL generated by JPA to AI-generated code is really, really, wrong (for reasons I'll mention at the end).
Java to SQL is a far easier application than generative AI is attempting to solve. No machine can create a good solution unless it knows the intent. This is a major reason why ORM gives slow results and Ruby never really took off. If writing code is running, generating SQL from objects is crawling.
It's a different problem. ORM is a higher level language like C or Python. It translates into a lower level language and that translation MUST be correct. It allows for some really fancy optimizations since you can put a ton of work into getting a 0.1% improvement and it pays off. But it's also limited since you don't know intent.
For C the balance is in favour of the compiler; it's really hard to write assembly faster than a modern compiler.
For Python, you can usually do better writing in C but it may not be
Re:If money is on the line, ChatGPT won't cut it. (Score:4, Insightful)
If you're paying someone today to write code, or even documentation or advertising copy, it has to be correct, not close to correct. Today's Generative AI has no clue what it's doing and no clue if it's correct. It is fancy autocomplete with a super-high carbon footprint.
What scares me is what LLMs are able to do without the many benefits people with brains take for granted.
LLMs currently cannot learn from experience, learn how to learn, ground their knowledge in reality, or impose consonance. They cannot iterate to improve their designs, leverage support tools, or even think in ways not rigidly fixed by the model's execution. Many of these limitations are likely to be rather fleeting given the pace of innovation and known active areas of research.
When you ask a present-day LLM to spit out code to do much of anything, it is akin to asking a human to spit out code off the top of their head without thinking much about it. Oh shit, the computer didn't consider some corner case, it fucked up, it forgot to do... well, no shit it's not going to be perfect.
Humans are also incapable of writing "correct" code. To the extent correctness is possible at all it is only made so by imposition of rigorous process and iteration to inherently fallible minds.
My personal prediction is that in the not-so-distant future we will see AI driving proof assistants, bringing more reliable methods of programming to the mainstream.
Algorithms are always superior to machine learning.
Machine learning "algorithms" figured out how to find lowest-energy conformations of proteins; they found more efficient ways to multiply matrices of certain sizes than humans are known to have ever discovered. They succeeded where humans and their "algorithms" failed, despite considerable, persistent efforts by smart humans spanning decades.
However, these are my explanations as to why. I am more confident that I am correct than I am that I fully understand why. Why am I so certain I am correct? AI has been funded by the richest companies in human history sitting on mountains of cash, getting the top brains in the world. They have the talent, experience, funding, manpower, and most importantly the incentive to make it happen. They hire armies of expensive engineers and never seem to have enough talent.
If it could be done, they wouldn't be selling you tools to do it, they'd be reorganizing their own companies to use it. Google wouldn't sell you tools, but a ChatGPT prompt to generate a working Android app. Microsoft would be selling Generative AI game builders. You'd tell it "I want a shooter set in the 1930s with Aliens and a rainbow palette set in the South Pole where my character does parkour on icebergs" and the next day, you'd have a working game.
GPT-4, the first LLM with any kind of generally useful thinking capability, was released just half a year ago. The algorithms and enabling hardware are in their infancy. Expecting end-game capabilities out of the gate isn't reasonable, and drawing conclusions about what "could be done" at this point is premature.
Or...they'd use Generative AI in their runtimes to increase efficiency. That's an even more straightforward test of their abilities. If ChatGPT was so good, why not incorporate it in the C# CLR to make C# consistently faster than nearly all C code written out there?
Or better still apply it to LLVM and optimize all the languages.
https://arxiv.org/pdf/2309.070... [arxiv.org]
Re: (Score:2)
Today's Generative AI has no clue what it's doing and no clue if it's correct. It is fancy autocomplete
That's why I call it Autocomplete Insanity.
Re: (Score:3)
Algorithms are always superior to machine learning.
But machine learning is done with algorithms. It's algorithms all the way down.
Re: (Score:2)
I came out of retirement to work on AI.. at a name brand place.
The transition you describe is happening. Mid-2024 most of the tools start rolling out; by the end of 2024 it will be clear things have dramatically changed. Not just code, but management.
What we call software development is going away by the end of 2025, and it will be replaced by something else, but it isn't going to look like what it does now.
The problem with his analysis (Score:5, Insightful)
He's thinking like the guy who actually writes the code - but that's not the person who makes these decisions.
I fully expect that this (currently hypothetical) scenario is going to play itself out, all over the place. Managers are gonna buy into some flim-flam artist's promises of great code with few developers, laying off tons of them. After several months, it's going to be obvious it's not working. But the complication is the manager won't want to admit to such a huge, stupid error - so any "hiring back" of developers is going to start as a trickle at best.
Re: The problem with his analysis (Score:2)
Then the business will fail, and the businesses who know how to hire smart managers will survive.
What's wrong with that? Learning the hard way is better than not learning at all.
If you've been using ChatGPT, you know this (Score:3)
ChatGPT can give you good code suggestions, when you are very explicit about what you need. This kind of "assistant" might replace some low-level outsource developers, but certainly not anyone who has to "think."
Re: (Score:2)
I throw code at ChatGPT and it tells me what's wrong with it. It doesn't understand that I have to fit a module into a stream of existing code. I give it initial conditions and instruct it to provide strict outputs, but it's getting its "intelligence" from some obscure articles and textbooks.
As you say, it doesn't understand. That's not very intelligent.
Not coming for "Your" job (Score:2)
History is likely pretty useful here; remember when writing HTML was "hard" in 2000? Those jobs that were lost are the same class of jobs that will be lost to generative AI.
My miserable level of coding capability can easily be replaced by AI... but I only program to solve (via brute force) a problem I can't figure out any other way, not for any long-life production systems. That is useful... but not transformative.
Re: (Score:2)
History is likely pretty useful here; remember when writing HTML was "hard" in 2000? Those jobs that were lost are the same class of jobs that will be lost to generative AI.
HTML is harder than it used to be, because now you have to know how to write CSS too. It's more powerful yes, but there's so much more to know...
Re: (Score:2)
It sort of is, but for junior programmers. (Score:3)
Rarely does it do anything I couldn't do, and often it doesn't even do it as well as I could. But it speeds my work right along, doing the boring for loops, etc.
But where it really kicks some ass is in the super drudge work. Things like cooking up unit tests, and putting asserts everywhere. Making sure every conditional is caught, etc.
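Concretely, that drudge work looks roughly like this (the function and tests below are invented for illustration): a trivial routine plus the mechanical edge-case tests an assistant can crank out in bulk.

# Hypothetical example of assistant-generated drudge work: a small function
# with an assert on its precondition, and tests covering every branch.
def clamp(value, low, high):
    assert low <= high, "low must not exceed high"
    return max(low, min(high, value))

def test_clamp_within_range():
    assert clamp(5, 0, 10) == 5

def test_clamp_below_range():
    assert clamp(-3, 0, 10) == 0

def test_clamp_above_range():
    assert clamp(42, 0, 10) == 10

def test_clamp_on_boundaries():
    assert clamp(0, 0, 10) == 0
    assert clamp(10, 0, 10) == 10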
Some of these things are exactly what junior programmers are assigned, and where they learn. Paired programming is another huge opportunity for junior programmers to learn. Except I don't want to work with a junior programmer ever again. I can crap out unit tests, integration tests, and with these tools doing the drudge work, I can stay hyper-focused on the hard stuff.
Before this, a junior programmer could be helpful and occasionally had some cool trick or fact I could learn from. But now they are almost pure productivity-sapping distractions.
Another group of programmers are those rote learning algo fools. I can get these AI tools to give me just about any answer I need where it twists graph theory into knots. These people were mostly useless to begin with, but now are officially worse than useless.
And this is exactly where I see a big cadre of programmers getting shot. Junior programmers who will now largely go mentorless, and those rote learning algo turds who used to get jobs at FAANGS because some other rote learning fool convinced everyone that rote learning was good.
I asked ChatGPT what these people should do and it said, "They should go back to their spelling bees.... nerds."
Tools (Score:1)
No worries (Score:2)
I've been learning a new framework. ChatGPT has been great at reminding me of simple things, and showing me how other simple things can be done. The combination of explanation and example code it produces is really good - better than any single resource I can find via search.
However, as soon as you start asking more complex questions, it gets unreliable. It will confidently present and explain code that does not - indeed, cannot - work. We don't even need to think about genuinely difficult questions based
It's the same things in all industries... (Score:1)
ChatGPT and OpenSCAD (Score:2)
The line in question:
Firstly, as OpenSCAD is metric by default, there was no scaling set (by defining an inch as 25.4).
ChatGPT failed to come up with the definition itself, so that's 1/16 of a millimetre, not an inch.
Then there's the comment that conflicts with the code. The diameter of the result of h
Well Duh (Score:2)
Recurring theme (Score:2)
Chat GPT can't fill my TPS report (Score:2)
Chat GPT can't fill my TPS report.
Will my PHB understand how intricate the TPS report is? Too intricate for ChatGPT to figure out.
I have 5000 TPS reports done and ChatGPT can't handle that size codebase.
Will ChatGPT always fill my TPS report perfectly? That line 14 sec 32 can mean a lot of things and can be quite complicated. It is my job and I spend days making sure that the TPS report is filled perfectly without any mistakes.
I define my life through TPS reports. Let me find more reason. Oh oh. That one ti
Re: (Score:2)
TPS reports are the things ChatGPT does best. Not that it matters, since nobody actually reads them anyway.
"AI" (Score:2)
So what you're telling me is that ChatGPT wasn't the be-all and end-all of AI, and it isn't intelligent, and it won't be the paradigm-breaking, world-changing AI the hype proposed, and all those thousands of companies jumping onto it as "the next big thing" don't actually understand what it can and cannot do, and it's in fact unreliable and often wrong?
Gosh... just like every other single AI "revolution" since... what? The 60's?
I have the solution: Just throw more training at it, more processing p
Binary? (Score:2)
Duh (Score:2)
Of course ChatGPT isn't coming for my job. Instead, companies built around ChatGPT (or similar) will be coming for my employer and my job will simply disappear during the next economic shake-up.
Disruption hardly ever happens on the job-level. Also, it usually won't be predicted at the analyst level, which is proven by things like this:
More reasonable suggestions show that large language models (LLMs) can replace some of the duller work of engineering.
Reality is that there is almost no dull work left in proper software development. Any dull work left is due to legacy, stupidity, stubbornness or lack of access to proper too
Deja Vu all over again (Score:2)
wait so (Score:2)
So LLMs won't take our jobs, but "large language models (LLMs) can replace some of the duller work of engineering"? Doesn't this mean that some software engineering work will be lost to LLMs?
very disappointed in supposed AI (Score:1)
I asked ChatGPT to create a data structure in Python that can hold the data described in Table XX.X in the document at the link www.XXX.XXX/XXX.pdf. About the only useful thing I got out of it was the name of the PDF document. At least I know it can read a PDF. The rest I might as well have typed "sample python data structure" into Google. It completely ignored the data in the table and just spat out some random data struct that has no connection with the described data.
This is the kind of simple t