Analyst Mocks the Idea That It's 'The End of Programming' Again (zdnet.com) 97
January's Communications of the ACM includes an essay predicting "the end of programming," in an AI-powered future where "programming will be obsolete."
But IT analyst and ZDNet contributor Joe McKendrick remains skeptical, judging by a new essay sardonically titled "It's the end of programming as we know it — again." Over the past few decades, various movements, paradigms, or technology surges — whatever you want to call them — have roiled the software world, promising either to hand a lot of programming grunt work to end users, or to automate more of the process. CASE tools, 4GL, object-oriented programming, service-oriented architecture, microservices, cloud services, Platform as a Service, serverless computing, low-code, and no-code have all theoretically taken the onerous burdens out of software development. And, potentially, threatened the job security of developers.
Yet, here we are. Software developers are busier than ever, with demand for skills only increasing.
"I remember when the cloud first started becoming popular and companies were migrating to Office 365, everyone was saying that IT Pros will soon have no job," says Vlad Catrinescu, author at Pluralsight. "Guess what — we're still here and busier than ever."
The question is how developers' jobs will ultimately evolve. There is the possibility that artificial intelligence, applied to application development and maintenance, may finally make low-level coding a thing of the past.... Catrinescu believes that the emerging generation of automated or low-code development solutions actually "empowers IT professionals and developers to work on more challenging applications. IT departments can focus on enterprise applications and building complicated apps and automations that will add a lot of value to the enterprise."
Even the man predicting "the end of programming" in an AI-powered future envisions new technology that "potentially opens up computing to almost anyone" (in ACM's video interview). But in ZDNet's article, Jared Ficklin, chief creative technologist and co-founder of argodesign, goes even further, predicting the possibility of real-time computing.
"You could imagine asking Alexa to make you an app to help organize your kitchen. AI would recognize the features, pick the correct patterns and in real time, over the air deliver an application to your mobile phone or maybe into your wearable mobile computer."
Nope. Not the End of Programming. (Score:5, Interesting)
Basically, NO. AI will be writing buggy bullshit for a looooong time. It takes a real person with some real-world experience to troubleshoot and creatively solve real-world problems.
Buckle up for a torrent of really shitty software. Yes, even worse than now.
Re:Nope. Not the End of Programming. (Score:4, Insightful)
The GPT thing is very good at making text that looks like what you asked it to.
Which makes it great at making code that seems correct but isn't.
Re:Nope. Not the End of Programming. (Score:4, Insightful)
The GPT thing is very good at making text that looks like what you asked it to.
Which makes it great at making code that seems correct but isn't.
And that is exactly the point in a nutshell. This thing can emulate style and make things look or sound great. It cannot do the details, though, and in engineering, details are critical.
Re: (Score:2)
Well, programming isn't engineering (that's an insult to real engineers) but you're otherwise correct. AI generating code is a parlor trick, not a serious technology.
Re: Nope. Not the End of Programming. (Score:2)
Re:Nope. Not the End of Programming. (Score:5, Insightful)
Your analogy about raising a child is apt, but not in the way you intended: unlike so-called 'AI', a child is capable of cognition, whereas so-called 'AI' is by definition entirely incapable of 'thinking/reasoning'; a child can go beyond what they've been taught, while the machine lacks the ability to do that.
All this 'AI' crap is the most over-hyped garbage I've seen my entire life.
Re: (Score:2)
Mycroft seems real enough to me:
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
Indeed. And that stupidity is _old_. For example, Marvin "the Idiot" Minsky claimed that as soon as a computer has more transistors than a human brain has brain cells, it will be more intelligent. Completely clueless drivel, of course (neuroscience struggles to completely model even a single human-complexity brain cell, and it uses a lot more than one transistor in the attempt), but many people believed it because they cannot fact-check and it came from some "authority".
"The Moon Is A Harsh Mistr
Re: Nope. Not the End of Programming. (Score:3)
Re: Nope. Not the End of Programming. (Score:4, Funny)
Re: Nope. Not the End of Programming. (Score:4, Funny)
You bet I could! I'm not a bad programmer myself!
We don't have to sit here and listen to this...
Re: (Score:2)
Re: (Score:2)
I think they will just be tools that you learn how to use that accelerate some parts of the job. I have done some experimenting with these tools and they did a good job of writing unit tests and some basic documentation. The unit tests were not 100% correct but they were about 90% correct and saved a lot of time.
I don't think I would give them to a novice programmer though. What I have seen is they are not experienced enough yet to handle the kinds of errors these things create but for an expert I think th
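To make that concrete, here is a minimal sketch (in Swift/XCTest, with an invented helper normalizeZip; none of this is from the commenter's project) of the short, mechanical tests these tools tend to get mostly right and a human can quickly verify or correct:

    import XCTest

    // Hypothetical function under test, invented for this sketch.
    func normalizeZip(_ raw: String) -> String? {
        let digits = raw.filter(\.isNumber)
        return digits.count >= 5 ? String(digits.prefix(5)) : nil
    }

    // The kind of unit tests an assistant produces well: short and mechanical,
    // easy for a reviewer to verify, and cheap to fix when one assertion is off.
    final class ZipTests: XCTestCase {
        func testPlainZip()    { XCTAssertEqual(normalizeZip("98052"), "98052") }
        func testZipPlusFour() { XCTAssertEqual(normalizeZip("98052-1234"), "98052") }
        func testTooShort()    { XCTAssertNil(normalizeZip("1234")) }
    }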
He's experiencing AI denial (Score:2)
and it is also what the artists said https://www.genolve.com/design... [genolve.com]
Re: (Score:3, Informative)
The guy who wrote the original article is selling AI. If people believe him he stands to make more money, therefore his opinion doesn't count.
Re: (Score:2)
The guy who wrote the original article is selling AI. If people believe him he stands to make more money, therefore his opinion doesn't count.
"Ad hominem [wikipedia.org] (Latin for 'to the person'), short for argumentum ad hominem (Latin for 'argument to the person'), refers to several types of arguments, most of which are fallacious.
Typically, this term refers to a rhetorical strategy where the speaker attacks the character, motive, or some other attribute of the person making an argument rather than addressing the substance of the argument itself. The most common form of ad hominem is "A makes a claim x, B asserts that A holds a property that is unwelcome, and
Re: (Score:2)
Ad hominem is trying to discredit an opinion by attacking its holder SPECIFICALLY. In this case it is a general rule: any claim X made by any A who stands to gain something from people accepting it has no standing.
The reverse is also true: a claim Y made by a B who stands to LOSE from people accepting it needs to be taken seriously.
Re: He's experiencing AI denial (Score:3)
"Ad hominem (Latin for 'to the person'), short for argumentum ad hominem (Latin for 'argument to the person'), refers to several types of arguments, most of which are fallacious.
A conflict of interest (COI) [wikipedia.org] is a situation in which a person or organization is involved in multiple interests, financial or otherwise, and serving one interest could involve working against another. Typically, this relates to situations in which the personal interest of an individual or organization might adversely affect a duty owed to make decisions for the benefit of a third party.
Pound sand.
Re: He's experiencing AI denial (Score:2)
Re: (Score:2)
One problem with questions like these is:
1) Even though we probably aren't much closer to AI doing significant programming tasks
2) Almost no one will be able to accurately identify when we are close to AI doing significant programming tasks
I don't see anything in recent OpenAI or other similar technologies which leads me to believe programmers are at risk of being disrupted by AI, but I don't really think I'll know it when I see it.
Re: (Score:1)
It's important to point out: we are for the most part not asking the AI how to do something, we are asking it to do something for us. In the end most of us won't care if it does a good job or not. We just want things to turn on when we press a button.
Figuring out what to do is most of the work (Score:5, Interesting)
Look at the mess Southwest has. They can't track their pilots and flight attendants because they have a bunch of undocumented scheduling code running on different machines that is so convoluted that they can't integrate a new tracking system into it. AI won't help with that.
Put it another way: if you knew exactly what you wanted your code to do, it would be a high school project or something you could give a co-op. The fact that many of us can bill over $100/hr shows that we aren't hired for our programming skills; it's because we can figure out what needs to be done.
Re: Figuring out what to do is most of the work (Score:3)
Re: (Score:2)
Look at the mess Southwest has. They can't track their pilots and flight attendants because they have a bunch of undocumented scheduling code running on different machines that is so convoluted that they can't integrate a new tracking system into it. AI won't help with that.
I've heard they have to reboot their scheduling system every night to keep it functioning. They've been accumulating IT debt for decades at this point.
Re:Figuring out what to do is most of the work (Score:4, Interesting)
AI will be writing "Hello World" and impressing managers everywhere, so like I said, get ready for really, really bad software. I'll go further: it will take someone getting killed, or a power plant blowing up, or some disaster, before managers, politicians, and other idiots realize this isn't going to work the way it does in Star Trek.
Re: Figuring out what to do is most of the work (Score:2)
Re: (Score:2)
Not yet. But the AI image and text stuff do give me hope that, in fact, the "holoprogram from a few sentences" may one day be possible.
The problem with the current batch of AI generators is that they're no more than a statistical compilation of everything they've seen, grouped by subject. It follows that, from that summary, you can only retrieve content that it has seen elsewhere.
Surely it can mix and match that content in new ways if you know how to ask carefully for the correct subjects; but it can't be used to solve problems that it has not seen before. It does not have a component of 'creativity' in the sense of building new code to so
Re: (Score:3)
I wrote an article about my experiments with ChatGPT where I asked it to build a SwiftUI form with name, address, city, state and zip fields. It did. I asked it to move the fields to a view model. It did. I asked it to add some validation and it did. I asked it to create a struct from the data, populated from the VM. It did. I asked it to add a function to populate the VM from the struct. It did.
And it did all of that in less than two minutes.
I wouldn't ask it to build an entire app from scratch. But it's v
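For readers who haven't tried a session like that, the result looks roughly like the sketch below; the type names, fields, and validation rule are invented for illustration and are not ChatGPT's actual output:

    import SwiftUI

    // Hypothetical data struct the view model maps to and from.
    struct Contact {
        var name = "", address = "", city = "", state = "", zip = ""
    }

    // View model holding the form fields, with trivial illustrative validation.
    final class ContactViewModel: ObservableObject {
        @Published var name = ""
        @Published var address = ""
        @Published var city = ""
        @Published var state = ""
        @Published var zip = ""

        var isValid: Bool {
            !name.isEmpty && !city.isEmpty && zip.count == 5 && zip.allSatisfy(\.isNumber)
        }

        // Struct populated from the view model...
        func makeContact() -> Contact {
            Contact(name: name, address: address, city: city, state: state, zip: zip)
        }

        // ...and the view model populated from the struct.
        func populate(from contact: Contact) {
            name = contact.name
            address = contact.address
            city = contact.city
            state = contact.state
            zip = contact.zip
        }
    }

    // The form itself: name, address, city, state, and zip fields.
    struct ContactForm: View {
        @StateObject private var model = ContactViewModel()

        var body: some View {
            Form {
                TextField("Name", text: $model.name)
                TextField("Address", text: $model.address)
                TextField("City", text: $model.city)
                TextField("State", text: $model.state)
                TextField("ZIP", text: $model.zip)
                Button("Save") { _ = model.makeContact() }
                    .disabled(!model.isValid)
            }
        }
    }

Boilerplate like this is exactly where assistants shine; the judgment calls (what to validate, where the data goes) are still the human's.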
Re: Figuring out what to do is most of the work (Score:2)
I can see dedicated IDEs coming that do this sort of thing automatically. At which point developers will be free to concentrate on other problems.
This is such an old dream.
Boilerplate code seen as tedious waste of time. Programmer makes tool that generates boilerplate code automatically. Programmer invents new framework with no boilerplate code. New methods of writing boilerplate code are developed. Signal repeats.
Re: (Score:2)
New methods of writing boilerplate code are developed.
What is the motivation that causes people to re-introduce boilerplate code?
Re: (Score:2)
Re: (Score:2)
If you have an API that returns JSON and you need to convert that JSON into a struct in the language of your choice, that's largely a tedious, handwritten boilerplate function. Mutating initializers and functions. Marshaling that data to and from a set of form fields is largely just writing more boilerplate code. Creating mocks and creating the API request routines to get and put that data are mostly boilerplate.
Fetching lists from APIs. Presenting selection lists. Validation. None of it is rocket science a
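A minimal sketch of the boilerplate in question, using Swift's Codable and an invented Customer type and endpoint (illustration only, not any particular API):

    import Foundation

    // Illustrative only: the type, its fields, and the endpoint are made up.
    struct Customer: Codable {
        var name: String
        var city: String
        var zip: String
    }

    // The tedious layer: JSON in, struct out, and back again.
    func decodeCustomer(from data: Data) throws -> Customer {
        try JSONDecoder().decode(Customer.self, from: data)
    }

    func encode(_ customer: Customer) throws -> Data {
        try JSONEncoder().encode(customer)
    }

    // A thin request routine of the kind the comment calls boilerplate.
    func fetchCustomers(from url: URL) async throws -> [Customer] {
        let (data, _) = try await URLSession.shared.data(from: url)
        return try JSONDecoder().decode([Customer].self, from: data)
    }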
Re: (Score:2)
Not just someone. A few hundred must die first. And maybe a few tech journalists too. Look at the whole self-driving car thing: a few hundred have died already. More will die. And still there are many people who believe that they can stop paying attention behind the wheel of a so called self driving car.
Re: (Score:3)
I just finished a 900 hour contract and wrote 20,000 lines of code. I spent less than 200 hours writing the code. I spent 40 hours in meetings, 100 hours debugging, 100 hours writing documentation
Where do you work? :) Where I work, the 900 hours translate to this: 500 in meetings, 100 writing code, 200 keeping the project management system updated, 100 testing (on a good project). Testing usually means only doing enough to tick off the boxes in a form.
The contract was fixing technical debt (Score:2)
Don't take contracts to fix technical debt. You will get zero support, you are the lowest of the l
Re: (Score:2)
Put it another way: if you knew exactly what you wanted your code to do, it would be a high school project or something you could give a co-op.
Respectfully, you and other professional programmers underestimate how very rare it is to have the capacity to write functioning code. I teach 2nd-semester (community) college CS majors, and the majority of them can't write a for loop, declare an array, or understand scoping. Even if they can do that, it's likely they can't read a specification written in clear English.
Re: (Score:2)
Exactly this.
Programming is translating human intent into a format the computer can understand—code. People see an AI generating code and say, “It’s coming for your jobs, programmers”, but for that to be the case those people would need to be able to perfectly express their intent to a computer, or at least validate that their intent had been successfully expressed...at which point they themselves would be programmers, just with a natural language rather than a formal one.
What they
Re: (Score:2)
Way back in the before time, we had Junior Programmers, the greenest of which were called "code monkeys". They would do the tedious actual writing of code to create functions specified by Senior Programmers. The job required a high school diploma and good results on an aptitude test. Meanwhile, the Senior programmers educated them so they could become Senior Programmers after a few years.
The Senior programmers did a lot more thinking and specifying and a lot less actual coding. They were in short supply so
Is it here? No. Is it close? Maybe. But... (Score:2)
AI isn't going to replace programmers immediately. But what it is going to do is reduce the amount of work they need to do, which means either more programming can be done, or there will be fewer programming jobs.
The AI commonly produces bad answers... but it also commonly produces good ones. Programmers will spend more of their time writing test cases, which are a good idea anyway, and some of the software will be written by the computer.
Writing good test cases is hard, but it's already necessary.
The point
Re: (Score:2)
That will not work and cannot work. Artificial Ignorance coding will have very specific security problems in it that are not accessible to test cases because they will be too complex and testing for security is already hard for simple things and generally several (!) orders of magnitude harder than regular testing. But attackers that find these can then exploit a mass of systems with them because the same mistakes will be in a lot of different code.
Testing is very limited in what it can do and can never ass
Re: (Score:2)
Can we teach an AI pen testing?
Re: (Score:2)
Can we teach an AI pen testing?
Nope. And there is no need to. All the standard attacks are already automated. The rest requires a creative and experienced human driving things. Caveat: While I have not personally pen-tested things (well only so minimally it really does not count), I have closely worked with pen-testers and know several.
Incidentally, pen-testing is very limited in what it can do. It cannot replace an in-depth security review. It cannot replace a code review. It cannot do anything beyond very shallow things. And it is never
End of the year tradition? (Score:2)
Re: (Score:2)
I'll see if I can run with that analogy a bit further.
Your original goals or predictions can come about in a completely unexpected way. "Linux on the desktop" was sort of code-speak for "we want mass adoption". At this point, smartphones have largely replaced what a PC used to be for most people (basic digital consumption, communication, entertainment, and simple personal tasks), so the desktop really isn't even the ultimate mass-adoption target anymore. But Linux is used almost everywhere else... everyw
Good for some things, not for others (Score:2)
In the past computers have been really good at some things, really bad at others. Some of the things they were bad at, humans were good at. That's where AI is having a big impact. It lets computers be good at the things they used to be bad at but humans were good at.
That doesn't change the things computers have always been good at. If you need a program to process financial transactions or a device driver for a new GPU, you aren't going to write it by training an AI model. You need code that follows we
Re: (Score:2)
"Machine Learning" does not exist. It (and AI) are moron-speak for Statistical Modelling.
Re: (Score:2)
"Machine Learning" does not exist. It (and AI) are moron-speak for Statistical Modelling.
Sure, but in the long run, it's the morons (i.e. the general public) that decide how words are used, and therefore what those words mean.
So if the world decides that it's valid to use the word "literally" to emphasize a figurative (i.e. non-literal) point, then eventually that is what the word will mean, and its old definition will fall by the wayside. It's stupid, and there's not a lot you or I can do about it.
Similarly, if the world decides to refer to Statistical Modelling via the name "Machine Learning"
Re: (Score:2)
The "morons" in this case being the people who invented the field. "Morons" like Marvin Minsky, John McCarthy, and Claude Shannon.
SMH...
Re: (Score:2)
If you need a program [...] for a new GPU, you aren't going to write it by training an AI model. You need code that [..] produces exactly the right result every time
I agree, sure, but would you please mind letting AMD know?
Appity app app app (Score:1)
It strikes me that while the amount of grunt work necessary to make any kind of "app" has gone down somewhat over the past 30 or 40 years, the grunt work has always required the smallest portion of the developer's mental cycles, compared to the actual business logic.
At work we've got code, some of which dates back to the 80s. Aside from some references to the memory structure of the VAX or whatever it originally ran on, most of the code is generally equivalent to what one would write today. In some places t
Re: (Score:2)
Indeed. The fact of the matter is that coding is a creative act that does require understanding. Like all engineering design work. And if you look at established engineering fields (which coding is not at this time), the one part they can never get rid of is the engineer doing the thinking and the designing.
Re: (Score:2)
Programming has changed significantly over the last 40 years. Not necessarily for the better, but it has undoubtedly changed.
Automation will continue until morale improves... (Score:3)
Generalized AI is still 100+ years out. Until then it's gonna be another tool, like a wrench or an IDE, that might make the programmer more efficient, but the programmer will ultimately still have the role of taking stakeholder requirements, converting them into actual implementable solutions, and then actually implementing them.
Re:Automation will continue until morale improves. (Score:4, Interesting)
> Generalized AI is still 100+ years out.
Hard disagree. We have no definition of intelligence yet, or even a basis from which to describe it. We can measure it, but the only true measure seems to come in high-stakes games for which humans are the only viable participants. The AI developed so far is little more than a tool for a human to use, and not a competitor to a human.
The best we can do right now is measure intelligence with super primitive means such as Turing tests. There has been zero, ZERO progress on AI since the idea was conceptualized. If there were an honest speedometer on AI progress, it would be reading a flat 0 mph since the 1940s, never even blipping up a micron.
ML is interesting and useful, but has nothing to do with AI. In fact, one key sign that ML is not AI is that it is only useful in collaborative problem spaces and utterly fails in contentious ones. It's a fancy pattern recognizer, no more intelligent than a coin sorter in a vending machine.
Until we at least have some kind of theoretical or logical way to analyze what intelligence or sentience is, we can't make any extrapolation of when it can be created by us. 100 years is an arbitrary and short timeline for something which might not be possible in 1 million years.
It may simply not be possible for a given intelligence to purposefully create another intelligence even half as smart as it is, much less smarter.
Re: Automation will continue until morale improves. (Score:2)
Re: (Score:3)
We can measure it [...] The best we can do right now is measure intelligence with super primitive means such as Turing tests.
The Turing test is in no way a measure of intelligence.
ML is interesting and useful, but has nothing to do with AI.
On the contrary, ML is AI. You just want AI to refer to something completely different. The term AI covers a broad range of things that you wouldn't consider "AI", like decision trees, linear regression, or clustering.
Is the term misleading? Absolutely! But all the complaining in the world isn't going to change the meaning. That ship sailed in 1956, at a conference at Dartmouth. We can blame John McCarthy, who was more interested in questions
Re: (Score:2)
Generalized AI is still 100+ years out.
I used to think like that, too. Then, after reading article after article about the progress AI has made in the last 50 years, I realized that I was short by at least an order of magnitude.
I'm convinced that it is at LEAST 1000 years out.
Modern AI hype very closely resembles the notion from The Time Machine that steam power will enable time travel. It just isn't going to happen. We have neither the hardware nor the software to make AI anything more than glorified code completion.
Re: (Score:2)
If your job is so trivial that it can be automated, then it *should* be automated.
Factory jobs are an example. These jobs are mind-numbing, dehumanizing work. Automation is a good thing, freeing people to do more human things with their time. Yes, I realize that some people can't be, or don't want to be, retrained. Change takes time, but that doesn't mean change shouldn't happen.
Human knowledge and skill mechanized (Score:2)
One impetus for the Babbage engines was to create the navigational and mathematical tables. Skilled mathematicians would create the non-linear brackets and then semiskilled labor would compute the linear intervals. It would be
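A small sketch of that division of labor, with made-up numbers: the expensive non-linear "brackets" (here, two base-10 log table entries) are computed once by the skilled worker, and everything in between is filled in by straight-line interpolation:

    // Two precomputed "bracket" entries of a hypothetical log table.
    let table: [(x: Double, y: Double)] = [(100, 2.0000), (110, 2.0414)]

    // The semiskilled step: linear interpolation between the brackets.
    func interpolate(_ x: Double,
                     between a: (x: Double, y: Double),
                     and b: (x: Double, y: Double)) -> Double {
        a.y + (x - a.x) * (b.y - a.y) / (b.x - a.x)
    }

    let approxLog105 = interpolate(105, between: table[0], and: table[1])
    // ~2.0207, against a true value of ~2.0212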
Sure, Alexa can write software for you... (Score:1)
... but keep in mind that the "AI" Alexa provides is actually just warehouse sweatshops of real people. They sell it as AI but it's the furthest thing from it.
Re: (Score:2)
If Alexa is really powered by people (in any significant way), they aren't worth their food rations. Or it's amazing how perfectly they make mistakes that machines would make, so as to hide their real nature.
Re: (Score:1)
You mean like selling your data to organized crime? Yea, sure that's totally a mistake a machine would make. /sarcasm
Re: (Score:2)
Oh, yeah, now I understand why my garbage didn't get picked up after the snowstorm!
Well, I would too (Score:4)
But I have gotten tired of the same stupid crap being claimed again and again and again. Programming is engineering (No, I will not discuss this, if you cannot see it, then that is a limitation on your side.) and engineering is hard and cannot be automated because you need to understand what you are doing. All the stuff that could be "automated" has already been put into libraries or can be put into libraries. For the rest, it is just not possible. Artificial Ignorance is dumb as bread and can only do statistical classification, pattern matching and lookups in catalogs. It has zero clue what it is doing. It has no insight. It has no common sense. And it will remain that way for the foreseeable future, because we have absolutely nothing, not even in theory, that could do better. (Just for completeness: Go away physicalists, nobody knows whether humans are "just machines", but it very much does not look that way at this time. Take your deranged, self-denying religion someplace else.)
As to the claims of "programming going away" or "being automated", these are basically as old as programming itself. When I went to university about 30 years ago, the 5GL project that was supposed to automate programming using constraint solving had just failed resoundingly. The idea was that you specify your problem using constraints and the machine generates the code. Turns out constraint solving is too hard for machines at the complexity needed. Also turns out (as a side result) that specifying a problem using constraints is about as hard as writing code for it directly and requires more expertise and experience. Since then, these moronic claims have cropped up again and again and again. I have no idea what the defect of the people making these claims is, but it must be a serious, fundamental defect, because they will just not go away and refuse to learn from history.
Re: (Score:3)
Re: (Score:2)
Ahahaha, yeah, I forgot that classic fiasco from around 1960 (!). To be fair, I was not born yet at that time.
Re: (Score:2)
Fiasco? COBOL was wildly successful!
No, it didn't eliminate programmers, but it did allow non-experts to understand and audit code to some degree.
The world still runs on COBOL. It's easy to read, easy to write, and crazy fast and efficient. All those failed COBOL to Java projects failed for a reason. COBOL is hard to beat.
Re: (Score:2)
Programming is engineering (No, I will not discuss this, if you cannot see it, then that is a limitation on your side.)
Programming is absolutely nothing like engineering. Why are programmers always pretending to be something that they're not?
Don't worry, I'm not asking you to discuss it. There is nothing to discuss. This is simply reality vs fantasy.
Non-Coders don't understand coding (Score:5, Insightful)
Re: (Score:2)
This is a great and concise observation, should be upvoted.
Re: (Score:1)
Re: (Score:1)
ChatGPT is already at the "structure the program" stage.
I asked in rather general terms for ChatGPT to generate me a program for a rather complex UI simulation using tools I was aware of but not yet proficient in. Apparently I asked for too much, so ChatGPT didn't generate code; it generated an outline of the steps that code would need to perform (structure). I then asked for code for each of those steps. ChatGPT either provided the code or broke down the step into smaller steps. In the end it had generated
That time of year again (Score:3)
I have been programming (it used to be called that) for pay for a half century now, and I can't recall a year where it was not predicted that the job of programming was going to be automated. In just a year or two, look at all the progress we are making.
There is something about the lay understanding of technology that promotes this grail as something real. So we keep getting these predictions.
The trouble isn't the end of programming (Score:2)
I'll never understand why people think supply and demand doesn't apply to their wages.
Re: (Score:2)
How does adding bad code to a project mean you need fewer programmers?
Not yet (Score:2)
I'm not an expert in AI by any means but I feel like AI is still far away from "understanding" what it's doing or working with and perhaps understanding is the most important part of trying to translate something into code.
As to whether "understanding" requires consciousness I've no idea. I hate the term because it's so loosely defined and real wet AI cannot quite understand it and whether it's necessary at all to be truly intelligent. E.g. many biologists claim that even simple forms of life such as gras
Re: (Score:1)
AI == Coder. A monkey that knows not what it is copying and pasting, merely that it is a statistically significant snippet.
Copying and pasting a bunch of "Statistically Significant Snippets" does not a working program make.
There exist exactly zero working programs created by this method. They have all been massive and spectacular failures.
Jevons paradox (Score:3)
The Jevons paradox [wikipedia.org] shows how by decreasing the amount of a resource needed to make a product, you can actually increase the amount of resource that is used. As the product becomes cheaper (because less of the resource is needed), demand for the product rises enough to offset the smaller amount of resource per product.
It's entirely possible that this "paradox" is relevant here, with programmer labor as the resource in question. The cloud, AI, and other "technology surges" have made programmers more efficient - allowing them to produce more product per unit labor, and thus to sell the product for a lower price (frequently free these days!). This has in turn increased demand for software - perhaps enough to entirely offset the lesser number of programmers needed to make a particular unit of software.
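A stylized version of that argument, with notation invented here (L is programmer labor per unit of software, Q(L) the quantity demanded once the saving is passed through to buyers, T the total labor used):

    T = L \cdot Q(L)

    \frac{L}{2} \cdot Q\!\left(\tfrac{L}{2}\right) > L \cdot Q(L)
    \iff Q\!\left(\tfrac{L}{2}\right) > 2\, Q(L)

That is, halving the labor needed per unit of software still increases total demand for programmers whenever demand for software more than doubles in response, which is roughly what has happened with each of the "technology surges" listed in the summary.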
Re: (Score:2)
I don't think so. If Artificial Ignorance was actually capable of generating functional code with some reliability, then yes. But it is not and it will not be anytime soon because that requires some actual understanding and AI does not have that and may never have that. The approaches known today will certainly never get there. Statistical classifiers can have an incredible width of view, but they can never do more than scratch the surface. No approach "trained" on data can as the amount of data needed incr
C-level Eet Fream (Score:2)
Fewer programmers. Greater margins. More profit. And completely imaginary.
Re: (Score:2)
In other news "VVet Dream" gets censored and replaced with Eet Fream. ROTFL.
Jet Packs are the Next Thing - Really! (Score:2)
2 of 4 million. (Score:2)
Re: (Score:2)
Re: 2 of 4 million. (Score:2)
Just some historical perspective (Score:3)
One would expect that solving a given software problem would require fewer and fewer lines of code. For a while it looked like this would be the case. After all writing a simple program in dBase (yes it had a programming language) essentially just consists of defining forms for your data sets. The database itself would take care of all manipulations. The same was true for Delphi, which offered you ways for automatically generating forms which you could then edit.
One would think that such database applications today would be much simpler, but they aren't. Instead people now build multi-layer systems where the functionality is not only duplicated in the database and the GUI, but also in a server layer in between.
If we follow the trends we see today, we will see people writing more and more code to solve the same problems, while adding more and more external dependencies which continuously get worse and worse. I mean, we now have web apps, written with insane amounts of developer time... yet they barely are able to compete with mediocre desktop software from the 1990s.
Re: (Score:1)
There's a reason for this absurdity. Well, more than one, but I'll point out just one. This bizarre aversion to actually writing code. It's all frameworks, plugins, and third-party dependencies. It's bloated projects beyond all reason. Kids are so terrified of "reinventing the wheel" that they'll waste countless hours getting various shady libraries to somehow work together.
The worst part about it is that it takes longer and makes software larger, slower, less efficient, harder to maintain, and less se
AI is gonna teach better programmers (Score:4, Interesting)
I'm a pretty decent programmer. Good enough that I've made a career out of it and none of my code will (likely) ever make it to the Daily WTF. But there are programming concepts that I've always struggled to understand because frankly, the documentation is obtuse and hard to parse, and it wasn't really worth my time.
For instance, the Wikipedia entry on monads is frankly just obnoxious to read. I program in elisp a bit, so trying to understand monads is mostly about satisfying some curiosity, but something about the article just doesn't click with me and I have to move through it really slowly.
I asked ChatGPT to explain it to me in simple terms, and it did a good job. It even provided an example in JavaScript. Then I asked it to provide an example in elisp and it did that too. I'm not super concerned about correctness of the code, as long as it's generally okay, and it seems to have done an okay job.
I've also asked it to document some elisp functions that I've always thought were poorly described (emacs' documentation can really be hit or miss) and it really did a great job.
I'm not so arrogant as to say that these models won't one day generate a lot of good, usable code, but I honestly think that this ability to collate a tonne of data and boil it down to something understandable could fill in the gaps in a lot of documentation. The longest, most tedious parts of my job very often boil down to research for some engine-specific feature that I need, or some sort of weird platform quirk. For publicly available engines like Unreal, this will honestly improve my productivity quite a lot.
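Going back to the monads mentioned above, here is a minimal sketch of the idea such an explanation boils down to (in Swift rather than JavaScript or elisp, and not ChatGPT's actual output): Optional already behaves like a monad, with flatMap playing the role of "bind", chaining steps that may each fail without nested nil checks.

    // Two small helpers, invented for this sketch, that each may fail.
    func parseInt(_ s: String) -> Int? { Int(s) }
    func reciprocal(_ n: Int) -> Double? { n == 0 ? nil : 1.0 / Double(n) }

    // flatMap is the monadic "bind": unwrap, apply, re-wrap.
    let ok  = parseInt("4").flatMap(reciprocal)     // Optional(0.25)
    let bad = parseInt("zero").flatMap(reciprocal)  // nil; the chain just stops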
Wow, their chief technologist (Score:2)
I’ll call this device a “computer” and it would run “programs”. You heard it first right here!
Clearly I have what i
Hey AI (Score:1)
Interesting failure cases (Score:2)
This all reminds me of an interesting test of genetic algorithms programming FPGAs a few years ago. There were a few tests where the target FPGA did perform correctly according to the test conditions.
Then they copied the program into another FPGA, same type, and it failed miserably. Analysis was very difficult because the GA had ignored any standard conventions, but they found the problem finally. The GA had programmed chains of gates to act as analog resonators (against any specification of the chip) that
The main reason I am not an ACM member anymore (Score:2)
The level of bullshit and outright nonsense they have been publishing has become untenable. This is not an organization I want to continue to be associated with. A pity, really.
Programming was killed more than 3 times already (Score:3)
After that, the changes in programming were more incremental, so there are no clear breakpoints. The key point is that every step automating programming changes the nature of programming and the tasks that may be addressed by programs, but doesn't eliminate programming. Perhaps some day the word "programming" will be replaced, but there will still be a task to describe what a computer should do. A number of commenters have made essentially this observation, but I thought it worthwhile to think of the specific big steps from the past.
Long before electronic digital computers, writers of fantasy and fairy tales observed the inherent difficulty of expressing a desire clearly enough to get what you actually want. My favorite is Five Children and It [gutenberg.org] by Edith Nesbit. That book concerns a Psammead, or sand fairy, which is additionally amusing since like modern computers the fairy is silicon based.
Communications of the ACM? (Score:2)
Let me see if I have this right: an article in Communications criticizing this idiot idea.
Hmmm... so what's the IEEE's Computer got to say? That would be the magazine that, in January 1994, presented OO as "the silver bullet" for the programming backlog, and that was *literally* the cover of the issue.
Look at restaurants and cooking shows. (Score:2)
We live in a time where the regular person in the developed world has EASY access to ingredients and information about how to combine those ingredients, and yet take-out is more popular than ever, we have dozens of meal-kit delivery services, and dozens or hundreds of cooking shows. Even though the process of preparing a meal has been made as cheap and easy as it has ever been in history ... fewer people are preparing their own meals than ever before. In fact, many people spend hours and hours watching co
Will software engineers ever stop being in demand? (Score:2)
There are two schools of thought:
https://preview.redd.it/ze2brz... [preview.redd.it]