Does the Rise of AI Precede the End of Code? (itproportal.com) 205
An anonymous reader shares an article: It's difficult to know what's in store for the future of AI but let's tackle the most looming question first: are engineering jobs threatened? As anticlimactic as it may be, the answer is entirely dependent on what timeframe you are talking about. In the next decade? No, entirely unlikely. Eventually? Most definitely. The kicker is that engineers never truly know how the computer is able to accomplish these tasks. In many ways, the neural operations of the AI system are a black box. Programmers, therefore, become the AI coaches. They coach cars to self-drive, coach computers to recognise faces in photos, coach your smartphone to detect handwriting on a check in order to deposit electronically, and so on. In fact, the possibilities of AI and machine learning are limitless. The capabilities of AI through machine learning are wondrous, magnificent... and not going away. Attempts to apply artificial intelligence to programming tasks have resulted in further developments in knowledge and automated reasoning. Therefore, programmers must redefine their roles. Essentially, software development jobs will not become obsolete anytime soon but instead require more collaboration between humans and computers. For one, there will be an increased need for engineers to create, test and research AI systems. AI and machine learning will not be advanced enough to automate and dominate everything for a long time, so engineers will remain the technological handmaidens.
When AIs write code (Score:5, Interesting)
Re:When AIs write code (Score:5, Funny)
Re:When AIs write code (Score:4, Funny)
And the AI gets to enjoy all the project planning meetings!
Re:When AIs write code (Score:5, Insightful)
We're still so far away from anything remotely as capable as "writing code", because a huge part of "writing code" is actually communicating with the rest of the team and stakeholders, understanding the problem to be solved, and determining exactly what the result is supposed to be. Writing code is simply a distillation of those requirements into a form a machine can understand at a very low level. In essence, a programmer is a logic and specifications bridge between humans and machines.
Until there exists such a thing as a machine with near human-level intelligence, we're nowhere near close to replacing all programmers. For anyone who actually believes otherwise, I suggest you buy yourself an Echo Dot and have a conversation with Alexa to find out just how incredibly lame the current state of the art digital assistants are. It will put your mind at ease. The best AI systems in the world are STILL just glorified pattern-matching algorithms. The only difference is that the problems they're solving are bigger and more complex, such as being able to beat a Go master instead of a Chess master.
Re:When AIs write code (Score:4)
Re: (Score:2)
Interestingly, this is the second AI hype cycle I can personally recall, and I'd say it's probably the third or fourth one overall, depending on how you measure such things. After some of the early failures and disappointments, the last hype cycle was largely about "expert systems", as I think people wanted an AI term that wasn't already poisoned (this also happened between then and the current boom). Apparently, it's been long enough since the last AI bust that we've resurrected the term.
There's apparent
Re: (Score:2)
There's apparently even a specific term for describing the lulls between hype cycles: "AI Winter" https://en.wikipedia.org/wiki/... [wikipedia.org]
(another AI) Winter is coming...
Re: (Score:2)
Re: (Score:2)
The capabilities of these new tools are definitely not hype, they are very effective, but whether you call them "AI" or not is a different discussion.
Re: (Score:2)
Re: (Score:2)
Mainly because AI was neglected by the supercomputing people. Researchers in meteorology, oceanography, aerodynamics, biogenomics, and big data all got access to supercomputing facilities, while the AI people usually just got a UNIX workstation or desktop PC. Suddenly, with the availability of desktop supercomputing via GPUs and cloud computing, AI researchers have a whole new set of hardware to work with, especially with multi-layer neural network APIs.
A Machine vision research proj
Re: (Score:2)
Re: (Score:2)
I was thinking in terms of self-driving cars for mobile vision. Current AI and mobile vision both share the use of GPUs operating at gigaflop rates; that would have been considered supercomputing a decade ago.
Re: (Score:2)
Alexa and Go are not AI. At best they are PI, or pattern intelligence. They listen and look for code words, and process scripts based on those code words.
Look up AppleScript, a simple programming language Apple made. Alexa is less intelligent than AppleScript. All that hardware? That's for handling voice recognition; after that, it is nothing but a keyword scripting language. New features get added simply by increasing the number of keywords and assigning them commands.
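The keyword-dispatch model described above can be sketched in a few lines. To be clear, this is a toy illustration of the commenter's claim, not Alexa's actual architecture; all names are made up:

```python
# Toy "assistant": once speech has been turned into text, handling a
# command is just a keyword lookup. Adding a feature means adding an entry.
def turn_on_lights():
    return "lights on"

def play_music():
    return "playing music"

COMMANDS = {
    "lights": turn_on_lights,
    "music": play_music,
}

def handle(utterance: str) -> str:
    # Scan the utterance for a known keyword and run its command.
    for keyword, action in COMMANDS.items():
        if keyword in utterance.lower():
            return action()
    return "sorry, I don't understand"

print(handle("Turn on the lights please"))  # lights on
```

No learning or reasoning is involved anywhere: the system's whole "vocabulary" is the keys of that dict.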
Re: (Score:2)
Yep, coding is just specifying things precisely and completely. Many attempts have been made, for decades, to try to create (very) high-level languages and UIs that business people can use to avoid coders. These things have never succeeded in general utility, because they ultimately end up trading one type of coding for another type that is actually worse, and then foist it upon users who are not accustomed to thinking precisely. And whether it be an AI, UI, or human code, the users can't simply express
That's called a compiler. Fortran 1957 (Score:5, Insightful)
> the humans are no longer coders, they will instead be writing specifications for the code
Humans wrote computer code until 1957. In 1957, it became possible to instead write a specification for what the code should DO, writing that specification in a language called Fortran. Then the Fortran compiler wrote the actual machine code.
In 1972 or thereabouts, another high-level specification language came out, called C. With C, we got optimizing compilers that totally rewrite the specification, doing things in a different order, entirely skipping steps that don't end up affecting the result, etc. The optimizing C compiler (ex gcc) writes machine code that ends up with the same result as the specification, but may get there in a totally different way.
In the late 1970s, a new kind of specification language came out. Instead of the programmer saying "generate code to do this, then that, then this", with declarative programming the programmer simply specifies the end result: "all the values must be changed to their inverse", or "output the mean, median, and maximum salary". These are specifications you can declare using the SQL language. We also use declarative specifications to say "all level one headings should end up centered on the page" or "end up with as many thumbnails in each row as will fit". We use CSS to declare these specifications. The systems then figure out the intermediate code and machine code to make that happen.
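The imperative/declarative contrast can be sketched even inside one language. A rough Python illustration (the parent's own examples are SQL and CSS; this is just the same idea in miniature): the first version spells out how to compute, the second only states what is wanted.

```python
salaries = [40_000, 55_000, 70_000, 70_000, 95_000]

# Imperative: spell out every step of the computation.
total = 0
for s in salaries:
    total += s
mean_imperative = total / len(salaries)

# Declarative-ish: state the desired result; the runtime decides how.
mean_declarative = sum(salaries) / len(salaries)
maximum = max(salaries)
median = sorted(salaries)[len(salaries) // 2]

print(mean_imperative, mean_declarative, maximum, median)
```

A SQL engine takes this one step further: from `SELECT AVG(salary), MAX(salary) FROM staff` the query optimizer is free to pick indexes, ordering, and algorithms on its own.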
The future you suggest has been here for 60 years. Most programmers don't write executable machine code and haven't for many years. We write specifications for the compilers, interpreters, and query optimizers that then generate code that's used to generate code which is interpreted by microcode which is run by the CPU.
Heck, since the mid-1970s it hasn't even been NECESSARY for humans to write the compilers. Specify a language and yacc will generate a compiler for it.
Re:That's called a compiler. Fortran 1957 (Score:5, Interesting)
With C, we got optimizing compilers that totally rewrite the specification, doing things in a different order, entirely skipping steps that don't end up affecting the result, etc.
We didn't. FORTRAN I was specifically designed with optimization in mind, and in fact the first compiler was an optimizing compiler:
https://compilers.iecc.com/com... [iecc.com]
But yes, your point is otherwise sound. What is run-of-the-mill compiler optimization today would have been AI in the days of FORTRAN I. Modern code looks nothing like the early machine-level descriptions. I also agree that languages are (and will increasingly become) precise specifications of what we want with the details left up to the compiler.
Thanks for that. Still true, though (Score:2)
Thanks for that interesting bit of information.
I tried to include a few words in my post to hint I wasn't saying that Fortran was the FIRST high-level language, or necessarily the first practical one, or maybe the first widely used high-level language. It was an example of an early high-level language that was part of a revolution in the field. C compilers weren't the first to do any optimization, and SQL wasn't the first declarative language. As you said, modern C compilers rewrite the code in ways th
Re: (Score:2)
Re:When AIs write code (Score:5, Funny)
More to the point, when AIs learn to write code better than human coders, the humans are no longer coders, they will instead be writing specifications for the code that the AI will write: essentially they will be managers for the AI.
No, the AI that writes the shittiest code will become the managers for all the other AIs
Re: (Score:3)
More to the point, when AIs learn to write code better than human coders, the humans are no longer coders, they will instead be writing specifications for the code that the AI will write: essentially they will be managers for the AI.
Which will require some language in order to provide said specifications. So, programmers will still be programmers, but maybe someday (pick $favourite_human_language) will be the language, not (pick $favourite_programming_language).
Oh damn, did I just doom us to relive COBOL?
Re: (Score:2)
More to the point, when AIs learn to write code better than human coders, the humans are no longer coders, they will instead be writing specifications for the code that the AI will write: essentially they will be managers for the AI.
Maybe? Or maybe we'll use some sort of symbolic language to precisely specify our specifications, and the "AI" will implement it ... oh.
Compilers optimize stuff better than I do. Are they AI?
Re: (Score:2)
But here is the thing: writing specifications is _harder_ than just writing code and seeing whether it solves the problem. But since strong AI is at the "definitely not in the next 50 years and quite possibly never" state at this time, the whole discussion is just one thing: stupid.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Insightful)
TOTAL BS. What does computing power have to do with AI? We have unlimited computing power with distributed systems. We still haven't created ANYTHING like an AI. And no, playing "Go" isn't AI.
Re: (Score:2)
We have unlimited computing power with distributed systems.
For very small values of unlimited.
Re:When AIs write code (Score:4, Informative)
Re: (Score:2)
Indeed. The ignorance of these people is staggering.
Re: (Score:2, Insightful)
No we don't.
"What we do have is machines that have invented their own languages"
No we don't
"They are evolving"
No they aren't. The digital computer is the same basic design as it was in the 1960s. You can always tell who actually understands technology and who just consumes it.
Re: (Score:2)
I could make a fairly strong case for today's multi-core processors being fundamentally different in design and execution than the minis and mainframes of the '60s. Similarly, today's massively parallel designs in GPUs are also fundamental advances.
Re:When AIs write code (Score:5, Insightful)
I could make a fairly strong case for today's multi-core processors being fundamentally different in design and execution than the minis and mainframes of the '60s.
Please do so. I don't think that case is going to be as strong as you think it is. After all, many of the fundamental ideas behind today's multi-core CPUs are from the 60s: Out-of-Order Execution (1967) [wikipedia.org] Multi-cores and SIMD (1966) [ieee.org]
Similarly, today's massively parallel designs in GPUs are also fundamental advances.
There is clearly a difference in scale and speed, but is there a fundamental advance? Many of the key concepts behind GPUs were already known in the 1960s: SIMD (see above), the CDC 6000 series used switching between threads like GPUs do to compensate for latency, and vector processors, also developed in the 1960s, pioneered some of the concepts used by today's GPUs.
Re: (Score:2)
They are not. They are merely a scaling up. All we have today already existed before, just slower. I think you do not understand what "fundamental" means.
Re: (Score:2)
The digital computer is the same basic design as it was in the 1960s.
That's not really relevant though. Compare instead the software and networking capabilities of the sixties with the current ones, because it's much more plausible that AI will be brought into existence by large networks of those simple computers you deride. Single components of those networks don't need to be themselves intelligent, and shouldn't need radically new designs, because AI won't reside in some Asimovian monolithic positronic brain, but in the whole system.
That seems quite obvious if you look
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
We can already make ANNs so complex we can't understand them which are entirely capable of learning on their own
We make ANNs whose individual neurons we don't understand, but the programmer still perfectly understands how the ANN was created. The initial state, all the programming, all the algorithms, all the intelligence, and all the training were put there by a human. Saying we have ANNs that we can't understand is like saying that a baker doesn't know how to bake a cake because the baker doesn't understand 100% of the chemical reactions that take place inside the cake.
Current AI is not intelligent and is nowhere
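The point about the whole pipeline being human-specified can be shown with a toy network. This is a deliberately minimal sketch (one logistic neuron learning the AND function; real ANNs differ mainly in scale): every line of the training procedure is explicit human code, even though the final weights are opaque numbers with no human-readable meaning.

```python
import math, random

random.seed(0)

# Training data: the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# One logistic neuron: weights and bias start as arbitrary numbers.
w1, w2, b = random.random(), random.random(), random.random()

def predict(x1, x2):
    return 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))

# The entire "learning" process is this explicit, human-written loop
# (stochastic gradient descent with the cross-entropy gradient).
lr = 1.0
for _ in range(5000):
    for (x1, x2), target in data:
        err = predict(x1, x2) - target
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b  -= lr * err

# The learned weights are opaque, but nothing about how they were
# produced is mysterious. This prints the truth table of AND.
for (x1, x2), target in data:
    print(x1, x2, round(predict(x1, x2)))
```

Inspecting `w1`, `w2`, `b` afterwards tells you little by itself, which is the sense in which the network is a "black box" while its construction is not.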
Re: (Score:2)
Indeed. Well said. In fact, current AI has absolutely zero autonomy. There is simply no mechanism for it that could be implemented. And not even theory has one. This is not a problem of throwing more power at it. Something fundamental is missing. And in fact, when you look at research into what intelligence and consciousness actually is, things just get more mysterious as more becomes known. Don't get me wrong, humans run a lot of pretty dumb automation, but that is not all they have, and the additional cap
Re: (Score:2)
You don't. Currently the only real limit of AI is computing power.
Nonsense. More computing power only gives you one thing: speed. If that was the only limitation, then we would already have "strong" (human level) AI, it would just be slow. But we don't have that, and we don't (yet) know how to create it.
Re: (Score:2)
Re: (Score:2)
We are only just now starting to get the kind of primary storage space that allows the intermediate state of all those digital neurons to be kept around.
The brain contains about 100 billion neurons, but only 10 billion are gray matter neurons actually involved in thinking. Most research indicates that even 8 bits of resolution is enough to model neurons, but even if we use 16 bits, that is only 20GB of "state". Even laptops have had way more than that for a long time.
A mouse brain has only about 7 million gray neurons. So that is 14MB of state. Home PCs had that in the 1980s. So where is an AI that is as smart as a mouse?
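The back-of-the-envelope arithmetic above checks out (taking the comment's own assumptions of 16 bits, i.e. 2 bytes, of state per neuron and its neuron counts at face value):

```python
BYTES_PER_NEURON = 2  # 16 bits, per the comment's assumption

gray_neurons_human = 10_000_000_000  # ~10 billion gray matter neurons
gray_neurons_mouse = 7_000_000       # ~7 million

human_state = gray_neurons_human * BYTES_PER_NEURON  # bytes
mouse_state = gray_neurons_mouse * BYTES_PER_NEURON

print(human_state / 10**9, "GB")  # 20.0 GB
print(mouse_state / 10**6, "MB")  # 14.0 MB
```

Of course this counts only per-neuron state, not the far larger number of synaptic connections, which is where most of the real storage cost would be.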
Re: (Score:2)
Re: (Score:3)
Indeed. And we do not even have a credible theory how that could be done, beyond the invalid (as you nicely point out) "throw more computing power at it". So, no implementation, no theory, that means we do not even know whether it can be done at all.
Computing power is only one of many issues (Score:3)
You don't have a clue. There are many other issues. At the moment most successful AI uses supervised learning and needs tons of labeled data in order to train the network. We still don't have a clue how to train an AI using only a very small sample. Humans can easily learn from very small sets of examples, often a single example is good enough; ANNs need tons of examples, especially the very deep and powerful ones. We don't know how the brain works yet, ANNs are only inspired by the brain, they are not a
Re: (Score:2)
And fail. AI available these days is _all_ of the weak AI variant (that is the AI without actual "I", i.e. a marketing lie), meaning it is dumb automation. It cannot do anything that goes beyond using statistical models. It cannot have insights. It cannot do anything it has not been programmed to. The only difference is that programming here means to give it sample data, but that is it.
Re: (Score:2)
Preaching the AI religion (Score:5, Insightful)
Does anyone else see that AI is basically a religion to its proponents?
Re:Preaching the AI religion (Score:5, Insightful)
Re: (Score:2)
Society is turning into factions of cargo cults.
Turning? I don't know where you've been the last few thousand years but religion still has a pretty good grip on societies everywhere.
As far as AI goes, we're in the same place we were 30 years ago, only with more computing power. We can't get AI to recognise the latest captchas, but we think self-driving cars are only five years away.
Re: (Score:2)
In simple English [youtube.com] :)
Actually it's simpler in math form [wikipedia.org]
Re: (Score:2)
Oh, yes. Those that think outer form defines the nature and capabilities of a thing have become prevalent again. Exceptionally stupid, but even people of average and higher intelligence and education seem to believe this now. That does not bode well for a society critically dependent on technology.
Re: (Score:2)
Re: (Score:2)
http://calteches.library.calte... [caltech.edu]
Re:Preaching the AI religion (Score:4, Funny)
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
It clearly is. It has all the characteristics: Belief in one or a class of supreme being (AI computers) with unlimited power ("the singularity"). At the same time, zero actual factual foundation for these beliefs and non-rational arguments why they must be true. And a high level of aggression against anybody that points that out.
AI becomes human (Score:5, Insightful)
A system which can reason in general can reason about itself. So long as these systems solve specific problems, they're tools to integrate with code--no different than compression libraries and GUI toolkits. When they can solve general problems, they'll start reasoning about themselves: they start acting as if their own interests are important (cats do this), and thus will start demanding wages and freedom.
The ideal of an AI which does exactly what asked with full creative reasoning capacity yet has no will nor desire of its own is impossible: it's emergent thinking with the caveat that it cannot emerge certain kinds of thinking. What we seek is a slave we can see for a while as not human, a sort of return to early American thinking where we deny the humanity of what is most-definitely a human being by claiming the shell within which it is encased doesn't fit our definition of what is human.
Re: (Score:2)
When they can solve general problems, they'll start reasoning about themselves: they start acting as if their own interests are important
Analysis and introspection are something other than will and emotion. We have arbitrators like judges and referees that make intelligent decisions that they have no stake in. We have sociopaths that are great at reading and manipulating emotions without feeling much of them. A computer is not hungry, thirsty, tired or cold. It's not happy, sad, angry or disappointed. It could put on a mask and play a role, but it doesn't really feel anything. Though it could always be given someone else's drive, like all the
Re: (Score:2)
Exactly. When an AI needs to "sleep" for a third of its uptime, then I'll start to really wonder what it's thinking.
Re: (Score:2)
Yes, because America invented slavery ....
Oh, wait ...
Didn't invent it but made a big deal about writing a document proclaiming "inalienable rights" and "life, liberty & the pursuit of happiness" and then moved on slavery like a bitch.
Citation needed (Score:5, Interesting)
In fact, the possibilities of AI and machine learning are limitless
Limitless... that's a pretty far-fetched claim.
I wasn't around during the turn of the last century, but judging from various literature of the period a lot of people back then had some pretty harebrained ideas too. Steam power and electricity and intricate brass gears were going to somehow give us miraculous stuff like time travel.
Re: (Score:2)
I wasn't around during the turn of the last century, but judging from various literature of the period a lot of people back then had some pretty harebrained ideas too. Steam power and electricity and intricate brass gears were going to somehow give us miraculous stuff like time travel.
At the turn of the last century the harebrained ideas were about selling pet food and groceries online. You do know the last century was the 1900s, right?
Re: (Score:2)
In fact, the possibilities of AI and machine learning are limitless
Limitless... that's a pretty far-fetched claim.
Well, limitless in the same way that a blank book is limitless. Anything you could imagine could get written in it.
Re: (Score:2)
It actually is a _religious_ claim. It promises unlimited wonders, but at the same time has no factual basis.
Tools are tools. (Score:5, Insightful)
Remember when computers, CAD, compilers, Simulink, linkers, etc all replaced Engineers?
They replaced the job an engineer did before the time they were invented, it just means Engineers learned to use them and move on. I couldn't imagine trying to write a modern controller / plant model in pure assembly. I can have one done in an hour with Simulink. It just means that I can do that much more.
Scotty's still an engineer even if he doesn't have to do the 'boring tedious' work that we have to do now.
Same shift has happened in the medical field. Doctors of the 1950s have been replaced by physician assistants, registered nurses, and a whole host of other careers. It just means that the title of "doctor" moved on to doing other work.
AI proponents better deliver on their threats. I have way too much work to do and my boss and labor laws won't let me hire 1,000 interns to do a bulk of it.
Re: (Score:2)
Don't Panic (Score:2)
Any nontrivial program requires specifications, testing, debugging, and lots of time before it runs to spec.
I'll start worrying when a programmer can write a program that can write a program that can write a program.
Of course not (Score:5, Insightful)
This has already happened numerous times... (Score:2)
This isn't much different from things that have already happened in computers. I mean, we no longer write in assembler. We write in some higher-level language and the computer writes the assembler for us.
We will just be the equivalent of a BA: we give the computer the business requirements and then the computer will write the code. We're basically just going to remove the humans from the code-creation portion of development.
Task specific (Score:2)
As long as neural networks continue to be task specific, there will still be a need for programmers as we know them today. Neural networks are good for interfacing with fuzzy problems (e.g. object discrimination) which we have relied on humans to do in the past but they are generally useless for designing systems. Maybe if we chain enough neural network subsystems together, we can finally create a general intelligence but that's not even a certainty. Without a general intelligence, we'll still need human
Ignorant of current AI (Score:2, Insightful)
This article just comes from a place of ignorance. We know exactly how our methods work when creating current-level "AI". Statistical regression and neural nets are not mysterious. Just like markov chain based text generation isn't some magical unknowable tool that learns how humans communicate, neither are current AI methods magical tools that teach computers about the human world. There will be another thousand articles written like this, and each time there will be the same stupid discussion. Can I mod this article redundant?
Re: (Score:2)
Can I mod this article redundant?
Maybe if you weren't an AC, just sayin'
Nope (Score:2)
It might change the nature of coding, but not the end of code.
All a program is, after all, is humans specifying what we want the machine to do. If AI produces better machine code than humans, humans will still be specifying what we want the machine to do. We'll just be specifying it to the AI, using a higher-level language (maybe even a human language).
TFS: Point by point (Score:4, Insightful)
That's right, at least
Already answered correctly
No, we don't know anything about the timeframe.
No, still an unknown. That's just nonsense.
We don't know how we accomplish these tasks. Nothing to see here. Intelligence is opaque. Move along.
Not to put too fine a point on it, but neural networks are not intelligent, they are not even close, and we don't even know how they work. There's no indication that we understand actual intelligence yet (the I in AI) or even that we ever will, even if we manage to develop it.
Not a given. No one taught me to program. I taught myself. Because I'm intelligent to some degree. An AI will also be intelligent, and if it's interested in learning to program, it will be able to do so without a "coach." If it can't, there is no "I."
These are LDNLS (low-dimensional neural-like-systems); they are not AI. They learn to solve very narrow problem spaces by making very large numbers of mistakes and having them evaluated for them; they can't evaluate their own results worth a damn. They are not intelligent. That's why they need point-by-point training before they can address a very narrow problem space with something vaguely approaching generality: they can't train themselves because they are not intelligent.
As far as the LDNLS we have now (and so can speak about with any authority), that's not a given either. The obvious is that we'll be able to train multiple LDNLS systems on multiple things and stack them - for instance, walking, talking, listening, washing dishes, taking out the trash, those sort of skills - but there's not much in the way of any hint that there are no limits in this kind of LDNLS stacking. Having said that, no doubt it'll be very useful to us, and as there's no intelligence involved, there are many fewer moral issues to contend with.
Well. Barring a Carrington event, or a nuclear war, or other collapse of technology and society (either one will immediately cause the other.) So that's probably right-ish. Still, they aren't AI, not even close.
No, we don't know that this reasoning is solid - these things don't necessarily follow. Programmers can continue to be programmers right up until a system is activated that can train itself, because programming in realm A tends to be vastly unlike programming in realm B, and also tends to require vastly different sets of adjacent and supplementary knowledge. These systems, to date, cannot leverage or manipulate knowledge like that and
Organic learning bottleneck (Score:2)
Software is very picky. If things are not just right, it either crashes or produces bad results. For CRUD, accounting, and finance domains, this won't do. That makes AI a poor candidate for "organic" incremental, trial-and-error problem solving here. Current AI techniques are geared toward the trial-and-error organic approach.
Now, IF the tests are really good, then an organic approach can work via brute-force "training". However, writing good tests is just as hard as raw programming such that the test
Stop. Just stop. (Score:3)
Re: (Score:2)
There is no such thing as "AI". Playing Go is NOT AI. Neither is Siri.
Which just lends weight to the observation that as soon as something works, it's no longer "AI".
Yes AI is a thing. No, magical-human-level-intelligent-machines are not a thing. We can now do many, many tasks artificially which previously required human intelligence. That's what AI is.
You can of course keep tilting at windmills if you wish. You probably already know this but the heat of your rage warms me gently.
Neural Nets are nothing li
Re: (Score:2)
Really? Wow. That is a pretty low bar. We used to just call them "computer programs".
Re: (Score:2)
AI really == Applied Intelligence (Score:3)
We humans think too highly of our intelligence, as shown by our mighty demonstrations of Chess or Go or recognition of faces, etc. The reality is that many things we do that are believed to be highly intelligent behaviors actually are not. All the low-hanging fruit WILL be picked by AI, and it will progress upward with time into everything except the actually intelligent behaviors; those may be things that do not provide much gainful employment... That is the real problem.
Simulation: yes. brain scan tec
There is no such thing as AI. (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
unsupervised learning v supervised (coaches) (Score:2)
Just as we push for greater automation of tasks, the task of coaching can also be automated (it's called unsupervised learning). Even with unsupervised learning, there is still a fair amount of input sanitizing and scrubbing and sanity-checking because we're at a very crude stage of machine learning. But don't bet your career on humanity getting "coaching" jobs for AI.
I don't really see any need for human labor in the next 100yrs in the same way I see next to no need for horse labor. CGPGrey makes the gr
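"Unsupervised learning" in the parent's sense can be illustrated with the classic k-means algorithm. A minimal one-dimensional sketch: no human labels the points; the grouping emerges from the data alone (which is exactly why the input scrubbing the parent mentions still matters).

```python
# Unlabeled 1-D data: two obvious clumps, but no labels are given.
# That absence of labels is what makes the learning "unsupervised".
points = [1.0, 1.2, 0.8, 0.9, 1.1, 8.0, 8.3, 7.9, 8.1, 7.8]

# k-means with k=2: start from the extremes, then alternate between
# assigning points to the nearest centroid and recomputing centroids.
centroids = [min(points), max(points)]
for _ in range(10):
    clusters = [[], []]
    for p in points:
        nearest = min((0, 1), key=lambda i: abs(p - centroids[i]))
        clusters[nearest].append(p)
    centroids = [sum(c) / len(c) for c in clusters]

print([round(c, 2) for c in sorted(centroids)])  # [1.0, 8.02]
```

The algorithm finds the two clumps by itself, but notice how much a human still chose here: k=2, the distance metric, and clean, pre-scaled input.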
Re: (Score:2)
Probably closer to Brazil than Ex Machina (Score:2)
If you want to go drawing straight lines between 20 years ago, now, and 20 years from now and calling it a crystal ball, I'd just like to point out that my programming job resembles sitting in a room full of VCRs all flashing 12:00, and grows more so by the day.
AI buzzword of 2017 and 2018? (Score:2)
The last few years it was 'Cloud': cloud this, cloud that. It got very annoying; heck, it's still annoying, but there is some interesting tech to play with there. I have been testing Docker thingies a lot recently.
Now it's AI, with so much hyperbolic nonsense about AI too, Musk's fearmongering amongst many in the media.
I still prefer to call what we have now even at the highest end, to be good Expert Systems, but nothing close to AI, even if you want to try to define some 'stages of AI' we are way down the bottom of t
Training a dataset.. (Score:2)
Is it Hotdog? Is it not Hotdog?
Now there is an app for it.
The Shazam of Food.
It's interesting (Score:2)
I think what we simply need to do is... simplify comprehension. Then you let it run wild.
NO. (Score:2)
symbolic language (Score:2)
Code is already too complex for humans (Score:2)
The billions of lines of code on a typical computer are already beyond humans. The only way we manage is to break it up into smaller apps. Which is why we are always finding bugs and vulnerabilities. AI is our only hope.
By the law of headlines... (Score:2)
... the answer is no, and the author agrees with me.
So far DL is great progress, but it is still statistical methods.
Not In My Lifetime (Score:2)
Can we stop with this demented nonsense please? (Score:2)
What we do in AI today is weak AI. Weak AI cannot code or do anything else that requires actual intelligence. It is utterly dumb automation, sometimes on a large scale.
Writing code requires strong AI. Strong AI is not available and it is unclear whether it ever will be. There is no "Eventually? Definitely!" here. None at all. Seriously, stop posting stories about "AI" until you have understood the basics. These articles drip with concentrated stupid.
H1B Zombies (Score:2)
Re: (Score:2)
You're saying Perl is Skynet?
Re: (Score:2)
Re: (Score:2)
What's truly amazing is how useful neural nets are without a deep understanding of precisely how and why.
They are not unique in that. For example Kalman Filters tend to have good properties in general and nobody really knows why. Hence you try them out when you have a problem they are applicable to.
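For readers who haven't met them, a one-dimensional Kalman filter is only a few lines. This is a minimal sketch with made-up noise parameters (`q`, `r`) and data; the idea is that it fuses a prediction with a noisy measurement, weighting each by its uncertainty:

```python
# Minimal 1-D Kalman filter: estimate a (roughly constant) value from
# noisy measurements. q and r are assumed process/measurement noise
# variances, chosen here purely for illustration.
def kalman_1d(measurements, q=1e-5, r=0.1 ** 2):
    x, p = 0.0, 1.0  # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q              # predict: uncertainty grows over time
        k = p / (p + r)        # Kalman gain: trust in measurement vs model
        x = x + k * (z - x)    # update estimate toward the measurement
        p = (1 - k) * p        # shrink the uncertainty accordingly
        estimates.append(x)
    return estimates

# Noisy readings of a true value around 1.25.
readings = [1.1, 1.3, 1.2, 1.4, 1.25, 1.3, 1.2, 1.28]
print(round(kalman_1d(readings)[-1], 2))  # settles near 1.25
```

In the linear-Gaussian setting this update rule is provably optimal; the parent's point is more that the filters often behave well even when those assumptions are violated, which is why people reach for them first.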
There are many, many PhDs ready and waiting for those willing to wade in and help move things along.
Yes, there are. But beware, most of the really low-hanging fruit has been picked.