AI Programming

Does the Rise of AI Precede the End of Code? (itproportal.com) 205

An anonymous reader shares an article: It's difficult to know what's in store for the future of AI but let's tackle the most looming question first: are engineering jobs threatened? As anticlimactic as it may be, the answer is entirely dependent on what timeframe you are talking about. In the next decade? No, entirely unlikely. Eventually? Most definitely. The kicker is that engineers never truly know how the computer is able to accomplish these tasks. In many ways, the neural operations of the AI system are a black box. Programmers, therefore, become the AI coaches. They coach cars to self-drive, coach computers to recognise faces in photos, coach your smartphone to detect handwriting on a check in order to deposit electronically, and so on. In fact, the possibilities of AI and machine learning are limitless. The capabilities of AI through machine learning are wondrous, magnificent... and not going away. Attempts to apply artificial intelligence to programming tasks have resulted in further developments in knowledge and automated reasoning. Therefore, programmers must redefine their roles. Essentially, software development jobs will not become obsolete anytime soon but instead require more collaboration between humans and computers. For one, there will be an increased need for engineers to create, test and research AI systems. AI and machine learning will not be advanced enough to automate and dominate everything for a long time, so engineers will remain the technological handmaidens.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • When AIs write code (Score:5, Interesting)

    by XXongo ( 3986865 ) on Friday October 13, 2017 @12:23PM (#55363369) Homepage
    More to the point, when AIs learn to write code better than human coders, the humans are no longer coders, they will instead be writing specifications for the code that the AI will write: essentially they will be managers for the AI.
    • by Anonymous Coward on Friday October 13, 2017 @12:28PM (#55363401)
      Until the AI writes better AI code. It's kind of like bootstrapping a compiler. Then we sit back, relax, and let the sexbots feed us peeled grapes.
    • by Dutch Gun ( 899105 ) on Friday October 13, 2017 @12:46PM (#55363557)

      We're still so far away from anything remotely as capable as "writing code", because a huge part of "writing code" is actually communicating with the rest of the team and stakeholders, understanding the problem to be solved, and determining exactly what the result is supposed to be. Writing code is simply a distillation of those requirements into a form a machine can understand at a very low level. In essence, a programmer is a logic and specifications bridge between humans and machines.

      Until there exists such a thing as a machine with near human-level intelligence, we're nowhere near close to replacing all programmers. For anyone who actually believes otherwise, I suggest you buy yourself an Echo Dot and have a conversation with Alexa to find out just how incredibly lame the current state of the art digital assistants are. It will put your mind at ease. The best AI systems in the world are STILL just glorified pattern-matching algorithms. The only difference is that the problems they're solving are bigger and more complex, such as being able to beat a Go master instead of a Chess master.

      • by 110010001000 ( 697113 ) on Friday October 13, 2017 @01:03PM (#55363675) Homepage Journal
Exactly. What is the sudden interest in "AI" now? Is it because VR failed and now the VCs are looking for another hype cycle to cash in on?
Interestingly, this is the second AI hype cycle I can personally recall, and I'd say it's probably the third or fourth one overall, depending on how you measure such things. After some of the early failures and disappointments, the last hype cycle was largely about "expert systems", as I think people wanted an AI term that wasn't already poisoned (this also occurred between then and the current boom). Apparently, it's been long enough since the last AI bust that we've resurrected the term.

          There's apparent

        • The sudden interest is due to advances in learning algorithms specifically in the area of deep belief networks. Some academics (Hinton and others) came up with some methods to be able to train these types of networks, this in turn has allowed everyone to make use of these networks in areas where they perform extremely well (e.g. image recognition).

          The capabilities of these new tools are definitely not hype, they are very effective, but whether you call them "AI" or not is a different discussion.
        • by mikael ( 484 )

Mainly because AI was neglected by the supercomputing people. The meteorology, oceanography, aerodynamics, biogenomics, big data, and supercomputing researchers all got access to supercomputing facilities, while the AI people usually just got a UNIX workstation or desktop PC. Suddenly, with the availability of desktop supercomputing with GPUs and cloud computing, the AI researchers have a whole new set of hardware to work with, especially with multi-layer neural network APIs.

          A Machine vision research proj

          • You don't need AI or a supercomputer to do machine vision research. What does mobile vision have to do with AI anyway? Nothing.
            • by mikael ( 484 )

I was thinking in terms of self-driving cars for mobile vision. Current AI and mobile vision both share the use of GPUs operating at gigaflops. That would have been considered supercomputing a decade ago.

Alexa and Go are not AI. At best they are PI, or pattern intelligence. They listen and look for code words, and process scripts based on those code words.

Look up AppleScript, a simple programming language Apple made. Alexa is less intelligent than AppleScript. All that hardware? That's for handling voice recognition; after that, it is nothing but a keyword scripting language. New features get added simply by increasing the number of keywords and assigning them commands.
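The keyword-dispatch model this comment describes can be sketched in a few lines of Python. This is a toy illustration, not how Alexa is actually built, and the commands and replies here are entirely hypothetical:

```python
# Minimal sketch of keyword-triggered command dispatch, as the comment
# describes: find a known keyword in the utterance, run the handler
# bound to it. Keywords and canned replies are hypothetical.
HANDLERS = {
    "weather": lambda: "Today is sunny.",
    "time": lambda: "It is 12:00.",
}

def respond(utterance: str) -> str:
    for word in utterance.lower().split():
        if word in HANDLERS:
            return HANDLERS[word]()
    return "Sorry, I don't understand."

print(respond("What time is it"))  # → It is 12:00.
```

Adding a "feature" really is just adding another entry to the table, which is the commenter's point.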

      • Yep, coding is just specifying things precisely and completely. Many attempts have been made, for decades, to try to create (very) high-level languages and UIs that business people can use to avoid coders. These things have never succeeded in general utility, because they ultimately end up trading one type of coding for another type that is actually worse, and then foist it upon users who are not accustomed to thinking precisely. And whether it be an AI, UI, or human code, the users can't simply express

    • by raymorris ( 2726007 ) on Friday October 13, 2017 @01:03PM (#55363669) Journal

      > the humans are no longer coders, they will instead be writing specifications for the code

      Humans wrote computer code until 1957. In 1957, it became possible to instead write a specification for what the code should DO, writing that specification in a language called Fortran. Then the Fortran compiler wrote the actual machine code.

In 1972 or thereabouts, another high-level specification language came out, called C. With C, we got optimizing compilers that totally rewrite the specification, doing things in a different order, entirely skipping steps that don't end up affecting the result, etc. The optimizing C compiler (e.g. gcc) writes machine code that ends up with the same result as the specification, but may get there in a totally different way.

In the late 1970s, a new kind of specification language came out. Instead of the programmer saying "generate code to do this, then that, then this", with declarative programming the programmer simply specifies the end result: "All the values must be changed to their inverse", or "output the mean, median, and maximum salary". These are specifications you can declare using the SQL language. We also use declarative specifications to say "all level one headings should end up centered on the page" or "end up with however many thumbnails in each row as will fit". We use CSS to declare these specifications. The systems then figure out the intermediate code and machine code to make that happen.
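The imperative/declarative split described above can be mimicked in a short Python sketch (the salary figures are made up): the imperative version spells out each step, while the declarative-style version just names the result we want and leaves the mechanics to the library.

```python
import statistics

salaries = [52000, 61000, 47500, 70250]  # hypothetical data

# Imperative: say HOW to compute it, step by step.
total = 0
for s in salaries:
    total += s
mean_imperative = total / len(salaries)

# Declarative-style: say WHAT we want; the library decides how.
mean_declarative = statistics.mean(salaries)
median_salary = statistics.median(salaries)
max_salary = max(salaries)

print(mean_imperative, median_salary, max_salary)  # → 57687.5 56500.0 70250
```

The SQL version of the same request ("output the mean, median, and maximum salary") would not mention loops or accumulators at all; that delegation of the "how" is exactly the parent's point.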

      The future you suggest has been here for 60 years. Most programmers don't write executable machine code and haven't for many years. We write specifications for the compilers, interpreters, and query optimizers that then generate code that's used to generate code which is interpreted by microcode which is run by the CPU.

      Heck, since the mid-1970s it hasn't even been NECESSARY for humans to write the compilers. Specify a language and yacc will generate a compiler for it.

      • by serviscope_minor ( 664417 ) on Friday October 13, 2017 @02:33PM (#55364329) Journal

        With C, we got optimizing compilers that totally rewrite the specification, doing things in a different order, entirely skipping steps that don't end up affecting the result, etc.

We didn't. FORTRAN I was specifically designed with optimization in mind, and in fact the first compiler was an optimizing compiler:

        https://compilers.iecc.com/com... [iecc.com]

        But yes, your point is otherwise sound. What is run-of-the-mill compiler optimization today would have been AI in the days of FORTRAN I. Modern code looks nothing like the early machine-level descriptions. I also agree that languages are (and will increasingly become) precise specifications of what we want with the details left up to the compiler.

        • Thanks for that interesting bit of information.

I tried to include a few words in my post to hint I wasn't saying that Fortran was the FIRST high-level language, or necessarily the first practical one, or maybe the first widely used high-level language. It was an example of an early high-level language that was part of a revolution in the field. C compilers weren't the first to do any optimization, and SQL wasn't the first declarative language. As you said, modern C compilers rewrite the code in ways th

    • Comment removed based on user account deletion
    • by haruchai ( 17472 ) on Friday October 13, 2017 @01:30PM (#55363899)

      More to the point, when AIs learn to write code better than human coders, the humans are no longer coders, they will instead be writing specifications for the code that the AI will write: essentially they will be managers for the AI.

No, the AI that writes the shittiest code will become the manager for all the other AIs

    • More to the point, when AIs learn to write code better than human coders, the humans are no longer coders, they will instead be writing specifications for the code that the AI will write: essentially they will be managers for the AI.

Which will require some language in order to provide said specifications. So, programmers will still be programmers, but maybe someday (pick $favourite_human_language) will be the language, not (pick $favourite_programming_language).

      Oh damn, did I just doom us to relive COBOL?

    • More to the point, when AIs learn to write code better than human coders, the humans are no longer coders, they will instead be writing specifications for the code that the AI will write: essentially they will be managers for the AI.

      Maybe? Or maybe we'll use some sort of symbolic language to precisely specify our specifications, and the "AI" will implement it ... oh.

      Compilers optimize stuff better than I do. Are they AI?

    • by gweihir ( 88907 )

But here is the thing: Writing specifications is _harder_ than just writing code and seeing whether it solves the problem. But since strong AI is at the "definitely not in the next 50 years and quite possibly never" state at this time, the whole discussion is just one thing: stupid.

  • by mbone ( 558574 ) on Friday October 13, 2017 @12:26PM (#55363389)

    Does anyone else see that AI is basically a religion to its proponents?

  • AI becomes human (Score:5, Insightful)

    by bluefoxlucid ( 723572 ) on Friday October 13, 2017 @12:27PM (#55363397) Homepage Journal

    A system which can reason in general can reason about itself. So long as these systems solve specific problems, they're tools to integrate with code--no different than compression libraries and GUI toolkits. When they can solve general problems, they'll start reasoning about themselves: they start acting as if their own interests are important (cats do this), and thus will start demanding wages and freedom.

The ideal of an AI which does exactly what it is asked, with full creative reasoning capacity yet no will nor desire of its own, is impossible: it's emergent thinking with the caveat that it cannot emerge certain kinds of thinking. What we seek is a slave we can, for a while, see as not human: a sort of return to early American thinking, where we deny the humanity of what is most definitely a human being by claiming the shell within which it is encased doesn't fit our definition of what is human.

    • by Kjella ( 173770 )

      When they can solve general problems, they'll start reasoning about themselves: they start acting as if their own interests are important

Analysis and introspection are something other than will and emotion. We have arbitrators like judges and referees that make intelligent decisions that they have no stake in. We have sociopaths that are great at reading and manipulating emotions without feeling much of them. A computer is not hungry, thirsty, tired or cold. It's not happy, sad, angry or disappointed. It could put on a mask and play a role, but it doesn't really feel anything. Though it could always be given someone else's drive, like all the

      • by Nethead ( 1563 )

        Exactly. When an AI needs to "sleep" for a third of its uptime, then I'll start to really wonder what it's thinking.

  • Citation needed (Score:5, Interesting)

    by Spy Handler ( 822350 ) on Friday October 13, 2017 @12:30PM (#55363423) Homepage Journal

    In fact, the possibilities of AI and machine learning are limitless

    Limitless... that's a pretty far-fetched claim.

    I wasn't around during the turn of the last century, but judging from various literature of the period a lot of people back then had some pretty harebrained ideas too. Steam power and electricity and intricate brass gears were going to somehow give us miraculous stuff like time travel.

    • by ranton ( 36917 )

      I wasn't around during the turn of the last century, but judging from various literature of the period a lot of people back then had some pretty harebrained ideas too. Steam power and electricity and intricate brass gears were going to somehow give us miraculous stuff like time travel.

At the turn of the last century the harebrained ideas were about selling pet food and groceries online. You do know the last century was the 1900s, right?

    • In fact, the possibilities of AI and machine learning are limitless

      Limitless... that's a pretty far-fetched claim.

      Well, limitless in the same way that a blank book is limitless. Anything you could imagine could get written in it.

    • by gweihir ( 88907 )

      It actually is a _religious_ claim. It promises unlimited wonders, but at the same time has no factual basis.

  • Tools are tools. (Score:5, Insightful)

    by 0100010001010011 ( 652467 ) on Friday October 13, 2017 @12:33PM (#55363451)

    Remember when computers, CAD, compilers, Simulink, linkers, etc all replaced Engineers?

They replaced the job an engineer did before they were invented; it just means engineers learned to use them and moved on. I couldn't imagine trying to write a modern controller / plant model in pure assembly. I can have one done in an hour with Simulink. It just means that I can do that much more.

    Scotty's still an engineer even if he doesn't have to do the 'boring tedious' work that we have to do now.

    Same shift has happened in the medical field. Doctors of the 1950s have been replaced by physician assistants, registered nurses, and a whole host of other careers. It just means that the title of "doctor" moved on to doing other work.

    AI proponents better deliver on their threats. I have way too much work to do and my boss and labor laws won't let me hire 1,000 interns to do a bulk of it.

Computers have just enabled an increasing amount of complexity, in architecture, engineering, and things like trading software or an HR system. If you ever wondered why stuff like medical insurance is so complicated, it's because computers have enabled the administration of something so complex.
  • Any nontrivial program requires specifications, testing, debugging, and lots of time before it runs to spec.

    I'll start worrying when a programmer can write a program that can write a program that can write a program.

  • Of course not (Score:5, Insightful)

    by tomhath ( 637240 ) on Friday October 13, 2017 @12:34PM (#55363469)
    The hard part is defining the requirements and architecting a solution based on those requirements. The hard part of "coding" is understanding those two things. I don't see AI getting there for a long time.
This isn't much different from things that have already happened in computers. I mean, we no longer write in assembler. We write in some higher-level language and the computer writes the assembler for us.

We will just be the equivalent of a BA.... we give the computer the business requirements and then the computer will write the code. We're basically just going to remove humans from the code-creation portion of development.

  • As long as neural networks continue to be task specific, there will still be a need for programmers as we know them today. Neural networks are good for interfacing with fuzzy problems (e.g. object discrimination) which we have relied on humans to do in the past but they are generally useless for designing systems. Maybe if we chain enough neural network subsystems together, we can finally create a general intelligence but that's not even a certainty. Without a general intelligence, we'll still need human

  • by Anonymous Coward

    This article just comes from a place of ignorance. We know exactly how our methods work when creating current level "AI". Statistical regression and neural nets are not mysterious. Just like markov chain based text generation isn't some magical unknowable tool that learns how humans communicate neither are current AI methods magical tools that teach computers about the human world. There will be another thousand articles written like this and each time there will be the same stupid discussion. Can I mod thi

  • It might change the nature of coding, but not the end of code.

A program is, after all, just humans specifying what we want the machine to do. If AI produces better machine code than humans, humans will still be specifying what we want the machine to do. We'll just be specifying it to the AI, using a higher-level language (maybe even a human language).

  • by fyngyrz ( 762201 ) on Friday October 13, 2017 @12:55PM (#55363621) Homepage Journal

    It's difficult to know what's in store for the future of AI

    That's right, at least

    but let's tackle the most looming question first: are engineering jobs threatened?

    Already answered correctly

    As anticlimactic as it may be, the answer is entirely dependent on what timeframe you are talking about.

    No, we don't know anything about the timeframe.

    In the next decade? No, entirely unlikely. Eventually? Most definitely.

    No, still an unknown. That's just nonsense.

    The kicker is that engineers never truly know how the computer is able to accomplish these tasks.

    We don't know how we accomplish these tasks. Nothing to see here. Intelligence is opaque. Move along.

    In many ways, the neural operations of the AI system are a black box.

    Not to put too fine a point on it, but neural networks are not intelligent, they are not even close, and we don't even know how they work. There's no indication that we understand actual intelligence yet (the I in AI) or even that we ever will, even if we manage to develop it.

    Programmers, therefore, become the AI coaches.

    Not a given. No one taught me to program. I taught myself. Because I'm intelligent to some degree. An AI will also be intelligent, and if it's interested in learning to program, it will be able to do so without a "coach." If it can't, there is no "I."

    They coach cars to self-drive, coach computers to recognise faces in photos, coach your smartphone to detect handwriting on a check in order to deposit electronically, and so on.

    These are LDNLS (low-dimensional neural-like-systems); they are not AI. They learn to solve very narrow problem spaces by making very large numbers of mistakes and having them evaluated for them; they can't evaluate their own results worth a damn. They are not intelligent. That's why they need point-by-point training before they can address a very narrow problem space with something vaguely approaching generality: they can't train themselves because they are not intelligent.

    In fact, the possibilities of AI and machine learning are limitless.

    As far as the LDNLS we have now (and so can speak about with any authority), that's not a given either. The obvious is that we'll be able to train multiple LDNLS systems on multiple things and stack them - for instance, walking, talking, listening, washing dishes, taking out the trash, those sort of skills - but there's not much in the way of any hint that there are no limits in this kind of LDNLS stacking. Having said that, no doubt it'll be very useful to us, and as there's no intelligence involved, there are many fewer moral issues to contend with.

    The capabilities of AI through machine learning are wondrous, magnificent... and not going away.

    Well. Barring a Carrington event, or a nuclear war, or other collapse of technology and society (either one will immediately cause the other.) So that's probably right-ish. Still, they aren't AI, not even close.

    Attempts to apply artificial intelligence to programming tasks have resulted in further developments in knowledge and automated reasoning. Therefore, programmers must redefine their roles.

    No, we don't know that this reasoning is solid - these things don't necessarily follow. Programmers can continue to be programmers right up until a system is activated that can train itself, because programming in realm A tends to be vastly unlike programming in realm B, and also tends to require vastly different sets of adjacent and supplementary knowledge. These systems, to date, cannot leverage or manipulate knowledge like that and

Software is very picky. If things are not just right, it either crashes or produces bad results. For CRUD, accounting, and finance domains, this won't do. That makes AI a poor candidate for "organic" incremental, trial-and-error problem solving here. Current AI techniques are geared toward the trial-and-error organic approach.

    Now, IF the tests are really good, then an organic approach can work via brute-force "training". However, writing good tests is just as hard as raw programming such that the test

  • by 110010001000 ( 697113 ) on Friday October 13, 2017 @01:01PM (#55363653) Homepage Journal
    Just stop. There is no such thing as "AI". Playing Go is NOT AI. Neither is Siri. Neural Nets are nothing like how real brains work. So just stop the AI hype.
    • There is no such thing as "AI". Playing Go is NOT AI. Neither is Siri.

      Which just lends weight to the observation that as soon as something works, it's no longer "AI".

      Yes AI is a thing. No, magical-human-level-intelligent-machines are not a thing. We can now do many, many tasks artificially which previously required human intelligence. That's what AI is.

      You can of course keep tilting at windmills if you wish. You probably already know this but the heat of your rage warms me gently.

      Neural Nets are nothing li

      • "We can now do many, many tasks artificially which previously required human intelligence. That's what AI is."

        Really? Wow. That is a pretty low bar. We used to just call them "computer programs".
  • Seriously. There is no such thing as AI, and there will never be. We have expert systems, machine learning, a bunch of domain-specific software, etc. but we do NOT and will not have a computer program with the depth of functionality of a human brain. You heard it here first.
  • Comment removed based on user account deletion
  • Just as we push for greater automation of tasks, the task of coaching can also be automated (it's called unsupervised learning). Even with unsupervised learning, there is still a fair amount of input sanitizing and scrubbing and sanity-checking because we're at a very crude stage of machine learning. But don't bet your career on humanity getting "coaching" jobs for AI.

    I don't really see any need for human labor in the next 100yrs in the same way I see next to no need for horse labor. CGPGrey makes the gr
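Unsupervised learning, which the parent mentions, means finding structure in data with no labels or "coach" at all. A minimal sketch is k-means clustering on made-up one-dimensional data (all numbers here are hypothetical, and real systems use far more robust implementations):

```python
# Minimal k-means sketch on 1-D data: no labels are given; the algorithm
# groups points by repeatedly assigning each point to its nearest center
# and moving each center to the mean of its assigned points.
def kmeans_1d(points, centers, iterations=10):
    for _ in range(iterations):
        clusters = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(p - c))
            clusters[nearest].append(p)
        # Empty clusters keep their old center.
        centers = [sum(g) / len(g) if g else c for c, g in clusters.items()]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]   # two obvious groups
print(kmeans_1d(data, centers=[0.0, 5.0]))  # centers settle near 1.0 and 10.0
```

No human ever tells the algorithm which group a point belongs to; the structure emerges from the data, which is the sense in which the "coaching" itself is automated.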

    • In the real world, we can barely create usable, stable software. When is this "AI" stuff coming? In the real world my OS is riddled with literally thousands of bugs. Yet somehow, this miraculous "AI" software is going to come out and start replacing everyone.
  • If you want to go drawing straight lines between 20 years ago, now, and 20 years from now and calling it a crystal ball, I'd just like to point out that my programming job resembles sitting in a room full of VCRs all flashing 12:00, and grows more so by the day.

The last few years it was 'Cloud': cloud this, cloud that. It got very annoying; well heck, it's still annoying, but there is some interesting tech to play with there. I have been testing Docker thingies a lot recently.

Now it's AI, with so much hyperbolic nonsense about AI too, Musk's fearmongering amongst many in the media.

    I still prefer to call what we have now even at the highest end, to be good Expert Systems, but nothing close to AI, even if you want to try to define some 'stages of AI' we are way down the bottom of t

  • Is it Hotdog? Is it not Hotdog?
    Now there is an app for it.
    The Shazam of Food.

  • I think what we simply need to do is... simplify comprehension. Then you let it run wild.

  • I'm getting real sick and tired of people thinking that the crap they keep trotting out that they call 'AI' is some god-like superintelligence that can do everything and anything; it cannot and it's not going to anytime soon, if EVER; none of this shit can actually THINK so it's not going to do even HALF the things people keep asking about. Until we solve the riddle of cognition and real self-awareness in our own brains, we are NOT going to be building machines that can do that, too, FULL STOP.
  • Maybe we'll specify our specifications in a nicely specific symbolic language, and then have the AI's implement it ... we could call them, er, AI languages ...
  • The billions of lines of code on a typical computer are already beyond humans. The only way we manage is to break it up into smaller apps. Which is why we are always finding bugs and vulnerabilities. AI is our only hope.

  • ... the answer is no, and the author agrees with me.

    I don’t see software engineering jobs going away anytime soon

So far DL is great progress, but still statistical methods.

  • I'm thinking not in my lifetime. For an AI to do what you want, you need to be able to form a coherent thought and you'd need remarkably well-defined requirements. Far better requirements, in fact, than I've ever gotten at any particular job. I suspect that the first requirements-to-code languages will look a lot like COBOL and will require programmers to translate the insane ramblings of upper management types into reasonably well structured language that the computer can work with. Which... is pretty much
  • What we do in AI today is weak AI. Weak AI cannot code or do anything else that requires actual intelligence. It is utterly dumb automation, sometimes on a large scale.

    Writing code requires strong AI. Strong AI is not available and it is unclear whether it ever will be. There is no "Eventually? Definitely!" here. None at all. Seriously, stop posting stories about "AI" until you have understood the basics. These articles drip with concentrated stupid.

  • Time to be afraid, be very afraid. Mu ha ha ha ha.
