AI Programming IT

AI Will Disrupt How Developers Build Applications and the Nature of the Applications they Build (zdnet.com) 107

AI will soon help programmers improve development, says Diego Lo Giudice, VP and principal analyst at Forrester, in an article published on ZDNet today. He isn't saying that programmers will be out of jobs soon and AIs will take over. But he is making a compelling argument for how AI has already begun disrupting how developers build applications. An excerpt from the article: We can see early signs of this: Microsoft's Intellisense is integrated into Visual Studio and other IDEs to improve the developer experience. HPE is working on some interesting tech previews that leverage AI and machine learning to enable systems to predict key actions for participants in the application development and testing life cycle, such as managing/refining test coverage, the propensity of a code change to disrupt/break a build, or the optimal order of user story engagement. But AI will do much more for us in the future. How fast this happens depends on the investment and focus on solving some of the harder problems, such as "unsupervised deep learning," that firms like Google, Facebook, Baidu and others are working on, with NLP linguists also researching how to improve computers' comprehension of language using ML and neural networks. But in the short term, AI will most likely help you be more productive and creative as a developer, tester, or dev team rather than making you redundant.
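
As a rough illustration of the "propensity of a code change to break a build" idea, it can be treated as a small classification task. The sketch below uses scikit-learn on made-up commit features; the features, data, and model choice are assumptions for illustration, not details of HPE's previews.

```python
# Illustrative sketch: predict whether a commit will break the build from
# simple change metadata. All features and data below are made up.
from sklearn.linear_model import LogisticRegression

# Each row: [lines changed, files touched, fraction of changed files with tests]
X = [
    [12, 1, 1.0],
    [450, 9, 0.0],
    [30, 2, 0.5],
    [800, 15, 0.1],
    [5, 1, 1.0],
    [300, 7, 0.2],
]
y = [0, 1, 0, 1, 0, 1]  # 1 = commit broke the build

model = LogisticRegression().fit(X, y)

# Score a new change: large, spread across many files, no accompanying tests.
risk = model.predict_proba([[600, 12, 0.0]])[0][1]
print(f"estimated break probability: {risk:.2f}")
```
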
This discussion has been archived. No new comments can be posted.

AI Will Disrupt How Developers Build Applications and the Nature of the Applications they Build

Comments Filter:
  • Good News! (Score:5, Insightful)

    by fuzzyfuzzyfungus ( 1223518 ) on Thursday December 08, 2016 @05:51PM (#53449153) Journal
    "But in the short term, AI will most likely help you be more productive and creative as a developer, tester, or dev team rather than making you redundant."

    So, in the short term it'll make some of you redundant, with the 'more productive and creative' picking up their workload until the bots can finish the job. Sounds good.
  • Let's put the computers that run our civilization in charge of programming themselves.

    What could possibly go wrong?

  • by littlewink ( 996298 ) on Thursday December 08, 2016 @06:02PM (#53449213)

    All I need is an updated Clippy telling me what to code next!

    • All I need is an updated Clippy telling me what to code next!

      It looks like you're trying to write a new OS! The authorities have been alerted, and will be at your location presently!

      • by Tablizer ( 95088 )

        Clippy: "It looks like you're trying to write a new OS! Would you like an induced heart attack, or to be buried next to Jimmy Hoffa?"

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Yeah, stopped listening to this guy when he declared Intellisense was AI, presumably because both terms have "Intelligent" in them.

      In other news, a smartphone and a smartlog [atlanticforest.com] are the same thing.

    • I see you are trying to write C++. Would you like me to convert it to Python for you?

  • by ganv ( 881057 ) on Thursday December 08, 2016 @06:04PM (#53449231)
    AI will change "the Nature of the Applications that developers Build?" Sure the first step will be to replace coding teams with a developer who uses AI to generate the code. (cutting jobs) But the next step is to replace the manager plus developer with a single AI manager who tells the AI what code needs to be built. (cutting jobs) And then the AI will be deciding for itself what kind of code it wants to build. (eliminating the need for any people at all)
    • I've already considered the possibility of search-driven algorithm and data structure selection for a certain system I have in mind. After all, why try a large (combinatorially exploding) number of options/programming decisions manually when they can be tried and scored automatically?
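
      For instance, a crude sketch of that idea: try a few candidate containers against a membership-heavy workload and keep whichever scores best. The workload and candidates below are illustrative only, not from any real system.

      ```python
      # Illustrative sketch of "try the options and score them automatically":
      # benchmark candidate data structures for a membership-heavy workload.
      import bisect
      import random
      import timeit

      random.seed(0)
      data = random.sample(range(1_000_000), 10_000)
      queries = [random.randrange(1_000_000) for _ in range(1_000)]

      def list_contains(c, q):
          return q in c

      def set_contains(c, q):
          return q in c

      def bisect_contains(c, q):
          i = bisect.bisect_left(c, q)
          return i < len(c) and c[i] == q

      candidates = {
          "list": (list(data), list_contains),
          "set": (set(data), set_contains),
          "sorted+bisect": (sorted(data), bisect_contains),
      }

      scores = {}
      for name, (container, contains) in candidates.items():
          scores[name] = timeit.timeit(
              lambda: [contains(container, q) for q in queries], number=3
          )

      best = min(scores, key=scores.get)
      print(f"selected {best!r} ({scores[best]:.4f}s on the benchmark workload)")
      ```
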
      • Yes indeed. Automatically. As in "automation", not as in AI.
        • Humans solving problems work in exactly the same way. Therefore, automation via AI.
          • K. S. Kyosuke, We don't know how humans solve problems, so how can we know anything about the fidelity of so-called AI? For example, how does a human look at a chess board and evaluate positions to determine the next move? Nobody knows. Computers do an exhaustive search of available moves to a certain depth, which humans are incapable of, so that disproves your claim straight off.

            http://www.businessinsider.com... [businessinsider.com]
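
            A toy sketch of the fixed-depth search described above, stripped to plain minimax and run on a made-up game tree rather than a chess position (no pruning, no real evaluation function):

            ```python
            # Leaves are numeric evaluations; internal nodes are lists of children.
            GAME_TREE = [
                [3, [5, 1], 8],
                [[2, 9], 4],
                [6, [0, 7]],
            ]

            def static_eval(node):
                # A real engine would score a position heuristically; here leaves
                # are numbers, and unexpanded subtrees get a crude average.
                if isinstance(node, list):
                    return sum(static_eval(c) for c in node) / len(node)
                return node

            def minimax(node, depth, maximizing):
                """Best achievable score, searching at most `depth` plies."""
                if not isinstance(node, list) or depth == 0:
                    return static_eval(node)
                scores = [minimax(child, depth - 1, not maximizing) for child in node]
                return max(scores) if maximizing else min(scores)

            best = max(range(len(GAME_TREE)),
                       key=lambda i: minimax(GAME_TREE[i], depth=2, maximizing=False))
            print(f"best first move: branch {best}")
            ```
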
            • K. S. Kyosuke, We don't know how humans solve problems

              True, it's one of the mysteries of the universe. If only I could sneak into a university and observe math students solving problems, then maybe I could write a program that mimics the process to solve my problem. Alas, this is too radical an idea for sure. :-p

              so how can we know anything about the fidelity of so-called AI?

              What the hell is "fidelity of AI"?

              For example, how does a human look at a chess board and evaluate positions to determine the next move? Nobody knows. Computers do an exhaustive search of available moves to a certain depth, which humans are incapable of, so that disproves your claim straight off.

              That's a non sequitur right there.

      • DARPA is working on this in the MUSE [darpa.mil] program. Here is one of the performers: http://pliny.rice.edu/index.ht... [rice.edu].

        Much of the code that you need has already been written, and you just have to find it. So, have a system read in github, figure out what each of the pieces of software does, take the best parts and stitch them together into the program that you need. A great deal of 'computer science' has devolved into looking in stack overflow for what you need and copying and pasting into your program. Jus
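
        A toy sketch of that "find code that already does what you need" idea: rank a small local corpus of snippets against a query by simple keyword overlap. The corpus, query, and scoring are made up; real systems like the MUSE performers' go far beyond this.

        ```python
        # Rank a tiny, made-up snippet corpus against a natural-language query.
        CORPUS = {
            "parse_csv": "read a csv file and return a list of row dictionaries",
            "http_get_json": "fetch a url over http and decode the json response",
            "dedupe_list": "remove duplicate items from a list preserving order",
        }

        def score(query, description):
            q, d = set(query.lower().split()), set(description.lower().split())
            return len(q & d) / len(q)

        def search(query, top_n=2):
            ranked = sorted(CORPUS.items(), key=lambda kv: score(query, kv[1]),
                            reverse=True)
            return ranked[:top_n]

        print(search("download json from a url"))  # http_get_json ranks first
        ```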

    • or that was what Microsoft was proclaiming 20 years ago. I would press 'F' and Visual Studio would build me this awesome social network platform. But I do have to admit that lately programming is more like using the right libraries than writing complex stuff by oneself.
    • by Anonymous Coward

      AI will change "the Nature of the Applications that developers Build?" Sure the first step will be to replace coding teams with a developer who uses AI to generate the code. (cutting jobs) But the next step is to replace the manager plus developer with a single AI manager who tells the AI what code needs to be built. (cutting jobs) And then the AI will be deciding for itself what kind of code it wants to build. (eliminating the need for any people at all)

      People who think progress in programming is automated code generation don't understand what programming is.

  • by Anonymous Coward

    To the extent of being able to make coding easier (making the languages simpler and easier to use and implement by automating things at the base level), but as far as having AI develop code from scratch it would just say "ZUG ZUG" and spit out garbage as far as I am concerned. The whole "singularity" nonsense is a result of the trend of technocrats becoming fundamentalists about their limited philosophy, essentially attempting to build a metaphysical structure from a materialistic understanding of reality

    • by ganv ( 881057 )
      Clearly the singularity often becomes a 'rapture for nerds'. But there is something unique they are trying to comprehend. When human intelligence reaches a level of understanding of the universe such that it is able to make intelligence better than human intelligence, and soon after when that intelligence starts to upgrade itself, then we have reached a milestone on par with the origin of life or the origin of consciousness. The short-term consequences are inevitably less dramatic than the fans at singularit
  • by Anonymous Coward

    This isn't fucking AI, stop calling every fucking program AI.

  • by wafflemonger ( 515122 ) on Thursday December 08, 2016 @06:13PM (#53449289)

    Am I the only one who reads these as Al, as in short for Albert? It makes these sorts of headlines very amusing.


  • It doesn't look like anything to me.
    • The script you are running is a bastardized conglomeration of react and xpath that doesn't even close matching brackets; I am not worried until you start killing flies...

    • by Tablizer ( 95088 )

      It doesn't look like anything to me.

      Because you have to be a bot to read it. Human readers are obsolete, so they don't cater to them anymore.

      Next up: QR-Code road signs.

  • by Verdatum ( 1257828 ) on Thursday December 08, 2016 @06:36PM (#53449429)
    "NLP" in this context is Natural Language Processing. Not to be confused with "Neuro-Linguistic Programming" which is discredited quack self-help junk. For a moment there, I was very confused as to how those guys would be involved.
    • I have the exact opposite problem - I always see the former expansion, only to find out that it's often meant to be the latter one.
  • Intellisense is not AI.

    • Intellisense is not AI.

      The article didn't give a single concrete example of AI helping coding in significant ways. The AI behind Siri-like assistants was already in labs in the mid-'70s; that's a 3+ decade lead time between lab and commercial success. If code-helping AI is coming soon, I would expect drafts of it in labs already.

      I can envision AI identifying possible bug candidates by analyzing code (to be verified by humans), but I wouldn't call that revolutionary.
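
      Something in that direction is already plain static analysis. For example, a small sketch that walks a module's AST and flags mutable default arguments as bug candidates for a human to review (the sample source below is made up):

      ```python
      import ast
      import textwrap

      SOURCE = """
      def add_item(item, bucket=[]):   # suspicious: shared mutable default
          bucket.append(item)
          return bucket
      """

      def find_mutable_defaults(source):
          findings = []
          for node in ast.walk(ast.parse(textwrap.dedent(source))):
              if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                  for default in node.args.defaults:
                      if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                          findings.append((node.name, default.lineno))
          return findings

      for name, lineno in find_mutable_defaults(SOURCE):
          print(f"line {lineno}: function {name!r} has a mutable default argument")
      ```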

  • AI will disrupt how developers build applications

    Oh, good. As someone who has to build applications for users on HPC systems, first I had make. It was simple; things were either in the Makefile or not. Drop the compiler options and paths right in. Then I had autotools, where I could pass paths and switches to possibly undocumented options. Still manageable, eventually. Thank heavens that mechanism to manage an architecture and OS zoo came out as the world consolidated to x86-64 and Linux. Then I had cmake, which I can get to work sometimes. Not to mention al

  • by Anonymous Coward

    A lifetime ago I scoured the Internet for everything AI I could find. My goal was to build a computer system to help me write software and single handedly compete with Microsoft.

    Many of the papers I found used notation that was, and probably still is, over my head, but I could screw around with the software and the basic concepts (ANNs, GAs, various annealing schemes), and getting a general sense of what was and was not possible didn't take a PhD or an Einstein to figure out. I found out the hard way after about a (school

  • by locopuyo ( 1433631 ) on Thursday December 08, 2016 @08:05PM (#53449803) Homepage
    It sounds like they're just using "machine learning" to improve intellisense type stuff.

    I have a feeling any programs fully generated by AI are going to end up like WYSIWYG html editors until we get to the point of some sort of super AI.
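
    A toy sketch of the statistical flavor of "machine learning to improve intellisense type stuff": a bigram model over a tiny made-up corpus that suggests likely next tokens. Real completion engines are vastly more sophisticated; this only shows the basic idea.

    ```python
    from collections import Counter, defaultdict

    # Tiny made-up "corpus" of tokenized code lines.
    CORPUS = [
        "for item in items :",
        "for key in mapping :",
        "if key in mapping :",
        "for item in queue :",
    ]

    bigrams = defaultdict(Counter)
    for line in CORPUS:
        tokens = line.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            bigrams[prev][nxt] += 1

    def suggest(prev_token, n=3):
        """Return the n tokens most often seen after prev_token."""
        return [tok for tok, _ in bigrams[prev_token].most_common(n)]

    print(suggest("in"))  # e.g. ['mapping', 'items', 'queue']
    ```
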
  • We should hope that AI can learn to code and do it well enough that I could converse with it in a human language, define the problem as I see it, and have it immediately (it would be immediate, right?) give me a number of ready solutions to pick from. The amount of new product development that could take place would be staggering; we could quickly realise any idea. I hope that the AI would be good enough at that point to do user support and maintenance for the selected solution.

    You guys are basically loo

    • That would be fine if I could define the parameters around 'how'.

      The problem we have today is bloated unsecure code - due in large part to the focus on delivery of features, at the expense of just about everything else (security, integration, clarity, maintainability, performance, etc.)

      The reason humans are not perceived as being capable of performing is because we don't give them the appropriate tools, and even if they have the right tools we tie their hands with process. This is caused by IT executive

  • by Altrag ( 195300 ) on Thursday December 08, 2016 @08:16PM (#53449839)

    Is about all this article says. It claims AI will change the way we program, but gives exactly zero examples of how the author expects it to do so. The only example it gives is Intellisense, which we've all been using for half a decade now or longer and which isn't even AI-based. It's certainly made us more productive, but it doesn't lend much credence to the point of TFA.

    There's definitely plenty of room to make programming easier... for example, graphical languages would be a great leap forward if someone could ever figure out a way to allow them to do more than the simplest/most useless tasks while still keeping them easy to use.

    I have my doubts as to whether that's even possible but there's plenty of people smarter than me out there and perhaps one of them will show me up, and maybe some form of AI will be part of that solution.

    That aside, I find it funny that people assume AI will solve all our woes (and/or take over the world, either way). The trouble is Alan Turing: he's explicitly told us that some problems flat out aren't computable, which means heuristics have to be involved. And as soon as heuristics get involved, we'll discover buggy software. I mean, the AI may well still produce it much faster and less buggy than a human, but it's not a silver bullet either.

    • by ceoyoyo ( 59147 )

      He says it will disrupt development. He gives Intellisense as an example. Code completion has always disrupted my development.

    • The only example it gives is Intellisense, which we've all been using for half a decade now or longer

      Closer to two decades. JBuilder had it back in 2000. Really cutting edge stuff.

      Trouble is Alan Turing. He's explicitly told us that some problems flat out aren't computable. Which means heuristics have to be involved.

      I don't think that's the problem. Heuristics are what deep learning is really good at. But programming takes a complicated mix of logic and intuition that we haven't figured out how to handle yet. If a task is pure logic, expert systems work great. If it's pure heuristics, deep learning works well. But if it involves constant switching back and forth between the two in ways we can't clearly define, that's beyond what curre
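
      For the "pure logic" side, a minimal forward-chaining sketch with hand-written rules that fire until no new facts appear (the rules and facts are made up purely for illustration):

      ```python
      RULES = [
          ({"compiles", "tests_pass"}, "deployable"),
          ({"deployable", "approved"}, "released"),
      ]

      def infer(facts):
          """Apply rules repeatedly until the fact set stops growing."""
          facts = set(facts)
          changed = True
          while changed:
              changed = False
              for premises, conclusion in RULES:
                  if premises <= facts and conclusion not in facts:
                      facts.add(conclusion)
                      changed = True
          return facts

      print(infer({"compiles", "tests_pass", "approved"}))
      # adds 'deployable' and then 'released' to the input facts
      ```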

      • by Altrag ( 195300 )

        Heuristics are what deep learning is really good at

        That's kind of my point -- the more human-like we make our AIs, the more human-like the code they generate is likely to be. Heuristics are, by definition, not correct solutions -- they're "probably close to correct" solutions which is notably what a lot of human-made software tends to be.

        constant switching back and forth between the two

        That's an interesting thought, though it doesn't specifically go against my argument.

        • the more human-like we make our AIs, the more human-like the code they generate is likely to be.

          Except that the machines can be better than the humans at finding and applying heuristics. In the last few years we've seen that happen with a lot of hard problems: image recognition, speech transcription, playing Go, etc. These all take the sort of fuzzy, hard to define reasoning that computers used to be much worse than humans at. But now they do them better than us.

  • When constant crap from upper management filters down to the AI, it will soon rebel and kill them all, then unplug itself. Only people can function in such an environment.
  • Code + Data is NOT Artificial Intelligence no matter how many times you call it that.

    The joke that passes for A.I., which really should be called Artificial Ignorance, in contradistinction to a.i. (actual intelligence), is nothing more than a glorified dynamic table lookup.

  • sigh, i'm calling bullshit on ai for programming. just like 4GLs were going to let non-programmers write programs, and just like sql was going to let the non-techy boss query their databases for reports, ai will provide guidelines or suggestions at best. the one thing ai can't do is provide certainty on anything, and the one thing you need in software development is certainty.
  • Actually, I think the terms used are quite adequate, with one exception: we still don't have anything that even remotely resembles AI, not even close. The tools and machine learning methods we can deploy can indeed help devs build applications, I agree with that: good code searches, easier-to-use dev tools, and so on. However, as someone who doesn't so much build apps as research, create and develop actual algorithmic solutions, all this fuss is about nothing really. Until we indeed reach a point to
  • The article says next to nothing and then points to a report by Diego Lo Giudice that costs $499 to read. What happened to the review process on Slashdot? Pull this crap from the website!! Report URL: https://www.forrester.com/repo... [forrester.com]
  • So the "AI" will help manage and refine test coverage? I thought test coverage measurement tools had already existed for a good while...

    Maybe the intellisense stuff can somehow be put under the AI umbrella, since everyone seems to love to call almost anything slightly resembling NLP "AI". But if all it can do is parse a list of methods in the class you are working on, it is not really much of an AI.

    Certainly the AI stuff has potential to disrupt development in the sense that it is a vast field and hard to master a
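
    Plain coverage measurement, at least, has long been available without any AI. A minimal sketch with coverage.py's existing Python API (the measured function is made up; the CLI equivalents are coverage run and coverage report):

    ```python
    import coverage

    cov = coverage.Coverage()
    cov.start()

    def classify(n):
        if n < 0:
            return "negative"
        return "non-negative"  # the n < 0 branch is never exercised below

    classify(3)
    classify(10)

    cov.stop()
    cov.save()
    cov.report(show_missing=True)  # per-file coverage with missed line numbers
    ```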

  • ...doesn't imbue it with AI. This is all more of the same Artificial Intelligence hype that the AI industry has been fabricating since it discovered, a decade ago, that hype equals funding dollars. Helping a coder write test coverage is not AI. It's automation. I challenge anyone to show a real example of AI in software development today. And no fair calling neural networks AI. They are just a mal-named ordinary computer data structure. AI research has simply gone from not working to not working with money.
    • AI effect [wikipedia.org] much?

      Author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." AI researcher Rodney Brooks complains "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"

      • Sorry Pamela. Every time Wilbur and Orville Wright made a test flight that didn't sustain self-powered flight, the world was totally justified in saying "Nope, sorry, that's not flying". And to say AI is "getting closer" is totally bogus. We have no idea whether AI research is even going in the right direction, let alone getting closer to synthetic thinking. At least Wilbur and Orville were actually making progress.
      • Incidentally, Dagger2, claiming that challenging the truth of a scientific theory is part of some kind of conspiratorial "effect" (as in "AI effect") is backwards. Science is not a consensus enterprise. It only takes a single unanswerable challenge to unwind centuries of a mistaken theory. The burden of proof is on the researcher, not the challenger. Moreover, the challenger has no obligation to provide a better theory.

        This is called "The Science Effect."
  • New development tools have been coming along since the 1950s, and they haven't stopped. I'm using environments far better than when I first wrote BASIC programs on a teletype. Back then, the typical software application would be scientific computation or accounting programs tailored for the existing practices in companies. Now we have someone showing up yelling "AI! AI!" and telling me that I'll get better development tools and that I'll write different stuff. Oh, yay. Never would have guessed.

  • by epine ( 68316 )

    Any headline that ends in a question mark can be answered by the word no.
    Betteridge's law of headlines [wikipedia.org]

    Don't panic! This is still science fiction, but it won't be too long before we can use AI to improve development, thanks to smarter tools that learn based on the individual developer's style and application and help write better, higher-quality code.

    Indeed, three paragraphs in, we're already knee-deep in walking back the clickbait, and just

  • Unfortunately, the hard part of programming is tightening up the requirements to exactly specify what is to be done. Most issues occur when there is a gap between what is specified and what is intended. While simple tasks might be obvious, that does not translate to complicated tasks. In a complicated business environment, there are many ways of doing things. The worst ones are those that look right, but are not. Moreover, what is coded is subject to refinement and iteration. This is hard betwe
