Could AI Transform Continuous Delivery Development? (thenextweb.com) 78
An anonymous reader quotes The Next Web:
According to one study, high-performing IT units with faster software releases are twice as likely to achieve their goals in customer satisfaction, profitability, market share and productivity. Acknowledgement of this has fueled a headlong rush toward what software developers call "continuous delivery"... It's a process most technology departments aspire to but only a fraction have achieved. According to a recent survey by Evans Data, 65 percent of organizations are using continuous delivery on at least some projects, but only 28 percent are using it for all their software. Among non-SaaS companies, that proportion is just 18 percent...
So what comes next? The future of application development depends on using artificial intelligence within the continuous delivery model... We're at the precipice of a new world of AI-aided development that will kick software deployment speeds -- and therefore a company's ability to compete -- into high gear. "AI can improve the way we build current software," writes Diego Lo Giudice of Forrester Research in a recent report. "It will change the way we think about applications -- not programming step by step, but letting the system learn to do what it needs to do -- a new paradigm shift." The possibilities are limited only by our creativity and the investment organizations are willing to make.
The article was written by the head of R&D at Rainforest QA, which is already using AI to manage their crowdsourced quality assurance testing. But he ultimately predicts bigger roles for AI in continuous delivery development -- even choosing which modifications to use in A/B testing, and more systematic stress-testing.
First post. AI is the new silver bullet. (Score:2)
Re: (Score:2, Insightful)
Yeah, this is an incredibly low quality article. It doesn't specify what it means by what AI should do, doesn't specify which type of AI, doesn't specify why AI should be used, etc. Junk article.
It's basically a bullshit bingo post where someone repeats a buzzword without any knowledge of the material behind it.
buzzwords (Score:5, Funny)
>a new paradigm shift.
I stopped reading after this.
Re: buzzwords (Score:3)
Not enough leveraging core competencies through blue sky thinking and synergistic best of breed cloud machine learning for you?
Same Old Thing (Score:5, Insightful)
Holy Fuck.
Continuous integration
Prototyping
Incremental development
Rapid application development
Agile development
Waterfall development
Spiral development
Now, introducing, "Continuous Delivery"...or something.
Here is the actual model, a model that will exist for the next 1,000 years.
1. Someone (or something) gathers requirements.
2. They get it wrong.
3. They develop the wrong thing that doesn't even work the way they thought it should.
4. The project leader is canned
5. The software is implemented by an outside vendor, with all the flaws.
6. The software operates finally after 5 years of modifications to both the software and the workflows (to match the flaws in the software).
7. As soon as it's all running properly and everyone is trained, a new project is launched to redo it, "the right way".
8. Goto 1
Re: (Score:2)
If everyone is stupid, no one is.
Re: (Score:2)
No no. We got rid of line numbers a long time ago.
Re: (Score:2)
AI meets Hunger Games (Score:1)
It's a genetic algorithm where YOU are the population being flushed out each cycle.
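For anyone who hasn't met one: a bare-bones genetic algorithm really does just score a population, flush out the weakest members each generation, and let the survivors breed with mutation. A toy Python sketch (fitness function and numbers invented, not any real AI system):

# Bare-bones genetic algorithm loop. The toy fitness function rewards
# being close to TARGET; everything here is purely illustrative.
import random

TARGET = 42

def fitness(x):
    return -abs(x - TARGET)   # closer to TARGET is fitter

def evolve(generations=50, pop_size=20):
    population = [random.uniform(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]                  # cull the rest
        children = [s + random.gauss(0, 1) for s in survivors]   # mutated copies
        population = survivors + children
    return max(population, key=fitness)

print(evolve())   # converges near 42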
Re: (Score:1)
Here is the actual model, a model that will exist for the next 1,000 years.
1. Someone (or something) gathers requirements.
2. They get it wrong.
3. They develop the wrong thing that doesn't even work the way they thought it should.
4. The project leader is canned
5. The software is implemented by an outside vendor, with all the flaws.
6. The software operates finally after 5 years of modifications to both the software and the workflows (to match the flaws in the software).
7. As soon as it's all running properly and everyone is trained, a new project is launched to redo it, "the right way".
8. Goto 1
You just accurately described a 6 year project within our organization....and it made me cry
Does this model have a name? An Urban Dictionary name?
If not, it needs one.
Re:buzzwords (Score:5, Insightful)
I'm also a little skeptical of any study published by a company looking to sell you what the study has just claimed to be great. That doesn't mean it's a complete sham, but how hard did they look for other explanations for why some companies are more successful than others?
Re: (Score:2)
I smell Bullshit Bingo... (Score:1)
that's all, folks...
Re: (Score:2)
I'd take 'high'.
Meeting goals (Score:2)
I notice the targets are all set from the company's point of view... including customer satisfaction. However it's quite easy to meet any goal, as long as you set it low enough.
Companies like Comcast or Qwest objectively have abysmal customer satisfaction ratings; but they likely meet their internal goal for that metric. I notice, in their public communications, they always use phrasing along the lines of "giving you an even better customer service experience" - again, the trick is to set the target low and
continuous delivery == constant change (Score:5, Insightful)
Any professional outfit will test a new release (in-house or commercial product) thoroughly before letting it get anywhere close to an environment where their business is at stake.
This process can take anywhere from a day or two to several months, depending on the complexity of the operation, the scope of the changes, HOW MANY (developers note: not if any) bugs are found and whether any alterations to working practices have to be introduced.
So to have developers lob a new "release" over the wall at frequent intervals is not useful, it isn't clever, nor does it save (the users) any money or speed up their acceptance. It just costs more in integration testing, floods the change control process with "issues", and means that when you report (again, developers: not if) problems, it is virtually impossible to describe exactly which release you are referring to, and harder still for whoever fixes the bugs to reproduce that same version, fix it, and then fold those fixes into whatever happens to be the latest version - that hour. Even more so when dozens of major corporate customers are ALL reporting bugs with each new version they test.
Re: (Score:3)
I wanted to chime in with a tangible anecdote to support your
Re: (Score:2)
I can sympathize with that view, of it appearing that too many developers are focused upon deployment/testing rather than actual development.
However, I come at it from the other side: the developers just push new development out and production support is responsible for addressing the mess. It is horrible; there is too much disconnect between developers and their resulting output, creating consistent outages.
The most successful teams follow the mantra "Eat your own dog food", developers who support the crap they push
Re: (Score:3)
This might be good for developers
It's not even good for developers.
"a new paradigm shift." (Score:2)
Another one?
Let's hope not. (Score:1)
AI is enough of a problem, why make it worse?
According to one study (Score:2)
One study, well then I'm sold.
But do you know who likes Continuous Delivery?
Not the users.
The users hate stuff changing for the sake of change, but trying to convince management seems an impossible task.
Re: (Score:2)
Why should users not like it?
If you shop on Amazon you don't know whether a specific feature you notice today arrived via continuous delivery or a more traditional process.
Re: (Score:2)
The crux of the problem is that we (in these discussions and the analysts) describe *all* manner of 'software development' as the same thing. Whether it's a desktop application, an embedded microcontroller in industrial equipment, a web application for people to get work done, or a webapp to let people see the latest funny cat video.
Then we start talking past each other, some of us terrified what 'continious delivery' means in the context of software in the microcontroller of a health care device, others t
Re: (Score:2)
Well,
'continuous delivery' is a term with a defined meaning. And releasing phone apps with unwanted UI/functionality in rapid succession is not part of that definition.
Continuous delivery is basically just the next logical step after continuous integration.
You deploy the new functionality automatically (or with a click of a button) when certain test criteria are met. Usually on a subset of your nodes so only a subset of your customers sees it. If you have crashes on those nodes or customer complaints you roll back.
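Roughly, that gate-and-roll-back idea looks like this as a toy Python sketch; the node names, thresholds and the error_rate() stub are all made up for illustration, not any real deployment API:

# Hypothetical canary rollout: push the new version to a small subset of
# nodes, watch an error-rate metric, and roll back if it degrades.
import random
import time

NODES = [f"node-{i}" for i in range(20)]
CANARY_FRACTION = 0.1        # expose the release to ~10% of nodes first
ERROR_RATE_THRESHOLD = 0.02  # roll back if more than 2% of requests fail

def deploy(node, version):
    print(f"deploying {version} to {node}")

def error_rate(node):
    # Stand-in for querying real monitoring; here it is just noise.
    return random.uniform(0.0, 0.05)

def canary_release(version):
    canary = NODES[: max(1, int(len(NODES) * CANARY_FRACTION))]
    for node in canary:
        deploy(node, version)

    time.sleep(1)  # in reality: bake time while metrics accumulate

    if any(error_rate(n) > ERROR_RATE_THRESHOLD for n in canary):
        for node in canary:
            deploy(node, "previous-stable")  # roll the canary nodes back
        return False

    for node in NODES:                       # promote to the whole fleet
        deploy(node, version)
    return True

print("promoted" if canary_release("v1.2.3") else "rolled back")

The whole point of the subset is to limit the blast radius: either the release gets promoted to everyone, or only a small slice of customers ever saw it before the rollback.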
Re: (Score:2)
You deploy the new functionality automatically (or with a click of a button) when certain test criteria are met. Usually on a subset of your nodes so only a subset of your customers sees it. If you have crashes on those nodes or customer complaints you roll back.
Why do you consider this to be a good thing? It's certainly not for those poor customers who were chosen to be involuntary beta testers, and it's also not for the rest of the customers who have to deal with software that is constantly changing underneath them.
Re: (Score:2)
Why do you consider this to be a good thing?
Because it is?
It's certainly not for those poor customers who were chosen to be involuntary beta testers
They are not forced to use the new functionality.
And sooner or later they get it anyway, what has that to do with "continuous delivery"?
and it's also not for the rest of the customers who have to deal with software that is constantly changing underneath them.
Software is constantly changing. Deal with it.
Re: (Score:2)
They are not forced to use the new functionality.
Assuming that it is new functionality, and not a change to old functionality. Or, assuming that it isn't replacing something outright.
And sooner or later they get it anyway, what has that to do with "continuous delivery"?
I was responding to your comment about part of "continuous delivery" being that you use a subset of the user base to do your testing. That means they are running code that hasn't been sufficiently tested. That they would get (hopefully fully tested) code eventually has nothing to do with it.
Software is constantly changing. Deal with it.
This amount of callous disregard is a great example of why users are increasingly view
Re: (Score:2)
There is a misunderstanding. The code usually is sufficiently tested.
However there are two reasons not to roll it out fully: there could be a glitch, especially in conjunction with other parts of the system or simply when connecting to the live data.
Secondly some systems are so big (e.g. Amazon, Zalando) that it makes sense to gradually deploy the new software and gradually shut down old nodes.
As a developer myself, I think that what we're supposed to be doing is solving problems users have, not making more.
And
Re: (Score:3)
'continuous delivery' is a term with a defined meaning. And releasing phone apps with unwanted UI/functionality in rapid succession is not part of that definition.
It is a natural consequence of continuous delivery's emphasis on always evolving and changing, and of the attitude that the developer is king and no one can question developer opinion. Developer decides it should move, it moves. No pesky human testers to stand up and say 'you confused the piss out of us' to make them rethink it. No automatic test is going to capture 'confuse the piss out of the poor non-developer users'.
If you have crashes on those nodes or customer complaints you roll back.
Note that a customer with a choice is likely to just go somewhere else rather than use your software.
Re: (Score:2)
The problem is that automated testing is no substitute for a QA team.
The QA team is supposed to provide the automatic testing.
Why don't you check out "shops" that actually do "continuous delivery" instead of boring us with your nonsense rants?
This is a long video, you will find shorter ones coming to the point of continuous delivery more quickly.
https://www.youtube.com/watch?... [youtube.com]
Zalando is the only company I know that has realized it and is marketing, positioning itself, as an IT company, not as "a shop".
If
Re: (Score:2)
Again, not all software development is the same. A QA team in many situations is a team that cannot develop automated testing because they know how to use the software, not develop software. Because they represent the target customer base, not 'more development'. They provide a *distinct* perspective from what those skilled in software development can do for themselves. While automated testing can be done to assure that the various functions execute according to the design intent of the developer, usabi
Re: (Score:2)
So the QA team would then click the button to trigger the last step: automatic delivery/deployment.
Re: (Score:2)
So in all my prior conversations with folks advocating these approaches, 'continuous delivery/integration' referred to reacting automatically to a commit. Triggering automation doesn't seem a particularly new concept to me. For example, software I deal with executes unit tests on commits alongside human code reviews. Upon code review and unit test success, a merge then triggers test builds for QA. Upon the human QA cycle and some external beta testers marking success, a human triggers update
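If it helps, that gate sequence as a toy Python sketch; every function name below is a hypothetical stand-in for whatever CI, review and QA tooling is actually in play:

# Toy model of the promotion gates described above. Automation runs on each
# commit, but humans sit between every promotion step, and a human still
# pushes the final button.

def unit_tests_pass(commit):
    return True      # stand-in for the CI runner's verdict

def code_review_approved(commit):
    return True      # stand-in for a human reviewer signing off

def qa_signoff(build):
    return True      # stand-in for the human QA cycle

def beta_testers_happy(build):
    return True      # stand-in for external beta feedback

def pipeline(commit):
    # Commit stage: automated tests and human review together.
    if not (unit_tests_pass(commit) and code_review_approved(commit)):
        return "rejected at commit stage"

    build = f"test-build-of-{commit}"   # the merge triggers a QA build

    # Human QA plus external beta testers gate the next step.
    if not (qa_signoff(build) and beta_testers_happy(build)):
        return "held back by QA/beta"

    # Nothing ships automatically from here; a person triggers the update.
    return "ready for a human to trigger the update"

print(pipeline("abc123"))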
AI written paper (Score:2)
IT what? (Score:2)
IT in my company does network, Windows, Office and Virus etc. type of work. Is this what they talk about? Anyway, it's been long outsourced to IT (as in "Indian" technology)...
For some businesses maybe but... (Score:2)
I recently interviewed at a couple of the newfangled big data marketing startups that correlate piles of stuff to help target ads better, and they were continuously deploying up the wazoo. In fact, they had something like zero people doing traditional QA. It was not totally insane at all. But they did have a blasé attitude about deployments -- if stuff doesn't work in production they just roll back, and don't worry about customer input data being dropped on the floor. Heck, they did not worry much about da
Re: (Score:3)
But they did have a blasé attitude about deployments -- if stuff doesn't work in production they just roll back, and don't worry about customer input data being dropped on the floor.
It's amazing how common this attitude has become. It's aggressively anti-customer, and a big part of the reason for the acceleration of the decline of software quality over the past several years.
Re: (Score:2)
I'd say a large chunk of the problem is people churning out trivial 15-minute frontends to existing functionality being very loud and thinking too much of their own perspective, drowning out the voices of people who work on more complex software.
Also, decision makers mostly see the impact of .css, and it's very easy to make a sophisticated-looking UI, which was formerly a viable indicator that someone at least had some sort of skill. The good thing is people who are more purely design minded can provide better qu
Re: (Score:2)
I couldn't agree more.
Re: (Score:2)
No (Score:2)
You want your deployment system to be predictable, and as my old AI professor used to say, intelligent means hard to predict. You don't want AI for systems that just have to do the exact same thing reliably over and over again.
Re: (Score:2)
This is what people keep missing about 'AI' as it stands today. AI is a good approach for problems that are impractical to address with manual programming. Areas where programmers know the general algorithms that would be useful, but where describing how to apply them to a general, chaotic dataset in order to impose order on it would be incredibly complex. So we know various algorithms will find edges and glossiness and color, and those are characteristics that appear in photos, but to describe how to apply all those techniques t
Summary sounds retarded (Score:2)
A continuous delivery pipeline has as much AI as a nematode has natural intelligence ... probably even less.
In other words... (Score:2)
Analyst who understands neither software development nor AI proceeds to try to sound insightful about both.
All I know is (Score:2)
All I know is that, as a user, rapid-release or continuous delivery has been nothing but an enormous pain in the ass to me and I wish it would die the horrible death it deserves already.
Every morning: git update; make install (Score:1)
Re: (Score:2)
As long as customers are comfortable with doing this, I do not see a problem
I'm a developer, not a "normal" user, and I would not be comfortable with this at all. However, it would be better than what is usually being done, because at least with that system I could easily decide when to upgrade and when not to.