Report: 97% of Software Testing Pros Are Using Automation (venturebeat.com)
It turns out, software testers are relying more on automation than ever before, driven by a desire to lower testing costs and improve software quality and user experience. VentureBeat shares the findings from a new report by Kobiton: Kobiton surveyed 150 testers at companies with at least 50 employees across a range of industries. [...] For context, there are two kinds of software testing: manual and automated. Manual is still common but it's not ideal for repetitive tests, leading many testers to choose automation, which can expedite development and app performance. To wit, 40% of testers responding to Kobiton's study said their primary motivation for using automation is improving user experience. "In a study we conducted two years ago, half the testers we asked said their automation programs were relatively new, and 76% said they were automating fewer than 50% of all tests," said Kevin Lee, CEO of Kobiton. "Nearly 100% of testers participating in this year's study are using automation, which speaks to how far the industry has come."
Testing managers are prioritizing new hires with automation experience, too. Kobiton's study found that automation experience is one of the three skills managers are most interested in. And how is automation being used? A plurality (34%) of respondents to Kobiton's survey said they are using automation for an equal mix of regression and new feature testing. And it's made them more efficient. Almost half (47%) of survey respondents said it takes 3-5 days for manual testing before a release, whereas automated tests can have it done in 3-6 hours.
Being a tester made me a much better project mgr (Score:5, Informative)
Because my industry was nearly devastated by the lockdowns, we lost our only software tester. Being the project manager, I had to step up and make do. What I discovered is that 1) if you can code, you can automate testing; 2) the tools -- even the free ones -- are not that hard to pick up; and 3) knowing where all the bones and bodies were buried made me a much, much better-informed project manager. I'm sure the team hated it, but now there was literally nowhere for them to hide.
Re: (Score:2)
I'm sure the team hated it, but now there was literally nowhere for them to hide.
The good ones didn't hate it. They like their code being better.
Re: Being a tester made me a much better project m (Score:3)
Automated testing is fine until you get tests that fail. Then you need to figure out what went wrong.
A problem is also how to know if the test itself is relevant and correct. Relevant from the perspective that you might test dead code. Correct from the perspective that the test written actually expects the correct result.
Sometimes it's not simple to test algorithms. E.g., calculating an interest rate might look simple, but suddenly there's a leap year or leap second and you're in a more complicated situation.
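As a sketch of the leap-year trap described above (the function name and the day-count convention are hypothetical, chosen for illustration), a simple-interest calculation that counts actual days over a fixed 365-day year quietly accrues extra interest across a leap year:

```python
from datetime import date

def simple_interest(principal: float, annual_rate: float,
                    start: date, end: date) -> float:
    """Interest accrued between two dates using an actual/365 day count.

    Hypothetical example: real systems choose among several day-count
    conventions (actual/360, 30/360, actual/actual, ...), and the choice
    matters exactly in edge cases like leap years.
    """
    days = (end - start).days
    return principal * annual_rate * days / 365.0

# A "one year" span over 2020 covers 366 days, so actual/365 accrues
# slightly more than the nominal 5% -- the kind of surprise a test
# suite built only from round-number cases will never catch.
over_leap_year = simple_interest(1000.0, 0.05, date(2020, 1, 1), date(2021, 1, 1))
over_common_year = simple_interest(1000.0, 0.05, date(2021, 1, 1), date(2022, 1, 1))
```

A test suite that pins expected values for dates spanning a leap year (and, for the truly paranoid, a leap second) is exactly the kind of targeted case the commenter is pointing at.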
Re: Being a tester made me a much better project (Score:2)
None of those issues are specific to automated testing really.
Re: (Score:3)
Not to say your product or team is set up this way, but while writing automation is easy, scaling automation up is where the real challenge comes in. Once you have 1k+ tests, you have enough code sitting around that if your tests are brittle, you will end up spending far too much time fixing them and no time writing new tests. Open source or closed source tools make almost no difference for this sort of problem; it's all about the structure of the code and experience. Of course, I'd say many test
Re: (Score:2)
The job of a software tester is not physically hard; they should be doing the least physical work of anyone in the entire company. But they are paid the big bucks to develop comprehensive testing suites that prevent bugs and weird edge cases from making it to market. Anyone can take 5 minutes and some free consumer software and throw together automation that will perform 100 billion tests on any software you like. But while this kind of stability testing and random testing can be good in some instances, targeted intelli
Re: (Score:1)
What are some of the free ones, if you don't mind me asking.
Really? No kidding! (Score:1)
I'm not a testing professional myself, but I learned about Expect [wikipedia.org] roughly 20 years ago and it was first released in 1990. I'm sure there's a lot of other stuff that automates testing too, and there are a lot of apps with way too many buttons for people to press during tests, not to mention unit tests. I seriously wonder what the other 3% are even doing. Perhaps that other 3% is doing *exclusively* the kind of front-end testing that can only be done with human interaction (because ultimately people have t
hard to have ui usability with Automation. Game (Score:4, Insightful)
It's hard to do UI usability testing with automation. Game testing is another area where you can do some automation, but you also need real people playing as well.
Of course (Score:2)
What is wrong with the other 3%
Re: (Score:2)
They do integration testing.
Re: (Score:2)
I'm not sure how you can always automatically test complex UI systems. Simple things you can handle, but complex UIs seem harder to test automatically.
Re: Of course (Score:2)
Just look at the dynamic ribbon that Microsoft introduced.
Re: (Score:1)
I'm not sure how you can always test complex UI systems. Simple things you can handle, but complex UIs seem harder to test.
Complex systems are harder to test, regardless of manually or automatically.
Re: (Score:2)
What's wrong with artisanal, handcrafted software?
Automation (Score:2)
What about manufacturing? Things are expensive because most products aren't being designed for robotic assembly. Even packaging isn't being made for automated picking. Why is it that Amazon needs to hire so many pickers? It's because retail packaging isn't robot friendly.
What the hell is wrong (Score:2)
Re: (Score:2)
Well, maybe they are handling repetitive tasks; like making the same comment someone else made 10 minutes ago.
Re: (Score:2)
They're honest? Given the quality of software these days, it seems like most of them prefer to have their users do the testing.
Of course it runs faster... (Score:5, Informative)
Of course an automated test suite runs faster than manual testing. But it takes an incredible amount of resources (time and developers) to write the automation scripts. So write them against stable parts of your software -- usually regression test suites.
If not, you'll spend a huge amount of time and developer effort writing scripts for something that keeps changing.
Not to mention that a lot of the automation tools -- even Selenium, Puppeteer, and Playwright -- fail a lot (i.e., false negatives). To the point that sometimes you have to disable the CI when you have a lot of developers submitting pull requests.
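One blunt but common way teams cope with that kind of flakiness is to retry a failing test a few times before reporting it red (plugins such as pytest-rerunfailures automate this). A minimal framework-free sketch, with all names hypothetical:

```python
import time

def run_with_retries(test_fn, attempts=3, delay=0.0):
    """Run test_fn up to `attempts` times, returning its result on the
    first success and re-raising the last failure otherwise.

    Caveat: retries paper over flakiness rather than fix its root cause
    (timing assumptions, shared state, real network calls).
    """
    last_exc = None
    for _ in range(attempts):
        try:
            return test_fn()
        except AssertionError as exc:
            last_exc = exc
            time.sleep(delay)  # brief pause before the next attempt
    raise last_exc
```

A test that fails three times in a row under a policy like this is far more likely to be a real regression than a timing hiccup, which is exactly the signal a busy CI queue needs.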
I'd suggest that people who really want to learn about testing (as a tester, developer, test manager, or product manager) read the International Software Testing Qualifications Board (ISTQB) Foundation syllabus.
Re: Of course it runs faster... (Score:2)
Running the same regression test plan every release is incredibly expensive too. There is a break-even point where hiring an expensive automation developer is cheaper than hiring multiple manual testers.
Re: (Score:2)
Agree that hiring an expert automation QA is the way to go.
I've been a manual QA for almost 20 years, and I've found a good solution is a mix: manual testing for the current sprint's development and automation for the regression suite.
Re: (Score:2)
It's always a chicken-and-egg problem when trying to automate the current sprint's features before they are implemented or even fully understood. And a manual test plan that people have gone through step by step a few times is the perfect recipe card for writing automation.
What automation tools do the geniuses here use? (Score:3)
this must be utter crap because ... (Score:5, Insightful)
... one of the few things i really learned after a lifetime of writing software is:
- manual testing is never optional
- automatic testing is never optional
so color me depressed with the headline. if this is supposed to work as clickbait for software developers then something has gone terribly wrong with software developers' flamewars. manual vs automatic? really? holy fuck.
Re: this must be utter crap because ... (Score:2)
Maybe someone assumed the split was 50/50. That people are surprised does look grim for the industry.
Are you certain .. (Score:3)
Hardly a new idea (Score:2)
45 years ago I wrote a process to automatically verify a system I had written. It allowed anybody to verify that the system was correct after any changes were made. The positive aspect was that it allowed the customer to have a high degree of confidence that the system they had purchased from us worked as expected.
Using automation was never the question (Score:3)
You absolutely must automate everything that you care about. Manual testing doesn't scale and unit tests are insufficient for anything that involves points of integration. The only alternative is not caring and dealing with issues only when they become production problems that you *do* care about.
The actual question is not whether you automate, but whether you automate upfront, or "prioritize getting the software out the door," and automate later.
IMHO that second choice is always a mistake. Those tests will remain unimportant until the thing they were supposed to be testing breaks - at which point you'll spend even more time writing tests for code where you can't find the initial specs and don't fully understand the intent. Your delivery process will get more and more bogged down and less confident as it relies on more human labor and head-knowledge. The bugs that often get caught by the formal scrutiny of test writing will make it into the deployments and take more effort to deal with. You will also miss your window to make your system easy to test - only once the implementation has accumulated a bunch of dependencies will you go back and realize you've made it quite difficult for yourself.
Always automate early. It will force you to do the right amount of manual testing as well.
Re: (Score:2)
The actual question is not whether you automate, but whether you automate upfront, or "prioritize getting the software out the door," and automate later.
So true, as is your list of negatives. I'm dealing with that right now for the project I just joined. The code is 4-5 years old and there wasn't a single unit test in sight. I've started adding them, but it'll be a slow process to get enough there to be really useful because I don't know all of the "whys" and it's still not a priority...sigh.
revelation : 100% of them are using a computer! (Score:2)
Does that come as a surprise?
Well, duh! (Score:3)
Where I work, our major clients are life sciences companies who run validated systems - i.e., systems that the FDA can walk into at any time and ask to see how the system was designed and run. Before each software release, our testers run a battery of tests - now numbering in the thousands - to ensure the system works the way it used to, unless the change was by design.
This is where the Dynamics 365 RSAT (Regression Suite Automation Tool) has been an absolute godsend to the design and test teams. The testers can fully validate a system in a few hours; and if the validation fails, they can give the developer the exact test where it fell over so they can work on it under identical conditions.
And yet no mention about quality… (Score:2)
Too often I hear engineering and product managers refer to automation as the magic bullet… or worse, assume that you don't need dedicated testers anymore. Quality as a concept just isn't in fashion anymore.
Re: And yet no mention about quality… (Score:1)
well, yeah? (Score:2)
And water is wet.
Why is this news?
Re: (Score:2)
"Why is this news?"
Because Kobiton needs to advertise their automated testing software.
And the automated tests are no more relevant (Score:4, Insightful)
An automated test is worthless unless it tests something useful. Designing good tests is hard, whether the tests are manual or automated. If the automation fills in some random junk and clicks Save, what did that prove? What things are likely to fail when a new feature is designed? It's amazing how much test automation goes for quantity, without attention to quality. A good test suite focuses on things that are at risk of breaking, but not many actually accomplish this.
Re: And the automated tests are no more relevant (Score:2)
A QA engineer who doesn't assume everything is at risk of breaking probably isn't doing their job right.
Re: (Score:2)
I'd argue that a QA engineer that assumes that all risks are equal isn't doing their job right. For example, I've known QA testers who would build regression tests to make sure that the text labels of input fields didn't change. The risk of such a change occurring accidentally is very low, but building a test still takes time and effort. A better use of limited QA resources is to test things like the happy path, what happens when a search doesn't find a result, and what happens when a user inputs bad
Re: (Score:2)
I'd argue that a QA engineer that assumes that all risks are equal isn't doing their job right.
I agree. Assume everything might break, and start testing with the highest priority items. But at no point should they not test something because "surely that part couldn't go wrong."
For example, I've known QA testers who would build regression tests to make sure that the text labels of input fields didn't change.
Even if everything higher on the list is checked off, there are automated tools that can do that. As in, you don't even have to write the test; you just tell the tool "this is how the UI is supposed to be" and it will flag it if something changes. (This is assuming web; I don't know about other environments.)
It's easy to create a bunch of automated tests but (Score:4, Interesting)
that does not guarantee the tests test much of anything. Many of the QA Automation Engineers I interview for application testing don't do much more than record-and-playback. Their tests will detect an obvious failure (clicking a button doesn't load something) that any manual tester doing a quick, shallow run-through would also notice, but they don't actually test that the system is doing the right thing. They often believe the best way to fix a fragile test is to add a sleep() and keep increasing it each time the test fails again.
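The sleep() escalation described above has a standard cure: poll for the condition the test actually cares about, with a hard timeout. This is the idea behind Selenium's WebDriverWait and Playwright's auto-waiting; a framework-free sketch, with hypothetical names:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    Unlike a fixed sleep(), this returns as soon as the app is ready,
    so tests run faster on good days and tolerate slow ones.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout:.1f}s")
        time.sleep(interval)
```

A fragile test fixed this way fails only when the condition genuinely never holds, instead of whenever the environment happens to be slower than the last sleep() value someone guessed.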
Test automation is a very useful tool, especially for complex flows which require a lot of detail validation (often outside the UI) or mass entry of random data, such as fuzz testing. The problem is that those types of tests are not what the vast majority of those writing UI automation are capable of coding. Nor do many of the available tools easily facilitate checking external artifacts.
Far too many Directors and VPs want automation and are too easily impressed by seeing a UI automated in front of them, with lots of controls and dialogs flashing by on the screen followed by a bunch of bright colored charts. They hear that thousands of tests can be run in mere hours. They rarely ask the important question: will this catch bugs that would otherwise have been missed?
I've always preferred a strong base of automated integration testing below the UI level -- services, APIs, etc. Those things should be 99% automated. At the UI level, I want the tedious regression items automated, but the tests need to be more than superficial in scope. I want new areas in the UI and areas with change manually tested with a heavy dose of exploratory testing. After a new feature is stable, then we can invest in automated regression.
I'm at least happy that QA automation is getting back in favor. For a while, the suits were thinking all testing could be solved with unit tests. Well, 100% branch coverage fails to tell you the developer forgot to add branches for a bunch of scenarios.
Re: (Score:2)
I'm at least happy that QA automation is getting back in favor. For a while, the suits were thinking all testing could be solved with unit tests. Well, 100% branch coverage fails to tell you the developer forgot to add branches for a bunch of scenarios.
You made me think of an old presentation of mine about code coverage which takes a simple piece of code and breaks it down:
Read A
Read B
IF A+B > 10 THEN Print "A+B is Large"
IF A > 5 THEN Print "A Large"
While branch coverage requires 2 tests, condition coverage requires 4 tests (covering the implicit else of each IF). So branch coverage alone is not full coverage. But no matter the coverage metric, you don't know the quality of the tests. Even with great unit tests that hit 100% condition coverage y
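The presentation's snippet translates directly into runnable form (names here are mine, chosen for illustration): two tests take every branch once, yet the two "mixed" cases that condition coverage demands are never exercised.

```python
def label(a: int, b: int) -> list:
    """Python rendering of the presentation's two-IF example."""
    out = []
    if a + b > 10:
        out.append("A+B is Large")
    if a > 5:
        out.append("A Large")
    return out

# 100% branch coverage with just two tests:
both_true = label(6, 6)    # takes the True branch of each IF
both_false = label(1, 1)   # takes the implicit else of each IF

# ...but the mixed combinations remain untested:
only_second = label(6, 1)  # a > 5, yet a + b <= 10
only_first = label(2, 9)   # a + b > 10, yet a <= 5
```

A coverage tool would report the first two calls as full branch coverage, which is exactly why the metric says nothing about whether the interesting combinations were ever tried.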
There's nothing like a great manual tester. (Score:2)
Oh, but not Nikki. No. She always found stuff. Easily. One time she crashed the QA server by cut and pasting the entire
I will now prove I am psychic... (Score:2)
...by predicting that Kobiton makes its money off of automated software testing in some fashion.
Yep, nailed it.
If your code can't be easily automated under test (Score:2)
I'm surprised (Score:2)
Or you could skip testing altogether (Score:2)
That's what the CI/CD lifecycle is for! Let your users test your code for you. Worst case, you can rollback your changes. They can be the guinea pigs AND they'll pay you money to test your software too. No need for unit tests or paying for QA testers. /s
I've been on the receiving end of this mindset. Paying $10K+/mo to a vendor who deploys major updates every week and then within hours reverts those changes because they think inflicting untested software on their paying clients is a good idea. It's so