Why Software Builds Fail

itwbennett writes: A group of researchers from Google, the Hong Kong University of Science and Technology and the University of Nebraska undertook a study of over 26 million builds by 18,000 Google engineers from November 2012 through July 2013 to better understand what causes software builds to fail and, by extension, to improve developer productivity. And, while Google isn't representative of every developer everywhere, there are a few findings that stand out: Build frequency and developer (in)experience don't affect failure rates, most build errors are dependency-related, and C++ generates more build errors than Java (but they're easier to fix).
  • Because I'm lazy (Score:5, Informative)

    by OzPeter ( 195038 ) on Wednesday June 25, 2014 @01:11PM (#47317117)

    Half the time when I'm working on any sort of non-trivial program (that is too large to hold in my head all at once) and I need to make a breaking code change (and one that is not easily managed with refactoring tools), I'll make the change where it is obvious to me and then let the compiler tell me where it broke and hence where I need to make my fixes.
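    For illustration, a hypothetical C++ sketch of that technique (names made up): change the signature where the fix is obvious, then let the compiler enumerate every call site still using the old shape.

    // Before: double area(double w, double h);
    // After the deliberate breaking change:
    double area(double width, double height, double scale); // added parameter

    // Every stale caller now fails to compile, e.g. with GCC:
    //   error: too few arguments to function 'double area(double, double, double)'
    double a = area(3.0, 4.0); // the compiler points here, so you fix it here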

    • by Z00L00K ( 682162 )

      The most important thing isn't avoiding build failures; it's avoiding the distribution of software packages that can't be built.

      However if something can't be built due to a mistake it's often easy to find and correct. The big problems are often not that visible and it can take a while to figure them out.

      What really grinds my gears is that people release source code that is possible to build, but so full of compiler warnings that you can't be certain that it's going to work as intended.

      • Re: (Score:3, Interesting)

        by mlts ( 1038732 )

        When I was in CS, I had a prof with one rule for release (not beta/alpha/dev) code: if the code had even a single warning, it was unshippable unless there was an extremely good reason (which would be part of the README) for why it happened. Yes, this was a PITA, but he was trying to teach something that seems to have been lost.

        • by 0123456 ( 636235 ) on Wednesday June 25, 2014 @01:49PM (#47317481)

          He's the reason compiler writers invented pragmas to turn off warnings...

          • by geekoid ( 135745 )

            And turning off warnings with compiler pragmas is why software is such an amateur engineering field.

          • Why do you assume all warnings are useful?? Some of the compiler warnings are just pedantic and are "noise" such as "variable declared but not used", etc.

            There is a balance between no warnings and pedantic warnings, namely the useful ones.

            • Generally I recommend leaving most warnings on. But sometimes compiler writers go completely over board.

              When you use MSVC you have to do stupid stuff like this

              #define _CRT_SECURE_NO_WARNINGS // WIN32:MSVC disable warning C4996: This function or variable may be unsafe.

              The following compiler-specific header suffices to compile the code without warnings at the highest warning level.

              #pragma warning( disable: 4061 ) // enum value is not *explicitly* handled in switch
              #pragma warning( disabl
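              For scoped suppression, MSVC also supports a push/disable/pop pattern so a disable doesn't leak into the rest of the translation unit (a sketch, reusing warning 4996 from the example above):

              #pragma warning( push )           // save the current warning state
              #pragma warning( disable: 4996 )  // "This function or variable may be unsafe."
              // ... the few lines that legitimately trigger the warning ...
              #pragma warning( pop )            // restore the saved state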

            • Why do you assume all warnings are useful?? Some of the compiler warnings are just pedantic and are "noise" such as "variable declared but not used", etc.

              There is a balance between no warnings and pedantic warnings, namely the useful ones.

              One of the things that's nice about the Eclipse IDE is that you can select the importance of selected messages, all the way from "ignore" to "fatal", depending on shop standards and personal paranoia.

              However, the offline builders such as Maven and Ant cannot adopt those preferences, so it's not uncommon for a production build to spit out dozens or hundreds of warnings about things that don't actually matter.

              Working with C/C++ I almost never had clean builds, since even if I managed to clean one up, it eithe

                • by globaljustin ( 574257 ) on Wednesday June 25, 2014 @04:34PM (#47318975)

                I know this is "offtopic" but stay with me and I'll bring it around on-topic...

                A big question that people are throwing billions of dollars and millions of internet comments at is: "How can we get more women into programming/coding?"

                Ok...b/c our industry is by default very complex, it's not unreasonable that really drilling down to an answer to that question might be fairly complex...the answer can be summarized, sure, but to really get at the problem involves learning a bit.

                Here, in this thread, we find out why...and it affects us **all**, not just women coders, or coders...it affects how the whole company works and the perception of value...witness:

                Why do you assume all warnings are useful?? Some of the compiler warnings are just pedantic and are "noise" such as "variable declared but not used", etc.

                There is a balance between no warnings and pedantic warnings, namely the useful ones.

                One of the things that's nice about the Eclipse IDE is that you can select the importance of selected messages, all the way from "ignore" to "fatal", depending on shop standards and personal paranoia.

                However, the offline builders such as Maven and Ant cannot adopt those preferences, so it's not uncommon for a production build to spit out dozens or hundreds of warnings about things that don't actually matter.

                Working with C/C++ I almost never had clean builds, since even....

                Here we have a central thesis:

                "There is a balance between no warnings and pedantic warnings, namely the useful ones."

                Parent agrees, and describes using **proprietary software** (Eclipse) which adds an **extra abstraction layer** to an already ridiculous process...a process which we all know should theoretically be doable in a text editor

                the fact that coding, the act of developing, software engineering, the 'real work' has such obtuse solutions, solutions to problems based on...

                PEDANTIC choices...overkill...the lack of discretion...there are many reasons for this but that's another rant

                it's alienating to new people regardless of gender...the only reason many people work jobs as coders is **for the money**

                until we address these fundamental issues, the problems that arise only because some compiler programmer was overly pedantic due to lack of empathy skills will destroy any attempt to get non-traditional types into coding

                right now, you basically have to be a bit autistic, or be able to think that way on command, in order to code...part of it is genetic, but part of it is deliberate...you have to train your mind to think in a "code" instruction manner...why would a woman do all this given other options?

                the solution to pedantic, tone-deaf coding choices is, of course, a fresh perspective that can help get rid of problems from abstractions...

                we need women in coding to help make coding more appealing to women

                so, to make this on-topic, I think **more women in coding** is a long-term solution to problems in TFA

                • by tomhath ( 637240 )

                  why would a woman do all this given other options?

                  For the same reason men do...there's a deadline to be met. In my experience there are very good female programmers, and very bad female programmers, and many in between. Same as men. Your generalization has no basis.

            • Re:Because I'm lazy (Score:4, Informative)

              by Imagix ( 695350 ) on Wednesday June 25, 2014 @03:37PM (#47318493)
              That one's quite useful. You've declared a variable, and whoever is reading the code now has the additional cognitive load of trying to figure out why that variable exists.
              • The person reading the code will see things the compiler can't: that the variable actually is being used inside some conditional compilation, or that the variable is just a placeholder used for debugging (as stated in the comment), and so forth.

                But the compiler's job here is to point out possible defects, it is not the compiler's job to provide value judgement on coding style.

            • My favorite example of this is the warning about using a variable without it being assigned a value, which can usually be made to disappear by explicitly assigning null to the variable. The variable is null in either case, but explicitly assigning null makes the warning disappear, and a null reference exception would occur either way. The warning even appears when the only use of that variable is an if statement checking that the value is not null.
              • Re: (Score:3, Informative)

                by chis101 ( 754167 )

                If you are talking about C/C++, the variable is *not* null in either case. If you assigned null to it, then it is null. If you never assigned any value to it, then it is whatever happened to be in memory at that location. It's a pretty good warning to let you know you are using a variable without it being assigned a value.

                int* ptr;            // never initialized: holds whatever was in memory
                if( ptr != NULL )    // may or may not be true on any given run
                {
                    *ptr = 0;        // writes through a garbage pointer
                }

                This code will at some point crash. Maybe not on the first run, but at some point ptr will not be null, but will not be a pointer to valid m

                • Before I start, I am not a coder. Never really wanted to be one, but I do understand the principles.

                  If "int* ptr;" returns any value that is inconsistent it is a problem IMHO. This command should return the exact same value every time, even if it is (empty or zero or whatever).

                  But then again, there is probably a valid reason why it is inconsistent, and that is why I hate programming. :-P

              • However this does not work either! There's a later version of GCC that will see "a=null" and warn "variable 'a' set but not used".
                Similarly, code that does "a=a;" used to work for removing this warning in the past, but newer compilers will claim that 'a' is being used before being initialized.
                The correct fix apparently is to do "(void)a".
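                A minimal sketch of that idiom (hypothetical names):

                int compute();          // hypothetical helper
                void demo() {
                    int a = compute();  // value only consumed in some build configs
                    (void)a;            // reads 'a' once, so the compiler treats it as
                                        // used and drops the unused warning
                }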

            • by tomhath ( 637240 )
              That's true. But if you get in the habit of allowing (and ignoring) warnings you won't see the important ones.
            • I think there is an attitude at times that everyone writes code from scratch and no one deals with legacy code. So why not add several more warnings in this release of the compiler, if no one is affected? The problem is that the variable is indeed used, only inside an ifdef, so now we have to stick in even more ifdefs to hide the variable declaration (sketched below).

              I can understand the idea that perhaps someone had a typo in a variable name. That's possibly worth pointing out. However this gets in the way of "real" warn
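              The situation described above might look like this (a hypothetical sketch):

              void report(int);    // hypothetical
              void demo() {
                  int stats = 0;   // consumed only when ENABLE_STATS is defined
              #ifdef ENABLE_STATS
                  report(stats);
              #endif
                  // With ENABLE_STATS undefined, newer compilers warn that 'stats'
                  // is unused, so the declaration needs a matching ifdef as well.
              }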

          • And some of those warnings are just ridiculously stupid warnings. But if you use -Wall plus -Wextra then you've got to deal with the lunacy.
            I.e., complaints about unused function parameters; so what if they're unused, the parameter must be there because of the API, and it's ugly code to fix (adding an attribute, casting the parameter to void somewhere in the function, etc.). So it makes sense to be able to turn some of these off (see the sketch after this comment).

            The other big problem is that these warnings are added in later versions of compilers
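            The unused-parameter case, with both workarounds mentioned above (a sketch; the callback signature is hypothetical):

            void handle(int);  // hypothetical
            // The signature is fixed by the API, but 'user_data' isn't needed here.
            void on_event(int code, void* user_data) {
                (void)user_data;  // classic fix: cast the parameter to void
                handle(code);
            }
            // C++17 alternative, using an attribute instead:
            void on_event2(int code, [[maybe_unused]] void* user_data) {
                handle(code);
            }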

        • by turgid ( 580780 )

          When I was in CS, I had a prof with one rule for release (not beta/alpha/dev) code: if the code had even a single warning, it was unshippable unless there was an extremely good reason (which would be part of the README) for why it happened. Yes, this was a PITA, but he was trying to teach something that seems to have been lost.

          You should be compiling with warnings as errors as soon as you start coding, and you should fix each one as they occur before you move on to write the next line of code.

          Putting of

          • by RabidReindeer ( 2625839 ) on Wednesday June 25, 2014 @03:21PM (#47318353)

            You should be compiling with warnings as errors as soon as you start coding, and you should fix each one as they occur before you move on to write the next line of code.

            Putting off fixing these problems leads to bloated and fragile code and wastes much more time debugging and fixing later.

            What you should be doing outside the CS class and in the so-called "Real World" is "being productive". That usually means screw the warnings, it has to be completed ASAP or we'll find someone "more productive" than you are.

            • Re:Because I'm lazy (Score:5, Informative)

              by QilessQi ( 2044624 ) on Wednesday June 25, 2014 @03:35PM (#47318475)

              I've spent many decades in that Real World. Ignoring compiler warnings and failing to write automated unit tests for edge cases can cause production defects and database corruption crises that will eat many, many more hours of productivity than simply addressing all compiler warnings. Not to mention causing poor end-user perception and increasing the workload up and down the software support and delivery chain.

              Developers whose coding habits cause such situations in real world enterprise or commerce systems are ultimately "less productive" than having no developer at all. :-)

              • I've spent many decades in that Real World. Ignoring compiler warnings and failing to write automated unit tests for edge cases can cause production defects and database corruption crises that will eat many, many more hours of productivity than simply addressing all compiler warnings. Not to mention causing poor end-user perception and increasing the workload up and down the software support and delivery chain.

                Developers whose coding habits cause such situations in real world enterprise or commerce systems are ultimately "less productive" than having no developer at all. :-)

                Tell that to the people who equate the time you spend parked in your chair in their offices with productivity.

                They're called "Management" and they know that the longer you take the more you're cheating them of their hard-earned bonuses, er profits.

                After all, it's a simple job that a child/subminimum-wage offshore coder/monkey can easily do in short order. All You Have To Do Is...

                • Tell that to the people who equate the time you spend parked in your chair in their offices with productivity. They're called "Management"

                  I don't need to tell it to them: I am them. As well as being a developer who is, coincidentally, writing some JUnit tests at this moment for my team's current delivery. Time well spent, seeing as the automated tests I wrote earlier today caught an error in our service layer, and our code freeze (for a high volume public-facing website) is in a matter of days.

                  If you're

            • by turgid ( 580780 )

              What you should be doing outside the CS class and in the so-called "Real World" is "being productive". That usually means screw the warnings, it has to be completed ASAP or we'll find someone "more productive" than you are.

              I'm having a sense of humour failure at the moment, so apologies if this was not taken in the spirit it was intended but: that sort of attitude tends to get you eaten by the Real World for breakfast.

              I've very recently found myself working for a company that has gone in that direction du

          • The problem is that the set of warnings grows over time. Code that has been good for years suddenly has warnings because a later compiler release starts to gripe about it. Frustrating when the warning is not actually a warning. And this causes teams to delay upgrading compilers for years, because they need to ship the code more urgently than they need to coddle a compiler with the colic. Another thing that some teams do in this case is start ignoring the warnings, and they're forced to ignore the warning

        • Yes, this was a PITA, but he was trying to teach something that seems to have been lost.

          Yes, he was teaching you to turn off warnings for certain operations, so that when a warning was really significant it wouldn't be seen.

          There is a reason why they are warnings and not fatal errors.

          • by geekoid ( 135745 ) on Wednesday June 25, 2014 @03:03PM (#47318189)

            NO, he was teaching an engineering practice, and a good one.

            People like you are why software is in such a terrible state as an industry.

            • Is "software" a science, an engineering practice, or an industry?

              People like you are what gives people who say "people like you" a bad name.

          • by Z00L00K ( 682162 )

            It's not a question of turning off warnings, it's a question of correcting the code to get rid of the warnings.

            If you turn off the warnings you turn off the warnings for all occurrences in the code, and that is a really dangerous thing to do. In most cases warnings are harmless, but in some cases a warning is an important explanation of why something behaves in an erratic way.

            Ignoring compiler warnings is stupid, dangerous and can cause serious problems to become hidden.

            • It's not a question of turning off warnings, it's a question of correcting the code to get rid of the warnings.

              There are two ways of getting rid of warnings. You can either change (potentially a lot of) code, or you can turn off the warning. The fastest way to get rid of them is turn them off. Then you never have to deal with or explain them again.

              If I had a boss that said I had to justify and document every warning in the code I work with, and I couldn't turn them off, I'd never get anything productive done. I'd be spending days poring over someone else's code, making significant changes, and then have to do it

        • That's a laudable policy but totally impractical outside of greenfield development, especially if you use OSS or some other third party code in your product. In my world, if you can persuade developers not to add more warnings to the existing spaghetti ball you're doing well. I currently manage and maintain a large cvs repository and automated build system for ~25 developers. When a developer chooses to use (say) sqlite, they do not spend days trying to rid sqlite of compiler warnings and neither do I. Doi
      • What really grinds my gears is that people release source code that is possible to build, but so full of compiler warnings that you can't be certain that it's going to work as intended.

        Where I work, all builds are run with -Wall -Wextra -Werror. So if you check in code that produces a warning, the compiler turns it into an error, and you broke the build. Which means you get to be the build babysitter until someone else breaks it.
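        As a sketch of what those flags catch, a signed/unsigned comparison compiles quietly by default but breaks the build under them (hypothetical file):

        // g++ -Wall -Wextra -Werror example.cpp
        #include <vector>
        int main() {
            std::vector<int> v{1, 2, 3};
            for (int i = 0; i < v.size(); ++i) {}  // -Wsign-compare under -Wall,
            return 0;                              // promoted to an error by -Werror
        }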

        • by Z00L00K ( 682162 )

          There are even a few warnings that, at least with gcc, won't show up unless you use the plain "-W" flag, and even cases where they won't show up at the "-O0" level but only at "-O2". And there are a few that you have to enable explicitly.

          Add a run of "splint" and/or cppcheck to make sure that the code is as good as it can be. Then execute the binary under Valgrind to make sure that there are no memory leaks. The remaining errors should be those caused by a bad system design rather than plain coding errors. In
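          A sketch of that pipeline on a deliberately leaky program (invocations shown as comments; exact output varies by tool version):

          // cppcheck --enable=all leak.cpp
          // g++ -g leak.cpp -o leak && valgrind --leak-check=full ./leak
          int main() {
              int* buf = new int[16];  // never deleted: valgrind reports
              buf[0] = 42;             // "64 bytes in 1 blocks are definitely lost"
              return 0;                // cppcheck likewise flags the leak here
          }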

    • Exactly. I routinely break a build to find errors. Want to refactor something? Remove the code it depends on and try to build. Now fix the compile errors. Much easier than trying to make all the dependent code not need the dependency and then removing it, especially when playing with >500k lines of code as I do routinely (you just can't reliably remember to fix everything, even when it turns out you did know the full set of places where it was used).

    • Most of the time when I break something it is in some build for a platform I forgot to try. As in, I think this change only affects one product, I build that product and it seems OK, so I check in the code, but then later get a build failure email. There are maybe 15 build combinations where only 5 really make a difference, and I'll actually build only 1 or 2 of them locally.

  • We've found that builds fail for three reasons: coding errors, dependency issues, and command-line argument mistakes. If you're a developer, you should check these three things when your builds fail and you'll likely find the issue.

    • by matthiasvegh ( 1800634 ) on Wednesday June 25, 2014 @01:17PM (#47317183)
      Oh, coding error? Well, that's helpful. Misplace a semicolon in a non-trivial meta-program or DSL in C++, and just watch the errors that the compiler spits back at you, none of which will have anything to do with semicolons. I suppose this is why C++ errors are considered to be easy to fix. Mistype a word, and you get 15,000 lines of errors; I suppose it's easy to fix all those errors too. But figuring out what exactly the coding error was is kind of the point.
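      The classic instance (a hypothetical two-file sketch) is a missing semicolon after a class definition in a header; the errors show up wherever the header is included next, not at the typo itself.

      // widget.h
      class Widget {
          int x;
      }                        // <-- the missing ';' is here

      // main.cpp
      #include "widget.h"
      int main() { return 0; }
      // Newer compilers may say "expected ';' after class definition";
      // older ones emit a cascade of unrelated-looking errors in main.cpp.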
      • by aliensexfiend ( 656910 ) on Wednesday June 25, 2014 @01:45PM (#47317447)
        When I see the error avalanche, the first place I check is the first few error messages, and that is usually enough to spot the problem. Typos still make C++ compilers barf way too much crap.
      • by Anonymous Coward on Wednesday June 25, 2014 @01:52PM (#47317505)

        Please, give up the C++ slander.

        Like any compiler output, read the first error. If you are a developer of any calibre, having a few pages of errors shouldn't faze you, and it's not unique to C++ to generate a few spurious errors. All it requires is a basic level of competence, and if you don't possess that, then any programming language that facilitates you generating anything that compiles is doing no one any favours.

        • Don't get me wrong, I use C++ both at the workplace and for hobby projects and love it. The point I'm trying to make is that the types of errors encountered while developing in C++ are very different from those encountered in, say, Java. So the claim that C++ errors are easier to fix seems to be comparing apples to oranges.
      • by geekoid ( 135745 )

        What? You can't tell a missing semicolon from the error messages? Or a misspelled word? You're not very good, are you?

  • by Extremus ( 1043274 ) on Wednesday June 25, 2014 @01:17PM (#47317181)

    My LaTeX builds rarely fail in MiKTeX. The compiler itself seems to be able to download packages and classes from a common repository (CTAN and its many mirrors).

  • If GOOD/complete unit tests for the code exist and this change would break them, how freaking tough is it to run the unit tests before committing your change to source code control?
  • Complexity

  • Dependencies? (Score:3, Insightful)

    by Anonymous Coward on Wednesday June 25, 2014 @01:23PM (#47317249)

    Dependencies just magnify all other problems. If your code depends on nothing then it won't break unless the compiler changes. Unfortunately such programs don't exist because you can never depend on nothing and do anything useful. In reality if you depended on nothing you'd end up writing your own console, your own I/O, pretty much your own CRT. This sounds great until you realize your dependency is now the hardware itself and it's likely your code won't be portable in any useful sense. That's why we have kernels.

    The problem with C++ is that dependency management is usually file-level and developers 'rarely' care about any file-level constructs (and nor should they, it's an abstract packaging concept). As a result you try to drag in one enum and end up with 100 #includes and 500 new classes you don't care about. This causes bigger object files to be emitted, vastly slower linkage and lots of dependencies you don't expect. All it takes now is for one of those includes to #define something unexpected and BOOM...the house of cards comes crashing down.

    Also, did I mention? The C preprocessor causes a lot of grief when it's abused.
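    One common mitigation for the include-dragging problem (a sketch): forward-declare when only a pointer or reference is needed, and since C++11 even an enum with a fixed underlying type can be declared without its full header.

    // consumer.h
    class BigClass;               // forward declaration: no #include required
    enum class Color : int;       // C++11 opaque enum declaration

    void process(BigClass& obj);  // fine: only a reference is mentioned
    void paint(Color c);          // fine: the size is known from ': int'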

  • by Virtucon ( 127420 ) on Wednesday June 25, 2014 @01:28PM (#47317301)

    Once code is checked in and goes through the standard build process, that's where failures are expected to occur, because in my experience it's the local environment where the developer does the coding that's the root problem. Why? Developers don't refresh their build environment because of the potential for other problems it may create. I had one gig to unfuck some code at a company a couple of years ago and found that setting up a dev environment in that place could take two weeks or more, depending on what team you were on. You had to go through a script, download this, install that, change this... A nightmare. Updating dependencies on a local desktop created panic amongst the developers, who were reluctant to ever change anything they had which "was working" because you could spend days trying to fix what was broken. Naturally, any time they migrated code into test or production (there was no build system) things failed there because of dependency related issues. Also, depending on who the developer was, they naturally felt that bypassing the Test/QA cycle was a job perk.

    I found dozens of dependencies on desktops that were out of date, deprecated, or had major vulnerabilities, and that went for the production systems as well. It was bad all the way around from a best practices perspective. Daily production crashes were the norm; the VP of Dev had a monitor on his desk so he could "troubleshoot" production problems, it was that bad.

    Yes there's shops like this that are still out there.

    • I had an experience which was somewhat opposite (though, in a lot of ways pretty much the same).

      At one point, the company went with a big giant universal build system.

      Every piece of software, every module, every final build ... was recompiled from scratch on a nightly basis. It took a massive server farm many hours to do this. Even if no changes had been made.

      What would happen would be someone would break a component. The build of that component, and every downstream dependency broke. The system had no

    • by Sowelu ( 713889 )

      This, this, this. Dependency issues on a build server are so often things like "we added this dependency on a library locally, but didn't update the build server", or even "we checked in a change that makes it work on the dev box, but didn't make a change needed for the server". That plus forgetting to merge in stuff is really the only way I ever see build breaks (where checked-in code fails to compile, as opposed to compiles failing locally).

  • Maybe that's part of the problem. Too many cooks spoiling the broth? Perhaps I'm naive, but 18k seems a bit much for what they produce.

    • Well if they won't make the beta programs today that will get discontinued tomorrow, who will?

    • At all of Google?
      Perhaps that'd be true if they were all trying to make the same pot of soup, but in this case it's more like all the cooks in the city with each group in their own kitchen sometimes serving entirely different ecosystems of consumer.

    • by geekoid ( 135745 )

      They produce a lot of stuff, I mean thousands of things.
      I'm not sure you know what they produce.

  • Keeping dependencies simple is the only way I survive C++ development. For instance, I was just playing with the OgreSDK, and it depended on my having a specific version of CMake installed in a specific way, in a specific directory. That's fine; oddly enough I can live with that. But how long before I combine Ogre with something that requires something different about the CMake installation?

    So, for instance I like the Crypto++ library because I can cheat and just slam it into my multi-platform codebase without
  • .. Jones, the guy we let go last month. He's pretty much to blame for all build fails for the next few weeks. Then we can start blaming the marketing dept.
  • Large distributed development. Multiple point check-ins. (Our divisional build is around 100 executables and 300 dlls. A corporate level build would easily exceed 1000 executables.) Our build group will launch builds on every library every hour and immediately report compilation failures and link failures. But that is just the beginning.

    Then come the installation and packaging failures. The dynamic libs get out of sync, the wrong dll gets packaged in, etc.

    Then the build is good, it does not crash on every pro

  • "Similarly, almost 53% of all C++ build errors were classified as dependency-related. The most common such errors were using an undeclared identifier and missing class variables."

    OK, why can't a compiler just Google for the dependency?

    Identify everything with a unique ID. When you need to reference it, your system searches all your code (and any other code repositories) for the unique ID.

    Why hasn't this been solved?
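    For reference, the "undeclared identifier" class quoted above looks roughly like this (a hypothetical sketch; the library name is made up):

    // main.cpp -- references a symbol whose header (and build-level
    // dependency) was never declared
    int main() {
        logging::Logger log("build");  // typical message: "error: 'logging'
        return 0;                      // has not been declared"
    }
    // The fix is two-fold: #include the right header, and declare the
    // dependency in the build rule so the build system can supply it.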

  • People just forget to do these things, then wonder why the nightly breaks.

    Builds fail due to engineer error.

  • Java was made to relieve some of the complexities of memory management and the coding that went with it: less code in an error-prone area that was inherent in C++.
  • Clarifications (Score:5, Informative)

    by Afty ( 182462 ) on Wednesday June 25, 2014 @04:53PM (#47319159)

    Hi, I'm one of the authors of the paper and an engineer at Google. I wanted to clarify some points that have come up in the comments.

    First, we don't believe that failing builds are bad. We wanted to study the typical edit-compile-debug cycle that all developers (at least those writing in compiled languages) use to write code. It's perfectly fine to do something like change the signature of a method, compile, then use the compiler errors to find all places where you need to fix your code. We were interested in what kinds of compile errors people run into, how long it takes them to fix the errors, and how we can help you go from a failed to a successful build more quickly. For example, for one particular class of dependency error, we saw that people were spending too much time fixing it. So we created a tool to automatically fix the error and included the command to run the tool in the error message emitted by the compiler. After that we saw the fix time for that class of error drop significantly.

    Second, this work is not related to checking in broken code. The builds we looked at are work-in-progress builds from Google developers working on their projects, so it's code in intermediate states of development, not code that has been checked in. It's possible that broken code may be checked in, but our continuous build system will catch that quickly and force you to fix the problem. So for all intents and purposes, all of the code checked into our depots builds cleanly.

    Third, by dependency issues we probably don't mean what you think we mean. Within Google we use a custom build system with a custom build file format. Source code is grouped into build targets, and build targets depend on each other, even across languages. You can assume that code checked into the depot builds successfully, and that generally engineers are editing only code in their project and not in their dependencies. The dependency errors we describe in the paper usually result because someone added a source-code-level dependency without adding a matching dependency in the build file, resulting in a "cannot find symbol" error. For example, in a JUnit test you might write the code:
    Assert.assertTrue(foo);
    But if you don't add a dependency on JUnit to the build file, then you will get a compile error because the build system doesn't know where to find the Assert class. We would count that as a dependency error.

    Finally, at Google there is no distinction between "builds on my machine" and "builds on someone else's machine." Our build system requires that all dependencies be explicitly declared, even environmental dependencies like compiler versions and environment variables, so that a build is reproducible on any machine. This is how we are able to distribute our builds. So it's impossible for code to build on a developer's local machine but not on the continuous build system.

    I'm happy to answer further questions if people are interested.

  • Developers checking in code that won't compile. There, case closed.
