Programming

Are Today's Programmers Leaving Too Much Code Bloat? (positech.co.uk)

Long-time Slashdot reader Artem S. Tashkinov shares a blog post from an indie game programmer who complains: "The special upload tool I had to use today was a total of 230MB of client files, and involved 2,700 different files to manage this process." Oh, and BTW, it gives error messages and right now, it doesn't work. Sigh.

I've seen coders do this. I know how this happens. It happens because not only are the coders not doing low-level, efficient code to achieve their goal, they have never even SEEN low-level, efficient, well-written code. How can we expect them to do anything better when they do not even understand that it is possible...? It's what they learned. They have no idea what high performance or constraint-based development is....

Computers are so fast these days that you should be able to consider them absolute magic. Everything that you could possibly imagine should happen within the 60th of a second between refreshes. And yet, when I click the volume icon on my Microsoft Surface laptop (pretty new), there is a VISIBLE DELAY as the machine gradually builds up a new user interface element, eventually works out what icons to draw, and has them pop in and go live. It takes ACTUAL TIME. I suspect half a second, which in CPU time is like a billion fucking years....

All I'm doing is typing this blog post. Windows has 102 background processes running. My Nvidia graphics card currently has 6 of them, and some of those have subtasks. To do what? I'm not running a game right now; I'm using about the same feature set from a video card driver as I would have done TWENTY years ago, but 6 processes are required. Microsoft Edge WebView has 6 processes, as does Microsoft Edge itself. I don't even use Microsoft Edge. I think I opened an SVG file in it yesterday, and here we are, another 12 useless pieces of code wasting memory, and probably polling the CPU as well.

This is utter, utter madness. It's why nothing seems to work, why everything is slow, why you need a new phone every year, and a new TV to load those bloated streaming apps that must also be running code this bad. I honestly think it's only going to get worse, because the big, dumb, useless tech companies like Facebook, Twitter, Reddit, etc. are the worst possible examples of this trend....

There was a golden age of programming, back when you had actual limitations on memory and CPU. Now we just live in an ultra-wasteful pit of inefficiency. It's just sad.

Long-time Slashdot reader Z00L00K left a comment arguing that "All this is because everyone today programs on huge frameworks that have everything including two full size kitchen sinks, one for right handed people and one for left handed." But in another comment Slashdot reader youn blames code generators, cut-and-paste programming, and the need to support multiple platforms.

But youn adds that even with that said, "In the old days, there were a lot more blue screens of death... Sure, it still happens, but how often do you restart your computer these days?" And they also submitted this list arguing "There's a lot more functionality than before."
  • Some software has been around a long time. Even though the /. crowd likes to bash Windows, you've got to admit backward compatibility is outstanding
  • A lot of things like security were not taken into consideration
  • It's a different computing environment.... multi tasking, internet, GPUs
  • In the old days, there was one task running all the time. Today, a lot of error handling, soft failures if the app is put to sleep
  • A lot of code is due to software interacting with other software, and compatibility with standards
  • Shiny technology like microservices allows scaling and heterogeneous integration

So who's right and who's wrong? Leave your own best answers in the comments.

And are today's programmers leaving too much code bloat?


  • by oldgraybeard ( 2939809 ) on Sunday June 26, 2022 @06:41AM (#62651510)
    No one wants to pay for general cleanup, improvements, refinements and code review on code that works. It is all about features for PR and marketing.
    • by getuid() ( 1305889 ) on Sunday June 26, 2022 @07:40AM (#62651586)

      It is a cost issue.

      That isn't true, but, ironically, it points to the true cause: it's an issue of incapable management and business development.

      Simple, clean software with proper refactoring isn't more expensive; it's actually cheaper than bloatware. For a typical small-team, one-year product, you can spend 3-6 months nailing down the proper data model and APIs, then implement 50-100 major user stories within another 3-4 months on top of that. Features on top of a good data model practically write themselves.

      Or you can rush out the first few features after 2-3 sprints so stakeholders can giggle and wet their pants, then spend an excruciating 10 months adding 2-3 stories per sprint while bitching about "nobody wants to pay for refactoring these days", and end up with a usable but essentially unmaintainable product.

      The problem is that management is so scared of fucking up Every. Single. Time. that they'll consistently choose the 2nd scenario over the 1st. And if you get lucky, they'll even understand what they're doing and justify it with "better a crappy product than no product at all."

      And in the end, we always end up with crappy products. Color me surprised.

      • So not cost. But risk aversion? Interesting! I didn't think of looking at it like that.
        • by GFS666 ( 6452674 ) on Sunday June 26, 2022 @01:45PM (#62652258)

          So not cost. But risk aversion? Interesting! I didn't think of looking at it like that.

          One of management's primary goals is to do a project with the minimum amount of risk. And management will pay more for something that is considered "safe" rather than risk their project on something they don't know that is cheaper and would do the job better. So for "normal" type jobs, risk aversion is an overwhelming consideration.

          I've done R&D for over 20 years, and what I've learned from that is that most managers have no real ability to correctly categorize risk. Most managers are scared of their own shadow and, most importantly, are afraid of screwing up in front of THEIR manager. In good R&D, managers know that they have to try stuff that may fail. The good upper manager knows that if their lower managers aren't failing, then they are not trying; they are playing it too safe. But you have to come up with something that advances the field but works. It's a balance of not going too far but not playing it too safe. From what I've seen, only experience seems to teach managers where that bleeding edge is. YMMV

      • by GoJays ( 1793832 )
        To add to that, stakeholders and sales execs don't care about polish and under-the-hood performance improvements. Progress to them is adding something they can see, so it is more important to add rounded corners to buttons than to improve query times or write more efficient code. To sales execs, changes that aren't visible on screen mean no work has been done.
        • I don't think it's really that; I think most people don't really mind when stuff is slow. I still remember 20 years ago really hating how slow the TiVo UI was. Yet most people never seemed to notice or care; TiVo was this big darling to them. And TiVos didn't exactly have the kind of performance you get out of PCs today; in fact, the hardware was quite bare.

          People today also seem to love the shit out of Apple TV and Roku, yet I can't stand either. Why? Their UIs are slow as fuck. This is the exact reason I...

    • by jythie ( 914043 )
      It isn't just 'PR and marketing' though; programmers don't want to pay for cleanup either. We are happy if someone else foots the bill, sure, but taking time away from things that are customer-facing means putting time and energy into things that do not have an immediate visible benefit. Even worse, as customers, we don't want to pay for it either.

      A lot of us are insulated from this by layers of company that we can blame, but if you have ever worked a small project where you both have to pay...
      • by Maxo-Texas ( 864189 ) on Sunday June 26, 2022 @10:51AM (#62651938)

        Many programmers do. They are called "maintenance programmers". They love smoothing and cleaning up code so much they used to do it for free on their own time before SOX (Sarbanes-Oxley).

        "Development" programmers like to write new code fast in new technologies. They don't stick around for maintenance and they often leave quite a few bugs and design flaws behind when they leave. They find maintenance programming smothering.

        I've managed both. Development programmers are great for new projects.

        In answer to the overall topic, I've seen development programmers leave as much as 70% bloat, with such gems as throwing away an entire 100-line order if there was any exception of any kind while allocating product. Pre-SOX, I saw maintenance programmers reduce 18,000-line messes that were off limits to 3,000 lines of clean, easy-to-maintain code.

        Neither type is better than the other. Development programmers are great for writing new code. Maintenance programmers don't tend to do that as well.

        But the biggest reason for code bloat over the last 20 years was SOX. Because even a single-line code change required approval from the project lead, team lead, manager, director, CIO, and CEO, many great changes became "too expensive": getting that approval has a real financial cost.

    • by gweihir ( 88907 )

      It is. The problem is that the mountains of technological debt are getting higher and higher, and at some point there must be a cleanup or everything crumbles. We see some of that crumbling already with the increasing insecurity and lack of resilience in software and systems. It will get worse. Doing it on the cheap will become exceptionally expensive in the end, probably much more expensive than all the cost "savings" taken together. On the other hand, screwing up this way is something the human race excels at...

    • Most code bloat is created by the tools and frameworks, not the programmers.

    • by Kisai ( 213879 )

      Nah, the problem is people are using frameworks which act as monolithic "bases" instead of the OS.

      Now, to a certain extent, a framework is a good way to ensure cross-platform portability; however, the problem is that's not actually what happens.

      - Games and apps written in Unity, which sits on top of .NET, which often brings hundreds of "base system" files with it. This has the consequence that plugins and hacking/mods can wrestle more control away from the OS than a specialized tool that doesn't...

  • Pattern Paralysis (Score:5, Insightful)

    by KermodeBear ( 738243 ) on Sunday June 26, 2022 @06:46AM (#62651516) Homepage

    I see software written with additional layers of abstraction when they aren't really necessary.

    A factory that generates another factory which creates a DAO that takes a client object that...

    For some use cases, sure. That makes sense and is necessary. But in many other cases it is complete overengineering and bloat.

    Keep your projects simple. Keep your dependencies few. You can always add things later but finding time to remove bloat is very difficult to justify to the bean counters.

    • by gweihir ( 88907 )

      Keep your projects simple. Keep your dependencies few. You can always add things later but finding time to remove bloat is very difficult to justify to the bean counters.

      In other words, KISS. The foundation of all sound engineering. Mostly unknown, and where known, mostly ignored, in the software space.

    • by narcc ( 412956 )

      For some use cases, sure. That makes sense and is necessary.

      If a factory factory makes sense, your design is fundamentally broken. Full stop. Hell, most factories are unnecessary.

      But in many other cases it is complete overengineering and bloat.

      It's unnecessary bloat 100% of the time. It's true that there are things that are fundamentally complex, but they are vanishingly rare. Odds are against your line-of-business app being among them.

      Here's an odd observation. The more fundamentally complex something is, the simpler the code that implements it tends to be, whereas very simple things are more likely to get loaded up with needless bloat, usually...

    • Keep your projects simple. Keep your dependencies few. You can always add things later but finding time to remove bloat is very difficult to justify to the bean counters.

      The hard part is convincing your coworkers to do this. I don't know how. To them, solving a problem means finding a new dependency to do the work.

    • A couple years ago I went for a C# job interview. One of the questions on the test involved a simple class depicting a light bulb. I recall it went something like this:

      class LightBulb
      {
          public bool On;
      }

      They wanted me to "improve" this code. I changed On to an automatic get/set property, but apart from that I left it as-is. I explained to the interviewer at the end that it was so simple I honestly couldn't see where it needed "improvement" beyond that. It would just be a time-wasting exercise.
      To which...

  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Sunday June 26, 2022 @06:47AM (#62651520)
    Comment removed based on user account deletion
    • by Anonymous Coward

      Many of the frameworks are bloated with trackers and spyware yet... they get used time and time again.
      Now everyone can be a developer. There is a distinct set of behaviours that are needed in order to get beyond the typical 'Hello World'.
      Being able to think logically is a must. Asking the awkward 'what if...' sort of question is part of what you do in order to make the software you write reliable, usable and maintainable.

      I am close to retirement now. I do not recommend that young people follow in my footsteps...

    • by Opportunist ( 166417 ) on Sunday June 26, 2022 @07:59AM (#62651612)

      Most programmers moved on. Quite a few of them into development security where they fix the blunders of the code monkeys that took over from them.

  • by Tom ( 822 ) on Sunday June 26, 2022 @06:49AM (#62651524) Homepage Journal

    Today's programmers are not doing, nor are they expected to do, low-level coding.

    Mostly, the task is to plug together a bunch of library functions and frameworks, write a bit of code that translates between and calls the various components, and make sure your automated tests pass for the user stories.

    It's a completely different world. I'm not sure if better or worse, but definitely different.

    There is a lot to be said for writing little code yourself and instead using libraries. If you use high-quality libraries, their code is much better tested and battle-hardened than anything you can come up with. It is likely to be more efficient, too. Not every one of us is a Romero or a Carmack. And not every wheel needs to be re-invented for every project.

    On the other hand, not every library is high-quality, and the amount of dependencies that some shit pulls in is just mindblowing. 20 libs and 5 MB of code when you want to do some small task that would be 20 lines of code if you write it yourself? Insanity.

    It's not as simple as TFA makes it out to be. There are reasons programming has developed into what it is today.

    • 20 libs and 5 MB of code when you want to do some small task that would be 20 lines of code if you write it yourself? Insanity.

      Why is it insanity? We could probably have a bunch of different specialized hardware for different tasks too, but instead we have a general purpose "computer".

      • Actually, we're back to having specialized hardware for different tasks. The first step was math co-processors, then GPUs, MMX instruction set, etc. Look at Apple's M1/M2 series processors for example, they even have two different kinds of general-purpose processor cores on top of having specialized hardware for different tasks.

    • by splutty ( 43475 ) on Sunday June 26, 2022 @08:38AM (#62651676)

      An interesting discussion I had with an ex-colleague of mine about 'libraries' revolved around "Should a library do sanity checks on input?"

      His argument was: I don't want to write all the checks myself if they can be done by the library, which I don't disagree with.

      However my argument was: If you teach programmers that all the sanity checking is done within a library, they'll never learn how to actually sanitize input, and you basically create a situation where, if they ever use an (in my eyes) "normal" library, their inputs are highly dangerous.

      That just gives you the whole Bobby Tables problem again.

      We never did resolve which one would be 'better', because from two different points of view, both have their advantages and disadvantages. However the older I get, and the more I see the current crop of "programmers", the more I feel that by making libraries do too much, it makes programmers much less well rounded, and certainly much less knowledgeable.

      One solution, I guess, would be to have libraries export specific functionality for sanitizing input, instead of doing it within whatever call requires the sanitized input in the first place. To make it at least visible, and to impress: "Wait a minute. This input NEEDS to be sanitized. Why?"
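
      A minimal sketch of the Bobby Tables fix, in Python with sqlite3 (the table name is invented for illustration): parameterized queries let the database driver handle the dangerous part, wherever the rest of the sanity checking ends up living.

          import sqlite3

          conn = sqlite3.connect(":memory:")
          conn.execute("CREATE TABLE students (name TEXT)")

          user_input = "Robert'); DROP TABLE students;--"

          # Unsafe: interpolating user input into the SQL string invites injection
          # conn.execute(f"SELECT * FROM students WHERE name = '{user_input}'")

          # Safe: the driver binds the value; it is never parsed as SQL
          rows = conn.execute("SELECT * FROM students WHERE name = ?",
                              (user_input,)).fetchall()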

      • Language methods or 3rd-party libs should validate for whatever the spec is (i.e., trying to stick a string into an integer) and throw appropriate exceptions. Hopefully the IDE will prompt the dev to handle potentially uncaught exceptions and the dev will Do The Right Things.

        A "here's our departments common functions for doing all those things we do in our department/unit/org with our systems/setups/etc" library that is internally written and maintained should have both validity and sanity checks on the data

      • by Shaeun ( 1867894 )

        An interesting discussion I had with an ex-colleague of mine about 'libraries' revolved around "Should a library do sanity checks on input?" ...

        Or just sanitize in both places. Sure, it is a bit of a performance hit, but in the real world security is important. If you always assume that data is dirty, you have a better chance of not getting bitten by that particular issue. I always assume the user is malevolent and out to get me. That has prevented so many problems over the years.

      • by Tom ( 822 ) on Sunday June 26, 2022 @05:34PM (#62652816) Homepage Journal

        We never did resolve which one would be 'better', because from two different points of view, both have their advantages and disadvantages.

        No, because you were asking the wrong question. The answer to "should my library or my code do sanity checks?" is: Yes.

        You check sanity at both ends. Remember that these checks will not be identical. The library will make generic checks based on its internal workings and specs. Your code will do sanity checks based on your known environment and use-case.

        For example, the library might check that your input isn't larger than MAX_INT. But your code knows that negative values shouldn't be possible, even though the library would consider them valid. You might also check for MAX_INT, or you might check for a lower value because you know that your input data can't go above X, ever.
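
        A sketch of that split, in Python (the function names and limits are invented): the library enforces its generic contract, while the caller enforces the domain rule that only it knows about.

            def lib_store(value):
                # Library-side check: generic, based on the library's own specs
                if not isinstance(value, int) or value > 2**31 - 1:
                    raise ValueError("value out of range for this library")
                # ... store it ...

            def store_temperature_kelvin(value):
                # Caller-side check: domain knowledge the library cannot have
                if value < 0:
                    raise ValueError("kelvin temperatures cannot be negative")
                lib_store(value)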

    • by jythie ( 914043 ) on Sunday June 26, 2022 @08:40AM (#62651686)
      *nod* Customer expectations have also changed dramatically. Over the years I have seen teams get smaller while doing more, but customer expectations regarding how much software should do and how quickly it should come out have really ramped up. Like so much else in consumerism, the market has spoken: people want lots of stuff really cheap, so techniques and tools have developed to support this.

      It is the same progression you see in pretty much every other industry. Look around you.. at your clothes, at your furniture, at your electronics... how many of them were hand assembled by artisans vs mass produced? There are probably a few things that you paid a premium for, but our lifestyles right now are built on a foundation of cheap mass manufactured things.

      And that is all this framework bloat really is: tools that allow small teams to produce things that would have taken far larger teams with longer timescales in the past, and it is still 'good enough'. It lacks elegance and efficiency, but that isn't what people actually want, outside of those who like the aesthetic of hand-rolled artisan code.
    • by AmiMoJo ( 196126 )

      Think about the requirements in the example given. It's an app to make occasional uploads. Performance is probably limited by your internet connection. It works.

      Someone could spend their time optimizing it. If they are getting paid, then that's a business expense that seems to have no benefit. I doubt anybody decided not to upload because of the size of the app, and I doubt anyone will be convinced to use it because they cut the size down. It already works adequately, and probably took a lot less time to develop...

    • Mostly, the task is to plug together a bunch of library functions and frameworks, write a bit of code that translates between and calls the various components,

      This is a stupid assertion about "today's programmers", because unless you have been writing assembler your whole life you fall into this category as well.

      "Oh, but I write software in C!" you reply, smugly. So you're plugging together a bunch of C library functions written on top of kernel syscalls and writing a bit of code that translates betwee

      • by Tom ( 822 )

        This is a stupid assertion about "today's programmers", because unless you have been writing assembler your whole life you fall into this category as well.

        stdlib and the npm library system really aren't in the same class. Old-style libraries made standard functions everyone needed all the time available. But today, it pretty much doesn't matter what you want to do, how obscure or unusual it is, there's a library for it. Typically, five competing libraries.

        Of course we've always had interpreters and compilers. But since you mention the C64, my first computer, there wasn't anything like a library collection. Certainly no on-demand dependency injection. You had...

  • by paulidale ( 6575732 ) on Sunday June 26, 2022 @06:50AM (#62651530)
    I've seen projects pull in a library because it is expedient. Maybe for one or two functions, but the entire library comes in. I've seen projects pull in a library to speed up or cheapen development. Again, often for a couple of functions. I've seen projects pull in the whizz-bang new library yet still keep the old one around. It's easy to bring code in; it's harder to remove it.
    • Re: (Score:2, Interesting)

      by drinkypoo ( 153816 )

      This is why the Unix model is superior, where different pieces of functionality are in separate processes. Before threads, all computing was done this way, and for many years thread creation was expensive but process creation was cheap (on Unix), so Unix trended towards having more child processes instead of child threads. But this also provides a whole slew of benefits. One of them is that you can have small, simple, optimized pieces that perform certain tasks, even though they are being called by a big...
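
      A Python sketch of that composition, assuming a Unix-like system with grep and sort installed and a hypothetical app.log to read:

          import subprocess

          # Each stage is a small, separate, already-optimized program;
          # the pipeline just wires their stdout/stdin together.
          grep = subprocess.Popen(["grep", "error", "app.log"],
                                  stdout=subprocess.PIPE)
          sort = subprocess.Popen(["sort", "-u"], stdin=grep.stdout,
                                  stdout=subprocess.PIPE)
          grep.stdout.close()  # let grep see SIGPIPE if sort exits early
          print(sort.communicate()[0].decode())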

  • It is more common today to find code that is cross-platform and which supports proper internationalisation. On average, error checking is also less bad today than years ago. These, unavoidably, lead to larger code bases. Having said that, most code I look at makes me wince. Inefficiencies resulting from inappropriate coding choices affect not just code size but also execution speed. Most often, this does not impose a major economic cost on applications, but the aesthetics of most applications are horrible.

    • by jythie ( 914043 )
      Though really, this is not a now-or-then problem. I've worked on projects with code from pretty much every decade (at least going back to the 80s), and wince-worthy code, I think, has always been the norm. Though something that has changed is that now we see more code than we used to. Go back a few decades and it was not unusual to stay at a company on a project for decades, so you mostly ran into code you were already familiar with. Today we are much more likely to jump around, and bring in code bases for a...
  • by The Evil Atheist ( 2484676 ) on Sunday June 26, 2022 @06:57AM (#62651542)

    It happens because not only are the coders not doing low-level, efficient code to achieve their goal, they have never even SEEN low level, efficient, well written code.

    Stop equating low level with efficient. People think they are doing low level code when in fact they are relying on undefined behaviour.

    C++ shows that high-level abstractions enable optimizations that cannot be done at the low level, because low-level code throws away a ton of information that the compiler could use to optimize.

    • by Entrope ( 68843 ) on Sunday June 26, 2022 @07:19AM (#62651558) Homepage

      Exactly how do you think "low level" code equates to relying on undefined behavior?

      The problem with many modern libraries and frameworks is that they try to preserve "high level abstractions" that end up hiding information that the compiler might use to optimize -- like aliasing, or whether non-commutative operations can be reassociated. In floating point, (A+B)+C can be different from A+(B+C), and this means vectorization can change results.

      • I wonder when compilers will turn into full-featured CASs (computer algebra systems) at this rate.
      • by gtall ( 79522 )

        "non-commutative operations can be reassociated" you mean "non-associative operations can be reassociated". That is your example:

                (A+B)+C can be different from A+(B+C)

        Non-commutative is something like A [bingo] B \not= B [bingo] A.
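
        The non-associativity is easy to demonstrate; Python floats are IEEE 754 doubles, so a quick sketch:

            a, b, c = 1e16, -1e16, 1.0
            print((a + b) + c)  # 1.0
            print(a + (b + c))  # 0.0 (the 1.0 is absorbed by -1e16 first)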

    • Relying on anything that passes through 3 or more layers of abstraction means relying on undefined behavior, unless you're willing to spend the time researching what each layer actually does.

      • Not true.

        There are some languages where many abstractions don't cost anything, and can't cause undefined behaviour. In fact, in some particular languages, these abstractions preserve information all through the compile process so nothing gets lost.
  • I have a 5-file C program using lex/yacc.

    The "src" folder that contains all of the code including headers and stuff lex/yacc generated is 129,137 bytes.

    Xcode's Derived Data is 128,994,300 bytes, roughly a thousand times the size of the source.

    So, yes.

  • by achacha ( 139424 ) on Sunday June 26, 2022 @07:17AM (#62651554)

    Agile programming, and how it is used, is a big part of the blame. The constant need for more features every sprint. It takes a serious effort to get a sprint to clean up, remove dead code, and optimize. One has to file a bug to get some of that done, and even then, if not justified, it will just be put on the back burner and eventually closed due to age. This has been happening for decades.

    While agile is mostly better than waterfall, the problem falls on the people running it and their inability to schedule code-wellness time in sprints. When success is rated by how many features were added, it's all additive to the management types. Developers need to push back during sprint planning and demand time to clean up and update the codebase.

    • While agile is mostly better than waterfall, the problem falls on people running it and their inability to schedule code-wellness time in sprints.

      Agree with everything you say, in particular with this bit. But then the problem isn't Agile, it's incompetent developers. With Waterfall they'd outright fail, while with Agile they spit out bad, but "working" code, for some definition of "working". Not arguing the 2nd scenario is necessarily better; just saying...

    • by Junta ( 36770 ) on Sunday June 26, 2022 @08:33AM (#62651668)

      "Agile" is a symptom rather than the problem. The branded Agile is 'the thing' for project management, thus any random mediocre 'leader' has at least learned 'my team needs to apply the buzzwords I read in this Agile article'. So Agile at this point is usually associated with leadership that cannot deal with any semblance of nuance and prioritizes buzzword over actual understanding. Prior to Agile, Waterfall had that "honor".

      Agile is a consultancy industry where people pay disinterested consulting companies money to have that company come in and figure out a way to let management leverage Agile buzzwords as an appeal to authority to justify whatever practices they already had in mind.

      In my area, it seems that every time a significant executive changes, we get consultants to come in and "this time, we will be doing *actual* Agile unlike what you were doing before".

      The problem is it's a highly compensated field with relatively low barrier to entry (compared to, say, doctors and lawyers), and so it attracts some of the least well equipped people to participate.

      • by gtall ( 79522 )

        More to the point, Agile produces a steady stream of munchy nuggets that can be pushed up the management chain of command to feed the various levels of Pooperpoints and quarterly reports.

        • by Junta ( 36770 )

          Yes, the common thread in "Agile as mandated from the top" is the ability to game 'story points' for the production of nice charts. Charts that make people feel good but are horribly detached from business value or really anything at all, since the boots on the ground fixate on making charts that calm leaders rather than doing valuable work.

      • by CaptainLugnuts ( 2594663 ) on Sunday June 26, 2022 @11:42AM (#62652030)
        Agile (or something like it) is necessary after decades of management getting burned by large software projects failing.

        Companies are concerned with managing risk. Large waterfall style projects tend to be a black box to management until near the end when they've spent all the money and may have nothing to show for it. That's an unacceptable risk for today's "next quarter" management.

        While agile style projects may not deliver the best quality for the minimum cost, they tend to deliver acceptable software for a marginal increase of cost when successful, but can be shit-canned earlier (and at lower cost!) if it's obvious it isn't working out.

    • by WaffleMonster ( 969671 ) on Sunday June 26, 2022 @08:59AM (#62651730)

      Agile programming and how it is used is a big part of the blame. The constant need for more features every sprint. It takes a serious effort to get a sprint to cleanup and remove dead code and optimize. One has to file a bug to get some of that done and even then, if not justified it will just get added to the back-burner and eventually closed due to age. This has been happening for decades.

      While agile is mostly better than waterfall, the problem falls on people running it and their inability to schedule code-wellness time in sprints.

      Waterfall was a straw man invented by the agile crowd to justify agile. It's like going to a restaurant that only serves terrible food and when the customer complains you bring out a pot of monkey brains and say hey look our food is better than this.

      • by Weekend Triathlete ( 6446590 ) on Sunday June 26, 2022 @09:57AM (#62651836)

        Waterfall on a 3-year project is about the same as driving a car in one direction for 3 years, completely ignoring road and terrain conditions. Eventually it's going to crash and burn.

        Short-term agile without a set direction is like rowing a boat in circles, endlessly, never getting to a destination.

        The best software project management methodology I see is usually a mix of the two: let's go for 3-6 months with these targeted features, which we know will fit because we've done some story breakdown & sprint estimation, then we'll reassess and re-plan for the next 3-6 month period after we've finished this one.

    • The way I understand your comment is that managers counting only added features are to blame, not Agile itself. Now Agile makes feature count a measure of success, but it need not be the only one. Again, it comes down to poor management, not a specific philosophy.
  • Mikey likes... bloated code, later code rot, ahem... technical debt.

    JoshK.

  • by karlandtanya ( 601084 ) on Sunday June 26, 2022 @07:19AM (#62651560)

    All of this will happen again.

    I leave you with The Story of Mel [catb.org]

  • While there are many reasons for, and definitions of, Code Bloat, I'm going to offer the one that probably contributes to most of it. Everyone in the chain above the programmer (starting with the customer) wants something done now and done cheaply. The programmer can spend a long, long time programming it all from scratch, or he can use existing libraries that have already done most of the work. Using existing libraries will reduce the time to completion by about 95%. There is not a single library that will...

    • Another way of getting code done quickly is to not bother with good/any documentation. Hey - we know what we wanted so it is not needed!

      Some time later a change is needed; the new programmer does not understand the original code and so cannot make the small change needed, so he writes something completely new, and undocumented. In addition to bloat we get fragility, but later programmers blame the previous ones, who are now in new jobs; today's programmers do not care about the troubles that they cause for those...

  • by SharpFang ( 651121 ) on Sunday June 26, 2022 @07:31AM (#62651576) Homepage Journal

    This is the trend nowadays. Say you have a choice between writing a 3-line function that does what you need and pulling in a library containing that function: loading and instantiating it over a screen of code, massaging your data into the format the library's function accepts over another half a screen, calling the function, then massaging the result back into the format you require. The trend is to do the latter, because, allegedly, the function in the library is tried and true and tested by thousands of others. Add sanity checking and exception handling (despite the fact that the function does its own sanity checking, and all of your exception handling is limited to log and re-throw) and you have 4 pages of bug-prone code doing what you could have written in 3 minutes, debugged the heck out of in another hour, and had take 60 nanoseconds instead of half a second.
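
    As a concrete, made-up illustration of the scale involved: parsing a "key=value" line is the sort of 3-line function in question, in Python:

        def parse_kv(line):
            # Three lines, no dependency, trivially auditable
            key, _, value = line.partition("=")
            return key.strip(), value.strip()

        print(parse_kv("volume = 11"))  # ('volume', '11')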

  • Yes (Score:5, Funny)

    by Alain Williams ( 2972 ) <addw@phcomp.co.uk> on Sunday June 26, 2022 @07:37AM (#62651584) Homepage

    Any more words in my reply would be bloat

  • by WierdUncle ( 6807634 ) on Sunday June 26, 2022 @07:48AM (#62651594)

    From the summary:

    In the old days, there were a lot more blue screens of death... Sure, it still happens, but how often do you restart your computer these days?

    My laptop (Ubuntu) manages not much more than a week between reboots because of running out of RAM. It has 8 GB of RAM, but I still get the thing grinding to a halt when RAM runs out and the last of the 1 GB of swap is eaten up. The culprit appears to be my web browser. I am currently running Vivaldi. Before that it was Firefox. Not much difference in resource usage. Some sites will consume massive CPU time, for no good reason I can discern. The other day, I was busy doing some PCB CAD, and the whole thing gradually got slower and slower. A quick look with top showed all my RAM and swap used up. I closed Vivaldi, then made a cup of tea while the RAM was recovered. It takes a few minutes.

    I presume that rendering web sites has got so inefficient because the people responsible for the bloat do not pay for the resource usage. I suspect some of the worst culprits are ads. I went through a phase of blocking ads, but this penalises many of the sites I support, such as genuine news sites that do actual journalism. I found it easier just to put up with ads, rather than allowing them just for my favourite sites.

    My PCB CAD (KiCAD), which I presume actually does quite a bit of actual computation, barely registers in terms of resource usage. I presume that people who write code for such applications optimise stuff, because that makes a better product. I guess the people who write bloated ads are not under pressure to improve efficiency.

    • by Junta ( 36770 )

      Yeah, the categorically high number of BSODs was because of the fundamental design of DOS based windows, where everything ran with supreme privilege all the time. Bloat was not required to solve this, just a more regimented design (which has happened).

      If anything, I've seen more general glitchiness now than in the late 2000s, which seemed to be the sweet spot between "we know what we are doing and have made an architecture that actually has a chance at fault isolation" and "just duct tape...

      • the categorically high number of BSODs was because of the fundamental design of DOS based windows, where everything ran with supreme privilege all the time

        This applies only to classic DOS-booted Windows through Windows ME. And even Windows 9x would run without making BIOS calls if you had it in 32 bit mode, which is to say, you didn't load anything in DOS before loading Windows. They are not really DOS-based in normal operation, but you're right about the lack of privilege separation.

        However, Microsoft made a change to Windows NT which also made it crashy, even though it is not DOS-based at all. NT up to 3.51 used to have three separate memory spaces: Kernel, ...

    • I presume that rendering web sites has got so inefficient

      Browsers don't render websites. They haven't since the 90s. They are effectively a whole OS in a container, with all the capabilities that come with that. It's not inefficient; it's just highly capable.

      Example: If you are feeling nostalgic you can run Windows 95 to get that 90s feel ... right in your browser, https://win95.ajf.me/ [win95.ajf.me], because that's the kind of capability we are running now. That you only use a small fraction of those capabilities isn't really relevant.

  • by Todd Knarr ( 15451 ) on Sunday June 26, 2022 @07:59AM (#62651610) Homepage

    A lot of the size is frameworks. And dependencies. The go-to solution to any problem these days is to look for a package that already does that. Even if "that" is something trivial like left-padding a string. And every one of those packages pulls in the packages it requires, not necessarily the same packages another part of the program uses for the same functionality, which means you end up with multiple copies of the same functionality taking up space.
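
    The left-pad case really is that trivial. A Python sketch of the entire "package" (the language even ships the same thing as str.rjust):

        def left_pad(s, width, fill=" "):
            # The whole dependency: pad s on the left to the given width
            return s.rjust(width, fill)

        print(left_pad("42", 5, "0"))  # 00042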

    Then there's async bloat. That one doesn't show up as size; it shows up as the number of separate threads of execution. You don't just read a file anymore: the I/O library spawns an I/O thread, which starts an async operation and waits for it to complete so it can call the completion code. When dealing with a document, you have one thread responding to the UI and altering the document model, and another thread listening for changes to the document model and rendering them to the display.

    Then you have tracking. Lots of stuff calls home to report on what you're doing, and that network I/O has to happen in the background where it won't block the UI. That means more threads and more delays waiting for remote servers to respond. Think auto-suggestions as you type. And of course that tracking is typically done using an SDK, probably several for different tracking and reporting services, and each of them pulls in its own set of dependencies and creates its own set of threads to do its work.

    And then we have software mis-engineering practices. I wrote some code that defined a simple data object, just data fields and it used object initialization instead of defining a constructor. The lead engineer demanded that it be done "properly": replace object initialization with constructors, create an interface for the object's class so it could be mocked, create a factory class to create instances of the object (it just used operator new()), said factory class had to have an interface so it could be mocked, and finally a factory factory so a factory of the right class could be instantiated (say what? there's only one factory class) and injected into the dependency-injection container at startup and then taken as a dependency of any service classes that needed to create those objects. Repeat that across all of a piece of software and it starts to add up.
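
    A sketch of that contrast in Python (the class names are invented): the top version is what was written; the comment spells out the demanded ceremony.

        from dataclasses import dataclass

        # What was written: a plain data object, initialized directly
        @dataclass
        class OrderLine:
            sku: str
            quantity: int

        line = OrderLine(sku="WIDGET-1", quantity=3)

        # What was demanded: an IOrderLine interface, an OrderLineFactory,
        # an IOrderLineFactory interface, and an OrderLineFactoryFactory,
        # all registered in a DI container, to accomplish the one line above.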

    I don't miss the days of having to cram everything into max 32K of program (allowing space for data in a system with 48K of available RAM), but bloody hells the fact that we have hundreds of gigabytes of virtual memory and terabytes of disk space doesn't mean we have to use all of it...

  • It's good to use frameworks and libraries because you don't have to "reinvent the wheel".

    The thing is... it's often good to reinvent the wheel so that you get *just a wheel*.

    • by Junta ( 36770 )

      That saying in particular can be obnoxious. If the auto industry had taken that expression to heart the same way a lot of the software industry has, we'd be stuck with cars living with the limitations of wooden wagon wheels.

      A peer team wondered at how some function was working in my code, because *the* framework in the industry never could work right at implementing a specific standard. I explained to them some design flaws in the framework on this front and how I just wrote a short implementation...

  • by Sique ( 173459 ) on Sunday June 26, 2022 @08:08AM (#62651628) Homepage
    I entered programming 40 years ago, and the complaints are the same: Too much bloat, too much unnecessary abstraction, wasteful programming and whatever.

    It has never been different; just the names have changed. Programs will always have as much bloat as they can get away with. Abstractions are a useful tool, and because they are by definition abstract, they absolve the programmer from thinking about the actual inner workings. Toolkits are a useful tool, because they offer ready-made solutions for all those little details which aren't part of your overall design. And because they are generic enough to be useful, they are less efficient than tailored code. Frameworks and code generators are as old as computer programming. Yacc, for instance, was developed 50 years ago.

    I bet that in another 50 years' time, people will still complain about how the new generation of computing hardware allowed for bloat in programs. They miss the general conundrum. Not only is Better an enemy of Good, but Good Enough is an even more successful enemy of Good. It has been that way since the dawn of time, and it will be that way until the last two silicon cores in the universe fuse into an iron core.

  • Of course there's too much bloat! And it's everywhere! But people don't really care; they'll just buy a faster computer, because... it costs too much money to design good software.

  • Another view (Score:3, Interesting)

    by ceg97 ( 976736 ) on Sunday June 26, 2022 @08:18AM (#62651634)
    The goal with software is correctness, not optimization: a program that is too slow or unwieldy may still be useful, but software that malfunctions is worse than useless and may even be dangerous.
  • by LionKimbro ( 200000 ) on Sunday June 26, 2022 @08:22AM (#62651644) Homepage

    ...my team lead, 10 years younger than me, thought I was a *GOD* when I tracked a Django bug we were having down to a bug within the C source code for a Python module, patched it up, and solved the problem, all within a couple hours of hitting upon it.

    On the negative side, when I proposed that we write a TCP server and client in Python to have two processes communicate with one another, with far less overhead and a much better fit to our requirements (a continuous stream of data), he looked at me like I was an absolute alien. "Who could maintain this?", he challenged me. And so we spent a month implementing and installing RabbitMQ + Celery + enough configuration code to dwarf what it would have taken to implement it with... a simple TCP server, and a TCP client. Oh, and the security patches and so on on top of all of that.
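
    For scale, the sort of minimal Python pairing being proposed fits in a dozen lines. A sketch (the host, port, and handle() function are placeholders):

        import socket

        def serve(host="127.0.0.1", port=5000):
            with socket.create_server((host, port)) as srv:
                conn, _ = srv.accept()
                with conn:
                    while chunk := conn.recv(4096):  # stream until EOF
                        handle(chunk)                # hypothetical handler

        def send_stream(data, host="127.0.0.1", port=5000):
            with socket.create_connection((host, port)) as sock:
                sock.sendall(data)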

    • by Junta ( 36770 ) on Sunday June 26, 2022 @08:57AM (#62651722)

      Message queues/brokers have made so much software just so terrible when it comes to communication.

      There was a reason they came into existence, and they have their use cases. However, a generation of developers has internalized the incorrect lesson that you can't talk between networked processes without the fragility and maintenance burden of some third party to do it for you.

  • This problem comes from people having a lack of ability, from frameworks becoming too massive, and from development teams becoming too small and overworked.

    Let's assume you have a mid-sized company with a couple of applications that you want to run on Windows, Mac, and Linux. Will your company hire enough developers to write and maintain the applications? Or will they hire half the developers needed and try to use Electron to get around having to do the job properly? We know the answer is that the company will...
  • I think that bloat actually results from including third party libraries, which each include more third party libraries. And while for C programs the linker can remove functions that are not called, today's dynamic binding makes it impossible for a linker to know what to remove - so everything gets bound to the executable. Thoughts?
  • Computers are so fast these days that you should be able to consider them absolute magic.

    Yes. I often feel that way when I run some kind of process which takes some-but-not-much time to run on my PC, which is now considered an antique but has eight cores, 32GB RAM, two GPUs which themselves are not only faster than any computer I'd had previously but have more dedicated RAM than most of them. The fact that this absolute fucking beast of a system compared to what I came up on (you know, single-digit-MHz machines) is now a total turd by modern standards never ceases to amaze me.

    All I'm doing is typing this blog post. Windows has 102 background processes running. My Nvidia graphics card currently has 6 of them, and some of those have subtasks. To do what?

    Mostly nothing! No, seriously. Those processes are mostly doing nothing, but they are running so that when they need to do something, there is the minimum possible delay. RAM is cheap so having stuff sitting in RAM doing nothing is not such a big deal unless it's really hungry. And I'm not saying it's not wasteful, but it's just not any kind of problem to have a lot of processes hanging around on a modern desktop.

    This is utter, utter madness. It's why nothing seems to work, why everything is slow

    It's really not because there's a bunch of extra processes. It's for two reasons. One is legitimately the underlying problem being complained about here, where whole heavy frameworks are dragged in to do simple things. But the other is just that the software does vastly more than it used to. If you want to edit a text file, vi is still one of the fastest ways to do that. But if you want a singing, dancing editor with a GUI, then it's by definition going to use more resources.

    Nothing actually stops you from doing vintage computing today. You can have your X11 environment with fvwm, color_xterm, xv... all the greats. You will need to run a modern browser of course; the old ones are worthless today. You could do Linux From Scratch (if you're lucky, the instructions will actually work) and build in only what you want. You could pare all the stuff out of the kernel that you're not using. And in the end you would have wasted a lot of time, and accomplished very little.

    At some point we're going to have to prioritize security. You can't extrapolate linearly for sure, but at some point the attackers are going to so outnumber the users that the focus will shift away from performance. And then we'll be optimizing for performance again in order to deal with the performance hit that comes from security checks.

  • You don't need to run Windows. If you didn't run Windows, you could choose from many lightweight operating environments based on Linux, designed for performance and minimal bloat. You choose to use Windows and buy into its ecosystem of bloated software and unnecessary tools for various reasons. I mostly use Linux with GNOME. Not the least bloated, but not as bad as Windows by any stretch. Sometimes Windows is needed to accomplish my goal though, and the bloat is irritating.
    • by Mal-2 ( 675116 )

      Yes, I do need to run Windows.

      The primary purpose for this machine is to be a DAW. To that end, I have purchased virtual instruments for Kontakt -- some of the best sounding and most flexible ones available of their type, at least musically. Unfortunately, Kontakt does not run on anything but Windows and OS X. So if I want those instruments, my only choices are Windows, or either massively overpaying or Hackintoshing (and the latter won't work much longer).

      Stop making blanket statements like "You don't need to run Windows"...

  • Yawn. 400+ here, CPU is idling.

    Computers aren't slow because of code bloat. You'd need quite a hefty load of code to even notice it being executed. They're slow because they're waiting on stuff, be it network, I/O, or your input.

    I guess they're complaining about browsers though, and how some *website* is slow. Well, that's I/O, and a lot of it, coupled with downloading and executing unverified, unknown, 3rd party code on your computer. I wonder who thought that could ever be a good idea.

    I suggest to start by...

  • by DaMattster ( 977781 ) on Sunday June 26, 2022 @09:32AM (#62651788)
    When computing resources were precious, the quality of code was indeed much better, because a lot of care had to be taken when writing software. I remember in the days of 8088 and 80286 processors, software just didn't crash. I remember my father's business computer, an IBM PC-AT clone that used to see months of uptime without a reboot. It just didn't fail, period. Fast forward to today, and software is just about always crashing for one reason or another. With computing resources being a relatively cheap commodity, there's just no emphasis on code quality. Even worse, the software companies have figured out that selling expensive support contracts with lousy software is even more lucrative. An entire profit model has sprung up around releasing beta (or even alpha) quality software on the end user and then making money off of them being guinea pigs. They then release patches that fix individual bugs while often introducing new ones. Imagine if our cars kept stalling out for no good reason at all; there is no reason why we would accept this. So why do we accept rubbish software?
  • by DaMattster ( 977781 ) on Sunday June 26, 2022 @09:36AM (#62651802)
    I remember Lotus 1-2-3 from back in the days of the PC AT. Lotus 1-2-3 was an incredibly reliable and high performing spreadsheet application, truly a groundbreaking piece of software. Fast forward to today, and alas, Lotus is relegated to history. However, Excel, Lotus' replacement, is chock full of problems and bloat. With computing resources being virtually limitless and very inexpensive, no software company seems to care about efficient code anymore. In fact, a profit model has grown up around lousy software and selling support contracts. Furthermore, software is designed around planned obsolescence.
  • This is another example of the rebound effect, also known as the Jevons paradox, named after the 19th-century economist who described it. It states, simply, that when a resource becomes cheaper or more efficient, we do not pocket the savings but instead use more of it. Here is one of many examples: now that televisions have become cheaper to produce, have you seen anyone buying a 19-inch television for $150 and pocketing the savings? No, they spend many times that for a much larger screen. Here we are observing the same thing: now that computers have become blindingly faster, with more memory, we write more bloated code. To show my age, the first computer I worked on, in 1960, was an IBM 1620 with 10k of memory, at the University of Pennsylvania physics department. It was the size of a teacher's desk and cost ten times as much as my parents' house. Without a special additional processing unit, a floating-point divide took almost half a second (yes, I said second; it is not a typo). We had the processing unit. As you can imagine, we had to watch every byte and every operation. Now, in my doddering old-age retirement, I can only chuckle at the code I see now.
  • by drolli ( 522659 ) on Sunday June 26, 2022 @09:44AM (#62651810) Journal

    The situation really started deteriorating with UIs in HTML, web programming, and mobile apps.

    Several things happened at the same time:

    * UI was website/app-specific, and that was not a Bug but a Feature, because having your own UI meant discouraging users from switching. Thus there was no standard UI framework, and everybody packaged apps with their own frameworks, each more shitty and bloated than the next, since it was their own product.

    * Users did not get a say because 90% of SW is "free" in some sense, i.e. not paid for and not controlled by the user.

    * Unhealthy competition inside companies between products, instead of integrating the best of every product. (IMHO the only explanation for MS Teams)

    * "I does not crash very often and installs on 90% of the users machine without problems" today translates to "good enough"

    * "Good enough" means for management "ship it, the user will call the support"

    * Nobody is responsible for a full user's hard drive. I have never seen a requirement that a program should be smaller to install. The only way to bring that in would be if installation time hindered your testing.

  • by ivec ( 61549 ) on Sunday June 26, 2022 @09:50AM (#62651824) Homepage

    Thinking about bloat, I can't help but ask:
    - How many implementations of memcpy, of an HTTP download, of a string object are simultaneously loaded and operating on your machine today?

    Doesn't a lot of the bloat come from extensive replications of such functionality?
    Which itself comes from using many different languages, frameworks, etc., which are all simultaneously loaded and supported, including some software compiled >10 years ago against some old API.

    So isn't this all a side-effect of a burgeoning ecosystem?
    We could bring more order, but at what cost?

    This being said, you'll often see a lot of bloat, and a lot of functionality replications, within a single application as well...

  • Today's operating systems have bloat built into their specifications, because today's operating systems try to meet the needs of every possible use case at once. The only exceptions are systems designed for extremely limited functionality and extremely specific tasks.

    Examples: aircraft and spacecraft control systems.

    Highly successful developers and coders coming from the world of business applications would need an exorbitant amount of time and effort to re-tool their thinking, and their design approach, if...

  • The first problem came with interpreted languages, because interpreted languages remove people from allocating memory and other low-level tasks. While interpreted languages get you portability and faster code writing, they also have downsides. One of the biggest is garbage collection: as a user, you can tell when the garbage collector kicks in because the app slows down.
    The second problem happened when hardware got fast and storage increased. This left developers with no incentive to clean up their code or make it less resource-intensive.
    And I swear these days some companies have an incentive to make their code less efficient so people have to buy new hardware. I'm still scratching my head as to why calculator apps can get into the tens of megabytes on a phone.

  • NO TIME!!!!

  • by Reiyuki ( 5800436 ) on Sunday June 26, 2022 @10:22AM (#62651892)
    Give someone 640k and they will find a way to stretch it to the limit. Give someone 640gb and they will stretch that to the limit too.
  • by Kristoph ( 242780 ) on Sunday June 26, 2022 @03:12PM (#62652412)

    The author seems to have little experience with the broader software industry. These OMG TEH BLOAT statements pop up every few years.

    First, software requires more computing power today than it did in the past because it has more functionality than it did in the past. You might argue that much of that functionality is not needed, and that may be fair, but that's a very different type of bloat.

    Second, we now use tooling (including languages) that is more CPU- and memory-intensive. These tools make development easier (cheaper, more robust), but they do use more resources.

    It's also total rubbish that companies can't make efficient software. I work for one of the mentioned media companies, and we actually build apps in C/C++ that fly on 600 MIPS with 30 MB. We also build apps for high-end Apple TVs, with expansive animations and online video, that use 200+ MB just sitting there. Every time we sit customers down and ask which they prefer, they always choose the prettier, more feature-rich applications, even when they're slower.
