Programming Stats

51% of Developers Say They're Managing 100 Times More Code Than a Decade Ago (arstechnica.com)

An anonymous reader quotes Ars Technica: Sourcegraph, a company specializing in universal code search, polled more than 500 North American software developers to identify issues in code complexity and management. Its general findings are probably no surprise to most Ars readers — software has gotten bigger, more complex, and much more important in the past ten years — but the sheer scope can be surprising... When asked how the size of the codebase across their entire company, measured in megabytes and the number of repositories, has changed in the past decade, over half (51%) of software development stakeholders reported they have more than 100 times the volume of code they had 10 years ago. And a staggering 18% say they have 500 times more code.
Ars also reports another surprising finding: 91% of the surveyed developers said their non-technology company "functions more like a technology company than it did ten years ago."

This won't surprise anyone who has noticed firms like Walmart Labs sponsoring open source technology conferences and delivering presentations.
This discussion has been archived. No new comments can be posted.

  • And 49% (Score:5, Funny)

    by Russki3433 ( 7309806 ) on Sunday October 04, 2020 @10:12AM (#60571046)
    The other 49% said that they are outsourcing their work to the other 51%.
    • Re:And 49% (Score:5, Interesting)

      by Aighearach ( 97333 ) on Sunday October 04, 2020 @11:13AM (#60571170)

      Or perhaps, 51% of developers admit they're responsible for 10,000% of the bloat.

      • Re:And 49% (Score:4, Insightful)

        by gweihir ( 88907 ) on Sunday October 04, 2020 @11:21AM (#60571188)

        Or perhaps, 51% of developers admit they're responsible for 10,000% of the bloat.

        That was pretty much the first thought I had as well.

        For bad coders, code is like violence: If it does not solve the problem, use more.

        • For bad coders, code is like violence: If it does not solve the problem, use more.

          If you write enough code and remember to tell the compiler to shut the fuck up, it will eventually compile and run.

          And if it dumps core, kick that shell's ass and make it stop.

          And if it crashes again, add a daemon to crack the whip and start it up again.

          If compilers had faces, most developers I meet would be punching away.

        • Comment removed based on user account deletion
      • Re:And 49% (Score:4, Informative)

        by gbjbaanb ( 229885 ) on Sunday October 04, 2020 @02:34PM (#60571712)

        I reckon so.

        I saw a simple web app recently that had to call a SQL query to get some data. Now, me with my old-fashioned coding skills, I thought: well, that's a dozen lines of code tops, surely.

        Turns out they had built a data access module with a polymorphic interface so the query could be swapped out for different ones with different ORDER BY clauses (in case that was ever going to be needed), a factory class to decide which one to build based on a parameterised user object containing preference configuration, a class to pass this and the WHERE clauses into the original data access class as a parameter, and another factory that builds said parameter object.

        Me, looking at the dozen classes required to do this, thinks "maybe an if statement, and 2 extra lines of code would have done"

        (and I didn't even look at the unit tests they had written to cover 100% of this "elegant" codebase)
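
        For illustration, a minimal sketch in Python of the "dozen lines plus an if" version (hypothetical table and column names; note the ORDER BY strings are fixed literals, not user input):

        import sqlite3  # any DB-API driver would do; sqlite3 keeps the sketch self-contained

        def fetch_users(conn, newest_first=False):
            # The whole polymorphic order-by strategy collapses into one if.
            order = "created_at DESC" if newest_first else "name ASC"
            return conn.execute(
                f"SELECT id, name FROM users ORDER BY {order}").fetchall()

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER, name TEXT, created_at TEXT)")
        conn.execute("INSERT INTO users VALUES (1, 'alice', '2020-01-01')")
        print(fetch_users(conn, newest_first=True))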

        • LOL! Yeah, I see that shit all the time too.

          When I'm doing SQL queries in C these days I use a modern ORM interface. It adds about 30 lines of library code, but it really cleans up the application code.

          Polymorphism is a scam that never ends up providing the conveniences it promises. You get the same results, with less code that is easier to read, just from inversion of control and dependency injection. Well, if you follow the open/closed principle, anyway. If not then you'll need a framework to hold your hand.

          • > inversion of control and dependency injection. Well, if you follow the open/closed principle

            You sound like somebody who exists at the intersection of two universes... one where he subscribed to a newsletter about software design patterns for consultants, and one where the newsletter is about S/M practices. :)

            Care to explain what exactly you are referring to?

      • by Merk42 ( 1906718 )
        If it's anything like my personal experience, the developers are responsible for the bloat only insofar as they are the ones that wrote it at the behest of marketing/some other dept.
  • by jmccue ( 834797 ) on Sunday October 04, 2020 @10:19AM (#60571056) Homepage

    Of course, since everyone wants a GUI interface, code count will increase by quite a bit. Even writing a TUI you get a lot more code, but with GUIs it increases by a significant amount.

    • Why are you talking about GUIs? You think we didn't have GUIs a decade ago?

      Macintosh System 1 was released on 1984-01-24, 36 years ago.
      AmigaOS was released on 1985-07-23, 35 years ago.
      Atari TOS was released on 1985-11-20, 34 years ago.
      Windows 1.0 was released on 1985-11-20, 34 years ago.

      • by raymorris ( 2726007 ) on Sunday October 04, 2020 @11:05AM (#60571156) Journal

        Ten years ago, I had a significant amount of CLI code under my management.

        GUI existed, for people who weren't particularly computer savvy and for some types of commercially sold software. We didn't make GUIs for internal use, because employees could learn how to use this in about 3 minutes:

        Choose an action
        A) Add user
        U) Update user
        D) Delete user

        A few months ago I made a 2FA system. The IT Operations team needs to add and delete users. They got a GUI rather than the CLI menu above. These are IT pros - who now expect a GUI because hitting "A" in the menu above is "hard".

        It's been a while since I wrote or managed any CLI code other than for my own personal use. Because CLI is faster to use (though slower to learn) I prefer CLI for anything I'm going to use more than three or four times.
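
        For the curious, a menu like that is about a dozen lines of Python (the handlers are hypothetical stubs; Q added so the sketch can exit):

        def add_user():    print("adding user...")    # stub
        def update_user(): print("updating user...")  # stub
        def delete_user(): print("deleting user...")  # stub

        ACTIONS = {"A": add_user, "U": update_user, "D": delete_user}

        while True:
            choice = input("A) Add user  U) Update user  D) Delete user  Q) Quit: ").strip().upper()
            if choice == "Q":
                break
            ACTIONS.get(choice, lambda: print("Unknown choice"))()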

        However, most GUIs today are or should be web GUIs, so they don't need to be a lot of code. The 2FA GUI is little more than:

        <link rel="stylesheet" href="http://internal.company.com/company-style.css">

        <form method="post">
          <input name="username">
          <input type="submit" name="action" value="add">
          <input type="submit" name="action" value="delete">
          <input type="submit" name="action" value="refresh">
        </form>

        A lot of devs today use tens of thousands of lines of frameworks and libraries to accomplish the same thing. Mostly because they've never thought to use company-style.css, or don't know how to do "input type=submit value=add" without a couple layers of frameworks.

        • Speaking of web GUIs, one term I really like is "API first".
          The idea is that when you build your app, you first build HTTP endpoints (scripts) for each action. The functionality is server side, with the client side being only for UI.

          Then you build a web UI that calls those, submitting forms or whatever. As if by magic, your app is now automatically fully automatable and can be integrated with any other application - because all of the functionality is available via the web API you made to enable the web GUI.
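
          As a concrete sketch of the idea (Python/Flask; the endpoint and field names are made up for illustration):

          # "API first": every action is a plain HTTP endpoint; the web form
          # is just one client of it, and so is any other program.
          from flask import Flask, request, jsonify

          app = Flask(__name__)
          users = set()

          @app.route("/api/users", methods=["POST"])
          def add_user():
              users.add(request.form["username"])  # the GUI form posts here
              return jsonify(sorted(users))

          @app.route("/api/users/<name>", methods=["DELETE"])
          def delete_user(name):                   # callable by any other app too
              users.discard(name)
              return jsonify(sorted(users))

          if __name__ == "__main__":
              app.run()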

          • Speaking of web GUIs, one term I really like is "API first".
            The idea is that when you build your app, you first build HTTP endpoints (scripts) for each action. The functionality is server side, with the client side being only for UI.

            Then you build a web UI that calls those, submitting forms or whatever. As if by magic, your app is now automatically fully automatable and can be integrated with any other application - because all of the functionality is available via the web API you made to enable the web GUI.

            I just made an application that runs only locally, within the same device (though it calls APIs on other devices). I made the UI in html because a) it's fast and easy and b) that automatically means all the functionality in my new application can be easily called by another application, via the web interface.

            The problem is "API first" never survives first contact with reality. You are left changing and adding shit to your "API" to support some very specific thing needed by the web interface.

            Endpoints everywhere encourages repetitive design and redundant abstractions that fail to effectively leverage underlying features of systems.

            It leaves you with web pages containing more JavaScript than actual content, full of piecemeal network "API" calls incurring avoidable round-trip delays.

            • You've had a different experience than me. That's interesting. I wonder what's different about our processes.

              Especially, you mentioned "web pages containing more JavaScript than actual content" - my experience has been the exact opposite. If the operations are implemented server side before you create any GUI, why the heck would you re-implement the same functionality via JavaScript?

              What you will have, of course, is either some CSS or a single JavaScript file that formats your XML/xhtml for a pleasing display. Pretty much either you've got an iframe displaying xhtml results, or you use xml-to-table.js

              • What you end up with in the real world (when developers take this approach for web-based UIs) is many pages of MVC (or worse) utilising SOAP when a few lines of PHP and some well written, parameterised SQL queries would have adequately sufficed.

                Or in the world of desktop software, you end up with absolute muppetry which uses a particular new-n-tasty framework (e.g. Electron) because rather than designing something that's simple, efficient and fit-for-purpose, one just wants to throw together as many gene
                • > is many pages of MVC (or worse) utilising SOAP when a few lines of PHP and some well written, parameterised SQL queries would have adequately sufficed

                  That's been my experience with MVC generally, since years before "API first" was a thing. In my experience, that's symptomatic of MVC.

                  The *concept* of keeping in mind the difference between the UI (V) vs the business logic vs the data is solid, especially for larger programs. MVC as practiced seems to produce shit, especially with very small programs.

                  • I blame a lot of it on the REST approach which heavily constrains (with severity depending on how cargo-culty your "architects" or coworkers are) access to business logic by internal data models.

                    Unfortunately users and business problems don't care about data models so your API rarely ends up well designed for the problem space. You end up forcing a lot of square pegs through round holes because it's "RESTful".

                    • > I blame a lot of it on the REST approach which heavily constrains access to business logic by internal data models.

                      I'm curious what you mean by this. Maybe an example?

                      Funny thing when you mention REST and RESTful. REST, as the term is commonly used, seems to mean nothing more than "uses URLs, with query strings".

                      The original RESTful idea was about the client exploring a new web service it has never seen before, just as any web browser can load any web page. It would therefore not know anything at all about the service ahead of time.

                    • I have experienced the tendency of REST endpoints to be designed as a simple CRUD interface to the database tables (which makes sense as that's basically the HTTP model). But it also pulls most of your business logic into the "frontend".

                      A simple example would be disabling a set of users. Making an endpoint for mass-disabling a set of users could be considered bad design so you end up needing to call PATCH /users/<id> {enabled:false} for each individual.
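
                      In Python with the requests library, the difference looks roughly like this (the endpoints are hypothetical):

                      import requests

                      BASE = "https://example.internal/api"  # made-up base URL

                      # The "RESTful" version: one PATCH per user, N round trips.
                      def disable_users_restfully(user_ids):
                          for uid in user_ids:
                              requests.patch(f"{BASE}/users/{uid}", json={"enabled": False})

                      # The purpose-built bulk endpoint: one round trip, but arguably "bad design".
                      def disable_users_bulk(user_ids):
                          requests.post(f"{BASE}/users/disable", json={"ids": user_ids})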

                    • > I have experienced the tendency of REST endpoints to be designed as a simple CRUD interface to the database tables

                      I see. Though that's very wrong for any API, RESTful or not (and especially wrong for RESTful), it's too common.

                      > But it also pulls most of your business logic into the "frontend".

                      Which is one reason it's wrong. Also, it means that any changes to physical storage on the server may require updating all of the clients. That's neither RESTful nor good.

                      > A simple example would be disabling a set of users.

              • Especially, you mentioned "web pages containing more JavaScript than actual content" - my experience has been the exact opposite. If the operations are implemented server side before you create any GUI, why the heck would you re-implement the same functionality via JavaScript?

                What you will have, of course, is either some CSS or a single JavaScript file that formats your XML/xhtml for a pleasing display. Pretty much either you've got an iframe displaying xhtml results, or you use xml-to-table.js

                From what I can understand you view implementing these functions as some kind of abstraction or facilitation. I view it as a form of punting.

                What if you want to present data on a page from multiple tables? It seems your choices are either to make separate calls from the browser to gather data for each, or to have an API function to do this for you.

                In this case you can suffer unnecessary round trips and extra code to manage the calls in the browser to facilitate piecemeal retrievals, or you create a very specific API call.

                • > From what I can understand you view implementing these functions as some kind of abstraction or facilitation. I view it as a form of punting.

                  I'd call it dividing up the work logically.
                  I assume that if I'm building a desktop web UI for an application, we may later want a more mobile friendly interface. We may want an app interface. We may want it to interface with some other application. The user interface is for interfacing with the user; that's not where I store my data or put my business logic.

                  • To put actual numbers to it, when you say "an extra round trip", the typical round trip time on the internet is about 24ms. Faster in your corporate network.

                    You're literally saying that saving 24ms or less is a good excuse for a crappy architecture that wastes hours and hours of developer time, especially down the line when changes are needed.

                    Now that you know "an extra round trip" means 24ms, you have the choice to either a) stop doing that or b) keep being lazy asf while knowing that your excuse is total bullshit.

                    • To put actual numbers to it, when you say "an extra round trip", the typical round trip time on the internet is about 24ms. Faster in your corporate network.

                      24ms is fantasy land bullshit. You're lucky to see 24ms through both ends of an RF cable plant just by itself. More importantly, averages are worthless for performance measurement. What matters is the tail, not the average.

                      While it is certainly true that latency can be effectively hidden if you are able to pipeline requests or execute them in parallel, what often tends to happen with these things is that subsequent calls become dependent upon previous calls, rendering latency hiding inoperable.

                      You're literally saying that saving 24ms or less is a good excuse for a crappy architecture that wastes hours and hours of developer time, especially down the line when changes are needed.

                      You are limiting yourself.

                    • > 24ms is fantasy land bullshit. Your lucky to see 24ms thru both ends of a RF cable plant just by itself.

                      I'm getting 27-28 milliseconds to Slashdot at the moment. What are you getting?
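
                      For anyone who wants their own number rather than an argument, a quick Python check (note this measures a full HTTP round trip including server time; the warm-up request keeps TCP/TLS setup out of the samples):

                      import time, statistics, requests

                      session = requests.Session()
                      session.head("https://slashdot.org/")  # warm-up: connection setup excluded below

                      samples = []
                      for _ in range(20):
                          start = time.perf_counter()
                          session.head("https://slashdot.org/", timeout=5)
                          samples.append((time.perf_counter() - start) * 1000)

                      # The parent is right that the tail matters, so report it too.
                      print(f"median {statistics.median(samples):.0f} ms, worst {max(samples):.0f} ms")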

            • The problem is "API first" never survives first contact with reality. You are left changing and adding shit to your "API" to support some very specific thing needed by the web interface.

              BOOM, right on the money.

              "API first" is a great way to waste huge amounts time at a later date by unknowingly committing yourself to all the rework from the inevitable changes. I have never seen this not to be the case.

              The only way it works is if the design specs are absolutely, positively 100% correct and unchangeable...and we all know the likelihood of that: zero.

          • Speaking of web GUIs, one term I really like is "API first".

            I always found "API first" to be a losing bet for me in the long run.

            For most apps/sites, I start with the UI and determine what the user needs to see, what buttons and controls they need to actually see and interact with for each screen. Then it's obvious what functions have to be written to connect everything up.

            Yes, sometimes the UI changes, but if you think this through ahead of time then the changes are usually minor.

            I've never gotten value out of the "API first" concept, nor from the "write the softwa

            • > For most apps/sites, I start with the UI and determine what the user needs to see, what buttons and controls they need to actually see and interact with for each screen. Then it's obvious what functions have to be written to connect everything up.

              I *think* about the tasks the user needs to do, which the UI will then make available; it's the tasks that drive the UI. I make the UI suitable for the tasks, just as I make the API (called by the UI) suitable for the tasks to be accomplished.

              > Nor from the

            • by Tablizer ( 95088 )

              For most apps/sites, I start with the UI and determine what the user needs to see, what buttons and controls they need to actually see and interact with for each screen

              I generally start with the ERD, because it affects much of the UI and directly affects how business logic is implemented. For certain "high-impact" parts of the app, the UI is indeed important, but that's maybe 15% of the UI, the rest just being almost scaffold-able as-is from the ERD.

      • Yes, but "having a GUI" doesn't mean software was necessarily written for a GUI. And personal computers (or even home computers) were not what software was mainly written for; that was mainframes... Still, I agree that the amount of text-based frontend development would already have been quite small ten years ago. If I remember correctly, the last text-based frontend I touched must have been something like 25 years ago (not counting command-line based automation utilities and such).

        • For real "SLOC" productivity, nothing beats the tool for writing PHP by throwing cow-pats at the screen with a Wiimote.

          The "quality" of the code proves this is how it was generated!

          The rest is probably produced by bots translating PHP into Python.

      • by Tablizer ( 95088 ) on Sunday October 04, 2020 @01:21PM (#60571456) Journal

        Why are you talking about GUIs?

        We are talking about GUI's because business end users want full GUI's, but our browser standards are lacking, and so we fake it with bloated buggy GUI emulators in browsers. As mentioned nearby, we need a stateful (interactive) GUI markup standard to replace or supplement HTML. Missing GUI idioms include but are not limited to:

        A) Statefulness tied to session

        B) Optional ability to use absolute coordinates that are consistent across clients. For example, interactive flow-charts with GUI widgets in them where text doesn't bleed over on the "wrong" browser version or OS DPI setting changes. You'll put an eye out trying to do that consistently in current browsers.

        C) Split panels (AKA, frames), something HTML5 forced into obsolescence in worship of the mobile gods. See below.

        D) Combo boxes

        E) Built-in nested drop-down menus

        F) True MDI ability tied to session, with a modal and non-modal option.

        G) Tabbed panels

        H) Native tool-bars

        I) Sliders and turn-knobs

        J) Editable data grid

        K) Expandable trees (folders)

        Everyone thought the biz world would switch to finger-based tablets, but the vast majority of real work is still done on desktops/laptops with mice, and I don't think that will change soon because GUI's are simply more compact and more efficient to use. Finger-oriented interfaces are a sub-set of GUI's, limiting the palette of tools and UI patterns, making employees less efficient. Maybe certain features or screens need a mobile version for say traveling salespeople, but those doing the heavy-duty work are better off with a rich GUI.

    • by AmiMoJo ( 196126 )

      I'd say GUI code has decreased significantly. Frameworks reduce the amount of code you need to write to make a decent UI. In languages like C# it all just plugs together, you don't have to write code to fill data fields or build lists or copy changes from the UI back to the database any more. Just define the layout, the linkage and the sanitizing and it is all taken care of.

      I'd say that true of a lot of coding now. Common stuff is handled by library code that uses modern library features, so it's a lot more

      • I can say that the amount of GUI code that I manage has gone way down over the past decade, for this reason. The bindings have matured.

        But I also usually use Gtk 1.2, that way I don't have to worry about updates or bloat. Of course, it also means I don't use their networking, since networking code needs to be able to be updated for security. But it is better to use language or stdlib support for networking anyway.

        The problem for you though is that all those modern libraries you're using are part of the

      • by Tablizer ( 95088 )

        Frameworks reduce the amount of code you need to write to make a decent UI...In languages like C# it all just plugs together...the linkage and the sanitizing and it is all taken care of.

        When it works right, maybe, but often it doesn't. For example, an upgrade may break something, and then you spend a lot of time fiddling to fix or work around it.

        I used to roll my own web libraries that did SQL generation (or SP calls) and validation automatically, and they were only around 100 to 200 lines of code so tha

    • Also feature creep. Code always gets added, but code NEVER gets removed. There's money to be made by adding features, but there's no cost incentive to shrink the code.

    • everyone wants a GUI interface

      We need a stateful GUI markup standard. HTML/DOM/CSS/JS just can't cut it without downloading bloated libraries.

      And it needs an absolute coordinate mode so we can have things like flowcharts where text doesn't bleed outside boundaries on different browser versions or OS settings. We can maybe also do away with PDF's if done right.

      And don't try to make it an entire virtual OS like Java applets and Flash did. Focus on the job of GUI's and put more burden on the server.

      • It should not be markup at all.

        Compare RelaxNG's C-style representation and its XML-style one.

        I'd prefer one that is a direct representation of the in-memory (binary) state of the data, with everything else being merely theming, including UI state data!
        It can be live-mapped to something human-readable (preferably graphical; I'm a visual person), just like ASCII maps numbers to bitmaps or vector drawings of characters in a text editor. But that too is mere theming.

        • by Tablizer ( 95088 )

          Generally something has to be significantly better than existing standards to replace them, not just slightly better. There's too much existing tooling & libraries that process XML to throw XML out without a good "trial" first. Maybe the "GUI browser" could allow XML, JSON, and RelaxNG format, but that's kind of a secondary issue here.

  • Well, if 51% are doing 100 times more code, that's only 51 times more code.

    • Well, if 51% are doing 100 times more code, that's only 51 times more code.

      That's not what the study says. It says almost everybody is managing more code, and it also shows that 18% (a subset of that 51%) report managing 500 times more code. And 29% of developers report 20x the code.

      But really, none of this is remotely precise, so saying "Do the math" doesn't really help anybody.
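
      As a back-of-envelope check in Python, taking each quoted bucket at its low end and assuming, purely for illustration, that everyone outside the quoted buckets saw no growth:

      buckets = [
          (0.18, 500),  # 18% report >= 500x (a subset of the 51%)
          (0.33, 100),  # the rest of the 51% report >= 100x
          (0.29, 20),   # 29% report ~20x
          (0.20, 1),    # remainder: assumed flat (not in the survey quote)
      ]
      average = sum(share * multiplier for share, multiplier in buckets)
      print(f"lower-bound average growth: {average:.0f}x")  # ~129x, not 51x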

      • by marcle ( 1575627 )

        Actually, I was just being snarky. I agree with you, these figures are either wholly made up or questionably sourced. No way anybody has a precise way to measure this.
        And I don't disagree that there's more code out there to manage. Otherwise, why would my computer running Windows 10 on modern hardware be no faster than my old machine with hardware a few generations old running Windows 7?

  • Great!!
  • And how much of that is bloatware, where a hello world takes 100MB of libraries?
    • by Entrope ( 68843 )

      It varies. It's about 99.5% for the people who say they are managing 100 times as much code as before, and upwards of 99.9% for the people who say they are managing 500 times as much code.

      Except for the people who were in middle school a decade ago. They were just awful developers at the baseline point.

  • It just accumulates and slowly degrades into an amorphous pile of .... (well, no-one actually knows, since the people who wrote it have long since moved on). However, presumably someone is technically "managing" this accumulation of software.
  • Does "coding" these days include bringing up web pages with a cloud-based publisher and importing javascript libraries into it?

    • Does "coding" these days include bringing up web pages with a cloud-based publisher and importing javascript libraries into it?

      Apparently. (It seems ridiculous to me too.)

  • .. say that 42% more of their time is wasted on meaningless statistics than 10 years ago.

  • That's how job experience is supposed to work. The more experienced you are, the more you *should* be able to maintain and be responsible for.

  • It would all make sense if:
    51% of developers surveyed are under 30.
    18% are under 25.

    • And they also have "no clue" as to what is going on, or why things are done the way they are (without frameworks stacked upon frameworks). They grew up in an era of "frameworks", so they add frameworks. In the end it *looks* as if they know what is going on. A great pity :-(
  • by Viol8 ( 599362 ) on Sunday October 04, 2020 @11:19AM (#60571182) Homepage

    It just means loads of high level APIs - sorry, frameworks - are being used, because previous developers of the system couldn't hack their way out of a wet paper bag without someone helping them, so you end up with megabytes of library code to do utterly trivial tasks such as managing network sockets. 5 years ago I was working at a company that wrote software in C++ for airports, which consisted of layer upon layer of fossilised libraries that no one dared remove in case something broke, so they just added more crap on top to get the job done. I remember being stunned when, in the debugger, the stack trace of a crash bug was over 40 - four zero - levels deep just in the application itself, never mind any external libraries. It's a wonder the program didn't require a supercomputer just to boot up.

    • The re-use also allows developers to blame the lower layers of code for flaky behavior - sometimes correctly, sometimes not. You end up with the sum of all the previous bugs, all the unexpected behavior of the work-arounds for those bugs, and the nearly undiagnosable new ones.

      Still, since it seems quicker than re-writing from a very low level, it's a very common practice.

  • Throw powerful code management tools, and computer aided software development environment tools, at programmers who are not well schooled in fundamentals of computing.

    Allow computers to get more and more powerful, exponentially more powerful, while the humans cannot adjust for exponential change. Programmers report 10 times faster code on a 100 times faster computer and get away with it.

    Results?

    Every damned code monkey copies & pastes functions, whole libraries, and mutates one or two functions for their needs.

    • by Tablizer ( 95088 )

      People casually skim SourceForge or such sites and code up functions, and import entire libraries uselessly.

      I wouldn't always pin this on developers. Sometimes an executive wants some snazzy UI feature they saw on another site or app. A developer could say it will increase library size and dependencies, but the executive doesn't see that as their own problem.

      The project manager has to step in and say "no", but that often makes them unpopular. Our business environment tends to favor short-term benefits over long-term costs.

      • A developer could say it will increase library size and dependencies, but the executive doesn't see that as their own problem.

        The conversation would go something like this:

        Developer: "We could do that but it's going to cause problems"

        Executive: "Whatever, future boy, just make it happen"

  • Comment removed based on user account deletion
  • I'm not sure what the meaning of identifying "non-tech" companies is here. Virtually every company relies on "tech", meaning computers, and has for a long time. Speaking of Walmart, I had a conversation with some folks back in the mid 90's who were working in Walmart "IT" and was amazed at what they were doing and how big their "digitilisation" (UK spelling for extra-extra-coolness) was. And of course any companies that dealt with numbers had already pushed the important 80% onto computers a decade earlier.

  • The last 10 years have been a bonanza for mergers and acquisitions. Firm A buys firm B. A moves all the code to its IT shop. Firm B's IT shop is gone. How many firms has Facebook acquired in the last 10 years? FB's code base grew enormously. How about the firms they acquired? They are not in this survey; they do not exist any more.
  • Blender is 3D graphics software that's been around 25 years. It's grown in power and features, but it doesn't seem to suffer from software bloat. How do they do that? (Or do they? Maybe it only seems to me like it has stayed tight in terms of features per megabyte.)
    Postgres might be another example. Both are open source. Maybe that has something to do with it.

    Maybe it has something to do with software that is written by people who are also users of the software on a fairly intimate level.

    • I learned the term "emergence" for that. As in "emergent gameplay". E.g. Minecraft. Few basic "levers", almost infinite combinations.

      The art is to find the most generic functions that cover the most diverse functionality with the smallest amount of parametrization.

      And it automatically becomes elegant, and in terms of coding time, efficient too.
      But there is one caveat: You have to be smart!

      Haskell is a good example. E.g. highly abstract interfaces like monoids, functors, applicatives, monads, coroutines, etc

    • by jbengt ( 874751 )

      Maybe it has something to do with software that is written by people who are also users of the software on a fairly intimate level.

      On a completely separate note about that, I use a lot of software supplied by vendors for engineers to use, like HVAC load programs, equipment selection programs, etc. It seems that there are two general types - those written by software developers who don't know how the engineering is done, and those written by mechanical engineers who don't know how to write decent software.

  • Never before or since have I managed as much code as a decade ago - it was the Euro conversion of an entire billing DB. It was an Oracle DB, so there was code, lots and lots of code...
  • Maintaining about 180K-220K LOC here. What about you guys?

    • I code in minified Javascript, so I only have to maintain one line of code, but it is a very very long line...

    • What language?

      120 lines of Haskell can easily contain more functionality and be harder to maintain than 1200 lines of C or JS.

  • ... and the company selling code search finds that there is a huge customer need for the service that it just happens to provide. Mastercard or Visa accepted, step right up!

  • It's about 50-50 who manages more or less code

  • Been maintaining the same code base for over 20 years. It's actually smaller now than when I started!

  • Because they still have the same amount of time per day.
    And while some things can be automated away, they already could be before, so that part is constant and, with growing code, becomes negligible in the calculation. O(1+n)

    Nowadays, it's mostly frameworks on top of frameworks and OSes (like "browsers") on top of OSes. People don't even care anymore about how much useless duplication is in there. Look at the WHATWG. They do nothing else than duplicate nowadays. (forall $something, Web$something.)

    And now you know.

  • They did not write 90% of that code; they downloaded it from the net, tweaked it, stuck it to a pile of other downloaded code with a bunch of binary chewing gum and superglue, and launched it as their creation, without having any real understanding of how their binary atrocity actually works.

    Programmers are not, today, suddenly able to WRITE 100 times as much code as programmers were able to write a decade ago.

  • by rsilvergun ( 571051 ) on Sunday October 04, 2020 @04:39PM (#60572076)
    It's due to automation - this is a large part of what they really mean. Not "Dem ro-berts takin' er jerbs!" but productivity increases that mean one developer can oversee thousands and thousands of lines of code.

    Where I am it's the same size team as it was 10 years ago but we've taken on more products and more users than ever. Despite that we're keeping up just fine.

    Why? Because we got rid of old, crappy software that used to require a full-time employee to keep it running for our customers, replaced it with modern web-based stuff that, like it or not, doesn't need nearly as much maintenance, and we've automated a ton of tasks we used to have to do by hand (leveraging Stack Overflow quite a bit to find esoteric doodads that used to take months of study to find).

    I'd say in 10 years my productivity has doubled or tripled. My pay hasn't, though. And if we had the same software from 1999 today, our team would be 2-3 times its size.

    Thing is, all this happened bit by bit, so you hardly notice it. But it means a *lot* fewer jobs. It means when people retire, quit, or get laid off, they're not replaced.
    • If all you're doing is supporting more stuff than ever before, how is your productivity going up? You're not producing anything. Sounds more like you've created a giant monoblock, with a small number of people maintaining it. A lot of inertia. And when you go, whoever takes over will have to spend more time learning the tools, which will be more important than coding. And then the ability to be truly innovative will have died. Being able to write lots of business logic with someone else's framework is not innovation.
    • one developer can oversee thousands and thousands of lines of code.

      Well, for some sense of the word "oversee". Managing an automated code repository (for example) is not really overseeing the code. There has to be some element of responsibility, which requires understanding. There is a limit to how much code you can understand, regardless of what fancy tools you have. As projects grow larger, more people will be needed to manage different aspects of the project. That means more jobs, not fewer.

      If a project grows larger, but people understand it less and less, that would be

    • by Tablizer ( 95088 )

      I will agree that a well-run "web shop" can be quite productive, but most orgs are not "well run". You may be an exception. Most IT shops are semi-dysfunctional in my observation. The heavier layering and skill specializing of web stacks requires more coordination and good management, and is thus more sensitive to the quality of management.

      The GUI IDE's of yesteryear were more resilient to poor management because they didn't have so many layers to (mis)manage: one mostly stuck with the out-of-the-box tool

  • I'm kidding, but I'm guessing that this is a scripting language issue. And JavaScript is by far the worst about this, IMO.

    If you develop modern JavaScript, then you're downloading a metric ton of libraries. (And people used to say that jQuery was wastefully heavy...)

  • It sounds like people are working in larger teams and organizations.

  • Sometimes making purpose-built apps is more efficient than having to implement huge swaths of frameworks and "tools". Nowadays, developers don't just have to know how to code, they have to know how all the infrastructure around their code works. In detail. In fact, they have to know less about programming than about all the other crap. Maybe it's time to switch back to libraries instead of frameworks (which make everyone program everything the same way and create a macro single point of failure). Just a thought; a rough sketch of the distinction follows below.
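
    The difference in code terms, roughly: with a library your code keeps control and calls in; with a framework you hand over control and get called back. A toy Python sketch (the "framework" here is made up purely for illustration):

    import json

    # Library style: your code owns the control flow and calls the library.
    def handle(raw):
        return json.loads(raw).get("name", "?")  # you decide when the library runs

    # Framework style: the framework owns the control flow and calls YOUR code.
    class ToyFramework:
        def __init__(self):
            self.hooks = []
        def register(self, fn):
            self.hooks.append(fn)
        def run(self, events):
            for e in events:              # the framework decides when your code runs
                for fn in self.hooks:
                    fn(e)

    fw = ToyFramework()
    fw.register(lambda e: print("got", handle(e)))
    fw.run(['{"name": "a"}'])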
  • Deadlines get shorter, features get added, bugs don't get fixed, complexity goes up, software breaks more easily, security doesn't exist, testing is left to customers, there is no continuity in development, 3rd party dependencies rise, and products become fragile. This will not end well.
  • Decades ago I was writing embedded C code for 16 bit microprocessors with a few K of EPROM space. Now I write C# code for projects where a single class can take up 10K+ lines of code. Why is this surprising in any way?
  • This all looks like made-up statistics. 51% and 100x is not believable. 51% and 83x I could take seriously, or 37% and 100x, maybe.

    Also, I cannot believe the 100x figure on its own. I know code tends to get bloated, but not that bloated. If that had happened on one of the long-running embedded projects I am involved in, I would have had to upgrade the microcontroller several times over to allow for more code space. No such upgrade was required. And believe me, a lot of extra functionality has been added over the years.

  • Well, yeah. Create an empty project in NetBeans and you probably already have more repositories and more MB used than the entire original Windows OS. This does not mean that an empty software project now is more complex or bigger than the most complex program ever written in 1985.

    The entire Mario game is famously smaller than a screenshot of said game.

    Most of this increase in code is probably explained by a loss of code density.

"Pok pok pok, P'kok!" -- Superchicken

Working...