
Is Modern Software Development Too Complex? (infoworld.com) 273

"It has never been more difficult to be a software developer than it is today," says Nigel Simpson, a former director of enterprise technology strategy at Walt Disney.

And he's not the only one who thinks so, writes the UK group editor of InfoWorld: "Complexity kills," Lotus Notes creator and Microsoft veteran Ray Ozzie famously wrote in a 2005 internal memo. "It sucks the life out of developers; it makes products difficult to plan, build, and test; it introduces security challenges; and it causes user and administrator frustration."

If Ozzie thought things were complicated back then, you can't help but wonder what he would make of the complexity software developers face in the cloud-native era. The shift from building applications in a monolithic architecture hosted on a server you could go and touch, to breaking them down into multiple microservices, packaged up into containers, orchestrated with Kubernetes, and hosted in a distributed cloud environment, marks a clear jump in the level of complexity of our software. Add to that expectations of feature-rich, consumer-grade experiences, which are secure and resilient by design, and never has more been asked of developers. "There is a clear increase in complexity when you move to such a pervasive microservices environment," said Amazon CTO Werner Vogels during the AWS Summit in 2019. "Was it easier in the days when everything was in a monolith? Yes, for some parts definitely."

Or, as his colleague, head of devops product marketing at AWS, Emily Freeman, said in 2021, modern software development is "a study in entropy, and it is not getting any more simple."

On the other hand, complex technologies have never been easier to consume off the shelf, often through a single API — from basic libraries and frameworks, to image recognition capabilities or even whole payments stacks. Simply assemble and build your business logic on top. But is it really that simple?

The article also cites a critical 2020 blog post by RedMonk analyst Stephen O'Grady. "The process of application development is simply too fragmented at this point," O'Grady wrote. "The days of every enterprise architecture being three-tier, every database being relational, and every business application being written in Java and deployed to an application server are over.

"The single most defining characteristic of today's infrastructure is that there is no single defining characteristic. It's diverse to a fault."
  • Unix is the way (Score:5, Informative)

    by packrat0x ( 798359 ) on Saturday November 06, 2021 @11:41AM (#61963063)

    "Those who do not understand Unix are condemned to reinvent it, poorly.â --Henry Spencer

    1st step: Divide software into easily digestible chunks.

    • "Those who do not understand Unix are condemned to reinvent it, poorly.â --Henry Spencer

      1st step: Divide software into easily digestible chunks.

      1st step: determine who is going to get financially fucked from any action you take.

      Best understand whose wallet you're screwing with first. One man's profit is another man's "theft", and your shortsighted advice highlights exactly how you fail to understand that Greed created this problem.

    • Step one: divide everything into a neuron.
      Step two: ???
      Step three: profit?

    • Re:Unix is the way (Score:5, Insightful)

      by ceoyoyo ( 59147 ) on Saturday November 06, 2021 @11:54AM (#61963093)

      That's all fine and good until some twit decides those easily digestible chunks all need to run in different containers, on different servers, and it's a pain to start all those up so you need some software to "orchestrate" and then....

    • Re:Unix is the way (Score:5, Insightful)

      by Sigma 7 ( 266129 ) on Saturday November 06, 2021 @01:09PM (#61963331)

      Problem with that paradigm is that it's not feasible anymore. Want a PDF editor? That file format alone requires manipulating bitmap images, vector images, text, and fonts - and those components are rather hard to subdivide. And since it's a PDF editor, you have to deal with a parse tree as well, or even a file format that can be amended by later revisions and can split itself all over the place.

      Same thing as writing a web browser. It has a rather complex rendering chunk, and the parser needs to be "fail-safe" permissive as well. One could take a rendering core to do most of the work, but it's still rather monolithic. (As a side note, the HTTP protocol for web servers didn't show the proper list of headers that need to be sent the last time I checked. That needs to be updated too.)

      Then there's games (a.k.a. computer run adventure programs), for which most developers simply use a framework to skip all the engine coding.

      Divide software into easily digestible chunks.

      Second rule of Unix is that all programs need to expand until they can read mail.

      • It doesn't matter. The function printf() covered a lot of complexity 30 years ago. So did the mere act of opening a file, and it was entirely covered in the single character < It's not about how much complexity is hidden on the inside, but the view from the outside.

        The key is to break it into comprehensible chunks, simple from the outside, that you can work with. Otherwise you can't work with it, QED.

        • Conceptually printf() is just a form of software tracing. Some tracing can be automated, but usually anything sophisticated needs to be instrumented for trace. Now printf() isn't exactly high performance, but nearly the same API can be used to implement something faster: store the format strings off-line and log only timestamps and arguments into a cache-optimized buffer (this is overkill most of the time; sometimes it's exactly what is needed). A rough sketch of the idea follows below.

          In practice I think I'd rather dig through a log of messages than lo
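          A minimal sketch of that deferred-format idea, in Java (all names invented, single-writer thread assumed): the hot path stores only a reference to the constant format string, the raw arguments, and a timestamp; the expensive formatting happens later, when someone actually reads the trace.

            final class TraceBuffer {
                private static final int SIZE = 1024;            // power of two: cheap index masking
                private final String[] fmt = new String[SIZE];   // reference to the constant format string
                private final Object[][] args = new Object[SIZE][];
                private final long[] nanos = new long[SIZE];
                private int head = 0;

                // Hot path: no string building, just three array stores.
                void trace(String format, Object... a) {
                    int i = head++ & (SIZE - 1);
                    fmt[i] = format;
                    args[i] = a;
                    nanos[i] = System.nanoTime();
                }

                // Cold path: format the retained entries only on demand.
                void dump(java.io.PrintStream out) {
                    int n = Math.min(head, SIZE);
                    for (int k = head - n; k < head; k++) {
                        int i = k & (SIZE - 1);
                        out.println(nanos[i] + " " + String.format(fmt[i], args[i]));
                    }
                }
            }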

          • Conceptually printf() is just a form of software tracing.

            When I started programming, printf() was the primary method of displaying information to the user. And it was brilliant.

      • by narcc ( 412956 )

        In another lifetime, I did a lot of work with PDFs. While the spec is ugly, it's not particularly complex. The format itself is actually really easy to work with, having been designed at a time when memory was at a premium.

        That file format alone requires manipulating bitmap images, vector images, text, and fonts - and those components are rather hard to subdivide.

        I'm going to directly contradict that. Everything you've described, in a sensible design, is atomic. The bit that renders bitmaps is completely separate from the bit that renders text which is completely divorced from the bit that renders vector data. If the part that renders bitmaps is j

    • "Those who do not understand Unix are condemned to reinvent it, poorly. --Henry Spencer

      Hol up. "Modern" software development is squarely in the condemned to reinvent poorly corner, and are you alluding to it being aligned with Unix philosophy, because CHUNKS?

      Templated declarative configuration run loops, namespace virtualization, layered filesystem images, docker composition, remote software repository hell, the unholy abomination that is container networking, dvcs that waves its internal model in your face like a giant dick, API gateways to get around throttles, and the best way to do cloud

    • by AmiMoJo ( 196126 )

      TFA, and the summary for that matter, argues that the Unix way is what is creating the complexity. Instead of monolithic applications, developers have to break everything down into micro-services. The sheer number and diversity of services is the source of the problem.

      I can see the point being made. You can end up with a bunch of different applications, written in different languages, running in different environments. Debugging requires following data through multiple services, some of which are pretty opa

      • by suutar ( 1860506 )

        Is it really more complex than the monolith, or is it just that the complexity is more visible? (Ignoring, for the moment, the complexity that comes with actual functional differences, like redundancy and failover.)

    • "The single most defining characteristic of today's infrastructure is that there is no single defining characteristic. It's diverse to a fault."

      It's not that complex. You just need a funner, more digestible [youtu.be] way to organize them.

  • by segedunum ( 883035 ) on Saturday November 06, 2021 @11:45AM (#61963069)
    The complexity we have built in today, especially microservices, containers, orchestration, and 'automation' as far as the eye can see, often consists of things we have mistakenly thought we needed. Many people I've encountered think containers magically solved all deployment problems, and I have to point out that those images need to be maintained. Microservices might fit certain specific scenarios, but they add needless complexities. They are also horrific to try and develop with.

    We are our own worst enemies.
    • by ceoyoyo ( 59147 ) on Saturday November 06, 2021 @11:57AM (#61963117)

      Hey, you should try putting all the functions related to a certain task together in a file, so they're a bit more organized.

      Wow, that's a great idea! But what if I put each of those files on a different server, a server that *only* runs that bit of code, and I wrap a web API around it, and then it will be available on the Internets! Wait, and instead of putting the whole file on each server, what if I put each *function* on its own server?! This will surely save software engineering!

      • Any time you add a dimension to the software problem we somehow find a way to ignore it until it becomes a huge problem, then we add a dimension. Files are a form of it: we add a filesystem just so we can ignore them, until a filename clash occurs. Then we add "alternates" or "overlays". Then we turn those into unionfs mounts. Then we turn those into containers. Then we all throw our hands up and go home. Often we're just adding complexity to cover over the complexity or to rationalize it. The better option
        • by ceoyoyo ( 59147 ) on Saturday November 06, 2021 @01:07PM (#61963323)

          It's more than that. There is a real (and fairly well studied) asymmetry where it's easy to make something more complicated, and difficult to make it simpler. Software illustrates that in spades. You can't remove that old feature that someone somewhere might use. You have to maintain backward compatibility. The boss thinks he wants this so you have to put it in. You have to support the people who want to hold it wrong.

          A place I used to work at had a meeting because their thirty year old academic software package was getting so creaky they couldn't ignore it anymore. Nobody who used it wanted to maintain it. What do you do? I suggested starting by trimming some of the old stuff (like three of the four different APIs) and focusing on a subset to maintain in the future. I practically got run out of the room for daring to suggest that features should be *removed*.

    • How about a car?

      Engineering, as opposed to science, is all about managing complexity and making things that are beyond one person's ability to make every part of.

      True, computer programming does let one reach further with less, but we're past that point now on many things.

    • Exactly. Software development is complicated if you make it complicated. There are a lot of pithy phrases we can apply here.

      If you let it, software complexity increases to the point that you can't understand it; this has always been so.

      Kids these days. Look at this code from the 60s [slashdot.org]. You think you deal with complexity??

      If you choose an "eventually consistent" database, then you better have a plan for when it isn't consistent, because it won't be consistent. It's in the name.

      Programming is simpler than ever.

    • Containers and orchestration have just shifted complexity to another level and exponentially increased it there.
      In the end we still write mostly CRUD applications like in the 80s, but back then we fired up an IDE, created a project, started coding, painted the forms, and wrote the reports. Now it is 4 days of project setup for a corporate environment, getting all dependencies in, planning and debugging the orchestration, and then working the same data in and out over 3-5 layers with several containers to orchestr

    • Oh, add frameworks. And more frameworks. Often a very simple application is amazingly complex because 99% of it has nothing to do with the actual application operation itself, but is make-work.

  • I think it's too complex now; the "features" are not worth all the costs (such as unintended consequences). But without a good feedback loop to rein it all in...

    Especially now that people are dying because of it.
    • by Junta ( 36770 )

      It depends on how much a team gets invested in reading about 'how' things should be done as written by people responsible for extreme-scale problems that don't apply to 99% of work.

      If you don't let yourselves believe you're screwing up just because your work doesn't resemble those people's work, you will usually do fine.

      Of course, I've seen a development team twist themselves into a mess as they have no actual understanding of what drives those people to microservices and start slicing and dicing small problems

      • Of course, I've seen a development team twist themselves into a mess as they have no actual understanding of what drives those people to microservices and start slicing and dicing small problems into unmanageable messes.

        People switch to microservices because their code isn't well organized. This gives it some natural organization because each team automatically owns a microservice or something. Unfortunately if you don't know how to organize a single codebase (the primary key being strong interfaces and separation of concerns), your microservices architecture is going to be much worse because organizing microservices is harder by an order of magnitude. Debugging can then become almost impossible.
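
        As a minimal illustration of that kind of seam, in Java (names invented): callers depend on an interface rather than a concrete class, so whether the implementation later stays in-process or moves behind a network hop becomes a deployment decision, not a rewrite.

          record Invoice(long id, long totalCents) {}

          interface InvoiceStore {                  // the seam the rest of the code depends on
              Invoice find(long id);
              void save(Invoice invoice);
          }

          final class InMemoryInvoiceStore implements InvoiceStore {
              private final java.util.Map<Long, Invoice> rows = new java.util.HashMap<>();
              public Invoice find(long id) { return rows.get(id); }
              public void save(Invoice inv) { rows.put(inv.id(), inv); }
          }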

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Saturday November 06, 2021 @11:48AM (#61963075) Homepage Journal

    I keep hearing old guys talk about how we should have more trade schools and fewer colleges and universities. But the reality of technology and the human mind is that as technology advances it becomes more complex to apprehend, and the human mind is not advancing as rapidly as technology is. As a result we need more and not fewer specialists, which means we need more college-educated people, and not fewer of them either.

    By the same token, yes the total complexity of software development increases, but any one person typically only works on a portion of it, especially given the use of external libraries.

    Modern software development is only necessarily complex if you unnecessarily reinvent wheels.

    • The problem with idiots going to college is that you just end up with college-educated idiots.

    • I keep hearing old guys talk about how we should have more trade schools and fewer colleges and universities. But the reality of technology and the human mind is that as technology advances it becomes more complex to apprehend, and the human mind is not advancing as rapidly as technology is. As a result we need more and not fewer specialists, which means we need more college-educated people, and not fewer of them either.

      Sorry, but you're a moron if you assume "college" is the answer to this growing problem. Or you're just a shill for that system of obscene Greed.

      "College-educated" does not create specialists. Experience creates that. College provides a path to prove you have dedication, and little more. Go ahead. Tell me what things you use today from the mandated-but-fucking-pointless classes you had to take in college years ago. If you're in IT, I doubt you're using more than 10% of your college education, simply b

      • by fazig ( 2909523 ) on Saturday November 06, 2021 @12:13PM (#61963183)
        It's true that you don't need to graduate from a university to be a software developer.
    There are lots of talented people out there with no formal education.

      However, from personal experience, a lot of them tend to hit a brick wall when they encounter a math problem that takes a bit of trig, more than just the basic vector stuff, and especially calculus. And I don't even want to start with quaternion math, which is required for six-degrees-of-freedom stuff in 3D engines.
      Of course you can learn higher math from experience as well. But the number of people that possess that kind of initiative to do so, without the pressure (and assistance) from an institution like a university, is fairly low.
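      For the curious, a hedged sketch in Java of the quaternion math being referred to: a unit quaternion q rotates a vector v via v' = q v q*, with v embedded as the pure quaternion (0, v). This is a bare illustration, not an engine-grade implementation.

        final class Quat {
            final double w, x, y, z;
            Quat(double w, double x, double y, double z) { this.w = w; this.x = x; this.y = y; this.z = z; }

            // Axis must be unit length; angle in radians.
            static Quat fromAxisAngle(double ax, double ay, double az, double angleRad) {
                double h = angleRad / 2, s = Math.sin(h);
                return new Quat(Math.cos(h), ax * s, ay * s, az * s);
            }

            Quat mul(Quat b) {   // Hamilton product
                return new Quat(
                    w*b.w - x*b.x - y*b.y - z*b.z,
                    w*b.x + x*b.w + y*b.z - z*b.y,
                    w*b.y - x*b.z + y*b.w + z*b.x,
                    w*b.z + x*b.y - y*b.x + z*b.w);
            }

            Quat conj() { return new Quat(w, -x, -y, -z); }   // inverse of a unit quaternion

            // Rotate the vector (vx, vy, vz): q * (0, v) * q^-1.
            double[] rotate(double vx, double vy, double vz) {
                Quat r = this.mul(new Quat(0, vx, vy, vz)).mul(conj());
                return new double[] { r.x, r.y, r.z };
            }
        }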
        • I find two major categories of software developers: the software guys, and the software researchers. The software guys put code together in meaningful ways and build n-tier systems. The researchers keep copying code from everywhere, give it ridiculous names, and hack it until something breaks. They then hand it to a software guy to fix. I've seen research staff who add zero to a development team. I've seen research staff who are a negative investment - they actually inhibit development and produce only br
          • by fazig ( 2909523 )
            Of course graduating from a university doesn't make you a good software developer in every case.

            Quite frankly, most of the actual programming that I learned during my time wasn't very useful at all.
            But beyond that they taught math and software engineering principles. The latter encompassed modelling software graphically through various methods, as well as documentation. Both are quite important for larger projects.
            That's what's valuable in my eyes: they provide you with the tools that are required to s
      • You should probably listen to old guys more. You'll learn something from experience.

        I have enough experience to know that the people who think they know everything without formal education usually have gaping holes in their knowledge that materially affect their ability to make intelligent decisions, especially today. One human lifespan is only so long, and you can only fit so much trial and error into it, but through education you can be exposed to lessons learned by many people in a much shorter time, so it's easy to see what the value of education is. Most of these old guys who are ran

        • Depends upon what "old guys" you're referring to, since computing had its roots in academia. When computing got down to the "everyone can afford a computer" stage is when cowboy programming and "you don't need an education" became popular.

      • Go ahead. Tell me what things you use today from the mandated-but-fucking-pointless classes you had to take in college years ago.

        None of the languages I now use daily even existed when I was in college. Languages I learned in college (Pascal, C, and IBM BAL assembly language) are no longer used in the real world, aside from maintaining existing codebases or very niche things (yes, I actually do use C, though mostly C++, in microcontroller projects; but besides bare-metal and kernel-level stuff, plain C is not used for the enterprise or application development this article talks about).

      • Go ahead. Tell me what things you use today from the mandated-but-fucking-pointless classes you had to take in college years ago.

        Learning how to use locks and threads is a big one that people are missing out on in a microservice (asynchronous) environment. Networking/TCP/IP is another huge one.

        People add a task queue to their environment, and don't know how to use basic flow control. People have multiple servers running the same codebase, and don't know how to use transactions (i.e., locks).

        These kinds of things cause most of the scaling problems you see at companies. It's not the relational database that is slowing them down.
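
        A minimal sketch of that transactions-as-locks point, using plain JDBC in Java (the accounts table and column names are hypothetical): with several app servers sharing one database, SELECT ... FOR UPDATE turns a racy read-modify-write into an atomic one.

          import java.sql.Connection;
          import java.sql.PreparedStatement;
          import java.sql.ResultSet;

          final class Accounts {
              // Debits atomically even when many servers run this same code.
              static void debit(Connection con, long accountId, long amount) throws Exception {
                  con.setAutoCommit(false);   // begin transaction
                  try (PreparedStatement sel = con.prepareStatement(
                           "SELECT balance FROM accounts WHERE id = ? FOR UPDATE");   // row lock
                       PreparedStatement upd = con.prepareStatement(
                           "UPDATE accounts SET balance = ? WHERE id = ?")) {
                      sel.setLong(1, accountId);
                      try (ResultSet rs = sel.executeQuery()) {
                          if (!rs.next()) throw new IllegalStateException("no such account");
                          long balance = rs.getLong(1);
                          if (balance < amount) throw new IllegalStateException("insufficient funds");
                          upd.setLong(1, balance - amount);
                          upd.setLong(2, accountId);
                          upd.executeUpdate();
                      }
                      con.commit();   // releases the row lock
                  } catch (Exception e) {
                      con.rollback();
                      throw e;
                  }
              }
          }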

    • by MeNeXT ( 200840 )

      The problem you are trying to solve is still the same, but the solutions are getting more and more complex. Most of the time they include a lot of irrelevant junk that does nothing to improve the solution and just complicates the process.

      It's not the degree or certificate that determines your capabilities; it's your ability to understand. Before someone can teach it, it has to be imagined and developed. The schooling just introduces you to different existing methods/ideas.

    • I keep hearing old guys talk about how we should have more trade schools and less colleges and universities

      That's because the old guys you're around are right-wingers who feel like they were left out of the upper class by all those smart people, and they'd be just as smart if they just made school easier.

  • by RightwingNutjob ( 1302813 ) on Saturday November 06, 2021 @11:51AM (#61963081)

    Java is an old example of this point.

    Recommended long-ass camelcase names. Requirement to place different classes in different files, no matter how small. Design patterns that encourage lots of small classes.

    Python by its nature doesn't play very well with long source files. Meaningful whitespace kinda makes it painful to the eye to have a big if or loop block, so the stuff in the loop gets put in a different file or in a different function in the same file, even if it's unique to that one instance.

    Interpreted untyped or weakly typed languages create opportunities for whole classes of bugs that would be caught by compilers, and one way of dealing with it is something like Knuth's literate programming approach that makes the source code more verbose, or a crap ton of single-use unit tests that essentially mean writing the same code twice instead of letting the compiler check it for you.

    I don't have a solution for this. Compiled languages and heterogeneous cloud platforms don't really belong in the same sentence, and compiled languages with homogeneous cloud platforms aren't anywhere near a silver bullet either.

    You pay for the resiliency and scalability and flexibility somehow. C'est la vie.

    • by lsllll ( 830002 ) on Saturday November 06, 2021 @12:14PM (#61963189)
      Came here to say almost the same thing. For many of us who learned programming before the 90s, OOP altogether seems like complete bloat and a waste of time. There's nothing I can't achieve in procedural programming that I can achieve in OOP.
      • I'd upvote this if I had points. It's not difficult to do most things that OOP does in procedural languages, and they decided to add all-new fancy terminology for things that already existed to make it seem whizzy.

        Meanwhile, inheritance and subclassing and superclassing look so much like the new GOTO. There's no clean way to find out what the code is really doing. Why exactly are you applying subtraction to a string value, anyway?

        • Why exactly are you applying subtraction to a string value, anyway?

          Removal of an item from a set.

        • There's no clean way to find out what the code is really doing.
          Yes, there is.
          You read the code.
          You learn how to use an IDE.

          And if that is not good enough: use a damn debugger. That is why they got invented.

          If you cannot look at a class hierarchy and more or less instantly grasp what it does, you should not be in programming.

        • OOP involves building custom type systems, which adds about 50% extra code. Maintaining those type systems requires extra time.

          I went from C++ to Java to Groovy to JavaScript over the course of decades. I became aware of how painful and time-consuming type systems are. Now the industry is pushing TypeScript.

          For a five-person application project under 20k lines of code, extensive typing just gets in the way. For 50+ team members, the payback for the extra code investment is better.

          Also, type definitions ca

      • There's nothing I can't achieve in procedural programming that I can achieve in OOP.

        Well, procedural programming makes it more difficult to explode your program's state into a large number of tiny encapsulated pieces that all must interact through interfaces that you have to design and implement.

        Don't ask me why OOP encourages this as though it were a good thing.

        • Making GNU HURD possible. :-p

        • Don't ask me why OOP encourages this as though it were a good thing.
          Because small pieces of code are simpler to grasp and easier to reuse.

          Did you not learn anything in your career?

          • by lsllll ( 830002 )

            Because small pieces of code are simpler to grasp and easier to reuse.

            You mean functions with parameters?

            • Do you need more parameters? Do they have to do with a bunch of different things? Is it really more clear to have a big list of parameters than a short list of containers that carry their values (objects)?
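
              A tiny Java sketch of the difference (all names made up for illustration): the same call, first as a long positional list, then with related values travelling together as one object.

                record MailOptions(String from, String replyTo, boolean html, int retries) {}

                interface Mailer {
                    // Before: seven positional arguments, easy to transpose at the call site.
                    void send(String to, String subject, String body,
                              String from, String replyTo, boolean html, int retries);

                    // After: the related values travel together under one name.
                    void send(String to, String subject, String body, MailOptions opts);
                }
                // usage: mailer.send("a@b.example", "Hi", "...",
                //                    new MailOptions("noreply@b.example", null, false, 3));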

      • There's nothing I can't achieve in procedural programming that I can achieve in OOP.
        Depends what you mean by "achieve".

        Writing two programs that do the same job? Yes, then you are correct.

        On the other hand: you probably have no clue about OOP. Or you would not write such nonsense.

        • OOP is all about organizing the code for human consumption. There is nothing that can be done in any language that can't be done in assembly.

          OOP isn't the only way to organize code nicely, but it gives you some nice tools for the job.

      • For many of us who learned programming before the 90s, OOP altogether seems like complete bloat and a waste of time.

        As somebody who mostly uses plain C and does OOP all the time, this is just a load of BS. You see real problems, but you're not correctly identifying their causes.

        There's nothing I can't achieve in procedural programming that I can achieve in OOP.

        LOL That has nothing to do with anything. Anything you can do in procedural somebody could do with a giant functional WTF, too. It doesn't tell you what a good architecture for a particular system is.

    • Requirement to place different classes in different files, no matter how small.
      That is not required; what is required is one public class per file.

      And no idea what you find wrong with that.

      Design patterns that encourage lots of small classes.
      No one forces you to use such a pattern. And patterns have nothing to do with language anyway.

  • I appreciate being a developer in an era where people actually have choices.

    In my experience, teams use a small subset of the choices available and it's fine. There is a healthy resistance to adoption of new stuff and for good reason -- the churn needs to be worth it. And if it's approved, then a plan is made about how to migrate or phase out the old stuff.

    Write a developer handbook about where to find things, how to get started with the tools, etc. and keep it updated as things change. This is very effecti

  • by Opportunist ( 166417 ) on Saturday November 06, 2021 @12:06PM (#61963153)

    Back in the days when computers were expensive and programmers wore shirts, getting even the most simple of calculations out of that box was an exercise in advanced mathematics. You not only had to understand the problem you were trying to solve, you also had to understand the machine you planned to use for that job. That in turn meant you had to learn a lot of unrelated stuff, from electronics to mathematics, before you could even ponder starting your project.

    That has changed. Fortunately, and unfortunately. We now have languages that abstract the machine below away to the point where you don't even have any kind of control over how what you tell it to do is done. Hell, modern CPUs don't even understand "their" assembler codes anymore, so even if you programmed in assembler, you don't really have any kind of control over the CPU. But that's not even necessary. There's a library for everything, there's a function for anything, and no matter what you want to do, you basically fill a state machine with the values you want to deal with and the abstraction behind it takes care of it.

    This in turn means that everyone thinks they can write code somehow. And that "somehow" is the part that is the problem here. Because these people don't even know what they're doing. More and more, what we get is cargo-cult programming: people who copy/paste snippets from various sources and frankenstein a monster of code together from parts from Github and Sourceforge, with glue they asked for and/or cribbed from Stackexchange. Understanding what that code does is not required.

    Modern software development isn't too complex. The people we use for it simply don't know what's required to do it. We have created this illusion that everyone can code. This may be even true. Just like everyone can somehow cook something with the ready-to-heat meals. Good enough for home cooking it is, no doubt about that. But you wouldn't want to pay a chef for the result.

  • If ever there was a piece of software/system that drove people beyond tears, through despair, and to wanting to harm things, it is Lotus Notes. That monster was so incredibly complex, horrifying, and awful to bend to your will.

  • by organgtool ( 966989 ) on Saturday November 06, 2021 @12:28PM (#61963211)
    Every field that progresses inherently becomes more complex. 150 years ago, the same doctor that treated your bunions would also treat your brain cancer. At a certain point, our knowledge of biology and medicine grew far beyond what one mere mortal could hope to learn, so we created specializations for this knowledge. We're currently going through the same thing with software development, which is why so many developers now tend to identify as a front-end dev, back-end dev, or devops dev. Anyone who claims to be a "full-stack" developer today will likely not be an expert at all, or even most, of the technologies required for all of the different development sectors. I'm sure there are many people that lament these changes, as doctors long ago probably lamented the specializations of medicine, but at a certain point there are just too many different technologies for one person to understand. I see many other comments disparaging certain technologies, but at the same time I can think of many huge benefits those technologies provide. While there's certainly value in questioning whether all of these technologies are necessary or provide real value, I think complexity is here to stay and it's something we're just going to have to get used to and adapt to accordingly.
  • by ffkom ( 3519199 ) on Saturday November 06, 2021 @12:30PM (#61963215)
    Just avoid writing 99.99% of "your" software - by importing even the smallest function from someone somewhere on the Internet. Use some popular tool to automate this process recursively, so you don't even know what hundreds of little code snippets are amalgamated into "your" software.

    If "your" software turns out to fail, just blame the failure on that anonymous guy on the Internet who contributed that dependency of the dependency of something you included.
    If "your" software runs slow, just demand those code writers out there shall optimize their code so it runs faster in "your" software.
    If "your" software has gaping security holes, just blame the Russians/Chinese/Iranians.

    This is what today has become "industry standard", anyway.
  • A point in every direction is the same as no point at all.
    -- The Pointless Man

    This growing level of complexity has led many organizations to adopt a central platform model, where an internal platform team is tasked with vetting the tools most required by engineers, building templates, and plotting golden paths to ease their journey into production, while also centralizing functions like financial operations, security, and governance to ease the cognitive load on individual developers.

    The last contract I worked on, I designed a strategy and framework aimed to do just this. Get rid of many little empires and too many technologies so that the business could hire programmers that could concentrate more on the business goals than creating the anti-pattern of requiring developers to learn a dozen non-programming activities and technologies before something could be delivered. And short circuiting that anti-pattern means the business had programmers that could move around within the organization more easily (requiring learning only the business cases and not a new technology stack) which helps prevent 'the one person empire killed by a bus' issue. There was a team responsible for the framework and how it and things built on it are deployed, tested, and released, and the business oriented teams could focus on delivering value to the business. Before that, the different teams were using whatever shiny thing popped up next. There is just too much available and like the article said, it is a study in entropy.

    One of the nice things about C is that frameworks weren't created for it as much. Yes I know there were, I've used them. But not as insane as it is today. And somehow, billion dollar corporations ran on the code, and people seemed to have happier lives without the manic 'improved user experience' of being glued to phones all day. There's something to be said for dealing with real people. :) Get off my lawn.

  • by hey! ( 33014 ) on Saturday November 06, 2021 @12:36PM (#61963227) Homepage Journal

    At the outset, it was all about how to *do* things. As the years rolled on, it was more and more about getting stuff to *work together*. You can see this on small teams working on projects that might have several facets. You can't build all the bits you need yourself, but the developers are stressed by information overload. There are development practices you can take to mitigate this, but at some point, somebody has to know how all the glue pieces work.

  • by rbrander ( 73222 ) on Saturday November 06, 2021 @12:39PM (#61963239) Homepage

    I could have sworn that Cloud Computing was all about saving money on infrastructure, that it costs money to run a server room, better to rent servers (rather, services) than to buy and maintain your own.

    But if the cost is complexity, then all those savings go down the drain. Running the server room was never the big IT expense; that was the programmers. I've had to pay corporate IT for their work, and the servers were 10% of IT, the human programming and analysis services were 90%.

    So if your Cloud strategy causes a 10% increase in programming, analysis, and implementation costs through increased complexity, then the AWS bill could be zero and you'd still be losing.

    • The problem isn't cloud vs server room. The problem is that the people building the cloud/server don't know what they are doing.

      So you end up with hundreds of servers when you really only need tens of servers. And no one understands the deploy script.

    • by thona ( 556334 )
      Here is my practical experience. In the last 2 larger software projects I WANTED CLOUD. I did not care about the cost. Why? Because in BOTH CASES (and sadly I did not get my will) in-house IT was COMICALLY expensive and EXTREMELY incompetent. In-house VM? 3-6 months waiting, 2 years commitment, no manual refresh (all via billed tickets, taking days), so no having VMs for installation tests that the test script can roll back to a known config. It was comical to try to work with them. In the LAST project the wh
  • by AlanObject ( 3603453 ) on Saturday November 06, 2021 @12:39PM (#61963241)

    When I was starting programming a half century ago I only needed to know FORTRAN and a few job control statements to get paid work done. And of course how to operate a keypunch machine. Back then I had an advantage in having experience with machine/assembly language which even then was only taught as a specialty.

    So here we are today. To write an app in my preferred framework I need to know: Javascript/ECMAScript, Typescript, Angular (template and class library), HTML, the browser DOM, Node.js, CSS, HTTP, REST, JSON (really a subset of JS), RxJS, express, Elasticsearch or SQL. Plus dozens if not hundreds if not thousands of function calls, data structures, and event types all brought in by the frameworks and libraries that become non-optional. (The only one I manage to avoid is JQuery.) Docker containers or maybe VMs if you are involved in deployment. Then there are the IDEs, which help a lot but have their own complexity tolls.

    But of course we are doing things that were nearly unimaginable back then.

    I don't think "complexity kills" but it sure does steepen the learning curve to achieve mastery for any given discipline. At the same time the barrier for entry is much lower. You don't have to be at a university and benefit from a government grant to get access to a machine.

    All in all I think things are better today. Solving linear programming problems by submitting job decks was only as interesting as the problems they were solving, and more often than not that was not very.

  • by holophrastic ( 221104 ) on Saturday November 06, 2021 @12:41PM (#61963247)

    As a programmer-entrepreneur-web developer, now for thirty years, I can say that the problem isn't a problem with programming, but with something very different.

    As discussed, "security" is an entire facet of modern programming. So is cloud-structure. So is testing. So is hosting. And there are plenty more both included in the article and left out.

    The problem is this: every modern development includes absolutely all of them. That's it.

    There aren't many other industries where absolutely everything is all cutting-edge. Think about it. No one programming anything these days uses any platform/structure/model from twenty years ago. Or even ten. It's all new all the time top to bottom.

    Airplane manufacturing is like that. Space rocketry is like that (I presume). Car manufacturing is mostly like that (old computers, but new car stuff).

    But your house? Your plumbing? Your door locks, toilets, garage door, air conditioner, furniture, lawn care, gardening, light bulbs, smoke detectors, pet collars, socks, pants, probably everything you're looking at right now, probably everything that isn't a computer, is never all-brand-new-modern never-existed-before-yesterday stuff.

    If you actually asked a house builder to use only brand new stuff, or a plumber to only use toilets invented this year, it'd cost you a fortune and drive them nuts. Not every plant in your garden is a brand new hybrid created last year. Not every plank of wood on your deck is the best African black walnut.

    My point is this: most modern businesses could absolutely get away with old-style software -- something programmed in Visual FoxPro (yes, I want to vomit too). No hair salon would suffer without cloud-backup. The lumber mill doesn't need encryption to protect their systems from the oh-so-valuable sales history of raw lumber quantities.

    But we, as an industry, force them into the latest-and-greatest because we have no interest in remembering the old ways, nor in maintaining them. So yes, I charge thousands of dollars for cloud-connected office hardware with subscription software for a simple single-location appointment scheduler, for a business that hasn't done anything different for the last forty years -- it's the same barber shop with the same barbers. They used to "invest" in a leather-bound large-sheet appointment book for $100, with $50 in new paper every year. Now they spend $5,000-$10,000 per year for a machine that takes up more space. You know exactly why.

    And so, software development is only more complex because we, as developers (and consumers), refuse to patronize a barber-shoppe that uses scissors, brooms, razors, pen, and paper.

    • by nadass ( 3963991 )
      Common sense? Is that you?! Why are you here? Nobody gains click-views by expounding with down-to-earth common sense remarks such as yours. No, sir. The masses prefer provocative deep-dive criticisms taken wholly out of context!

      Alas, you've hit the nail on the head. The demands of software solutions are broader today, and they must encompass factors which were easily wrapped in a single class file way-back-when -- if they even existed at all!

      "Modern software development" is now a combination of
      • I think it's time for another round of: "I remember the days when..."

        I remember the days when...charging a credit card was a simple http[s] POST of amount, card number, expiry date, seller's ID, and that's it.

        I remember the days when...a backup solution was a weekly (daily) copy command.

        I remember the days when...software was a 1MB executable with no dependencies beyond the OS.

        I remember the days when...software was a 1.44MB executable on a disk that included the OS!

        I remember the days when...software was a

    • Think about it. No one programming anything these days uses any platform/structure/model from twenty years ago. Or even ten. It's all new all the time top to bottom.

      This isn't even remotely true.

      • by holophrastic ( 221104 ) on Saturday November 06, 2021 @02:06PM (#61963501)

        I know! I do! I'm programming in perl, in a platform that I developed fifteen years ago, based on structures that I designed twenty years ago.

        But I refer you to the definition of the word "exception". "no one does it" is the rule, and then there are exceptions to the rule. I am one of those exceptions. Had I said "most don't do it" as the rule, then the word "exception" would be unusable.

        So kudos to you if you, like me, are the exception! I have human-used all-day-every-day business software running on client machines that is now 20 years old! (I recently had to adjust a few lines to have the Win98 code work with Win10 permissions.) I also have some online accounting software 10 years old, being rebuilt now only because the company has changed sales models (and grown five-fold). And, of course, my modern-day-embarrassment of my own invoicing system is impressively 27-years old, having served every online invoice my company has ever created.

    • Further to your point: literally any screw, nut or bolt used in those old industries comes with a whole bunch of industry standards, norms and DIN/ISO/IEEE/whatever certificates and regulations.

      While in our great new software world, you barely have any idea just what Timmy shoved into that one NPM module or fifth degree dependency library you are pulling. As long as it is easily available and latest and greatest, it will go straight to production. And being open source only remedies this to a certain point.

  • by Slicker ( 102588 ) on Saturday November 06, 2021 @01:04PM (#61963303)

    Anyone who has been developing software since the mid-1980's (like me) or earlier should definitely be able to agree... It was awesome for so long but it can be genuinely miserable today... especially using frameworks.

    The pattern I've witnessed is that people observe other people's garbage coding and think the language needs to be more restrictive to prevent that. The end result is that coding actually gets even worse... and a lot less fun. Restrictions tend to force developers into more complex overall solutions. For example, using iterators instead of numerically incrementing variables means we cannot look back and forward at any point in the loop. This means you can do a lot less in a single loop, which is very much what they are trying to stop you from doing. However, it means you still have the same problems to solve and you have to do it in more loops with greater complexity at a higher level. This makes the software bigger, slower, and more complex. It also makes some problems harder to solve.
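
    To make the loop point concrete, a small Java sketch (the example problem is invented): counting boundaries between runs of equal values is one pass with an index and a forward peek, while the iterator version has to carry "previous" state by hand.

      import java.util.List;

      final class Runs {
          // Index version: peek at the next element directly in one pass.
          static int boundaries(List<Integer> xs) {
              int count = 0;
              for (int i = 0; i + 1 < xs.size(); i++) {
                  if (!xs.get(i).equals(xs.get(i + 1))) count++;   // look forward
              }
              return count;
          }

          // Iterator version: no peeking, so carry the previous value as state.
          static int boundariesIter(Iterable<Integer> xs) {
              Integer prev = null;
              int count = 0;
              for (Integer x : xs) {
                  if (prev != null && !prev.equals(x)) count++;
                  prev = x;
              }
              return count;
          }
      }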

    In other words, managers want the language (e.g. Typescript) to manage developers for them. The far better solution would be, instead, to adopt better management practices and actually manage. Friendly code reviews for continuous education of developers, and allowing them time to improve earlier iterations of their code, is ideal. Checkboxes on test cases are sure to result in crap code. Blaming JavaScript for having so many "undefined" messages everywhere isn't an indication of needing something like Typescript. It's an indication that you're not doing regression tests and you are a crappy manager.

    • by Morpeth ( 577066 )

      I hear you. When I think of web development using ASP (what's now called classic ASP) vs. ASP.NET, it's not even the same animal. While there are certainly advantages to the .NET framework, the overall complexity has increased by orders of magnitude.

      Even a simple web site with .NET is not simple; with the original ASP you could knock out a usable, functional site fairly easily. Yes, you were limited in what you could do compared to .NET, but we were able to build real apps with it -- but I think when something

  • by Junta ( 36770 )

    A lot of people *think* it has to be and do crazy crap, but very little of the pain is as necessary as people think it is.

    Now people *believe* any application they imagine can't fit in just one OS, you must have containers and you must have complex IPC over a network so you as a developer must also be doing network management. Everyone thinks *everything* must be delivered by microservices running in containers, and while there is certainly a subset of tasks where that is a wise strategy, I see it applied

  • "Math is hard, let's go shopping!"
    • The funniest part is the old guys pretending they're good programmers talking about how programming was easy in the old days.

      I can't tell if they were assigned a cubicle in the basement to keep them out of the way, or if they just have memory loss.

      In any case, I don't use a bunch of different frameworks at the same time, and things have gotten substantially simpler over the past 30 years.

  • by Somervillain ( 4719341 ) on Saturday November 06, 2021 @01:54PM (#61963471)
    I blame the blind worship of Google and the management cargo cult. So, I have been doing this for a long time and have watched as HTML & Perl/CGI gave way to the horror show that is today. My #1 culprit for everything being shit is we put Google on a pedestal and every business, whether they have 100 customers or a billion, wants to emulate them.

    In contrast, with more mature industries, you have many tools and techniques. The way you build a bungalow is different than how you build Disney Hall or a massive university teaching center. A starter family home in my area has a poured concrete foundation and is built mostly of a wood frame. In other parts of the country, where bricks are cheaper, it may be built out of brick. If it's huge, I notice a lot more concrete and steel frames. The firms choose the tools based on the project needed, not just blindly emulate whatever Apple did for their spaceship campus or whatever is the engineering marvel du jour.

    Microservices are a great example. They're a shit pattern and make no sense unless you're huge. The only benefit they provide is that you can add more instances at a more granular level. However, this comes at a massive cloud computing and ecological expense. Sorry, your internal customer ticket processing app doesn't need 100 microservices, a circuit breaker pattern, smart proxy layers, backpressure, nor even asynchronous IO. Microservices allow you to scale when you have a million simultaneous concurrent users. Anything below that, they just add complexity and expense. Yet....EVERYONE is doing it.

    The same argument applies to moving logic to the browser with Angular. Congrats, you took the absolute fastest and cheapest layer and moved it to the client...now what used to be a simple and fast server-side render in .25s is replaced with 30 REST service calls that require 1.5s to complete and cost you more, because each modular UI component has no clue what's going on and has to send data multiple times, including sending a REST call to get data to send n-number of other REST calls....and then throw away 90% of the payload, because hey...microservices. Better to have a family of one-size-fits-all REST services that return EVERYTHING than just the 5% of fields you need, because that would be admitting you made a dumb design mistake. But hey, you're one of the cool kids with angular and microservices...that faster, cheaper, friendlier, simpler, easier-to-maintain JSP solution is so 2010!!! We're not cavemen here!!!!

    One very painful one is ORM use. ORM is good, but sometimes you use the JPA/Hibernate API, other times you use JPQL or even SQL, and you evaluate on a case-by-case basis. So many devs will load 100 objects into memory, update 1 field in 1 object, and send it to em.save(), where it deletes the 100 child collection objects and reinserts them, at unnecessary risk of deadlock and huge expense from a cloud cost and user time perspective. In their mind, APIs are the future and SQL/JPQL are the past and should be forgotten, even though it's very much a best practice to use the best tool for the job.
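
    A hedged Java/JPA sketch of that pitfall (entity and field names invented; the exact SQL an ORM emits depends on your mappings and cascade settings): loading the aggregate and merging it back can touch every child row, while a targeted JPQL update touches exactly one.

      import jakarta.persistence.EntityManager;

      @jakarta.persistence.Entity
      class PurchaseOrder {
          @jakarta.persistence.Id long id;
          String status;
          void setStatus(String s) { status = s; }
      }

      final class PurchaseOrders {
          // Heavy: pulls the order (and, depending on mappings, its child
          // collection) into memory and lets the ORM diff it on flush.
          static void markShippedHeavy(EntityManager em, long id) {
              PurchaseOrder o = em.find(PurchaseOrder.class, id);
              o.setStatus("SHIPPED");
              em.merge(o);
          }

          // Targeted: one UPDATE statement, no collection handling at all.
          static void markShipped(EntityManager em, long id) {
              em.createQuery("UPDATE PurchaseOrder o SET o.status = :s WHERE o.id = :id")
                .setParameter("s", "SHIPPED")
                .setParameter("id", id)
                .executeUpdate();
          }
      }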

    In greater depth, here's what's going on. The managers have no clue what is going on. When in doubt, emulate the big tech FAANG corps. Developers don't know what's going on either. The few of us who do are part of teams...half of which won't know what's going on, or don't care, or have such huge language barriers they're useless when it comes to explaining things. So your best engineers say "do something like what we did 10 years ago and this is why"...your younger ones say "all the cool kids are doing this new abstraction and all these extra layers." It's very tempting for them to say we're lazy or afraid of change.

    So as a result, a legitimate business need requires you to overhaul an existing application and add a large number of new features. You can retain the existing architecture, but no one wants to do that, for both legit (old framework is not supported any more) and stupid reasons (want to be like
  • by Captain Chad ( 102831 ) on Saturday November 06, 2021 @02:19PM (#61963529) Homepage

    I'm going to chip in here because I've seen a lot.

    I contend the complexity of today's programs is different, not necessarily better or worse. Anyone who's worked on a system with hundreds of thousands of lines of IBM 360/370 Assembler code, like I have, will be nodding their head right now. Brooks' seminal "No Silver Bullet" paper [wikipedia.org] was written in 1987 and was heavily based on his experience in the '60s and '70s managing projects requiring centuries of work-hours. There was quite a lot of complexity back then.

    Then the software engineering community created development software and methods greatly reducing that complexity. I cannot express what a joy it is to have automatic garbage collection in modern languages and features such as classes that help enforce modularity and encapsulation. When I first learned C#, I was gobsmacked because there was a pre-existing data type, language feature, or library method for pretty much everything. I remember learning about sorts, hashes, etc. in college, but nowadays those are all efficient library functions that most people use without caring if it's O(n^2) or O(n log n) or how many duplicate hits you get on your hash algorithm. I once worked on a large production system that contained self-modifying code, and today's development environments are absolutely wonderful in comparison to what existed back then.

    So today's developers, for the most part, don't have to deal with that type of stuff unless they're being sloppy. Instead, as the article stated, it's more about the massive amount of distributed connectivity and networking. Software engineers in the '80s and '90s did a great job of solving many of the early complexity issues. Now we have a new breed of complexity which is just as bad for modern developers, and which we will hopefully solve in the coming decades.

  • by Malays2 bowman ( 6656916 ) on Saturday November 06, 2021 @02:21PM (#61963531)

    I just wanted to write a simple "Hello world" Android app, just to test the dev waters in that environment. The list of requirements to get from idea to a simple white screen that just displays "Hello world" was so mind-blowing that I nearly fell out of my chair. I'm sure there is some "no code" click-together solution, but I wanted to go from raw code in a text editor to a working app.

      This is nuts. "Hello World" on my very first computer was

    10 print "Hello World!"

      and I got

    ] Hello World!

      It took very little to go from that to something that prompted for input, could do math and system operations, draw a circle or character sprites,..etc

      Languages like C, Java, etc. are a bit more involved for a "Hello World!" program, but not much more so.

      I'm grateful that my first programming experience did not involve the Android dev environment, because I would've been put off from programming forever.

    • If you're using an IDE it is much the same as BASIC: you'd still need to create a project file, the IDE would create the boilerplate for you, and Hello World would still be one line.

      From the cli you just run the tool to create a project, and it is similar.

  • But in some ways it is also too easy to make something that is crap.

  • by bb_matt ( 5705262 ) on Saturday November 06, 2021 @02:39PM (#61963571)

    I'd say, software development has absolutely got too complex - but complex due to the sheer volume of moving parts.

    The art of coding has always been difficult, at least, to produce maintainable, well tested and efficient code.
    Anyone can grab a pencil and piece of paper and make marks on it, but not everyone can be a Picasso.

    It seems what software developers spend most of their time doing now, is orchestrating code through numerous layers of delivery and into numerous microservices.
    Then that starts to bite, quite badly, so ideas such as monorepos become popular, because the code has got so fragmented.
    The output of the code may not have much repetition, but the volume of libraries used is ... quite staggering.

    Whilst good coders will realise that the mantra of "not invented here" is a bad one, they'll also realise that keeping a whacking great big third party library in an artifactory and using 1% of it is just ... damn lazy.

    There is FAR too much leaning on third party code, which leads to a greater level of complexity, as you have to track all of these third party vendors - your application could be importing 20 projects, each of which imports 20 projects, each of which ... you get the picture.

    So, sure, a well designed system using microservices has an ability to provide fallbacks or a slightly degraded service if one part of the system fails.
    But that all depends WHICH part.
    And you have to track all of these different microservices, deploy all of them to different containers - which is great, as you can scale them etc.
    But has opened up an entire new world of code that developers are now expected to know.
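
    For what it's worth, the fallback machinery itself can be small; a toy circuit breaker in Java (thresholds arbitrary, not production code) looks roughly like this: after too many consecutive failures it opens and returns the degraded fallback immediately instead of waiting on a dead dependency, then lets a call through after a cool-down to probe for recovery.

      import java.util.function.Supplier;

      final class Breaker<T> {
          private final int maxFailures = 5;
          private final long coolDownMs = 10_000;
          private int failures = 0;
          private long openedAt = 0;

          synchronized T call(Supplier<T> remote, Supplier<T> fallback) {
              boolean open = failures >= maxFailures
                      && System.currentTimeMillis() - openedAt < coolDownMs;
              if (open) return fallback.get();          // fail fast with the degraded answer
              try {
                  T result = remote.get();
                  failures = 0;                         // success closes the breaker
                  return result;
              } catch (RuntimeException e) {
                  if (++failures >= maxFailures) openedAt = System.currentTimeMillis();
                  return fallback.get();
              }
          }
      }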

    You can no longer get away with just producing really well crafted methods; you are also expected to be a devops engineer.
    You are expected to understand the entire CI/CD process.

    It's a VERY difficult job - and you know what, I'm just fine with that.
    I'm able to do it and my skills are, right now, massively in demand - what's not to love? ... well, quite a lot, but ... I can pick and choose where I want to work - I can call those shots, I'm hard to "fire" or to be made "redundant" ... for now.

  • by Tablizer ( 95088 ) on Saturday November 06, 2021 @04:05PM (#61963845) Journal

    Yes! Part of the problem is that web standards suck for CRUD. They are stateless and lack roughly 15 common GUI idioms. I suggest we split web standards into 3 parts:

    1. Media and entertainment: games, video, art, animation, and charts.

    2. Documents: existing HTML may be good enough; however, DOM text positioning is too flawed to display PDF documents reliably, so that needs to be remedied.

    3. CRUD and Data: "Productivity" applications used for business, data analysis, and administration. A stateful GUI markup standard should be created with all the common GUI idioms built in (that HTML lacks or does poorly).

    Ideally they share conventions where appropriate to reduce the learning curve, but let each shine in its specialty. They can be either browser plugins or stand-alone browsers.

  • by burni2 ( 1643061 ) on Saturday November 06, 2021 @04:15PM (#61963877)

    .. leads to a higher level of abstraction, accompanied by layers trying to simplify everything further down, which builds layers upon layers of this abstract mass -> Low Code Agile Onion Development

    L.C.A.O.D.

    or
    S.A.D. -> Severe Acronym Disorder

  • by shess ( 31691 ) on Saturday November 06, 2021 @07:03PM (#61964287) Homepage

    Part of the problem is that a lot of software development these days assumes that you can offload hard work onto libraries and services for free. But this offloading loses one of the key side effects of having to write it yourself: understanding. We know what we need to do to include and link a particular library, but we don't necessarily know if we are making a good choice by pulling in that dependency. Sometimes we are, sometimes we aren't. And when it breaks, we have to figure out what it all means from the ground up, we have no insight from having written the code in the first place. On balance, I think most systems could lose around half their dependencies without materially impacting the challenge of writing the code, and this would improve the resulting product.

    Over the longer term, this gradually whittles away at your base of experience. Instead of spending time creating and debugging real implementations, developers are spending time debugging interactions between other people's implementations - which is basically just noise that doesn't compound into increased abilities over time.

    Yes, obviously there are areas where relying on other people's code is for the best. People make so many serious mistakes writing ad-hoc UTF-8 decoders or ad-hoc storage infrastructure. The craft is in making the tradeoff.

    Don't even get me started on frameworks. These days most vendors just grab a bunch of crap and stuff it in a sack and call it a framework. A proper framework isn't just a collection of components distributed together, it's a set of APIs which are designed to have synergy with each other.
