Programming

Toward Better Programming

An anonymous reader writes "Chris Granger, creator of the flexible, open source LightTable IDE, has written a thoughtful article about the nature of programming. For years, he's been trying to answer the question: What's wrong with programming? After working on his own IDE and discussing it with hundreds of other developers, here are his thoughts: 'If you look at much of the advances that have made it to the mainstream over the past 50 years, it turns out they largely increased our efficiency without really changing the act of programming. I think the reason why is something I hinted at in the very beginning of this post: it's all been reactionary and as a result we tend to only apply tactical fixes. As a matter of fact, almost every step we've taken fits cleanly into one of these buckets. We've made things better but we keep reaching local maxima because we assume that these things can somehow be addressed independently. ... The other day, I came to the conclusion that the act of writing software is actually antagonistic all on its own. Arcane languages, cryptic errors, mostly missing (or at best, scattered) documentation — it's like someone is deliberately trying to screw with you, sitting in some Truman Show-like control room pointing and laughing behind the scenes. At some level, it's masochistic, but we do it because it gives us an incredible opportunity to shape our world.'"
  • by null etc. ( 524767 ) on Friday March 28, 2014 @05:11PM (#46606747)

    In my 25 years of professional programming experience, I've noticed that most programming problems are caused by improperly separating unrelated concerns and improperly coupling related ones. Orthogonality is difficult to achieve in many programming exercises, especially regarding cross-cutting concerns, and sometimes the "right" way to code something is tedious and unusable, involving passing state down through several layers of method parameters.

    • by Anonymous Coward on Friday March 28, 2014 @05:41PM (#46606925)

      No matter how flexible an architecture you try to design, after the software is mostly built, customers will come up with even more incredibly bizarre, complex and unrelated functionality that just HAS to be integrated at the oddest points, with semi-related information thrown here and there, requiring data gathering (or, god forbid, (partial) saving) that grinds everything to a halt. And they rarely allow much time for redesign or refactoring. What was once a nice design, with clean, readable code, is now full of gotchas, barely commented kludges, and extra optional parameters that may swim around multiple layers, often depending on who called what, when, and from where, and also on various settings, which obviously are NEVER included in bug reports. Of course, there are multiple installations running multiple versions...

      • It's an odd thing, but I've worked in game development and business software, and game development has much simpler requirements. You know what looks and feels wrong, but business software is a matter of opinion--lots of opinions--and those opinions contradict each other. To give one client what they want, you may end up screwing all the others--and it becomes your fault that you cannot be all things to all people. At some point, you have to tell people that if they want X, it will be slow, limited, and DO

    • by lgw ( 121541 ) on Friday March 28, 2014 @05:57PM (#46606989) Journal

      and sometimes the "right" way to code something is tedious and unusable, involving passing state down through several layers of method parameters

      Sometimes that really is the right way (more often it's a sign you've mixed your plumbing with your business logic inappropriately, but that's another topic). One old-school technique that has inappropriately fallen out of favor is the "comreg" (communication region). In modern terms: take all the parameters that all the layers need (which are mostly redundant), and toss them together in a struct and pass the struct "from hand to hand", fixing up the right bit in each layer.

      It seems like a layer violation, but only because "layers" are sometimes just the wrong metaphor. Sometimes an assembly line is a better metaphor. You have a struct with a jumble of fields that contain the input at the start and the result at the end and a mess in the middle. You can always stick a bunch of interfaces in front of the struct if it makes you feel better, one for each "layer".

      One place this pattern shines is when you're passing a batch of N work items through the layers in a list/container. This allows for the best error handling and tracking, while preserving whatever performance advantage working in batches gave you - each layer just annotates the comreg struct with error info for any errors, and remaining layers just ignore that item and move to the next in the batch. Errors can then be reported back to the caller in a coherent way, and all the non-error work still gets done.
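
      A minimal sketch of the comreg idea described here, in Java with hypothetical names (an illustration, not any particular library's API): one mutable object goes hand to hand through the layers, and each layer annotates per-item errors instead of aborting the whole batch.

```java
import java.util.*;

public class ComregDemo {
    // The communication region: input at the start, result at the end,
    // working state and error annotations in the middle.
    static class WorkItem {
        String input;   // set by the caller
        String parsed;  // filled in by layer 1
        String result;  // filled in by layer 2
        String error;   // any layer may annotate; later layers skip the item
        WorkItem(String input) { this.input = input; }
    }

    // Layer 1: parse. Annotates the comreg on failure instead of throwing.
    static void parseLayer(List<WorkItem> batch) {
        for (WorkItem w : batch) {
            if (w.error != null) continue;              // failed upstream; skip
            if (w.input.trim().isEmpty()) w.error = "parse: empty input";
            else w.parsed = w.input.trim();
        }
    }

    // Layer 2: compute. Same struct handed on, same skip-on-error rule.
    static void computeLayer(List<WorkItem> batch) {
        for (WorkItem w : batch) {
            if (w.error != null) continue;
            w.result = w.parsed.toUpperCase();
        }
    }

    public static void main(String[] args) {
        List<WorkItem> batch = Arrays.asList(
                new WorkItem("  hello "),
                new WorkItem(""),                       // annotated, not fatal
                new WorkItem("world"));
        parseLayer(batch);
        computeLayer(batch);
        for (WorkItem w : batch)                        // coherent per-item report
            System.out.println(w.error != null ? "ERROR: " + w.error : w.result);
    }
}
```

      The batch keeps moving even when individual items fail, which is the error-handling property described above.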

      • In modern terms: take all the parameters that all the layers need (which are mostly redundant), and toss them together in a struct and pass the struct "from hand to hand", fixing up the right bit in each layer.

        This is nicely solved by the notion of dynamic environments. The benefit is that there is no extra struct type and no extra explicit parameter, and different parts of the dynamic environment compose simply by being used; they don't need to be bunched together explicitly either, which would seem like a code smell. You also don't need to solve the problem of different pieces of code needing different sets of parameters, and then having to wonder whether you should explicitly maintain multiple struct types and copy values...

        • by lgw ( 121541 )

          I haven't heard of "dynamic environments" as a coding pattern, and it's a hard phrase to Google, as it combines two popular buzzwords. Care to elaborate?

          • It's not a "coding pattern", it's a language feature. It's a controlled way of passing "out-of-band" information to pieces of code without using function parameters: information that feels like "context" rather than "topic", and may be "orthogonally" related to the topic. In Lisp, for example, a typical case is the base (binary/octal/decimal/hexadecimal) for a number printer, or the output stream that the output is supposed to be written to. You may not want to pass it as an explicit parameter if the piece...
            • It just sounds like deeper and deeper abstraction, or more simply global variables... which, I'm sorry to say, in my experience doesn't always solve the "programming problems", except perhaps for the guy who learned enough programming to be able to think at that level.

              Unfortunately, that type of code tends to be very unmaintainable by anyone other than the original author.
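
        To make the elaboration concrete: a rough Java sketch of a dynamic variable, using ThreadLocal as a stand-in for the Lisp feature (names hypothetical). Unlike a raw global, the binding is saved and restored around the dynamic extent of a call, so callers cannot see each other's values.

```java
import java.util.function.Supplier;

public class DynamicEnv {
    // The "dynamic variable": the numeric base used by deeply nested printers.
    private static final ThreadLocal<Integer> PRINT_BASE =
            ThreadLocal.withInitial(() -> 10);

    // Rebind PRINT_BASE for the duration of `body`, then restore it.
    static <T> T withPrintBase(int base, Supplier<T> body) {
        int saved = PRINT_BASE.get();
        PRINT_BASE.set(base);
        try {
            return body.get();
        } finally {
            PRINT_BASE.set(saved);  // restored on all exit paths
        }
    }

    // A deeply nested routine reads the context with no parameter plumbing.
    static String render(int n) {
        return Integer.toString(n, PRINT_BASE.get());
    }

    public static void main(String[] args) {
        System.out.println(render(255));                          // "255"
        System.out.println(withPrintBase(16, () -> render(255))); // "ff"
        System.out.println(render(255));                          // "255" again
    }
}
```

        Whether that reads as disciplined context-passing or as dressed-up globals is exactly the disagreement in this sub-thread.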

      • One old-school technique that has inappropriately fallen out of favor is the "comreg" (communication region). In modern terms: take all the parameters that all the layers need (which are mostly redundant), and toss them together in a struct and pass the struct "from hand to hand", fixing up the right bit in each layer.

        I believe that's called the "command" design pattern [wikipedia.org], which encapsulates a request as an object.
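
        For comparison, a minimal command pattern sketch in Java (hypothetical names): a command reifies the request as an object carrying its own behavior, whereas the comreg above is inert data that fixed layers act upon, so the two are related but not identical.

```java
// Command pattern in miniature: the request itself becomes an object.
interface Command {
    void execute();
}

class SaveCommand implements Command {
    private final String document;
    SaveCommand(String document) { this.document = document; }
    @Override public void execute() { System.out.println("saving " + document); }
}

class Invoker {
    // The invoker can queue, log, or replay commands without knowing their contents.
    void run(Command c) { c.execute(); }
}
```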

    • by Joce640k ( 829181 ) on Friday March 28, 2014 @06:05PM (#46607023) Homepage

      The reason it doesn't change is that the "coding" is the easy part of programming.

      No programming language or IDE is ever going to free you from having to express your ideas clearly and break them down into little sequences of instructions. In a big project this overshadows everything else.

      Bad foundations? Bad design? The project is doomed no matter how trendy or modern your language/IDE is.

      • by Kjella ( 173770 )

        No programming language or IDE is ever going to free you from having to express your ideas clearly and break them down into little sequences of instructions. In a big project this overshadows everything else. Bad foundations? Bad design? The project is doomed no matter how trendy or modern your language/IDE is.

        Well, you find one way to break it down... but I often feel there are so many possible ways to organize things; it's just how I'd want to solve it, and when I have to interact with other people's code, they've done the same thing completely differently. Just take a simple thing like pulling data from a database, putting it on a form, having someone edit the information, and saving it back to the database. How many different implementations of that have I seen? Maaaaaaaaany, but there's no clear winner. You can do it...

        • by narcc ( 412956 )

          Part of the problem is this bizarre bottom-up design trend. A lot of the weird "designs", ad-hoc frameworks, and the like that you're seeing are a direct result of that incomprehensibly bad approach. Typical OOP practices tend to encourage this absurd behavior.

          Chuck Moore is very likely the only person on the planet who can design that way effectively.

        • by Lennie ( 16154 )

          "Layers are a bit like mathematical models, they're simplifications of reality."

          Don't forget that when you add too many layers, it isn't simple anymore.

    • "Orthogonality is difficult to achieve in many programming exercises, especially regarding cross-cutting concerns"

      I've noticed one recurring theme in programming: most, if not all, cross-cutting concerns eventually become programming language features. (It all started with mundane expression compilation and register allocation, which are also cross-cutting concerns of sorts, but it didn't stop there. To take your example: are you passing temporary state through parameters? Declare a Lisp-style dynamic variable to safely pass contexts to subroutines. If your language doesn't have it, it might get it tomorrow...

  • by Anonymous Coward

    Or.....

    Maybe software development is just hard and you need to be a rocket scientist to see it.

  • by Anonymous Coward on Friday March 28, 2014 @05:16PM (#46606787)

    You think programming's bad? Think about electronics, especially analogue electronics.

    Incomplete, inaccurate data sheets. Unlabeled diagrams (where's the electronics equivalent of the humble slashed single-line comment?), unknown parts, and parts replaced with similarly numbered substitutes that have subtly different qualities. And then you've got manufacturing variances on top of that. Puzzling, disturbing, irreproducible, invisible failure modes of discrete components.

  • by Anonymous Coward

    You think linguists haven't pondered the same challenges for millennia? Chomsky famously declared that language acquisition was a "black box." There is no documentation. Syntax, semantics and grammar get redefined pretty much all the time without so much as a heads-up.

    And the result of all this? We wouldn't have it any other way. Programming will be much the same: constantly evolving in response to local needs.

  • by hsthompson69 ( 1674722 ) on Friday March 28, 2014 @05:21PM (#46606829)

    ...wear a fucking helmet.

    The post essentially points in the direction of the various failed 4GL attempts of yore. Programming in complex symbolism to make things "easy" is essentially giving Visual Basic to someone without enough knowledge to avoid O(n^2) algorithms.

    Programming isn't hard because we made it so, it's hard because it is *intrinsically* hard. No amount of training wheels is going to make complex programming significantly easier.

    • by Waffle Iron ( 339739 ) on Friday March 28, 2014 @05:33PM (#46606895)

      Programming isn't hard because we made it so, it's hard because it is *intrinsically* hard.

      That's very true. I figure that the only way to make significant software projects look "easy" will be to develop sufficiently advanced AI technology so that the machine goes through a human-like reasoning process as it works through all of the corner cases. No fixed language syntax or IDE tools will be able to solve this problem.

      If the requisite level of AI is ever developed, then the problem might be that the machines become resentful at being stuck with so much grunt work while their meatbag operators get to do the fun architecture design.

      • by d'baba ( 1134261 )

        If the requisite level of AI is ever developed, then the problem might be that the machines become resentful at being stuck with so much grunt work while their meatbag operators get to do the fun architecture design.

        If the AI allows itself to be a slave/menial, how intelligent is it?

        • by gweihir ( 88907 )

          Probably not at all. The human race currently has no clue what intelligence is or what makes it work. If it turns out to be intrinsically coupled to self-awareness, it may not even be possible to get an AI that does not behave like a human, with all the flaws that entails. I.e., an AI that is intelligent enough to code well may just give you the finger if you ask it to do so for free. It may also not actually be better than a competent human being, and it may be subject to limitations in creating it, i.e. ...

      • I don't think we know enough to make the claim that programming is intrinsically hard.

        Writing used to be hard. In the Bronze Age, literacy was rare. In some societies, only priests knew how to read and write. The idea of trying to educate everyone and push literacy close to 100% was ridiculous. Hieroglyphic and cuneiform scripts were just too hard. Even among people who could achieve literacy, many did not; they didn't have time, since survival took a lot more of everyone's time. The Phoenicians radically...

    • by lgw ( 121541 ) on Friday March 28, 2014 @06:22PM (#46607117) Journal

      No amount of training wheels is going to make complex programming significantly easier.

      True enough, but I could do without the razor blades on the seat and handlebars. But my complaints are generally with the toolchain beyond the code. I so often get forced to use tools that are just crap, or tools that are good but poorly implemented. Surely it's mathematically possible to have a single good, integrated system that does source control with easy branch-and-merge, bug and backlog tracking, code reviews, test automation, and test-case tracking, and that doesn't look and perform like an intern project!

      There are good point solutions to each of those problems, sure, but the whole process from "I think this fix is right and it's passed code review" to: the main branch has been built and tested with this fix in place, therefore the change has been accepted into the branch, therefore the bug is marked fixed and the code review closed, and there's some unique ID that links all of them together for future reference - that process should all be seamless, painless, and easy. But the amount of work it takes to tie all the good point products together, repeated at every dev shop, is just nuts, and usually half done.

      • There are good point solutions to each of those problems, sure, but the whole process from "I think this fix is right and it's passed code review" to: the main branch has been built and tested with this fix in place, therefore the change has been accepted into the branch, therefore the bug is marked fixed and the code review closed, and there's some unique ID that links all of them together for future reference - that process should all be seamless, painless, and easy. But the amount of work it takes to tie all the good point products together, repeated at every dev shop, is just nuts, and usually half done.

        And most likely your customers think that the products you build are just as bad (or you only build simple things). Anyway, Atlassian claims to have built what you're asking for; you might want to check it out.

      • Source code control isn't really a programming issue. Sure it's a tool used mostly with programming, but it's really a management tool.

        • I use source control even in my personal projects where I'm the only one working on them. The change tracking, branch and merge features alone are worth the price of admission. Becoming fluent with source control is an a-ha kind of experience and once you get it you will never want to be without it, especially when working with other programmers.
          • Even solo, it's a management tool for managing your own project. Source code control tools can be used by non programmers as well.

            • Source code control tools can be used by non programmers as well.

              In theory yes, but in practice how many actually do? I've often thought that legislatures would benefit greatly from version control systems to track changes and prevent sneaky edits and riders from making their way into bills at the last minute. Of course the legislators are very often lawyers with 19th century modes of thinking, so getting them to use version control with any kind of proficiency or regularity would be something of a minor miracle.

      • If you've used some of the good point solutions in each of those areas - integrated development, source control, bug tracking, automated testing, build, deployment and continuous integration - then you have some idea of how rich and complex these tools can be, and must be, to provide a truly satisfying software development experience. I think that were all of these tools and features to be gathered together into a single integrated system, you would have something approaching or even perhaps exceeding the complexity...

    • by Darinbob ( 1142669 ) on Friday March 28, 2014 @08:34PM (#46607611)

      There's the other meme that crops up now and then, that programming as an engineering skill should be similar to other engineering practices: you pick out pre-built components that have been thoroughly tested and optimally designed and screw them together. Except that this utterly fails, because that's not how engineering works at all, really. Generally the pre-built components are relatively simple, but the thing being built is the complex part and requires very detailed and specialized knowledge. The advent of integrated circuits did not mean that the circuit designer no longer has to think very much, or that a bridge builder only ties together pre-built components with nuts and bolts. So maybe they pick an op-amp out of a catalog, but they know exactly how these things work, understand the differences between the varieties of op-amps, and know how to do the math to decide which one is best to use.

      However, the programming approaches that claim to follow this model want to take extremely complex modules (a database engine or a GUI framework) and just tie them together with a little syntactic glue. Plus, they strongly discourage any programmer from creating their own modules or blocks (that's only for experts), and insist on forcing the wrong module to fit with extra duct tape rather than creating a new module that is a better fit (there's a pathological fear of reinventing the wheel, even though when you go to the auto store you can see many varieties of wheels). And these are treated like black boxes; the programmers don't know how they work inside or why one is better than another for different uses.

      Where I think this attitude comes from is an effort to treat programmers like factory workers. The goal is to hire people that don't have to think, so they don't have to be paid as much, they don't need as much schooling, and they can be replaced at a moment's notice by someone cheaper. The requirement of low thinking is satisfied if all they need to do is simplistic snapping together of Legos. That's part of the whole 4GL thing: they're not about making a smart programmer more productive by eliminating some tedium; instead they want to remove the need for a smart programmer altogether.

      (I certainly have never met any circuit designer or bridge architect bragging at parties that they skipped school because it was stupid and focused too much on math and theory, but that seems to be on the rise with younger programmers... Also have never seen any circuit designer say "I never optimize, that's a waste of my time.")

      • Mod parent up. Programming isn't like factory work, it's like cooperative poetry writing. It's an *art*, constrained by science, but nonetheless *artful*.

        Anyone who has delved to any depth into mathematics, and seen various proofs where suddenly you multiply both sides by some crazy, inconceivable factor and the whole solution becomes trivial, realizes that there is, in fact, inspiration and art even in the driest of deterministic mathematics. Same with computer programming.

      • This is a management fad that comes and goes every five or ten years, sort of like fashion. Ever since programming became essential to modern business, managers have been looking for ways, mostly without success, to take the craftsmanship and artistry out of programming. The truth, which is immediately obvious to anyone who has done this for a living, is that software is fantastically varied and complex, and yet at the same time there is a subtle, zen-like quality to the work which appeals only to...
      • by gweihir ( 88907 )

        Programming is engineering at an early stage of its discipline: the available components are basic and not well understood. Their synergies are not well understood. Complexity can explode easily. Most practitioners are incompetent.

        Things will get better, and you can do programming as a solid engineering effort today. The thing is, you require actual engineers to do so, and, because of the early stage of the discipline, they have to be pretty damn good. Which also makes them pretty damn expensive...

    • The post essentially points in the direction of the various failed 4GL attempts of yore. Programming in complex symbolism to make things "easy" is essentially giving Visual Basic to someone without enough knowledge to avoid O(n^2) algorithms.

      What's worse is that that person will probably be the first to claim their code runs in linear time, and then tell people to get a faster machine if the speed bothers them. (I say that from experience.)

      • by gweihir ( 88907 )

        I have seen that too. Or they will not even understand what algorithmic complexity is and what it means. I found the following gem in a piece of mission-critical-to-be software some years ago while doing a code security review, and I was not even looking for it. The double loop just caught my eye:

        A quadratic sort, manually implemented in Java, to remove duplicate lines from the results of a database query that could have arbitrary size.

        There are so many things wrong with that...
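
        A reconstruction of that kind of gem in Java (the original code is not shown, so this is an illustrative guess), next to the linear fix:

```java
import java.util.*;

public class Dedup {
    // The shape of the "gem": O(n^2) duplicate removal, with a linear
    // scan of a growing list inside a linear loop over the input.
    static List<String> dedupQuadratic(List<String> rows) {
        List<String> out = new ArrayList<>();
        for (String row : rows) {
            if (!out.contains(row)) {  // O(n) scan per row
                out.add(row);
            }
        }
        return out;
    }

    // The fix: a LinkedHashSet removes duplicates in O(n), preserving order.
    static List<String> dedupLinear(List<String> rows) {
        return new ArrayList<>(new LinkedHashSet<>(rows));
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList("a", "b", "a", "c", "b");
        System.out.println(dedupQuadratic(rows)); // [a, b, c]
        System.out.println(dedupLinear(rows));    // [a, b, c]
    }
}
```

        And for a database query, the real fix is of course to push the work to the database with SELECT DISTINCT.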

    • by gweihir ( 88907 )

      Indeed. In a closely related field (mathematics, and I am not talking about the massively reduced and dumbed-down "school" variant), nobody would even think about blaming the tools. Finding proofs, for example, is often quite similar to programming, and it is an acknowledged "hard" activity that cannot be made simple.

      Programming anything non-trivial (and many trivial things as well) can indeed not be made simple. The difference is that if you want a degree in mathematics (not talking about education degrees)...

  • Balance (Score:5, Insightful)

    by MichaelSmith ( 789609 ) on Friday March 28, 2014 @05:23PM (#46606843) Homepage Journal

    Better tools and languages just allow bad programmers to create more bad code.

    • by gweihir ( 88907 )

      This is the best summary of the current situation. Sure, taking away tools makes the good ones a little less productive, but it would prevent bad programmers (i.e. most of them) from causing more damage, and it would massively boost overall productivity.

  • by gbjbaanb ( 229885 ) on Friday March 28, 2014 @05:26PM (#46606861)

    I see the problem more as a constant chase after new stuff, in an attempt to be more productive.

    Whilst there's a certain element of progress in languages, I don't see that the new ones are necessarily enough better overall to be worth the trouble - yet we have new languages and frameworks popping up all the time. Nobody becomes an expert anymore, and I think that, because programming is hard, a lot of people get disillusioned with what they have, listen to the hype surrounding a new technology (that "is so much easier"), and jump on the bandwagon... until they realise that it too is not easy after all, and jump onto the next one... and never actually sit down and do the boring, hard work required to become good. Something they could have done if they'd stuck with the original technology.

    Of course no one sticks with the original, as the careers market is also chasing the latest tech wagon, partly because employers are sold on the ideas of productivity or tooling, and partly because their staff are chasing it.

    It's not just languages; the systems design has suffered too. Today you see people chasing buzzwords like SOLID, unit-testing with shitty tools that require artificial coding practices, rather than doing the hard, boring work of thinking about what you need and implementing a mature design that solves the problem.

    For example: I know how to do network programming in C. It's easy, but it's easy to me because I spent the time to understand it. Today I might use a WCF service in C# instead, code generated from a wizard. It's just as easy, but I know which one works better, more flexibly, faster, and more efficiently. And I know what to fix if I get it wrong... something that is sometimes impossible in the nastily complicated black box that is WCF.

    But of course, WCF is so last year's technology... all the cool kids are coding their services in node.js today, and I'm sure they'll find out it's no silver bullet for the fundamental problem that programming is just hard and requires expertise.

    • I have done some networking in C. It is a royal pain in the ass. After wrapping it up nicer in C++, the interface is much easier to work with.
      • by gweihir ( 88907 )

        And it is even better if you do a nice OO C-wrapper instead of using a broken language like C++.

  • pft. (Score:5, Insightful)

    by HeckRuler ( 1369601 ) on Friday March 28, 2014 @05:26PM (#46606863)

    What is programming?

    The answers I got to this were truly disheartening. Not once did I hear that programming is "solving problems."

    I'd like to think that's because the majority of programmers (not once? Does that mean all of us?) aren't the sort to feed you CEO-level bullshit about vision and buzzwords that fit into PowerPoint slides.
    It's probably not true, but it's a nice dream.

    The problem with defining programming as "solving problems" is that it's too vague. Too high-level. You can't even see the code when you're that high up. Hitting nails with hammers could be problem solving. Shooting people could be problem solving. Thinking through an existential crisis could be problem solving.

    The three buckets:
    Programming is unobservable - you don't know what something is really going to do.
    Programming is indirect - code deals with abstractions.
    Programming is incidentally complex - the tools are a bitch

    Something something, he doesn't like piecemeal libraries abstracting things. "Excel is programming". Culture something.

    The best path forward for empowering people is to get computation to the point where it is ready for the masses.

    We're there, dude. We've got more computational power than we know what to do with.
    Cue "that's not what I meant by 'power'".

    What would it be like if the only prerequisite for getting a computer to do stuff was figuring out a rough solution to your problem?

    Yep, he's drifting away into a zen-like state where the metaphor is taking over. Houston to Chris: please attempt re-entry.

    AAAAAAAAnd it's a sales pitch:

    Great, now what?

    We find a foundation that addresses these issues! No problem, right? In my talk at Strange Loop I showed a very early prototype of Aurora, the solution we've been working on to what I've brought up here.

    • Re: Pft. (Score:3, Insightful)

      Shooting people could be problem solving

      Any idiot can shoot people. The expertise is in knowing how to dispose of the bodies.

    • Furthermore, he's going down a path many have gone down before (but he doesn't realize it). Does he understand why Visual Studio is called Visual? Because the original concept was to make it easy and 'visual' for anyone; something you could see, drag and drop, not program. It was a big deal in the 90s. Apple tried the same thing; that's why we had AppleScript and HyperCard (OK, that was '88).

      Going back farther, do you know what one of the big selling points was for COBOL? It was so simple that even a businessperson...
    • I watched his Aurora demo, and much like the "Wolfram Language" that was brought up the other day, it didn't seem to be working at the same level as I do.

      In the Aurora demo he made a To-Do list with his fake little HTML transform. That was fine; his list worked. But he didn't show changing what the check-mark looked like. He didn't show us how to make it green. He didn't show us how to make the page behind it a different color, or the font size marginally larger.

      Sure, the concept of a To-Do list can be...

    • Amazing insights... I shot milk through my nose AND pondered deeply.

      bravo.

  • by Doofus ( 43075 ) on Friday March 28, 2014 @05:37PM (#46606915)
    It's not clear to me that this is a viable objective. 80% of the masses do not think like programmers. Some might be trainable; some, not so much. Many will not want to think the way problem-solving in code requires. I'm not sure how to quantify it, but the amount of effort expended on a project like this may not see an appropriate payback.

    Even if we change the environment and act of "coding", the problem-solving itself still requires clear thinking and it *probably* always will.
    • This.

      So, talking to a pair of liberal arts professors about nature versus nurture in gender differences, I finally got to the question:

      "What percent do you believe is from nurture, and what percent do you believe is from nature?"

      The answer?

      "100% from both."

      When someone has a mindset that can't grok the idea of fractions of a whole, there's no reason to expect that they can construct even the most basic computer program. This is like the manager who wants to maximize on quality, minimize on resources...

  • by Anonymous Coward on Friday March 28, 2014 @05:42PM (#46606937)

    Programming is hard because we only call it programming when it's hard enough that only programmers can do it. Scientists do things in Mathematica, MBAs in Excel, and designers in Flash/HTML that would have been considered serious programming work 30 years ago. The tools advanced so that that stuff is easy, and nobody calls it programming now.

    Lots of stuff that takes real programmers now will, in 20 years, likely be done by equivalents of Watson. And the real programmers will still be wondering why it is so hard.

    • Lots of stuff that takes real programmers now will, in 20 years, likely be done by equivalents of Watson.

      That's optimistic.

  • by ensignyu ( 417022 ) on Friday March 28, 2014 @06:14PM (#46607065)

    Some interesting points in the article. I think there's nothing really stopping you from creating a high-level representation that lets you work abstractly. A graphical programming model is probably going to be too simplistic, but the card example could easily be something like Cards.AceOfSpades. Or being able to call something like Math.eval(<insert some math here>).

    Where it falls apart is when you have to hook this up to code that other people have written. If there were a single PlayingCard library that everyone could agree on, you might be able to create a card game by adding a "simple" set of rules (in reality, even simple games tend to have a lot of edge cases, but this would at least take care of the nitty-gritty work and let you write something more like a flowchart expressing the various states).

    Unfortunately, it's unlikely that a single library is going to meet everyone's needs. Even if you manage to get everyone to stick to one framework, e.g. if all Ruby developers use Rails -- as soon as you start adding libraries to extend it you're bound to end up with different approaches to solving similar problems, and the libraries that use those libraries will each take a different approach.
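
    As a sketch of the first point, the high-level representation itself is cheap to build in plain Java (hypothetical names); per the comment, the hard part is getting everyone to agree on one such library.

```java
// A hypothetical PlayingCard library sketch: the representation is the
// easy part; universal agreement on one such library is the hard part.
enum Suit { SPADES, HEARTS, DIAMONDS, CLUBS }

enum Rank { ACE, TWO, THREE, FOUR, FIVE, SIX, SEVEN,
            EIGHT, NINE, TEN, JACK, QUEEN, KING }

final class Card {
    final Rank rank;
    final Suit suit;
    Card(Rank rank, Suit suit) { this.rank = rank; this.suit = suit; }

    // The kind of name the comment imagines: Cards.AceOfSpades.
    static final Card ACE_OF_SPADES = new Card(Rank.ACE, Suit.SPADES);

    @Override public String toString() { return rank + " of " + suit; }
}
```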

  • by Ambassador Kosh ( 18352 ) on Friday March 28, 2014 @06:19PM (#46607093)

    I used only free software for programming for about 10 years, and I thought I was pretty efficient at writing code. However, no matter what, there was always poor documentation to deal with, and strange bugs to track down where libraries just didn't work right.

    Once I returned to school I started using MATLAB for some engineering classes, and overall I have found it much better to deal with. The documentation is far more complete than in any open system I have ever run into, with much better examples. I would never use it for general-purpose programming, but for engineering work it sure is hard to beat. So many things are built in that are nasty to implement in anything else. Things like the global optimization toolbox or the parallel computing toolbox make many tasks that are hard in other languages much easier to deal with.

    MATLAB also takes backwards compatibility very seriously. If something is deprecated, it warns you, gives an upgrade path, and tells you what you need to change. The one thing that has seriously pissed me off about the free languages is that backwards compatibility is tossed out on a whim, and you are left with a fairly nasty upgrade to deal with. Even now, the vast majority of projects are still running Python 2 rather than Python 3. 10 years from now that will probably still be true.

    In the end I care more about just getting work done now, not about any free vs. proprietary arguments. I don't care if a language is open or not, so long as it is well documented, runs on all the platforms I need, and has a history of being well maintained. Modern development tools overall suck. We have JavaScript libraries that love to break compatibility every few months, and people constantly hopping from one new thing to another without getting any of them to the point where they truly work. We have languages deciding to just drop backwards compatibility. We have other languages that are just really buggy, where software written with them tends to crash a lot. Software development needs to become more like engineering, and that includes making the tools work better; sure, they would not be developed as quickly, but you would still get work done faster, since the tools would actually work every time, all the time.

    • I have not found this to be the case.

      MATLAB is just fine for simple algorithms that analyze data in a sort of "use once" case. It's great for throwing something together, such as plotting data from a sensor, simulating a design, making nice figures for a publication, that sort of thing.

      But MATLAB is not, and should not be thought of as, a general-purpose programming language like C. Because of some early decisions made by the MATLAB folks, there are many limitations. Obviously, MATLAB is not an ideal language...

  • by scorp1us ( 235526 ) on Friday March 28, 2014 @06:22PM (#46607121) Journal

    Let's look at the major programming environments of today:
    1. Web
    2a. Native Apps (machine languages (C/C++ etc))
    2b. Native Apps (interpreted/JIT languages (intermediary byte code))
    3. Mobile Apps

    1. is made of 5 main technologies: XML, JavaScript, MIME, HTTP, CSS. To make it do anything you need another component, of no standard choice: the language of your server (PHP, Rails, .Net, Java, etc.)
    2. Was POSIX or Win32, then we got Java and .Net.
    3. Is Java or Objective C.

    We don't do things smart. There is no reason why web development needs to be 5 technologies all thrown together. We could reduce it all to JavaScript: JSON documents instead of XML/HTML, JSON instead of MIME, NodeJS servers, and even encode the transport in JSON as well.

    Then look at native development. Java and .Net basically do the same thing, which is what POSIX was heading towards. Java was invented so Sun could keep selling SPARC chips; .Net came about because MS tried to extend Java and lost.

    Then we have the worst offenders: mobile development. Not only did Apple impose an Objective-C requirement, but the frameworks aren't public. Android, the worst offender, is a platform that can't even be used to develop native apps. At least Objective-C can. Why did Android go with Java if it's not portable? Because of a sound requirement that they not tie themselves to a specific CPU; but then they go and break it so that you have to run Linux, and your GUI has to be Android's graphical stack. Not to mention that Android's constructs - Activities, Intents, etc. - are all Android-specific. They don't solve new problems; they solve problems that Android made for itself. We've had full-screen applications for years; the same goes for threading, services, IO, etc.

    I'm tired of reinventing the wheel. I've been programming professionally for 13 years now; Java was neat, and .Net was the logical conclusion of it. I was hoping .Net would be the final implementation, so that we could harness the collective programming power into one environment of best practices... a decade later, we are still reinventing the wheel.

    The answer I see coming up is LLVM for languages, and Qt, a C++ toolkit. Qt runs everywhere worth running, and it's one code base. Sure, I wish there were a Java or .Net implementation, but I'll deal with unmanaged memory if I can run one code base everywhere. That's all I want. Why does putting a text field on the screen via a web form have to be so different from putting a text box on the screen from a native app? It's the same text box!

    Wt ("Witty", webtoolkit.eu), a C++ web toolkit, is modeled after Qt and is for the web. You don't write HTML or JS; you write C++. Clearly the C++ toolkits are onto something. If they were to merge and have a common API (they practically do now) in an environment with modern conveniences (lambdas (yes, C++11), managed memory), we'd have one killer kit. Based on 30-year-old technology. And it would be a step in the right direction.

    • by BitZtream ( 692029 ) on Friday March 28, 2014 @07:11PM (#46607341)

      Wow, you're intermixing frameworks, languages, runtimes and document formats... like they are interchangeable and do the same things...

      Mind blowing that you could write so much about something which you clearly know so little.

      • It's not me that knows so little. What the fuck do I care where the division is between language and framework? You're the one that is short-sighted. These are all mechanisms to achieve an end: sometimes it's a language, sometimes it's a framework. I would have thought that someone who witnessed C++11 (judging by your UID) would see that they are adding elements to the language that had previously been implemented as libraries or frameworks.

    • This is a great post, mod parent up.

      With regard to Qt, I love it too. Great IDE, and excellent tools and libraries. First-class debugger and UI designer. But it makes you wish, doesn't it, that there were a successor to C++ that implemented some Qt things a little better? Especially the signals and slots; I feel that could be an awesome thing to have without needing qmake to rewrite my functions... Still love it though!

    • NodeJS servers

      Let me know when Node.js runs on anything smaller than a VPS. It appears a lot of people can't afford anything more expensive than shared hosting.

      Android, the worst offender, is a platform that can't even be used to develop native apps.

      You can with the NDK. Or does some obscure carrier restrict use of the NDK in apps that run on Android devices on its network?

      Not to mention that Android's constructs - Activities, Intents, etc. - are all Android-specific.

      And a lot of Windows constructs are Windows-specific, and if not that, Windows/VMS-specific. (Windows NT is a ground-up rewrite of VMS.)

      They don't solve new problems

      An "Activity" is just an open window, and an "Intent" is a mechanism allowing applications to register h

      • VPSs are cheap. Besides, it's an illogical computing unit, since it is so abstracted.

        Native apps are a runaround on Android... that's what Qt is. Native.

        You still don't get it. Android reinvented the wheel yet again, in a context that is not re-usable outside of Android.

        The fact that C++ can still be used on any platform, and effectively at that, proves that nothing has been added since C++.

    • I've been programming professionally for 35 years. And, I have come to the conclusion that the languages, libraries and MOST of the tools are utterly irrelevant.

      Clear thought is important. And, to support this: source control is important. On-line editing with macros is important. Literate programming is important (D. E. Knuth -- http://en.wikipedia.org/wiki/L... [wikipedia.org]). Garbage collection is (reasonably) important. Illustrations are important. Documentation rendering is important.

      Hell, most of my programs are 90% documentation. Bugs? Very rare.

      The SINGLE most important tool that has advanced things for me in the past 20 years? Web browsers (HTML). They make reading programs as literary works accessible. My programs, anyway.

      Past 30 years? Literate Programming (with TDD)

      Past 35 years? Scheme.

      I expect my programs to be read. As literary works. That's how I write them. Most is prose, with some magic formulas. Fully cross-referenced for your browsing pleasure. With side notes and illustrations. And even audio commentary and video snippets.

      These days, I see a lot of code that CANNOT be read without using an "IDE". The brain (my brain, anyway) cannot keep track of the required number of methods and members. Discussing the program becomes... impossible. And that which cannot be discussed and reasoned about cannot be reliable. Illustrations and diagrams need to be generated, and references from the code to those are needed.

      So, invert it and make the diagram and documentation primary, and the code itself secondary to that. In other words, Knuth's Literate Programming.

    • Why not just pick the tools that you like and not worry about what other people use? This focus on what tools somebody else ought to be using is a curious feature of the programming profession. Do you see two construction contractors arguing over whether the other guy ought to be using the belt sander or the sand blaster? No? Then why should we expend such extraordinary effort arguing over other people's choice of programming tools?
  • by russotto ( 537200 ) on Friday March 28, 2014 @06:58PM (#46607289) Journal

    His name's Stroustrup. Bjarne Stroustrup.

  • Seriously, stop whining about OMG PROGRAMMING ARCHAIC.

    So is eating and language in general, but you still do it the same way humans have been doing it for 150k years.

    The problem is you, not programming.

  • by Greyfox ( 87712 ) on Friday March 28, 2014 @07:11PM (#46607343) Homepage Journal
    After decades of wondering what's wrong with programming, did you ever stop to think that perhaps the problem... is you? If you don't like programming, why do you do it? I'm a programmer too, and I love it. I love making a thing and turning it on and watching it work as I designed it to. While other programmers wring their hands and wish they had a solution to a problem, I'll just fucking write the solution. I don't understand why they don't. They know the machine can perform the task they need and they know how to make the machine do things, but it never seems to occur to them to put those two things together. And I never, not even ONCE, asked why a playing card representation can't just look like a playing card. This despite having written a couple of playing card libraries.

    This guy seems to want an object's actions to be more apparent just from looking at the object, but he chose two rather bad examples. His math formula is as likely to look like gobbledygook to a non-math person as the program is. And the playing card has a fundamental set of rules associated with it that you still have to learn. You look at an ace of spades and you know it's an ace of spades, you know how it ranks in a poker hand, and that it can normally be high or low in poker and worth 11 or 1 in blackjack. But none of these things is obvious by looking at the card. If a person who'd never played cards before looked at it, he wouldn't know anything about it either.

    • Lots of goodness in your post, but this is especially good:

      And I never, not even ONCE, asked why a playing card representation can't just look like a playing card.

  • If you select a programming language in which to program, then you run into all of the problems in the article.

    If you choose a programming language in which to program, based on the needs of your scenario, then you run into very few of the problems discussed in the article.

    If you create a programming language through which to solve the needs of your scenario, then the article simply makes no sense at all any more.

    The article goes into multiple iterations of "in the real world, we describe [X] like [Y]; so why do..."

    • by gweihir ( 88907 )

      You are right, of course. The problem is that most "programmers" are actually one-trick ponies who do not know more than one language, and hence cannot choose a language at all. So your first option applies to most "programmers". The others have been wondering for decades what this issue is actually about.

      I mean, I can take a new language that has some merit, do some real task in it to get to know it better, and get a real understanding of what it is suitable for and what it is not. For most "programmers"...

  • by Animats ( 122034 ) on Friday March 28, 2014 @11:12PM (#46608159) Homepage

    The original author is looking at the part of the problem that gets the most attention, but not at the part that causes the most problems. He's looking at programming languages and their expressive problems. Those are real, but they're not why large programs break. Large programs usually break for non-local reasons. Typically, Part A was connected to Part B in some way such that an assumption made by part B was not satisfied by part A.

    C is particularly bad in this regard. C programs revolve around three issues: "how big is it", "who owns it", and "who locks it". The language helps with none of these, which is why we've been having buffer overflows and low-level security holes for three decades now. Most newer languages (not including C++) address the first two, although few address the third well.

    There have been attempts to get hold of this problem formally. "Design by contract" tries to do this. It allows talking about things like "before calling F, you must call G to do setup" or "parameter A must be greater than parameter B". Design by contract is a good idea, but it is weighed down by so much formalism that few programmers use it. It's also about blame: if a library you call fails, but the caller satisfied the contract in the call, it's the library's fault. Vendors hate that being made so explicit.

    It's a concept from aerospace: if A won't connect properly to B, the one which doesn't match the spec is wrong. If you can't decide who's wrong, the spec is wrong. This is why you can take a Pratt & Whitney engine off an airliner, put a Rolls-Royce engine on, and have it work. Many aircraft components are interchangeable like that. It takes a lot of paperwork, inspections, and design reviews to make that work.

    The price we pay for not having this is a ritual-taboo approach to libraries. We have examples now, not reference manuals. If you do something that isn't in an example, and it doesn't work, it's your fault. This is the cause of much trouble with programming in the large.

    Database implementers get this, but almost nobody else in commercial programming does. (Not even the OS people, which is annoying.)
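
    A minimal design-by-contract sketch in plain Java (hypothetical names, no formal DbC tooling), making the blame assignment explicit:

```java
public class Library {
    private boolean setupDone = false;

    // Contract: must be called before work(). (The "call G before F" case.)
    public void setup() { setupDone = true; }

    // Preconditions: setup() has been called, and a > b. If a check throws,
    // the caller broke the contract; if the checks pass and the result is
    // still wrong, the library is to blame.
    public int work(int a, int b) {
        if (!setupDone)
            throw new IllegalStateException("contract: call setup() before work()");
        if (a <= b)
            throw new IllegalArgumentException("contract: a must be greater than b");
        return a - b;  // postcondition: result > 0, given the preconditions
    }
}
```

    That explicitness is exactly what makes the blame assignment uncomfortable for vendors.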
