Programming

Toward Better Programming

An anonymous reader writes "Chris Granger, creator of the flexible, open source LightTable IDE, has written a thoughtful article about the nature of programming. For years, he's been trying to answer the question: What's wrong with programming? After working on his own IDE and discussing it with hundreds of other developers, here are his thoughts: 'If you look at many of the advances that have made it to the mainstream over the past 50 years, it turns out they largely increased our efficiency without really changing the act of programming. I think the reason why is something I hinted at in the very beginning of this post: it's all been reactionary, and as a result we tend to only apply tactical fixes. As a matter of fact, almost every step we've taken fits cleanly into one of these buckets. We've made things better, but we keep reaching local maxima because we assume that these things can somehow be addressed independently. ... The other day, I came to the conclusion that the act of writing software is actually antagonistic all on its own. Arcane languages, cryptic errors, mostly missing (or at best, scattered) documentation — it's like someone is deliberately trying to screw with you, sitting in some Truman Show-like control room pointing and laughing behind the scenes. At some level, it's masochistic, but we do it because it gives us an incredible opportunity to shape our world.'"

  • by Anonymous Coward on Friday March 28, 2014 @06:16PM (#46606787)

    You think programming's bad? Think about electronics, especially analogue electronics.

    Incomplete, inaccurate data sheets. Unlabeled diagrams (where's the electronics equivalent of the humble slashed single-line comment?), with parts left unidentified, and parts replaced by similarly numbered substitutes with subtly different characteristics. And then you've got manufacturing variances on top of that. Puzzling, disturbing, irreproducible, invisible failure modes of discrete components.

  • by Anonymous Coward on Friday March 28, 2014 @06:18PM (#46606811)

    You think linguists haven't pondered the same challenges for millennia? Chomsky famously declared that language acquisition was a "black box." There is no documentation. Syntax, semantics and grammar get redefined pretty much all the time without so much as a heads-up.

    And the result of all this? We wouldn't have it any other way. Programming will be much the same: constantly evolving in response to local needs.

  • by hsthompson69 ( 1674722 ) on Friday March 28, 2014 @06:21PM (#46606829)

    ...wear a fucking helmet.

    The post essentially points in the direction of the various failed 4GL attempts of yore. Programming in complex symbolism to make things "easy" amounts to handing Visual Basic to someone without enough knowledge to avoid O(n^2) algorithms.

    Programming isn't hard because we made it so, it's hard because it is *intrinsically* hard. No amount of training wheels is going to make complex programming significantly easier.
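
    To make the O(n^2) point concrete, here's a minimal sketch (a hypothetical example, not from the post): the naive version re-scans the list for every element, while the second does the same job in one pass, and no amount of tooling will spot the difference for you.

        #include <string>
        #include <unordered_set>
        #include <vector>

        // Accidentally quadratic: re-scans the rest of the list for every element.
        bool hasDuplicateNaive(const std::vector<std::string>& items) {
            for (std::size_t i = 0; i < items.size(); ++i)
                for (std::size_t j = i + 1; j < items.size(); ++j)
                    if (items[i] == items[j]) return true;
            return false;
        }

        // Linear expected time: one pass with a hash set.
        bool hasDuplicate(const std::vector<std::string>& items) {
            std::unordered_set<std::string> seen;
            for (const auto& s : items)
                if (!seen.insert(s).second) return true; // insert failed -> already seen
            return false;
        }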

  • by Waffle Iron ( 339739 ) on Friday March 28, 2014 @06:33PM (#46606895)

    Programming isn't hard because we made it so, it's hard because it is *intrinsically* hard.

    That's very true. I figure that the only way to make significant software projects look "easy" will be to develop sufficiently advanced AI technology so that the machine goes through a human-like reasoning process as it works through all of the corner cases. No fixed language syntax or IDE tools will be able to solve this problem.

    If the requisite level of AI is ever developed, then the problem might be that the machines become resentful at being stuck with so much grunt work while their meatbag operators get to do the fun architecture design.

  • by lgw ( 121541 ) on Friday March 28, 2014 @06:57PM (#46606989) Journal

    and sometimes the "right" way to code something is tedious and unusable, involving passing state down through several layers of method parameters

    Sometimes that really is the right way (more often it's a sign you've inappropriately mixed your plumbing with your business logic, but that's another topic). One old-school technique that has undeservedly fallen out of favor is the "comreg" (communication region). In modern terms: take all the parameters that all the layers need (which are mostly redundant), toss them together in a struct, and pass the struct "from hand to hand", fixing up the right bit in each layer.

    It seems like a layer violation, but only because "layers" are sometimes just the wrong metaphor. Sometimes an assembly line is a better metaphor. You have a struct with a jumble of fields that contain the input at the start and the result at the end and a mess in the middle. You can always stick a bunch of interfaces in front of the struct if it makes you feel better, one for each "layer".

    One place this pattern shines is when you're passing a batch of N work items through the layers in a list/container. This allows for the best error handling and tracking, while preserving whatever performance advantage working in batches gave you - each layer just annotates the comreg struct with error info for any errors, and remaining layers just ignore that item and move to the next in the batch. Errors can then be reported back to the caller in a coherent way, and all the non-error work still gets done.
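
    A minimal sketch of that comreg-plus-batch pattern (the names are hypothetical, not anyone's actual code): each layer walks the container, skips items an earlier layer flagged, and fills in only its own fields.

        #include <string>
        #include <vector>

        // A hypothetical "comreg": every field any layer touches, passed hand to hand.
        struct WorkItem {
            std::string input;    // set by the caller
            std::string parsed;   // set by the parse layer
            std::string result;   // set by the compute layer
            bool failed = false;  // any layer can flag an error...
            std::string error;    // ...and annotate it here
        };

        // Each layer annotates errors and skips already-failed items,
        // so the rest of the batch still gets processed.
        void parseLayer(std::vector<WorkItem>& batch) {
            for (auto& item : batch) {
                if (item.failed) continue;
                if (item.input.empty()) {
                    item.failed = true;
                    item.error = "empty input";
                    continue;
                }
                item.parsed = item.input;  // stand-in for real parsing
            }
        }

        void computeLayer(std::vector<WorkItem>& batch) {
            for (auto& item : batch) {
                if (item.failed) continue;
                item.result = "done: " + item.parsed;  // stand-in for real work
            }
        }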

  • Re:Proverb (Score:1, Interesting)

    by Anonymous Coward on Friday March 28, 2014 @07:21PM (#46607113)

    The point of that proverb is that a good craftsman chose his tools to begin with, so he has only himself to blame.

    There's more to it than that. A good craftsman can get by with suboptimal tools.
    I'm not saying you can write a web browser in Malbolge, but there's a lot of software out there right now, and some of it is good.

  • by scorp1us ( 235526 ) on Friday March 28, 2014 @07:22PM (#46607121) Journal

    Let's look at the major programming environments of today:
    1. Web
    2a. Native Apps (compiled-to-machine-code languages (C/C++, etc.))
    2b. Native Apps (interpreted/JIT languages (intermediate byte code))
    3. Mobile Apps

    1. is made of 5 main technologies: XML, JavaScript, MIME, HTTP, CSS. To make it do anything you need another component, with no standard choice: your server-side language (PHP, Rails, .Net, Java, etc.).
    2. was POSIX or Win32; then we got Java and .Net.
    3. is Java or Objective-C.

    We don't do things smart. There is no reason why web development needs to be 5 technologies all thrown together. We could reduce it all to JavaScript: JSON documents instead of XML/HTML, JSON instead of MIME, Node.js servers, and even the transport encoded as JSON.

    Then look at native development. Java and .Net basically do the same thing, which is what POSIX was heading toward. Java was invented so Sun could keep selling SPARC chips; .Net came about because MS tried to extend Java and lost.

    Then we have the worst offenders: mobile development. Not only did Apple impose an Objective-C requirement, but the frameworks aren't public. Android, the worst offender, is a platform that can't even be used to develop native apps. At least Objective-C can. Why did Android go with Java if it's not portable? Because of an otherwise sound requirement that they not tie themselves to a specific CPU. But then they go and break it: you have to run Linux, and your GUI has to be Android's graphical stack. Not to mention that Android's constructs (Activities, Intents, etc.) are all Android-specific. They don't solve new problems; they solve problems that Android made for itself. We've had full-screen applications for years; the same goes for threading, services, I/O, etc.

    I'm tired of reinventing the wheel. I've been programming professionally for 13 years now; Java was neat, and .Net was its logical conclusion. I was hoping .Net would be the final implementation, so that we could harness our collective programming power into one environment of best practices... a decade later, we were still reinventing the wheel.

    The answer I see coming up is LLVM for languages, plus Qt, a C++ toolkit. Qt runs everywhere worth running, and it's one code base. Sure, I wish there were a Java or .Net implementation, but I'll deal with unmanaged memory if I can run one code base everywhere. That's all I want. Why does putting a text field on the screen via a web form have to be so different from putting a text box on the screen from a native app? It's the same text box!

    Wt ("witty"), a C++ web toolkit (webtoolkit.eu), is modeled after Qt and targets the web. You don't write HTML or JS; you write C++. Clearly the C++ toolkits are onto something. If they were to merge and share a common API (they practically do now) in an environment with modern conveniences (lambdas (yes, C++11), managed memory), we'd have one killer kit. Based on 30-year-old technology. And it would be a step in the right direction.
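
    For the native half of that comparison, putting a text box on the screen with Qt really is about this small (a minimal sketch using the standard Qt widgets API):

        #include <QApplication>
        #include <QLineEdit>

        int main(int argc, char* argv[]) {
            QApplication app(argc, argv);  // one application object per GUI process
            QLineEdit textBox;             // the text box itself
            textBox.show();                // a parentless widget becomes a window
            return app.exec();             // run the event loop
        }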

  • I've been programming professionally for 35 years. And, I have come to the conclusion that the languages, libraries and MOST of the tools are utterly irrelevant.

    Clear thought is important. And, to support this: Source control is important. On-line editing with macros is important. Literate programming is important (DE Knuth -- http://en.wikipedia.org/wiki/L... [wikipedia.org]). Garbage collection is (reasonably) important. Illustrations are important. Documentation rendering is important.

    Hell, most of my programs are 90% documentation. Bugs? Very rare.

    The SINGLE most important tool that has advanced things for me in the past 20 years? Web Browsers (HTML). Makes reading programs as literary works accessible. My programs, anyway.

    Past 30 years? Literate Programming (with TDD)

    Past 35 years? Scheme.

    I expect my programs to be read. As literary works. That's how I write them. Most is prose, with some magic formulas. Fully cross-referenced for your browsing pleasure. With side notes and illustrations. And even audio commentary and video snippets.

    These days, I see a lot of code that CANNOT be read without using an "IDE". The brain (my brain, anyway) cannot hold the required number of methods and members. Discussing the program becomes... impossible. And that which cannot be discussed and reasoned about cannot be reliable. Illustrations and diagrams need to be generated, and references from the code to those are needed.

    So, invert it and make the diagram and documentation primary, and the code itself secondary to that. In other words, Knuth's Literate Programming.
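
    A small sketch of what that inversion can look like day to day, with Doxygen-style comments standing in for full literate programming (the example itself is hypothetical):

        /**
         * Computes compound growth.
         *
         * The balance grows by `rate` each period, so after `periods`
         * periods the principal has been multiplied by (1 + rate)^periods.
         *
         * @param principal starting amount
         * @param rate      per-period growth rate (0.05 means 5%)
         * @param periods   number of compounding periods
         * @return the final balance
         */
        double compound(double principal, double rate, int periods) {
            double balance = principal;
            for (int i = 0; i < periods; ++i)
                balance *= 1.0 + rate;
            return balance;
        }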
