Back To 'The Future of Programming' 214

theodp writes "Bret Victor's The Future of Programming (YouTube video; Vimeo version) should probably be required viewing this fall for all CS majors — and their professors. For his recent DBX Conference talk, Victor took attendees back to the year 1973, donning the uniform of an IBM systems engineer of the times, delivering his presentation on an overhead projector. The '60s and early '70s were a fertile time for CS ideas, reminds Victor, but even more importantly, it was a time of unfettered thinking, unconstrained by programming dogma, authority, and tradition. 'The most dangerous thought that you can have as a creative person is to think that you know what you're doing,' explains Victor. 'Because once you think you know what you're doing you stop looking around for other ways of doing things and you stop being able to see other ways of doing things. You become blind.' He concludes, 'I think you have to say: "We don't know what programming is. We don't know what computing is. We don't even know what a computer is." And once you truly understand that, and once you truly believe that, then you're free, and you can think anything.'"
This discussion has been archived. No new comments can be posted.

  • I knew it (Score:4, Funny)

    by ArcadeMan ( 2766669 ) on Friday August 09, 2013 @01:34PM (#44523547)

    Every time some stupid colleagues of mine told me I was doing it wrong, I kept thinking they were close-minded idiots.

    Turns out, I was right all along!

  • Hmm (Score:5, Insightful)

    by abroadwin ( 1273704 ) on Friday August 09, 2013 @01:41PM (#44523635)
    Yes and no, I think.

    On the one hand, it is a good thing to guard against constrained thinking. I work with someone who thinks exclusively in design patterns; it leads to some solid code, in many cases, but it's also sometimes a detriment to his work (overcomplicated designs, patterns used for the sake of patterns).

    Unlearning all we have figured out in computer science is silly, though. Use the patterns and knowledge we've spent years honing, but use them as tools and not as crutches. I think as long as you look at something and accurately determine that a known pattern/language/approach is a near-optimal way to solve it, that's a good application of that pattern/language/approach. If you're cramming a solution into a pattern, though, or only using a language because it's your hammer and everything looks like a nail to you, that's bad.
    • Re:Hmm (Score:5, Insightful)

      by orthancstone ( 665890 ) on Friday August 09, 2013 @02:13PM (#44524025)

      Use the patterns and knowledge we've spent years honing, but use them as tools and not as crutches.

      Having just watched this video a few hours ago (it sat in my queue for a few days; providence seemingly put it in front of me right before this story popped), I can say he argues against this very idea.

      Late in the talk, he mentions how a generation of programmers learned very specific methods for programming, and in turn taught the next generation of programmers those methods. Because the teaching only involved known working methods and disregarded any outlying ideas, the next generation believes that all programming problems have been solved and therefore never challenges the status quo.

      Much of his talk references the fact that many of the "new" ideas in computing were actually discussed and implemented in the early days of programming. Multiple core processing, visual tools and interactions, and higher level languages are not novel in any way; he's trying to point out that the earliest programmers had these ideas too, but we ignored or forgot them due to circumstances. For example, it is difficult to break out of the single processing pipeline mold when one company is dominating the CPU market by pushing out faster and faster units that excel at exactly that kind of processing.

      While TFS hits on the point at hand (don't rest on your laurels), it is worth noting that the talk is trying to emphasize open-mindedness towards approaches to programming. While that kind of philosophical take is certainly a bit broad (most employers would rather you produce work than redesign every input system in the office), it is important that innovation still be emphasized. I would direct folks to look at the Etsy "Code as Craft" blog as an example of people who are taking varying approaches to solving problems by being creative and innovating instead of simply applying all the known "best practices" on the market.

      I suppose that final comment better elaborates this talk in my mind: Don't rely on "best practices" as if they are the best solution to all programming problems.

      • Much of his talk references the fact that many of the "new" ideas in computing were actually discussed and implemented in the early days of programming. Multiple core processing, visual tools and interactions, and higher level languages are not novel in any way; he's trying to point out that the earliest programmers had these ideas too, but we ignored or forgot them due to circumstances. For example, it is difficult to break out of the single processing pipeline mold when one company is dominating the CPU market by pushing out faster and faster units that excel at exactly that kind of processing.

        I can attest to this. The phrase "Everything old is new again." (Or "All of this has happened before, and all of this will happen again." for you BSG fans) is uttered so frequently in our office that we might as well emblazon it on the door. It's almost eerie how well some of the ideas from the mainframe era fit into the cloud computing ecosystem.

      • Much of his talk references the fact that many of the "new" ideas in computing were actually discussed and implemented in the early days of programming. Multiple core processing, visual tools and interactions, and higher level languages are not novel in any way; he's trying to point out that the earliest programmers had these ideas too, but we ignored or forgot them due to circumstances.

        So what's the point? They want a cookie? They want people not to use these concepts even now that they are viable bec

    • by jythie ( 914043 )
      As with so many things, it is a matter of balance. We now have what, 60 years or so of computer science under our collective belts, and there are a lot of good lessons learned in that time... but on the downside most people only know (or choose to see) a subset of that knowledge and overapply some particular way of doing things... then they get promoted, and whatever subculture within CS they like becomes the dogma for where they work.
    • Designs are only complicated when they are unique. If I write my own LinkedHashMap to store 2 values, it is overcomplicated. If I just invoke a standard java LinkedHashMap to store 2 values, then it's the same design, but since everyone knows what a java LinkedHashMap does, it is simple. Also, it can be swapped out for a simple array with relative ease if the code is designed in a way that is maintainable (see the sketch below).

      Even if you are using design patterns, you should be leveraging not just the knowledge that other peo
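
      For illustration, a minimal Java sketch of the point above (the keys and values are invented for the example): the stock java.util.LinkedHashMap is instantly recognizable, and because the code only depends on the Map interface it can later be swapped for something simpler.

        import java.util.LinkedHashMap;
        import java.util.Map;

        public class TwoValues {
            public static void main(String[] args) {
                // A stock LinkedHashMap: a HashMap that also remembers insertion order.
                // Everyone knows what it does, so the design stays simple.
                Map<String, String> pair = new LinkedHashMap<>();
                pair.put("first", "alpha");
                pair.put("second", "beta");

                // Iteration follows insertion order, unlike a plain HashMap.
                pair.forEach((key, value) -> System.out.println(key + " -> " + value));
            }
        }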

  • by xxxJonBoyxxx ( 565205 ) on Friday August 09, 2013 @01:42PM (#44523647)

    >> We don't know what programming is. We don't know what computing is. We don't even know what a computer is.

    Aha - they found the guy who trains InfoSys employees.

  • by rvw ( 755107 ) on Friday August 09, 2013 @01:42PM (#44523649)

    The future of programming, from the seventies, it's all hippie talk...

    "We don't know what programming is. We don't know what computing is. We don't even know what a computer is." And once you truly understand that, and once you truly believe that, then you're free, and you can think anything.'"

    Next thing we can throw our chairs out and sit on the carpet with long hair, smoke weed and drink beer....

    • Re:70s yeah right! (Score:5, Insightful)

      by Zero__Kelvin ( 151819 ) on Friday August 09, 2013 @01:46PM (#44523691) Homepage

      "Next thing we can throw our chairs out and sit on the carpet with long hair, smoke weed and drink beer...."

      If you aren't doing it that way already, then you're doing it wrong.

    • So...Steve Ballmer got part of it right? I mean, throwing chairs is his specialty, right?
    • Re:70s yeah right! (Score:5, Interesting)

      by phantomfive ( 622387 ) on Friday August 09, 2013 @01:59PM (#44523847) Journal

      The future of programming, from the seventies, it's all hippie talk...

      What you don't understand is, in ~1980 with the minicomputer, Computer Engineering got set back decades. Programmers were programming with toggle switches, then stepped up to assembly, then started programming with higher level languages (like C). By the 90s objects started being used, which brought the programming world back to 1967 (Simula). Now mainstream languages are starting to get first-class functions. What a concept, where has that been heard before?

      Pretty near every programming idea that you use daily was invented by the 80s. And there are plenty of good ideas that were invented back then that still don't get used much.

      My two favorite underused (old) programming ideas:

      Design by contract.
      Literate programming.

      If those two concepts caught on, the programming world would be 10 times better.
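
      To make the first of those concrete, here is a rough design-by-contract sketch in Java; Java has no native contract syntax, so the pre- and postconditions are written as explicit checks and assertions, and the Account class is invented for the example.

        /** Withdrawals must honor a simple contract: positive amount, sufficient funds. */
        public class Account {
            private long balanceCents;

            public Account(long openingBalanceCents) {
                this.balanceCents = openingBalanceCents;
            }

            public void withdraw(long amountCents) {
                // Preconditions: what the caller must guarantee.
                if (amountCents <= 0) throw new IllegalArgumentException("amount must be positive");
                if (amountCents > balanceCents) throw new IllegalStateException("insufficient funds");

                long before = balanceCents;
                balanceCents -= amountCents;

                // Postcondition and class invariant, checked when assertions are enabled (java -ea).
                assert balanceCents == before - amountCents : "postcondition violated";
                assert balanceCents >= 0 : "invariant violated: negative balance";
            }
        }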

      • by DamonHD ( 794830 )

        I did an entire thesis with Tangle and Weave, and I'm glad that I did, but I'm not convinced that a narrative exposition is any better than the more random-access style you get from a hierarchical directory layout with some decent (embedded and out-of-line) documentation and a viewer IDE.

        Rgds

        Damon

        • I consider the important elements of literate programming to be: the idea that you are writing for a human, rather than for a computer; and making the structure of the program clear to other humans, rather than what's best for the compiler. If you do these, then I would say that you are doing literate programming.

          If you have any other ideas on the topic I'd be interested in hearing them.
          • by DamonHD ( 794830 )

            Naturally I *have* to ultimately present the program text in a form that the computer will be happy with, but I am very hot on appropriate human-centric documentation in-line and off-line, and phrases that make me spit include:

            "It's self documenting."

            "Oh it's hard to keep the comments in sync with the code."

            Farg me!

            I'm suffering on a new project from a slight lack of consideration as to what the coder following in your footsteps would need in order to understand what was intended vs what actually happened.

      • Re:70s yeah right! (Score:5, Insightful)

        by DutchUncle ( 826473 ) on Friday August 09, 2013 @02:34PM (#44524223)
        In college in the 1970s, I had to read the Multics documents and von Neumann's publications. We're still reinventing things that some very clever people spent a lot of time thinking about - and solving - in the 1960s. It's great that we have the computer power and memory and graphics to just throw resources at things and make them work, but imagine how much we could make those resources achieve if we used them with the attitude those people had towards their *limited* resources. And we have exactly the same sets of bottlenecks and tradeoffs; we just move the balance around as the hardware changes. Old ideas often aren't *wrong*, they're just no longer appropriate - until the balance of tradeoffs comes around again, at which point those same ideas are right again, or at least useful as the basis for new improved ideas.
        • imagine how much we could make those resources achieve if we used them with the attitude those people had towards their *limited* resources.

          We would gain nothing; hell, we would still have god-damn teletype machines if everyone was worried about wasting nanoseconds of compute time. We would get some multiple of compute increase, but we would lose out on the exponential increases in human productivity that come from dealing with all of those abstractions automatically.

          But I think we agree in that we need to focus on fixing the problems that need fixing. It's more important to figure out what you need done and then figure out whether you will h

          • Sorry, I think you missed my intent. Lots of people have pointed out how much of their hot new computer power winds up being wasted on fancy-frosted-translucent-glass GUI effects which don't actually achieve anything. Not only is that a waste of my CPU time, it's a waste of so much computing resource around the world - and equally a waste of the time and effort of presumably clever and artistic developers.
      • by murdocj ( 543661 )

        Speaking of setting programming back, the current push in languages to get rid of declaring types of variables and parameters has set us back a few decades. In languages like Ruby, you can't say ANYTHING about your code without executing it. You have no idea what type of objects methods receive or return, whether formal and actual parameters match, or whether types are compatible in expressions, etc. I actually like a lot of aspects of Ruby, but it seems like it's thrown about 50 years of improvement in p

        • by jbolden ( 176878 )

          Ruby is actually rather strongly typed. Shell is far more like what you are describing.

          • by murdocj ( 543661 )

            Yes and no. It's true that objects have classes, but that's entirely malleable, and there's no way to look at a particular piece of Ruby code and have any idea what class an object has, unless you actually see it being created (yes, yes, even then you don't know because classes can be modified on the fly, but let's ignore that for the moment). Basically, I can't look at a method and do anything except guess what parameters it takes. Personally, I think that's a bad thing.

            • by jbolden ( 176878 )

              That I agree with. Ruby is strongly typed with very aggressive and implicit type conversions. Anyway, strongly typed languages are dominant: C++, C#, Java; so it would make sense the alternatives are dynamic. Moreover scripting has always been dynamic.

              I can understand the virtues of strongly typed. IMHO dynamic typing works best in programs under 20 lines of code, works OK from 20 to 1,000, and starts to fall apart after 1,000. Most Ruby programs are under 1,000 lines.

        • by Shados ( 741919 ) on Friday August 09, 2013 @04:18PM (#44525601)

          I'm a static language guy myself, but it's important to keep in mind that different problems have different solutions.

          Doing heavy image processing or transactional operations, number crunching, I/O with third party APIs, etc? Yeah, static languages are probably better.

          Doing prototyping, or UI intensive work? Most UI frameworks suck, but the ones designed for static languages generally suck more, because some stuff just can't be done (well), so they have to rely on data binding expressions, strings, etc., that are outside the control of the language. At least dynamic languages deal with those like they deal with everything else and have them as first class concepts.

          Case in point: an arbitrary JSON string, in a dynamic language, can be converted to a standard object without needing to know what it will look like ahead of time. In a static language, you either need a predesigned contract, or you need a mess of a data structure full of strings that won't be statically checked, so you're back at square one. These types of use cases are horribly common in UI.
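
          A rough sketch of the static side of that trade-off, assuming the Jackson library (the field names are invented): you either bind to a class declared ahead of time, or you walk an untyped tree addressed by strings that the compiler cannot check.

            import com.fasterxml.jackson.databind.JsonNode;
            import com.fasterxml.jackson.databind.ObjectMapper;

            public class JsonDemo {
                // Option 1: a predesigned contract the compiler can verify against.
                public static class User {
                    public String name;
                    public int age;
                }

                public static void main(String[] args) throws Exception {
                    String json = "{\"name\":\"Ada\",\"age\":36}";
                    ObjectMapper mapper = new ObjectMapper();

                    // Statically typed binding: misusing a field is a compile-time error.
                    User user = mapper.readValue(json, User.class);
                    System.out.println(user.name);

                    // Option 2: an untyped tree addressed by string keys; a typo in
                    // "age" is only discovered at runtime.
                    JsonNode tree = mapper.readTree(json);
                    System.out.println(tree.path("age").asInt());
                }
            }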

          • I dabble in image processing algorithms. A lot of the things I write for my own use end up being a C program to do the serious number crunching, with a perl script for the interface. Perl does all the pretty work, then just calls upon the compiled executable when serious performance is required.

          • by Greyfox ( 87712 )
            You still need to know what your JSON string will look like at some point in order to use it. It's always (for at least as long as I've been programming, a bit over 2 decades) been a problem that programmers don't fully know or understand their requirements, so they try to keep their code as generic as possible. The problem with that is that at some point you're going to have to do actual work with that code, so you end up going through a labyrinth of libraries, none of which want to take the responsibility to
            • by Shados ( 741919 )

              Maybe you do need to know, maybe you don't. Maybe the object is being introspected, maybe it's used to feed a template engine, maybe it's just converted from one format to another. All these things can be done in any language. Some languages make it easier than others.

              Static languages make structured stuff easy and dynamic stuff hard(er). Dynamic languages make dynamic stuff easier. Shocker, I know.

          • by lgw ( 121541 )

            You seem to be saying that using strings - and thus bypassing the static type checking of a static language - is worse than using a language with no static type checking in the first place. I don't see that at all.

            Either you're fine with not knowing what the object looks like ahead of time - in which case you can't directly reference member names in any case, and strings are far better than reflection - or you have a specific subset of the object that you understand and needs to be how you expect it to be,

            • by Shados ( 741919 )

              Dynamic languages have better support for introspection, handling what happens when a property is missing, dealing with objects that aren't QUITE the same but "close enough", and deeply nested dynamic data structures.

              If you want to represent the same things in a static language, you need things like arbitrarily nested dictionaries or very complex data structures with metadata. That's why to handle a JSON string in Java or .NET, you'll need a JSON object framework. The parsing is the trivial part.

              I also won't

        • If your unit tests have good code coverage this will not be a problem.
          • by murdocj ( 543661 )

            Well, it's a problem if I'm trying to actually read the code and understand what it does and what arguments it takes. Not to mention the wasted time when I pass the wrong argument type to a method and the problem doesn't show up until runtime. And god forbid that it's Ruby code using "method_missing", then I'm really screwed in so many ways it's hard to imagine. For example, in a Rails app you want to see if you are in the development environment:

            if Rails.env.dev?

            and that runs just fine... but ALWAYS retu

            • Well, it's a problem if I'm trying to actually read the code and understand what it does and what arguments it takes.

              Don't write code so complicated that you can't easily tell what type is needed from inspection. Seriously, these are solved problems. You can write code with strong type-safety, or you can write code with runtime binding. Both are workable.

      • by jbolden ( 176878 )

        Design by contract is my favorite way of handling interfaces. It really is a good idea.

        Literate programming though I'm not sure if I see much point to. There are cool examples like Mathematica notebooks, but in general even very good implementations like Perl's POD and Haskell's literate mode just don't seem to offer all that much over normative source code. API documentation just doesn't need to be that closely tied to the underlying source, and the source documentation just doesn't need to be literate.

        As

        • Literate programming has two benefits: the idea that your program should be written more for another human to read than for a computer, and making the structure of your program obvious to a human. If you fulfill these, you are doing literate programming.

          As for your 1990s and objects, I also disagree. Objects were used for implicit parallelism and complex flow of control. No one had flows of control like a typical GUI to deal with in 1967. Event programming was a hard problem solved well.

          I don't understand what relationship 1990s, objects, and implicit parallelism have to do with each other, you'll have to explain it more clearly. But the complex flow required by an OS managing multiple resources is significantly more difficult than

          • by jbolden ( 176878 )

            I understand the ideas behind it. But I'm not sure why understanding the structure of the program matters much. If it does, throw a few paragraphs in about the structure or include a doc to the side.

            I don't understand what relationship 1990s, objects, and implicit parallelism have to do with each other, you'll have to explain it more clearly.

            OK. Modern GUIs create a situation where operating systems have huge numbers of tasks lying around. Thousands and thousands of asymmetric potential threads passin

            • What GUI system are you using that has thousands and thousands of threads passing messages? I don't think you've really thought this through... all modern systems use only one thread. At a minimum, the performance hit is often serious for thousands of threads. What you are describing seems to be the actor model, which was developed by the mid 70s.

              I understand the ideas behind it. But I'm not sure why understanding the structure of the program matters much. If it does, throw a few paragraphs in about the structure or include a doc to the side.

              Because structure is the key to understanding, in programming and literature.

              • by jbolden ( 176878 )

                Thousands of potential threads. And all of them: OSX, Windows, KDE, Gnome. They all utilize tremendous numbers of objects able to operate with implicit parallelism. Generally, in terms of execution threads, some sort of thread pool is used to match actual CPUs to potential threads. In terms of modern systems using only one thread, look at any of the design-of-systems books; that just ain't true. As for this being the actor model of concurrency: yes, it is. The event driven model's concurrency system was

                • As for literate you aren't answering the question what the point is of the human understanding that limited amount about the program. You are just sort of asserting that limited understanding by a human is useful.

                  The opposite. Literate programming makes it easier to understand programs.

                  In terms of modern systems using only one thread, look at any of the design of systems books that just ain't true.

                  Which GUI system doesn't? Swing? Openstep? .net? Android? They all use one thread.
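
                  For what it's worth, a minimal Java/Swing sketch of that single-threaded model: component creation and event handling are funneled onto the one event dispatch thread.

                    import javax.swing.JButton;
                    import javax.swing.JFrame;
                    import javax.swing.SwingUtilities;

                    public class SingleThreadedGui {
                        public static void main(String[] args) {
                            // Swing requires UI work to run on its single event dispatch thread (EDT);
                            // invokeLater queues this setup code onto that thread.
                            SwingUtilities.invokeLater(() -> {
                                JFrame frame = new JFrame("EDT demo");
                                JButton button = new JButton("Click me");
                                // Event callbacks also run on the EDT, one at a time.
                                button.addActionListener(e ->
                                    System.out.println("On EDT? " + SwingUtilities.isEventDispatchThread()));
                                frame.add(button);
                                frame.pack();
                                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                                frame.setVisible(true);
                            });
                        }
                    }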

  • "'The most dangerous thought that you can have as a creative person is to think that you know what you're doing,' ... 'Because once you think you know what you're doing you stop looking around for other ways of doing things and you stop being able to see other ways of doing things. You become blind.' "

    Unless of course you know you know what you are doing, because you also know to never stop looking for new ways of doing things.

    • by gweihir ( 88907 )

      But if you know what you are doing, you still have a majority with no clue around you, in the worst case micro-managing you and destroying your productivity. I think the major difference between today and the early years of computing is that most people back then were smart, dedicated and wanted real understanding. Nowadays programmers are >90% morons or at best semi-competent.

      • I couldn't agree with you more. My estimate has traditionally been about 80% [slashdot.org], but I concede that I may be a bit of an optimist.
        • by gweihir ( 88907 )

          Well, I did include the semi-competent, those that eventually do get there, with horrible code that is slow, unreliable, a resource-hog and a maintenance nightmare. Plain incompetent may indeed just be 80%. Or some current negative experiences may be coloring my view.

    • by AK Marc ( 707885 )

      Unless of course you know you know what you are doing, because you also know to never stop looking for new ways of doing things.

      If you have to look for a new way to do something, then you don't know the answer, so how can you know you know what you are doing when you know you don't know the answer? When you are 100% confident in the wrong answer, you know you know what you are doing (and are wrong). If *ever* you know you know what you are doing, you don't.

      • "If you have to look for a new way to do something, then you don't know the answer,"

        I'm not a big enough moron to think that there is one answer that can be called the answer. Your mileage clearly varies.

        " If *ever* you know you know what you are doing, you don't."

        Slashdot really needs a -1 anti-insightful option.

  • by girlintraining ( 1395911 ) on Friday August 09, 2013 @01:44PM (#44523659)

    The most dangerous thought that you can have as a creative person is to think that you know what you're doing,' explains Victor.

    Yeah. I bet Vincent Van Gogh thought he was total shit at painting, didn't know anything about paint mixing, brushes, or any of that. Look, I know what you're trying to say, Victor, but what you actually said made my brain hurt.

    However, exploring new things and remembering old things are two different things. You can be good at what you do and yet still have a spark of curiosity to you and want to expand what you know. These aren't mutually exclusive. To suggest people murder their own egos in order to call themselves creative is really, really, fucking stupid.

    You can, in fact, take pride in what you do, and yet be humble enough to want to learn more. It happens all the time... at least until you're promoted to management.

    • The most dangerous thought that you can have as a creative person is to think that you know what you're doing,' explains Victor.

      Yeah. I bet Vincent Van Gogh thought he was total shit at painting, didn't know anything about paint mixing, brushes, or any of that.

      Um... yes actually. Van Gogh actually only sold one painting in his entire life, and he considered himself somewhat of a failure as a painter. He did not become famous until after his death.

      • Um... yes actually. Van Gogh actually only sold one painting in his entire life, and he considered himself somewhat of a failure as a painter. He did not become famous until after his death.

        He considered himself a failure commercially... Because he was. He never stopped painting. That's fairly compelling evidence he knew he didn't suck... and that it was the world that was wrong, not him.

        Just because you're bad at business doesn't mean you're bad at what you do. I know, I know... it's hard for people these days to understand that, but 'tis true.

        • Um... yes actually. Van Gogh actually only sold one painting in his entire life, and he considered himself somewhat of a failure as a painter. He did not become famous until after his death.

          He considered himself a failure commercially... Because he was. He never stopped painting. That's fairly compelling evidence he knew he didn't suck... and that it was the world that was wrong, not him.

          Just because you're bad at business doesn't mean you're bad at what you do. I know, I know... it's hard for people these days to understand that, but 'tis true.

          Van Gogh was notoriously depressed. His entire career as an artist was little more than five years, ending with his suicide in 1890. The nature of his work changed dramatically at a rapid pace, pieces from a year before could almost be from another artist entirely. This all suggests that he was never truly satisfied with his works. It has nothing to do with the lack of financial success, but rather the lack of acceptance from his peers, who often derided him. He continued painting, not because he thought he

    • by Quila ( 201335 )

      When I was doing design work, my mentor taught me the rules and told me to stay within them. After you've mastered the rules, learning the successes and mistakes of everybody before, then you can start breaking them as you explore new possibilities.

      I am afraid this will convince people who know nothing yet to just go off in whatever direction they please, wasting massive time on things others already learned not to do, subjecting others to more horrible code.

      • by Trifthen ( 40989 )

        That's guaranteed to happen. The only question is the extent. There's bound to be a few who say, "Hey! I don't have to know what I'm doing. That one guy said so!" In reality, we know different. Progress is made by learning from the mistakes of others. :)

    • by AK Marc ( 707885 )

      Yeah. I bet Vincent Van Gogh thought he was total shit at painting,

      He probably did. He died a commercial failure. Reviews at the time were very critical of his work.

    • Massively out of context. The quote is about how people have been taught to assume procedural programming is the only way of programming. The point is that creative people are being limited by these mistaken assumptions.
  • by Joe_Dragon ( 2206452 ) on Friday August 09, 2013 @01:44PM (#44523661)

    Time for real apprenticeships in tech and not years of theory?

    • by gweihir ( 88907 )

      No. Time for real theory coupled with real experience. Apprenticeships only work when the profession is ruled by real craftsmen. The programmers today rarely qualify, hence real apprenticeships would only make things worse.

      • by Shados ( 741919 )

        Won't change much. Even the "real theory" is half-assed except in a select few colleges, usually (but not always) the high end ones. Then the professors that are good at the theory are usually impossibly terrible at the engineering aspect but still pass on their words as laws.

        It's really an awkward situation.

        • by gweihir ( 88907 )

          I had the luck to have a really good theoretician do the introductory CS year at university for me, one who invested a lot of effort in finding out how to do these things well in practice. I only later found out that the years before and after (they did rotate the introductory year) got a far, far worse education, either by bad practitioners or by theoreticians with exactly the problem you describe.

          The bottom line is however that to be really good, you have to understand both theory and practice and you hav

  • Patents (Score:5, Insightful)

    by Diss Champ ( 934796 ) on Friday August 09, 2013 @01:47PM (#44523715)

    One reason I had so many patents relatively early in my career is I wound up doing hardware design in a much different area than I had planned on in school. I did not know the normal way to do things. So I figured out ways to do things.
    Sometimes I wound up doing stuff normally but it took longer; this was OK, as a bit of a learning curve was expected (they hired me knowing I didn't know the area yet).
    Sometimes I did things a bit less efficiently than ideal, though this was usually fixed in design reviews.
    But sometimes I came up with something novel, and after checking with more experienced folks to make sure it was novel, patented it.

    A decade later, I know a way to do pretty much everything I need to do, and get a lot fewer patents. But I finish my designs a lot faster. :)

    You need people who don't know that something isn't possible to advance the state of the art, but you also need people who know the lessons of the past to get things done quickly.

  • Wow man (Score:4, Funny)

    by jackjumper ( 307961 ) on Friday August 09, 2013 @01:49PM (#44523739)
    I need some more bong hits to fully consider this
  • 'I think you have to say: "We don't know what programming is. We don't know what computing is. We don't even know what a computer is." And once you truly understand that, and once you truly believe that, then you're free, and you can think anything.'

    I agree having an open mind is a good thing. There is, of course, taking things too far. Just throw away everything we've spent the last 40-50 years developing? Is there some magical aura we should tap into, and rock back and forth absorbing instead? Should we h

    • We don't even know what a computer is.

      Think of it like this. If you believe you already know what a computer is, then you are not likely to look for alternatives. If you're looking for alternatives, then you might come up with something interesting like this [hackaday.com]. If you just accept that super-scalar pipelines, the way Intel does them, are the best way, then you're not going to find a different, potentially better way of doing it.

      • by Trifthen ( 40989 )

        Far from it. I seem to recall a researcher I read about over a decade ago who was designing a chip that worked more like a human neuron. Superscalar pipelining is just how Intel does instructions, and even they're trying to get away from it due to the cost of cache misses becoming more expensive as pipeline lengths increase. Giving a talk on not being constrained by accepted dogma and outright throwing away all known concepts are completely different things.

        The very fact that you and I can even have this co

        • by lgw ( 121541 )

          From TFA

          It's possible to misinterpret what I'm saying here. When I talk about not knowing what you're doing, I'm arguing against "expertise", a feeling of mastery that traps you in a particular way of thinking.

          But I want to be clear -- I am not advocating ignorance. Instead, I'm suggesting a kind of informed skepticism, a kind of humility.

          Ignorance is remaining willfully unaware of the existing base of knowledge in a field, proudly jumping in and stumbling around. This approach is fashionable in certain hacker/maker circles today, and it's poison.

          Knowledge is essential. Past ideas are essential. Knowledge and ideas that have coalesced into theory is one of the most beautiful creations of the human race. Without Maxwell's equations, you can spend a lifetime fiddling with radio waves and never invent radar. Without dynamic programming, you can code for days and not even build a sudoku solver.

          It's good to learn how to do something. It's better to learn many ways of doing something. But it's best to learn all these ways as suggestions or hints. Not truth.

          Learn tools, and use tools, but don't accept tools. Always distrust them; always be alert for alternative ways of thinking. This is what I mean by avoiding the conviction that you "know what you're doing".

          Does that sound better?

  • We do know a number of things about programming. One is that it is hard and that it requires real understanding of what you intend to do. Another is that languages help only to a very, very limited degree. There is a reason much programming work is still done in C: once people know what they are doing, language becomes secondary, and often it is preferable if it does not help you much but does not stand in your way either. We also know that all the 4G and 5G hype was BS and that no language will ever make th

    • The big disappointment when I read about the 1st "design patterns" was that, much as I liked the idea of encouraging generic and somewhat uniform descriptions of solutions, I found many of them were solutions to problems created by JAVA and other restricted thinking.

      See Fast Food Nation; the parts about industrialization and how it took expertise and eliminated it in favor of simple mechanistic low wage zero talent jobs.

      • by gweihir ( 88907 )

        Indeed. Some things were nice to see a bit more formalized, but the patterns are typically far too lean to fit a real problem. I also think that many things were formalized in the pattern community early on just to get some mass. Most of these would have been better left informal, as good programmers had no issues with them anyway and bad programmers did not get what they truly meant.

        The analogy with Java is a good one, as my experience with Java is that once you have a certain skill, it stands constantly in you

  • I was intrigued until I noticed that where I put the quote marks, and where the quote marks actually were, were not the same place. So much for the "Mr. Perl Extreme-Fluxing Agile Capacitor."
  • Most people I worked with in the 80s (and learned from in the 70s) had a good feel for concepts like "stable systems", "structural integrity", "load bearing weight", and other physical engineering concepts. Many from engineering degrees (most of them weren't CS grads like me), and a lot from playing with legos, erector sets, chemistry sets, building treehouses (and real houses). These concepts are just as important in software systems, but I can only think of a handful of people I've worked with over the

  • by trout007 ( 975317 ) on Friday August 09, 2013 @02:30PM (#44524185)

    I find it interesting that people in software think they are the first ones to ever design complicated things. It seems there are so many arguments over design styles and paths. All they need to do is look at what other engineering fields have done for the past 100+ years. It's pretty simple. When you are working in a small project where the cost for failure and rework is low you can do it however you want. Try out new styles and push technology and techniques forward. When it comes to critical infrastructure and projects where people will die or lose massive amounts of money you have to stick with what works. This is where you need all of the management overhead of requirements, schedules, budgets, testing, verification, operation criteria, and the dozens of other products besides the "design".

    I'm a mechanical and a software engineer. When I'm working on small projects with direct contact with the customers it's easy and very minimal documentation is needed. But as more people are involved the documentation required increases exponentially.

  • So, we know what the computer does. It's this: List of x86 instructions. [wikipedia.org] It executes those instructions. The device stores and executes instructions. [wikipedia.org]

    We think in terms of programming languages. The language abstracts away the complexity of manually generating the instructions. Then we build APIs to abstract away even more. So we can program a ball bouncing across a screen in just a few lines rather than generating tens of thousands of instructions manually, because of abstraction built upon abstraction.

    In ha
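
    As a small illustration of that stack of abstractions, a bouncing ball in a few dozen lines of Java/Swing (the class name is invented); every call below rests on the JVM, the toolkit and the OS rather than hand-generated instructions.

      import javax.swing.*;
      import java.awt.*;

      // A bouncing ball in a few dozen lines, standing on layers of abstraction
      // (JVM, Swing, the OS window system) rather than hand-written instructions.
      public class BouncingBall extends JPanel {
          private int x = 0, y = 0, dx = 3, dy = 2;

          @Override
          protected void paintComponent(Graphics g) {
              super.paintComponent(g);
              g.fillOval(x, y, 20, 20);
          }

          public static void main(String[] args) {
              SwingUtilities.invokeLater(() -> {
                  BouncingBall panel = new BouncingBall();
                  JFrame frame = new JFrame("Abstraction demo");
                  frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                  frame.add(panel);
                  frame.setSize(400, 300);
                  frame.setVisible(true);

                  // A Swing timer moves the ball roughly 30 times per second.
                  new Timer(33, e -> {
                      panel.x += panel.dx;
                      panel.y += panel.dy;
                      if (panel.x < 0 || panel.x > panel.getWidth() - 20) panel.dx = -panel.dx;
                      if (panel.y < 0 || panel.y > panel.getHeight() - 20) panel.dy = -panel.dy;
                      panel.repaint();
                  }).start();
              });
          }
      }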

  • If you want some relevant history and insight on the struggles and triumphs of software engineering, I highly suggest reading the Mythical Man-Month.

    What was surprising to me was the fact that something written in the 60's about software development is still very relevant today.

    The engineers who worked on the IBM System/360 OS discovered software engineering through pure trial and error.

    One of the classic insights from the book that I've seen companies (i.e. Microsoft) violate over and over is Brook

  • by Animats ( 122034 ) on Friday August 09, 2013 @04:26PM (#44525715) Homepage

    A major problem we have in computing is the Mess at the Bottom. Some of the basic components of computing aren't very good, but are too deeply embedded to change.

    • C/C++: This is the big one. There are three basic issues in memory safety - "how big is it", "who can delete it", and "who has it locked". C helps with none of these. C++ tries to paper over the problem with templates, but the mold always comes through the wallpaper, in the form of raw pointers. This is why buffer overflow errors, and the security holes that come with them, are still a problem.

      The Pascal/Modula/Ada family of languages tried to address this. All the original Macintosh applications were in Pascal. Pascal was difficult to use as a systems programming language, and Modula didn't get it right until Modula 3, by which time it was too late.

    • UNIX and Linux. UNIX was designed for little machines. MULTICS was the big-machine OS, with hardware-supported security that actually worked. But it couldn't be crammed into a PDP-11. Worse, UNIX did not originally have much in the way of interprocess communication (pipes were originally files, not in-memory objects). Anything which needed multiple intercommunicating processes worked badly. (Sendmail is a legacy of that era.) The UNIX crowd didn't get locking right, and the Berkeley crowd was worse. (Did you know that lock files are not atomic on an NFS file system?) Threads came later, as an afterthought. Signals never worked very well. As a result, putting together a system of multiple programs still sucks.
    • DMA devices: Mainframes had "channels". The end at the CPU talked to memory in a standard way, and devices at the other end talked to the channel. In the IBM world, channels worked with hardware memory protection, so devices couldn't blither all over memory. In the minicomputer and microcomputer world, there were "buses", with memory and devices on the same bus. Devices could write anywhere in memory. Devices and their drivers had to be trusted. So device drivers were usually put in the operating system kernel, where they could break the whole OS, blither all over memory, and open security holes. Most OS crashes stem from this problem. Amusingly, it's been a long time since memory and devices were on the same bus on anything bigger than an ARM CPU. But we still have a hardware architecture that allows devices to write anywhere in memory. This is a legacy from the PDP-11 and the original IBM PC.
    • Academic microkernel failure: Microkernels appeared to be the right approach for security. But the big microkernel project of the 1980s, Mach, at CMU, started with BSD. Their approach was too slow, took too much code, and tried to get cute about avoiding copying by messing with the MMU. This gave microkernels a bad reputation. So now we have kernels with 15,000,000 lines of code. That's never going to stabilize. QNX gets this right, with a modest microkernel that does only message passing, CPU dispatching, and memory management. There's a modest performance penalty for extra copying. You usually get that back because the system overall is simpler. Linux still doesn't have a first-class interprocess communication system. (Attempts include System V IPC, CORBA, and D-bus. Plus various JSON hacks.)
    • Too much trusted software: Application programs often run with all the privileges of the user running them, and more if they can get it. Most applications need far fewer privileges than they have. (But then they wouldn't be able to phone home to get new ads.) This results in a huge attackable surface. The phone people are trying to deal with this, but it's an uphill battle against "apps" which want too much power.
    • Lack of liability: Software has become a huge industry without taking on the liability obligations of one. If software companies were held to the standards of auto companies, software would work a lot better. There are a few areas where software companies do take on liability. Avionics, of course. But an
    • by SuricouRaven ( 1897204 ) on Friday August 09, 2013 @04:57PM (#44526037)

      The whole x86/64 architecture is a mess when you get deep enough. It suffers severely from a commitment to backwards compatibility - your shiny new i7 is still code-compatible with an 80386; you could install DOS on it quite happily. But the only way to fix this by now is a complete start-over redesign that reflects modern hardware abilities rather than trying to pretend you are still in the era of the z80. That just isn't commercially viable: it doesn't matter how super-awesome-fast your new computer is when no one can run their software on it. Only a few companies have the engineering ability to pull it off, and they aren't going to invest tens of millions of dollars in something doomed to fail. The history of computing is littered with products that were technologically superior but commercially non-viable - just look at how we ended up with Windows 3.11 taking over the world when OS/2 was being promoted as the alternative.

      The best bet might be if China decides they need to be fully independent from the 'Capitalist West' and design their own architecture. But more likely they'll just shamelessly rip off one of ARM's or AMD's designs (easy enough to steal the masks for those - half their chips are made in China anyway) and slap a new logo on it.

  • This guy is a ray of light from the younger generation. He's avoided grappling with the hard problem of shared memory latency, but other than that, he's doing pretty good. You have to deal with shared memory latency [blogspot.com] to handle a wide range of modeling problems, not the least of which is real-time multicore ray tracing like this [youtube.com].
  • by msobkow ( 48369 ) on Friday August 09, 2013 @05:04PM (#44526097) Homepage Journal

    It's an entertaining presentation, but I don't think it's anything nearly as insightful as the summary made it out to be.

    The one thing I take away from his presentation is that old ideas are often more valuable in modern times now that we have the compute power to implement those ideas.

    As a for-example, back in my university days (early-mid 1980s), there were some fascinating concepts explored for computer vision and recognition of objects against a static background. Back then it would take over 8 hours on a VAX-11/780 to identify a human by extrapolating a stick figure and paint a cross-hair on the torso. Yet nowadays we have those same concepts implemented in automatic recognition and targeting systems that do the analysis in real time, and with additional capabilities such as friend/foe identification.

    No one who read about Alan Kay's work can fail to recognize where the design of the modern tablet computer really came from, despite the bleatings of patent holders that they "invented" anything of note in modern times.

    So if there is one thing that I'd say students of programming should learn from this talk, it is this:

    Learn from the history of computing

    Whatever you think of as a novel or "new" idea has probably been conceptualized in the past, researched, and shelved because it was too expensive/complex to compute back then. Rather than spending your days coding your "new" idea and learning how not to do it through trial and error, spend a few of those days reading old research papers and theories relevant to the topic. Don't assume you're a creative genius; rather assume that some creative genius in the annals of computing history had similar ideas, but could never take them beyond the proof-of-concept phase due to limitations of the era.

    In short: learn how to conceptualize and abstract your ideas instead of learning how to code them. "Teach" the machine to do the heavy lifting for you.

  • The near-ideal programming language was invented in 1959: Lisp. It has nearly open-ended abstraction ability via simple syntax: flexibility without clutter. It's a thing of design beauty.

    However, nobody has figured out how to leverage it for team programming and fungible staff. Regimented languages just seem to work better for your standard cubicle teams.

  • by Qbertino ( 265505 ) <moiraNO@SPAMmodparlor.com> on Friday August 09, 2013 @09:27PM (#44527917)

    I think he got it wrong about why we got lost.

    It's not because we didn't or don't know. It's because software was free back then. Hardware was so bizarrely expensive and rare that no one gave a damn about giving away software and software ideas for free. It's only when software was commercialised that innovation in the field started to slow rapidly. The interweb is where it was 18 years ago simply because ever since then people have been busy round the clock, 24/7, trying to monetise it rather than ditching bad things and trying new stuff.

    Then again, x86 winning as an architecture and Unix as a software model probably does have a little to do with it as well. We're basically stuck with early-'80s technology.

    The simple truth is:
    CPU and system development needs its iPhone/iPad moment - where a bold move is made to ditch decades-old concepts to make way for entirely new ones!

    Look what happened since Steve Jobs and his crew redid commodity computing with their touch-toys. Imagine that happening with system architecture - that would be awesome. The world would be a totally different place five years from now.

    Case in point: we're still using SQL (Apollo-era software technology for secretaries to manually access data - SQL is a fricking END-USER INTERFACE from the '70s!!!) as a manually built and rebuilt access layer to persistence from the app level. That's even more braindead than sticking with binary instead of moving to ASM, as given as an example in the OP's video talk.

    Even ORM to hide SQL is nothing but a silly crutch from 1985. Java is a crutch to bridge across platforms, because since the mid '70s people in the industry have been fighting turf wars over their patented platforms and basically halted innovation (MS anyone?). The skeuomorphic desktop metaphor is a joke - and always has been. Stacked windowing UIs are a joke and always have been. Our keyboard layout is a provisional arrangement from the steam age, from before the zipper was invented (!!). E-mail - one of the most bizarre things still in widespread use - is from a time when computers weren't even connected yet, with different protocols for every little shit it does, and bizarre, pointless, braindead and arcane concepts like the separation of MUA and editor and separate protocols for sending and receiving - a human async communication system and protocol so bad it's outclassed by a shoddy commercial social networking site running on web scripts and browser-driven widgets - I mean WTF??? etc... I could go on and on ...

    The only thing that isn't a total heap of shit is *nix as a system, and that's only because everything worthwhile being called Unix today is based on FOSS, where we can still tinker and move forward with baby steps like fast no-bullshit non-tiling window managers, complete OpenGL-accelerated avant-garde UIs (I'm thinking Blender here), workable userland and OS separation, and a matured way to handle text-driven UI, interaction and computer control (zshell & modern bash).

    That said, I do believe that if we came up with a new, entirely FOSS hardware architecture "2013" - a complete redo with a focus on massively parallel concurrency - and built a logic-and-constraint-driven, touch-based direct-manipulation-interface system - think Squeak.org completely redone today for modern retina touch displays *without* the crappy desktop - one that does away with the separation of filesystem and persistence and other ancient dead-ends, we'd be able to top and drop *nix in no time.

    We wouldn't even miss it. ...

    But building the bazillionth web framework and the next half-assed x.org window manager and/or accompanying Windows clone, or redoing the same audio-player app / file manager / UI-Desktop toolkit every odd year from bottom to top again, appears to be more fun, I guess.

    My 2 cents.
