Programming Education

Back To 'The Future of Programming' 214

theodp writes "Bret Victor's The Future of Programming (YouTube video; Vimeo version) should probably be required viewing this fall for all CS majors — and their professors. For his recent DBX Conference talk, Victor took attendees back to the year 1973, donning the uniform of an IBM systems engineer of the times and delivering his presentation on an overhead projector. The '60s and early '70s were a fertile time for CS ideas, Victor reminds us, but even more importantly, it was a time of unfettered thinking, unconstrained by programming dogma, authority, and tradition. 'The most dangerous thought that you can have as a creative person is to think that you know what you're doing,' explains Victor. 'Because once you think you know what you're doing you stop looking around for other ways of doing things and you stop being able to see other ways of doing things. You become blind.' He concludes, 'I think you have to say: "We don't know what programming is. We don't know what computing is. We don't even know what a computer is." And once you truly understand that, and once you truly believe that, then you're free, and you can think anything.'"
  • Hmm (Score:5, Insightful)

    by abroadwin ( 1273704 ) on Friday August 09, 2013 @02:41PM (#44523635)
    Yes and no, I think.

    On the one hand, it is a good thing to guard against constrained thinking. I work with someone who thinks exclusively in design patterns; it leads to some solid code in many cases, but it's also sometimes a detriment to his work (overcomplicated designs, patterns used for the sake of patterns).

    Unlearning all we have figured out in computer science is silly, though. Use the patterns and knowledge we've spent years honing, but use them as tools and not as crutches. As long as you look at a problem and accurately determine that a known pattern/language/approach is a near-optimal way to solve it, that's a good application of that pattern/language/approach. If you're cramming a solution into a pattern, though, or only using a language because it's your hammer and everything looks like a nail to you, that's bad.
  • Re:70s yeah right! (Score:5, Insightful)

    by Zero__Kelvin ( 151819 ) on Friday August 09, 2013 @02:46PM (#44523691) Homepage

    "Next thing we can throw our chairs out and sit on the carpet with long hair, smoke weed and drink beer...."

    If you aren't doing it that way already, then you're doing it wrong.

  • Patents (Score:5, Insightful)

    by Diss Champ ( 934796 ) on Friday August 09, 2013 @02:47PM (#44523715)

    One reason I had so many patents relatively early in my career is I wound up doing hardware design in a much different area than I had planned on in school. I did not know the normal way to do things. So I figured out ways to do things.
    Sometimes I wound up doing things the normal way, but it took longer; this was OK, as a bit of a learning curve was expected (they hired me knowing I didn't know the area yet).
    Sometimes I did things a bit less efficiently than ideal, though this was usually fixed in design reviews.
    But sometimes I came up with something novel, and after checking with more experienced folks to make sure it was novel, patented it.

    A decade later, I know a way to do pretty much everything I need to do, and I get a lot fewer patents. But I finish my designs a lot faster. :)

    You need people who don't know that something isn't possible to advance the state of the art, but you also need people who know the lessons of the past to get things done quickly.

  • Re:Hmm (Score:5, Insightful)

    by orthancstone ( 665890 ) on Friday August 09, 2013 @03:13PM (#44524025)

    Use the patterns and knowledge we've spent years honing, but use them as tools and not as crutches.

    Having just watched this video a few hours ago (it sat in my queue for a few days; providence seemingly put it in front of me right before this story popped), I can say he argues against this very idea.

    He mentions late in the talk how a generation of programmers learned very specific methods for programming and, in turn, taught the next generation of programmers those methods. Because the teaching only covered known working methods and disregarded any outlying ideas, the next generation believes that all programming problems have been solved and therefore never challenges the status quo.

    Much of his talk references the fact that many of the "new" ideas in computing were actually discussed and implemented in the early days of programming. Multi-core processing, visual tools and interactions, and higher-level languages are not novel in any way; he's trying to point out that the earliest programmers had these ideas too, but we ignored or forgot them due to circumstances. For example, it is difficult to break out of the single processing pipeline mold when one company dominates the CPU market by pushing out faster and faster units that excel at exactly that kind of processing.

    While TFS hits on the point at hand (don't rest on your laurels), it is worth noting that the talk is trying to emphasize open-mindedness towards approaches to programming. While that kind of philosophical take is certainly a bit broad (most employers would rather you produce work than redesign every input system in the office), it is important that innovation still be emphasized. I would direct folks to the Etsy "Code as Craft" blog as an example of people taking varying approaches to solving problems by being creative and innovating instead of simply applying all the known "best practices" on the market.

    I suppose that final comment better elaborates this talk in my mind: Don't rely on "best practices" as if they are the best solution to all programming problems.

  • Re:70s yeah right! (Score:5, Insightful)

    by DutchUncle ( 826473 ) on Friday August 09, 2013 @03:34PM (#44524223)
    In college in the 1970s, I had to read the Multics documents and von Neumann's publications. We're still reinventing things that some very clever people spent a lot of time thinking about - and solving - in the 1960s. It's great that we have the computer power and memory and graphics to just throw resources at things and make them work, but imagine how much we could make those resources achieve if we used them with the attitude those people had towards their *limited* resources. And we have exactly the same sets of bottlenecks and tradeoffs; we just move the balance around as the hardware changes. Old ideas often aren't *wrong*, they're just no longer appropriate - until the balance of tradeoffs comes around again, at which point those same ideas are right again, or at least useful as the basis for new improved ideas.
  • by Animats ( 122034 ) on Friday August 09, 2013 @05:26PM (#44525715) Homepage

    A major problem we have in computing is the Mess at the Bottom. Some of the basic components of computing aren't very good, but are too deeply embedded to change.

    • C/C++: This is the big one. There are three basic issues in memory safety - "how big is it", "who can delete it", and "who has it locked". C helps with none of these. C++ tries to paper over the problem with templates, but the mold always comes through the wallpaper, in the form of raw pointers. This is why buffer overflow errors, and the security holes that come with them, are still a problem. (A minimal C sketch of the "how big is it" problem follows this list.)

      The Pascal/Modula/Ada family of languages tried to address this. All the original Macintosh applications were in Pascal. Pascal was difficult to use as a systems programming language, and Modula didn't get it right until Modula 3, by which time it was too late.

    • UNIX and Linux: UNIX was designed for little machines. MULTICS was the big-machine OS, with hardware-supported security that actually worked. But it couldn't be crammed into a PDP-11. Worse, UNIX did not originally have much in the way of interprocess communication (pipes were originally files, not in-memory objects). Anything which needed multiple intercommunicating processes worked badly. (Sendmail is a legacy of that era.) The UNIX crowd didn't get locking right, and the Berkeley crowd was worse. (Did you know that lock files are not atomic on an NFS file system? A sketch of the lock-file idiom follows this list.) Threads came later, as an afterthought. Signals never worked very well. As a result, putting together a system of multiple programs still sucks.
    • DMA devices: Mainframes had "channels". The end at the CPU talked to memory in a standard way, and devices at the other end talked to the channel. In the IBM world, channels worked with hardware memory protection, so devices couldn't blither all over memory. In the minicomputer and microcomputer world, there were "buses", with memory and devices on the same bus. Devices could write anywhere in memory. Devices and their drivers had to be trusted. So device drivers were usually put in the operating system kernel, where they could break the whole OS, blither all over memory, and open security holes. Most OS crashes stem from this problem. Amusingly, it's been a long time since memory and devices were on the same bus on anything bigger than an ARM CPU. But we still have a hardware architecture that allows devices to write anywhere in memory. This is a legacy from the PDP-11 and the original IBM PC.
    • Academic microkernel failure: Microkernels appeared to be the right approach for security. But the big microkernel project of the 1980s, Mach, at CMU, started with BSD. Their approach was too slow, took too much code, and tried to get cute about avoiding copying by messing with the MMU. This gave microkernels a bad reputation. So now we have kernels with 15,000,000 lines of code. That's never going to stabilize. QNX gets this right, with a modest microkernel that does only message passing, CPU dispatching, and memory management. There's a modest performance penalty for extra copying. You usually get that back because the system overall is simpler. Linux still doesn't have a first-class interprocess communication system. (Attempts include System V IPC, CORBA, and D-bus. Plus various JSON hacks.)
    • Too much trusted software: Application programs often run with all the privileges of the user running them, and more if they can get it. Most applications need far fewer privileges than they have. (But then they wouldn't be able to phone home to get new ads.) This results in a huge attackable surface. The phone people are trying to deal with this, but it's an uphill battle against "apps" which want too much power.
    • Lack of liability: Software has become a huge industry without taking on the liability obligations of one. If software companies were held to the standards of auto companies, software would work a lot better. There are a few areas where software companies do take on liability. Avionics, of course. But an
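
    As a rough illustration of the "how big is it" point above (a sketch of mine, not anything from the talk or the comment): a raw C pointer carries no length, so nothing in the language stops a write past the end of a buffer; the size has to be passed around and honored by hand. The helper name is made up for the example.

        #include <stdio.h>
        #include <string.h>

        /* The buffer size must be passed and honored by hand; C will not do it. */
        static void copy_name_checked(char *dst, size_t dst_size, const char *src) {
            snprintf(dst, dst_size, "%s", src);   /* truncates instead of overflowing */
        }

        int main(void) {
            char buf[8];                          /* "how big is it" lives only here */
            const char *input = "a deliberately long input string";

            /* strcpy(buf, input);  compiles fine, writes past buf: undefined behavior */
            copy_name_checked(buf, sizeof buf, input);
            printf("%s\n", buf);                  /* prints the truncated copy */
            return 0;
        }

    And a minimal sketch of the lock-file idiom behind the NFS remark (the path is hypothetical): open() with O_CREAT|O_EXCL is supposed to either create the file or fail atomically, which is what makes a lock file a lock. The caveat is that old NFS implementations did not honor O_EXCL atomically, so two clients could both think they had won; this is why the open(2) man page has long described a link()-based workaround for NFS.

        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        /* Try to take the lock: returns an open fd on success, -1 on failure.
         * O_EXCL makes this fail with EEXIST if the lock file already exists.
         * Atomic on a local filesystem; historically racy over NFS. */
        static int take_lock(const char *path) {
            return open(path, O_CREAT | O_EXCL | O_WRONLY, 0644);
        }

        int main(void) {
            const char *lockpath = "/tmp/example.lock";
            int fd = take_lock(lockpath);
            if (fd < 0) {
                fprintf(stderr, errno == EEXIST ? "lock busy\n" : "cannot create lock\n");
                return 1;
            }
            /* ... critical section, protected by the lock file ... */
            close(fd);
            unlink(lockpath);
            return 0;
        }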
  • by SuricouRaven ( 1897204 ) on Friday August 09, 2013 @05:57PM (#44526037)

    The whole x86/64 architecture is a mess when you get deep enough. It suffers severely from a commitment to backwards compatibility - your shiny new i7 is still code-compatible with an 80386; you could install DOS on it quite happily. But the only way to fix this by now is a complete start-over redesign that reflects modern hardware abilities rather than trying to pretend you are still in the era of the Z80. That just isn't commercially viable: it doesn't matter how super-awesome-fast your new computer is when no one can run their software on it. Only a few companies have the engineering ability to pull it off, and they aren't going to invest tens of millions of dollars in something doomed to fail. The history of computing is littered with products that were technologically superior but commercially non-viable - just look at how we ended up with Windows 3.11 taking over the world when OS/2 was being promoted as the alternative.

    The best bet might be if China decides they need to be fully independent from the 'Capitalist West' and design their own architecture. But more likely they'll just shamelessly rip off one of ARM's or AMD's designs (easy enough to steal the masks for those - half their chips are made in China anyway) and slap a new logo on it.

  • by msobkow ( 48369 ) on Friday August 09, 2013 @06:04PM (#44526097) Homepage Journal

    It's an entertaining presentation, but I don't think it's anything nearly as insightful as the summary made it out to be.

    The one thing I take away from his presentation is that old ideas are often more valuable in modern times now that we have the compute power to implement those ideas.

    As a for-example, back in my university days (early-mid 1980s), there were some fascinating concepts explored for computer vision and recognition of objects against a static background. Back then it would take over 8 hours on a VAX 7/80 to identify a human by extrapolating a stick figure and paint a cross-hair on the torso. Yet nowadays we have those same concepts implemented in automatic recognition and targeting systems that do the analysis in real time, and with additional capabilities such as friend/foe identification.

    No one who has read about Alan Kay's work can fail to recognize where the design of the modern tablet computer really came from, despite the bleatings of patent holders that they "invented" anything of note in modern times.

    So if there is one thing that I'd say students of programming should learn from this talk, it is this:

    Learn from the history of computing

    Whatever you think of as a novel or "new" idea has probably been conceptualized in the past, researched, and shelved because it was too expensive/complex to compute back then. Rather than spending your days coding your "new" idea and learning how not to do it through trial and error, spend a few of those days reading old research papers and theories relevant to the topic. Don't assume you're a creative genius; rather assume that some creative genius in the annals of computing history had similar ideas, but could never take them beyond the proof-of-concept phase due to limitations of the era.

    In short: learn how to conceptualize and abstract your ideas instead of learning how to code them. "Teach" the machine to do the heavy lifting for you.
