Programming News

A Better Way To Program

mikejuk writes "This video will change the way you think about programming. The argument is clear and impressive — it suggests that we really are building programs with one hand tied behind our backs. Programmers can only understand their code by pretending to be computers and running it in their heads. As this video shows, this is incredibly inefficient and, as we generally have a computer in front of us, why not use it to help us understand the code? The key is probably interactivity. Don't wait for a compile to complete to see what effect your code has — if you can see it in real time then programming becomes much easier."
This discussion has been archived. No new comments can be posted.

  • Great but... (Score:5, Interesting)

    by Anonymous Coward on Sunday March 11, 2012 @02:54PM (#39319409)

    Seems like it mostly only applies to GUI programming or programming with results expressed through a GUI. What about, say, kernel programming?

    • by viperidaenz ( 2515578 ) on Sunday March 11, 2012 @03:09PM (#39319507)
      Visualise the kernel then. They didn't have any problems in The Matrix.
    • Re:Great but... (Score:5, Informative)

      by blahplusplus ( 757119 ) on Sunday March 11, 2012 @03:55PM (#39319771)

      You didn't see the point when he showed how you could find bugs in algorithms as you typed them.

    • Re:Great but... (Score:5, Insightful)

      by Junta ( 36770 ) on Sunday March 11, 2012 @04:07PM (#39319855)

      In the video he covers that as well. Well, at least he says it's conceptually covered; I disagree...

      Let's start with his abstract example. His binary search on the surface looks straightforward, and he wanted to portray it as magically finding bugs, as he got a float in one instance and an infinite loop in another. However the infinite loop example was found because he *knew* what he was doing as he intentionally miswrote it to start with and intentionally changed the inputs in accordance with this knowledge. There are a few more possibilities that you have to *know* to try out. For example, he didn't try a value that was lower than the lowest (which would have panned out), he didn't try a value omitted from the list but still higher than the lowest and lower than the highest (which also would have been fine), and he didn't try an unordered list (which is incorrect usage, but accounting for incorrect usage is a fact of life). He didn't try varying dataset sizes (which don't matter in this algorithm, but he has to *know* that) or different types of data. You still have the fact that 'B' is smaller than 'a' and all sorts of 'non-intuitive' things inherent in the situation.

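      A minimal sketch of those cases, assuming a standard iterative binary search in Python (the data and names are illustrative, not from the talk):

      def binary_search(items, key):
          # Assumes items is sorted ascending; returns an index or -1.
          lo, hi = 0, len(items) - 1
          while lo <= hi:
              mid = (lo + hi) // 2
              if items[mid] == key:
                  return mid
              elif items[mid] < key:
                  lo = mid + 1
              else:
                  hi = mid - 1
          return -1

      data = [2, 5, 8, 13, 21]
      assert binary_search(data, 1) == -1        # lower than the lowest: pans out
      assert binary_search(data, 9) == -1        # absent but within range: also fine
      assert binary_search(["B"], "a") == -1     # 'B' is smaller than 'a' in ASCII
      assert binary_search([2, 8, 5], 5) == -1   # unordered input: silently misses a present value
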
      Now consider that binary search is a freshman level programming problem and therefore is pretty low in terms of the complexity a developer is going to deal with. Much of software development will deal with far more complicated scenarios than this, and the facility doesn't *really* cover even the binary search complexity.

      I know I may sound more negative than is appropriate, but his enthusiasm and some people's buy-in can be risky. I've seen poor developers suckered in by various 'silver bullets' and produce lower-quality code because they think that unit tests or other mechanisms passed and they can rest easy. Using these tools is good, but it should always be accompanied by some wariness to avoid overconfidence.

      • Re:Great but... (Score:4, Insightful)

        by ghostdoc ( 1235612 ) on Sunday March 11, 2012 @04:17PM (#39319919)

        So your point, basically, is that programming is all about knowing what could go wrong with your code?

        Not a bad definition actually...it would certainly explain why coding productivity increases in step with experience; you've made those mistakes already and now know to avoid them.

        • by hughbar ( 579555 )
          This is also the point of Z, for example: http://en.wikipedia.org/wiki/Z_notation [wikipedia.org]. It's amazing how much implicit 'potential wrongness' there is even in a simple bit of code. For me, whilst the demonstrations in the talk have limits (as a previous poster said, better for things that are visual; I can't imagine what the hashes of hashes and random bits of JSON that I deal with would look like), it's pretty interesting and related to some of the Smalltalk attempts at visual programming.
      • Re:Great but... (Score:5, Insightful)

        by BasilBrush ( 643681 ) on Sunday March 11, 2012 @06:11PM (#39320737)

        However the infinite loop example was found because he *knew* what he was doing as he intentionally miswrote it to start with and intentionally changed the inputs in accordance with this knowledge.

        It was a demo. Demos by their nature tend to be both simplistic and contrived.

        He was putting across a principle in the first half of his talk and a way of life in the second. Both very valid.

        From your comments it's clear that you're more of a craftsman than a visionary. There's room in the world for both.

      • Re:Great but... (Score:4, Insightful)

        by Mr Z ( 6791 ) on Monday March 12, 2012 @07:53AM (#39325053) Homepage Journal

        I personally am a fan of "debugging by printf." Kernighan and Pike make a good argument for it in The Practice of Programming. But, that's not limited to debugging, really. It's a great tool for understanding one's own code. Basically, whenever I want to get a greater intuition about how something works, I load it up with print statements at strategic points. I guess you might call it "understanding through printf."

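        For what it's worth, a tiny Python sketch of that style; the function and the print points are made up for illustration:

        def settle(balance, rate, years):
            for year in range(years):
                interest = balance * rate
                balance += interest
                # Strategic print: watch the state evolve instead of simulating it mentally.
                print(f"year={year} interest={interest:.2f} balance={balance:.2f}")
            return balance

        settle(1000.0, 0.05, 3)
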
        I'll be honest: I didn't invest an hour of my time watching this video. How does his technique really compare to "understanding through printf"?

        • by rioki ( 1328185 )

          You should invest an hour of your life, it really is worth it.

          To the discussion at hand: sure, it is not an end-all, but how often do I see people doing the same thing in a debugger, only with mouse-over variables / watch windows. His approach is essentially the same as what many people do in the debugger, only the feedback loop is tighter. Sure, you still need to know about the problem and solution domain to isolate boundary conditions; but it surely changes the way you program.

    • by sjames ( 1099 ) on Sunday March 11, 2012 @04:20PM (#39319943) Homepage Journal

      What about, say, kernel programming?

      If you enter a syntax error, you get a video of Orville Redenbacher ripping you a new one.

    • Re:Great but... (Score:4, Insightful)

      by jythie ( 914043 ) on Sunday March 11, 2012 @04:55PM (#39320203)
      That tends to be the problem with a lot of these 'rethink programming' ideas. They tend to come from people who are thinking in very specific domains and believe that their domain is the only one that matters, or that its lessons apply everywhere. It gets very frustrating, especially if your domain is not one of the 'hot' fields that everyone is paying attention to.
      • Re:Great but... (Score:4, Insightful)

        by BasilBrush ( 643681 ) on Sunday March 11, 2012 @06:12PM (#39320751)

        You need to watch the video. The horizons of his thinking couldn't be wider.

        • Re:Great but... (Score:5, Insightful)

          by jythie ( 914043 ) on Sunday March 11, 2012 @06:25PM (#39320811)
          TBH, at an hour of talking, no. A summary or an argument with diagrams would be worth looking at, but a video? I am actually getting tired of this fad of posting videos to make a case. It's very inefficient, and the content can usually be summed up in something that takes only 10 minutes to read. I get the feeling the people who do these pieces just like hearing their own voice....
  • by slasho81 ( 455509 ) on Sunday March 11, 2012 @02:54PM (#39319411)
    Here's an implementation [chris-granger.com] of Bret's ideas in Clojure.
  • by Giant Electronic Bra ( 1229876 ) on Sunday March 11, 2012 @02:56PM (#39319425)

    Yeeehaaa! ;)

    The tough problems aren't about running the code and seeing what happens, they're about setting up very specific situations and testing them easily.

    • by vlm ( 69642 ) on Sunday March 11, 2012 @03:08PM (#39319499)

      The tough problems aren't about running the code and seeing what happens, they're about setting up very specific situations and testing them easily.

      Handling non-specific unknown/unpredicted situations gracefully is also tough. Unsanitized user input, crazy failure modes, failure in other code making your state machines go bonkers... The trendy thing to do is just throw your hands up in the air and tell the user to reboot and/or reinstall, but that's not so cool.

      Maybe another way to phrase it is that at least one of the specific situations needs to be the input of a random number generator doing crazy stuff.

      Your Arabic-to-Roman-numeral converter accepts an INT? Well, it had better not crash when fed a negative, zero, 2**63-1 (or whatever max_int is where you live), or any ol' random value in between.
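
      A rough sketch of that hostile-input hammering in Python; to_roman here is a hypothetical stand-in for whatever converter is under test:

      import random
      import sys

      def to_roman(n):
          # Hypothetical converter; rejects anything a Roman couldn't write.
          if not 1 <= n <= 3999:
              raise ValueError(f"out of range: {n}")
          pairs = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
                   (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
                   (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
          out = []
          for value, numeral in pairs:
              while n >= value:
                  out.append(numeral)
                  n -= value
          return "".join(out)

      # Negative, zero, max_int, and random places in between.
      for n in [-1, 0, sys.maxsize] + [random.randint(-10**9, 10**9) for _ in range(1000)]:
          try:
              to_roman(n)
          except ValueError:
              pass  # a clean rejection is fine; any other crash is a bug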

      • by ktappe ( 747125 ) on Sunday March 11, 2012 @03:47PM (#39319707)

        The tough problems aren't about running the code and seeing what happens, they're about setting up very specific situations and testing them easily.

        another way to phrase it is that at least one of the specific situations needs to be the input of a random number generator doing crazy stuff.

        Your Arabic-to-Roman-numeral converter accepts an INT? Well, it had better not crash when fed a negative, zero, 2**63-1 (or whatever max_int is where you live), or any ol' random value in between.

        This is still archaic thinking. A much more efficient way would be for the IDE to, when specifying a variable, ask there & then what the boundaries of the variable should be. Then the compiler could error any time it saw a situation where the variable could be (or was) handed a value outside those boundaries. Programmers should not be having to catch weird situations over and over; that's what computers are for. Allowing a variable to be any possible INT/FLOAT/REAL just doesn't make any sense in many situations so I'm quite curious why we're still having to even talk about random number generators for debugging & testing. It feels like we're still working for the computers instead of the other way around.
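
        Ada-style range types have done this at compile time where possible (as a reply below notes); in a dynamic language the idea degrades to a runtime check. A purely illustrative Python sketch:

        class Bounded:
            """An integer that refuses to leave the range declared when it was defined."""
            def __init__(self, value, lo, hi):
                self.lo, self.hi = lo, hi
                self.value = value  # routed through the checking setter below

            @property
            def value(self):
                return self._value

            @value.setter
            def value(self, v):
                if not self.lo <= v <= self.hi:
                    raise ValueError(f"{v} outside [{self.lo}, {self.hi}]")
                self._value = v

        month = Bounded(1, 1, 12)
        month.value = 12      # fine
        try:
            month.value = 13  # caught at the moment of violation, not later
        except ValueError as e:
            print(e)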

        • Re: (Score:3, Insightful)

          by Anonymous Coward

          This is still archaic thinking. A much more efficient way would be for the IDE to, when specifying a variable, ask there & then what the boundaries of the variable should be. Then the compiler could error any time it saw a situation where the variable could be (or was) handed a value outside those boundaries. Programmers should not be having to catch weird situations over and over; that's what computers are for. Allowing a variable to be any possible INT/FLOAT/REAL just doesn't make any sense in many situations so I'm quite curious why we're still having to even talk about random number generators for debugging & testing. It feels like we're still working for the computers instead of the other way around.

          A good Ada compiler could do this almost thirty years ago, due to the flexibility of the type system. Of course, Ada '83 looked hopelessly complex compared to the other languages of the time such as K&R C. Put that together with the fact that the major users were bureaucratic mega-corps involved in government aerospace projects and it acquired a horrible reputation as a bloated mess.

          Time moved on. C begat C++ and C++ started adding features without much evidence of overall direction. For example it was

        • We have languages that force you to define the allowed range of an integer every time you define it (like Ada), and static analysis tools that find many of these problems. Entering this information into a dialog box wouldn't be any faster than typing it into the source code. The problem is that programmers consider these languages too cumbersome, and using such tools on other languages has too many false positives. They would rather have freeform languages like Python, which are faster to program and ea

        • by svick ( 1158077 )

          It seems to me that what you're describing is Design by Contract [wikipedia.org]. Specifically, Code Contracts [microsoft.com] in .NET allow you to do this. The compiler there tries to check as much as it can, but checking every situation is simply not possible. It also produces runtime checks, so that your application crashes immediately when the contract is broken, instead of corrupting data.
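
          The runtime half of the idea can be approximated in any language with plain assertions; a crude Python flavor of a contract (names invented for illustration):

          def withdraw(balance, amount):
              # Preconditions: what the caller must guarantee.
              assert amount > 0, "precondition: amount must be positive"
              assert amount <= balance, "precondition: cannot overdraw"
              new_balance = balance - amount
              # Postcondition: what the function promises about its result.
              assert new_balance >= 0, "postcondition: balance stays non-negative"
              return new_balance

          print(withdraw(100, 30))  # 70; withdraw(100, 130) would fail fast instead of corrupting data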

      • at least one of the specific situations needs to be the input of a random number generator doing crazy stuff.

        You're by far not the only person to reach this conclusion [wikipedia.org].

  • Conjecture. (Score:4, Insightful)

    by tysonedwards ( 969693 ) on Sunday March 11, 2012 @02:58PM (#39319431)
    Until someone actually creates this new mythical language that is proposed by Bret, this is all conjecture: that a hyper-efficient, overly intuitive programming language that can provide immediate feedback would be hyper-efficient, overly intuitive, and provide immediate feedback.

    Basically, the video referenced by the article is no different than "wouldn't it be nice if we were no longer dependent on foreign oil... that would make so many things so much easier!"
    • by TheRaven64 ( 641858 ) on Sunday March 11, 2012 @03:08PM (#39319495) Journal
      Yes, if only there were existing systems that worked that way. Such as the Lisp environment from 1958 or the Smalltalk environment from 1976. Such revolutionary new ideas about programming! I wonder if he will invent automatic refactoring tools next...
      • by vlm ( 69642 )

        Yes, if only there were existing systems that worked that way. Such as the Lisp environment from 1958 or the Smalltalk environment from 1976. Such revolutionary new ideas about programming! I wonder if he will invent automatic refactoring tools next...

        Lisp... Smalltalk... Oh wait, I've got a FORTH answer, oh no wait, that's just the third... (Sorry, bad joke; makes more sense if you know what FORTH is)

        Gotta admit that the Venn diagram of languages with that kind of environment and "write only languages" nearly perfectly overlap. If someone would make a real time dev system for Perl and Basic then we'd have near perfect overlap.

        • Not sure why you'd think Lisp or Smalltalk are write-only. Smalltalk is heavily used in the financial industry specifically because it is easy to quickly make significant changes to legacy code.
        • by KDR_11k ( 778916 )

          How about Java? If you run your program in debug mode inside Eclipse it'll do hot code replacement every time you save (doesn't ALWAYS work and obviously won't retroactively change results of previous calculations).

      • Re:Conjecture. (Score:4, Informative)

        by Brummund ( 447393 ) on Sunday March 11, 2012 @03:35PM (#39319655)

        "Whoever does not understand LISP, is doomed to reinvent it".

        (As a practical example, I used OpenGL in Lisp 10 years ago, and it was great to modify the code while the system was running.)

      • Re:Conjecture. (Score:5, Insightful)

        by SJS ( 1851 ) on Sunday March 11, 2012 @03:49PM (#39319733) Homepage Journal

        Smalltalk and Lisp are a good example, and they show (to me) that the problem isn't the language. The hard part about programming isn't the code.

        The hard part about programming is understanding and decomposing the problem. If you're not any good at that, then no matter what language you use, you're going to struggle and produce crap.

        This isn't to say that languages aren't important -- different languages lend themselves to particular problem-spaces by suggesting particular solutions. Picking the right language for the problem is as important as picking the right wrench for the nut.

        But there will never be a DWIM language, because the big problem is getting the programmer's brain wrapped around what needs to be done. Once that's done, what's left is only difficult if the programmer doesn't have the figurative toolset on hand.

        • The hard part about programming is understanding and decomposing the problem. If you're not any good at that, then no matter what language you use, you're going to struggle and produce crap.

          there was a report a few years ago about an entrepreneur who *refused* to employ computer science majors. he only employed english language majors. the reason was he said that it was easier to communicate with english language majors in order to teach them programming, and they were more effective and also worked bett

          • there was a report a few years ago about an entrepreneur who *refused* to employ computer science majors. he only employed english language majors. the reason was he said that it was easier to communicate with english language majors in order to teach them programming, and they were more effective and also worked better when introduced into teams, than the people who had been taught programming at university.

            I think there's a reason we don't all know this entrepreneur's name and abbreviated life history.

        • Sure, the difficult part is the problem. But if you have the computer doing the trivial and menial parts, the programmer's mind will have more resources to expend analyzing the problem, and thus it will be easier to solve the difficult part.

    • "wouldn't it be nice if we were no longer dependent on foreign oil... that would make so many things so much easier!"

      The funniest thing about this one is that it wouldn't matter. Even if we were no longer dependent on foreign oil, so many other countries around the world would be, that the middle east would still be getting funding for their terrorist programs.

  • Wait (Score:5, Insightful)

    by bytesex ( 112972 ) on Sunday March 11, 2012 @03:00PM (#39319435) Homepage

    Someone re-invented scripting languages ?

    • Re:Wait (Score:5, Insightful)

      by gl4ss ( 559668 ) on Sunday March 11, 2012 @03:52PM (#39319743) Homepage Journal

      well, yes. but he sold the idea as having someone (the computer, an AI) magically set up the situation you were thinking of when writing that code, too.

      it's very easy to demo the benefits with something that, for example, just draws something, but such game engines have been done before, and it isn't really that different from just editing the code in an SVG. however, as you add something dynamic to it... how is the computer supposed to know, without you instructing it? and using mock content providers for realtime UI design is nothing new either, so wtf?

  • Now go program me a nuclear reactor control system with this.

    • by plover ( 150551 ) *

      Are you saying that trial and error isn't appropriate for a system that cannot fail even one time?

      I think it'd be very appropriate to build reactor control software with tests. Lots of tests. Lots and lots of tests. And you can simulate every device out there; you can simulate what happens when pressure builds or releases unexpectedly, and you can simulate what happens when the operator pours his Pepsi down the control panel and provides you with nonsensical inputs, etc.

      Matter of fact, I can't see any other

  • by istartedi ( 132515 ) on Sunday March 11, 2012 @03:12PM (#39319529) Journal

    Unless somebody wants to give a better executive summary, there's no way I'm weeding through an hour of video. Do they have any idea how many hours there are of "the one video you must see this year" on YouTube?

    • by Twinbee ( 767046 ) on Sunday March 11, 2012 @05:01PM (#39320247)
      Okay, let's give this a go; well, the first 25 minutes anyway. He's talking about seeing the effects of programming not after compiling and running, but while you're actually *typing*.

      The first example (with the fractal tree) is interesting. He changes a number in the code, and the trunk gets taller or the number of leaves grows. He then adjusts the variable in the code as if it were a slider, and the picture updates accordingly in realtime.

      The second example is a platform game. He is able to use a slider to go back and forward through time, and to change certain parameters such as the gravity or the speed of the enemy turtle. To solve a problem at the end, where he wants the character to jump to a very particular height, he employs a 'trail' of the character over time (so you can see all of time at once). By adjusting the variable he can get that perfect jump height more intuitively. The 'character trail' changes in realtime as he adjusts the variable.

      The third example is where he talks through a binary search algorithm. Usually you're blind, as you have to figure out what the computer is doing behind the scenes. But suppose you can see the output as you're typing. The values of the variables are drawn to the right as you type. If there's a loop, then there is more than one value per variable; in that case, the values are laid out horizontally in the output.

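      Outside his environment, the closest low-tech approximation of that third demo is printing one row of variable values per pass through the loop; a Python sketch (illustrative only):

      def binary_search_traced(items, key):
          lo, hi = 0, len(items) - 1
          step = 0
          while lo <= hi:
              mid = (lo + hi) // 2
              # One row of values per iteration, like the columns in the demo.
              print(f"step={step}  lo={lo}  mid={mid}  hi={hi}  items[mid]={items[mid]}")
              if items[mid] == key:
                  return mid
              elif items[mid] < key:
                  lo = mid + 1
              else:
                  hi = mid - 1
              step += 1
          return -1

      binary_search_traced([1, 3, 5, 7, 9, 11], 9)
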
      I've thought of a lot of the things that this guy has said (and even semi-planned to incorporate his third idea into my OpalCalc program shown in my sig), but a couple of things were moderately surprising, and it's nice to see it all in action.
  • by Anonymous Coward on Sunday March 11, 2012 @03:14PM (#39319541)

    I've been doing this for years with Python's interactive prompt. I write code and test it on a line-by-line basis as I'm programming when working with unfamiliar libraries. The code that works is then put into a permanent file for reuse as a script, or compiled to an executable for distribution to my end users.
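
    One way to make such interactive experiments permanent is Python's doctest module, which replays a saved prompt session as a test; a minimal sketch (not necessarily the poster's workflow):

    def slug(title):
        """Collapse a title into a URL slug.

        >>> slug("A Better Way To Program")
        'a-better-way-to-program'
        >>> slug("  spaces   everywhere ")
        'spaces-everywhere'
        """
        return "-".join(title.lower().split())

    if __name__ == "__main__":
        import doctest
        doctest.testmod()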

  • by 1729 ( 581437 ) <slashdot1729@gma[ ]com ['il.' in gap]> on Sunday March 11, 2012 @03:29PM (#39319621)

    "I remember how, with the advent of terminals, interactive debugging was supposed to solve all our programming problems, and how, with the advent of colour screens, "algorithm animation" was supposed to do the same. And what did we get? Commercial software with a disclaimer that explicitly states that you are a fool if you rely on what you just bought."

    From http://www.cs.utexas.edu/~vl/notes/dijkstra.html [utexas.edu].

    • I'd argue that colour screens did give us a big, obvious, and immediate improvement - syntax highlighting. No having to learn some new technique or method - just open your existing code and the editor highlights it accordingly. Off-hand, I can't think of anything else that, by itself, had as much of an impact across all programming languages.
      • by 1729 ( 581437 )

        Yeah, I like syntax highlighting (and color screens in general), as well as interactive debuggers. Certainly, we can write code faster and find bugs more efficiently with all the tools available today. But it's the "silver bullet" claims that I'm skeptical about.

    • I giggled like a schoolchild when I read the next paragraph from that lecture:

      And now we have the multimedia/communication hype: the best bits are those that just arrived from far away, and if you are not "on line", "on the Net", you just don't count, you are not of this world (which is virtual anyhow...). Apart from a change in vocabulary, it is the same hype, the same snake oil over and over again, and you can do me a favour by not getting excited by all the time you are supposed to save by switching t

      • by Raenex ( 947668 )

        Sometimes very smart people can be mostly insightful, but very spectacularly wrong on some points.

        Dijkstra turned into a figurative monk in his later years, dedicating himself to austerity and impractical principles. Kinda like Stallman is for "Software Freedom". There's some value in that, but the rest of us need to get work done, perfection being the enemy of the good.

  • by AdrianKemp ( 1988748 ) on Sunday March 11, 2012 @03:29PM (#39319623)

    It is the most worthless, dumbass thing I've ever had to sit through.

    It's just another "wouldn't it be great if computers programmed themselves" video by someone too stupid to tie their own shoes.

    I know what the code I'm writing is going to do *before* I start writing it, as I hope for the love of god most programmers do.

    In fact, the biggest plague on programming ever has been this concept that changing what you want fifteen times along the way is perfectly okay and we just need to adapt methods to that idiocy. You don't need any of this crap if you just know what you want ahead of time.

    • Agreed!
      When you write code you should be 100% sure what the code will do. If not, you don't really know how to program and can be categorized as belonging to the "poke at it until it works" crowd, and should be banned from commercial programming altogether. (Members of that crowd should stay away from kernels and other important open source projects too, please.)
      Working with such coders will be frustrating at best, and the death of projects at worst.
    • by bertok ( 226922 )

      Maybe an analogy would actually be better...

      Think of programming as a Mathematician developing a new maths proof. The Mathematician may not know how to get to his goal, but that doesn't mean that the solution isn't robust, or that he needs a calculator at every step.

      Similarly, a good programmer can develop robust and easy-to-maintain code even without an a priori design or automated assistance.

      Where machine-assistance comes in is that I can see situations where a computer can assist the Mathematician in a

      • You are wrong, and I hope in time you see this.

        Programming is *not* like a mathematician stumbling upon a new proof. Programming is in fact extremely rigid and based upon a series of existing mathematics.

        There is no room in *professional* software for stabbing in the dark. There is all the room in the world for that in hobby programming done by professionals or otherwise.

        I don't expect a carpenter to build a house without a blueprint; it'd turn out as shitty as most modern software.

    • by Twinbee ( 767046 )
      Wow, to be able to get code right the first time without ever debugging, you must be superhuman. Perhaps we should throw away the debugger altogether, and any debugging output. While we're at it, throw away any regression or unit testing too; they're only for morons.
      • what is with your morons and equating the video I was commenting on to debugging?

        • It's "you morons", not "your morons and", and a sentence starts with a capital letter. What's the matter, didn't you know what the sentence was going to be before you wrote it?

  • by Anonymous Coward on Sunday March 11, 2012 @03:32PM (#39319645)

    It's called Python. I use the interactive interpreter to test snippets of code all the time. This makes me 4 times more productive than in Java, for example.

  • 'nuff said

  • The much-hated VB offers much of what you're looking for. Like it or hate it, it is really good for prototyping and for rapid development. The only drawbacks are that it encourages bad code and marries you to Windows.

  • ... in the creative coding community.

    Patchers like Max and Pure Data allow for realtime graphical programming, and live-coding environments such as Fluxus exist for realtime graphics and sound. Max [cycling74.com] was originally written by Miller Puckette in the mid 80s for realtime control of DSP for computer music at IRCAM, and Pure Data [wikipedia.org], started in the mid 90s, is his open-source take on addressing some of its design issues. Fluxus [pawfal.org] originates from around 2000 and is a live 3D engine for performance using a Lisp variant a

  • by Cyberax ( 705495 ) on Sunday March 11, 2012 @04:06PM (#39319849)

    We already have something like it: IDEs for typed languages with strong inspections.

    When I'm writing something, my IDE helps me by highlighting possible errors or undesirable effects. That's not the 'visualization' stuff the author's talking about in the article, but it's actually useful.

  • the extension of this idea is to use evolution-style techniques. from an automated perspective, that means deploying genetic algorithms to improve the code. unfortunately that means that good tests have to be written in order to verify that the code so (eventually) generated is actually "correct". many people would fail to go to the trouble of writing good enough tests.

    yes - the alternative is seen to be to "read every line of code and make it run in your head".

    i can't handle that. a) i can't be bothere

    • by Alex Belits ( 437 ) * on Sunday March 11, 2012 @04:56PM (#39320205) Homepage

      Congratulations, you are an idiot.

      "Tests", no matter how numerous, cover an infinitely small fraction of possible instances of your code being used (that would be an infinite number of, unless you take into account the maximum lifetime of the Universe). They are supposed to assist a developer finding some faults in his reasoning, nothing more than that.

  • I can't even bring myself to watch this, and I'm generally a compulsive bring-myselfer.

    Dijkstra spins in his grave. Somewhere out there, Lamport is guzzling Guinness by the barrel and swinging his underwear around his head, while Knuth plays organ dirges.

    The plural of anecdote is performance bonus. That was the VB business model in the late 1990s. This won't work twice. To obtain twice as many programmers at half the price there's now India and China. And they can do math.

  • by prefec2 ( 875483 ) on Sunday March 11, 2012 @04:33PM (#39320021)

    Most programmers think that coding takes up the biggest part of development. In some cases they would admit that testing is also very time consuming. But the real issue is understanding the problem. A lot of programmers design while they code. This results in strange development cycles, which also include trying to understand what the program does. That can be done with a debugger, an interpreter, or a trace analysis tool. The real issue is the problem description. First, understand the problem and its borderline cases. Second, come up with a solution for that problem. And third, try to implement it.

    BTW: Most programs today do not contain that many surprising new algorithms. They collect data, they validate data against some constraints, they store data, and they provide an interface to query the data. In desktop applications like Inkscape or Word, the data is stored in in-memory models, and there exist serializers and renderers for the content. So the code as such is not hard to understand, as we all know such structures.

    • And why would it be bad practice to use a computer to analyze the available data, discover the possible cases that must be covered, and thus understand what the program needs to do? Sure, the final released code should not contain the tests implemented for the first assumptions about the data. But understanding the problem is easier if you can run a primary version of the basic program logic on live data.

      • by prefec2 ( 875483 ) on Sunday March 11, 2012 @05:02PM (#39320261)

        No. My argument is: You have to understand the problem and its cases first and then code it. At least that is what we learned from the analysis of different software development methods. Some developers think that the perfect model to fit all cases will emerge while they code. This is true for very small problems, where they can keep all the cases in their head. As an alternative, you could start from the cases. Define your input, your desired output, and all parameters which affect the result. You define your constraints. If you believe in strict test-driven development, you could even design the tests first. Then you divide the problem and specify the different sub-problems before you try to implement them in C, Java, Fortran, Cobol, etc.

        Your argument is: You start with some input (a subset of the real input, or maybe a specification of the input) and a desired output. Then you fiddle around and try to come up with a suitable transformation which reads the input and provides the desired output. To "test" whether it works, you run it with input data. When it does not suit your needs, you modify the transformation code. On a sample-input basis you cannot know what all the alternatives of the input look like. And if you have just a specification of the input, you have to generate suitable input from that specification.

        My proposition is: if you know what the input is, what the output is, and what the cases are, you should be able to come up with a specification for the transformation before you implement it. The specification is then the model for the implementation. That approach is much safer, and you can even try to prove that the code supports all specified input. I know most people work differently. But that is expensive and results in long coding nights and a lot of extra hours.
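
        A compressed Python sketch of that order of work, with the specification existing as an executable check before the transformation is written (all names illustrative):

        # Step 1: pin down input, output, and constraints as a checkable specification.
        def satisfies_spec(xs, result):
            same_elements = sorted(result) == sorted(xs)
            ordered = all(a <= b for a, b in zip(result, result[1:]))
            return same_elements and ordered

        # Step 2: only then write the transformation against that spec.
        def transform(xs):
            return sorted(xs)

        for case in [[3, 1, 2], [], [5, 5, 1]]:
            assert satisfies_spec(case, transform(case))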

        • You have to understand the problem and its cases first and then code it. At least that is what we learned from the analysis of different software development methods

          That is the myth of the classical waterfall model. It assumes that you can and already do understand the whole problem in advance. As I said in another thread, that only works if you already have a model for the problem, maybe because it's a classical problem in physics or math. What I say is that you can use software to develop a new model; in

  • It's an hour long, and I don't have the patience to sit through someone gibbering on for that long. Is there a transcript?

  • by Animats ( 122034 ) on Sunday March 11, 2012 @04:43PM (#39320099) Homepage

    The article completely misses the point. The talk starts out awful, but after about five minutes of blithering, he gets to the point. He's set up an environment where he's running a little Javascript program that draws a picture of a tree with flowers, with mountains in the background. As he edits the code, the picture changes. There's a nice user interface which allows selecting a number in the code and then changing it with a slider. This immediately affects the picture. There's an autocomplete feature which, when code typing has reached a limited number of possible choices, offers options like drawCircle, drawRect, etc. Mousing over the selections changes the picture.

    It makes sense if you're drawing pictures with programs. Something like this should be in editors for Renderman shaders and Maya programs, and maybe for some game engines. It also makes sense for HTML editors, and there are HTML editors which do that. Beyond that, it may not be too useful.
