A Better Way To Program
mikejuk writes "This video will change the way you think about programming. The argument is clear and impressive — it suggests that we really are building programs with one hand tied behind our backs. Programmers can only understand their code by pretending to be computers and running it in their heads. As this video shows, this is incredibly inefficient and, as we generally have a computer in front of us, why not use it to help us understand the code? The key is probably interactivity. Don't wait for a compile to complete to see what effect your code has on things — if you can see it in real time then programming becomes much easier."
They invented the debugger! (Score:5, Insightful)
Yeeehaaa! ;)
The tough problems aren't about running the code and seeing what happens, they're about setting up very specific situations and testing them easily.
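For instance, setting up one very specific situation as a repeatable test. A minimal unittest sketch (the discount function here is made up purely for illustration, not from TFA):

import unittest

def apply_discount(total, code, already_used):
    # toy function under test -- made up for this example
    if code == "SAVE10" and not already_used:
        return total * 0.9, True
    return total, already_used

class DiscountTest(unittest.TestCase):
    def test_discount_applies_only_once(self):
        # the very specific situation: the same code applied twice
        total, used = apply_discount(100, "SAVE10", False)
        total, used = apply_discount(total, "SAVE10", used)
        self.assertEqual(total, 90)

if __name__ == "__main__":
    unittest.main()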
Conjecture. (Score:4, Insightful)
Basically, the video referenced by the article is no different than "wouldn't it be nice if we were no longer dependent on foreign oil... that would make so many things so much easier!"
Wait (Score:5, Insightful)
Someone re-invented scripting languages?
An observation... (Score:1, Insightful)
If you need to "run" code, either in your head or on a computer, in order to see what it's going to do, you're probably not really programming and you're definitely not an engineer.
Re:They invented the debugger! (Score:5, Insightful)
The tough problems aren't about running the code and seeing what happens, they're about setting up very specific situations and testing them easily.
Handling non-specific unknown/unpredicted situations gracefully is also tough. Unsanitized user input, crazy failure modes, failure in other code making your state machines go bonkers... The trendy thing to do is just throw your hands up in the air and tell the user to reboot and/or reinstall, but that's not so cool.
Maybe another way to phrase it: at least one of the specific situations needs to be input from a random number generator doing crazy stuff.
Your Arabic-to-Roman-numeral converter accepts an INT? Well, it had better not crash when fed a negative, zero, 2**63-1 (or whatever max_int is where you live), or any ole random value in between.
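A minimal sketch of that kind of boundary bashing, assuming a hypothetical to_roman function (not real library code):

import random

def check_to_roman(to_roman):
    # the boundaries you have to *know* to try...
    cases = [-1, 0, 1, 2**31 - 1, 2**63 - 1]
    # ...plus 'any ole random value in between'
    cases += [random.randint(-2**63, 2**63 - 1) for _ in range(1000)]
    for n in cases:
        try:
            to_roman(n)      # hypothetical converter under test
        except ValueError:
            pass             # politely rejecting bad input is fine; crashing is not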
Sounds like they have a GUI REPL (Score:5, Insightful)
Unless somebody wants to give a better executive summary, there's no way I'm wading through an hour of video. Do they have any idea how many hours there are of "the one video you must see this year" on YouTube?
Re:An observation... (Score:4, Insightful)
If you need to "run" code, either in your head or on a computer, in order to see what it's going to do, you're probably not really programming and you're definitely not an engineer.
This would be a better post if you explained the "right way"; hopefully it's not mysticism.
What's wrong with processing this line of Perl in your head according to the rules to figure out what it does? (Admittedly I have no idea why the heck you'd want to do this, but it's the simplest example I can think of that uses about three key Perl concepts...)
s/(.*):(.*)/$2:$1/;
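# both (.*) are greedy, so the first grabs as much as it can: this swaps everything before the *last* colon with everything after it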
The other aspect has to do with new code vs. maintenance (even maintenance of my own code). If I have no idea what I'm doing with my own freshly written code, that's just wrong... but old code always involves some intense CSI work to figure out what it does before I can modify it.
Obligatory Dijkstra (Score:5, Insightful)
"I remember how, with the advent of terminals, interactive debugging was supposed to solve all our programming problems, and how, with the advent of colour screens, "algorithm animation" was supposed to do the same. And what did we get? Commercial software with a disclaimer that explicitly states that you are a fool if you rely on what you just bought."
From http://www.cs.utexas.edu/~vl/notes/dijkstra.html [utexas.edu].
Re:Great but... (Score:5, Insightful)
That sounds simple, but it isn't. While you could theoretically do this from a virtual machine, the difference between visualising it and testing it on real hardware is significant, especially when it comes to device drivers, which are known to be the most common source of bugs in kernels.
Plus, verifying a kernel or a compiler is a pretty hard problem; it's a miracle if you manage to do it in decent time, let alone manage to visualise it in any way.
Re:Conjecture. (Score:5, Insightful)
Smalltalk and Lisp are a good example, and they show (to me) that the problem isn't the language. The hard part about programming isn't the code.
The hard part about programming is understanding and decomposing the problem. If you're not any good at that, then no matter what language you use, you're going to struggle and produce crap.
This isn't to say that languages aren't important -- different languages lend themselves to particular problem-spaces by suggesting particular solutions. Picking the right language for the problem is as important as picking the right wrench for the nut.
But there will never be a DWIM language, because the big problem is getting the programmer's brain wrapped around what needs to be done. Once that's done, what's left is only difficult if the programmer doesn't have the figurative toolset on hand.
Re:Wait (Score:5, Insightful)
Well, yes, but it was sold with the idea of having someone (the computer, an AI) magically set up the situation you were thinking of when writing that code, too.
It's very easy to demo the benefits with something that, for example, just draws something, but such game engines have been done before, and it isn't really that different from just editing the code in an SVG. However, as you add something dynamic to it... how is the computer supposed to know, without you instructing it? And using mock content providers for realtime UI design is nothing new either, so wtf?
Re:Great but... (Score:5, Insightful)
In the video he covers that as well. Well, at least he conceptually says it's covered; I disagree...
Let's start with his abstract example. His binary search looks straightforward on the surface, and he wanted to portray it as magically finding bugs as he got a float in one instance and an infinite loop in another. However, the infinite loop example was found because he *knew* what he was doing: he intentionally miswrote it to start with and intentionally changed the inputs in accordance with this knowledge. There are a few more possibilities that you have to *know* to try out. For example, he didn't try a value lower than the lowest (which would have panned out), he didn't try a value omitted from the list but still higher than the lowest and lower than the highest (which also would have been fine), and he didn't try an unordered list (which is incorrect usage, but accounting for incorrect usage is a fact of life). He didn't try varying dataset sizes (which doesn't matter in this algorithm, but he has to *know* that) or different types of data. You still have the fact that 'B' is smaller than 'a' and all sorts of 'non-intuitive' things inherent in the situation.
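To make that concrete, here's a minimal binary search sketch (my own, not the code from the talk) annotated with the bugs and cases in question:

def binary_search(xs, target):
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # in C, '(lo + hi) / 2' is where the classic overflow bug hides
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1           # a 'lo = mid' slip here is one way to get an infinite loop
        else:
            hi = mid - 1
    return -1

# the cases you have to *know* to try:
assert binary_search([2, 5, 9], 1) == -1    # lower than the lowest
assert binary_search([2, 5, 9], 7) == -1    # between bounds but absent
assert binary_search([], 42) == -1          # empty input
binary_search([9, 2, 5], 5)                 # unordered: incorrect usage, but it must still terminate
assert binary_search(['B', 'a'], 'a') == 1  # 'B' is smaller than 'a'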
Now consider that binary search is a freshman-level programming problem and therefore pretty low on the scale of complexity a developer is going to deal with. Much of software development deals with far more complicated scenarios than this, and the facility doesn't *really* cover even binary-search-level complexity.
I know I may sound more negative than is appropriate, but his enthusiasm and some people's buy-in can be risky. I've seen poor developers suckered in by various 'silver bullets' who produced lower-quality code because they thought that unit tests or other mechanisms passed and they could rest easy. Using these tools is good, but it should always be accompanied by some wariness to avoid overconfidence.
Re:Great but... (Score:4, Insightful)
So your point, basically, is that programming is all about knowing what could go wrong with your code?
Not a bad definition, actually... it would certainly explain why coding productivity increases in step with experience: you've made those mistakes already and now know to avoid them.
Re:They invented the debugger! (Score:3, Insightful)
This is still archaic thinking. A much more efficient way would be for the IDE, when you declare a variable, to ask there and then what the boundaries of the variable should be. Then the compiler could flag an error any time it saw a situation where the variable could be (or was) handed a value outside those boundaries. Programmers should not have to catch weird situations over and over; that's what computers are for. Allowing a variable to be any possible INT/FLOAT/REAL just doesn't make sense in many situations, so I'm quite curious why we're still having to even talk about random number generators for debugging & testing. It feels like we're still working for the computers instead of the other way around.
A good Ada compiler could do this almost thirty years ago, thanks to the flexibility of the type system. Of course, Ada '83 looked hopelessly complex compared to the other languages of the time, such as K&R C. Put that together with the fact that its major users were bureaucratic mega-corps involved in government aerospace projects, and it acquired a horrible reputation as a bloated mess.
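For those who never used it: Ada lets you declare an integer subtype constrained to a range, and the compiler rejects many out-of-range uses outright. Roughly the same idea, approximated at runtime in Python (a sketch only; no runtime check replicates Ada's compile-time enforcement):

class Percent(int):
    # runtime approximation of an Ada-style range-constrained type
    def __new__(cls, value):
        if not 0 <= value <= 100:
            raise ValueError(f"Percent must be in 0..100, got {value}")
        return super().__new__(cls, value)

p = Percent(85)            # fine
try:
    q = Percent(120)       # caught here rather than corrupting state later
except ValueError as e:
    print(e)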
Time moved on. C begat C++, and C++ started adding features without much evidence of overall direction. For example, it was never an explicit design goal for C++ templates to be Turing-complete. Features were added one after another, and one day someone pointed out that they had gone so far that templates could now be considered a language in themselves. Is it a good idea for a language to contain its own embedded meta-language? Is this something that results in maintainable, understandable code that can be analysed successfully? These questions did not matter, because template meta-programming 'just happened' as C++ features agglomerated.
Nowadays the most recent version of Ada (Ada 2012) is probably one of the more straightforward and better-designed languages, but its early reputation is unshakable. That's life, I guess.
Re:An observation... (Score:2, Insightful)
And if you don't run it through the debugger and STEP through it, you are just guessing what it will do.
If you are not right about the behavior of your code, you are not qualified to write it in the first place.
Many times I step through my code and find some assumption I was making that is invalid.
Then go kill yourself. People like you are the reason why there are bugs everywhere.
You can write code that compiles with 0 warnings at the highest levels, gets through the most stringent lint checks, has passed dozens of code reviews, was pair-programmed, etc., etc.
Compiler warnings are about things you are supposed to know -- a good programmer only gets them on typos or after removing things thus leaving something unused in the code.
But until you run it and step through it and see, you will never know.
LISTEN, EVERYONE!
This is what is wrong with those people. They think they can write random shit, single-step through it, make more random changes, and repeat until it seems to run. Their code only works by accident. Get them out of programming.
The biggest Mistake Today (Score:5, Insightful)
Most programmers think that coding takes up the largest part of development. In some cases they would admit that testing is also very time-consuming. But the real issue is understanding the problem. A lot of programmers design while they code. This results in strange development cycles, which also include trying to understand what the program does; that can be done with a debugger, an interpreter, or a trace-analysis tool. The real issue is the problem description. First, understand the problem and its borderline cases. Second, come up with a solution for that problem. And third, try to implement it.
BTW: Most programs today do not contain that many surprising new algorithms. They collect data, they validate data against some constraints, they store data, they provide an interface to query the data. In desktop applications like Inkscape or Word the data is stored in in-memory models, and there exist serializers and renderers for the content. So the code as such is not hard to understand, as we all know such structures.
Re:understanding by doing... faster (Score:5, Insightful)
Congratulations, you are an idiot.
"Tests", no matter how numerous, cover an infinitely small fraction of possible instances of your code being used (that would be an infinite number of, unless you take into account the maximum lifetime of the Universe). They are supposed to assist a developer finding some faults in his reasoning, nothing more than that.
Re:Sounds like they have a GUI REPL (Score:5, Insightful)
The first example (with the fractal tree) is interesting. He changes a number in the code, and the trunk gets taller, or the number of leaves grows. He then adjusts the variable in the code as if it were a slider, and the picture updates accordingly in realtime.
The second example is a platform game. He is able to use a slider to go back and forward through time, and to change certain parameters such as the gravity or the speed of the enemy turtle. To solve a problem at the end, where he wants the character to jump to a very particular height, he employs a 'trail' of the character over time (so you can see all of 'time' at once). Adjusting the variable, he can get that perfect jump height more intuitively. The 'character trail' changes in realtime as he adjusts the variable.
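The mechanism behind that slider is easy to sketch (my own toy version, not his engine): record every world state, then replay any tick on demand.

import copy

def step(world):
    # one tick of a toy platformer: gravity pulls the jump back down
    world["t"] += 1
    world["y"] += world["vy"]
    world["vy"] -= world["gravity"]

world = {"t": 0, "y": 0.0, "vy": 5.0, "gravity": 0.3}
history = [copy.deepcopy(world)]
for _ in range(100):
    step(world)
    history.append(copy.deepcopy(world))

# the time slider: pick any recorded tick and inspect (or redraw) it
print(history[42])
# changing a parameter such as gravity just means re-running the ticks
# with the new value and redrawing the whole trail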
The third example is where he talks through a binary search algorithm. Usually you're blind, as you have to figure out what the computer is doing behind the scenes. But suppose you can see the output as you're typing. The values of the variables are drawn to the right as you type. If there's a loop, there will be more than one value; in that case, the values are segmented horizontally in the output.
I've thought of a lot of the things this guy has said (and even semi-planned to incorporate his third idea into my OpalCalc program, shown in my sig), but a couple of things were moderately surprising, and it's nice to see it all in action.
Re:The biggest Mistake Today (Score:4, Insightful)
No. My argument is: you have to understand the problem and its cases first, and then code it. At least that is what we learned from the analysis of different software-development methods. Some developers think that the perfect model to fit all cases will emerge while they code. This is true for very small problems, where they can keep all the cases in their head. As an alternative you could start from the cases: define your input, the desired output, and all parameters which affect the result. You define your constraints. If you believe in strict test-driven development, you could even design the tests. Then you divide the problem and specify the different sub-problems before you try to implement them in C, Java, Fortran, Cobol, etc.
Your argument is: you start with some input (a subset of the real input, or maybe a specification of the input) and a desired output. Then you fiddle around and try to come up with a suitable transformation which reads the input and produces the desired output. To "test" whether it works, you run it with input data. When it does not suit your needs, you modify the transformation code. On a sample-input basis you cannot know what all the alternatives of input look like. And if you have just a specification of the input, you have to generate suitable input from that specification.
My proposition is: if you know what the input is, what the output is, and what the cases are, you should be able to come up with a specification for the transformation before you implement it. The specification is then the model for the implementation. That approach is much safer, and you can even try to prove that the code supports all specified input. I know most people work differently. But it is expensive, and it results in long coding nights and a lot of extra hours.
Re:Great but... (Score:5, Insightful)
However, the infinite loop example was found because he *knew* what he was doing: he intentionally miswrote it to start with and intentionally changed the inputs in accordance with this knowledge.
It was a demo. Demos by their nature tend to be both simplistic and contrived.
He was putting across a principle in the first half of his talk and a way of life in the second. Both very valid.
From your comments it's clear that you're more of a craftsman than a visionary. There's room in the world for both.
Re:Great but... (Score:4, Insightful)
You need to watch the video. The horizons of his thinking couldn't be wider.
Re:Great but... (Score:2, Insightful)
It's worth an hour. But if you don't want to watch it, fine. Just don't expect your comments to be worth anything if you haven't done your homework.
Re:Connecting to your creation in Clojure (Score:4, Insightful)
And for those decrying the use of video, you'll definitely want to check out Up and Down the Ladder of Abstraction by the same author: http://worrydream.com/LadderOfAbstraction/ [worrydream.com]
It's a big wall of text with interactive javascript examples and no video.
Re:Great but... (Score:4, Insightful)
It's worth an hour. But if you don't want to watch it, fine. Just don't expect your comments to be worth anything if you haven't done your homework.
He has a point. You don't need a 1-hour-long video as the sole medium with which to convey a technical point. Summaries, diagrams, and/or an 8-10 page paper are also necessary. Asking people to devote one hour just to find out whether something is worth it: that is not how you present technical ideas or issues.
Re:Great but... (Score:4, Insightful)
I personally am a fan of "debugging by printf." Kernighan and Pike make a good argument for it in The Practice of Programming. But it's not limited to debugging, really; it's a great tool for understanding one's own code. Basically, whenever I want to get a greater intuition about how something works, I load it up with print statements at strategic points. I guess you might call it "understanding through printf."
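For example (my own illustration, not one from Kernighan and Pike):

def running_mean(xs):
    total = 0.0
    for i, x in enumerate(xs, start=1):
        total += x
        # strategic print: watch the state evolve instead of simulating it in your head
        print(f"i={i} x={x} total={total} mean={total / i}")
    return total / len(xs)

running_mean([3, 1, 4, 1, 5])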
I'll be honest: I didn't invest an hour of my time watching this video. How does his technique really compare to "understanding through printf"?