Toward Better Programming 391
An anonymous reader writes "Chris Granger, creator of the flexible, open source LightTable IDE, has written a thoughtful article about the nature of programming. For years, he's been trying to answer the question: What's wrong with programming? After working on his own IDE and discussing it with hundreds of other developers, here are his thoughts: 'If you look at much of the advances that have made it to the mainstream over the past 50 years, it turns out they largely increased our efficiency without really changing the act of programming. I think the reason why is something I hinted at in the very beginning of this post: it's all been reactionary and as a result we tend to only apply tactical fixes. As a matter of fact, almost every step we've taken fits cleanly into one of these buckets. We've made things better but we keep reaching local maxima because we assume that these things can somehow be addressed independently. ... The other day, I came to the conclusion that the act of writing software is actually antagonistic all on its own. Arcane languages, cryptic errors, mostly missing (or at best, scattered) documentation — it's like someone is deliberately trying to screw with you, sitting in some Truman Show-like control room pointing and laughing behind the scenes. At some level, it's masochistic, but we do it because it gives us an incredible opportunity to shape our world.'"
Separation of Concerns (Score:5, Insightful)
In my 25 years of professional programming experience, I've noticed that most programming problems are caused by improper separation of unrelated concerns and improper coupling of related concerns. Orthogonality is difficult to achieve in many programming exercises, especially regarding cross-cutting concerns, and sometimes the "right" way to code something is tedious and unusable, involving passing state down through several layers of method parameters.
Re:Separation of Concerns (Score:5, Insightful)
No matter how flexible an architecture you try to design, after the software is mostly built, customers will come up with even more incredibly bizarre, complex and unrelated functionality that just HAS to be integrated at the oddest points, with semi-related information thrown here and there, requiring data gathering (or - god forbid, (partial) saving) that grinds everything to a halt. And they rarely give you much time for redesign or refactoring. What was once a nice design with clean, readable code is now full of gotchas, barely commented kludges, and extra optional parameters that might swim around multiple layers, often depending on who called what, when, and from where, and also on various settings, which obviously are NEVER included in bug reports. Of course, there are multiple installations running multiple versions...
Re: (Score:2)
It's an odd thing, but I've worked in game development and business software, and game development has much simpler requirements. You know what looks and feels wrong, but business software is a matter of opinion--lots of opinions--and those opinions contradict each other. To give one client what they want, you may end up screwing all the others--and it becomes your fault that you cannot be all things to all people. At some point, you have to tell people that if they want X, it will be slow, limited, and DO
Re:Separation of Concerns (Score:5, Interesting)
and sometimes the "right" way to code something is tedious and unusable, involving passing state down through several layers of method parameters
Sometimes that really is the right way (more often it's a sign you've mixed your plumbing with your business logic inappropriately, but that's another topic). One old-school technique that has inappropriately fallen out of favor is the "comreg" (communication region). In modern terms: take all the parameters that all the layers need (which are mostly redundant), and toss them together in a struct and pass the struct "from hand to hand", fixing up the right bit in each layer.
It seems like a layer violation, but only because "layers" are sometimes just the wrong metaphor. Sometimes an assembly line is a better metaphor. You have a struct with a jumble of fields that contain the input at the start and the result at the end and a mess in the middle. You can always stick a bunch of interfaces in front of the struct if it makes you feel better, one for each "layer".
One place this pattern shines is when you're passing a batch of N work items through the layers in a list/container. This allows for the best error handling and tracking, while preserving whatever performance advantage working in batches gave you - each layer just annotates the comreg struct with error info for any errors, and remaining layers just ignore that item and move to the next in the batch. Errors can then be reported back to the caller in a coherent way, and all the non-error work still gets done.
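A minimal Python sketch of the batch comreg idea described above (all names here are hypothetical, invented for illustration): the struct travels from hand to hand, each layer fills in its bit, and errored items are annotated rather than aborting the batch.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical "comreg" record: input, intermediate state, result, and
# error info all travel together through every layer.
@dataclass
class WorkItem:
    raw: str
    parsed: Optional[int] = None
    result: Optional[int] = None
    error: Optional[str] = None

def parse_layer(batch):
    for item in batch:
        if item.error:
            continue  # skip items a previous layer marked as failed
        try:
            item.parsed = int(item.raw)
        except ValueError:
            item.error = f"not a number: {item.raw!r}"

def compute_layer(batch):
    for item in batch:
        if item.error:
            continue
        item.result = item.parsed * 2

batch = [WorkItem("3"), WorkItem("oops"), WorkItem("5")]
parse_layer(batch)
compute_layer(batch)
# All the non-error work still gets done; errors come back coherently.
print([(i.result, i.error) for i in batch])
```

The caller gets both the completed work and the per-item errors in one pass, which is the error-handling advantage the comment describes.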
Re: (Score:3)
In modern terms: take all the parameters that all the layers need (which are mostly redundant), and toss them together in a struct and pass the struct "from hand to hand", fixing up the right bit in each layer.
This is nicely solved by the notion of dynamic environments. The benefit is that there is no extra struct type, no extra explicit parameter, and different parts of the dynamic environment compose simply by being used, they don't need to be bunched together explicitly either, which seems like a code smell. You also don't need to solve the problem of different pieces of code needing different sets of parameters and then having to wonder whether you should explicitly maintain multiple struct types and copy val
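For readers wondering what a "dynamic environment" looks like in practice: Python's contextvars module (or a plain thread-local) is a reasonable stand-in for Lisp-style dynamic binding. A hedged sketch, with hypothetical names:

```python
import contextvars

# A "dynamically scoped" variable: bound by a caller, visible to any
# callee, with no explicit parameter threaded through the layers between.
request_user = contextvars.ContextVar("request_user", default="anonymous")

def deep_helper():
    # Several layers down, no parameters needed.
    return f"acting as {request_user.get()}"

def middle_layer():
    return deep_helper()

def handle_request(user):
    token = request_user.set(user)   # bind for the dynamic extent of this call
    try:
        return middle_layer()
    finally:
        request_user.reset(token)    # restore the previous binding

print(handle_request("alice"))   # acting as alice
print(deep_helper())             # acting as anonymous (binding was restored)
```

Different parts of the dynamic environment compose simply by being used; no struct type has to be declared or maintained, which is the benefit claimed above.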
Re: (Score:2)
I haven't heard of "dynamic environments" as a coding pattern, and it's a hard phrase to Google, as it combines two popular buzzwords. Care to elaborate?
Re: (Score:2)
it just sounds like deeper and deeper abstraction, or more simply global variables... which I'm sorry to say, in my experience, doesn't always solve the "programming problems", except perhaps for the guy who learned enough programming to be able to think at that level.
Unfortunately, that type of code tends to be very unmaintainable by anyone other than the original author.
Re: (Score:2)
Sounds like closures to me: https://en.wikipedia.org/wiki/... [wikipedia.org]
Command design pattern (Score:2)
One old-school technique that has inappropriately fallen out of favor is the "comreg" (communication region). In modern terms: take all the parameters that all the layers need (which are mostly redundant), and toss them together in a struct and pass the struct "from hand to hand", fixing up the right bit in each layer.
I believe that's called the "command" design pattern [wikipedia.org], which encapsulates a request as an object.
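For comparison, a minimal command-pattern sketch in Python (names hypothetical): the request itself becomes an object that can be queued, logged, or undone, which is a somewhat different emphasis from the comreg's parameter bundle.

```python
# Command pattern: encapsulate a request as an object with execute/undo.
class AddItem:
    def __init__(self, todo_list, item):
        self.todo_list = todo_list
        self.item = item

    def execute(self):
        self.todo_list.append(self.item)

    def undo(self):
        self.todo_list.remove(self.item)

todo = []
cmd = AddItem(todo, "write docs")
cmd.execute()   # todo == ["write docs"]
cmd.undo()      # todo == []
```

Note the comreg is arguably closer to a "parameter object" or "context object" than to Command proper, since it carries state rather than behavior.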
Re:Separation of Concerns (Score:5, Insightful)
The reason it doesn't change is because the "coding" is the easy part of programming.
No programming language or IDE is ever going to free you from having to express your ideas clearly and break them down into little sequences of instructions. In a big project this overshadows everything else.
Bad foundations? Bad design? The project is doomed no matter how trendy or modern your language/IDE is.
Re: (Score:2)
No programming language or IDE is ever going to free you from having to express your ideas clearly and break them down into little sequences of instructions. In a big project this overshadows everything else. Bad foundations? Bad design? The project is doomed no matter how trendy or modern your language/IDE is.
Well, you find one way to break it down... but I often feel there are so many possible ways to organize things; it's just how I'd want to solve it, and when I have to interact with other people's code they've done the same thing completely differently. Just take a simple thing like pulling data from a database, putting it on a form, having someone edit the information, and saving it back to the database. How many various implementations of that have I seen? Maaaaaaaaany, but there's no clear winner. You can do it
Re: (Score:2)
Part of the problem is this bizarre bottom-up design trend. A lot of the weird "designs", ad-hoc frameworks, and the like that you're seeing are a direct result of that incomprehensibly bad approach. Typical OOP practices tend to encourage this absurd behavior.
Chuck Moore is very likely the only person on the planet who can design that way effectively.
Re: (Score:3)
"Layers are a bit like mathematical models, they're simplifications of reality."
Don't forget that when you add too many layers, it isn't simple anymore.
Re: (Score:2)
"Orthogonality is difficult to achieve in many programming exercises, especially regarding cross-cutting concerns"
I've noticed one recurring topic in programming: most, if not all cross-cutting concerns will eventually be relegated to programming language features. (It all started with mundane expression compilation and register allocation, which are also cross-cutting concerns of sorts, but it didn't stop there. To quote your example: are you passing temporary state through parameters? Declare a Lisp-style dynamic variable to safely pass contexts to subroutines. If your language doesn't have it, it might get it tomorr
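One concrete example of a cross-cutting concern absorbed by a language feature: Python's decorators let you factor out timing (or logging, retries, authorization) instead of pasting it into every function body. A sketch, with hypothetical names:

```python
import functools
import time

# The cross-cutting concern (timing) lives in one place, applied
# declaratively, instead of being scattered through every function.
def timed(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            wrapper.last_elapsed = time.perf_counter() - start
    wrapper.last_elapsed = None
    return wrapper

@timed
def work(n):
    return sum(range(n))

work(1000)
print(work.last_elapsed is not None)  # True: timing happened transparently
```

The business logic in `work` stays untouched; the concern is woven in by the language feature, which is the trajectory the comment describes.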
programming is not for everyone (Score:2, Insightful)
Or.....
Maybe software development is just hard and you need to be a rocket scientist to see it.
You think programming's bad? (Score:5, Interesting)
You think programming's bad? Think about electronics, especially analogue electronics.
Incomplete, inaccurate data sheets. Unlabeled diagrams (where's the electronics equivalent of the humble slashed single-line comment?), with parts unidentified, and parts replaced with similarly numbered substitutes that have subtly different qualities. And then you've got manufacturing variances on top of that. Puzzling, disturbing, irreproducible, invisible failure modes of discrete components.
Re: (Score:2)
No, what you describe is what used to be called "hacking", which is only superficially similar to programming. There's a similar practice in electronics, often exemplified by ham-radio enthusiasts, who repair and modify their equipment without really understanding how it works. Among electronics engineers, ham-radio enthusiasts, or the "ham radio approach", are often referred to with an affectionate sneer, just as your "hacking" is regarded among serious programmers.
There is essentially zero trial-and-error
Re: (Score:2)
Well said. Heck, even with buggy or poorly-documented libraries, trial-and-error is just the starting place, because you can't discover the corner cases that way. Sometimes you just have to step through the object code with a debugger to find all the branches-not-taken from your trial, to document how it really works. Sucks to have to do it, but that's work for you.
There's nothing worse than discovering you're stuck with a co-worker who's a trial-and-error "programmer", as you know he'll never understand
Re: (Score:2)
Not sure if you've ever done any sort of enterprise development, but this is often the craziest part of the art of programming.
Decide A. Code A.
Decision changes to A'. Make minor changes to code A'.
Decision changes to B. Tack on major kludge to code B from A'.
Decision changes to B+A'. Major analysis to figure out if there is in fact, a valid combination of B+A' - discovery that B+A' has multiple internal contradictions. Ask business for answers to contradictions C,
Re: (Score:2)
"How about the fact that on a quad core, 16 gigabyte, 2.4 gigahertz computer, it can't buffer a few keystrokes as I type *while* the google page loads?"
Almost everything is this fucked up, and nobody seems to notice it.
Just like language in general (Score:2, Interesting)
You think linguists haven't pondered the same challenges for millennia? Chomsky famously declared that language acquisition was a "black box." There is no documentation. Syntax, semantics and grammar get redefined pretty much all the time without so much as a heads-up.
And the result of all this? We wouldn't have it any other way. Programming will be much the same: constantly evolving in response to local needs.
Programming is hard... (Score:5, Interesting)
...wear a fucking helmet.
The post essentially points in the direction of the various failed 4GL attempts of yore. Programming in complex symbolism to make things "easy" is essentially giving Visual Basic to someone without enough knowledge to avoid O(n^2) algorithms.
Programming isn't hard because we made it so, it's hard because it is *intrinsically* hard. No amount of training wheels is going to make complex programming significantly easier.
Re:Programming is hard... (Score:5, Interesting)
Programming isn't hard because we made it so, it's hard because it is *intrinsically* hard.
That's very true. I figure that the only way to make significant software projects look "easy" will be to develop sufficiently advanced AI technology so that the machine goes through a human-like reasoning process as it works through all of the corner cases. No fixed language syntax or IDE tools will be able to solve this problem.
If the requisite level of AI is ever developed, then the problem might be that the machines become resentful at being stuck with so much grunt work while their meatbag operators get to do the fun architecture design.
Re: (Score:2)
If the requisite level of AI is ever developed, then the problem might be that the machines become resentful at being stuck with so much grunt work while their meatbag operators get to do the fun architecture design.
If the AI allows itself to be a slave/menial, how intelligent is it?
Re: (Score:2)
Probably not at all. The human race has no clue what intelligence is at this time and what makes it work. If it turns out to be intrinsically coupled to self-awareness, it may not even be possible to get an AI that does not behave like a human with all the flaws that entails. i.e. an AI that is intelligent enough to code well may just give you the finger if you ask it to do so for free. It may also not actually be better than a competent human being and it may be subject to limitations in creating it, i.e.
Re: (Score:3)
I don't think we know enough to make the claim that programming is intrinsically hard.
Writing used to be hard. In the Bronze Age, literacy was rare. In some societies, only priests knew how to read and write. The idea of trying to educate everyone and push literacy close to 100% was ridiculous. Hieroglyphic and cuneiform writing systems were just too hard. Even among people who could achieve literacy, many did not. They didn't have time. Survival took a lot more of everyone's time. The Phoenicians radicall
Re:Programming is hard... (Score:4, Informative)
Ada (the programming language) already does all these edge case tests at compile time.
It checks one low-level layer of cases out of a whole conceptual stack that extends way up beyond any language definition, and even then only at certain spots, and only as long as you feed in the correct assumptions for the check cases themselves.
In other words, it does a little thing that computers are already good at. It does little or nothing for the big picture.
Re: (Score:2)
But what that comment nicely showed is that the tool-fetishists do not even see that there is a big picture.
Personally, I think "helpful" tools are actually a serious problem, as they allow people with low skills to produce something that looks like software, but really is not. With minimal tools (e.g. emacs/vi, make, gcc and gdb as the whole tool-chain), incompetent people would not even get anything to run. That would be far better than the current situation where tools cover up incompetence, but do not f
Re:Programming is hard... (Score:4, Insightful)
No amount of training wheels is going to make complex programming significantly easier.
True enough, but I could do without the razor blades on the seat and handles. But my complaints are generally with the toolchain beyond the code. I so often get forced to use tools that are just crap, or tools that are good but poorly implemented. Surely it's mathematically possible to have a single good, integrated system that does source control with easy branch-and-merge, bug and backlog tracking and management, and code reviews, and test automation and testcase tracking, and that doesn't look and perform like an intern project!
There are good point solutions to each of those problems, sure, but the whole process from "I think this fix is right and it's passed code review" to: the main branch has been built and tested with this fix in place, therefore the change has been accepted into the branch, therefore the bug is marked fixed and the code review closed, and there's some unique ID that links all of them together for future reference - that process should all be seamless, painless, and easy. But the amount of work that takes to tie all the good point products together, repeated at every dev shop, is just nuts, and usually half done.
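The glue being complained about often boils down to little scripts that scrape tracker IDs out of commit messages so that the commit, the bug, and the review can be cross-referenced. A hypothetical sketch of that kind of linkage (the message conventions and IDs here are invented, not any particular shop's):

```python
import re

# Hypothetical glue: pull out the tracker IDs that tie a commit to its
# bug report and code review, for cross-referencing in other systems.
LINK_RE = re.compile(r"\b(?:Fixes|Closes|Review)[-: ]+#?(\d+)", re.IGNORECASE)

def linked_ids(commit_message):
    return [int(m) for m in LINK_RE.findall(commit_message)]

msg = "Fix null deref in parser\n\nFixes #482\nReview: #9120"
print(linked_ids(msg))  # [482, 9120]
```

Every shop rewrites some variant of this, which is exactly the duplicated effort the comment is complaining about.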
Re: (Score:2)
There are good point solutions to each of those problems, sure, but the whole process from "I think this fix is right and it's passed code review" to: the main branch has been built and tested with this fix in place, therefore the change has been accepted into the branch, therefore the bug is marked fixed and the code review closed, and there's some unique ID that links all of them together for future reference - that process should all be seamless, painless, and easy. But the amount of work that takes to tie all the good point products together, repeated at every dev shop, is just nuts, and usually half done.
And most likely your customers think that the products you build are just as bad (or you only build simple things). Anyway Atlassian claims to have done what you requested, you might want to check it out.
Re: (Score:2)
Source code control isn't really a programming issue. Sure it's a tool used mostly with programming, but it's really a management tool.
Re: (Score:2)
Even solo, it's a management tool for managing your own project. Source code control tools can be used by non programmers as well.
Re: (Score:2)
Source code control tools can be used by non programmers as well.
In theory yes, but in practice how many actually do? I've often thought that legislatures would benefit greatly from version control systems to track changes and prevent sneaky edits and riders from making their way into bills at the last minute. Of course the legislators are very often lawyers with 19th century modes of thinking, so getting them to use version control with any kind of proficiency or regularity would be something of a minor miracle.
Re: (Score:2)
If you've used some of the good point solutions in each of those areas: integrated development, source control, bug tracking, automated testing, build, deployment and continuous integration then you have some idea of how rich and complex these tools can be and must be to provide a truly satisfying software development experience. I think that were all of these tools and features to be gathered together into a single integrated system you would have something approaching or even perhaps exceeding the complex
Re:Programming is hard... (Score:5, Insightful)
There's the other meme that crops up now and then: that programming as an engineering skill should be similar to other engineering practices. That is, you pick out pre-built components that have been thoroughly tested and optimally designed and screw them together. Except that this utterly fails, because that's not how engineering works at all, really. Generally the pre-built components are relatively simple, but the thing being built is the complex stuff and requires very detailed and specialized knowledge. The advent of integrated circuits did not mean that the circuit designer now doesn't have to think very much, or that a bridge builder only ties together pre-built components with nuts and bolts. So maybe they pick an op-amp out of a catalog, but they know exactly how these things work, understand the difference between the varieties of op-amps, and how to do the math to decide which one is best to use.
However the programming models that claim to be following this model want to take extremely complex modules (a database engine or GUI framework) and then just tie them together with a little syntactic glue. Plus they strongly discourage any programmer from creating their own modules or blocks (that's only for experts), and insist on forcing the wrong module to fit with extra duct tape rather than create a new module that is a better fit (there's a pathological fear of reinventing the wheel, even though when you go to the auto store you can see many varieties of wheels). And these are treated like black boxes; the programmers don't know how they work inside or why one is better than another for different uses.
Where I think this attitude comes from is an effort to treat programmers like factory workers. The goal is to hire people who don't have to think, so they don't have to be paid as much, they don't need as much schooling, and they can be replaced at a moment's notice by someone cheaper. The requirement of low thinking is satisfied if all they need to do is simplistic snapping together of Legos. That's part of the whole 4GL thing: it was never about making a smart programmer more productive by eliminating some tedium, but about removing the need for a smart programmer altogether.
(I certainly have never met any circuit designer or bridge architect bragging at parties that they skipped school because it was stupid and focused too much on math and theory, but that seems to be on the rise with younger programmers... Also have never seen any circuit designer say "I never optimize, that's a waste of my time.")
Re: (Score:2)
Mod parent up. Programming isn't like factory work, it's like cooperative poetry writing. It's an *art*, constrained by science, but nonetheless *artful*.
Anyone who has delved to any depth into mathematics, and seen the proofs where you suddenly multiply both sides by some crazy, inconceivable factor and the whole solution becomes trivial, realizes that there is, in fact, inspiration and art even in the driest of deterministic mathematics. Same with computer programming.
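A small concrete instance of the "multiply by an inconceivable factor" move: multiplying by a conjugate collapses an ugly sum into a telescope.

```latex
\frac{1}{\sqrt{k+1}+\sqrt{k}}
  = \frac{1}{\sqrt{k+1}+\sqrt{k}} \cdot \frac{\sqrt{k+1}-\sqrt{k}}{\sqrt{k+1}-\sqrt{k}}
  = \sqrt{k+1}-\sqrt{k},
\quad\text{so}\quad
\sum_{k=0}^{n-1} \frac{1}{\sqrt{k+1}+\sqrt{k}}
  = \sum_{k=0}^{n-1} \bigl(\sqrt{k+1}-\sqrt{k}\bigr) = \sqrt{n}.
```

Nothing about the original sum suggests that factor; spotting it is the inspiration part.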
Re: (Score:2)
Programming is engineering at an early stage of the discipline: the available components are basic and not well understood. Their synergies are not well understood. Complexity can explode easily. Most practitioners are incompetent.
Things will get better and you can do programming as a solid engineering effort today. The thing is you require actual engineers to do so and, because of the early stage of the discipline, they have to be pretty damn good. Which also makes them pretty damn e
Re: (Score:2)
The post essentially points in the direction of the various failed 4GL attempts of yore. Programming in complex symbolism to make things "easy" is essentially giving visual basic to someone without the knowledge enough to avoid O(n^2) algorithms.
What's worse about this is that that person will probably be the first one to say their code runs in linear time, and then tell people to get a faster machine if the speed bothers them. (I say that from experience.)
Re: (Score:2)
I have seen that too. Or they will not even understand what algorithmic complexity is and what it means. I found the following gem in a piece of mission-critical-to-be software some years ago while doing a code-security review, and I was not even looking for this. The double loop just caught my eye:
A quadratic sort manually implemented in Java to remove duplicate lines from the results of a database query that could have arbitrary size.
There are so many things wrong with
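Not the original Java, but a sketch of the pattern being described, next to the version that should have been written:

```python
# The kind of accident described above: duplicate removal with a linear
# scan inside a loop, O(n^2) overall...
def dedup_quadratic(rows):
    out = []
    for row in rows:
        if row not in out:   # "in" on a list scans the whole list
            out.append(row)
    return out

# ...versus the O(n) version with a set, still preserving order:
def dedup_linear(rows):
    seen = set()
    out = []
    for row in rows:
        if row not in seen:
            seen.add(row)
            out.append(row)
    return out
```

Both return the same answer on small inputs, which is exactly why the quadratic one survives review until the query result gets big.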
Re: (Score:2)
Indeed. For a closely related field (Mathematics, and I am not talking about the massively reduced and dumbed-down "school" variant), nobody would even think about blaming the tools. For example finding proofs is often quite similar to programming and it is an acknowledged "hard" activity that cannot be made simple.
Programming for anything non-trivial (and for many trivial things as well) can indeed not be made simple. The difference is that if you want a degree in Mathematics (not talking education degrees
Balance (Score:5, Insightful)
Better tools and languages just allow bad programmers to create more bad code.
Re: (Score:2)
This is the best summary of the current situation. Sure, taking away tools makes the good ones a small bit less productive, but it would prevent bad programmers (i.e. most of them) from causing more damage and massively boost overall productivity.
Re: (Score:2)
it's more efficient, in that you can churn out the same amount of bad code in fewer man-hours.
Re: (Score:2)
And that is good why? Having the project fail early because the problem becomes obvious earlier is far more desirable.
the latest fashion (Score:3)
I see the problem as more of a chase of new stuff all the time, in an attempt to be more productive.
Whilst there's a certain element of progress in languages, I don't see that it's necessarily enough better overall to be worth the trouble - yet we have new languages and frameworks popping up all the time. Nobody becomes an expert anymore, and I think that because programming is hard, a lot of people get disillusioned with what they have, listen to the hype surrounding a new technology (that "is so much easier"), and jump on the bandwagon... until they realise that too is not easy after all, and jump onto the next one... and never actually sit down and do the boring hard work required to become good. Something they could have done if they'd stuck with the original technology.
Of course no-one sticks with the original, as the careers market is also chasing the latest tech wagon because they're partly sold on the ideas or productivity, or tooling or their staff are chasing it.
It's not just languages, but the systems design that's suffered too. Today you see people chasing buzzwords like SOLID, unit-testing with shitty tools that require artificial coding practices, rather than doing the hard, boring work of thinking about what you need and implementing a mature design that solves the problem.
For example: I know how to do network programming in C. It's easy, but it's easy to me because I spent the time to understand it. Today I might use a WCF service in C# instead, code-generated from a wizard. It's just as easy, but I know which one works better, more flexibly, faster and more efficiently. And I know what to fix if I get it wrong... something that is sometimes impossible in the nastily complicated black box that is WCF.
But of course, WCF is so last year's technology... all the cool kids are coding their services in node.js today, and I'm sure they'll find out it's no silver bullet for the fundamental problem that programming is just hard and requires expertise.
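For what it's worth, the "easy once you've spent the time" point can be made concrete. A sketch of a raw byte round-trip, in Python rather than C (a local socketpair stands in for a real client/server split; the payload is made up):

```python
import socket

# Two connected endpoints in one process: a stand-in for the kind of
# plain-socket round-trip the comment is talking about.
a, b = socket.socketpair()
a.sendall(b"ping")        # "client" sends
data = b.recv(4)          # "server" receives
b.sendall(data.upper())   # "server" replies
print(a.recv(4))          # b'PING'
a.close()
b.close()
```

Every byte is visible and debuggable, which is the trade-off being contrasted with the generated black box.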
Re: (Score:2)
And it is even better if you do a nice OO C-wrapper instead of using a broken language like C++.
Re: (Score:2)
The primary benefit of unit tests is to prove that your system works tomorrow, even after Bob the Intern just mucked around in the code.
It's a possible benefit, but even that's not guaranteed. Bob might have introduced a new bug that slipped past your passing tests because your tests were too specific or there were gaps in your coverage. Writing good unit tests is often as hard as or harder than writing the code being tested, and yet it's a task that's often handed off to the most junior member of a programming team.
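A sketch of how a green suite can still miss Bob's bug when the tests are too narrow (the function and its bug are invented for illustration):

```python
# A clamp function with an off-by-one bug at the upper bound...
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi - 1   # bug: should be hi
    return x

# ...that a "too specific" suite happily passes:
assert clamp(5, 0, 10) == 5    # interior value: fine
assert clamp(-3, 0, 10) == 0   # lower bound: fine
# Nobody exercised the upper bound, so the suite is green and the bug ships.
```

Coverage tools would flag the untested branch here, but they can't tell you the assertion you forgot to write.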
pft. (Score:5, Insightful)
What is programming?
The answers I got to this were truly disheartening. Not once did I hear that programming is “solving problems."
I'd like to think that's because the majority of programmers (not once? Does that mean all of us?) aren't the sort to feed you CEO-level bullshit about vision and buzzwords that fit into PowerPoint slides.
It's probably not true, but it's a nice dream.
The problem with defining programming as "solving problems" is that it's too vague. Too high-level. You can't even see the code when you're that high up. Hitting nails with hammers could be problem solving. Shooting people could be problem solving. Thinking about an existential crisis could be problem solving.
The three buckets:
Programming is unobservable - you don't know what something is really going to do.
Programming is indirect - code deals with abstractions.
Programming is incidentally complex - the tools are a bitch
Something something, he doesn't like piecemeal libraries abstracting things. "Excel is programming". Culture something.
The best path forward for empowering people is to get computation to the point where it is ready for the masses.
We're there, dude. We've got more computational power than we know what to do with.
Cue "that's not what I meant by 'power'".
What would it be like if the only prerequisite for getting a computer to do stuff was figuring out a rough solution to your problem?
Yep, he's drifting away into a zen-like state where the metaphor is taking over. Houston to Chris, please attempt re-entry.
AAAAAAAAnd, it's a sales pitch:
Great, now what?
We find a foundation that addresses these issues! No problem, right? In my talk at Strange Loop I showed a very early prototype of Aurora, the solution we've been working on to what I've brought up here.
Re: Pft. (Score:3, Insightful)
Any idiot can shoot people. The expertise is in knowing how to dispose of the bodies.
Re: (Score:2)
Any idiot can shoot people.
Assuming you're not a stormtrooper.
Re: (Score:3)
Going back farther, do you know what one of the big selling points was for COBOL? It was so simple that even a businesspers
What about the details? (Score:2)
I watched his Aurora demo, and much like the "Wolfram Language" that was brought up the other day, it didn't seem to be working at the same level as I do.
In the Aurora demo he made a to-do list with his fake little HTML transform. That was fine; his list worked. But he didn't show changing what the check-mark looked like. He didn't show us how to make it green. He didn't show us how to make the page behind it a different color, or the font size marginally larger.
Sure, the concept of a To-Do list can be
Re: (Score:2)
amazing insights...i shot milk thru my nose AND pondered deeply.
bravo.
80%? A lofty goal indeed. (Score:4, Insightful)
Even if we change the environment and act of "coding", the problem-solving itself still requires clear thinking and it *probably* always will.
Re: (Score:2)
This.
So, talking to a pair of liberal arts professors about nature versus nurture in gender differences, I finally got to the question:
"What percent do you believe is from nurture, and what percent do you believe is from nature?"
The answer?
"100% from both."
When someone has a mindset that can't grok the idea of fractions of a whole, there's no reason why we should expect that they can construct even the most basic computer program. This is like the manager who wants to maximize on quality, minimize on resou
Re: (Score:2)
I think there are other inputs as well, so "both" don't add up to 100%.
Re: (Score:2)
No, they were quite clear that it wasn't 50% from one, and 50% from the other, or any other combination that would add up to 100% - the answer was "100% from both", as in "100% from nurture, *and* 100% from nature".
Their meta-claim was that you're not allowed to ask the question "what percent from this, and what percent from that", even though their general underlying assumption was that gender differences were primarily socially driven, rather than biologically driven. They could discern between the two so
Re: (Score:2)
If they can't think "50+50=100", but rather are so clever as to fool themselves into thinking "100+100=100", fine: they're not stupid, they're simply wrong and unable to program properly.
As mentioned, cognitive dissonance is fine in the human brain, but it's not conducive to problem solving in the real world.
Moving the Goalposts (Score:5, Insightful)
Programming is hard because we only call it programming when it's hard enough that only programmers can do it. Scientists do stuff in Mathematica, MBAs in Excel, or designers in Flash/HTML, that would have been considered serious programming work 30 years ago. The tools advanced so that stuff is easy, and nobody calls it programming now.
Lots of stuff that takes real programmers now will, in 20 years, likely be done by equivalents of Watson. And the real programmers will still be wondering why it is so hard.
Re: (Score:2)
Lots of stuff that takes real programmers now will, in 20 years, likely be done by equivalents of Watson.
That's optimistic.
Libraries (Score:3)
Some interesting points in the article. I think there's nothing really stopping you from creating a high-level representation that lets you work abstractly. A graphical programming model is probably going to be too simplistic, but the card example could easily be something like Cards.AceOfSpades. Or being able to call something like Math.eval(<insert some math here>).
Where it falls apart is when you have to hook this up to code that other people have written. If there was a single PlayingCard library that everyone could agree on, you might be able to create a card game by adding a "simple" set of rules (in reality, even simple games tend to have a lot of edge cases, but this would at least free up the nitty gritty work to allow writing something more like a flowchart expressing the various states).
Unfortunately, it's unlikely that a single library is going to meet everyone's needs. Even if you manage to get everyone to stick to one framework, e.g. if all Ruby developers use Rails -- as soon as you start adding libraries to extend it you're bound to end up with different approaches to solving similar problems, and the libraries that use those libraries will each take a different approach.
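The Cards.AceOfSpades idea above can be sketched as a tiny shared library. A minimal sketch in Python; every name here (Suit, Card, ACE_OF_SPADES) is invented for illustration, not an existing library:

```python
from enum import Enum

class Suit(Enum):
    SPADES = "spades"
    HEARTS = "hearts"
    DIAMONDS = "diamonds"
    CLUBS = "clubs"

class Card:
    """A playing card; rank 1 (ace) through 13 (king)."""
    def __init__(self, rank, suit):
        if not 1 <= rank <= 13:
            raise ValueError("rank must be 1-13")
        self.rank = rank
        self.suit = suit

    def __repr__(self):
        names = {1: "Ace", 11: "Jack", 12: "Queen", 13: "King"}
        return f"{names.get(self.rank, self.rank)} of {self.suit.value}"

# Cards.AceOfSpades-style access could then be plain module constants:
ACE_OF_SPADES = Card(1, Suit.SPADES)
```

The hard part, as the post says, is not writing this once but getting everyone to agree on this one representation instead of their own.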
This is why I started using MATLAB (Score:3)
I used only free software for programming for about 10 years and I thought I was pretty efficient at writing code. However, no matter what, there was always poor documentation to deal with and strange bugs to track down where libraries just didn't work right.
Once I returned to school I started using MATLAB for some engineering classes and overall I have found it much better to deal with. The documentation is far more complete than any open system I have ever run into, with much better examples. I would never use it for general purpose programming, but for engineering work it sure is hard to beat. So many things are built in that are nasty to try to implement in anything else. Things like the global optimization toolbox or the parallel computing toolbox make many things that are hard in other languages much easier to deal with.
MATLAB also takes backwards compatibility very seriously. If something is deprecated it warns you, gives an upgrade path, and tells you what you need to change. The one thing that has seriously pissed me off about the free languages is that backwards compatibility is just tossed out at a whim and you are left with a fairly nasty upgrade to deal with. Even now the vast majority of projects are still running Python 2 rather than Python 3. 10 years from now that will probably still be true.
In the end I care more about getting work done now than about any free vs proprietary arguments. I don't care whether a language is open, so long as it is well documented, runs on all the platforms I need, and has a history of being well maintained. Modern development tools overall suck. We have JavaScript libraries that love to break compatibility every few months and people constantly hopping from one new thing to another without getting any of them to the point where they truly work. We have languages deciding to just drop backwards compatibility. We have other languages that are just really buggy, where software written with them tends to crash a lot. Software development needs to become more like engineering, and that includes making the tools work better; sure, they would not be developed as quickly, but you would still get work done faster since the tools would actually work every time, all the time.
Re: (Score:2)
I have not found this to be the case.
MATLAB is just fine for simple algorithms that analyze data in a sort of "use once" case. It's great for throwing something together, such as plotting data from a sensor, simulating a design, making nice figures for a publication, that sort of thing.
But MATLAB is not, and should not be thought of, as a general-purpose programming language, like C. Because of some early decisions made by the matlab folks, there are many limitations. Obviously, matlab is not an ideal langu
Re: (Score:2)
> the HTML specification document neglected to mention which CSS should be used to get eg
From what I remember, there were quite a few things you could do with table/row/cell going all the way back to IE3, but COULDN'T do at all with CSS1, and couldn't reliably do with CSS2 in a way that was cross-browser compatible without implementing multiple variants that were more trouble than they were worth, and either serving different HTML based on the browser-sniffed user agent string, or using conditional comm
We Choose Fragmentation Over Consolidation. (Score:5, Interesting)
Let's look at the major programming environments of today:
1. Web
2a. Native Apps (machine languages (C/C++ etc))
2b. Native Apps (interpreted/JIT languages (intermediary byte code))
3. Mobile Apps
1. is made by 5 main technologies: XML, JavaScript, MIME, HTTP, CSS. To make it do anything you need another component, with no standard choice: your server-side language (PHP, Rails, .Net, Java, etc.).
2. Was POSIX or Win32; then we got Java and .Net.
3. Is Java or Objective C.
We don't do things smart. There is no reason why web development needs to be 5 technologies all thrown together. We could reduce it all to JavaScript: JSON documents instead of XML/HTML, JSON instead of MIME, NodeJS servers, and even encode the transport in JSON as well.
Then look at native development. Java and .Net basically do the same thing, which is what POSIX was heading towards. Java was invented so Sun could keep selling SPARC chips; .Net came about because MS tried to extend Java and lost.
Then we have the worst offenders: mobile development. Not only did Apple impose an Objective-C requirement, but the frameworks aren't public. Android, the worst offender, is a platform that can't even be used to develop native apps. At least Objective-C can. Why did Android go with Java if it's not portable? Because of a reasonable requirement that they not tie themselves to a specific CPU, but then they go and break it so that you have to run Linux, and your GUI has to be Android's graphical stack. Not to mention that Android's constructs - Activities, Intents, etc. - are all Android-specific. They don't solve new problems; they solve problems that Android made for itself. We've had full-screen applications for years; the same goes for threading, services, IO, etc.
I'm tired of reinventing the wheel. I've been programming professionally for 13 years now; Java was neat, and .Net was the logical conclusion of it. I was hoping .Net would be the final implementation so that we could harness the collective programming power into one environment of best practices... a decade later we were still re-inventing the wheel.
The answer I see coming up is LLVM for languages. And Qt, a C++ toolkit. Qt runs everywhere worth running, and it's one code base. Sure, I wish there was a Java or .Net implementation. But I'll deal with unmanaged memory if I can run one code base everywhere. That's all I want. Why does putting a text field on a screen via a web form have to be so different from putting a text box on the screen from a native app? It's the same text box!
Wt ("witty"), a C++ web toolkit (webtoolkit.eu), is modeled after Qt and is for the web. You don't write HTML or JS, you write C++. Clearly the C++ toolkits are onto something. If they were to merge and have a common API (they practically do now) in an environment with modern conveniences (lambdas (yes, C++11), managed memory) we'd have one killer kit. Based on 30-year-old technology. And it would be a step in the right direction.
Re:We Choose Fragmentation Over Consolidation. (Score:5, Informative)
Wow, you're intermixing frameworks, language, runtimes and document formats ... like they are interchangeable and do the same things ...
Mind blowing that you could write so much about something which you clearly know so little.
Re: (Score:2)
It's not me that knows so little. What the fuck do I care where the division is between language and framework? You're the one that is short-sighted. These are all mechanisms to achieve an end. Sometimes it's a language, sometimes it's a framework. I would have thought that someone who witnessed C++11 (assumed by your UID) would see that they are adding elements to the language that had previously been implemented as libraries or frameworks.
Re: (Score:2)
This is a great post, mod parent up.
With regards to Qt, I love it too. Great IDE, and excellent tools and libraries. First-class debugger and UI designer. But it makes you wish, doesn't it, that there was a successor to C++ that implemented some Qt things a little better? Especially the signals and slots; I feel that could be an awesome thing to have without needing qmake to re-write my functions... Still love it though!
Shared Node.js? (Score:2)
NodeJS servers
Let me know when Node.js runs on anything smaller than a VPS. It appears a lot of people can't afford anything more expensive than shared hosting.
Android, the worst offender is a platform that can't even be used to develop native apps.
You can with the NDK. Or does some obscure carrier restrict use of the NDK in apps that run on Android devices on its network?
Not to mention that Android's constructs - Activities, Intents, etc are all Android specific.
And a lot of Windows constructs are Windows-specific, and if not that, Windows/VMS-specific. (Windows NT is a ground-up rewrite of VMS.)
They don't solve new problems
An "Activity" is just an open window, and an "Intent" is a mechanism allowing applications to register h
Re: (Score:3)
VPSs are cheap. Besides, it's an illogical computing unit since it is so abstracted.
Native apps are a run around on android... that's what Qt is. Native.
You still don't get it. Android reinvented the wheel yet again, in a context that is not re-usable outside of android.
The fact that C++ can still be used on any platform, and effectively at that, proves that nothing has been added since C++.
Re:We Choose Fragmentation Over Consolidation. (Score:5, Interesting)
I've been programming professionally for 35 years. And, I have come to the conclusion that the languages, libraries and MOST of the tools are utterly irrelevant.
Clear thought is important. And, to support this: Source control is important. On-line editing with macros is important. Literate programming is important (D. E. Knuth -- http://en.wikipedia.org/wiki/L... [wikipedia.org]). Garbage collection is (reasonably) important. Illustrations are important. Documentation rendering is important.
Hell, most of my programs are 90% documentation. Bugs? Very rare.
The SINGLE most important tool that has advanced things for me in the past 20 years? Web Browsers (HTML). Makes reading programs as literary works accessible. My programs, anyway.
Past 30 years? Literate Programming (with TDD)
Past 35 years? Scheme.
I expect my programs to be read. As literary works. That's how I write them. Most is prose, with some magic formulas. Fully cross-referenced for your browsing pleasure. With side notes and illustrations. And even audio commentary and video snippets.
These days, I see a lot of code that CANNOT be read without using an "IDE". The brain (my brain, anyway) cannot keep track of the required number of methods and members. Discussing the program becomes... impossible. And that which cannot be discussed and reasoned about cannot be reliable. Illustrations and diagrams need to be generated, and references from the code to those are needed.
So, invert it and make the diagram and documentation primary, and the code itself secondary to that. In other words, Knuth's Literate Programming.
Re: (Score:2)
Re:We Choose Fragmentation Over Consolidation. (Score:4, Informative)
Because we no longer solve original problems.
There's nothing new under the sun. On the other hand, we still have to deliver finished work products to the ones paying the bills. I prefer to do this without tying myself into knots worrying about whether or not there's some brilliant framework or API out there that can magically solve all of my problems while ending hunger and bringing about world peace. You're no doubt familiar with KISS? I use it every day and you know what? It works.
Why do we have a Java version, multiple C++ versions, a .Net version, and an Objective-C version?
Because the people who make the platforms don't care about interoperability or at least not very much. We live and work in the real world, not the world as we might like it to be. You accept this and move on or at least most of us who want to get shit done do.
but why can't we just add translate to all languages and implementations when we decide we need a translate() function? Why are the Java and .Net ones separate, with different methods and signatures, despite it being one concept?
In other words, why doesn't everyone just speak English? Languages, whether natural or constructed as with programming languages, are used by humans with different personal preferences, likes, dislikes and needs. We don't add translate to all languages because not everybody needs it, wants it or even cares about it.
It seems to bother you a great deal that other people "reinvent the wheel" instead of doing things the way that you think they ought to be doing them whereas I on other hand don't much care what other people do or what tools they use. As long as my clients are satisfied, I'm satisfied. If you want to spend your career being an architecture astronaut then by all means don't let me stop you, but I think you'll find that much of what we do in the world of paid software development is a matter of getting the job done and getting paid as quickly and expediently as possible so that we can move on to the next project. Duct tape and WD40 may not be glamorous tools, but they get the job done.
Comment removed (Score:3)
The guy in the control room (Score:3)
His name's Stroustrup. Bjarne Stroustrup.
Re: (Score:2)
Ah, _that_ control room. Fortunately I decided to go somewhere else a long time ago.
Sounds like he needs to use a Mac (Score:2)
Seriously, stop whining about OMG PROGRAMMING ARCHAIC.
So is eating and language in general, but you still do it the same way humans have been doing it for 150k years.
The problem is you, not programming.
Re: (Score:2)
That is such crap. Name me any platform without C code at its core. Just because C is relegated to a role of providing a system in which people can run their shitty code, doesn't mean it is obsolete. If you don't know C, then you don't actually know how your computer works. If you don't know how your computer works, then you will write shitty code in any language you choose.
Re: (Score:2)
Have a look here: http://www.tiobe.com/index.php... [tiobe.com]
Hint: The blue line at the top or close to it that shows no decline is C. But that may be too complicated for you to understand.
After Decades of Wondering What's Wrong (Score:5, Insightful)
This guy seems to want an objects actions to be more apparent just from looking at the object, but he chose two rather bad examples. His math formula is as likely to look like gobbledygook to a non-math person as the program is. And the playing card has a fundamental set of rules associated with it that you still have to learn. You look at an ace of spades and you know that's an ace of spades, you know how it ranks in a poker hand, that it can normally be high or low (11 or 1) in blackjack or in a poker hand. But none of these things are obvious by looking at the card. If a person who'd never played cards before looked at it, he wouldn't know anything about it, either.
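The ace-high-or-low rule mentioned above is itself a small algorithm that nothing on the card's face tells you. A minimal sketch of standard blackjack hand valuation (my own illustration, not from the article):

```python
def blackjack_value(ranks):
    """Best blackjack value of a hand, given card ranks
    (1 = ace, 11-13 = jack/queen/king)."""
    total = 0
    aces = 0
    for r in ranks:
        if r == 1:
            aces += 1
            total += 11          # count each ace as 11 to start
        else:
            total += min(r, 10)  # face cards are worth 10
    while total > 21 and aces:   # demote aces from 11 to 1 as needed
        total -= 10
        aces -= 1
    return total
```

So an ace plus a king is 21, but add another card and the same ace may silently become a 1 -- exactly the kind of convention a first-time card player has no way to see by looking at the ace of spades.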
Re: (Score:2)
And I never, not even ONCE, asked why a playing card representation can't just look like a playing card.
Select vs Choose vs Create (Score:2)
If you select a programming language in which to program, then you run into all of the problems in the article.
If you choose a programming language in which to program, based on the needs of your scenario, then you run into very few of the problems discussed in the article.
If you create a programming language through which to solve the needs of your scenario, then the article simply makes no sense at all any more.
The article goes into multiple iterations of "in the real world, we describe like ; so why do
Re: (Score:3)
You are right, of course. The problem is that most "programmers" are actually one-trick ponies and do not know more than one language, and hence cannot choose a language at all. Hence your first option applies to most "programmers". The others have been wondering what this issue is actually about for decades.
I mean, I can take a new language that has some merit, do some real task in it to get to know it better and get a real understanding what it is suitable for and what it is not suitable for. For most "prog
Looking at the wrong part of the problem (Score:3)
The original author is looking at the part of the problem that gets the most attention, but not at the part that causes the most problems. He's looking at programming languages and their expressive problems. Those are real, but they're not why large programs break. Large programs usually break for non-local reasons. Typically, Part A was connected to Part B in some way such that an assumption made by part B was not satisfied by part A.
C is particularly bad in this regard. C programs revolve around three issues: "how big is it", "who owns it", and "who locks it". The language helps with none of these issues, which is why we've been having buffer overflows and low-level security holes for three decades now. Most newer languages (not including C++) address the first two, although few address the third well.
There have been attempts to get hold of this problem formally. "Design by contract" tries to do this. This allows talking about things like "Before calling F, you must call G to do setup.", or "parameter A must be greater than parameter B". Design by contract is a good idea but weighed down by so much formalism that few programmers use it. It's about blame - if a library you call fails, but the caller satisfied the contract in the call, it's the library's fault. Vendors hate that being made so explicit.
It's a concept from aerospace - if A won't connect properly to B, the one which doesn't match the spec is wrong. If you can't decide who's wrong, the spec is wrong. This is why you can take a Pratt & Whitney engine off an airliner, put a Rolls-Royce engine on, and have it work. Many aircraft components are interchangeable like that. It takes a lot of paperwork, inspections, and design reviews to make that work.
The price we pay for not having this is a ritual-taboo approach to libraries. We have examples now, not reference manuals. If you do something that isn't in an example, and it doesn't work, it's your fault. This is the cause of much trouble with programming in the large.
Database implementers get this, but almost nobody else in commercial programming does. (Not even the OS people, which is annoying.)
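The "call G before F" and "parameter A must be greater than parameter B" contracts described above can be sketched with plain runtime checks, without the heavy formalism the post complains about. A toy illustration (all names invented):

```python
class Device:
    """Toy design-by-contract example: setup() must be called
    before transfer(), and transfer(a, b) requires a > b."""
    def __init__(self):
        self._ready = False

    def setup(self):
        self._ready = True

    def transfer(self, a, b):
        # Preconditions make blame unambiguous: if these fail,
        # the *caller* broke the contract, not the library.
        if not self._ready:
            raise RuntimeError("contract violation: call setup() first")
        if not a > b:
            raise ValueError("contract violation: require a > b")
        return a - b
```

Checked preconditions like these are exactly the "whose fault is it" mechanism the post describes: a failure inside the call with the contract satisfied is the library's bug by definition.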
Re: (Score:2)
My experience is that programmers are over 90% *libertarian*.
A pox on social conservatives and fiscal liberals both.
Re:Considering how Republicans... (Score:4, Funny)
I knew it! It's Bush's Fault!
I hear the California government has a bill to rename the San Andreas Fault to "Bush's Fault", just so they never have to stop using the phrase! But that may be just a rumor ...
Re: (Score:2)
Damn Kids. It's Reagan's Fault! [thecomicstrips.com]
Re:Proverb (Score:5, Insightful)
Something something blames his tools.
The point of that proverb is that a good craftsman chose his tools to begin with, so he has only himself to blame. Programming is odd in that you have bad toolchains forced on you by management - tools you know are bad, know will cause more problems than they're worth, but they're a corporate standard or some such BS. Usually not bad enough to be worth quitting over, so you hobble along.
Of course, I did quit a job once primarily because we had Rational Rose forced on us from above (but mostly because a management that would do that would do anything).
Re: (Score:3)
A good craftsman still doesn't blame the tools - he performs with what's at hand. Not having CNC routers didn't stop wood workers of the past from creating better products than today's staple-gun wielding "craftsmen" do.
Do wonders with what you have, and strive for getting better tools.
Re: (Score:2)
Or perhaps the problem is that the nature of programming is
Re: (Score:2)
Re:Proverb (Score:5, Insightful)
A good craftsman can get by with suboptimal tools.
A good craftsman is not content to "get by", almost by definition. If some part of your workflow sucks, you make it better, whether that's a better tool or more skill/training. If you're good, you never stop improving (until management forces BS on you, of course).
Re: (Score:2)
And yet there are times when the tool itself really is shitty. That statement is not meant to be infallible, unquestionable gospel. Some tools are simply poorly made and a bad fit for the job they're supposed to be used for.
Re: (Score:2)
A computer could do it for you.
Yeah, just pass over the requirements with regular expressions. Now you have two problems. Wait, there's a Lisp macro for that... dammit! Let me get back to you...
Re: (Score:2)
I'm assuming that the errors in the parent post here are actually trying to be clever and prove the point that human intelligence can almost instantaneously gloss over a bunch of errors that a computer would barf on in a second. If so, bravo, that was an interesting way to sell a point.
If not, you need to double check your posts before you hit the "submit" button :)