
What Are the Genuinely Useful Ideas In Programming?

Posted by Unknown Lamer
from the lattices-are-pretty-cool dept.
Hugh Pickens DOT Com writes "Computer Scientist Daniel Lemire has had an interesting discussion going on at his site about the ideas in software that are universally recognized as useful. 'Let me put it this way: if you were to meet a master of software programming, what are you absolutely sure he will recommend to a kid who wants to become a programmer?' Lemire's list currently includes structured programming; Unix and its corresponding philosophy; database transactions; the 'relational database;' the graphical user interface; software testing; the most basic data structures (the heap, the hash table, and trees) and a handful of basic algorithms such as quicksort; public-key encryption and cryptographic hashing; high-level programming and typing; and version control. 'Maybe you feel that functional and object-oriented programming are essential. Maybe you think that I should include complexity analysis, JavaScript, XML, or garbage collection. One can have endless debates but I am trying to narrow it down to an uncontroversial list.' Inspired by Lemire, Philip Reames has come up with his own list of 'Things every practicing software engineer should aim to know.'"
This discussion has been archived. No new comments can be posted.


  • by Laxori666 (748529) on Tuesday October 08, 2013 @12:09AM (#45066499) Homepage
    By definition, most programmers are not masters of software programming. So why is Daniel trying to compile a list that everybody will agree with? That would be a list of what every non-master programmer agrees a master programmer should know, which is different than a list of what a programmer should know to be a master programmer...

    As for my approach, it would be to list those qualities which would make learning JavaScript, XML, relational databases, etc., easy enough to do - by which I mean, those qualities which would allow a programmer to teach himself these things, to the master level if his tasks require it. A master programmer doesn't have to know Objective-C or JavaScript, but he sure as heck better be able to learn how to effectively use them if he needs to.
  • by jgotts (2785) <jgotts&gmail,com> on Tuesday October 08, 2013 @12:14AM (#45066519)

    Forget about having to learn any specific language or environment. You should be able to pick up any language or environment on the job.

    You need to learn how to plan, estimate how long that plan will take to complete, and finish it on time. Very few programmers I've worked with are any good at estimating how much time they will take to complete anything. The worst offenders take double the amount of time they say they will.

    Forget about specific computer science trivia. You can look that all up, and it's all available in libraries with various licenses. When you're starting a new job, refresh yourself on how that problem is already being solved. If you need a refresher on a specific computer science concept, take some time and do so.

    With this advice you won't burn out at age 25.

  • XML? Really? (Score:2, Insightful)

    by Anonymous Coward on Tuesday October 08, 2013 @12:16AM (#45066531)

    What is this, the 90s? In a world with JSON and YAML, why should we bother learning XML for anything other than legacy systems?

  • by Crash McBang (551190) on Tuesday October 08, 2013 @12:16AM (#45066535)
    People don't know what they want till they see it. Prototype early, prototype often.
  • by TopSpin (753) on Tuesday October 08, 2013 @12:18AM (#45066539) Journal

    the heap, the hash table, and trees

    There is nothing basic about these. Each is the subject of on-going research and implementations range from simplistic and poor to fabulously sophisticated.

    An important basic data structure? Try a stack.

    Yes, a stack. The majority of supposed graduates of whatever programming-related education you care to cite are basically ignorant of the importance and qualities of a stack. Some of the simplest processors implement exactly one data structure, a stack, from which all other abstractions must be derived. A stack is what you resort to when you can't piss away cycles and space on "better" data structures. Yet that feature pervades essentially all ISAs, from the 4004 to every one of our contemporary billion-transistor CPUs.
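    The point generalizes beyond assembly; a minimal Python sketch (names invented here) showing that push and pop alone are enough to evaluate nested arithmetic, which is essentially what a hardware stack does for call frames:

```python
class Stack:
    """The two essential operations: push and pop."""
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()

def eval_rpn(tokens):
    """Evaluate a postfix (RPN) expression using only a stack."""
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    s = Stack()
    for tok in tokens:
        if tok in ops:
            b, a = s.pop(), s.pop()   # operands come off in reverse order
            s.push(ops[tok](a, b))
        else:
            s.push(int(tok))
    return s.pop()

# (3 + 4) * 2, written postfix:
eval_rpn("3 4 + 2 *".split())  # -> 14
```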

  • The Closure (Score:5, Insightful)

    by gillbates (106458) on Tuesday October 08, 2013 @12:23AM (#45066571) Homepage Journal

    The most useful concept I've ever come across is the notion of a closure in Lisp. The entire operating state of a function is contained within that function. This, and McCarthy's Lisp paper (1960), which explains how a Lisp interpreter could be created using only a handful of assembly instructions, are well worth the read. It is from the fundamental concepts first pioneered in Lisp that all object-oriented programming paradigms spring; if you can understand and appreciate Lisp, the notions of encapsulation, data hiding, abstraction, and privacy will become second nature to you.
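    The same idea survives outside Lisp; a small Python sketch (illustrative names) of a closure carrying its entire operating state with it:

```python
def make_counter(start=0):
    """Return a function whose operating state (count) travels with it."""
    count = start

    def counter():
        nonlocal count
        count += 1
        return count

    return counter

c1 = make_counter()
c2 = make_counter(100)
c1()  # -> 1
c1()  # -> 2
c2()  # -> 101; c1 and c2 each carry their own private state
```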

    Furthermore, if you actually put forth the time to learn lisp, two things will become immediately apparent:

    1. A language's usefulness is more a matter of the abstractions it supports than the particular libraries available, and
    2. Great ideas are much more powerful than the language used to express them.

    In Stroustrup's "The C++ Programming Language" there are numerous examples of concise, elegant code. These spring from the concept of deferring the details until they can be deferred no more - the top-down approach results in code which is easily understood, elegant, efficient, robust, and maintainable.

    Many years ago, a poster commented that the work necessary to complete a particular project was the equivalent of writing a compiler; he was trying to emphasize just how broken and unmaintainable the code was. The irony in his statement is that most professional projects are far more complex than a compiler needs to be; because he didn't understand how compilers worked, he thought of them as necessarily complex. However, the operation of a compiler is actually quite simple to someone who understands it; the McCarthy paper shows how high-level code constructs can be easily broken down into lower-level machine language instructions, and Knuth implements a MIX interpreter in a few pages in "The Art of Computer Programming." Neither building a compiler nor building an interpreter is a monumental undertaking if you understand the principles of parsing and code structure - i.e., what it means for something to be an operator versus, say, an identifier.
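    That operator-versus-identifier distinction is exactly what the first stage of a compiler encodes; a toy Python lexer (the token set here is invented for illustration):

```python
import re

# A toy token set, invented for illustration.
TOKEN_SPEC = [
    ("NUMBER",     r"\d+"),
    ("IDENTIFIER", r"[A-Za-z_]\w*"),
    ("OPERATOR",   r"[+\-*/=]"),
    ("SKIP",       r"\s+"),
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    """Classify each lexeme: operator, identifier, or number."""
    return [(m.lastgroup, m.group())
            for m in TOKEN_RE.finditer(source)
            if m.lastgroup != "SKIP"]

tokenize("x = y + 42")
# -> [('IDENTIFIER', 'x'), ('OPERATOR', '='), ('IDENTIFIER', 'y'),
#     ('OPERATOR', '+'), ('NUMBER', '42')]
```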

    Ideas are powerful; the details, temporarily useful. Learn the ideas.

  • Code Comments (Score:4, Insightful)

    by gimmeataco (2769727) on Tuesday October 08, 2013 @12:25AM (#45066575)
    What about code comments? I hated writing them at first, but when you're working with something big or revisiting code after a long period of time, they're invaluable.
  • KISS principle (Score:2, Insightful)

    by Anonymous Coward on Tuesday October 08, 2013 @12:27AM (#45066581)

    As simple as possible to accomplish the task correctly, and no simpler. That and find a career that hasn't been decimated in the last 5 years.

    Wait that is what I would tell a kid. An expert would probably preach the methodology he has the most vested interest in.

  • Input validation (Score:5, Insightful)

    by KevMar (471257) on Tuesday October 08, 2013 @12:34AM (#45066619) Homepage Journal

    I think he was missing input validation from his list. The idea that you can never trust user input and must validate it. The idea that you should whitelist what you want instead of blacklisting the things you don't want. Ideas that consider the security of the system and not just its working condition.
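    A minimal sketch of the whitelist idea in Python, using an assumed (hypothetical) username policy - state what is allowed and reject everything else:

```python
import re

# Hypothetical policy: 3-16 characters, lowercase letter first, then
# lowercase letters, digits, or underscores. Whitelist what IS allowed.
USERNAME_RE = re.compile(r"[a-z][a-z0-9_]{2,15}")

def validate_username(raw):
    """Accept only input matching the whitelist; reject everything else."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("username rejected: not in the allowed form")
    return raw

validate_username("alice_99")                      # accepted
# validate_username("Robert'); DROP TABLE--")      # raises ValueError
```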

  • by Z00L00K (682162) on Tuesday October 08, 2013 @12:34AM (#45066623) Homepage

    I would say that one of the most important things in programming is to break down a problem into parts that are useful and easy to manage. It doesn't matter which language you code in. It's very much like building with Lego - you have more use for all those 4x2 bricks than any other brick. The humongous bricks are "use once". A right-sized brick can be copied and pasted into future code as well, possibly tweaked a bit to suit the new environment. In the process of breaking down a problem, define interfaces. Design the important interfaces to make sure that they can remain stable over time. That can make maintenance easier.

    The second most important thing is to learn what compiler warnings mean and how to fix them. In this case strong typing isn't your enemy - it's your friend, since it will tell you about problems before they bite you when executing the code.

    Third is to learn which known algorithms are out in the wild so you don't have to invent them yourself. Quicksort has already been implemented a thousand times, so there's no need to implement it again - just find the library you need. If you are developing a commercial app, start with the Apache projects, since that license is permissive about how the libraries may be incorporated. The LGPL is also good. But leave the license headaches to someone else to sort out if you aren't sure.

    These are the useful ideas I try to follow, the rest is just a mixture of ingredients and seasoning to get something running.

    Remember: You can build a great machine with simple stuff or a stupid machine with expensive stuff.

  • by tlhIngan (30335) <<ten.frow> <ta> <todhsals>> on Tuesday October 08, 2013 @12:34AM (#45066625)

    Forget about specific computer science trivia. You can look that all up, and it's all available in libraries with various licenses. When you're starting a new job, refresh yourself on how that problem is already being solved. If you need a refresher on a specific computer science concept, take some time and do so.

    Well, it's helpful to have a basic understanding of Big-O and of the complexity of common algorithms. It's also worthwhile to know when it really doesn't matter - using bubble sort is bad, but if you know you're only ever sorting 10 items, it'll work in a pinch.

    Knowing this can have an effect on which algorithms you choose and even how you architect the system - perhaps the data you're dealing with will cause quicksort to hit its worst-case behavior far more often than you'd expect.
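    That worst-case behavior is easy to provoke; a Python sketch with a deliberately naive first-element pivot, which degrades to quadratic comparisons on already-sorted input:

```python
import random

def quicksort(xs, stats):
    """Quicksort with a deliberately naive first-element pivot.
    stats["cmp"] accumulates the number of comparisons made."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    stats["cmp"] += len(rest)
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, stats) + [pivot] + quicksort(right, stats)

data = list(range(200))
shuffled = data[:]
random.shuffle(shuffled)

rand_stats, sorted_stats = {"cmp": 0}, {"cmp": 0}
quicksort(shuffled, rand_stats)
quicksort(data, sorted_stats)  # already-sorted input: the worst case
# Sorted input costs exactly n*(n-1)/2 = 19900 comparisons here;
# shuffled input typically costs on the order of n*log(n).
```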

    And you laugh, but I've seen commercial software falter because of poor choices of algorithms - where doing a 1 second capture is perfectly fine, but a 10 second capture causes it to bog down and be 100 times slower.

    Or the time when adding a line to a text box got quadratically slower, because each added line triggered a memory allocation and a memory copy.

    Next, understand OS fundamentals and how the computer really works. Things like how virtual memory and page files operate, how the heap works and synchronization and even lock-free algorithms and memory barriers. It's difficult to learn on your own - it takes a lot of time to sit down and really understand how it works and why it works, and even then it can take 3-4 different methods of explanation until it clicks.

    Concurrent programming isn't hard especially if concurrency was taken into account when the system was designed. Adding concurrency to a non-concurrent system though is a huge, difficult and trouble-prone process. Especially once bit-rot has set in and you find 10 different ways of getting at the variable.

  • by Anrego (830717) * on Tuesday October 08, 2013 @12:48AM (#45066685)

    But never make the prototype too good.

    "You need 12 weeks to turn it into actual software? this works fine!"

  • Left-corner design (Score:5, Insightful)

    by steveha (103154) on Tuesday October 08, 2013 @12:51AM (#45066697) Homepage

    The most important book I read as a beginning software developer was Software Tools in Pascal [google.com]. That book teaches a technique it calls "left-corner design". It's kind of a rule-of-thumb for how to do agile development informally.

    The basic idea: pick some part of the task that is both basic and essential, and implement that. Get it working, and test it to make sure it works. Now, pick another part of the task, and implement as above; continue iterating until you either have met all the specs or are out of time.

    If you meet all the specs, great! If you are out of time, you at least have something working. The book says something like "80% of the problem solved now is usually better than 100% of the problem solved later."

    For example, if you are tasked with writing a sorting program, first make it sort using some sort of sane default (such as simply sorting by code points). Next add options (for example, being able to specify a locale sort order, case-insensitive sorting, removing duplicate lines, pulling from multiple input files, etc.). A really full-featured sorting program will have lots of options, but even a trivial one that just sorts a single way is better than nothing.
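    The iterations the book describes might look like this in Python (a toy sketch, not the book's code): get the essential core working first, then layer options onto it.

```python
# Iteration 1: the essential core - sort lines by code point.
def sort_lines(lines):
    return sorted(lines)

# Iteration 2: layer one more slice of the spec onto the working core -
# optional case-insensitive sorting and duplicate removal.
def sort_lines_v2(lines, ignore_case=False, unique=False):
    key = str.lower if ignore_case else None
    result = sorted(lines, key=key)
    if unique:
        seen, deduped = set(), []
        for line in result:
            k = key(line) if key else line
            if k not in seen:
                seen.add(k)
                deduped.append(line)
        result = deduped
    return result
```

    If time runs out after iteration 1, you still have a working sorter.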

    Also, the book recommends releasing early and often. If you have "customers" you let them look at it as early as possible; their feedback may warn you that your design has fatal flaws, or they may suggest features that nobody had thought of when imagining how the software should work. I have experienced this, and it's really cool when you get into a feedback loop with your customer(s), and the software just gets better and better.

    Way back in high school, I tried to write a program to solve a physics problem. I hadn't heard of "left-corner design" and I didn't independently invent it. I spent a lot of time working on inessential features, and when I ran out of time I didn't have a program that did anything really useful.

    This is the one thing I would most wish to tell a new software developer. Left-corner design.

    P.S. Software Tools in Pascal is a rewrite of an older book, Software Tools [google.com], where the programs were written in a language called RATFOR [wikipedia.org]. Later I found a copy of Software Tools and found it interesting what things were easier to write in Pascal vs. what things were easier in RATFOR... and when I thought about it I realized that everything was just easier in C. C really is the king of the "Third-Generation" languages.

  • by Anrego (830717) * on Tuesday October 08, 2013 @12:51AM (#45066699)

    I don't know if that's really good advice anymore.

    I mean if it's an interest, sure. Personally I love that stuff.. but unless you plan on doing low level coding for a career, modern programming is so abstract from machine language that knowing what's going on down there is in most cases interesting trivia at best.

  • Re:databases (Score:5, Insightful)

    by dgatwood (11270) on Tuesday October 08, 2013 @01:06AM (#45066741) Journal

    Well he's already failed. Databases are a niche topic that doesn't belong in an "uncontroversial" list of things that every software engineer needs to know.

    When it was part of our CS required curriculum, I suspected I would never use it, but it turns out that the vast majority of projects I've been involved with have used databases in some way, and one of them even involved some pretty serious database query optimization. As far as I can tell, unless you pretty much code exclusively down at the kernel level, you're going to eventually be asked to work on some project involving databases. They're the glue that holds technology together. Outside of a handful of niche fields, I'd be surprised if any programmer managed to go more than five years out of school without having to work with one.

    Also, once you understand databases conceptually, everything starts to look like a special case of a database. This is a good thing. C data structures? Table records. Pointers? Relations. And so on. It ends up helping you understand complex problems even if you're one of those rare people who never ends up touching an actual database.

  • by Rinikusu (28164) on Tuesday October 08, 2013 @01:07AM (#45066747)

    1) Learn how to have fun. Even when you're mired knee-deep in a gigantic pile of horseshit that is a 10-15 year old VBA/Access/Excel monstrosity written by a half dozen people and commented occasionally in non-english languages, if you can't find a way to enjoy the challenge, your career will be short-lived and miserable.

    2) Work on things that interest you. When you invariably get the point to where you think "I wonder if there's an easier way to do this", google it. Chances are, you're right. With any luck, you can avoid #1 above if you really work at it. I don't think anyone ever woke up in the morning thinking "Fuck this fun shit, I want to be a Programmer III at some .gov contractor", but you never know. I happen to like maintaining and bug-fixing code more than I do architecting full solutions, but I'll accept that I'm an odd bug.

    I mean, I'm reading some of these comments and thinking "yeah, if you pushed Knuth or SICP on me when I was 10, you'd have killed any interest I had in computers." Instead, I was POKEing away on my C64, etc. If anything, figuring out how to solve logic puzzles, breaking down problems, etc., was much more fun for me in the 5th and 6th grade than reading some of the current compsci literature that I *still* require significant motivation to get through. I'm not saying it's not important, I'm just saying that a middle schooler (a kid) may not be all that willing to put in that kind of work, and waiting until college wouldn't be so bad. You know, going back to the comment thing, a lot of these comments sound like the anti-jock jocks. I remember kids whose parents were forcing them to play sports, and everything revolved around those kids playing sports, but the kids themselves weren't having any fun at all. Now we have nerds acting like jock parents, treating their kids the same way. It can't be healthy.

  • by bzipitidoo (647217) <bzipitidoo@yahoo.com> on Tuesday October 08, 2013 @01:16AM (#45066785) Journal

    Estimates? How long does it take to solve a maze? Take all the correct turns, and you'll be done in a few minutes. One wrong turn, and it could take hours. How do you estimate that?

    A big reason why estimates tend to be low is the tendency to overlook all the little problems that shouldn't happen but do. It's not just that libraries have bugs too. Systems sometimes go down. Networks can corrupt data. I could never get any programming or system administration work done quickly, because I'd always run into 3 or 4 other things that I had to fix or work around first. A hard drive crash is when you find out that the DVD drive which was fine for light usage overheats during an install, that the updated OS breaks part of the software, and that it was only inertia keeping the server on the misconfigured network: once it was powered down, another server grabbed its IP address, and so on. I once had to work around the libpng site being blocked for "inappropriate content" by the latest update of the company's web monitoring software. But those are relatively trivial problems that don't blow estimates by orders of magnitude.

    Your advice is fine for hacks who need to grind out simple business logic, or glue a few well tested and thoroughly traveled library functions together, and who don't have to think much about performance or memory constraints. There's very little uncertainty in that kind of work. But when you're trying to do new things, trying out new algorithms, and you have no idea whether they will even work, let alone be fast enough, you're back in the maze. We could have got astronauts to the moon 5 years sooner if we knew beforehand which directions were blind alleys.

  • by Anonymous Coward on Tuesday October 08, 2013 @01:19AM (#45066811)

    Practice makes perfect.

    It always amazes me how I can go back to a language I knew so well 5 years ago, yet I make mistakes you'd see from a first year CS student.

    Nah, practice makes better... nobody's perfect.

  • by H0p313ss (811249) on Tuesday October 08, 2013 @01:38AM (#45066887)

    Regular expressions are the most useful thing I learned in the last 5 years.

    Regular expressions are incredibly powerful and useful, if you know how to use them and how to not abuse them.

    Much like welding torches.
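    A small example of the use-without-abuse point: a regex applied to a line format you control (the log format here is assumed), rather than to something a regex can't really parse:

```python
import re

# An assumed, well-specified log-line format - exactly the kind of
# constrained input a regex is good at.
LOG_LINE = re.compile(r"(?P<level>INFO|WARN|ERROR) (?P<msg>.*)")

def parse_log_line(line):
    """Return (level, message), or None if the line doesn't conform."""
    m = LOG_LINE.fullmatch(line)
    return (m.group("level"), m.group("msg")) if m else None

parse_log_line("ERROR disk full")  # -> ('ERROR', 'disk full')
parse_log_line("garbage")          # -> None
```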

  • by Jane Q. Public (1010737) on Tuesday October 08, 2013 @01:38AM (#45066891)

    "Quicksort is already implemented a thousand times, so there's no need to implement it again, just find which library you need."

    Yes, that's true, but we're talking about education here, not building websites.

    If you're a coder, and you don't know how to BUILD a hash table from genuinely fundamental, low-level components, or if you can't do a quicksort from those same fundamental building blocks, guess what? I won't hire you.

    It's great to be able to buy or borrow a used V8, but if you don't know how to build one, you're not going to be my mechanic.
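    For what it's worth, the kind of from-scratch build being asked for fits in a page; a minimal separate-chaining hash table in Python (no resizing, for brevity):

```python
class HashTable:
    """A hash table from first principles: an array of buckets plus a
    hash function mapping each key to a bucket index (separate
    chaining; resizing omitted for brevity)."""

    def __init__(self, nbuckets=8):
        self._buckets = [[] for _ in range(nbuckets)]

    def _bucket(self, key):
        return self._buckets[hash(key) % len(self._buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:              # overwrite an existing key
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)
```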

  • by dgatwood (11270) on Tuesday October 08, 2013 @01:38AM (#45066897) Journal

    IMO, a function should be as long as it needs to be, and no shorter or longer. If the most easily understood way to express a concept is as a 5,000 line function, then you should write a 5,000 line function. Splitting up a function based on some arbitrary length limitation can only lead to less readable code.

    For example, my record is almost 5,500 lines. The entire function is basically a giant switch statement in a parser (post-tokenization). The only way you could make that function significantly shorter would be to shove each case into a function, and all that would do is make it harder to follow the program flow through the function for no good reason. At any given moment, you're still going to be staring at exactly one of those cases per token (plus a little code on either end), so having each case in a separate function just adds execution overhead without improving readability, and it makes debugging harder because you now have to pass piles of flags around everywhere instead of just using a local variable.

    One of the data structures for the function in question is almost 1200 lines long by itself (including anywhere from two to fifteen lines of explanation per field, because I wanted to make sure this code is actually maintainable). By itself, the initializer for that data structure cannot meet your "fits on one screen" rule, even with most of the fields auto-initialized to empty. And there's no good way to shrink that data structure. It is a state object for a very complex state machine. The code to interpret the end state is over a thousand lines of code by itself.

    In short, those sorts of rules simply don't make sense once the problem you're trying to solve becomes sufficiently complex. They're useful as guidelines for people who don't know how to write code yet—to help them avoid making obscenely complex functions when the functionality is reasonably separable into smaller, interestingly reusable components, to keep themselves from shooting themselves in the foot by repeating code where they should call a shared function, and so on. However, IMO, if you're still thinking about rules by a few years out of school, they're probably doing you more harm than good, causing you to write code with unnecessary layers of abstraction for the sake of abstraction.

  • by Anonymous Coward on Tuesday October 08, 2013 @01:53AM (#45066977)
    Find me a mechanic that can build a V8. They don't have the metallurgy, the physics or the engineering to do it. That was a ridiculous example, and I wouldn't want to work for you.
  • by ATMAvatar (648864) on Tuesday October 08, 2013 @02:01AM (#45067017) Journal
    I would much rather my mechanic focus his efforts on being good at diagnosing problems and installing factory-made parts rather than troubling himself with building parts himself. I feel the same way about programmers. The simple components (like sorting algorithms) are largely available in libraries, and I would be more concerned that someone I work with know when to use a particular sorting algorithm than that he/she can code up one from scratch.
  • by Jane Q. Public (1010737) on Tuesday October 08, 2013 @02:30AM (#45067167)

    "Find me a mechanic that can build a V8."

    I didn't write "manufacture a V8 from scratch"; I wrote "build".

    I rebuilt an engine, and I'm very far from an auto mechanic. And in the interest of keeping the record straight, I had help and advice. But that's my point here: I wrote "build", not "design and manufacture all the parts from scratch". It's all fine to get help and advice for coding, too. But I wasn't suggesting trying to independently re-invent internal combustion or anything like that.

  • by Jane Q. Public (1010737) on Tuesday October 08, 2013 @02:35AM (#45067193)

    "I agree with the anon poster. You are an idiot. A computer scientist better be able to write a QS in their sleep, but a programmer better know how to find a suitable implementation already written."

    That's not a programmer, that's a hack just out of high school. There IS a big difference, and if you don't know what that difference is, you're paying too much money.

    Now tell me again who's the idiot.

    I wasn't suggesting that every programmer has to know how to do finite-element modeling, for fuck's sake. But if you don't know a Quicksort from a Bubble sort, or how to write them, you're not a programmer by any standard I ever heard of, and I've been around.

  • by Anonymous Coward on Tuesday October 08, 2013 @02:45AM (#45067221)

    1. Don't use proven libraries to solve common programming tasks (e.g., collections).

    2. Write everything from scratch!

    3. Argue that it is a time and budget advantage to do so.

    Did I just get trolled? fuck

  • by jasno (124830) on Tuesday October 08, 2013 @02:47AM (#45067241) Journal

    Dammit I wasted a mod point but I gotta add to this...

    Understand state. Understand state machines. Understand that many times the best solution is to define and implement a state machine. It won't make you feel warm and fuzzy from all the neato tricks you invented. It will often result in a system that is easily understood, analyzed and extended without causing too many problems.
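    A state machine defined as plain data is often all it takes; a Python sketch of the classic turnstile example (states and events invented for illustration):

```python
# The machine as plain data: (state, event) -> next state.
# Classic turnstile: a coin unlocks it, a push relocks it.
TRANSITIONS = {
    ("locked",   "coin"): "unlocked",
    ("locked",   "push"): "locked",
    ("unlocked", "coin"): "unlocked",
    ("unlocked", "push"): "locked",
}

def run(events, state="locked"):
    """Feed a sequence of events through the machine."""
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

run(["coin", "push", "push"])  # -> 'locked'
```

    Because the transitions are data, the machine is easy to analyze, test exhaustively, and extend without touching the driver loop.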

    I know, that's not exactly what you were talking about but your use of the word 'state' got me thinking.

    One more thing - software engineering... programming... whatever. It's a big field. There are folks who make more than me and all they do is glue together java libraries or craft SQL statements. I personally work in the lower levels. The things I need to know are worlds apart from other programmers. I better damn well know how to implement circular buffers, properly lock shared state, understand common hardware interface quirks, memory management, etc. etc.... It would be silly for some folks to waste their time learning those things. Would it make them a better programmer? Probably. But would it ever matter when all they're doing is gluing together libraries? Maybe not.

    I've been in the game for 16 years now. I've never once written or even directly used a sorting algorithm. The first few years I didn't even deal with strings - nothing I programmed used ASCII input or output. No serial ports even. I went the first 10 years not even really understanding what was so special about databases. I learned it on the side for shits and giggles, but it's never been necessary for me to earn a paycheck.

    The only skill every programmer really needs to know is how to be patient and detail-oriented. That's the only thing I can think of that truly is common across the discipline.

  • by Jane Q. Public (1010737) on Tuesday October 08, 2013 @02:53AM (#45067265)

    "Good software development is to a chemist As lego development is to a cook."

    Well, I have to tell you honestly: I don't know whether that went over my head, or under.

    But either way, it missed me completely.

  • Keep It Simple. (Score:4, Insightful)

    by jcr (53032) <jcr.mac@com> on Tuesday October 08, 2013 @02:53AM (#45067267) Journal

    Hands down, the most important idea in programming. See the C++ disaster for proof.

    -jcr

  • Foundation (Score:3, Insightful)

    by fyngyrz (762201) on Tuesday October 08, 2013 @02:54AM (#45067273) Homepage Journal

    Until you've programmed ASM for a microcontroller, you really don't know what's going on under the hood, and you're almost certainly doomed to create bloated code that is slow as mud compared to what it *could* be.

    Sit down with a 6809 system emulation and learn about stacks and heaps and PIC and addressing modes and registers and memory and IO and optimizing loops and etc. Then you've got a foundation. Then C and a linker AND a debugger, then something OO, then HTML, CSS, Python, PostgreSQL, follow the basic PostgreSQL with detailed DB stuff, make sure the math is there through at least algebra and geometry, explain 3D from acos() as pooltable reflection to the various lighting tech... this would be a good first year or possibly two.

    You best learn to solve problems by... wait for it... solving problems.

  • by Jane Q. Public (1010737) on Tuesday October 08, 2013 @02:59AM (#45067283)

    "1. Don't use proven libraries to solve common programming tasks (e.g., collections).

    2. Write everything from scratch!

    3. Argue that it is a time and budget advantage to do so."

    No. That wasn't the point of this discussion at all.

    I highly recommend that programmers DO use proven libraries to solve common tasks. BUT, a programmer who is worth a real programmer's salary COULD write them herself if she had to, which means she also has the skill to identify and preferably fix bugs in said libraries. After all... open source software is all bug-free, yes? (Not that it has to be open source... just an example.)

    I wasn't saying a programmer should write everything from scratch every day. But if you don't know how to, you're SOL and at everybody else's mercy when something goes wrong. You're costing your company money. Because things go wrong.

  • by gigaherz (2653757) on Tuesday October 08, 2013 @03:13AM (#45067329)

    Better list:

    1. Be consistent.
    2. Maintain proper variable/function/class names.
    3. Put the code where it belongs.
    4. Write comments when you still understand what you did.
    5. Use existing libraries, whenever possible.
    6. Write unit tests for your algorithms.
    7. Choose the IDE with the better debugger.
    8. Follow the guidelines, but don't obsess over them.
    9. Open-source is good for the world, but not so good for your wallet (yet).

    I wanted it to be 10, but that's all I could come up with that I would recommend to any programmer, new or not.

  • by znanue (2782675) on Tuesday October 08, 2013 @04:21AM (#45067563)

    I'd be interested to know what line of work you do, programming-wise. My experience tells me that a lot of the programming being done is meant to be powerful and meant to be built quickly. Running quickly and with low tolerance for faults is a little less important, because very few things are mission-critical. While anathema to the academic, this demands a certain skill set: the ability to very quickly assimilate new, arbitrary knowledge about libraries, software, and code the programmer hasn't seen before. The result is a fragile sort of knowledge that often lacks formality and granularity but is sufficient to accomplish a task very quickly.

    This skill is not mutually exclusive with the ability to write everything from scratch, but understanding a system through and through often seems to undermine the coder's speed at getting the product out, because they want to do it 'correctly'. This tug of war between programmatic idealism and pure ease of use often ends up being a major concern in the work I do, because I am rarely satisfied with the 'easy' solution. Sure, sometimes doing it 'correctly' amortizes quickly and the up-front cost should be paid, but other times it just badly trades one resource (dev time) for another (execution time, the time of the humans managing the software, time to market).

    I've met coders who talk like you, and when they do, they're so impractical in their overarching decisions that often the software is DOA because it doesn't do enough, or it took too long to write, or it was endlessly being refactored.

    You're probably doing work at a more basic systems level than I am, and maybe that is why you have this philosophy. I used to think work at the level I'm doing now was uninteresting, but the problems shift toward integration, forward planning, and multi-disciplinary concerns, and away from algorithms, pure transactions, etc. There are also a buttload of programmers working at this level, and while I would love to say that we must hire only candidates who can write a minheap backed up by a btree... I think by far the more important qualification is that skill I was talking about: a lack of 'not invented here' syndrome and an ability to read other people's code and technical documentation very quickly and see how their architecture can fit our needs with a minimum of fuss.

    To put it another way, most of the low-level problems have been solved adequately for this level of work, but there is a whole slew of emergent problems that require more than just Pattern A + Pattern B to solve, and that are not much helped by understanding machine instructions or by algorithmic knowledge. I believe that to be true for many of the programmers who are gainfully employed. So I could not agree with the notion that your hiring strategy would work well for most positions.

  • Re:Foundation (Score:5, Insightful)

    by TheRaven64 (641858) on Tuesday October 08, 2013 @06:14AM (#45068049) Journal

    Until you've programmed ASM for a microcontroller, you really don't know what's going on under the hood, and you're almost certainly doomed to create bloated code that's slow as mud compared to what it *could* be.

    I hear this a lot from people who write unmaintainable code that's full of 'clever' tricks that usually have no measurable impact on performance and, when they do, actually end up making things slower.

    A microcontroller has almost no relationship to the kind of system that you find in a modern desktop or even mobile phone. Some differences:

    • No multiprogramming, meaning your code is free to use all of the resources and you don't need to think about different load conditions.
    • Single core, so the most efficient code is always single-threaded.
    • Fixed latency, so you don't have to worry about things costing different amounts depending on conditions.
    • Flat memory hierarchy, so locality of reference (absolutely the biggest single performance win on a modern system) makes very little difference.
    • No FPU, so floating-point operations are insanely expensive, whereas on a modern CPU they're much cheaper than some of the tricks people use to avoid them.

    Add to that, writing assembly for a short-pipeline, in-order processor is very different from an out-of-order superscalar architecture. If you want to write fast code, design good data structures and good algorithms. No amount of microoptimisation will make up for poor algorithmic design.
