What Today's Coders Don't Know and Why It Matters
jfruhlinger writes "Today's programmers have much more advanced languages and more forgiving hardware to play with — but it seems many have forgotten some of the lessons their predecessors picked up in a more resource-constrained era. Newer programmers are less adept at identifying hardware constraints and errors, developing thorough specifications before coding, and low-level skills like programming in assembly language. You never know when a seemingly obsolete skill will come in handy. For instance, Web developers who cut their teeth in the days of 14.4 Kbps modems have a leg up in writing apps for laggy wireless networks."
Newsflash (Score:2, Insightful)
Experienced people have experience in things they have experienced
The problem is (Score:5, Insightful)
they aren't trained to engineer software, and the industry hasn't incorporated good engineering practices.
It doesn't matter. (Score:5, Insightful)
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil" - Donald Knuth
Most developers will never need their apps to run in constrained environments, and most employers don't want to spend money to eke out performance when average performance is perfectly fine.
Too many programmers get caught up in trying to make something the fastest, the most memory-efficient, or the best use of bandwidth, when most of the time it just doesn't matter. Such things are expensive, and in the long run it's cheaper to be fast and sloppy than slow and lean.
Re:It doesn't matter. (Score:5, Insightful)
And no, that's not an excuse to be sloppy. "Back in the ancient days" it was important to write good code for the limited resources. Now you still need to write good code, but the constraints are relaxed. But we still need code that is maintainable, dependable, extendable, flexible, understandable, etc.
Excuse me while I laugh my head off... (Score:4, Insightful)
I'm new to programming myself, but I've always felt the need to learn more about the computer than just the high-level language. That's why I want to take up PERL.
(emphasis mine)
This is possibly the most hysterically, unintentionally funny thing I've read in a long time.
Re:It doesn't matter. (Score:2, Insightful)
The thing is, people use that quote to be *LAZY*. Yeah, most of the time it doesn't matter. But guess what: when it does, you may find yourself rebuilding an entire framework because you made just plain stupid mistakes, and LOTS of them.
For someone who understands what optimizations are available vs. your code jockey who just whips up some code, the results are miles apart.
I used to think the same way. "It doesn't matter much." But then I realized it does matter. It matters a lot. Think if all of your programs started up 10 seconds faster. Now multiply that by the millions of times programs are run every day.
Individually, on single runs, it doesn't matter much. But that time debt builds up.
Think about this. Take the standard CRT function printf, one of the most-used functions out there. What if it ran twice as fast? What sort of impact in the world would that make?
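One common way a hot output routine earns that kind of speedup is batching: format everything into one buffer, then write once, instead of paying per-call output overhead. A minimal C sketch of the idea (the helper name and the format string are invented for illustration):

```c
#include <stdio.h>

/* Format n lines into one buffer. Flushing the result with a single
   fwrite() replaces n separate printf() calls, each of which may hit
   the stdio lock and buffering machinery on its own. */
static size_t format_batch(char *buf, size_t cap, int n) {
    size_t used = 0;
    for (int i = 0; i < n; i++) {
        int w = snprintf(buf + used, cap - used, "item %d\n", i);
        if (w < 0 || (size_t)w >= cap - used)
            break;  /* buffer full: stop rather than truncate mid-line */
        used += (size_t)w;
    }
    return used;
}
```

Usage would look like `fwrite(buf, 1, format_batch(buf, sizeof buf, 1000), stdout);` — one write call instead of a thousand.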
Re:It doesn't matter. (Score:2, Insightful)
And all those users sitting waiting 5 minutes for the page to load, for the data to completely draw, or whatever?
You do read thedailywtf.com, don't you? Plenty of stories where a 'quick app' becomes mission critical, and can't handle slightly (?) larger datasets.
Well, those aren't _our_ users.. their time is _free_. And they can _leave_ if they don't like it. ....
Also in terms of CPU or RAM (Score:4, Insightful)
You need to wait until the shit is done, then profile it. You suck at knowing what needs to be optimized; no, I don't care how good you think you are. Ask the experts, like Knuth or Abrash.
So if the speed matters, or the RAM usage, or whatever: you write the program, then you profile it, in real usage, and see what happens. You then find where spending your time is worth it.
For example, suppose you find a single function uses 95% of all the execution time. Until you've optimized that, it is stupid to spend time optimizing anything else, because even a small gain in that function will outweigh a massive gain elsewhere. You need to find those problem spots, those areas of high usage, and optimize them first, and you can't do that until the program is written and profiled.
That "couple of areas that use all the resources" pattern is pretty common, too. It is not usual for resource usage to be spread evenly across all the code. So you need a profiler to identify those areas, and then you need to focus your effort there. You can then break out the ASM hackery for those few lines that have to be super fast, if needed, and you'll achieve most if not all of the result you would from doing the whole thing low level.
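The "measure first, then optimize the hot spot" workflow can be sketched with nothing more than a stopwatch around candidate functions. A real profiler like gprof gives you this per function automatically; the function names below are invented for illustration:

```c
#include <time.h>

/* Time 'reps' calls of fn and return elapsed CPU seconds. */
static double time_calls(void (*fn)(void), int reps) {
    clock_t t0 = clock();
    for (int i = 0; i < reps; i++)
        fn();
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

/* volatile sink stops the compiler from deleting the "work" loops. */
static volatile long sink;
static void cheap(void)     { sink += 1; }
static void expensive(void) { for (long i = 0; i < 100000; i++) sink += i; }
```

Compare `time_calls(cheap, 1000)` against `time_calls(expensive, 1000)` and spend your effort on whichever dominates; guessing without the numbers is exactly what the parent is warning against.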
Re:It doesn't matter. (Score:3, Insightful)
Nobody is suggesting you optimise to shave off the odd byte or machine cycle.
However, you should optimise for the task in the sense of picking appropriate algorithms and structures.
What I've seen in industry is that a lot of developers pay attention neither to elegance nor efficiency. And this really bites you in the pants when the code meets real data.
Anyway, once you decide to ignore resource constraints in your engineering, what on earth is left that's challenging? Honestly, you might as well flip burgers for a living if you're going to do things that way.
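To make the "appropriate algorithms and structures" point concrete, here is a small C sketch of the same membership question answered two ways; the data and function names are made up for illustration:

```c
#include <stdlib.h>

/* Comparison function for qsort()/bsearch(). */
static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* O(n) per lookup: fine for small n, brutal on real data sizes. */
static int contains_linear(const int *a, size_t n, int key) {
    for (size_t i = 0; i < n; i++)
        if (a[i] == key)
            return 1;
    return 0;
}

/* After one O(n log n) qsort(), each lookup is O(log n). */
static int contains_sorted(const int *sorted, size_t n, int key) {
    return bsearch(&key, sorted, n, sizeof *sorted, cmp_int) != NULL;
}
```

Neither version shaves bytes or cycles; the win comes entirely from picking the right structure for the access pattern, which is the kind of "optimization" that actually survives contact with real data.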
Re:It doesn't matter. (Score:5, Insightful)
Today's machines are over a hundred times faster than they were 10 years ago
The raw CPU power times the number of cores is 100 times faster. But how much faster is the I/O? Serving up web pages is mostly about I/O: I/O from memory, I/O from the database, I/O to the end user. The CPU is usually a small part of it.
You actually sound like a perfect example of what the article is talking about: people who don't understand where the bottlenecks lie. Hell, it even mentioned the misunderstanding of the I/O bottleneck that exists today.
Re:tl;dr (Score:5, Insightful)
I'd submit the game development industry is still one place where tight, fast-running code really matters. But the trick is, efficient code is extremely slow and expensive to write. At my company, our game engine is written in C/C++ like most places, but we still write most of our tools in C# or Python (and from what I understand, that's becoming the norm). We well understand that we're trading efficiency of execution for efficiency of development in those cases. Both types of coding certainly have their place.
I get a little annoyed at zealots on both sides who claim that "C++ is dead" or "C# is too slow", blah, blah... If you ask a carpenter what his "best tool" is, he might ask you "for doing what?" Programming is no different. People who cling to a single language or paradigm are just missing the big picture, IMO.
Re:The problem is (Score:4, Insightful)
The problem is many people (bosses, project managers, developers, etc) don't understand the big difference between "Software Engineering" and say Civil Engineering.
In Civil Engineering, creating all the blueprints and plastic models necessary typically costs about 1/10th of building the "Real Thing", and makes up a smaller portion of the total cost.
For software, creating the source code (the drafts and blueprints) costs more than 100 times the cost of "make all" (building the "Real Thing"), and forms a large portion of the total cost.
So if "stuff happens" and you need to spend 50% more to fix the design, with Civil Engineering the bosses are more likely to agree (unhappily) to "fix the design", because nobody can afford to build the whole building a few times till you get the design right...
Whereas with "Software Engineering", the bosses are more likely to go "Ship and sell it, we'll fix it in the next release!".
And if you're a boss, you'd likely do the same thing.
So even if you could work out the probabilities of some software component failing, nobody would care. Because all you need to work out is: which bugs need to be fixed first, out of the hundreds or thousands of bugs.
That changes if you are willing to spend 10x the cost (in $$$ and time) of creating each internal "release". By the time the 10th (final) release is written and tested (specs remaining the same - no features added) the stuff should work rather reliably. But you'd be 5 years behind everyone else...
Re:tl;dr (Score:4, Insightful)
Less code does NOT mean smaller file sizes.
I write (in general) smaller amounts of code than any of the other 4 developers I have to work with, but I also document my code as I write it; the doxygen info for a function or method is fully written out before I start writing the function. I document, then write. My files are always larger because my documentation and commenting are far more complete.
Using shorter variable names does not mean you're writing less code; you're just using less text to do the same thing. It also generally means you're writing unreadable code. Disk space and I/O for files used in a compile are irrelevant for any reasonably sized project. If your project is so big that the size of your variable names makes a noticeable difference in your build times ... then you need to reorganize your project.
Good code is self documenting, but being that most of us can't write code that beautiful, proper documentation makes the code suck less for the next guy.
Re:tl;dr (Score:5, Insightful)
Your phone has 200MB of memory, so when a stupid app uses all 200MB ... then what? You don't run your other apps? Being wasteful is stupid regardless of where you do it, in your home, or in your code.
(computers did have double digit RAM at some point, right? My history of computer hardware isn't that great)
http://www.atmel.com/dyn/products/product_card.asp?part_id=4605&category_id=163&family_id=607&subfamily_id=791 [atmel.com]
If you're too lazy to follow the link:
The high-performance, low-power Atmel 8-bit AVR RISC-based microcontroller combines 512B ISP flash memory, 32B SRAM
That '32B SRAM' is actually shared with the 16 general purpose registers, so if you take those out, you have 16 bytes of RAM. With those 16 bytes and the rest of the I/O and other functions built in, you can easily control 3 servos from a single input line, which takes its feed from a larger motor. That chip is capable of driving stepper motors with interpolation for a CNC machine, again taking position information from somewhere else, not processing the command tree itself. The same chip is used to handle input from several glass-breakage detectors and perform false-positive checks, so the alarm doesn't trigger because the cat knocked something off that didn't actually break an external window.
In short, the modern world is built on devices with tiny-ass amounts of RAM. Do you wear a digital watch? They're getting rarer nowadays, but that's another example.
It's not history where that stuff mattered; it's right now, today, you just aren't aware of it. Nothing about the concepts used then is bad today; they STILL provide massive benefits if you know them and follow them. You're basically saying 'gasoline is cheap, just burn more to get more done' and ignoring the fact that there are clear physical limits to computing: a given amount of mass, regardless of how it's configured, can only store so much data and perform so many computations. Eventually you'll have devices which simply can't meet the demand being put on them ... because people thought RAM/processing power was cheap and we no longer needed to worry about those old guys.
Locality of reference, cache hits, pipeline stalls ... all these things that you know nothing about, yet drastically affect how your shitty little app runs. Sigh, I swear at some point in the past people actually took pride in 'doing it right'.
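For anyone who hasn't seen locality of reference bite, a classic C demonstration: the two functions below do identical arithmetic, but traversal order decides the cache behavior. (The matrix size is made up for illustration.)

```c
#include <stddef.h>

#define N 256

/* C stores arrays row-major, so sum_rows() walks memory sequentially
   (cache-friendly), while sum_cols() strides N ints between accesses
   and takes a cache miss far more often. Same result, very different
   memory behavior once N*N ints outgrow the cache. */
static long sum_rows(int m[N][N]) {
    long s = 0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += m[i][j];
    return s;
}

static long sum_cols(int m[N][N]) {
    long s = 0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += m[i][j];
    return s;
}
```

Time both over a matrix big enough to spill out of cache and the row-order version wins, despite being byte-for-byte the same arithmetic. That gap is invisible in the source and obvious in a profiler, which is rather the point of the whole thread.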