Programmers and the "Big Picture"? 405
"Back working on my undergrad (computer engineering) I remember getting frustrated at the comp-sci profs that insisted machines were simply 'black boxes' and the underlying hardware need not be a concern of the programmer.
Of course in embedded systems that's not the case. When developing code for a medical device, you've got to understand how the hardware responds to a software crash, etc.
A number of Slashdot readers dogmatically responded with "security through obscurity" quotes about the shuttle's missing secret box. While that may have some validity, it does not respect the needs of the entire system: in this case, the difficulty of maintaining keys and equipment across a huge network of military equipment, personnel, and installations."
In general... yes (Score:4, Interesting)
If a coder isn't ignoring the fact that their code isn't going to run in the exact same shell they developed it in, they're ignoring that it won't always run on the exact same OS, or the exact same network. Tragically, when it breaks, it can break BIG.
Note that I also don't have enough experience to offer a solution other than "get a clue!" Taking notice of these possibilities is more work, at least until it becomes a habit.
From a programming point of view, it depends. (Score:3, Interesting)
We don't need anymore black boxes (Score:5, Interesting)
The problem with these programmers is that they rarely understand what can and does go wrong in the outside world. It always amazes me that there are people out there who assume everyone has a 100BaseT Ethernet hub between the front end and the back end, or who make other equally naive assumptions.
The issue that crops up most when programmers think in black-box terms is that today's software isn't spec'd out well enough, so the end user doesn't get what they wanted, and the programmer never solved that by asking. Too often the problem is very fuzzy, and the programmer is there to help clarify it, not just implement it.
Without a well-rounded programmer (or his/her boss) looking at the overall system, you will wind up with chatty, buggy applications that are what the user asked for but not what they needed.
bah (Score:1, Interesting)
Or just realize that spaceflight is risky, and it's not always the great unknown that will bring us down.
I'm not advocating carelessness, just pointing out that these space machines we build tend to last 15 years between failures and are so complicated they make the MS Windows operating system look like Lincoln Logs. REAL engineers built that sucker, and while they aren't infallible, they are thorough.
I'd bet my life on their processes any day.
Re:Probably (levels) (Score:3, Interesting)
--sex [slashdot.org]
As a 3rd Year CS Student... (Score:2, Interesting)
It seems apparent enough to me that any passionate CS student will not be satisfied with a mystical understanding of computer architecture, and will in turn educate themselves. I propose, then, that any kind of 'black-box' mentality is more a reflection of a student's drive than of their education.
Not yet, anyways (Score:3, Interesting)
I'm no superstar engineer, but I find this methodology (my window into the black box) so valuable that I'm often frustrated by colleagues who refuse to learn more about an OS/VM/interpreter and make use of it. It is also what most frustrates me about troubleshooting in Windows.
While it's true that I don't know much about Windows, I get the feeling that the kinds of observation tools that are so common on unix-ish machines aren't quite so prominently available on winderboxen. Sure, you can figure out a lot about a problem using MSDEV (what I remember from college, where VC++ wouldn't stop opening every time Netscape crashed), but it isn't available on ANY machine that I ever troubleshoot.
Hell, even when I'm programming Java I use truss to figure out what the hell is wrong with my classpath.
Re:Probably (Score:5, Interesting)
I think it's more than a skill; it's an attitude. I've encountered a number of programmers (just out of school/training) who are oblivious to external concerns, including interface design (traditionally what users complain about most, and where programmers lack any standard to follow). Generally it takes little effort to break programs written by very skilled programmers who are blind to anything outside their scope. I was probably as bad when I first started. Recently, though, an analyst complained angrily that I had gone beyond the scope of the project by including an error/warning log, most likely because the errors and warnings accounted for untrapped logic and revealed how incomplete the spec was, and how little the analyst, and some of the higher-ups, knew of the business function. I felt there were too many things unaccounted for and added the log; when it produced 1,000+ entries, things got a little heated. I stuck to my guns, though, and still see a general lack of interest in reviewing why there are gaps in the spec or in the knowledge of the very people who should have it.
All Systems Are Embedded (Score:5, Interesting)
I started my career (long ago, in a galaxy far away) developing embedded systems, and much later, when running an R&D lab, came to the conclusion that, excepting (importantly) user-interface design, embedded systems were the best crucible in which to learn the right balance between modularity and holism in systems design and implementation.
It's easy for programmers who have only worked on PCs to lose sight of the notion that programs affect the world, but when you are controlling big machines that, improperly instructed, will destroy themselves and the people around them, you begin to think twice about your coding tricks, your testing, and the interaction of your component in the system as a whole.
But there is an underlying assumption in the question that modular design and system holism are mutually exclusive, and I don't accept that either. I also except user-interface design, which is more sociology and psychology and neurology than computer science.
You are correct, however, in supposing that security is particularly vulnerable.
Here's one (true) story, which I will deliberately leave unattributed: a programmer is writing code to control the dual vertical bandsaw in a sawmill -- two huge saws, each a 12-inch band of high-tensile stainless steel with 3-inch teeth, stretched tight between two six-foot diameter wheels and running at 10,000rpm. A log is pulled on a chain through the middle, so a cut can be made on both sides. Logs enter the system, are measured with a laser scanner, and are queued (physically, and in the control program) before entering the bandsaw.
The old-fart programmers used to simply store log data in an array big enough to hold the maximum number of logs that could ever be in the system, and were cognizant of the problem of "phantom logs": a log that falls off the belt or otherwise leaves the system in an uncontrolled way. The clever young programmer decides to use his newly-learned techniques of memory allocation and linked-list design, and builds a replacement.
During mill installation the system is tested and appears to run well. At the end of the shift, however, as the last log is about to be run through the system, the operator discovers that there is no data in the queue for the last log, but decides to run it anyway. The computer dereferences a null pointer, grabs garbage data, and tells the bandsaw to set to an impossible position.
Because the mill is still being installed, the stops on the bandsaw have not been adjusted, and the saws set to position "0" -- and run into the chainguide in the middle. High-stress stainless at great speed meets six inches of fixed steel, and the saw blades explode, burying foot-long shards of sawblade up to four inches deep in the walls of the mill, destroying the operator's booth, and causing tens of thousands of dollars in damage to the mill.
Whose fault was it? The operator, for running the phantom log? The hardware installation guys, for not setting the stops on the mill? Or the programmer, for not constraining the output of his program, not testing more completely, and not using simpler techniques? Answer: all of the above. Better modules would have forestalled the problem, and better systems holism would have forestalled it as well. A combination would have given an even better margin of error.
This has led me to the following conclusion: in order to get a CS degree, every programmer should have to write code that will lower a 10-ton machine press at maximum speed to within inches of his chest, and then stop it. We would have more careful programmers if this were the case. If they went on to write security code, we would have fewer holes.
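The story's moral about constraining outputs fits in a few lines of code. Here is a hedged sketch in Java; the class name, position limits, and fail-safe values are all invented for illustration, not taken from any real mill controller:

```java
// Hypothetical saw-position guard. MIN_POS/MAX_POS stand in for the
// mechanically safe range; a real controller would get them from config.
public class SawGuard {
    static final double MIN_POS = 2.0;   // inches: never closer to the chainguide
    static final double MAX_POS = 24.0;  // inches: fully retracted

    // Clamp any commanded position into the safe range, whatever garbage
    // the upstream queue handed us.
    static double clamp(double commanded) {
        if (Double.isNaN(commanded)) return MAX_POS;  // fail retracted, not into the guide
        return Math.max(MIN_POS, Math.min(MAX_POS, commanded));
    }

    // A missing queue entry is a phantom log: retract rather than guess.
    static double positionFor(Double queued) {
        return (queued == null) ? MAX_POS : clamp(queued);
    }
}
```

The old array design and the new linked list would both have benefited from this last line of defense; the point is that the constraint lives in the module that actually talks to the hardware.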
gnet
It's a dichotomy (Score:1, Interesting)
On the other hand, if you go fully-elegant (applying every theory that might be applicable) to the construction, you'll wind up exhausting processing power/memory, etc...
That's where judgement comes in; knowing where to cut, and specifically what to document to give maintenance programmers a leg-up on whatever short-cut you engineered to accomplish the feat.
Make sense?
Re:We don't need anymore black boxes (Score:3, Interesting)
Actually, I've found myself doing the reverse several times. For many years, I worked in what I guess would be semi-embedded systems. We did special purpose computers for the military. The thing was, we had our own RTE, and I got into the habit of coding for that, and assuming that the target environment essentially had nothing.
Now that I'm coding for a standard OS (Linux), I find it hard to get used to the concept that the infrastructure is already there; I don't have to roll my own.
Don't get me wrong, I believe in reuse. I think it's wonderful to have the tools. It's just difficult to "rewire" myself after 15 years of a particular mindset (but I'm working at it!).
Re:The "Big Picture" is TOO big for most people (Score:2, Interesting)
On the other hand, if you aren't aware of at least some of the big picture, you may end up doing things with consequences that you didn't anticipate.
Re:We don't need anymore black boxes (Score:3, Interesting)
This is what the lead programmer/designer or the PM is for. Depending on the project, you should have 1 of these for each section of the project. If the project is sufficiently large, have 1 LP for each sub-section and have them report to a primary LP. The primary should know how all the subs interface and the subs should know how every component under them interfaces together. There are few projects that I've ever seen that required more than a few LPs (and I've worked on projects with 250+ developers) because they worked in a multi-tiered environment...each LP knew their own section.
Individual programmers need to look at things as a black box; it will make them much more efficient. Granted, you need sufficient requirements for them to do this effectively (very, very few projects have these). Ideally, programmers shouldn't even be hired until at least the first draft of the requirements is out. The LP should be God before that, dictating what's doable, what's feasible, and what makes sense from a technical perspective. He then needs to hire programmers who can produce what he's envisioned. It amazes me the number of projects that staff up before requirements are out... how can you effectively staff up if you don't know what your staff is going to be doing? That's why sub-contractors are important: they can be staffed up anytime and (should!) have domain knowledge of what you're doing.
--trb
Engineering Black Box and Whole System (Score:3, Interesting)
Here are some random thoughts.
Take the slide rule. Back in the days before desktops, calculators, and palmtops, we had slide rules to do division and multiplication. You slide the rule for the numerator over the denominator (I think; it's been so long), then look at the result.
The thing is, you can see how 'close' the result is to whatever you desire (in a circuit or system). You can intuit how close things are. You can easily 'play with the numbers' with a slide rule in some cases: slide it a little to see what it would take to get the desired results. A teeny amount, a lot, whatever.
With digital calculators, it's harder (for me) to see the changes visually. All you see is a quantitative value. I can't look at the physical distances on a slide rule and make inferences.
I can remember doing the same intuiting with meters. In the days before digitization and computers, we had analog meters. A needle would point to the value (voltage, amperage, whatever). Often the 'movement' of the needle is almost more important than the actual value itself.
Take the tuning of the final output circuit in a radio transmitter. You dip the plate and tune for proper power. With an analog meter, you can see the needle do a quick dip. With a digital meter, you can sometimes miss the dip, especially if the circuit has a high Q value. The motion of the meter's needle controls the speed at which I turn the various knobs.
With a digital meter, I feel removed from the process of tuning.
When monitoring the electrical service for a facility, whether a radio transmitter site or even a computer room, I am much more comfortable with an analog voltmeter and amp-probe. It's far easier for me to watch for hiccups (needles jumping rapidly or slowly) that indicate something is happening.
All of these examples speak to my desire to be a part of the overall system, rather than a blind black box. I use my overall knowledge of what is happening in the system as a whole to get a 'feel' for what is happening right there and then.
With only abstract figures and a blind black-box interface, I would feel very much alone and out of touch with the reality of the system.
I think the same can be said about programming. In all of the projects I have been involved with, I have been fortunate enough to see the overall picture of the system at a high enough level to be a 'part of the system' rather than a 'disconnected black box'. This is certainly true of my background in writing scripts to monitor the health of databases and operating systems.
Mark
Roman bridges (Score:4, Interesting)
Of course, a bridge is a MUCH simpler thing than a program, but, hey, 2000 years later, all the bridges are still there!!!
Absolutely! (Score:5, Interesting)
I took a couple of CS courses in college as part of my Math major. They were full-blown CS courses, not courses that had been altered for us Math majors. And they were nothing more than problem-solving courses -- and the problems being solved were so utterly asinine that it was laughable. However, when I studied in Germany I took a CS practicum course where we were assigned the task of creating a graphics program in X Windows on SunOS 4. The class was divided into groups: GUI, backend algorithms, SCM, QA, and requirements and management. There were design sessions and reviews, unit and integration testing, etc, etc, etc. It's the closest I'd ever seen to the real world in academia. I've never heard of any American college or university offering such a course, and no one I've interviewed ever had such a course. That's not to say that it's not offered somewhere, but it just doesn't seem all that common. And that's a real shame.
A happy medium somewhere (Score:2, Interesting)
Re:Probably (Score:5, Interesting)
I think there's a lot of truth in this. For example, how many programmers think about writing software from the standpoint of a support technician? In fact, how many programmers even have experience as a support technician? I've never heard anyone even talk about writing supportable software [iloha.net], yet, when considering the overall costs or quality of a system, I think it's important to consider how heavily the introduction of that system will tax the support department. Whether you're shipping or deploying the system, lower support needs will lower overall costs and vastly improve the reputation of the system.
The same applies for security and usability. It's really not a question of programming/technical ability, but a question of mentality. I think programmers need to have a specific (or perhaps not-so-specific) mindset to get a bigger picture, and not very many programmers are willing to do that. Part of it may be inherent to programmer-types, but it also might be cultural (the whole "us vs. them" elitist attitude).
Yes and no (Score:2, Interesting)
Stuff that's bigger than any of us. (Score:2, Interesting)
ANYTHING of sufficient size and complexity is by definition something that no one of us can comprehend in its entirety. This being the case, there's no hope of ever seeing to it that everything from minor annoyances to catastrophic failures will be abolished.
My experience with this sort of thing isn't in software; it's in large-scale construction projects. Launch pads, to be precise. The basic goal is "build something that the Space Shuttle can successfully fly off of on launch day." In the real world, NOBODY knows exactly what this is going to involve down to the finest detail, or every possible malinteraction among those details.
Fortunately, the launch pad, once built, more or less just stays put and keeps doing the same job over and over. With software development, no such stability is feasible. We're still learning more and more about how computers work, both hardware and software. In this phantasmagoric landscape, with things morphing from this to that with bewildering speed and little overall pattern, the guys who have to grind out the code (and all their bosses right up to the CEO) have no prayer of ever getting it right. Don't be so hard on yourselves; it's a situation that you can NOT fully control. Just do the best you can and let Charles Darwin sort out the mistakes.
Re:In general... yes (Score:1, Interesting)
-It allows you to take advantage of fully-tested, error-reduced code that has been audited. An electrical engineer doesn't design a new power-supply circuit for every new circuit board; they take a previously designed, WELL TESTED and documented circuit and build it to the required specifications.
-It reduces the coding needed. By using a library/class/module already written to handle a specific function, you save time. Don't re-engineer what has already been done, and done well.
Where the model breaks down (but shouldn't):
-Lack of auditing. Once code libraries are written, debugged, and used in beta, they should be audited by independent, senior people. Then the code should be frozen except for future bug fixes, after which it should be re-audited. The auditing does not need to be done by paid people, just by those who can do the job independently and well.
-Lack of testing. I have downloaded many, many libraries and applications which do not come with a working test suite. This is a bad sign. Code that is being distributed should come with a thorough test suite which can be used both by developers (to make sure new code doesn't break something) and by users (to make sure the software compiled correctly). I admire much of the GNU project's software, as it usually comes with a very comprehensive test suite.
-Lack of documentation. "Read the header files" is NOT documentation. Documentation should be clear on what EVERY public function call does, any exceptions that may be thrown, what the return values stand for, and any assumptions made. It should be clear and concise.
-Lack of standardisation of use. I was checking out the libpng web site and counted roughly 100 libraries which somehow provide PNG services. I did not look at each one individually, so I may be a little off here, but I do not see how having one un-audited library is worse than having 100 un-audited libraries. More uncertain code is not better. If you want to add SSE5 support to a version of your software, freeze and audit the old version, then start a new version (new major version number) of the library with all of the new features that you want. That way, people who use version 1.2 because it is stable won't accidentally pick up the newest but untested code because it was called 1.3 instead of 2.0. -GK
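The "working test suite" point above is cheap to honor even without a test framework: a library can ship a known-answer self-test alongside its public API. A sketch, using an invented Crc8 class as a stand-in for a real library (the class and method names are hypothetical):

```java
// Invented stand-in library: CRC-8 with polynomial 0x07 (MSB-first,
// initial value 0, no reflection, no final XOR).
public class Crc8 {
    static int checksum(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF);
            for (int i = 0; i < 8; i++)
                crc = ((crc & 0x80) != 0) ? ((crc << 1) ^ 0x07) & 0xFF
                                          : (crc << 1) & 0xFF;
        }
        return crc;
    }

    // Known-answer self-test: developers run it after every change, users
    // run it after every build. Frozen expected values catch regressions.
    static boolean selfTest() {
        return checksum(new byte[0]) == 0x00
            && checksum(new byte[] {0x00}) == 0x00
            && checksum("123456789".getBytes()) == 0xF4; // published CRC-8 check value
    }
}
```

Freezing those expected values when the library is audited is exactly what lets a later "1.3" prove it still behaves like the audited "1.2".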
only to a point (Score:2, Interesting)
This doesn't mean that you always have to be explicitly focused on these issues, but the overall success of the system as a whole can be critically dependent on them. In a perfect world, of course, this wouldn't be an issue. But assuming the viability of black-box treatment in most real-world projects is the source of many problems, and the truth is that only a relatively small portion of the population is capable of maintaining a sufficiently broad view of a system to respond effectively.
Re:Experience (Score:3, Interesting)
Not really. I've gained much more by reading books like The Mythical Man Month and good object-oriented analysis and data modeling books. Managing complexity through good data modeling is the most important (and hardest) part of a program to get right.
The worst applications I've had to work with were designed piecemeal by a high-turnover team of inexperienced people (read: really ugly data models that resulted in nasty, bloated, unmaintainable code).
Re:Oh this is kind of crap... (Score:2, Interesting)
In general this works fine - until some future date when the owner of the black box decides to change the API such that your code breaks.
Black boxes, other than in an academic setting, imply a closed, proprietary system. Open systems, on the other hand, are not truly black boxes, because you do have access to the source and all the underlying APIs (no Easter eggs waiting to create undocumented interactions).
Re:Of course (Score:3, Interesting)
There's a way around it. Doing new String(string.substring(0,5)) allocates a new String that only contains the five required characters. But the documentation for the black box warns against doing that: "Unless an explicit copy of [the original string] is needed, use of this constructor is unnecessary since Strings are immutable."
Well, yes, they are - but using the constructor can also be required to get around the fact that the entire array of original characters is maintained.
As it turns out, this is a "speed hack" (ie, only one array of characters is kept at any given time - a new array for the substring is not allocated, and the original is reused). However, this implementation assumes that everyone using the black box also needs the parent string, or will dereference both the full string and the substring (allowing both to be garbage-collected) at about the same time, preventing memory and time from being wasted on the substring.
Unfortunately, I had written a program that read in a list of strings, keeping certain substrings and throwing out the rest - or so I thought. (Think "comments" in a text file: they would be removed, and the remaining characters kept.) This meant that by the end of the run I was wasting quite a bit of space, because characters that were no longer accessible were kept around indefinitely: the "substring" objects kept a reference to the full string's array of characters. Fixing it required the new String(string) trick, which, as it turns out, does allocate a new buffer and is there expressly for that purpose (if you read the source code).
My point is this - black boxes can be dangerous. A black box is a very useful abstraction - assuming that important details about implementation side effects are documented. In the given example, the Java developers implement an algorithm that is useful in many cases - but there are those cases where it would be useful not to use such an implementation.
I think the idea of "black boxes" is important, but a developer also has to be aware that something happens inside the black box and be prepared to learn about it if the need arises. Likewise, when creating a black box, care should be taken to fully specify what a given implementation does and to ensure there are no hidden side effects on the environment (like maintaining a 1024-element array while only exposing access to 5 elements). There are pitfalls, and both users and designers of black boxes must ensure that such issues are addressed.
(Especially because if the next String blackbox "fixes" the issue above, my code will start doing a useless extra step to get around the problems in the implementation I saw of the black box...)
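The workaround described above boils down to one defensive-copy idiom. A minimal sketch (the class and method names are invented; note that since Java 7u6 the JDK's substring() copies the characters anyway, so the explicit copy is redundant there, though harmless):

```java
// Defensive-copy idiom for keeping a small prefix of a large string
// without retaining the large string's backing array (on JVMs where
// substring() shares the parent's char array).
public class SubstringCopy {
    static String keepPrefix(String line, int len) {
        // new String(...) forces a fresh, right-sized char array.
        return new String(line.substring(0, Math.min(len, line.length())));
    }
}
```

In the comment-stripping scenario described, calling keepPrefix on each input line lets the full lines be garbage-collected while only the kept prefixes survive.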
Is it really Embedded Vs Virtual Machine? (Score:2, Interesting)
One thing to remember is that the only thing some programmers learn from school is how to misuse elements of CS to rationalize away the fact that they suck. IGNORE sophistical arguments, instead of buying into the BS and getting the wrong idea about whatever they're using as an excuse.
It sounds like you've heard 'blackbox coding' where I've heard 'implementation details'.
Like:
Me: What if?---
Them: I'm not worrying about 'implementation details' [with a tone that suggests they are above it all]
But for you, perhaps, it was like:
You: What happens when?
Them: We don't have to worry about that, because we're coding to a virtual machine.
You: Yeah I know, but ---
Them: Haven't you taken object-oriented programming and design?
You: Yeah
Them: Ah, then you see... [starts long-winded lecture about OO that isn't germane].
There are two types of people: people who don't know everything, and people who won't admit that they don't know everything. If we had been dealing with the former, it would have gone like:
Us: What about?
Them: Well, if that happens, we're fscked. That's something we'd have to worry about if we were doing embedded systems programming, like in medical equipment, but for the purposes of CS201 we implement the 'ostrich algorithm'.
Proper delegation (Score:2, Interesting)
Now I know that sounds elementary and naive, but I still believe that if you are making [too many] links up, it might be possible to re-work your design.
Re:Oh this is kind of crap... (Score:2, Interesting)
- dasmegabyte
To quote the FOCUS Magazine interview with Bill Gates [cantrip.org] [October 23, 1995]:
"FOCUS:
Every new release of a piece of software which has fewer bugs than the older one is also more complex and has more features...
Gates:
No, only if that is what'll sell!
FOCUS:
But...
Gates:
Only if that is what'll sell! We've never done a piece of software unless we thought it would sell. That's why everything we do in software
FOCUS:
But on the other hand - you would say: Okay, folks, if you don't like these new features, stay with the old version, and keep the bugs?
Gates:
No! We have lots and lots of competitors. The new version - it's not there to fix bugs. That's not the reason we come up with a new version.
FOCUS:
But there are bugs in any version which people would really like to have fixed.
Gates:
No! There are no significant bugs in our released software that any significant number of users want fixed.
FOCUS:
Oh, my God. I always get mad at my computer if MS Word swallows the page numbers of a document which I printed a couple of times with page numbers. If I complain to anybody they say "Well, upgrade from version 5.11 to 6.0".
Gates:
No! If you really think there's a bug you should report a bug. Maybe you're not using it properly. Have you ever considered that?
FOCUS:
Yeah, I did...
Gates:
It turns out Luddites don't know how to use software properly, so you should look into that. -- The reason we come up with new versions is not to fix bugs. It's absolutely not. It's the stupidest reason to buy a new version I ever heard. When we do a new version we put in lots of new things that people are asking for. And so, in no sense, is stability a reason to move to a new version. It's never a reason."
So, you can see that your assumption is incorrect. YOU CAN NOT DEPEND ON YOUR VENDOR TO FIX IT. We found this out the hard way at my job - after spending millions of dollars; now we have an open architecture system where we can plug and play different vendor solutions easily, and use open source next to vendor supplied applications.
On the other hand, I have written to the maintainer of a famous development environment [don't want to drop names - not good form], and he returned my email the same day with an answer to my question. My experience tells me your basic understanding does not jibe with reality.
A good example of this... (Score:3, Interesting)
Anyway, I think the GUI is a great example of this. Programmers got carried away with the GUI, and now applications and OSes are completely over-GUIed. The mouse is much, much slower than the keyboard for many tasks. I use graphic design programs on a regular basis, and I would give an arm and a leg to have a quick and easy command-line interface in, say, Adobe Illustrator for precise object manipulation. Same goes for Photoshop. AutoCAD and other programs have a decent implementation of the CLI, but it could get much better.
I would love to see programmers get out of the object-oriented point-and-click mode that they've been stuck in since the invention of the original Macintosh.
GUIs are great for representing data, and they are great for the visual manipulation of data. But visual manipulation is often imprecise. For precise data manipulation, the CLI is still necessary: clicking through a menu and two dialog boxes to finally find a text box with the field to rotate an object by 20 degrees, or add a 2nd column to the page, or fix the page margins - that's absolutely ludicrous. There should be a simple (preferably standardized) command line that's accessible from all applications. Remember the ~ in the original Quake? That was a huge step forward. We need it in more applications. How much productivity has been lost by over-mousing? -Shylock0
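The per-application command line argued for above is not much code. A hypothetical sketch; the commands, wording, and class name are all invented for illustration, not from any real application:

```java
// Hypothetical mini command line of the kind described: precise
// operations typed directly instead of hunted through menus/dialogs.
public class MiniCli {
    static String run(String input) {
        String[] parts = input.trim().split("\\s+");
        switch (parts[0]) {
            case "rotate":   // e.g. "rotate 20"
                return "rotate selection by " + Double.parseDouble(parts[1]) + " degrees";
            case "margin":   // e.g. "margin 1.5"
                return "set page margin to " + Double.parseDouble(parts[1]) + " in";
            case "columns":  // e.g. "columns 2"
                return "set page to " + Integer.parseInt(parts[1]) + " columns";
            default:
                return "unknown command: " + parts[0];
        }
    }
}
```

A real implementation would dispatch into the application's command objects rather than return strings, but the parsing layer really is this thin, which is the point: the cost of offering a CLI is tiny next to the precision it buys.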
Questions and comments welcome. Flames ignored. Post responsibly.
Hey, I resemble that remark!!!! (Score:1, Interesting)
Anyway, you seem to have learned something in spite of all those bad teachers. Have you considered coming back to school and helping us out? We sure could use the help; we're not proud. If you know a better way to teach, come and show us. We're willing to learn.