Programmers and the "Big Picture"?

FirmWarez asks: "I'm an embedded systems engineer. I've designed and programmed industrial, medical, consumer, and aerospace gear. I was engineering manager at a contract design house for a while. The recent thread regarding the probable encryption box of the Columbia brought to mind a long standing question. Do Slashdot readers think that the theories used to teach (and learn) programming lead to programmers that tend to approach problems with a 'black box', or 'virtual machine' mentality without considering the entire system? That, in and of itself, would explain a lot of security issues, as well as things as simple as user interface nightmares. Comments?"

"Back working on my undergrad (computer engineering) I remember getting frustrated at the comp-sci profs that insisted machines were simply 'black boxes' and the underlying hardware need not be a concern of the programmer.

Of course in embedded systems that's not the case. When developing code for a medical device, you've got to understand how the hardware responds to a software crash, etc. A number of Slashdot readers dogmatically responded with "security through obscurity" quotes about the shuttle's missing secret box. While that may have some validity, it does not respect the needs of the entire system, in this case the difficulty of maintaining keys and equipment across a huge network of military equipment, personnel, and installations."

This discussion has been archived. No new comments can be posted.

  • Probably (Score:4, Insightful)

    by nizcolas ( 597301 ) on Tuesday February 11, 2003 @01:38PM (#5280881) Homepage Journal
    Most programmers who are going to come across a "black box" have enough experience to be able to code for the situation. Isn't that skill a trait of a good programmer?

    Then again, maybe I'm missing the point :)
  • by seanmcelroy ( 207852 ) on Tuesday February 11, 2003 @01:41PM (#5280919) Homepage Journal
    I think the problem increases as programmers are less and less a part of the complete systems development life cycle and are contracted to work on individual components of an overall system. Especially during the maintenance phases of a system's life, the inexperience of new programmers on a project is probably more to blame for a black-box mentality than 'training' per se.
  • by jlk_71 ( 308809 ) on Tuesday February 11, 2003 @01:41PM (#5280924)
    I am taking courses toward my degree and I must say that in my intro to programming course, the instructor was constantly stressing the need for 'black box' programming. In addition, though, he also stressed that while keeping things black box, you also need to keep your mind on the whole project, always watching out for possible security problems, etc.
    I believe that some people tend to get tunnel vision and concentrate wholly on the black-box theory, without taking the whole program into consideration. That usually leads to problems and errors in the code.

    Regards,

    jlk
  • by syntap ( 242090 ) on Tuesday February 11, 2003 @01:42PM (#5280928)
    Many times, management is what prevents developers from seeing the "big picture". Sometimes it's "Here, code this" and you don't get much opportunity to ask the questions you know need to be asked. Sometimes you have to hope resolutions to these issues are built into the requirements specification or will be ironed out in quality assurance.

    The developer is only one of a group of responsible parties in any given system, and his/her output depends largely on input from others. If a developer is kept "out of the loop" on things (or is lazy and stays out of the loop on purpose), you're going to see these problems.

    Often it's like blaming the clogged fuel injectors instead of the cheap gasoline that clogged them.
  • by Entrope ( 68843 ) on Tuesday February 11, 2003 @01:42PM (#5280929) Homepage
    Keeping the "big picture" in mind is a good thing for managers and designers. For people implementing the finer details, though, it can be a distraction and a poor use of their time. Someone implementing or verifying flow control in a ground-to-space network link does not need to know much about the format of data carried over the link. Someone doing circuit layout or design for a cockpit control widget does not need to worry about reentry dynamics and airflow. Similar examples can be found in any large system design.

    It is the responsibility of the higher level designers and managers to encapsulate the big picture into digestible, approachable chunks. To the extent possible, they should be aware of and codify the assumptions and requirements that cross multiple domains -- when those are documented, it is easier to test the system for correctness or robustness, as well as to diagnose problems.

    When everyone on the project tries to orient their work to what they each perceive as the big picture, you end up with enough different perceptions that people work against each other. Breaking down the system into smaller, more defined, chunks combats that tendency.
  • If I write a component that takes in X1 and outputs X2, isn't it the designer's job to make it look pretty? I mean, supposedly they were the ones who came up with needing the component in the first place, to accomplish some function or other, and thus make the user happy.
  • by Dr. Manhattan ( 29720 ) <(moc.liamg) (ta) (171rorecros)> on Tuesday February 11, 2003 @01:43PM (#5280943) Homepage
    Being able to abstract chunks of a program or system out and not worry about implementation is utterly vital. No human, however gifted, is capable of understanding the entirety of more than a trivial system at once.

    Now, the amount of abstraction possible does differ depending on what you're doing. Embedded systems programming is hard, and you do have to know details of the machine. But I ask you - do you insist on a gate-level understanding of the embedded CPU, or will you settle for knowing the opcodes and their timing characteristics?

    Because embedded programming requires you to know more about the device, it's proportionately harder to do. That's one reason, apart from power and cost considerations, that embedded systems tend to be simple - the simpler the system, the easier it is to think about, to prove correct, or at least to enumerate the possible pathways and handle them.

    But even in that case, you need to be able to ignore some implementation issues or you can't do it at all.

  • by keyslammer ( 240231 ) on Tuesday February 11, 2003 @01:44PM (#5280956) Homepage Journal
    ... but the lack of experience.

    Programmers have to consider subsystems as abstractions: there's a limit to how many things the brain can deal with at one time. We know that this kind of thinking produces cleaner designs which are less susceptible to bugs and security holes.

    Knowing the limitations of the "black box" and what will break the abstraction is the product of lots and lots of experience. I don't believe there's any way to teach this - it's something that you just have to live through.

    That's why senior level developers can continue to justify their existence (and higher price tags)!
  • Experience (Score:5, Insightful)

    by wackysootroom ( 243310 ) on Tuesday February 11, 2003 @01:44PM (#5280959) Homepage
    The only thing that school prepares you for is to get an entry level job where you can gain the experience to write reliable software.

    School will get you up to speed on new terms and concepts, but the only thing that will make you better at writing good code is to read good code, write your own code and compare it to the good code, notice the difference, and adjust your approach until your code is "good".
  • by levik ( 52444 ) on Tuesday February 11, 2003 @01:45PM (#5280967) Homepage
    The black box paradigm obviously has its proper and improper applications.

    It can be a great boon in OO programming, where you can assume a component will live up to its end of the bargain by providing the specified functionality, letting you concentrate on using whatever interface it exposes.

    It can obviously be taken too far in cases where failure to know about the internal workings of a system can lead to grossly unoptimized or even error-prone code. However, more often than not such problems are caused by faulty abstraction and incomplete documentation on the part of the implementor.

    In most such cases a "grey box" approach would do, where the end-developer is made aware of some of the limitations and quirks of the component they are working with, but not necessarily the minute details of its operation. You don't need to know whether the sort() function is implemented with Bubble Sort or Quick Sort, but it does matter whether it runs in quadratic time or n-log-n time.
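
    C's qsort() is a handy example of that grey box: the standard pins down the interface but says nothing about the algorithm or its complexity, so the "quirks" have to come from your platform's documentation. A minimal sketch:

        #include <stdlib.h>

        /* The comparator is the whole published contract; whether the
         * sort behind it is quadratic in the worst case is a platform
         * fact, not an API fact. */
        static int cmp_int(const void *a, const void *b)
        {
            int x = *(const int *)a, y = *(const int *)b;
            return (x > y) - (x < y);   /* avoids overflow of x - y */
        }

        void sort_ints(int *v, size_t n)
        {
            qsort(v, n, sizeof v[0], cmp_int);
        }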

    Everything breaks down if taken to an extreme.

  • by drenehtsral ( 29789 ) on Tuesday February 11, 2003 @01:46PM (#5280973) Homepage
    I think you've got a point there.

    The way modern projects are often managed, along with the way modern programmers are often taught does lead in that direction.

    Even if you are only responsible for a small part of a much larger project, it will always help to have a decent understanding of the REST of the system. Maybe not in excruciating technical detail, but at least a decent grasp on what goes on and how it all works.

    The goal of the whole 'black box' thing is that in theory it minimizes stupid dependencies and hidden interconnections that can cause things to be unmaintainably complex. Individual components should still be well spec'd out, and projects should still be modular, but each programmer should grok the context in which his/her code runs, and people should still communicate to iron out inefficiencies, strive for a consistent UI, etc...

    I think it's hard to teach that; you just have to learn by experience. Where I work, we all go out for curry once a week (company pays) and we just talk about the project, off the record, no bureaucracy, just a handful of geeks talking about programming. We've hammered out more efficiency/UI/complexity issues that way than in any formal meetings.
  • by Anonymous Coward on Tuesday February 11, 2003 @01:48PM (#5281004)
    Computer science professors and courses are more concerned with the methods, ideas, and logic of computer programming and design. The idea is to create a totally abstract system, hardware or software, that can then be implemented on any system. This is the purpose of "black box" programming.

    While I agree with you that programmers should understand the hardware they are writing for, any knowledge of that hardware biases the system they create toward that hardware, moving it further from computer science's notion of total abstraction.
  • by Illserve ( 56215 ) on Tuesday February 11, 2003 @01:49PM (#5281006)
    I recently installed the latest version of the accursed RealOne player to watch an .rm file. I hate RealPlayer more than can be described by words, and it just seems to be getting worse.

    So I pop it up to view the file, and what happens? I get the movie playing in a window on top of the Real advertising/browser thing. It spontaneously pops up a "help" balloon giving me a tip for how to use the browser window. The balloon is sitting RIGHT ON TOP OF THE GODDAMNED MOVIE IMAGE. It goes away after a few seconds of frantic clicking, but the point is clear, these programs are often a monstrous brew, created by too many chefs. They just throw in features, and there doesn't seem to be someone sitting at the top, deciding whether these features actually contribute to improving the final product, or just make it worse.

    Then there's Office, which, by default, will turn 2-21-95 into 2/21/95. ????? I had to dig through numerous help pages to figure out which subpanel of the preferences menu deactivates this. Worse, I enter "23 hello" on a new line in Word and hit enter; it auto-indents, adds a "24", and positions the cursor after it. !?!?!!?!?!?!?!!?
    How many times I've had to fight this particular feature, I can't tell you.

    And it's certainly not just a closed source thing either; if anything, some open source GUI packages are even worse. Although, to be fair, I don't expect as much from open source stuff, because no one's getting paid. But when a program created by paid programmers is just badly done, I get infuriated at the incompetence, at the hours wasted taking a usable product and making it actually worse by throwing in garbage features.

    It's been said a million times, but if we made cars the way we make software, no one would get anywhere.

  • Of course (Score:5, Insightful)

    by Scarblac ( 122480 ) <slashdot@gerlich.nl> on Tuesday February 11, 2003 @01:52PM (#5281031) Homepage

    It is essential that every programmer in a big system only thinks about his own problem, and uses the other parts as a black box.

    Say I want to use some library. Then it has a documented API, which explains how I can use it. I don't need to know more. For me as a programmer, that means:

    • Simplicity - it is a limit on what I need to understand.
    • Compatibility - if a new version comes out, which changes implementation details but leaves the API intact, programs that don't make assumptions about these details won't break.
    • Portability - if there is a new implementation of the same API by another vendor, I can (theoretically) just change to that implementation and nothing changes.

    I'm certain that without these black boxes, no big software engineering project would be possible. The human mind can't keep track of everything in a whole system at once (except for some simple cases - like embedded systems, perhaps).

    It is done sometimes - I believe Perl peeks inside C's stdio FILE struct when reading/writing files on some platforms to get faster I/O than standard C allows, for example. But that's only an optimization added after coding the general case, and even then I don't believe it's a good idea.

    For hardware, the story is much the same. Any speedups specific for the hardware are optimizations, and they should only be looked at when the program works, after profiling, when there's a speed problem, and the algorithm can't be improved.

    Remember the rules of optimization: 1) Don't do it. 2) (for experts only) Don't do it yet.

    Black boxes in software engineering are your friend.
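
    To make the portability point concrete, here's a minimal C sketch (the struct and names are invented): the caller codes against the interface, and any implementation that fills it in can be swapped without touching the caller.

        /* Hypothetical storage API -- the published contract. */
        struct store_api {
            int (*put)(const char *key, const char *val);
            const char *(*get)(const char *key);
        };

        /* The caller sees only the API; the black box behind it can
         * change vendor, version, or implementation freely. */
        int save_user(const struct store_api *db, const char *name)
        {
            return db->put("user", name);
        }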

  • by DakotaSandstone ( 638375 ) on Tuesday February 11, 2003 @01:52PM (#5281032)
    I'm also an embedded systems engineer. Two huge concepts I got as an undergrad in CS were "APIs" and "object oriented programming." By their very nature, these things inspire black box thinking.

    And heck, I don't know. I mean, is it great that I can now call malloc(1000000); and get a valid pointer that's just usable? Yeah probably. In DOS, I wasn't shielded from the memory manager as much, and to do something like that, I had to write my own EMS memory swapping code! That was a PITA, and kept me from the true task I was trying to solve.

    So a modern 32-bit malloc() is a black box for me. Cool. It's empowered me very nicely.

    However, something like WinSock has become a big black box for people too. Okay, great. So it's really easy, in 5 function calls, to open a socket across the internet and send data. But you've missed the nuances of security. So now your app is unsafe, because you weren't forced to know more about what's going on in the "black box."
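
    As a made-up C sketch of what I mean: say a message arrives off the socket as a 2-byte length prefix plus payload. Treating the network as a friendly black box means trusting that length field -- but a hostile peer controls it.

        #include <stdint.h>
        #include <string.h>

        int parse_msg(const uint8_t *pkt, size_t pktlen,
                      char *out, size_t outlen)
        {
            if (pktlen < 2)
                return -1;
            uint16_t claimed = (uint16_t)(pkt[0] << 8 | pkt[1]);

            /* Unsafe: memcpy(out, pkt + 2, claimed);  -- trusts the peer */
            if (claimed > pktlen - 2 || claimed >= outlen)
                return -1;      /* validate against what actually arrived */
            memcpy(out, pkt + 2, claimed);
            out[claimed] = '\0';
            return (int)claimed;
        }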

    Well, that's all I can really say in my post. Black boxes are a darn complex issue to talk about. Anyone who attempts to distill this down to a "yes" or "no" answer is probably missing a lot of the complexity at hand in the issue.

    Very interesting question to put forth, though. Good topic.

  • by Carnage4Life ( 106069 ) on Tuesday February 11, 2003 @01:55PM (#5281066) Homepage Journal
    There are many things computer science education does not teach the average student about programming. This is compounded by the fact that programming can vary significantly across areas of CS (i.e. networking vs. database implementation) and even within the same area (GUI programming on Windows vs. GUI programming on Apple computers).

    When I was at GA Tech the administration prided themselves on creating students that could learn quickly about technologies thrown at them and had broad knowledge about various areas of CS. There was also more focus on learning how to program in general than specifics. This meant that there was no C++ taught in school even at the height of the language's popularity because its complexity got in the way of teaching people how to program.

    Students were especially taught to think 'abstractly', which, with the advent of Java, meant ignoring not only how the hardware works but also how things like memory management work. In the general case, one can't be faulted for doing this while teaching students. Most of my peers at the time were getting work at dotcoms doing Java servlets or similar web/database programming, so things like how choosing linked lists vs. arrays for a data structure affects the number of page faults were not things they really had to concern themselves with, considering that the virtual machine and the database server would affect their applications far more than any code they wrote.

    Unfortunately for the few people who ended up working on embedded systems where failure is a life or death situation (such as at shops like Medtronic [medtronic.com]) this meant they sometimes would not have the background to work in those environments. However some would counter that the training they got in school would give them the aptitude to learn what they needed.

    I believe the same applies to writing secure software. Few schools teach people how to write secure code, not even simple things like why not to use C functions like gets() or strcpy(). However, I've seen such people snap into shape when exposed to secure programming practices [amazon.com].
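
    For what it's worth, the gets()/strcpy() point fits in a few lines of C -- the unsafe calls trust the caller's data to fit, the bounded ones don't:

        #include <stdio.h>

        void read_name(void)
        {
            char buf[64];

            /* gets(buf);  -- never: no bounds check at all */
            if (fgets(buf, sizeof buf, stdin) == NULL)
                buf[0] = '\0';              /* handle EOF/error */
        }

        void copy_name(char *dst, size_t dstlen, const char *src)
        {
            /* strcpy(dst, src);  -- trusts src to fit in dst */
            snprintf(dst, dstlen, "%s", src);  /* truncates safely */
        }
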
  • by dboyles ( 65512 ) on Tuesday February 11, 2003 @01:55PM (#5281074) Homepage
    I agree, and would like to add my thoughts.

    One of the most likeable things about programming is that on a low enough level, it's always predictable. This kind of goes hand-in-hand with the fact that computers don't make mistakes, humans do. As a programmer, it's very comforting (for lack of a better word) to have a chunk of code and know that, given X input, you'll get Y output. You can write a subroutine, document it well, and come back to it later, knowing how it will behave. Of course, other programmers can do the same with your code, without having to have intricate knowledge of how the code goes about returning the output.

    But of course, there's a catch. It's probable that the programmer who wrote the subroutine initially didn't envision some special case, and therefore didn't write the code to handle it. If everybody is lucky, the program will hiccup and the second programmer will see the problem. The worse situation is when the error is seemingly minor, and goes unnoticed: when that floating point number gets converted to an integer and nobody notices.
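
    That float-to-int case takes only a couple of lines of C to reproduce (the value here is made up):

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            double price = 9.9999999;            /* computed upstream */
            int truncated = price;               /* silently becomes 9 */
            int rounded = (int)lround(price);    /* explicit: 10 */
            printf("%d %d\n", truncated, rounded);
            return 0;
        }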

    I know this isn't some groundbreaking new look on abstraction in code, but it is pretty interesting to think about.
  • by Anonymous Coward on Tuesday February 11, 2003 @02:02PM (#5281143)
    I've been doing pda programming for both the pocketpc and the palm os.

    The application for both is intended to be identical, but the api is different for each device.

    I designed the app originally for the palm, but now I am porting it over to the pocketpc. Unfortunately, the api is different enough that little of the code is portable.

    If I had known I would be coding for both, I would have tried to design the code to be more portable. Knowing the requirements of both systems might have allowed me to factor out the device-specific sections.
  • by Anonvmous Coward ( 589068 ) on Tuesday February 11, 2003 @02:03PM (#5281150)
    "Many times, management is the cause of preventing developers to see the "big picture". Sometimes it's "Here, code this" and you don't get a lot of opportunity to ask the questions you know need to be asked..."

    Don't forget the "make it work by the next trade show" mentality.
  • Re:Experience (Score:2, Insightful)

    by Anonymous Coward on Tuesday February 11, 2003 @02:05PM (#5281166)
    So then how does better code get developed if everyone is merely attempting to match their coding style to the "good" style they see?

    *Someone* had to provide that good code in the first place.
  • by Anonymous Coward on Tuesday February 11, 2003 @02:08PM (#5281197)
    With almost 18 years in the field, I agree completely. For example, carpenters, HVAC, electricians, etc. provide "black box" solutions in construction. Good project management ensures that they see enough of the project to interface properly, but they don't design. A good UI is composed of (presumably) well constructed black boxes.
  • by (trb001) ( 224998 ) on Tuesday February 11, 2003 @02:11PM (#5281222) Homepage
    Don't laugh, but this is one of the reasons why it's important to have solid requirements BEFORE you begin coding anything. Most projects don't, I know, but something as complicated as the space shuttle would need to be completely spec'd out beforehand. After proper requirements and specs are laid down, the programmer should then approach the system as if it were a black box...with a lot of restrictions.

    The idea behind black box development is that you don't need to know what the rest of the system does...your component takes input and delivers output. That's a Good Thing (tm). Requirements are what tell you how to design and implement your black box, ie, you can't have more than 1ms latency between input/output, you can't assume some system variable is going to be out there, you can't assume your process won't be interrupted. Given these sorts of requirements, your part of the system SHOULD be a black box...someone else should send you the inputs and know what kind of outputs they're getting out. Assuming you correctly followed the requirements (that's what QA testing is for), they know what they're getting.

    --trb
  • I bless you.... (Score:2, Insightful)

    by FirstNoel ( 113932 ) on Tuesday February 11, 2003 @02:12PM (#5281234) Journal
    Ah, I think we have someone here who DOES see the big picture.

    There are lots of times, at least in my experience, where it's not the programmer's fault in how the program works.

    I've seen specs come down from higher-ups who have no idea what they are asking for. I'm a little bit luckier though. The analysts we have here tend to spot these problems long before I get to program. But occasionally some do slip through. I have loads of fun ripping these things to shreds. I feel like a professor at college with my little red pen. "Ah, wrong...can't do that! What does that say? etc, etc ..."

    Aside:
    That is also usually a good stall tactic. If I'm swamped with other projects, I'll send them a flurry of notes and overwhelm them with spec questions. It usually buys me a few days.

    It's tough to think outside the box, when there is no box.

    Sean D.

  • by praetorian_x ( 610780 ) on Tuesday February 11, 2003 @02:15PM (#5281258)

    Do Slashdot readers think that the theories used to teach (and learn) programming lead to programmers that tend to approach problems with a 'black box', or 'virtual machine' mentality without considering the entire system?

    Well, programming is, at its root, about controlling complexity. A good program (not that *I've* written one) will have sub-components within it that largely act as black boxes to one another. It is a great and rare skill to recognize where the boundaries are in your program and establish them early, to avoid painful refactoring later.

    In my experience, it is when something *isn't* a black box that things can get seriously fsked. "What? I set a global variable and now the app seg faults when I click that drop down?" Ahhh, not "black boxy" enough.
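
    A toy C version of that failure mode (names invented): the first function looks like a black box but secretly depends on shared state, which is exactly what lets a global variable crash an unrelated drop-down.

        static int g_base = 10;           /* hidden input */

        int scale_leaky(int x)            /* breaks if anyone pokes g_base */
        {
            return x * g_base;
        }

        int scale_sealed(int x, int base) /* a real black box: inputs explicit */
        {
            return x * base;
        }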

    My perspective is from a higher level than embedded, though. Embedded is a whole different game, although the "controlling complexity" insight of higher-level programming no doubt still applies (as far as it can go).

    Cheers, prat
  • Re:Experience (Score:5, Insightful)

    by Oink.NET ( 551861 ) on Tuesday February 11, 2003 @02:15PM (#5281262) Homepage
    the only thing that will make you better at writing good code is to read good code

    Because code is the most direct way to communicate wisdom between geeks? I would submit that unless you get the analysis and design right, your approach to writing good code just teaches you how to make a more polished turd.

    As far as getting better at the mechanics of coding, I would suggest reading Steve McConnell's Code Complete [amazon.com].

  • by Hirofyre ( 612929 ) on Tuesday February 11, 2003 @02:16PM (#5281273)
    The correct level of abstraction for a project is often hard to find, even for very experienced programmers. Sometimes you have to raise the level of abstraction of an overall system, or you will never get to the point where you can move forward on your piece of the process. Generally, I've found that the problems lie in the areas where the pieces don't quite fit together properly; namely where one person's code doesn't follow the contract a second person was expecting. A lot of the time, even for mid-size problems, it would be impossible for one developer (or a team) to have an end-to-end understanding of the problem space.

    As far as I can tell, there is only one way to avoid the "black-box" problem, and that is to have one person code the whole thing, which is very likely infeasible. The further you get from "your" part of the system, the more abstract it is going to get. If your abstraction is faulty, there is going to be trouble, but I wouldn't say it was caused by treating the problem as a black box.
  • by Anonymous Coward on Tuesday February 11, 2003 @02:17PM (#5281283)
    Yes, I think the 'black box', you-don't-need-to-know-the-hardware approach is dominant in teaching. It's a very useful abstraction, it works most of the time, so I think this is fine.

    Of course, any decent programmer (or researcher) should be aware of where this abstraction breaks down and where you do need to know the hardware (or lower layers) of your system.

    But frankly, I don't have any idea of how to teach this... Dumbly insisting on looking at the hardware when doing your Quicksort certainly doesn't cut it. I really hate Knuth's only-assembler approach in TAOCP. He has a very valid point, but does he really have to make it on every single page? Worst is, I'm not even sure if he reaches his goals, or whether people don't simply turn to other books...

    I guess the best approach to teaching would be to stick with 'black box' as the basic assumption but to point out examples where this model is insufficient, and then hope the students get it...
  • by Entrope ( 68843 ) on Tuesday February 11, 2003 @02:17PM (#5281284) Homepage

    That's not entirely true. Right brained people can, thing is..


    Being right-brained does not magically grant someone the ability to keep ten thousand things in their head at once. At best, it allows one to defocus easily from things that are not important to what they do want to focus on -- in other words, to think abstractly. :)


    The hard part of being a really good programmer is not learning how to do job X, Y or Z. The hard part is learning where to draw the interfaces, where you should use existing abstractions, and where you need to extend them or create new abstractions. This is true at most layers of design -- like Dr. Manhattan said, you can find abstraction layers and idealized interfaces at many different places in a complex (especially a computerized) system.

  • by nosilA ( 8112 ) on Tuesday February 11, 2003 @02:20PM (#5281318)
    In order to do a good job on your module, you need a solid understanding of how the components you directly interact with function. In addition, a superficial understanding of other components is useful.

    For example, let's say you are working on the software for automatic transmission control in the car. You need an intimate understanding of the hardware you are running on, that's directly related to your job.

    However, you also need a solid understanding of how the automatic transmission works. Understanding the mechanics of the gear change is important to understanding timing issues, errors that can occur, and how to deal with them.

    It is very useful to have a good understanding of how a car works in general, to get an idea of how your product will be used. This allows you to optimize your product for likely scenarios.

    Sometimes, for personal satisfaction, it is nice to know how the windshield wiper mechanism works, but it doesn't help you in any way to make your automatic transmission control better.

    -Alison
  • by dasmegabyte ( 267018 ) <das@OHNOWHATSTHISdasmegabyte.org> on Tuesday February 11, 2003 @02:23PM (#5281342) Homepage Journal
    The "Black Box" design theory abounds because of the freedom it offers programmers from the dark ages of having to know the underlying hardware intimately before anything could be accomplished. It's what allows programmers to devote all of their time to doing what matters rather than pouring over volumes of errata and arcana.

    The reason Windows became so popular, for example, is that its API offered programmers a way to manipulate graphics without having to talk to the graphics hardware directly. Variation from driver to driver was of no concern, and shouldn't be -- that's an IT issue which can be repaired without redoing the entire application.

    And in a perfect world, there's no problem. If a driver hooks into an API properly and documents any disparity, then the black box theory holds true. Problem is, drivers aren't perfect. A lot of them are designed for bare bones functionality, and only optimized as necessary (hence how Nvidia's still squeezing substantial horsepower out of my ancient GeForce GTS with every new driver release). Obscure hardware cases always cause trouble, which is why Dells are (sometimes) more reliable than "no name" machines with "better" hardware. Dell has the clout to make sure the drivers are as seamless as possible.

    What's the solution for embedded developers? Design and test the drivers in house, so the black box coders have a shoulder to cry on when hardware doesn't act properly. But it should not be the core developer's job to know what goes on with the hardware. That kind of thinking bloats budgets, increases the complexity of the project and ultimately the cost. Modularity, even though it makes things more difficult to map in total, makes things easier to deal with on a micro level. If the application works when unit tested but fails on the release machine, then it's the driver's fault. Much easier to fix than it is to perfectly replicate the release in your tests.

    Expecting EVERY software developer to be an electrical engineer as well is absurd unless you intend to pay them for both degrees. Better to keep it modular and put the pressure on the hardware abstractor to do a good job of catching the tiger's tail.
  • by scottfitch ( 629839 ) <scottcfitch&yahoo,com> on Tuesday February 11, 2003 @02:23PM (#5281345)
    The best lesson that I learned on my first large development project was that there is a big difference in the need for abstractions and black boxes in implementation versus design. All code should be written using a black box approach... no matter whether you're programming in SmallTalk or Assembly. (Though some languages make it easier than others :-).

    The big difference is that when you are actually designing and coding (verbs!) you have to look into those black boxes. If you don't understand the subsystems/objects/subroutines that your code interfaces with, you won't know what boundary conditions to test, what assumptions the other subsystems are making, etc.

    So now I always write well abstracted code (just like your Comp Sci 101 prof taught), but design with the big picture in mind.

  • by wowbagger ( 69688 ) on Tuesday February 11, 2003 @02:36PM (#5281468) Homepage Journal
    I think the problem is not so much the "black box" mindset, but rather the perfect black box mindset.

    Being an EE who now does software design myself, I try to decompose a problem into smaller problems, and decompose the solution into smaller parts. However, I don't make the mistake of thinking that my smaller parts are each perfect - I try to ask "Now, if component X malfunctions, what effects will it have on this higher level assembly Y?"

    The problem is that many times CS folks are not taught that the system can be imperfect, so by exclusion they believe it to be perfect - one plus one will always come back two, disk writes will always succeed if there is enough space for them, and so on. Folks are not taught that sometimes 1.0 + 1.0 != 2.0 (rounding errors), that disks sometimes fail (sector not found - abort, retry, cancel), and so on.
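
    Both assumptions fail in a few lines of C:

        #include <stdio.h>

        int main(void)
        {
            /* Arithmetic isn't exact: */
            if (0.1 + 0.2 != 0.3)
                puts("0.1 + 0.2 != 0.3 in binary floating point");

            /* Disk writes can fail -- check, don't assume: */
            FILE *f = fopen("out.dat", "wb");
            if (f == NULL)
                return 1;
            double x = 0.1;
            if (fwrite(&x, sizeof x, 1, f) != 1 || fclose(f) != 0)
                return 1;       /* disk full, I/O error, ... */
            return 0;
        }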

    In Circuits 1, an EE-to-be is taught the idea of the perfect op-amp - infinite gain, infinite bandwidth, infinite possible output voltage, infinite input impedance. He is taught to use this model to analyze a circuit.

    He is then IMMEDIATELY taught that the model is BS, and starts to add to it - finite input impedance, finite gain, finite bandwidth, finite offset voltage, finite output impedance. The EE-to-be is taught to apply those non-ideal behaviors when needed, and taught to judge when they can be ignored.

    Sometimes I think the best thing in the world would be if CS and EEs had to work with robotics as part of their job. When they have to deal with sticky steppers, dust-clogged optics, and misfiring solenoids they will learn to be a bit more paranoid.
  • Definitely! (Score:5, Insightful)

    by jhouserizer ( 616566 ) on Tuesday February 11, 2003 @02:40PM (#5281513) Homepage

    While there are definite benefits to treating software components as "black boxes", I agree with the asker of the question that there are some definite negative side-effects.

    For instance, we've got a couple of developers who just don't know how to work with the team, and figure they can go sit in their dark cubes and code away at their component as a black box that will simply fit in with everybody else's stuff. Common problems that arise are:

    • Different logging schemes
    • Different configuration schemes
    • Different admin-alerting mechanisms
    • Components that don't match the design pattern that all other components follow - thus making them harder to understand.
    • Components that expect some type of "global" data to exist, that simply doesn't.
    These issues have led to no end of grief for those of us who do communicate with each other about what they're doing.

    Abstraction is great, but you still need to make sure everything fits together correctly, and not just at the interface level.
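
    One possible fix for the logging case: agree on a single logging interface up front and make every component's black box call it. A rough C sketch (all names made up):

        #include <stdarg.h>
        #include <stdio.h>
        #include <time.h>

        enum log_level { LOG_INFO, LOG_WARN, LOG_ERROR };

        /* One shared sink: timestamp, level, component, message. */
        void log_msg(enum log_level lvl, const char *component,
                     const char *fmt, ...)
        {
            static const char *names[] = { "INFO", "WARN", "ERROR" };
            va_list ap;

            fprintf(stderr, "%ld %s %s: ",
                    (long)time(NULL), names[lvl], component);
            va_start(ap, fmt);
            vfprintf(stderr, fmt, ap);
            va_end(ap);
            fputc('\n', stderr);
        }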

  • by Anonvmous Coward ( 589068 ) on Tuesday February 11, 2003 @02:40PM (#5281517)
    "Someone doing circuit layout or design for a cockpit control widget does not need to worry about reentry dynamics and airflow."

    I think this example is debatable and can possibly be used against you. One could argue that reentry dynamics and airflow could make for a bumpy ride, thus the designers need to be aware of the journey this vessel's going to go on.

    That's beside the point, though. I'm not interested in debating that detail. Instead, I want to offer my insight from observing both poles of this discussion: having strictly one point of view or the other is bad. If you're overly broad, you over-design software. If you're overly narrow, you design yourself into a corner.

    I'm radically oversimplifying this problem, but it's true. Everybody has their own perspective. A good manager places them where they're useful. My company has a nice mixture of personality types in engineering. They're all placed where they fit best. If we were to polarize all of a sudden, I really think the project would collapse.
  • by Tony ( 765 ) on Tuesday February 11, 2003 @02:42PM (#5281541) Journal
    The idea behind black box development is that you don't need to know what the rest of the system does...your component takes input and delivers output. That's a Good Thing (tm).

    This is a Good Thing (tm) only when the black boxes are true black boxes. The problem with treating software as engineering problems with Black Boxes is that there is no such thing, in software. This is the reason object oriented programming has not been the panacea we were promised.

    In construction engineering (architecture, for instance) the behavior of all the pieces is known beforehand. A steel I beam is a steel I beam (yeah, I know there are different strengths, but these are specified by the engineering firm). A steel I beam has only two APIs: welding, and riveting.

    In programming, every black box has unique characteristics. Even if two "black boxes" have the exact same API, the behavior of those routines is probably different. Even worse, in all non-trivial projects, connections among Black Boxes produce a complex system; the interrelations among those Black Boxes will change the behavior of the system, often in unexpected ways.
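
    A tiny C illustration: two functions with the exact same API, the same answer on most inputs, and different behavior exactly where it hurts.

        int midpoint_naive(int lo, int hi)
        {
            return (lo + hi) / 2;      /* can overflow for large lo + hi */
        }

        int midpoint_safe(int lo, int hi)
        {
            return lo + (hi - lo) / 2; /* same "black box", safe for lo <= hi */
        }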

    The case for understanding all your Black Boxes is represented in one of the most dramatic engineering failures ever: the Tacoma Narrows bridge disaster. The bridge was well-designed. However, strong winds blowing down the narrows set up harmonics in the swaying of the bridge. This was a case of an engineering firm who did not understand the whole system, with devastating results.

    The problem with OO programming is it encourages the thinking that you only need to understand your component, and not the whole system. This is a Very Bad Thing (tm) in my book. There are too many Tacoma Narrows software projects out there already.

    Anyway, that's just my opinion. I could be wrong.
  • by cybergibbons ( 554352 ) on Tuesday February 11, 2003 @02:43PM (#5281569) Homepage

    No, when something bad happens, it normally highlights smaller problems that all are part of the situation. So, instead of mourning the loss of 7 people who you probably didn't know, which is futile, get something out of it.

    I also don't understand why this is worse than other people who die in air crashes, as soldiers, in cars, or whatever. They've done something with their lives. Get over it.

    It's no worse than the Challenger disaster, and certainly no worse than the Apollo fire. Those men died horrible, slow deaths, for no reason.

  • by plumby ( 179557 ) on Tuesday February 11, 2003 @02:50PM (#5281647)
    There's nothing wrong with developing to a black box model. This is what design by contract and component development are all about. Each method on each component should describe, with pre- and post-conditions, what that method requires in order to work, and what changes it will make to the external environment that it is operating in. Beyond that, the inner workings of the component should be a black box. I don't care how your component does what it says it will do, just that it does exactly that (and nothing else).
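
    A bare-bones sketch of the idea in C, with assert() standing in for a real contract mechanism:

        #include <assert.h>
        #include <stddef.h>

        /* Contract: requires a non-empty array; returns its mean.
         * How the mean is computed is the black box. */
        double average(const double *v, size_t n)
        {
            assert(v != NULL && n > 0);     /* precondition */

            double sum = 0.0;
            for (size_t i = 0; i < n; i++)
                sum += v[i];

            /* postcondition (up to rounding): the result lies within
             * the range of the inputs */
            return sum / n;
        }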

    As the developer of that component, you will know exactly what the internals do, but then you treat the rest of the world as a black box, to be talked to through clearly defined interfaces.

    It is the lack of a black box approach that often leads to unexpected side-effects.

  • by Baldrson ( 78598 ) on Tuesday February 11, 2003 @03:06PM (#5281842) Homepage Journal
    Bit serially, although Forth isn't the be-all and end-all of programming environments, it does have the elegant simplicity that should be sought by whole systems.

    One of the premier embedded systems languages, Forth was invented by Chuck Moore [colorforth.com]. I like Chuck Moore's 1% Code Page [colorforth.com]. His introduction:

    I've studied many C programs in the course of writing device drivers for colorForth. Some manufacturers won't make documentation available, instead referring to Linux open source.

    I must say that I'm appalled at the code I see. Because all this code suffers the same failings, I conclude it's not a sporadic problem. Apparently all these programmers have copied each other's style and are content with the result: that complex applications require millions of lines of code. And that's not even counting the operating system required.

    Sadly, that is not an undesirable result. Bloated code does not just keep programmers employed, but managers and whole companies, internationally. Compact code would be an economic disaster because of its savings in team size, development time, storage requirements and maintenance cost.

  • by Dr. Manhattan ( 29720 ) <(moc.liamg) (ta) (171rorecros)> on Tuesday February 11, 2003 @03:16PM (#5281946) Homepage
    Uhh, and John Carmack does not understand every ounce of the Quake code. And Linus Torvalds does not understand every ounce of the Linux core.

    No, they don't. Those codebases are split into well-defined modules, and they are able to understand how those modules fit together. And they can look inside one of those modules and know how it's put together. That's why you have a core engine that can have software, Glide, and OpenGL renderers; or a filesystem core that can work with ext2, ext3, reiserFS, XFS, etc.

    But neither of even these prodigiously talented gentlemen can visualize the entire state of their respective systems. Else why would you have, e.g., the Quake physics bugs or any number of kernel bugs? Their horizons may be quite a bit broader than average but they are still limited.

  • by tmoertel ( 38456 ) on Tuesday February 11, 2003 @03:18PM (#5281956) Homepage Journal
    As systems become increasingly complex, the practicality (and even the feasibility) of treating them as monolithic entities becomes a legitimate concern. One method of addressing this concern is abstraction, where suitable models are used as representations of their real-world counterparts. Indeed, abstraction is one of the fundamental principles of engineering and has been used for thousands of years to great success in the construction of complex systems.

    For example, in electrical engineering, it is common to simplify complex circuits by breaking them into smaller circuits, analyzing each of the smaller circuits, and then replacing the smaller circuits with appropriate black-box equivalents. With these black boxes in place, the original circuit becomes much easier to understand.

    However, as in any modeling exercise, it is crucial to choose appropriate models and to understand the limitations of the models chosen. While a simple resistance model might be a good substitute for DC and low-frequency circuits, it would be inappropriate as a substitute for higher-frequency circuits where capacitance effects come into play.

    So, returning to the original poster's questions:

    Do Slashdot readers think that the theories used to teach (and learn) programming lead to programmers that tend to approach problems with a 'black box', or 'virtual machine' mentality without considering the entire system?
    Yes, these days too much stock is placed in the idea of letting somebody else worry about the complexity. This is especially so in mainstream industry, where one of the key selling points of software development systems is that with Magic DevStationPro X you no longer have to worry about the details but instead just use some brilliant Wizard or API to work at "a higher level." This applies not only to software development but also to user-level domains such as operating systems and applications. For example, a common notion in industry is that by using Microsoft operating systems on servers, administrators no longer need to know how to administer servers; rather, they need only know how to use the GUI administration tools. In other words, the pitch is that you need not concern yourself that the GUI tools present a mere model of the underlying system. Let the model be the system and reap the rewards.

    That's hogwash. The model approximates the system, no more. In engineering, this is well understood, and I suspect that in good CS programs the same can be said.

    Abstraction is a powerful tool. It is widely applicable, effective, and well founded -- when used appropriately. It probably ought to be used more often. Nevertheless, it is not a substitute for rational thought. Nor is it a replacement for being responsible for the entirety of the systems we build.

    That, in and of itself, would explain a lot of security issues, as well as things as simple as user interface nightmares. Comments?
    It would certainly explain some of these problems, but I suspect that far more errors are the result of interface errors. One of the tools that goes with abstraction is composition -- breaking things into pieces, treating the pieces individually (at fine granularity), and combining the pieces (at a larger granularity) to yield a system. The risk of combining pieces is that in order to put them together properly, the boundaries where they coincide -- their interfaces -- must be well understood and compatible. Since the individual pieces make natural units for delegation, pieces are often assigned to different people who may have slightly differing understandings of the boundary conditions. As a result, "interface mismatches" are a significant source of error in software systems.

    Certainly abstraction plays a role here. Each piece can be thought of as a black-box model. The limitations of that model, and the assumptions under which the model is valid, are certainly important characteristics of the piece's interface with the world. Yet, these characteristics are frequently neglected in documentation and often go uncommunicated across delegation boundaries. This sad fact makes interface mismatches an especially harmful side effect of using abstraction and composition in common software development practice.

    Nevertheless, abstraction is a powerful and genuinely useful tool. It is also a necessary tool if we are to build increasingly complex systems. Like any tool, its uses and limitations must be understood if it is to be applied effectively. Thus, getting back to the original poster's question about whether the use of black boxes is harmful, my answer is, No.

    The problem isn't abstraction, the problem is improper use of abstraction.

  • by Anonymous Coward on Tuesday February 11, 2003 @03:23PM (#5282000)
    It's been said a million times, but if we made cars the way we make software, noone would get anywhere.

    That's *such* bullshit. A car is *simple* compared to a software product, and yet it still manages to fail in a great many ways.

    Most damning, cars are unsafe: pull the wheel at the wrong time and you may get killed. And even when standing still another car may drive into yours (where is memory protection when you need it?). Software won't (generally speaking) attempt to kill you.

    Cars need significant maintenance: tires need inflating, endless gas must be bought, and don't even begin about garage maintenance by overpaid incompetent monkeys with a bad attitude. By comparison, software is considered high-maintenance when you need to download a maintenance pack twice a year.

    And cars are extremely inflexible: they can only be controlled from within the car, by a single operator. The usage of the car is locked in at the factory; you cannot, for example, add a few more doors or loading space after you've bought it. No one would accept this of software.

    So what the hell are you comparing crappy cars to software for? It just doesn't make any sense.

  • by Entrope ( 68843 ) on Tuesday February 11, 2003 @03:25PM (#5282023) Homepage

    One could argue that reentry dynamics and airflow could make for a bumpy ride, thus the designers need to be aware of the journey this vessel's going to go on.

    That actually occurred to me while I was writing my post, and I considered it an instance where my second paragraph holds true: if the ride will be bumpy, or flown upside down, or whatever, then those cases should be documented (or at least known) to the designers of the cockpit widgets.

    Yes, you need to avoid both over- and under-design. Yes, you need to know things beyond your piece of the work. But no, you do not need to consider the whole system and all parts of it when you do implementation or even some of the design.

    A good designer knows how far away the interaction horizon should be, and can analyze the effects of everything within that horizon. If the collective effects are too many to analyze, it is a sign that the design needs to be refined or reworked.

  • by Arandir ( 19206 ) on Tuesday February 11, 2003 @03:32PM (#5282100) Homepage Journal
    I'm programming on a very complex system. I simply cannot know about all the parts that my code touches. I would need three engineering degrees just to understand it all. I have to program in a black box because the white box is too big.

    This is why I demand complete requirements and specifications, and invite all relevant parties to my design and code reviews. And when I'm the one called on to write the reqs and specs, I make sure I get a sign off absolving me of responsibility for it (you would be surprised how much closer the docs are inspected when you start demanding stuff like that).

    The problem isn't the developers writing black boxes, but the upper management buying into the party line of Microsoft, thinking that snap-together black box components will reduce the resource needs for the project.
  • by mobiGeek ( 201274 ) on Tuesday February 11, 2003 @03:33PM (#5282117)
    Seriously, screw the next guy. Just follow your company's procedures (no more, no less) and move on your merry way.
    When you said "seriously", you were joking, right?

    I mean, c'mon! The great U.S. of A. is gonna remain on top of the world by "obscurity"?

    The reason the guy in India/Mexico/Nebraska/wherever makes $5/hr and is worth it is that he does a job that is only worth $5/hr. If the job is worth $55/hr, then a $55/hr person will get that job.

    Do you honestly believe that you are able to hold onto a higher paying position because your code is not documented? Do you honestly believe that this is the way to stay ahead of the game?

    It is this mentality that has stifled innovation. People spend all sorts of time trying to figure out how-in-the-hell they got to where they are instead of trudging forward down avenues unknown.

    Unions in the Western World are doing just this: pressure tactics to avoid "outsourcing" of work. The reason the work is being outsourced is that it is no longer work worthy of a high-paying (and supposedly higher-quality) employee. The work is being shipped off elsewhere because someone else can do it cheaper (possibly at a lower quality or a lower efficiency, but if the resultant product effectively meets the desired goal, then it is the right move for the company and, ultimately, ITS CUSTOMERS).

    The Unions need to work with the companies to find ways to take these more productive workers and earn the company even higher returns. But Unions don't "work with" management; they fight all change.

    Oh, don't get me wrong! I know that management has just as many boobs running the show too. They make plenty of mistakes themselves. But the nature of "management vs. union" mentality keeps the unions from effectively working with (upper) management to enable change that makes better use of all resources and, in turn, makes the company more profitable.



    Seriously (no, I'm not kidding...I mean seriously), you cannot believe that documenting your code, outlining your procedures, using effective architectural designs, and improving the company's procedures is threatening your job, can you? If you do, then realize that a single, competent manager is all it takes to tear down your warped house of cards...

  • Re:IMHO (Score:3, Insightful)

    by GileadGreene ( 539584 ) on Tuesday February 11, 2003 @04:09PM (#5282385) Homepage
    People tend to focus exclusively on their area of expertise.
    Otherwise they become managers :D

    Or systems engineers...

  • by GileadGreene ( 539584 ) on Tuesday February 11, 2003 @04:17PM (#5282438) Homepage
    This is why good managers are worth their weight in gold. Bad managers are worse than worthless.

    No. This is why good systems engineers are worth their weight in gold. Dealing with the big picture, and designing large, complex systems using an engineering approach is why systems engineering came into being in the first place.

    Managers are trained to deal with schedule and budget, not with designing complex systems. Systems engineers are trained to design complex systems, and to make sure that all the pieces interact in such a way that the overall system achieves whatever goal it was designed for.

    That said, decent systems engineers seem to be somewhat rare these days, or at least they seem to get overruled by management. Many of the well-known engineering blunders in recent years can be chalked up to poor systems engineering.

  • by pmz ( 462998 ) on Tuesday February 11, 2003 @04:38PM (#5282859) Homepage
    Managers are trained to deal with schedule and budget. Not with designing complex systems.

    I agree with what you said; however, schedule/budget managers cannot be ignorant of what good systems engineering requires. Also, the lead engineer on a project is a kind of manager. It isn't uncommon in a small company or project for the manager and the lead engineer to be the same person, which I guess makes their job even harder.
  • Re:Probably (Score:4, Insightful)

    by oconnorcjo ( 242077 ) on Tuesday February 11, 2003 @04:47PM (#5283056) Journal
    The same applies for security and usability. It's really not a question of programming/technical ability, but a question of mentality. I think programmers need to have a specific (or perhaps not-so-specific) mindset to get a bigger picture, and not very many programmers are willing to do that. Part of it may be inherent to programmer-types, but it also might be cultural (the whole "us vs. them" elitist attitude).

    You ALMOST have it except it is not inherent in the programmer but in how programming departments are managed.

    Management usually puts an emphasis on more features and fast timelines instead of security and stability. Programmers must prioritize the demands given to them, and when management's views are skewed, so are the employees'.

    Good management would have code reviews of all programmers' code on a periodic basis (no matter how much experience they have), and system designers would have meetings with the programmers (including every senior to junior programmer involved in building the system) to explain why and what their system is supposed to do.

    Instead, most companies give out specs and nobody knows what the hell their peers are doing, because management is incompetent or lazy and thus leaves code reviews and design meetings in a dusty book that could be called "good practices that most don't do".

    One of the reasons why the code in open source software is often of a higher quality than commercial software is that: 1. programmers write their code KNOWING that somebody might be looking at it later (and often getting good suggestions back from other developers). 2. Open source projects have developer mailing lists where developers explain what/how they are designing/redesigning something new in the project.

    But most company managements are very short-sighted and impatient, like the rest of society.

  • by vrassoc ( 581619 ) on Tuesday February 11, 2003 @05:18PM (#5283304)

    ... some people tend to get tunnel vision and concentrate wholely on the bb theory, ... This does usually lead to problems and errors in the code.

    Not only errors: black box routines can be expensive in performance terms too. Take database programming as an example: black box programming teaches us to break the problem down into its smallest (easily) solvable (and reusable) parts, create a routine for each one, and then work our way up. In a program that does disk reads to solve the problem, this can mean many, many more reads from disk than necessary, if you're not careful.
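
    To make that concrete, here's a sketch with invented names and simulated I/O: the per-record black box costs one read per call, while a batched routine answers the same query in a single read.

        #include <stdio.h>

        static int disk_reads = 0;          /* counts simulated I/O */

        /* Clean black box interface: one (simulated) read per call. */
        static int fetch_record(int id) {
            disk_reads++;
            return id * 2;                  /* stand-in for real data */
        }

        /* Batched routine: one (simulated) read covers the range. */
        static void fetch_records(int first, int count, int *out) {
            disk_reads++;
            for (int i = 0; i < count; i++)
                out[i] = (first + i) * 2;
        }

        int main(void) {
            int out[100];

            for (int i = 0; i < 100; i++)
                out[i] = fetch_record(i);
            printf("per-record: %d reads\n", disk_reads);   /* 100 */

            disk_reads = 0;
            fetch_records(0, 100, out);
            printf("batched:    %d reads\n", disk_reads);   /* 1 */
            return 0;
        }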

    IMHO, part of what makes a good programmer an even better one is knowing which routines to black-box, taking into consideration performance and resource availability, and which ones not to.

    A high-level language is in essence a library of black box routines. And there are often parts of a program that are best written at the lowest level, for efficiency's sake.

    A good programmer will constantly weigh the pros and cons of his or her methodology in order to provide a system that is sufficiently practical for the underlying architecture, while taking into consideration all the other constraints of the project, such as budget and deadline.

  • by Gleef ( 86 ) on Tuesday February 11, 2003 @05:24PM (#5283361) Homepage
    I studied Computer Science in College, and currently work as a Programmer/Analyst for a non-profit organization (Desktop, Web and Server-based systems). Yes, all of the above encourage a "black box" model to design and coding. Furthermore, I am guilty of perpetuating this to the people forced to listen to me blather, and will continue to do so until I see a better way.

    I understand that it hides some bugs. I don't like this. On the other hand, we can never have enough staff to keep people expert not only on each system used but on each interface between those systems, which is what a fully integrated approach would demand.

    So what we do is take some premade components (e.g. hardware, OS kernel, C library, certain widget libraries, web server, etc.) and say, "OK, assume these work according to these specifications; we're going to work on adding a piece that does this." When a premade component deviates from its specifications, we fix the component or update the specs.
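
    To illustrate (component_version() is an invented stand-in for some premade component): the assertions encode the parts of the spec we rely on, so a deviation fails loudly instead of silently corrupting things downstream.

        #include <assert.h>
        #include <string.h>

        /* The black box: some premade component (invented here).
         * Spec we assume: returns a NUL-terminated version string,
         * never NULL, shorter than 32 characters. */
        extern const char *component_version(void);

        const char *checked_version(void) {
            const char *v = component_version();
            assert(v != NULL);          /* spec: never NULL */
            assert(strlen(v) < 32);     /* spec: bounded length */
            return v;
        }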

    As much as possible, we make use of open standards and free software so that if we need to, we can open up the black box and fix something. However, the more we can assume that a component is a black box that will just do what it's supposed to do, the faster we can develop the "interesting" bits.

    The bottom line for us is to manage complexity. The more complexity that we can abstract away, the faster we can work on the custom stuff unique to our organization. A "black box" model works well for us, but yes, it does cause some bugs that need to get cleaned up after the fact. Most organizations I've seen make a similar design choice (or blunder into it blindly), and most schools teach their courses with a similar mindset.

    If we were to develop a truly critical system, one that lives or big bucks depend on, we ought to take a different approach for that system, but we aren't likely to work on such a system for a while.
  • Re:Definitely! (Score:3, Insightful)

    by SWPadnos ( 191329 ) on Tuesday February 11, 2003 @05:38PM (#5283489)
    I'm not sure that this problem is with the black box mentality - it seems to be with those coders.

    Logging scheme: Should be part of the interface definition (the format of log messages should be part of the spec). The logging functionality should be another black box module, with a suitable interface for all portions of the project (a sketch of such an interface appears at the end of this post).

    Configuration scheme: Should also be part of the spec. If it's done wrong, then the module doesn't meet spec, and the programmer(s) should be reminded that their paychecks are dependent on writing modules to spec.

    Admin Alerting: Like logging, if there is a specific format / function to use, then this should be part of the spec.

    Design pattern: The spec should incorporate any company coding standards (by reference, if it's too long :). It is then up to the programmers to follow this standard. (I see that as a different issue than the black box thing, though)

    Global data: This should never be touched, unless the access is defined in the spec.

    It looks to me like:
    1) the specs are not complete, and 2) the programmers in question don't communicate adequately when they encounter problems coding to the spec. (It's perfectly valid to discuss changes to a black box specification if problems are encountered.)
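
    As promised above, a minimal sketch of the logging module as its own black box -- names and message format invented for illustration. The point is that the spec lives in one header that every module codes against:

        /* logging.h -- logging as its own black box module.
         * Spec: one line per call, written atomically to the
         * configured sink, formatted
         * "<ISO-8601 time> <LEVEL> <module>: <message>". */
        #ifndef LOGGING_H
        #define LOGGING_H

        typedef enum { LOG_DEBUG, LOG_INFO, LOG_WARN, LOG_ERROR } log_level;

        void log_msg(log_level lvl, const char *module, const char *fmt, ...);

        #endif

    A module then logs with log_msg(LOG_WARN, "parser", "bad record %d", n); anything formatted some other way simply doesn't meet spec.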
  • by lukme ( 638428 ) on Tuesday February 11, 2003 @05:39PM (#5283496)
    Some managers think that you are doing nothing unless you can say, "Yup, I wrote 3,000 lines of code today." They don't really seem to care that you have just created a maintenance problem.
    I was just at a job interview where I mentioned several projects in which I replaced around 5,000 lines of code with 500. Due to the interviewer's obsession with lines of code, this got misinterpreted as "I have only done maintenance."
    Note: the 500 lines of code did more than the 5,000, with fewer bugs, and were a lot faster (one scan through the data as opposed to multiple).
    At every job I have had, I have found areas of code bloat, either due to the pressures of meeting a deadline or through incompetence. In either case, the best thing to do is to clean them up and move forward.
  • by revbob ( 155074 ) on Tuesday February 11, 2003 @06:04PM (#5283648) Homepage Journal
    My business card says "Embedded Software Engineer" and my current job is member of the software architecture team for the Common Operating Environment of perhaps the largest System of Systems project ever.

    I see preliminary designs for databases of objects that magically exist in pure object-land (i.e., they don't actually do anything) and yet somehow the work gets done.

    By training and disposition, whenever I don't smell silicon, I become deeply suspicious, so my first reaction is that such designs are nonsense. Perhaps it will not always be this way -- for instance, perhaps the designers of those very systems will get around to saying who actually does something and how they do it.

    But I've grown to realize that I must accept a certain amount of nonsense (subject always to good engineering judgment and a demonstration that some of these fanciful schemes can actually work) because the "how" absolutely must not enter into the design.

    If I have to say to someone writing the software for communicating between commanders and various kinds of "things" (I'm going to apply some severe declassification here) that to talk to a big orange truck you have to stick a 32-bit word into a mailbox interrupt register at such-and-such an address, while to talk to a little red truck you have to send "HELLO, WORLD!" to port 80, they're going to say to me, "Just what the hell have you been doing in your software architecture group for the past six months?"

    This is a gross example -- but the less obvious examples are nearly as bad, from my point of view.

    For instance, since one of the requirements for this SoS is that communications not be of the form, "Let's tell the enemy what we're going to do", and since communications security is best done by people who know what they're doing, we will not train every engineer to manage communications security everywhere in his application, but rather layer the architecture so that, to the greatest extent possible, engineers will not even know it's happening.

    Indeed, I expect the architecture our team develops to survive several iterations of "how"s. The first implementation better not work as well as the final implementation, or somebody's wasting money.

    In short, we'll use elementary principles of engineering in order to define common objects that communicate with one another in precisely defined ways at a level of abstraction that's appropriate for the objects themselves. That some objects will have precise real-world counterparts (e.g., big orange truck) is merely evidence that the architecture is sane. And if some of those objects have functions associated with them, that's because in the real world functions aren't performed by spirits and demons, but by (now let's not always see the same hands) objects!

    This ain't rocket science, people. If you've written an API that you can't jack up, haul out the Yugo that's underneath, and replace it with a Viper with no one the wiser except the customer who appreciates how fast he's going, you've screwed up. You've let the "how" creep into your "what".
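
    For what it's worth, here's the kind of thing I mean as a minimal C sketch, with every name invented and the hardware pokes elided. send_message() is the "what"; the Yugo and the Viper are just different entries in the table.

        #include <stddef.h>

        typedef struct {
            int (*send)(const void *msg, size_t len);
        } transport;

        /* One backend would poke a mailbox interrupt register... */
        static int mailbox_send(const void *msg, size_t len) {
            (void)msg; (void)len;       /* register write elided */
            return 0;
        }

        /* ...another would write "HELLO, WORLD!" to a port. */
        static int socket_send(const void *msg, size_t len) {
            (void)msg; (void)len;       /* socket write elided */
            return 0;
        }

        static const transport big_orange_truck = { mailbox_send };
        static const transport little_red_truck = { socket_send };

        /* The "what": deliver a message, no "how" in sight.
         * e.g. send_message(&big_orange_truck, buf, len); */
        int send_message(const transport *t, const void *msg, size_t len) {
            return t->send(msg, len);
        }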

    (Hoping some people will return my phone calls and answer their email so I can stop talking about this and get back to doing it).

  • by small_box_of_stuff ( 258902 ) on Tuesday February 11, 2003 @06:42PM (#5283891)
    After reading many of these posts, I think we're getting a bit confused here. In part, the original poster is a bit confused, or at least imprecise in his wording.

    When coding a piece of a program for use in a wider system, such as a library or module, black box thinking is a good thing. In this context, a black box means something that does not expose its internals, and provides an abstraction to the user/programmer.

    The STL or the standard C library is a black box. TCP/IP and the sockets API is a black box. The Java standard class library is a black box. If you were required to know all the internal details of all these systems to be able to use them, you wouldn't get very far.

    Abstractions, which are what we are talking about when we use the term "black box" as above, are absolutely required to write decent software. You simply can't reliably keep that many details in play at once and expect to get it all to work right.
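
    That's exactly why a dozen lines like the following work at all (the address and request are invented for the example): the caller sees socket/connect/send and none of TCP's retransmission, windowing, or checksum machinery.

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>

        int main(void) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0) { perror("socket"); return 1; }

            struct sockaddr_in addr = { 0 };
            addr.sin_family = AF_INET;
            addr.sin_port   = htons(80);
            inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);   /* example host */

            if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0) {
                const char *req = "HEAD / HTTP/1.0\r\n\r\n";
                send(fd, req, strlen(req), 0);
                /* ...recv() the reply; TCP's internals never appear... */
            }
            close(fd);
            return 0;
        }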

    As useful as the above black box libraries are, it is very possible to create a library that is unusable. Making the user of the library know all the implementation details from inside is one way to do this, but another is to use the wrong abstractions, make the wrong assumptions about usage, etc.

    In short, what the original poster and many others are complaining about when they say they don't like black boxes is that they are using bad or incompatible boxes!

    Having to fight with a library that makes invalid assumptions, provides the wrong level of abstraction, or is not implemented very well is not an indictment of black boxes, but of black boxes that suck. The fact that their internals are hidden is not the problem; the fact that their interface to those internals sucks is the problem.

    So don't confuse the problems of using bad black boxes with a fundamental problem with black boxes. A bad implementation doesn't mean the concept is bad. A poorly designed system may well provide a black box that no one can use, or at least not in all situations. But making that box white, and exposing all its internals, isn't the solution; designing the module to work right is.

    Further, one person's perfect abstraction is another's miserable pile of junk. Just because a black box doesn't provide the interface you need or want doesn't mean it sucks; it could just mean you're using the wrong tool for the job. Complaining about how hard the threads make it to hammer in a screw doesn't make much sense. Use a nail.

  • by cmacb ( 547347 ) on Tuesday February 11, 2003 @07:24PM (#5284112) Homepage Journal
    I can understand the original poster's "screw them" attitude. It is very frustrating for people who actually want to do quality work to be ultimately punished for trying to do so.

    Process improvement methodologies that proliferated in the 90's seem to be falling by the wayside. I think part of the problem is that, at about the same time, RAD techniques promised upper management almost instant turnaround on their requests. The RAD techniques, and the tools that go with them, have turned out reams of bad and unmaintainable code that companies will either have to live with or replace the old-fashioned way (which involves thinking about business rules, a process quite foreign to most of the people involved).

    Hopefully the current generation of programmers who were obsessed with pretty graphics will be replaced by a new generation who take those things for granted and can get back to solving actual problems.

    Regarding the original story, I do find it odd that the existence of that box in the wild represents a threat. One would HOPE that these transmissions use encryption techniques that depend on large keys for their security and not on the obscurity of the algorithm itself.

    On the other hand I've read that a lot of this code is written in Ada and NASA is going to be walking on eggshells trying to modernize anything that uses this old software.

    While it may be completely debugged, transitioning this software to new hardware and a new Ada compiler, or rewriting it in a more modern language, is bound to introduce errors.
  • by ltkije ( 635596 ) on Tuesday February 11, 2003 @09:45PM (#5284815)
    Now, the amount of abstraction possible does differ depending on what you're doing. Embedded systems programming is hard, and you do have to know details of the machine. But I ask you - do you insist on a gate-level understanding of the embedded CPU, or will you settle for knowing the opcodes and their timing characteristics?

    I work on embedded products. A typical design has about 30,000 lines of C code, but the amount of assembly language is 1% of that and dropping. So there is little need for most of the programmers on a team to know something as concrete as assembly.

    Our code runs on 16-bit single-chip microcontrollers rated at about 5 MIPS. The chips are typical of what $5 buys today. The application, which Slashdot readers would recognize instantly, has soft real-time requirements. We could probably run most everything on 20 ms timer ticks and get good responsiveness. There are plenty of spare CPU cycles even at peak loading. Yet there are people in my company who want to read the assembly code generated by our ANSI-standard C compiler, and turn off all the compiler optimizations. Some still insist on writing their own memset() functions.
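
    Here's a sketch of that habit (my_memset is invented for the example). A decent compiler will often recognize both patterns and emit its optimized fill anyway, so the hand-rolled version buys nothing but one more function to maintain.

        #include <string.h>

        #define BUF_SIZE 256

        /* What some of us still insist on writing by hand... */
        static void my_memset(char *p, char v, unsigned n) {
            while (n--)
                *p++ = v;
        }

        void clear_buffers(void) {
            static char a[BUF_SIZE], b[BUF_SIZE];
            my_memset(a, 0, sizeof a);      /* hand-rolled */
            memset(b, 0, sizeof b);         /* the library black box */
        }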

    Contrast this with the fact that it takes us 18 months to develop each new product. The 2003 version is about 80% the same as the 2001 version, about 20% of the code handles product differences from the older version, and there's maybe 5% new code for new features. What's wrong with this picture?

    One obvious answer: we're probably using the wrong level of abstraction, or just the wrong abstractions, in our design. We'd do much better to (see the sketch after this list):

    Abstract away hardware specifics wherever possible.

    Trade off a little performance for shorter project schedules.

    Profit!
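
    As the sketch for the first step (all names invented): application code sees only a header like this, and the register-level details live in one board-specific source file, which is the only thing a port has to touch.

        /* hal_timer.h -- hardware specifics behind a black box
         * (all names invented). Spec: call handler every period_ms
         * milliseconds, +/- one tick. */
        #ifndef HAL_TIMER_H
        #define HAL_TIMER_H

        typedef void (*tick_handler)(void);

        void hal_timer_init(unsigned period_ms, tick_handler handler);
        void hal_timer_start(void);
        void hal_timer_stop(void);

        #endif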

    This is not to say we should never open the black box -- just that we should be smart about when to dig deeply into the underlying hardware and CPU cycles. And being able to debug with an oscilloscope is still sometimes a valuable talent. As others have said here, the art of engineering lies in knowing when to do these things.

    Successful black-box design can produce amazing results. For instance, look at Pure-Systems [pure-systems.de], whose initial product [pure-systems.de] generates an optimized embedded kernel, written in C++, that's small enough to run on an Atmel AVR chip.

  • by LadyLucky ( 546115 ) on Wednesday February 12, 2003 @02:14AM (#5285932) Homepage
    Anyone else here got massively overworked and stressed while working towards HIMSS [himss.org]? Geeeeeeeeeeeeez. So glad the work for that is over.
  • by Goth Biker Babe ( 311502 ) on Wednesday February 12, 2003 @02:30AM (#5285989) Homepage Journal
    I'm in software engineering in the real world, essentially as a software architect for embedded systems, and yes, hearing about CS courses where the lecturers tell students not to worry about the hardware ("If your software won't run on the specified system then you need to get the hardware engineers to give you more memory, or more MIPS, or whatever") does make me cringe. That is just not practical when you are building a product to a budget in the embedded world. Get yourself the schematics and learn the system you are going to code for, but... ...coding too directly to the hardware is also bad. A company which has, say for customer reasons, to use different processors and operating systems in its products will end up with lots of teams all reinventing the wheel for their particular hardware. That doesn't bode well for reuse, and potentially requires rewrites even for cost-reduced versions of products.

    Understanding the hardware doesn't prevent you from using black box methods. Just use hierarchical ones: the system as a box, subsystems as boxes, their objects as boxes, and so on. So if we take a DVD player (a European-market one) for example, without knowing the hardware you know it's going to have the following subsystems: storage (the DVD mechanism, say), user I/O (front panel, remote and display), A/V routing (either video from the box or passthrough; analogue audio or digital audio, possibly optical), memory, and decode (codecs etc.). Storage you could split into device driver (low and high level), I/O stream, etc. All of these have software components that can be reused, even something as basic as the on-screen widget set in the OSD.
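
    As a sketch of that hierarchy (names invented; only the storage branch shown), each layer is simply a black box to the one above it:

        /* storage.h -- the storage subsystem as the player sees it. */
        typedef struct storage storage;     /* opaque: internals hidden */

        storage *storage_open(void);
        int      storage_read_sector(storage *s, unsigned lba, void *buf);

        /* drive.h -- inside storage, the low-level driver is another
         * box; only storage.c ever includes this. */
        int drive_seek(unsigned lba);
        int drive_read_raw(void *buf);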

    If you do use a black box/component approach, then you can never have too much documentation. Components should be considered projects in their own right. Do your requirements analysis and find out what the component needs to do, model it, document it formally, and explain how it's supposed to be used. If there are areas of ambiguity, then it hasn't been documented properly.

    The art of designing the system is knowing when to reuse and when to reinvent, when to componentize and when you shouldn't. I don't see black box engineering and knowing the system as mutually exclusive; as an engineer you must consider how much, and at what level, you need to know.

"God is a comedian playing to an audience too afraid to laugh." - Voltaire
