
Programmers and the "Big Picture"?

Posted by Cliff
from the distinguishing-the-forest-from-the-trees dept.
FirmWarez asks: "I'm an embedded systems engineer. I've designed and programmed industrial, medical, consumer, and aerospace gear. I was engineering manager at a contract design house for a while. The recent thread regarding the probable encryption box of the Columbia brought to mind a long standing question. Do Slashdot readers think that the theories used to teach (and learn) programming lead to programmers that tend to approach problems with a 'black box', or 'virtual machine' mentality without considering the entire system? That, in and of itself, would explain a lot of security issues, as well as things as simple as user interface nightmares. Comments?"

"Back working on my undergrad (computer engineering) I remember getting frustrated at the comp-sci profs that insisted machines were simply 'black boxes' and the underlying hardware need not be a concern of the programmer.

Of course in embedded systems that's not the case. When developing code for a medical device, you've got to understand how the hardware responds to a software crash, etc. A number of Slashdot readers dogmatically responded with "security through obscurity" quotes about the shuttle's missing secret box. While that may have some validity, it does not respect the needs of the entire system, in this case the difficulty of maintaining keys and equipment across a huge network of military equipment, personnel, installations."

  • In general... yes (Score:4, Interesting)

    by Anonymous Coward on Tuesday February 11, 2003 @01:37PM (#5280879)
    I don't have as much experience as some, but I've always wondered about coders who confine their thinking to the 'world' their code runs in. It overlaps, I think, with the problems of sysadmins who leave systems/gateways/firewalls and whatnot wide open to the world.

    If a coder isn't ignoring the fact that their code isn't going to be running in the exact same shell as theirs, they're ignoring that it won't always be running on the exact same OS, or the exact same network. Tragically, when it breaks, it can then break BIG.

    Note I also don't have enough experience to offer a solution other than "get a clue!". Taking notice of these possibilities is extra work until you embed it in your habits.
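    The parent's habit can be sketched in a few lines. This is a hypothetical example (the `get_shell` helper and its fallback are my own invention, not anything from the thread): instead of assuming the developer's environment travels with the code, probe for it and fall back explicitly.

```python
def get_shell(env):
    """Don't assume the developer's shell exists on the target box.

    `env` is any mapping like os.environ; the /bin/sh fallback is an
    illustrative choice, not a universal truth.
    """
    return env.get("SHELL", "/bin/sh")

# On the developer's machine the variable is probably set; on the
# deployment target it may be missing entirely. The code must cope
# with both.
print(get_shell({"SHELL": "/bin/zsh"}))  # /bin/zsh
print(get_shell({}))                     # /bin/sh
```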
  • Probably (Score:4, Insightful)

    by nizcolas (597301) on Tuesday February 11, 2003 @01:38PM (#5280881) Homepage Journal
    Most programmers who are going to come across a "black box" have enough experience to be able to code for the situation. Isn't that skill a trait of a good programmer?

    Then again, maybe I'm missing the point :)
    • Re:Probably (levels) (Score:3, Interesting)

      by $$$$$exyGal (638164)
      Programming Levels:
      1. Microsoft Frontpage
      2. Raw HTML
      3. CGI/PHP/etc.
      4. Servlets/Mod-perl/etc.
      5. Object-Oriented black boxes
      6. Documented API's
      7. Public Documented API's
      8. Performance
      9. "The Big Picture" - Architects

      --sex [slashdot.org]

      • Re:Probably (levels) (Score:4, Informative)

        by oconnorcjo (242077) on Tuesday February 11, 2003 @03:57PM (#5282302) Journal

        Programming Levels:

        1. Microsoft Frontpage
        2. Raw HTML
        3. CGI/PHP/etc.
        4. Servlets/Mod-perl/etc.
        5. Object-Oriented black boxes
        6. Documented API's
        7. Public Documented API's
        8. Performance
        9. "The Big Picture" - Architects

        Another elitist post without a real clue. A good programmer knows how to get a job done and should ALWAYS have a big-picture view of how things work around them. It does not matter whether they are working on a web site, a backend database app, or a game engine for the latest and greatest game. Somebody writing good PHP code could probably write good backend C++ code. You are associating the tasks people do with how capable they are. Languages and programs are TOOLS, and a programmer should be able to quickly learn to use new tools, whether it is a new language, a new API, or a performance profiler. A good programmer really should not care HOW they get things done - ONLY that they DO get them done.

    • Re:Probably (Score:5, Interesting)

      by ackthpt (218170) on Tuesday February 11, 2003 @01:59PM (#5281113) Homepage Journal
      Most programmers who are going to come across a "black box" have enough experience to be able code for the situation. Isn't that skill a trait of a good programmer?

      I think it's more than a skill; it's an attitude. I've encountered a number of programmers (just out of school/training) who are oblivious to external concerns, including interface design (traditionally what users complain about most, and where programmers lack any standard to follow). Generally it takes little effort to break programs written by very skilled programmers who are blind to anything outside their scope. I was probably as bad when I first started. Recently an analyst complained angrily that I went beyond the scope of the project by including an error/warning log (most likely because the errors/warnings accounted for any untrapped logic and revealed how incomplete the spec was, and how little the analyst, and some of the higher-ups, knew of the business function). I felt there were too many things unaccounted for and added the log; when it produced 1,000+ entries, things got a little heated. I stuck to my guns, though, and still see a general lack of interest in reviewing why there are gaps in the spec or knowledge (by the very people who should know).
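      The error/warning log the parent describes is easy to sketch. A hypothetical example (the `route_order` function and its order types are invented for illustration): every case the spec enumerates is handled, and everything else falls through to a logged catch-all, so untrapped logic surfaces as data about the spec rather than a silent failure.

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("spec-gaps")

def route_order(order_type):
    """Handle the cases the spec enumerates; log everything else."""
    if order_type == "retail":
        return "retail queue"
    if order_type == "wholesale":
        return "wholesale queue"
    # Catch-all for untrapped logic: every entry logged here is a
    # gap in the spec, not a bug in the code.
    log.warning("unspecified order type: %r", order_type)
    return "manual review"

print(route_order("retail"))    # retail queue
print(route_order("internal"))  # manual review, plus a logged warning
```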

      • I bless you.... (Score:2, Insightful)

        by FirstNoel (113932)
        Ah, I think we have someone here who DOES see the big picture.

        There are lots of times, at least in my experience, where it's not the programmer's fault in how the program works.

        I've seen specs come down from higher-ups who have no idea what they are asking for. I'm a little bit luckier though. The analysts we have here tend to spot these problems long before I get to program. But occasionally some do slip through. I have loads of fun ripping these things to shreds. I feel like a professor at college with my little red pen. "Ah, wrong...can't do that! What does that say? etc, etc ..."

        Aside:
        That is also usually a good stall tactic. If I'm swamped with other projects, I'll send them a flurry of notes and overwhelm them with spec questions. It usually buys me a few days.

        It's tough to think outside the box, when there is no box.

        Sean D.

    • Re:Probably (Score:5, Interesting)

      by ryochiji (453715) on Tuesday February 11, 2003 @02:55PM (#5281718) Homepage
      >programmers that tend to approach problems with a 'black box', or 'virtual machine' mentality without considering the entire system?

      I think there's a lot of truth in this. For example, how many programmers think about writing software from the standpoint of a support technician? In fact, how many programmers even have experience as a support technician? I've never heard anyone so much as talk about writing supportable software [iloha.net], yet when considering the overall costs or quality of a system, I think it's important to consider how heavily the introduction of that system will tax the support department. Whether you're shipping or deploying the system, lower support needs will lower overall costs and vastly improve the reputation of the system.

      The same applies for security and usability. It's really not a question of programming/technical ability, but a question of mentality. I think programmers need to have a specific (or perhaps not-so-specific) mindset to get a bigger picture, and not very many programmers are willing to do that. Part of it may be inherent to programmer-types, but it also might be cultural (the whole "us vs. them" elitist attitude).

      • Re:Probably (Score:4, Insightful)

        by oconnorcjo (242077) on Tuesday February 11, 2003 @04:47PM (#5283056) Journal
        The same applies for security and usability. It's really not a question of programming/technical ability, but a question of mentality. I think programmers need to have a specific (or perhaps not-so-specific) mindset to get a bigger picture, and not very many programmers are willing to do that. Part of it may be inherent to programmer-types, but it also might be cultural (the whole "us vs. them" elitist attitude).

        You ALMOST have it except it is not inherent in the programmer but in how programming departments are managed.

        Management usually puts an emphasis on more features and fast timelines instead of security and stability. Programmers must prioritize the demands given to them, and when management's views are skewed, so are the employees'.

        Good management would have code reviews of all programmers code on a periodic basis (no matter how much experience they have) and system designers would have meetings with the programmers (including every senior to junior programmer involved in building the system) and explain why and what their system is supposed to do.

        Instead, most companies give out specs and nobody knows what the hell their peers are doing, because management is incompetent or lazy and thus leaves code reviews and design meetings in a dusty book that could be called "good practices that most don't do".

        One of the reasons why the code in open source software is often of a higher quality than commercial software is that: 1. programmers write their code KNOWING that somebody might be looking at it later (and often getting good suggestions back from other developers). 2. Open source projects have developer mailing lists where developers explain what/how they are designing/redesigning something new in the project.

        But most company managements are very short-sighted and impatient, like the rest of society.

  • by seanmcelroy (207852) on Tuesday February 11, 2003 @01:41PM (#5280919) Homepage Journal
    I think the problem increases as programmers are less and less a part of the complete systems development life cycle and are contracted to work on individual components of an overall system. Especially during the maintenance phases of a system's life, the inexperience of new programmers on a project is probably more to blame than 'training', per se, for a black-box mentality.
  • Huh? (Score:5, Funny)

    by twofidyKidd (615722) on Tuesday February 11, 2003 @01:41PM (#5280923)
    I don't know what you're trying to say here man, but no amount of programming or "Fatal Error: Wing no longer attached to craft" terminal prompts would've saved them from what happened.

    If you're trying to make a case for programming paradigm shifts based on security procedures, it isn't working in this context.
  • by jlk_71 (308809) on Tuesday February 11, 2003 @01:41PM (#5280924)
    I am taking courses toward my degree, and I must say that in my intro to programming course, the instructor was constantly stressing the need for 'black box' programming. In addition, though, he also stressed that while keeping things black box, you need to keep your mind on the whole project, always watching out for possible security problems, etc.
    I believe that some people tend to get tunnel vision and concentrate wholly on the black-box theory, without taking the whole program into consideration. This usually leads to problems and errors in the code.

    Regards,

    jlk
  • by syntap (242090) on Tuesday February 11, 2003 @01:42PM (#5280928)
    Many times, management prevents developers from seeing the "big picture". Sometimes it's "Here, code this" and you don't get a lot of opportunity to ask the questions you know need to be asked. Sometimes you have to hope resolutions to these types of issues are built into the requirements specification or will be ironed out in quality assurance measures.

    The developer is only one in a group of responsible parties in any given system, and his/her output depends largely on input from others. If a developer is kept "out of the loop" on things (or is lazy and stays out of the loop on purpose), you're going to see these problems.

    Often it's like blaming the clogged fuel injectors instead of the cheap gasoline that clogged them.
    • by Anonvmous Coward (589068) on Tuesday February 11, 2003 @02:03PM (#5281150)
      "Many times, management is the cause of preventing developers to see the "big picture". Sometimes it's "Here, code this" and you don't get a lot of opportunity to ask the questions you know need to be asked..."

      Don't forget the "make it work by the next trade show" mentality.
    • It's not just management. I'm on a contract right now where I've advised that the client do things in particular ways, and the management itself is reasonably amenable to the concept, but the programmer to whom I report is, well, set in her ways.

      Developers should know what the big picture is so that they can have a sense of direction in the development process. They don't need to worry about it, perhaps, but they should know what it is nonetheless.

      -austin
  • by Entrope (68843) on Tuesday February 11, 2003 @01:42PM (#5280929) Homepage
    Keeping the "big picture" in mind is a good thing for managers and designers. For people implementing the finer details, though, it can be a distraction and a poor use of their time. Someone implementing or verifying flow control in a ground-to-space network link does not need to know much about the format of data carried over the link. Someone doing circuit layout or design for a cockpit control widget does not need to worry about reentry dynamics and airflow. Similar examples can be found in any large system design.

    It is the responsibility of the higher level designers and managers to encapsulate the big picture into digestible, approachable chunks. To the extent possible, they should be aware of and codify the assumptions and requirements that cross multiple domains -- when those are documented, it is easier to test the system for correctness or robustness, as well as to diagnose problems.

    When everyone on the project tries to orient their work to what they each perceive as the big picture, you end up with enough different perceptions that people work against each other. Breaking down the system into smaller, more defined, chunks combats that tendency.
    • by nosilA (8112) on Tuesday February 11, 2003 @02:20PM (#5281318)
      In order to do a good job on your module, you need a solid understanding of how the components you directly interact with function. In addition, a superficial understanding of other components is useful.

      For example, let's say you are working on the software for automatic transmission control in the car. You need an intimate understanding of the hardware you are running on, that's directly related to your job.

      However, you also need a solid understanding of how the automatic transmission works. Understanding the mechanics of the gear change is important to understanding timing issues, errors that can occur, and how to deal with them.

      It is very useful to have a good understanding of how a car works in general, to get an idea of how your product will be used. This allows you to optimize your product for likely scenarios.

      Sometimes, for personal satisfaction, it is nice to know how the windshield wiper mechanism works, but it doesn't help you in any way to make your automatic transmission control better.

      -Alison
    • "Someone doing circuit layout or design for a cockpit control widget does not need to worry about reentry dynamics and airflow."

      I think this example is debatable and can possibly be used against you. One could argue that reentry dynamics and airflow could make for a bumpy ride, thus the designers need to be aware of the journey this vessel's going to go on.

      That's beside the point, though. I'm not interested in debating that detail. Instead, I want to offer my insight from observing both poles of this discussion: having strictly one point of view or the other is bad. If you're overly broad, you over-design software. If you're overly narrow, you design yourself into a corner.

      I'm radically oversimplifying this problem, but it's true. Everybody has their own perspective. A good manager places them where they're useful. My company has a nice mixture of personality types in engineering. They're all placed where they fit best. If we were to polarize all of a sudden, I really think the project would collapse.
      • by Entrope (68843) on Tuesday February 11, 2003 @03:25PM (#5282023) Homepage

        One could argue that reentry dynamics and airflow could make for a bumpy ride, thus the designers need to be aware of the journey this vessel's going to go on.

        That actually occurred to me while I was writing my post, and I considered it to be an instance where my second paragraph bears true: if the ride will be bumpy, or flown upside down, or whatever, then those cases should be documented (or at least known) to the designers of the cockpit widgets.

        Yes, you need to avoid both over- and under-design. Yes, you need to know things beyond your piece of the work. But no, you do not need to consider the whole system and all parts of it when you do implementation or even some of the design.

        A good designer knows how far away the interaction horizon should be, and can analyze the effects of everything within that horizon. If the collective effects are too many to analyze, it is a sign that the design needs to be refined or reworked.

    • by pmz (462998) on Tuesday February 11, 2003 @03:06PM (#5281840) Homepage
      When everyone on the project tries to orient their work to what they each perceive as the big picture, you end up with enough different perceptions that people work against each other. Breaking down the system into smaller, more defined, chunks combats that tendency.

      This is why good managers are worth their weight in gold. Bad managers are worse than worthless.
      • by GileadGreene (539584) on Tuesday February 11, 2003 @04:17PM (#5282438) Homepage
        This is why good managers are worth their weight in gold. Bad managers are worse than worthless.

        No. This is why good systems engineers are worth their weight in gold. Dealing with the big picture, and designing large, complex systems using an engineering approach is why systems engineering came into being in the first place.

        Managers are trained to deal with schedule and budget, not with designing complex systems. Systems engineers are trained to design complex systems, and to make sure that all the pieces interact in such a way that the overall system achieves whatever goal it was designed for.

        That said, decent systems engineers seem to be somewhat rare these days, or at least they seem to get overruled by management. Many of the well-known engineering blunders in recent years can be chalked up to poor systems engineering.

  • IMHO (Score:5, Funny)

    by Em Emalb (452530) <ememalb AT gmail DOT com> on Tuesday February 11, 2003 @01:42PM (#5280935) Homepage Journal
    People tend to focus exclusively on their area of expertise.

    Otherwise they become managers :D
  • If I write a component that takes in X1 and outputs X2, isn't it the designer's job to make it look pretty? I mean, supposedly they were the ones who came up with needing the component in the first place, to accomplish some function or other and thus make the user happy.
  • by Dr. Manhattan (29720) <[sorceror171] [at] [gmail.com]> on Tuesday February 11, 2003 @01:43PM (#5280943) Homepage
    Being able to abstract chunks of a program or system out and not worry about implementation is utterly vital. No human, however gifted, is capable of understanding the entirety of more than a trivial system at once.

    Now, the amount of abstraction possible does differ depending on what you're doing. Embedded systems programming is hard, and you do have to know details of the machine. But I ask you - do you insist on a gate-level understanding of the embedded CPU, or will you settle for knowing the opcodes and their timing characteristics?

    Because, in embedded programming, you need to know more about the device, it's proportionately harder to do. That's one reason, apart from power and cost considerations, that embedded systems tend to be simple - the simpler the system, the easier it is to think about, to prove correctness or to at least enumerate possible pathways and handle them.

    But even in that case, you need to be able to ignore some implementation issues or you can't do it at all.

    • by dboyles (65512) on Tuesday February 11, 2003 @01:55PM (#5281074) Homepage
      I agree, and would like to add my thoughts.

      One of the most likeable things about programming is that on a low enough level, it's always predictable. This kind of goes hand-in-hand with the fact that computers don't make mistakes, humans do. As a programmer, it's very comforting (for lack of a better word) to have a chunk of code and know that, given X input, you'll get Y output. You can write a subroutine, document it well, and come back to it later, knowing how it will behave. Of course, other programmers can do the same with your code, without having to have intricate knowledge of how the code goes about returning the output.

      But of course, there's a catch. It's probable that the programmer who wrote the subroutine initially didn't envision some special case, and therefore didn't write the code to handle it. If everybody is lucky, the program will hiccup and the second programmer will see the problem. The worse situation is when the error is seemingly minor, and goes unnoticed: when that floating point number gets converted to an integer and nobody notices.

      I know this isn't some groundbreaking new look on abstraction in code, but it is pretty interesting to think about.
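      The float-to-int example above can be made concrete. A hypothetical sketch (the `average_score` functions are invented for illustration): two subroutines with the identical documented interface, one of which silently truncates; exactly the kind of minor error that goes unnoticed behind a black box.

```python
def average_score(scores):
    """Black box: callers only know 'list of numbers in, average out'."""
    # The silent-error class described above: floor division drops the
    # fractional part, and nothing ever complains.
    return sum(scores) // len(scores)

def average_score_fixed(scores):
    """Identical interface; the fraction survives."""
    return sum(scores) / len(scores)

# From outside the box, both look the same -- only the output betrays it.
print(average_score([1, 2]))        # 1 (silently wrong)
print(average_score_fixed([1, 2]))  # 1.5
```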
    • I agree. There is nothing inherently wrong with seeing applications, or even components, as black boxes. And those working to develop business or shrink-wrap applications are in an entirely different problem domain than embedded system programmers anyway. The difference in the complexity of the OS alone is enough to require several levels of abstraction.

      The problems arising from a lack of a holistic view are real, as stated by the article, but they are not something we can easily work around. What we need is a better way of working with the black boxes such that, although we don't have time to learn what is in them, we can see how they fit together.
    • The best lesson that I learned on my first large development project was that there is a big difference in the need for abstractions and black boxes in implementation versus design. All code should be written using a black box approach... no matter whether you're programming in SmallTalk or Assembly. (Though some languages make it easier than others :-).

      The big difference is that when you are actually designing and coding (verbs!) you have to look into those black boxes. If you don't understand the subsystems/objects/subroutines that your code interfaces with, you won't know what boundary conditions to test, what assumptions the other subsystems are making, etc.

      So now I always write well abstracted code (just like your Comp Sci 101 prof taught), but design with the big picture in mind.
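      A minimal sketch of what 'looking into the box' buys you (the `peak` function is a made-up stand-in): the one-line doc promises a largest value, but reading the implementation reveals a boundary condition, empty input, that a caller treating it as opaque would likely never test.

```python
def peak(values):
    """Black box per its one-line doc: 'returns the largest value'."""
    return max(values)  # hidden assumption: values is non-empty

# Treated as an opaque box, the empty-list case is easy to never test;
# having looked inside, you know exactly which boundary to probe.
print(peak([3, 1, 2]))  # 3
try:
    peak([])
except ValueError:
    print("empty input: a boundary the interface never mentioned")
```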

  • by keyslammer (240231) on Tuesday February 11, 2003 @01:44PM (#5280956) Homepage Journal
    ... but the lack of experience.

    Programmers have to consider subsystems as abstractions: there's a limit to how many things the brain can deal with at one time. We know that this kind of thinking produces cleaner designs which are less susceptible to bugs and security holes.

    Knowing the limitations of the "black box" and what will break the abstraction is the product of lots and lots of experience. I don't believe there's any way to teach this - it's something that you just have to live through.

    That's why senior level developers can continue to justify their existence (and higher price tags)!
    • by Anonymous Coward
      I've been doing pda programming for both the pocketpc and the palm os.

      The application for both is intended to be identical, but the api is different for each device.

      I designed the app originally for the palm, but now I am porting it over to the pocketpc. Unfortunately, the api is different enough that little of the code is portable.

      If I had known I would be coding for both, I would have tried to design the code to be more portable. Knowing the requirements of both systems might have allowed me to factor out the device-specific sections.
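      The factoring the parent wishes they had done is a classic abstraction-layer pattern. A hypothetical sketch (the backend classes and `draw_text` method are invented, not the real Palm or PocketPC APIs): all device-specific calls live behind one small interface, so the application code ports unchanged.

```python
class PalmBackend:
    """Wraps one platform's API behind a small, shared interface."""
    def draw_text(self, text):
        return f"[palm] {text}"

class PocketPCBackend:
    """Same interface, different platform underneath."""
    def draw_text(self, text):
        return f"[pocketpc] {text}"

class App:
    """Portable application code: depends only on the interface."""
    def __init__(self, backend):
        self.backend = backend

    def greet(self):
        return self.backend.draw_text("hello")

# Porting means swapping the backend, not rewriting the app.
print(App(PalmBackend()).greet())      # [palm] hello
print(App(PocketPCBackend()).greet())  # [pocketpc] hello
```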
  • on the new Death Star, I found that trying to envision the "Big Picture" interfered with the specific requirements of my task. I needed control mechanisms smart enough to deal with Storm Trooper suits, regular Empire uniforms, robots with various temperature ranges, Wookies. It needed to be able to maintain a comfortable temperature range in the beam tunnel vicinity even during firing. And it needed to be efficient enough that they wouldn't shift power from the exhaust port shields to the jacuzzi heaters like they did on the old Death Star.
  • Experience (Score:5, Insightful)

    by wackysootroom (243310) on Tuesday February 11, 2003 @01:44PM (#5280959)
    The only thing that school prepares you for is to get an entry level job where you can gain the experience to write reliable software.

    School will get you up to speed on new terms and concepts, but the only thing that will make you better at writing good code is to read good code, write your own code and compare it to the good code, notice the difference, and adjust your approach until your code is "good".
    • The only thing that school prepares you for is to get an entry level job where you can gain the experience to write reliable software.

      School will get you up to speed on new terms and concepts, but the only thing that will make you better at writing good code is to read good code, write your own code and compare it to the good code, notice the difference, and adjust your approach until your code is "good".

      I agree entirely, and will actually take it a step further:
      Question the 'facts' you were taught. They may appear to be correct, but you may not only learn something new by examining why the 'fact' is as it is, but you may find a better way.

      Of course, take that with a grain of salt. I don't believe the speed of light can't be exceeded. Not that I can prove anything, I just don't like having someone tell me I can't do something :)

    • Re:Experience (Score:2, Insightful)

      by Anonymous Coward
      So then how does better code get developed if everyone is merely attempting to match their coding style to the "good" style they see?

      *Someone* had to provide that good code in the first place.
    • Re:Experience (Score:5, Insightful)

      by Oink.NET (551861) on Tuesday February 11, 2003 @02:15PM (#5281262) Homepage
      the only thing that will make you better at writing good code is to read good code

      Because code is the most direct way to communicate wisdom between geeks? I would submit that unless you get the analysis and design right, your approach to writing good code just teaches you how to make a more polished turd.

      As far as getting better at the mechanics of coding, I would suggest reading Steve McConnell's Code Complete [amazon.com].

      • Re:Experience (Score:3, Interesting)

        by pmz (462998)
        Because code is the most direct way to communicate wisdom between geeks?

        Not really. I've gained much more by reading books like The Mythical Man Month and good object-oriented analysis and data modeling books. Managing complexity through good data modeling is the most important (and hardest) part of a program to get right.

        The worst applications I've had to work with were designed piecemeal by a high-turnover team of inexperienced people (read: really ugly data that resulted in nasty, bloated, unmaintainable code).
    • In high school I did a 2 year Computer Studies course.

      During that period, one night I went to a heavy party and then spent the following day trying to write functional code whilst suffering a hangover.

      This was the only experience from the course which mirrored anything that has happened to me since I started programming professionally.
  • by Dthoma (593797) on Tuesday February 11, 2003 @01:45PM (#5280962) Journal
    Programming classes may encourage a 'black box' approach to programming, depending on what language you use. It all relies on how high-level the language is; if you're using PHP, chances are you won't worry nearly as much about the hardware of your system as you would if you're using C, assembler, or machine code.
  • by jj_johny (626460) on Tuesday February 11, 2003 @01:45PM (#5280964)
    I think that the programmer who thinks of things in a black-box mentality is usually going to be involved in a failed program. I have run into so many programmers who know nothing of the many parts that their program touches. They seem to forget that their software works within a wider system and a wider world.

    The problem with these programmers is that they rarely understand what can and does go wrong with the outside world. It is always amazing to me that there are people out there that assume that everyone has a 100BaseT Ethernet hub between the front end and the back end or other stupid assumptions.

    The issue that crops up most when programmers think in black-box terms is that today's software is rarely spec'd out well enough, so the end user does not get what they wanted; the programmer never solved it by asking. Too often the problem is very fuzzy, and thus the programmer is there to help clarify, not just implement.

    Without a well-rounded programmer (or his/her boss) looking at the overall system, you will wind up with chatty, buggy applications that are what the user asked for but not what they needed.

    • The thinking about the underlying system should have been done already. Somebody figures out how a system should behave, and you code around those parameters.

      That thinking should be done by the admins of the system. Any dope who puts a network intensive app on a 10Mb LAN when the app needs more bandwidth or a switched network gets what he deserves.

      When new airplanes are developed, test pilots push the aircraft to its outer limits. The flight manual for an airplane is based upon that testing experience.

      Keeping "the big picture" in focus means that you don't really care how a motor lowers landing gear... you just care that the landing gear is down when you land.
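      The landing-gear point maps directly onto interface design. A hypothetical sketch (the `LandingGear` class is invented for illustration): the caller can observe only the state that matters, while how the gear gets there stays inside the box.

```python
class LandingGear:
    """Black box: callers see state, never mechanism."""

    def __init__(self):
        self._down = False

    def lower(self):
        # Hydraulics, electric motor, or gravity drop: the box's
        # business, not the caller's.
        self._down = True

    def is_down(self):
        return self._down

gear = LandingGear()
gear.lower()
print(gear.is_down())  # True -- the only fact the checklist needs
```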
    • It is always amazing to me that there are people out there that assume that everyone has a 100BaseT Ethernet hub between the front end and the back end or other stupid assumptions.

      Actually, I've found myself doing the reverse several times. For many years, I worked in what I guess would be semi-embedded systems. We did special purpose computers for the military. The thing was, we had our own RTE, and I got into the habit of coding for that, and assuming that the target environment essentially had nothing.

      Now that I'm coding for standard OSen (Linux), I find it hard to get used to the concept that it's already there, I don't have to roll my own.

      Don't get me wrong, I believe in reuse. I think it's wonderful to have the tools. It's just difficult to "rewire" myself after 15 years of a particular mindset (but I'm working at it!).
    • Without a well rounded programmer looking at the overall system (or his/her boss)

      This is what the lead programmer/designer or the PM is for. Depending on the project, you should have 1 of these for each section of the project. If the project is sufficiently large, have 1 LP for each sub-section and have them report to a primary LP. The primary should know how all the subs interface and the subs should know how every component under them interfaces together. There are few projects that I've ever seen that required more than a few LPs (and I've worked on projects with 250+ developers) because they worked in a multi-tiered environment...each LP knew their own section.

      Individual programmers need to look at things as a black box, it will make them much more efficient. Granted, you need to have sufficient requirements for them to do this effectively (very, very few projects have these). Ideally, programmers shouldn't even be hired until at least the first draft of requirements is out. The LP should be God before that, dictating what's doable, feasible and what makes sense from a technical perspective. He then needs to hire on programmers that can produce what he's envisioned. It amazes me the number of projects that staff up before requirements are out...how can you effectively staff up if you don't know what your staff is going to be doing? That's why sub-contractors are important, they can be staffed up anytime and (should!) have domain knowledge on what you're doing.

      --trb
    • I'm programming on a very complex system. I simply cannot know about all the parts that my code touches. I would need three engineering degrees just to understand it all. I have to program in a black box because the white box is too big.

      This is why I demand complete requirements and specifications, and invite all relevant parties to my design and code reviews. And when I'm the one called on to write the reqs and specs, I make sure I get a sign off absolving me of responsibility for it (you would be surprised how much closer the docs are inspected when you start demanding stuff like that).

      The problem isn't the developers writing black boxes, but the upper management buying into the party line of Microsoft, thinking that snap-together black box components will reduce the resource needs for the project.
  • by levik (52444) on Tuesday February 11, 2003 @01:45PM (#5280967) Homepage
    The black box paradigm obviously has its proper and improper applications.

    It can be a great boon in OO programming, where you can assume a component will live up to its end of the bargain by providing the specified functionality, letting you concentrate on using whatever interface it exposes.

    It can obviously be taken too far in cases where failure to know about the internal workings of a system can lead to grossly unoptimized or even error-prone code. However, more often than not such problems are caused by faulty abstraction and incomplete documentation on the part of the implementor.

    In most such cases a "grey box" approach would do, where the end developer is made aware of some of the limitations and quirks of the component they are working with, but not necessarily the minute details of its operation. You don't need to know whether the sort() function is implemented with bubble sort or quicksort, but it does matter whether it runs in quadratic or n-log-n time.
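    The sort() example above can be sketched concretely (all code here is invented for illustration, not taken from any particular library): two routines satisfy the identical black-box contract of "the array comes back sorted", yet the hidden quadratic-versus-n-log-n difference dominates once inputs get large - exactly the grey-box detail the parent says you do need to know.

```java
import java.util.Arrays;
import java.util.Random;

// Two implementations of the same "black box" contract. The caller sees
// identical results; only the grey-box view reveals the cost difference.
public class GreyBox {
    // O(n^2) bubble sort -- correct, but quadratic.
    static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j + 1 < a.length - i; j++)
                if (a[j] > a[j + 1]) { int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t; }
    }

    // O(n log n) library sort behind the same "sorted array" contract.
    static void fastSort(int[] a) { Arrays.sort(a); }

    public static void main(String[] args) {
        int[] data = new Random(42).ints(10_000, 0, 1_000_000).toArray();
        int[] copy = data.clone();

        long t0 = System.nanoTime(); bubbleSort(data); long t1 = System.nanoTime();
        fastSort(copy);                                long t2 = System.nanoTime();

        // Identical output through the black box, wildly different cost.
        System.out.println("same output: " + Arrays.equals(data, copy));
        System.out.printf("bubble: %d ms, library: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }
}
```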

    Everything breaks down if taken to an extreme.

  • No, I don't think that this is an issue, especially with Cryptography which is entirely in the space of mathematical theory. Unless this is some form of hardware crypto, and a programmer said "oh yeah, it's made by the hand of God," I doubt that it is programmer error.

    On the other hand, managers and people more detached from the actual implementation...
  • by drenehtsral (29789) on Tuesday February 11, 2003 @01:46PM (#5280973) Homepage
    I think you've got a point there.

    The way modern projects are often managed, along with the way modern programmers are often taught, does lead in that direction.

    Even if you are only responsible for a small part of a much larger project, it will always help to have a decent understanding of the REST of the system. Maybe not in excruciating technical detail, but at least a decent grasp on what goes on and how it all works.

    The goal of the whole 'black box' thing is that, in theory, it minimizes stupid dependencies and hidden interconnections that can cause things to be unmaintainably complex. Individual components should still be well spec'd out, and projects should still be modular, but each programmer should grok the context in which his/her code runs, and people should still communicate to iron out inefficiencies, strive for a consistent UI, etc...

    I think it's hard to teach that, you just have to learn by experience. Where I work, we all go out for curry once a week (company pays) and we just talk about the project, off the record, no bureaucracy, just a handful of geeks talking about programming. We've hammered out more efficiency/UI/complexity issues that way than in any formal meetings.
  • by Anonymous Coward on Tuesday February 11, 2003 @01:48PM (#5281004)
    Computer science professors and courses are more concerned with the methods, ideas, and logic of computer programming and design. The idea is to create a totally abstract system, hardware or software, that can then be implemented on any system. This is the purpose of "black box" programming.

    While I agree with you that programmers should understand the hardware they are writing for, any knowledge of that hardware biases the system they create toward that hardware, moving it further from computer science's notion of total abstraction.
  • by Illserve (56215) on Tuesday February 11, 2003 @01:49PM (#5281006)
    I recently installed the latest version of the accursed RealOne player to watch an .rm file. I hate RealPlayer more than words can describe, and it just seems to be getting worse.

    So I pop it up to view the file, and what happens? I get the movie playing in a window on top of the Real advertising/browser thing. It spontaneously pops up a "help" balloon giving me a tip for how to use the browser window. The balloon is sitting RIGHT ON TOP OF THE GODDAMNED MOVIE IMAGE. It goes away after a few seconds of frantic clicking, but the point is clear, these programs are often a monstrous brew, created by too many chefs. They just throw in features, and there doesn't seem to be someone sitting at the top, deciding whether these features actually contribute to improving the final product, or just make it worse.

    Then there's Office, which by default will turn 2-21-95 into 2/21/95. ????? I have to dig through numerous help pages to figure out which subpanel of the preferences menu will deactivate this. Worse, if I enter 23 hello on a new line in Word and hit enter, it auto-indents, adds a 24, and positions the cursor after it. !?!?!!?!?!?!?!!?
    How many times I've had to fight this particular feature I can't tell you.

    And it's certainly not just a closed-source thing either; if anything, some open source GUI packages are even worse. Although, to be fair, I don't expect as much from open source stuff, because no one's getting paid. But when a program created by paid programmers is just badly done, I get infuriated at the incompetence, at the hours wasted taking a usable product and making it actually worse by throwing in garbage features.

    It's been said a million times, but if we made cars the way we make software, no one would get anywhere.

  • I've worked on large systems where I've had a lot to do with many parts of the system, and not been able to hold the whole thing in my head all at the same time.

    And as someone who's had a lot to do with the building of these systems, I have a better chance than most. New programmers would have no chance. We need black box systems to enable us to continue working.

    The real problem is that the black boxes we define don't have nice sharp edges, so when we put two boxes next to each other there are cracks. Cracks for the crackers to crawl through.
    Joel Spolsky wrote about it and called it The Law of Leaky Abstractions [joelonsoftware.com]

  • Of course (Score:5, Insightful)

    by Scarblac (122480) <slashdot@gerlich.nl> on Tuesday February 11, 2003 @01:52PM (#5281031) Homepage

    It is essential that every programmer in a big system only thinks about his own problem, and uses the other parts as a black box.

    Say I want to use some library. Then it has a documented API, which explains how I can use it. I don't need to know more. For me as a programmer, that means:

    • Simplicity - it is a limit on what I need to understand.
    • Compatibility - if a new version comes out, which changes implementation details but leaves the API intact, programs that don't make assumptions about these details won't break.
    • Portability - if there is a new implementation of the same API by another vendor, I can (theoretically) just change to that implementation and nothing changes.
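    A minimal sketch of those three benefits, with every interface and class name invented for illustration: client code written against an API keeps working unchanged when the implementation behind it is swapped out.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// The "API": the only thing client code is allowed to depend on.
interface KeyValueStore {
    void put(String key, String value);
    String get(String key);
}

// Implementation A: in-memory hash map.
class MapStore implements KeyValueStore {
    private final Map<String, String> m = new HashMap<>();
    public void put(String key, String value) { m.put(key, value); }
    public String get(String key) { return m.get(key); }
}

// Implementation B: same contract, completely different internals.
// The client below cannot tell the difference -- that's the portability win.
class ListStore implements KeyValueStore {
    private final List<String> keys = new ArrayList<>();
    private final List<String> vals = new ArrayList<>();
    public void put(String key, String value) { keys.add(key); vals.add(value); }
    public String get(String key) {
        int i = keys.lastIndexOf(key);
        return i < 0 ? null : vals.get(i);
    }
}

public class ApiDemo {
    // Client code: depends only on the documented API.
    static String roundTrip(KeyValueStore store) {
        store.put("answer", "42");
        return store.get("answer");
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(new MapStore()));   // 42
        System.out.println(roundTrip(new ListStore()));  // 42
    }
}
```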

    I'm certain that without these black boxes, no big software engineering project would be possible. The human mind can't keep track of everything in a whole system at once (except for some simple cases - like embedded systems, perhaps).

    It is done sometimes - I believe Perl looks inside a FILE struct when reading/writing files on some platforms to get faster I/O than standard C, for example. But that's only as an optimization after coding the general case, and even then I don't believe it's a good idea.

    For hardware, the story is much the same. Any speedups specific for the hardware are optimizations, and they should only be looked at when the program works, after profiling, when there's a speed problem, and the algorithm can't be improved.

    Remember the rules of optimization: 1) Don't do it. 2) (for experts only) Don't do it yet.

    Black boxes in software engineering are your friend.

    • Re:Of course (Score:3, Interesting)

      by _xeno_ (155264)
      Black boxes can be your friend - but there are issues with them from time to time. One of the most annoying things I never knew about Java is because of the black box that is the java.lang.String class. It turns out that String.substring(...) creates a new String object that keeps a reference to the entire original sequence of characters that made up the original String object. In other words, if I had a 1024-character long string, and wanted only 5 characters from it, I would end up with a String object that presented through its black box just those 5 characters but maintained internally all 1024 characters.

      There's a way around it. Doing new String(string.substring(0,5)) allocates a new String that only contains the five required characters. But the documentation for the black box warns against doing that: "Unless an explicit copy of [the original string] is needed, use of this constructor is unnecessary since Strings are immutable."

      Well, yes, they are - but using the constructor can also be required to get around the fact that the entire array of original characters is maintained.

      As it turns out, this is a "speed hack" (i.e., only one array of characters is maintained at any given time - a new array for the substring is not allocated, and the original is used). However, this implementation assumes that everyone using the black box will also need the parent string, or will dereference both the full string and the substring (and hence allow both to be GC'd) at about the same time, preventing memory and time from being wasted on the substring.

      Unfortunately, I had written a program that read in a list of strings, keeping certain substrings and throwing out the rest - or so I thought. (Think "comments" in the text file - they would be removed, the remaining characters would be kept.) This meant I was wasting quite a bit of space by the end of the run, since characters that were no longer accessible were kept around indefinitely because the "substring" objects kept a reference to the array of characters for the full string. Fixing it requires doing the new String(string) thing, which as it turns out does allocate a new buffer and is there expressly for that purpose (if you read the source code).

      My point is this - black boxes can be dangerous. A black box is a very useful abstraction - assuming that important details about implementation side effects are documented. In the given example, the Java developers implement an algorithm that is useful in many cases - but there are those cases where it would be useful not to use such an implementation.

      I think the idea of "black boxes" is important, but a developer also has to be aware that something happens inside the black box and be prepared to learn about it if the need arises. Likewise, when creating a black box, care should be taken to fully specify what a given implementation does and to ensure there are no side effects on the environment (like maintaining a 1024-element array while only allowing access to 5 elements). There are pitfalls - and both users and designers of black boxes must ensure that such issues are addressed.

      (Especially because if the next String blackbox "fixes" the issue above, my code will start doing a useless extra step to get around the problems in the implementation I saw of the black box...)
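      A sketch of the workaround being described (the helper name is invented). Note that the worry in the parenthetical above came true: JDK 7u6 and later changed String.substring to copy its characters, which is precisely the "next blackbox 'fixes' the issue" scenario - on those JVMs the explicit copy below is a harmless extra step.

```java
// Sketch of the substring workaround: force a private copy so the large
// parent character array can be garbage collected on old JVMs that shared
// the backing array between a String and its substrings.
public class SubstringDemo {
    // keepOnly is a hypothetical helper name, not a real API.
    static String keepOnly(String big, int from, int to) {
        // new String(...) is documented as "unnecessary" -- unless you need
        // to drop the reference to the parent's full character array.
        return new String(big.substring(from, to));
    }

    public static void main(String[] args) {
        String pinned = "x".repeat(1024).substring(0, 5); // old JVMs: still pins 1024 chars
        String copied = keepOnly("x".repeat(1024), 0, 5); // private 5-char copy
        System.out.println(pinned.length() + " " + copied.length()); // 5 5
    }
}
```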

  • by DakotaSandstone (638375) on Tuesday February 11, 2003 @01:52PM (#5281032)
    I'm also an embedded systems engineer. Two huge concepts I got as an undergrad in CS were "APIs" and "object oriented programming." By their very nature, these things inspire black box thinking.

    And heck, I don't know. I mean, is it great that I can now call malloc(1000000); and get a valid pointer that's just usable? Yeah probably. In DOS, I wasn't shielded from the memory manager as much, and to do something like that, I had to write my own EMS memory swapping code! That was a PITA, and kept me from the true task I was trying to solve.

    So a modern 32-bit malloc() is a black box for me. Cool. It's empowered me very nicely.

    However, something like WinSock has become a big black box for people too. Okay, great. So it's really easy, in 5 function calls, to open a socket across the Internet and send data. But you've missed the nuances of security. So now your app is unsafe, because you weren't forced to know more about what's going on in the "black box."
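    The "missed nuances" point generalizes beyond WinSock. A hedged sketch (the framing protocol, the size limit, and the method names are all invented, and an in-memory stream stands in for the socket): the black box happily hands you bytes, but validating them - here, refusing an absurd length prefix before allocating - is still the caller's job.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class SafeRead {
    static final int MAX_MSG = 4096; // sanity limit -- an invented protocol rule

    // Reject a claimed length before trusting it; return null on any problem.
    static byte[] readMessage(DataInputStream in) {
        try {
            int len = in.readInt();                 // attacker-controlled value!
            if (len < 0 || len > MAX_MSG) return null;
            byte[] buf = new byte[len];
            in.readFully(buf);                      // fails if fewer bytes arrive
            return buf;
        } catch (IOException e) {
            return null;
        }
    }

    // Helper: build a raw length-prefixed frame (big-endian int + payload).
    static byte[] frame(int claimedLen, byte[] payload) {
        byte[] raw = new byte[4 + payload.length];
        raw[0] = (byte) (claimedLen >>> 24);
        raw[1] = (byte) (claimedLen >>> 16);
        raw[2] = (byte) (claimedLen >>> 8);
        raw[3] = (byte) claimedLen;
        System.arraycopy(payload, 0, raw, 4, payload.length);
        return raw;
    }

    static byte[] parse(byte[] raw) {
        return readMessage(new DataInputStream(new ByteArrayInputStream(raw)));
    }

    public static void main(String[] args) {
        System.out.println(parse(frame(3, new byte[]{1, 2, 3})).length);    // 3
        System.out.println(parse(frame(999_999_999, new byte[]{1, 2, 3}))); // null
    }
}
```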

    Well, that's all I can really say in my post. Black boxes are a darn complex issue to talk about. Anyone who attempts to distill this down to a "yes" or "no" answer is probably missing a lot of the complexity at hand in the issue.

    Very interesting question to put forth, though. Good topic.

  • I concur that there is a lot of this kind of naiveté out there, but on the other hand, there are always the few who will go above and beyond. While my schooling tends to focus on a more abstract approach with emphasis on OOP, I have also started working on embedded systems in my own time.

    It seems apparent enough to me that any passionate CS student will not be satisfied with a mystical understanding of computer architecture, and will in turn educate themselves. I propose, then, that any kind of 'black-box' mentality is more a reflection of the student's drive than of their education.
  • why's it always got to be the "black box"? Huh??
  • by nicodaemos (454358) on Tuesday February 11, 2003 @01:53PM (#5281043) Homepage Journal
    Okay the "big picture" college profs should be showing you is this one [scubaboard.com].
  • Not yet, anyways (Score:3, Interesting)

    by captredballs (71364) on Tuesday February 11, 2003 @01:53PM (#5281044) Homepage
    Is black box programming a pipe dream? I wouldn't go that far, as software engineering/compsci is a relatively new "science". At any rate, I know that I am very reliant on knowledge of the underlying platform that my code is running on. When a piece of software (especially one that I didn't write) doesn't work, I often resort to tools like truss/strace, lsof, netcat, /proc, etc. to help me determine what is going on "under the hood". I can figure out what ports, files, DLLs, and logs the software is using in a matter of seconds, instead of resorting to a debugger or printf's.

    I'm no superstar engineer, but I find this methodology (my window into the black box) so valuable that I'm often frustrated by colleagues who refuse to learn more about an OS/VM/interpreter and make use of it. It is also what most frustrates me about troubleshooting in Windows.

    While it's true that I don't know much about Windows, I get the feeling that these kinds of observation tools, so common on unix-ish machines, aren't quite so prominently available on winderboxen. Sure, you can figure out a lot about a problem using MSDEV (what I remember from college, where VC++ wouldn't stop opening every time Netscape crashed), but it isn't available on ANY machine that I ever troubleshoot.

    Hell, even when I'm programming java I use truss to figure out what the hell is wrong with my classpath ;-)
  • A black box is something you don't know anything about. You make it not a black box by learning something about it. Most of software development is spent learning about the system (reading documents, searching indexes, walking through code with a debugger). How much you can get done is determined by how little you can get away with learning before fixing a problem.

    That's not universal, it might not be the case with the shuttle software, but it's true for a lot of software. It's definitely true for my job.
  • Back working on my undergrad (computer engineering) I remember getting frustrated at the comp-sci profs that insisted machines were simply 'black boxes' and the underlying hardware need not be a concern of the programmer.

    I'm not sure if there was a lack of communication with your prof, but the concept should have been "SOFTware as black boxes". This is the concept of data hiding, which is a good thing. The cornerstones of software engineering are abstraction and encapsulation, and data hiding is a big part of encapsulation.

    The hardware is (to an extent) a "black box" from the standpoint of any higher-level language, including C and Ada. That is the whole point of software portability, which is also a good thing. Both of those have been used for a tremendous number of embedded systems (particularly Ada, which is used for quite a lot of the space shuttle software). One must know that one's algorithms will execute deterministically in the required time, but knowing in detail how the data and instructions flow from memory through cache and processor is emphatically not required in 99% of cases.

    Detailed knowledge of computer hardware is helpful to software engineers, but by no means essential. Talk to the hardware folks if you have a question. ;-)

    By the way, don't forget another important axiom:

    "Premature optimization is the root of all evil."

    :-)

  • by Carnage4Life (106069) on Tuesday February 11, 2003 @01:55PM (#5281066) Homepage Journal
    There are many things computer science education does not teach the average student about programming. This is compounded by the fact that programming can vary significantly across areas of CS (e.g. networking vs. database implementation) and even within the same area (GUI programming on Windows vs. GUI programming on Apple computers).

    When I was at GA Tech the administration prided themselves on creating students that could learn quickly about technologies thrown at them and had broad knowledge about various areas of CS. There was also more focus on learning how to program in general than specifics. This meant that there was no C++ taught in school even at the height of the language's popularity because its complexity got in the way of teaching people how to program.

    Students were especially taught to think 'abstractly', which, with the advent of Java, meant ignoring not only how the hardware works but also things like memory management. In the general case, one can't be faulted for doing this while teaching students. Most of my peers at the time were getting work at dotcoms doing Java servlets or similar web/database programming, so details like how choosing a linked list versus an array affects the number of page faults the system makes were not things they really had to concern themselves with, given that the virtual machine and the database server would affect their applications far more significantly than any code they wrote.

    Unfortunately for the few people who ended up working on embedded systems where failure is a life or death situation (such as at shops like Medtronic [medtronic.com]) this meant they sometimes would not have the background to work in those environments. However some would counter that the training they got in school would give them the aptitude to learn what they needed.

    I believe the same applies for writing secure software. Few schools teach people how to write secure code - not even simple things like why to avoid C functions such as gets() or strcpy(). However, I've seen such people snap into shape when exposed to secure programming practices [amazon.com].
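    Java can't be made to smash the stack the way gets() or strcpy() can, but the underlying lesson translates. A hedged sketch (method names invented) contrasting the strcpy shape, which blindly assumes the input fits, with the strncpy/snprintf shape, where the destination size is part of the contract:

```java
// The gets()/strcpy() lesson in Java terms. In C the unchecked shape
// silently corrupts memory; in Java it throws -- either way it's a bug
// waiting for the right input.
public class BoundedCopy {
    // Unsafe shape: assumes the input fits, like strcpy.
    static char[] unchecked(char[] dst, String src) {
        for (int i = 0; i < src.length(); i++) dst[i] = src.charAt(i);
        return dst; // throws ArrayIndexOutOfBoundsException if src is too long
    }

    // Safe shape: like strncpy/snprintf, the destination size is part of
    // the contract, and oversized input is truncated explicitly.
    static char[] bounded(char[] dst, String src) {
        int n = Math.min(dst.length, src.length());
        for (int i = 0; i < n; i++) dst[i] = src.charAt(i);
        return dst;
    }

    public static void main(String[] args) {
        char[] buf = new char[4];
        System.out.println(new String(bounded(buf, "overflow!"))); // over
    }
}
```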
    • I believe the opposite.
      I am IN school where they mainly teach us abstract concepts and not specific programming languages.
      What good would it be to learn a specific programming language in the rapidly changing technological world? Keep your Cobol and teach me recursive binary tree algorithms.
      If you can't go and teach yourself how to apply these concepts to specific languages, either you aren't meant to be coding or your school didn't teach the second step of algorithms, which is application.
      Basically, since we were mainly taught algorithms, our assignments would be in a random language like C, Java, etc., and we would code portions of these algorithms. Sink or swim, but it worked: I know everyone in my graduating class can code in any language efficiently and effectively.
      *go waterloo*
  • If you are in a very small team, you obviously need to be aware and conscious of the system as a whole, but on a larger team, if everybody has a view of the whole project with their own vision, everybody goes in a different direction. It is better in this case to have each individual or group be concerned with following the specifications for their individual component, and to have lead programmers/designers integrate as needed. I've never worked in embedded systems, so I can't tell you if that holds up.
  • My undergrad studies for computer science included fundamental understanding of gates and boolean logic. We also studied some of the microcode that goes into processors. So we went from the level of the gates to simple chips to the basics of processors to assembly to operating systems to applications. It wasn't taught in that order from the ground up. Algorithms were studied at the same time as chips, but it worked out well. Anyone getting a comp sci degree from Pace U. in NY has at least a fundamental understanding of computers from the ground up. However I have a coworker (developer) with an electrical engineering degree. He has much better knowledge of the electronics from beginning to end and he's a great programmer.

    I find knowing how things work from the bottom up makes me better at building on top. I find the most ignorant and least innovative developers to be those with only a high level understanding of how the underlying software and hardware works.
  • Do Slashdot readers think that the theories used to teach (and learn) programming lead to programmers that tend to approach problems with a 'black box', or 'virtual machine' mentality without considering the entire system? That, in and of itself, would explain a lot of security issues, as well as things as simple as user interface nightmares. Comments?"
    But isn't that exactly how we are able to use abstraction and build large, complex systems? A good programmer and engineer is naturally going to want to know how their piece of the system fits in with the overall machine, but usually it simply isn't practical.

    The best case scenario will always be for each member of a development team to understand every nuance of the system and every detail of its interface with the underlying hardware. However, it simply isn't practical (and for some systems it might not even be possible).
  • I think part of what makes many of us go into an engineering career is the curiosity that requires that we have to have a look under the hood. I never was a very good Lisp programmer until I wrote my own interpreter in C. That gave me the knowledge to write more efficient Lisp code.

    Every Java programmer should at least look at the source for the Java base classes, and ultimately should understand the VM. C++ programmers should at least read "Inside The C++ Object Model." C/C++ programmers should peek at the assembly their compiler creates. Python or Perl programmers that have a good understanding of the internals of their interpreters are going to write better code.

    All these abstractions are there so you don't have to sweat details all the time. But this shouldn't be misconstrued as "never."

  • Do Slashdot readers think that the theories used to teach (and learn) programming lead to programmers that tend to approach problems with a 'black box', or 'virtual machine' mentality without considering the entire system?

    Having a human brain leads programmers to approach problems with a 'black box' or 'virtual machine' mentality.

    I don't think we were built for a natural 'big picture' view. We were built to understand our little piece of the African savanna from inside that box.

    All the 'big picture' stuff is doable, but not as naturally. We will always feel a little more comfortable inside the box.

    Humanity will always have to force itself to think outside the boxes we constantly make to aid our system of modeling reality through perception.
  • by kisrael (134664)
    Hrmm. Some of the philosophies of Unix and its revolutionary system of pipes tend to emphasize individual components, each doing its job well. (Though Perl as the swiss army chainsaw with sometimes surprisingly better performance has something to say on that...)

    I'm probably living in a dreamland, but I really think small teams, where people can realistically have a hand in all and therefore knowledge of all parts of the system, can do almost any software project. It seems to me that "mythical man month" scaling problems really start to attack productivity, even with medium size teams.
  • One side of the issue is that if you attempt to look at any sufficiently large and complex system it will overwhelm you with its complexity. The human mind can only deal with a certain amount of complexity at a time before overloading. That's the reason why object-oriented methodologies were invented, to attempt to chop up a large and complex problem into smaller and more manageable pieces, so you can deal with certain things as "black boxes" and move on to the bigger picture. Sort of like zooming in and out. A graphic artist would never think of working at a single zoom scale when editing a picture, she would zoom in for fine work and zoom out for an overall view. Treating things as black boxes is done so that you don't lose sight of the forest for the trees, not the other way around!

    But of course, as my own experience in embedded systems development and electronics work has taught me as well, it does no good to simply leave things as black boxes. You also have to know how the black box works on the inside before you can go on to treat it as a black box. I had to learn the ins and outs of semiconductor and transistor physics before I learned how to use logic ICs, which have these components as their basic building blocks, so that I'd understand the limits and quirks of these devices. I think the big problem we have is that people are generally unfamiliar with what the many black boxes they use actually look like on the inside, so if their system winds up eventually tickling limitations or quirks (which, as the complexity of the system they're building grows, becomes more and more likely), they have no idea what the hell is going on or what to do about it. In other words: too much zooming out, not enough zooming in, so you get work which has too many rough edges and not enough fine detail.

    Salon had a highly insightful article some years back about this very topic as it pertains to software engineering: "The Dumbing Down of Programming", by Ellen Ullman, Part One [salon.com] and Part Two [salon.com]. She talks about the way too much knowledge is disappearing into code, and the problems that causes.

  • My sense is that this does happen, but not usually because of any flaw in the programmers or engineers. More often, it's the result of management not giving more information than is necessary to complete the task.

    It's important to take the whole team, as a group, through the big picture. Even if it is just a short overview meeting, there is significant value in making sure that everyone knows that their assignment is part of a larger whole, showing how the pieces are intended to come together, and giving everyone a context for their individual bits.

    My experience is that being given all the additional info doesn't take too much time, doesn't overwhelm anyone, and produces far more usable results. Letting everyone work in a vacuum, the other extreme, tends to cause integration nightmares and lots of wild tangents that make sense ONLY in the context of one little bit while working against the overall goals.
  • by gnetwerker (526997) on Tuesday February 11, 2003 @02:06PM (#5281179) Journal

    I started my career (long ago, in a galaxy far away) developing embedded systems, and much later, when running an R&D lab, came to the conclusion that, excepting (importantly) user-interface design, embedded systems were the best crucible in which to learn the right balance between modularity and holism in systems design and implementation.

    It's easy for programmers who have only worked on PCs to lose sight of the notion that programs affect the world, but when you are controlling big machines that, improperly instructed, will destroy themselves and the people around them, you begin to think twice about your coding tricks, your testing, and the interaction of your component in the system as a whole.

    But there is an underlying assumption in the question that modular design and system holism are mutually exclusive, and I don't accept that either. I also except user-interface design, which is more sociology and psychology and neurology than computer science.

    You are correct, however, in supposing that security is particularly vulnerable.

    Here's one (true) story, which I will deliberately leave unattributed: a programmer is writing code to control the dual vertical bandsaw in a sawmill -- two huge saws, each 12 inches of high-tensile stainless steel with 3-inch teeth, stretched tight between two six-foot diameter wheels and running at 10,000rpm. A log is pulled on a chain through the middle, so a cut can be made on both sides. Logs enter the system, are measured with a laser scanner, and are queued (physically and in the control program) before entering the bandsaw.

    The old fart programmers used to simply store log data in an array of sufficient size to hold the maximum number of logs that could ever be in the system, but were cognizant of the problem of "phantom logs" when a log falls off the belt or otherwise leaves the system in an uncontrolled way. The clever young programmer decides to use newly-learned techniques of memory allocation and linked-list design, and build a replacement.

    During mill installation the system is tested and appears to run well. At the end of the shift, however, as the last log is about to be run through the system, the operator discovers that there is no data in the queue for the last log, but decides to run it anyway. The computer dereferences a null pointer, grabs garbage data, and tells the bandsaw to set to an impossible position.

    Because the mill is still being installed, the stops on the bandsaw have not been adjusted, and the saws set to position "0" -- and run into the chainguide in the middle. High-stress stainless at great speed meets six inches of fixed steel, and the saw blades explode, burying foot-long shards of stainless steel sawblades up to four inches deep in the walls of the mill, destroying the operator's booth, and causing tens of thousands of dollars damage to the mill.

    Whose fault was it? The operator, for running the phantom log? The hardware installation guys, for not setting the stops on the mill? Or the programmer, for not constraining the output of his program, testing more completely, and using simpler techniques? Answer: all of the above. Better modules would have forestalled the problem, and better systems holism would have forestalled it as well. A combination would have given an even better margin of error.

    This has led me to the following conclusion: in order to get a CS degree, every programmer must write code that will lower a 10-ton machine press at maximum speed to within inches of his chest, and then stop it. We would have more careful programmers if this were the case. If they went on to write security code, we would have fewer holes.

    gnet

    • Roman bridges (Score:4, Interesting)

      by giampy (592646) on Tuesday February 11, 2003 @02:38PM (#5281493) Homepage
      This reminds me of how the Romans used to test their bridges: they put the designer under the bridge while the entire legion marched over it.

      Of course, a bridge is a MUCH simpler thing than a program, but, hey, 2000 years later, all the bridges are still there !!!

    • by Baldrson (78598) on Tuesday February 11, 2003 @03:06PM (#5281842) Homepage Journal
      Bit serially, although Forth isn't the be-all and end-all of programming environments, it does have the elegant simplicity that should be sought by whole systems.

      One of the premiere embedded systems languages, Forth was invented by Chuck Moore [colorforth.com]. I like Chuck Moore's 1% Code Page [colorforth.com]. His introduction:

      I've studied many C programs in the course of writing device drivers for colorForth. Some manufacturers won't make documentation available, instead referring to Linux open source.

      I must say that I'm appalled at the code I see. Because all this code suffers the same failings, I conclude it's not a sporadic problem. Apparently all these programmers have copied each others style and are content with the result: that complex applications require millions of lines of code. And that's not even counting the operating system required.

      Sadly, that is not an undesirable result. Bloated code does not just keep programmers employed, but managers and whole companies, internationally. Compact code would be an economic disaster. Because of its savings in team size, development time, storage requirements and maintenance cost.

  • Some of you have complained that the Space Shuttle reference is gratuitous in this article.

    To you people, I must point out that in this post-columbine, post-9/11 world, those people who are able to leverage all available datum into a synergetic process are the most likely to accept success in their lives.
  • I think the question is confusing the "black box" concept with "design patterns". Unless you are programming assembly for Atari 2600 and need to pay attention to how many CPU cycles each instruction takes, there is no reason to consider underlying hardware (embedded issues aside). A black box in this classic sense is simply an abstraction for the zillion types of hardware ever made. The STDOUT file handle is a perfect illustration of this as it has migrated from physical devices to VT100 telnet sessions across the world. Design patterns, OTOH, can easily lead to "cargo-cult" programming, which is always bad.
  • by (trb001) (224998) on Tuesday February 11, 2003 @02:11PM (#5281222) Homepage
    Don't laugh, but this is one of the reasons why it's important to have solid requirements BEFORE you begin coding anything. Most projects don't, I know, but something as complicated as the space shuttle would need to be completely spec'd out beforehand. After proper requirements and specs are laid down, the programmer should then approach the system as if it were a black box...with a lot of restrictions.

    The idea behind black box development is that you don't need to know what the rest of the system does...your component takes input and delivers output. That's a Good Thing (tm). Requirements are what tell you how to design and implement your black box, ie, you can't have more than 1ms latency between input/output, you can't assume some system variable is going to be out there, you can't assume your process won't be interrupted. Given these sorts of requirements, your part of the system SHOULD be a black box...someone else should send you the inputs and know what kind of outputs they're getting out. Assuming you correctly followed the requirements (that's what QA testing is for), they know what they're getting.

    --trb
    • by Tony (765)
      The idea behind black box development is that you don't need to know what the rest of the system does...your component takes input and delivers output. That's a Good Thing (tm).

      This is a Good Thing (tm) only when the black boxes are true black boxes. The problem with treating software as engineering problems with Black Boxes is that there is no such thing, in software. This is the reason object oriented programming has not been the panacea we were promised.

      In construction engineering (architecture, for instance) the behavior of all the pieces is known beforehand. A steel I beam is a steel I beam (yeah, I know there are different strengths, but these are specified by the engineering firm). A steel I beam has only two APIs: welding, and riveting.

      In programming, every black box has unique characteristics. Even if two "black boxes" have the exact same API, the behavior of those routines is probably different. Even worse, in all non-trivial projects, connections among Black Boxes produce a complex system; the interrelations among those Black Boxes will change the behavior of the system, often in unexpected ways.

      The case for understanding all your Black Boxes is represented in one of the most dramatic engineering failures ever: the Tacoma Narrows bridge disaster. The bridge was well-designed. However, strong winds blowing down the narrows set up harmonics in the swaying of the bridge. This was a case of an engineering firm that did not understand the whole system, with devastating results.

      The problem with OO programming is it encourages the thinking that you only need to understand your component, and not the whole system. This is a Very Bad Thing (tm) in my book. There are too many Tacoma Narrows software projects out there already.

      Anyway, that's just my opinion. I could be wrong.
  • by praetorian_x (610780) on Tuesday February 11, 2003 @02:15PM (#5281258)

    Do Slashdot readers think that the theories used to teach (and learn) programming lead to programmers that tend to approach problems with a 'black box', or 'virtual machine' mentality without considering the entire system?

    Well, programming is, at its root, about controlling complexity. A good program (not that *I've* written one) will have sub-components within it that largely act as black boxes to one another. It is a great and rare skill to recognize where the boundaries are in your program and establish them early, to avoid painful refactoring later.

    In my experience, it is when something *isn't* a black box that things can get seriously fsked. "What? I set a global variable and now the app seg faults when I click that drop down?" Ahhh, not "black boxy" enough.

    My perspective is from a higher level than embedded though. Embedded is a whole different game, although the "controlling complexity" insight of higher level programming languages no doubt still applies (as far as it can go).

    Cheers, prat
  • by Hirofyre (612929)
    The correct level of abstraction for a project is often hard to find, even for very experienced programmers. Sometimes you have to raise the level of abstraction of an overall system, or you will never get to the point where you can move forward on your piece of the process. Generally, I've found that the problems lie in the areas where the pieces don't quite fit together properly; namely where one person's code doesn't follow the contract a second person was expecting. A lot of the time, even for mid-size problems, it would be impossible for one developer (or a team) to have an end-to-end understanding of the problem space.

    As far as I can tell, there is only one way to avoid the "black-box" problem, and that is to have one person code the whole thing, which is very likely infeasible. The further you get from "your" part of the system, the more abstract it is going to get. If your abstraction is faulty, there is going to be trouble, but I wouldn't say it was caused by treating the problem as a black box.
  • Part of the reason that the Big Picture difficulties crop up is that programmers are problem solvers. Their problem is "How Do I Do X".

    And so they write something that Does X.

    This goes wrong when the problem isn't "How Do I Do X", but is "How Do I Do X, Given Y". In these cases, Y may or may not be available to the programmer. The programmer may not understand that Y is important. The programmer might not be able to determine how Y applies, or how to get Y out of The Big Picture. Or the programmer may just be lazy and figure that someone else will take care of Y for them. Or the programmer decided to write the most generalized method for Doing X, so they can Do X Anywhere.

    Solving this problem of ignoring Y is going to take education: First, know that Y exists. Second, find Y. Third, code for Y. Finally, when Y is "important enough", recycling code from somewhere else won't cut it.
  • Most of the programming courses I've taken focus on teaching you the language, and some of the tricks on using the language.

    Didn't learn how to really program until I got into the real world and dealt with real problems.

    I think a mix of self-teaching and formal instruction yields the best results.

  • Frankly, I believe there is a lot of bad project management going on. That also applies to software development, not just integration projects. Usability issues much too often arise out of not spending enough time in prototyping and usability testing.

    When it comes to seeing the big picture, well, let's face it - in the corporate world, having an open source-like eyeball count on everything kills productivity. However, the people who do the initial design REALLY should spend some time making sure that their design will work. They also should be kind enough to give the programmers a slight briefing on what sort of project they are part of.
  • by dasmegabyte (267018) <das@OHNOWHATSTHISdasmegabyte.org> on Tuesday February 11, 2003 @02:23PM (#5281342) Homepage Journal
    The "Black Box" design theory abounds because of the freedom it offers programmers from the dark ages of having to know the underlying hardware intimately before anything could be accomplished. It's what allows programmers to devote all of their time to doing what matters rather than poring over volumes of errata and arcana.

    The reason Windows became so popular, for example, is because its API offered programmers a way to manipulate graphics without having to make graphics calls. Variation from driver to driver was of no concern, and shouldn't be -- that's an IT issue which can be repaired without redoing the entire application.

    And in a perfect world, there's no problem. If a driver hooks into an API properly and documents any disparity, then the black box theory holds true. Problem is, drivers aren't perfect. A lot of them are designed for bare-bones functionality, and only optimized as necessary (hence how Nvidia's still squeezing substantial horsepower out of my ancient GeForce GTS with every new driver release). Obscure hardware cases always cause trouble, which is why Dells are (sometimes) more reliable than "no name" machines with "better" hardware. Dell has the clout to make sure the drivers are as seamless as possible.

    What's the solution for embedded developers? Design and test the drivers in house, so the black box coders have a shoulder to cry on when hardware doesn't act properly. But it should not be the core developer's job to know what goes on with the hardware. That kind of thinking bloats budgets, increases the complexity of the project and ultimately the cost. Modularity, even though it makes things more difficult to map in total, makes things easier to deal with on a micro level. If the application works when unit tested but fails on the release machine, then it's the driver's fault. Much easier to fix than it is to perfectly replicate the release in your tests.

    Expecting EVERY software developer to be an electrical engineer as well is absurd unless you intend to pay them for both degrees. Better to keep it modular and put the pressure on the hardware abstractor to do a good job of catching the tiger's tail.
  • by mrs clear plastic (229108) <allyn@clearplastic.com> on Tuesday February 11, 2003 @02:29PM (#5281399) Homepage
    Here are some thoughts that go beyond programming and include engineering as well. And not just systems vs. black box, but concepts as well.

    Here are some random thoughts.

    Take the slide rule. Back in the days before desktops, calculators, and palmtops, we had slide rules to do division and multiplication. You slide the rule for the numerator over the denominator (I think, it's been so long). You then look at the result.

    The thing is, you can see how 'close' the result is to whatever you desire (in a circuit or system). You can intuit how close things are. You can easily 'play with the numbers' with a slide rule in some cases. Slide it a little to see what it would take to get the desired results. A teeny amount, a lot; whatever.

    With digital calculators, it's harder (for me) to see the changes visually. All you see is a quantitative value. I can't look at the physical distances on a slide rule and make inferences.

    I can remember doing the same intuiting with meters. In the days before digitization and computers, we had analog meters. A needle would point to the value (voltage, amperage, whatever). Often the 'movement' of the needle is almost more important than the actual value itself.

    Take the tuning of a final output circuit in a radio transmitter. You dip the plate and tune for proper power. With an analog meter, you can see the needle do a quick dip. Sometimes with a digital meter, you can miss the dip, especially if the circuit has a high Q value. The motion of the needle of the meter controls the speed at which I turn the various knobs.

    With a digital meter, I feel removed from the process of tuning.

    Monitoring the electrical service for a facility, whether it be a radio transmitter facility, or even a computer room; I am much more comfortable with an analog voltmeter and amp-probe. It's far easier for me to watch for hiccups (needles jumping rapidly or slowly) to indicate something is happening.

    I feel that all of these examples are important in my desire to be a part of the overall system, rather than being only a blind black box. I use my overall knowledge of what is happening in the system as a whole to get a 'feel' for what is happening right there and then.

    With only abstract figures and a blind black box interface, I would feel much alone and out of touch with the reality of the system.

    I think the same can be said about programming. In all of the projects I have been involved with, I have been fortunate enough to see the overall picture of the system at a high enough level to be able to be a 'part of the system' rather than a 'disconnected black box'. This is certainly true in my background in writing scripts to monitor the health of databases and operating systems.

    Mark
  • by wowbagger (69688) on Tuesday February 11, 2003 @02:36PM (#5281468) Homepage Journal
    I think the problem is not so much the "black box" mindset, but rather the perfect black box mindset.

    Being an EE who now does software design myself, I try to decompose a problem into smaller problems, and decompose the solution into smaller parts. However, I don't make the mistake of thinking that my smaller parts are each perfect - I try to ask "Now, if component X malfunctions, what effects will it have on this higher level assembly Y?"

    The problem is that many times CS folks are not taught that the system can be imperfect, so by exclusion they believe it to be perfect - one plus one will always come back two, disk writes will always succeed if there is enough space for them, and so on. Folks are not taught that sometimes 0.1 + 0.2 != 0.3 (rounding errors), that disks sometimes fail (sector not found - abort, retry, cancel), and so on.

    In Circuits 1, an EE-to-be is taught the idea of the perfect op-amp - infinite gain, infinite bandwidth, infinite possible output voltage, infinite input impedance. He is taught to use this model to analyze a circuit.

    He is then IMMEDIATELY taught that the model is BS, and starts to add to it - finite input impedance, finite gain, finite bandwidth, finite offset voltage, finite output impedance. The EE-to-be is taught to apply those non-ideal behaviors when needed, and taught to judge when they can be ignored.

    Sometimes I think the best thing in the world would be if CS and EEs had to work with robotics as part of their job. When they have to deal with sticky steppers, dust-clogged optics, and misfiring solenoids they will learn to be a bit more paranoid.
  • Absolutely! (Score:5, Interesting)

    by casmithva (3765) on Tuesday February 11, 2003 @02:39PM (#5281503)
    I've been quite frustrated over the years, interviewing recent college graduates whose software development abilities seem to be limited to problem-solving. They didn't know about requirements, design, configuration management, testing, lifecycles. They didn't put as much thought into how others would use their libraries or classes as they should've, eventually causing some serious redesign to be done to make overall integration easier. Only after a couple of years of having design documents ripped apart and pissed upon, having CM staff threaten them with dismemberment, having QA people file a ton of defect reports against their work, and having their phone ring in the wee hours of the night did they understand the bigger picture.

    I took a couple of CS courses in college as part of my Math major. They were full-blown CS courses, not courses that had been altered for us Math majors. And they were nothing more than problem-solving courses -- and the problems being solved were so utterly asinine that it was laughable. However, when I studied in Germany I took a CS practicum course where we were assigned the task of creating a graphics program in X Windows on SunOS 4. The class was divided into groups: GUI, backend algorithms, SCM, QA, and requirements and management. There were design sessions and reviews, unit and integration testing, etc, etc, etc. It's the closest I'd ever seen to the real world in academia. I've never heard of any American college or university offering such a course, and no one I've interviewed ever had such a course. That's not to say that it's not offered somewhere, but it just doesn't seem all that common. And that's a real shame.

  • Definitely! (Score:5, Insightful)

    by jhouserizer (616566) on Tuesday February 11, 2003 @02:40PM (#5281513) Homepage

    While there's definite benefits of treating software components as "black boxes", I agree with the asker of the question that there are some definite negative side-effects.

    For instance, we've got a couple of developers who just don't know how to work with the team, and figure that they can go sit in their dark cubes and code away at their component as a black box that will simply fit in with everybody else's stuff. Common problems that arise are:

    • Different logging schemes
    • Different configuration schemes
    • Different admin-alerting mechanisms
    • Components that don't match the design pattern that all other components follow - thus making them harder to understand.
    • Components that expect some type of "global" data to exist, that simply doesn't.
    These issues have led to no end of grief for those of us who do communicate with each other about what they're doing.

    Abstraction is great, but you still need to make sure everything fits together correctly, and not just at the interface level.

    • Re:Definitely! (Score:3, Insightful)

      by SWPadnos (191329)
      I'm not sure that this problem is with the black box mentality - it seems to be with those coders.

      Logging scheme: Should be part of the interface definition (the format of log messages should be part of the spec). The logging functionality should be another black box module (with a suitable interface for all portions of the project).

      Configuration scheme: Should also be part of the spec. If it's done wrong, then the module doesn't meet spec, and the programmer(s) should be reminded that their paychecks are dependent on writing modules to spec.

      Admin Alerting: Like logging, if there is a specific format / function to use, then this should be part of the spec.

      Design pattern: The spec should incorporate any company coding standards (by reference, if it's too long :). It is then up to the programmers to follow this standard. (I see that as a different issue than the black box thing, though)

      Global data: This should never be touched, unless the access is defined in the spec.

      It looks to me like:
      1) the specs are not complete; and 2) the programmers in question don't adequately communicate when they encounter problems coding to the spec. (It's perfectly valid to discuss changes to a black box specification if problems are encountered.)
  • by Minna Kirai (624281) on Tuesday February 11, 2003 @02:50PM (#5281648)
    If you're really an engineer, then you shouldn't have any trouble seeing the big picture.

    Unlike, say, managers or interns, Engineers are trained to think through all the consequences of an action [kb9mci.net].

    If you can't predict the effects of your software code on not just the rest of the project, but the economy and society as a whole, then I guess you've been slacking off.

    (nobody flame me without reading the cartoon)
  • by tmoertel (38456) on Tuesday February 11, 2003 @03:18PM (#5281956) Homepage Journal
    As systems become increasingly complex, the practicality (and even the feasibility) of treating them as monolithic entities becomes a legitimate concern. One method of addressing this concern is via abstraction, where suitable models are used as representations of their real-world counterparts. Indeed, abstraction is one of the fundamental principles of engineering and has been used for thousands of years to great success in the construction of complex systems.

    For example, in electrical engineering, it is common to simplify complex circuits by breaking them into smaller circuits, analyzing each of the smaller circuits, and then replacing the smaller circuits with appropriate black-box equivalents. With these black boxes in place, the original circuit becomes much easier to understand.

    However, as in any modeling exercise, it is crucial to choose appropriate models and to understand the limitations of the models chosen. While a simple resistance model might be a good substitute for DC and low-frequency circuits, it would be inappropriate as a substitute for higher-frequency circuits where capacitance effects come into play.

    So, returning to the original poster's questions:

    Do Slashdot readers think that the theories used to teach (and learn) programming lead to programmers that tend to approach problems with a 'black box', or 'virtual machine' mentality without considering the entire system?
    Yes, in these days too much stock is placed in the idea of letting somebody else worry about the complexity. This is especially so in mainstream industry, where one of the key selling points of software development systems is that with Magic DevStationPro X you no longer have to worry about the details but instead just use some brilliant Wizard or API to work at "a higher level." This applies not only to software development but also to user-level domains such as operating systems and applications. For example, a common notion in industry is that by using Microsoft operating systems on servers, administrators no longer need to know how to administer servers; rather, they need only know how to use the GUI administration tools. In other words, the pitch is that you need not concern yourself that the GUI tools present a mere model of the underlying system. Let the model be the system and reap the rewards.

    That's hogwash. The model approximates the system, no more. In engineering, this is well understood, and I suspect that in good CS programs the same can be said.

    Abstraction is a powerful tool. It is widely applicable, effective, and well founded -- when used appropriately. It probably ought to be used more often. Nevertheless, it is not a substitute for rational thought. Nor is it a replacement for being responsible for the entirety of the systems we build.

    That, in and of itself, would explain a lot of security issues, as well as things as simple as user interface nightmares. Comments?
    It would certainly explain some of these problems, but I suspect that far more errors are the result of interface errors. One of the tools that goes with abstraction is composition -- breaking things into pieces, treating the pieces individually (at fine granularity), and combining the pieces (at a larger granularity) to yield a system. The risk of combining pieces is that in order to put them together properly, the boundaries where they coincide -- their interfaces -- must be well understood and compatible. Since the individual pieces make natural units for delegation, pieces are often assigned to different people who may have slightly differing understandings of the boundary conditions. As a result, "interface mismatches" are a significant source of error in software systems.

    Certainly abstraction plays a role here. Each piece can be thought of as a black-box model. The limitations of that model, and the assumptions under which the model is valid, are certainly important characteristics of the piece's interface with the world. Yet, these characteristics are frequently neglected in documentation and often go uncommunicated across delegation boundaries. This sad fact makes interface mismatches an especially harmful side effect of using abstraction and composition in common software development practice.

    Nevertheless, abstraction is a powerful and genuinely useful tool. It is also a necessary tool if we are to build increasingly complex systems. Like any tool, its uses and limitations must be understood if it is to be applied effectively. Thus, getting back to the original poster's question about whether the use of black boxes is harmful, my answer is, No.

    The problem isn't abstraction, the problem is improper use of abstraction.

  • by Gleef (86) on Tuesday February 11, 2003 @05:24PM (#5283361) Homepage
    I studied Computer Science in College, and currently work as a Programmer/Analyst for a non-profit organization (Desktop, Web and Server-based systems). Yes, all of the above encourage a "black box" model to design and coding. Furthermore, I am guilty of perpetuating this to the people forced to listen to me blather, and will continue to do so until I see a better way.

    I understand that it hides some bugs. I don't like this. On the other hand, we can never have enough staff to have people who are expert not only on each system used but on each interface between those systems, as a well-integrated system would require.

    So what we do is take some premade components (eg. hardware, OS kernel, C library, certain widget libraries, web server, etc), and say "OK, assume these work according to these specifications, we're going to work on adding a piece that does this". When the premade component deviates from the specifications, we fix the component or update the specs.

    As much as possible, we make use of open standards and free software so that if we need to, we can open up the black box and fix something. However, the more we can assume that a component is a black box that will just do what it's supposed to do, the faster we can develop the "interesting" bits.

    The bottom line for us is to manage complexity. The more complexity that we can abstract away, the faster we can work on the custom stuff unique to our organization. A "black box" model works well for us, but yes, it does cause some bugs that need to get cleaned up after the fact. Most organizations I've seen make a similar design choice (or blunder into it blindly), and most schools teach their courses with a similar mindset.

    If we were to develop a truly critical system, one that lives or big bucks depend on, we ought to take a different approach for that system, but we aren't likely to work on such a system for a while.
  • by revbob (155074) on Tuesday February 11, 2003 @06:04PM (#5283648) Homepage Journal
    My business card says "Embedded Software Engineer" and my current job is member of the software architecture team for the Common Operating Environment of perhaps the largest System of Systems project ever.

    I see preliminary designs for databases of objects that magically exist in pure object-land (i.e., they don't actually do anything) and yet somehow the work gets done.

    By training and disposition, whenever I don't smell silicon, I become deeply suspicious, so my first reaction is that such designs are nonsense. Perhaps it will not always be this way -- for instance, perhaps the designers of those very systems will get around to saying who actually does something and how they do it.

    But I've grown to realize that I must accept a certain amount of nonsense (subject always to good engineering judgment and a demonstration that some of these fanciful schemes can actually work) because the "how" absolutely must not enter into the design.

    If I have to say to someone writing the software for communicating between commanders and various kinds of "things" (I'm going to apply some severe declassification here) that to talk to a big orange truck you have to stick a 32-bit word into a mailbox interrupt register at such-and-such an address, while to talk to a little red truck you have to send "HELLO, WORLD!" to port 80, they're going to say to me, "Just what the hell have you been doing in your software architecture group for the past six months?"

    This is a gross example -- but the less obvious examples are nearly as bad, from my point of view.

    For instance, since one of the requirements for this SoS is that communications not be of the form, "Let's tell the enemy what we're going to do", and since communications security is best done by people who know what they're doing, we will not train every engineer to manage communications security everywhere in his application, but rather layer the architecture so that, to the greatest extent possible, engineers will not even know it's happening.

    Indeed, I expect the architecture our team develops to survive several iterations of "how"s. The first implementation better not work as well as the final implementation, or somebody's wasting money.

    In short, we'll use elementary principles of engineering in order to define common objects that communicate with one another in precisely defined ways at a level of abstraction that's appropriate for the objects themselves. That some objects will have precise real-world counterparts (e.g., big orange truck) is merely evidence that the architecture is sane. And if some of those objects have functions associated with them, that's because in the real world functions aren't performed by spirits and demons, but by (now let's not always see the same hands) objects!

    This ain't rocket science, people. If you've written an API that you can't jack up, haul out the Yugo that's underneath, and replace it with a Viper with no one the wiser except the customer who appreciates how fast he's going, you've screwed up. You've let the "how" creep into your "what".
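    To make the Yugo/Viper test concrete, here's a hedged sketch (truck names and transports invented for illustration): the "what" is a single abstract interface, and each "how" -- a mailbox register write or a network send -- lives behind it, invisible to callers.

    ```python
    from abc import ABC, abstractmethod

    class TruckLink(ABC):
        """The 'what': commanders send a message to a truck."""
        @abstractmethod
        def send(self, message: bytes) -> None: ...

    class MailboxTruckLink(TruckLink):
        """One 'how': stick a 32-bit word into a mailbox register."""
        def __init__(self):
            self.register = None
        def send(self, message: bytes) -> None:
            # In real hardware this would be a memory-mapped write;
            # here we just record the word.
            self.register = int.from_bytes(message[:4], "big")

    class SocketTruckLink(TruckLink):
        """Another 'how': push bytes out over a network port."""
        def __init__(self):
            self.sent = []
        def send(self, message: bytes) -> None:
            self.sent.append(message)

    def order_truck(link: TruckLink, order: bytes) -> None:
        # Application code sees only the 'what'; jacking up the API
        # and swapping the Yugo for the Viper changes nothing here.
        link.send(order)

    orange_truck = MailboxTruckLink()
    order_truck(orange_truck, (1234).to_bytes(4, "big"))

    red_truck = SocketTruckLink()
    order_truck(red_truck, b"HELLO, WORLD!")
    ```

    `order_truck` is the same for both trucks; only the link objects know their own plumbing.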

    (Hoping some people will return my phone calls and answer their email so I can stop talking about this and get back to doing it).

  • by shylock0 (561559) on Tuesday February 11, 2003 @11:42PM (#5285388)
    A while ago I submitted to Ask Slashdot about command line/GUI interfunctionality (you can look up the post yourself; it's fairly old, but look through my info and you'll probably find it).

    Anyway, I think that the issue of the GUI is a great example. Programmers got carried away with the GUI, and now applications and OSes are completely over-GUIed. The mouse is much, much slower than the keyboard when it comes to many tasks. I use graphic design programs on a regular basis, and I would give an arm and a leg to have a quick and easy command line interface in, say, Adobe Illustrator, for precise object manipulation. Same goes for Photoshop. AutoCAD and other programs have a decent implementation of the CLI, but it could get much better.

    I would love to see programmers get out of the object-oriented point-and-click mode that they've been stuck in since the invention of the original Macintosh.

    GUIs are great for representing data, and they are great for the visual manipulation of data. But visual manipulation is often imprecise. For precise data manipulation, the CLI is still necessary: clicking through a menu and two dialog boxes to finally find the text box that rotates an object by 20 degrees, adds a second column to the page, or fixes the page margins is absolutely ludicrous. There should be a simple (preferably standardized) command line that's accessible from all applications. Remember the ~ in the original Quake? That was a huge step forward. We need it in more applications. How much productivity has been lost by over-mousing? -Shylock0

    Questions and comments welcome. Flames ignored. Post responsibly.
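    Shylock0's per-application command line could be sketched in a few lines (commands, state fields, and the dispatch scheme below are all invented for illustration, not any real application's API): each one-line command maps directly to a precise operation on the document.

    ```python
    # Hypothetical document state, just for illustration.
    state = {"rotation": 0.0, "columns": 1, "margin": 1.0}

    def run_command(line: str) -> None:
        """Dispatch a one-line command to a precise operation --
        'rotate 20' instead of a menu and two dialog boxes."""
        cmd, *args = line.split()
        if cmd == "rotate":
            state["rotation"] = (state["rotation"] + float(args[0])) % 360
        elif cmd == "columns":
            state["columns"] = int(args[0])
        elif cmd == "margin":
            state["margin"] = float(args[0])
        else:
            raise ValueError(f"unknown command: {cmd}")

    run_command("rotate 20")
    run_command("columns 2")
    ```

    The win is exactness: "rotate 20" means 20 degrees, every time, with no clicking through to find the right text box.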

Truly simple systems... require infinite testing. -- Norman Augustine
