Programmers and the "Big Picture"?

FirmWarez asks: "I'm an embedded systems engineer. I've designed and programmed industrial, medical, consumer, and aerospace gear, and I was engineering manager at a contract design house for a while. The recent thread regarding the probable encryption box aboard the Columbia brought to mind a long-standing question. Do Slashdot readers think that the theories used to teach (and learn) programming lead to programmers who tend to approach problems with a 'black box', or 'virtual machine', mentality, without considering the entire system? That, in and of itself, would explain a lot of security issues, as well as things as simple as user interface nightmares. Comments?"

"Back working on my undergrad (computer engineering) I remember getting frustrated at the comp-sci profs that insisted machines were simply 'black boxes' and the underlying hardware need not be a concern of the programmer.

Of course, in embedded systems that's not the case. When developing code for a medical device, you've got to understand how the hardware responds to a software crash, etc. A number of Slashdot readers dogmatically responded with "security through obscurity" quotes about the shuttle's missing secret box. While that may have some validity, it does not respect the needs of the entire system, in this case the difficulty of maintaining keys and equipment across a huge network of military equipment, personnel, and installations."

  • In general... yes (Score:4, Interesting)

    by Anonymous Coward on Tuesday February 11, 2003 @01:37PM (#5280879)
    I don't have as much experience as some, but I've always wondered about coders who confine their thinking to the 'world' their code runs in. It overlaps, I think, with the problems of sysadmins who leave systems/gateways/firewalls and whatnot wide open to the world.

    If a coder isn't ignoring the fact that their code won't always run in the exact same shell as theirs, they're ignoring that it won't always run on the exact same OS, or the exact same network. Tragically, when it breaks, it can break BIG.

    Note that I also don't have enough experience to offer a solution other than "get a clue!" Taking notice of these possibilities is extra work until you embed it in your habits.
  • by Dthoma ( 593797 ) on Tuesday February 11, 2003 @01:45PM (#5280962) Journal
    Programming classes may encourage a 'black box' approach to programming, depending on what language you use. It all comes down to how high-level the language is; if you're using PHP, chances are you won't be worrying nearly as much about your system's hardware as you would if you were using C, assembler, or machine code.
  • by jj_johny ( 626460 ) on Tuesday February 11, 2003 @01:45PM (#5280964)
    I think that a programmer with a black-box mentality is usually going to be involved in a failed program. I have run into so many programmers who know nothing of the many parts that their program touches. They seem to believe that their software does not work within a wider system and a wider world.

    The problem with these programmers is that they rarely understand what can and does go wrong in the outside world. It always amazes me that there are people out there who assume that everyone has a 100BaseT Ethernet hub between the front end and the back end, or make other stupid assumptions.

    The issue that crops up most when programmers think in black-box terms is that today's software is not spec'd out well enough, so the end user does not get what they wanted, and the programmer did not solve it by asking. Too often the problem is very fuzzy, and thus the programmer is there to help clarify, not just implement.

    Without a well-rounded programmer (or his/her boss) looking at the overall system, you will wind up with chatty, buggy applications that are what the user asked for but not what they needed.

  • bah (Score:1, Interesting)

    by Anonymous Coward on Tuesday February 11, 2003 @01:48PM (#5280999)
    We need new paradigms, and more best practices. We need a design-for-system mentality and a design-for-integration production engine.

    Or just realize that spaceflight is risky, and it's not always the great unknown that will bring us down.

    I'm not advocating carelessness, just pointing out that these space machines we build tend to last 15 years between failures and are so complicated they make the MS Windows operating system look like Lincoln Logs. REAL engineers built that sucker, and they aren't infallible, but they are thorough.

    I'd bet my life on their processes any day.

  • Re:Probably (levels) (Score:3, Interesting)

    by $$$$$exyGal ( 638164 ) on Tuesday February 11, 2003 @01:51PM (#5281029) Homepage Journal
    Programming Levels:
    1. Microsoft Frontpage
    2. Raw HTML
    3. CGI/PHP/etc.
    4. Servlets/Mod-perl/etc.
    5. Object-Oriented black boxes
    6. Documented API's
    7. Public Documented API's
    8. Performance
    9. "The Big Picture" - Architects

    --sex [slashdot.org]

  • by calebp ( 543435 ) on Tuesday February 11, 2003 @01:52PM (#5281036) Homepage
    I concur that there is a lot of this kind of naivete out there, but on the other hand, there are always a few who will go above and beyond. While my schooling tends to focus on a more abstract approach with emphasis on OOP, I have also started working on embedded systems in my own time.

    It seems apparent enough to me that any passionate CS student will not be satisfied with a mystical understanding of computer architecture, and will in turn educate themselves. I propose, then, that any kind of 'black-box' mentality is more a reflection of the student's drive than of their education.
  • Not yet, anyways (Score:3, Interesting)

    by captredballs ( 71364 ) on Tuesday February 11, 2003 @01:53PM (#5281044) Homepage
    Is black box programming a pipe dream? I wouldn't go that far, as software engineering/comp-sci is a relatively new "science". At any rate, I know that I am very reliant on knowledge of the underlying platform that my code is running on. When a piece of software (especially one that I didn't write) doesn't work, I often resort to tools like truss/strace, lsof, netcat, /proc, etc. to help me determine what is going on "under the hood". I can figure out what ports, files, dlls, and logs the software is using in a matter of seconds, instead of resorting to a debugger or printf's.

    I'm no superstar engineer, but I find this methodology (my window into the black box) so valuable that I'm often frustrated by colleagues who refuse to learn more about an OS/VM/interpreter and make use of it. It is also what most frustrates me about troubleshooting in Windows.

    While it's true that I don't know much about Windows, I get the feeling that the kind of observation tools that are so common on unix-ish machines aren't quite so prominently available on winderboxen. Sure, you can figure out a lot about a problem using MSDEV (what I remember from college, where VC++ wouldn't stop opening every time Netscape crashed), but it isn't available on ANY machine that I ever troubleshoot.

    Hell, even when I'm programming java I use truss to figure out what the hell is wrong with my classpath ;-)
  • Re:Probably (Score:5, Interesting)

    by ackthpt ( 218170 ) on Tuesday February 11, 2003 @01:59PM (#5281113) Homepage Journal
    Most programmers who are going to come across a "black box" have enough experience to be able to code for the situation. Isn't that skill a trait of a good programmer?

    I think it's more than a skill; it's an attitude. I've encountered a number of programmers (just out of school/training) who are oblivious to external concerns, including interface design (traditionally what users complain about most, and where programmers lack any standard to follow). Generally it takes little effort to break programs written by very skilled programmers who are blind to anything outside their scope. I was probably as bad when I first started. Recently, though, an analyst complained angrily that I went beyond the scope of the project by including an error/warning log (most likely because the errors/warnings accounted for all the untrapped logic and revealed how incomplete the spec was, and how little the analyst, and some of the higher-ups, knew of the business function). I felt there were too many things unaccounted for and added the log; when it produced 1,000+ entries, things got a little heated. I stuck to my guns, though, and still see a general lack of interest in reviewing why there are gaps in the spec or knowledge (by the very people who should know).
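
    To make the "untrapped logic" log concrete: a minimal, hypothetical sketch in Java (the order types, queue names, and logger setup are invented for illustration, not the poster's actual system) of a default branch that records every case the spec never accounted for instead of silently swallowing it:

        import java.util.logging.Logger;

        class OrderRouter {
            private static final Logger LOG = Logger.getLogger("spec-gaps");

            // Hypothetical routing rule; the order types are invented.
            String route(String orderType) {
                switch (orderType) {
                    case "RETAIL":    return "queueA";
                    case "WHOLESALE": return "queueB";
                    default:
                        // The spec claimed these were the only two cases.
                        // Logging the gap, rather than dropping it, is what
                        // surfaces how incomplete the spec really is.
                        LOG.warning("untrapped order type: " + orderType);
                        return "queueManualReview";
                }
            }
        }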

  • by gnetwerker ( 526997 ) on Tuesday February 11, 2003 @02:06PM (#5281179) Journal

    I started my career (long ago, in a galaxy far away) developing embedded systems, and much later, when running an R&D lab, came to the conclusion that, excepting (importantly) user-interface design, embedded systems were the best crucible in which to learn the right balance between modularity and holism in systems design and implementation.

    It's easy for programmers who have only worked on PCs to lose sight of the notion that programs affect the world, but when you are controlling big machines that, improperly instructed, will destroy themselves and the people around them, you begin to think twice about your coding tricks, your testing, and the interaction of your component in the system as a whole.

    But there is an underlying assumption in the question that modular design and system holism are mutually exclusive, and I don't accept that either. I also except user-interface design, which is more sociology and psychology and neurology than computer science.

    You are correct, however, in supposing that security is particularly vulnerable.

    Here's one (true) story, which I will deliberately leave unattributed: a programmer is writing code to control the dual vertical bandsaw in a sawmill -- two huge saws, each 12 inches of high-tensile stainless steel with 3-inch teeth, stretched tight between two six-foot-diameter wheels and running at 10,000 rpm. A log is pulled on a chain through the middle, so a cut can be made on both sides. Logs enter the system, are measured with a laser scanner, and are queued (physically and in the control program) before entering the bandsaw.

    The old-fart programmers used to simply store log data in an array of sufficient size to hold the maximum number of logs that could ever be in the system, but were cognizant of the problem of "phantom logs", when a log falls off the belt or otherwise leaves the system in an uncontrolled way. The clever young programmer decides to use newly-learned techniques of memory allocation and linked-list design, and builds a replacement.

    During mill installation the system is tested and appears to run well. At the end of the shift, however, as the last log is about to be run through the system, the operator discovers that there is no data in the queue for the last log, but decides to run it anyway. The computer dereferences a null pointer, grabs garbage data, and tells the bandsaw to set to an impossible position.

    Because the mill is still being installed, the stops on the bandsaw have not been adjusted, and the saws set to position "0" -- and run into the chainguide in the middle. High-stress stainless at great speed meets six inches of fixed steel, and the saw blades explode, burying foot-long shards of stainless-steel sawblade up to four inches deep in the walls of the mill, destroying the operator's booth, and causing tens of thousands of dollars in damage to the mill.

    Whose fault was it? The operator, for running the phantom log? The hardware installation guys, for not setting the stops on the mill? Or the programmer, for not constraining the output of his program, not testing more completely, and not using simpler techniques? Answer: all of the above. Better modules would have forestalled the problem, and better systems holism would have forestalled it as well. A combination would have given an even better margin of error.
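
    To make the lesson concrete, here is a minimal sketch in Java -- emphatically not the original mill code; the limits and names are invented for illustration -- of a queue consumer that fails to a safe position on an empty queue and clamps every output to the hardware's physical limits:

        import java.util.ArrayDeque;
        import java.util.Deque;

        class SawController {
            // Assumed physical limits of the saw gap, in millimeters;
            // the real mill's numbers are unknown to me.
            static final int SAW_MIN_MM = 50;
            static final int SAW_MAX_MM = 900;

            private final Deque<Integer> logQueue = new ArrayDeque<>();

            void enqueueLog(int gapMm) {
                logQueue.addLast(gapMm);
            }

            // Returns a safe setting even when the queue holds no data for
            // the log being fed -- the "phantom log" case in the story.
            int nextSawSetting() {
                Integer gap = logQueue.pollFirst();
                if (gap == null) {
                    return SAW_MAX_MM; // fail wide open, never at position 0
                }
                // Clamp so garbage data can never command an impossible position.
                return Math.max(SAW_MIN_MM, Math.min(SAW_MAX_MM, gap));
            }
        }

    Either check alone would likely have prevented the crash; together they give the margin of error described above.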

    This has led me to the following conclusion: in order to get a CS degree, every programmer must write code that will lower a 10-ton machine press at maximum speed to within inches of his chest, and then stop it. We would have more careful programmers if this were the case. If they went on to write security code, we would have fewer holes.

    gnet

  • It's a dichotomy (Score:1, Interesting)

    by Progman3K ( 515744 ) on Tuesday February 11, 2003 @02:12PM (#5281238)
    You must consider the implementation, i.e. real-world limitations and requirements/trade-offs, otherwise your solution will not be acceptable; but if you go too far in this respect, you'll create a system almost no one else will be able to comprehend and maintain.
    On the other hand, if you go fully elegant (applying every theory that might be applicable) in the construction, you'll wind up exhausting processing power/memory, etc.
    That's where judgement comes in: knowing where to cut, and specifically what to document to give maintenance programmers a leg up on whatever shortcut you engineered to accomplish the feat.
    Make sense?
  • by sconeu ( 64226 ) on Tuesday February 11, 2003 @02:19PM (#5281311) Homepage Journal
    It is always amazing to me that there are people out there that assume that everyone has a 100BaseT Ethernet hub between the front end and the back end or other stupid assumptions.

    Actually, I've found myself doing the reverse several times. For many years, I worked in what I guess would be semi-embedded systems. We did special purpose computers for the military. The thing was, we had our own RTE, and I got into the habit of coding for that, and assuming that the target environment essentially had nothing.

    Now that I'm coding for standard OSen (Linux), I find it hard to get used to the concept that it's already there and I don't have to roll my own.

    Don't get me wrong, I believe in reuse. I think it's wonderful to have the tools. It's just difficult to "rewire" myself after 15 years of a particular mindset (but I'm working at it!).
  • by MCZapf ( 218870 ) on Tuesday February 11, 2003 @02:25PM (#5281363)
    Yes, not everyone is a "big picture" person. I myself am a big picture person, and I've found that it has hampered my ability to be productive at a lower level, such as coding, because I keep worrying about larger issues in the development.

    On the other hand, if you aren't aware of at least some of the big picture, you may end up doing things with consequences that you didn't anticipate.

  • by (trb001) ( 224998 ) on Tuesday February 11, 2003 @02:27PM (#5281375) Homepage
    Without a well rounded programmer looking at the overall system (or his/her boss)

    This is what the lead programmer/designer or the PM is for. Depending on the project, you should have one of these for each section of the project. If the project is sufficiently large, have one LP for each sub-section and have them report to a primary LP. The primary should know how all the subs interface, and the subs should know how every component under them interfaces together. There are few projects I've ever seen that required more than a few LPs (and I've worked on projects with 250+ developers), because they worked in a multi-tiered environment...each LP knew their own section.

    Individual programmers need to look at things as a black box; it will make them much more efficient. Granted, you need to have sufficient requirements for them to do this effectively (very, very few projects have these). Ideally, programmers shouldn't even be hired until at least the first draft of requirements is out. The LP should be God before that, dictating what's doable, what's feasible, and what makes sense from a technical perspective. He then needs to hire on programmers who can produce what he's envisioned. It amazes me the number of projects that staff up before requirements are out...how can you effectively staff up if you don't know what your staff is going to be doing? That's why sub-contractors are important: they can be staffed up anytime and (should!) have domain knowledge of what you're doing.

    --trb
  • by mrs clear plastic ( 229108 ) <allyn@clearplastic.com> on Tuesday February 11, 2003 @02:29PM (#5281399) Homepage
    Here are some thoughts that go beyond programming and include engineering as well. And not just systems vs. black boxes, but concepts as well.

    Here are some random thoughts.

    Take the slide rule. Back in the days before desktops, calculators, and palmtops, we had slide rules to do division and multiplication. You slide the rule for the numerator over the denominator (I think; it's been so long). You then look at the result.

    The thing is, you can see how 'close' the result is to whatever you desire (in a circuit or system). You can intuit how close things are. You can easily 'play with the numbers' with a slide rule in some cases: slide it a little to see what it would take to get the desired results. A teeny amount, a lot; whatever.

    With digital calculators, it's harder (for me) to see the changes visually. All you see is a quantitative value. I can't look at the physical distances on a slide rule and make inferences.

    I can remember doing the same intuiting with meters. In the days before digitization and computers, we had analog meters. A needle would point to the value (voltage, amperage, whatever). Often the 'movement' of the needle is almost more important than the actual value itself.

    Take the tuning of a final output circuit in a radio transmitter. You dip the plate and tune for proper power. With an analog meter, you can see the needle do a quick dip. With a digital meter, you can sometimes miss the dip, especially if the circuit has a high Q value. The motion of the meter's needle controls the speed at which I turn the various knobs.

    With a digital meter, I feel removed from the process of tuning.

    Monitoring the electrical service for a facility, whether it be a radio transmitter site or even a computer room, I am much more comfortable with an analog voltmeter and amp-probe. It's far easier for me to watch for hiccups (needles jumping rapidly or slowly) indicating something is happening.

    I feel that all of these examples are important to my desire to be a part of the overall system, rather than being only a blind black box. I use my overall knowledge of what is happening in the system as a whole to get a 'feel' for what is happening right there and then.

    With only abstract figures and a blind black-box interface, I would feel very much alone and out of touch with the reality of the system.

    I think the same can be said about programming. In all of the projects I have been involved with, I have been fortunate enough to see the overall picture of the system at a high enough level to be a 'part of the system' rather than a disconnected 'black box'. This is certainly true of my background in writing scripts to monitor the health of databases and operating systems.

    Mark
  • Roman bridges (Score:4, Interesting)

    by giampy ( 592646 ) on Tuesday February 11, 2003 @02:38PM (#5281493) Homepage
    This reminds me of how the Romans used to test their bridges: they put the designer under the bridge while marching over it with the entire legion.

    Of course, a bridge is a MUCH simpler thing than a program, but, hey, 2000 years later, all the bridges are still there!!!

  • Absolutely! (Score:5, Interesting)

    by casmithva ( 3765 ) on Tuesday February 11, 2003 @02:39PM (#5281503)
    I've been quite frustrated over the years, interviewing recent college graduates whose software development abilities seem to be limited to problem-solving. They didn't know about requirements, design, configuration management, testing, or lifecycles. They didn't put as much thought into how others would use their libraries or classes as they should have, eventually forcing some serious redesign to make overall integration easier. Only after a couple of years of having design documents ripped apart and pissed upon, having CM staff threaten them with dismemberment, having QA people file a ton of defect reports against their work, and having their phone ring in the wee hours of the night did they understand the bigger picture.

    I took a couple of CS courses in college as part of my Math major. They were full-blown CS courses, not courses that had been altered for us Math majors. And they were nothing more than problem-solving courses -- and the problems being solved were so utterly asinine that it was laughable. However, when I studied in Germany I took a CS practicum course where we were assigned the task of creating a graphics program in X Windows on SunOS 4. The class was divided into groups: GUI, backend algorithms, SCM, QA, and requirements and management. There were design sessions and reviews, unit and integration testing, etc, etc, etc. It's the closest I'd ever seen to the real world in academia. I've never heard of any American college or university offering such a course, and no one I've interviewed ever had such a course. That's not to say that it's not offered somewhere, but it just doesn't seem all that common. And that's a real shame.

  • by agaznog ( 642529 ) on Tuesday February 11, 2003 @02:54PM (#5281695) Homepage
    It is true that most, if not all, abstractions are leaky. But it is still essential to be able to work in "black box" mode to contain complexity when necessary. It is just as important to be able to flip back and forth between levels of "nested black boxes" when necessary. Of course no single person can learn everything, which is why there are specialized developers and managing software engineers. Meaning at higher or lower levels of abstraction can be preserved (i.e. abstraction leak prevention) when working from a particular level: to ensure that everything is sound and complete at the other levels, you usually have senior software engineers looking over the shoulders of the code monkeys.

    So, if software fails because of naivete on the part of a particular developer, it's most likely an engineering management and/or software architecture problem. You can't blame a single developer for not knowing everything. You might (and probably should) blame his/her managing engineer for not ensuring that everything fits together at higher levels. Or you might blame the software analysts/architects for not designing everything to fit together properly.

    If you're ranting against the general usage of abstraction in CS, you are naive. Everything humans know is an abstraction. Computer engineering is an abstraction. Electrical engineering is an abstraction. Biology, chemistry, physics, and everything in between are abstractions. Mathematics is perhaps the ultimate abstraction of all. Unless you are suggesting that we all should attain some sort of zen-like state where all the semantic levels converge into a giant mass, you cannot escape the "black box mentality". (Trying to suggest that programmers need to code only in machine code? Or maybe raw electrical impulses?)

    Rod
  • Re:Probably (Score:5, Interesting)

    by ryochiji ( 453715 ) on Tuesday February 11, 2003 @02:55PM (#5281718) Homepage
    >programmers that tend to approach problems with a 'black box', or 'virtual machine' mentality without considering the entire system?

    I think there's a lot of truth in this. For example, how many programmers think about writing software from the standpoint of a support technician? In fact, how many programmers even have experience as a support technician? I've never heard anyone even talk about writing supportable software [iloha.net], yet, when considering the overall costs or quality of a system, I think it's important to consider how heavily the introduction of that system will tax the support department. Whether you're shipping or deploying the system, lower support needs will lower overall costs and vastly improve the reputation of the system.

    The same applies to security and usability. It's really not a question of programming/technical ability, but a question of mentality. I think programmers need a specific (or perhaps not-so-specific) mindset to get the bigger picture, and not very many programmers are willing to adopt it. Part of it may be inherent to programmer-types, but it also might be cultural (the whole "us vs. them" elitist attitude).

  • Yes and no (Score:2, Interesting)

    by brandonY ( 575282 ) on Tuesday February 11, 2003 @02:59PM (#5281760)
    On the one hand, you should be able to look at a computer as a black box. If it's not an operating system, and it's not a driver, you shouldn't have to know what sort of system your code is running on. Portability is a wonderful, wonderful thing. On the other hand, you should always take into account what system your program will primarily run on, and you should always be aware of how the systems under your program probably work, so that you don't duplicate functionality, try to out-guess the compiler, or make all sorts of horrendously expensive blocking calls that you don't need to make.

    I'm an undergrad at Georgia Tech, and I've found that one of the big differences between a solid degree in computer science and a weak one is that the better programs open the black box as much as possible, especially later in the program. Sure, the early classes are taught in pseudocode and Java and such, but the farther along the education gets, the more we have to take classes like ECE 2030 (which explains everything from transistors up to CPUs) and Design of Operating Systems (which explains printf down to the CPU). Another big difference is theory and knowledge of design paradigms, from the simple, like hash tables, to the more unusual, like factory classes.

    It makes a big difference to see the big picture, but then again it's quite possible to write perfectly acceptable code without the slightest idea of how the API works inside. Otherwise nobody could write Windows software. Caching and pipelining and all that stuff are useful to remember, but there's a reason most of it is completely transparent -- so you don't have to know it's there.
  • by DoraLives ( 622001 ) on Tuesday February 11, 2003 @03:04PM (#5281817)
    I think the problem increases as programmers are less and less a part of the complete systems development life cycle and are contracted to work on individual components of an overall system.

    ANYTHING of sufficient size and complexity is by definition something that no one of us can comprehend in its entirety. This being the case, there's no hope of ever seeing to it that everything from minor annoyances to catastrophic failures will be abolished.

    My experience with this sort of thing isn't in software; it's in large-scale construction projects. Launch pads, to be precise. The basic goal is "build something that the Space Shuttle can successfully fly off of on launch day." In the real world, NOBODY knows exactly what this is going to involve down to the finest detail, or the possible malinteractions of that detail.

    Fortunately, the launch pad, once built, more or less just stays put and continues doing the same job over and over. With software development, no such stability is feasible. We're still learning more and more about how computers work, both hardware and software. In this phantasmagoric landscape, with things morphing from this to that with bewildering speed and little overall pattern, the guys who have to grind out the code (and all their bosses right up to the CEO) have no prayer of ever getting it right. Don't be so hard on yourselves; it's a situation that you can NOT fully control. Just do the best you can and let Charles Darwin sort out the mistakes.

  • Re:In general... yes (Score:1, Interesting)

    by Anonymous Coward on Tuesday February 11, 2003 @03:08PM (#5281854)
    I am currently a software engineering student in Ontario (where software engineering has just recently become a 'real' engineering discipline). From an engineering perspective, the documented black-box model is the ideal way to have software written (please keep in mind that I am a student with no experience). Here is why:

    -It allows you to take advantage of fully-tested, error-reduced code that has been audited. An electrical engineer doesn't design a new power supply circuit for every new circuit board; they take a previously done, WELL TESTED and documented circuit, and build to the required specifications.

    -It reduces coding needed. By using a library/class/module already written to handle a specific function, you are able to save time. Don't re-engineer what has already been done, and done well.

    Where the model breaks down (but shouldn't):

    -Lack of auditing. When code libraries have been written, debugged, and used in beta, they should be audited by independent, senior people. Then the code should be frozen, except for future bug fixes, after which it should be re-audited. The auditing does not need to be done by paid people, just those who can do the job independently and well.

    -Lack of testing. I have downloaded many, many libraries and applications which do not come with a working test suite. This is a bad sign. Code that is being distributed should come with a thorough test suite which can be used both by developers (to make sure new code doesn't break something) and by users (to make sure the software compiled correctly). I admire much of the GNU project software, as it usually comes with a very comprehensive test suite.

    -Lack of documentation. "Read the header files" is NOT documentation. Documentation should be clear on what EVERY public function call does, any exceptions that may be thrown, and what return values stand for, as well as any assumptions made. It should be clear and concise.

    -Lack of standardisation on use. I was checking out the libpng web site and counted roughly 100 libraries which somehow provide PNG services. I did not look at each one individually, so I may be a little off here, but I do not see how having one un-audited library is inferior to 100 un-audited libraries. More uncertain code is not better. If you want to add SSE5 support to a version of your software, freeze and audit the old version, and start a new version (new major version number) of the library with all of the new features that you want. This way people who use version 1.2 because it is stable won't accidentally end up on version 2.0 because it was called 1.3 -- the newest, but untested.

    -GK
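
    As a concrete illustration of the test-suite point above, here is a minimal sketch of the kind of self-checking test a library could ship with (plain Java, no test framework assumed; the function and the cases are invented for illustration):

        // A toy library function plus a self-checking test of the kind
        // the parent comment argues every distributed library should ship.
        class ClampLib {
            static int clamp(int value, int lo, int hi) {
                return Math.max(lo, Math.min(hi, value));
            }

            public static void main(String[] args) {
                check(clamp(5, 0, 10) == 5,   "in-range value passes through");
                check(clamp(-3, 0, 10) == 0,  "below range clamps to lo");
                check(clamp(42, 0, 10) == 10, "above range clamps to hi");
                System.out.println("all tests passed");
            }

            private static void check(boolean ok, String what) {
                if (!ok) throw new AssertionError("failed: " + what);
            }
        }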

  • only to a point (Score:2, Interesting)

    by dwk123 ( 529337 ) on Tuesday February 11, 2003 @03:11PM (#5281900)
    This only works to a point, and the fundamental problem, as others have pointed out, is that the 'black boxes' are almost never specified to a level of detail adequate to fully describe their behavior. Things like side effects, performance criteria/guarantees, behavior on edge conditions, etc. are very frequently overlooked. Your statements are idealized versions that might have been taken from some software methodology book; they completely ignore real-world problems like bugs, partial/incomplete implementations, outdated specs/documentation, and a whole host of others.
    This doesn't mean that you always have to be explicitly focused on these issues, but the overall success of the system as a whole can be critically dependent on them. In a perfect world, of course, this wouldn't be an issue. But assuming the viability of black-box treatment in most real-world projects is the source of many problems, and the truth is that a relatively small portion of the population is capable of maintaining a sufficiently broad view of a system to be able to respond effectively.
  • Re:Experience (Score:3, Interesting)

    by pmz ( 462998 ) on Tuesday February 11, 2003 @03:13PM (#5281909) Homepage
    Because code is the most direct way to communicate wisdom between geeks?

    Not really. I've gained much more by reading books like The Mythical Man Month and good object-oriented analysis and data modeling books. Managing complexity through good data modeling is the most important (and hardest) part of a program to get right.

    The worst applications I've had to work with were designed piecemeal by a high-turnover team of inexperienced people (read: really ugly data that resulted in nasty, bloated, unmaintainable code).
  • by Lodragandraoidh ( 639696 ) on Tuesday February 11, 2003 @03:23PM (#5282006) Journal
    I have been given projects where I had to interface with some existing POS (such as Windows) without access to the source, and thus needed to approach the project from the standpoint of a black box.

    In general this works fine - until some future date, when the owner of the blackbox decides to change the API such that your code breaks.

    Black boxes, other than in an academic setting, imply a closed proprietary system. Open systems, on the other hand, are not truly black boxes, because you do have access to the source and all the underlying APIs (no Easter eggs waiting to create undocumented interactions).
  • Re:Of course (Score:3, Interesting)

    by _xeno_ ( 155264 ) on Tuesday February 11, 2003 @04:16PM (#5282429) Homepage Journal
    Black boxes can be your friend -- but there are issues with them from time to time. One of the most annoying things I never knew about Java is due to the black box that is the java.lang.String class. It turns out that String.substring(...) creates a new String object that keeps a reference to the entire original sequence of characters that made up the original String object. In other words, if I had a 1024-character-long string and wanted only 5 characters from it, I would end up with a String object that presented through its black box just those 5 characters but maintained internally all 1024 characters.

    There's a way around it. Doing new String(string.substring(0,5)) allocates a new String that only contains the five required characters. But the documentation for the black box warns against doing that: "Unless an explicit copy of [the original string] is needed, use of this constructor is unnecessary since Strings are immutable."

    Well, yes, they are - but using the constructor can also be required to get around the fact that the entire array of original characters is maintained.

    As it turns out, this is a "speed hack" (i.e., only one array of characters is maintained at any given time; a new array for the substring is not allocated, and the original is used). However, this implementation assumes that everyone using the black box is also going to need the parent string, or is going to dereference both the full string and the substring (and hence allow both to be GCed) at about the same time, preventing memory and time from being wasted on the substring.

    Unfortunately, I had written a program that read in a list of strings, keeping certain substrings and throwing out the rest -- or so I thought. (Think "comments" in the text file -- they would be removed, and the remaining characters would be kept.) This meant I was wasting quite a bit of space by the end of the run, due to characters that were no longer accessible being kept around indefinitely, since the "substring" objects kept a reference to the array of characters for the full string. Fixing it requires doing the new String(string) thing, which, as it turns out, does allocate a new buffer and is there expressly for that purpose (if you read the source code).
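
    For readers who want the workaround spelled out, here is a minimal sketch of the pattern just described (the file format and names are invented; the retention behavior is that of the JDK implementations of the time -- later JDKs changed substring to make its own copy, which would make the explicit copy redundant there):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;

        class SubstringKeeper {
            // Reads lines, keeping only the text before any '#' comment.
            // new String(...) forces a copy, so the kept substrings do not
            // pin each full line's character array in memory.
            static List<String> readKeepers(String path) throws IOException {
                List<String> kept = new ArrayList<>();
                try (BufferedReader in = new BufferedReader(new FileReader(path))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        int hash = line.indexOf('#');
                        String keep = (hash >= 0) ? line.substring(0, hash) : line;
                        kept.add(new String(keep)); // explicit copy drops the parent array
                    }
                }
                return kept;
            }
        }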

    My point is this - black boxes can be dangerous. A black box is a very useful abstraction - assuming that important details about implementation side effects are documented. In the given example, the Java developers implement an algorithm that is useful in many cases - but there are those cases where it would be useful not to use such an implementation.

    I think that the idea of "black boxes" is important, but that a developer also has to be aware that something happens inside the black box, and be prepared to learn about it if the need arises. Likewise, when creating a black box, care should be taken either to fully specify what a given implementation does, or to ensure that there are no side effects on the environment (like maintaining a 1024-element array while only allowing access to 5 elements). There are pitfalls, and both users of black boxes and designers must ensure that such issues are addressed.

    (Especially because if the next String blackbox "fixes" the issue above, my code will start doing a useless extra step to get around the problems in the implementation I saw of the black box...)

  • by that _evil _gleek ( 598545 ) on Tuesday February 11, 2003 @05:22PM (#5283348)
    Did you do your embedded programming in C? Or Forth? I'm thinking C, because you seem to put it as an 'either/or.' I think the difference is how much you have to do yourself to get to something that resembles a 'virtual machine.' More "Who provides the virtual machine?" than "Is it a virtual machine?"

    One thing to remember is that the only thing some programmers learn from school is how to misuse elements of CS to rationalize away the fact that they suck. IGNORE sophistical arguments, instead of buying into the BS and getting the wrong idea about whatever they're using as an excuse.
    It sounds like you've heard 'black-box coding' where I've heard 'implementation details'.
    Like:
    Me: What if---
    Them: I'm not worrying about 'implementation details'. [with a tone that suggests they are above it all]

    But for you, perhaps, it was like:
    You: What happens when?
    Them: We don't have to worry about that, because we're coding to a virtual machine.
    You: Yeah I know, but ---
    Them: Haven't you taken object-oriented programming and design?
    You: Yeah
    Them: Ah, then you see... [starts long-winded lecture about OO that isn't germane].

    There are two types of people: people who don't know everything, and people who don't admit that they don't know everything. If we had been dealing with the former, it would have gone like:
    Us: What about...?
    Them: Well, if that happens, we're fscked. That's something we'd /have to/ deal with if we were doing embedded systems programming, like in medical equipment, but for the purposes of CS201 we implement the 'ostrich algorithm'.
  • Proper delegation (Score:2, Interesting)

    by ajole ( 132756 ) <patrickkidd@gmail3.14.com minus pi> on Tuesday February 11, 2003 @06:57PM (#5283988) Homepage
    I've always believed that a proper system, from an OO perspective, has a top-down hierarchy with no links going upwards. That is never fully possible, but if you have tools T that are used by users U, and U are used by senior users S, then T will never use U, and U will never use S. Heck, it works in companies...
    Now I know that sounds elementary and naive, but I still believe that if you are making [too many] links upward, it might be possible to rework your design.
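
    A minimal, hypothetical sketch of that hierarchy (class names invented for illustration): each layer holds references only downward, so the structure itself keeps links from pointing up.

        // Lower layer T: tools. No references to anything above it.
        class Formatter {
            String format(String s) {
                return s.trim().toLowerCase();
            }
        }

        // Middle layer U: users of tools. Links point downward only.
        class ReportWriter {
            private final Formatter fmt = new Formatter();

            String write(String raw) {
                return "report: " + fmt.format(raw);
            }
        }

        // Top layer S: senior users. Nothing below holds a reference to it,
        // so T never uses U and U never uses S, as described above.
        class ReportService {
            private final ReportWriter writer = new ReportWriter();

            String publish(String raw) {
                return writer.write(raw);
            }
        }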

  • by Lodragandraoidh ( 639696 ) on Tuesday February 11, 2003 @09:56PM (#5284884) Journal
    Better to put the responsibility for fixing a driver or API call on somebody who is both liable to me and who has experience in doing so.

    - dasmegabyte



    To quote the FOCUS Magazine interview with Bill Gates [cantrip.org] [October 23, 1995]:

    "FOCUS:
    Every new release of a software which has less bugs than the older one is also more complex and has more features...

    Gates:
    No, only if that is what'll sell!

    FOCUS:
    But...

    Gates:
    Only if that is what'll sell! We've never done a piece of software unless we thought it would sell. That's why everything we do in software ... it's really amazing: We do it because we think that's what customers want. That's why we do what we do.

    FOCUS:
    But on the other hand - you would say: Okay, folks, if you don't like these new features, stay with the old version, and keep the bugs?

    Gates:
    No! We have lots and lots of competitors. The new version - it's not there to fix bugs. That's not the reason we come up with a new version.

    FOCUS:
    But there are bugs in any version which people would really like to have fixed.

    Gates:
    No! There are no significant bugs in our released software that any significant number of users want fixed.

    FOCUS:
    Oh, my God. I always get mad at my computer if MS Word swallows the page numbers of a document which I printed a couple of times with page numbers. If I complain to anybody they say "Well, upgrade from version 5.11 to 6.0".

    Gates:
    No! If you really think there's a bug you should report a bug. Maybe you're not using it properly. Have you ever considered that?

    FOCUS:
    Yeah, I did...

    Gates:
    It turns out Luddites don't know how to use software properly, so you should look into that. -- The reason we come up with new versions is not to fix bugs. It's absolutely not. It's the stupidest reason to buy a new version I ever heard. When we do a new version we put in lots of new things that people are asking for. And so, in no sense, is stability a reason to move to a new version. It's never a reason."


    So, you can see that your assumption is incorrect. YOU CANNOT DEPEND ON YOUR VENDOR TO FIX IT. We found this out the hard way at my job, after spending millions of dollars; now we have an open-architecture system where we can plug and play different vendor solutions easily, and use open source next to vendor-supplied applications.

    On the other hand, I have written to the maintainer of a famous development environment [don't want to drop names - not good form] and he returned my email the same day with an answer to my question. My experience tells me your basic understanding does not jibe with reality.
  • by shylock0 ( 561559 ) on Tuesday February 11, 2003 @11:42PM (#5285388)
    A while ago I submitted a question to Ask Slashdot about command line/GUI interfunctionality (you can look up the post for yourself; it's fairly old, but look through my info and you'll probably find it).

    Anyway, I think that the issue of the GUI is a great example. Programmers got carried away with the GUI, and now applications and OSes are completely over-GUIed. The mouse is much, much slower than the keyboard when it comes to many tasks. I use graphic design programs on a regular basis, and I would give an arm and a leg to have a quick and easy command-line interface in, say, Adobe Illustrator, for precise object manipulation. Same goes for Photoshop. AutoCAD and other programs have a decent implementation of the CLI, but it could get much better.

    I would love to see programmers get out of the object-oriented point-and-click mode that they've been stuck in since the invention of the original Macintosh.

    GUIs are great for representing data, and they are great for the visual manipulation of data. But visual manipulation is often imprecise. For precise data manipulation, the CLI is still necessary -- clicking through a menu and two dialog boxes to finally find a text box with the field to rotate an object by 20 degrees, or add a second column to the page, or fix page margins: that's absolutely ludicrous. There should be a simple (preferably standardized) command line that's accessible from all applications. Remember the ~ in the original Quake? That was a huge step forward. We need it in more applications. How much productivity has been lost by over-mousing? -Shylock0

    Questions and comments welcome. Flames ignored. Post responsibly.
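
    As a rough illustration of what such a standardized in-application command line might look like, here is a toy sketch (the commands and names are invented; real applications would dispatch into their own document model):

        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.Consumer;

        // Toy in-application command line of the kind described above.
        class MiniConsole {
            private final Map<String, Consumer<String[]>> commands = new HashMap<>();

            void register(String name, Consumer<String[]> action) {
                commands.put(name, action);
            }

            void run(String line) {
                String[] parts = line.trim().split("\\s+");
                Consumer<String[]> action = commands.get(parts[0]);
                if (action == null) {
                    System.out.println("unknown command: " + parts[0]);
                    return;
                }
                action.accept(parts);
            }

            public static void main(String[] args) {
                MiniConsole console = new MiniConsole();
                // "rotate 20" beats clicking through a menu and two dialog boxes.
                console.register("rotate",
                    p -> System.out.println("rotate selection by " + p[1] + " degrees"));
                console.run("rotate 20");
            }
        }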

  • by Anonymous Coward on Wednesday February 12, 2003 @12:11AM (#5285495)
    As one of those people who teaches programming, I think you're missing the point. The purpose of higher-level languages is to abstract away the hardware and allow the software developer to create abstract models of the real world without considering the underlying computer system. That may not apply in all cases, and maybe not in embedded systems. If anything, there is still too much hardware involved; things like pointers and memory allocation should not be needed in a beginning programming language. After students get some expertise, then they can start concerning themselves with hardware issues.
    Anyway, you seem to have learned something in spite of all those bad teachers. Have you considered coming back to school and helping us out? We sure could use the help. We're not proud; if you know a better way to teach, come and show us. We're willing to learn.
