Why Programming Still Stinks 585

Andrew Leonard writes "Scott Rosenberg has a column on Salon today about a conference held in honor of the twentieth anniversary of the publishing of 'Programmers at Work.' Among the panelists saying interesting things about the state of programing today are Andy Hertzfeld, Charles Simonyi, Jaron Lanier, and Jef Raskin."
  • by bsDaemon ( 87307 ) on Saturday March 20, 2004 @11:26PM (#8624860)
    Please keep in mind that, being only nearly 20, my personal experience doesn't run as deep as that of, say, someone who was around when UNIX was first rolled out. However, I have in my day been an avid C and BSD (mostly FreeBSD, but some NetBSD) user.
    Honestly, from where I sit (you may agree or not), programming and computer stuff in general has become a lot less like a science or craft, and more like a factory job. In the early days, programmers were physicists, engineers, and mathematicians. Today programmers are just programmers. More and more computer science departments are teaching using Java. Why? Because it helps people understand how the computer works? No. Simply because it's what the industry is using.
    I had 4 technicians from Cox over at my house yesterday because my parents couldn't figure out what was wrong with the cable modem. They were the most filthy, disgusting bunch I have ever seen, and were dressed more like gas station attendants than professionals. Why? Because that sort of work has become blue-collar and low-rent.
    Programmers are no longer expected to be educated beyond their field. They are being educated to produce software, not to be COMPUTER SCIENTISTS. How many graduates of, say, ITT Tech would actually understand Knuth, even if they had heard of him? Likely not many. That is why software sucks. That is why the programming "trade" sucks. And that is why companies can send the jobs abroad to people who work for peanuts. Programming these days is just like stamping "Ford" on the grill in a Detroit assembly plant, and nothing more.
  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Saturday March 20, 2004 @11:28PM (#8624871)
    Comment removed based on user account deletion
  • Programming Skills (Score:3, Interesting)

    by Anonymous Coward on Saturday March 20, 2004 @11:31PM (#8624889)
    It's such a simple concept. The more of anything we have, the more the mediocre stands out. With millions of writers, we get self help books, assorted garbage, and several really excellent works.

    Programming has an artistic side; the creativity, vision, and insanity required to apply oneself to a project are much like those required to author a book. Many have the skills and know the principles, but even then, few have that internal extra something it takes to create.

    I may know language and syntax, but I'm nowhere near the league of Shakespeare, Tolkien, Asimov, or Clancy. Fortunately for me, they are nowhere near my league when it comes to putting code together.

    We have millions of coders - 60 percent will have average skills, 20 percent will be below average (or plain suck), and 20 percent will be above average, including that rare 2 percent of the absolutely insane, don't let them out on weekends, make sure they get fed, check they haven't peed themselves brand of genius.

  • by hak1du ( 761835 ) on Saturday March 20, 2004 @11:50PM (#8624996) Journal
    Sorry, I don't think any of those people have much credibility left: they have been in the business for decades and have had enormous name recognition. We have seen the kind of software they produce. If they knew how to do better, they have had more opportunities than anybody else to fix it. I think it's time to listen to someone else.
  • by rice_burners_suck ( 243660 ) on Saturday March 20, 2004 @11:50PM (#8624997)

    I admittedly haven't read the article (yet), but I'd like to include a few reasons of my own that programming stinks. As you might guess, I am a programmer.

    My friends and I compare a lot of computer things to car things. Most likely, we do that because we are enthusiasts of both. Fast cars and fast software are very similar in many respects.

    A little background information on cars is necessary to gain the full effect of my argument about programming. Although the next three paragraphs may seem unnecessary at first glance, I assure you that I am a careful writer and that you should read them.

    Car enthusiasts fall into quite a few categories. For example, people who restore classic Mopar or Chevy cars enjoy making everything look like it's in "mint" condition. Usually, every part of the car is so spotless and beautiful that you could eat off the engine. On the other end of the classic car spectrum, there are those who will tub out the entire car and concentrate only on performance features. These cars may not look like much, but they'll break your neck if you push the gas too hard. And of course, there is an entire spectrum of preferences between these two ideals.

    In most of these categories, the hard core enthusiasts like to do the ENTIRE job themselves. They won't let anyone else touch their cars. The wannabes will usually contract out nearly everything, because they want the glamor of showing up at car shows and showing off their machine, but can't hold a screwdriver and don't know the difference between a 6-point wrench and an Allen wrench. And of course, there is an entire spectrum of car knowledge, experience, and do-it-yourself levels in between these two extremes.

    Somewhere in the middle of the two extremes are people like my friends and me. We do a lot of work ourselves, but when it's a complex or high-risk job, or if we don't feel like doing it because it's boring and time consuming, we'll have a professional do it. There are auto mechanics who do pretty much any job. And there are mechanics who specialize in a specific area. For example, I have my radiator guy, my transmission guy, my engine rebuilding guy, my chrome plating guy, my carpet guy, my headliner guy, and the list goes on and on. I use each specific person for the job he excels at because I understand thoroughly what I am about to explain.

    Programmers are a lot like the car enthusiasts that I am and whom I describe above. Some prefer to do EVERYTHING, like that guy who wrote 386BSD and wouldn't insert other people's code improvements. (The project got forked and now you've got the *BSDs, and that guy is no longer involved as far as I know.) Some prefer to concentrate only on a specific area of software, such as graphics, numerical algorithms, kernel schedulers, assembly optimizations, databases, text processing, and the list goes on and on forever. Even an area such as graphics can break down into a plethora of categories, such as charting software, user interfaces, etc.

    The biggest reason that software sucks, in my opinion, is the very same reason that the automotive repair industry sucks. I wouldn't be surprised if programmers are just as hated as car mechanics. The programmer's boss is just like the old lady who takes her car to the mechanic. Neither knows anything about the job at hand. The only thing they know is that it costs them big and the results suck.

    For the programmer's boss, the software contains bugs, is difficult and confusing for the customer to use, and takes much too long to develop, so the market window closes, the project goes over budget, and maybe higher management cancels the project altogether.

    For the little old lady, the car broke down. The mechanic wants to fix it properly. But doing so will take weeks (believe me). The symptoms are caused by one or more problems, which require several new parts and quite a lot of labor to repair. The parts may be hard to find. The old ones may need to be rebuilt. And generally, people don't like renting a car for that long.

  • Kind of disappointed (Score:3, Interesting)

    by Comatose51 ( 687974 ) on Sunday March 21, 2004 @12:00AM (#8625040) Homepage
    The article doesn't provide much of the actual discussion, so it's really hard for me to decide if I agree with the experts. The article seems to imply that there are problems with software. That much is nothing new. Software is fragile and implementation is difficult. However, the article doesn't really get at the reason, other than to say we lack the necessary tools. So, while I agree with that much, it's nothing shocking or particularly insightful. It's disappointingly shallow for a Salon article.

    The only really shocking part to me was the Bill Gates quote. He's either an Open-Source man at heart or just a hypocrite. :P
  • Shoveling Data (Score:5, Interesting)

    by nycsubway ( 79012 ) on Sunday March 21, 2004 @12:01AM (#8625050) Homepage
    That sounds like most IT jobs. I've found that IT is different from research and academia. Where I work, at an insurance company, I started referring to what I do as shoveling data, because my entire job can be summed up in one flow chart: begin; open file; read file; process data; end of file? No: read file again. End of file? Yes: close file; end of program.

    It's mindless. The problem with programming today is that, yes, it has become a commodity: something people expect you to be able to sit and do for 8 hours a day, continuously, without thinking or having any input of your own on whether what you're doing is really worth it.

    There is no creativity in the corporate world; I think that's why so many people choose to work on open source software.

  • by sashang ( 608223 ) on Sunday March 21, 2004 @12:12AM (#8625111)
    "One is, designing the artifact we're trying to implement. The other is the sheer software engineering to make that artifact come into being. I believe these are two separate roles -- the subject matter expert and the software engineer."
    Funny chap, talking about how design and implementation should be separate. Seems a bit ironic considering he was the one who created the Word doc format, where the layout and content are all packed into one file. Most decent solutions separate the layout from the content (e.g. LaTeX, HTML/CSS). If Simonyi were a web programmer, he'd be laying out his HTML with tables.
  • by miu ( 626917 ) on Sunday March 21, 2004 @12:13AM (#8625117) Homepage Journal
    I'm gonna have to disagree with the notion that lack of scarcity leads to bad design.

    I think that low-level optimization more often locks us into a bad design; look at the Mac System software version 9 and lower, or Windows before XP, for an extreme example of this. Locks and crashes caused by apps were common because the task scheduler and memory model were created with scarcity in mind. Developers at Apple and MS knew better ways to do things, but were locked in by decisions made on the basis of earlier hardware capabilities.

  • by Black Parrot ( 19622 ) on Sunday March 21, 2004 @12:14AM (#8625123)


    > I don't think we are even a little bit closer to that dream today than we were 24 years ago.

    The problem, IMO, is that providing a specification that is detailed enough and correct enough to generate a correct program from is just as hard as writing the correct program in the first place.

    OK, maybe only as hard as writing it in a slightly higher-level language, but if so, just use the HLL.

  • by Dixie_Flatline ( 5077 ) <vincent.jan.gohNO@SPAMgmail.com> on Sunday March 21, 2004 @12:34AM (#8625212) Homepage
    I half agree. The complete lack of regard that most programmers have for the system resources they consume is, in most cases, entirely tragic. There are a lot of programs that could have a smaller memory footprint or run much faster if any time were taken to make these things fundamental to the design.

    That said, I've never really heard anything good about the code that developers have to write for systems like the PS2 that simultaneously have a wealth and a lack of resources. The PS2 has a lot of processors to work with, and that's great! Well, it's great until you realize that the abundance of processors just makes things really hard to synchronise, and you're doing all sorts of hacks to make things work. On the other hand, the PS2 barely has any memory to work with at all, and that's ALSO problematic, since you're always cutting corners, and getting data off the disc in a timely fashion is a pain in the ass. (The PS2 developers that I know have all had to write their own memory managers just to make things even pretend like they're working.) Both those things contribute needlessly to high complexity. Managing complexity well is the TRUE mark of a good programmer, but nobody wants to deal with more than is absolutely necessary.

    By comparison, both the XBox and GameCube are purportedly easier to program. Both of them have hard and fast limits on memory and CPU speed, but both of them provide enough of each to make managing the complexity easier. Programmers still have to be careful with their resources, but they don't have to resort to as many dirty tricks to get everything to work.

    So, just because a system has limited resources doesn't mean that the code coming out the other end is going to be clean and the programming is going to suck any less. You just end up with a different kind of sucky programming.
  • by KrispyKringle ( 672903 ) on Sunday March 21, 2004 @01:06AM (#8625370)
    Oh, yeah. And one thing that we can do to help mitigate this problem? Help managers understand what it is we do. If we want the uninitiated to think that programming is this difficult, arcane task they'll never understand, then they won't ever learn to tell the difference between a valid excuse (``that's a complex request that will require a lot of time for writing and testing to do well'') and a lame one (``I need to contact MSDN for the latest version of the J2EE hashtable implementation to speed up our ASP.NET servers''). Let them in on your secrets and they'll be a bit more receptive to your side of the story.

    Sorry for the replying to myself thing. I hate it when people do that.

  • by BrianMarshall ( 704425 ) on Sunday March 21, 2004 @01:47AM (#8625577) Homepage
    I love to program - it is a craft and I love to do it well. I started with BASIC and FORTRAN in a high school computer club (with time donated by the local technical college), and I wrote FORTRAN for most of the '80s.

    The thing was, even in the late 80's, to mention that a particular approach would be 'elegant' was the kiss of death; it meant that you were amusing yourself at the company's expense rather than doing your job. There was still a strong desire in big companies to control programming with methodologies and basically attempt to do things in such a way that talent was not required. It was like the military - they don't/didn't want talent; they wanted interchangeable programmers that could be put onto any job.

    This is undoubtedly still true today in many cases.

    But, more and more, companies want senior talented people. And more and more, companies realize that when a talented programmer claims to have an elegant solution, it means simple, reliable and fundamentally sound.

    More and more, companies are realizing that any idiot can write complicated software; it takes a really good programmer to write a simple, sound solution.

    Of course, I am talking about real programming here, not dragging boxes around on a form and putting 2 lines of code behind a button.

  • No Silver Bullets (Score:2, Interesting)

    by BrianMarshall ( 704425 ) on Sunday March 21, 2004 @02:02AM (#8625640) Homepage
    As Fred Brooks pointed out in his classic 1987 paper, there are "No Silver Bullets".

    Writing software is difficult because it actually is complex - a piece of software of any size has more "moving parts" than (almost?) any other thing people build.

    Things have gotten better over the last 50 years - high-level languages, object-oriented design and information-hiding to reduce interdependencies.

    But software is still one of the most complex things people make. Better tools and approaches help, but there are no silver bullets - software is complex because it is complex.

  • by rusty0101 ( 565565 ) on Sunday March 21, 2004 @02:09AM (#8625666) Homepage Journal
    ... doesn't fit on a bumper sticker, so no one is really concerned about it.

    Seriously.

    Physics is about finding the one-inch formula. One of which is E=mc^2.

    Accounting is about making sure that the accounts balance. Profit is the difference between revenue and cost. Cost plus Profit equals Revenue. Accountants recognize that they are part of both the "Cost" and the "Profit". Good business management recognizes that eliminating the Cost will also eliminate the Revenue, which will also eliminate the Profit.

    Programming is not currently about finding the least expensive way to solve a problem. It is about finding a usable way to help people accomplish their desires.

    Computer Science, such as it is, is in most cases a euphemism for Software Engineering. The goal of Software Engineering when I was going to school was to write "provable" software. "Provable" software is software where you can "prove" that every line does exactly what the "engineer" who wrote it intended, nothing more, nothing less. If the developer writes to variable "n" in function "A", and the scope of the design is that variable "n" is only applicable to function "A", then when function "B" changes variable "n", it should not affect what function "A" expects it to be.

    That is a very simplified version of what Software Engineering is all about. Software development is supposed to be about using the tools that software engineers have built to produce useful software. All too often it is about using tools other software developers have created instead, because those developers got tired of waiting on Software Engineers to get over being elitist and actually put together provable designs.
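    That scope rule can be illustrated with a minimal Python sketch (the function and variable names here are invented for illustration). When "n" lives in shared state, function B can silently break function A's assumptions; when "n" is properly local, no other function can touch it:

```python
n = 10  # shared state: exactly the design the "provable" rule forbids

def function_b():
    global n
    n = 999          # B silently changes what A depends on

def function_a_shared():
    function_b()
    return n         # no longer the 10 that A was "proved" to see

def function_a_scoped():
    n = 10           # local to A: out of reach of any other function
    function_b()
    return n         # still 10, as the design intends
```

    The scoped version is the one whose behavior you can actually reason about line by line.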

    A suspension bridge is a beautiful piece of engineering. It is also very often a very beautiful piece of hardware. The Tacoma Narrows disaster happened when a piece of engineering came across a situation that the engineer was not expecting, and didn't design for.

    Likewise, software developers are using unproven tools to accomplish various tasks. Then they are asked to work around the problems that come up when their tools encounter situations that the developers of those tools never imagined would be asked of them.

    As a result, we currently get buffer overflows, memory overruns, and hundreds of other problems that can at best be described as annoyances, and at worst as security flaws.

    ------

    Alternatives to the WIMP design, as well as the Unix understructure.

    While Lanier bemoans the fact that we have not "surpassed" the Unix and windowing mode of computer architecture and interface, at best he can be said to have waved his hands at a new direction (VR). The only "improvement" I have observed is the time-based document stream view that has come out of MIT, and even that can be considered a WMP view (minus the icons, at the moment). For some people this very well may be an improvement, though I think it is only useful to those who look at time-centric management of information. In other words, I think it's a great way to manipulate stuff like e-mail, but probably wouldn't be of particular use in managing a book store inventory.

    "Mind Mapping" seems to me to be a "better" way of managing information, but I don't know that it is a great idea as an interface for a computer. Perhaps that's based upon my own limitation as my input to a computer is a small set of serial interfaces, rarely used concurrently, and the output is a couple of serial interfaces, and a "screen" or "window" of data that I process as information.

    As long as that is what my interface to a computer is, I will probably run into limitations as to what I can expect from a computer. Those limitations are going to affect how I interact with the computer, as well as how I, and others, develop software for the computer.

    Rather than bemoaning the current state of the art as being of the same
  • by melatonin ( 443194 ) on Sunday March 21, 2004 @02:31AM (#8625744)

    one tiny error and everything grinds to a halt

    This is easy to say, but what to do about it?

    Simple, you don't make programs so stupid. Here's what a program does. "OH SHIT! THAT WASN'T SUPPOSED TO HAPPEN! EXCEPTION! Oh shit, there's no exception handler. CORE DUMP!"

    The problem comes down to 'that wasn't supposed to happen.' It reminds me of my 2nd year CSC course when my prof said "assume all your input is correct." WTF? They never teach you about error handling in university (not in any curriculum that I've seen anyway).

    The problem is that 'errors' aren't talked about much; we just all agree that they're bad and throw an exception (or SIGTERM) when one happens. We're also told to write programs that are correct, and that programs are either correct or not. There's confusion there.

    Programs are either faulty or not, in that they either do what they're told or not. A correct program does what it's expected to do.

    The problem is when you confuse faults with errors. Faults must not happen. Errors are expected to happen! You can't assume that all your input is correct. You can't just bury your head in the sand and throw an 'i/o error' exception.

    You have to keep things simple and design your systems to follow certain rules. In this case, code should do what it was asked to do or not do it. That second option has to be considered as part of the design. You can't design code that will work if everything happens as expected, and will throw some kind of exception (caught by who knows what) if something unexpected happens. If you let everything have a simple chance to succeed or fail, errors can cascade cleanly to some other code that cares to handle it.

    For example, say a library is using some other library to write data. Library #2 tells lib #1 that it can't write the data asked. Lib #1 tells the code that calls it that it can't write a PNG file. The user application tells the user that it can't save the document as PNG. If exceptions were used, it would have to be caught. And by which part of this system? What if one part expects another part to catch the exception?

    The only thing missing in that example is why the failure occurred. In OOP systems it is easy to communicate between components - communication between 'black boxes' is the whole point in OOP. The way I do it is that if an object is unable to perform a requested action it returns a basic 'no' response, and the reason can be queried from that object later (model objects wouldn't have this ability and they shouldn't - but controller objects would).
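    That cascade can be sketched in Python with plain return values instead of exceptions. The layer names ("Writer", "PngEncoder") and the "disk full" reason are invented for illustration; the point is that each layer answers "no" and keeps the reason queryable, so nothing depends on some distant handler catching a throw:

```python
class Writer:
    """Lowest layer (the poster's library #2)."""
    def __init__(self):
        self.last_error = None

    def write_bytes(self, data, disk_full=False):
        # disk_full simulates a low-level failure for the sketch.
        if disk_full:
            self.last_error = "disk full"
            return False
        return True

class PngEncoder:
    """Middle layer (the poster's library #1)."""
    def __init__(self, writer):
        self.writer = writer
        self.last_error = None

    def save_png(self, image, disk_full=False):
        if not self.writer.write_bytes(image, disk_full=disk_full):
            # Report failure upward, preserving the queryable reason.
            self.last_error = "can't write PNG: " + self.writer.last_error
            return False
        return True

def save_document(encoder, image, disk_full=False):
    """Application layer: tells the user it couldn't save, and why."""
    if not encoder.save_png(image, disk_full=disk_full):
        return "Could not save document as PNG (%s)" % encoder.last_error
    return "Saved"
```

    Every layer has exactly one caller to answer to, so the "caught by who knows what" ambiguity never arises.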

    Anyway, the point is that 'errors' (which are really just things you didn't plan for but should have) do happen and you have to design your systems for it.

    Here's some food for thought, Objective-C vs. Smalltalk. In Smalltalk, every method returns 'self' (this) as a result unless you explicitly return a different object in the code. This allows you to chain messages (method calls) together,

    mailbox messages lastObject description.

    My Smalltalk's a bit rusty, but mailbox is an object, messages is a method that returns an array object, which responds to the lastObject message. This expression evaluates to the description object for the last email in the mailbox. In Objective-C, you'd write it like this,

    [[[mailbox messages] lastObject] description];

    Which does pretty much the same thing. There's one difference between Smalltalk and Objective-C, which has been debated for some time. Let's say that the messages array had a count of 0. In both Smalltalk and Objective-C code, you'd expect lastObject to return a nil value. The difference between the two languages is their concept of nil. In Smalltalk, nil is another object, one that raises a DoesNotUnderstand exception for any message that's sent to it. In Objective-C, nil is treated like a black hole: sending a message to nil does nothing and simply returns nil (or zero).
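    The "black hole" behavior can be imitated in Python with a hypothetical NullObject (not part of either language's runtime): every attribute access or call just yields the null object again, so a chain like the mailbox example short-circuits quietly instead of raising, the way Smalltalk's nil (or Python's own None) would:

```python
class NullObject:
    """Rough analogue of Objective-C's nil: absorbs every 'message'."""
    def __getattr__(self, name):
        return self            # any attribute access yields the black hole

    def __call__(self, *args, **kwargs):
        return self            # calling it yields the black hole too

    def __bool__(self):
        return False           # still tests as false, like nil

nil = NullObject()

# Chaining "messages" off nil quietly yields nil, never an exception:
result = nil.messages.lastObject().description()
```

    With None in place of nil, the same chain would raise an AttributeError at the first step, which is essentially the Smalltalk side of the debate.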

  • by soft_guy ( 534437 ) on Sunday March 21, 2004 @02:33AM (#8625753)
    From your post, it sounds to me like you are doing win32 programming. It also sounds like you are trying to use a bunch of those calls that Microsoft has that end in "ex".

    You're right. Those damn calls have so many goddamned parameters and complicated structures to fill out, it feels like you're going on a snipe hunt every time you want to make a system call.

    Try Qt. It's not quite as efficient as win32, but for GUIs, it generally doesn't matter. For other things, the key is to make as few system calls as possible and instead rely on the C/C++ standard libraries.

    I'm lucky. I get to write commercial software for MacOS X using Cocoa. Still, the bugs I have are mostly due to having to make system calls. When I'm writing code to hold and manipulate data using the STL, I have very few bugs. Where I run into trouble is when I want to use something like Quicktime or other system APIs where I don't really know what the calls are doing and there are a lot of undocumented gotchas.

  • by Mr.Oreo ( 149017 ) on Sunday March 21, 2004 @02:57AM (#8625834)
    This is coming from a game programming point of view, but I think it applies to all facets of software development. Programming sucks these days because of the communities it has created.

    I'm not going to be a Yancy and specify where these points aren't applicable. Take what you read here with a grain of salt, but I guarantee you can apply one of these to an experience you've had.

    - Zealot Trolls. Answering someone's question with a code solution that contains even the pettiest OO fault, even if it has nothing to do with OOP, will get you nothing but a bunch of OOP zealots on your ass, saying 'WRONG! That shouldn't be public' or 'WRONG. The destructor should be virtual' or 'WRONG. Should pass by reference'. You get the point. There are more and more trolls on boards these days looking to stroke their egos by posting extremely minor corrections to mostly correct solutions.

    - Wheel Engineers. Stop making 3D engines. Stop making WinSock networking wrappers. Stop making ray-tracers. Stop making things that have been done 1000x before unless a) it's for fun/educational purposes, or b) you're going to do something someone else hasn't. Even if there's _one_ thing in your coding project that someone hasn't done before, it's definitely worth creating. Red Faction was the last game, IMO, that added anything new to the world of 3D engines, other than 'taking advantage of x graphics card feature'. Physics is another area to innovate in with game engines. Please stop re-inventing the wheel and giving it some cheesy name.

    - Meatless Code. Anyone who has worked with the 3DS Max SDK knows what I'm talking about. Important data is fragmented everywhere and accessed in 10 different ways. You spend more time reading the API docs than you do programming. I was reading through some ASP.NET code the other day, and it took 45 lines to update a table with an SQL command. I read through it, and it could have been done with 5 narrow lines of Perl. With C++, you could probably spend a solid two weeks writing generic 'manager' code that does absolutely nothing. Programmers need to learn to draw the line between 'productive' code and 'silly' code. Having a DataObjectFactoryCreatorManager class for a 'ping' program is a bit silly.
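    For contrast, here is roughly what a straight-line table update costs, sketched with Python's built-in sqlite3 and a made-up users table; the point is the handful of lines, not the specific API:

```python
import sqlite3

# Create a tiny hypothetical table and update one row: no factories,
# no managers, just the statement the task actually calls for.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
conn.execute("UPDATE users SET name = ? WHERE id = ?", ("bob", 1))
conn.commit()
```

    Anything a 45-line version adds on top of this should have to justify itself.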

    If I could do the world a favour, it would be to send all coders a letter that simply said "You are not the best. Live with it." I dread reading another reply to a simple question where some dork awkwardly throws in that he's "a 20 year C programmer who wrote a compiler on a TSK-110ZaxonBeta when I was 11". No one cares about your background unless they ask, or it's relevant.

    Other than that, programming is fine. Except for Java.
  • by nathanh ( 1214 ) on Sunday March 21, 2004 @03:01AM (#8625847) Homepage
    I take it you don't know much about Raskin. He has real reasons to criticize "another Windows", as he puts it, reasons that go far beyond "we've used this same model for some time."

    Ignoring your snide attack on the previous guy's knowledge - do you really think anybody on /. doesn't know who Raskin is? - I agree with the previous guy that it's unfair to blame open-source coders for producing "more of the same". Let's look at the comments:

    "There's this wonderful outpouring of creativity in the open-source world," Lanier said. "So what do they make -- another version of Unix?"

    Jef Raskin jumped in. "And what do they put on top of it? Another Windows!"

    "What are they thinking?" Lanier continued. "Why is the idealism just about how the code is shared -- what about idealism about the code itself?"

    They say that "Windows" (meaning WIMP) on top of "UNIX" is a bad idea. Why? It's exactly what Raskin's former employer is currently doing. And Windows is essentially WIMP on top of VMS. Where is the innovation coming out of the leading two desktop OSs? They too are just rehashed versions of decades old ideas.

    I don't think it's the open-source community's responsibility to be free R&D for the entire computer industry. Isn't it enough that they are producing free software? Do they have to research it as well? What an onerous task! R&D should be in the domain of researchers and academics. It took 40 years for WIMP to progress this far. Does Raskin think open-source can turn that around over night? If so, then he has more unrealistic expectations about open-source than all the /. cheerleaders combined.

    To put it bluntly, I don't think it's fair for Raskin and Lanier to demand such a high standard from the money-poor open-source community when the ultra-rich closed-source companies aren't doing any better. Microsoft pumped how many billions into their R&D department, and what did they get? A ripoff of J2EE and a ripoff of MacOS X. Apple pumped billions into their own R&D and they've produced Display Postscript... I mean Display PDF... only 20 years after Adobe did it. Colour me unimpressed.

  • by jhoger ( 519683 ) on Sunday March 21, 2004 @03:43AM (#8625970) Homepage
    Actually you are on to something.

    The thing that was supposed to make programming easier is reuse. The mystery to some is that it never really materialized.

    On the hardware side of things, reuse is everywhere. Standard components are made which are unbelievably complex but have a simple interface and are cheap. The manufacturers give away the parts for development, along with reference designs which show exactly how to design them into a product. Drivers are often licensed for free.

    Why hasn't this happened in software? Proprietary software is the problem. There is no vast array of components that you can draw upon without having to pay for a license agreement for each and every one. The best answer so far is things like Java and .Net which come with a ridiculously gigantic runtime library. These will help.

    But I think F/OSS provides a compelling answer. If you're willing to comply with the GPL there is a vast amount of software ready to be reused in your design. I think it's going to work unless the government steps in to stop it.
  • by Anonymous Coward on Sunday March 21, 2004 @05:53AM (#8626336)
    1. When I was a kid our operating system was a programming language.

    Not to be confused with "when I was a kid we walked a mile in the snow, uphill both ways, nude, just to get a candle."

    Progression and "computers sucked back then" aside, we had easy access to a programming language, and people didn't say "kids can't write code." Well, actually SOME people did, but those same people think kids can't do anything.

    Even farther back, before home computers, kids couldn't access computers, but college students could get access to the school's mainframe, and before that...
    The point is that the notion that programming is something only professionals should do is very new.

    For example: Script kiddies..
    Script kiddies is something new (well new as of 10 years ago).
    In the 70's and 80's children didn't download scripts, recepies or 0 days. Kids wrote there own programs.

    Today script kiddies have to rely on more experenced programmers to write simple scripts to get the job done.

    Profesionals think thats good. "Think how bad things would be if thies kids could write code?".

    You may think this comes with a userfriendly operating system. But this actually comes from Dos.

    The profesional PC software develupment tools were priced well outside the price rage of the avrage user let alone someone who just forked over over $1,000 for a home computer. (In the 1970's $1,000 was a lot for a home computer).
    Even today $100 is a lot to ask just to "mess around".

    As a result the PC discuraged non-profesional software develupment. I think this was IBM and Microsofts way of attracting profesional software develupment. You had no threat of compeating with public domain counterparts.

    In the 1970's video games were like a gateway drug to programming. But with the death of the "programming language as the os" systems a whole generation of tech savy kids grew up never knowing how to write code.

    I don't see how someone who learnned how to code in a Microsoft trainning session could keep pace with a programmer who wrote his/her "Hello world" before hitting puberty.

    I'm sure we can all find industreal practaces that contribute to the poor quality of code but quality control didn't exist 20 years ago so why is software quality taking a nose dive?
  • by Anonymous Coward on Sunday March 21, 2004 @06:23AM (#8626416)
    Sorry. I usually hack up my own wheel quicker than adapt to someone else's, since that would:

    a) require learning the interface
    b) need gluing code
    c) generate a dependency problem
  • by Quantum Jim ( 610382 ) <jfcst24@@@yahoo...com> on Sunday March 21, 2004 @06:24AM (#8626419) Homepage Journal
    That's an interesting point. Somebody will often post a copy or mirror of an article or web site if the original has been slashdotted. The copy has been presented because the original is unavailable due to technical reasons: It is the author's intent to keep the page up, but there isn't enough bandwidth. Is that still copyright infringement if permission hasn't been obtained prior? What about Google's cache?
  • by Anonymous Coward on Sunday March 21, 2004 @06:33AM (#8626444)
    Grandparent failed to mention that it is equally irritating when the code is badly structured as well. Actually it might be even worse, since the notation is possibly also lying then ("hey, I'll just change this to unsigned without touching the name").
  • by Anonymous Coward on Sunday March 21, 2004 @08:31AM (#8626687)
    But still, I do not see any suggestions for a fundamentally better model or even any concrete problems with the existing one.

    It's the editor!

    No, not the IDE; it's the model we use.

    I've coded almost daily for over 25 years, so ponder this!

    We're working the parse tree once removed and are forced into prose by the ancient conventions and a pathetic need to print.

    We lack a model in which we work the parse trees symbolically/pictorially using the keyboard. One where we can zoom structurally, rotate and slice our model by patterns, flows and data-access regardless of instance and without regard for 'files'.

    From the 'forest for the trees' department.

    Cheers!
  • by Anonymous Coward on Sunday March 21, 2004 @09:50AM (#8626889)
    Not so! In over 20 years of working with software on everything from real-time process controllers to top-of-the-line IBM mainframes, one thing I've noticed over and over again is that the worst-quality software comes from "business-like" projects, and the best - meaning cleanest, fastest, most reliable, and most fun to use - is done by the people who had fun doing it (generally as revealed by things like Douglas Adams quotes in the documentation and source code). It's even true of major IBM program products.

    OTOH, the OTHER thing I've noticed is that the more a software product costs, the more likely it will be cranky, unreliable, and a bitch to use.
  • by blahplusplus ( 757119 ) on Sunday March 21, 2004 @09:57AM (#8626908)
    The problem is a near-infinite required investment of time and work versus a finite amount of resources, for small customer payoffs/results. It takes massive investments of time to get the computer to do even the most basic interface and problem-solving tasks so that humans can perform some simple tasks, even today. Tools and compiler/language development need to get better. Right now many programs are too fragile, very hard for end users to modify (without recompilation), and far from robust. Think of it this way: in an ideal world, any program should be able to run on any platform without having to be recompiled, and any necessary hardware/software dependencies would be automagically emulated (assuming you have the CPU power).

    Computer scientists have yet to create decent "building blocks" and tools that don't require a thorough understanding of how the tool itself was made. You don't expect a construction worker to know how his tools were manufactured or how they work internally; he can use them for their intended purpose without ever having to understand that. Too many times programmers have to have cross-disciplinary knowledge of their tools that should not be required to get the job done.

    We also have weak analytic/development tools for helping other programmers and new programmers demystify what is going on, so we have to write lengthy comments to tell other programmers what sections of code mean. That should tip you off right there: if you have to comment and explain something that should be as obvious as reading plain English sentences, you've got a severe weakness on multi-person projects that shouldn't be there. No one ever gets confused about "Today I went to the supermarket and bought some food" versus "Today I picked up some food at the supermarket."

    If you look at computer programming languages today, it's like learning a foreign language, because you're forced to "learn the rules" and syntax of how the language works AND how it parses, plus a million other little "gotchas" when it interprets your code. Right now, the tools we use to create things are simply in the dark ages. How many lines of code does it take to create buttons, lists, input boxes, and programmer- and user-friendly functionality from scratch? It takes a massive investment of time and energy today just to create the meaningful building blocks, let alone full programs.
  • by ultrabot ( 200914 ) on Sunday March 21, 2004 @12:14PM (#8627492)
    Now Lisp took a hit in the aftermath of the Lisp machine days, but with computers and Moore's law and advances in compiler and language technology, maybe it is time for the idea to come back.

    The problem with Lisp seems to be that not many programmers like it, even the ones that try to like it and learn it in school.

    If you teach someone Python, it is extremely probable that the pupil will become a Python fan. Teaching people Lisp (or Scheme) seems to have pretty much the opposite reaction. Some of it is probably due to shoving FP down their throats, but not all.

    Lisp, they say (Paul Graham, et al), was the victim of worse is better,

    Wasn't it Richard Gabriel?

    How about Python as a worse-is-better substitute for Lisp? Even Paul Graham admits it is getting close.

    Actually, Python already seems to have replaced Lisp. Only the die-hard macro freaks stick with Lisp; most of the dynamic-typing, first-class-functions, simple-semantics people have picked up Python.

    If Python is to become the heir to Lisp

    Curious wording you have here. Python beat Lisp in popularity ages ago. Perhaps you mean a technical heir to Lisp? Lisp is by far not a "king" in anything, except perhaps power of expression. Many would argue that it is too powerful to be practical.

    But speaking as a Delphi programmer who has done some work in C++, Java, and C#, I think Python has potential and is something I am going to be looking at as it evolves.

    Better yet, start hacking in Python right now. It doesn't *need* to evolve; it's excellent as it stands. I'll take all the evolution with open arms, but the beauty of programming in Python can take your breath away even now.

    It really opens your eyes and helps you see how the rest of the programming community seems to be still living in the dark ages. I believe this is how Lisp people feel about their language too, so I guess we are in the same boat in a way. There seems to be a weird synchronicity between Lisp and Python. Both are probably manifestations of some deeply profound programming archetype :-).
  • by Arkaein ( 264614 ) on Sunday March 21, 2004 @12:40PM (#8627668) Homepage
    This is true if you explicitly append a character for the type of every variable. However, there are other uses of Hungarian notation that imply more intent than explicit types.

    In my C++ code I tend to use the following conventions:

    m_ = member variable
    sm_ = static member variable
    g_ = global

    These will not change unless the variable is moved to an entirely new scope.

    Actual type annotations:

    p = pointer (not likely to change unless all code using it changes)
    n = integer counter (usually int, but long or char could work also)
    str = string (not always used, and I know MFC/Win32 uses sz, but pretty clear)

    Combine these, and I know that m_pObject is a pointer to an object of some sort which is a member of the current class.

    That's basically it. A few characters that help convey intent, which is I believe the purpose of a good variable name. These examples are invariant to minor changes in type, such as switching float to double.

    I would agree there are bad examples of Hungarian notation, notably by Microsoft. I really hate the lp* notation. Only Microsoft would feel the need to declare a whole new type that is simply a pointer to another type.

  • by Tenzen01 ( 155389 ) on Sunday March 21, 2004 @01:34PM (#8628057)
    Many people have attacked the problem from how Developers should write software. As many have mentioned, the ridiculous management schedules and push to just "get it out the door" tends to lend itself to corner cutting in design, coding, testing, etc.

    I think it is up to the end users to start demanding better. None of this "This software is provided with no warranty" crap. That's somewhat expected from free software, but I can't begin to understand how real businesses that spend millions of dollars on commercial software can continue to stand it. If you bought a car with even a fraction of the defects seen in most commercial software packages, you would return that thing so fast it would make the dealer's head spin.

    People will argue that building a car and building a desktop application have entirely different sets of risks and expectations. The desktop application will (probably) not kill you when it crashes. There are all sorts of regulations and lawsuits that exist to keep car manufacturers very concerned about the quality of their product.

    The same is needed in the software industry. Until there is a major demand for high-quality products, where users no longer expect or even tolerate bugs, software will continue to be buggy, quick-and-dirty code that must be replaced every 18 months.

    There exist software industries today that have incredibly tight quality control and testing (medical & military for instance) and will not tolerate bad code or bugs. Their Software is far less feature rich but far more robust.

    End Users of the software need to demand higher standards from their software before the state of programming will really get better.
  • Re:panel link (Score:2, Interesting)

    by Anonymous Coward on Sunday March 21, 2004 @02:09PM (#8628264)
    Ever use type-ahead find in Mozilla?

    He basically pointed out that the subset of operations we now perform on "the web" would satisfy 90% of humans and should be optimized for, and to some extent, he's been proven correct.

    There's a problem with the Humane ideal providing few good metaphors for data modeling, but we seem to do okay switching between keyboard ("text manipulation metaphor") and mouse ("grabbing hand; spatial manipulation metaphor"), so it's pretty zero-sum.

    When it comes to writing, I'd rather have a Cat than OpenOffice, but I'm not sure it'd be any better at, say, graphing.
  • by StarBar ( 549337 ) on Sunday March 21, 2004 @07:01PM (#8629502) Homepage Journal
    I could write a very long post about this because I have given it a lot of thought over the years, but I'll keep it short(er). I have bitched a lot about why we need better software and *not* faster CPUs. Does your word processor produce your documents faster these days using 2.6 GHz instead of a dazzling 90 MHz? Oh, you still type at the same pace...?

    Writing stuff in C gives you speed and almost total freedom. Nowadays you can even write drivers entirely in C. Pascal and Ada tried to address quality issues by limiting freedom with declarations. OO languages like Simula, C++ and Java try to address reuse. Interpreted languages like BASIC, Perl (semi) and shell scripts address the compile-time issue. But who/what addresses the way we attack the problem?

    The common denominator for all languages needs to be addressed: the text file! Just think about it, how much time is spent just formatting and handling the chunks of code that should carry out your creative solutions? More than you think, including missing #includes, linker arguments, shared/private symbols, bugs related to text-file scope (static globals), and the laziness of not referring across text-file boundaries properly when the project grows.

    Smalltalk tried to address this by making everything OO in a super-inheritance metaphor, but I at least just got confused, because the base functionality is so huge. Maybe they are onto something, but it's far too complex to gain popularity, and I don't think it has the simplicity needed to succeed. Many incremental-compiler projects have been promising, I think, but we haven't seen a really usable, working environment yet.

    I do think that with some slight adjustments to the C language, removing the text file as scope and the #include CPP directive, a base for a better and more creative programming paradigm could start. Instead, all globals (variables and functions) would be stored in a "scope tree", where the actual storage would be language-independent. Each level in the scope tree would require an API declaration. Each node could be independently compiled and incrementally linked. Emacs would work directly on the scope tree. The system should be able to import C projects and detach the code from the text-file structure.

    This would give programmers back some of the energy and creativity wasted on the dull tools and malformed source trees of today, without losing the huge base of C sources out there, and would instead focus effort on the language itself.

    I know this is just a dream I have had for a long time and maybe someone already started on a similar project somewhere?

  • by rofthorax ( 722179 ) on Monday March 22, 2004 @03:51PM (#8637118)
    Yeah, I'm beginning to see this myself. When my brother first came upon OOP, he said at Christmas time that he doesn't believe it is all that useful, and that he would rather code everything in C because of the efficiency. However, I still believe it's good to think in terms of objects, just not good to design code with objects first; it's better when refactoring the codebase, like going from something C-based to something C++.

    Another thing is to look at design antipatterns that describe OOP flaws, like the "blob" antipattern: an object that is literally an entire application or a library. I've seen this a lot.

    The one major problem I have with the industry at large is the avoidance of pure OOP concepts, like that of agents (objects that traverse a network, jump from computer to computer, and have self-knowledge). I see this great acceptance of XML, but XML is a data format; a pure object format includes methods to manage the data contained in the object, and those methods should be able to best manage themselves. The methods for the objects can be purely data-oriented and need no system resources to do what is needed. So why not put these agent-like objects in sandboxes and limit their computational resources to managing themselves? This makes it so that libraries are not needed to manage the objects, and the objects never become obsolete, because the code is within, not without. The objects can be upgraded to newer methods that know how to manage them, and the code may be replicated for each object; the idea is that the code that manages the objects need not be complex. And the objects can pass from system to system without having to be rewritten, because they run in a virtual machine.

    You could do something like this with XML, but the reason companies like Microsoft support XML is that it is not CORBA: it's not a standard for interfacing, it's not an object-oriented design, it's a structured data format (that doesn't allow overlapping of elements, by the way). If Microsoft adopted a standard for objects that was as open as XML, it would be like giving too much away, because then people could write them out of the picture. With XML, at least they can change the language; the interfacing method is harder to maintain (data objects and organization versus functions with parameters), and people still must rely on libraries to interpret and transform the data in XML.

    If we put something like Perl code in an XML object, and relied on the methods in the XML to maintain itself (limiting the Perl interpreter's access to system resources), then the XML library that interprets the object is the object itself, or a reference object that knows it well. This eliminates the need for libraries in the system that must be compiled or included to use the objects. It allows for varying types of objects to be written that do various things, and no libraries are required. Even if there is a lot of object replication, for each kind of object a system captures, it could reference the original object to obtain the methods to manage that type's data.

    This means software never becomes obsolete: word processor files are always readable, image files always viewable, sound files always playable, etc. For Microsoft this would be a disaster, because for one it would not require them to be part of the picture. It's a democracy: people use the objects that work best. It's platform-independent; the objects manage data with local methods running on virtual or real processors (in a sandbox). It allows programmers to move on to better things, rather than wasting a lot of time adhering to standards that will change tomorrow (so that management in a company can justify upgrades to the next best spanking-new version of Windows).

    The thing I have noticed, though, is that commercial software development tends to make companies adopt archaic languages so that their competitors can't adopt their projects (should management come to be dissatisfied with their solutions); this is why one will use del
