
Monday, The Death of Websites

An anonymous reader writes "Developers implementing 'weekend inspiration' are more dangerous than hackers. Vnunet.com has an article about how eager developers and administrators cause more trouble for websites than hackers and viruses do. How about those of us who start the week with a cup of coffee and the morning online news? My inspiration and new ideas for development are definitely not the cause of the Monday crash hour ... I think."
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • You all died? Where are all the posts?

    Does this mean I shouldn't expect any more karma?
  • Great sample:) (Score:5, Insightful)

    by EnlightenedDuck ( 621201 ) <michael...last@@@gmail...com> on Monday May 19, 2003 @11:10AM (#5991854)
    They did a survey of 70 leading websites over a nine week period - one needs to wonder who picked those 70 leading websites, and in what sense they are considered leading or typical.

    And if slashdotting causes more downtime than developer mistakes, couldn't one argue that interesting content is more harmful than bad code for website uptime?

    • Re:Great sample:) (Score:3, Interesting)

      by yoshi_mon ( 172895 )
      Indeed.

      A Google search for even the word "website" came back with: Results 1 - 10 of about 68,800,000.

      Even with that number, which I would estimate to be low relative to the total number of websites in existence, that puts the 70-site survey at roughly .000102% of all the websites in the world (granted, those are pages, but you get the idea).
      • Statistically it is enough tho, as long as the 70 were picked fairly (randomly).
        • Statistically it is enough tho, as long as the 70 were picked fairly (randomly).

          I think most statisticians would want at least 100 to consider the results statistically significant, and you need at least 3,000 respondents to guarantee a 3-percentage-point margin of error.

          With all the different kinds of sites out on the internet, with extremely different dev cycles, content, and audience, 70 websites picked completely randomly still could turn out to be 50 pr0n sites and 20 Geocities N'Sync fansites. At
          • Re:Great sample:) (Score:4, Informative)

            by EnlightenedDuck ( 621201 ) <michael...last@@@gmail...com> on Monday May 19, 2003 @01:36PM (#5993034)
            Being a statistician, I wish to disagree.

            1,000 respondents give a +/-3 percent 95% confidence interval.

            Rule of thumb: the worst case is when the population is split 50-50. That gives a numerator of roughly 1 for the 95% interval; divide it by the square root of n, and that is your worst-case margin of error.

            If you can make a random sample of all major websites (define major websites), then no need for stratified sampling - only if you wish to talk about specific subgroups or if sampling the general population is difficult is stratified sampling necessary.
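
            As a quick, purely illustrative sketch of that rule of thumb (my own example, assuming only the worst-case 1/sqrt(n) approximation described above), in Python:

                # Worst-case 95% margin of error is roughly 1/sqrt(n)
                # (illustrative sketch only, not from the parent post).
                import math

                def worst_case_margin(n):
                    """Approximate worst-case 95% margin of error for sample size n."""
                    return 1.0 / math.sqrt(n)

                for n in (70, 1000, 3000):
                    print("n=%5d  margin ~ +/-%.1f%%" % (n, 100 * worst_case_margin(n)))
                # n=   70  margin ~ +/-12.0%
                # n= 1000  margin ~ +/-3.2%
                # n= 3000  margin ~ +/-1.8%

            By that approximation, a 70-site sample carries a worst-case margin of error of around 12 percentage points.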

    • Re:Great sample:) (Score:2, Informative)

      by bigman2003 ( 671309 )
      Maybe not a great sample- but I am an example.

      I've been working on a project for the last two months, and when they created our 'live' date- they picked today.

      Moving the site from test to live, we noticed five or six small problems (due to browser cache issues, people still had parts of the old pages).

      So- I guess that Monday morning we had issues, but not due to the developer (me) I hope- due to the arbitrary date set by people in charge.

  • day traders (Score:5, Interesting)

    by prgrmr ( 568806 ) on Monday May 19, 2003 @11:10AM (#5991862) Journal
    Has anyone done any sort of bandwidth study looking at sites like etrade and yahoo, for purposes of determining any correlation between bandwidth consumption and movement on the stock markets? Intuition says that Monday mornings ought to see some sort of correlated spike.
    • by Consul ( 119169 ) on Monday May 19, 2003 @11:52AM (#5992202) Journal
      Has anyone done any sort of bandwidth study looking at sites like etrade and yahoo, for purposes of determining any correlation between bandwidth consumption and movement on the stock markets?

      Maybe you should lead the way in doing this study. Then, you can publish your results, get Slashdotted, and become an inexplicably famous Internet personality! ;-)
    • Re:day traders (Score:2, Interesting)

      by Anonymous Coward
      First, day traders or the public in general are such a small part of the capital markets that they don't even register a blip on the radar. Even if someone had a million-dollar portfolio, they are still up against institutions with billions of dollars. So in effect, the mass public is irrelevant.
      Second, I don't know about bandwidth studies but you will find that most of the activity in the markets takes place on the open or on the close. Mid morning and mid afternoon markets are generally dead.

      Also, to do a
      • Maybe I'm assuming an overly broad definition of "day trader", in that I wouldn't expect all of them to do all their own trades. For people with larger and/or fairly diverse portfolios, there is an advantage to working with a brokerage firm. Doing your own research on the web to get the up-to-the-minute info (such as it is), combined with watching the markets move in relation to that info and to each other, isn't something I would expect those who do make use of a broker to completely eschew. Doing your own hom
  • by h2oliu ( 38090 ) on Monday May 19, 2003 @11:10AM (#5991865)
    I log in, the story is a few hours old, and there are 4 posts. Slashdot implementing the theory?
  • by aggressivepedestrian ( 149887 ) on Monday May 19, 2003 @11:12AM (#5991876)
    This guy has just solved the software problem:
    Neal Gandhi, vice president of product management at Attenda, said: "The quietest time of year for website problems is over Christmas and New Year because the development teams are away, even though it's a busy time for consumer websites. "Then, as soon as you see the developers logging on again, the trouble starts."
    So, software bugs are caused by developers working on software. The solution is clear: all those VPs of product management should just pay us web developers to stay home.
    • STOP ME BEFORE I CODE AGAIN!
    • Re:The cause of bugs (Score:5, Interesting)

      by Anonymous Coward on Monday May 19, 2003 @11:33AM (#5992065)
      Mr. Gandhi has his cause and effect a little mixed up, and I think he's implying that new development shouldn't ever introduce new bugs, which is a little silly.

      For the concrete "holiday lockdown" example, I think he's only partially right. In my development group, we explicitly lock down ALL changes to our production web apps well before, and all through, the Christmas shopping season, to prevent the inadvertent introduction of any (new) bugs. It's not a side effect of vacation time -- it's an explicit operations decision to reduce the risk of breakage.

      So, yeah, while we're not touching it the stability seems to increase, but no existing (but less critical) bugs get fixed either. No large-scale app is bug-free -- the lockdown period just seems to stabilize things but it's an illusion caused by the lack of new species of bugs popping up.

      In the more abstract "development introduces bugs" sense, it's a fact of life in complex systems that new code means new bugs -- and if we never introduced new code (->features) then we'd lose customers. So I take his statement to imply that we should only be introducing 100% bug-free code -- which is a PHB pipe dream.
      • Mr. Gandhi has his cause and effect a little mixed up, and I think he's implying that new development shouldn't ever introduce new bugs, which is a little silly.

        He does have a valid point about testing before putting code into production and being able to roll back changes. That's all pretty obvious stuff.

        The mention of managers pressuring for changes, but not allowing for adequate testing time is also typical.

        -Craig.
  • by UCRowerG ( 523510 ) <UCRowerG@@@yahoo...com> on Monday May 19, 2003 @11:12AM (#5991877) Homepage Journal
    "However, you still get managers who don't understand the technology and want changes implemented yesterday. If it goes wrong it's the developer that ends up with egg on the face."

    The article suggests that developers come back from their weekends and start fiddling with websites, but I think this last paragraph is perhaps equally or more accurate. Managers get "inspired" over the weekend just as much as code writers.

  • by British ( 51765 ) <british1500@gmail.com> on Monday May 19, 2003 @11:15AM (#5991904) Homepage Journal
    Reminds me of the BBS days. Usually a few hours after the SysOp leaves on vacation, the BBS is guaranteed to go down.
    • "Reminds me of the BBS days. Usually a few hours after the SysOp leaves on vacation, the BBS is guaranteed to go down. "

      My boss went on a trip to Europe for a month. A few months previously, she had two racks full of servers managing our needs. It all ran like clockwork. Of course, this one-month trip included a two-week stint where she couldn't be reached. The day after she was completely out of contact, *poof*. I shit you not, there was a *poof*. Something on the motherboard let all its smoke out.
    • If you remember BBS's, you are too old to be posting on Slashdot.
  • by wowbagger ( 69688 ) * on Monday May 19, 2003 @11:15AM (#5991906) Homepage Journal
    This is nothing but unprofessional development - the old "Oh, this is soooo good and sooo simple, how can it possibly cause troub..... <NO CARRIER>".

    Any codebase, be it a program, a web site, or a router's firewall rules, should be changed IN TEST FIRST! Then you do your best to break it, and only after you and several others have had at it do you move it to production/HEAD/whatever (and hold your breath).

    If you had a wonderful idea over the weekend, GREAT! Implement it in a test branch, try it out, and then move it to production. But if you merge it into the mainline without testing you are not acting professionally.
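
    A minimal sketch of that test-then-promote idea (my own illustration, not wowbagger's actual process; the paths and the ./run_tests.sh command are hypothetical placeholders), in Python:

        # Refuse to copy anything into the live tree unless the test suite passes.
        # Illustrative sketch only; paths and the test command are hypothetical.
        import shutil
        import subprocess
        import sys

        TEST_TREE = "/var/www/test"
        LIVE_TREE = "/var/www/live"

        def tests_pass():
            """Run the site's test suite against the test tree."""
            return subprocess.run(["./run_tests.sh", TEST_TREE]).returncode == 0

        if __name__ == "__main__":
            if not tests_pass():
                sys.exit("Tests failed; not promoting to production.")
            shutil.copytree(TEST_TREE, LIVE_TREE, dirs_exist_ok=True)
            print("Promoted test tree to production.")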

    I will give the /. crew this: while their spelling may be atrocious, their grasp of grammer poor, and their fact and dup checking next to non-existent, they will put major changes to the codebase into Banjo first, then after they've been abused put them into the main /. site.

    At least, some of the time.
    • by Blaine Hilton ( 626259 ) on Monday May 19, 2003 @11:44AM (#5992143) Homepage
      The biggest problem I see is that sometimes a change is so simple it seems better to just go direct to the liver server, then all of sudden everything is down and you can't figure out what went wrong with the change.
      • by tadas ( 34825 ) on Monday May 19, 2003 @11:57AM (#5992238)
        I could see the "liver server" going down after the weekend
      • ... sometimes a change is so simple it seems better to just go direct to the liver (sic) server, then all of sudden everything is down ....

        And that is EXACTLY my point. I've been bit by that too many times myself - "Oh, this is so simple I'll just roll it in". Then all goes to hell.

        True, there are things that won't break until they go live - that's life in a universe ruled by Murphy. But the idea is to reduce the likelihood of an error as much as possible.

        That is why, no matter how tempting, no matte

    • by gughunter ( 188183 ) on Monday May 19, 2003 @11:46AM (#5992160) Homepage
      while their spelling may be atrocious, their grasp of grammer poor

      "Grammer"? You're just trolling, aren't you?

    • That's one reason why web developers have a low reputation compared with other software developers. It may not necessarily be justified, or even the direct fault of the developers, but it's there nonetheless.

      It's almost like all sense of professionalism evaporates when you're working on a website. Database developers do some awesome work, and their specialty is accorded a high level of regard. Yet when they work on a website's backend they stumble and fall. I've seen Java programs that are the pinnacle of
      • OUCH!

        I think the problem is the immediacy of a web site - it is possible to make quick changes and see the results (as opposed to a lengthy edit/compile/link/load/swear cycle) so people get used to making quick changes. When you can make a quick edit and immediately see the results, you get sloppy - when you are looking at a 15 minute e/c/l/s cycle you are a bit more careful.

        On my project at work [p25.com] we do much of the work in TCL/Tk - and so you can make changes while the system is running. This is good, in
  • OMG (Score:2, Funny)

    by Lane.exe ( 672783 )
    OMFG I broke teh Intarweb this morning!
  • by CTho9305 ( 264265 ) on Monday May 19, 2003 @11:17AM (#5991921) Homepage
    In any properly managed environment, developers don't get to *touch* the production environment. If they do, it should be read-only. All changes are made in the dev environment (developers can do what they want), put into test (developers are seriously limited), and then finally into production. Prod should be a physically separate set of servers from dev & test.

    If stuff breaks on Mondays, either someone is skipping steps, or there is more going on.
    • If stuff breaks on Mondays, either someone is skipping steps, or there is more going on.

      You mean something like this?

      if (day == MONDAY) {
          die_you_scum_sucking_pigdog();
      }
    • Your server (in sig) is down. Should I check back on Tuesday?
    • Isn't this the point of the article? Thanks for pointing it out to the less discerning of the readers. Though, I must admit, the article was sparse on actual remedies or even any ideas in general.
    • That's fine and dandy if you have a large IT staff to do all that, but most of my friends have jobs where there are fewer than 3 people doing all the development, myself included.

      I enforce upon myself the requirement to run new code on a test server first, but a formal and managed development environment just isn't going to happen at small companies, or larger companies with small dev staffs.

      Then there is also the issue of things that are extremely difficult to model in a test environment. Complex failur
    • But they (we) always do.

      Speaking purely from my experience, half the problem is managers who don't understand that "I've finished coding" does not mean "I am ready to deploy."

      It's been years since I got time to do serious "pre-deployment" testing. The code deadlines are so tight...well, you can imagine.

      In school they always talked about post-production/predeployment. I've never worked in a place where that was anything but a dream. I've watched people edit LIVE CODE before, I mean the ACTUAL pages, while they're up! Used to be, compile time was so long, people exhaustively checked their code before trying to compile it. Now we compile to check for typos. Guess this is similar sloppiness.

      Just my 2 cents worth.

    • by jc42 ( 318812 ) on Monday May 19, 2003 @01:52PM (#5993172) Homepage Journal
      In any properly managed environment, developers don't get to *touch* the production environment.

      In the project that I'm working on, this is done. And it's the main reason that changes to the "live" web sites are usually disasters.

      I develop something new, test it very thoroughly on my test machines. It all seems to be working, so I hand it over to the guys running the live systems. They install it in different directories than I did (and won't ever tell me where they plan to install things, so I can't defend against this very well). They change the cgi scripts' search paths so they can't find some of the things they need (or find old versions). They install images in random new directories without changing the tags.

      Then they complain that my stuff doesn't work, and was obviously not tested thoroughly.

      Well, of course it wasn't. I have no way of knowing how it'll be mangled when it's installed in the live systems. I can't find out the directory structures of the live systems. But I get the blame when it's installed all wrong. ;-)

      • And they're right to say it wasn't tested thoroughly. Your test machines MUST be identical to the deployment machines; the directory structures, the version and patchlevel of the OS and critical apps...

        Saying "Well, it works on MY machine" is irrelevant - the only box that counts is production.

  • Manic monday eh? (Score:5, Interesting)

    by TheCarp ( 96830 ) * <sjc@carpa[ ].net ['net' in gap]> on Monday May 19, 2003 @11:17AM (#5991925) Homepage
    Sounds a lot more like lack of a proper development environment to me.

    I mean it's easy for it to happen. We had problems like this with our monitoring system (tho it was manic Friday, where someone would attempt to implement something before the weekend because, of course, the weekend is when you want pages the least, so you want to get anything that causes false pages fixed on Friday to maximize enjoyment of the weekend).

    Now we have development and test servers where things live BEFORE they go to production. I never had any idea that it would help so much until we finally implemented it.

    -Steve
  • Tesing (Score:5, Funny)

    by mgrennan ( 2067 ) on Monday May 19, 2003 @11:20AM (#5991942) Homepage Journal
    I guess these sites don't test anything. Maybe they are talking about small sites. I work for a big car company. We have three stages of testing.

    I'm not saying the article is wrong. The developers are still the biggest problem with our web site. It just doesn't always happen on Monday. Sometimes it takes till Wednesday to get through the system. :-)
  • by BrianUofR ( 143023 ) on Monday May 19, 2003 @11:22AM (#5991966)

    Just a thought: The rest of the world lumps all of us IT people together; the distinction between, say, a "developer" and "sysadmin" means nothing to my non-geek friends.

    I don't think stuff like this happens often to sysadmins or DBAs. How often do you come into work on a monday and decide to migrate to xfs because you read on slashdot over the weekend that SGI ported it to linux, and SGI is cool? Likewise, how often does an Oracle DBA decide on Monday to move some production tablespaces over to rawfs from cooked, because she read a whitepaper from Oracle on Saturday that talked about performance increases from raw filesystems?

    I've written a lot of code, and also sysadmin'd an awful lot of servers, and in my experience probably 90% of "production outages" are software changes--exactly like the article said--poor change control, etc etc. So, what's the point of dynamic multipathing, patching, dual power supplies, etc etc, when most problems occur because someone got excited and forgot a semicolon somewhere?

    Is it fair to say that sysadmins fix things and developers break them? What is different about a software engineer's brain than a systems engineers? Talk amongst yourselves :)

    • by Anonymous Coward
      You're joking right? The difference is that the developer's job is to make changes and the sysadmins's job is to prevent changes. Which goal is more likely to cause outages?
    • by Enzondio ( 110173 ) * <jelmore AT lexile DOT com> on Monday May 19, 2003 @12:04PM (#5992306) Homepage
      I don't think stuff like this happens often to sysadmins or DBAs. How often do you come into work on a monday and decide to migrate to xfs because you read on slashdot over the weekend that SGI ported it to linux, and SGI is cool? Likewise, how often does an Oracle DBA decide on Monday to move some production tablespaces over to rawfs from cooked, because she read a whitepaper from Oracle on Saturday that talked about performance increases from raw filesystems?

      Well, first of all those are all pretty big changes. No developer worth a damn would try to do something that massive over a weekend, by himself. Also, in general I think there is more possibility of a small change causing big problems in development work than in IT work, although this certainly does happen with IT, as I can attest to, having spent many a late night struggling with some server setup or what have you.

      Is it fair to say that sysadmins fix things and developers break them? What is different about a software engineer's brain than a systems engineers? Talk amongst yourselves :)

      Again I think it comes back to the job of a developer, not being harder per se, but perhaps being ... more experimental. Much of what is done in the IT world has been done by many other people many other times and so one can draw from that experience. This is true in the development world as well, but I believe to a lesser degree.
    • by teromajusa ( 445906 ) on Monday May 19, 2003 @12:06PM (#5992319)
      "Is it fair to say that sysadmins fix things and developers break them?"

      Not in my experience. I've seen sysadmins break software by installing security patches, changing server passwords, changing firewall rules, restarting servers at the wrong time, swapping hardware, tinkering with network topology, failing to follow proper startup or shutdown procedures, failing to perform necessary maintenance, etc. DBAs can cause just as much trouble tinkering with optimization, DB parameters, passwords, etc.

      Thing is, anyone involved in the software process in any meaningful way can break it if they do something stupid, and in my experience, stupidity is not a trait confined to a particular profession, culture, religion, or ethnicity but is shared generally by all.
    • Developers don't have beepers going off when the server goes down. All the sysadmins I have known basically live with a server lo-jack on their hip, unable to go outside a certain radius from the server (or a terminal).
    • Is it fair to say that sysadmins fix things and developers break them?

      Our last three major outages were as follows:

      1) DB Server failed in a way we cannot reproduce in development (recovered easily, but manually).

      2) Sysadmins didn't install the correct certs the second time (renewing).

      3) Major power outage to main office pointed out that all DNS was served from the main office.

      The last one produced a minor bug where someone had configured production servers to ignore (NOOP) a reasonably large percenta
  • if developers hurt a website more than viruses, then surely sites with more developers crash more, and sites with fewer developers crash less. Thus, the site with the most developers working on it has the most problems, and all sites with 1 developer have incredibly few problems. My personal site doesn't have enough complicated stuff on it to really "crash" per se, so obviously fewer developers means fewer bugs in that way. So, whose site has the most programmers working on it? Hmmm.... of course, the larg [microsoft.com]
  • Am I crazy? (Score:3, Insightful)

    by cruppel ( 603595 ) on Monday May 19, 2003 @11:23AM (#5991977) Homepage
    or do those developers need to possibly develop on a copy of the website not accessible to the public? I mean, it's not that hard to hit cp -R and transfer your updated functioning website to the primary directory... Maybe I'm the only one that doesn't tinker with things that people are hitting as I type.
  • Changes on Monday? (Score:4, Interesting)

    by borkus ( 179118 ) on Monday May 19, 2003 @11:24AM (#5991979) Homepage
    On retail and B2B sites, Monday is usually a busy day. Customers are rolling into their offices with articles to read, facts to research and stuff to buy. Out of the seven days of the week, it'd be the worst for making a change.

    On the other hand, I'm not sure incremental development is that much worse than large releases. You're either releasing a bug or two a week or waiting eight weeks to release all your bugs at one time.
  • Monday, The Death of Websites

    Yeah... Unless you are a /. subscriber!
  • by KFury ( 19522 ) * on Monday May 19, 2003 @11:24AM (#5991985) Homepage
    The tone of the article talks about shoot-from-the-hip developers acting irresponsibly, on impulse. They're taking a recognized and thoughtful practice and painting it as irresponsibility.

    Monday is the best time to implement changes to most sites. The irresponsible coder implements on Friday, when errors might not be caught, or fixed, until the next working day, after a full weekend of downtime, bugginess, or insecure behavior.

    But that wouldn't make for an interesting story. News flash: updating code often results in bugs that need to be fixed. When do the authors suggest we roll out new versions?
    • by kelleher ( 29528 ) on Monday May 19, 2003 @11:51AM (#5992201) Homepage
      When I was responsible for the Internet site of a rather large national bank, we only accepted change requests for Tuesday and Thursday mornings. It was just easier for the operators to get hold of a sober developer/administrator at 02:00 on a Tuesday or a Thursday than any other time. And getting a contact on the business side to ok a rollback that caused contract issues on the weekend was near impossible.
  • hmmmm (Score:3, Funny)

    by bilbobuggins ( 535860 ) <bilbobuggins@@@juntjunt...com> on Monday May 19, 2003 @11:26AM (#5992005)
    that's odd
    i tend to spend the weekend trying to think of excuses to avoid doing any work on monday morning, somehow i figured other people did the same thing

    p.s. our website hasn't gone down in 2 years, go figure

    • Re:hmmmm (Score:2, Flamebait)

      by outsider007 ( 115534 )
      that's odd
      i tend to spend the weekend getting blowjobs from hookers behind the circleK, somehow i figured other people did the same thing
      • "i tend to spend the weekend getting blowjobs from hookers behind the circleK, somehow i figured other people did the same thing "

        I wish my group of friends had thought of that.
  • Sounds about right (Score:2, Insightful)

    by Stargoat ( 658863 )
    This really isn't surprising. The most dangerous person to a network is the person who has the administrator password.

  • by problemchild ( 143094 ) on Monday May 19, 2003 @11:28AM (#5992014)
    While working for a large nameless Telecoms Company, I and my fellow Contractors had an unwritten rule to "hold off" on all "good" ideas generated in meetings etc on Monday & Friday. Almost inevitably they would all be canceled within a couple of days. Not subjecting ourselves to post/pre weekend madness saved ourselves a ton of work and helped us bring the project in on time!!
    • While working for a large nameless Telecoms Company, I and my fellow Contractors had an unwritten rule to "hold off" on all "good" ideas generated in meetings etc on Monday & Friday. Almost inevitably they would all be canceled within a couple of days. Not subjecting ourselves to post/pre weekend madness saved ourselves a ton of work and helped us bring the project in on time!!
      Wouldn't that just relocate the "weekend madness" to Tuesday and Thursday?
  • My theory is much simpler. Everyone on the Western half of the planet is going back to work and they really don't want to be working, so they *gasp* - hit the internet. I also believe people are more likely to be home at New Year's and Christmas, in addition to the developers.

    Well thought out article. I put less thought in my article, which is why it is at Slashdot.

  • Weekend Update (Score:4, Informative)

    by Rick the Red ( 307103 ) <Rick.The.Red@gmaBOYSENil.com minus berry> on Monday May 19, 2003 @11:33AM (#5992060) Journal
    Websites crash Monday because they're usually updated over the weekend.

    They talk as if the developer has an idea over the weekend, then comes in Monday morning and implements this idea without any testing. But if that were true, the websites would crash Tuesday. I mean, really, how many of you think these guys are really making the changes Monday morning and the websites are thus breaking Monday morning? Any changes you see Monday morning were loaded over the weekend, and are probably the result of all last-week's work. Whatever ideas anyone got over the weekend will be coded and tested this week and installed next weekend; they won't show up until next Monday at the earliest.

    • Re:Weekend Update (Score:5, Interesting)

      by riflemann ( 190895 ) <riflemann AT bb DOT cactii DOT net> on Monday May 19, 2003 @12:41PM (#5992610)
      The correct procedure for an update should be at minimum:

      • Lightbulb above head over the weekend (ding!)
      • Over the following week, research the change, check impact on existing systems, come up with a maintenance strategy, document it, inform people, test it in a lab, plan the implementation, develop a rollback procedure (see the sketch below).
      • Implement change early the following week - never on a Friday, preferably not on Thursday.
      • Watch throughout the week for problems.
      Anything less and you don't deserve to be in that position.
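
      A minimal sketch of one way to make that rollback step cheap (my own illustration, not riflemann's procedure; all paths are hypothetical): deploy each release into its own directory and point a symlink at the current one, so rolling back is a single symlink flip.

          # Deploy into timestamped release directories and flip a "current" symlink.
          # Illustrative sketch only; paths are hypothetical.
          import os
          import time

          RELEASES = "/var/www/releases"
          CURRENT = "/var/www/current"   # the web server's document root points here

          def switch(target):
              """Atomically repoint the 'current' symlink at target."""
              tmp = CURRENT + ".new"
              os.symlink(target, tmp)
              os.replace(tmp, CURRENT)

          def deploy(build_dir):
              """Install build_dir as a new timestamped release and switch to it."""
              release = os.path.join(RELEASES, time.strftime("%Y%m%d-%H%M%S"))
              os.rename(build_dir, release)
              switch(release)
              return release

          def rollback(previous_release):
              """Point the site back at a known-good earlier release."""
              switch(previous_release)
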
      • Correct is not necessarily what is mandated to you from above. Sure, it'd be nice to be able to take two weeks for a single idea, but reality is a different story. By two weeks later you are probably expected to be working on a new project or at the very least moving on...
    • Also, more importantly, what developer is coherent on a Monday morning, even if he/she is in?
  • by UncleAlias ( 157955 ) on Monday May 19, 2003 @11:38AM (#5992102) Homepage
    Log in.

    Cup of coffee.

    Browse online forums.

    Read witty remark.

    C|N>K

    Change keyboard. Curse profusely.
  • I, as IT director, would fire my IT staff if they pulled this. Considering that I have some systems with uptimes in YEARS, a few going on a DECADE, and overall the _entire_ network has worked 24x7 for the last 10 years. Our business operations aren't even Internet based (we just happen to use it -- primarily for email) and the operations of the systems aren't life-critical. We just like our computers/networks to work.

    Of course I'm the one that implemented a testing domain (live on the Net) for just such pur
    • by HeghmoH ( 13204 ) on Monday May 19, 2003 @12:40PM (#5992606) Homepage Journal
      Yeah, that's great, your entire network has amazing reliability.

      Oh, but you don't do anything interesting on it.

      Have you considered that running complicated software that your business relies on could reduce reliability simply because it's actually doing something more interesting than serving internal files and transferring e-mail?
    • I, as IT director, would fire my IT staff if they pulled this. Considering that I have some systems with uptimes in YEARS, a few going on a DECADE. . .

      Way to keep those systems up-to-date and current, mister IT director. Are you planning on firing your security staff too?

      - A.P.
    • Considering that I have some systems with uptimes in YEARS, a few going on a DECADE,...

      Any black hats out there want to know about a system that hasn't seen any security patches in the last 10 years?

      You, my friend, are a fool. There is one school of thought that says "if it ain't broke, don't fix it." There is another school of thought out there that says "stay up to date on your patches." Somewhere in the middle there is a happy medium. I myself prefer to wait a couple of months for the general publ
  • I'm going to start by pointing out that the first sentence of the article said "UK websites", not "US". Obviously that means the people across the Pond need to work on this.

    And what a surprise that when people roll out changes sometimes things break. Oh My God. Have you cured cancer yet?

    And I'd say more often than not the "problems" on Monday are caused by bug fixes that developers are rushing on to production to fix bugs that were found over the weekend. And, as we all know, sometimes bug fixes ski

  • by mwillems ( 266506 ) on Monday May 19, 2003 @11:44AM (#5992144) Homepage
    Seems to me we are talking about several different things here.

    First of all, presumably it is a good thing that people think, and get inspiration. Mon-Fri 9-5 is not the best time for thinking - this is the time for meeting deadlines, sitting in meetings, answering the phone, putting out fires, and so on. The only time most of us have to actually sit and think is the weekend. Personally, I think that should be encouraged.

    The next step is implementing what you have dreamt up. Obviously, most ideas fail - ask any patent officer. And obviously, implementing a new idea without checking with colleagues, drawing it out in a spec, getting that spec approved, then prototyping, testing, tuning is not ideal either. These procedures were invented for good reasons - not just to constrain the creative mind. This is where most developers fail - not in coming up with ideas, but in being disciplined in implementing them. I hear "we cannot plan ahead, it does not work like that for us" all the time from my developers - this is always a misconception, and seems to me simply a combination of inexperience, laziness and inability... nothing that cannot be fixed!

    Michael

  • by ckessel ( 607295 ) <ckessel@tripwi r e .com> on Monday May 19, 2003 @11:48AM (#5992173)
    Sheesh, next thing you know they'll start spouting nonsense like "burning the midnight oil leads to more bugs."
  • This type of thing might happen on small websites, where developers work on code, unit test, then publish. But any large code-base will have a cycle where things are tested first, and then rolled. Typically these rolls are scheduled for the best possible time, which often is Monday morning. Everyone is in house, and you've got a whole week to fix anything that went wrong.

    M@
  • ohh come on.. (Score:5, Insightful)

    by Squarewav ( 241189 ) on Monday May 19, 2003 @12:02PM (#5992286)
    It doesn't matter if the website was made on Saturday or Wednesday. Any time a website comes out with new code, it's going to fuck up in the first few days. There is just no cheap way to test a website with a full load of users, all with different OSes, web browsers, and connections. How many times has Slashdot crapped out when they update the slashcode?
  • QA (Score:5, Informative)

    by Arpie ( 414285 ) on Monday May 19, 2003 @12:03PM (#5992297) Homepage
    Having worked at a company that grew from a 1999 Internet startup with 5 employees (me being the only programmer, working alongside two consultants) to a profitable Internet company with 40 employees in 2003 (including the two former consultants), I've seen quite a bit of change in the IT procedures.

    We have an eight-person tech team now (manager, programmers, support, QA). Whereas before we programmers would just use a development environment somewhat similar to the production (live) environment, test it a bit, deploy at will and monitor if anything went wrong, things have progressed a lot. Now we develop in a development environment as close to the production one as possible, then this is released to a test environment (also as close as possible to the production one) to be tested by QA, and that is finally released to the production (live) environment after it all tests OK (including regression testing).

    Moreover, all the code changes are now under CVS, and we have automatic tools for monitoring the site, emailing errors, etc. QA is also done by separate people. IMHO it is conceptually flawed to allow the developers to do the final testing, by definition. (Though of course this is not always possible for cost reasons, it should be a goal).
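
    A minimal sketch of the kind of automated site check and error email described above (my own illustration, not Arpie's actual tooling; the URLs, address and SMTP host are hypothetical placeholders), in Python:

        # Fetch a few pages and mail a report if any of them fail.
        # Illustrative sketch only; urlopen raises on HTTP errors and network trouble.
        import smtplib
        import urllib.request
        from email.message import EmailMessage

        PAGES = ["https://www.example.com/", "https://www.example.com/login"]
        ADMIN = "ops@example.com"

        def check_pages(pages):
            failures = []
            for url in pages:
                try:
                    with urllib.request.urlopen(url, timeout=10):
                        pass
                except Exception as exc:
                    failures.append("%s failed: %s" % (url, exc))
            return failures

        def mail_report(failures):
            msg = EmailMessage()
            msg["Subject"] = "Site check: %d problem(s)" % len(failures)
            msg["From"] = ADMIN
            msg["To"] = ADMIN
            msg.set_content("\n".join(failures))
            with smtplib.SMTP("localhost") as smtp:
                smtp.send_message(msg)

        if __name__ == "__main__":
            problems = check_pages(PAGES)
            if problems:
                mail_report(problems)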

    The quality of our site is much better now. Problems almost always only arise when people want to bypass QA or force things through for emergencies.

    IMHO, what is needed is:
    1. Professionalism by the developers.
    2. Testing, testing, and testing -- by the developers.
    3. QA, QA and QA -- by someone other than the developers!
    4. Managers must know the test/QA process should never be bypassed -- this unfortunately is probably the hardest point. :-(

    I taught a couple of ecommerce classes for MBA students and had them actually do hands-on development (in a limited sense, of course) so they could get an appreciation of this process. Hopefully if some of them become managers they will remember that and not try to shortcut the due process.
    • Re:QA (Score:3, Insightful)

      by Hentai ( 165906 )
      4. Managers must know the test/QA process should never be bypassed -- this unfortunately is probably the hardest point. :-(

      Of course it is. If the code breaks, it's the developer's fault, even if the Manager broke the process. The Manager can FORCE it to be the developer's fault, so what reason does the manager ever have to accept blame?

      When you have a choice between waiting 3 days for a fix that has to be done TODAY, and risking breaking something that absolutely MUST NOT BE BROKEN, it is your fault, a
  • by xchino ( 591175 ) on Monday May 19, 2003 @12:07PM (#5992335)
    .. Have never defaced my site with the goatse guy nor deleted critical files. They may not work great at first (or at all) but they're in no way malicious. More dangerous than hackers or virii? I think not.
  • this is when developers come in and implement ideas they had over the weekend.

    I don't know about some of these development teams they are talking about but around here, you don't just implement "ideas" you might have had over the weekend. "Hey! Wouldn't it be cool if it did this... !" If it's not a requirement, it doesn't go in.

    If that were the case... Wouldn't it be cool if Slashdot loaded a random pr0n image with every article posting! :-) Sure it's a cool idea, but I think Slashdot would end up Slashd
  • "Looks like someone has a case of the Mooondays"
  • by xA40D ( 180522 ) on Monday May 19, 2003 @12:08PM (#5992343) Homepage
    There must be a course somewhere for developers - how to piss off sysadmins. Highlights:

    1. Make changes last thing on a Friday.
    2. Or before a 2 week holiday.
    3. Change Management does not apply to developers.
    4. CVS is for wimps.
    5. And if you must use CVS, wait a week before committing fixed code.
    6. Don't bump version numbers.
    7. Don't update init scripts.
    8. Except if they are correct.
    9. If anyone is aware of what you are up to... go to lunch.
  • Stupid article (Score:5, Insightful)

    by Synn ( 6288 ) on Monday May 19, 2003 @12:10PM (#5992357)
    When you make changes to websites, they sometimes break. Anytime you introduce change into a stable system, you open the door for instability.

    And generally business websites don't see as much traffic on the weekends, so naturally the weekend is the time to make changes.

    So wow, it's no shock that Mondays are when you're most likely to see problems with a website. But these problems and hiccups are the price you pay for progress.

    If you don't want to chance any disruption in your life, then I guess you should never change. Otherwise, get over it.
  • by crovira ( 10242 ) on Monday May 19, 2003 @12:16PM (#5992408) Homepage
    You need to get the code monkeys off the production box.

    They need a Dev environment. And THAT's ALL they touch. They deliver their code to UAT.

    QA needs 2 environments:
    - Unit Acceptance Testing (UAT), and all bugs go back to Dev
    - Integration Testing (IT), and all bugs go back to Dev (or you need SysAdmins who need to hack the OS, middleware &| environment)

    Production, where NOTHING is allowed until it's gone through UAT & IT.
  • What a gyp! (Score:4, Funny)

    by Jonboy X ( 319895 ) <jonathan,oexner&alum,wpi,edu> on Monday May 19, 2003 @12:21PM (#5992451) Journal
    You mean, other companies' web programmers get to take weekends?!? Man, that must be nice...
  • a solution (Score:2, Funny)

    by cr@ckwhore ( 165454 )
    Clearly we need a 5-day waiting period on developers. How many more of our websites must go down before Congress gets the message? We should also launch a class action lawsuit against schools and staffing agencies for negligence in putting dangerous developers in the hands of unsuspecting companies. It's the Republicans' fault.

    [/sarcasm]

  • I am an employee of the state, and the government of Georgia is cheap! That, and our old-as-creation computers give us hell at random intervals, and let me tell you, they don't discriminate based on the days of the week.
  • by crashnbur ( 127738 ) on Monday May 19, 2003 @12:39PM (#5992593)
    Peter: When you come in on Monday and you're not feeling real well, does anyone ever say to you, "Sounds like someone has a case of the Mondays?"

    Lawrence: No. No, man. Shit, no man. I believe you'd get your ass kicked saying something like that.

  • bah (Score:2, Funny)

    by Anonymous Coward
    Desktop applications sucked because the developers were smart and the users were stupid.

    Web applications suck because the developers are stupid and the users are smart.

    Solution: have the desktop developers design web sites and fire the webdevs! I mean, I've been waiting WAY too long for a boss key on e2 [everything2.net], anyway.
  • by doomicon ( 5310 ) on Monday May 19, 2003 @01:01PM (#5992791) Homepage Journal
    Is this really a case of "Weekend Inspiration", or a case of management pushing changes that haven't been thoroughly tested?

    I find it quite disturbing how these companies are blaming downtime on developers. This means that:

    a. You have no change control over your environment, and developers can do as they please, hence poor management.

    b. Developers are implementing changes that haven't been thoroughly tested. Again, poor management.

    Technology and competition isn't moving so quickly that you cannot take the time to use a test/qa environment.
  • by phliar ( 87116 ) on Monday May 19, 2003 @01:06PM (#5992827) Homepage
    What kind of mickey-mouse operation is it that allows someone's mistake (whether management's or a developer's) to take down their web site? Have they heard of QA and testing? You'd have to be insane to allow any changes to a production system without it being tested on the test system first. At all the places I've worked (I'm a back-end hacker, using C/C++ and Java), anyone who made a change to the production system without following all the test procedures (regression tests and QA signoff) would be canned in a second. (Unless it's the VP of Engineering -- but that's another story.) Or are these personal sites they're talking about?
  • Two little words: (Score:2, Flamebait)

    by Wakko Warner ( 324 ) *
    Change controls.

    Any place that doesn't use them deserves all the shit it causes itself.

    - A.P.
  • In other news, more people die in hospitals than at McDonalds, so if you get appendicitis, you should go to McDonalds.

  • Change controls? (Score:2, Interesting)

    by HarvDog ( 70933 )
    Not only does my company have an extensive change control system in place, but until very recently, we had a "no changes on Monday" policy. Since Monday is our busiest day, it made good sense. In fact, we couldn't even run network cabling for new servers on Mondays. There were little signs all over the building that said, "A Monday without change is like money in the bank!" Kinda cheesy, but it seemed to work.

    We recently dropped the policy, but I'm not sure if there has been any fallout from doing so.
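
    A minimal sketch of enforcing that kind of "no changes on Monday" freeze at the start of a deploy script (my own illustration, not HarvDog's actual system; the frozen-day set is just an example), in Python:

        # Abort a deploy when the change window is closed.
        # Illustrative sketch only; the actual policy is whatever your shop decides.
        import datetime
        import sys

        FROZEN_WEEKDAYS = {0}  # Monday (Python's date.weekday(): Monday == 0)

        def change_window_open(today=None):
            today = today or datetime.date.today()
            return today.weekday() not in FROZEN_WEEKDAYS

        if __name__ == "__main__":
            if not change_window_open():
                sys.exit("Change window closed: no production changes on Mondays.")
            print("Change window open - proceed with the deployment.")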
