In The Works: Windows For Supercomputers 705

Robert Accettura writes "According to ZDNet, Microsoft may be feeling threatened by Linux gaining ground in the High Performance Computing (HPC) arena. As a result, they have formed an HPC group to bring Windows to these systems. It makes mention of how clustered computing may be a target. I guess the only thing better than crashing 1 computer at a time is crashing an entire room full at once."
This discussion has been archived. No new comments can be posted.


  • by Mike Buddha ( 10734 ) on Tuesday May 25, 2004 @06:03AM (#9245764)
    I guess Bill thinks it's time to slow the world's fastest computers to a crawl. Apparently they aren't crashing enough, either.

    • nah, just a PR move. (Score:5, Interesting)

      by twitter ( 104583 ) on Tuesday May 25, 2004 @06:34AM (#9245930) Homepage Journal
      This is a PR move. Clustering is just another thing that Windows can't do. All the Linux clusters popping up everywhere, especially at universities, demonstrate this to PHB and "influential" types. It tends to tarnish the whizz-bang image M$ has carefully built among the clueless. Microsoft knows that PHBs will never run a cluster, a Hotmail or anything other than Word. By making one or two announcements, they can convince the clueless that M$ is all they will ever need.

      • by Anonymous Coward on Tuesday May 25, 2004 @06:49AM (#9246005)
        What do you mean, Windows doesn't cluster? Of course it clusters! In fact, Microsoft go to great lengths to tell you how to cluster your Microsoft Windows Server to achieve the best performance. There's the:

        • Primary Domain Controller
        • Secondary Domain Controller
        • A backup Secondary Domain Controller (In case the first one fails)
        • The Exchange Server
        • The second Exchange Server (Because the first can't handle the load)
        • The backup Exchange Server (In case one of the two primary Exchange Servers fails)
        • The IIS Server..
        • The fallover IIS Server..
        • The fallover fallover IIS Server
        • The MS SQL Server
        • The MS SQL Server backup
        • Two or more file servers
        • The Backup server, running Arcserve or similar (Because even an MCSE can tell you NT Backup is utter turd)
        • The Active Directory Server
        • The backup Active Directory Server

        See? All those computers in multiple clusters. Microsoft are always ahead of the game!
        • by BigBlockMopar ( 191202 ) on Tuesday May 25, 2004 @08:34AM (#9246794) Homepage

          I love this whole idea of Windows on a supercomputer! Just think of how fast a spam drone it would make!

          Windows' only technical asset is a (relatively) good GUI.

          And, as we all know, *ALL* mainframes, supercomputers and servers absolutely must have GUIs!

          After all,

          • GUIs are less resource-intensive than a CLI (but why would you care, having invested millions to get a couple of teraflops, about squeezing every last little drop of power out of it?)
          • GUIs save you time and effort! Rather than a simple shell, Perl, $whatever script to do things, have an operator point-and-click for that human touch!
          • GUIs, by virtue of being based on less code and with less features than a CLI, are inherently more secure. Microsoft, as we know, is the field's foremost expert in security and reliability.

          Memo at Los Alamos Nuclear Laboratory:

          "Please be advised that Deep Blue will be rebooted this afternoon at 5:00 PM in order to complete the installation of Service Pack 11. All jobs currently running and queued will be lost, even those which have already accumulated several years of processor time. We expect Deep Blue to resume normal operation sometime in early August. Thank you for your cooperation, LANL Informatics Department"

          • by Anonymous Coward
            Especially if you have old or weird hardware, e.g. an Aureal sound card.

            I have had entire clusters go down due to OS error.

            As a Linux advocate I would appreciate it if we could all just focus on promoting Linux rather than putting down other operating systems. Constant attacks against Windows are completely unnecessary; attacks against Linux from MS are necessary for them because that is SOP for MS, but two evils do not make a good. We don't have to be like them. We don't have to use FUD as a tactic.

            I
      • by stiggle ( 649614 ) on Tuesday May 25, 2004 @06:59AM (#9246053)
        Windows doesn't cluster? So what about the Windows-based systems that have ranked in the Top 500 list, then?

        In the November 2003 list....
        At 68 - a Windows based system at Cornell from Dell with 640 processors (it originally started out at 320 on the list with 252 processors).
        At 128 - a Windows based system in Korea with 400 processors.

        So Windows doesn't cluster?

        • The Cornell cluster was donated. I don't know anything about the Korean cluster.

          The press release contained drivel such as

          Using a high volume, industry standard operating system such as Windows is an advantage to businesses and universities that want to implement production-quality HPC seamlessly throughout their organizations. Microsoft offers solutions for traditional message-passing computing and loosely coupled, "master/worker" applications, which organizations are implementing using Microsoft's .NET
    • by AKnightCowboy ( 608632 ) on Tuesday May 25, 2004 @06:58AM (#9246048)
      I guess Bill thinks it's time to slow the world's fastest computers to a crawl. Apparently they aren't crashing enough, either.

      Well, unless Bill's going to introduce a version of Windows that doesn't have a Windows interface, WTF is the point? How many Beowulf nodes have you seen even plugged into a KVM? Windows is a stupid choice for a headless compute node just as Linux is a stupid choice for a home desktop.

      • by jarich ( 733129 ) on Tuesday May 25, 2004 @07:08AM (#9246111) Homepage Journal
        Remote Desktop works fine for this type of application. You log into the box as needed, do what needs doing and then log out or disconnect.

        Windows has come a long way since you knew you'd see the blue screen of death twice before lunch. On decent hardware it's very stable.

        Denying the current stability of Windows is no different than Bill and Co. denying the stability and power of Linux. It's pointless and it makes you look out of touch.

        • Stability (Score:4, Insightful)

          by swerk ( 675797 ) on Tuesday May 25, 2004 @07:40AM (#9246336) Journal
          Not to fan the flames, but get real. I run a homebrew GNU/Linux box (still a 2.4 kernel, I'm lazy) at home, and XP at work. At work I can get almost a week out of a boot before Windows chokes on itself and needs to restart. At home, the local power grid and my lack of a UPS determine how often I restart.

          Sure, Win2000 and XP are more stable than 95/98 or the travesty that was ME. So it has "come a long way". But let's not be silly and try to call it as stable as GNU/Linux. One crash a week, hell, even if it were once every six months, still seems pretty unstable to me. If that's an "out of touch" point of view, so be it. An OS shouldn't just decide it's had enough and flake out; I don't care how long it's been running.

          Anywho, clustering something even the tiniest bit unstable just seems like a funny idea to me. We've all seen Windows behavior when too much stuff is open or a flaky driver has impaired its ability to operate: things gradually failing, the cursor suddenly trapped in just a portion of the screen, swap thrashing as though it were a sign of the apocalypse... The mental picture of racks and racks full of convulsing, imploding Windows boxen when somebody fires up the wrong version of Quicktime is just priceless.
          • Re:Stability (Score:3, Insightful)

            by k4_pacific ( 736911 )
            If one Windows computer has a failure a week, two nodes mean two failures a week, etc. A 500-node Windows supercomputer would experience a failure roughly every twenty minutes.
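The arithmetic behind that estimate is easy to check (a quick sketch; the one-failure-per-week rate per node is the commenter's assumption, not a measured figure):

```python
# Sketch of the cluster failure-rate arithmetic above. The once-a-week
# per-node failure rate is the commenter's assumption.
MINUTES_PER_WEEK = 7 * 24 * 60   # 10080
NODES = 500

# If each node fails independently about once a week, the cluster as a
# whole sees failures NODES times as often.
mean_minutes_between_failures = MINUTES_PER_WEEK / NODES
print(round(mean_minutes_between_failures, 1))   # -> 20.2
```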

        • by cbiltcliffe ( 186293 ) on Tuesday May 25, 2004 @08:02AM (#9246521) Homepage Journal
          Windows has come a long way since you knew you'd see the blue screen of death twice before lunch. On decent hardware it's very stable.
          I run six Linux machines, and one Windows one at home. The Linux machines are running on a mixed bag of mongrel hardware, from an old Compaq Deskpro Pentium 166, to a 466 Celeron. (Old stuff, I know...)
          One is just a motherboard, processor and hard drive sitting in and around a motherboard box. This is the database server for my website.
          They run with virtually no maintenance, and only ever need to reboot if I do a kernel upgrade (rarely, on a server machine) or get a power failure. (I know....I'm an idiot for not having UPS's on my servers. Well, it's a home network....sue me.)
          My Windows machine just got a fresh install of Windows XP on a brand new 120GB drive for a LAN party this past weekend. The install was done the Wednesday before. Three days old.
          When I got to the LAN party, it wouldn't boot, as the entire registry was corrupted. One piece of it was actually completely missing.
          After an hour and a half of screwing around, doing a recovery install of Windows from my CD, and generally wanting to take the Flak Gun from UT2004 to my system, I finally got it to the point where I could actually play something.
          This is on hardware that runs Linux just as well as the rest of my machines.
          And don't even get me started about what happens to XP when you install SP1.......

      • by Yorrike ( 322502 ) on Tuesday May 25, 2004 @07:10AM (#9246129) Journal
        "just as Linux is a stupid choice for a home desktop."

        Works for me, and hundreds of thousands of others.

      • by sfe_software ( 220870 ) * on Tuesday May 25, 2004 @09:26AM (#9247382) Homepage
        Windows is a stupid choice for a headless compute node...

        On that I will agree; unless there is a very specific reason to use Windows for a cluster (or server or ...), I can think of no reason to have an OS that requires a video card (and drivers), and prompts the first time you boot without a pointing device connected, on a system that requires no interface or direct interaction.

        ...just as Linux is a stupid choice for a home desktop.

        On this I don't agree. For you perhaps. For me even, in most cases: I run Linux (and FreeBSD) on my servers, and Windows 2000 and XP on my desktops (laptop is dual-boot XP/Fedora). However, there are plenty of good reasons to go with Linux (or BSD) on a desktop system.

        I would agree that as a pre-install, or on a desktop for a user who doesn't know Linux (and will be angry that they can't run the latest Windows-based spyware-riddled game) it's not a great choice. But I wouldn't just generalize that "Linux is a stupid choice", because there are times where Linux is a good choice.

        I've set more than a couple of desktop users up with Linux -- specifically, unsophisticated people who only need to check email and browse the web, and are using older machines (that I donated in most cases). And in all of these cases, the user really didn't notice any difference (and they constantly ask if the latest virus they heard about on the news affects them).
  • I guess then the computer wouldn't be so super :o)
  • by troon ( 724114 ) on Tuesday May 25, 2004 @06:04AM (#9245767)

    I hope those guys have good firewalls.

    • by Anonymous Coward
      Mainframe on the Internet? Windows, Linux, OpenVMS, AS/400, VAX... doesn't matter, if you put it on the Internet, you're a moron and your entrails should be extracted slowly with rusty pliers through your eye sockets, end of story.
  • by foidulus ( 743482 ) * on Tuesday May 25, 2004 @06:06AM (#9245775)
    "It looks like you are building a cluster, would you like me to tell you how Microsoft can bring it to its knees?"
  • ... will they crash more quickly or more often than mine does?
  • hijack ware (Score:5, Funny)

    by wpiman ( 739077 ) on Tuesday May 25, 2004 @06:06AM (#9245777)
    Great - when the cluster gets hijacked by spyware and the like, it can send out 3 million spam emails an hour as opposed to the 5000 a Dell does now.
  • Windows on HPC? (Score:4, Insightful)

    by ifoxtrot ( 529292 ) on Tuesday May 25, 2004 @06:06AM (#9245779)
    Is it just me or does the notion of a GUI on high performance computers sound a bit pointless? I thought the point of HPC was to crunch masses of numbers - not something Joe Average will want to do any time soon. So what's the point of a pretty (and resource hungry) Windows interface?
    • Re:Windows on HPC? (Score:2, Interesting)

      by Anonymous Coward
      The point is that Microsoft can probably work very hard to castrate the gui windows out of Windows(tm) and end up with smallish kernel or micro-kernel architecture. They would then own the architecture and could bring in any interface technology they desire. And you could compile your non-gui code with VC++! Almost as useful as Linux on a Beowulf cluster only with large licensing fees.
      • The point is that Microsoft can probably work very hard to castrate the gui windows out of Windows(tm) and end up with smallish kernel or micro-kernel architecture. With Internet Explorer.
      • Re:Windows on HPC? (Score:5, Interesting)

        by Lonewolf666 ( 259450 ) on Tuesday May 25, 2004 @06:38AM (#9245951)
        Actually, they have done this with XP Embedded. We have tried this in a project for a windows-controlled device, and you *can* build a rather small Windows XP that has your program as "shell" instead of the usual Explorer. Maybe not quite as small as Linux in text mode, but it will do.
        The claims about Internet Exploder being inextricably connected to the OS were pure FUD for the antitrust suit.
      • Re:Windows on HPC? (Score:3, Interesting)

        by Mr_Dyqik ( 156524 )
        I thought they told the EU that they couldn't even remove mediaplayer from windows, and they told the US DOJ that they couldn't remove IE?
    • Re:Windows on HPC? (Score:5, Insightful)

      by beacher ( 82033 ) on Tuesday May 25, 2004 @06:19AM (#9245835) Homepage
      One thing alone will kill this idea... licensing costs per proc. Linux really shines when you want to keep the TCO down, due to the fact that you can get away with zero licensing costs. (Note the "get away with" - I know that most HPC/grids are installed and supported and there are support costs, but that's another argument.)

      Imagine if Google had to pay Microsoft a recurring license for their server farm and be forced to keep in lockstep with Microsoft's Licensing costs. Think there'd be a higher push for advertising and more intrusive ads? I do.
    • Re:Windows on HPC? (Score:5, Insightful)

      by Anonymous Coward on Tuesday May 25, 2004 @06:23AM (#9245857)
      Insightful?? Narrow-minded and uninformed, more like it! This is Slashdot, news for nerds, not "let's poke fun at Windows for everything they do, even if it is useful".

      Now for the insight - Windows XP Embedded has a mode to run headless (that is, without a monitor or screen - the thing above the keyboard that looks like a TV where the pictures change or, for you "Windows" haters, the black screen with the green writing on it!)
      Also look at the Windows Storage Server: no support for a graphic display on the box it runs on.

      Windows may not be your cup of tea, but let's look at the good points and bad points when things like this are posted, and use facts if we want to make fun.

      Bye Bye...

      P.S. Sorry to jump on you, Mr Trot, but you were the first poster to make a dumb statement that got modded insightful, though I'm sure there are more deserving victims of my rant. Guess I had a s#$% day!
    • Re:Windows on HPC? (Score:3, Insightful)

      by Anonymous Coward
      Is it just me or does the notion of a GUI on high performance computers sound at bit pointless.

      There's nothing wrong with having a graphical *front-end* to a HPC system. That's normal.

      The real problem is that the Windows OS is largely inseparable from its GUI and, as it currently stands, is way too bloated to run individual HPC nodes efficiently and effectively. MS could come up with various solutions depending on the underlying architecture of the HPC system but no matter how crappy the final solution

      • Re:Windows on HPC? (Score:5, Insightful)

        by BlowChunx ( 168122 ) on Tuesday May 25, 2004 @07:43AM (#9246351)
        The real problem is that the Windows OS is largely inseparable from its GUI and, as it currently stands, is way too bloated to run individual HPC nodes efficiently and effectively.

        I do CFD for a living. When I started my new position a couple of years back, I convinced my boss to move to Linux because (Linux + ifc) was 50% faster than (Win2k + visual fortran) for the single processor codes we were running.

        I can see Microsoft trying to pare that difference down, but it will still be prohibitive when coupled with licensing costs.
    • Re:Windows on HPC? (Score:4, Interesting)

      by tymbow ( 725036 ) on Tuesday May 25, 2004 @06:42AM (#9245970)
      I'm a Windows dood at times, but I can't see this working. The only way it might work is if the nodes are not traditional Windows installs, but rather are the core kernel and support only - i.e. they have no GUI or any of the fluff, just enough to get on with doing what they have to do. The management or user land machines (or whatever the correct term is in HPC land) of course would have some GUI components. Mind you, I've wished to see a stripped down version of Windows without the GUI (or being able to start the GUI as an option a la startx) for ages. Maybe this is the beginning... Good luck is all I can say. They have a lot of work to do if it is ever to be credible.
      • Re:Windows on HPC? (Score:4, Informative)

        by Foolhardy ( 664051 ) <csmith32&gmail,com> on Tuesday May 25, 2004 @11:52AM (#9249484)
        You can run Windows without the GUI. (WARNING: this will make Windows fairly useless) Find the key "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems\Required" This lists the subsystems that are started automatically. Remove 'Windows' from the list and delete the 'kmode' key. Now, upon restart, the win32 subsystem won't be started; the computer will stall because it doesn't have anything to do. (winlogon may crash because the GINA depends on win32)
        The main problem with running without win32 is that there are (almost) no applications that can interface directly to the native system call interface (ntdll.dll) without using win32. This includes most services.
        Some practical examples of Windows without win32 include:
        The second part of the first phase of setup - the text mode part in 50-line VGA mode where you partition disks - has the full kernel with all the bus drivers running, but no win32.
        The recovery console.
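The registry tweak described in that comment can be sketched like this (a hypothetical helper, shown for illustration only; the key path is as given above, and on a real system the edit would be made with regedit or Python's winreg on a Windows box - do not try this on a machine you care about):

```python
# Hedged sketch of the tweak above: the "Required" value under the
# SubSystems key is a REG_MULTI_SZ (a list of subsystem names started
# at boot). Removing 'Windows' stops the win32 subsystem from loading.
# strip_win32 is a hypothetical helper, not a real Windows API.
KEY = r"HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems"

def strip_win32(required):
    """Return the Required list with the 'Windows' entry removed."""
    return [s for s in required if s.lower() != "windows"]

# On an actual system you would read and rewrite the value with the
# standard-library winreg module (Windows only), e.g.:
#   import winreg
#   key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
#                        r"SYSTEM\CurrentControlSet\Control"
#                        r"\Session Manager\SubSystems", 0,
#                        winreg.KEY_READ | winreg.KEY_SET_VALUE)

print(strip_win32(["Debug", "Windows"]))   # -> ['Debug']
```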
    • by jesterzog ( 189797 ) on Tuesday May 25, 2004 @07:00AM (#9246063) Journal

      Well I agree with you. I do think it more likely that Microsoft would at the very least turn off the graphical part of Windows, remove it completely, or possibly re-write it from scratch.

      What I really don't understand is why it would be necessary or smart to brand such a product as Windows at all. Windows means graphical user interface, and the way it's presented ties quite closely to desktop use. It definitely doesn't mean the remote administration that's likely to be required for an HPC, and trying to remotely administer a Windows box is usually quite clumsy compared with a unix box unless you drop a lot of the traditional Windows UI stuff that's often so tied into its operation.

      When I think of Windows, and I don't think I'm alone, one of the first impressions that comes to mind is a relatively klunky, monolithic, GUI-dependent operating system that spends a lot of time drawing pretty front-end pictures. This almost certainly isn't an accurate picture of what's actually happening all the time, and it's not to say that Windows couldn't be adjusted to work in other ways. But it's a first impression.

      You can at least argue that the graphical side of things is good for usability on the desktop (even though usability realistically takes a lot more than pretty pictures), but why on earth would Microsoft want to continue that image into an HPC market? Surely they have completely different customers in that market with different goals that likely don't include chewing processor time on pretty pictures for the UI.

      To me at least, it'd make much more sense for Microsoft to simply create a new operating system here from scratch (or buy a company or whatever they do), and call it something that's not Windows. It could be Microsoft HPC Server, for instance, and be completely independent from Windows. Microsoft can then claim that their new OS specialises in HPC tasks, and it'll also give them an independent OS product to push in the future if either it or MS Windows collapses.

      • What I really don't understand is why it would be necessary or smart to brand such a product as Windows at all.

        That's easy. Sun, for example, sells workstations to its server customers on the premise that you can develop your app on small, cheap machines, then when it's ready deploy it straight into your data centre without needing to change anything. Solaris is built from the ground up for this; fundamentally your code neither needs to know nor care that its threads are being scheduled on a uniprocessor U
  • Windows HPC (Score:5, Funny)

    by LittleBigLui ( 304739 ) on Tuesday May 25, 2004 @06:08AM (#9245783) Homepage Journal
    Because every Node needs a Windowing System in Ring 0.
  • by Noryungi ( 70322 ) on Tuesday May 25, 2004 @06:09AM (#9245788) Homepage Journal
    Facts:

    • Bill Gates sees a demo of the Lisa. Microsoft Windows is announced shortly afterward.
    • Bill Gates takes a look at the increase in Internet users. Shortly afterward, memo to all of Microsoft: Windows 95 must be Internet-ready.
    • Bill Gates takes a look at Google (primary target) and Beowulf clusters. Microsoft announces HPC working group.


    Coincidence? Of course not, this has been a strategy since the days of BASIC. Microsoft copies all the good ideas. Of course, it makes a bad and buggy copy, but, hey, that's what a marketing dept is here for, right?
    • Please man -
      get your facts straight.

      First off, the whole GUI environment didn't originally come from Apple (Lisa, or anything else) - it came from Xerox PARC.

      Your second statement is nothing but a very good business strategy: give the users what they want.

      Your third statement is unsupported. Do you really think that they JUST NOW started working on this?

      And finally, your last statement - simple rebuttal: oh yeah, I've never EVER come across any buggy Macintosh/Unix/Linux (insert OS name here) code.
  • by blackcoot ( 124938 ) on Tuesday May 25, 2004 @06:10AM (#9245790)
    i think billy & co finally figured out how to get big enough iron for longhorn >D
  • BSOD (Score:5, Insightful)

    by Mr_Silver ( 213637 ) on Tuesday May 25, 2004 @06:11AM (#9245797)
    I guess the only thing better than crashing 1 computer at a time is crashing an entire room full at once

    All we need now is a BSOD joke and I'd swear that every time I read Slashdot it induces a timewarp back to 1998.

  • Our friend Bill... (Score:2, Interesting)

    by ptlis ( 772434 )
    ...certainly seems to want a finger in every pie.
  • Proof (Score:5, Insightful)

    by WordODD ( 706788 ) on Tuesday May 25, 2004 @06:13AM (#9245805)
    This action from Microsoft is proof positive that they are taking notice of the recent accomplishments of Linux and are trying to counter them with strides of their own in areas that are not their specialty. If nothing else, this is positive for everyone, because not only will Linux continue to improve and develop on its own, but now both MS and Linux will develop to compete with one another, making the overall computing experience better for everyone involved. I know everything MS does is looked down upon by the /. majority, but this really should be seen as "a good thing".
    • Re:Proof (Score:3, Insightful)

      by fwarren ( 579763 )
      Good - every programmer they take off of Longhorn or .NET gives us a little more breathing room for open source software to improve and take marketshare/mindshare from Microsoft.

      Do I sound bitter? I guess it is because I think I should own my computer. Paying to license software is, for the most part, a game, especially if there is built-in obsolescence. I also expect there should be a way to open up a document I created 10 years ago.

      I do not mind the thought of living in a world where Microsoft does not hold

  • by allanj ( 151784 ) on Tuesday May 25, 2004 @06:13AM (#9245807)

    The same as ever - whenever Windows is mentioned, lots of wisecracks about crashing are posted. Did you imagine they'd port Win95 or Win3.11 to HPC? Duh. They'll port something like WinXP or W2K3, and guess what - those are quite stable OSes. Of course you CAN make them unstable, but that goes for PenguinWare as well...


    Ah well, I better put on my flamesafe suit - I forgot to criticize Microsoft...

    • I see your point, though I personally disagree that it's the same problem as when you get an unstable Linux (you can fix a Linux box; with Windows, all you can do is wipe and reinstall).

      What really bugs me though is that this NT5 kernel that everyone loves so much has half a dozen services that should be in user space. And before I get flamed because "NT is a micro kernel": it isn't. It started out as one, but then they just shoved all and sundry into kernel space to improve performance. OK, it's not the DMM (Da
      • by GeckoX ( 259575 )
        Which problem exactly requires one to wipe and reinstall windows?

        I currently have 3 XP boxes and one 2k box I use regularly. None have been installed more than once, though they've all had their fair share of issues.

        (OK, my laptop had to be re-installed, but it was due to an IBM driver issue that wiped my drive, nothing to do with windows, purely hardware)

        These boxes have been in use anywhere from 1 year to 4 years.

        Your first point is pure FUD.
        Your second point, while correct technically, is wrong becau
    • I just have one question.

      The next time I download something from the internet on Windows 2000/2003 or XP, should I check the "yes" box for "Always trust content from Microsoft"?

      That is all that needs to be said on Microsoft security, and why I feel free to post about Windows crashing, security and annoyances.


  • What the fsck (Score:5, Insightful)

    by drizst 'n drat ( 725458 ) on Tuesday May 25, 2004 @06:14AM (#9245811)
    Why in the world would someone want to run a bloated GUI-based operating system on hardware designed specifically to provide services (servers) to its customers? Unix is great in this respect as (at least for the most part) running xdm and serving up a graphical interface was intended primarily for end users requiring execution of applications in multiple windows. Unix servers used to NOT run xdm (or any graphical engine) for the purpose of streamlining and providing efficiency and better utilization of system resources. Windows (even the current Win2003) is far too large for use in a high performance computing environment. Bill my man... get a clue... Windows isn't for everything!
    • Why in the world would someone want to run a bloated GUI-based operating system on hardware designed specifically to provide services (servers) to its customers?

      I think you vastly overestimate how much CPU a Windows box uses to display that "Press CTRL-ALT-DEL to Login" screen.
  • 1) You know that 5 million dollar box in the corner? It's not working now. Press OK to format all your terabytes of meteorological data.

    2) Why did the chicken cross the road? Because your supercomputer is hosed. Press OK.

    3) D'OH! Press OK.
  • Licenses (Score:2, Insightful)

    by ByteSlicer ( 735276 )
    If you run Windows on a 1000-node supercomputer, do you need a volume license? Also, MS will probably ask for a per-user license for running Office...
  • And Microsoft could build software into its desktop version of Windows to harness the power of PCs, letting companies get more value from their computers. It's a technology that's applicable to tasks such as drug discovery and microchip design.

    sounds a lot like seti@home [berkeley.edu], folding@home [stanford.edu], or the grid [grid.org] project. Another example of embrace and extend. It's definitely going to be interesting when PCs are networked for spare CPU cycles as a normal everyday event. Maybe they can use all that CPU power to get some

  • how many mice will that require?

    This guy named Darien [msdn.com] is apparently promoting "Windows Mainframes." Apparently a "Windows Mainframe" uses the cost-effective [microsoft.com] *cough* "Windows Datacenter Edition." The Unisys ES7000, one of the ways you can buy 'Datacenter', starts at $35,000. Yeah! Cheap! And that gets you four processors... "mainframe" indeed.

    Google decided to use extremely large clusters of single-processor PCs and Linux.

    Microsoft will need to offer some type of very low cost, gui-less, remotely manageab
  • Been around for a while - I have no idea how it works or what it is used for.

    However it pops up a lot in MSDN when I am looking for help.

  • Can you imagine a massive cluster of servers infected with the RPC or Sasser Worms?

    I'm sure Microsoft will probably develop this and market it as a network performance testing tool.
  • by zensonic ( 82242 ) on Tuesday May 25, 2004 @06:21AM (#9245847) Homepage
    Windows has evolved into mainly an x86 platform. Hardware as common as it gets. On the contrary, HPC is all about custom/very specialised hardware running a very specialised application built for one purpose alone.


    I find it natural that MS tries its luck in the HPC world, but Windows surely does not fit the bill.

    • by joib ( 70841 )

      On the contrary, HPC is all about custom/very specialised hardware running a very specialised application built for one purpose alone.


      Why don't you take a look at the Top 500 list instead of guessing? Yes, there are a small number of supercomputer-only architectures (vector processors mostly, like the NEC SX and Cray X1), but most are off-the-shelf RISC, IA-64 or x86 machines. For example, 5 of the top 10 computers are either x86 or IA-64, i.e. in theory Windows could run on them.


      I find it naturally tha
  • I've been laughing like a madman for 5 minutes on the train because of this. Now I'm getting weird looks from all the other passengers. Thanks /. No offense to Gates, but I doubt the takeup of this will be high, given Microsoft's reputation for processor resource abuse. The Windows source must look like this: while(extraprocessingtimeisfree) { doafewforloops }
  • i guess gates, etc. are suffering major tech envy over the fact that windows is still pretty much laughed at when it comes to serious computing. all the csi (computational sciences and informatics) labs at my university run linux now (they used to be indy workstations, now they're beefy dell boxen) and except for the professors' personal machines and the office machines, every single machine in the cs department runs some kind of unix.

    ignoring the fact that the cs department has several important people w
  • by miquels ( 37972 ) on Tuesday May 25, 2004 @06:26AM (#9245878) Homepage
    ... codename "Domino"?
  • Why? (Score:4, Insightful)

    by darnok ( 650458 ) on Tuesday May 25, 2004 @06:26AM (#9245879)
    Most supercomputer users aren't going to want to plonk down literally millions of dollars in software licences to Microsoft - they'd rather be spending this money on either plugging in more hardware or on building and refining their analysis engine.

    What could MS conceivably offer that would counter this?
  • Tough work (Score:3, Informative)

    by Jesrad ( 716567 ) on Tuesday May 25, 2004 @06:30AM (#9245909) Journal
    The NT kernel only supports up to 32 or 64 CPUs, IIRC. I think it's because the scheduler has one centralised list of CPUs to dispatch threads to, and it quickly becomes a performance bottleneck: when you have too many threads to dispatch to too many CPUs, this list is completely locked. The Mach kernel has a thread list per CPU, and dispatches new threads or moves existing threads in a distributed way, so there's no bottleneck (hence MacOS X's performance on clusters?). I could be completely wrong here, though; correct me if you know better. So my guess is that MS will have to redo the scheduler of the NT microkernel. I don't know about the VM subsystem...
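The contrast the parent describes can be sketched as a toy model. This is not actual NT or Mach code, and all names here are made up for illustration: one run queue behind a single lock that every CPU contends on, versus one queue (and one lock) per CPU.

```python
from collections import deque
from threading import Lock

class GlobalRunQueue:
    """Every CPU dispatches from one shared queue behind one lock."""
    def __init__(self):
        self._lock = Lock()
        self._queue = deque()

    def enqueue(self, thread_id):
        with self._lock:            # all CPUs serialise on this single lock
            self._queue.append(thread_id)

    def dispatch(self):
        with self._lock:
            return self._queue.popleft() if self._queue else None

class PerCPURunQueues:
    """Each CPU owns its queue, so the dispatch hot path never crosses CPUs."""
    def __init__(self, ncpus):
        self._locks = [Lock() for _ in range(ncpus)]
        self._queues = [deque() for _ in range(ncpus)]

    def enqueue(self, cpu, thread_id):
        with self._locks[cpu]:      # contended only by that one CPU's traffic
            self._queues[cpu].append(thread_id)

    def dispatch(self, cpu):
        with self._locks[cpu]:
            q = self._queues[cpu]
            return q.popleft() if q else None
```

In the per-CPU layout, 64 CPUs take 64 independent locks instead of serialising on one, which is exactly the bottleneck the parent comment suggests the NT scheduler hits.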
  • by ethnocidal ( 606830 ) on Tuesday May 25, 2004 @06:39AM (#9245957) Homepage
    I'm amazed that people confuse the two. Can there really be zealots with their vocal organs sufficiently inserted into their nether regions who believe that Windows and the GUI used by Windows are one and the same?

    I'd invite you to look at Xbox as an example, and the operating system which that runs. There is no requirement for Windows to include a friendly GUI, animated characters, BSODs or any of these other 'hilarious' /. stalwarts.

    • by Alioth ( 221270 ) <no@spam> on Tuesday May 25, 2004 @07:02AM (#9246074) Journal
      The kernel doesn't necessarily need a GUI. However, as it stands, there's an awful lot on Windows that cannot be done on the command line and must be done on the GUI. For example, with the standard Windows install, it is not possible to change the computer name from the command line without downloading a utility to allow you to do it. It is not possible to kill a process from the command line without getting a Resource Kit utility from Microsoft. It is not possible to add or remove a network service (not a system service - I'm talking about the services you add in the network connection control panel, things like file and print sharing services) and after days of Googling I've still not found a way of installing or uninstalling one of these services using the command line.

      Windows is fine as a desktop OS (even if issues like this make automated rollouts a bear), but it is inappropriate as a server, since so many things can be done trivially only through the GUI.
      • by tasinet ( 747465 ) on Tuesday May 25, 2004 @07:24AM (#9246223)
        " For example, with the standard Windows install"
        Uhm.. *Which* standard Windows install? Xp pro? 2K sp35? 2K3 sp69?

        "It is not possible to kill a process from the command line without getting a Resource Kit utility from Microsoft."
        Not true. XP PRO ships with tasklist.exe and taskkill.exe.
        The first lists your processes and the second kills them. The second is quite useful, too, as you can mass-exterminate processes by username or other filters. Entirely useful if you want to delete all the spyware & other-useless-crap your computer boots up with.
        • by Alioth ( 221270 ) <no@spam> on Tuesday May 25, 2004 @07:48AM (#9246389) Journal
          OK, I stand corrected on the tasklist.exe/taskkill.exe utilities. The main thrust of my point still stands - there are many things that are trivial to do over an SSH session on many non-Windows operating systems that can only be done via the GUI on Windows, such as the aforementioned network service installation/configuration (netsh won't do it, unfortunately - I thought I was onto a solution by using netsh dump to save the settings in a text file, but that's about the only part of the configuration it seems unable to manipulate).

          For changing the computer name you must either write your own program to do it in C or VB, or download a utility to do it. Same goes for adding things like new network adapters - you need to use separate tools that come with a Microsoft resource kit. These are things that should be trivially possible from the command line in a default install, but they don't even come with Windows Server 2003 let alone XP Pro.

          Then, another issue for servers. Say you're writing a program that takes input from multiple sources - a socket, a named pipe, a serial port, and some weird USB device. To process data on these four streams you must have different code to handle and dispatch input on each one: select() for sockets, PeekNamedPipe for named pipes, WaitCommEvent for serial ports, and probably some vendor-specific thingy for the custom USB device. On proper server operating systems, the API is consistent enough that all this input is presented in the same way and you can use select() for all four streams, reducing the complexity of your server program and therefore the possibility of bugs, cutting out the need for four threads (and the potential race conditions if you make a programming error), and needing only a single thread to look for stuff happening. It's as if the people writing different parts of Windows didn't talk to each other, and each had to independently come up with a new way of doing it. There are other examples where the API could have been made much simpler and more consistent.
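The single-wait-point model described above can be sketched in a few lines. This is a minimal illustration, using a socketpair() to stand in for two arbitrary file-descriptor-backed streams (on Unix-like systems the same select() call would also accept pipes or serial-port descriptors):

```python
import select
import socket

# Two connected endpoints standing in for two independent input streams.
left, right = socket.socketpair()
right.sendall(b"event for the left stream")

# One blocking wait covers every source; no per-device-type wait API.
readable, _, _ = select.select([left, right], [], [], 1.0)

# Only `left` has pending data, so a single thread can service it.
data = readable[0].recv(1024)

left.close()
right.close()
```

The point is that one thread and one wait primitive cover all the sources, instead of one wait API (and potentially one thread) per device type.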

          Since the original version of NT was incompatible with DOS anyway, and DOS had to be emulated, Microsoft could have swept away all the cruft when they made NT - but instead they insisted on making something even more kludgy. Don't even get me started on the NT GINA (I had to write one) and the appalling lack of documentation. We had a very expensive (ca. $40,000 US) support contract with Microsoft so we could get support when writing our GINA (we had to write a total replacement due to the nature of the system we were contracted to build). We ended up talking one to one with NT developers - but guess what, the person who'd written this stuff had since left and it was more or less undocumented even inside Microsoft. We ended up having to almost reverse-engineer the MS GINA to find out what was going on to make our GINA set the right stuff on login.

          I'm sorry, but when faced with stuff like this, all I can conclude is Windows isn't designed or meant to be a server OS, regardless of how Microsoft markets it. It's fine as a desktop OS (I use it on the desktop daily) but that's where it should stay. A Macintosh, the quintessential desktop system, has an OS more suitable for servers these days.
      • by That's Unpossible! ( 722232 ) * on Tuesday May 25, 2004 @07:59AM (#9246497)
        Perhaps Microsoft should form a group of people to work on changes necessary to Windows to get it to run on HPC?

        Oh, right... that's what this fucking slashdot article is about.
      • by GamerGeek ( 179002 ) on Tuesday May 25, 2004 @08:14AM (#9246590)
        I have a feeling that the Microsoft approach to HPC will be significantly different than traditional systems. It is entirely possible that they could create a stripped-down operating system, like something you would find in the embedded market, to create drone computers. These drone computers would not have a GUI, would not have any programs on them, and would do nothing but be a slave to some proprietary remote execution protocol. Then they would release a "Windows HPC server" which would administer all the drone computers with a GUI interface. They might even be able to get the drones to PXE boot from the server. To integrate with this product there would be HPC.NET, with which you could write programs to harness the power of the grid/cluster. It might even be that the HPC system itself is distributed as a .NET runtime. Microsoft's approach to HPC will not be what we know as clustering/grid computing today. It will be an integrated Microsoft proprietary system that will be simple to get into and hard to move away from.
  • Crashing (Score:4, Funny)

    by Andy Smith ( 55346 ) on Tuesday May 25, 2004 @06:48AM (#9246002)
    I guess the only thing better than crashing 1 computer at a time is crashing an entire room full at once
    Yeah because Windows crashes all the time for me. Oh yes, every day. Every hour!

    Oh no, hang on, it doesn't. Ever. I boot up in the morning, switch between video and photo editing software hundreds of times throughout the day with regular use of MSIE and Eudora as well, and then I shut it down at night without it having crashed once. Every day. For years.

    Old versions of Windows crashed a lot. Current versions don't. Fact.

    This is part of the reason why Linux isn't gaining mainstream acceptance fast enough. Linux advocates talk about all these imaginary flaws in Windows and people out here in the real world think "well that isn't my experience at all". The effect is to create a distance between regular people and Linux advocates, which in turn pushes the mainstream acceptance of Linux further and further away. Linux needs to be seen as "the other big operating system", not some niche software used by a minority who seem to have a totally different experience of Windows than the rest of us.
    • Re:Crashing (Score:4, Insightful)

      by HeghmoH ( 13204 ) on Tuesday May 25, 2004 @07:22AM (#9246205) Homepage Journal
      The problem is this. Some people, like yourself, have no problems with Windows, and it works great. Some people, like my girlfriend, have Windows installations that crash all the time. So yes, Windows can be perfectly stable, if you're lucky. (I should also point out that shutting the machine down at night shouldn't count; decent computers have sleep modes and never have to be rebooted just to make them stop using electricity.)

      With Linux or OS X or whatever, you don't have this kind of inconsistency. Basically everybody who uses them, ignoring people who run experimental kernels or unsupported drivers, never has them crash, even when the computers are up for months at a time. You don't have to be lucky or do anything special. Yes, Windows is better, but it still has a long way to go. When my girlfriend's PC stops crashing a couple of times a week (running XP) then I'll reconsider.
      • Re:Crashing (Score:5, Insightful)

        by ViolentGreen ( 704134 ) on Tuesday May 25, 2004 @07:59AM (#9246498)
        The problem is this. Some people, like yourself, have no problems with Windows, and it works great. Some people, like my girlfriend, have Windows installations that crash all the time. So yes, Windows can be perfectly stable, if you're lucky. (I should also point out that shutting the machine down at night shouldn't count; decent computers have sleep modes and never have to be rebooted just to make them stop using electricity.)

        With Linux or OS X or whatever, you don't have this kind of inconsistency. Basically everybody who uses them, ignoring people who run experimental kernels or unsupported drivers, never has them crash, even when the computers are up for months at a time. You don't have to be lucky or do anything special. Yes, Windows is better, but it still has a long way to go. When my girlfriend's PC stops crashing a couple of times a week (running XP) then I'll reconsider.



        I think it has more to do with the quality of the hardware than with Windows itself. On my old Compaq computer, Windows crashed all the time. On the machine that I built, Windows is very stable. The difference is that I know what hardware is in the case, and I trusted the hardware before I put it in.

        Both XFree86 and KDE were unstable on my old Compaq machine as well. I had no problems with the kernel, though.

        OS X is built to run on Apple's hardware, so they don't have to worry as much about third-party hardware. Almost all Linux users that I know build their own machines and know what hardware is supported by Linux and what is not.

        I may be off here but that is my take on it.
  • A little vaporous? (Score:5, Insightful)

    by RetiredMidn ( 441788 ) * on Tuesday May 25, 2004 @06:55AM (#9246035) Homepage
    Do I detect a pattern here (emphasis mine)?

    Although Microsoft is a comparative newcomer to the market, the company
    could bring several advantages:

    Machines running Windows HPC Edition could seamlessly connect to desktop computers...

    Microsoft could create a specialized version of its widely praised programming tools...

    Microsoft could also adapt its popular SQL Server database software to run on high-performance systems...

    And Microsoft could build software into its desktop version of Windows to harness the power of PCs...

    Well, I guess it's time for everybody else to abandon this space, because Microsoft has it all covered.

  • by Vellmont ( 569020 ) on Tuesday May 25, 2004 @07:15AM (#9246162) Homepage
    A lot of people seem to be concentrating on the "Windows crashes a lot" idea. That's not quite a fair judgement of Windows anymore. The only time I've had problems with Windows 2000 and above is with poorly written drivers or anti-virus software. As long as you choose hardware with proven drivers and don't run anti-virus software (firewall it, run minimal services, and no IE), Windows should be very stable.

    With that said, I think there are other problems with Windows as a supercomputing cluster OS. The first I can think of is the lack of a low-bandwidth interface. Linux you can ssh into to get results, control processes, etc.; Windows requires high-bandwidth Terminal Services. In other words, it's harder to control remotely.

    Other people have brought up the licensing costs, but I'm sure MS would offer huge deals just to get their foot in the door.

    I think the biggest problem is just historical and cultural though. The scientific community has a 30 year history with Unix, is familiar with programming in that environment, and has a lot of legacy code that's written for it. They just aren't going to take to a windows environment easily at all.
  • A few points (Score:3, Insightful)

    by Anonymous Coward on Tuesday May 25, 2004 @07:25AM (#9246225)
    It seems that most people here don't know the following:

    There is already a kind of high-performance Windows server - it's called Windows 2000 Datacenter, and it runs on boxes like the HP Superdome, mainly for big-assed databases. In general these servers are treated like mainframes - they aren't rebooted; they don't need to be!

    You don't need to have direct access to the GUI of a windows box in order to use it. Usually you connect using an RDP client, a la X server.

    Even mainframes have a local console, and these are often GUI in nature; it doesn't mean that the machines are slow.

    Please stop this mindless Microsoft bashing - bash them if they deserve it, but as this product isn't available yet, it seems a bit premature to slag it off.
  • by saha ( 615847 ) on Tuesday May 25, 2004 @07:28AM (#9246249)
    ...I install HPC Windows. We run a few SGIs, our biggest being the SGI Origin 3000. We'll probably shift to either a Linux Beowulf cluster or Apple G5 Xserve cluster in the future, since the type of problems we need to solve don't typically need a single image machine using ccNUMA. I doubt Microsoft will be coming up with anything that will be able to run as a large single image for some time now and by then the competition would have moved forward even more. This is Windows HPC Vaporware so competitors will waste time and divide their resources trying to be Windows HPC compatible on their hardware. They did it with Windows NT in the beginning when they supported MIPS, PowerPC, Alpha.... The best strategy would be to ignore Windows HPC, but I know there is a gullible hardware manufacturer born everyday that will buy into Microsoft's sales pitch.
  • by Ianoo ( 711633 ) on Tuesday May 25, 2004 @07:34AM (#9246281) Journal
    Sure, there are x86 clusters. But there are also an awful lot of IBM supercomputers using Power chips, HP supercomputers using PA-RISC, heck, even Apple clusters using PowerPC, SGI machines, Sun supercomputer nodes, and so on. There are a large number of strange and mysterious chips built explicitly for supercomputing that would never be seen in any other kind of use. There are also a large number of different interconnect technologies.

    Since Windows is a closed source operating system, are Microsoft volunteering to port Windows HPC to whatever architecture you happen to come up with? What about the bugs that occur when they write this port? How long is it going to take to get Windows stable on an unusual architecture if only Microsoft can change the source but only you can do the testing?

    At least with a custom kernel or Linux you can work on the system yourself until it's up and running, and if you're in the business of installing and running clusters/supercomputer, you can probably afford to pay programmers to write an operating system for nodes in that cluster/supercomputer.

    Last I heard, the Windows NT 5.x kernel (2000, XP, 2003) was not even endian-clean any more, let alone portable to RISC or VLIW architectures. Why do you think it has taken Microsoft so long to port to x86-64 and Itanium?

    Or are Microsoft going to "mandate" that we use x86 processors for all our cluster needs in the future?

  • cornell hpc (Score:4, Informative)

    by bloosqr ( 33593 ) on Tuesday May 25, 2004 @08:25AM (#9246707) Homepage
    Actually the Cornell "theory" center [cornell.edu] has, or at least had, a few reasonably large Windows-based clusters. I did a postdoc over in the CS department ages ago and ported some code over from the Linux side. You can basically ssh into the cluster and standard make works (actually I seem to recall having to switch the "/" to "\"). The cluster was something like 4-processor boxes glued together w/ Myrinet, w/ some sort of queueing system. They also had a slew of 2-processor boxes. My experience w/ them was that most of the "crashing" had more to do w/ the Myrinet drivers and the MPI implementation (which was a commercial MPI). Once those stabilized it ran as well as a normal Linux cluster, i.e. you submitted jobs, they ran :) I went to a day-long "windows HPC" conference back then which was a bit entertaining (btw, the clusters were basically free for Cornell). People only had good things to say about the cluster, but I think it was a bit opportunistic. One thing that was quite obvious was that if machines are free, people will run/port to anything, *but* when it came to using your own (or grant) money to buy a machine - even over at Cornell, which to be honest had quite a stake in "windows based computing" - people would go for a Linux-based cluster (which had already popped up in quite a few departments at that time)

    -bloo
  • by fnordboy ( 206021 ) on Tuesday May 25, 2004 @11:33AM (#9249227)

    One thing that may be a serious hindrance to Microsoft edging into the supercomputing market is that people who do serious supercomputing are fairly reactionary. Note that I'm referring to the people who burn the vast majority of the CPU time at the US's national supercomputing centers - astrophysicists, plasma physicists, molecular dynamicists, people who run QCD (quantum chromodynamics) simulations - and also those who work at government labs doing simulations of nuclear bombs and such. Take a look at the various supercomputing center websites - NCSA [ncsa.edu], SDSC [sdsc.edu] and PSC [psc.edu] - and look up the amount of computer time various groups use. Those doing the most computing, and getting the most science done, are doing truly old-school supercomputing.

    One of the main reasons for this is that these people (I'm one of them) write and use simulation codes that have a VERY long lifetime - in astrophysics there are codes that are 20-30 years old and still in wide use. This is because these codes first and foremost have to solve whatever equations you're interested in CORRECTLY, and second, solve them FAST. People base their academic reputations on the results of these codes, and are very interested in making sure that they get the right answer. In some fields (astrophysics being the one I know the most about) people can spend 10 years developing and adding science to a code.

    Now, this is a reasonable thing on a unix machine. From the user's point of view, one supercomputer really isn't all that different than another. You just need to figure out where the various libraries and compilers are, but once you do that, you type 'make' and are up and running. So if Microsoft wants to break into the traditional supercomputing market, in order to entice hard-core computational scientists into trying their products they'll have to make it so that codes written for unix systems can be ported over essentially transparently - have the same libraries, the same types of compilers, etc. etc. Frankly, that doesn't seem like a likely thing to me. But then again, I'm one of the crusty old school big-iron computational physicists, so my opinion might not be all that forward looking. All I really care about is what platforms let me get my job done the easiest, and that seems to be the various unix and unix-like systems out there right now.

The fancy is indeed no other than a mode of memory emancipated from the order of space and time. -- Samuel Taylor Coleridge