Mainframe Programming to Make a Comeback?

ajw1976 writes to tell us that IBM has released a series of announcements today "introducing many new software tools, academic programs, and support for outside developers." The new releases are designed to help entice programmers and businesses back to the mainframe. From the article: "The announcements, according to analysts briefed on them in advance, signal a shift from defense to offense in the company's mainframe strategy. Last month, I.B.M. introduced a machine priced at $100,000, about half the previous starting price for its mainframes, which can run up to several million dollars. The announcement of the low-end mainframe was made in China, which I.B.M. regards as a promising market for the machines."
  • mainframes rock (Score:5, Insightful)

    by yagu ( 721525 ) * on Monday May 08, 2006 @07:36PM (#15289204) Journal

    Cool, I can dust off my old bell bottom pants and platform shoes. I knew they would come back!

    All seriousness aside, I started out coding for mainframes, mostly assembly. To this day some of the most screaming, cool programs I ever wrote were on mainframes (back in '76 I wrote, in assembly, an online trouble-logging system to replace a paper one).

    I did lots of COBOL programming and maintenance for a major telco (since absorbed by one of the increasingly corrupt larger pseudo-telcos). COBOL is not the most exciting language, but I've never since seen the throughput and data integrity of those days matched (and I still love Unix as my first-choice environment).

    Which brings me (and us) to what I think works in favor of mainframes having a chance at a major comeback:

    • TCP/IP stack now built in and assumed. In the old days, if you wanted to communicate with other architectures it was a RPITA. With Internet protocols everything is easy. Now you can take the raw power and integrity of the mainframe and lace it up to foreign technology.
    • IBM's OSS/Linux participation. I don't know if IBM has completely jumped on this bandwagon, but they've made contributions, and you can "do" Unix on their mainframes. And they have cool passthrough mechanisms; how cool is it to write a shell script that can access VSAM data? If you don't know: very cool. (See the sketch after this list.)
    • Mainframes historically have gi-huge support organizations built up around them. They have backups to backups. And it's all managed for you.
    • Mainframes are a single point of support; you all know you're using the same configuration (well, to the extent you're in the same virtual system on a mainframe).
    • Mainframes aren't Windows (sorry, had to put that in for the troll mods).

    This is a partial list. I've long lusted for the raw power of mainframes with the standard support and the nimble Unix utilities.
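
    For the curious, that VSAM passthrough can also be done from Java via IBM's JZOS toolkit, which mirrors the C library's fopen interface. A minimal sketch, assuming JZOS (com.ibm.jzos) is on the classpath; the dataset name is invented and the ZFile details are from memory, so treat it as a sketch, not gospel:

        import com.ibm.jzos.ZFile;

        // Sketch: sequentially read records from a (hypothetical) VSAM KSDS on z/OS.
        // Only runs on a real mainframe with the JZOS toolkit available.
        public class VsamDump {
            public static void main(String[] args) throws Exception {
                // fopen-style options: read, binary, record mode
                ZFile vsam = new ZFile("//'PROD.CUSTOMER.KSDS'", "rb,type=record");
                try {
                    byte[] record = new byte[32760];         // generous record buffer
                    int len;
                    while ((len = vsam.read(record)) >= 0) { // one record per call
                        // host-side records are EBCDIC; decode for display
                        System.out.println(new String(record, 0, len, "IBM-1047"));
                    }
                } finally {
                    vsam.close();
                }
            }
        }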

    • Re:mainframes rock (Score:4, Insightful)

      by EmoryBrighton ( 934326 ) on Monday May 08, 2006 @07:46PM (#15289259)
      I have heard a lot about mainframes (heck, I work for the gov and we rent a $3M+/year Unisys mainframe for certain sensitive databases) ... but I have never seen statistics that show *how much better* those mainframes are...

      Does anyone know of any (non VENDOR) studies & comparisons vs traditional computer architectures?
      • Re:mainframes rock (Score:5, Insightful)

        by AuMatar ( 183847 ) on Monday May 08, 2006 @07:57PM (#15289305)
        Mainframes are just too different a world. It's not just performance (in fact, the performance difference is due only to an insane number of cores and memory, not an inherently better chip), it's reliability. Some IBM mainframes have CPUs that execute every instruction twice in parallel (on different cores of the chip). If the results don't match, the machine turns the chip off as defective and shunts the program to a backup (a toy sketch of the idea follows below). That kind of thing just doesn't exist in traditional architectures.

        Although in the age of clusters, I don't know if mainframes can make it. Clusters have the same edge at much lower cost. I think we're more likely to see some of the OS advantages of mainframes get ported down.
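
        A toy model of that duplicate-and-compare idea, purely conceptual -- real lockstep happens in silicon, and the names here are invented for illustration:

            import java.util.function.LongBinaryOperator;

            // Toy sketch of lockstep execution: run each "instruction" on two
            // cores, compare results, and shunt work to a spare on disagreement.
            public class LockstepToy {
                public static void main(String[] args) {
                    LongBinaryOperator coreA = (a, b) -> a + b; // primary core
                    LongBinaryOperator coreB = (a, b) -> a + b; // shadow core
                    LongBinaryOperator spare = (a, b) -> a + b; // hot spare

                    long r1 = coreA.applyAsLong(2, 3);
                    long r2 = coreB.applyAsLong(2, 3);

                    if (r1 == r2) {
                        System.out.println("Results agree: " + r1);
                    } else {
                        // Disagreement: treat the pair as defective and rerun on
                        // the spare, invisibly to the running program.
                        System.out.println("Mismatch -- fencing core, spare says: "
                                + spare.applyAsLong(2, 3));
                    }
                }
            }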
        • Umm I've yet to see a cluster with anything even approaching the kind of data throughput mainframes achieve though.
        • Your cluster doesn't have the memory bandwidth that mainframe does and the network latency puts it behind in performance too.
        • Re:mainframes rock (Score:3, Interesting)

          by Maxo-Texas ( 864189 )
          I work with AS/400s -- I've been on IBM hardware since the System/3.

          With an AS/400, you are talking about 2 hours of unscheduled downtime per year.

          Windows computers win because they are cheap -- not because they are fast or reliable.

          Also, mainframes are typically built to deal with phenomenal amounts of data in ways that Intel-architecture PCs just can't handle.

        • Re:mainframes rock (Score:3, Informative)

          by Chanc_Gorkon ( 94133 )
          The cores have almost NOTHING to do with the throughput on a mainframe. It's all the channel controllers. Instead of dedicating CPU power to communicating with the outside world, it's all handled by the controllers themselves. In a way, channels were parallel processing back in 1989: dedicated processors for I/O, math coprocessing, and general-purpose work. Mainframes still rock my world... wish we had one.
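
          The closest everyday analogy to a channel processor is asynchronous I/O: hand the request off and keep computing. A rough sketch with standard java.nio, purely illustrative (the file path is made up):

              import java.nio.ByteBuffer;
              import java.nio.channels.AsynchronousFileChannel;
              import java.nio.file.Paths;
              import java.nio.file.StandardOpenOption;
              import java.util.concurrent.Future;

              // Rough analogy to channel-based I/O: the "CPU" (main thread) queues
              // a read and keeps doing useful work while the I/O completes elsewhere.
              public class ChannelAnalogy {
                  public static void main(String[] args) throws Exception {
                      AsynchronousFileChannel ch = AsynchronousFileChannel.open(
                              Paths.get("/tmp/bigdata.bin"),   // hypothetical input
                              StandardOpenOption.READ);
                      ByteBuffer buf = ByteBuffer.allocate(64 * 1024);

                      Future<Integer> pending = ch.read(buf, 0); // I/O runs off-thread

                      long sum = 0;
                      for (int i = 0; i < 1_000_000; i++) sum += i; // CPU stays busy

                      System.out.println("Did work (sum=" + sum + ") while reading "
                              + pending.get() + " bytes");
                      ch.close();
                  }
              }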
      • Re:mainframes rock (Score:5, Insightful)

        by morgan_greywolf ( 835522 ) on Monday May 08, 2006 @08:13PM (#15289366) Homepage Journal
        Does anyone know of any (non VENDOR) studies & comparisons vs traditional computer architectures?

        Mainframes are traditional computer architecture! Unix is 'new' compared to mainframe technology.

        The modern mainframe is, in general, vastly more reliable than even the best of the best of 'big servers.' Mainframes are generally redundant to the point that you can change out their CPUs, memory, drives, etc. without turning the power off or rebooting the machine. Linux and Unix servers might boast about a couple of years of uptime, but many mainframe systems have been up for decades.

        Many mainframe systems can process orders of magnitude more transactions than your typical *nix system running Oracle -- even when compared to systems with SMP, gigabytes of memory, and the latest in high-speed storage. In fact, the stuff people use nowadays for high-speed, high-reliability storage -- storage area networks (SANs) -- has its roots in mainframe technology. EMC, one of the market leaders in SANs, was formerly part of Data General. So do most of the rest of your high-availability 'enterprise-class' technologies -- SMP, NUMA, clustering, etc. Where do you think Linux's current SMP technologies came from? IBM, which developed them on mainframes, ported them to AIX, and eventually ported them to Linux.

        Massively clustered systems like Google's are quickly becoming the norm for high-end stuff. But there are certain things that will probably always run on Big Iron. Whenever tasks are mission-critical and need to run 24x7, and 'three 9s' doesn't even touch the tip of the iceberg of the reliability you need -- you'll see mainframes running those tasks more often than not.

        • Re:mainframes rock (Score:4, Insightful)

          by molarmass192 ( 608071 ) on Monday May 08, 2006 @08:38PM (#15289470) Homepage Journal
          Mainframes excel in throughput, if you have a sh!t load of data that needs to go through in a contiguous run, mainframes are the answer. Think IRS refunds, telco billing, utilities billing, etc. Lots of the same stuff in massive amounts. That said, mission critical, 24x7, and 3-9s are no longer the sole domain of mainframes. In fact, we cluster Solaris and Linux boxen in mission critical, 24x7, 5-9 configs (that's 5 minutes downtime a year ... think network hiccup) at virtually all our deployments. Clustering took this advantage from mainframes. On that note, we don't have the same insane throughput needs that mainframes are built to address. My $0.02, take it or leave it.
          • Lots of the same stuff in massive amounts.

            Like graphics. Isn't this exactly what high-end graphics cards are high-end for? Just my ha'penny, I couldn't afford tuppence.
            • Re:mainframes rock (Score:3, Informative)

              by somersault ( 912633 )
              o_0 They're not talking about anything to do with graphics. Mainframes could run graphics calculations, but they're talking about data processing -- I don't know how many MBs or GBs a second of throughput, but way more than any piddly little graphics card is processing :p
        • Re:mainframes rock (Score:4, Informative)

          by AKAImBatman ( 238306 ) * <akaimbatman AT gmail DOT com> on Monday May 08, 2006 @08:38PM (#15289472) Homepage Journal
          Mainframes are generally redundant to the point that you can change out thefr CPUs, memory, drives, etc. without turning the power off or rebooting the machine.

          Same with the big Unix servers. Unix was considered "ready" for Big Iron usage once machines started shipping with crossbars (for hotplugging CPU boards) and redundant everything. If you open a Sun E10000, you'll find that it looks a lot like a mainframe on the inside.

          The modern mainframe is, in general, vastly more reliable than even the best of the best of 'big servers.'

          Right up until Unisys invented "Clearpath" technology. Blech. Leave it to Unisys to take great tech halfway to nowhere.
          • Re:mainframes rock (Score:4, Interesting)

            by morgan_greywolf ( 835522 ) on Monday May 08, 2006 @09:24PM (#15289662) Homepage Journal
            If you open a Sun E10000, you'll find that it looks a lot like a mainframe on the inside.

            Yeah, I've seen 'em. Sun E10Ks are practically mainframes. And they cost about as much, too.

            Of course, when it comes to raw transactional throughput, your average E10K running Solaris and Oracle just doesn't hold a candle to, say, an IBM z9 Enterprise Class running z/OS and DB2.

            • Of course, when it comes to raw transactional throughput, your average E10K running Solaris and Oracle just doesn't hold a candle to, say, an IBM z9 Enterprise Class running z/OS and DB2.

              Just FYI, the Sun E10K is an old system--it was released in 1997. I wouldn't expect it to hold a candle to a recent IBM enterprise machine (I assume an IBM z9 is pretty recent).

              It's pretty interesting how Sun ended up delivering the E10K to market. Especially considering how SGI fared in the end. The San Diego based t

        • EMC, one of the market leaders in SANs was formerly part of Data General.

          Data General wasn't a mainframe vendor. They produced minicomputers, not mainframes.
    • All seriousness aside, I started out coding for mainframes, mostly assembly. To this day some of the most screaming and cool programs I ever wrote were on mainframes (wrote (in assembly) an on-line trouble logging system to replace a paper system back in '76). I did lots of COBOL programming and maintenance...

      My crystal ball says mainframes may come back, but not like that - not with assembler and COBOL on punchcards. My guess is you'll just log into a linux box (virtual box, that is) with all your famil

      • ...I'm seeing that even Unisys is providing a native JVM for its Unisys Clearpath IX/Dorado line (2200-series), which means if one is willing to spend the money on licensing fees then one can run Java code natively on their big iron.

        I'd love to be able to do that. Sadly, I suspect the licensing costs are far too expensive for my employer to seriously consider.
      • There was actually nothing much new in the article. Java has been available on the mainframe in some form for at least 6 years, C++ for at least 9, and pure C for at least 15.

        You currently have two third-generation Java VMs: one which runs in Unix System Services (basically the mainframe pretends it's a POSIX-compliant UNIX) and a whole separate one which runs inside CICS ('cause CICS does its own thread and memory management).

        Under the USS JVM you can run any standard Java or J2EE application; you just install the jar.
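
        To make that point concrete: nothing in the trivial sketch below is z/OS-specific, which is exactly why it runs unchanged under the USS JVM:

            // Jar this up, copy it to the host, run it under the USS JVM --
            // the same class file runs on Linux, Windows, or z/OS unchanged.
            public class WhereAmI {
                public static void main(String[] args) {
                    System.out.println("os.name       = " + System.getProperty("os.name"));
                    System.out.println("java.version  = " + System.getProperty("java.version"));
                    System.out.println("file.encoding = " + System.getProperty("file.encoding"));
                }
            }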
  • All I know is this (Score:3, Insightful)

    by Neil Blender ( 555885 ) <neilblender@gmail.com> on Monday May 08, 2006 @07:41PM (#15289235)
    In my field (bioinformatics) data generation is far outpacing desktop computer power. I work with microarrays and in the last 5 years feature sets have increased over 1000 times with the prices moving almost as quickly in the opposite direction. We've been struggling for a while. It will soon take mainframes to process this sort of data.
    • Yikes. The article gives the impression that big iron is a fast box. While they have godlike I/O, the actual CPU speed is quite disappointing. Wrong kit for calculating primes, right tool for a web server. More and more they are pitching this for things that require very little CPU power -- replacing a rack (or two) of x86 hardware doing nothing harder than DNS, mail, etc. Depends on what you are doing, but I know I was disappointed... (unless you are just moving stuff around).
    • Mainframes are very good at reliably performing batch and OLTP workloads, they're hopeless at HPC - inadequate performance (even latest models) with *way* too much admin and maintenance/software cost overhead. Wrong tool for the job.
    • You need a Cray, Hitachi, or SGI (yes, I know, Chapter 11), not an IBM or similar mainframe.

      A supercomputer is not a mainframe.
  • Challenges (Score:4, Funny)

    by From A Far Away Land ( 930780 ) on Monday May 08, 2006 @07:45PM (#15289256) Homepage Journal
    I don't think Mainframes will come back in a big way. I foresee virtual servers becoming much bigger, as RDP and VNC protocols get more handy too.

    Plus, just imagine a Beowulf cluster of virtual servers!
  • by Anonymous Coward on Monday May 08, 2006 @07:51PM (#15289278)
    The market for big computing is increasing. It's just that most tasks can now be done on small machines. One of my buddies was a numerical modeller in the '70s. In 1975 they put on a night shift in the computer center to run his jobs. By 1985 he could run the same model on his desktop in less time.

    It makes sense for IBM to make less expensive mainframes. The jobs will expand to fit the computers available. If you build it they will come, etc. etc.

    I recently met another co-irker who used to program mini-computers. He said his students were calling him the old fart. I pointed out that he could be right up-to-date if he just prefaced all his comments with the word 'embedded'. There are modern chips that have exactly the same architecture and instruction set that the old minis he worked on had.

    There is a market for what IBM is doing and it isn't going away any time soon. It will just be done on cheaper machines.
  • I do not understand how IBM expects to sell mainframes when nobody knows how to use them. If I wanted to get out of the Unix/Linux biz and into mainframes, or even recommend a mainframe for use at my employer, I would have to know something about using one. But I would not have any clue as to where I could go to get that kind of information or training. I have only met two mainframe-knowledgeable people in my whole life (among zillions of un*x people) and they are both old farts. Finding good Linux/perl/whatever people is hard enough. I can't even imagine having to recruit a mainframe person.

    So where are you young mainframe people learning how to use mainframes?
    • I had to recently log into our mainframe terminal, to change my password (everything else I can do using windows front end apps for mainframe). It was the weirdest computing experience I've ever had. I had to move the cursor around the "graphical" interface using the arrow keys and then press the right control button to select an item. Freaky!
      • I had to move the cursor around the "graphical" interface using the arrow keys and then press the right control button to select an item. Freaky!
        That's the weirdest computer experience you've ever had? I had Apple IIe programs that were exactly like that.
      • Wow! What a bass-ackwards company that was. There have been GUI interfaces since before the internet.
        • Some types of applications lend themselves very well to a text-based UI. That's why UNIX sysadmins still use a command line, for example. :-)

          Besides, mainframe screens can get quite sophisticated. A UTS terminal can handle a lot of the field alignment, field protection, and alpha/numeric data enforcement locally without any interaction from the host, leaving the network bandwidth and server resources for more important things (like processing the actual data that you end up transmitting after you've done
    • First, let me say, uh, LINUX ON Z. Now, to the real question -- how can anyone learn to use mainframes? The same way anyone learns to use anything else in the computer world. Hit the library, buy a book (or just hang out in B&N for a few hours), search the net, get some example code...

      That's how people learn, especially in the trendy world of Comp-Sci.

      Geek1: "Have you tried [insert language du jour, such as Ruby on Rails]"
      Geek2: (12 hours later) "Check out my new app; it's in [language du jour]"
    • So where are you young mainframe people learning how to use mainframes?

      From the "old farts", as you put it.

      We've got a large mainframe contingent where I work - there are lots of critical legacy apps that nobody wants to pay to build replacements to - and I doubt any of them are under 40. But man, they all know their stuff. You could easily bring in one of them to train people if you needed to.

      I do not belong to the set of "mainframe people" per se; however, I do sometimes have to use the mainframe on my
    • Find them in India. There are thousands of mainframe programmers there. When I joined my first job, I was initially assigned to a mainframe project, which I eventually left. My employer used to have a few thousand mainframe programmers; we used to call that office the "mainframe factory". To tell you the truth, mainframes bring in a large fraction of the revenue of India's leading IT services companies.
    • by jacobsm ( 661831 ) on Monday May 08, 2006 @10:35PM (#15290032)
      The company I work for hired a person right out of college. Spent about $2500 on him by getting him an IBM Education card, which gave him one year of IBM education. This person has grown to fill a very important position in our technical services department. He started working with CICS and is now performing a z/OS operating system upgrade.

      I wish we could have more like him.

      Mark Jacobs
      Time Customer Service
      Tampa, FL
    • IBM has a program called the Academic Initiative, which provides, free of charge, course material, hands-on systems, and software to colleges and universities that want to start teaching their students about mainframes. There was also a Mainframe Programming Contest [ibm.com] that ran just this last year, where over 700 students from the US and Canada logged into a system and performed various tasks to win prizes. The top five performers were flown out to IBM Poughkeepsie, NY for a tour of the facilities and to meet wi
  • by kcbrown ( 7426 ) <slashdot@sysexperts.com> on Monday May 08, 2006 @08:15PM (#15289380)
    Mainframes don't have the fastest CPUs around. Instead, they have the most reliable ones.

    The same is true of their memory subsystems, their disk subsystems, etc., though their backplane performance tends to be second to none. Mainframes are designed for throughput.

    Mainframes are capable of staying operational for decades at a time. If you don't want your computer to ever go down and can afford the price, a mainframe is what you want.

    One other nice benefit: they've had virtualization figured out on mainframes since the 1960s, so allocating resources is a relatively easy thing to do.

    If you're interested in finding out what the older mainframe OSes were like, check out the Hercules IBM mainframe emulator here [conmicro.cx].

    • if you're interested in finding out what the older mainframe OSes were like, check out the Hercules IBM mainframe emulator here. (http://www.conmicro.cx/hercules/)

      It is worth adding that this emulator lets you run 31 (not a typo) and 64-bit zSeries Linux distributions as well. Very cool stuff.
  • by starfishsystems ( 834319 ) on Monday May 08, 2006 @08:24PM (#15289412) Homepage
    There is a strong movement toward cluster computing as a way of sharing the costs and benefits entailed by massive compute resources.

    It turns out to be a lot like mainframe computing in terms of physical infrastructure and administration, and in fact often takes over disused mainframe computing centres, at least in the university space.

    Unlike the mainframe environment, anyone with Unix/Linux experience is already equipped to take full advantage of cluster and grid computing. Either environment provides specialized resources that you have to learn how to access, but to me, the advantage goes to whichever environment provides the most universal expression of those resources and is least likely to lock my efforts into one particular architecture.

    A mainframe is an especially proprietary architecture. Portability has never been its strong point. Conversely, most cluster computations that I've seen have been quite trivially ported from one cluster environment to another. And to some degree it's in every vendor's interest to make it so.

    The exceptions are interesting but, at this point, surprisingly rare. Relatively few researchers are decomposing problems in a way which requires either MPI or shared memory. Perhaps the field is not mature enough for that yet, much less for the sorts of computation envisioned by the Grid community, though that day will eventually arrive.

    What I mean is, the biggest market for massive computation is always going to be driven by ordinary computation which happens to operate at a massive scale. And for that, the plainer, more symmetric, and more standardized the architecture, the better, because development and testing costs are not going to go down in the face of massive computing resources, they're going to go up.

    The perfect mainframe, in other words, is one node in a Beowulf cluster. And that's fine. Just don't go running MQ Series on it, okay?

    • by Animats ( 122034 ) on Monday May 08, 2006 @08:58PM (#15289540) Homepage
      A mainframe is an especially proprietary architecture.

      Actually, no. The IBM 370 architecture is open, as a result of an antitrust decree decades ago. There are plug-compatible peripherals and software-compatible CPUs. There's even a good emulator for PCs. It's actually more open than x86 or PowerPC.

    • by Arker ( 91948 ) on Monday May 08, 2006 @11:49PM (#15290480) Homepage
      Clusters really aren't comparable.

      They compete with supercomputers, not mainframes.

      A lot of people confuse the two, but they're very different sorts of machines designed for very different purposes, with very different characteristics.

      Supercomputers are great for intensive calculations. When you have a relatively small dataset and a very long string of operations to be performed on that dataset, you want a supercomputer.

      A subset of supercomputer tasks are easily parallelised, and on that subset, in particular, a cluster can really rock.

      But the weakness of clusters has always been in throughput - their ability to move large amounts of data around is rather weak.

      Mainframes aren't great at intensive calculation, they don't compete with supercomputers, what they're designed for and great at (besides incredible reliability) is throughput. Those suckers can move enormous quantities of data around very very quickly.

      Want to calculate more digits to pi? Break an encryption key? That's a supercomputer job, and a cluster can probably handle it fairly well.

      Want to search a database that contains every transaction your company has ever had, with any customer or supplier, globally, for the past fifty years? That's a mainframe job. And neither a supercomputer nor a cluster is going to get close to a mainframe at doing it. All those hot little cpus will sit mostly idle while waiting for all the data to trickle in through a relatively narrow set of connections, while on the mainframe, all those (relatively slow) CPUs are being kept busy by a massive array of hard drives on an interface with more bandwidth to memory than most of us can even imagine.

      Apples and oranges.
    • So, a Linux cluster would know what to do with a StorageTek SL8500?

      I suspect you don't know what large means. 300,000 tapes. 2,048 drives. The complexity on mainframes isn't computation, it's data management. Trust me, it's a completely different world, with (solved) problems that simply do not appear in any but the largest enterprises. Those are the 400GB carts, btw.

      Here's a pretty good analogy I just made up: think of how inappropriate FFT multiplication would be for most arithmetic, and how inade

      • So, a Linux cluster would know what to do with a StorageTek SL8500?

        Not to detract from your argument, but actually, it would.

        WestGrid, for example, where I used to work, has a storage cluster with an HFS which I recall supported 30 TB on disk and 200 TB on tape. That was a couple of years ago, and it may have been augmented since then, given the amount of data from CERN that we expected to process. A lot of research infrastructure funding is going in this direction, because what makes these facilities

        • You mean this one [westgrid.ca]?

          Starfish, 300,000 400GB tapes hold 117,187.5 terabytes. WestGrid fits in about 0.2% of that. And the notion of staging terabyte datasets to disk is ... umm.

          Look, the simple fact is you just looked right at 114 petabytes, didn't recognize them, and

          Not to detract from your argument,

          ... but actually, that system you used hasn't got a clue.

  • For the past year or so. The environment has potential. But the CPU speed is horribly slow. I would have really loved a cross compiler that could offload CPU intensive C++ compilation off onto some other box that wasn't so CPU limited.

    It's really interesting the things that take no time at all on the mainframe (grepping the source tree) and the things that take forever (compiling it). It's an odd architecture. There are definitely things you should not use it for, but it would likely make an excellent web server.

  • by toybuilder ( 161045 ) on Monday May 08, 2006 @08:47PM (#15289503)
    Asking most programmers to appreciate mainframes must be like asking most drivers to appreciate 18-wheel big rigs -- you know they exist, and large companies rely on them, but you never really have a *need* to know what it's like to operate one.

    I've always believed that mainframes have their place in the world, even when the world was announcing the era of the personal computers and the death of mainframes. But while I understood them to be highly specialized high-throughput high-reliability machines, I never had a personal experience with a mainframe operating environment. So I never truly understood what a mainframe is...

    I've worked on (relatively) bigger Unix systems (8 processor SPARCservers, 4-rack Sequent NUMA-Q's, and others), but at the end of the day, they seemed no different from a single desktop Unix machine -- just faster and with more memory and storage. I've also used a VAX, briefly, during my freshman year in college. I've always imagined that VMS was closest to what a mainframe environment must be like.

    So, to the folks that understand the mainframe -- what is it about them that makes them more than just faster versions of the desktop machines, or even server systems, that us non-mainframe folks are used to?
    • by swordgeek ( 112599 ) on Monday May 08, 2006 @09:03PM (#15289574) Journal
      To answer your question at least partly, look at something that Sun termed "midframe," the SunFire 6800.

      This beast can be physically partitioned into multiple domains. One OS runs on each domain. CPU/Memory boards and I/O boats can be dynamically moved from one domain to another. You can run Solaris 8 in one domain, Solaris 10 in another, Linux in a third, and um... *BSD in a fourth. Any of them runs independently of the others. If a board dies, you can deallocate it from a domain, swap it out, and add it back in -- all live.

      Now multiply that by a LARGE number, add crazy amounts of fault tolerance, and you're getting into the world of mainframes.
    • by Anonymous Coward on Monday May 08, 2006 @09:31PM (#15289688)
      Your analogy of an 18-wheeler is probably a good one.

      It's not the fastest thing in the world, but would you want to haul a load of water-main pipes with a Porsche?

      background.. I'm a mainframe systems programmer..

      There are two major aspects of a mainframe. One is the physical hardware (and software) and how it is designed; the other is how they are used. The hardware is designed from the ground up to be robust and redundant. Yes, it costs thousands (millions?) of dollars for a mainframe system, but with that you get the assurance that the system *WILL NOT CRASH* when an error happens. Instead the system will perform a self-diagnosis and make an automated phone call back to support. Support will send out an engineer (CE) with the replacement parts, which will be replaced while the system is still running (there are a very few instances where the repair does require downtime).

      A few years ago, one of our CE's informed me that one of our mainframes had called home with a CPU failure. I asked if he would need to schedule some downtime to replace the card(s). He said ".. No.. we would have to lose 5 more before they would get worried.." Now.. from my viewpoint, I did not see any error, I still see the same number of "Processors" as I did before. What happens is that the system has a bunch of spare CPUs that are kept online. Instructions are run in parallel across multiple CPUs and then the results are checked. If there is a failure (as in the results don't all agree) the system will determine which CPU "failed", perform a diagnosis on that CPU and if it's determined that there is a problem will fence the failed CPU off from use. Note that this is all done under the covers from the operating systems. There is nothing that I need to do to enable or disable this.

      Mainframe operating systems behave very differently than the Windows/Unix world. Let's take a simple example: an application allocating memory. Under Windows/Unix, what happens if the memory allocation fails? Answer: the program is handed back control, with the hope that it will test the returned value. On a mainframe, by default, if there is a memory allocation error the application will be "abended". Now, the program *can* request that if there is an error it be allowed to continue, by explicitly stating that it will handle the error. This concept is carried throughout the system API. By default the application will be halted if there is an error. Under Windows/Unix the default is to simply return some error flag and hope that the application will handle it.
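
      Interestingly, Java defaults closer to the mainframe behavior than C does here: an allocation failure terminates the program unless you explicitly opt in to handling it. A small illustrative sketch:

          // Default: an uncaught OutOfMemoryError ends the program, much like an
          // abend. Catching it is the explicit opt-in ("hand me control on error").
          public class AbendDemo {
              public static void main(String[] args) {
                  try {
                      byte[] huge = new byte[Integer.MAX_VALUE]; // will almost surely fail
                      System.out.println("Allocated " + huge.length + " bytes");
                  } catch (OutOfMemoryError e) {
                      // Without this catch, the "job" would simply end here.
                      System.out.println("Allocation failed, continuing anyway: " + e);
                  }
              }
          }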

      The way mainframes are used and maintained is a little different. Things are usually not done on a whim. This really isn't due to anything physically different about a mainframe, but more to the "culture". These are big expensive boxes, so the company that owns (rents) them wants to make sure they are maintained and running efficiently. When changes are made, they are researched and documented, with fallback plans. When even minutes of downtime could mean millions of dollars lost, it's well worth the investment in time to make sure that a change is correct. Going back to the 18-wheeler analogy, I suspect that when it's time to do scheduled maintenance on the tractor there is a lot more testing/verification than you would do on your family car.
      • Thank you for posting this. This was actually very concise and informative, at least for me (I'm a dev with no mainframe experience).

        I do not have any mod points now, but I really hope someone mods this as Informative.
        It is certainly more useful than the 'mainframes are awesome' vs 'cluster are th3 rock' posts that pop up all over the place.

    • First and most important: only 5-6 guys/gals ever see the mainframe up and running, in its dedicated, controlled room guarded by armed guys.

      Funny how everyone seems to prefer OS X and Windows for accessing the mainframe.

      I know a mainframe coder whom I once showed how to defrag a Windows disk :)) They are a different kind of people.
    • 6250 bpi round-reel tape drives. I don't know of any microcomputer system (or any surviving mini) that still uses round reel.

      Mainframes (old ones, and new ones with legacy data requirements) still do (but don't have to) :)

  • MF + Linux (Score:4, Informative)

    by jsailor ( 255868 ) on Monday May 08, 2006 @08:52PM (#15289525)
    Both of these -- mainframes and Linux on them -- are huge. I don't know of a single major financial firm that is shrinking its mainframe footprint. Also, most of the talent is retirement age, so there is a promising future for those entering now.

    Perhaps most interesting to this community is that Linux on the mainframe solves a major problem that all large institutions are dealing with: power. Power density and consumption for Intel/AMD boxes is through the roof and is breaking most data centers; exponential growth is not an exaggeration. Mainframes, however, remain very predictable, with a fairly flat and linear power curve. Porting quantitative trading and analysis applications to the mainframe would solve these problems and literally save hundreds of millions of dollars.

    • Mainframes aren't exactly known for their number crunching abilities. They can, OTOH, kick data around very fast.

      So the whole porting issue depends on whether the application is data throughput bound, or CPU bound.
  • by cmacb ( 547347 ) on Monday May 08, 2006 @09:07PM (#15289589) Homepage Journal
    ALC
    Algol
    Ada ...and any other A-list languages as I think of them.
  • I've been gainfully employed on Mainframes (mainly) for about 25 years now. I wrote yet another ALGOL program this morning. I've done UNIX and some Windows on the way down the road, and am still waiting for the college graduates who know my job backwards to come in and put me out to stud. Hasn't happened yet.

    Mainframes are industrial strength. Full stop.
  • Gibsons (Score:3, Funny)

    by rkulla ( 973592 ) on Monday May 08, 2006 @09:58PM (#15289814) Homepage
    I just hope Gibsons make a comeback. They never recovered after the movie 'Hackers' came out and every kiddie on the block was brute forcing their way in.
    • If Gibsons come back, they need to change the default passwords to something other than "God", "sex", or "password" to keep people from hacking them.
  • by swamp boy ( 151038 ) on Monday May 08, 2006 @10:18PM (#15289922)
    For any organization that may contemplate getting into mainframes -- skip z/OS (MVS). MVS is what most folks dread when they think about mainframes (JCL, pre-allocating datasets, etc.). A modern mainframe (z990 or z9) running z/VM (5.1 or 5.2) and a bunch of Linux guests is *COOL STUFF*. What's really cool is when you need to set up a temporary testing environment -- no problem, just add a half-dozen configuration statements to your "USER DIRECT" and clone an existing guest image to the new machine's disk volumes. Done! Need more memory in that virtual Linux server? No problem, bring up USER DIRECT in XEDIT, edit a single line of text, and issue DIRECTXA. Restart the Linux guest and now it has more memory. Disk space (volumes) can be added while the Linux systems are running (add as many as you need).
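
    For readers who have never seen one, a z/VM directory entry is just a few lines of text. The sketch below is simplified and from memory -- the userid, password, volser, and extents are all invented, and exact statement syntax varies by z/VM release -- but it shows how little there is to edit when resizing a guest (bump that 512M and run DIRECTXA):

        * Hypothetical guest entry: userid, password, storage, max storage, class
        USER LNXTEST SECRETPW 512M 2G G
        * Boot the guest from its 0200 minidisk
        IPL 0200
        * Virtual disk: vdev, device type, start cylinder, count, volser, mode
        MDISK 0200 3390 0001 3338 LXU001 MR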
  • Check out the definition of Mainframe at the NY State Law Library http://www.courts.state.ny.us/ad4/lib/gloss.html#M [state.ny.us]
  • Obligatory (Score:3, Funny)

    by Shadyman ( 939863 ) on Monday May 08, 2006 @10:41PM (#15290077) Homepage
    I, for one, welcome our old COBOL overlords.
  • There's one aspect of mainframes that, as of 2003 at my old company, was still in place: charging by the *cycle*. IBM mainframes have done this for years (I believe the 360 had an actual odometer, and an IBM rep would come in, read the number, and prepare the invoice based on that).

    It may not be every application, but I know that IBM's MQSeries product (now called WebSphere MQ, I believe) had a per-cycle cost on big iron (on top of some huge monthly maintenance fee). I know this because I wrote some middlew
  • by couch_warrior ( 718752 ) on Tuesday May 09, 2006 @04:36PM (#15296463)
    One big difference between mainframes and UNIX or Windulls boxes is the way that resources were allocated.

    IBM allowed fine-grained control of CPU time, I/O bandwidth, RAM, and disk storage. And this control was not a weighted-selection algorithm; it was WYSIWYG deterministic control.

    In mainframe shops, there were well defined workloads, often represented by a batch of transactions needing to be run against a database. These "batch jobs" would run on predictable intervals, daily, weekly, monthly. They could be scheduled to run at fixed times for known durations.

    This made the whole mainframe environment very easy to manage. Instead of having to guesstimate workloads, and install CPU and I/O capacity to match unexpected peak demands ruled by chaos theory, mainframes were safe and predictable. The need for CPU MIPS and RAM was clearly visible and easily monitored and planned.

    So when people say that mainframes were "more reliable", they don't just mean the MTBF numbers of the hardware.

    They mean that when you ran work on a mainframe, you knew exactly what programs were using what resources at what times. And when something screwed up, you could very simply back up to the previous version of the affected files and re-run the batch job.

    Life with mainframes was safe, logical, and predictable.

    Introducing some of that into UNIX or Linux would not be a bad thing. Not every problem has to run in real time with dynamic adjustment of resources. Deterministic, static allocations of memory, CPU, and I/O can work very well for predictable workloads.

    Linux needs a good Batch Spooling manager system.
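
    Until such a thing exists, you can approximate a fixed batch window with stock Java. A minimal sketch -- the job body is a stub and the 02:00 window is an arbitrary choice:

        import java.time.Duration;
        import java.time.LocalDateTime;
        import java.time.LocalTime;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        // Sketch of deterministic scheduling: run a batch job every night at
        // 02:00, the way a mainframe shop schedules a batch window.
        public class NightlyBatch {
            public static void main(String[] args) {
                ScheduledExecutorService sched =
                        Executors.newSingleThreadScheduledExecutor();

                LocalDateTime now = LocalDateTime.now();
                LocalDateTime next = now.toLocalDate().atTime(LocalTime.of(2, 0));
                if (!next.isAfter(now)) next = next.plusDays(1); // past 02:00 today

                long initialDelay = Duration.between(now, next).getSeconds();
                long period = Duration.ofDays(1).getSeconds();

                sched.scheduleAtFixedRate(
                        () -> System.out.println("Nightly batch ran at " + LocalDateTime.now()),
                        initialDelay, period, TimeUnit.SECONDS);
            }
        }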
