Microsoft Makes Push for COBOL Migration

geoff313 writes: "It would appear that Microsoft is making a real push for the migration of existing COBOL applications to Windows and their .Net platform. Micro Focus, a company that makes COBOL migration products and last year became a member of Microsoft's Visual Studio Industry Partner (VSIP) program, announced their Net Express with .Net product, a plug-in for Microsoft Visual Studio .Net 2003 that allows COBOL code to be integrated and managed alongside other code in Visual Studio. In an interview with eWeek, the company declares that 'Micro Focus and Microsoft are bringing the mainframe to Windows and .Net.' This makes me wonder: are there any Open Source projects working to provide for this eventual migration? Gartner estimates that over 75% of business data is processed by approximately 200 billion lines of COBOL, so this seems like a huge potential market to lose to Microsoft."
This discussion has been archived. No new comments can be posted.

  • COBOL migration (Score:3, Informative)

    by bersl2 ( 689221 ) on Sunday November 09, 2003 @07:23PM (#7430613) Journal
    This makes me wonder, are there any Open Source projects working to provide for this eventual migration?

    Just from browsing freshmeat: OpenCOBOL [freshmeat.net]
  • by Anonymous Coward on Sunday November 09, 2003 @07:36PM (#7430675)
    Why go with some brand new technology when there is already something solid that works?

    There are a few CORBA-compliant ORBs out there supporting COBOL language bindings.

    Today, without waiting for some new M$ product, you can develop a CORBA layer to sit on top of the COBOL code, and then interface to the existing code from whatever other environment you wish.

    Sooo, to expand the system, you can write your new code in whatever language on whatever OS you choose and still leverage the old system. You can also start to re-implement servants done in COBOL with whatever, and the other servants should not be affected too much.

    Seems to me that .NET may someday offer some of the cross-language benefits of CORBA, but will not be able to offer the cross-platform benefits. Oh, and it won't be free and you won't be able to change it.

    (yeah, yeah I know about mono, but I doubt M$ will do much to support it and will probably try to kill it somehow).
  • by lurker412 ( 706164 ) on Sunday November 09, 2003 @08:02PM (#7430789)
    Many of the old COBOL applications rely on other components of the IBM mainframe environment. CICS, IMS, MVS etc. are often required. It is not just a matter of compiler quirks. The entire environment would need to be emulated. Not trivial.
  • by Blair16 ( 683764 ) on Sunday November 09, 2003 @08:09PM (#7430814)
    COBOL was originally designed by the military to be a language that even managers could read and understand, so they would know exactly what was happening. The original COBOL specification didn't even have comments!! Statements looked like this:

    add this to that giving something-new.
    multiply x by y giving z rounded.
    perform until some-condition
    ...
    end-perform.
    stop run.

    Statements are called sentences, and groups of statements are called paragraphs. You get the picture. The reason it became so popular is that it was designed by the military, and they coded everything in it. Then, when those coders went to work in the corporate world, they took COBOL with them.
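
    For illustration only -- the program and data names below are invented, and it's written in modern free-format style with COBOL-2002 '*>' inline comments rather than the original fixed 80-column layout -- a minimal program showing that structure (divisions, a named paragraph, sentences) might look roughly like this:

    identification division.
    program-id. payroll-demo.
    data division.
    working-storage section.
    01 hours-worked  pic 9(3)    value 40.
    01 hourly-rate   pic 9(3)v99 value 12.50.
    01 gross-pay     pic 9(7)v99 value 0.
    procedure division.
    main-paragraph.
        *> each period-terminated statement below is a "sentence"
        multiply hours-worked by hourly-rate giving gross-pay rounded.
        display "gross pay: " gross-pay.
        stop run.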
  • by samael ( 12612 ) <Andrew@Ducker.org.uk> on Sunday November 09, 2003 @08:17PM (#7430859) Homepage
    Nope, still runs the vast majority of the banks and financial institutions the world over.

    The line you give isn't valid COBOL, but

    SUBTRACT EXPENSES FROM REVENUE GIVING PROFIT

    is.

    But then

    COMPUTE PROFIT = REVENUE - EXPENSES

    would do the same. (You can use COMPUTE for most arithmetic expressions.)
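
    A hedged sketch of the same idea (assuming REVENUE, EXPENSES, PROFIT, NET-PROFIT and TAX-RATE are all defined as signed numeric items elsewhere -- the names here are purely illustrative):

        *> verb-per-operation style
        SUBTRACT EXPENSES FROM REVENUE GIVING PROFIT ROUNDED.
        *> COMPUTE takes a whole arithmetic expression
        COMPUTE PROFIT ROUNDED = REVENUE - EXPENSES.
        COMPUTE NET-PROFIT ROUNDED = (REVENUE - EXPENSES) * (1 - TAX-RATE).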
  • by Multics ( 45254 ) on Sunday November 09, 2003 @08:19PM (#7430868) Journal
    Not to mention the COmmon Business Oriented Language predates all the aforementioned languages by at least 20 years.

    COBOL histories can be found here [legacyj.com] and here [oevelen.com]. For quite a while, the one business-oriented language available on all sorts of mainframes was COBOL; FORTRAN was what you used for engineering and science. All other languages were in the noise, were research projects, or were supported only by a single vendor.

    Selecting COBOL made very good sense then, and probably still makes sense now for some classes of applications. MOVE CORRESPONDING still does a lot of work in a single statement, and modern editors make working with the verbiage easy compared to the venerable 80-column card.
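
    As a hedged illustration (the record and field names here are invented), MOVE CORRESPONDING copies every elementary item whose name appears in both groups, in one statement:

    01 CUSTOMER-IN.
       05 CUST-ID     PIC 9(6).
       05 CUST-NAME   PIC X(30).
       05 BALANCE     PIC S9(7)V99.
    01 CUSTOMER-OUT.
       05 CUST-NAME   PIC X(30).
       05 BALANCE     PIC S9(7)V99.
       05 LAST-UPDATE PIC X(8).
    ...
        *> copies CUST-NAME and BALANCE (the matching names); CUST-ID and
        *> LAST-UPDATE are untouched -- one statement instead of one MOVE per field
        MOVE CORRESPONDING CUSTOMER-IN TO CUSTOMER-OUT.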

    -- Multics

  • Important similarity (Score:1, Informative)

    by Anonymous Coward on Sunday November 09, 2003 @08:24PM (#7430900)
    Both COBOL and Visual Basic were a response to a language pushed by the dominant programming-technology firm of their time.

    In the 1950s, IBM pushed FORTRAN as a replacement for assembly, arguing (successfully) that it allowed a large increase in programmer productivity without much loss of system performance. FORTRAN, however, was too "computer oriented", and many programmers with a strong business background found it difficult to express business ideas in FORTRAN. So an alternate language called COBOL was created, which allowed a better expression of business concepts at the cost of some performance, and which abstracted away the details of how the machine was operating.

    In the early 1990s, Microsoft pushed visual development in C++ (Visual C++) as a replacement for standard C, arguing (successfully) that it allowed a large increase in programmer productivity without much loss of system performance. Visual C++, however, was too "computer oriented", and many programmers with a strong business background found it difficult to express business ideas in C++. So an alternate language called Visual Basic was created, which allowed a better expression of business concepts at the cost of some performance, and which abstracted away the details of how the machine was operating.

    So it's important to look at these languages as a reaction to the dominant languages of their day, and to understand what they were reacting to.

    BTW, COBOL is still going, and growing, VERY strong. COBOL-2002 is a new standard for the language, and code is still being written in it for many, many legacy applications.
  • by salesgeek ( 263995 ) on Sunday November 09, 2003 @08:25PM (#7430907) Homepage
    There are two issues here: first, whether Microsoft belongs in mission-critical roles; second, whether moving your software makes sense at all.

    If I were you, Mr. CIO, I'd avoid this little stunt. Mainframes, and the software that runs on them, exist in mainframe form for three reasons:

    * Speed - Process more data in less time
    * Accuracy - With fewer mistakes
    * Reliability - and a minimum of downtime

    In none of these areas does Microsoft have a credible track record. You simply have to look elsewhere. Anyone who goes with MS on this is putting their career at risk.

    That said - would migrating off of Cobol to a more modern development environment make sense? That's a situational question, and one that has to be answered on a case-by-case basis. In some cases, legacy software is a competitive advantage. In others, it's a business obstacle. In most cases, there's no compelling reason either way.

  • by Anne Thwacks ( 531696 ) on Sunday November 09, 2003 @08:29PM (#7430920)
    40 years ago Cobol was the only horse in town. Cobol dates from the 1950s, C from the 1970s.

    C is most definitely NOT any better than Cobol for what Cobol does. There is nothing actually wrong with Cobol for the applications in which it is used.

    Cobol is actually capable of structured use. The problem is that SOME programs written in Cobol were written so long ago that we didn't know then what we know now. Cobol is not the problem - the problem, such as it is, is that the code is very old. As for the lack of Cobol programmers, I am damn sure that anyone who can learn Java can learn Cobol in half the time it took them to learn Java, if offered a suitable salary.

    As for "The mainframe it runs on is getting old" IBM

  • by iggymanz ( 596061 ) on Sunday November 09, 2003 @09:02PM (#7431103)
    just fine. Micro Focus COBOL is available on Linux and your Unix of choice (I've actually done healthcare adjudication app porting & integration on Linux, AIX, and HP-UX). It has a C API, so you can go back and forth between it and any language that supports that (I did Java via JNI). Fujitsu also makes a kick-butt enterprise-grade COBOL compiler for Linux and your Unix of choice, and I'm sure there are plenty of others out there.
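
    As a rough, hypothetical sketch (the program and field names are invented, and the exact C-side call mechanism is vendor-specific), the COBOL end of that kind of integration is just a subprogram with a LINKAGE SECTION; the C layer -- and Java on top of it via JNI -- passes pointers to these items when it calls the compiled module:

    IDENTIFICATION DIVISION.
    PROGRAM-ID. ADJUDICATE.
    DATA DIVISION.
    LINKAGE SECTION.
    01 LK-CLAIM-AMOUNT  PIC S9(7)V99 COMP-3.
    01 LK-APPROVED-FLAG PIC X.
    PROCEDURE DIVISION USING LK-CLAIM-AMOUNT LK-APPROVED-FLAG.
    MAIN-LOGIC.
        *> trivial stand-in for real adjudication rules
        IF LK-CLAIM-AMOUNT > 10000
            MOVE "N" TO LK-APPROVED-FLAG
        ELSE
            MOVE "Y" TO LK-APPROVED-FLAG
        END-IF.
        GOBACK.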

    That being said, for thousands of users, the mainframe is still the cheapest on a price-per-user basis. For tens to hundreds of users, Unix or Linux on a server-grade machine is dandy.
  • Re:Bah humbug... (Score:4, Informative)

    by karit ( 681682 ) on Sunday November 09, 2003 @09:21PM (#7431194) Homepage Journal
    You are missing his point. He is referring to the internal workings, not redundancy. Mainframes double-check their own work, so errors in the datapath (i.e. in the processor) are picked up and corrected. 80x86 (AFAIK) does not have this kind of checking built into the chip.
  • by tzanger ( 1575 ) on Sunday November 09, 2003 @09:22PM (#7431200) Homepage

    that's for the same reason one of my clients' elevator system is powered by a 100 year old solid state system. he walked me upstairs to show it off. lots of zapping and clicking noises.

    Sorry dude, that ain't solid state. I work for a solid state power electronics manufacturer. You're describing old contactor-based motion control. Solid-state is all done with SCRs or IGBTs (or depending on age, GTOs even) -- no zapping or clicking unless something is hellishly wrong.

  • by alangmead ( 109702 ) * on Sunday November 09, 2003 @09:36PM (#7431263)

    Sun has two designations for ceasing hardware support. There is "End-of-life", where they cease producing the product, and there is "end-of-service-life", where they cease providing replacement parts.

    I'm assuming they do some sort of calculation on the likelihood of failure, the cost of storage, and the income from support contracts, and figure out how many extra units to store for parts (and what to charge for support contract renewals).

    Of course, they supply parts for long after you would think they would. The Sun 220R [sun.com] had an EOL date of 05/2002, so you can order replacement parts until 2007.

    I guess there are occasional exceptions. If you look at the Sun-4c page [sun.com] you'll notice that the EOSL dates for models like the SPARCstation 1 and 2 were extended. I'm guessing that enough people kept signing support contracts for them to make it worth Sun's while to keep the hardware around.

    This EOSL [sun.com] document has an interesting list of Sun's products and their EOSL date.

    All this, of course, is just buying parts from Sun. There is a big third-party market for older mainframe and minicomputer components. I know of a large Boston-area company whose publishing operation is finally being moved off the PDP-11 system it has been using for the last 20 years. From what I understand, as companies decommission systems built on old DEC hardware, people buy it up for reconditioning and resale.

  • by Lawrence_Bird ( 67278 ) on Monday November 10, 2003 @01:17AM (#7432229) Homepage
    by 5B lines per year, which is quite impressive.


    You could have known this 5 days ago though... # 2003-11-04 15:43:28 Microsloth courts COBOL (articles,microsoft) (rejected)
  • many factors (Score:2, Informative)

    by Anonymous Coward on Monday November 10, 2003 @01:54AM (#7432317)
    I/O-throughput-wise, SGI (Silicon Graphics) is king, but mainframes have a lot of interesting capabilities for 'application throughput':

    OS: Lack of generality and portability in the OS lets you cut down overhead in servicing the hardware.

    FS: Filesystems that are less general but tailored towards particular needs

    TP: Transaction subsystems tend to be less general than Oracle or other relational DBs, but the simplicity allows speed, and they can also be integrated with the OS, so there are fewer layers.

    Hardware: Multiple busses, multiple controllers, specialized subsystems (IO processors, crossbar backplanes); this is available on bigtime UNIX systems as well.

  • Re:.NET and C# (Score:3, Informative)

    by big-giant-head ( 148077 ) on Monday November 10, 2003 @02:08AM (#7432347)
    "C# applications generally outperform their Java counterparts by a large margin."

    On what?? I find my Java programs perform quite well on a 25-CPU *nix box....

    That's the problem with .NOT: it only works on M$ and Intel CPUs.

    Right now we are replacing a COBOL app with a cluster of big boxes running Java and WebSphere. The nice thing is that if we are unhappy with IBM, we could easily move to Sun SPARCs and WebLogic with very few changes to our app. Can't be done with .NOT; your choice is Windoze or Windoze, running on Intel or AMD.

    A lot of big businesses are very pissed at MS right now over their licensing and how virus-prone their OS is.

    Add to this the fact that most COBOL code is written to a very specific hardware implementation. Most companies will eventually choose to rewrite their COBOL apps, and given the previous reasons, many of them won't even look at MS.
  • Re:Bah humbug... (Score:5, Informative)

    by larien ( 5608 ) * on Monday November 10, 2003 @06:55AM (#7432975) Homepage Journal
    *sigh* Clusters come nowhere near the level of fault tolerance you get in big iron. In a real fault-tolerant system, there are multiple paths for all transactions; in essence, you're running the same code on two (or more) CPUs. If one fails, you have zero downtime, other than a quick reroute in hardware to compensate (you probably wouldn't even notice it). To the best of my knowledge, there is no clustering solution which comes close to this, whether based on Linux, Unix or Windows.

    Yes, clusters can do a job which is "good enough" to replace expensive mainframes, but there are some cases where they aren't good enough, especially in banking, where you have to be 100% confident that every transaction is logged correctly.

  • by Dun Malg ( 230075 ) on Monday November 10, 2003 @10:34AM (#7433767) Homepage
    COBOL on big iron will never die. that's for the same reason one of my clients' elevator system is powered by a 100 year old solid state system. he walked me upstairs to show it off. lots of zapping and clicking noises. the thing runs 15 stories worth of 2 elevators and has for 40 years in that building (yes, bought used).that reason is reliability. if it ain't broke, don't fix it.

    First off, "solid state" means "no moving/mechanical switching", which a room full of relays does not fit the definition of. Second, I seriously doubt that system is 100 years old -- more likely it's closer to 60, as 100 years ago most elevators were operated by a man in a silly uniform pulling a lever in the elevator car. Third, relay-and-solenoid elevator controls aren't more reliable than modern electronic systems. The reason people don't replace them is that the cost is usually prohibitive, and the failure rate on older systems isn't usually bad enough to warrant replacement. It generally comes down to a question of "do you want to call the elevator tech every 2 weeks at $250 a visit to fix bad relays, or plop down $80,000 for installation of a new UNIX-based electronic controller?"

  • Re:Bah humbug... (Score:4, Informative)

    by stripes ( 3681 ) on Monday November 10, 2003 @01:07PM (#7434865) Homepage Journal
    I've written programs that do trillions of operations and then run some numbers against them. In case of some weird hardware failure or human error the code is run on three machines at the same time (the program takes a few days to run). What you are saying is that along the way the x86 hardware will incorrectly compute data and return bunk results? As a programmer (though not a low-level type), I'd expect that this would have come to my attention by now. Can you provide some examples or documentation? Of course there was the now famous Pentium flaw, but otherwise, what is defect rate on x86 operations?

    Think about hardware at the gate level: your adders are a bunch of flip-flops, a whole lot of them, and very, very tiny ones holding very little charge. A cosmic ray striking one can change its value (I think only from one to zero, or zero to one, depending on the process and whether charged or discharged represents a zero or a one). If the cosmic ray hits one of the flip-flops that is being used to produce a value about to go to a live register and then off to memory, it can change the result from what your code should calculate to something with a one-bit error. Maybe a low bit and the result is off by pennies; maybe a high bit and the result is so amazingly off that an assert later in the code catches it, or a human eyeballing the results would. Worse yet, something in the mid range that nobody notices until far too late (if ever).

    This is the same thing that ECC in memory helps prevent (ECC can't protect against multiple bit failures, but while a box with multiple gigs of memory will likely get a soft memory error in your lifetime, multiple bit errors before the scrubber finds them is really really really amazingly unlikely).

    It is a force that gets worse as we go to lower-voltage systems (less of a hit is needed to flip values), and as we get greater densities (more likelihood of the cosmic ray hitting something other than dead space -- at least I think so on this one).

    It can be combated by having ECC on register and cache values (or even parity, as long as the bits are "big enough" that a single cosmic ray hit won't flip multiple bits!), by having similar checks on ALU operations, or by sending the values through identical ALUs that are on different parts of the chip and comparing them afterwards (rerun any cycle where the results differ, either entirely in hardware or by trapping to the OS to let it restart the instruction after logging the hit). Not rocket science, but it makes for a more costly CPU, and a slightly slower one (in fact much slower when you factor in the extra transistors used for correctness checks that a "normal" CPU would spend on cache or some other performance-boosting geegaw).

    As for documentation, I don't recall where I saw studies done -- probably in one of the ACM or IEEE proceedings -- but if you don't have access to them, comp.risks might turn something up (and a lot of other stuff that might be more worth worrying about :-)

"A car is just a big purse on wheels." -- Johanna Reynolds

Working...