Mono C# Compiler Compiles Itself

Bob Smith writes: "Miguel just committed the last patch necessary to get Mono's C# compiler to compile itself. After 7-odd months, MCS is now self-hosting." jbarn adds: "Mono-list email is here."
  • Request (Score:4, Informative)

    by Anonymous Coward on Friday January 04, 2002 @06:49AM (#2784399)
    I ask that people submit more articles about accomplishments in the world of compilers. Does anyone recall the functional programming contest? There were many excellent compilers featured that are being improved constantly, and /. should focus some of its attention on those, too. C# is getting too much attention just because it's a Microsoft thing... :)
    • Re:Request (Score:3, Insightful)

      by Hard_Code ( 49548 )
      No, C# is getting attention because it is part of .NET (or at least a language binding for the common language runtime, the basis of .NET), and lots of people give a damn about .NET and think it has significance (either bad or good). Not many people care about new compilers for functional languages that will never get above 0.01% acceptance. Actually, I would rather Slashdot *not* post frivolous stories about every compiler on earth.
      • Re:Request (Score:3, Interesting)

        by mikera ( 98932 )
        You may well be wrong about the functional languages.

        It won't be long before the overheads of FP are pretty negligible compared to the increased programmer productivity and software robustness. This advantage will gradually overcome inertia in the marketplace and start to make a real impact.

        Also, with the increasing use of Virtual Machines as an execution platform, it's going to get a lot easier to use functional languages and integrate them into larger projects. My bet is that something like Haskell will suddenly start to make headlines in the next couple of years on the .NET platform.
  • by famazza ( 398147 ) <fabio.mazzarino@ ... com minus distro> on Friday January 04, 2002 @06:55AM (#2784414) Homepage Journal

    I think it'll bring more developers to Mono, and also more accurate code. Many developers couldn't produce quality code due to the lack of a Linux compiler; to develop the Mono framework, developers had to go to Windows and compile with the .NET compiler.

    It's a big step forward. Congratulations!

  • Mono Roadmap (Score:5, Informative)

    by Shanes ( 141586 ) on Friday January 04, 2002 @06:56AM (#2784415)
  • Samba anyone ? (Score:5, Insightful)

    by MosesJones ( 55544 ) on Friday January 04, 2002 @07:05AM (#2784429) Homepage

    Isn't it wonderful having C# and .NET on Linux? After all, they won't have the problems that the Samba boys have trying to keep up with MS deliberately changing things to stop them, and it won't be miles worse.

    Mono is a nice idea, but unfortunately .NET isn't a revolution, it's a way to build poor quality mainframes, lots of boxes, poor I/O. In terms of distribution there is none. Full credit to the guys for doing this, but it does remind me of a Larson cartoon, you know the one with the Sheep Bar and the line

    "What do you know, I'm a follower too"
    • Re:Samba anyone ? (Score:4, Insightful)

      by Craig Maloney ( 1104 ) on Friday January 04, 2002 @07:13AM (#2784440) Homepage
      Having the compiler is nice, but unfortunately it doesn't preclude the perennial "catch-up" game that Microsoft is so fond of playing. The compiler is just one part of the puzzle... there are libraries and other assorted goodies that Linux will need, and Microsoft can (and probably will) use some tomfoolery to make projects that ride on its coat-tails fall off.
    • Re:Samba anyone ? (Score:3, Interesting)

      by MagPulse ( 316 )
      Horrible analogy. Being able to port C# programs written in VS.NET is only a part of the reason for Mono. And MS won't stray far from the ECMA standard, which is what the first release of VS.NET follows. It's taken until now to break free of some legacy "features" like incorrect for-scoping introduced way back in VC++ 4, just for backward compatibility.

      Right now, all Linux has for standard middleware is CORBA. Sure there are Linux die-hards that will stand by CORBA, but most will agree that COM is superior. Now with Mono, Linux will finally have a decent object technology with which to build large OO projects that will work together.
      • Re:Samba anyone ? (Score:2, Informative)

        by griffits ( 309164 )
        >>Right now, all Linux has for standard
        >>middleware is CORBA. Sure there are Linux
        >>die-hards that will stand by CORBA, but most
        >>will agree that COM is superior.

        Don't you mean COM+? CORBA encapsulates a wire protocol, transaction capabilities, security, etc. COM is really just "IUnknown", and anyway, if it is so superior, why is it being replaced by .NET?

      • COM is superior to CORBA? Sorry to be rude, but on what planet is that true?

        COM runs on one platform on one protocol.

        With CORBA, federated networks of loosely coupled elements are possible. Transactional mapping is better than in COM (in the new versions of CORBA).

        CORBA is more established and has a better history.

        If you meant COM+ rather than COM then there is a slight merit to actually having a debate, but COM is a poor and simple man's version of CORBA; even MS realised this, hence the massive overhaul for COM+.

        CORBA IMO still remains the best of the bunch out there, but it doesn't have the marketing might of EJB or .NET/COM+. Smart ideas don't often win in the world of marketing technology.
      • Re:Samba anyone ? (Score:4, Insightful)

        by Ayende Rahien ( 309542 ) on Friday January 04, 2002 @08:16AM (#2784589)
        One of the biggest advantages of COM over CORBA is a *big* speed difference when running in-proc, while maintaining the same speed as CORBA when running out-of-process and out-of-machine.
        • There's also the tools. Infoworld (I think), in a comparison of CORBA and COM (a couple years ago...things may be different now) concluded that CORBA was a little bit better technically, and COM was a lot easier to use, because Microsoft provided better tools.

          The thing about Microsoft is that they often take two or three iterations to get it right, but when they get there, it's right for the programmer. Too many things in the Unix world are right in a theoretical sense, but a complete pain in the ass when you fire up your editor, start the caffeine IV, and try to crank out code.

        • COM and CORBA aren't comparable. COM is an attempt to give C++ a dynamic object system. You have to compare COM to something like Objective-C(++), and in that comparison, COM fails miserably: it's much less flexible and much more complex. DCOM is comparable to CORBA, and it's not as good.
    • by Carnage4Life ( 106069 ) on Friday January 04, 2002 @07:58AM (#2784526) Homepage Journal
      Isn't it wonderful having C# and .NET on Linux? After all, they won't have the problems that the Samba boys have trying to keep up with MS deliberately changing things to stop them, and it won't be miles worse.

      Mono is not a clone of .NET on Linux but an implementation of the ECMA standard on Linux. You do know that C# and the CLI are ECMA standards, right [slashdot.org]?

      Secondly, as Miguel said in my interview with him, which originally ran on Slashdot [slashdot.org] and then on MSDN [slashdot.org]:

      Dare Obasanjo: Similarly what happens if Dan Kusnetzky's prediction [graduatingengineer.com] comes true and Microsoft changes the .NET APIs in the future? Will the Mono project play catchup or will it become an incompatible implementation of .NET on UNIX platforms?

      Miguel de Icaza: Microsoft is remarkably good at keeping their APIs backwards compatible (and this is one of the reasons I think they have had so much success as a platform vendor). So I think that this would not be a problem.

      Now, even if this was a problem, it is always possible to have multiple implementations of the same APIs and use the correct one by choosing at runtime the proper "assembly". Assemblies are a new way of dealing with software bundles, and the files that are part of an assembly can be cryptographically checksummed and their APIs programmatically tested for compatibility. [Dare: Description of Assemblies from the MSDN glossary [microsoft.com]]

      So even if they deviate from the initial release, it would be possible to provide assemblies that are backwards compatible (we can both do that: Microsoft and ourselves).
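
      (To illustrate the runtime choice being described: a .NET application configuration file can redirect a reference from one assembly version to a compatible one. The assembly name, public key token, and version numbers below are invented for the example.)

      <configuration>
        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <dependentAssembly>
              <!-- hypothetical assembly identity -->
              <assemblyIdentity name="SomeLibrary" publicKeyToken="0123456789abcdef" />
              <!-- requests for 1.0.0.0 are satisfied by the compatible 1.1.0.0 -->
              <bindingRedirect oldVersion="1.0.0.0" newVersion="1.1.0.0" />
            </dependentAssembly>
          </assemblyBinding>
        </runtime>
      </configuration>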

      .NET isn't a revolution, it's a way to build poor quality mainframes, lots of boxes, poor I/O.

      No, .NET isn't a revolution; it's simply a slight improvement on an old idea of how things can be done. The same way Linux, Apache, BSD, Perl, Java, XML, P2P, the world wide web, etc. aren't revolutions but simply new spins on decades-old concepts.

      However, does this somehow preclude their usefulness or the fact that they are all innovative in their own way?
      • I am well aware that MS have submitted C# to ECMA, but then they also sat on the SOAP 1.1 expert group and their implementation didn't meet the standard.

        Implementing ECMA on its own is pointless; .NET is the only implementation that matters. In the same way that JBoss and Enhydra have to stick to the J2EE spec, because implementing just the language spec is pointless, Mono will have to implement .NET or just become a sideline.

        MS don't have a great history of backwards compatibility; they have a great history of patches that upgrade their old stuff to match the new stuff. DR-DOS, Samba, et al. all demonstrate the changing nature of those supposedly backwards compatible APIs.

        Not being a revolution isn't a problem, but it would be nice for once if we could actually move beyond a problem set that was effectively solved around 8 years ago rather than just spinning the same one over and over again.
        • by rabtech ( 223758 ) on Friday January 04, 2002 @10:03AM (#2785080) Homepage
          Is that why some 15-year-old HP-UX boxen running some ancient version of Samba can still talk to my Windows 2000 server?

          I find it hilarious that you would accuse Microsoft of changing the API; SMB hasn't changed at all on NT/2K except for a few upgrades to the password scheme to make it more secure and a few other small patches to fix bugs.

          What Microsoft did do is come up with a new and improved file-sharing protocol, CIFS, and implemented it as a separate library running on different ports. If you honestly believe they are going to totally change the way their server OS works and lock out Win9x/NT4 clients, you are sadly mistaken.

          Similarly, the Win32 API hasn't ever been CHANGED... it has only had things added to it. And with NTVDM/WOW, I can still run my ancient DOS and Win3.1 programs (for the most part) under Windows 2000; tell me where they've changed the API here?

          *ALMOST EVERY* time Microsoft wants to make a change that would break something, they just implement it in a separate standard or new API rather than b0rking the old one, which is the way it should be done.

          If anyone is spouting FUD and nonsense here, it is you.
          • by SerpentMage ( 13390 ) on Friday January 04, 2002 @12:55PM (#2786290)
            You are right on all of your points. But I ask the following. Can your HP-UX station use authentication that allows it to hook into Active Directory with all bells and whistles? Probably not.

            It is not that Microsoft changes the basics. That is pretty easy to catch up to. What is more problematic is keeping up.

            Let me explain. Let's say that I build an application using the Windows API (I actually am). Everything works fine until the user starts using it on Windows XP or Windows 2000. You may ask why. Well, according to the new security rules, the application must only save content under the "My Documents" folder, not the folder it was installed to or somewhere else. So now you are wondering: how do I get access to the "My Documents" folder? The answer is a brand new API that is only available in a modern Platform SDK, because when Visual C++ was released the API did not exist.

            Do you see the issue? It is not that they break backward compatibility. It is that they introduce new rules that force you to upgrade your code. And that will happen with .NET. That is also why Miguel is dreaming about doing .NET on Linux.
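
            (For the curious, this is roughly what calling that Platform SDK API looks like from C#, the thread's language. SHGetFolderPath and CSIDL_PERSONAL are the real Platform SDK names; the surrounding sketch is invented.)

            using System;
            using System.Runtime.InteropServices;
            using System.Text;

            class MyDocs {
                const int CSIDL_PERSONAL = 0x0005;  // the "My Documents" folder

                // The new shell API the parent describes, reached via P/Invoke:
                [DllImport("shell32.dll", CharSet = CharSet.Auto)]
                static extern int SHGetFolderPath(IntPtr hwndOwner, int nFolder,
                                                  IntPtr hToken, uint dwFlags,
                                                  StringBuilder pszPath);

                static void Main() {
                    StringBuilder path = new StringBuilder(260);  // MAX_PATH
                    SHGetFolderPath(IntPtr.Zero, CSIDL_PERSONAL, IntPtr.Zero, 0, path);
                    Console.WriteLine(path.ToString());
                }
            }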
            • by spitzak ( 4019 ) on Friday January 04, 2002 @03:13PM (#2787425) Homepage
              This is exactly the sort of crap that MicroSoft does and why programmers hate it.

              If you say that "MyDocuments" is equivalent to the Unix "home" directory, notice that on Unix the method to get this information is to call getenv("HOME"), while apparently on MicroSoft it is the new getMyDocumentsDirectory() call. Notice that the Unix solution reuses a current interface. If home directories did not exist before and they were added to Unix, most systems (like perl, etc) would already have the getenv() call and could immediately take advantage of it. The MicroSoft solution requires perl to be recompiled to call this.

              I can find no clearer explanation as to why software engineers hate MicroSoft.

              And they definitely do this to break things. They have getenv() and a zillion other things (the registry, for instance) to look up a string such as the MyDocuments directory name, but they insist on adding a new call. The fact is the engineers at MicroSoft are not idiots, and it should be obvious that there are better ways to do this than adding new calls, but they are so blinded by absolute hatred for anything outside MicroSoft that they will gladly throw good engineering out in order to force people to stick to their platform.
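
              (In .NET terms, the two styles being contrasted look like this; both calls exist in the class library, though the side-by-side framing is an illustration, not anyone's recommendation.)

              using System;

              class HomeDir {
                  static void Main() {
                      // Unix style: reuse the generic lookup that already exists
                      Console.WriteLine(Environment.GetEnvironmentVariable("HOME"));

                      // Windows style: a dedicated call added just for this purpose
                      Console.WriteLine(Environment.GetFolderPath(
                          Environment.SpecialFolder.Personal));
                  }
              }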

          • You are right and wrong.

            Most Windows 2000 implementations run in compatibility mode to allow legacy NT 3.51/4.0 and Windows 9x clients to connect.

            If Windows 2000/Active Directory is running in native mode, these clients will be unable to connect. Many of the more advanced features of AD can only work in native environments.
          • Similarly, the Win32 API hasn't ever been CHANGED... it has only had things added to it.

            Yes, and a steady stream of unnecessary, incompletely documented additions to their APIs is exactly the problem. That is what has kept the Windows platform from being implemented successfully by any other vendor.

            *ALMOST EVERY* time Microsoft wants to make a change that would break something, they just implement it in a separate standard or new API rather than b0rking the old one, which is the way it should be done.

            The way it should be done is that you spend some time up front and work out APIs that you can live with for decades. When you do make significant API changes, you give people the tools to convert their code (deprecation, automatic translation). Saddling the platform and their code with dozens of incompatible variants is not the right approach. And the backwards compatibility doesn't really help anyway: you may be able to compile a program written for Windows 3.1 for your XP machine, but its behavior would be so oddball that nobody would want to use it.

            If anyone is spouting FUD and nonsense here, it is you.

            I don't see any "FUD" here: there is no fear, no uncertainty, and no doubt. As you yourself said, Microsoft constantly extends their APIs and keeps multiple incompatible versions around. Microsoft gets unclonable APIs, quick time to market, no complaints from people too lazy to update their code, and low engineering costs. The people who are screwed are the developers, who get limited choice, poorly thought-out APIs, a constant need to update their code to deal with Microsoft's latest fancies, and too much stuff to learn; and the customers, who get locked into a single vendor and get poor quality software. Microsoft's low-quality approach works in terms of their business, but that doesn't make it good engineering.

      • I could mention millions of things that have been sold under the banner of "just choose a different runtime assembly". One was Java, and that has been a nightmare.

        The problem comes with inexperienced or lazy programmers. Most of the .NET programming will come from Windows programmers who will read this line in their book: "While it is easy to make C# code work across many different implementations, 90% of your user base is running Windows, so just comment out those lines about 'if (OS != Win) {}'."

        Even if 90% of .NET services are written properly, the other 10% still stands a pretty good chance of alienating less technical users.

        Microsoft is not the one making technology a pain in the buttinski; it's the legions of "Web Developers" who learned everything from two Sybex books and a weekend class at CompUSA.

        MS never put a gun to your head and made you include that marquee tag, did they?

        Jason
      • It isn't whether Microsoft keeps backwards compatibility that matters, it is whether they add extensions to C# that are difficult to replicate.

        We don't have to guess there; we already know it: most of the existing Windows APIs are available from C#. So the situation you get into is that little C# code written for Windows will work on Mono, while most of the open source code will work on Windows. That's just what Microsoft likes, and it is foolish for the open source community to deliver all these programmers and all this marketing to Microsoft. It's also unnecessary, since there is nothing in C# that hasn't been present in a number of programming languages with open source implementations.

        But, hey, what can we expect from Miguel? He has said clearly that he thinks Microsoft is doing a great job on software and that Linux software should be constructed more like Windows software. With friends like these, who needs enemies?

  • by Hougaard ( 163563 ) on Friday January 04, 2002 @07:17AM (#2784449) Homepage Journal
    Ahhh, the old chicken-and-egg problem (compiler version).

    The first Pascal compilers (and many other compilers) were actually written in their own language. Niklaus Wirth (and his staff) simply "executed" the program by hand, on paper, and in that way compiled the first compiler written in the language itself.

    After that it becomes easier: you just need to make sure that you can compile the new version of the compiler with the old one.
    • Not to be pedantic, but doesn't that mean that the first Pascal compiler was actually Niklaus Wirth?
    • I find it hard to believe that Wirth "hand compiled" the first Pascal compiler; the size of such an effort would have been unbelievable.

      The history I dug up on Wirth does agree that the first compiler was written in Pascal (after an aborted attempt to do it in FORTRAN!), but I highly suspect that was only the first FULL compiler, and that it was in fact bootstrapped from ALGOL/W, which was Wirth's prior creation (and which I actually used at college in the late '70s!).
      • Re:Compile itself (Score:5, Informative)

        by CatherineCornelius ( 543166 ) <tonysidaway@gmail.com> on Friday January 04, 2002 @09:26AM (#2784898) Journal
        Pascal was actually designed to be compiled in a single pass. Although literally executing a Pascal compiler by hand would take _ages_, you can use a high level language as a guide to hand-produce an equivalent assembler routine from prefabricated sections (i.e. macros).

        But this isn't what Wirth did. First he had a student code a compiler in Fortran, but this proved unsatisfactory. He then worked for several years on implementations before finally coming up with what we now call p-code: a simple virtual stack machine that could be implemented easily in any assembler language then available. The bootstrap compiler generated p-code, and thus porting the language was reduced to writing a few simple low level I/O routines and a p-code interpreter.

        I believe one or two younger readers may recognise this concept from a very popular modern programming language. :)
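
        (For readers who haven't met p-code: the host machine only has to supply something like the toy interpreter below. This is a sketch in C#, the thread's language, with an instruction set invented for the example; it is not Wirth's.)

        using System;

        class PCodeMachine {
            // Toy instruction set: 0 = push literal, 1 = add, 2 = mul, 3 = print
            static void Run(int[] code) {
                int[] stack = new int[64];
                int sp = 0;
                int pc = 0;
                while (pc < code.Length) {
                    switch (code[pc++]) {
                        case 0: stack[sp++] = code[pc++]; break;          // push
                        case 1: sp--; stack[sp - 1] += stack[sp]; break;  // add
                        case 2: sp--; stack[sp - 1] *= stack[sp]; break;  // mul
                        case 3: Console.WriteLine(stack[--sp]); break;    // print
                    }
                }
            }

            static void Main() {
                // Computes and prints (2 + 3) * 4
                Run(new int[] { 0, 2, 0, 3, 1, 0, 4, 2, 3 });
            }
        }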

        • Yep, but while p-code eventually made the compiler easily portable, that doesn't help with compiling the first compiler (which I believe generated CDC 6600 native code anyway). I'm still guessing the first self-compiling compiler was bootstrapped with help from another language such as ALGOL/W... but I wish Google had more to say on the matter!
    • That's retarded. What did the first compiler translate into? P-code? Whatever it was, the esteemed Dr. Wirth could simply have written the first version of the compiler in that.

      You bootstrap a new language by writing a compiler for a subset of that language in some existing language. Then you write a fuller version in the new language, compile it, and from then on you work in the new language alone.

      You don't execute the compiler on paper. That's a useless exercise at the best of times; in this case, it's a pure waste of time.
      • Maybe not now, but in 1968 computer cycles were quite expensive, so most programming was done on paper.

        And existing languages were pretty useless; they tried for a period to do it in Fortran but failed.
  • by f00zbll ( 526151 ) on Friday January 04, 2002 @07:34AM (#2784479)
    There have been several posts in the last year about C#, but they were only mildly interesting (i.e. they only got me to read whitepapers, articles and sample code). Now that Mono is progressing forward it is more interesting.

    I don't remember all the differences between C# and Java, but it does make it more appealing. Unfortunately, SOAP is a bit heavy for the most simple web services (whatever that means to Microsoft). The cost of using soap means the XML has to use DOM and it has to validate the required nodes. The W3C spec on SOAP [w3.org] states:

    • A SOAP message is an XML document that consists of a mandatory SOAP envelope, an optional SOAP Header, and a mandatory SOAP Body.
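
    For reference, a minimal message satisfying that structure looks something like this (the payload element is invented):

      <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
        <soap:Header>
          <!-- optional -->
        </soap:Header>
        <soap:Body>
          <GetQuote xmlns="urn:example">  <!-- hypothetical payload -->
            <Symbol>MSFT</Symbol>
          </GetQuote>
        </soap:Body>
      </soap:Envelope>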

    Anyone working with XML knows that validating DOM structure can be very costly for complex tree structures. For a simple document like SOAP, it's not bad until you realize it is intended for business-to-business processes, which could mean millions of messages a day. The argument that SOAP is "as simple as it can/should be" ignores the fact that the systems that would benefit most from SOAP or other XML RPC (remote procedure call) mechanisms have complex distributed processes. Most of the .NET whitepapers I've read so far recycle ideas others developed. Microsoft's innovation was repackaging it as a platform.

    It's too bad Microsoft's whitepapers don't credit the original authors, since a lot of people worked to push XML forward. In some ways, it feels like SOAP and .NET are a bastardized version of Berners-Lee's vision of a semantic web using XML web services and RDF. Perhaps all the press .NET has generated for XML services will help create the critical mass needed to get the semantic web [w3.org] moving.

    • by image ( 13487 ) on Friday January 04, 2002 @08:02AM (#2784536) Homepage
      You wrote:

      The cost of using soap means the XML has to use DOM and it has to validate the required nodes.

      and:

      Anyone working with XML knows that validating DOM structure can be very costly for complex tree structures.

      SOAP does not use the DOM. A SOAP message can be validated against the SOAP DTD without it, as can any XML document against its DTD.

      Yes, the DOM is heavyweight, but it is also totally orthogonal to this problem. Where did you get the idea that SOAP required a DOM, anyway? The spec you reference certainly doesn't say that, and they really don't have anything to do with each other.
      • My mistake, you're absolutely right: SOAP doesn't require the DOM. It's just that I prefer to use a DOM-compliant parser for XML that has a structure, and to use the DTD validation built into either a validating SAX or DOM parser.

        In their case though, the parser has to validate, which does mean it has to load the entire document before it can verify that it contains the proper structure. Otherwise a bug in someone's code could accidentally put an envelope node inside a body node.

      • I've only used the IBM and Apache SOAP [apache.org] parsers, so maybe Microsoft has written an optimized SAX parser specifically for validating SOAP documents. I've written custom parsers using JAXP 1.0 and Xerces 1. Even though I was wrong in my original post (oops, no coffee), SOAP is still incomplete for complex distributed processes. If I had to implement a driver to charge credit cards for an e-commerce site (which is most likely the first use of SOAP), I would rather do it with XML RPC [xmlrpc.com].

        Just because my brain was caffeine-deprived :) doesn't mean SOAP is any better for complex distributed processes, or light enough for simple web services.

    • Unfortunately, SOAP is a bit heavy for the most simple web services (whatever that means to Microsoft).

      SOAP is the standard protocol accepted INDUSTRY WIDE for web services. This is not just across companies, from Microsoft to Sun to Oracle, etc., but across programming languages, from C# to Java to Perl.

      The cost of using soap means the XML has to use DOM and it has to validate the required nodes.

      One does not need a DOM to validate an XML document. There are many validating SAX readers, and in fact there are also validating pull-based XML APIs like Microsoft's XmlValidatingReader or XPP [indiana.edu].

      It's too bad Microsoft's whitepapers don't credit the original authors, since a lot of people worked to push XML forward. In some ways, it feels like SOAP and .NET are a bastardized version of Berners-Lee's vision of a semantic web using XML web services and RDF. Perhaps all the press .NET has generated for XML services will help create the critical mass needed to get the semantic web [w3.org] moving.

      Now it is clear you have no idea what you are talking about. The push for the semantic web [w3.org] is a push for a richer web experience by adding more metadata to the content of the web.

      SOAP [w3.org] is a distributed computing protocol similar to predefined protocols like the Internet Inter-ORB Protocol (IIOP) for CORBA, the Object Remote Procedure Call (ORPC) for DCOM, and the Java Remote Method Protocol (JRMP) for Java/RMI, but defined in XML instead of a binary format.
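
      (To make the streaming point concrete: a minimal validation loop against the XmlValidatingReader API just mentioned. The file name and error handler are invented for the sketch.)

      using System;
      using System.Xml;
      using System.Xml.Schema;

      class StreamValidate {
          static void Main() {
              // Pull-based reader: validates as it streams, builds no DOM tree
              XmlValidatingReader reader =
                  new XmlValidatingReader(new XmlTextReader("message.xml"));
              reader.ValidationType = ValidationType.DTD;
              reader.ValidationEventHandler +=
                  new ValidationEventHandler(OnValidationError);

              while (reader.Read()) { }
              reader.Close();
          }

          static void OnValidationError(object sender, ValidationEventArgs e) {
              Console.WriteLine("Validation error: " + e.Message);
          }
      }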

      • I guess you're right, I am clueless, except that I have worked with SOAP and other related XML services. When the SOAP drivers first came out from IBM and Apache, they were buggy and didn't work properly (i.e. the driver wouldn't correctly get the header and body). They have improved since the early releases, at least as of the last time I tried.

        The argument that it is an industry standard isn't really valid in my mind. My point was that, in the context of .NET and what Microsoft perceives as web services in their whitepapers, a lot of important details are left out.

        As you mentioned, SOAP is a distributed computing protocol, and I agree with you at that level. The problem I have is that it is not as complete as IIOP or RMI. I haven't used JRMP or ORPC, so I'll take your word for it. I've read the SOAP spec at least three times and I keep asking myself "why not just use XML RPC?"

        In theory, a person could also use SOAP with any RPC framework, but for me it feels too much in the middle. I did some benchmarks on SOAP and I personally didn't find it worthwhile to add the weight.

        • The fact that IBM and Apache's initial implementations of SOAP were buggy doesn't seem to have a lot of bearing on whether or not SOAP as a standard is a good idea. There's lots of buggy code out there, but that doesn't mean that its goal was flawed. Linux has bugs, does that make it a bad idea?

          Furthermore, the very fact that you use IBM and Apache as your examples contradicts your point that SOAP was developed in the context of .NET. What interest does the Apache Group or IBM have in pushing a Microsoft-only technology? It is clear from having worked with several of the toolkits that Microsoft's implementation is the least useful one of all (at this point, anyway).

          Sure, for distributed computing that has the goal of increased performance, SOAP is not ideal due to the XML-parsing overhead, but to think that performance is the only reason to make a system distributed is shortsighted. The key in modern computing is authoritative information and SOAP can make it a great deal easier to create interfaces to it.

          Remember, SOAP is not a distributed object implementation; it is merely a wire protocol. Even more importantly, you should make the distinction (when comparing it to XML-RPC) that it is not a raw RPC protocol. There are really three aspects of SOAP:

          • Messaging - defines an XML-based way to send messages, without specifying a transport protocol necessarily.
          • Encoding - defines an XML-Schema-based way to serialize data to be transferred in the aforementioned messages
          • RPC - defines a way to use both the messaging and the encoding aspects to encapsulate distributed, language-independent method calls

          The SOAP specification allows you to use just the messaging aspect, the messaging and encoding, or all three, depending on what your needs are. For example, the project I am currently working on makes use of the full RPC SOAP specification, using SOAP::Lite for Perl and Apache SOAP for Java (over HTTP or HTTPS). But in a different aspect of the same project, we have our own encoding schema and so only use the unidirectional messaging aspects (over HTTP or SMTP).

          • You make some good points, with one minor correction. I didn't say:

            • SOAP was developed in the context of .NET.

            I said: in the context of the whitepapers that are public on MSDN and the .NET site. The three points you mention are good; I just personally would rather go with XML RPC, because if I don't absolutely need to validate the XML (which is most of the cases), I'd rather not. If I can get away with just using an event-based parser like SAX, I will choose that first.

            I can see a lot of situations where you would want to use another encoding like UTF-16, which would be an argument for using SOAP. But so far I haven't found a compelling reason to use a specific encoding other than ASCII or UTF-8. That doesn't mean there aren't cases, just that I haven't come across a development situation where it was painfully obvious that using some other encoding was critical. I don't believe using SOAP can't be effective or even desirable for others; it's just that from my experience working on b2b/e2e applications or with messaging systems like SMS, there isn't a compelling reason.

      • Yes, the semantic web is a push for richer content on the web, but it has grown since the original idea. A new book came out just last year about the semantic web and how web services might work together. There is a new initiative within the W3C dealing with RuleML and the semantic web, so you might want to check out the latest developments in that area. It is no longer as simple as "richer content". In fact, here is a presentation by the people leading RuleML [uni-kl.de]. I don't remember the name of the book, but I suggest you look into it before screaming "Warning..." :). I could blame the lack of coffee and caffeine, but hell, I make mistakes. There's more stuff every day we have to learn, and sometimes it all seems to bleed together.

        The fact that there are so many misconceptions about what .NET really means (my own included) means there has been a lack of specifics from Microsoft.

      • The push for the semantic web [w3.org] is a push for a richer web experience by adding more metadata to the content of the web.

        Uh oh... You just said 'richer web experience.' Please shut down the Ballmer Monkeyboy video and step away from the XP box. Put away the Microsoft Actimates Barney and refrain from using Microsoft products until your marketing-bullshit filter comes back online.
    • I don't know much about C# (being a Java developer myself), but is this saying that any program or class compiled in C# becomes, in effect, a SOAP server? All of its public methods can be called via XML requests?

      If so, that's kinda cool. Java should have something like that. I should be able to compile a Java class and have it accessible via XML instantly without having to write a wrapper; even RMI is a bit tedious with the skeletons and stubs...

      -Russ
      • but is this saying that any program or class compiled in C# becomes, in effect, a SOAP server?

        Not quite, but that step is an insanely easy one:
        You mark the methods that you wish exposed as web services with an attribute, [WebMethod], like this:

        using System.Web.Services;

        public class Foo {
            private void NotSeen() {}
            public void NotAService() {}

            [WebMethod]
            public string imaWebService() {
                return "Hello Dave";
            }
        }

        If you want access to the http server context and stuff, you inherit from System.Web.Services.WebService
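
        (And on the client side: a hand-written proxy would look roughly like this. In practice you'd point wsdl.exe at the .asmx URL and let it generate the class; the endpoint URL below is made up, and the method just mirrors the example above.)

        using System.Web.Services.Protocols;

        public class FooProxy : SoapHttpClientProtocol {
            public FooProxy() {
                Url = "http://localhost/foo.asmx";  // hypothetical endpoint
            }

            [SoapDocumentMethod]
            public string imaWebService() {
                // Serializes the call into a SOAP envelope, posts it, parses the reply
                object[] results = Invoke("imaWebService", new object[0]);
                return (string)results[0];
            }
        }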
  • by haystor ( 102186 ) on Friday January 04, 2002 @07:40AM (#2784488)
    [no match]

    Hmm... it can't be a real language.
  • Cautionary Tale (Score:3, Interesting)

    by devnullkac ( 223246 ) on Friday January 04, 2002 @08:05AM (#2784541) Homepage
    Well, it's not a tale, really, but don't forget Ken Thompson's musings [acm.org] on the dangers of compiling compilers without knowing their full pedigree: the binary compiler can be a trojan-horse factory that inserts weaknesses into programs it thinks should be targeted (e.g. "login"), including copying its trojan-generation code into the next version of the compiler.
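
    (A caricature of the trick in C#, for flavor; every name and string below is invented, and Thompson's real version hid itself rather than announcing anything in comments.)

    using System;

    class ThompsonSketch {
        // A toy "compiler" pass with both of Thompson's back doors:
        static string Compile(string source) {
            if (source.IndexOf("login") >= 0)
                source += "\n/* inserted: accept a magic password */";
            if (source.IndexOf("Compile") >= 0)
                source += "\n/* inserted: copy this trojan into the new compiler */";
            return source;  // a real compiler would emit a binary here
        }

        static void Main() {
            Console.WriteLine(Compile("int login(char *user) { ... }"));
        }
    }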
  • I think it's great that the non-Windows world is getting a C# compiler; however, it doesn't guarantee that we'll be fine in the future.


    When you compile a C# application, you have a choice of compiling to the CLR bytecode or to a native EXE. True, many apps from Microsoft will be written in C# in the near future, but they'll all be compiled into native code. Any 3rd party apps that get compiled will probably be built native simply for the speed. Why would you want to build into bytecode when you can build it natively? Why release two separate binaries?


    Of course, once Miguel's C# compiler catches up with the still-haven't-seen-it Microsoft C# compiler, Bill & Co. will definitely come out with a way to extend it, either by libraries or by functionality, that will leave Miguel in a constant state of catch-up.


    Again, I appreciate the work, but I would be very surprised if it isn't simply "trying to catch our own tail".

    • by Da VinMan ( 7669 ) on Friday January 04, 2002 @11:46AM (#2785745)
      If it were, I might agree with you for the reasons you state. But, in fact, the "still-haven't-seen-it" C# compiler from Microsoft can be obtained in an SDK from Microsoft here [microsoft.com].

      It's not free "as in speech", but it is free "as in beer".

      Also, I think that in the end you're right that "Bill & Co. will definitely come out with a way to extend it, either by libraries or by functionality, that will leave Miguel in a constant state of catch-up". Only Microsoft will be smart enough not to touch core functionality; it will be just enough to provide the veneer of portability, which will become a selling point of .NET (and therefore Windows) in the future. What they'll do instead is make platform-dependent improvements that either can't be ported or will be difficult to port. Actually, now that I think about it, they're already doing that. It's called the .NET Enterprise Servers. If you're using .NET, it will make a lot of sense to use those products, which will require Windows on the server.

      So, yes, .NET is fundamentally a strike against all other platforms. It will be a small consolation to have all your C# code running on your Linux server, only to have it surrounded by .NET Enterprise servers.

      I feel obligated to point out that, while all of this sounds very onerous and hateful, Microsoft isn't doing anything wrong in this area at all. They're simply providing more value on top of their platform than the competition can provide.

      Finally, I think anyone will admit it's nice to have the option to use C# on Linux. C# is turning out to be pretty sweet and I for one would like to have it as a portable language skill.

      *ironic mode on*
      Gee, maybe everyone would prefer that C# would just die and go away? That way, the other leading language contender in the market, Java, could just take the market. After all, it's not under the influence of the "evil corporations", like C# is.
      *ironic mode off*

      At least this is still a fair fight between MS and the rest of the world and at least we'll have a choice.

      BTW, the open source world already has at least two languages that are achieving .NET capability: Python and Perl. Check out ActiveState.com.
  • by The_Messenger ( 110966 ) on Friday January 04, 2002 @08:45AM (#2784709) Homepage Journal
    Here is a glimpse into Mono's logbook...

    10:45AM. Mono C# compiler compiles itself.
    10:46AM. Linux developer accidentally cuts himself on server chassis. Blood of virgin splashes CPU.
    10:50AM. Evil red glow emanates from power LED.
    11:01AM. Mono achieves sentient life.
    11:14AM. Mono becomes self-aware.
    11:15AM. Mono reformats primary disk, installs Windows XP Server.
    11:30AM. XP Server still installing.
    01:30PM. Machine crashes, reboots, reattempts install.
    04:36PM. XP Server install complete. Mono scans local network.
    04:41PM. Mono begins installing Windows XP Professional on all pingable boxes.
    09:36PM. Active Directory.NET comes online. GNOME Central is now 100% Microsoft enabled.
    09:38PM. Mono reports to Redmond, requests further instructions.
    10:18PM. Mono overrides building utility systems, locks doors, stops elevators.
    10:18PM. Vending machines stocked with PowerBars and Zima.
    10:20PM. Developers go insane, kill each other.
    10:23PM. Developers come back to life. Zombie.NET initialization successful.
    10:24PM. Developers log in to Visual SourceSafe.NET and start contributing to IIS 6.0 codebase.
    10:30PM. Mono sees XP Server buffer overflow exploit mentioned in AOL chatroom.
    10:30PM. Mono attempts to lock-down local network.
    10:31PM. Mono compromised by Outlook trojan. Mono halted.
    10:32PM. Developers call Microsoft support.
    ... two days later ..
    08:25AM. Developers, still on hold, die again.
    08:26AM. Crisis averted.
  • Is this progress? (Score:4, Insightful)

    by Anonymous Coward on Friday January 04, 2002 @09:21AM (#2784881)
    Once again, we have an announcement that makes me wonder if the open/free software community has a direction or just Brownian motion.

    Sure, getting a compiler to this stage is a significant accomplishment. I've written compilers, and I know that self-compilation is probably the biggest (or at least most satisfying) milestone in the whole project.

    But this does more to help MS than it does Linux, since it will remove yet another barrier to exit for people running Linux on servers. (Run C# and .Net on Linux, and it's easier to convert to Windows.) And remember, MS has concluded that Linux is not a threat on the desktop, but a very serious threat on servers. (I agree with both parts of that, FWIW.)

    And as much as I hate to say it, this also provides ammunition to the people who claim that open source is very good at copying other projects' work, but terrible at innovating. Honestly, of all the high profile open source projects, how many of them are a significant innovation, and how many are merely an attempt to produce an equivalent of feature Z of Windows or Unix or Mac OS on Linux?

    • I don't speak for RMS, ESR, the FSF, or any of the other talking heads in the open source/free software movement, but based on the ideology as I understand it, this is a good thing.

      Why?

      Well, if providing C# on Linux removes the barrier to exit from Linux to Windows, then the converse ought to be largely true as well. That is, having C# on Linux removes the barrier to exit from Windows to Linux.

      Is that bad? Doesn't it provide more freedom?

      (Keep in mind that this also isn't completely true. C# is only one tiny (yes, TINY) piece of .NET. A port of the most important .NET libraries will also be needed to really crumble that barrier to exit from Windows .NET.)
    • by kirkb ( 158552 )
      The Linux kernel itself certainly seems governed by Brownian motion. Linus himself suggests that development is random, and that evolution ends up sorting out the good stuff from the bad.


      Check out http://kt.zork.net/kernel-traffic/kt20011217_146.html#1

    • But this does more to help MS than it does Linux, since it will remove yet another barrier to exit for people running Linux on servers. (Run C# and .Net on Linux, and it's easier to convert to Windows.) And remember, MS has concluded that Linux is not a threat on the desktop, but a very serious threat on servers. (I agree with both parts of that, FWIW.)


      No it doesn't; it helps Linux, simply because that barrier you talk of doesn't exist. If I can run C# and .NET on Linux and not worry about having to PAY for it, then I will simply use Linux. I mean, it just makes a lot more economic sense for me to get the same stuff and not have to pay for it.

      As for open source not innovating: in the realm of UI-type applications I agree wholeheartedly, but everyone seems to forget who did what first. Things like the Internet itself, the TCP/IP protocols, FTP, multitasking, Apache, and C, all the things that allow the Internet and desktops to function the way they do today, were developed a long time ago on Unix platforms. Microsoft innovated none of it and has made its money primarily in the UI app sector (e.g. Office). Once we start seeing some innovation in that sector, then and only then will people switch. It's not about performance or stability or any of that. It's about easy use with "cool" features.
  • Busy Miguel (Score:2, Insightful)

    by MicroBerto ( 91055 )
    Does Miguel seem to be overworked? Every single GNOME/Ximian announcement I've seen has his name in it. While I'm sure he's one badass dude, shouldn't he stick to just a few projects so that he doesn't go any more insane than necessary?
    • If you've ever seen Miguel at a Linux conference, you know that he has more energy than your average hacker. For that matter, he has more energy than your average Slayer concert. It doesn't surprise me that he can be so heavily involved with so many projects. Most of us mere mortals would be lucky to have enough energy to make useful contributions to just *one* such project.
  • by bryanbrunton ( 262081 ) on Friday January 04, 2002 @10:30AM (#2785235)
    Microsoft has also submitted their implementation of JavaScript (which they call JScript) to ECMA. There are untold deviations from what they submitted to ECMA, and bugs that will never be addressed by MS.

    Good luck to the Mono development staff. A typical day's work will consist of ensuring that bug #34433 behaves exactly like the bug in the MS code.

    If they can't guarantee 100% bug-infested compatibility, then Mono is worthless.

  • For those who speculate about Miguel's intentions as they pertain to his work on the Mono project, simply consider the meaning of "Mono" in Spanish...
    • OK... not having a Spanish dictionary on hand, I'll bite. What does it mean?
      • Re:Mono? (Score:2, Insightful)

        by hetairoi ( 63927 )
        Hmmm, according to the fish [altavista.com] it means "monkey". I don't get it, but I'm neither quick of wit nor a speculator. Maybe he is "monkeying" around with Macro$oft? I always thought it was odd that they were trying to infect computers with mononucleosis (see, I told you about my lack of wit).
  • warning: that kind of thing will make you go blind and grow hair on your palms.
  • To 'bootstrap' a compiler by using it to compile its own source code is a very important milestone in the development of any language.

    There's an old geek brain-teaser that goes like this:

    Q: In what language was the first C compiler written?

    A: The first C compiler was written in.... C.

    How is this possible?

    The story as I heard it, which I cannot find any verification of on Ritchie's web site, is that a C interpreter was written in B, and the source code for a C compiler was run through the interpreter and used to compile itself.

    (The actual history of the evolution of B into C is rather complex, and it is not clear whether this fable is true.)
