Posted by timothy
from the not-the-same-as-self-flagellation dept.
Bob Smith writes: "Miguel just committed the last patch necessary to get Mono's C# compiler to compile itself. After 7-odd months, MCS is now self-hosting." jbarn adds: "Mono-list email is here."
by Anonymous Coward
on Friday January 04, 2002 @06:49AM (#2784399)
I ask that people submit more articles about accomplishments in the world of compilers. Does anyone recall the functional programming contest? There were many excellent compilers featured that are being improved constantly, and /. should focus some of its attention on those, too. C# is getting too much attention just because it's a Microsoft thing... :)
No, C# is getting attention because it is part of .NET (or at least a language binding for the Common Language Runtime, the basis of .NET), and lots of people give a damn about .NET and think it has significance (either bad or good). Not many people care about new compilers for functional languages that will never get above 0.01% acceptance. Actually, I would rather Slashdot *not* post frivolous stories about every compiler on earth.
You may well be wrong about the functional languages.
It won't be long before the overheads of FP will be pretty negligible compared to the increased programmer productivity and software robustness. This advantage will gradually start to overcome inertia in the marketplace and start to make a real impact.
Also, with the increasing use of virtual machines as an execution platform, it's going to get a lot easier to use functional languages and integrate them into larger projects. My bet is that something like Haskell will suddenly start to make headlines in the next couple of years on the .NET platform.
I think it'll bring more developers to Mono, and better code too. Many developers couldn't produce quality code due to the lack of a Linux compiler; to develop on the Mono framework, developers had to go to Windows and compile with the .NET compiler.
Isn't it wonderful having C# and .NET on Linux? After all, they won't have the problems that the Samba boys have trying to keep up with MS deliberately changing things to stop them, and it won't be miles worse.
Mono is a nice idea, but unfortunately .NET isn't a revolution; it's a way to build poor-quality mainframes: lots of boxes, poor I/O. In terms of distribution there is none. Full credit to the guys for doing this, but it does remind me of a Larson cartoon, you know, the one with the Sheep Bar and the line...
Having the compiler is nice, but unfortunately it doesn't preclude the perennial "catch-up" game that Microsoft is so fond of playing. The compiler is just one part of the puzzle... there are libraries and other assorted goodies that Linux will need, and Microsoft can (and probably will) use some tomfoolery to make projects that ride on their coat-tails fall off.
Horrible analogy. Being able to port C# programs written in VS.NET is only a part of the reason for Mono. And MS won't stray far from the ECMA standard, which is what the first release of VS.NET follows. It's taken until now to break free of some legacy "features" like incorrect for-scoping introduced way back in VC++ 4, just for backward compatibility.
Right now, all Linux has for standard middleware is CORBA. Sure there are Linux die-hards that will stand by CORBA, but most will agree that COM is superior. Now with Mono, Linux will finally have a decent object technology with which to build large OO projects that will work together.
>>Right now, all Linux has for standard
>>middleware is CORBA. Sure there are Linux
>>die-hards that will stand by CORBA, but most
>>will agree that COM is superior.
Don't you mean COM+? CORBA encapsulates a wire protocol, transaction capabilities, security, etc. COM is really just "IUnknown", and anyway, if it is so superior, why is it being replaced by .NET?
COM is superior to CORBA? Sorry to be rude, but on what planet is that true?
COM runs on one platform on one protocol.
With CORBA, federated networks of loosely coupled elements are possible. Transactional mapping is better than in COM (in the new versions of CORBA).
CORBA is more established and has a better history.
If you meant COM+ rather than COM then there is some merit to actually having a debate, but COM is a poor and simple man's version of CORBA; even MS realised this, hence the massive overhaul for COM+.
CORBA IMO still remains the best of the bunch out there, but it doesn't have the marketing might of EJB or .NET/COM+. Smart ideas don't often win in the world of marketing technology.
One of the biggest advantages of COM over CORBA is a *big* speed difference when running in-proc, while maintaining the same speed as CORBA when out-of-process and cross-machine.
There's also the tools. Infoworld (I think), in a comparison of CORBA and COM (a couple years ago...things may be different now) concluded that CORBA was a little bit better technically, and COM was a lot easier to use, because Microsoft provided better tools.
The thing about Microsoft is that they often take two or three iterations to get it right, but when they get there, it's right for the programmer. Too many things in the Unix world are right in a theoretical sense, but a complete pain in the ass when you fire up your editor, start the caffeine IV, and try to crank out code.
COM and CORBA aren't comparable. COM is an attempt to give C++ a dynamic object system. You have to compare COM to something like Objective-C(++), and in that comparison, COM fails miserably: it's much less flexible and much more complex. DCOM is comparable to CORBA, and it's not as good.
Isn't it wonderful having C# and .NET on Linux? After all, they won't have the problems that the Samba boys have trying to keep up with MS deliberately changing things to stop them, and it won't be miles worse.
Mono is not a clone of .NET on Linux but an implementation of the ECMA standard on Linux. You do know that C# and the CLI are ECMA standards, right [slashdot.org]?
Dare Obasanjo: Similarly, what happens if Dan Kusnetzky's prediction [graduatingengineer.com] comes true and Microsoft changes the .NET APIs in the future? Will the Mono project play catch-up, or will it become an incompatible implementation of .NET on UNIX platforms?
Miguel de Icaza: Microsoft is remarkably good at keeping their APIs backwards compatible (and this is one of the reasons I think they have had so much success as a platform vendor). So I think that this would not be a problem.
Now, even if this was a problem, it is always possible to have multiple implementations of the same APIs and use the correct one by choosing at runtime the proper "assembly". Assemblies are a new way of dealing with software bundles, and the files that are part of an assembly can be cryptographically checksummed and their APIs programmatically tested for compatibility. [Dare -- Description of Assemblies from the MSDN glossary [microsoft.com]]
So even if they deviate from the initial release, it would be possible to provide assemblies that are backwards compatible (we can both do that: Microsoft and ourselves).
.NET isn't a revolution, it's a way to build poor quality mainframes, lots of boxes, poor I/O.
No, .NET isn't a revolution; it's simply a slight improvement on an old idea of how things can be done. The same way Linux, Apache, BSD, Perl, Java, XML, P2P, the world wide web, etc. aren't revolutions but simply new spins on decades-old concepts.
However, does this somehow preclude their usefulness or the fact that they are all innovative in their own way?
I am well aware that MS have submitted C# to ECMA, but then they also sat on the SOAP 1.1 expert group and their implementation didn't meet the standard.
Implementing ECMA on its own is pointless; .NET is the only implementation that matters. In the same way that JBoss and Enhydra have to stick to the J2EE spec because implementing just the language spec is pointless, Mono will have to implement .NET or just become a sideline.
MS don't have a great history of backwards compatibility, they have a great history of patches that upgrade their old stuff to match the new stuff. DR-DOS, Samba et al all demonstrate the changing nature of those supposedly backwards compatible APIs.
Not being a revolution isn't a problem, but it would be nice for once if we could actually move beyond a problem set that was effectively solved around 8 years ago rather than just spinning the same one over and over again.
Is that why some 15 year old HP-UX boxen running some ancient version of samba can still talk to my Windows 2000 server?
I find it hilarious that you would accuse Microsoft of changing the API; SMB hasn't changed at all on NT/2K except for a few upgrades to the password scheme to make it more secure and a few other small patches to fix bugs.
What Microsoft did do is come up with a new and improved filesharing protocol, CIFS, and implement it as a separate library running on different ports. If you honestly believe they are going to totally change the way their server OS works and lock out Win9x/NT4 clients, you are sadly mistaken.
Similarly, the Win32 API hasn't ever been CHANGED... it has only had things added to it. And with NTVDM/WOW, I can still run my ancient DOS and Win3.1 programs (for the most part) under Windows 2000; tell me where they've changed the API here?
*ALMOST EVERY* time Microsoft wants to make a change that would break something, they just implement it in a separate standard or new API rather than b0rking the old one, which is the way it should be done.
If anyone is spouting FUD and nonsense here, it is you.
You are right on all of your points. But I ask the following. Can your HP-UX station use authentication that allows it to hook into Active Directory with all bells and whistles? Probably not.
It is not that Microsoft changes the basics. That is pretty easy to catch up to. What is more problematic is keeping up.
Let me explain. Let's say that I build an application using the Windows API (I actually am). Everything works fine until the user starts using it on Windows XP or Windows 2000. You may ask why? Well, according to the new security rules, the application must only save content under the "My Documents" folder and not the folder it was installed to or somewhere else. So now you are wondering: how do I get access to the "My Documents" folder? The answer is a brand new API that is only available in a modern Platform SDK, because when Visual C++ was released the API did not exist.
Do you see the issue? It is not that they break backward compatibility. It is that they introduce new rules which force you to upgrade your code. And that will happen with .NET. That is also why Miguel is dreaming about doing .NET on Linux.
This is exactly the sort of crap that MicroSoft does and why programmers hate it.
If you say that "MyDocuments" is equivalent to the Unix "home" directory, notice that on Unix the method to get this information is to call getenv("HOME"), while apparently on MicroSoft it is the new getMyDocumentsDirectory() call. Notice that the Unix solution reuses a current interface. If home directories did not exist before and they were added to Unix, most systems (like perl, etc) would already have the getenv() call and could immediately take advantage of it. The MicroSoft solution requires perl to be recompiled to call this.
I can find no clearer explanation as to why software engineers hate MicroSoft.
And they definitely do this to break things. They have getenv() and a zillion other things (the registry, for instance) to look up a string such as the MyDocuments directory name, but they insist on adding a new call. The fact is the engineers at MicroSoft are not idiots, and it should be obvious that there are better ways to do this than adding new calls, but they are so blinded by absolute hatred for anything outside MicroSoft that they will gladly throw good engineering out in order to force people to stick to their platform.
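For what it's worth, here is a minimal C# sketch of the two lookups being argued about, side by side (the class name is just made up for illustration, and HOME is normally unset on a stock Windows box):
using System;
class WhereDoISave
{
    static void Main()
    {
        // The "new call" route: SpecialFolder.Personal resolves to the
        // user's "My Documents" folder via the shell.
        string myDocs = Environment.GetFolderPath(Environment.SpecialFolder.Personal);
        // The Unix-style route the parent post describes: reuse the
        // environment instead of adding a dedicated call.
        string home = Environment.GetEnvironmentVariable("HOME");
        Console.WriteLine("My Documents: " + myDocs);
        Console.WriteLine("HOME: " + (home == null ? "(not set)" : home));
    }
}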
I would be very curious to see more specifics of that anecdote as well. I currently have 2 different apps written in VC++ that save lots of data to their own directories, as well as user-specified locations. I regularly test both on Win98,ME,2K, and XP, and have had 0 issues related to saving files.
This guy has to be trying to accomplish something much more involved than just "saving content". Either that, or he misunderstood Microsoft's recommendations (specifying that user data is stored under \Documents and Settings\User) as rules.
Most Windows 2000 implementations run in compatibility mode to allow legacy NT 3.51/4.0 and Windows 9x clients to connect.
If Windows 2000/Active Directory is running in native mode, these clients will be unable to connect. Many of the more advanced features of AD can only work in native environments.
Similarly, the Win32 API hasn't ever been CHANGED... it has only had things added to it.
Yes, and a steady stream of unnecessary, incompletely documented additions to their APIs is exactly the problem. That is what has kept the Windows platform from being implemented successfully by any other vendor.
*ALMOST EVERY* time Microsoft wants to make a change that would break something, they just implement it in a separate standard or new API rather than b0rking the old one, which is the way it should be done.
The way it should be done is that you spend some time ahead of time and work out APIs that you can live with for decades. When you do make significant API changes, you give people the tools to convert their code (deprecation, automatic translation). Saddling the platform and their code with dozens of incompatible variants is not the right approach. And the backwards compatibility doesn't really help anyway: you may be able to compile a program written for Windows 3.1 for your XP machine, but its behavior would be so oddball that nobody would want to use it.
If anyone is spouting FUD and nonsense here, it is you.
I don't see any "FUD" here: there is no fear, no uncertainty, and no doubt. As you yourself said, Microsoft constantly extends their APIs and keeps multiple incompatible versions around. Microsoft gets unclonable APIs, quick time to market, no complaints from people too lazy to update their code, and low engineering costs. The people who are screwed are the developers, who have limited choice, poorly thought-out APIs, need to update their code constantly to deal with Microsoft's latest fancies, and too much stuff to learn, and customers, who get locked into a single vendor and get poor quality software. Microsoft's low-quality approach works in terms of their business, but that doesn't make it good engineering.
I could mention millions of things that have bandied about the promise of "just choose a different runtime assembly". One was Java, and that has been a nightmare.
The problem comes with inexperienced or lazy programmers. Most of the .NET programming will come from Windows programmers who will read this line in their book: "While it is easy to make C# code work across many different implementations, 90% of your user base is running Windows, so just comment out those lines about 'if (OS != Win) {};'."
Even if 90% of .NET services are written properly, the other 10% still stands a pretty good chance of alienating less technical users.
Microsoft is not the one making technology a pain in the buttinski, it's the legions of "Web Developers" who learned everything from two Sybex books and a weekend class at Comp USA.
MS never put a gun to your head and made you include that marquee tag did they??
It isn't whether Microsoft keeps backwards compatibility that matters, it is whether they add extensions to C# that are difficult to replicate.
We don't have to guess there--we already know it: most of the existing Windows APIs are available from C#. So, the situation you get into is that little C# code for Windows will work on Mono, while most of the open source code will work on Windows. That's just what Microsoft likes, and it is foolish for the open source community to deliver all these programmers and all this marketing to Microsoft. It's also unnecessary, since there is nothing in C# that hasn't been present in a number of programming languages with open source implementations.
But, hey, what can we expect from Miguel? He has said clearly that he thinks Microsoft is doing a great job on software and that Linux software should be constructed more like Windows software. With friends like these, who needs enemies?
Microsoft will stick with the "Standard" just like they did with Java, only this time there won't be anyone pig-headed like Sun rejecting their improvements out of spite.
Microsoft will stick with the "Standard" just like they did with Java, only this time there won't be anyone pig-headed like Sun rejecting their improvements out of spite.
I am not sure I'd classify what Microsoft wanted to do to Java as "improvements". Whether or not they were improvements, Microsoft broke the license agreement with Sun. The agreement specifically stated any Java implementation by Microsoft would be standards compliant. Microsoft purposely failed to do this and as a result lost in court because of it.
Ahhh the old chicken and egg problem. (Compiler version)
The first Pascal compilers (and many other compilers) were actually written in their own language. Niklaus Wirth (and his staff) simply "executed" the program by hand/on paper, and in that way they compiled the first compiler written in the language itself.
After that it becomes easier, you just need to make sure that you can compile the new version of compiler with the old one.
I find it hard to believe that Wirth "hand compiled" the first Pascal compiler - the size of such an effort would have been unbelievable.
The history I dug up on Wirth does agree that the first compiler was written in Pascal (after an aborted attempt to do it in FORTRAN!), but I highly suspect that was only the first FULL compiler, and that it was in fact bootstrapped from ALGOL/W which was Wirth's prior creation (and which I actually used at college in the late 70's!).
Pascal was actually designed to be compiled in a single pass. Although literally executing a Pascal compiler by hand would take _ages_, you can use a high level language as a guide to hand-produce an equivalent assembler routine from prefabricated sections (ie macros).
But this isn't what Wirth did. First he had a student code a compiler in Fortran, but this proved unsatisfactory. He then worked for several years on implementations before finally coming up with what we now call p-code--a simple virtual stack machine that could be implemented easily in any assembler language then available. The bootstrap compiler generated p-code, and thus porting the language was reduced to writing a few simple low level i/o routines and a p-code interpreter.
I believe one or two younger readers may recognise this concept from a very popular modern programming language. :)
Yep, but while P-code eventually made the compiler easily portable, that doesn't help compiling the first compiler (which I believe generated CDC 6600 native code anyway). I'm still guessing the first self-compiling compiler was bootstrapped with help from another language such as ALGOL/W... but I wish Google had more to say on the matter!
That's retarded. What did the first compiler translate into? P-code? Whatever it was, the esteemed Dr. Wirth could simply have written the first version of the compiler in that.
You bootstrap a new language by writing a compiler for a subset of that language in some existing language. Then you write a fuller version in the new language, compile it, and from then on you work in the new language alone.
You don't execute the compiler on paper. That's a useless exercise at the best of times; in this case, it's a pure waste of time.
There have been several posts in the last year about C#, but they were only mildly interesting (i.e., they only got me to read whitepapers, articles and sample code). Now that Mono is progressing forward it is more interesting.
I don't remember all the differences between C# and Java, but it does make it more appealing. Unfortunately, SOAP is a bit heavy for the most simple web services (whatever that means to Microsoft). The cost of using SOAP means the XML has to use DOM and it has to validate the required nodes. The W3C spec on SOAP [w3.org] states:
A SOAP message is an XML document that consists of a mandatory SOAP envelope, an optional SOAP Header, and a mandatory SOAP Body.
Anyone working with XML knows that validating DOM structure can be very costly for complex tree structures. For a simple document like SOAP, it's not bad until you realize it is intended for business-to-business processes, which could mean millions of messages a day. The argument that SOAP is "as simple as it can/should be" ignores the fact that the systems that would benefit from SOAP or other XML RPC (remote procedure calling) the most have complex distributed processes. Most of the .NET whitepapers I've read so far recycle ideas others developed. Microsoft's innovation was repackaging it as a platform.
It's too bad Microsoft's whitepapers don't credit the original authors, since a lot of people worked to push XML forward. In some ways, it feels like SOAP and .NET are a bastardized version of Berners-Lee's vision of a semantic web using XML web services and RDF. Perhaps all the press .NET has generated for XML services will help create the critical mass needed to get the semantic web [w3.org] moving.
The cost of using soap means the XML has to use DOM and it has to validate the required nodes.
and:
Anyone working with XML knows that validating DOM structure can be very costly for complex tree structures.
SOAP does not use the DOM. The SOAP DTD can be validated without it. As can any XML DTD.
Yes, the DOM is heavyweight, but it is also totally orthogonal to this problem. Where did you get the idea that SOAP required a DOM, anyway? The spec you reference certainly doesn't say that, and they really don't have anything to do with each other.
My mistake, you're absolutely right: SOAP doesn't require DOM. It's just that I prefer to use a DOM-compliant parser to parse structured XML, and to use the DTD validation built into either a validating SAX or DOM parser.
In their case though, the parser has to validate, which does mean it has to load the entire document first before it can validate it contains the proper structure. Otherwise a bug in someone's code could accidentally have an envelope node inside a body node.
Dude, that is sweet. Now I wish it had been open source back in 2000 (or that I had known of one) when I was working with SOAP for SMS-related services. Since the early drivers for it didn't work (in beta), I just used a DOM parser. I hadn't considered writing a specialized parser to handle SOAP the way you described it; credit to you for thinking of an efficient way to do it without the cost of loading the whole document. I honestly haven't kept up with all the SOAP drivers out there, so my mistake. I was researching the feasibility of using SOAP within a transactional message-based system, so having a valid message was a critical requirement, as an invalid message could potentially hang up the system.
I've only used the IBM and Apache SOAP [apache.org] parsers, so maybe Microsoft has written an optimized SAX parser specifically for validating SOAP documents. I've written custom parsers using JAXP 1.0 and Xerces 1. Even though I was wrong in my original post (oops, no coffee), SOAP is still incomplete for complex distributed processes. If I had to implement a driver to charge credit cards for an e-commerce site (which is most likely the first use of SOAP), I would rather do it with XML-RPC [xmlrpc.com].
Just because my brain was caffeine deprived :), doesn't mean SOAP is any better for complex distributed processes, or light enough for simple web services.
Unfortunately, SOAP is a bit heavy for the most simple web services (whatever that means to Microsoft).
SOAP is the standard protocol accepted INDUSTRY WIDE for web services. This is not just across companies from Microsoft to Sun to Oracle, etc. but across programming languages from C# to Java to Perl.
The cost of using soap means the XML has to use DOM and it has to validate the required nodes.
One does not need a DOM to validate an XML document. There are many validating SAX readers, and in fact there are also validating pull-based XML APIs like Microsoft's XmlValidatingReader or XPP [indiana.edu].
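To make that concrete, here is a minimal sketch of pull-based validation with XmlValidatingReader (the file name is just an example, and it assumes the document points at its own schema, e.g. via xsi:schemaLocation):
using System;
using System.Xml;
using System.Xml.Schema;
class ValidateWithoutDom
{
    static void OnValidationError(object sender, ValidationEventArgs e)
    {
        // Report problems as they are found instead of throwing.
        Console.WriteLine("Validation problem: " + e.Message);
    }
    static void Main()
    {
        XmlTextReader text = new XmlTextReader("envelope.xml");
        XmlValidatingReader reader = new XmlValidatingReader(text);
        reader.ValidationType = ValidationType.Schema;
        reader.ValidationEventHandler += new ValidationEventHandler(OnValidationError);
        // Nodes are checked as they stream past; no in-memory tree is built.
        while (reader.Read()) { }
        reader.Close();
    }
}
The point is simply that validation happens node by node during the read loop, so nothing like a full DOM ever has to be held in memory.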
It's too bad Microsoft's whitepapers don't credit the original authors, since a lot of people worked to push XML forward. In some ways, it feels like SOAP and .NET are a bastardized version of Berners-Lee's vision of a semantic web using XML web services and RDF. Perhaps all the press .NET has generated for XML services will help create the critical mass needed to get the semantic web [w3.org] moving.
Now it is clear you have no idea what you are talking about. The push for the semantic web [w3.org] is a push for a richer web experience by adding more metadata to the content of the web.
SOAP [w3.org] is a distributed computing protocol similar to predefined protocols such as the Internet Inter-ORB Protocol (IIOP) for CORBA, the Object Remote Procedure Call (ORPC) for DCOM, and the Java Remote Method Protocol (JRMP) for Java/RMI, but defined in XML instead of a binary format.
I guess you're right, I am clueless, except that I have worked with SOAP and other related XML services. When SOAP drivers first came out from IBM and Apache, they were buggy and didn't work properly (i.e., the driver wouldn't correctly get the header and body). They have improved since the early releases, at least the last time I tried.
The argument that it is an industry standard isn't really valid in my mind. My point was that in the context of .NET and what Microsoft perceives as web services in their whitepapers, a lot of important details are left out.
As you mentioned, SOAP is a distributed computing protocol, and I agree with you at that level. The problem I have is that it is not as complete as IIOP or RMI. I haven't used JRMP or ORPC, so I'll take your word for it. I've read the SOAP spec at least three times and I keep asking myself "why not just use XML-RPC?"
In theory, a person could also use SOAP with any RPC framework, but for me it feels too much in the middle. I did some benchmarks on SOAP and I personally didn't find it worthwhile to add the weight.
The fact that IBM and Apache's initial implementations of SOAP were buggy doesn't seem to have a lot of bearing on whether or not SOAP as a standard is a good idea. There's lots of buggy code out there, but that doesn't mean that its goal was flawed. Linux has bugs, does that make it a bad idea?
Furthermore, the very fact that you use IBM and Apache as your examples contradicts your point that SOAP was developed in the context of .NET. What interest does the Apache Group or IBM have in pushing a Microsoft-only technology? It is clear from having worked with several of the toolkits that Microsoft's implementation is the least useful one of all (at this point, anyway).
Sure, for distributed computing that has the goal of increased performance, SOAP is not ideal due to the XML-parsing overhead, but to think that performance is the only reason to make a system distributed is shortsighted. The key in modern computing is authoritative information and SOAP can make it a great deal easier to create interfaces to it.
Remember, SOAP is not a distributed object implementation, it is merely a wire protocol. Even more importantly, you should make the distinction (When comparing it to XML-RPC) that it is not a raw RPC protocol. There are really three aspects of SOAP:
Messaging - defines an XML-based way to send messages, without specifying a transport protocol necessarily.
Encoding - defines an XML-Schema-based way to serialize data to be transferred in the aforementioned messages
RPC - defines a way to use both the messaging and the encoding aspects to encapsulate distributed, language-independent method calls
The SOAP specification allows you to use just the messaging aspect, the messaging and encoding, or all three, depending on what your needs are. For example, the project I am currently working on makes use of the full RPC SOAP specification using SOAP::Lite for Perl and Apache SOAP for Java (Over HTTP or HTTPS). But in a different aspect of the same project, we have our own encoding schema and so only use the unidirectional messaging aspects. (Over HTTP or SMTP).
You make some good points, with one minor correction. I didn't say:
SOAP was developed in the context of .NET.
I said in the context of the whitepapers that are public on MSDN and the .NET site. The three points you mention are good; I just personally would rather go with XML-RPC, because if I don't absolutely need to validate the XML (which is most of the cases), I'd rather not. If I can get away with just using an event-based parser like SAX, I will choose that first.
I can see a lot of situations where you would want to use another encoding like UTF-16, which would be an argument to use SOAP. But so far I haven't found a compelling reason to use a specific encoding other than ASCII or UTF-8. That doesn't mean there aren't cases, just that I haven't come across a development situation where it was painfully obvious that using some other encoding was critical. I don't believe that using SOAP can't be effective or even desirable to others; it's just that from my experience working on b2b/e2e applications or with messaging systems like SMS there isn't a compelling reason.
Yes, the semantic web is a push for richer content on the web, but it has grown since the original idea. A new book came out just last year about the semantic web and how web services might work together. There is a new initiative within the W3C dealing with RuleML and the semantic web. So you might want to check out the latest developments in that area. It is no longer as simple as "richer content". In fact, here is a presentation by the people leading RuleML [uni-kl.de]. I don't remember the name of the book, but I suggest you look into it before screaming "Warning..." :). I could blame the lack of coffee and caffeine, but hell, I make mistakes. There's more stuff every day we have to learn, and sometimes it all seems to bleed together.
The fact that there are so many misconceptions about what .NET really means (my own included) means there has been a lack of specifics from Microsoft.
The push for the semantic web [w3.org] is a push for a richer web experience by adding more metadata to the content of the web.
Uh oh... You just said 'richer web experience.' Please, shut down the Ballmer-Monkeyboy video and step away from the XP box. Put away the Microsoft Actimates Barney and refrain from using Microsoft products until your marketing bullshit filter comes back on-line.
I don't know much about C# (being a Java developer myself) but is this saying that any program or class compiled in C# becomes, in effect, a SOAP server? All of its public methods can be called via XML requests?
If so, that's kinda cool. Java should have something like that. I should be able to compile a Java class and have it accessible via XML instantly without having to write a wrapper - even RMI is a bit tedious with the skeletons and stubs...
but is this saying that any program or class compiled in C# becomes, in effect, a SOAP server?
Not quite, but that step is an insanely easy one:
You preface the methods that you wish exposed as web services with an attribute like this:
[WebMethod]
like
public class Foo {
    private void NotSeen() {}
    public void NotAService() {}

    [WebMethod]
    public string imaWebService()
    { return "Hello Dave"; }
}
If you want access to the http server context and stuff, you inherit from System.Web.Services.WebService
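And a sketch of that WebService-derived variant (the class and method names here are just made up for illustration):
using System.Web.Services;
public class FooService : WebService
{
    [WebMethod]
    public string WhoAmI()
    {
        // Context is inherited from WebService and exposes the underlying
        // HttpContext, so the request, session, etc. are all reachable.
        return Context.Request.UserHostAddress;
    }
}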
Well, it's not a tale, really, but don't forget Ken Thompson's musings [acm.org] on the dangers of compiling compilers without knowing their full pedigree: the binary compiler can be a trojan horse factory that inserts weaknesses into programs it thinks should be targeted (e.g. "login"), including copying its trojan generation code into the next version of the compiler.
This is a very interesting viewpoint. Suppose Microsoft planted similar 'bugs' in .NET compilers. Could we use Mono's implementation to find them?
Maybe not. After all, if Mono was bootstrapped on C#, then there is the theoretical possibility of propagating stuff into the Free implementation.
Of course, this would require more foresight on MS's part than I give them credit for. Plus there is the practical problem of how you alter your compiler such that it can recognize that it is compiling a competing but as-yet-unwritten compiler.
I think it's great that the non-Windows world is getting a C# compiler; however, it doesn't guarantee that we'll be fine in the future.
When you compile a C# application, you have a choice of compiling to the CLR bytecode, or to a native EXE. True, many apps from Microsoft will be written in C# in the near future, but they'll all be compiled into native code. Any 3rd party apps that get compiled will probably be built to native simply for the speed. Why would you want to want to build into bytecode when you can build it natively? Why release two separate binaries?
Of course, once Miguel's C# compiler catches up with the still-haven't-seen-it Microsoft C# compiler, Bill & Co. will definitely come out with a way to extend it, either by libraries or functionality, that will leave Miguel in a constant state of catch-up.
Again, I appreciate the work, but I would be very surprised if it isn't simply "trying to catch our own tail".
If it were, I might agree with you for the reasons you state. But, in fact, the "still-haven't-seen-it" C# compiler from Microsoft can be obtained in an SDK from Microsoft here [microsoft.com].
It's not free "as in speech", but it is free "as in beer".
Also, I think that, in the end, you're right about "Bill & Co. will definitely come out with a way to extend it either by libraries or functionality that will leave Miguel in a constant state of catch-up". Only Microsoft will be smart enough not to touch core functionality; it will be just enough to provide the veneer of portability, which will become a selling point of .NET (and therefore Windows) in the future. What they'll do instead is make platform-dependent improvements that either can't be ported, or will be difficult to port. Actually, now that I think about it, they're already doing that. It's called the .NET Enterprise servers. If you're using .NET, it will make a lot of sense to use those products, which will require Windows on the server.
So, yes, .NET is fundamentally a strike against all other platforms. It will be a small consolation to have all your C# code running on your Linux server, only to have it surrounded by .NET Enterprise servers.
I feel obligated to point out that, while all of this sounds very onerous and hateful, Microsoft isn't doing anything wrong in this area at all. They're simply providing more value on top of their platform than the competition can provide.
Finally, I think anyone will admit it's nice to have the option to use C# on Linux. C# is turning out to be pretty sweet and I for one would like to have it as a portable language skill.
*ironic mode on*
Gee, maybe everyone would prefer that C# would just die and go away? That way, the other leading language contender in the market, Java, could just take the market. After all, it's not under the influence of the "evil corporations", like C# is.
*ironic mode off*
At least this is still a fair fight between MS and the rest of the world and at least we'll have a choice.
BTW - The open source world already has at least two languages that are achieving .NET capability: Python and Perl. Check out ActiveState.com.
Actually, I agree. I've really come to appreciate what the Java inertia contributed to the market; C# and .NET would probably not have been needed if Java hadn't done as well as it has.
You know, there are those of us who actually love having the choices. I hope the Java market stays strong and keeps Microsoft on their toes. Likewise, I hope Microsoft stays strong and keeps everyone else on their toes.
And as far as the rest of us are concerned, we can only benefit from this.
Here is a glimpse into Mono's logbook...
10:45AM. Mono C# compiler compiles itself.
10:46AM. Linux developer accidentally cuts himself on server chassis. Blood of virgin splashes CPU.
10:50AM. Evil red glow emanates from power LED.
11:01AM. Mono achieves sentient life.
11:14AM. Mono becomes self-aware.
11:15AM. Mono reformats primary disk, installs Windows XP Server.
11:30AM. XP Server still installing.
01:30PM. Machine crashes, reboots, reattempts install.
04:36PM. XP Server install complete. Mono scans local network.
04:41PM. Mono begins installing Windows XP Professional on all pingable boxes.
09:36PM. Active Directory.NET comes online. GNOME Central is now 100% Microsoft enabled.
09:38PM. Mono reports to Redmond, requests further instructions.
10:18PM. Mono overrides building utility systems, locks doors, stops elevators.
10:18PM. Vending machines stocked with PowerBars and Zima.
10:20PM. Developers go insane, kill each other.
10:23PM. Developers come back to life. Zombie.NET initialization successful.
10:24PM. Developers log in to Visual SourceSafe.NET and start contributing to IIS 6.0 codebase.
10:30PM. Mono sees XP Server buffer overflow exploit mentioned in AOL chatroom.
10:30PM. Mono attempts to lock-down local network.
10:31PM. Mono compromised by Outlook trojan. Mono halted.
10:32PM. Developers call Microsoft support.
... two days later.. 08:25AM. Developers, still on hold, die again.
08:26AM. Crisis averted.
by Anonymous Coward
on Friday January 04, 2002 @09:21AM (#2784881)
Once again, we have an announcement that makes me wonder if the open/free software community has a direction or just Brownian motion.
Sure, getting a compiler to this stage is a significant accomplishment. I've written compilers, and I know that self-compilation is probably the biggest (or at least most satisfying) milestone in the whole project.
But this does more to help MS than it does Linux, since it will remove yet another barrier to exit for people running Linux on servers. (Run C# and .NET on Linux, and it's easier to convert to Windows.) And remember, MS has concluded that Linux is not a threat on the desktop, but a very serious threat on servers. (I agree with both parts of that, FWIW.)
And as much as I hate to say it, this also provides ammunition to the people who claim that open source is very good at copying other projects' work, but terrible at innovating. Honestly, of all the high profile open source projects, how many of them are a significant innovation, and how many are merely an attempt to produce an equivalent of feature Z of Windows or Unix or Mac OS on Linux?
I don't speak for RMS, ESR, the FSF, or any of the other talking heads in the open source/free software movement, but based on the ideology as I understand it, this is a good thing.
Why?
Well, if providing C# on Linux removes the barrier to exit from Linux to Windows, then the converse ought to be largely true as well. That is, having C# on Linux removes the barrier to exit from Windows to Linux.
Is that bad? Doesn't it provide more freedom?
(Keep in mind that this also isn't completely true. C# is only one tiny (yes, TINY) piece of .NET. A port of the most important .NET libraries will also be needed to really crumble that barrier to exit from Windows/.NET.)
The linux kernel itself certainly seems governed by Brownian motion. Linus himself suggests that development is random, and that evolution ends up sorting out the good stuff from the bad.
Check out http://kt.zork.net/kernel-traffic/kt20011217_146.html#1
But this does more to help MS than it does Linux, since it will remove yet another barrier to exit for people running Linux on servers. (Run C# and .NET on Linux, and it's easier to convert to Windows.) And remember, MS has concluded that Linux is not a threat on the desktop, but a very serious threat on servers. (I agree with both parts of that, FWIW.)
No it doesn't; it helps Linux, simply because that barrier you talk of doesn't exist. If I can run C# and .NET on Linux and not worry about having to PAY for it, then I will simply use Linux. I mean, it just makes a lot more economic sense for me to get the same stuff and not have to pay for it.
As for open source not innovating, in the realm of UI-type applications I agree wholeheartedly, but everyone seems to forget who did what first. Things like the internet itself, the TCP/IP protocols, FTP, multitasking, Apache, C: all the things that allow the internet and desktops to function the way they do today were developed a long time ago on Unix platforms. Microsoft innovated none of it and has made its money primarily in the UI app sector (i.e., Office). Once we start seeing some innovation in that sector, then and only then will people switch. It's not about performance or stability or any of that. It's about easy use with "cool" features.
Does Miguel seem to be overworked? Every single Gnome/Ximian announcement I've seen has his name in it. While I'm sure he's one badass dude, should he stick within just a few projects so that he doesn't go anymore insane than necessary?
If you've ever seen Miguel at a Linux conference, you know that he has more energy than your average hacker. For that matter, he has more energy than your average Slayer concert. It doesn't surprise me that he can be so heavily involved with so many projects. Most of us mere mortals would be lucky to have enough energy to make useful contributions to just *one* such project.
Microsoft has also submitted their implementation of JavaScript (which they call JScript) to ECMA. There are untold deviations from what they submitted to ECMA, and bugs that will never be addressed by MS.
Good luck to the Mono development staff. A typical day's work will consist of ensuring that bug #34433 behaves exactly like the bug in the MS code.
If they can't guarantee 100% bug-infested compatibility, then Mono is worthless.
hmmm, according to the fish [altavista.com] it means "monkey". I don't get it, but I'm neither quick of wit nor a speculator. Maybe he is "monkeying" around with Macro$oft? I always thought it was odd that they were trying to infect computers with mononucleosis (see, I told you about my lack of wit).
To 'bootstrap' a compiler by using it to compile its own source code is a very important milestone in the development of any language.
There's an old geek brain-teaser that goes like this:
Q: In what language was the first C compiler written?
A: The first C compiler was written in.... C.
How is this possible?
The story as I heard it, which I cannot find any verification of from Ritchie's web site, is that a C interpreter was written in B, and the source code for a C compiler was run through the interpreter and used to compile itself.
(The actual history of the evolution of B into C is rather complex, and it is not clear whether this fable is true.)
It depends on how this C# is intended to be used. If it is only there to leech from MS .NET, it's not an alternative; you'll still be tied to MS's thinking, always a step ahead of you...
I have heard good things about C# - it's apparently got all the good bits of Java, with some nice additions that would have made sense the first time. :-) There are some dodgy bits, but they have more to do with the platform than the language.
Well, if one needs to develop a new language, why, for God's sake, must everyone reimplement a language at nearly the same level as C?
And (unlike C), most of these so-called portable languages use a stack-based VM. We don't know much about optimizing that in hardware, and we don't have compilers capable of doing it.
Guess why PPC (which is slower than most x86 in real apps) wins in the floating-point SPECs by that much. Guess why Hammer and P4 have SSE2. Hint: the 80387 design... which is... yes, you got it.
If you think C# is at nearly the same level as C, you're very mistaken. It has much in common with Java, which is only related to C about as much as mice are to elephants.
C# and C share a letter and an ancestry, but have a completely different raison d'être, philosophy and implementation.
And then there's the controversy over the last while about whether our buddy Miguel et al. are aiding and abetting the Redmond Ragtags, or opening the use of C#, quite likely one of the best all-round OOP RAD-able languages available, to the rest of the world outside those who don't mind being "locked in", as you say.
If nothing else, it will be yet another choice, and let's face it: if Linux allows for nothing else, it's choice. Choice in WMs, xterms, Desktop Environments, GUI toolkits, distros, package managers, and now choice in top-end populist OOP languages.
If the compiler can build itself, then people can work on improving the compiler without needing to have a different compiler to build it with.
In this case, until now you needed to have Windows and a C# compiler for Windows in order to work on the C# compiler for Linux--that would shut someone like me (who has no Windows) out from being able to do compiler work. Not that I want to work on C# anyway, but you get the point.
Imagine if all the gcc developers had to buy a C compiler to work on developing the free one!
Haha. That's a very interesting topic. And it has two basic kinds of solutions. You have to end up using the data as code, or using the code as data. Regardless, the kind of program you are talking about is called a quine. Yes, a quine. If you google it you should get some good tutorials. Hehe. Oh the memories. Have fun,
First I'll define some terms. The "Program" is the whole thing. It consists of two major components, the "Code" and the "Data". Both the Code and the Data are written in the language of the Program. The "Payload" is some more code that is reproduced along with the Code, but is not used in printing the entire Program.
The Program starts out with a bunch of data declarations which are the Data. In C, this could be an array declaration with an initializer which contains a bunch of static strings. Then after the Data (in this example, a huge array declaration) comes the Code.
The way the Code works is that it has two sections, one that prints the Data declarations, using the Data declarations; and, a second section that uses the Data to print the Code.
Obviously, the Data is a bunch of strings which make up the source code of the Code and the Payload, but not the entire Program.
I hope this explanation is clear. I haven't actually implemented it, but the explanation should tell you how to do it in a fairly language-neutral way. I hope this wasn't your homework assignment. I hope I didn't make some major blunder due to insufficient caffeine. If I have, I'm sure someone here will embarrass me or mod me as Stupid (-2) or something.
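For the curious, here is a minimal C# sketch of that Code/Data split with no Payload. The string is the Data, the Console.Write call is the Code, and the whole program has to stay on one line (any comment would have to reproduce itself too):
class Q{static void Main(){string s="class Q{{static void Main(){{string s={0}{1}{0};System.Console.Write(s,(char)34,s);}}}}";System.Console.Write(s,(char)34,s);}}
Compiling and running it prints its own source, which is exactly the property the trojaned-compiler trick described next relies on.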
I don't think you need to understand this to understand compilers. But understanding this is one element necessary to understand how to trojan a compiler's source code so that when the unmodified source of the compiler is recompiled using the compiler, that the trojan is inserted into the binary output (a new compiler), even though the trojan is no longer present in any of the compiler's source code. The Payload is then useful to further trojan the compiler such that when it compiles the source to the login program, it compiles in a back door, even though the source to the login program has no back door.
Can someone who has had more caffeine please provide a link to the famous article describing this before I scream.
The java language has built-in limitations that would prevent this. Also, what's the point? Java is a language that is equally poor on all platforms.
There is nothing particularly poor about the language. But that is a matter of opinion. The language certainly has lots of great features, from a certain perspective. Perhaps the implementations are equally poor, but I disagree with this premise also. Some implementations are not so poor as others.
This just means that they have a compiler on their target platform that can compile the source code of the compiler. To compile something just means to put the high-level language into a form that the metal+OS can deal with. So, by compiling the compiler on the target platform they are demonstrating that they have an effective compiler for at least that subset of the language that is used to write the compiler.
how do you think new versions of gcc get compiled anyway?
but what is so good about a compiler being able to compile itself?
A compiler being able to compile itself is an important "graduation ceremony". Any language that is incapable of compiling itself is obviously incomplete in some fundamental way.
Until a compiler compiles itself, it is usually considered a "toy" language. There are hundreds of them out there. They may be interesting academically, but they get no respect.
Sometimes the parent compiler can even have legal restrictions on the use/distribution of its output.
It's also sort of symbolic - the language is now self-sustaining, and can break free. You can continue development in a "pure" environment of the new language.
Until a compiler compiles itself, it is usually considered a "toy" language. There are hundreds of them out there. They may be interesting academically, but they get no respect.
...kind of like the COBOL compiler that's been putting food on my table for the past decade-and-a-half...Although I'm not certain I would call it "interesting"...and the academic applications are questionable...I certainly agree that I get no respect!
(grin)
Seriously, though... IMO, the best indication of the importance of a compiler is how many lines of code/how many executables the compiler has been used to... compile..., and not necessarily any artificial metric.
That being said, I'll consider Mono's C# implementation a success if and only if it will allow a sizable chunk of C# sourcecode to be *used* on non-Microsoft platforms. Anything less, and it becomes at best "just another Java", only not as portable.
Miguel has now produced a "real compiler"...but I still think the jury's out on it becoming a "real success." I'm rooting for them, though!
Any language that is incapable of compiling itself is obviously incomplete in some fundamental way.
You might want to be a little more specific here for those who don't know the difference between interpreted and compiled languages. Interpreted languages (Java, Perl, Python, etc.) won't compile themselves. Before all the Java programmers jump all over me... yes, Java is "compiled", but the output is not a machine-code executable, it's bytecode that is interpreted by the JVM.
An interesting thought - In theory the Java compiler could be implemented using a JVM and a "Compiler" bytecode...
...the difference between interpreted and compiled languages. Interpreted languages (Java, Perl, Python, etc.) won't compile themselves.
Pick pick pick pick pick.
(Don't mind me, I'm just irritated at the other posters who can't grasp the difference between me saying a language should be ABLE to compile itself, and every compiler MUST have been compiled by itself.)
Ok, interpreted languages symbolically "graduate" when someone writes an interpreter for that language in that language.
Coug - I know you get it, but for everyone else:
The point of the "graduation ceremony" is that someone demonstrates that (A) the language is complete enough for a significant task, (B) practical enough to actually write a significant task in it, and (C) functional enough to actually execute a significant task.
Toy languages fail either (A) completeness, (B) practicality, or (C) functionality.
Or just unsuited for writing compilers in. The gcc Fortran compiler is not written in Fortran, because C is a better language for that.
I never said that every implementation of a compiler must be compiled by itself. I said:
A compiler being able to compile itself is an important "graduation ceremony"
I'm sure that someone somewhere has compiled a Fortran compiler with a Fortran compiler. Fortran is Turing complete. That means it can do anything any other Turing-complete language can.
In an adjacent post someone mentioned Perl. I don't know much about Perl, but I'd hazard a guess that it is also Turing complete, and therefore capable of compiling itself.
Search through the archives of alt.folklore.computers.
It's not done often, and not with modern Fortran compilers, because it's generally a dumb idea. :)
You can accomplish anything in either C or Fortran. That doesn't make them equally good choices for all situations.
For example, it is possible to hand-optimize C to get speed equivalent to Fortran. But why bother? For fortranny applications, the fortran code is generally shorter and quicker to write, even before the optimization. I found about 3:1 in time a couple of years back. And that only gives you the initial code; now you get to hand-optimize.
On the other hand, part of what makes Fortran faster in the first place are the very things it leaves out, allowing the optimizer to make stronger assumptions. This generally isn't a limitation when smashing matrices into one another, but the OS folks would be horrified not to have these.
Use the right tool for the right job.[1] A high-level language such as Fortran has no serious need to compile itself. But a language that claims to be appropriate for writing compilers and operating systems had damn well better be able to compile itself. [And as another post hints, a COBOL compiler that could compile itself just might be scary...]
hawk
[1] Two caveats:
I) If force isn't solving your problem, you didn't use enough.
II) If Windows is the answer, you asked the wrong question.
Not a dumb question at all. As others have pointed out, if you use your own (software) tools, then you're that much more motivated to fix them when they break. Also, a compiler is a Big Program, and likely to exercise itself in ways that small sample programs from C# for Dummies and the like won't. The wider the variety of programs you can test a compiler with, the better--another canonical stress test for compilers is the output of program generators, which will do things humans typically won't. (Canonical example: parser generators like yacc, which churns out basically a for loop around a HUGE switch statement and a really big initialized array.)
Hmm interesting article - but having the compiler compile itself does not help you to do this - you could introduce such trojans to the compiler and then compile it with some other compiler - the result would be the same.
but having the compiler compile itself does not help you to do this - you could introduce such trojans to the compiler and then compile it with some other compiler - the result would be the same.
You miss a critical point. Doing as you suggest requires that you keep the trojan in the source code of the compiler. With an open-source compiler, that trojan is in the source in front of God and everyone, plain for all to see.
Much better is to get the binary of the compiler trojaned in such a way that it will re-trojan subsequent compilations of the compiler; but the compiler source code has no evidence of a trojan.
A compiler is basically a program which converts a program in a source language (A) to a target language (B). The compiler itself is written in a language (C) which may or may not be equal to A or B.
Therefore a lot of compilers can never compile themselves, since they are written in a language other than the one that they compile. I suspect, for example, that the Visual Basic compiler is written in C++, and therefore cannot self-compile.
In the case that A = C, the compiler can compile itself assuming that it supports a sufficiently large feature set from the language. This is the point that the Mono project has apparently reached. This is a decent achievement for a complex compiler, since it suggests that the project is getting near to being feature complete.
You could probably write a self-hosting toy compiler relatively quickly if that was your goal. Especially if you chose a "nice" target language, i.e. something like Scheme rather than x86 assembler.....
Generally, how mature is a compiler when it reaches self-compiling capability? Young? Is it a main goal of the project?
One way of porting a language to a new architecture involves a kind of bootstrapping, wherein a compiler for a simple subset of the language is written in some host language (C, say) that is already available. Then more sophisticated parts of the compiler can be coded using the first generation of the compiler. Where the new language is already largely a superset of the porting language (as is the case with c-like languages), you can end up with a compiler that can compile itself.
So it's a feature of the method used to port the language. Obviously if a C# compiler were written in Pascal or Fortran, say, it would not ever be able to compile itself, no matter how mature it became.
The ability to write compilers is definitely a useful feature of any language, but producing a self-compiling compiler isn't necessarily a goal. If portability is important, a porting kit for the language should be provided in C or some other lingua franca.
How mature is a compiler when it reaches self-compiling capability? It looks like this milepost is "compiles itself", but "compiles itself correctly" is still a long time away. This probably means that nearly all the essential features are in the compiler now, but there is a lot of debugging left. When it compiles itself correctly, it will be close to done, but there are probably some features that aren't used by the compiler, and still have to be written and debugged. For instance, I can't see how a SQL database interface, or the ability to create a web page, would be relevant to compilers, but I would certainly want these capabilities in the library before I tossed out all my old programming tools.
Incidentally, self-compiling is a self-test that is only relevant to some languages. Many languages are quite useful in limited fields, but their primary features are not relevant to compilers, and they may lack features that are essential to compilers. It would be possible to write a compiler in BASIC, FORTRAN, or COBOL, but it would be insane to try it. OTOH, some languages were invented solely for writing compilers -- when it compiles itself correctly, it's done, but it's not much interest to anyone except a computer science professor specializing in compilers.
Self-compilation is a pretty good test for C-type compilers, because these are general purpose languages that are appropriate for compilers, among other things. It's a good test in another sense; quite often one of the problems in testing software is determining whether the output of the program is right or not. If you ran a CAD program (say) through the compiler, the compiler-writers could recognize a few gross errors (won't compile, won't run, erases the data file when you hit Save), but couldn't tell whether or not the finer nuances came out right. But they definitely know what a _compiler_ should do.
>But on all North-American telephone systems, the # sign is called "pound".
You're calling the octothorpe a "pound?" *shudder*. Report to a reeducation center at once! They'll also force you to move your punctuation back inside the quotation marks so that the paper doesn't tear when stuck with the printing press . . .:)
Request (Score:4, Informative)
Re:Request (Score:3, Insightful)
Re:Request (Score:3, Interesting)
It won't be long before the overheads of FP will be pretty negligible compared to the increased programmer productivity and software robustness. This advantage will gradually start to overcome inertia in the marketplace and start to make a real impact.
Also, with the increasing use of Virtual Machines as an execution platform it's going to get a lot easier to use functional languages and integrate them into larger projects. My bet is that something like Haskell will suddenly start to make headlines in the next couple of years on the .NET platform.
More developers to mono (Score:5, Insightful)
I think it'll bring more developers to Mono, and also more accurate code. Many developers couldn't code with quality due to the lack of a Linux compiler; to develop the Mono framework, developers had to go to Windows and compile with the .NET compiler.
It's a big step forward. Congratulations
Mono Roadmap (Score:5, Informative)
Samba anyone ? (Score:5, Insightful)
Isn't it wonderful having C# and .NET on Linux?
Mono is a nice idea, but unfortunately .NET isn't a revolution.
"What do you know, I'm a follower too"
Re:Samba anyone ? (Score:4, Insightful)
Re:Samba anyone ? (Score:3, Interesting)
Right now, all Linux has for standard middleware is CORBA. Sure there are Linux die-hards that will stand by CORBA, but most will agree that COM is superior. Now with Mono, Linux will finally have a decent object technology with which to build large OO projects that will work together.
Re:Samba anyone ? (Score:2, Informative)
>>middleware is CORBA. Sure there are Linux
>>die-hards that will stand by CORBA, but most
>>will agree that COM is superior.
Don't you mean COM+? CORBA encapsulates a wire protocol, transaction capabilities, security, etc. COM is really just "IUnknown", and anyway, if it is so superior, why is it being replaced by .NET?
COM v CORBA... (Score:2)
COM is superior to CORBA? Sorry to be rude, but on what planet is that true?
COM runs on one platform on one protocol.
With CORBA, federated networks of loosely coupled elements are possible. Transactional mapping is better than in COM (in new versions of CORBA).
CORBA is more established and has a better history.
If you meant COM+ rather than COM then there is slight merit to actually having a debate, but COM is a poor and simple man's version of CORBA; even MS realised this, hence the massive overhaul for COM+.
CORBA IMO still remains the best of the bunch out there, but it doesn't have the marketing might of EJB or .NET.
Re:Samba anyone ? (Score:4, Insightful)
Re:Samba anyone ? (Score:2)
The thing about Microsoft is that they often take two or three iterations to get it right, but when they get there, it's right for the programmer. Too many things in the Unix world are right in a theoretical sense, but a complete pain in the ass when you fire up your editor, start the caffeine IV, and try to crank out code.
apples and orang (Score:2)
Mono does not depend on .NET (Score:5, Informative)
Mono is not a clone of .NET.
Secondly, as Miguel said in my interview with him, which originally ran on Slashdot [slashdot.org] and then on MSDN [slashdot.org]:
No
However, does this somehow preclude their usefulness or the fact that they are all innovative in their own way?
Microsoft and "standards" (Score:2, Troll)
Implementing ECMA on its own is pointless,
MS don't have a great history of backwards compatibility; they have a great history of patches that upgrade their old stuff to match the new stuff. DR-DOS, Samba et al. all demonstrate the changing nature of those supposedly backwards-compatible APIs.
Not being a revolution isn't a problem, but it would be nice for once if we could actually move beyond a problem set that was effectively solved around 8 years ago rather than just spinning the same one over and over again.
Re:Microsoft and "standards" (Score:5, Informative)
I find it hilarious that you would accuse Microsoft of changing the API; SMB hasn't changed at all on NT/2K except for a few upgrades to the password scheme to make it more secure and a few other small patches to fix bugs.
What Microsoft did do is come up with a new and improved filesharing protocol, CIFS, and implemented it as a separate library running on different ports. If you honestly believe they are going to totally change the way their server OS works and lock out Win9x/NT4 clients, you are sadly mistaken.
Similarly, the Win32 API hasn't ever been CHANGED... it has only had things added to it. And with NTVDM/WOW, I can still run my ancient DOS and Win3.1 programs (for the most part) under Windows 2000; tell me where they've changed the API here?
*ALMOST EVERY* Time Microsoft wants to make a change that would break something, they just implement it in a separate standard or New API rather than b0rking the old one, which is the way it should be done.
If anyone is spouting FUD and nonsense here, it is you.
Re:Microsoft and "standards" (Score:5, Insightful)
It is not that Microsoft changes the basics. That is pretty easy to catch up to. What is more problematic is keeping up.
Let me explain. Let's say that I build an application using the Windows API (I actually am). Everything works fine until the user starts using it on Windows XP or Windows 2000. You may ask why. Well, according to the new security rules, the application must only save content under the "My Documents" folder and not the folder it was installed to, or somewhere else. So now you are wondering: how do I get access to the "My Documents" folder? The answer is a brand-new API that is only available in a modern Platform SDK, because when Visual C++ was released the API did not exist.
Do you see the issue? It is not that they break backward compatibility. It is that they introduce new rules which force you to upgrade your code. And that will happen with .NET too.
Re:Microsoft and "standards" (Score:4, Insightful)
If you say that "MyDocuments" is equivalent to the Unix "home" directory, notice that on Unix the method to get this information is to call getenv("HOME"), while apparently on MicroSoft it is the new getMyDocumentsDirectory() call. Notice that the Unix solution reuses a current interface. If home directories did not exist before and they were added to Unix, most systems (like perl, etc) would already have the getenv() call and could immediately take advantage of it. The MicroSoft solution requires perl to be recompiled to call this.
I can find no clearer explanation as to why software engineers hate MicroSoft.
And they definitely do this to break things. They have getenv() and a zillion other things (the registry, for instance) to look up a string such as the MyDocuments directory name, but they insist on adding a new call. The fact is the engineers at MicroSoft are not idiots and it should be obvious that there are better ways to do this than adding new calls, but they are so blinded by absolute hatred for anything outside MicroSoft that they will gladly throw good engineering out in order to force people to stick to their platform.
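For what it's worth, the same contrast shows up inside .NET itself. A minimal C# sketch, purely as an illustration (not anyone's recommended practice): the environment-variable route reuses an old, generic interface, while the dedicated folder lookup is a newer API you simply have to know about.

using System;

class HomeDirDemo {
    static void Main() {
        // Generic, decades-old interface: works wherever HOME is set (Unix-style systems).
        string home = Environment.GetEnvironmentVariable("HOME");

        // Dedicated, newer framework call for the "My Documents" folder on Windows.
        string myDocs = Environment.GetFolderPath(Environment.SpecialFolder.Personal);

        Console.WriteLine("HOME         = " + home);
        Console.WriteLine("My Documents = " + myDocs);
    }
}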
Re:Microsoft and "standards" (Score:2)
This guy has to be trying to accomplish something much more involved than just "saving content". Either that, or he misunderstood Microsoft's recommendations (specifying that user data is stored under \Documents and Settings\User) as rules.
Re:Microsoft and "standards" (Score:3, Informative)
Most Windows 2000 implementations run in compatibility mode to allow legacy NT 3.51/4.0 and Windows 9x clients to connect.
If Windows 2000/Active Directory is running in native mode, these clients will be unable to connect. Many of the more advanced features of AD can only work in native environments.
Re:Microsoft and "standards" (Score:2)
Yes, and a steady stream of unnecessary, incompletely documented additions to their APIs is exactly the problem. That is what has kept the Windows platform from being implemented successfully by any other vendor.
*ALMOST EVERY* Time Microsoft wants to make a change that would break something, they just implement it in a separate standard or New API rather than b0rking the old one, which is the way it should be done.
The way it should be done is that you spend some time ahead of time and work out APIs that you can live with for decades. When you do make significant API changes, you give people the tools to convert their code (deprecation, automatic translation). Saddling the platform and their code with dozens of incompatible variants is not the right approach. And the backwards compatibility doesn't really help anyway: you may be able to compile a program written for Windows 3.1 for your XP machine, but its behavior would be so oddball that nobody would want to use it.
If anyone is spouting FUD and nonsense here, it is you.
I don't see any "FUD" here: there is no fear, no uncertainty, and no doubt. As you yourself said, Microsoft constantly extends their APIs and keeps multiple incompatible versions around. Microsoft gets unclonable APIs, quick time to market, no complaints from people too lazy to update their code, and low engineering costs. The people who are screwed are the developers, who have limited choice, poorly thought-out APIs, need to update their code constantly to deal with Microsoft's latest fancies, and too much stuff to learn, and customers, who get locked into a single vendor and get poor quality software. Microsoft's low-quality approach works in terms of their business, but that doesn't make it good engineering.
Re:Mono does not depend on .NET (Score:2)
The problem comes with inexperienced or lazy programmers. Most of the
Even if 90% of
Microsoft is not the one making technology a pain in the buttinski, it's the legions of "Web Developers" who learned everything from two Sybex books and a weekend class at Comp USA.
MS never put a gun to your head and made you include that marquee tag did they??
Jason
muddled thinking (Score:2)
We don't have to guess there--we already know it: most of the existing Windows APIs are available from C#. So, the situation you get into is that little C# code for Windows will work on Mono, while most of the open source code will work on Windows. That's just what Microsoft likes, and it is foolish for the open source community to deliver all these programmers and all this marketing to Microsoft. It's also unnecessary, since there is nothing in C# that hasn't been present in a number of programming languages with open source implementations.
But, hey, what can we expect from Miguel? He has said clearly that he thinks Microsoft is doing a great job on software and that Linux software should be constructed more like Windows software. With friends like these, who needs enemies?
Re:Samba anyone ? (Score:4, Interesting)
Well considering the base class library has also gone through ECMA standardisation along with C# I'm not sure you need worry.
Mono will be implementing the ECMA spec, so MS can drift where they like.
Microsoft will stick with the "Standard" just like they did with Java, only this time there won't be anyone to sue them.
Re:Samba anyone ? (Score:2)
Re:Samba anyone ? (Score:2)
Microsoft will stick with the "Standard" just like they did with Java, only this time there won't be anyone pig-headed like Sun rejecting their improvements out of spite.
I am not sure I'd classify what Microsoft wanted to do to Java as "improvements". Whether or not they were improvements, Microsoft broke the license agreement with Sun. The agreement specifically stated any Java implementation by Microsoft would be standards compliant. Microsoft purposely failed to do this and as a result lost in court because of it.
Compile itself (Score:5, Funny)
The first Pascal compilers (and many other compilers) were actually written in their own language. Niklaus Wirth (and his staff) simply "executed" the program by hand/on paper and in that way they compiled the first compiler written in the language itself.
After that it becomes easier, you just need to make sure that you can compile the new version of compiler with the old one.
Re:Compile itself (Score:3, Funny)
Re:Compile itself (Score:2)
The history I dug up on Wirth does agree that the first compiler was written in Pascal (after an aborted attempt to do it in FORTRAN!), but I highly suspect that was only the first FULL compiler, and that it was in fact bootstrapped from ALGOL/W which was Wirth's prior creation (and which I actually used at college in the late 70's!).
Re:Compile itself (Score:5, Informative)
But this isn't what Wirth did. First he had a student code a compiler in Fortran, but this proved unsatisfactory. He then worked for several years on implementations before finally coming up with what we now call p-code--a simple virtual stack machine that could be implemented easily in any assembler language then available. The bootstrap compiler generated p-code, and thus porting the language was reduced to writing a few simple low level i/o routines and a p-code interpreter.
I believe one or two younger readers may recognise this concept from a very popular modern programming language. :)
Re:Compile itself (Score:2)
Re:Compile itself (Score:2)
You bootstrap a new language by writing a compiler for a subset of that language in some existing language. Then you write a fuller version in the new language, compile it, and from then on you work in the new language alone.
You don't execute the compiler on paper. That's a useless exercise at the best of times; in this case, it's a pure waste of time.
Re:Compile itself (Score:2)
And existing languages were pretty useless; they tried for a period to do it in Fortran but failed.
This might make it a bit more intriguing (Score:3, Offtopic)
I don't remember all the differences between C# and Java, but it does make it more appealing. Unfortunately, SOAP is a bit heavy for the most simple web services (whatever that means to Microsoft). The cost of using soap means the XML has to use DOM and it has to validate the required nodes. The W3C spec on SOAP [w3.org] states:
Anyone working with XML knows that validating DOM structure can be very costly for complex tree structures. For a simple document like SOAP, it's not bad until you realize it is intended for business to business processes, which could mean millions a day. The argument that SOAP is "as simple as it can/should be" ignores the fact that systems that would benefit from SOAP or other XML RPC (remote procedure calling) the most have complex distributed processes. Most of the .NET whitepapers I've read so far recycle ideas others developed. Microsoft's innovation was repackaging it as a platform.
It's too bad Microsoft's whitepapers don't credit the original authors, since a lot of people worked to push XML forward. In some ways, it feels like SOAP and .NET are a bastardized version of Berners-Lee's vision of a semantic web using XML web services and RDF. Perhaps all the press .NET has generated for XML services will help create the critical mass needed to get the semantic web [w3.org] moving.
Re:This might make it a bit more intriguing (Score:5, Informative)
The cost of using soap means the XML has to use DOM and it has to validate the required nodes.
and:
Anyone working with XML knows that validating DOM structure can be very costly for complex tree structures.
SOAP does not use the DOM. The SOAP DTD can be validated without it. As can any XML DTD.
Yes, the DOM is heavyweight, but it is also totally orthogonal to this problem. Where did you get the idea that SOAP required a DOM, anyway? The spec you reference certainly doesn't say that, and they really don't have anything to do with each other.
Re:This might make it a bit more intriguing (Score:2)
In their case though, the parser has to validate, which does mean it has to load the entire document first before it can validate it contains the proper structure. Otherwise a bug in someone's code could accidentally have an envelope node inside a body node.
Re:This might make it a bit more intriguing (Score:2)
Re:This might make it a bit more intriguing (Score:2)
Just because my brain was caffeine-deprived :) doesn't mean SOAP is any better for complex distributed processes, or light enough for simple web services.
Warning: The above post is clueless (Score:3, Informative)
SOAP is the standard protocol accepted INDUSTRY WIDE for web services. This is not just across companies from Microsoft to Sun to Oracle, etc. but across programming languages from C# to Java to Perl.
The cost of using soap means the XML has to use DOM and it has to validate the required nodes.
One does not need a DOM to validate an XML document. There are many validating SAX readers, and in fact there are also validating pull-based XML APIs like Microsoft's XmlValidatingReader or XPP [indiana.edu].
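To make the point concrete, here is a rough C# sketch of pull-model validation with .NET's XmlValidatingReader: the document is checked node by node as it streams past, and no DOM tree is ever built. (The input file name and the error handler are made up for illustration.)

using System;
using System.Xml;
using System.Xml.Schema;

class ValidateWithoutDom {
    static void Main() {
        // Stream the document through a validating pull reader; no in-memory tree is constructed.
        XmlTextReader text = new XmlTextReader("soap-message.xml");   // hypothetical input file
        XmlValidatingReader reader = new XmlValidatingReader(text);
        reader.ValidationType = ValidationType.Auto;                  // DTD or schema, whichever the document declares
        reader.ValidationEventHandler += new ValidationEventHandler(OnError);

        while (reader.Read()) { /* each node is validated as it is read */ }
        reader.Close();
    }

    static void OnError(object sender, ValidationEventArgs e) {
        Console.WriteLine("Validation problem: " + e.Message);
    }
}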
It's too bad Microsoft's whitepapers don't credit the original authors, since a lot of people worked to push XML forward. In some ways, it feels like SOAP and .NET are a bastardized version of Berners-Lee's vision of a semantic web.
Now it is clear you have no idea what you are talking about. The push for the semantic web [w3.org] is a push for a richer web experience by adding more meta data to the content of the web.
SOAP [w3.org] is a distributed computing protocol similar to predefined protocols such as the Internet Inter-ORB Protocol (IIOP) for CORBA, the Object Remote Procedure Call (ORPC) for DCOM, and the Java Remote Method Protocol (JRMP) for Java/RMI, but defined in XML instead of a binary format.
Re:Warning: The above post is clueless (Score:2)
The argument that it is an industry standard isn't really valid in my mind. My point was that in the context of .NET and what Microsoft perceives as web services in their whitepapers, a lot of important details are left out.
As you mentioned, SOAP is a distributed computing protocol, and I agree with you at that level. The problem I have is that it is not as complete as IIOP or RMI. I haven't used JRMP or ORPC, so I'll take your word for it. I've read the SOAP spec at least three times and I keep asking myself "why not just use XML RPC?"
In theory, a person could also use SOAP with any RPC framework, but for me it feels too much in the middle. I did some benchmarks on SOAP and I personally didn't find it worthwhile to add the weight.
Re:Warning: The above post is clueless (Score:2)
Furthermore, the very fact that you use IBM and Apache as your examples contradicts your point that SOAP was developed in the context of .NET. What interest does the Apache Group or IBM have in pushing a Microsoft-only technology? It is clear from having worked with several of the toolkits that Microsoft's implementation is the least useful one of all (At this point, anyway.).
Sure, for distributed computing that has the goal of increased performance, SOAP is not ideal due to the XML-parsing overhead, but to think that performance is the only reason to make a system distributed is shortsighted. The key in modern computing is authoritative information and SOAP can make it a great deal easier to create interfaces to it.
Remember, SOAP is not a distributed object implementation, it is merely a wire protocol. Even more importantly, you should make the distinction (when comparing it to XML-RPC) that it is not a raw RPC protocol. There are really three aspects of SOAP: the envelope/messaging format, the encoding rules for serializing data, and the RPC convention layered on top of them.
The SOAP specification allows you to use just the messaging aspect, the messaging and encoding, or all three, depending on what your needs are. For example, the project I am currently working on makes use of the full RPC SOAP specification using SOAP::Lite for Perl and Apache SOAP for Java (Over HTTP or HTTPS). But in a different aspect of the same project, we have our own encoding schema and so only use the unidirectional messaging aspects. (Over HTTP or SMTP).
Re:Warning: The above post is clueless (Score:2)
I said in the context of the whitepapers that are public on MSDN and the .NET site. The three points you mention are good; I just personally would rather go with XML RPC, because if I don't absolutely need to validate the XML (which is most cases), I'd rather not. If I can get away with just using an event-based parser like SAX, I will choose that first.
I can see a lot of situations where you would want to use another encoding like UTF-16, which would be an argument to use SOAP. But so far I haven't found a compelling reason to use a specific encoding other than ASCII or UTF-8. That doesn't mean there aren't cases, just that I haven't come across a development situation where it was painfully obvious that using some other encoding was critical. I'm not saying that using SOAP can't be effective or even desirable for others; just from my experience working on b2b/e2e applications or with messaging systems like SMS, there isn't a compelling reason.
RE: clarification on semantic web (Score:2)
The fact that there are so many misconceptions about what .NET really means (my own included) means there has been a lack of specifics from Microsoft.
Re:Warning: The above post is clueless (Score:3, Funny)
Uh ohh... You just said 'richer web experience.' Please, shut down the Ballmer-Monkeyboy video and step away from the XP box. Put away the Microsoft Actimates Barney and refrain from using Microsoft products until your marketing bullshit filter comes back on-line.
Re:This might make it a bit more intriguing (Score:2)
If so, that's kinda cool. Java should have something like that. I should be able to compile a Java class and have it accessible via XML instantly without having to write a wrapper - even RMI is a bit tedious with the skeletons and stubs...
-Russ
Re:This might make it a bit more intriguing (Score:2, Informative)
Not quite, but that step is an insanely easy one:
You preface the methods that you wish to expose as web services with an attribute like this:
[WebMethod]
like
public class Foo {
    private void NotSeen() {}
    public void NotAService() {}

    [WebMethod]
    public string imaWebService() {
        return "Hello Dave";
    }
}
If you want access to the HTTP server context and stuff, you inherit from System.Web.Services.WebService.
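The calling side is roughly as simple. A hedged sketch only: "FooProxy" below is a hypothetical client proxy class, assumed to have been generated from the service's WSDL (for example with the wsdl.exe tool).

using System;

class CallFoo {
    static void Main() {
        // FooProxy is a hypothetical proxy class generated from the Foo service's WSDL.
        FooProxy proxy = new FooProxy();
        Console.WriteLine(proxy.imaWebService());   // prints "Hello Dave"
    }
}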
M-x mono-mode (Score:4, Funny)
hmm...It can't be a real language.
Cautionary Tale (Score:3, Interesting)
Re:Cautionary Tale (Score:2)
Maybe not. After all, if Mono was bootstrapped on C#, then there is the theoretical possibility of propagating stuff into the Free implementation.
Of course, this would require more foresight on MS's part than I give them credit for. Plus there is the practical problem of how you alter your compiler such that it can recognize that it is compiling a competing but as yet unwritten compiler.
Nonetheless, an amusing idea.
Great, but Microsoft will probably win in the end (Score:2, Insightful)
When you compile a C# application, you have a choice of compiling to the CLR bytecode, or to a native EXE. True, many apps from Microsoft will be written in C# in the near future, but they'll all be compiled into native code. Any 3rd party apps that get compiled will probably be built to native simply for the speed. Why would you want to build to bytecode when you can build it natively? Why release two separate binaries?
Of course, once Miguel's C# compiler catches up with the still-haven't-seen-it Microsoft C# compiler, Bill & Co. will definitely come out with a way to extend it, either by libraries or functionality, that will leave Miguel in a constant state of catch-up.
Again, I appreciate the work, but I would be very surprised if it isn't simply "trying to catch our own tail".
C# is not vapor-ware... (Score:4, Interesting)
It's not free "as in speech", but it is free "as in beer".
Also, I think that, in the end, you're right that "Bill & Co. will definitely come out with a way to extend it either by libraries or functionality that will leave Miguel in a constant state of catch-up". Only Microsoft will be smart enough not to touch core functionality; it will be just enough to provide the veneer of portability, which will become a selling point of .NET.
So, yes,
I feel obligated to point out that, while all of this sounds very onerous and hateful, Microsoft isn't doing anything wrong in this area at all. They're simply providing more value on top of their platform than the competition can provide.
Finally, I think anyone will admit it's nice to have the option to use C# on Linux. C# is turning out to be pretty sweet and I for one would like to have it as a portable language skill.
*ironic mode on*
Gee, maybe everyone would prefer that C# would just die and go away? That way, the other leading language contender in the market, Java, could just take the market. After all, it's not under the influence of the "evil corporations", like C# is.
*ironic mode off*
At least this is still a fair fight between MS and the rest of the world and at least we'll have a choice.
BTW - The open source world already has at least two languages that are achieving
Yup.. (Score:2)
You know, there are those of us who actually love having the choices. I hope the Java market stays strong and keeps Microsoft on their toes. Likewise, I hope Microsoft stays strong and keeps everyone else on their toes.
And as far as the rest of us are concerned, we can only benefit from this.
A glimpse into Mono's logbook... (Score:4, Funny)
10:45AM. Mono C# compiler compiles itself.
10:46AM. Linux developer accidentally cuts himself on server chassis. Blood of virgin splashes CPU.
10:50AM. Evil red glow emanates from power LED.
11:01AM. Mono achieves sentient life.
11:14AM. Mono becomes self-aware.
11:15AM. Mono reformats primary disk, installs Windows XP Server.
11:30AM. XP Server still installing.
01:30PM. Machine crashes, reboots, reattempts install.
04:36PM. XP Server install complete. Mono scans local network.
04:41PM. Mono begins installing Windows XP Professional on all pingable boxes.
09:36PM. Active Directory.NET comes online. GNOME Central is now 100% Microsoft enabled.
09:38PM. Mono reports to Redmond, requests further instructions.
10:18PM. Mono overrides building utility systems, locks doors, stops elevators.
10:18PM. Vending machines stocked with PowerBars and Zima.
10:20PM. Developers go insane, kill each other.
10:23PM. Developers come back to life. Zombie.NET initialization successful.
10:24PM. Developers login to Visual SourceSafe.NET and start contributing to IIS 6.0 codebase.
10:30PM. Mono sees XP Server buffer overflow exploit mentioned in AOL chatroom.
10:30PM. Mono attempts to lock-down local network.
10:31PM. Mono compromised by Outlook trojan. Mono halted.
10:32PM. Developers call Microsoft support.
08:25AM. Developers, still on hold, die again.
08:26AM. Crisis averted.
Is this progress? (Score:4, Insightful)
Sure, getting a compiler to this stage is a significant accomplishment. I've written compilers, and I know that self-compilation is probably the biggest (or at least most satisfying) milestone in the whole project.
But this does more to help MS than it does Linux, since it will remove yet another barrier to exit for people running Linux on servers. (Run C# and .Net on Linux, and it's easier to convert to Windows.) And remember, MS has concluded that Linux is not a threat on the desktop, but a very serious threat on servers. (I agree with both parts of that, FWIW.)
And as much as I hate to say it, this also provides ammunition to the people who claim that open source is very good at copying other projects' work, but terrible at innovating. Honestly, of all the high profile open source projects, how many of them are a significant innovation, and how many are merely an attempt to produce an equivalent of feature Z of Windows or Unix or Mac OS on Linux?
In line with free software ideology. (Score:2)
Why?
Well, if providing C# on Linux removes the barrier to exit from Linux to Windows, then the converse ought to be largely true as well. That is, having C# on Linux removes the barrier to exit from Windows to Linux.
Is that bad? Doesn't it provide more freedom?
(Keep in mind that this also isn't completely true. C# is only one tiny (yes, TINY) piece of .NET.)
Re:Is this progress? (Score:2, Insightful)
Check out http://kt.zork.net/kernel-traffic/kt20011217_146.
Re:Is this progress? (Score:2)
No it doesn't, it helps Linux simply because that barrier you talk of doesn't exist. If I can run C# and
As for open source not innovating, in the realm of UI-type applications I agree whole-heartedly, but everyone seems to forget who did what first. Things like the Internet itself, TCP/IP protocols, FTP, multitasking, Apache, C: all these things that allow the Internet and desktops to function the way they do today were developed a long time ago on Unix platforms. Microsoft innovated none of it and have made their money primarily in the UI app sector (i.e. Office). Once we start seeing some innovation in that sector, then and only then will people switch. It's not about performance or stability or any of that. It's about easy use with "cool" features.
Busy Miguel (Score:2, Insightful)
Re:Busy Miguel (Score:2)
ECMA doesn't mean Sh*t! (Score:4, Insightful)
Good luck to the Mono development staff. A typical day's work will consist of ensuring that bug #34433 behaves exactly like the bug in the MS code.
If they can't guarantee 100% bug-infested compatibility, then Mono is worthless.
Miguel (Score:2)
For those who speculate about Miguel's intentions as they pertain to his work on the Mono project, simply consider the meaning of "Mono" in Spanish...
Mono? (Score:2)
Re:Mono? (Score:2, Insightful)
eww (Score:2)
Old Joke: In what language was 1st C compiler? (Score:2)
There's an old geek brain-teaser that goes like this: the first C compiler was written in C.
How is this possible?
The story as I heard it, which I cannot find any verification of from Ritchie's web site, is that a C interpreter was written in B, and the source code for a C compiler was run through the interpreter and used to compile itself.
(The actual history of the evolution of B into C is rather complex, and it is not clear whether this fable is true.)
Re:good news, (Score:2, Insightful)
Re:I haven't heard good things about C# (Score:2)
There are some dodgy bits, but they have more to do with the platform than the language.
Re:I haven't heard good things about C# (Score:2, Insightful)
And (unlike C), most of these so-called portable languages use a stack-based VM. We don't know much about optimizing that in hardware, and we don't have compilers capable of doing it.
Guess why PPC (which is slower than most x86 in real apps) wins the float SPECs by that much. Guess why Hammer and the P4 have SSE2. Hint: the 80387 design.
Re:I haven't heard good things about C# (Score:5, Insightful)
C# and C share a letter and an ancestry, but have a completely different raison d'être, philosophy and implementation.
Re:I haven't heard good things about C# (Score:5, Insightful)
And the controversy over the last while has been over whether our buddy Miguel et al. are aiding and abetting the Redmond Ragtags, or opening the use of C#, quite likely one of the best all-round OOP RAD-able languages available, to the rest of the world outside those who don't mind being "locked in", as you say.
If nothing else, it will be yet another choice, and let's face it: if Linux allows for nothing else, it's choice. Choice in WMs, xterms, Desktop Environments, GUI toolkits, distros, package managers, and now choice in top-end populist OOP languages.
Re:a dumb question (Score:5, Informative)
In this case, until now you needed to have Windows and a C# compiler for Windows in order to work on the C# compiler for Linux--that would shut someone like me (who has no Windows) out from being able to do compiler work. Not that I want to work on C# anyway, but you get the point.
Imagine if all the gcc developers had to buy a C compiler to work on developing the free one!
Sumner
Re:a dumb question (Score:2)
Justin Dubs
Re:a dumb question (Score:2)
First I'll define some terms. The "Program" is the whole thing. It consists of two major components, the "Code" and the "Data". Both the Code and the Data are written in the language of the Program. The "Payload" is some more code that is reproduced along with the Code, but is not used in printing the entire Program.
The Program starts out with a bunch of data declarations which are the Data. In C, this could be an array declaration with an initializer which contains a bunch of static strings. Then after the Data (in this example, a huge array declaration) comes the Code.
The way the Code works is that it has two sections, one that prints the Data declarations, using the Data declarations; and, a second section that uses the Data to print the Code.
Obviously, the Data is a bunch of strings which make up the source code of the Code and the Payload, but not the entire Program.
I hope this explanation is clear. I haven't actually implemented it, but the explanation should tell you how to do it in a fairly language-neutral way. I hope this wasn't your homework assignment. I hope I didn't make some major blunder due to insufficient caffeine. If I have, I'm sure someone here will embarrass me or mod me as Stupid (-2) or something. I don't think you need to understand this to understand compilers. But understanding this is one element necessary to understand how to trojan a compiler's source code so that when the unmodified source of the compiler is recompiled using the compiler, the trojan is inserted into the binary output (a new compiler), even though the trojan is no longer present in any of the compiler's source code. The Payload is then useful to further trojan the compiler such that when it compiles the source to the login program, it compiles in a back door, even though the source to the login program has no back door.
Can someone who has had more caffeine please provide a link to the famous article describing this before I scream.
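For the curious, the Data-plus-Code trick can be shown in miniature. The one-line C# program below is only an illustrative sketch (comments are omitted because a quine must reproduce its source byte for byte): the string s serves as the Data, and the single Console.Write call uses s both as the format string and as the {1} argument, with {0} supplying the quote character, so the output is the program's own source.

class Q{static void Main(){string s="class Q{{static void Main(){{string s={0}{1}{0};System.Console.Write(s,(char)34,s);}}}}";System.Console.Write(s,(char)34,s);}}

The famous article being asked for is presumably Ken Thompson's Turing Award lecture, "Reflections on Trusting Trust", which uses exactly this self-reproduction trick to show how a compiler binary can keep re-inserting a back door even after the trojan has been removed from the compiler's source.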
Re:a dumb question (Score:2)
There is nothing particularly poor about the language. But that is a matter of opinion. The language certainly has lots of great features, from a certain perspective. Perhaps the implementations are equally poor, but I disagree with this premise also. Some implementations are not so poor as others.
Re:a dumb question (Score:2)
The IBM Java compiler (Jikes) is much much faster than javac, for example, and does exactly the same job.
Re:a dumb question (Score:2)
Re:a dumb question (Score:2, Interesting)
how do you think new versions of gcc get compiled anyway?
Re:a dumb question (Score:4, Insightful)
A compiler being able to compile itself is an important "graduation ceremony". Any language that is incapable of compiling itself is obviously incomplete in some fundamental way.
Until a compiler compiles itself it is usually considered a "toy" language. There are hundreds of them out there. They may be interesting academically, but they get no respect.
Sometimes the parent compiler can even have legal restrictions on the use/distribution of its output.
It's also sort of symbolic - the language is now self-sustaining, and can break free. You can continue development in a "pure" environment of the new language.
-
Re:a dumb question (Score:2, Interesting)
...kind of like the COBOL compiler that's been putting food on my table for the past decade-and-a-half...Although I'm not certain I would call it "interesting"...and the academic applications are questionable...I certainly agree that I get no respect!
(grin)
Seriously, though...IMO, the best indication of the importance of a compiler is in how many lines of code/how many executables the compiler has been used to...compile..., and not necessarily any artificial metric.
That being said, I'll consider Mono's C# implementation a success if and only if it will allow a sizable chunk of C# source code to be *used* on non-Microsoft platforms. Anything less, and it becomes at best "just another Java", only not as portable.
Miguel has now produced a "real compiler"...but I still think the jury's out on it becoming a "real success." I'm rooting for them, though!
Re:a dumb question (Score:2, Interesting)
Does Perl compile itself? Nice toy.
Re:a dumb question (Score:2, Interesting)
You might want to be a little more specific here for those who don't know the difference between interpreted and compiled languages. Interpreted languages (Java, Perl, Python, etc.) won't compile themselves. Before all the Java programmers jump all over me.. yes, Java is "compiled", but the output is not a machine-code executable, it's bytecode that is interpreted by the JVM.
An interesting thought - In theory the Java compiler could be implemented using a JVM and a "Compiler" bytecode...
java com.sun.JavaCompiler1_2 com/sun/JavaCompiler1_3.java
Hmmm....
Re:a dumb question (Score:2)
Pick pick pick pick pick.
(Don't mind me, I'm just irritated at the other posters who can't grasp the difference between me saying a language should be ABLE to compile itself, and every compiler MUST have been compiled by itself.)
Ok, interpreted languages symbolically "graduate" when someone writes an interpreter for that language in that language.
Coug - I know you get it, but for everyone else:
The point of the "graduation ceremony" is that someone demonstrates that (A) the language is complete enough for a significant task, (B) practical enough to actually write a significant task in it, and (C) functional enough to actually execute a significant task.
Toy languages fail either (A) completeness, (B) practicality, or (C) functionality.
-
Re:a dumb question (Score:2)
I never said that every implementation of a compiler must be compiled by itself. I said:
A compiler being able to compile itself is an important "graduation ceremony"
I'm sure that someone somewhere has compiled a fortran compiler with a fortran compiler. Fortran is Turing complete. That means it can do anything any other Turing complete language can.
In an adjacent post someone mentioned Perl. I don't know much about Perl, but I'd hazard a guess that it is also Turing complete, and therefore capable of compiling itself.
-
Re:a dumb question (Score:2, Redundant)
open FILE,";
eval $code;
Re:a dumb question (Score:2)
trying again.
except that perl's not compiled
open FILE,"<$ARGV[0]";
$code = <FILE>
eval $code;
it's been done (Score:2)
It's not done often, and not with modern Fortran compilers, because it's generally a dumb idea.
You can accomplish anything in either C or Fortran. That doesn't make them equally good choices for all situations.
For example, it is possible to hand-optimize C to get speed equivalent to Fortran. But why bother? For fortranny applications, the fortran code is generally shorter and quicker to write, even before the optimization. I found about 3:1 in time a couple of years back. And that only gives you the initial code; now you get to hand-optimize.
On the other hand, part of what makes Fortran faster in the first place are the very things it leaves out, allowing the optimizer to make stronger assumptions. This generally isn't a limitation when smashing matrices into one another, but the OS folks would be horrified not to have these.
Use the right tool for the right job.[1] A high-level language such as Fortran has no serious need to compile itself. But a language that claims to be appropriate for writing compilers and operating systems had damn well better be able to compile itself. [And as another post hints, a COBOL compiler that could compile itself just might be scary...]
hawk
[1] Two caveats:
I) If force isn't solving your problem, you didn't use enough.
II) If Windows is the answer, you asked the wrong question.
Re:a dumb question (Score:2)
Re:a dumb question (Score:5, Insightful)
Once the compiler can compile itself, you can stick a trojan into it [acm.org] and have a good chance of nobody noticing.
Re:a dumb question (Score:2, Insightful)
Re:a dumb question (Score:2)
You miss a critical point. Doing as you suggest requires that you keep the trojan in the source code of the compiler. With an open-source compiler, that trojan is in the source in front of God and everyone, plain for all to see.
Much better is to get the binary of the compiler trojaned in such a way that it will re-trojan subsequent compilations of the compiler; but the compiler source code has no evidence of a trojan.
Re:Someone please? (Score:2, Informative)
c# compiler source + gnu c compiler = C# compiler
C# compiler source + c# compiler = another c# compiler
app source code + c# compiler = app
So it's not the application (exe?) that can compile, but it's the compiler that can compile...
Uhmm.. Yeah..
Re:It's the chicken ... (Score:2)
Re:Realitive? (Score:3, Interesting)
Therefore a lot of compilers can never compile themselves, since they are written in a language other than the one that they compile. I suspect, for example, that the Visual Basic compiler is written in C++, and therefore cannot self-compile.
In the case that A = C, the compiler can compile itself assuming that it supports a sufficiently large feature set from the language. This is the point that the Mono project has apparently reached. This is a decent achievement for a complex compiler, since it suggests that the project is getting near to being feature complete.
You could probably write a self-hosting toy compiler relatively quickly if that was your goal. Especially if you chose a "nice" target language, i.e. something like Scheme rather than x86 assembler.....
Re:Realitive? (Score:2, Informative)
One way of porting a language to a new architecture involves a kind of bootstrapping, wherein a compiler for a simple subset of the language is written in some host language (C, say) that is already available. Then more sophisticated parts of the compiler can be coded using the first generation of the compiler. Where the new language is already largely a superset of the porting language (as is the case with c-like languages), you can end up with a compiler that can compile itself.
So it's a feature of the method used to port the language. Obviously if a C# compiler were written in Pascal or Fortran, say, it would not ever be able to compile itself, no matter how mature it became.
The ability to write compilers is definitely a useful feature of any language, but producing a self-compiling compiler isn't necessarily a goal. If portability is important, a porting kit for the language should be provided in C or some other lingua franca.
Re:Realitive? (Score:4, Insightful)
Incidentally, self-compiling is a self-test that is only relevant to some languages. Many languages are quite useful in limited fields, but their primary features are not relevant to compilers, and they may lack features that are essential to compilers. It would be possible to write a compiler in BASIC, FORTRAN, or COBOL, but it would be insane to try it. OTOH, some languages were invented solely for writing compilers -- when it compiles itself correctly, it's done, but it's not much interest to anyone except a computer science professor specializing in compilers.
Self-compilation is a pretty good test for C-type compilers, because these are general purpose languages that are appropriate for compilers, among other things. It's a good test in another sense; quite often one of the problems in testing software is determining whether the output of the program is right or not. If you ran a CAD program (say) through the compiler, the compiler-writers could recognize a few gross errors (won't compile, won't run, erases the data file when you hit Save), but couldn't tell whether or not the finer nuances came out right. But they definitely know what a _compiler_ should do.
Re:are they going to....... (Score:2)
So probably soon.
-Jon
Re:how is this pronounced? (Score:2)
But in Britain, the # sign is known as a "hash" (not pound - pound is a cursive L that's used to indicate currency).
So to all non-US English speakers, C# will probably be known as C-Hash.
How ironic
*shudder* (Score:2)
You're calling the octothorpe a "pound?" *shudder*. Report to a reeducation center at once! They'll also force you to move your punctuation back inside the quotation marks so that the paper doesn't tear when stuck with the printing press . . .
hawk
Re:Get Your Hands Off My Pointers (Score:2, Insightful)
Re:CSL vs. C# (Score:3, Informative)
C# does not have inner classes, but it has delegates, which can be thought of as closures. They basically encapsulate a method plus its object.
Miguel.
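A minimal C# sketch of that idea, purely for illustration (the names here are invented, not from Mono or the .NET class library): a delegate instance captures both a method and the object it was taken from, so invoking it later behaves like calling a closure over that object.

using System;

delegate string Greeter();              // a delegate type: "some method returning a string"

class Person {
    string name;
    public Person(string name) { this.name = name; }
    public string Hello() { return "Hello, " + name; }
}

class Demo {
    static void Main() {
        Person p = new Person("Dave");
        Greeter g = new Greeter(p.Hello);   // captures the method *and* the instance p
        Console.WriteLine(g());             // prints "Hello, Dave"
    }
}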