F# - A New .Net language

Goalie_Ca writes "Neonerds.net has learned of an implementation of the ML programming language for the .NET Framework. F# is essentially an implementation of the core of the OCaml programming language. F#/OCaml/ML is a mixed functional-imperative programming language which is excellent for intermediate-to-advanced programmers and for teaching. In addition, you can access hundreds of .NET libraries using F#, and the F# code you write can be accessed from C# and other .NET languages. See the F# Homepage."
This discussion has been archived. No new comments can be posted.

  • by frleong ( 241095 ) on Saturday June 08, 2002 @07:32AM (#3664528)
    Smaller languages can really compete with popular languages, because the same library is available to everyone. This promotes fairness in the development tools domain (in contrast with what MS does in its business tactics).

    Although there are other languages that can run on JVM, the ease of getting inheritance and cooperation to this level is only possible in .NET.

    • So this compiler lets you access Microsoft's proprietary .Net class libraries. A native code O'Caml compiler can access the hundreds of libraries written in C, on a range of platforms. What's the big deal?
      • So this compiler lets you access Microsoft's proprietary .Net class libraries. A native code O'Caml compiler can access the hundreds of libraries written in C, on a range of platforms. What's the big deal?
        You are correct. Micro$oft is constructing a Developers' Superstore - a familiar place where any developer of any language will go to code. Micro$oft is trying to achieve branding.

        This is a tried and tested method: 50 years ago you went to the clothes store and bought some clothes, the electrical store and bought some electricals, the mechanical store and bought a washing machine, the tool store and bought some tools.

        Micro$oft is trying to construct a "Wal-Mart" that everybody goes to instead of all these disparate places. Not bad, appeals to beginners.

        Right now a new developer says "I want to learn how to code" and you say, "Which platform? What type of program - textprocessing=Perl, compiler=Haskell, generalhighexecspeed=C++, generalhighdevelopmentspeed=Java, webdevelopment=PHP,J2EE,..." Micro$oft is trying to make it so that instead of all these disparate RPMs that confuse the heck out of newbie developers, you just use one IDE - Micro$oft's IDE, same as Wal-Mart. The only difference (apart from the obvious) to a newbie will be that C++ has a compile button and Perl doesn't; his questions will become gradually more complicated after that. The weakness is that a bug in the CLR will affect all languages that use that functionality: you lose bug compartmentalisation and damage limitation.

    • by SuperKendall ( 25149 ) on Saturday June 08, 2002 @09:21AM (#3664756)
      Take a look here [microsoft.com], at the comparison of F# to OCaml.

      This is what I really dislike about .NET - the promise of multiple languages, with the DELIVERY of multiple crippled languages. If MS succeeds I can predict what will happen - the demise of almost any other interesting language, as developers are drawn in by the supposed "compatibility" of the C# libraries in their favorite languages. But devoid of many interesting language features, combined with odd syntax for accessing those wonderful Java, er, C# base libraries, they will just drift to using C# (the ONLY language not neutered by the platform).

      I think you are really a troll, but in any case you bring to light a common ignorance about the whole .NET platform that should be corrected whenever possible.
      • This is what I really dislike about .NET - the promise of multiple languages, with the DELIVERY of multiple crippled languages
        I don't know why you're calling me a troll (because I may seem to attack Java? But I like Java too and I actively program JSP and servlets at work!). But the point is that .NET allows these smaller languages to be put into the commercial arena, which is good, because it promotes competition and may actually force the big ones to take on some of the nicer features of these academic languages, and everybody will benefit in the end.
        • Maybe he called you a troll because you said:
          Although there are other languages that can run on JVM, the ease of getting inheritance and cooperation to this level is only possible in .NET.

          And if you had looked at the language description, you would have seen that F# lacks the following:
          • Substructures/Namespaces/Packages
          • Functors
          • Inheritance (authoring)
          • Structured classes
          • Variance on type parameters
          • Labels
          • Default parameters
          • "Printf" style formatting

          Therefore your claim, that somehow getting parts of a language to run on .NET could be beneficial in some way, rings hollow...
        • I am truly sorry for calling the post a troll if it was not - I calculated it might be a troll not so much for attacking Java (indeed, it did not really do so at all) as seemingly being a bit too blindly supportive of .NET as a generic language platform.

          In truth, I'm not aware of any language features that are that much more easily implemented on the .NET platform as opposed to the JVM - it even seems like both will be supporting Generics about the same time (the cynical among us would say .NET is waiting for the Java JSR to be finalized to see how Generics should be implemented in .NET!! ;-) )

          I think the difference is that with .NET, Microsoft is strongly supporting multiple languages on the platform, while in Java, multiple languages are fundamentally supported by the VM but not encouraged by tools - so while the JVM could have a multi-language development tool, no-one has seen the benefit of building one, and thus none have been built.

          While the multi-language drive seems great at first, like I was saying I find it very disturbing that the same thing could happen to languages that has happened to OSes - we could enter a long dark period where the only real "language" you could program in is .NET, with only the ability to choose your favorite syntax (with your favorite syntax eventually and inevitably becoming C#). I like Java a lot but I also like to see work on other languages (like Ruby) progress, and I hate to see a movement to one unisex language.
          • While the multi-language drive seems great at first, like I was saying I find it very disturbing that the same thing could happen to languages that has happened to OSes - we could enter a long dark period where the only real "language" you could program in is .NET, with only the ability to choose your favorite syntax (with your favorite syntax eventually and inevitably becoming C#). I like Java a lot but I also like to see work on other languages (like Ruby) progress, and I hate to see a movement to one unisex language.
            I see a different future scenario. Sure, the popular languages (C++, Java and presumably C#) will remain popular, but the lateral competition from these academic languages will more likely influence the future syntax of these popular languages. This benefits almost everybody in the end, i.e. users of these popular languages, I assume.

      • F# is not crippled. It's just missing the module constructs of O'Caml, which as far as I know should not cause any problem to be implemented -- they just aren't yet.

        SML.NET, which I think works (though I haven't checked) apparently implements all of SML, including its module language. SML's module language is even cooler than O'caml's, too.
    • by Baki ( 72515 ) on Saturday June 08, 2002 @09:43AM (#3664809)
      Just compare any .NET language with the original (C++ versus the real thing, F# versus ML, etc.). What you see is that any language, in order to achieve interoperability at this level (including inheritance etc.) and thus get access to .NET libraries, needs to be mutated into something different.

      All .NET languages are different only superficially, and they resemble their originals only superficially (syntax etc.). In fact all .NET languages are structurally alike; only the syntax is somewhat different.

      Therefore, should .NET succeed in the marketplace, it would be an enormous loss. The choice (of languages) is just fake. In fact it is total assimilation and destruction of variety.

      I have nothing against the virtual machine idea (C# + CLR), which is 100% like Java + the JVM. That is a good principle which has its uses (though one has to ask why not go with Java; C# merely adds some syntactic sugar but brings no true improvements such as templates or multiple inheritance). But this plan/strategy of mutating all existing languages into all-alike .NET variants is horrible.

      • All .NET languages are different only superficially, and they resemble their originals only superficially (syntax etc.).

        F# is a language with type variables, type inference, and lexical closures. Those are real differences, not just syntax. I'd be overjoyed if, say, Java had support for them, but it doesn't.

        (C# + CLR) which is 100% like Java + the JVM

        No, C#/CLR are not "100% like" Java/JVM. C#/CLR is clearly based on Java/JVM, but Microsoft added a few features that make some difference. Among others, the CLR offers support for value types, pointer manipulation, and (experimentally) genericity. Those features do change the runtime significantly relative to the JVM (whether for better or for worse is a separate debate).
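        One concrete place the CLR's value types show up is boxing - here is a hedged sketch in modern Java (since the comparison is with the JVM; class and variable names are invented for illustration): storing an int in a Java collection forces a heap-allocated wrapper object, which is exactly what a value type avoids.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: on the JVM, primitive ints placed in a collection are boxed
// into Integer objects; the list holds object references, not raw ints.
// A CLR-style value type could stay unboxed in such a container.
public class BoxingDemo {
    public static void main(String[] args) {
        List<Integer> xs = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            xs.add(i); // autoboxing: each int becomes an Integer object
        }
        int sum = 0;
        for (int x : xs) { // auto-unboxing on every read
            sum += x;
        }
        System.out.println(sum); // prints 3 (0 + 1 + 2)
    }
}
```

        Whether that overhead matters is workload-dependent, but it is a genuine runtime difference rather than a syntax difference.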

        • C#/CLR is clearly based on Java/JVM, but Microsoft added a few features that make some difference. Among others, the CLR offers support for value types, pointer manipulation, and (experimentally) genericity.

          You didn't mention an important one, which is a big gap in the JVM: tail recursion. Does the CLR have it?
          • You probably mean tail-recursion optimization (that is, that tail calls get optimized into jumps, so the stack doesn't grow).

            It is not up to the JVM to implement or lack proper tail-recursion optimization, but up to the Java compiler. I know that a (standards-compliant) Scheme compiler has been written targeting Java bytecode. The Scheme standard requires proper tail-recursion optimization.
            • It is not up to the JVM to implement or lack proper tail-recursion optimization, but up to the Java compiler.

              No, this is not true. TRO can happen whenever a method calls another method just before returning. But the JVM lacks the primitives for expressing this concept in general and it isn't guaranteed to perform the optimization automatically, either. When a Scheme compiler performs TRO on top of the JVM, it can't be using JVM method calls.

              I know that a (standards-compliant) Scheme compiler has been written targeting Java bytecode. The Scheme standard requires proper tail-recursion optimization.

              Java is Turing equivalent, so you can implement any language on top of it (in the worst case, as a full interpreter). The real question for CLR/JVM is whether the optimization happens naturally when you map onto its built-in notion of functions. And for JVM, the answer appears to be "no".
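              To make that "no" concrete, here is a hedged sketch (modern Java; the names are made up): a method whose recursive call is in tail position still consumes one JVM stack frame per call, so a deep recursion blows the stack instead of becoming a jump.

```java
// Sketch: countDown's recursive call is a genuine tail call, yet the
// JVM still pushes a frame per call, so deep recursion overflows the
// stack - evidence that the JVM does not perform tail-call optimization.
public class TailCallDemo {
    static int countDown(int n) {
        if (n == 0) return 0;
        return countDown(n - 1); // tail position: nothing happens after the call
    }

    public static void main(String[] args) {
        try {
            countDown(100000000); // far deeper than any default thread stack
            System.out.println("optimized");
        } catch (StackOverflowError e) {
            System.out.println("overflow");
        }
    }
}
```

              A Scheme compiler on the JVM therefore has to avoid plain method calls for tail calls; trampolines or heap-allocated continuations are the usual workarounds.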

            • The JVM (basically a neutered Forth VM) can't do proper-tail-recursion the "easy" way- it lacks the required stack manipulation primitives, according to some people they left them out for security reasons at the time of design.

              The fully RnRS-compliant Scheme, SISC [sourceforge.net], on the JVM uses the JVM heap, not the stack, where most Schemes on real processors would use the processor's real stack, munging it for tail recursion.


              • The JVM (basically a neutered Forth VM) can't do proper-tail-recursion the "easy" way- it lacks the required stack manipulation primitives, according to some people they left them out for security reasons at the time of design


                Can you explain that in more depth or provide a link to a paper?

                I learned that tail recursion elimination is done by unrolling a recursive call into a loop:

                int fact(int n) {
                    if (n == 0) return 1;
                    if (n == 1) return 1;
                    return n * fact(n - 1); // recursive call at the end of fact(n)
                }

                int fact_loop(int n) {
                    if (n == 0) return 1;
                    if (n == 1) return 1; // end of loop!
                    int res = 1; // known from loop end
                    for (int i = n; i > 0; i--) {
                        res *= i;
                    }
                    return res;
                }

                This is PURE HIGH LEVEL language code, and tail recursion elimination is applied! This has nothing to do with the underlying virtual machine, IMHO.

                angel'o'sphere
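                One caveat worth adding (a hedged sketch, not the poster's code): in the recursive version above, the multiplication happens after the recursive call returns, so that call is not actually in tail position. The shape a tail-call-optimizing compiler can mechanically turn into a loop carries an accumulator, so the recursive call really is the last action:

```java
// Sketch: accumulator-passing factorial. The recursive call is now the
// final action, which is the form a compiler can rewrite into a loop.
// (javac/HotSpot do not perform this rewrite; this only illustrates
// the transformation the thread is discussing.)
public class FactDemo {
    static int factAcc(int n, int acc) {
        if (n <= 1) return acc;
        return factAcc(n - 1, n * acc); // tail call: nothing left to do here
    }

    public static void main(String[] args) {
        System.out.println(factAcc(5, 1)); // 5! = 120
    }
}
```
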

          • You didn't mention an important one, which is a big gap in the JVM: tail recursion. Does the CLR have it?

            Tail recursion is not an issue of the VM, be it CLR or JVM. It's an issue of the language.

            Java supports recursion, like all modern languages except perhaps some BASICs.

            Do you perhaps mean tail recursion elimination? That's an optimization which is done by the compiler.

            Elimination of recursion, tail recursion or otherwise, is hard for late-bound invocations. In Java it's only easily doable for static or final methods.

            angel'o'sphere
      • by Carnage4Life ( 106069 ) on Saturday June 08, 2002 @11:43AM (#3665130) Homepage Journal
        Just compare any .NET language with the original (C++ versus the real thing, F# versus ML, etc.). What you see is that any language, in order to achieve interoperability at this level (including inheritance etc.) and thus get access to .NET libraries, needs to be mutated into something different.

        Agreed, and this is true of any mechanism that allows interoperability. Java does not allow one to effectively utilize the benefits of the target platform, so that it stays interoperable across operating systems. Many would claim that this is a good thing. SOAP and web services are making people compromise similarly to make building distributed applications something that is accessible to the average developer. Again, this is widely considered A Good Thing.

        Compromise for the sake of interoperability is something that is done all the time. The question typically is whether the amount of compromise is worth the benefits of interoperability.

        All .NET languages are different only superficially, and they resemble their originals only superficially (syntax etc.). In fact all .NET languages are structurally alike; only the syntax is somewhat different.

        I'm not sure what you mean by structurally alike, but I'll hazard a guess and assume that you meant they are semantically alike. So far I have used four .NET languages to program: C#, JScript.NET, VB.NET, and Managed C++. Being a skeptic, I originally assumed that the .NET Framework would simply be creating skinnable languages, where the syntax may be different but the underlying semantics are mostly unchanged - which in fact many claim is the case for VB.NET and C#.

        However, the more I used the languages, the more I realized that although some similarities existed, the core of the languages from my past - JScript and C++ - was still in their .NET versions. I can still declare vars in JScript, and best of all, in Managed C++ I get all my favorite C++ constructs (STL, the 4 casts, templates, etc.) but can combine them with "managed code" to get the benefits of .NET.

        Now there are certain compromises such as the fact that the CLR only supports single inheritance (which I believe some research language discussed a while ago on Slashdot found a workaround for) but in my opinion this is a small price to pay to be able to access my C++ objects from VB.NET or my C# objects from JScript.NET. I consider even better that there is one unified class library that I can count on across all the languages besides the language specific ones like the STL or the JScript function library.

        Disclaimer: I work for Microsoft but the thoughts expressed in this post are my opinion and do not reflect the opinions, intentions, plans or strategies of my employer
        • I think the integration is too fine-grained to implement really different languages, such as:
          - languages that are not OO at all (but purely functional or logical languages)
          - languages like Ada or Occam that have completely different parallelism paradigms

          I prefer looser integration, where different languages/environments can implement truly different paradigms. Then you integrate, for example, by:
          - building a bridge in C, the de-facto glue language (Java has JNI, and almost any language has some kind of interface to C)
          - more coarse-grained component mechanisms such as CORBA (or COM or web services if you like)

          In more specialized environments (that thus are impossible to integrate at the fine level as .NET does) you can have more effective solutions for particular application domains.
        • .Net is not the only game in town for standardizing a virtual machine. The Parrot project, which is associated with Perl 6, is working towards a highly performant (pseudo)register-based VM that will be available as a separate codebase should other language builders wish to use it.

          This makes sense - it is wasteful and time consuming for disparate teams to reinvent the VM.

          Now before getting on your high horse about these unique VMs exposing some key element that the others omit - just remember, every language is ultimately a wrapper/macro/syntax rewrite of your CPU's instruction set. Raising the level of abstraction to the VM level is realistic.

          Now admittedly Parrot does not encapsulate a lot of the features in .Net (security, data abstraction, object pooling... etc.), but it's a step in the right direction.

          Don't underestimate .Net - it makes sense.

        • Is it legal to use .NET to write GPLed applications? I heard that the license restricts the right of programmers to choose how they wish to license their code. If this is the case (let me know if it's not), then any benefits the .NET platform may offer are useless.
      • FUD (Score:2, Insightful)

        by Hard_Code ( 49548 )
        Well, I am a fulltime middleware Java programmer, and have no particular love for MS, but your post is mostly FUD. Microsoft is not trying to ingratiate themselves with *US*, the Slashdot crowd (duh!) - they are trying to jump off their legacy languages (VB, C++) into a J2EE-like world (which Java has had for years now), and to this goal, .NET/CLR is a pretty damn good architecture. The things which it *does* have over Java which are actually nice are, for example, optional "unsafe" keywords for native integration, and auto-boxing (which Sun has now planned to introduce to Java in 1.5). .NET is a great platform for current Microsoft developers to migrate to a web-services/J2EE-like world, and it does a damn good job at it. I know if I was an MS developer I sure would rather develop in .NET/CLR/C# than the crap that they currently have to deal with (VB/C++/MFC/ActiveX/COM+).

        .NET is a good architecture for what it is designed for. That doesn't mean MS is not still going to try to do something evil with it, but it is plainly dishonest to spew FUD like that against it. Read the spec before bitching.
      • I've been programming in C++ for, cripes, it must be ten years now and I've been doing Java on paying basis for three years. This is an honest and sincere question. Can anyone give me examples of where multiple inheritance is a) necessary, and b) superior to the Java trick of interfaces?

        In my experience, multiple inheritance kills reuse, and is sometimes so overused that you get a complex, rootless graph of classes. I actually dislike multiple inheritance. Now, I freely admit that this could be a product of ignorance of the intelligent use of the feature.

        By contrast, I genuinely like the Java system of interfaces. It adds a "works-like" or "can-do" semantic to the traditional object semantics of "is-a" and "has-a." In my experience, it is easier to keep clean and extend, and it doesn't really overcomplicate the class hierarchy.

        Mind you, I've programmed in C for nearly 20 years, so it took a long time to really become an OO programmer instead of a C programmer who uses class libraries -- which is what I believe about 60% of C++ programmers really are. I think C++ took off because of the relatively easy transition from C, but I think all of my complaints about C++ come from the holdover baggage of C.

        Things I dislike in C++
        o "virtual" Sure, high efficiency, but can make for real confusion. "Which method got called?" I like Smalltalk and Java better on this.

        o Multiple inheritance. I like interfaces. Please tell me why I'm wrong (the only cure for ignorance is confession).

        o Operator overloading (this is also something I like). Here my complaint is bad use of operator overloading. How am I supposed to know what it means to increment an Employee?

        o Memory management. Being an old C and later C++ man I was highly dubious about Java's garbage collection model. One month of Java and I totally changed my tune. An entire class of bugs simply disappears. With Java "It compiles, therefore it works" became almost true. Never in C++.

        o Pointers and instances. I don't like (and its just for cleanliness reasons -- not on principle) that you can instantiate an object both by declaration and allocation. In practice, I've ended up using object pointers all the time. When I mix object pointers (read references) and declared objects I end up confusing programmers who must maintain my code in the future. (Should "payment" be deleted or just go out of scope?)

        o No root class. In Smalltalk and Java you get many advantages from having a root class that at some point every other class inherits from. In fact, it gets rid of many of the cases where templates are required in C++ (but not all -- I do like templates!)

        Things I really like about C++:

        o Templates. You can't beat 'em.

        o Operator overloading. When you use it well, it's nice. One bit of code I wrote a long time ago that I use over and over again is a file-backed array class. Overload the subscript operator and you go right to line "n" of a text file. It works so well I haven't had to change a bit of code in it in five years.

        o Speed. Java sucks on speed. It just does.

        o Destructors. I really miss destructors in Java. Even though I consider it a small price to pay for losing the memory management/pointer bugs that are so easy to make in C/C++, it is nice (REALLY nice) to have a method that is ALWAYS called when the object is destroyed (no longer used), as opposed to when it happens to get garbage collected. This is especially noticed when dealing with resources outside of the language, like DB connections and sockets to external services. Sure, you code around this, but a destructor is really nice.

        So, set me straight!

        (No holy war intended here -- I love almost all the languages I've programmed in for one reason or another. They all have their uses.)
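        For what it's worth, here is a minimal sketch of the interface style the parent post is asking about (modern Java; all the names are invented for illustration): one class satisfies two independent "can-do" contracts without inheriting implementation from either, so the diamond ambiguity of multiple inheritance never arises.

```java
// Sketch: "can-do" semantics via interfaces. Employee picks up two
// independent capabilities without multiple implementation inheritance.
interface Printable {
    String printed();
}

class Employee implements Printable, Comparable<Employee> {
    final String name;
    final int salary;

    Employee(String name, int salary) {
        this.name = name;
        this.salary = salary;
    }

    public String printed() {
        return name + " (" + salary + ")";
    }

    public int compareTo(Employee other) {
        return Integer.compare(salary, other.salary); // orders by salary
    }
}

public class InterfaceDemo {
    public static void main(String[] args) {
        Employee a = new Employee("Ada", 90000);
        Employee b = new Employee("Bob", 80000);
        System.out.println(a.printed());          // prints: Ada (90000)
        System.out.println(a.compareTo(b) > 0);   // prints: true
    }
}
```

        What interfaces cannot give you is inherited *implementation* from two parents; mixin-style code reuse is the one place where C++ multiple inheritance does something Java interfaces of this era do not.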
  • by OffTheRack ( 551671 ) on Saturday June 08, 2002 @07:37AM (#3664540)
    is F#@%. I've seen references to it and some language called SH%T in print before.
  • Think I'll stick to what I know, instead of trying to learn more and more crap each time something new comes out - there's enough standards out there as it is already.
    • by cyborch ( 524661 ) on Saturday June 08, 2002 @08:20AM (#3664623) Homepage Journal

      Think I'll stick to what I know, instead of trying to learn more and more...

      Isn't this the perfect way to fall behind? I think the only way to actually stay on top of things and keep informed is to be willing to keep learning. If you stop learning you will quickly grow ignorant of what is going on in the world and everyone else will get ahead of you. I prefer to keep learning, and therefore hope that I will never adopt your attitude towards new things.

      I have to say, to your defense, that .NET and everything that comes from it has a very good chance of being no more than a publicity stunt from MS. But since the people who made C# were quite competent before MS hired them (and may still be), I will have to look at it at some point, to evaluate it if nothing else. The same will be the case for F#. I will look at it and see it as a chance to learn something new and become a better and more wise programmer.

  • by theolein ( 316044 )
    I don't mean to troll, but I took a quick look at the F# website (which seems to be unfinished and doesn't render properly in Moz) and just had to ask myself what the point of this language is. I agree that .Net is interesting because you can mix languages, but just how useful is this? I can see it being nice because you can code speed-critical parts in C++ and glue the rest together with JScript or VB, but what is the use of a language like F#? Teachers are not suddenly going to jump on this bandwagon, having only recently moved to Java. Not only this, but there is a significant number of CS professors out there who still think that teaching programming in languages like Haskell or Scheme is a good thing, even though these are hardly used in the real world.
    • by Anonymous Brave Guy ( 457657 ) on Saturday June 08, 2002 @08:06AM (#3664590)
      Teachers are not suddenly going to jump on this bandwagon having only recently moved to Java.

      Teachers of real CS courses have been teaching functional programming since before Java was a blink in the eye of its creators. ML, Haskell, Scheme and such are far more popular in CS academia than they are in industry, and OCaml is arguably one of the most powerful functional(-based) languages around. Sorry, but I don't see your point.

      • I just don't see the relevance of these languages. They might be good to get concepts across, but they are practically useless in the real world. When I was at university in the early 80's we learnt everything in Pascal because, as the CS profs claimed, it teaches you structured programming. It was only after I started learning C that I realised that Pascal might be cleaner in terms of syntax, but *it doesn't actually do any more than C*. You can make a messy unreadable programme in *any* language, and Scheme, Haskell, ML etc. are no exception.

        I personally feel that students would be better off learning one of the real-world languages, or at least languages which real-world languages were based on, such as C, Simula or Smalltalk or Java (since C# seems to be based on that). At the same time it should be, IMO, a part of every CS course that students learn to write well-commented code that is legible to maintainers etc.
        • You can make a messy unreadable programme in *any* language

          Yes, but some languages encourage it. Classic Fortran, APL and Perl are all languages where normal code approaches unreadability. You want to teach students in a clean language, so that they have more of a tendency to write in that style.

          *it doesn't actually do any more than C*.

          In a pedantic sense, this is true of every language from Intercal to Perl. In an unpedantic sense, yes, actually, it does. Or did you miss sets and nested functions? And if you don't want to learn a language that doesn't do more than C, why are you complaining about languages like O'Caml, which goes huge steps beyond C in some areas?

          I personally feel that students would be better off learning one of the real-world languages or

          Why? A computer science degree should teach more than how to hack out code, and any decent computer programmer can pick up the basics of a new language in a day, provided it's similar to one they've already learned. If you want real-world languages, why didn't you include Fortran or COBOL, possibly the two most used languages in the world?
        • I just don't see the relevance of these languages. They might be good to get concepts across, but they are practically useless in the real world.

          Writing millions of lines of code projects is a different goal to learning some basic programming concepts. The best tool for one of these goals may or may not be the best tool for the other.

          Getting the concepts across is the whole point of teaching. You succeed if and only if you get the concepts across. Languages are easy to learn, but paradigms are very difficult. I've become a better C++ programmer by learning ML, Python and Perl. I don't use ML in my work, and probably won't in the foreseeable future, but it's exposed me to paradigms that a flexible "real world" language like C++ wouldn't have forced me to learn.

          I personally feel that students would be better off learning one of the real-world languages or at least languages which real-world languages were based on

          The goal of taking a course is not to learn particular technologies. You have trade certifications for that sort of thing. Students need to learn not just one language, but several, to gain broad exposure to different programming paradigms. A good degree will inevitably include some "real world" code, because you need real world languages to get "close to the metal", so courses like operating systems involve some C.

          At the same time it should be, IMO, a part of every CS course that students learn to write well-commented code that is legible to maintainers etc.

          Things like this are the tip of a large iceberg -- you can only really learn to comment properly from experience. Most student code is simple enough that only the headers need to be commented.

          The problem is that it's very difficult to teach "real world programming" within the constraints and limitations of classes. The classes typically only run for a semester, which means that school will not give you much of an idea of what it's like to work with a large codebase over a long period of time.

        • Ask any ML advocate (or Google), and they'll point you to a list of real-world uses of ML and its brethren, including the back ends of several big-name products. These tools are eminently suitable for real-world use, and a remarkably high proportion of those who've tried and documented their results have done well.

          The fact that these languages are not more widely used, given their demonstrated potential, is more a sad indictment of the "professional" software development community than anything else.

      • Unfortunately, it seems that at least in its current implementation F# is an extremely cut-down version of OCaml, with many of the more useful parts omitted. I can't really speak to this, as I don't use OCaml, but it sure looks like that. Now this may be purely because this is an early beta. I don't know. But some languages have found that basic features need to be left out to achieve acceptable conformity with the rules of .NET. It isn't really fair, e.g., to call Eiffel without multiple inheritance Eiffel at all. I suppose that one could call it an Eiffel language, but only for the sake of the pun. It *is* called Eiffel#, and the creator is the creator of Eiffel. To the best of my ability to determine, he should be ashamed of himself. With some languages this wouldn't matter, but multiple inheritance was one of the central features of Eiffel.
        • You appear to have exactly the same concern as I do. Microsoft's "C++ Managed Extensions" and the version of "C++" you can run under .NET are far from the C++ I know and use every day. C++ without MI, templates, most of the standard library, etc., is not really C++ at all.

          Now, given that pretty much anything you write in OCaml is implicitly generic to some degree by default, it's going to be interesting to see what, if anything, they can do about that under .NET. If they have to effectively emulate that genericity, the performance is going to take a Big Hit(TM), as is the credibility of MS' claims about .NET. If, OTOH, .NET can support concepts like generics well, either now or in the near future, MS' credibility is going to go way up.
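
          To make that concrete, here is a tiny OCaml sketch of the "generic by default" point: the compiler infers the most general polymorphic type with no annotations at all.

          ```ocaml
          (* No type annotations anywhere: OCaml infers polymorphic
             ("generic") types automatically. *)
          let swap (a, b) = (b, a)        (* 'a * 'b -> 'b * 'a *)
          let twice f x = f (f x)         (* ('a -> 'a) -> 'a -> 'a *)

          let () =
            assert (swap (1, "one") = ("one", 1));
            assert (twice (fun n -> n + 1) 0 = 2)
          ```

          Any .NET mapping of code like this has to either specialize it per type or fall back on a uniform boxed representation, which is exactly the trade-off at issue.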

    • Yeah.. just look at it:

      <meta http-equiv="Content-Language" content="en-gb">
      <meta name="GENERATOR" content="Microsoft FrontPage 5.0">
      <meta name="ProgId" content="FrontPage.Editor.Document">
      <meta http-equiv="Content-Type" content="text/html; charset=windows-1252">
      <title>F#</title>
      <meta name="Microsoft Theme" content="arcs2 000, default">
      <meta name="Microsoft Border" content="tl, default">

      ... no wonder it doesn't render right in Mozilla. Of course, the ugly buttons gave it away...
    • I don't mean to troll


      Yes you do.

  • I see a pattern coming up:

    win95, win98, win98se, winxp, win2k, winnt, wince, winme...

    activeX, vbs, asp, j++...

    and now c# and f#.

    I foresee in the near future g# h# j# k# and n#.
    (i# and l# are skipped to annoy IBM and Linux people)

  • All this shit and more has been available for years in Smalltalk.
    • by dpbsmith ( 263124 ) on Saturday June 08, 2002 @09:20AM (#3664750) Homepage
      Amen, brother.

      I wouldn't mind it so much if someone would IMPROVE on Smalltalk, but why do we have to spend twenty years devising languages that are not quite as good as Smalltalk (in the name of efficiency, or compatibility, or ease of learning for people who have never learned more than one computer language...)?

      Smalltalk is the only language I've ever used that I felt truly extended my reach as a programmer and truly enabled me to do easily things that would have been difficult in other languages.

      It is also the only language in which I have truly taken big, complicated, rich pieces of code that did ALMOST what I wanted and spent very small amounts of time subclassing them and making them do EXACTLY what I wanted--and without having to spend hours reverse-engineering and understanding what the original code was doing.
      • There were a few stragglers still cranking around in Smalltalk a few years back, but at this point the prospects for this language having a real impact are zilch. Throw it up on the scrap heap with Lisp and the other dead way-cool languages.
        • There were a few stragglers still cranking around in Smalltalk a few years back

          You mean like IBM? Last time I heard, VisualAge's IDE (and probably more) was written in Smalltalk.

          but at this point the prospects for this language having a real impact are zilch.

          You mean, besides the impact it has already had? Smalltalk's OO strongly influenced how Java's OO was set up, among other things.

          Throw it up on the scrap heap with Lisp and the other dead way-cool languages.

          Um, Lisp? You know, the language that drives Sawfish and Emacs? It's hardly dead.

          What is it with people's desire to throw away every language but a couple of the cool new ones? A compiler doesn't stop working because someone comes up with a new language. A large part of the reason these languages die is that people ask "does anyone use that language anymore?" and pick something different, regardless of whether the language is actually still supported, whether it would be the best language for the job, or whether they would most enjoy writing in it. Guess what: there are good compilers for Ada, Lisp, O'Caml, and Smalltalk on Linux and all major platforms, and beyond the libraries that do exist, all of those languages make interfacing with C easy. If you feel it would be easiest to use one of them, or that it would produce the best code, use it! Maybe that will provoke a renaissance in the language; or maybe it will just get the job done well, in an enjoyable fashion.
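
          For instance, calling into C from OCaml takes only an `external` declaration (a sketch; the stub name below is hypothetical, and the matching C stub would be compiled and linked separately):

          ```ocaml
          (* Bind a C function to an OCaml name.  "caml_my_add" is a
             hypothetical C stub that would unwrap its arguments with
             Int_val and re-box its result with Val_int, using the
             macros from <caml/mlvalues.h>. *)
          external my_add : int -> int -> int = "caml_my_add"
          ```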
    • No, "all this shit" has not been around in Smalltalk. OCaml incorporates a lot of ideas and technologies developed in the 1980s and 1990s, after Smalltalk was designed. Whether you need the extra power of OCaml or whether it gets in your way is a separate question, but OCaml is not a subset of Smalltalk, and Smalltalk is not the be-all and end-all of ideas in programming.
  • From the web page:


    F# also happens to be the first released .NET language that is able to produce Generic IL [microsoft.com], and the compiler was really designed with this target language in mind. However the compiler can also produce standard .NET binaries, which is just as well because there is no publicly available release of a .NET Common Language Runtime for .NET that supports generics.


    So it sounds like Microsoft has internal releases of its jitter with full support for execution-time expansion of generics; there's just nothing public yet. I can't wait!
  • Remember when Slashdot said they'd post articles about their sponsors' products? Well, here's the result.

    I'm not anti-MS by any means, but even I can see that F# has very limited appeal. Posting an article about it is very suspicious.
  • by g4dget ( 579145 ) on Saturday June 08, 2002 @10:13AM (#3664862)
    Looks like they did a good job on the generics support in their runtime. In particular, unlike batch C++ compilers, they delay generation of specialized code until it's needed. That gives you the efficiency of C++ (no boxing, for example) without the bloat. This is what Sun should have done. Instead, Java has a half-baked genericity hack that isn't type-safe, and even the more ambitious proposals for Java genericity are more limited by comparison.

    Let's just hope that the specification and implementation strategy for .NET genericity will be open. While there doesn't seem to be anything terribly sophisticated or novel in it, Microsoft could conceivably have applied for a bunch of patents and may try to keep efforts like the Mono project from implementing this.

    As for F# itself, it seems to be a closed-source distribution only, which makes it uninteresting except as a technology demonstration. Microsoft almost certainly has no intention of supporting it commercially, making it effectively an orphaned, single-platform, non-evolving system. Even if it were a commercial product, lack of an open source distribution would make the chances of its adoption nearly zero.

  • That's what this is about, Branding!!!

    I'm not interested in it. In fact, I'm less than zero interested in it, as the generic brand contains the same ingredients and costs a whole hell of a lot less.

    For those stuck on the words used: Microsoft is a brand name. GNU, the FSF, and the GPL are the generic side: the indicator of the generic, the organization that establishes it, and the license that ensures it stays that way.
    • I should add that I'm so tired of Microsoft's abstraction manipulations: its lies about and manipulations of the meanings of words, and so on.

      It's like the boy who cried wolf. At some point it just becomes more productive to simply ignore them and their distortions, and to focus on what actually is, without a lot of crying wolf.

      Remember that developer use of .NET is constrained to non-GPL work.

  • by ndogg ( 158021 )
    I see a few people complaining that there is "yet another language to learn."

    They ask, "Why bother creating another language? There are quite a number of them out there already, and any of them should be able to solve almost any problem."

    Programming languages are an art form, and like any other art form, they deserve respect. Each programming language forces a programmer to think about a problem in a slightly different manner. Prolog forces a programmer to think in terms of goals to be achieved, and OO languages force a programmer to think in terms of self-contained parts of the problem. Different programming languages make it easier to solve different types of problems: Haskell, for example, is a great language for representing infinite sequences because of its lazy evaluation.

    Would you tell a musician not to experiment with sounds not created by traditional instruments because there are already so many musical instruments out there for him/her to use, and any musical piece can be performed with any of them? Musicians come up with new sounds all the time, because those new sounds allow them to view music from another perspective.
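
    As a small illustration of the lazy-evaluation point above, here is what a lazily evaluated infinite sequence looks like in OCaml (a sketch using the standard Seq module; Seq.take assumes a recent OCaml release):

    ```ocaml
    (* An infinite, lazily evaluated sequence of the natural numbers.
       Nothing is computed until a consumer demands elements. *)
    let nats = Seq.unfold (fun n -> Some (n, n + 1)) 0

    (* Only the demanded prefix is ever evaluated. *)
    let first_five = List.of_seq (Seq.take 5 nats)
    (* first_five = [0; 1; 2; 3; 4] *)
    ```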
  • It could have been called ML#, or M# if you really need the one letter thing. But now by calling it F# they have inflicted terrible damage on the community.

    Sure, it's F-unctional/imperative, but there are other functional/imperative languages; how come they got the privilege of the F? It's a non-intuitive choice for something based on ML, and it takes over the F...

    How many headlines in SatireWire et al could have been based on that F#? How many inside jokes, idiomatic expressions? An important piece of the .Net culture has been removed before it had time to blossom.

    Can you use "this F# S# is not compiling!" anymore? No, because it might not be implemented in F# and would confuse your co-workers, who would otherwise have understood your feelings exactly. We will see, ad nauseam, co-workers mocking poor F# developers with jokes that, in the pretense of sophistication, will get old very quickly: "so, I hear you're using that F# language? Am I going to have to read that F# S# when you're done? I hated debugging that when I was in college; my prof was a F# A#".

    Confusion in the workplace will stifle any creative idiomatic use of F#, and nagging by co-workers with repetitive jokes will stifle any creative algorithmic use. F# will die and will kill the F# with it.

    Saying just "piece of S#!" is not the same. And how long before even that is removed, when they implement Smalltalk in .NET? What's next, A# for Algol?

    Why, Microsoft, why?!
  • by mlinksva ( 1755 )
    F# looks cool; I've been meaning to learn O'Caml for a while. But I really want to see E [erights.org]#.
  • by ajs ( 35943 )
    I'm getting sick of these long names like Coctothorpe and Foctothorpe. Can't we just go back to "C", "Perl" and exorbitantly long, "Pascal"?
  • F#UCKING C#UNTS, I say.

    doo bee doo bee doo bee doo bee doo bee doo bee doo bee doo bee

Computers are not intelligent. They only think they are.
