Programming Science

Why Scientists Are Still Using FORTRAN in 2014

Posted by timothy
from the why-change dept.
New submitter InfoJunkie777 (1435969) writes "When you go to any place where 'cutting edge' scientific research is going on, strangely the computer language of choice is FORTRAN, the first commonly used computer language, invented in the 1950s. Its name means FORmula TRANslation, and no language since has been able to match its speed. But three new contenders are explored here. Your thoughts?"


  • by wispoftow (653759) on Friday May 09, 2014 @08:56PM (#46963903)

    1) Modern Fortran is not all uppercase
    2) Modern Fortran does not have to start on column 7
    3) Modern Fortran has dynamic memory allocation
    4) Modern Fortran can use the same types as C (maximizing interoperability), hence can be called where C might be called
    5) Modern Fortran has objects, polymorphism, etc.
    6) Modern Fortran has (a limited form of) pointers
    7) Modern Fortran has concise array/vector/matrix operations
    8) Modern Fortran has dynamically allocatable, multidimensional arrays that can be indexed starting with any integer
    9) Modern Fortran supports the complex type without higgery-jiggery
    10) Modern Fortran doesn't *need *pointers *in *all *the *places *that &C does; pass by reference is the norm
    11) Modern Fortran is blazingly fast and designed for science ...

    Some folks still write in Fortran 77, with all the tired tales of woe that are bound to come from a language specification that is many decades old.

    But, that code/style still works, and who am I to judge how you want to get your work done?
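    A few of the features in the list above can be shown in a short free-form sketch; this is an illustrative example, not code from the original comment:

```fortran
! Illustrative sketch of items 1, 3, and 7 above: lowercase
! free-form source, dynamic allocation, and whole-array math.
program modern_features
  implicit none
  real, allocatable :: a(:), b(:)
  integer :: n

  n = 5
  allocate(a(n), b(n))            ! dynamic memory allocation
  a = [1.0, 2.0, 3.0, 4.0, 5.0]   ! array constructor
  b = 2.0*a + 1.0                 ! concise whole-array arithmetic
  print *, sum(b)                 ! intrinsic reduction over the array
  deallocate(a, b)
end program modern_features
```

    No column-7 rule, no loop needed for the arithmetic: the whole-array statement replaces an explicit DO loop.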

  • by Anonymous Coward on Friday May 09, 2014 @08:58PM (#46963911)
    No, not just "legacy code." Fortran (yes, that's how it's spelt now, not "FORTRAN") was designed to be highly optimizable. Because of the way Fortran handles such things as aliasing, it's compilers can optimize expressions a lot better than other languages.
  • by Rostin (691447) on Friday May 09, 2014 @09:00PM (#46963923)
    I have a PhD in engineering, and my dissertation involved writing lots of code. Now I work at a national lab in the US, and I and nearly all of my coworkers work on scientific or engineering codes of some sort. Although there are significant amounts of legacy code written in Fortran lying around (a project I work on uses a Fortran library written in 1973), very little development is done in that language. It's all C++ or Python.
  • by mbkennel (97636) on Friday May 09, 2014 @09:09PM (#46963977)

    A: Legacy code, and because Fortran 2003+ is a very good modern language for scientific computation and maps very naturally to problems. As it turns out, the language semantics (both legacy and modern constructs) make it very good to parallelize. And it runs fast — as in, merely equalling C++ levels of performance is considered a weak showing.

    If you haven't seen or used modern Fortran and think it's anything like Fortran 66/77 then you're mistaken. Except for I/O, which still tends to suck.

    In addition there are still some seemingly trivial but actually important features which make it better than many alternatives (starting from Fortran 90).

    There are some boneheaded clunkers in other languages that Fortran gets right: obviously, built-in multi-dimensional arrays, AND arrays whose indices can start at 0, 1, or any other value, and which of course know their own size. Some algorithms are written (on paper) with 0-based indexing and others with 1-based, and allowing either one to be expressed naturally lowers the chance of bugs.

    Another one is that Fortran distinguishes between dynamically allocatable, and pointers/references. The history of C has constrained/brain-damaged people to think that to get the first, you must necessarily take the second. That doesn't happen in Fortran, you have ALLOCATABLE arrays (or other things) for run-time allocation of storage, and if you need a pointer (rarer) you can get that too. And Fortran provides the "TARGET" attribute to indicate that something *may be pointed to/referenced*, and by default this is not allowed. No making pointers/references to things which aren't designed to be referred to multiple times. This also means that the aliasing potential is highly controlled & language semantics constructed to make Fortran able to make very aggressive, and safe, optimization assumptions.

    The more parallel you want to go, the more of these assumptions you need to get fast code, and naturally written Fortran code comes this way out of the box, unlike in most other languages.
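    The allocatable-versus-pointer distinction and the arbitrary lower bounds described above can be sketched as follows; an illustrative example, not code from the original comment:

```fortran
! Illustrative sketch: ALLOCATABLE for run-time sizing (no pointer
! semantics), POINTER allowed only at things marked TARGET, and
! array lower bounds starting at any integer.
program alloc_vs_pointer
  implicit none
  real, allocatable :: work(:)   ! run-time allocated, not a pointer
  real, pointer     :: p(:)
  real, target      :: buf(0:9)  ! TARGET: may be pointed to

  allocate(work(-3:3))           ! indices run from -3 to 3
  work = 0.0
  print *, lbound(work), ubound(work)

  p => buf                       ! pointing at non-TARGET data is illegal
  p = 1.0
  print *, size(p)
end program alloc_vs_pointer
```

    Because only TARGET (and POINTER) objects can ever be aliased, the compiler may assume everything else is alias-free.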
  • by rubycodez (864176) on Friday May 09, 2014 @09:28PM (#46964079)

    no, used because Fortran is the high level language that produces the fastest code for numeric computation, it is by far the most optimizable. Yes, it blows away C.

  • by mbone (558574) on Friday May 09, 2014 @10:18PM (#46964273)

    Yeah, I used to hear that argument a lot in 1978...

  • by InfoJunkie777 (1435969) on Friday May 09, 2014 @10:29PM (#46964341)
    OP here. This is what the article said. Compilers are the key. They have been around a long time. Another key is that commercial compilers (like Intel's, for example) further increase the speed, as the vendors know how to optimize the code for the specific CPU at hand.
  • by Anonymous Coward on Friday May 09, 2014 @10:44PM (#46964405)

    > it's[sic] compilers can optimize expressions a lot better than other languages.

    No. Wrong. C has had similar aliasing rules for a very long time. It's only recently that they got turned on in GCC by default. This is what all the -f(no-)strict-aliasing nonsense is all about. The problem is, people know that once, before C89, Fortran was faster. So of course it's still faster. Why would anyone ever modify C to be faster? That's just dumb.

    Also, Blitz++ uses template metaprogramming and a DSL (that looks just like normal C vector math) to optimize BLAS-style operations far beyond what Fortran can do. A fairly minor example is rewriting the expression tree for (A+B)[2] to (A[2]+B[2]) at compile time.

  • by edibobb (113989) on Friday May 09, 2014 @11:25PM (#46964587) Homepage
    This is absolutely right. It's also easier to write, in many cases. Most scientific applications don't need things like lambda expressions or derived classes. Many people who write applications as tools in their research don't want to spend time learning esoteric aspects of languages.
  • by K. S. Kyosuke (729550) on Friday May 09, 2014 @11:50PM (#46964677)

    > Blabbing on and on about vector based GPUs is idiocy, because not everything uses trig where vector based processing is beneficial. I have no confidence you have ever seen math intensive code based on what you are talking about. Nile from their own page is # The Nile Programming Language ## Declarative Stream Processing for Media Applications and NOT a language for Math.

    I find this view quite amusing, given that the whole scope of the VPRI project (of which Nile has been of the intermediate results) is to reduce everything in common personal computing into mathematics. And Nile in particular was designed precisely and explicitly to allow the VPRI people to express as wide an array of graphical operations using as short a high-level description as possible - a mathematical description, in equational form, to allow them to express the majority of Cairo (or any other Cairo-like 2D library) in a few hundred lines of these equations. Furthermore, nowhere have I made the claim that the semantics of Nile in its current form is a perfect replacement for any language for scientific computation, as opposed to the thought that there could be some lessons to be learned.

    And why don't you log in? There seem to be quite a few anonymous psychotic individuals running around here recently. It makes the conversation feel quite disingenuous.

  • by K. S. Kyosuke (729550) on Saturday May 10, 2014 @12:16AM (#46964769)
    Actually, you can get one [pckeyboard.com] for a mere $35! [pckeyboard.com] ;) But no, the surface syntax is definitely something I didn't have in mind.
  • Arrays! (Score:5, Informative)

    by amaurea (2900163) on Saturday May 10, 2014 @02:54AM (#46965129) Homepage

    The big thing Fortran has over C is proper support for multidimensional arrays, with powerful slicing operations built into the language. It was the inspiration for numpy arrays. My first languages were C++ and C, but when I do scientific programming, my languages of choice are now Python and Fortran (with f2py making it very easy to glue them together). Fortran is horrible at text processing and has an almost absent standard library, but for scientific use, good arrays make up for that — especially when you can use Python in the non-performance-critical parts.

    C++ has some multidimensional array classes, but none of them are as convenient as Fortran arrays, especially when it comes to slicing. At least that's how it was the last time I checked.
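    The built-in slicing the parent describes looks like this; an illustrative sketch, not code from the original comment:

```fortran
! Illustrative sketch of built-in multidimensional arrays and
! the slicing operations that numpy's arrays were modeled on.
program slicing
  implicit none
  integer :: i
  real :: m(3, 4), col(3), sub(2, 2)

  m = reshape([(real(i), i = 1, 12)], [3, 4])  ! column-major fill
  col = m(:, 2)        ! the whole second column
  sub = m(1:2, 3:4)    ! a 2x2 rectangular slice
  m(2, :) = 0.0        ! assign to a row slice in place
  print *, col
end program slicing
```

    In C you would hand-compute index arithmetic for each of these; here the slices are first-class expressions the compiler understands.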

  • by stenvar (2789879) on Saturday May 10, 2014 @03:29AM (#46965217)

    Fortran has been an "APL style language" since Fortran 95, with most of the APL operations present. That was done both for optimization and for convenience. And other APL-style languages are very popular as well, foremost MATLAB.

  • by amck (34780) on Saturday May 10, 2014 @03:37AM (#46965235) Homepage

    ALL CAPS has been optional since 1990, at least.

    Fortran has had modularisation and structured code since 1990, and classes and object orientation since 2003. Please update your prejudices.

  • Its not strange (Score:4, Informative)

    by nurb432 (527695) on Saturday May 10, 2014 @08:20AM (#46965865) Homepage Journal

    Most scientists like to use tools that work and that they are proficient in.

    FORTRAN falls under both categories.

  • by gweihir (88907) on Saturday May 10, 2014 @10:20AM (#46966369)

    Yes, FORTRAN sucks, but it is stable, fast and well understood. It runs on a number of supercomputer architectures. It is way easier to program in FORTRAN than in C for non-CS people. So what is the issue? Oh, maybe that this is not a "modern" language? Here is news for you: Java sucks a lot more than FORTRAN.

  • by Athrac (931987) on Saturday May 10, 2014 @01:27PM (#46967689)

    The restrict keyword is not related to threading. C/C++ compilers have always assumed that data is not accessed from several threads without synchronization; it just wasn't standardized until the new memory models in C11 and C++11. So if you don't use mutexes, memory barriers, etc., the compiler is allowed to assume a single thread of execution.

    What restrict does is guarantee that two pointers do not point to the same area in memory (aliasing). Say a function takes two pointers (char* a, char* b). If you write to the data pointed to by a, the compiler has to emit code to re-read the data pointed to by b, because a and b might refer to the same location. With restrict-qualified pointers the compiler doesn't have to do this.

    C/C++ have also always had the concept of strict aliasing, which basically says that pointers of different types may not be used to access the same memory location (char pointers are an exception). It allows some of the same optimizations as the restrict keyword. However, most compilers don't enforce the rule, because programmers are stupid and use all kinds of noncompliant hacks with pointers.
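    For comparison, Fortran gets a restrict-like guarantee by default: the standard forbids calls in which a dummy argument that gets modified aliases another. A hedged, illustrative sketch (the subroutine name is made up for this example):

```fortran
! Because dummy arguments may not alias when one of them is
! modified, the compiler can cache x(i) and y(i) without the
! re-reads described above -- roughly what C's "restrict"
! promises. Illustrative only.
subroutine axpy(n, alpha, x, y)
  implicit none
  integer, intent(in)  :: n
  real,    intent(in)  :: alpha, x(n)
  real, intent(inout)  :: y(n)
  integer :: i
  do i = 1, n
     y(i) = y(i) + alpha * x(i)
  end do
end subroutine axpy
```

    Calling it as axpy(n, a, v, v) would violate that no-aliasing contract and is nonconforming Fortran, which is why the compiler never has to plan for it.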
