Seeking Multi-Platform I/O Libraries?

An Anonymous Coward asks: "I'm just getting ready to plunge into a new project, and joy of joys, have been given complete freedom when it comes to the implementation language - so long as the program will build and run on both x86 Linux and Windows. Now, I don't need a GUI; this is systems stuff only (processing binary executables, in fact, so lots of bit-fiddling and big nasty algorithms over hairy data structures), so pretty much all I need are standard I/O libraries. C is currently at the top of my list... but what other language should I be looking at? I'm happy to learn a new one, and have the go-ahead to do it... like I say, they want absolute speed. Can someone suggest a better language? C++ is out; it does come with a speed hit (using C++ properly, anyway, not as a souped-up C). If I'm going to take the speed hit, I may as well consider something like OCaml, which might let me claw the speed back with better algorithms and data structures..."
  • by JabberWokky ( 19442 ) <slashdot.com@timewarp.org> on Friday April 19, 2002 @08:21AM (#3372592) Homepage Journal
    C++ is out, it does come with a speed hit

    [ in my best announcer voice ]:

    Let's get ready to RUMBLE!!!

    --
    Evan

    • The only speed hit relevant to C++ would be using virtual methods, which is just one of many tools C++ provides you.

      Templates, for example, allow for similar common factoring of code without resorting to inheritance and virtuals.

      C++ certainly has its downsides, but please don't reject it based on arguments that have been repeated over and over without basis.
      • The only speed hit relevant to C++ would be using virtual methods, ...

        Ideally, perhaps, but not in reality. There's definite, significant overhead in using much of C++'s standard library. Time a loop using IOstreams (std::cout) against one using std::printf(). In GCC 3.0, at least, you should find the C++ significantly slower. (In fact, slower than a similar test in Java.) C++ is a good language with a lot of critics, but just because the criticisms have all been heard before doesn't mean they lack merit. One can say most of the performance problems aren't inherent in the language, and that'd be true, but what difference does it make if all of the implementations are so bad?
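
        Roughly the kind of micro-benchmark meant here (just a sketch; pipe stdout to /dev/null when timing, and expect the gap to vary wildly between compilers and library versions):

          #include <cstdio>
          #include <ctime>
          #include <iostream>

          int main() {
              const int N = 1000000;

              std::clock_t t0 = std::clock();
              for (int i = 0; i < N; ++i)
                  std::printf("%d\n", i);          // C stdio

              std::clock_t t1 = std::clock();
              for (int i = 0; i < N; ++i)
                  std::cout << i << '\n';          // C++ IOstreams

              std::clock_t t2 = std::clock();
              std::fprintf(stderr, "printf: %.2fs  iostream: %.2fs\n",
                           double(t1 - t0) / CLOCKS_PER_SEC,
                           double(t2 - t1) / CLOCKS_PER_SEC);
              return 0;
          }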

        • You're right on the money there, unfortunately. There is no theoretical reason for C++ to be slower than C at all; if you implement the equivalent functionality to, say, virtual function dispatch in C, then it's likely to run with at least the same overhead as if the compiler does it for you in C++.

          OTOH, standard library implementations mostly suck. At least they're improving, though, sometimes drastically; check out SGI's latest. And the template implementation actually helps the optimiser a lot in some cases. When you've finished comparing cout and printf, try qsort vs. the C++ library sort and spot the difference.

          IMHO, though, the biggest problem is still the optimisers behind the compilers. We've got decades of experience optimising C to high levels. We've got perhaps a quarter of that optimising things like templates, exceptions and so on in a C++ compiler. Again, it's getting better, and sometimes quantum leaps get made as a new implementation technique is discovered, but it's still got a way to go. That is where the really big practical disadvantage lies, at least for now.

        • Not so. Go benchmark. Iostreams often beat printf et al. Not across the board, mind you; my point is merely that it sometimes does, and that across the board the performance delta is so slight as to be a non-issue. Review Stroustrup et al.'s presentation at SD2000; they addressed this issue.
  • You will likely find that algorithmic improvements will gain you more speed than I/O library efficiency, as long as you avoid VB. Heck, I'd even strongly look at Java with a good JIT. Don't write off anything 'til you've tried it.
    • But given the same algorithmic improvements, you will of course be able to gain additional speed by choosing your language.

      If speed were not so critical, I'd suggest Perl, actually. With the speed demands, and the need for cross-platform IO, I think C is probably what you want to use.

      /Janne
      • I wonder if you could rip out Perl's abstracted I/O layer; it's written in C and is cross-platform.
      • If speed were not so critical, I'd suggest Perl, actually

        If I'm not mistaken, he said "lots of bitfiddling and big nasty algorithms over hairy data structures", not "text processing".

        But then again, you'd probably recommend Perl for embedded real-time applications, too...

        • Perl is actually very good for bit fiddling; the pack and unpack operations would be excellent for this type of work. And it is also a very nice language for manipulating complex data structures with the ability to dynamically create and manipulate hashes and arrays.

          Perl is no good at real-time tasks, of course. I doubt I would consider Perl for heavily calculation-oriented applications either.

          /Janne
  • by Phoukka ( 83589 ) on Friday April 19, 2002 @08:59AM (#3372768)
    I know it's a bit of a stretch, but consider Python. Prototype the heck out of the system in Python, profile the application, then recode the bottlenecks in C. Use SWIG to generate your interfaces. Easier to program, easier to extend, easier to read/maintain. Shorter programming time, too.

    You'll be happier, your fellow programmers will be happier, your successor programmers will be happier, and the chewy parts of your code will still be really fast. Think about it.
    • by Anonymous Coward
      Yes and no. Python is a nice language, and potentially useful. However, where I work we have a "legacy" system written in Python with SWIG (interacting with C++).

      The problem is that under large workloads (which are normal for us) you end up with Python spending more time marshalling and unmarshalling objects than doing real work. It's a PITA. I blame this mostly on SWIG (which I am NOT a fan of; don't get me started on what the maintainers consider good development practice).

      Python's a great choice if you can do it all natively. It's also a great language to prototype in and then "translate" to another language like C++ or C or Java (depending on task and preference). But I wouldn't do the Python+SWIG thing.

      [Note: I'm only posting anonymously to protect my identity. There are certain political factions at work that read /. and would be very unhappy with me discussing this.]
  • Hi, yes! I have a similar question: What is the best language for What I Want To Do? It needs to be able to handle floating point numbers. I don't want to use perl, because it's slower than C but messier than Smalltalk. Also, should I use vi or emacs to edit my source?

    But seriously.. Every language provides standard support for file IO, unless it's totally half-assed*.

    If you actually want to get helpful answers, you might provide a little more information. For instance: How much analysis will you be doing on the files? How much data are you dealing with? Is this probably going to be blocking on input all the time, or does run speed actually matter? How large and/or complicated will your program be? Does cost of deployment really matter?

    * Or halfway totally assed, or whatever.
    • Long answer:
      What you are asking for does not need an entire language, just a library. Get the best library out there, scan through the list of its "binding" languages, then pick the one you are most comfortable with.

      Short answer:
      Common Lisp (CLISP + CLOCC + emacs + ilisp) is a killer.
  • Yes, try O'Caml! (Score:4, Informative)

    by Tom7 ( 102298 ) on Friday April 19, 2002 @09:24AM (#3372907) Homepage Journal
    Yes, I would really recommend O'Caml. Here's why:

    If you just write the same program you would have written in C, the speed will be quite good, probably about 20% slower in CPU time than C. (And if your program is IO-heavy, you might not notice this at all.)

    If you have any sort of limited time or interest (as most projects do), you'll be able to write a much better program in O'Caml than you would in C:

    - Because it's safe, you won't need to ever spend time tracking down or debugging core dumps or memory leaks. Because it's statically typed, a large percentage of bugs are caught at compile-time.
    - If your program is interacting with the network, you won't need to worry about buffer overflows, format string bugs, or most of the common security problems.
    - O'Caml has a much richer core language than C, with support for algebraic datatypes, pattern matching, higher-order functions, threads, modules, and objects. You can do a lot of great stuff with these.
    - O'Caml has a nicer module system than C's (though not as nice as, say, SML's), which keeps your program from getting unmanageable and helps isolate faults to a particular module.

    And by better, I also mean faster -- development wisdom says that algorithms and data structures are what matter most, not just the instruction-level efficiency of your code.

    Of course, if you don't know the language, then it will have a higher startup cost for you. But I think it's worth it; you'll learn a different programming style that can help you think in new ways even when you're writing code in Old School languages. =)

  • Since you mention it yourself, why not actually use OCaml? The "speed hit" isn't too big compared with other languages, and optimizing "nasty algorithms over hairy data structures" will definitely work better than in C.

    Of course, it has a portable IO lib - just because the corresponding module for the more low-level stuff is called "Unix" doesn't mean it isn't available on Windows as well, with some restrictions [inria.fr].

  • c++ is out? (Score:5, Insightful)

    by Aniquel ( 151133 ) on Friday April 19, 2002 @09:50AM (#3373108)
    I'm really very curious why you decided that C++ is out. I understand that the common (mis)perception is that C++ is slower - but let me ask this: have you ever benchmarked it? If not, then I strongly suggest that you don't discount C++ out of hand. It has the cross-platform I/O facility of which you speak (streams), and already has all the (completely debugged) algorithms and advanced data structures. Look, nothing is going to be faster than C (except for hand-tuned assembly) - if you absolutely need every little bit of performance, then don't bother with a language other than C. But if you're looking for a language nearly as fast, with a complete template and streams library, that's portable, then you ought to seriously consider C++. (BTW, I've written extensive projects in C++, 25,000+ lines - there isn't much performance difference, and the benefits of using it far outweigh any other penalties.)
    • Right on.

      Benchmarking is the key. And, it pays to do it every few years or so, as compilers and hardware and software platforms evolve.

      While not related directly to your I/O question, a colleague found that earlier benchmarks we had done for floating-point-intensive calculations, which showed FORTRAN beating C++ by about a factor of two, were outdated. Current tests show them comparable in speed (as long as you're not too careless with your C++).

      I think I/O in C++ can be reasonably fast for most purposes, but again, only as long as you're careful about how you do it.

      By all means, benchmark!

    • Re:c++ is out? (Score:3, Informative)

      by jmv ( 93421 )
      My experience shows that in many situations, C++ can actually be much faster than C (not always of course). The reason: templates and inlining. With inlining, not only do you save function calls (which usually aren't that expensive), but the optimizer is free to use common sub-expression elimination across the "call". With templates, you can produce better generic code. Just compare the C qsort to the C++ sort algorithm. In the first case, you go through a function call by pointer (for the comparison operator) which is *very* expensive, while in the second case, the function will be optimized just for the type you need.
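
      A minimal sketch of that comparison (timing harness omitted; how much the inlined version wins by depends entirely on the compiler):

        #include <algorithm>
        #include <cstdlib>
        #include <vector>

        // C-style comparison: qsort calls this through a function pointer,
        // so it can't be inlined at the call site.
        static int cmp_int(const void* a, const void* b) {
            int x = *static_cast<const int*>(a);
            int y = *static_cast<const int*>(b);
            return (x > y) - (x < y);
        }

        int main() {
            std::vector<int> a(1000000);
            for (std::size_t i = 0; i < a.size(); ++i)
                a[i] = std::rand();
            std::vector<int> b = a;

            // Every comparison goes through the cmp_int pointer.
            std::qsort(&a[0], a.size(), sizeof(int), cmp_int);

            // The comparison (operator< on int) is instantiated for this
            // exact type and can be inlined by the optimizer.
            std::sort(b.begin(), b.end());
            return 0;
        }
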
    • Re:c++ is out? (Score:4, Informative)

      by gkatsi ( 39855 ) on Friday April 19, 2002 @12:55PM (#3374339)
      Even though you have it right that it is a misconception that C++ is slower than C, you miss one very important point: the supposedly slower features of C++ (like virtual functions) do not have an equivalent in C. In fact, in order to achieve the same functionality in C, you would have to hand-code what the compiler already does for you in C++. And we already know that compilers are better than humans at avoiding errors and applying the same solution over and over with good efficiency.

      Moreover, because the compiler knows what you're actually trying to do, it can often perform optimizations that are not possible in C. For the example of virtual function calls, the equivalent in C (both in terms of functionality and efficiency) is calls using function pointers. The difference is that in C++ the compiler often knows the dynamic type of an object (if it's an actual object and not a pointer or reference) and can optimize away the virtual function call and replace it with a static call (or even inline the function). The C compiler is unable to do that.
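
      For illustration, a sketch of the two situations (whether a given compiler actually devirtualizes the second call is implementation-dependent):

        #include <cstdio>

        struct Shape {
            virtual ~Shape() {}
            virtual double area() const = 0;
        };

        struct Square : Shape {
            double side;
            explicit Square(double s) : side(s) {}
            double area() const { return side * side; }
        };

        // Dynamic type unknown inside this function: a real virtual call.
        double area_of(const Shape& s) { return s.area(); }

        int main() {
            Square sq(3.0);
            // Here the compiler can see the dynamic type is Square, so the
            // call may be resolved statically and even inlined.
            std::printf("%f %f\n", sq.area(), area_of(sq));
            return 0;
        }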

      So yes, there are features in C++ that have a performance penalty, but they have no equivalent in C, so the comparison is invalid.

      As for ocaml or other FP languages, I think it's a good idea to try them. Besides the productivity and maintainability gains, you may also have actual efficiency benefits. Again, because the compiler knows what you're trying to do in a high(er) level language, sometimes it can perform obscure but very effective optimizations that can beat what an average or even good C programmer can do.
      • The difference is that in C++ the compiler often knows the dynamic type of an object (if it's an actual object and not a pointer or reference) and can optimize away the virtual function call and replace it with a static call (or even inline the function).

        This is the police. Put down the bong and come out with your hands up.

    • I agree. Here's why using C++ properly doesn't incur much of a performance penalty: The only difference performance-wise when using 'basic' C++ (i.e. classes, inheritance, perhaps operator overloading) is during method calls to class instances, because the this pointer must be passed as an extra argument to the method. This means an extra push instruction on the caller side. Adding this extra instruction isn't much of an overhead.

      The next additional overhead involves virtual methods. A call to a virtual method costs more than a call to a normal method, because there's a memory lookup into the vtable involved.

      The nice thing about C++ is that you opt in to these extra features. Don't use virtual methods if you don't need them. And if you do need them but can't live with the overhead, sometimes you can use templates instead, which use up more memory but are just as efficient as normal classes.
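
      A small sketch of that trade-off (the names are made up for illustration; the template version trades code size for a call that can be resolved at compile time):

        #include <cstdio>

        // Dynamic polymorphism: the call goes through the vtable.
        struct Codec {
            virtual ~Codec() {}
            virtual int transform(int x) const = 0;
        };
        struct XorCodec : Codec {
            int transform(int x) const { return x ^ 0x5A; }
        };

        int run_virtual(const Codec& c, int x) { return c.transform(x); }

        // Static polymorphism: the codec type is a template parameter, so
        // the call is resolved (and can be inlined) at compile time.
        template <class C>
        int run_template(const C& c, int x) { return c.transform(x); }

        int main() {
            XorCodec codec;
            std::printf("%d %d\n", run_virtual(codec, 7), run_template(codec, 7));
            return 0;
        }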

      Don't forget the 90-10 rule. 90% of the program time is spent running 10% of the code. Even if you build your entire program using virtual functions, at the end you can profile it, find those 10% (usually less than that, according to my experience) and optimize them using improved algorithms, time-memory tradeoffs, inlining, and other such methods. I find it hard to believe you'll regret using C++ over C because of the performance hit.

    • Did you or any of the people who replied to you actually read the original post? He said he had to do some nasty input, etc... fstream just won't cut it. He needs something as low-level as stdio, or even just 'open', 'read', 'write', 'close' (notice: no 'f' prefix).
      • Go do a benchmark; stream-based I/O often beats atomic I/O. Never automatically equate low-level code with fast code. Just because it's specified in a way close to the machine representation does not mean it's efficient code. Anyone can write slow C and slow assembly just as easily as they could write slow C++ or slow OCaml.
        • If you're going to read an entire file into memory, the low-level open()/read()/close() calls will be fastest. If you'll be reading a few bytes at a time, streamed I/O with buffering enabled will save quite a few disk hits and context switches.
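
          A sketch of the two styles (the raw calls below are POSIX; on Windows the equivalents are the _open/_read/_close family in <io.h>):

            #include <cstdio>
            #include <fcntl.h>
            #include <unistd.h>   // POSIX only

            // Unbuffered: one system call (and context switch) per byte.
            long count_bytes_raw(const char* path) {
                int fd = open(path, O_RDONLY);
                if (fd < 0) return -1;
                char c;
                long n = 0;
                while (read(fd, &c, 1) == 1)
                    ++n;
                close(fd);
                return n;
            }

            // Buffered: fgetc normally just copies from stdio's buffer, so
            // the kernel is only asked for data when the buffer runs dry.
            long count_bytes_buffered(const char* path) {
                std::FILE* f = std::fopen(path, "rb");
                if (!f) return -1;
                long n = 0;
                while (std::fgetc(f) != EOF)
                    ++n;
                std::fclose(f);
                return n;
            }
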
    • > It has the cross-platform io facility of which you speak

      My work experience is that C++ is not easily portable.
      All C++ compilers I've worked with on various unixen had some kind of brain damage that made most of the advanced C++ features (like templates) nearly unusable.
  • Try using occam with a transputer board or two plugged into your PC.

  • Java is very portable and can do all that bit fiddling just as well as C. The syntax is very similar to C, so it shouldn't take long to adapt.

    Once you have written the program for Linux, the exact same code would work on Windows. Write the program once, not twice. Save yourself some time.

    You won't have to worry anywhere near as much about messing up a pointer somewhere or about allocating the wrong amount of memory.

    Performance? If you're worried about performance, then you have not used a recent copy of Java. Find Java 1.3 or 1.4 and try it for yourself. I've got a Java program that scans through about 6,500 Novell user accounts in under two minutes. Performance is not a problem unless you want a speedy GUI.

    Since you're not needing a GUI, I think Java would be an excellent choice.
    • Exactly.
      And at some point, if you decide you want a GUI, take a look at IBM's SWT instead of AWT and Swing. Programs written with SWT are indistinguishable from native programs on Windows, Linux and Solaris (That's all I've tested BTW, and in addition SWT has bindings for Photon so you can run it on QNX!).
    • Performance? If you're worried about performance, then you have not used a recent copy of Java.

      But if you do get into a performance problem in a particular section of your program (from the sound of it it'll probably be an algorithm-related part), you can always implement that part in C and call it from Java using JNI. This still leaves most of your program cross-platform, while solving the performance bottleneck.

  • If you know C best, use C. If you know Java best, use Java. Ditto for Perl.

    Really.

    The better you know a language, the faster you will be able to write your app, the more optimized it will be, the fewer bugs it will have, and so on. This is common sense.

    (I was going to have a really smart-assed comment on Logo, but I'll reserve that for later....)
  • Use a threaded, compiled FORTH implementation, if you fancy learning a new language. In fact, you can start off by writing yourself a little FORTH nucleus in C to bootstrap itself. You can easily add new primitives (i.e. machine code) by simply writing a new function and plugging it into the dictionary, or even a C function with inline assembler if speed is that important. You can write your own words to allocate and initialise memory for all the data structures you need, etc. It'll be a great learning experience.
  • by CaptainAbstraction ( 43162 ) on Friday April 19, 2002 @11:30AM (#3373767)
    This is more than just a language question. It looks like you're starting to get the standard responses already for Java, C++, etc.

    But all of these opinions presume that you're fairly experienced in these languages. Ignore them.

    Language experience/familiarity is THE factor here, so don't discount it. Someone who has been eating and breathing Java would likely produce speedier code than someone who is just learning C, for example.

    Your employer/client wants SPEED. This project involves hairy and complicated bit fiddling. I would suggest NOT using this project to learn a new language, for the risks outweigh the rewards in this situation.

    If you choose to use a new language for this critical job, you're setting yourself up for disappointment. Do not forget that you're going to have to go through all the growing pains associated with a new language. You're going to spend weekends tracking down (and learning from) all the newbie mistakes one makes with a new language. You are going to encounter new and unfamiliar bugs at all levels - logical design, physical design, semantic, syntactic.

    Do you really want to spend your nights and weekends figuring out what the heck is throwing some particular Java exception seemingly at random? Why your C++ function template specialization is being ignored?

    Learning a new language is exhilarating, but that will quickly turn to FRUSTRATION when you run into that weekend-long show-stopper bug.

    With your product being measured by performance, and with deadlines looming... When it comes down to crunch-time, I think the choice is OBVIOUS!!

    Choose a different, fun project to learn a new language. But for this product you're delivering, I would encourage you to stick with the tools you know and love.

    Best,
    Captain Abstraction
  • Even though both NT and Linux are POSIX compliant, there are enough quirks in the implementations to cause trouble, especially with regard to multi-threading libraries. As long as you use C or C++ (or any language that does not provide both a rich threading interface and good runtime support), consider using the NSPR [mozilla.org] libraries, which are meant to provide a rich set of cross-platform interfaces.
  • Use C (Score:5, Funny)

    by mccalli ( 323026 ) on Friday April 19, 2002 @12:53PM (#3374327) Homepage
    Looking for an IO library standard across platforms?

    #include <stdio.h>

    Says it all really.

    Cheers,
    Ian

    • I'd add:
      #include <stdlib.h>

      Yeah, that does say it all. I have been working on such a thing: attempting to do a cross-platform library, so that it will at least be source compatible.

      This is really difficult to do. If all you are doing is memcpy, file I/O, and printf, then it is possible. If you get into sockets then it gets a little more machine-dependent. User log-on and log-off is even worse.

      One option is to pick a cross-platform C API. glib may work. I think there is a port to Windows, and if not it should still work under Cygwin. Its speed is not that bad, and it gives you things like sockets and linked lists and all the things you'd need for a daemon process or a simple non-GUI program.
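
      For the stdio-only case, a sketch of the kind of code that stays portable for binary work (illustrative only): read bytes and assemble values yourself, so the result doesn't depend on host endianness or struct padding.

        #include <cstdio>

        // Read a 32-bit little-endian word byte by byte.
        bool read_u32_le(std::FILE* f, unsigned long& out) {
            unsigned char b[4];
            if (std::fread(b, 1, 4, f) != 4)
                return false;
            out = (unsigned long)b[0]
                | ((unsigned long)b[1] << 8)
                | ((unsigned long)b[2] << 16)
                | ((unsigned long)b[3] << 24);
            return true;
        }

        int main(int argc, char** argv) {
            if (argc < 2) return 1;
            std::FILE* f = std::fopen(argv[1], "rb");   // the "b" matters on Windows
            if (!f) return 1;
            unsigned long magic;
            if (read_u32_le(f, magic))
                std::printf("first word: 0x%08lx\n", magic);
            std::fclose(f);
            return 0;
        }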

  • I had a lot of good experiences using memory-mapped files. If you need random access to the file body, as opposed to sequential access (streams), pick whatever has memory-mapped files in it. That would be C, C++, or Java SDK 1.4.
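
    A minimal sketch of the POSIX flavour (on Windows the same idea is spelled CreateFileMapping/MapViewOfFile, and Java 1.4 exposes it through java.nio):

      #include <cstdio>
      #include <fcntl.h>
      #include <sys/mman.h>
      #include <sys/stat.h>
      #include <unistd.h>

      int main(int argc, char** argv) {
          if (argc < 2) return 1;
          int fd = open(argv[1], O_RDONLY);
          if (fd < 0) return 1;
          struct stat st;
          if (fstat(fd, &st) != 0) return 1;

          // Map the whole file; pages are faulted in on demand, so random
          // access needs no explicit seeks or reads.
          void* p = mmap(0, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
          if (p == MAP_FAILED) return 1;
          const unsigned char* data = static_cast<const unsigned char*>(p);

          unsigned long sum = 0;
          for (off_t i = 0; i < st.st_size; ++i)
              sum += data[i];
          std::printf("checksum: %lu\n", sum);

          munmap(p, st.st_size);
          close(fd);
          return 0;
      }
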
  • O'Caml is an excellent choice. I think it should definitely be your first choice.

    Nevertheless, C++ can be fast, powerful, and simple as well. People have problems with C++ if they don't understand it well or if they work with people who don't understand it well. That is a real problem (most commercial and open source C++ programs and libraries are awful), but don't blame the language.

  • Ok, so Java isn't the greatest at performance, but it is cross-platform.
  • by Anonymous Coward

    Apache 2.0 is based on an excellent platform independent IO library (and many other cross platform data types, data structures, etc), the Apache Portable Runtime. It's written in C, and it's fast.

    http://apr.apache.org/
  • wow (Score:5, Funny)

    by sinserve ( 455889 ) on Friday April 19, 2002 @02:07PM (#3374874)
    Your "speed" priority, and the binary processing bit, got me almost sold, and then
    I saw O'Caml!!

    You quiche eating wanker, how COULD you forget assembly? Isn't that what programming is
    all about? And WHY are you comparing C to O'Caml, a fine assembly macro language, to
    shity ML dialect used by equally hard-wanking mathematicians and abstractly thinking
    creatures? If these wankmaticians knew how the world operated, they would not
    have invented recursion let alone APPROVED of inductions as a sane, corner stone
    princible in their so called "art". Induction is only possible as long as the
    the "counter" register can hold your index, and recurssion is the crackwhore narcessistic
    twin sister of iteration (there is nothing she does, iteration can't do with
    a well placed label and a jump.)

    Listen to me son, read Quine, Boole and DeMorgan, get the manual to your processor,
    and "script" at the level of the ONE TRUE ABSTRACTION LAYER.
  • by PD ( 9577 ) <slashdotlinux@pdrap.org> on Friday April 19, 2002 @02:14PM (#3374920) Homepage Journal
    How can you use it improperly? C++ is an object capable language, not a strict object oriented language. If you want to use objects, then fine. If not, then please don't.

    Object-oriented development is a tremendous thing, useful for many problems, and a marvel of overcoming complexity through abstraction.

    BUT, OOP is not the solution for everything. There are many problems that don't need an object structure and should be written another way. Above all, drop the notion that C++ should be used only a certain way to be "proper". The latest cool feature of C++, the Standard Template Library, isn't even object oriented - it's GENERIC, because that type of programming was just the right thing to do for that library.

  • by WetCat ( 558132 )
    Try Tcl.
    For me, using Tcl increased my performance by 60% (especially when using its [incr Tcl] OO extension).
    Tcl works on most Unices, Windows, Mac, VMS, Palm Pilot...
    The Tk graphical library is so successful that other languages (Perl, Prolog, Python) are using it.

  • I'm not seeking Multi-Platform I/O Libraries. Thanks for asking.

  • One Word: (Score:3, Funny)

    by brunes69 ( 86786 ) <`gro.daetsriek' `ta' `todhsals'> on Friday April 19, 2002 @03:44PM (#3375477)

    QBasic.

    • I am glad that somebody apart from me ever used QBasic.

      QBasic was a cool language, and I used it a lot in my childhood days... when I was learning programming...

      But now I just love C/C++... it's more structured and cool... I just love it...
    • I coded in QBasic for 6 years before I learned Pascal, with never an indention. Now that I've moved on to "real, professional" coding in C, C++ and Java, I can't help but look back and notice that the stuff I could do in QBasic was a lot cooler than the stuff I can do in C and Java today.

      I'm a worse programmer today, and the worst part is, I can't remember any of it... :(
  • by Anonymous Coward
    You may want to try AT&T's sfio [att.com], coauthored by David Korn of the shell by the same name fame.
  • by SIGFPE ( 97527 ) on Friday April 19, 2002 @04:38PM (#3375774) Homepage
    Last time I checked, people were writing faster [oonumerics.org], readable code in C++ than in C.


    A smart C++ programmer can use template metaprogramming in a library like Blitz++ [oonumerics.org] to automatically build code optimised for the job. To write the equivalent code in C is possible but it's much more laborious and harder to maintain.
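
    Not Blitz++ itself, but a toy sketch of the technique: the templates below unroll a fixed-size dot product at compile time, with no runtime loop or counter left over.

      #include <cstdio>

      // Compile-time recursion over the array length N.
      template <int N>
      struct Dot {
          static double eval(const double* a, const double* b) {
              return a[0] * b[0] + Dot<N - 1>::eval(a + 1, b + 1);
          }
      };

      template <>
      struct Dot<0> {
          static double eval(const double*, const double*) { return 0.0; }
      };

      int main() {
          double a[4] = {1, 2, 3, 4};
          double b[4] = {5, 6, 7, 8};
          std::printf("%f\n", Dot<4>::eval(a, b));   // fully unrolled: 70.0
          return 0;
      }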


    There are good reasons not to use C++. Performance isn't one of them.

    • Sure it is. Ever clocked the time it takes for KDE to start up? Sure, the runtime speed is fine... but the load time for all that load-time linking is too much.

      Oh, and before someone says "just apply the prelink patch"... tried it, but I can't spare my computer for the days it would take to compile all that C++ code.
      • WTF are you blathering on about? KDE's dynamic linking broker has nothing whatsoever to do with the performance of C++ in general.

        That's like saying C sucks because my linear search of my 100MB dataset is slow. Perl is much faster when I use binary search, therefore Perl is always faster than C.

        Now, you *have* raised one valid point about template-heavy C++: it can be a bit slower to compile. So far, this has never been an issue for me. By far the longest compiles I've had to deal with were output from yacc, which is, of course, POCC (plain ole C code).
          • Not talking about KDE's dynamic linking broker. WTF are _you_ blathering about? What it has to do with is the high overhead of using deep levels of inheritance, and the memory usage.
            • These traits that you point out aren't necessarily C++ problems. Yeah... some people get carried away with Russian-doll-like hierarchies in C++ - but some people don't. Similarly, there's no reason for C++ to have much of a memory overhead compared to C. If you use virtual functions you might get a tiny performance hit and memory hit, but plenty of C code uses tables of pointers to functions. I think the problems you're seeing are due to the way programmers who like bloat are drawn to C++, rather than being an inherent C++ problem.
          • You're clearly confused. The performance of ld.so has little to do with C++ and nothing to do with deep levels of inheritance. There IS an issue with virtual functions (vtables) and relocations, which is probably what you're trying to reference, but this isn't really an issue with C++ and the problem can be addressed by lazy binding of vtables. It's also completely moot when we're talking about a low-level C++ application: the chances of him needing to do heavy dynamic linking or having a vast framework of objects with significant numbers of virtual functions is so slim that it doesn't even bear mentioning in this discussion.
  • If you're really looking for performance, then you should look no further than C#. I wrote a C application and a C# application to compare the performance. The C app was over 1500% slower than the C# app. Then of course, I did have some infinite loops in the C app ;-)
  • Write it in Intel assembler. All the bit fiddling you want or not. Then simply cobble up some I/O code for the target platforms you want to run on. Should take you about an afternoon - if you know what you are doing, which it sure doesn't sound like from your dumb question.
  • You can choose to use a general-purpose language which has a good spread of capabilities, or you can go with a best-of-breed language in the area you are trying to work in.

    For general projects, I use a mix of Python and C++. I'd say the best-of-breed language for text would be Perl, for math would be Haskell, and for getting down to the metal would be assembler.

    For what you are trying to do, the no-brainer choice would be souped-up C, i.e. C which uses a few C++ features to make your life easier.

  • by Jayson ( 2343 )
    K [kx.com] is a high-performance data processing language. It is a high-level language with very fast performance (it even beats out well-written C code). Many people, after switching to K, have noticed a 100x decrease in code size (yes, 2 orders of magnitude) and sometimes even more. It has very high-performance I/O facilities and was explicitly made for munching data. It is cross-platform and runs on NT, Solaris, Linux, FreeBSD, and AIX (you can probably get a build for other systems, too, since the guy who wrote it is very nice about that).

    Some of the K programming maxims are that memmap is better than read/write (the native file I/O is memory-mapped), operating over bulk data is better than scalar data (the language is built around bulk operators), and terse code is good.

    There is a warning, though. K is very elite and may be too elite for you (it was for me at first), but it is very easy to learn.

  • No one's mentioned Borland's tools, but I think they'd fit the bill. Borland has great compiler technology, and it will compile and run cleanly across Linux and Windows (possibly with a few {$IFDEF}s). It has an I/O library that's as capable as C's (maybe a bit more wordy sometimes). Developing and debugging in Kylix is *much* quicker, in my experience, than using gcc/gdb. It's truly compiled, the compiler is lightning fast, and the integrated debugger is quite a bit more efficient than gdb based solutions.
  • Take a look at this guy's page [bagley.org], some interesting benchmarks between a number of computer languages for a number of well known algorithms.
  • Given the stated requirements, Ada 95 should be in the trade space. Only downside I can think of is that while there are several vendors for the Windows side, I am only aware of a single vendor for a Linux Ada 95 compiler (www.gnat.com).

    You can download a non-supported version (Windows and Linux) from
    ftp://ftp.cs.nyu.edu/pub/gnat
    or wait a few weeks for GCC 3.1 to be released (since the Ada 95 GNAT front end will now be included).
  • It ain't just another rave drug. You too can be a coder...

    It has a few things going for it.
    ** Faster than a turtle
    ** Anyone can code it
    ** Doesn't show up during random drug testing
