Tim Sweeney On Programming Languages

Fargo writes "GameSpy posted a six-page essay of Tim Sweeney's entitled "A Critical Look at Programming Languages." Tim Sweeney is, of course, the lead coder for the Unreal game engine (one of the most-licensed 3D game engines for the PC). Juicy quote: "We game developers hold the keys to the future of hardware and software technology... Games were responsible for creating the market which enabled 3dfx and NVidia to mass-produce $100 graphics chips which outperform $100,000 Silicon Graphics workstations. In terms of technological progress, we game developers are way more influential than most of us realize." "

  • I think Sweeney's got some really, really good ideas here, especially as far as virtual classes go. I can see that being very useful in the future. I wonder if there are any languages currently in development that take advantage of any of the features he was talking about.

    I'm fairly new to Perl, but it seems to me that Perl does SOME of the things he talks about for his "language of the future".

    At the very least, I now have a compelling reason to learn UnrealScript.

  • Everything he talks about as "NextGeneration" is stuff I learned 10 years ago when I first picked up OOP. Even back then this stuff was old hat.

    Indeed, time has proven that framework-based approaches and meta-object programming are not a silver bullet either. They all have trade-offs.
  • by jelwell ( 2152 ) on Tuesday January 25, 2000 @02:44PM (#1336867)
    As a good example, look at the huge problem GNU [gnu.org] has had coming up with an English word for "free as in speech" rather than "free as in beer". The bottom line is, there is no good word - if you disagree, contact GNU [mailto]

    This problem creates havoc when trying to explain to people the idea behind free software. Sure, they understand "freedom", but there is no good adjective in the English language that can be used on objects that are normally bought and sold (commodities) to describe this freedom. Because of the lack of a word for it, the idea becomes much more difficult to understand - and truly leads many people into thinking that free software is something other than what it is meant to be.

    Free speech == words that cost no money? If not how do you say it?
    Joseph Elwell.
  • by mcc ( 14761 ) <amcclure@purdue.edu> on Tuesday January 25, 2000 @02:44PM (#1336868) Homepage
    i am shocked by the incredible ignorance displayed in this article, by the way it covers such a tiny division of the programming languages in wide use, and such bad languages at that. This person seems to think that everyone in the world uses obscure, cumbersome languages like C, C++, objective C, java, perl, lisp, PHP, Bash, FORTRAN, Cobol, Forth, smalltalk or some form of assembly. What an isolated world this person must live in! He seems to have some extreme bias toward use of functional programming languages.

    Specifically, i am very annoyed by the total lack of any discussion of INTERCAL [muppetlabs.com], Unlambda [eleves.ens.fr], or Orthogonal [tuxedo.org] - what i feel to be the most important languages out there, especially for games. None of these were even mentioned! Why would you write a first-person shooter in C++ instead of INTERCAL? Why, as far as i'm aware there isn't even OpenGL available for C-based languages!

    If you don't like these three above for some silly reason, at LEAST use Forth. any language where you can't redefine the value of four is for wimps. Or use Visual Basic-- its usefulness, portability, flexibility and sheer power are unparalleled. (i'm sorry, that last bit was a little over the top, wasn't it?)

    -mcc
    hmm. that reminds me, i need to learn objective c..
    2B OR NOT 2B == FF
  • Shi a! Wo shuo henhaode hanyu. ("Yes! I speak very good Chinese.")

    Actually, my Chinese sucks. :^) I have no idea how to say 'moral hazard' in Chinese. I've studied the language quite a lot, but don't get that much practice.

    I have, however, faced similar enough problems in other languages to feel confident in my answer. I invite you to look through the linguistics literature if you want a fluent Chinese speaker to pronounce on the issue. I'm sure there is one who has published on the topic.
  • Unless computers start doing more than the standard copy/compute/control operations (i.e. when we need to make quantum compilers), the choice of language doesn't make much difference.

    The backward compatibility he goes on about at length is nothing new, and can be done in any language; some of them just try to do it automatically and hide it better than others.

    The only factor that matters is "How fast can we spit out a profitable product with language X".

    Choose your language accordingly.



  • If this hypothetical economist wishes to show off his English, or simply because any short Chinese term he might use for 'moral hazard' implies too many unwanted connotations, he may simply plop the English term 'moral hazard' into the language. That's how 'deja vu' started.

    What you are saying is that importing a foreign term into your language is deja vu - oups, I meant "already seen" ;)

    BTW, I'm French, and at first I was saying "already seen" instead of "deja vu" because I didn't know how to say it, until somebody told me :). Now when I don't know a term I use the French one, and half the time it's the right one or a close one.
  • I agree. The cost of hardware is minuscule compared to the cost of software maintenance. I mean, almost insignificant.

    My machine at work cost about $2000. There is a full-time person paid about $40,000 a year to support about 25 such machines, and that doesn't count lost productivity due to buggy software, reboots, etc.

    I want a programming language that will let me write:

    DoTheRightThing().

    And I won't care how slow it is.
  • I found, if not the paper then what seems to be a good precis of it, and read it -- (all this and more [demon.co.uk]) -- it's irrelevant to the subject discussed. I'm at some difficulty to see why you thought it would be relevant. Proving that some concepts are cross-lingual impinges not at all on the question of whether all are, which lies at the heart of the "does language limit thought" debate.
  • I was 12 or 13 and writing in QuickBASIC (actually bought the compiler and actually sold 2 programs written in it... it paid for itself, so I was happy) when I started playing ZZT. Then I printed out the programming manual. It was a whole new world. I had never heard of C++ or anything close to object-oriented technology, but it all made sense. I truly loved ZZT.

    There is one drawback, and you might not want to hear this... ZZT has no throttling function... it's basically impossible to run on anything much faster than a 486; everything runs too fast... sigh... I still play text-graphics-based games almost daily (been playing Angband and variants for almost 10 years... I will beat one someday!) and ZZT would be a hoot to relive. If you or anyone else reading this knows of a program or some way that I can throttle my 350MHz machine to run this program in a decent manner, I would love to hear of it...

    I wonder if Tim would still have/been able to release the ZZT source... ?

    Esperandi
    Would gladly pay for the ZZT source code and thinks its just fine if they wanna charge for it.
  • If you redefine cognitive science to include any scientific data about human minds, I'll agree that the only scientific inputs linguistics is getting come from CogSci. But by that definition, most linguists are cognitive scientists.

    I could argue that the data coming from computer science, while helpful, isn't science at all.

    The roots of linguistics aren't descriptive, they're even worse: they're philological. Early linguists were interested in literature and historic language change. That's the one thing I have to give Chomsky credit for: establishing linguistics outside of the humanities.

    Linguistics is no longer taught as an art at any respectable school of theoretical linguistics. There are still many universities where they teach what I might risibly call applied linguistics but which mostly ought to be a part of the education department. That is a different story, but it also has different goals.

    I spent two semesters in math classes and three in programming labs in order to get my linguistics degree. I spent hours doing SQL queries on usage databases in hopes of getting enough data to do serious statistical analysis to support my hypothesis.

    More recently, it has become harder to be taken seriously as a young linguist without some knowledge of AI methods. (This doesn't apply to the hard-core Chomskyans, who don't believe in AI, as if that made any difference.)

    In short, things are changing in linguistics.

    As for having no feel for science, I have a Bachelor's in Nuclear Physics (from before doing linguistics) and let me point out that my citation of Berlin and Kay is not inappropriate. If the strongest form of Sapir-Whorf were true, this experiment should not have turned out that way. It covers more ground than the neurology of vision processing, since it shows that the mental manipulation of visual signals is independent of the native language of the subject.

    We can reformulate Sapir-Whorf to get around this, but depending on who you read, we are left either with a hypothesis that can't be verified (and is thus not science) or a hypothesis so weak as to be useless.

    Read Pinker on the subject, he puts forward his case against Sapir-Whorf on the basis (in part) of Berlin and Kay as a cognitive scientist.
  • Hey! Forth rules!

    If I had to throw away all my nice dev tools and start over from popping in a byte at a time with 8 toggles and an enter key, the first thing I would do is code a Forth interpreter/compiler; even before I wrote an assembler (it's easier).

    The only reason I don't use Forth now is because I've got the entire UNIX dev environment to support me, which is almost as efficient as Forth and gives you some hint of portability.

    But Forth builds good programmers. You learn never to make a mistake, because mistakes eat your machine. You learn to debug your code by eyeball and brainpower, not with any machine aid. You learn to rewrite everything every time, and you learn that this can be faster than reading the docs for a library.

    (BTW, none of this applies to ANS Forth, which is a bad joke from the people who made Ada; Forth is something you should rewrite for every machine you use it on, just ask Chuck [ultratechnology.com])

    Oh, you laugh now, but just you wait until we start writing massively parallel systems and you're begging Chuck to burn a thousand of his 250 MIPS (with a cheap fab) 10k transistor (with video processing!) processors on one chip.
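
    For readers who have never touched Forth, here is a rough idea of why bootstrapping one is so easy: the whole language is a stack plus a dictionary of "words". This is only an illustrative toy in C++, not any real Forth, and the word set is made up:

    // Toy sketch of a Forth-style interpreter: a data stack plus a dictionary
    // of named "words".  Anything not in the dictionary is pushed as a number.
    #include <functional>
    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>
    #include <vector>

    int main() {
        std::vector<long> stack;
        std::map<std::string, std::function<void()>> words = {
            {"+",   [&]{ long b = stack.back(); stack.pop_back(); stack.back() += b; }},
            {"*",   [&]{ long b = stack.back(); stack.pop_back(); stack.back() *= b; }},
            {"dup", [&]{ stack.push_back(stack.back()); }},
            {".",   [&]{ std::cout << stack.back() << '\n'; stack.pop_back(); }},
        };

        std::istringstream program("3 dup * 4 + .");    // prints 13
        std::string token;
        while (program >> token) {
            auto it = words.find(token);
            if (it != words.end()) it->second();        // execute a defined word
            else stack.push_back(std::stol(token));     // otherwise it's a literal
        }
    }

    A real Forth also lets you define new words from old ones at run time, which is what makes the "rewrite it for every machine" approach practical.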

  • Speak for yourself, buddy. My linguistics courses were plenty scientific. Perhaps you think that if we don't use numbers, it's not science.

    And Berlin-Kay is in no way only about the purely physical process of converting signals from rods and cones in the eye into nerve signals, as you seem to say. If you've ever read their work and understood it, you'd know that. Or perhaps you think that there's nothing behind the eyes doing the processing that allows you to describe color?
  • Orwell's idea was mainly that with a simplified language you'd have a hard time discussing treasonous thoughts, even though you could still have them.

    Big Brother was personified, and anyone could want that image to be killed.

    But, explaining that to enough other people to make it happen... That's where it's tough with a newspeak type language. And that was the idea. Keep conspirators alone, and thus unable to conspire.
  • ...and may even be possible, but it's a very weak form.

    Now, computer languages and human languages are very different matters. It is true that I pick my languages on the basis of what kind of project I have to do, unless the spec requires a specific language. When that happens, I can usually solve the problem, even if it isn't as much fun. If I can solve the problem in any language, I have some reason to doubt that the choice of language restricts project choices.

    So, I'll retain my scepticism with regard to computer languages, but it's just scepticism, I may be wrong about the habits of programmers.
  • by Junks Jerzey ( 54586 ) on Tuesday January 25, 2000 @03:54PM (#1336904)
    Saw this at gamasutra.com:

    Toward Programmer Interactivity: Writing Games In Modern Programming Languages [gamasutra.com]

    But I guess this guy isn't as well known as Sweeney :)
  • by Tim Pierce ( 19033 ) on Tuesday January 25, 2000 @02:57PM (#1336905)

    Games were responsible for creating the market which enabled 3dfx and NVidia to mass-produce $100 graphics chips which outperform $100,000 Silicon Graphics workstations. In terms of technological progress, we game developers are way more influential than most of us realize.

    My father has argued for years that the ``real purpose'' of the personal computer is gaming. Everything else is just an excuse to develop and buy these fabulously expensive toys.

    For a long time, I thought it was a pretty funny joke, but a couple of years ago I realized that he was exactly right in a very real sense. No other market drives the demand for powerful hardware as much as the game market. There are few applications that really demand such high processor speeds or bus clock rates or monster graphics cards, and those applications tend to be special-purpose high-end professional tools like CAD or scientific visualization software. Gaming is an end-user luxury market, and it is the chief reason the bleeding edge of computer hardware keeps advancing.

    It's true. The real reason that the personal computer exists is so that we can play games. It's very fortunate that we have been able to construct this multi-billion-dollar industry to hide that fact.

  • in those cases you throw out the rulebook and make it efficient for the narrow task at hand.

    However, "little languages" have a nasty habit of growing into full-fledged programming languages and the compromises that were fine when it was just a quick hack suddenly become ugly, cancerous warts.

    IMHO, Tcl is an excellent example of a "little language" pushed too far. It was originally designed as a glue language for bits of C. The designer never envisaged anyone writing fully-fledged programs purely in Tcl, and consequently there are some quite large warts lying around.

  • A large portion of the reading audience of GameSpy will be younger gamers with very little knowledge of Object Oriented Programming. Some with no programming experience, or interest in such programming.

    Also, most will be hardcore gamers, which isn't always the same as hardcore geeks. These people are interested to hear that their hobbies are near the cutting edge of technology. And with the right development cycle, computer gaming has the capability of *always* being near the forefront of technology.

    Because you all remember when running a phone-book type database was 'cutting edge' on D-Base III, right? `8r)

    Disclaimer: I'm the Unix admin for GSI [gamespy3d.com], running the chat portions.

    --
    Gonzo Granzeau


  • I had a look there, and the only paper to refer to Berlin-Kay gave a very rough overview of it. If you'd actually bothered to read the full paper (as some of us have), you'd know that it is about this very topic.

    Keep your "opinions" to yourself, thank you very much, unless you have some facts to back them up.

  • No it isn't! BUT, since you're obviously not interested in genuine debate, I guess "here endeth the argument". (But one last hint: there's a strong argument for calling all of philosophy since the Vienna School "Linguistic Philosophy" - but I guess now that you've cleared it all up, they can all just go home/roll in their graves (pick an appropriate response).)
  • Hmm... Java is dynamically typed, and most of the time it isn't "really, truly" compiled. (Unless you count compiling for a VM as "really, truly").


    That was true of the really old java run-time environments, but it hasn't been for a while now. All of the RTEs I know of compile the bytecode to native code before they execute it.

    I know, a minor point.
  • by Esperandi ( 87863 ) on Tuesday January 25, 2000 @03:03PM (#1336918)
    Eiffel is the object-oriented language at the forefront of object-oriented development. It's not widely used, but whenever a new concept in object-oriented computing comes around, Eiffel incorporates it. I don't know enough of Eiffel to know if it supports "virtual classes" yet, but I'm sure that if it doesn't, it's only a matter of time...

    It does include something that will probably get included in whatever language comes down the pike in the future (and there are ugly hacks that try to do these things in C++ and Java that people actually SELL)... Design By Contract is one, and incremental compilation built into the language is another. Design By Contract is where the gold is for me (then again, I've never made a single change to a program and had to wait 2 hours for a recompile)... Once the source code is done in Java and C++, once your classes are frozen and done, you have to write documentation. With Eiffel you just hit the "generate short form" button and it generates an astounding amount of information for your classes: variable types, bounds, everything. And it does it in every format imaginable. Setting hard code-level bounds on variables with a single keyword is golden, and setting down the bounds of things being passed into a method and the things coming out of it is wonderful.

    Even better is that 99% of these really high-level features cause NO code bloat; they're there to speed debugging, facilitate actual code reuse, etc.

    How many people do you know in C++ who are actually comfortable with, and practice, the "black box" theory of classes?

    Esperandi
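
    For those who haven't seen Design By Contract, the closest thing available in plain C++ is hand-written assertions; the sketch below is only an analogue of the idea (in Eiffel the require/ensure clauses, class invariants and the short-form documentation are built into the language, which is the point being made above):

    // Rough C++ analogue of an Eiffel-style contract, written by hand with
    // assertions.  Eiffel declares and checks these clauses automatically.
    #include <algorithm>
    #include <cassert>
    #include <vector>

    int pop_largest(std::vector<int>& v) {
        assert(!v.empty());                  // precondition  ("require not empty")
        const std::size_t old_size = v.size();

        auto it = std::max_element(v.begin(), v.end());
        int result = *it;
        v.erase(it);

        assert(v.size() == old_size - 1);    // postcondition ("ensure one removed")
        return result;
    }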
  • "...if you disagree contact GNU"

    I have contacted GNU, and so have hundreds of other people. The one word that most closely represents what they mean keeps getting rejected. The problem is, they have redefined/appropriated the word "free" for their concept of software, and now they refuse to change it. They continue in the belief that Free Software == Free Speech. This is ludicrous on the face of it. If they truly believe this, why aren't they passing around Constitutional amendments for ratification? They also continue in their belief that proprietary software "subjugates" and "dominates" the user.

    That not everyone falls for this GNUspeak clearly demonstrates that human language does not limit human thought. That would be like saying sleight-of-hand limits visual acuity.

  • But, there are alternatives. One is that thought takes place in some kind of lingua mentalis that is the same for everybody and exists only in the mind. I think this is the majority position among linguists (but is by no means universal.)

    There certainly is no evidence that people think in their native language at all, and considerable evidence against the idea. The details would take some time to document and explain.

    Due to the failure of experiment to show any Sapir-Whorf effect, after decades of trying, I feel secure in saying that the hypothesis is, certainly in its strongest form, as without merit as the aether or phlogiston in physics.

    As for books, I recommend starting with Pinker's Language Instinct. It contains some things that I don't think are true, but for someone who isn't a linguist these are mostly minor matters. Language Myths (edited by Bauer and Trudgill) is another good one, and The Great Eskimo Vocabulary Hoax by Pullum and McCawley.

    Those are good places to start. If you're looking for something more like an entry level textbook, e-mail me.

  • Yes, I see where you're coming from, but I think the point you are addressing here is more "can new ideas ever develop?", which, if we accept the principle that language limits thought, is co-extensive with the question "does language evolve/develop?". Well, clearly the answer to the latter question is "yes". What I was thinking of was more whether translation from one language into another doesn't inevitably translate native concepts into different ideas, their nearest equivalents in the target language. See Heidegger for powerful arguments on why this might be the case.
  • Abstraction in a language doesn't have to incur slowness. I think you are thinking of "layers of abstraction." Such as "xml running on a regular expression parser running on perl running on java running in an applet running in netscape running on an internal scripting language running on C running on motif running on xlib running on pipes running on linux running on C over tcp running on ip running on ethernet running on linux running on C running on machine language running on a transmeta code morpher running on VLIW on Crusoe."
  • Well, I'm a member of a community that branched off from ZZT, based on a ZZT-like program called MegaZeux (which I'd link to if a good site existed). Anyway, one of my fellow members (Kevin Vance, who I think goes by kvance here) is currently working on a GL reimplementation of ZZT, and it's really weird :P http://www.linuxstart.com/~kvance/evil.jpg [linuxstart.com] and http://www.linuxstart.com/~kvance/evil2.jpg [linuxstart.com] show some intriguing screenshots. Anyway, this looks like it may be the thing for you, and there is a port of MegaZeux for Linux coming (eventually) in the future. Hope this whets your ZZT-in-Linux appetite. Also, zzt.org is a good source of ZZT information if anybody cares.
  • Completely wrong. Try to catch up on some 20th Century philosophy in your free time.
  • I'm familiar with Lakoff's work on metaphors, but I don't think his work touches on the Sapir-Whorf question. To put it simply, I don't think his work in metaphors qualifies as linguistics.

    Now, I think what he is doing is useful and I'm all for it. I like Lakoff, and his work on categorisation (which is a linguistic topic) was a major influence on me before I traded career for money and became a programmer. However, are his metaphors really linguistic, or rather cultural commodities? None of the metaphors I know him to have described involve word play (eg confusing 'free as in free beer' vs 'free as in freedom'). If I recall what Lakoff was saying, there is no fundamental reason, other than chance and culture, why those same metaphors couldn't exist in Chinese, or Inuit for that matter.

    Common sense can throw you for a loop. Look at quantum mechanics - it's about as counterintuitive as you can get. Always hold out for empiricism.

  • Moslo.

    It's a nice little cpu-eater that's really simple to use. Something along the lines of "moslo -66 whatever.exe" will slow the cpu by 66% and run your program. It's pretty prevalent on the net, especially on fan sites of old games. Origin even packaged it with the Ultima Collection so folks could play the old Ultima games on their beefy new systems.

    Dunno if it works in dosemu tho.
  • I have the strangest feeling that the statements

    The days of philosophy's monopoly over the mind are long past, and philosophy is better because of it.

    and

    I use windows. Sorry.

    are somehow related.

  • by vlax ( 1809 ) on Tuesday January 25, 2000 @03:09PM (#1336947)
    You speak English (I assume you're unilingual, if not, imagine the many folks on /. who are) yet you can clearly understand the difference between 'free as in speech' vs. 'free as in free beer.'

    Your language has obviously not restricted your ability to think in this respect. Someone, somewhere, once upon a time, explained what they meant by 'free software' and now you have no trouble thinking clearly about it. The lack of a simple translation has no impact on your ability to think.
  • The main implication that occurs to me is that verbs aren't conjugated, and you probably need helper verbs for different tenses (or are expected to just pick it up from context)
    Daniel
  • If there is a concept which is unexpressible in another language, I would like to see it.

    Expressibility is not strong enough. You can express quantum chromodynamics in English, but you sure as shit can't use English to solve problems in quantum chromodynamics.


  • 1) Java does make programming easier than C++. It eliminates the need to screw around with stupid #include guards, forward declarations, and memory allocation policy, and Java exceptions instantly zero in on the line of code causing problems, which makes debugging way easier. Java also comes with *WAY* better runtime libraries than ANSI C++.

    2) It's not significantly slower. You made the assertion, you provide the benchmarks with source. I'll optimize your Java benchmark to show you that you don't know what you're talking about.

    3) You don't have a clue as to what you're talking about. Almost all complex games use a runtime interpreted scripting language that is significantly slower than Java in execution speed.

    QuakeC (Quake1), UnrealScript, Tribes Script, Half-Life, etc.

    The reason why this might confuse you is that you don't understand the optimization principle: 90% of the CPU time is spent in 10% of the code. Switching from interpreted scripts to binary C might yield a 10% framerate increase, big deal.

    On the other hand, with interpreted scripts, you're guaranteed not to be hit by a VIRUS in that free quake module you just downloaded.

    VMs also make it easier for game developers to dynamically test game logic without having to rebuild/relink a library and reboot the Quake server.

    One thing's for sure, Carmack knows a lot more than you. And he has a Ferrari.
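
    As a rough illustration of what an interpreted game-script VM looks like (this is a toy, not QuakeC or UnrealScript): the script can only execute the opcodes the engine chooses to expose, which is where the safety comes from.

    // Toy bytecode interpreter: a dispatch loop over a tiny instruction set.
    // A downloaded script can only do what these opcodes allow.
    #include <cstdint>
    #include <iostream>
    #include <vector>

    enum Op : std::uint8_t { PUSH, ADD, PRINT, HALT };

    void run(const std::vector<std::uint8_t>& code) {
        std::vector<int> stack;
        for (std::size_t pc = 0; pc < code.size();) {
            switch (code[pc++]) {
                case PUSH:  stack.push_back(code[pc++]); break;
                case ADD:   { int b = stack.back(); stack.pop_back();
                              stack.back() += b; }       break;
                case PRINT: std::cout << stack.back() << '\n'; break;
                case HALT:  return;
            }
        }
    }

    int main() {
        run({PUSH, 40, PUSH, 2, ADD, PRINT, HALT});   // prints 42
    }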


  • Object-oriented programming *works*. Even the Gnomies who rag on KDE because of C++ use OO-like constructs in their "pure" C code and brag about their Python wrappers.

    But you yourself use OO. Just look at your post. You write "mini-languages ... to express a class of GUI widgets".
  • "Why did you not write a separate mini-language for each of the GUI widgets?"

    What kind of idiot would write a language he is only going to make one statement in? I am not against solving the problems that OOP is meant to solve, it is the methods that do not work.

    When I write a mini-language, it is to parse many simple and concise descriptions of a group of similar things. A group of similar things can be described as a class.

    In an OOP language, there is a formal way of expressing things that fit into a class (in the general sense; the formal OO and C++ terms "class" have different meanings from each other and from this general English one). This formal method is often excessively verbose, inadequately expressive, inefficient, and generally a pain in the ass.

    So the reason I don't really like the majority of the elaborations on C syntax in C++ is that they are trying to create a single comprehensive method of expressing the infinite possibilities of variations on a theme. Learning this method takes too much effort (hence the oft repeated complaint "nobody knows all of C++"), using it takes too much overhead (both in programmer effort and in inefficiency of the final product), and it doesn't even make a good fit half the time.
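
    A concrete (and entirely hypothetical) example of the mini-language approach described above: many short, similar descriptions parsed straight into plain structs, with no class hierarchy in sight. The format and field names are invented for illustration.

    // Sketch of a widget mini-language: each line is "kind label x y".
    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    struct Widget { std::string kind, label; int x = 0, y = 0; };

    int main() {
        std::istringstream spec(
            "button OK     10 200\n"
            "button Cancel 90 200\n"
            "label  Name   10  20\n");

        std::vector<Widget> widgets;
        Widget w;
        while (spec >> w.kind >> w.label >> w.x >> w.y)
            widgets.push_back(w);

        for (const auto& widget : widgets)
            std::cout << widget.kind << " \"" << widget.label << "\" at ("
                      << widget.x << ", " << widget.y << ")\n";
    }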
  • Don't pay any attention to the actual comment I posted, willya now? Too difficult, I guess. Like I say, if you want to argue, I'm here (although delayed a few days sometimes); if you don't, then... what can I say? Sorry if I've misunderstood you, but you don't seem to want to argue; you merely restate your case in increasingly incoherent terms.
  • Paul's right; the Berlin and Kay paper is about human vision processing and not about the manipulation of abstract conceptual structures. The two could hardly be farther apart. But this is the problem with linguistics in general - its roots are in a descriptive field, much like botany or zoology. Those two became modern "hard" sciences only with the advent of biophysics, genetics and molecular biology.

    Linguistics is still waiting to cross that threshold. Its only truly scientific inputs are from computer science and cognitive science, and both of these are still relatively new. Linguistics is still taught more like an arts subject than anything else, a bit like psychology used to be.

    This affects both the type of students attracted to the field, and their subsequent education. As a result, the majority of linguistics students have a relatively poor understanding of (and no instinctive feel for) science. Your inappropriate citation of the Berlin and Kay study is a good example of this.

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
  • you can clearly understand the difference between 'free as in speech' vs. 'free as in free beer.'

    I totally agree with what you meant, and the guy you were responding to should note what you said.

    That said, I feel compelled to point out that "free as in speech" is still ambiguous in conveying the software concepts: "open" source, and GPL-user vs BSD-vendor freedoms of redistribution. It's easy to conceive of paid source you can't redistribute, free object code you can, source you are free to redistribute as object, and object code whose source you are entitled to see.

  • Corrinne:

    Well, you just answered my unspoken question of "Man, I wonder how much of a pain in the ass is it when after twenty something years of learning you're still asked questions about some thing that you've had since before you were born."

    Now I know :-)

    QOTD: "Isn't it ironic they ask a Playmate to describe Linux? And they ask a female coder "how is it like to be a woman"?

    A couple (real) questions sprung to mind, incidentally:

    compiler internally generating and maintaining a virtual table of a lot of NULLs

    Could you elaborate a bit more on why architectures involving derived classes create wasteful indexes of information? I'm unfamiliar with compiler design, and am curious where this generation comes from.

    But then any subclassed class, which becomes someone else's base class, becomes less modifiable. Thus, in a way, a base-d class loses power.

    Basically, an object in such a class is constrained by the "contractual expectations" embedded within the parent class. There are certain properties/functions which the class is expected to support, and failure will ensue if such functionality is not implemented.

    But does this burden disappear at all when the coder must explicitly know to provide a predefined degree of functionality? Doesn't having a base class enforce a minimum degree of functionality on derived classes, thus preventing situations where the programmer forgets to add some property and chaos ensues?

    It is more difficult to document and comprehend such a deep weaving web.

    So what can be done to make deep weaving webs more understandable? I did some experimentation with the autodocumentation system Genitor a while back, but what have you seen, either implemented or theoretical, that would make deep webs feasible for human comprehension?

    That, of course, leads into a more disturbing concept: Could the human ability to comprehend complex logical relationships be considered a bottleneck that needs to be progressively abstracted as time goes on? Could this pose serious problems for software and hardware design as time progresses? Does it already?

    Just interested in what you think.

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
  • Too bad the author ignored functional languages completely.

    Anyway, my wishlist for the next popular language:
    • Layered architecture
    • Functional core
    • Component hype
    • Lambda calculus made easy
    • Hindley-Milner type system
    • Proper tutorial on abstractions and complexity
    • Concurrency
    • Only dependencies restrict execution order
    • Top-Down + Bottom-up process
    • Binary compatibility
    • User friendly(tm) open-source GUI library
    --
    1) The ostrich algorithm, 2) Detection, 3) Prevention, 4) Avoidance
  • He argues faster machines should not be required, yet he yearns for more and more abstraction.

    Actually, abstraction eventually translates into faster code. Who would want to work out by hand all the optimisations gcc does in asm? It would take you a lot of work just to figure out what gcc does in 1 second.

    Similarly, functional languages will eventually allow the compiler to generate code which takes advantage of optimizations which a human could not reasonably do. Example: some experiments with structured self-modifying code (i.e. functions which write optimized functions) have achieved massive improvements in performance (a factor of 10 for Henry Massalin's Synthesis microkernel; this can be done by object-oriented languages too).

    Currently, it is easier to speed up the hardware, but when we start hitting the theoretical limits we will need to switch to compiler optimisations for speed improvement... and imperative languages are considered to be something of a dead end for optimisation.

    Jeff
  • It's amazing and terrifying what people are doing with C++ templates today. Basically, some people discovered that you can force the template mechanism to instantiate templates recursively, and in this way get iteration at compile time. This makes general computation at compile time possible, although in an incredibly slow and awkward way.

    Debugging this stuff is hell. You can't watch the expansion process at compile time, and most debuggers don't provide much help stepping through the output of something like this.

    LISP started to develop cruft like this just before it tanked. (Anybody remember the MIT Loop Macro?) It's an indication that something is terribly wrong with the language and nobody knows how to stop it.
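
    For anyone who hasn't seen the trick being described, the canonical toy example is a factorial computed entirely at compile time through recursive template instantiation (illustrative only, not from any real codebase):

    // Recursive template instantiation doing computation at compile time.
    #include <iostream>

    template <unsigned N>
    struct Factorial {
        static const unsigned long value = N * Factorial<N - 1>::value;
    };

    template <>
    struct Factorial<0> {            // the specialization that stops the recursion
        static const unsigned long value = 1;
    };

    int main() {
        // The multiplications happen during compilation, not at run time.
        std::cout << Factorial<10>::value << '\n';   // 3628800
    }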

  • by chazR ( 41002 ) on Tuesday January 25, 2000 @03:24PM (#1336996) Homepage
    As I read it he says:

    Programming models change every ten years
    Indisputable
    They seem to change for the better
    Yup. So far.
    C++ is not a bad language, but it's too big
    Damn right.
    But I think he dismisses the STL architecture too readily. It is an amazingly good abstraction, easy to extend (not that you need to often) and avoids being too object-oriented (which would have made it ridiculous)
    Java is a good language, but it's slow
    Yup. Again
    UnrealScript rocks (but not enough)
    Never used it. However good it is, can I model a telecomms network with it?

    I also think he skipped a bit too briefly over the 'Groups of Objects' technologies (patterns etc.).

    The fundamental problem is that we don't think in terms of 'language' (human or computer). We think with ideas. Computers 'think' with binary operations. I suspect that the reason so many people find computers difficult and scary is that they don't know how to translate their ideas into terms the computer will 'understand'.

    As an example, I get paid to develop software. Mostly I do architecture rather than programming. When I am designing a system I use paper, pens and whiteboards. When I try to transfer the design to an 'electronic' format, I struggle. The tools don't exist. Why not? The tools I use are very good, but I find it difficult to express my ideas with a screen, mouse and keyboard. I spend too much time second-guessing the tool programmer, thinking 'How would the programmer have expected me to do this?' - I shouldn't have to concern myself with this (mostly I don't - I shout for the tools programmer to show me how to do it).

    To end this tedious rant:

    Yes, we need to rethink programming models constantly

    No, there is 'No Silver Bullet'

    Human/computer interaction systems are a disgrace to us all. Please will somebody make a computer as easy to use as a pencil and paper. I'll help.





  • If you redefine cognitive science to include any scientific data about human minds, I'll agree that the only scientific inputs linguistics is getting come from CogSci. But by that definition, most linguists are cognitive scientists.

    I'd have to disagree there. Cognitive Science is a fairly well-defined (if multidisciplinary) field and it relies heavily on neuroanatomical studies. I contend that in CogSci, a psychological theory (including anything in Linguistics) without neuroanatomical evidence to support it, is just pure speculation.

    I could argue that the data coming from computer science, while helpful, isn't science at all.

    It'd be a weak argument. Computer science really is a science, in the sense that it is a study of the behaviour of real systems - both physical ones and abstract ones. It allows us to put the math into motion, so to speak. It comprises both theory and practical experimentation; results can be numerically quantified, predicted by theory and verified by experiment. There is no meaningful definition of "science" I know of, which doesn't include Computer Science. Maybe you're still thinking more in terms of the old notion of "Natural Philosophy".

    More recently, it has become harder to be taken seriously as a young linguist without some knowledge of AI methods. (This doesn't apply to the hard-core Chomskyans, who don't believe in AI, as if that made any difference.)

    That's truly excellent news. About time! I believe Chomsky is vastly overrated. It's probably because he had the whole field of theoretical linguistics mostly to himself for so long.

    ...[mini-resume]...

    The science credentials you claim are certainly more than adequate. I'm glad to see that science is creeping into the curriculum in some places at least. Perhaps you're talking about Linguistics as currently taught in the US? Not that it should make much difference - but my most direct experience of the field comes from my sister who's now doing her Master's at the University of York in the UK. From what she tells me, the field is still mainly about descriptive theories which provide very little in the way of testable predictions. That, in my book, is not science.

    my citation of Berlin and Kay is not inappropriate. If the strongest form of Sapir-Whorf were true, this experiment should not have turned out that way. It covers more ground than the neurology of vision processing, since it shows that the mental manipulation of visual signals is independent of the native language of the subject.

    I won't argue with that; vision is much older than language and all the neurology involved is highly specialised and mostly physically remote and separate from the associative cortex and other structures (Broca's, Wernicke's) involved in language processing and abstract reasoning. I agree that language doesn't shape vision at least on a low level. It is silly to advance a theory that *every* single aspect of our experience is constrained by language. Nothing is ever so simple or clear cut in complex emergent systems like human minds. But even a misconstrued "Strong" Whorf hypothesis doesn't claim any such thing. And I don't believe anyone here is supporting the strong form (I might like to, but I wouldn't dare ;o)

    I said your citation was inappropriate because I believed you were advancing it in support of the idea that language does not even partially constrain abstract thought. I hold that reports about the experience of simple visual stimuli are too limited in scope to have much to say on that subject. In any case, Berlin and Kay's conclusions in Basic Color Terms have been effectively disputed [human-nature.com] (and I for one fail to see how their results are supposed to *disprove* the dependence of subtle colour perception upon language), and Sapir and Whorf have been unjustly denounced [sunflower.com] most likely for sociopolitical reasons. It's just not done, these days, to suggest that culture or ethnicity could possibly have any deterministic effect on behaviour. Bah.

    Still, thanks for the flame-free argument. I always enjoy debate when it's conducted on a civilised level. Kudos to you for that.

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
  • Sorry, but you're wrong.

    Philosophy is a study of the non-technical, that which can't be measured. The idea of philosophy is to examine what the limits of knowledge are, not the specific knowledge.

    If you can test something, it's no longer philosophy.

    If you can devise a test to see if people can't conceptualize an idea without a word for it, then it becomes a testable theory.

    I myself think the theory is bunk. A society might not have 'war' in their vocabulary, but they'd understand how something bigger can move something smaller, perhaps hurting the smaller thing in the process. This can easily be extrapolated up to war. They may not understand why you'd go to war, but understanding of the basic concept wouldn't escape them.

    If 20th century philosophers have moved into the realms of the testable, then they're not philosophizing.
  • by John Carmack ( 101025 ) on Tuesday January 25, 2000 @03:25PM (#1337004)
    An Nvidia GeForce/Quadro kicks the crap out of a $100k+ SGI RealityEngine on just about any test you could devise.

    The RealityEngine is quite a few years old, and its follow-ons, the InfiniteReality and IR2, still have some honest advantages. You can scale them with enough raster boards so that they have more fill rate, and they do have some other important features like configurable multisample anti-aliasing and 48-bit color.

    Even now, lots of applications can be compiled on both platforms and just run much better on WinNT+nvidia than on the best sgi platform available. Having a 700mhz+ main processor helps, irrespective of the graphics card.

    Some people still have a knee-jerk reaction that claims that "real applications" still run better on the high end hardware, and there are indeed some examples, but the argument bears a whole lot of resemblance to the ones put forth by the last defenders of ECL vector supercomputers.

    Most people's applications don't show advantages on high end SGI hardware.

    The big issue is the pace of progress -- SGI aimed for a new graphics family every three years with a speed bump in between. The PC vendors aim for a new generation every year with a speed bump at six months.

    John Carmack
  • If color, which is obviously expressed differently across languages, does not affect perception of color or the mental processing of colours in the abstract, tell me what test will show that a concept can't be manipulated, or that its processing is in some way affected, by a person's language.

    Having worked with the people responsible for making phone books in Inuktitut, and Bible translators for languages that don't even have a word for 'cross', I submit that the relative ease with which ideas and documents are translated and communicated across language groups is pretty damning evidence against that idea.

    Can you find a contrary case?
  • The for loop is explicitly sequential. The notation is not. If dealing with operations with potential side-effects this notation actually contains a promise that you don't have to worry about side-effects - which allows the compiler to generate more parallel code. This in turn can be optimized to be done with vector operations, or (in a different problem domain) across chips.

    In other words the for loop is a barrier to optimization since in order to observe certain classes of optimizations the compiler has to figure out your code (and verify - for instance - whether the value of i has to wind up being calculated.)

    Cheers,
    Ben
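
    A small sketch of the point (using C++17's parallel algorithms, which arrived long after this thread, so take it purely as illustration): the hand-written loop spells out an iteration order, while the side-effect-free notation does not, leaving the library free to vectorize or parallelize.

    #include <algorithm>
    #include <execution>
    #include <vector>

    int main() {
        std::vector<double> xs(1000000, 2.0), ys(xs.size());

        // Explicitly sequential: the loop promises an iteration order.
        for (std::size_t i = 0; i < xs.size(); ++i)
            ys[i] = xs[i] * xs[i];

        // Order-free notation: no promised ordering and no visible side
        // effects, so vectorized/parallel execution is permitted.
        std::transform(std::execution::par_unseq, xs.begin(), xs.end(),
                       ys.begin(), [](double x) { return x * x; });
    }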
  • Need you ask? It is not the best definition for what RMS means by "free", but then again, neither is "free". It is a word that RMS vociferously denies he has anything to do with. Think about it and you'll figure it out.

    By the way, has anyone other than me observed the inherent contradiction between the following two RMS quotes: "An unambiguously correct term would be better" and "To stop using the word ``free'' now would be a mistake"
  • I think what some people here are grasping for is the correlation between lack of word for an abstract concept in a language and ignorance to that abstract concept. Coining a word allows for efficiency in calling up meaning. However, to actually understand the word, you must understand the concept.

    Orwell describes a people who are brought up into ignorance leading to a particular mindset. Their coining of new terms to efficiently describe the abstract concepts they came into contact with is only a result of understanding that concept. Ignorance, not language, was what held them back.
  • Hardly! BSD, MIT and other unrestricted licenses are not copyleft. Kirk McKusick likes to refer to the BSD scheme as copycenter. "Take it down to the copy center and run off as many as you want."
  • by Corrinne Yu ( 121661 ) on Tuesday January 25, 2000 @03:27PM (#1337019)
    Good meaty informative writing.

    Subclassing can yield a lot of power to re-usability.

    There are many caveats to subclassing implementation though.

    Inheritance brings with it all the baggage of your chains of base classes. As much as you attempt to virtualize, thus yielding flexibility, you are still:
    a. compiler internally generating and maintaining a virtual table of a lot of NULLs
    b. the skeletal structure of the existing virtual functions or data members still define, and thus confine, your derived classes

    You may gain the power of saving code from subclassing.

    But then any subclassed class, which becomes someone else's base class, becomes less modifiable. Thus, in a way, a base-d class loses power.

    For portability, ease of use, and ease of understanding, anything which is a base class (something *any* subclass derives from) must not only be self-consistent, but must remain concerned with the intentional and unintentional behaviors (and optimization) of all of its derived classes.

    A change to a base class A propagates all its changes, all its decrease in speed, all its complexity, down the chain to all its children.

    Such "a complicated web we weave" makes *most* of the engine code difficult to modify and ungrade and re-use. Since changing most classes, since most of them are base classes, have too many performance or behavorial ramifications and penalties to the rest of the code.

    Part of encapsulation is minimization interaction or effect of one set of code to another.

    "Uncareful" or "deep" web of derivation in classes can turn encapsulation upside down.

    Low-level classes intended to hide or encapsulate behaviors end up being the "weakest link in the chain" that breaks in performance when too many vital classes are derived from them.

    It is more difficult to document and comprehend such a deep weaving web.

    Sometimes non-class, single-interface, "flat" non-classing languages can ease both encapsulation and maintainability.

    To be fair to our poor DNF coders Chris, Nick and Tim, virtual class or no, it is a lot more than 4 lines of code to make magical DukeNukem.Actor's. :)

    // OT to the "chick" thing

    An example of "chick-divisive news" being harmful to women: the same site (GameSpy) requested an interview of me that I turned down.

    Why?

    Because the interview questions are / would be along the lines of "How is it like to be a woman programmer?" "How is it like to be a woman developing games?" "What insights would you have for women in game development?" "What insights would you have for games for women?"

    How in the world would I know, or how can I speak for, 50% of the world's population?

    A Tim Sweeney, or even a Seamus Blackley (Trespasser lead), never has to face questions like that.

    They can discuss math, code, game, science, language.

    But a woman would always be gender first, knowledge second. It shall always be more informative to know "how is it like to be a woman" from me than any knowledge I can or cannot share with others.

    The day when /. and others stop posting chick-gender-divisive articles. The moment sites stop posting essays and insights and editorials by women about women for women. The day when men and women coders are human coders.

    Is the day when sites and interviewers will stop asking human coders like me "How is it like to be a woman?" and start asking me questions I know the answers to.

    P.S. Isn't it ironic they ask a Playmate to describe Linux? And they ask a female coder "how is it like to be a woman"?

  • "compiler internally generating and maintaining a virtual table of a lot of NULLs
    Could you elaborate a bit more on why architectures involving derived classes create wasteful indexes of information? I'm unfamiliar with compiler design, and am curious where this generation comes from."

    -- The way most OO compilers keep track of object functionality being "virtualized" is that, even though in your code it is virtualized, in machine code the compiler needs to keep an internal look-up table to point to the right data members / function pointers by context.

    -- Most implementations of instantiation of any object derived from classes with virtual members involve *each* instantiation having its own sizable (size dependent on the number of virtual members) table to allow for contextual retrieval.

    "But does this burden disappear at all when the coder must explicitly know to provide a predefined degree of functionality? Doesn't having a base class enforce a minimum degree of functionality on derived classes, thus preventing situations where the programmer forgets to add some property and chaos ensues?"

    -- This requires the foresight of knowing all possible future derivations (or variations of such derivations) before implementation, before debugging, before "base class usage."

    -- It is a chicken-and-egg thing. While it is *possible* to be such a forward-thinking and insightful coder, it is at best *very difficult*.

    -- In most cases, it is: code base class, derive children, amend base class, derive more children, amend base class further ... derive children, must amend base class again, but wait, that would affect my other children like this.

    -- This is theoretically prevented by "thinking ahead about your base class" before you code either your base class or your child.

    -- Therefore, how *well* this works out comes back to the Goedel system of how "simple" your base class is.

    -- The more powerful your base class, the more mathematically infeasible it is for you to design a non-breaking-base class.

    "So what can be done to make deep weaving webs more understandable?"

    -- Documentation will always be a challenge, and certain structures like subclassing present an even greater challenge.

    -- When something is flat (which of course has its own set of problems), its "structure" mimics that of "human language" which is also linear / verbal / mono-directional.

    -- (Yes, conversation goes both ways. But most "comments" and "documentation" are one-way conversation. We *all* know the best documentation is a living, polite, lucid programmer who continually has 2-way communication with the user.)

    -- So, given human language comment and documentation *is* mono-directional, is linear, is flat, it is easier to map similar system to similar system.

    -- It is thus easier to explicitly document and comment linear C (non-base-class-referencing) code than OO code (where, however well you describe its current behavior, you must refer back to its base class behavior to gain true understanding).

    -- Yes, again, a completely intuitive base class that requires no documentation or comment on its interaction with and effect on its derived children in any way can solve this problem.

    -- Unfortunately there are very few real base classes besides a few obvious math-primitive examples.
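
    A small, compiler-dependent check of the overhead being discussed in this subthread: once a class has any virtual member, every object carries a hidden pointer used for the table-based dispatch (the exact sizes and the layout of the table itself vary by compiler).

    #include <iostream>

    struct Plain   { int x; void f() {} };
    struct Virtual { int x; virtual void f() {} };

    int main() {
        std::cout << "sizeof(Plain)   = " << sizeof(Plain)   << '\n';  // typically 4
        std::cout << "sizeof(Virtual) = " << sizeof(Virtual) << '\n';  // typically 8 or 16
    }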




  • From one of the INTERCAL enthusiasts' webpages [tuxedo.org]:

    • Abandon All Sanity, Ye Who Enter Here

      So, you think you've seen it all, eh?

      OK. You've coded in C. You've hacked in LISP. Fortran and BASIC hold no terrors for you. You write Emacs modes for fun. You eat assemblers for breakfast. You're fluent in half a dozen languages nobody but a handful of übergeeks have ever heard of. You grok TECO. Possibly you even know COBOL.

      Maybe you're ready for the ultimate challenge...INTERCAL.

      INTERCAL. The language designed to be Turing-complete but as fundamentally unlike any existing language as possible. Expressions that look like line noise. Control constructs that will make you gasp, make you laugh, and possibly make you hurl. Data structures? We don't need no steenking data structures!

      INTERCAL. Designed very early one May morning in 1972 by two hackers who are still trying to live it down. Initially implemented on an IBM 360 running batch SPITBOL. Described by a manual that circulated for years after the short life of the first implementation, reducing strong men to tears (of laughter). Revived in 1990 by the C-INTERCAL compiler, and now the center of an international community of technomasochists.

    This is an excerpt of a program that does ROT-13, written in INTERCAL. Being a non-INTERCAL developer, I chose what seemed to be a representative sample of the code.

    • (10) PLEASE DON'T GIVE UP
      (1) DO .2 <- '?.1$#64'~'#0$#65535'
      DO .2 <- '&"'.1~.2'~'"?'?.2~.2'$#32768"~"#0$#65535"'"$".2~.2"'~#1
      DO .3 <- '?#91$.1'~'#0$#65535'
      DO .3 <- '&"'#91~.3'~'"?'?.3~.3'$#32768"~"#0$#65535"'"$".3~.3"'~#1
      DO (11) NEXT
      DO (2) NEXT
      DO (12) NEXT
      (11) DO (13) NEXT
      PLEASE FORGET #1
      DO (12) NEXT
      (13) DO (14) NEXT
      PLEASE FORGET #2
      DO (12) NEXT
      (14) DO STASH .1
      DO .1 <- .3
      DO (1000) NEXT
      DO .1 <- .3
      DO .2 <- #1

    And so on.

    Yeah, I can see how writing Space Invaders or Quake bots or a MUD would be MUCH better in this language.

    :)
  • Abstraction can slow things down, but it can also speed things up.

    Not everyone here is an asm guru. I know enough that I could easily write a memset routine for x86 and a few other platforms, but I can't guarantee that it'll be as fast as the library code.

    Ditto with, for example, complex numbers. I can (in C++) declare n, i and j as complex numbers, set them, and say n = i * j. I could do the same with structures holding the real and imaginary parts, or with parallel arrays. And theoretically, I could do it faster if I coded it in ASM. But the abstract level lets a programmer who majored in math do the actual code, probably squeezing out a few extra cycles by knowing shortcuts, and guarding against any special cases (div-by-zero type things) that I may not know about.

    Abstraction also lets the compiler use the computer to its fullest. I could code vector manipulation, for a 3D engine, in asm, and even if it was provably the fastest code, that wouldn't do me any good with a new CPU. If I write it in C, or better, something more abstract where I can hand off whole matrices, the compiler will (ideally) know that the target CPU has AltiVec, or 3DNow, or perhaps some higher-level FPU capable of whole-matrix ops. By writing code that attains some reasonable fraction of the best speed, say 90% or so, you write code that attains that speed on all architectures, and uses new features with only a recompile. Hand-code the routine in ASM and you only gain 10% or so, which might be important in a critical loop, but you lock yourself into having to recode it later to get the same speed. And that means having to be an expert on all the target architectures.

    And then, there's the argument that you can always give away abstraction, writing some critical code in ASM, if needed. You can't go the other way and use inheritance in ASM (or in typical assemblers.)

    Languages that restrict you, that prevent you from being specific when you know something the compiler doesn't, are bad. That's why nobody uses Pascal. But the ideal language lets you ignore all the finicky details until you decide otherwise, it doesn't require anything either way.
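
    The complex-number case mentioned above, for concreteness, using the standard library's abstraction (the numbers are arbitrary):

    #include <complex>
    #include <iostream>

    int main() {
        std::complex<double> i(3.0, 4.0), j(1.0, -2.0);
        std::complex<double> n = i * j;            // (3+4i)(1-2i) = 11-2i
        std::cout << n.real() << " + " << n.imag() << "i\n";
    }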
  • Nope. That's YOUR definition of philosophy, which is of course (somewhat ironically) informed by a particular philosophical outlook.

    "If you can devise a test to see if people can't conceptualize an idea without a word for it, then it becomes a testable theory."

    Have you, or has anyone else, devised such a generic test? If not, then you are rather hoist on your own petard, are you not?

  • "Copyleft stops me from running off as many as I like?"

    That's right. Sticking with the copier analogy, the GPL only lets me make copies on the GNU copier. I am not allowed to make copies on non-GNU compatible copiers. And I can't copy just the binary parts that I want, I have to copy the source parts as well. And I certainly can't collate copies done on different brand copiers into one book.

    Most clauses in the GPL point to instances where sharing code with your friend would be wrong.
  • I don't think you quite get it. Take a look at what Objective-C does, and take a look at Common Lisp The Language, and read the section on CLOS in the appendices.

    Objective-C's ability to delegate procedures to other classes achieves a lot of the "virtual classes" concept. You can also selectively override parts of a class in Objective-C, as he describes for virtual classes.
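
    As a rough illustration of the delegation idea -- sketched in C++ rather than Objective-C, since that's what most of this thread is written in, and with all names made up -- the object doesn't implement the policy itself, it forwards the decision to whatever delegate it was handed:

      #include <cstdio>

      struct WindowDelegate {
          virtual ~WindowDelegate() {}
          virtual bool shouldClose() = 0;   // the behavior being delegated
      };

      struct Window {
          WindowDelegate *delegate;
          explicit Window(WindowDelegate *d) : delegate(d) {}
          void requestClose() {
              // The window defers the decision instead of hard-coding it.
              if (delegate && delegate->shouldClose())
                  std::printf("window closed\n");
              else
                  std::printf("close refused by delegate\n");
          }
      };

      struct StubbornDelegate : WindowDelegate {
          bool shouldClose() { return false; }  // override just this one decision
      };

      int main() {
          StubbornDelegate d;
          Window w(&d);
          w.requestClose();
          return 0;
      }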

    The Common Lisp Object System (CLOS) and its MetaObject Protocol (MOP) allow you to do what's described with a virtual class, and more (so much more that it's a bit intimidating to think about it).

    CLOS implements object hierarchies as a list, and instead of only thinking of procedures that can be applied based on type, it has a set of rules that determines which procedure in a list (think an array of function pointers) will be applied in a given situation. Sounds like C++ or Java.

    The MOP, however, lets you define that behavior - you can specify that the object system will change its behavior when you need it to.

    So, it's been done and proven, but it's still "academic" from his point of view (ignoring the implementation of NeXT/OpenStep, and the fact that LISP machines did do a lot of work at one point, and Lisp still does stuff... it's just not mainstream. Too bad...)

    -Peter
  • by Jogorun ( 143504 ) on Tuesday January 25, 2000 @06:38PM (#1337048)
    Sweeney gave some good unspoken advice; that is to say, keep learning new languages. Check out "The Pragmatic Programmer" [2000 Hunt, Thomas.] You can find it in any good bookstore.

    The book has some useful advice on the practice of programming. Things like "How not to be stuck programming in a dead language." (I'm paraphrasing), and it expands upon the concepts that Sweeney mentioned, like orthogonality.

    Just a little extra material, if you bought the gospel Sweeney was preaching. Otherwise...
  • by cje ( 33931 ) on Tuesday January 25, 2000 @01:42PM (#1337049) Homepage
    Towards the end of the article, Sweeney says:

    "People don't need to buy new 800 MHz Athlon processors for running Microsoft Word .."
    Well, not this release, anyway ..
  • by Kev Vance ( 833 ) <kvance@NOSPaM.kvance.com> on Tuesday January 25, 2000 @06:39PM (#1337051) Homepage
    Yes, we've been working on ZZT engine workalikes for some time now. I've done a good bit of work on the subject. In my document The ZZT File Format [trap.cx], I have a lot of detailed information for anyone interested in working with ZZT files.

    I've also been working on various other ZZT projects. I see JZig has pointed out [slashdot.org] my attempts at combining OpenGL and ZZT (he missed a picture [linuxstart.com]). I dunno how well this will work out due to performance issues -- it's a lot more polygons than you think in those ZZT screens :) Part of this has been my ever-evolving libzzt, which is almost working now.

    If you're interested in helping development (or any other slashdaughters for that matter), I could eventually clean this stuff up and put it up on sourceforge...

    The other ZZT clone project of note is ZZT++ [welcome.to], a C++ reimplementation of ZZT. It's very DOS centric, but the source is GPLd, so it doesn't have to stay that way. See zzt.org [zzt.org] for general ZZT info and news (or trap.cx/zarchive [trap.cx] since zzt.org seems not to be resolving)

    That enough ZZT info for you? :)
    - k e v
  • Sweeney seems so intent on proving his point that he comes across as a bit detached from reality. Almost... academic. This makes it hard for me to take his point seriously. Let's move through his article:

    This is a really profound realization, that your language has such power to expand--or limit--your horizons, and define which concepts you are able to think about fluently, and which ideas are not easily ponderable.
    Several other posts show that linguists in fact generally disagree with this argument. It doesn't seem to hold up well in real world programming either. I've seen very complex programs written entirely in assembly language. (Some people [grc.com] continue to write significant applications this way.) I certainly see object oriented programming on a day-to-day basis in C (curses, the Win32 API (particularly the GUI bits), stdio, and the code I work on for a living). I've met hard-core C-only programmers who vilify C++, then go on to write gloriously easy-to-read object-oriented code.

    Of course, in his portrayal of C as being an ancient, out of date language, he goes so far as to claim:

    Yes, you could develop user interfaces in C, but they were extremely kludgy in those early days, and didn't become mainstream until the advent of object-orientation and GUI class hierarchies.
    Apparently Microsoft Windows and Mac OS weren't "mainstream" enough for Tim. (I won't argue that they're kludgy... that's a whole different problem.)

    Skipping over brief discussions of hardwiring games and assembly, we come to C. He discusses the slow adoption of C by game programmers, then holds up DOOM as a turning point of some sort:

    When id Software released DOOM, they surprised much of the industry by having no reliance on assembly code--despite excellent game performance, and by successfully cross-developing the game (in NeXTstep and DOS), then successfully porting it to an astounding variety of platforms.
    While DOOM didn't rely on assembly to run, it relied on assembly to run at acceptable speeds on mainstream computers. In addition, usage of C and C's contemporaries was already well entrenched in the game industry by that point.

    I'll gloss the section on C++. I think he's getting overworked about the failings of C++. Many, many projects continue to work in C++ just fine without collapsing. I think he's a bit arrogant for lumping UnrealScript into the mix. He filtered out Pascal and other C contemporaries to presumably keep his list simple, then lumps in a fairly specialized pet language.

    Getting to the future, we get to the good parts. He wants to discuss "parametric polymorphism." He handwaves away C++'s templates with:

    Unfortunately, the C++ language bastardizes this concept in its support for "templates" (C++ lingo for the same concept), a terribly hacked and inadequate feature which, unfortunately, leads programmers to believe that parametricity is just a flawed concept--just like object orientation looked like a flawed concept to C programmers.
    Again, he ignores the many people who find that templates (especially the STL) provide many of the features he wants without introducing the ambiguity he does (more in a moment). He also ignores the C programmers who have been programming in object oriented styles for years. Oh well.

    Anyway, we get to his list of things he wants for "'parametric polymorphism' in its full glory." (His extra quotes, not mine. Read into that what you will.) First he wants an "open world evolution of source code and binaries." I read this as, he wants to be able to change various bits (containers, algorithms, other bits) without breaking compatibility. Why templates fail as a "link-time feature" is beyond me. I manage to "evolve" my template containers and algorithms without breaking anything just fine. Sure, I can break things, but I fail to see how any language can stop me from breaking things.

    He wants "function references bound to specific objects." I can't figure this one out. I certainly can stick function pointers in objects. Perhaps he means closures, which are easily done with function objects (an object which looks and acts suspiciously like a function). (This is slightly inelegant in C++, but certainly not an "enormous amount of 'duct tape'".)
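
    For what it's worth, here's a minimal sketch (all names made up) of a "function reference bound to a specific object" done with a hand-rolled C++ function object:

      #include <cstdio>

      struct Player {
          const char *name;
          void fire() const { std::printf("%s fires!\n", name); }
      };

      // The function object stores the bound object; operator() makes it look
      // and act suspiciously like a function.
      struct FireAction {
          const Player *who;
          explicit FireAction(const Player *p) : who(p) {}
          void operator()() const { who->fire(); }
      };

      void on_trigger(const FireAction &action) { action(); }  // caller never sees Player

      int main() {
          Player p = { "Bob" };
          FireAction bound(&p);   // the "bound function reference"
          on_trigger(bound);
          return 0;
      }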

    He also wants his polymorphic types to support bounds and constraints (simple enough), and "higher-order function calling" (No guess).

    He then dwells on an example. He's got three integer arrays A, B, and C. He wants to add each element in B and C and place the result in the corresponding element in A. He seems deeply bothered that C++ doesn't look at "A = B + C" and "do the right thing." He seems to ignore other, reasonable interpretations of B + C. Perhaps it means concatenate the arrays, or add all of the elements in B and C into a single number and put it into A[0]?

    Looking at things this way, the beginning C programmer who naively tries "C=A+B" is showing more ingenuity and insight into programming than the experienced C programmer who knows why that doesn't work.
    Looking at things this way, Tim Sweeney naively assumes that everyone can agree what + means in the situation.

    Ultimately he appears to be looking for a general way to work on sets in this sort of way. It's a darn shame he hasn't looked at the STL recently, since tools like for_each() provide a solid, general basis for running over a single array, and a for_each like function running over two containers isn't very hard to write.
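
    In fact the two-container case is already in the library: the binary form of std::transform does exactly the element-wise A = B + C. A minimal example:

      #include <algorithm>
      #include <functional>
      #include <cstdio>

      int main() {
          int B[4] = { 1, 2, 3, 4 };
          int C[4] = { 10, 20, 30, 40 };
          int A[4];
          // A[i] = B[i] + C[i] for each element.
          std::transform(B, B + 4, C, A, std::plus<int>());
          for (int i = 0; i < 4; ++i) std::printf("%d ", A[i]);
          std::printf("\n");
          return 0;
      }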

    Moving on, we find:

    What if the modeler built several pieces of trees -- branches, roots, and leaves; and the programmer wrote code to hook them together...? The forest could be infinitely larger and more realistic due to each tree being unique."
    Why this isn't possible (and already being worked on just fine) in C++ (or even, god forbid, C), is beyond me.

    His discussion on virtual classes and frameworks is interesting. Somehow I'm not feeling a compelling calling for virtual classes, but it's interesting.

    His discussion of how wonderful UnrealScript is makes for particularly interesting reading. He wants the various "good bits" of Java (binary interoperability, security), plus a few more (language-supported serialization that is automatically backward compatible). This sounds nice and all, but he quietly ignores speed issues. Given that Unreal shipped with framerates regularly below 10fps on systems that were "up to date" when it shipped, I suspect he doesn't care. Like the Java advocates say, "A slight speed hit is okay, since computers keep getting faster." Of course, when the "slight" speed hit generally turns out to be at least cutting your speed in half, it isn't okay. I'd certainly notice Word getting 50% slower, and I darn well want to run my games with as many graphical bells and whistles on as possible. Oh well.

  • The biggest problem with Meta-Object-Programming is that it makes it hard to integrate 3rd party tools. This isn't necessarily a bad thing, but it's a trade off.

    The biggest problem with frameworks is that you're imposing a certain flow of control on developers. Again, this isn't necessarily a bad thing, but it does limit their flexibility. J2EE is a really good example of this. Starting a thread in Java is the most natural thing in the world, but in J2EE you basically can't do it inside of an Enterprise Java Bean. Consequently, the work to do the equivalent is EXTENSIVE to say the least. Now, this isn't a problem for many J2EE projects, but you simply can't be all things to all people.
  • Abuse (a great game) was written almost entirely in Lisp. I can't think of other examples, but there's no reason there couldn't be more games written in Lisp.

    Be careful, even the author of that game readily admits that it's not "almost entirely" written in Lisp. A custom Lisp is used as the scripting language, but there's a whole lot more C code than Lisp code in the game. It's something like 85% C and 15% Lisp.

  • Thank you! I've been looking all over for this URL, to illustrate my point - that which the Gamasutra article sees and Tim obviously doesn't: that functional languages are not only extremely useful but conceptually far more advanced than anything that ever came out of New Jersey. I'll write a more complete post about this right away.
  • From Francis Bacon (if you don't know who he was, try google -- suffice it to say that he was up there with Einstein and Newton):
    For men imagine that their reason governs words, whilst, in fact, words react upon the understanding; and this has rendered philosophy and the sciences sophistical and inactive. Words are generally formed in a popular sense, and define things by those broad lines which are most obvious to the vulgar mind; but when a more acute understanding, or more diligent observation is anxious to vary those lines, and to adapt them more accurately to nature, words oppose it. Hence, the great and solemn disputes of learned men often terminate in controversies about words and names, in regard to which it would be better (imitating the caution of mathematicians) to proceed more advisedly in the first instance, and to bring such disputes to a regular issue by definitions. Such definitions, however, cannot remedy the evil in natural and material objects, because they consist themselves of words, and these words produce others, so that we must necessarily have recourse to particular instances, and their regular series and arrangement, as we shall mention when we come to the mode and scheme of determining notions and axioms.
    --

    That is, language does limit our thought processes and, even worse, the knowledge that we can express. Scary, huh?

  • Java has its own performance-based problems for performance-critical sections of my code.

    Are there versions or implementations of Java in which only the user explicitly triggers garbage collection?

    A Java machine arbitrarily deciding when and how to garbage collect, when I want to maintain a network connection or something like that, does not appear helpful.

    Garbage collection-less Java would of course not have this problem for me.

  • I actually agree with the gist of your post (as I interpret it), that is, that the realm of philosophy is those areas that haven't proven amenable to scientific investigation. Ironically, some philosophers would consider the very idea nonsensical -- oh well, there's nowt stranger than folk! My main point is, I don't concede that such a general proposition as "there can never be a proposition in language A that is not fully-translatable into language B" is a statement amenable to scientific proof. I find my empirical experience, plus the arguments of such philosophers as Heidegger, rather argue the opposite. Perhaps it is easy for those who belong to the dominant language/ideology combine on Earth right now to accede to the theory that language and thought are a one-to-one transformation. Speaking as an Irishman, my history seems to prove the opposite.
  • ... refer to the BSD schema as copycenter. "Take it down to the copy center and run off as many as you want."

    Copyleft stops me from running off as many as I like?

  • by Effugas ( 2378 ) on Tuesday January 25, 2000 @03:38PM (#1337079) Homepage
    There's an interesting aspect of openness going on here: Education, and a slow but steady ramping up of the "coolness" of highly technical skills.

    Medicine is cool because you can save lives. Acting is cool because lots of people enjoy your work. Programming, over time, will acquire more and more cachet as A) It remains difficult to master but simple to begin (something neither medicine nor acting can ever approach), B) Budding programmers realize the value of an audience interested in what they personally have to say, to teach, and to create, and C) The end result is frantic appreciation from either businesses (Linux developers) or the 14-30 gaming crowd (Game programmers).

    Appreciation is a good thing.

    About Sweeney's paper(truly excellent, incidentally), a couple things come to mind. He talks about the concept that "C=A+B" should be equivalent to C[n]=A[n]+B[n] -- in other words, take the first value of the A array, add it to the first value of the B array, and then put that in the first element of the C array. After all, that's what C=A+B obviously means, right?

    I don't know about that. Perl thinks C=A+B would expand to "C is the A list with the B list tacked on at the end". The add is one dimensional, not two dimensional--the two lists are glued together, not mixed into a sum. I think that's rather logical.

    And what of another perfectly logical explanation? Maybe C is meant to be a single integer. Now you take all the ints in A, and all the ints in B, add 'em all up, and put 'em in that C value.

    Perhaps we need more punctuation, more symbols to describe the differences--we could have +, ++, +-+, +++ATH...that's the solution Perl found, and it's Perl's biggest albatross--too much dense punctuation.

    Perl without Punctuation is like Programming Without Caffeine.

    Of course, as long as you know there's something you don't understand, you can look it up. But if you think you know what C=A+B is "obviously" doing, when in reality it's doing something completely and utterly different, you're going to have a much harder time debugging your code. Not knowing what's broken is possibly the single most expensive debugging scenario possible, by any measure.

    Stop for a second and ponder the power of such a concept -- with about four lines of code, you've sub-classed a 150,000 line game engine and added a new feature that will propagate to several hundred classes in that framework.

    This sounds really, really cool, but...

    How predictable can a system where this occurs be? Would we map destinations of modified code? Don't you usually get problems when new features are bolted onto old architectures when really the old methods need to be wholly rewritten?

    Of course, these are problems that have stricken *every* advance in language design...there's always the optimization that becomes impossible as you go up the ladder.

    The most desirable approach is to have language-level security, where the compiler can usually tell you "that's not allowed" rather than determining security violations at runtime--that approach allows the maximum amount of optimization compared to the brute-force kernel transitions of operating system security.

    Sweeney's awesome, and I respect him highly, but this is probably the biggest error of the entire piece.

    Yes, it'd be nice to be notified *as the programmer* that your code violates a security constraint--in fact, it'd be beautiful, because then you'd have line-level notification of where your code is misbehaving in ways that would compromise the security of the host machine. (The concept of "Buffer Overflow Waiting To Happen on Line 12431" just appeals to me.) But, um, that presumes that the programmer doesn't *want* there to be a buffer overflow, or a kernel backdoor, or whatnot. Put all security in the compiler, and a malicious entity will simply compile the code on their compromised OS, move the binary over to a target machine, and grab themselves a rootshell.

    Clearly, this isn't an optimal scenario.

    Now, you do have situations like the JIT compilers for Java Bytecodes that go to some length to verify the validity of a bytecode before compiling it, and may(I'm not sure, and this probably varies by implementation) lock off entire branches of functionality through the compiler. But that's different--the code must pass through the compiler *on the host machine* to be converted to machine language. In effect, the end binary is a combined product of the bytecode and the host-controlled compiler. If the system designer wishes to have a userspace process handle extensive security analysis before passing a binary off to be executed, that's one thing. Trusting binaries from arbitrary compilers is quite another!!

    And, it remains the unfortunate truth that there are more professional Cobol programmers than C programmers, more C++ programmers than Java programmers, and for many years there will be more Java programmers than there are followers of the successor language.

    I've been thinking about this, and as languages have gotten "cooler", I think there's something to be said about a loss of stability. I don't think anybody is surprised if a COBOL based billing system doesn't go down for a year; I also doubt most people are surprised if a Java applet manages to trash their web browser within a period of minutes.

    Something's wrong there.

    Maybe the reason there are still COBOL programmers around is that few C, C++, or Java based systems could remain acceptably reliable after 25 years?

    In terms of technological progress, we game developers are way more influential than most of us realize.

    I noticed. I've been saying this for years: John Carmack is damn near personally responsible for Intel's dominance. Had Quake not been so perfectly tuned for Intel's Pentium processors--and thus so amazingly unoptimized for AMD's and Cyrix's competing x86 processors--Intel would have taken severe hits in either market share or year end profits over the last five years. For all the talk about the genius of Andy Grove--and don't get me wrong, I'd probably bungee jump off the Grand Canyon for a chance to have dinner with the man--it was John Carmack's low level pipeline optimizations and hyperusage of the floating point capabilities of the Pentium architecture that directed quite literally billions of dollars of purchasing decisions away from AMD and Cyrix into the waiting arms of Intel.

    3Dfx's long term dominance in the 3D market was a similar scenario--the fact that Unreal hasn't played all that well on anything *but* a 3Dfx card until rather recently played no small part in their dominance.

    game developers are in a unique position, by virtue of starting projects anew every 2-3 years, to "short circuit" the process and radically accelerate the adoption of cool new technology.

    The best language in the world ain't going anywhere without top notch compilers that the gaming industry isn't going to write. This, more than anything else, is the biggest problem that game developers have if they want to choose new languages. A handful of games a while back were written in Java, using Asymmetrix's Java Flash Compiler(amazingly cool tech, really. You could recompile the code of a running app, and *it wouldn't stop running*. Then you could actually compile and release x86 binaries of your code. Never made it to Java 1.1 *siiiigh*)...none of 'em did all that well.

    There's another issue to consider--game developers are truly writing less and less of the low level code. This is a good thing--who wants to write Yet Another Sound Mixing routine when you can just toss another wave at the sound device--but it does create some constraints against spawning new languages. It didn't use to be that hard to change languages--you were rewriting everything, after all. Now, you're talking about every single game shipping rife with dependencies on external sound libs, 3D rendering drivers, input systems, socket code...

    Yes, there's always translation layers, but that kills half the gain.

    Tim, if there's one question in this entire piece that I'd like you to reply to, it'd be this:

    Network effects--the fact that a given standard gets exponentially more valuable as others share in that standard--have essentially locked TCP/IP as the internetworking protocol of choice for the foreseeable future, to the point where even upstart additions such as IPv6 and IPSec are finding acceptance to be a difficult task.

    Could the same fate befall any new kinds of advanced programming languages, the identities of which were notably and painfully absent from your essay?

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
  • More importantly: dynamic scope.

    This is an extremely anti-modular misfeature, which is being phased out of the language, but which is still a thorn in the foot of any large system.

    Absolutely agree--I have had to work with large systems in Perl that use dynamic scope intensively, and it's horrible.

    You seem to be implying, though, that this is a reason not to start a big project in Perl today. I disagree. If your coding standards (for large projects you do have coding standards, I hope) mandate use strict and discourage use vars (in other words use my variables for everything), it's no worse than C or C++, and has an object system that I rather like.

    -Doug

  • The 'problem' is that english is too slippery, and is defined by usage. 'Free' means, liberty, and also, unpriced. That's because we've collectively added to it over time. It means both, depending on context. It's not that english doesn't have a word for 'free as in speech' but that it doesn't have a word exclusively for it.

    German doesn't have a short root word for a lot of things, it has compound words that are like our phrases. To complain that english doesn't have a short word for every concept is like complaining that german has long words.

    If 'free as in speech' is what you mean to say, then say it. Or say 'legally unencumbered', or 'unrestricted copying', etc. Find a phrase that explains exactly what you're trying to say.

    I'd hazard a guess that 'libre' in french doesn't mean 'free as in speech, under USA law'. It probably means 'allowed to exercise its rights' or something similar, which would lead a purist to ask what rights software has. More correct would be 'available for use in exercising your rights' which illustrates that the software is available for you to use in any way to exercise any of your rights, but which is still a bit vague in defining what rights you think people have.
  • Oh, please! refer back to your post: you didn't say "BSD copier", you said "compared BSD to a copier". By those rules, the GPL could be compared just as easily. Leaving out that you are toying with the word copier the way Stallman toys with the word free, and the rules attach to the documents, not the copier...

    It is still a stupid analogy. This magical BSD copier you love so much lets you collate copies together into one book, yes, but it lets you stop other people from copying that book even on a magical BSD copier? Copiers like that used to sit in the basement of the Kremlin. No wonder Stallman has the urge to scream freedom! Me, I just want to scream that I'm wasting my time talking about this.

    Have an opinion, sure, but I can see the merit in the BSD license. If you can't see the corresponding merit in the GPL, you are not very sharp.

  • by X ( 1235 )
    Good JVM's do incremental GC, which should prevent you from experiencing any noticeable delays from GC. Java2 also allows you to use references to influence the GC routine. Finally, good JVM's run the GC in a separate thread. Most of the time GC's will happen when the system is otherwise idle.

    The truth is it's all in the VM, and Java, because of its youth, has largely been populated by poor VM's.
  • by Captain Zion ( 33522 ) on Tuesday January 25, 2000 @01:53PM (#1337095)
    Games were responsible for creating the market which enabled 3dfx and NVidia to mass-product $100 graphics chips which outperform $100,000 Silicon Graphics workstations.
    That's right, but only for games. Many of the nice features you can find in an SGI box are not implemented in the cheap cards, like an accumulation buffer, or layers. In many cards the stencil buffer is missing. I'm currently working on an OpenGL robot dynamics simulation with a high-end PC and a TNT2 card, and I would trade it for an O2 anytime.
  • By the way, has anyone other than me observed the inherent contradiction between the following two RMS quotes: "An unambiguously correct term would be better" and "To stop using the word ``free'' now would be a mistake"

    Hmm, let's see just how much of an "inherent contradiction" these two quotes are, by substituting for "free" something else:

    "A fire extinguisher would be better"

    versus

    "To stop people evacuating the building now would be a mistake"

    The point being, just because something might be "better" in the abstract does not necessarily mean it's wise to switch to that thing at any given moment during a campaign where the thing proposed as to be discarded is still serving a critical role.

    (No, I don't know whether RMS himself would agree that the statements are not contradictory as he meant them. I'm just pointing out that they aren't inherently contradictory. It's best to ask RMS himself, if you want to know how he wishes those statements to be interpreted.)

  • by Kaufmann ( 16976 ) <rnedal AT olimpo DOT com DOT br> on Tuesday January 25, 2000 @05:26PM (#1337104) Homepage
    (Also posted to GameSpy's forum)

    I'd like to see Tim explain just how it is that functional languages are confined to the realm of theory, considering the infinity of real-world applications in which they have been used since Lisp's inception in the late 1950's. (You do remember that Lisp is one of the oldest programming languages still in use, don't you?) Even more so when you consider the roots of the thing that Tim touts as one of the next big things: "parametric polymorphism", which is nothing but a (poor) adaptation to the imperative paradigm of a subset of Haskell's type system.

    Perhaps this is a matter of taste, but I tend to dislike strawmen, especially when attempted by this kind of engineer, who, for some reason, seems to have a dislike of anyone who even sounds like a computer scientist (the keyword here being "scientist"). Face it, Tim: simply sweeping all that doesn't conform to the New Jersey mindframe under a blanket of "purely theoretical languages that have no use in the real world" won't make it true. The fact is that, as long as we're discussing what programming will be like in the future, functional languages are far beyond the state-of-the-art from New Jersey. (Even other game developers recognize this, as evidenced in a mid-1999 article on Gamasutra about Haskell and other languages in gaming.)

    For a glimpse of the real future of programming (as well as computing in general), I suggest the TUNES project [tunes.org], of which I am a member.

    (By the way, Tim would have you believe that C was the very first structured programming language. I laughed especially hard when I read that part of the article.)
  • by vlax ( 1809 ) on Tuesday January 25, 2000 @01:54PM (#1337107)
    The first few paragraphs about human language - basically the idea that your language restricts what and how you can think - is about 95% false.

    It's called the Sapir-Whorf hypothesis (although there's some debate as to whether either Sapir or Whorf had anything to do with it) and is not widely held to be true among linguists. It occasionally comes up in the literature, mostly to trash it.

    Therefore, it is not a recurring theme in any literature about linguistics written by actual linguists of the last 30 years.

    As for the contention that programming languages limit what kinds of programs we write, I am sceptical, but not so dismissive. Certainly I hate writing code to do lots of complicated string handling in C, preferring perl or java for the job. However, by building my functions carefully I can do it in C, and in fact have. I would balk at doing the same kinds of programs in assembly language.

    That, however, seems mostly to be a question of finding the right tool for the job. If the next generation of languages allows me to abstract these differences and use one language with different toolkits (in principle I suppose this is possible with today's languages), I'm all for it.

  • Perhaps you think that if we don't use numbers, it's not science.

    I can't think offhand of any cases where this wouldn't be at least partly true. Science is about hypothesis, experiment and measurement.

    perhaps you think that there's nothing behind the eyes doing the processing that allows you to describe color

    No! You miss my point. An assertion about the role of language in the reporting of direct visual experiences says nothing about its role in abstract reasoning. Which is where the real controversy lies. Does our language guide and inform our mental habits - I think it does.

    If science is infiltrating the linguistics field in some institutions now, I'm glad to hear it.

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
  • Why did you not write a separate mini-language for each of the GUI widgets? Don't answer, I'll guess instead. It's because those widgets have a lot of properties in common so using a single interface to them made sense. Now this isn't an accurate definition of OO, but it's the basic kernel of the OO idea.

    I will agree with you that enforced OO is a pain in the ass, particularly when you aren't working with objects (I really hate those books that tell you to "find" your objects in the problem before you start coding). However, your post was dissing all of OO, and C++ in particular, for no apparent reason other than its age or the fact that you didn't write it.

    The advantages of OO programming do not require an OO language. In fact, most modern programmers probably use them on a day to day basis, just as they similarly use structured and functional concepts.

    Abstraction => integer, widget, file
    Encapsulation => source file, module
    Derivation => a button is a widget
    Polymorphism => I sort numbers and words the same way.
  • Oh, man, that is so *cool*... too bad the 3D block graphics kinda ruin the better 3D graphics to be represented as colored ASCII art. :) I'd much rather see something that just makes proper use of 2D graphics... xterm et al are likely out of the question, since DOS ASCII is such a vastly different encoding from anything used in most modern terminals, but graphic tile rendering could easily enhance the quality of ZZT, much like xnethack/gtknethack.

    Oh, zzt.org [zzt.org] is pretty cool too. PlanetQuake meets ZZT. I like. :) (How long do you think it'll take for some PlanetQuake junkie to "discover" ZZT like when Scorched Earth suddenly became the ultra-trendy Quake scene kr4d-31337 g4m3 0f th3 m0nth? :)
    ---
    "'Is not a quine' is not a quine" is a quine [nmsu.edu].

  • ... it was John Carmack's low level pipeline optimizations and hyperusage of the floating point capabilities of the Pentium architecture...
    I was under the impression that this was Michael Abrash's area of expertise...


    *Slap* Totally forgot about Abrash. My apologies.

    Nonetheless, I seem to remember Carmack being quite talkative about making sure both pipelines in the Pentium were processing *something* at any given time. I remember seeing some of Intel's tools for pipeline analysis at Software Development '97 and going "Damnit, iD did this stuff *by hand*".

    Anyone remember exactly how the programmatic load was distributed at iD? Abrash deserves his due :-)

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
  • by chadmulligan ( 87873 ) on Tuesday January 25, 2000 @02:01PM (#1337135)
    Quite a good article in fact. I've never programmed in UnrealScript and little in Java, but I agree with the author's conclusions about C++. I've not had the time to analyze his "virtual class" concept thoroughly, but I'd like to point out that very similar mechanisms already existed two decades ago in Smalltalk - and are in daily use in Objective-C, which is in several respects just Smalltalk with C syntax.

    When I started out in programming 31 years ago [mumbled the old geezer ;-)] I bought Jean Sammet's classic book "Programming Languages: History and Fundamentals" [amazon.com], a two-inch thick thing with not overly detailed descriptions of nearly a hundred programming languages. I've actually managed to work in over a dozen of those, as well as several modern ones, and about 20 assembly "languages" - it's been a while since I counted those separately, as nowadays one thinks of a single Assembly language with trivial variations for each CPU or MPU. Nearly all of those languages are extinct, I suppose. Remember, for instance, the Algol 68 disaster? An evolutionary dead-end which advanced compiler theory a lot, but has no descendants today... I could go on for a couple of pages, but will spare you.

    That said, I think the author's conclusions are valid for his chosen field - games for desktop computers. In embedded systems, C++ or even C is a perfectly valid way to go, since one usually has either the whole source available and compiling/linking together, or one has reasonably static library components. And in the smaller MPU/MCU universe, one is back in the olden days of 8K code space, 128 bytes of RAM, and bit-twiddling to save space or time...

    Regarding the non-games desktop arena, I definitely think that some evolution of NeXTstep (aka Apple's Cocoa) will point the way for the next 10 years. I'm starting to learn that now; the learning curve is a little steep, but it's very powerful stuff.

  • by MattMann ( 102516 ) on Tuesday January 25, 2000 @05:32PM (#1337140)
    I knew that article was off to a bad start when he didn't refer to "natural" and "formal" languages by their official names. This early post got it mostly right, but lacked examples, so too many people got distracted by that obscure philosophical discussion. Here are some examples that I think make the points vlax was attempting.

    Sapir-Whorf is wrong. It is very easy to see that language does not determine what or how you think:

    • When you can't remember a word, you know exactly what you want to say, you just can't get to the word
    • People who suffer from aphasic brain damage often lose entire vocabularies (all the colors, for example) but they don't lose the ability to see and think about the concepts
    • Deaf people don't learn their "native" language, and yet, they can think perfectly clearly, and
    • deaf Frenchmen are just as clearly "French" as other Frenchmen (tant pis? :)

    Russians have no word for fun? I doubt it! But, if they don't, it means they have no sense of fun. But in that case it's the culture influencing the language, not vice versa. English had no word for chic, but we knew/learned the concept so we borrowed a word from French. Our culture considers French culture to be chic and so do the French. But that's why they had a word for it: that's the culture talking. Eskimos have 32 words for snow (myth, I know) because they need them to succinctly describe the snow they encounter. English could add those too, if it needed them, just like cross country skiers quickly learn the names of the different waxes they need to use. It's pretty overwhelming: thinking takes place underneath the veneer of language.

    programming languages need not limit what kinds of programs we write

    This "string handling in C" counterexample is a small one. Full object-oriented programming is easily accessible in C. When you typedef a struct (let's use "Shape" as the example), include an array "data member" (hint: call it vtable) which holds pointers to functions. Use NewShape() instead of malloc to create one, and have NewShape initialize the vtable with pointers to functions like PrintShape. Create a "circle" struct, and have NewCircle first call NewShape; then, depending on how you initialize its vtable, you can have inheritance, virtuals, polymorphism, etc. There are small syntactic differences: instead of circle.PrintObj(), use PrintObj(circle), though you could use C's awkward function pointer syntax. Passing the obj as an arg is what C++ does underneath anyway. If you think that this is not OOP, you need to learn to think abstractly. There are some differences of course: this way lacks destructors (like Java) but it can solve the multiple inheritance "problem" by being explicit about the semantics.
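
    A minimal sketch of exactly this scheme, following the post's Shape/Circle naming (all names are illustrative); it uses only C constructs, so it compiles as C or C++:

      #include <stdio.h>
      #include <stdlib.h>

      typedef struct Shape Shape;
      struct Shape {
          void (*print)(Shape *self);   /* a one-slot "vtable" */
      };

      static void PrintShape(Shape *s) { printf("a generic shape\n"); }

      Shape *NewShape(void) {
          Shape *s = (Shape *)malloc(sizeof(Shape));
          s->print = PrintShape;        /* base-class behavior */
          return s;
      }

      typedef struct {
          Shape base;                   /* "inheritance": Shape is the first member */
          double radius;
      } Circle;

      static void PrintCircle(Shape *s) {
          Circle *c = (Circle *)s;      /* safe: base is the first member */
          printf("a circle of radius %g\n", c->radius);
      }

      Shape *NewCircle(double radius) {
          Circle *c = (Circle *)malloc(sizeof(Circle));
          c->base.print = PrintCircle;  /* "override" the virtual */
          c->radius = radius;
          return &c->base;
      }

      void PrintObj(Shape *s) { s->print(s); }  /* the polymorphic call site */

      int main(void) {
          Shape *objs[2];
          int i;
          objs[0] = NewShape();
          objs[1] = NewCircle(2.0);
          for (i = 0; i < 2; i++) PrintObj(objs[i]);  /* dispatches per object */
          for (i = 0; i < 2; i++) free(objs[i]);
          return 0;
      }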

    The weakness of the analysis in the article can be seen in the table. Look across the columns... object-object-object? That axis has little to distinguish it. Most of what he says is not outright wrong, and can even be insightful, but still so much is severely lacking. For example, I understand his "procedural" programming paradigm and why he is tempted to lump fortran and C together, and yet fortran can't conveniently be object oriented in the way that I describe for C.

  • by aheitner ( 3273 ) on Tuesday January 25, 2000 @02:03PM (#1337145)
    But I don't necessarily agree with him.

    He never once considers (as far as I could tell in my admittedly hasty reading) speed issues for more advanced languages. There are plenty of fancy languages out there, but C/C++ is still the choice for power without sacrificing speed.

    That said, I think more powerful constructs are very important, and I've seen several of them implemented in C++, with very positive effects.

    Persistence can make managing not only game objects (i.e. saved games become trivial) but also resources very simple, since a resource file (standard formats like TGA or DXF, or your own internal formats) is just a simple method of persisting data. If your data all lives on disk in its "dead" state, it's very easy to organize and bring up what you want out of a database. It's possible to build very intelligent tools, too -- for example, developing the art and changing the game constants on top of a database that detects when you change something and loads it into the running game. Again, I've seen this system implemented in C++.
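
    A hedged, minimal sketch of that idea (the Monster type and the file name are made up): objects that know how to write themselves to and read themselves from a stream, so a saved game is just persisting the live objects to their "dead" on-disk state:

      #include <fstream>
      #include <iostream>
      #include <cstdio>

      struct Monster {
          int x, y, health;

          void save(std::ostream &out) const { out << x << ' ' << y << ' ' << health << '\n'; }
          void load(std::istream &in)        { in >> x >> y >> health; }
      };

      int main() {
          Monster m = { 3, 7, 100 };
          { std::ofstream f("savegame.txt"); m.save(f); }   // write the "dead" state

          Monster restored = { 0, 0, 0 };
          { std::ifstream f("savegame.txt"); restored.load(f); }
          std::printf("restored: %d %d %d\n", restored.x, restored.y, restored.health);
          return 0;
      }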

    Along those lines, tools will become increasingly important as games get more complex. If you're going to run a massively-multiplayer RPG on the internet (à la Ultima Online, Everquest, etc) you need to have a crew of people there continuously creating new art, models, and code, and they need to have an efficient way of doing it. Metadata about the objects in your game can let older chunks of code learn about and manipulate these objects as they are added. You can also do a lot of this in C++; the framework may be painful but once implemented there's no reason for it to be hard to use.

    ....

    This is all well and good, but I still see a fundamental disconnect between language designers and practical programmers. Language designers are seeking elegant representations for cases that are complex in current practical languages (i.e. the mythical everpresent C/C++), but they aren't usually building "real" programs (the kind of programs you use every day) in them. There are other languages that are very practical, such as Python, or practical in theory, such as Java, but they're not particularly appropriate for games -- they're ungodly slow.

    Perhaps it is time for a new language that hopefully offers a cleaner organization than C++ (perhaps dropping, or at least heavily restructuring, some of the more hopeless features like crazy dynamic casts and the chimerical exception handling) but maintains the speed inherent in the language. It seems to me that game programmers are largely using a fairly safe subset of C++ anyhow; the representation changes in the language should make it easier to do the things we can now do with difficulty in C++, not make the impossible difficult. After all, that seems to be what Stroustrup was doing when he wrote the first versions of C++ back in the early 80s.

    ....

    You don't have to agree (except that Java sucks) and I'm not dissing PERL (but it will still never be a game language). Just my $0.02
  • by Anonymous Coward on Tuesday January 25, 2000 @02:06PM (#1337157)

    I had a look at this document expecting to find loads of stuff to contradict, but instead I find a well-written article and my respect for Tim Sweeney growing. It's all too common, particularly with games programmers, for these kinds of documents to be little more than a grindstone for a given set of ideas. Most of what is said concerning the development of programming languages is easy to agree with, although the type of language construct he is talking about may be a little further off than he realizes.

    For a long time there has been the argument of whether or not C++ is slower than C. In reality C++ is slower in some situations; in others it is no different from C. In C a variable is a variable and a function call is a function call; in C++ and other OO languages functions can be virtualized (without you realizing it), creating extra layers of indirection. A 50k-line piece of software using all your latest multiple inheritance, operator overloading etc. may be easier to develop and debug next to the 200k-line pure C version, but there is a good chance all that extra function call overhead and indirection will total up into a significant performance hit. Outside of applications like games, where processor load is high and performance is near the top of the priority list, language decisions become a little more vague. Richard Matthias makes a convincing argument in favour of languages like Visual Basic in this [exaflop.org] article. In games, however, as the article correctly states, performance is the key, and the kind of functionality Tim is looking for is going to produce even more indirection etc. That's not to say it won't happen; after all, one of the things I have noticed in the latest generation of x86 processors is that indirection and function call overhead seem to be less of a performance drag.
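
    A minimal illustration of that indirection (names made up): the virtual call has to go through the object's vtable at run time, while the non-virtual one can be resolved -- and often inlined -- at compile time:

      #include <cstdio>

      struct Entity {
          virtual ~Entity() {}
          virtual void tick() { std::printf("generic tick\n"); }  // dispatched via the vtable
          void tick_direct()  { std::printf("direct tick\n"); }   // resolved at compile time
      };

      struct Rocket : Entity {
          void tick() { std::printf("rocket tick\n"); }
      };

      int main() {
          Rocket r;
          Entity *e = &r;
          e->tick();         // indirect: load vtable pointer, load slot, call
          e->tick_direct();  // direct call, a candidate for inlining
          return 0;
      }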

    Most of the other things we deal with, like the hardware accelerators, processors and development tools, have changed radically in the last 5 years, but we are still using the same programming languages, so perhaps it is time for the language department to catch up.

  • by be-fan ( 61476 ) on Tuesday January 25, 2000 @02:08PM (#1337186)
    Tim has a point on a few things, most notably about how important game developers are to the PC. Although he is a little bit in left field with his $100 graphics chip vs. $100K SGI comparison, he is close. Without gaming and its inevitable push, 3D on the PC would not have happened. Sure, there were professional OpenGL cards before gaming cards came out, but how did they perform, and where would they be today? It is proven that consumer technology moves significantly faster than corporate and high-end technology. I seriously doubt that the high-end OpenGL cards on the PC, such as the Intergraph Wildcat 4000, would be as fast as they are without the competition from consumer cards. (Face it, would you want to release an OpenGL card for $2K when a 3Dfx was a quarter the performance, but only $200?) The proof is this: the Quadro GPU is faster than a Wildcat at about half the price. Why? Because it is based on the lightning-fast GeForce consumer card. Do you really think that Intergraph is going to sit on its ass, or are they going to develop a card that blows it away?

    Secondly, gaming has pushed the processor and multimedia subsystems to incredible levels. All the technologies that make a PC competitive with a low- or mid-end SGI, such as AGP, PCI, SDRAM, the new Intel multimedia hub, SSE, 3DNow!, etc., are mostly pushed by gaming. (Yes, SDRAM is a gaming technology. It features much lower latency than FPM. Latency really isn't important in image editing or word processing, but is critical in games. Same thing for PCI. ISA graphics cards were plenty fast for word processing.) You can trace gaming's influence even farther back than these relatively new technologies. I seriously doubt that PCs would ever have been seen as multimedia machines (a term that got coined in the early 486 days) without gaming. Games were the first "multimedia" apps out on PCs and the ones that continued to push technology.

    The fact that they pushed technology is also important. Do you need an 800 MHz Athlon to word process, watch movies, or even do photo editing (I mean cleaning up old pics, etc.)? But you do need it to play Quake. So while other technologies have come onto the PC because of its increasing power, the PC would not have that power if games had not pushed it there (or it would have been much slower to come around). Of course everyone benefits from that power. The PII was designed to play games (and to a lesser extent to do multimedia), but it still makes a damn good server.

    I often see on Slashdot, though, people who think that games aren't "real" apps, or that a server is a "real" computer. I've heard people say things along the lines of, "who needs games on Linux, go out and do some REAL work." Well, sorry to bust your bubble, but games got the PC where it is, and all the gamers, everyone who has ever used a multimedia application, all the people who used to have to buy a $20K SGI to do their graphics work but now can buy a $3000 PC, and yes, even all the sysadmins who saved $15000 by not having to buy a Sun, owe game developers.
  • by maynard ( 3337 ) on Tuesday January 25, 2000 @02:09PM (#1337194) Journal
    Let's be clear, Tim Sweeney's got my money for Unreal Tournament... and it plays in Linux on my cheap-ass Voodoo II card wonderfully. UT is plenty fun. But to compare even a high end AGP 3Dfx or NVIDIA card against serious SGI iron is just plain wrong. He's got a point that the newer 3D cards are good... they finally support 32 bit color (earlier 3Dfx cards like mine only support 16bit), they're reasonably fast... but they don't hardware accelerate anything but pushing pixels out to the display. Here's what Steve Baker wrote on the FlightGear Hardware Requirements [flightgear.org] page for a simple overview of the differences between high end 3D acceleration and what we're using on our PC's. I quote:
    "The important thing to think about when considering the performance of 3D cards is that the present generation of consumer-level boards only speed up the
    pixel-pushing side of things.

    When you are drawing graphics in 3D, there are generally a hierarchy of things to do:

    1.Stuff you do per-frame (like reading the mouse, doing flight dynamics)
    2.Stuff you do per-object (like coarse culling, level-of-detail)
    3.Stuff you do per-polygon or per-vertex (like rotate/translate/clip/illuminate)
    4.Stuff you do per-pixel (shading, texturing, Z-buffering, alpha-blend)

    On a $1M full-scale flight simulator visual system, you do step (1) in the main CPU, and the hardware takes care of (2), (3) and (4)

    On a $100k SGI workstation, you do (1) and (2) and the hardware takes care of (3) and (4)

    On a $200 PC 3D card, you (or your OpenGL library software - which runs on the main CPU) do (1), (2) and (3) and the hardware takes care of (4).

    On a machine without 3D hardware, the main CPU has to do everything."
    Now, I'm nitpicking. It's a cool article from a cool guy, who just made a minor exaggeration. Oh, and Flightgear [flightgear.org] is one Free Software (GPL) project you want to track... if you've got even a cheap-ass 3D accelerator (like me), and are into flight simulators, this is one cool project! :-)
  • by Pascal Q. Porcupine ( 4467 ) on Tuesday January 25, 2000 @02:13PM (#1337233) Homepage
    I think it's safe to say that quite a few CS geeks have been brought up to more modern programming practices thanks to Tim Sweeney's earlier work, most notably ZZT. ZZT wasn't much of a game on its own; where it shined was the fact you could extend it by writing your own games, since it included an object-oriented message-passing trivially-multi-tasking scripting language of its own, including a rather cool IDE. I learned quite a bit of high-level programming stuff simply by toying with this rather low-level interface; I learned about message-passing, parallel processing, deadlock-avoidance, and object-oriented programming in general thanks to sitting in front of my old 286 with Hercules monochrome into the wee hours of the night. I even learned about bad interface design by playing a lot of other people's games which assumed that I had a color display.

    I personally think UnrealScript is a sweet language, and I can't wait for Unreal binaries for Linux (so that I can finally play and create with Unreal, which I purchased so long ago). Even if I never get around to that, the principles behind it are what drive my thoughts for a 3D MUCK system I'm working on in what passes for my spare time.

    Back in the "good old days" when Epic Megagames was Potomac Computer Systems, I exchanged snailmail with him all the time. Every now and then I'd send him some program I was working on, and he'd send me a beta of whatever game he was working on (I was probably one of the first people on the planet to have, and beat, the first episode of Jill of the Jungle); one time he even gave me the registered version of ZZT. I still have that around somewhere, though it's kinda hard to use it since it's on a 5.25" disk. :)

    In any case, I just wanted to publicly express my thanks to him here. Once upon a time I emailed him directly and he was obviously very busy (Unreal was "about to come out;" this was a couple years before it finally did :) and I doubt he's gotten any less busy nowadays.

    I wonder if there's been any thought of writing a portable ZZT engine clone... anyone know of any good ZZT game archives? (Yeah, I know ZZT itself is free(beer) now, and would be free(speech) if the source code weren't lost... I'm too lazy to get dosemu working again though. :)
    ---
    "'Is not a quine' is not a quine" is a quine [nmsu.edu].

  • by SimonK ( 7722 ) on Wednesday January 26, 2000 @04:37AM (#1337234)

    Full object-oriented programming is easily accessible in C

    I take issue with the word 'easily'. You might think this is nit-picking, but it seems to have missed the point of the article. No one disputes that you *can* implement a framework for OO programming in C, just as you *can* implement a framework for 'true' procedural programming in old Fortran (newer versions have re-entrant procedures, pointers etc, and are therefore capable of everything C is). Similarly you can write code in Java or C++ to produce the effects of parametric polymorphism or virtual classes. Would you actually want to use it? Didn't think so.

    Language choice doesn't constrain what you can do directly. All languages are Turing complete after all. It just makes some things easier and some things harder. Java vs C++ is an excellent example. It's possible in Java to simulate the effects of multiple inheritance, but it's rarely done because it requires so much typing.

    Adding features to language of the kind suggested in the article increases what is loosely called their 'expressive power' and thus makes programming a higher level and in some ways simpler activity.

    Sapir-Whorf is indeed wrong, *but* there can be no dispute that choice of natural language makes it easier or harder to communicate certain things. The same is true of formal languages, no?

  • by SheldonYoung ( 25077 ) on Tuesday January 25, 2000 @02:13PM (#1337239)
    Tim paints himself into an interesting corner. He argues faster machines should not be required, yet he yearns for more and more abstraction.

    Abstraction is great for design and programming speed, and not good for executable speed. It's the trade off we make, and one of the biggest reasons applications are bigger and slower and better than ever.

    His C=A+B argument is a good example. The function which adds an element of A to an element of B incurs overhead because it is a function. If you make special allowances for the list-of cases then you have just undone the abstraction. Yes, optimizers can do good things, but they only work so far.
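
    A hedged sketch (the Vec type is made up) of one concrete form that cost takes in C++: an overloaded + on an array type builds and copies temporaries, where a single hand-written loop would make one fused pass:

      #include <cstdio>

      struct Vec { int v[1000]; };

      // Every use of + runs its own loop and returns a full temporary by value.
      Vec operator+(const Vec &a, const Vec &b) {
          Vec r;
          for (int i = 0; i < 1000; ++i) r.v[i] = a.v[i] + b.v[i];
          return r;
      }

      int main() {
          Vec A, B, C, D;
          for (int i = 0; i < 1000; ++i) { B.v[i] = i; C.v[i] = 2 * i; D.v[i] = 3 * i; }
          A = B + C + D;   // two temporaries and three passes over the data
          std::printf("%d\n", A.v[10]);   // prints 60
          return 0;
      }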

  • by TheDullBlade ( 28998 ) on Tuesday January 25, 2000 @02:17PM (#1337248)
    Few people use C++ for object oriented programming. Java was a bad idea that should be forgotten as quickly as possible. UnrealScript is a special purpose language for a game; in that case you throw out the rulebook and make it efficient for the narrow task at hand.

    The ideas in C++ and Java have been around for a long time, they've just been hyped relatively recently. They are neither the present nor the future of programming languages; they are the past: the idea of the One True Language. The present is a babel of special purpose languages, as is the future. The only difference in the future is that they will be easier to tie together.

    Certainly there will be more attempts to build the One True Language, but they will fall as short of the goal as did Standard C++, Java, and Ada, and blend into the background noise.

    My personal favorite programming method is to generate C (well, C++ using struct methods to shorten function names) code with Perl (while it has other uses, it stands out for me as the best quick-hack text-processing language out there). If you can't express it readably in the language you've got, express it in a mess generated by readable code in another language. Perl is handy because you can dump a whole other text file into its midst with the $var = <<'END_OF_C'; syntax.

    You just use make to run the Perl script (I use the extension p2c) and redirect the output into a .c file.

    I don't stick to one language when another does the job better; a typical small "C" project of mine involves 3 or 4 languages, while a large project might involve a dozen or more mini-languages I wrote to express a class of GUI widgets or text-parsing details. You might want to try it; I find it very efficient.
  • by Hard_Code ( 49548 ) on Wednesday January 26, 2000 @06:07AM (#1337280)
    Exactly...people are beating up Tim for no reason.
    He /explicitly/ stated he was making a distinction between what is /possible/ and what is /practical/. While OO might be /possible/ in C, it is only really /practical/ in C++. Same with assembly.

    "Sapir-Worff is indeed wrong, *but* there can be no dispute that choice of natural language makes it easier or harder to communicate certain things. The same is true of formal languages, no?"

    Right... the previous poster and others keep saying that language "doesn't determine what or how you think". They pick adults, run some tests on them, and find that, lo and behold, they /can/ actually think about concepts without the words. Another poster says that it is merely /culture/ influencing /language/. Anyone who knows anything about child development or anthropology knows that culture and language are intimately intertwined: culture affects language, which in turn affects culture. The amalgam of the two can influence how you think, or at least the /frame of reference/ through which you view the world.

    Jazilla.org - the Java Mozilla [sourceforge.net]
  • by vectro ( 54263 ) <vectro@pipeline.com> on Tuesday January 25, 2000 @02:24PM (#1337308)
    This is a good point. Another example is Free Software or Open Source: there is no English word like the French libre to connote free as in speech rather than free as in beer.
  • by vlax ( 1809 ) on Tuesday January 25, 2000 @02:27PM (#1337326)
    That 5% was a concession to the handful of linguists (mostly anthropologists) who still take some portion of Sapir-Whorf seriously. In some very weakened form, the idea is still possible, but the strongest form is either unverifiable (and thus has no place in linguistic science) or has already been falsified (as the Berlin and Kay studies, among others, ultimately showed.)

    A unilingual Chinese speaker is capable of understanding the notion of 'moral hazard' and can use it as well as an anglophone. Speaking Chinese is not a barrier to comprehension.

    Should a Chinese economist wish to discuss the problem of 'moral hazards' in a paper in Chinese, this person will quickly find or devise a term for it and continue without difficulty, at most having to explain the notion at the beginning of the paper. The same is true of most anglophones, the majority of whom probably do not understand the term moral hazard intuitively (at least in the sense that I understand it - primarily as a term in economics) and would require that same explanation.

    If this hypothetical economist wishes to show off his English, or simply because any short Chinese term he might use for 'moral hazard' implies too many unwanted connotations, he may simply plop the English term 'moral hazard' into the language. That's how 'deja vu' started. There is no reason why 'deja vu' can't be said using other terms in English - the concept no doubt existed for anglophones before the French term became current.
  • by Stickerboy ( 61554 ) on Tuesday January 25, 2000 @02:30PM (#1337346) Homepage
    Something Tim Sweeney seems to have left out of the discussion of next-generation language concepts is real-world performance concerns. He mentions it briefly: "C++ failed to deliver binary platform-independence, and Java failed to deliver high performance."

    Does he not see that binary platform-independence is exactly what led to Java's performance problems? Even with hacks like JIT compilers, the performance of Java bytecode lags well behind that of natively compiled binaries. This is just one of many examples that illustrate a common principle: at every level of language advancement, there is going to be some performance trade-off.

    The best example of how this affects programmers is the Quake 1 engine. It was released before mainstream hardware acceleration, so the most processor-intensive routines in the engine are written in assembler, and just about every performance/elegance conflict is resolved in favor of performance. The result? We all played Quake at 35 fps on a Pentium 166 in 320x200 software mode. In the newly released source code, rewriting the assembler in C drops performance by close to half. With today's machines it's not a problem, but back then I don't think anyone would have enjoyed playing Quake at sub-20 fps.

    But then he goes on to lay out what he sees as the major shortcomings of current-generation languages, which really come down to:
    • The distinction between primitives and objects (especially the lack of easy manipulation of objects a la primitives), and
    • The lack of uber-classes.
    Let's go back to the article: "Stop for a second and ponder the power of such a concept -- with about four lines of code, you've sub-classed a 150,000 line game engine and added a new feature that will propagate to several hundred classes in that framework. Besides that, it just seems beautifully high-level to be able to express such a concept with a single statement "class DukeNukemEngine extends UnrealEngine"."

    I wince just thinking about the compile times programs in such a language would need. Throw in a requirement for the language to be binary platform-independent, and who needs Microsoft to spur hardware upgrades?

    Tim Sweeney identifies social inertia as the main cause of reluctance to adopt next-generation languages, but a concern of just as much importance in developers' minds is performance, as the development of Quake 1 shows. Until near-infinite processing power and/or bandwidth is accessible to consumers, performance will continue to hold back advancement in this way. What else can I say? Tim Sweeney is a man ahead of his time.
  • by roystgnr ( 4015 ) <roy&stogners,org> on Tuesday January 25, 2000 @02:34PM (#1337360) Homepage
    Take a look at the GeForce - it does (3) and (4) just like those SGI workstations, and on a $200 (or $300 depending on things like RAM bandwidth) PC 3D card. No, it's not up to $100k SGI workstation standards, but it's getting closer. Nvidia's Quadro (which isn't much more than a souped-up Geforce) isn't a gamer's card, and SGI is working with Nvidia on future PC accelerators IIRC. Take a look at Anandtech's Quadro DDR review [anandtech.com] and see what you can get on a PC for under $1000.

