
Extensible Programming for the 21st Century 438

Anonymous Cowardly Lion writes "An interesting article written by a professor at the University of Toronto argues that next-generation programming systems will combine compilers, linkers, debuggers, and other tools into plugin frameworks [mirror], rather than monolithic applications. Programmers will be able to extend the syntax of programming languages, and programs will be stored as XML documents so that programmers can represent and process data and meta-data uniformly. It's a very insightful and thought-provoking read. Is this going to be the next generation of extensible programming?"
This discussion has been archived. No new comments can be posted.

  • Go Greg! (Score:5, Informative)

    by xcham ( 200708 ) * on Wednesday May 26, 2004 @06:57PM (#9263350)
    The document is mirrored here [third-bit.com] to help compensate for the bandwidth deluge.
  • by Lord Grey ( 463613 ) * on Wednesday May 26, 2004 @06:57PM (#9263357)
    ... will obviously be "forbidden." Yes, I did RTFA.
    • by Impy the Impiuos Imp ( 442658 ) on Wednesday May 26, 2004 @07:23PM (#9263572) Journal
      > Programmers will be able to extend the syntax
      > of programming languages, and programs will be
      > stored as XML documents so that programmers
      > can represent and process data and meta-data
      > uniformly.

      Sounds like they've found a use for future eight trillion teraflop processors. Scripting on top of scripting on top of scripting. :(

      • Just for clarification: you don't actually code in XML. This is what it means:

        <function>
          <name>foo</name>
          <abstract>Blah blah</abstract>
          <args>
            <arg>
              <name>bar</name>
              <type>int</type>
              <purpose>Blah blah</purpose>
        ...

        When you fire up your IDE, you'll see:

        // abstract:
        // Blah bl
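A minimal sketch of the round trip the parent describes: an editor front-end reads the stored XML and renders it as ordinary-looking source. The element names (`<function>`, `<abstract>`, `<args>`, ...) follow the snippet above, not any real schema.

```python
# Hypothetical sketch: render the XML form of a function as the
# plain-text view an IDE might show. Element names are assumptions
# taken from the example above, not a real storage format.
import xml.etree.ElementTree as ET

doc = """
<function>
  <name>foo</name>
  <abstract>Blah blah</abstract>
  <args>
    <arg>
      <name>bar</name>
      <type>int</type>
      <purpose>Blah blah</purpose>
    </arg>
  </args>
</function>
"""

def render(xml_text):
    fn = ET.fromstring(xml_text)
    args = ", ".join(
        f"{a.findtext('type')} {a.findtext('name')}"
        for a in fn.find("args").findall("arg"))
    return "\n".join([f"// abstract: {fn.findtext('abstract')}",
                      f"{fn.findtext('name')}({args})"])

print(render(doc))
# // abstract: Blah blah
# foo(int bar)
```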

  • by leandrod ( 17766 ) <l@dutras . o rg> on Wednesday May 26, 2004 @07:00PM (#9263373) Homepage Journal
    This is incredibly stupid. How come XML helps in dealing with data and metadata? Metadata *is* data.

    What we really want is a user-extensible type system, like the one proposed by Date and Darwen in _The Third Manifesto_ for relational database systems. Remember, types are domains plus operators.
    • This is incredibly stupid. How come XML helps in dealing with data and metadata? Metadata *is* data.

      Just an observation. This comment was literally posted 2 minutes after the story went live on /. - thus there's absolutely no way in hell you've read the paper and already you're trashing it. How mature.
    • by MisterFancypants ( 615129 ) on Wednesday May 26, 2004 @07:04PM (#9263428)
      This is incredibly stupid. How come XML helps in dealing with data and metadata? Metadata *is* data.

      It goes up to 11, see. That's one higher.

    • This is incredibly stupid. How come XML helps in dealing with data and metadata? Metadata *is* data.

      Metadata is, of course, data. Sometimes, a finer-grained taxonomy method is helpful. After all, sausages and uranium are both matter, but calling them matter doesn't help me with my dinner selection.
      Mmmmmm - sausage.

      • >
        Sometimes, a finer-grained taxonomy method is helpful

        But when it comes to data it just confuses, because taxonomies imply a hierarchy, and hierarchies are hard to agree upon in the first place, are quite arbitrary, and tend to change quite fast and radically.

        The relational model already provides a better alternative to taxonomies: attributes. And then, metadata becomes just data.
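The relational point above can be sketched in a few lines: keep the metadata as ordinary rows in an ordinary table, and query it with exactly the same operations as any other data (table and column names here are made up for illustration).

```python
# Sketch: "metadata is just data" in the relational style.
# A table of attributes describes another table, and is queried
# the same way as any other table. Names are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE person (name TEXT, age INTEGER)")
# Metadata about 'person' is itself just rows in another table:
db.execute("CREATE TABLE attributes (tbl TEXT, attr TEXT, type TEXT)")
db.executemany("INSERT INTO attributes VALUES (?, ?, ?)",
               [("person", "name", "TEXT"),
                ("person", "age", "INTEGER")])

rows = db.execute(
    "SELECT attr, type FROM attributes WHERE tbl = 'person'").fetchall()
print(rows)   # [('name', 'TEXT'), ('age', 'INTEGER')]
```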

  • been done (Score:5, Informative)

    by studboy ( 64792 ) on Wednesday May 26, 2004 @07:00PM (#9263375) Homepage
    programs will be stored ... so that programmers can represent and process data and meta-data uniformly.

    Yup. Back in the day, we called this "Lisp". It was about as readable as XML, but a hella lot more fun.
    • Re:been done (Score:3, Insightful)

      by Carnildo ( 712617 )
      Having worked with both XML and Lisp, I'd say that Lisp is easier to read.
    • Re:been done (Score:5, Interesting)

      by ron_ivi ( 607351 ) <sdotno@cheapcomp ... s.com minus poet> on Wednesday May 26, 2004 @07:22PM (#9263564)
      Agreed, and even mentioned in TFA:
      3. You don't need XML to do this.

      Scheme proves by example that everything described in this article could have been done twenty years ago, and could be done today without XML.

      And IMHO lisp's syntax has always had a nicer structure than XML's repetitive redundancy.

      <whatever>
        <you> want to do in <xml>xml</xml>
        </you>
      </whatever>
      is nothing but a set of s-expressions that read much nicer in a lisp-like syntax:
      (whatever
        (you want to do in (xml xml)
        )
      )
      IMveryHO the big failure of the lisp guys of old was that they were so proud of how many ')' they could put next to each other that it made their code harder to read than necessary. I bet XML would have failed too if it were commonly written
      <whatever>
        <you> want to do in <xml> xml </xml></you></whatever>
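The claim that the two notations are the same tree is easy to demonstrate mechanically; a short sketch converting the XML form above into the s-expression form (ignoring attributes and tail text, which the example doesn't use):

```python
# Sketch: XML and s-expressions describe the same tree, so a few
# lines convert one to the other. Handles only the simple shape
# used in the example (no attributes, no tail text).
import xml.etree.ElementTree as ET

def to_sexpr(elem):
    parts = [elem.tag]
    if elem.text and elem.text.strip():
        parts.append(elem.text.strip())
    parts.extend(to_sexpr(child) for child in elem)
    return "(" + " ".join(parts) + ")"

doc = "<whatever><you>want to do in <xml>xml</xml></you></whatever>"
print(to_sexpr(ET.fromstring(doc)))
# (whatever (you want to do in (xml xml)))
```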

      • Re:been done (Score:4, Insightful)

        by Wolfkin ( 17910 ) on Wednesday May 26, 2004 @08:14PM (#9263899) Homepage
        Piling up all the closing parens makes the code *easier* to read, not harder. After you've been lisping for a few weeks, the parens just sorta disappear, and you rely on indentation to give you the overall structure of the current function, and then just add however many are left over at the end. Any good editor will let you know when you've put enough, and you can define a "fill out close parens to the top level" command in most.
        • Re:been done (Score:3, Interesting)

          by makapuf ( 412290 )

          So why not take the Python approach and use the indentation as the structure?

          compare

          (whatever (you
          want to
          do
          in (xml xml)
          )
          )
          (indentation doesn't follow structure)

          with

          whatever :
          you :
          want to do in : xml
          • Re:been done (Score:5, Informative)

            by boots@work ( 17305 ) on Thursday May 27, 2004 @04:01AM (#9264925)
            This has been proposed and implemented several times as an alternative syntax. I think Arc [paulgraham.com] is meant to have this, but it's hardly the first. I don't think it's ever really caught on.

            Why?

            - Sometimes it's nice to put code into things that don't reliably preserve whitespace, such as, say, comment fields on web sites.

            - Parens are well-established in lisp. If you change it you give an additional barrier to people coming from other lisp dialects, without particularly helping people coming from elsewhere.

            - Whitespace by itself is not enough. Do you want to write (+ 3 (* 4 5)) across 3-5 lines? Python ends up with fairly complex rules about backslashes, open parens, etc.

            - One advantage of lisp is that it's easy to write out from a program. This is really not true of Python.

            - If you accept that we need paren syntax, then you can wonder whether indentation should be supported as an alternative. But having two different syntaxes for one language, though an interesting idea, is likely to cause a lot of practical confusion.

            So I think all you really want is an editor that ensures the indentation is always valid, and that can highlight parens and do other things. emacs goes a long way, but it could be better -- for example by making outer parens larger, as in TeX-printed or handwritten mathematics.

            In my humble opinion what Lisp needs to take from Python is not semantic indentation, but rather a single standard dialect with good OS bindings. The last thing we need is yet another slightly incompatible dialect that can't bind to existing code. Sheesh; I love lisp but lisp implementers really exasperate me.
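The "fill out close parens to the top level" editor command mentioned earlier in this thread is trivial to implement; a sketch (simplified: it tracks strings but not comments or escaped characters):

```python
# Sketch of an editor command that appends however many ')' are
# needed to close all open parens back to the top level.
# Simplification: handles double-quoted strings, not comments.
def close_to_top(code):
    depth = 0
    in_string = False
    for ch in code:
        if ch == '"':
            in_string = not in_string
        elif not in_string:
            if ch == '(':
                depth += 1
            elif ch == ')':
                depth -= 1
    return code + ")" * max(depth, 0)

print(close_to_top("(define (f x)\n  (+ x (* 2"))
# (define (f x)
#   (+ x (* 2)))
```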
    • Re:been done (Score:5, Insightful)

      by Tarantolato ( 760537 ) on Wednesday May 26, 2004 @08:32PM (#9264012) Journal
      A cynical take on this article would be that it's sour grapes by a stranded Lisp/Smalltalk guy who never got used to doing things The Unix Way, and still wants to lead the unwashed masses kicking and screaming to the promised land.

      "PUT DOWN THE VIM! WE HAVE YOU SURROUNDED!"

      All cynicism aside, this is a mixed bag. Extensible syntax is a great idea; and yeah, Lisp already had it in the 50's. What needs to happen for broader adoption is to do it in a natural Algolish syntax, which basically means limiting functionality. Languages like Python and Ruby (with lambdas and blocks/procs) are starting to do it and I expect to see others follow.

      The whole "seamless translation into XML and back into any language of your choice" is a lovely idea, but even small bugs in implementation would handicap its usefulness considerably. It'd also take a tool oodles more complicated than gcc, which he doesn't seem to like.

      Finally, tight coupling of language and development environment can mean added productivity, but it also tends to mean less flexibility in practice: this is one of several reasons that Smalltalk hasn't caught on.
      • Even without bugs in the implementation, the XML format won't work to a general enough degree. Let's see... we already have a format which many programming languages translate to, and which can be translated back to a limited degree, and that's object code. Translating object code back to a programming language may work, sure, but it doesn't generate the same level of semantics which were there in the original.

        Now translate the object code to XML. Is it any better? Probably not. It's now readable, bu

  • Can't wait (Score:2, Interesting)

    A good example is code like this in C++

    Vector a,b,c;
    . . .
    c = a+2*b;

    Written naively the overloaded '+' operator returns a vector object. But I don't want any object returned. I want the code to be expanded in place as

    c[0] = a[0] + 2*b[0];
    c[1] = a[1] + 2*b[1];
    c[2] = a[2] + 2*b[2];

    Now you can do this in C++, but look at what you need to implement [oonumerics.org] to do it. The code is a hideous nightmare of template metaprogramming. Of course you can do it in a language like C, but then you lose the ability to express yourself cl
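The delayed-evaluation trick behind expression templates can be sketched outside C++ too; here is the idea in Python (for brevity; the real Blitz++ machinery is C++ template metaprogramming, and class names here are made up): operators build a lazy expression tree, and elements are computed one at a time on assignment, with no intermediate vectors allocated.

```python
# Sketch (in Python, not C++) of the expression-template idea:
# '+' and '*' build a lazy expression instead of temporaries;
# c.assign(a + 2*b) runs one loop computing c[i] = a[i] + 2*b[i].
class Expr:
    def __add__(self, other): return BinOp(self, other, lambda x, y: x + y)
    def __rmul__(self, k):    return Scale(self, k)

class Vec(Expr):
    def __init__(self, data): self.data = list(data)
    def __getitem__(self, i): return self.data[i]
    def __len__(self):        return len(self.data)
    def assign(self, expr):   # one loop, no temporary vectors
        for i in range(len(self)):
            self.data[i] = expr[i]

class BinOp(Expr):
    def __init__(self, l, r, op): self.l, self.r, self.op = l, r, op
    def __getitem__(self, i): return self.op(self.l[i], self.r[i])

class Scale(Expr):
    def __init__(self, v, k): self.v, self.k = v, k
    def __getitem__(self, i): return self.k * self.v[i]

a, b, c = Vec([1, 2, 3]), Vec([10, 20, 30]), Vec([0, 0, 0])
c.assign(a + 2 * b)
print(c.data)   # [21, 42, 63]
```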
    • Re:Can't wait (Score:3, Informative)

      Actually I believe this particular example might well be optimised out by the compiler to the code you mention. But your point is probably valid for trickier problems.
      • Actually, this example often isn't optimized away which is why the Blitz++ library exists. In fact, the author of the library, Todd Veldhuizen, has written at least one paper spelling out what compiler developers need to do in order to ensure stuff like this is optimized away. I've done lots of experiments myself. As soon as you put something like my example line inside a template that is instantiated from another template etc. I've found compilers start missing what you think ought to be easy optimizations
    • Re:Can't wait (Score:5, Insightful)

      by X ( 1235 ) <x@xman.org> on Wednesday May 26, 2004 @07:11PM (#9263487) Homepage Journal
      Now you can do this in C++, but look at what you need to implement to do it

      It would be great, if instead, I could hook into the compiler and tell it exactly how it should handle vectors.

      Umm... what makes you think that programming a compiler is going to be more straightforward than doing generic programming? That seems like a huge assumption to me.

      The closest thing I've seen to what this article talks about was CLOS's MOP, which was great, but once again, a lot of people had a hard time grokking it.
      • what makes you think that programming a compiler is going to be more straight forward

        I don't know how to make it easier myself. I think it's a hard problem and I hope people smarter than me are working on it. But I have a hunch it can be made a lot easier than template metaprogramming, say. My reasoning is simply this - template metaprogramming wasn't designed to do the sorts of things that are being done with it, it's almost an accident that you can write stuff like Blitz++ or boost::mpl. I've a feeling

        • I've a feeling that something designed to do this has got to be easier to use.

          It might be easier to read, but the fundamental challenges with generic programming are essentially language and type theory problems which exist independently of the C++ language or its syntax.
    • Re:Can't wait (Score:3, Insightful)

      I'd think most compilers will already expand it to that. I know Visual C++.NET does.
      • (1) Try a more complex example along similar lines. (2) Try embedding that code, not in a simple function, but inside a nest of templates. (3) Implement the vector class in a generic way so that it accepts different types and sizes (known at compile time). I get great results in Visual C++ with simple examples too.
    • by real gumby ( 11516 ) on Thursday May 27, 2004 @01:54AM (#9264455)
      It would be great, if instead, I could hook into the compiler and tell it exactly how it should handle vectors.

      Well of course that's what templates are. Yes, their syntax is horrendous but that's what comes of trying to wedge the concept into the existing crannies of C syntax (or when, as Stroustrup remarked to me once, "the ecological niche was already polluted").

      If you hanker for a language in which metasyntactic extension is natural, you need Lisp macros [oopweb.com] (or here [c2.com] and here [c2.com] for a more complex example), Scheme "hygienic" macros [mit.edu] or the CLOS MOP [lisp.org].

      But if you really want to consider "hooking into the compiler" as you say then you should look at the reflective programming work, the groundwork for which was laid down almost 25 years ago [acm.org] by Brian Cantwell Smith [indiana.edu] and was even implemented [acm.org], by me and others, back then. Although a lot of work continued in this area [readscheme.org], that vein has pretty much been mined: unless you can think up a completely new control structure, there's not a huge amount more you can do with such a system than you could with a normal metasyntactic extension mechanism.

      HTH
      -d

  • Wrong. (Score:5, Funny)

    by Bingo Foo ( 179380 ) on Wednesday May 26, 2004 @07:04PM (#9263424)
    XML? Bah. Next generation languages will be written in "WIMNNWIS" (What I mean not necessarily what I say) and will run on processors liberally sprinkled with pixie dust.

  • by the_skywise ( 189793 ) on Wednesday May 26, 2004 @07:04PM (#9263426)
    And suddenly he's prophesying the future?

    Editors like Emacs, Visual SlickEdit and even the loved/loathed MS Visual Studio have plug-in frameworks.

    As for XML being the "glue" for holding things together... No. It'll be a data-neutral "modulator": you emit your data from your program by name in a particular format. Transmission and receipt by the other programs will be handled by a remodulator. In between it might be XML, it might be binary, it might be whatever you feel like using that day.
    (and no, I haven't read the article (FORBIDDEN))
  • by SharpFang ( 651121 ) on Wednesday May 26, 2004 @07:05PM (#9263438) Homepage Journal
    How can humans tell what will happen in a few years if they can't tell what will happen tomorrow?
    I'd completely agree if the claim wasn't "that next-generation programming systems will combine compilers"... but "should combine...".
    Right, the idea is nice. But where the market will go, how big corporations will guide the development, what will become the new fancy, or whether there will be some new development that renders XML completely obsolete and ugly compared to that "new thing" - we don't know.
    • > that next-generation programming systems will combine compilers"... but "should combine...

      Do there not already exist development systems that can handle code written in multiple languages?
      (e.g. some of the modules being written in C++, others in Pascal or whatever)
        Do there not already exist development systems that can handle code written in multiple languages?
        (e.g. some of the modules being written in C++, others in Pascal or whatever)


        Most linkers have been able to do this for at least a decade. You feed the program into the "make" utility in whatever languages it's using. For each file, it runs the appropriate compiler, which produces object code in the appropriate format, usually using the C ABI. The linker then combines those object files with the appropri
        • I'm more specifically referring to a debugger which will, when stepping through the code, be able to display and debug modules written in several different languages. I know this is possible with HDLs. I know of an environment that can debug a model consisting of modules written in VHDL, Verilog HDL and (a version of ) C. As you say, linking an executable from modules of more than one language is old news.
  • Instead of treating each new idea as a special case, they allow programmers to say what they want to, when they want to, as they want to.

    Is this not the Ultimate goal of programming? The Holy Grail of programming perhaps?
  • Plugins?! (Score:4, Insightful)

    by Greger47 ( 516305 ) on Wednesday May 26, 2004 @07:07PM (#9263448)

    Developers can add new or improved optimizations to SUIF by writing a filter and adding it to the compiler's configuration.

    Dude, I have enough trouble debugging my code without having my homemade, guaranteed to be buggy, optimizer introducing even more bugs...

    /greger

  • by mcrbids ( 148650 ) on Wednesday May 26, 2004 @07:08PM (#9263454) Journal
    So here I am, coding away merrily, when I run into a *STICKY* problem.

    I'm running applications as user X, and need to access data as user Y. I have all the routines and everything written (in PHP) to access the data, but I need to do this as user X, while accessing the data as user Y.

    There's just no easy way to do this. You have to use some kind of glue (such as XML), along with parsers, socket connections, pipes, shared memory, and all that jazz just to be able to access data remotely.

    Ouch.

    What I'd like to see is the concept of a "remote object". Imagine standard OOP, except that a particular object doesn't have to exist in the same memory/process space as the parent.

    For example, instantiate an object on a remote server, or as another user on the same server, or at least in a different memory space as the same user & server.

    The biggest problem with XML is that it's heavy, very heavy, and requires specialized scripting in order to work.

    If you have a class already written that does what you need, you should be able to simply instantiate that object in the context you need it to run in, and then begin using it, COM style.

    Obviously, some calls (such as GLOBAL) would be affected or even disabled with such functionality - but can you imagine the benefits?

    Ah well. That world doesn't exist, yet.
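Something close to that world does exist in miniature; a rough approximation of the "remote object" idea using Python's standard-library XML-RPC (which, ironically, uses XML under the hood). This is not COM-grade remoting, and there is no remote instantiation here, just method dispatch across a socket to an object living in another thread/process:

```python
# Rough sketch of a "remote object": methods of an instance served
# in one context become callable through a proxy that looks like a
# local object. Stdlib only; not COM -- no remote instantiation.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

class Counter:                        # the "remote" object
    def __init__(self): self.n = 0
    def add(self, k):
        self.n += k
        return self.n

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_instance(Counter())
threading.Thread(target=server.serve_forever, daemon=True).start()

remote = ServerProxy(f"http://127.0.0.1:{port}")  # looks local, isn't
first = remote.add(5)
second = remote.add(7)
print(first, second)   # 5 12
server.shutdown()
```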
    • by Greger47 ( 516305 ) on Wednesday May 26, 2004 @07:12PM (#9263497)
      So whats wrong with CORBA [omg.org]? Here's one among several implementations for PHP: http://sourceforge.net/projects/universe-phpext/ [sourceforge.net]

      /greger

      • Nothing is wrong with CORBA, other than that it's clunky as hell. And nothing is wrong with SOAP, other than that it discarded the "Simple" that is supposed to be the first letter of its name long ago, and is now turning into CORBA. I would love to see a true, universal, easy to program remote object model, but my experiences trying to use the current implementations have convinced me that we have a long way to go.

        No, I don't claim to know what a better way is, only that neither CORBA nor SOAP is it.
      If you have a class already written that does what you need, you should be able to simply instantiate that object in the context you need it to run in, and then begin using it, COM style.

      So I can use this to instantiate a "shell" object in the context of "root" with no problems?
    Basically, you're saying that next generation programming languages will all be Java with RMI and Jini.

      Decent remoting is already built into many development frameworks, including Java and .Net. This is nothing new, it's been around for years. I suggest you look into it (especially if you're using *ugh* PHP for anything; been there, done that too; trust me, ditch it while you still have time to go with another framework).

    • .NET provides "remoting" and Java provides RMI. These are essentially Remote Objects. CORBA provides this cross-platform. If I understand exactly what you're asking for, it already exists. Clients can instantiate objects remotely, and maintain virtually local control of these remote objects, using the aforementioned technologies.

      However, if you're using PHP, the whole concept of remote object has to be patched together (just like in ASP) simply because there is no true "state" in web applications.
  • by Carnildo ( 712617 ) on Wednesday May 26, 2004 @07:08PM (#9263459) Homepage Journal
    A brief read of the article indicates that the author is trying to solve problems that don't exist, and as a result, is coming up with solutions that are worse than the supposed problems.
  • Until we have a major change in the hardware elements, programming languages will become more of the same, with some refinements not to the languages but to the object models and header files associated with them.
  • Sounds like Forth (Score:4, Insightful)

    by Anonymous Coward on Wednesday May 26, 2004 @07:09PM (#9263462)
    Except for that XML stuff.

    I don't understand this fascination with XML. It's just a generic container for storing data - nothing more. OpenOffice uses it as the underlying format for storing documents, but that doesn't mean I have to deal with it when writing a document. It's transparent to the end user.

    In the same way, why should I have to deal with it when coding? It's sort of like requiring coders to be able to pop up a hex editor and cruise through the code.

    Remember MVC (model-view-controller)? Being able to disassociate the different parts was considered a good thing. Swing decided it was too cumbersome, ASP.NET joins them at the hip, and now we've come all the way around, with Microsoft proclaiming with XAML that everything should be dumped into one big XML box.

    Bleah.

    • dealing with XML (Score:5, Informative)

      by Yobgod Ababua ( 68687 ) on Wednesday May 26, 2004 @08:30PM (#9264001)
      That's exactly one of the author's points! You shouldn't (and in his vision won't) have to deal with the XML directly, -unless- you are one of the people actually writing new plugins rather than just using them.

      His suggestion is primarily that we start using editors that transparently present the 'code file' in our choice of format rather than forcing us to edit it byte-by-byte. It's like the syntax highlighting you probably use now, only affecting more than just colors.

      Using XML for the underlying syntax is mostly irrelevant to his proposal, but he suggests it merely because it is currently popular, well supported, and well suited to its primary job of presenting data in an easily MACHINE READABLE format.

      His proposal is, in fact, exactly the opposite of requiring coders to pop open a hex editor, and he likens our current ASCII-only coding methods to doing exactly that at one point.

    • The funny part is that even though in 99% of the cases XML is indeed transparent to the user, it is still selected over binary formats (DER, TLV, whatever) because it's ASCII!

      Having talked to religious XML zealots in the past, I gathered that they either were simply not aware of the alternatives or were 'afraid' of binary formats due to the nature of their programming environments (VB & co). Duh.

  • Hot air? (Score:3, Insightful)

    by Henrik S. Hansen ( 775975 ) <hsh@member.fsf.org> on Wednesday May 26, 2004 @07:10PM (#9263470) Homepage
    Ok. Perhaps there are some interesting things to pick out from this. But it is a giant leap to claim that the next big thing will be extensible programming. I think we can all remember XML, OOP, and all the rest.

    Good ideas, which are the correct choice for some problem domains (OOP is for instance often a good choice for GUIs, IMO). But they're not the best choice for everything.

    So this is Yet Another Buzzword. At least he didn't shorten it to XP. ;)

  • Why XML? (Score:5, Insightful)

    by ProfessionalCookie ( 673314 ) on Wednesday May 26, 2004 @07:10PM (#9263472) Journal
    XML is great and all, but there are a few killer disadvantages. It can be really, really slow. It can mean generating huge files. A well documented open format is better than a "human readable" XML template. How many XML files have you looked at that have this kind of thing:

    <DATA>2ED4F64676766DC7B87A76A65B1722303FFF</DATA>


    Sometimes XML is not the answer. That being said, there are also some really great uses, but XML was not made for everything.
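The size complaint is easy to quantify: hex-encoding binary data into an element like the `<DATA>` example above doubles the payload before a single tag is counted (the 1 KiB sample here is arbitrary).

```python
# Quick check of the bloat claim: 1 KiB of binary data, hex-encoded
# and wrapped in a <DATA> element as in the example above.
import binascii

raw = bytes(range(256)) * 4                       # 1024 bytes of sample data
hex_payload = binascii.hexlify(raw).decode().upper()
xml_doc = f"<DATA>{hex_payload}</DATA>"

print(len(raw), len(xml_doc))   # 1024 2061 -- more than 2x before nesting
```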
    • The love of XML and Java stun me. It's the younger, LISP-ignorant generations who are infatuated with this crap.

      I blame the education system, with all mealy mouthed teachers who wince if Little Johnny cries about too much homework causing him to lose 10 minutes from his seven hours of Halo every evening.

      Teacher of the Future: Come on, Johnny! You can do it! You can program it!

      Johnny: It's too hard!

      TotF: Johnny! Johnny! (starts cabbage patching)

      Johnny: Computer, create a program to figure the fif
    • there's nothing wrong with that if you use XML like I do,
      So that packet of data can be transformed and fed into something else.
  • by Pan T. Hose ( 707794 ) on Wednesday May 26, 2004 @07:11PM (#9263479) Homepage Journal

    An interesting article written by a professor at the University of Toronto argues that next-generation programming systems will combine compilers, linkers, debuggers, and other tools into plugin frameworks, rather than monolithic applications.

    This is not the next generation of programming systems but rather the present one [gnu.org] for pretty much everyone except for those using Microsoft tools.

    Programmers will be able to extend the syntax of programming languages,

    Again, nothing [perl.org] new [parrotcode.org].

    and programs will be stored as XML documents so that programmers can represent and process data and meta-data uniformly.

    There is no way in hell that would ever happen. Ever.

    It's a very insightful and thought-provoking read. Is this going to be the next generation of extensible programming?

    No.

    Now, I will read the entire article, but somehow, I am not holding my breath...

  • Yeah huh... (Score:2, Insightful)

    by grasshoppa ( 657393 )
    ...and by the 21st century, weren't we all supposed to have flying cars?

    Personally, I'll take the flying cars...;)
  • by fuzzy12345 ( 745891 ) on Wednesday May 26, 2004 @07:12PM (#9263489)
    So this guy's familiar with UNIX, he's familiar with Lisp, yet he thinks the future is XML and hideous frameworks with ever-changing APIs? Not often you see someone with a hammer AND a screwdriver using the hammer to pound screws.
  • yup (Score:4, Informative)

    by Janek Kozicki ( 722688 ) on Wednesday May 26, 2004 @07:15PM (#9263514) Journal
    that next-generation programming systems will combine compilers, linkers, debuggers, and other tools into plugin frameworks, rather than monolithic applications.

    yup, it already happened, more than 10 years ago. It's called the Rule of Modularity and the Rule of Composition [catb.org]. In case you don't know, these are the basics of the Unix philosophy.
  • by Space_Soldier ( 628825 ) <not4_u@hotmail.com> on Wednesday May 26, 2004 @07:15PM (#9263515)
    We've seen what can happen to languages when countries get conquered. English is one of the best examples. Try to read some Old English to see for yourself. With XPL (Extensible Programming Language), you cannot say anymore that you know C++, or C#. Someone will ask you to maintain some code, and you'll take a look at it and have no idea what is going on, until you learn the extensions. This will happen over and over again with every project you are supposed to maintain. This is BRAIN FRYING, with huge possibilities for mistakes. It is just like waking up every day and being asked to speak another human language. Today English, tomorrow French, the day after tomorrow Bengali, can you do it?
    • Bad? (Score:5, Insightful)

      by Yobgod Ababua ( 68687 ) on Wednesday May 26, 2004 @08:39PM (#9264057)
      That's not how I read the article's proposal at all!

      The code you've been asked to maintain is stored in some standard machine-readable format. When you come in, you then use the code-editor program to view it using -your- extensions, and the underlying primitives of the code objects are presented in the manner you're used to.

      Whatever extensions and transformations the original author used to create the code would be relatively meaningless, which (for many of the reasons you describe) is a good thing.
    • Someone will ask you to maintain some code, and you'll take a look at it and have no idea what is going on, until you learn the extensions. This will happen over and over again with every project you are supposed to maintain.

      Eh, that's no different from the usual: starting a new job and being asked to maintain and extend a regular 100 000 line program: you'll take a look at it and have no idea what is going on, until you learn the objects and functions.

      In both cases, standard libraries and readable, main
  • People see this as bloat, but it's always been an issue. As computers get faster, we can remove ourselves from the nitty gritty of computer programming and back to computer science. Sixty years ago computer scientists probably would have thought the same thing about the most efficient methods we use to make applications today. "IDE? What the hell is that for? Use punch cards!" My point is that as cpu and memory increase in speed and size, we take advantage of that in ways other than making the final resul
  • Let's see here.. (Score:5, Interesting)

    by k98sven ( 324383 ) on Wednesday May 26, 2004 @07:16PM (#9263521) Journal
    If I recall correctly,
    Fourth-generation languages [wikipedia.org] were going to be the future of programming back in the early 80's?
    (Machine code, Fortran/Basic-type languages and Pascal/C-type languages being the supposed first, second and third generations, IIRC)

    Then in the early 90's.. OOP was going to save the world. Not that it hasn't had impact, but it certainly hasn't fundamentally changed things.

    And now it's XML that's going to save the programmers, while the old-timers whine that we should all really be using Lisp [lisp.org].

    Not that I'm a computer-language conservative myself, but it's worth pointing out that historically, there has been quite a big discrepancy between which languages the Comp-Sci researchers feel everyone should be using, and the ones which actually are used.
    • by ezzzD55J ( 697465 )
      Not that I'm a computer-language conservative myself, but it's worth pointing out that historically, there has been quite a big discrepancy between which languages the Comp-Sci researchers feel everyone should be using, and the ones which actually are used.
      True, but that doesn't mean the researchers are wrong..!
    • by Tony ( 765 )
      ...while the old-timers whine that we should all really be using Lisp.

      That could be because Lisp provides most of the features outlined in the article, without the problems?
    • A lot of people today still see OOP as a fad that will pass. On the other hand, there has always been resistance to new programming paradigms. The early machine-code programmers resisted FORTRAN. The FORTRAN programmers resisted the Algol-like languages. People (including myself) are still not sure about the tradeoffs of OOP. If this is the next big thing, it is really no surprise that the majority of people here seem to oppose it.
  • by treerex ( 743007 ) on Wednesday May 26, 2004 @07:18PM (#9263536) Homepage

    that next-generation programming systems will combine compilers, linkers, debuggers,

    ...THINK Pascal (for the Mac) was doing this almost 20 years ago: the editor served as the front end to the compiler --- so the syntax highlighting in the THINK Pascal editor was driven by the lexer (really was the lexer): you knew about syntax errors immediately. The debugger was fully integrated into the environment. It was really sweet, and probably one of the best programming environments ever written.

    and that other tools will be plugin frameworks

    Like Unix pipes and Eclipse?

    Tomorrow arrived yesterday and appears today.

  • eXtensible Programming sounds too much like eXtreme Programming.

    Maybe they can call it Extensible Fox?
  • Is it just me, or did this "visionary genius" just describe a Lisp Machine from 30 years ago? Man, if this is "progress", we're screwed.
  • The arguments being posted here are exactly the same ones that were once made against XML replacing EDI.

    An EDI message looks like garbage:

    ILD=1+0+0+1222+3+0+0+S+17500'STL=1+1+S+ASSOR. NP11+?'EXT?''

    and people said "XML will never replace it because no-one's meant to read this stuff and the resulting files will be huge."
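    For illustration, here is roughly the same line-item data expressed as XML, built with Python's standard library (every element name below is invented for this sketch, not any real EDI-to-XML mapping):

```python
# Build a hypothetical XML rendering of one EDI invoice line.
# Element names are made up for illustration only.
import xml.etree.ElementTree as ET

line = ET.Element("invoice-line")
ET.SubElement(line, "quantity").text = "3"
ET.SubElement(line, "unit-price").text = "17500"
ET.SubElement(line, "description").text = "ASSOR. NP11"

text = ET.tostring(line, encoding="unicode")
print(text)
```

    Far more verbose than the EDI segment, but self-describing, and readable with any stock XML parser rather than a per-format decoder.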

    XML is replacing EDI already. The EDI networks didn't see it coming, mostly because they tried to use XML as an excuse to hike their kilocharacter transfer charges. Doh.

    • What benefits does XML have over EDI when transmitting business data? I know of none. The only reason this substitution is happening is that XML happens to be the big buzzword among PHB's.

      For something like retrieving stock quotes over the internet, I certainly can see its use. But when you're talking about company A's computer talking to company B's computer, maybe EDI is a better way to go, especially if company A or company B is still using dial-up lines (there are a lot of small companies out there).

  • by X ( 1235 ) <x@xman.org> on Wednesday May 26, 2004 @07:24PM (#9263575) Homepage Journal
    It's kind of funny when you think about it. Smalltalk and CLOS (and for that matter their predecessors) seem quite close to what he's describing (admittedly without the XML side of things). I guess it just follows the "those who ignore history are doomed to repeat it" mantra.

    I'm not sure why he thinks it is important that the meta object protocol stuff be done in XML. I mean, why not just in the language itself? This has been shown to work with both of the above.

    The problem he's not seeing, of course, is that this approach essentially results in each project having its own "language", which must be understood before one can participate in it.
  • by EatenByAGrue ( 210447 ) on Wednesday May 26, 2004 @07:25PM (#9263583)
    but much of this 'vision' is implemented in Microsoft's .Net Framework and Visual Studio!
  • Hmm (Score:5, Funny)

    by AdrianFletcher666 ( 783207 ) <AdrianFletcher666@btinternet.com> on Wednesday May 26, 2004 @07:26PM (#9263592)
    So basically, we get to combine the speed of Java/.NET with the user friendliness of XML and the security of COM? May god have mercy on our souls...
  • XML is not the solution to everything. Next, English will be replaced with an XML-based language.
    SeeSpotrun
  • by Squidbait ( 716932 ) on Wednesday May 26, 2004 @07:39PM (#9263683)
    ...I'd say roughly 10% of you have actually read the article. He mentions specifically many of the criticisms you've mentioned. I don't think this is earth shattering, but some of the ideas are pretty good.

    I for one like the idea of source code stored as XML, but not displayed or edited as XML. Imagine, viewing source code in the format you specify (eg positioning of braces). And it would be really nice to be able to treat source code as data without breaking your back writing a parser. And for those of you worried about bloat - honestly, we're talking about text files here!
    • Agreed.

      The original development of XML came from the desire to separate HTML content and formatting into two distinct parts. Soon you will use XHTML and CSS to have machine parsable, web-spiderable content and any formatting you like.

      The same goal of separating content and formatting applies to programs. Use XML internally. Your choice of editor can use braces, brackets, tab-indenting, BEGIN/END, or whatever you like best to format the code.

      The internal XML format will be easily manipulated by the progra
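    The "treat source code as data" idea from this thread can be tried today without any XML at all; Python's standard ast module already hands you the parse tree, as in this sketch:

```python
# Sketch: treat source code as a data structure, no hand-written parser.
import ast

src = "def add(a, b):\n    return a + b\n"
tree = ast.parse(src)

# Collect every function name by walking the tree.
names = [node.name for node in ast.walk(tree)
         if isinstance(node, ast.FunctionDef)]
print(names)  # ['add']

# Render the tree back to text in a canonical layout -- the
# "store structure, display per preference" idea in miniature.
# (ast.unparse requires Python 3.9+.)
print(ast.unparse(tree))
```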
  • by rhysweatherley ( 193588 ) on Wednesday May 26, 2004 @07:41PM (#9263693)
    Being able to extend one's compiler with different plugins sounds good in theory. Until you need to send your code to someone in Florida who doesn't have exactly the same setup as you do.

    It's bad enough tracking down the umpteen libraries that an open source program depends upon now. Now we have to track "Bob's special compiler" as well?

    Besides, we already have compiler "plugins" for extending the syntax. They have names like bison and flex. Anyone can layer new functionality on a language through meta-translation, if there is a reason to do so. But you better have a reason!

  • Programmers will be able to extend the syntax of programming languages

    Seems to me that Lisp has had this ability for 40, almost 50 years now.

    Everything old is new again
  • by EmbeddedJanitor ( 597831 ) on Wednesday May 26, 2004 @08:09PM (#9263862)
    why simple application software needs 2G of RAM and multi-GHz CPUs to get the responsiveness I got on a 100MHz 486 with Win3.11.
  • by jfdawes ( 254678 ) on Wednesday May 26, 2004 @08:14PM (#9263897)
    What maybe a few people have missed is that there will be some incredibly interesting "hardware" out there in the future.

    Some people have already demonstrated things like using DNA computers to solve travelling salesman problems [nature.com], Quantum Computing [qubit.org] and Grid Computers [thocp.net].

    Perhaps what this article is suggesting is one way for developers of entirely new "hardware" to easily supply operators and types (syntax) to any programming language.

    It would be interesting to be able to write a program that talked directly to the nervous system using fairly standard <your language of choice> syntax, that when compiled produced a real piece of nano "machinery".
  • by hak1du ( 761835 ) on Wednesday May 26, 2004 @08:20PM (#9263940) Journal
    We have had these kinds of integrated, extensible systems: Smalltalk-80, Lisp, and others. And we have had the same tired, old arguments against UNIX since its original design (you can read up on them in the UNIX Hater's Handbook [microsoft.com]). Smalltalk-80 and Lisp didn't fail because there was some grand conspiracy against them, they failed because people voted with their feet.

    Most real-world programmers apparently just want to put up a bunch of dialog boxes and windows, interact with the user a little, and interact with a database. They don't want to extend the programming tools or language or modify the optimizer, they want it to just do what they need it to do. And if it doesn't do what they need it to do, they just pick a different language and environment and don't go on a crusade to develop zillions of plug-ins and modifications. Programmers stick with text files not because they believe that they are the best representation, but because they actually work pretty much everywhere.

    Some of the changes Wilson advocates are happening. That's not surprising, given that the features he advocates have been around for decades and many people are familiar with them. But they are happening in an incremental way and people pick and choose carefully which aspects of Lisp and Smalltalk-80 they like and which ones they don't. For example, you can get versions of GNU C that output interface definitions in XML format. IBM VisualAge maintains Java sources inside databases (not text files) and permits incremental recompilation. Many Java development environments have plug-in architectures. Many editors now permit structure-based editing operations ("refactoring") and display "styled" source code, using the raw ASCII text just as a formal (non-XML) representation of the program structure. Aspect-oriented programming adds a great deal of extensibility to languages like C++ and Java. On the other hand, general-purpose macros are out--language designers made deliberate decisions not to include them in Java, C#, and similar languages.

    Altogether, it looks to me like Wilson is merely restating what is already happening and combining that with a good dose of UNIX hatred. If he would like the industry to move in a different direction, there is a simple way of doing that: he should implement what he thinks needs to be done. I think an XML-based programming language (and several have been proposed) has about as much chance at flying as a lead balloon, but, hey, surprise us.
  • by blair1q ( 305137 ) on Wednesday May 26, 2004 @08:28PM (#9263992) Journal
    How about developing Maintainable programming?
  • I doubt it. (Score:4, Interesting)

    by srussell ( 39342 ) on Wednesday May 26, 2004 @08:55PM (#9264133) Homepage Journal
    That would be a departure from what I see happening in software development today. There seem to be three dominant camps:
    • Low level developers. People programming in C; the ones writing Linux and KDE.
    • Quasi-low level developers. People programming in Java; the ones writing much of the business software right now.
    • High level developers. People programming in scripting languages, like Ruby, Python, PHP, JSP, Javascript.
    The second group is the most visible, because business loves them. The first group is the second most visible, because -- while it isn't as "hot" a technology on Monster -- most of the software we use is written at this level. I suspect that the third group is the one that will goose the business community in the future, and will probably eclipse the second group. I'd guess that this is a submarine technology; you don't see many job postings for Ruby programmers, but a heck of a lot of software is being written in it. Even more is being written in PHP, JSP, and Python.

    I imagine something like Python or Ruby, or some other high-level language that's easy to write software with, coupled with a decent compiler will be the real winner in the near future. Get some type inference for one of these languages, and the ability to compile it (as with Parrot), and group two will mostly go away. Java claims to be a more productive language than C because of higher level features; modern scripting languages are even better at increasing productivity, and their only real limitation is their speed, or lack of it. Just as Java eventually overcame the speed issue, so, too, do I expect some future version of a scripting language.

    But, maybe Java will hang in there. If you look at Java 1.5, you see a lot of increased syntactic sugar that has usually been only available in languages like Ruby -- I've heard that this was motivated by similar constructions in C#. Perhaps Java or C# will evolve enough syntactic sugar that hacking out code will be as easy as doing so in Ruby. IMO, it'll take a more radical language change than that provided by 1.5; my biggest complaint about Java these days is that it gets in your way; a large chunk of the code you write for any application is infrastructure, and you write it over, and over, and over (anybody else sick of ActionListeners yet?). I'd like to see the typing system changed to type inference... but it is possible.

    I doubt, however, that software development is going to evolve into choosing black boxes from a set of tools and plugging them into each other, mostly because to cover all possible jobs, the framework would have to have access to a huge amount of fine-grained tools, and by that point, you might as well just write the code yourself. Look at the size of the Java APIs. How many packages are there? How many classes? How many methods? This is making our lives, as programmers, easier... how?

  • by jdkane ( 588293 ) on Wednesday May 26, 2004 @08:55PM (#9264134)
    I think Microsoft is already addressing the professor's points in the .NET platform ... or at least starting to head in that general direction already:

    * compilers, linkers, debuggers, and other tools will be plugin frameworks, rather than monolithic applications;

    For example, see the .Net Microsoft.CSharp Namespace [microsoft.com], the System.Codedom namespace [microsoft.com] to represent code as objects, etc. in the framework class library [microsoft.com].

    * programmers will be able to extend the syntax of programming languages; and

    don't know about extension of languages yet, but the next one is interesting ....

    * programs will be stored as XML documents, so that programmers can represent and process data and meta-data uniformly.

    take a look at Microsoft's XAML [ondotnet.com] technology -- describing code by using XML. That's the general direction.

    I'm sure other technology frameworks have similar things, but I'm not as familiar with those technologies.

  • by MaineCoon ( 12585 ) on Wednesday May 26, 2004 @09:04PM (#9264176) Homepage
    Metrowerks Codewarrior is an IDE (and I believe has a commandline tool for processing the project file ala Make) that uses plugin based preprocessors, compilers, prelinkers, linkers, postlinkers, and other tools, which the master project controls execution of (and through a nice GUI, allows easy association of file extensions with their tools and build information). It's been doing this since at least '97.
  • Inventing Lisp again (Score:3, Informative)

    by richieb ( 3277 ) <richieb@gmai l . com> on Wednesday May 26, 2004 @09:19PM (#9264245) Homepage Journal
    Paul Graham said that all languages hope to become Lisp [paulgraham.com]. This sounds like just another attempt.

    Why not just use Lisp?

  • by voodoo1man ( 594237 ) on Wednesday May 26, 2004 @09:19PM (#9264249)
    Terry Winograd wrote a paper, called Breaking the complexity barrier (again) [acm.org], in 1973 (it is reprinted as the first paper in Barstow, Shrobe, and Sandewall's excellent compilation, Interactive Programming Environments, 1984). In it he described the integrated programming environment of the future, speculated on the role AI would play in it, described the importance of extendable syntax and the need for data-code representation, and noted that all this would need to be deeply and intimately interconnected, all the while taking a technology agnostic view.

    Where Wilson goes wrong is in assuming that this kind of environment will be built based on plug-ins. The interrelationships needed between the components to get the required level of functionality are too great. What many people have already noted is that the current Unix environment is in fact based on plug-in development. Editors, debuggers and compilers are modularized as programs, with clean lines of communication between them in the forms of files and streams (which Unix again abstracts to one concept). The limitation of this system lies in the fact that the modules all use their own separate address spaces, and hence each one has to have a private representation of the program. This can't be mitigated by having the separate tools communicate to a central database (this is the most that Wilson's proposal of using XML as the underlying format can accomplish), because then the method of communication would be the limiting factor. Of course, you can use the neutral code-data representation to make the communications between the modules and the database be in terms of sending closures (from reading the paper, I don't think Wilson even considers this), but then you've just designed a single distributed address space, and in the process removed all the encapsulation and modularity advantages of the communication links (not to mention introducing a whole slew of concurrency issues)!

    One such integrated system has been built in the past, called Interlisp. Barstow, Shrobe, and Sandewall's book (mentioned above) has a few papers that describe the system, but briefly a few lessons can be distilled from it. First of all, the system itself was an integrated development environment for a dialect of Lisp, where everything was done in one in-core address space: source code (including comments) was represented by data structures in memory, upon which the structure editor (residing in the same address space) operated directly. Code could either be interpreted from the data structure or compiled by the (yes, in-core) compiler. There were several extended packages (besides a Lisp macro-like facility), notably the structure editor and "Conversational LISP," a pseudo-natural language command-prompt parsing system. Although source code (and data) could be serialized to files (there was a sophisticated change-tracking facility that took care of this), the usual way of working was by saving the core image to disk and loading it next session, so the whole environment was persistent. There were hooks for everything from the parser to the compiler to error handling down to the most basic frame-handling code of the stack-based VM, and in order to implement the facilities mentioned above (and some other ones I left out, like the ever-present DWIM automatic error-correction facility) the code took full advantage of them. This caused some trouble when it came to portability of the components and the Interlisp itself (the heavy interdependence caused many problems in bootstrapping the system). Some of these incidents are documented in Barstow et al.'s book, but the Interlisp bootstrapping difficulty has been mentioned in all of the Interlisp porting papers I've read. 
Unfortunately, I don't think a system with those capabilities can be built with the restrictions of modularization, since all of the things it did are applicable to programming in any language, and to do them required precisely the

  • by Animats ( 122034 ) on Thursday May 27, 2004 @01:44AM (#9264373) Homepage
    Most of what he's talking about was in Interlisp, the first Really Big Integrated Programming Environment. Integrated debugging. EMACS. Program storage as "workspaces". Extensibility. Intelligent assistants ("DWIM", or "Do What I Mean", a set of heuristics for correcting Warren Teitelman's most common typing errors.)

    The ultimate expression of this was realized with the Symbolics LISP machine. Everything was in LISP. Everything was hackable. The MIT Space Cadet keyboard, with six shift keys (Shift, Ctrl, Meta, Super, Hyper, and Top). All 2^16 keycodes could be bound to any EMACS function.

    I've used both. They sucked. Partly because they didn't work very well, but mostly because all that flexibility and programmability had negative value. Language and UI design are hard. Evading the problem by making everything changeable does not fix the problem.

    His point about XML being another way to put LISP S-expressions into textual form is well taken, though. They're both trees. The problem with LISP is that while the data structures are valuable, the programming notation really is a pain.

    LISP works well as a web development environment. Viamall, which later became Yahoo Store, was written in LISP. That was one of the first web applications that really did something elaborate on the server. You could create web pages on the server from a web browser. And the overhead was lower than with XML, where you're forever re-parsing text strings into trees.

  • XML metadata. (Score:3, Interesting)

    by Edward Kmett ( 123105 ) on Thursday May 27, 2004 @02:40AM (#9264665) Homepage
    I've been working on a pet project very similar to this for a couple of years now off and on.

    Currently, I'm constructing the editor as a javascript/xul/xbl based application under mozilla (not yet publicly released) and tossing the documents over jabber to a code repository which connects as another client. Other pieces in the suite, such as the compiler, talk over jabber to the repository, helping to ensure modularity.

    Why mozilla? It gives me a cross platform editing environment and I can take advantage of the built in xhtml/mathml rendering. (Although, I admit I'm largely hamstrung by the faulty mathml rendering on Mac OS X at the moment)

    Why jabber? It serves as a glorified RPC mechanism for exchanging XML document fragments for me. Its primary advantage compared to SOAP, XML RPC, etc, is that I can allow the repository or execution environment to send out updates to the clients, rather than rely on client based polling. After all, in this day and age of everything lying behind NAT, you usually can't open sockets to clients directly. It also has the advantage that it makes evolving the platform into a collaboration environment a simple logical progression, rather than something grafted on as an afterthought.

    My main interest is in what advantages you derive from allowing a rich text markup language and extensible grammar, and the ability to tag information and retain markup across versions.

    A smarter editor allows you to move towards allowing dynamically defined operators, which can have their precedence defined in terms of a partial ordering with respect to one or more existing operator, that way you can red flag during the editing process when something is ambiguous. Superscripts, subscripts, radicals, Riemann sums are allowed by defining small extensions to the grammar in the language and loading them into the editor.
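    A crude sketch of that partial-order precedence idea (the operator set and relations below are invented for illustration): declare which operator binds tighter than which, and flag any pair whose order cannot be derived:

```python
# Precedence as a partial order (operators and relations are
# hypothetical). An editor could red-flag any expression mixing
# operators whose relative precedence is not derivable.
tighter = {("*", "+"), ("^", "*")}  # (a, b): a binds tighter than b

def derives(a, b, rel):
    """Transitive-closure query: does a bind tighter than b?"""
    if (a, b) in rel:
        return True
    ops = {x for pair in rel for x in pair}
    return any((a, c) in rel and derives(c, b, rel) for c in ops)

print(derives("^", "+", tighter))   # True, by transitivity
print(derives("^", "??", tighter))  # False: ambiguous, red-flag it
```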

    The potential for language tagging comments or method labels for internationalization is nifty, but more than a bit of a Pandora's box.

    An XML namespace for version control means the repository can store one document much like a cvs system. By having the editor submit a series of change requests to the repository rather than edit the document directly, integrity is ensured.

    Since you have a fairly stable set of tags you can now embed more information for statistical collection, loop counting from debugging compiles. Links to hand- or auto-generated proofs of algorithmic correctness, big-O information, etc.

    So, yes, there is a value to storing the data in XML and making the editor smarter.
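    As a sketch of that tagging idea (the namespace URI and attribute names are invented), a tool could attach its metadata to a code element under its own XML namespace without disturbing what other tools read:

```python
import xml.etree.ElementTree as ET

# Hypothetical metadata namespace -- not a real schema.
META = "http://example.org/meta"
ET.register_namespace("meta", META)

fn = ET.Element("function", {"name": "fib"})
# A profiler or prover attaches results as namespaced attributes;
# tools that only care about the code itself can ignore them.
fn.set(f"{{{META}}}big-o", "O(2^n)")
fn.set(f"{{{META}}}loop-count", "1024")

xml_text = ET.tostring(fn, encoding="unicode")
print(xml_text)
```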

    However, one primary problem is that any such project has a rather high bar to clear to become even marginally useful.

    There are also a number of interesting problems regarding how to handle certain types of code refactoring and traditional text editing operations in this sort of environment.
  • by RAMMS+EIN ( 578166 ) on Thursday May 27, 2004 @07:07AM (#9265384) Homepage Journal
    This so much reminds me of the LISP machine. How many times are they going to reinvent it?

    It's such a pity that the code for the LISP machine is guarded by intellectual property protections that even the copyright holder doesn't know its way about.
  • Rebuttal (Score:5, Insightful)

    by Oestergaard ( 3005 ) on Thursday May 27, 2004 @07:43AM (#9265481) Homepage
    compilers, linkers, debuggers, and other tools will be plugin frameworks, rather than monolithic applications

    Uh? My compiler acts as a "plugin" via make, which is called from emacs. If I want another compiler, I tell make, and voila, it's "plugged in". Welcome to the world of 'NIX Mr. Wilson.

    What is worse, every tool's command-line mini-language is different from every other's

    But this is their strength! Different tools solve different problems - and they use different languages to describe what they do, because they are *fundamentally* different (awk is not sed is not grep is not ls). How would you possibly write up a single language to describe what both sed and awk does, without poorly re-creating perl?

    Attempts to stick to simple on-or-off options lead to monsters like gcc, which now has so many flags that programmers are using genetic algorithms to explore them

    Most CS majors will know that modern CPU architectures are complex beasts, and that it is pretty hard to come up with which combination of optimization methods will yield the best performance on some particular revision of some particular CPU on some particular hardware configuration. Nothing mysterious about that. I completely fail to see what that has to do with command line options.

    And instead of squeezing their intentions through the narrow filter of command-line mini-languages, programmers can specify their desires using loops, conditionals, method calls, and all the other features of familiar languages

    Instead of squeezing my intentions thru the narrow filter of command-line mini-languages, I can specify my desires using what a standard shell (like bash) has to offer. Ladies and gentlemen of the jury, this is not making sense!

    The result is that today's Windows developers can write programs in Visual Basic, C++, or Python that use Visual Studio to compile and run a program, Excel to analyze its performance, and Word to check the spelling of the final report.

    Oh come on, please... So if I develop on windows, I can use VB, C++ and Python. How is this relevant? There are more useful languages available on the dreaded "command line systems" ('NIX), but let's just agree that there are plenty of languages available on most OS'es out there - regardless of the windowiness or commandlineness of the system.

    Using VS to compile and run the application? Well, if your command line absolutely sucks, then I can imagine why you would want to launch your app from your editor - a matter of taste too maybe. But relevant? How?

    Somehow I need COM in order to put numbers into Excel? Ever heard of CSV? You know, new-line terminated lines of T-E-X-T which can be processed by these little all-different tools, like, for example, Excel.
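    To make the CSV point concrete: producing a file Excel opens directly is a few lines in any language, e.g. this Python sketch (the column names and numbers are made up):

```python
import csv, io

# Write some made-up benchmark numbers as plain CSV text.
rows = [["run", "time_ms"], [1, 42], [2, 39]]
buf = io.StringIO()
csv.writer(buf).writerows(rows)
data = buf.getvalue()
print(data)
# Saved as e.g. results.csv, this opens in Excel (or any
# spreadsheet) with no COM involved.
```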

    The part about Word and spell-checking of a final report... What? What's your point? If I use COM for developing software, I can spell-check in Word? If I use a command line, I cannot spell check a report that I write about it in Word?

    A similar API allows the popular memory-checking tool Purify to be used in place of Visual Studio's own debugger, and so on.

    Absolutely! Plugins make perfect sense in certain places. Dude - GUD is written in Emacs LISP, it's a plugin for GDB. You could write an elisp file for Purify as well - in fact, Intel actually ships an elisp file for their debugger, even on Windows... Plugins make sense some places, other places they don't. Which, lo and behold, is why they are used certain places and not others.

    One of the great ironies of the early 21st Century is that the programmers who build component systems for others are strangely reluctant to componentize their own tools. Compilers and linkers are still monolithic command-line applications: files go in, files come out

    Why does he not see what he's writing?!? A compiler reads a number of input files and generates an output file - this is a perfect match for a command-line tool
  • by fikx ( 704101 ) on Thursday May 27, 2004 @09:17AM (#9265816) Journal
    This is not the golden, unifying solution or anything, but there's some ideas in there that could be useful.
    I saw his thoughts in the first section of the paper and took the rest as some quick examples of how it might look.
    I can think of plenty of directions this could go. The first thing I got out of it is applying the same level of abstraction we try to implement in programs to the act of programming itself. This is happening in all kinds of areas of computers anyway (like abstracting file systems, GUI's, etc.) why not put programming into the mix?
    It's not about using scripting instead of programming languages, it sounded more like building the same features into our programming tools as we build into the apps we write with 'em.

    why all the negative reactions? If it's about losing your editor to write code, you didn't read the article. If it's about too much abstraction to program, then it seems kinda hypocritical considering all the frameworks we use for other people's tools. Or is it just irritation about having to relearn a bit and keep on coding as before? The complaints about XML are odd too. He chose a machine-parseable, human-readable, widely used format as a possible way to store programs at a low level.

Our OS who art in CPU, UNIX be thy name. Thy programs run, thy syscalls done, In kernel as it is in user!
