
Generic VMs Key To Future of Coding 139

Posted by Soulskill
from the acme-brand dept.
snydeq writes "Fatal Exception's Neil McAllister calls for generic VMs divorced from the syntactic details of specific languages in order to provide developers with some much-needed flexibility in the years ahead: 'Imagine being able to program in the language of your choice and then choose from any of several different underlying engines to execute your code, depending upon the needs of your application.' This 'next major stage in the evolution of programming' is already under way, he writes, citing Jim Hugunin's work with Python on the CLR, Microsoft's forthcoming Dynamic Language Runtime, Jython, Sun's Da Vinci Machine, and the long-delayed Perl/Python Parrot. And with modern JITs capable of outputting machine code almost as efficient as hand-coded C, the idea of running code through a truly generic VM may be yet another key factor that will shape the future of scripting."


  • Correct me if I'm wrong, but isn't Silverlight 2 out, and doesn't it have the DLR with Python, Ruby, etc.?

    • Being a programmer, I feel I can say this.

      I really hate it when programmers get architectural ideas. This is how we ended up with Java, and look how that turned out. Hasn't lived up to its promises, is completely pointless now and outdone by a lot of other languages.

  • by CRCulver (715279) <> on Saturday October 18, 2008 @08:19AM (#25423219) Homepage
    I remember the elation people felt some years ago when Parrot was announced. At last, we could leverage the strengths of Python or Perl--or whatever other interpreted language--while working with a common interpreter. But then the hype started to die down, and the last edition of O'Reilly's book on the subject [] appeared over four years ago. Within the Python community, interest in Parrot seems completely dead. Are the Perl folks going it alone, and when might we see the project reach a successful deployment?
    • by YA_Python_dev (885173) on Saturday October 18, 2008 @08:46AM (#25423309) Journal

      Within the Python community, interest in Parrot seems completely dead.

      Generic VMs are so 2005; the future of the Python runtime is PyPy []. From a single implementation of Python (written in Python), they can compile Python code to C or the JVM, automatically create a customizable JITed VM, etc...

      Check them out: they are doing some seriously cool stuff and they can use a bit of help.

    • Look, matey, I know a dead parrot when I see one, and I'm looking at one right now.

      No no he's not dead, he's, he's restin'! Remarkable bird, the Norwegian Blue, idn'it, ay? Beautiful plumage! ...
    • by chromatic (9471) on Saturday October 18, 2008 @02:32PM (#25425319) Homepage

      Patrick Michaud wrote a bare-bones Python implementation in eight hours. It doesn't support all of Python, but it supports a large amount -- and, to my knowledge, he'd never implemented a Python compiler or interpreter before. That project, Pynie, has languished for a while, as he's spending more time working on Rakudo [] (the Perl 6 implementation on Parrot), but it's a viable port just waiting for someone to work on it. Lua is functionally complete as of 5.1 (I believe), and Tcl, PHP, and Ruby are in progress.

      You can play with the latest versions of all of these languages on Tuesday, 21 October, when we make our next monthly stable release (though partcl [] just moved to a separate repository, so you can check out the current version there on a different schedule).

      • Re: (Score:3, Interesting)

        by jimdread (1089853)

        Thanks mate, you're doing a great job. I downloaded Parrot and gave it a go. Perl 6 is looking good. But Parrot tells me that Larry got one of his Perl 6 programs wrong. If you look at Apocalypse 12 [], Larry has this:

        class Point {
            has $.x;
            has $.y is rw;
            method clear () { $.x = 0; $.y = 0; }
        }

        Note that x is read-only, and y is read-write. I assume that if you don't put rw after an attribute, it's read-only. Otherwise, there's not much point having rw. Later in the example program, Larry wrote th

        • Re: (Score:3, Informative)

          by chromatic (9471)

          Note that x is read-only, and y is read-write. I assume that if you don't put rw after an attribute, it's read-only. Otherwise, there's not much point having rw.

          That's true, but note that the rw attribute only applies to the accessor method. A12 says:

          The traits of the generated method correspond directly to the traits on the variable.

          Further, it says:

          In any event, even without "is rw" the attribute variable is always writable within the class itself (unless you apply the trait is constant to it).

          The idea
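For readers who think in Python, the accessor distinction under discussion has a rough analogue using properties (a sketch of the semantics A12 describes, not of Perl 6 itself): the attribute without a setter is read-only from outside, the "rw" one gets a public setter, and both stay writable inside the class.

```python
class Point:
    """Sketch of the A12 example: x is read-only from outside,
    y is read-write ("is rw"), yet both are writable from inside
    the class, as the quoted passage says."""

    def __init__(self, x=0, y=0):
        self._x = x
        self._y = y

    @property
    def x(self):              # accessor only: no setter generated
        return self._x

    @property
    def y(self):
        return self._y

    @y.setter
    def y(self, value):       # the "is rw" accessor
        self._y = value

    def clear(self):
        self._x = 0           # writable within the class itself
        self._y = 0

p = Point(1, 2)
p.y = 5                       # fine: y has a setter
try:
    p.x = 5                   # rejected: x has no setter
except AttributeError:
    pass
```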

  • by Anonymous Coward on Saturday October 18, 2008 @08:21AM (#25423225)

    One standard, several implementations? Sounds nice in theory, just like the numerous standards that Sun has put out where each vendor delivers its own implementation (JPA, JDBC, J2EE, among others). However, in practice you pick *one* vendor and *one* implementation and run with it. Only a fool would dare switch implementations mid-development, making the choice really just academic, because there are always minor differences that "shouldn't" matter, but do.

    • Re: (Score:3, Insightful)

      You also run into problems where someone creating an implementation of your language's VM may create one that's less complete or robust than another.

      This will also get interesting when an implementation of a language on one vendor's VM significantly outperforms the one on another, or when there are implementation-specific security issues, where a certain framework is secure on one vendor's platform but not so much on another.

      Also, security will be very platform/vendor-specific. Imagine a product where there's a

    • Re: (Score:3, Insightful)

      by jlarocco (851450)

      One standard, several implementations? Sounds nice in theory, just like the numerous standards that Sun has put out where each vendor delivers its own implementation (JPA, JDBC, J2EE, among others). However, in practice you pick *one* vendor and *one* implementation and run with it. Only a fool would dare switch implementations mid-development, making the choice really just academic, because there are always minor differences that "shouldn't" matter, but do.

      That's true, but there are still several re

    • Only a fool would dare switch implementations mid-development

      Unless the requirements change mid-development. For example, an application originally intended to run on a notebook computer (which has an x86 CPU) might get retargeted to run on a handheld device (which more than likely has an ARM CPU), or vice versa. With C++, you switch to a different implementation that supports a different instruction set. Or perhaps you want to develop a product and deploy it on multiple platforms. For example, XNA for Xbox 360 can only run CLR bytecode, and MIDP for mobile phones c

    • by afidel (530433)
      Only a fool would dare switch implementations mid-development,

      Huh? All of my apps that run on platforms like JDBC and J2EE support the majority of implementations, i.e. I can choose to run on Apache/Tomcat, WebSphere, or Oracle's implementation without any code or support issues.
  • by sphealey (2855) on Saturday October 18, 2008 @08:22AM (#25423229)

    Reminds me of architects and developers who create generic database access engines so their product can be "platform independent" and then wonder why its performance is so bad no matter which of the six major databases is used.


  • And... (Score:5, Insightful)

    by Colin Smith (2679) on Saturday October 18, 2008 @08:28AM (#25423253)

    Software development recursively disappears up its own arse.

    We already have different, generic, virtual machines. They are called operating systems. They run on bits of silicon and steel.

    You can't fix the problems you have writing software by running away from them.

    • Re: (Score:3, Funny)

      And I read the headline as "Generic VMS ...", promptly shitting myself in the process :)
    • Re:And... (Score:5, Insightful)

      by TheRaven64 (641858) on Saturday October 18, 2008 @09:11AM (#25423421) Journal

      I totally agree. The summary explained exactly how my code works already. I write C, Smalltalk, Objective-C and C++ (if I really can't avoid it) code. I then use a magical tool called a 'compiler' which turns it into code for a language-agnostic virtual machine called 'the {x86,SPARC,PowerPC,ARM} instruction set', which then runs it. The important part is not the VM, it's the libraries. With my Smalltalk compiler I can add methods to objects written in Objective-C, and subclass classes written in either language from the other. I can write high-level application logic in Smalltalk, mid-level code in Objective-C, and really performance-critical stuff in inline assembly in some C functions called from Objective-C methods. I can access a wealth of libraries written in C, C++, or Objective-C.

      Actually, I do use a virtual machine, since my Smalltalk compiler is built on top of LLVM, but this VM is similar to an idealised form of a real CPU, and fairly language agnostic. Currently, I only use it for optimisation and statically emitting native code, but I could use it for run-time profiling and dynamic optimisations too.

      Oh, and real men write their own compilers.

      • Bollocks (Score:4, Funny)

        by Colin Smith (2679) on Saturday October 18, 2008 @09:44AM (#25423549)

        Oh, and real men write their own compilers.

        Real men code in P''.


      • Re: (Score:2, Interesting)

        by mqsoh (1002513)

        I think they'd like to bring that convenience to a 'higher' level. I make my living writing ActionScript and JavaScript and I felt like a jerk when I read a book recently that described C as a 'high-level' language.

        Most of what I write everyday has problems between browsers on the same operating system. The flexibility you describe would be a joy for me.

    • Re:And... (Score:5, Insightful)

      by Dolda2000 (759023) <fredrik@dolda2000 . c om> on Saturday October 18, 2008 @09:19AM (#25423457) Homepage

      I could not agree more, and not at all to my surprise, TFA was full of inflated fluff and very little substance. It was hard enough to wade through it to find anything substantial at all, but let me highlight some of the things that can be found:

      In fact, many developers would rather be freed from the hassles imposed by traditional systems programming languages. VM-based languages offer such features as automatic garbage collection, runtime bytecode verification, and security sandboxes -- all of which translate into peace of mind.

      Of course, garbage collection has been a feature of Lisp since its inception, and Lisp has been compilable to machine code since... the '60s? Not to mention the garbage collection libraries available for C and other languages. I'd venture to call that point bogus.

      Likewise, runtime bytecode verification isn't necessary with a hardware CPU. It's just made to ensure that a JVM doesn't encounter any illegal instructions or jump to code outside the current protection domain. Hardware CPUs can do illegal instruction checking in parallel with execution without penalties, and virtual memory makes the jump checks pointless as well. Not to mention that it is less restricted, so that one can implement such things as tail-call optimization or continuations without reimplementing the CPU.

      Oh, and of course, operating systems have had security sandboxes called "processes" since... the 60s? Of course, one could well argue that it would be swell to be able to further control a process' privileges to a degree not available on, say, Linux or NT, but that isn't exactly something that requires a VM.

      Dynamic languages, on the other hand, mean efficient coding; their high-level syntax makes it easy to conceptualize applications and build prototypes rapidly.

      Yeah. But as Lisp, Psyco, and countless others have demonstrated, they don't need a VM to run efficiently.

      The great advantage of a generic VM, as opposed to a language-specific one, is flexibility.

      Of course, exactly what a "generic" VM entails does not seem to be entirely clear to the author. Or at least, I can't find anything about it in TFA.

      • Well, look, this is a weird thing. As a language researcher, the idea of having such VMs as targets is very exciting, but it rests on the assumption that they are not total crap, and we all know that, in practice, this isn't going to happen. To take your example of bytecode verification: you say, well, processes and hardware checks deal with illegal instructions already. Ask a language theorist, and they will say, sure, but bytecode verification can check things like:

        • there are no infinite loops
        • your passwor
        • by tepples (727027)

          but bytecode verification can check things like: there are no infinite loops

          Since when did the halting problem get solved, or since when has a practical solution appeared for even the subset of the halting problem that applies to finite computers?

          • Nobody said you had to solve the halting problem. Sometimes a loop obviously terminates, and sometimes a proof can be supplied along with the code and all the runtime has to do is verify it. We were talking about bytecode condition verifiers, remember, not bytecode all-possible-property provers!
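That division of labour can be sketched in Python (a toy illustration, not any real bytecode verifier): the code ships with a "variant", a non-negative measure claimed to strictly decrease on every iteration, and the runtime merely checks the claim rather than solving the halting problem.

```python
def run_checked(state, step, done, variant):
    """Run a loop whose termination certificate is 'variant': a
    function returning a non-negative int that must strictly
    decrease each iteration. The checker verifies the supplied
    proof; it never has to decide termination on its own."""
    v = variant(state)
    while not done(state):
        if v < 0:
            raise ValueError("variant went negative: certificate rejected")
        state = step(state)
        nv = variant(state)
        if nv >= v:
            raise ValueError("variant did not decrease: certificate rejected")
        v = nv
    return state

# A countdown loop, certified by the counter itself.
final = run_checked(10, lambda n: n - 1, lambda n: n == 0, lambda n: n)
```

A loop whose supplied variant fails to decrease is rejected at runtime, which is exactly the verifier's job: trusting nothing but the proof it was handed.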

        • Ask a language theorist, and they will say, sure, but bytecode verification can check things like:

          there are no infinite loops

          You cannot check for infinite loops. That is "uncomputable". You can only check for very special cases, like comparisons that are always true or always false ...


          • You are wrong. You cannot in general decide in finite time whether a loop is infinite or not, but you can definitely (a) prove that some loops terminate (e.g. "for (i = 0; i < n; i++)"), and (b) verify termination proofs supplied with the code, which covers almost all practical computation. (The only obvious thing it fails for is writing its own verifier.)

        • Such 'real world' VMs tend to get (or be able to get) unfiltered access to the native file system, to choose a random example, and they do it in the absence of a general security verifier (a thing that is, admittedly, much more technical and harder to get right than a mere JIT compiler).

          I have the feeling you're mixing up Java applets with Java applications/programs. Of course a program has full access to the file system; it is the job of the operating system, not the VM, to secure the system.

          Applets don't have access t

        • Ask a language theorist, and they will say, sure, but bytecode verification can check things like:

          there are no infinite loops

          Well, no, it can't, at least not comprehensively. It can identify provably finite and provably infinite loops, but there are large classes of loops that can be written in any Turing-complete language that don't fall into either category.

          • Of course it can't, "comprehensively." I must say I utterly don't understand the responses I am getting to my comment; it's as if people imagine that the job of a type checker is to derive all possible types that an expression might have. It isn't; it's merely to validate that the claimed proof of the claimed properties of an expression applied.

            Think about it. How likely is it that I meant what you are pretending I said? Would you design a language that appears to have infinite loops

            • Of course it can't, "comprehensively."

              IOW, it can't show that there are no infinite loops, unless all the actual loops are carefully constructed to be provably finite, in which case you don't need it to do that in the first place.

              • Aargh. Look, we're not talking about the compiler, we're talking about the bytecode verifier. Of course the compiler "carefully constructs" the loops to have the properties they are supposed to have! The bytecode verifier is there to make sure that the compiler isn't lying, and thus it's safe to link the code into the runtime system and execute it. So of course you need to "do that in the first place"—first, because you didn't want an infinite loop unless you wanted an infinite loop (compilers should

    • We already have different, generic, virtual machines. They are called operating systems.

      So if I have some customers who own hardware that runs one operating system, and other customers who run another operating system, how do I deploy a solution to both?

  • LLVM plug (Score:5, Informative)

    by Anonymous Coward on Saturday October 18, 2008 @08:35AM (#25423271)

    The article didn't mention it, but this open source project seems to have similar goals []

    • Re: (Score:3, Informative)

      by naasking (94116)

      I'm still very surprised how few people are aware of LLVM. It's a truly low-level hardware abstraction layer, on which you can implement any language. OCaml, Haskell and Python have bindings for it IIRC.

  • by neokushan (932374) on Saturday October 18, 2008 @08:38AM (#25423283)

    Doesn't this sound quite a bit like something Microsoft, of all people, tried to create? That's right, I'm talking about .NET! Microsoft loved touting how you could develop .NET applications in C#, C++, or even good ol' VB, and it should all work the same and even interoperate.
    But it's .NET, and I'm sure anyone with any experience knows that despite the supposed advantages, it has quite a few disadvantages as well. But at least it made VB somewhat useful again.

    Nonetheless, I wouldn't hold my breath on this one; it sounds like a pipe dream to me, and I'm sure some would argue: what's the point in running your code through a VM if you can just run it natively?

    On a side note: As efficient as hand-coded C? In my experience, 90% of the time someone tries to write "efficient" C, they end up causing more problems than it's worth (premature optimisation and all that). Perhaps it should be reworded to say something like "hand-crafted C from a C master".

    • Re: (Score:3, Insightful)

      by Psychotria (953670)
      Well I have to agree (mostly). What on Earth is "hand-coded" C? And why is it better than... wait... what other kind of C is there?
  • The point? (Score:5, Insightful)

    by orclevegam (940336) on Saturday October 18, 2008 @08:52AM (#25423329) Journal
    Am I the only one who sees this as completely ass-backwards? I mean, part of the lure of scripting languages is that we skip the whole compile phase, and so achieve a certain degree of platform independence. As long as the system being targeted has an implementation of the scripting language's interpreter, you just run the script inside of it, and you can distribute the same script (more or less) for any system with an interpreter. Now they're talking about essentially compiling a scripting language to one of several different byte codes to target one of several different VMs, which then of course need implementations on whatever systems you're targeting. How is this an improvement over the previous way of doing things?

    What exactly are we getting out of this? The language developers don't have to worry about the details of the underlying machine, but as a trade-off they now need to write implementations for whatever VM is out there, which in turn will require them to worry about the details of the underlying machine; so we've just pushed that pain point down one level of abstraction, not eliminated it. The only upside I can see to the entire thing is language interoperability, which is nice and all, but how does that fit in with the multiple-VM approach being touted here? Each language is most likely going to require some minor changes in order to support interoperability at the VM level, and of course there will be quirks and gotchas on each VM as well. Unless all the VM developers get together and agree on the exact changes that will be required to each language, we could end up with a situation in which each language comes in multiple slightly different syntaxes depending on exactly which VM it targets.
    • by DarkOx (621550)

      I have to agree. If performance is not a major concern (and for anything that isn't number crunching these days, it probably isn't), an interpreter is going to be more flexible than compiled byte code, and can probably still be pretty quick even if its runtime nature prevents certain optimizations you might do with a compiler. Why must we keep going after this one-tool-for-every-job approach? There is a place for C, C++, Java, Perl, Python, and Ruby as they exist today.

    • by dkf (304284)

      Each language is most likely going to require some minor changes in order to support interoperability at the VM level, and of course there will be quirks and gotchas on each VM as well.

      I think you underestimate the problem. At the VM level, you're dealing with the deep language semantics only; simple stuff tends to be either syntax or in the language libraries. When you mess with the deep semantics, you have far reaching consequences. For example, consider the differences caused by switching between mutable and immutable values, or between simple variables and vars where accessing them can cause reentrant calls to the VM, or between eager and lazy evaluation of expressions.

      Those who argue

    • you have to bear in mind that scripting languages, in order to be _reasonably_ efficient, have to do intermediate byte code _anyway_.

      python uses a FORTH-like intermediate byte code, for example. the similarity to CLR will be pretty high.
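The FORTH-like, stack-oriented byte code can be inspected with CPython's standard dis module (opcode names vary a little between CPython versions):

```python
import dis

def add(a, b):
    return a + b

# The disassembly shows a stack machine: the operands are pushed,
# then one opcode (BINARY_ADD on older CPythons, BINARY_OP on 3.11+)
# pops both and pushes the result.
dis.dis(add)

# Opcode names for programmatic inspection:
ops = [ins.opname for ins in dis.get_instructions(add)]
```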

      when you come to things like V8, that does on-the-fly _compilation_ which is basically the same thing as intermediate byte code, only a bit more extreme and aggressive.

      so the technology is beginning to move in the direction of "grey area" - thinning the distinctions.

      i like

  • All I know is that every large Java system seems to have parts written in native code called through the JNI.

    The JVM has been around for a long time and still can't do things like device drivers. Performance code, like parts of Java Advanced Imaging, is native. A lot of people turn the native parts off, though, because they use ridiculous amounts of memory.

    I think it's just too hard to make VMs that do everything well.

    • by joshv (13017)

      "All I know is that every large Java system seems to have parts written in native code called through the JNI."

      I use several "large systems" written in Java that use no native code, other than what might be embedded in the JDK, and I imagine most of that is just string manipulation.

      "A lot of people turn the native parts off though because they use ridiculous amounts of memory."

      What the hell are you talking about? How does one "turn the native parts off" in Java? And why do you think these native bits use

      • JAI has native codecs for some formats and also Java codecs. The native ones are used by default but they can be turned off i.e. the Java codecs are used instead.

        They used a lot of memory because they don't cache, I think. The whole uncompressed image gets read into memory which can use hundreds of MB.

      • by gbjbaanb (229885)

        And why do you think these native bits use ridiculous amounts of memory?

        because his Java apps use ridiculous amounts of memory and he has to find something else to blame :-)

  • This all sounds great for a single programmer or small team, but how does this play in today's corporate programming environment? Today you can have teams split up into 3 or 4 time-zones, contractors and perms, outsource coders in India, China, and who knows where else...all working on the same project with their own opinions of what is "best" for the project. Will allowing each to code in their own programming "dialect" really work?
  • by Anonymous Coward

    Intel stock rose sharply as investors realized that ubiquitous VMs will require faster processors because more programs will be written in scripting languages. Shortly after, Intel stock plummeted as investors realized that intermediate VMs decouple the programs from the processor architecture.

  • by itsybitsy (149808) * on Saturday October 18, 2008 @09:10AM (#25423413)

    Ian Piumarta and the VPRI [] are doing some amazing work related to this story.

    COLAs: Combined Object Lambda Architectures - A Complete System in 20,000 Lines of Code. []
    The system is slowly evolving towards version 1.0 which
            * is completely self-describing (from the metal, or even FPGA gates, up) exposing all aspects of its implementation for inspection and incremental modification;
            * treats state and behaviour as orthogonal but mutually-completing descriptions of computation;
            * treats static and dynamic compilation as two extremes of a continuum;
            * treats static and dynamic typing as two extremes of a continuum; and
            * late-binds absolutely everything: programming (parsing through codegen to runtime and ABI), applications (libraries, communications facilities), interaction (graphics frameworks, rendering algorithms), and so on.

    Allen Wirfs-Brock and Dan Ingalls are currently working on bringing notions like Colas to the browser so that we can use any programming language WE choose to for our browser based applications. Check out their interview here. []

  • So I heard you like coding on VMs? So we put a VM in your VM so you can code while you code.

  • by the_skywise (189793) on Saturday October 18, 2008 @09:17AM (#25423445)

    Microsoft promised this with .NET. (Just buy our tools and you build to .NET and run on all Windows platforms, XP SP1, XP SP2 AND Vista! It's sooo much better than that... Java thing.)

    Microsoft promised us this with Windows CE. (Just buy our tools and with a simple compiler switch, voila, you're targeting CE... it couldn't be easier.)

    Microsoft couldn't even do it with DirectX where OpenGL could (Oh hey, that XBox directX.. it works a little differently than Windows DirectX)

    For that matter, the Windows printer driver APIs aren't consistent. (Yeah, we know it's called GetMarginSpaceFromEdge, but driver A measures the edge from half an inch in, and driver B measures from where the print head detects the edge of the page, which is sometimes an inch greater than the page itself...)

    Y'know what the greatest VM is right now? i386! And has been for nigh-on 10 years!

    I LIKE Microsoft products, don't get me wrong... but I'm not going to buy Visual Studio 2011, which has no changes other than a GUI enhancement and the ability to target my development towards the hot new sweetness.DNET APIs, so that 3 years later Microsoft can abandon .DNET for DCOM# because, hey, that's what our research said people wanted, and it'll be supported on Windows 7.1.1 along with Blackbird 2.0

  • Plus ca change.... (Score:5, Interesting)

    by bfwebster (90513) on Saturday October 18, 2008 @09:20AM (#25423459) Homepage

    My first thought on reading this was an old software engineering maxim, usually (and probably correctly) attributed to Don Knuth []:

    There is no complexity problem in programming that cannot be eased by adding a layer of indirection. And there is no performance problem in programming that cannot be eased by removing a layer of indirection.

    Universal VMs are old as the hills (anyone [else] here old enough to have programmed on the UCSD p-System []?). We shift towards VMs to gain independence and portability, and then we shift back to direct, spot or JIT compilation to improve performance. It's an old, old dance, and one that will likely go on for years to come. ..bruce..

    • Used to work for the UK source licensees (TDI in Bristol, UK) back in the early 80s. It had its place then, as there was no such thing as a PC standard. Even so, it wasn't quite as portable as you'd think - floating point was often not IEEE and differed between implementations. Byte ordering (byte sex) mattered (even on Version IV). Performance constraints on those little machines meant that p-code had special fast "short load" instructions. The net effect was that high level programmers abused this with fo
      • by bfwebster (90513)

        I first used the p-System in 1981 on a Northstar microcomputer at the Lunar and Planetary Institute to write an HP graphics terminal emulator for a Houston Instruments large-bed plotter. I was having problems with the p-System (with floating point calculations, no less) and so ended up writing a p-code disassembler so that I could figure out what was going wrong. I came up with some solution, though I don't remember what it was.

        The disassembler came in handy a few years later when I ended up writing SunDog:

  • I find it kind of funny how there's the battle between wresting the most performance out of the hardware versus the ease of use for the programmers and users. Back in the day, every character was significant and code with too much documentation simply ate up too much space. (and this is talking about after we gave up on punch cards and were typing the code into terminal screens.) Every step we take to make computers easier to understand, easier to use makes the backend so much more complicated. A base insta

  • by MarkWatson (189759) on Saturday October 18, 2008 @09:36AM (#25423525) Homepage

    There is a JSR to address this on the JVM but I am not convinced that interop between languages on a single VM will be transparent. I mix Java libraries with JRuby and I often end up writing thin facade classes to make interop better.

  • .... wanting to fully understand it I followed the links where I typically found a new link after the first paragraph, recursively. So after 15 minutes of reading I determined that I hadn't gotten anywhere in understanding much of anything except for one thing:

    How many programs must we run, layer upon layer, in order to run an application?
    Doesn't adding more and more layers of complexity contribute to the failure side of the failure vs. success equation?

    I do really understand the ideals behind .net, such as

    • ... since it's all about abstractions and translation; by doing it up front you have more control and opportunity to advance.

      To deal with translation on the back end is avoidance or hindrance of genuine programming advancement in exchange for licensing fees for another level of abstraction/translation.

  • by jonaskoelker (922170) <.jonaskoelker. .at.> on Saturday October 18, 2008 @10:15AM (#25423695) Homepage

    generic VMs divorced from the syntactic details of specific languages

    The syntax of programming languages is something understood by the front-end of a compiler. It then translates the code into code that does the same thing in the back-end language (such as JVM/PyVM/x86/LLVM bytecode). Neither back-end knows about the syntax of the front-end language.

    The real challenge is to adopt conventions on the back-end VM that allow different languages to talk together. It'd be straightforward to implement an x86 emulator on top of the JVM and run the ${language} VM on that x86. Wow, you now have ${language} running on the JVM. So? You can't talk to the Java library that way.

    If you want languages to talk together, they need to agree on data representation formats and calling conventions. Try getting object.field if you don't know where field is relative to the base address of object. Try calling object.method() if you don't know the format (or location) of object.__vtbl.
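The representation-agreement point can be made concrete with Python's ctypes (a sketch; CPoint is a hypothetical two-field struct): field offsets are part of the cross-language contract, and a consumer that assumed different offsets would silently read the wrong bytes.

```python
import ctypes

# Hypothetical struct standing in for 'object' above. Both sides
# of an interop boundary must agree on exactly this layout.
class CPoint(ctypes.Structure):
    _fields_ = [("x", ctypes.c_int32),
                ("y", ctypes.c_int32)]

p = CPoint(3, 4)

# 'y' lives 4 bytes into the struct; a consumer assuming any other
# offset would read the wrong field's bytes.
assert CPoint.y.offset == 4
assert ctypes.sizeof(CPoint) == 8

# The raw shared representation, as another language would see it:
raw = bytes(p)
```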

    Also, the semantics of some operations have to be considered if a language has to deal with a foreign object model. Let's say we target the Java VM. How do you implement multiple inheritance? What does .super do on a class with multiple parents? How do you implement "Object *p = malloc(...); *p = my_object;"? How do you implement C++'s delete? How do you implement python's generators?
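    For the generator case, this is roughly what a compiler targeting a VM without coroutine support has to emit (a hypothetical sketch): the suspension point becomes an explicit state machine.

```python
# What "yield" costs on a VM with no coroutine support: the compiler
# must reify the suspended state by hand. Source program:
#
#     def count_up(n):
#         i = 0
#         while i < n:
#             yield i
#             i += 1
#
# Hypothetical compiled form: an explicit state-machine object.

class CountUp:
    def __init__(self, n):
        self.n = n
        self.i = 0                     # the suspended loop variable

    def __iter__(self):
        return self

    def __next__(self):
        if self.i >= self.n:           # loop condition, checked on resume
            raise StopIteration
        value = self.i                 # the value at the yield point
        self.i += 1                    # state saved for the next resume
        return value
```

    One local variable is easy; real generators with try/finally, nested loops, and live temporaries make this transformation much hairier, which is exactly the burden the VM either takes on or pushes onto every front-end.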

    To support a set of languages, the VM must support the union of features. To make the languages talk together smoothly, the VM must support each feature in a reasonably straightforward way. The two demands pull the VM in opposite directions.

    I don't want to just poo-poo this idea, but my experience with the Java VM (I wrote a compiler for a substantial subset of Java in my compiler course) is that it's tightly coupled to the Java way of doing things. My experience with different languages (C, C++, Java, python, perl, ruby, haskell, scheme) says that things are different enough that you can transfer most of what you know from one language to another [at least for the OO/procedural ones], but the devil is in the details, and the VM has to handle all values of $details.

    • by dkf (304284)

      I don't want to just poo-poo this idea

      Well, I don't mind doing just that!

      VMs come in two basic varieties: low-level and high-level. Low-level VMs are really software-implemented microprocessors, and targeting them is like writing another back-end for GCC, though with some odd instructions. High-level VMs are much, much easier to generate code for, but tend to be locked to a particular front-end language (or group of semantically similar languages) because the operations of the VM capture a lot of high-level details.

      If someone is peddling a univ

      • If someone is peddling a universal VM, they are either doing a new low-level VM (oh great, we've already got x86, JVM and CLR and they have a lot more existing tool support thankyouverymuch)

        I disagree with the characterization of the JVM as a low-level VM. Of course there are the instructions that map one-to-one onto x86 instructions (arithmetic for ints and floats, jmp{,zero,neg}). But in it we also find the notion of classes and their relationships, plus introspection, i.e. accessing types (classes), data (objects) and code (methods) by name at runtime. I would say those are fairly high-level concepts which are not found on the x86 or the LLVM (= Low-Level Virtual Machine).
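        For example, the by-name access the JVM bakes in looks roughly like this (sketched with Python's own reflection, since the idea is the same); on an x86-style target there is no runtime entity answering to a name at all, just addresses.

```python
# The kind of by-name runtime access a high-level VM provides,
# sketched with Python's reflection. Names are illustrative only.

class Greeter:
    def greet(self, name):
        return "hello, " + name

def call_by_name(obj, method_name, *args):
    """Look up code by its name at runtime, JVM-reflection style."""
    method = getattr(obj, method_name)
    return method(*args)
```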

        Also, there's the wh

  • by tcgroat (666085) on Saturday October 18, 2008 @10:31AM (#25423763)
    Wasn't platform independence the selling point of UCSD's p-system []? Yes, it worked, but it never really caught on. One camp of software development says that hardware is always getting faster, cheaper and more efficient, so adding a layer of abstraction between the source code and the hardware is not a problem. The other camp says we can use those same performance improvements to build software that does more things, on larger data sets, with better graphics, and in general make what once were impractically large and complex software tasks run on the average users' systems. Over the last three decades, the market has favored the latter.
    • Over the last three decades, the market has favoured the latter.

      I'd argue that in the time from when Java started to become popular, (1995-1996) through to the .net VM coming out with a similar philosophy in 2002, we have agreed to take a one-off performance hit of 10-50% (which was offset within the year by faster computers) in return for a VM with garbage collection and some platform independence.

      So, for the last decade, the market has had some preference for the former. Perhaps 1996 was about when there

  • long as it's Python.

    No thanks.

  • by lkcl (517947) <> on Saturday October 18, 2008 @11:25AM (#25424139) Homepage

    no don't laugh, it works very well! there are a number of very good reasons for this.

    1) javascript is actually an incredibly powerful language, in particular due to the concept of "prototype"ing.

    2) javascript, thanks to web browsers, has an unbelievably large amount of attention spent on it, to optimise the stuffing out of it. as a result, the latest incarnation to hit the streets - the V8 engine - actually compiles to i386 or ARM assembler.

    3) the number of "-to-javascript" compilers is really quite staggering. see the comments from pyv8 article [] for an incomplete list.

    GWT has a java-to-javascript compiler; Pyjamas [] has a python-to-javascript compiler. There's a ruby-to-javascript compiler - the list just goes on.

    then there's the pypy compiler collection, which has javascript as a back-end. (and, for completeness, it's worth mentioning that it also has a CLR backend and a java backend).
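    the shape of all these "-to-javascript" compilers is roughly the same; here's a deliberately tiny made-up sketch (python front-end, javascript out) covering just arithmetic:

```python
# The shape of a "-to-javascript" compiler, boiled down to arithmetic:
# parse with the host language's front-end, then walk the tree
# emitting JavaScript source text.

import ast

JS_OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*"}

def to_js(node):
    """Translate a Python arithmetic expression into JavaScript source."""
    if isinstance(node, str):
        node = ast.parse(node, mode="eval").body
    if isinstance(node, ast.Constant):
        return repr(node.value)
    if isinstance(node, ast.Name):
        return node.id                 # variables pass through by name
    left = to_js(node.left)
    right = to_js(node.right)
    return "(%s %s %s)" % (left, JS_OPS[type(node.op)], right)
```

    the real ones (GWT, Pyjamas) are doing the same walk, just over the whole language plus a runtime library to paper over the semantic gaps.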

    • 3) the number of "-to-javascript" compilers is really quite staggering. see the comments from pyv8 article [] for an incomplete list.

      This may be an indication that people actually hate javascript, so much so that they would rather write a compiler than endure the pain of actually writing in it. I personally would... although the thing that really bothers me is browser incompatibilities. And I would like to write a compiler.

    • There's a ruby-to-javascript compiler

      Is there? The closest thing I've heard of is HotRuby, which is a VM implemented in JavaScript that runs (currently, a subset of) the same bytecode as the Ruby 1.9 VM. This is not a "Ruby-to-JavaScript" compiler, or even a compiler at all.

  • by YesIAmAScript (886271) on Saturday October 18, 2008 @01:10PM (#25424821)

    The future is the 70s?

  • by WillAdams (45638) on Saturday October 18, 2008 @01:50PM (#25425073) Homepage

    If memory serves, all of their compilers compiled to a genericized ``pcode'' for which multiple engines existed (one per processor architecture, I believe) --- all that was missing was multiple implementations per architecture.


  • I read that as Genetic VMs and that sounded really cool. It even made interesting sense almost all the way through the OP.

    I was sadly disappointed when I realized my error. Generic VMs? Like everybody else said, boring.
