Parrot Updates 91
BorrisYeltsin writes: "A couple of updates for Parrot are in a recent This Week on Perl 6, most importantly Parrot 0.0.3 is out! Get it here; the release notes are here. Also, Adam Turoff has put together the Parrot FAQ version 0.2, which addresses some of the more common questions about Parrot and Perl 6."
poly wants a cracker (Score:1)
gr!
John Cleese was wrong (Score:2)
Perl articles of the highest quality (Score:3, Informative)
The complete developerWorks "Cultured Perl" series:
Spam (Score:3, Insightful)
I'm not disputing the quality of the articles there, just pointing out that this has gone to several places, and even been posted on a few sites. I didn't post it on the one I admin because it was totally impersonal.
Re:Spam (Score:1)
somebody go fix that faq! (Score:2, Informative)
My favorite quote from the FAQ: (Score:2, Funny)
C
8. For the love of god, man, why?!?!?!?
Because it's the best we've got.
9. That's sad.
So true. Regardless, C's available pretty much everywhere. Perl 5's in C, so we can potentially build any place perl 5 builds.
Re:My favorite quote from the FAQ: (Score:2)
April Fool's Joke (Score:4, Funny)
Re:April Fool's Joke (Score:2)
You can read more here (Score:2, Informative)
Seems like a cool thing, I don't know much about it though.
-Vic
Old News (Score:3, Insightful)
If you want *new* news on perl/parrot, the latest parrot in CVS is now "fully-functional [perl.org]" (interpret that however you want.)
Parrot?! (Score:1, Funny)
0.0.3? 0.0.4 is in the works... (Score:1)
If you're interested in Parrot, get the version from CVS, and get on the mailing list. There's a hell of a lot of interesting and cool stuff that's gone in since 0.0.3, not least of which is JIT on a few platforms (Linux among them). Just check out the mailing list [develooper.com] for details.
Oh, and if you run an unusual system, then get in contact with the Parrot team! They need more exotic systems to get Parrot building on!
performance of big programs? compatibility? (Score:5, Interesting)
On a different topic, what about compatibility? Their FAQ says that, for instance, localtime will no longer return year-1900. Doesn't this break old code? They say there will be an automated Perl 5->Perl 6 converter, but it isn't going to fix stuff like year-1900...or is it?
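For readers unfamiliar with the year-1900 thing: Perl 5's localtime, like C's struct tm, reports the year as (real year - 1900), and the FAQ says Perl 6 will return the real year instead. A quick sketch in Python (just for illustration; the fixed timestamp is an assumed example value):

```python
import time

# Perl 5's localtime, following C's struct tm, reports the year as
# (real year - 1900); the Perl 6 FAQ says it will return the real year.
ts = 1_000_000_000                    # 2001-09-09 01:46:40 UTC
t = time.gmtime(ts)
perl5_style_year = t.tm_year - 1900   # the "year-1900" value: 101
real_year = t.tm_year                 # what Perl 6 would return: 2001
```

So a converter that only translates syntax would silently turn "year 101" code into "year 2001" code, which is exactly the breakage the parent is asking about.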
Re:performance of big programs? compatibility? (Score:4, Interesting)
As for compatibility, your perl 5 code will call routines that return perl 5 compatible values. The perl 6 code will call routines that return perl 6 compatible values. It'll work--it's a simple enough problem to deal with.
Re:performance of big programs? compatibility? (Score:1)
Hmm...but the routines have the same name? Say you have a Perl 5 program that you want to convert to Perl 6. Now it's calling a different localtime routine. Doesn't your code break? How do you detect such breakage and fix it?
Re:performance of big programs? compatibility? (Score:1)
I'll file a note to add a "potential compatibility issue" warning to things so you can get warned if you call any routine that behaves differently in perl 5 and perl 6. (Off by default, of course)
well... (Score:1)
Re:well... (Score:1)
best FAQ read in awhile (Score:1)
Re:best FAQ read in awhile (Score:1)
Comment removed (Score:5, Interesting)
Re:FAQ leaves me concerned. (Score:5, Interesting)
One example I didn't mention but should have was Visual Basic, which is apparently a stack-based interpreter system.
And register based systems have advantages over stack based systems. An awful lot of stack thrash is avoided in a register system--with more temp values handy you don't have the extraneous dups, stack rotates, and name lookups. And just because we have registers doesn't mean we don't have a stack. We do. (Several, actually) But registers let us toss a lot of the useless shuffling.
The 68K emulation on the Mac is proof that it works and is viable, not that it's the way to go. And UAE's troubles are as much architectural issues (the x86 chip is really register starved) as anything else. Parrot's ops are generally not as low-level as machine instructions, so even if the register system was dreadfully inefficient compared to a stack system (and it's not, parrot's numbers are good) not that much time is spent dealing with it anyway. (Though I don't want to trade off even 3% speed hits)
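The dup/shuffle point is easy to see with a toy example. Here's a sketch in Python of computing (a + b) * (a + b) on a made-up stack machine versus a made-up register machine (these opcodes are invented for illustration, not Parrot's actual instruction set):

```python
# Toy stack machine: needs an explicit "dup" and 5 ops for (a+b)*(a+b).
def run_stack(prog, env):
    stack = []
    for op, *args in prog:
        if op == "push":
            stack.append(env[args[0]])
        elif op == "dup":
            stack.append(stack[-1])
        elif op in ("add", "mul"):
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == "add" else a * b)
    return stack.pop()

# Toy register machine: the temporary lives in a register, 2 ops total.
def run_regs(prog, regs):
    for op, d, s1, s2 in prog:
        regs[d] = regs[s1] + regs[s2] if op == "add" else regs[s1] * regs[s2]
    return regs[0]

# (a + b) * (a + b) with a = 3, b = 4
stack_prog = [("push", "a"), ("push", "b"), ("add",), ("dup",), ("mul",)]
reg_prog = [("add", 2, 0, 1), ("mul", 0, 2, 2)]   # r2 = a+b; r0 = r2*r2
```

Both produce 49, but the register version never shuffles the temporary around: it just names it.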
This is dead-on. I'll get some more meat into the FAQ. I might, actually. Adam just pasted in my answers to the technical stuff.
Re: (Score:1)
Re:FAQ leaves me concerned. (Score:2)
Implementation: the registers (how many?) could sit below the base of one stack, at absolute addresses. So, similar to the 6502, a part of memory could be mapped directly. This is a virtual machine, but you still get advantages by knowing at compile time where the data resides. (This is one of the disadvantages of stack based machines: everything has an extra level of indirection.)

OTOH, how many registers should one have? I would propose using the same instruction set for accessing the registers and the stack, but with the stack only allowed to be accessed via indirection from the stack pointer, and the stack pointer not allowed to go "negative" and get at the registers. The second and third stacks will need to be handled separately. If the absolute stack is limited in size (to, say, 256 entries) then another stack can be placed above it and allowed to grow down toward it (say this one is also limited to 256 entries), and the third above that one, growing up again. Say this one has a limit of 512 entries and is composed entirely of pointers to the heap (objects?). These sizes are picked out of the air, and would need LOTS of tuning.
OTOH: consider a machine with three stacks, but no registers. From the example of Forth, a lot of the operations will be devoted to accessing values two or three cells down in the stack. But this has the benefit that one doesn't need to track register allocations. This can be made to work, but one really needs to track stack frames, so one needs to be able to operate in an essentially unpredictable way on items within the current frame. So here one is basically using a set of register operations on the stack (i.e. within the current frame of the stack).
The reason for hardware registers is that in hardware, fast memory is an expensive and scarce resource. So hardware designers tend to allocate it sparingly. But in a software implementation, the registers live in the same memory space as the rest of the program, so they lose many of their advantages. They can, however, save a level of indirection, and that can speed things up if it's on something that is being done frequently. So one alternative is to define the largest stack frame that one wishes to allow for, provide a way for a stack frame to be copied to a fixed location, and then have analogous operations based either on the fixed location or on a stack frame pointer in the stack. So the stack is a set of modifiable stack frames (though only the top stack frame at any time is resizeable).
As far as I know, nobody has defined what the "perfect virtual machine" would look like. There have been several attempts, but trade-offs are inherent in the process, and what is best for one circumstance will not necessarily be best for another. And size, as well as speed, is a real constraint.
However, if one isn't modeling an existing hardware implementation, then the virtual machine should not end up looking like any of the hardware. This is because the trade offs are different for designing hardware and for designing software. (Fast RAM is only one example.)
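The "modifiable stack frames" idea above can be sketched in a few lines. Here is a rough Python illustration of register windows carved out of one flat array, so a "register" access is just base + a compile-time offset (the sizes are made-up numbers, as in the parent comment):

```python
# Each call gets a fixed-size window of registers from one flat array.
# FRAME_SIZE and depth are illustrative, not tuned values.
FRAME_SIZE = 32

class RegisterFrames:
    def __init__(self, depth=256):
        self.mem = [0] * (FRAME_SIZE * depth)
        self.base = 0                     # start of the current frame

    def push_frame(self):                 # on subroutine entry
        self.base += FRAME_SIZE

    def pop_frame(self):                  # on subroutine exit
        self.base -= FRAME_SIZE

    def get(self, r):                     # read register r of this frame
        return self.mem[self.base + r]

    def set(self, r, v):                  # write register r of this frame
        self.mem[self.base + r] = v

rf = RegisterFrames()
rf.set(0, 7)        # caller's r0
rf.push_frame()
rf.set(0, 99)       # callee's r0 -- a different slot entirely
rf.pop_frame()
# rf.get(0) is still 7: the callee never touched the caller's window
```

Frame entry and exit are a single pointer bump, and no register save/restore traffic is needed between caller and callee.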
Re:FAQ leaves me concerned. (Score:3, Insightful)
There's also an awful lot of literature on writing optimizers which is geared towards register machines, as that's what everyone's CPU is these days. I've not found much readable literature on optimizing a pure stack-based system.
Congrats, you just described a register-frame system. Which is what we already have.
Re:FAQ leaves me concerned. (Score:1)
I am a very big fan of stack machines and I think that an intelligent implementation can be built that would blow the doors off any finite register machine: think of the cache hits you can have operating on such local data instead of the random access model. Two stacks can also be used, one for short lived data and one for long lived data. There has also been a ton of research on variable lifetime and scoping analysis to know where to keep data.
Why does everybody always want to make dumb interpreters and not something more intelligent?
Re:FAQ leaves me concerned. (Score:1)
Re:FAQ leaves me concerned. (Score:1)
sorry for the confusion.
Re:FAQ leaves me concerned. (Score:1)
That doesn't mean we haven't taken lazy evaluation into account, but there are limits to what we can do. All the languages that are likely to run on the parrot engine have the potential for a lot of side-effects, and runtime dependency checking for that stuff usually ends up costing more than you gain.
There are some areas where this sort of partial and lazy evaluation get you big wins, but they're pretty specialized areas. APL and Fortran will probably always be faster in their areas of particular specialty.
Re:FAQ leaves me concerned. (Score:2)
So the 68k emulator is not the only VM to be register based.
Now, I've never seen an interesting paper which compares both approaches; I don't think that one approach is necessarily better than the other.
PS (off-topic):
The story in the acme journal linked in the FAQ is really impressive.
http://use.perl.org/~acme/journal
additional corrections (Score:2)
Re:additional corrections (Score:1)
Re:additional corrections (Score:2)
A different way of looking at what the JVM and the CLR do is that they are a binary postfix representation of the Java parse tree. It doesn't make register allocation any harder or easier, it simply draws the line between source->byte-code->execution at a slightly different point. And it makes sense to draw the line where the JVM (and the CLR and the Smalltalk VM and lots of other virtual machines) does because register allocation is target machine specific.
Arguing theoretically against the architecture of some of the most efficient and successful virtual machine based systems is futile. The proof is in the pudding. Current Perl interpreter performance is noticeably worse than many other interpreters (even if some Perl primitives like regular expressions are quite fast). Let's hope that will improve in Perl 6 implementations. If you can do that with a register based VM, great.
More on stack vs. register based VM (Score:1)
Re:I hear Perl 6 will use Mono's JIT engine (Score:2, Informative)
Re:I hear Perl 6 will use Mono's JIT engine (Score:1)
OT: question about hyper operators. (Score:2, Interesting)
My question is about how hyperness is applied to hyperness, and how you make hyperness apply to only one side of the operator. Here is how you do things like this in K; are these possible in the new Perl? If not, these would be monumental omissions.
' is read each
\: is read each left
/: is read each right
Given the lists x:1 2 3 and y:10 20 30 and value z:100
x+'y is 11 22 33
x+\:z is 101 102 103
z+/:y is 110 120 130
but these can all be done implicitly, so you really do not need the decorations: x+y, x+z, and z+y are fine. This allows you walk down only a single list, while aggregating results.
x+\:y is (11 21 31; 12 22 32; 13 23 33)
x+/:y is (11 12 13; 21 22 23; 31 32 33)
Then you can also walk down the left then right sides of a list by \:/: (left each right each).
You can walk down a list unarily:
There are other adverbs, and they can be combined to modify each other arbitrarily. This winds up being an incredibly powerful way to write programs. It relieves the programmer of the burden of flow control and compacts code enormously. Think of removing all the loops from your code and replacing them with a couple of characters instead.
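For anyone who doesn't read K, the adverbs above can be sketched in Python (the function names here are invented; K spells them ', \: and /:):

```python
# Python sketch of K's each, each-left, and each-right adverbs,
# matching the parent comment's x:1 2 3, y:10 20 30, z:100 examples.
from operator import add

def each(f, xs, ys):                      # x f' y : pairwise
    return [f(a, b) for a, b in zip(xs, ys)]

def each_left(f, xs, y):                  # x f\: y : each x vs. all of y
    return [[f(a, b) for b in y] if isinstance(y, list) else f(a, y)
            for a in xs]

def each_right(f, x, ys):                 # x f/: y : all of x vs. each y
    return [[f(a, b) for a in x] if isinstance(x, list) else f(x, b)
            for b in ys]

x, y, z = [1, 2, 3], [10, 20, 30], 100
each(add, x, y)          # [11, 22, 33]
each_left(add, x, z)     # [101, 102, 103]
each_right(add, z, y)    # [110, 120, 130]
each_left(add, x, y)     # [[11, 21, 31], [12, 22, 32], [13, 23, 33]]
each_right(add, x, y)    # [[11, 12, 13], [21, 22, 23], [31, 32, 33]]
```

The last two calls are the "outer product" forms: each-left pairs one left element against the whole right list, each-right the reverse.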
Re:OT: question about hyper operators. (Score:2, Insightful)
Your examples become:
There is no @a ^^+ @b for the last two examples, but you might be able to define your own operators and have the hyperoperator work on them.
But Perl6 does not seem to want to go as far as your language K does. However, modifying the syntax of Perl6 on the fly is going to be VERY easy. Something like:
-LL.
Re:OT: question about hyper operators. (Score:1)
The map examples show that Perl didn't quite go as far as I would have liked to see it go in that direction. These bulk operators are one of the features that often make K a far faster language than C.
Is there a reason these hyper meta-operators only scratched the surface of the concept? Not trying to pick on Perl, but it just seems that other people have these examples as great ammunition for the argument that Perl is just a kitchen-sink approach to a language: tossing the surface level of everything imaginable into a language without trying to think of the underlying concepts and unite them. For example, shouldn't map and hyperness be the same thing?
Re:OT: question about hyper operators. (Score:1)
Two possibilities come to mind:
1) My answer was limited by my knowledge of how far Larry Wall is going with this stuff.
2) Perl guys are getting gun shy about creating line-noise like syntax.
What I want to see is explicit iterators. Then you would have the power/freedom to create a syntax enhancement module to create line noise syntax.
What I mean about explicit iterators is very Object Oriented idea like:
With iterators, a lot of the large lists I walk in my day job could be handled much more intelligently and efficiently. We use POE and don't want one state to block others for too long. You could build a syntax-modifying module to emulate K very efficiently.
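Since Slashdot ate my code, here's the shape of the idea sketched in Python (the method names are invented, not any proposed Perl 6 API): an explicit iterator object lets you walk a big list a chunk at a time, so a cooperative scheduler like POE can run between chunks.

```python
# Hypothetical explicit-iterator sketch: process N items, yield control,
# come back later and pick up exactly where you left off.
class ListIter:
    def __init__(self, items):
        self._items = items
        self._pos = 0

    def has_next(self):
        return self._pos < len(self._items)

    def next(self):
        v = self._items[self._pos]
        self._pos += 1
        return v

def process_chunk(it, n):
    """Handle up to n items, then return control to the event loop."""
    out = []
    while it.has_next() and len(out) < n:
        out.append(it.next() * 2)        # stand-in for real work
    return out

it = ListIter(list(range(10)))
first = process_chunk(it, 4)    # [0, 2, 4, 6]; the iterator remembers its spot
```

The iterator carries its own position, so no indexes or copied key lists are needed between time slices.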
Re:OT: question about hyper operators. (Score:1)
This construct is so prevalent that it should be more concise and optimizable. The example is of a mostly effect-free statement. There is a single assignment necessary, back to iter_a. That is probably not semantically needed, and is just used for "efficiency". But the iterator code requires 4 additional assignments: the two original iterator creation assignments, then the implicit state changes in the next function. This destroys the ability to optimize, complicates the code, and increases source bloat, all increasing the probability of bugs.
or, if you like, describes the same process and seems far superior to the 5-line alternative.
Re:OT: question about hyper operators. (Score:1)
Why would you opt for this explicit iterator syntax?
This was my motivation/answer:
With iterators, a lot of the large lists I walk in my day job could be handled much more intelligently and efficiently. We use POE and don't want one state to block others for too long.
I will try to expand on this. POE is a Perl module that gives the programmer a framework to write state machines. It can be used to make your program behave a lot like a multi-threaded application (i.e. doing two things at once). It does this by slicing your work into discrete chunks (re: states). However, POE is a cooperative multitasking system. If your state takes a lot of time while something else (like network traffic) has to be handled, that other thing will be ignored/dropped/whatever.
The Point: It would be helpful to be able to do array/hash manipulations efficiently (not using indexing or copying out a list of keys) AND to use arrays/hashes X at a time, then go do something else, then come back and do X more manipulations.
Further, iterators map well to how we interact with external data sources. Imagine tie()ing a hash to a SQL DB view, and using hyper-operators or iterators on big data.
BTW, this is my point. I work with big datasets. Each machine we have keeps statistics, by the minute, of the OS and processes. Other administrative machines sum up all the data collected for a cluster of machines. We aggregate that data into a machine for clusters of clusters. And finally we have a machine with the aggregates of all our machines: roughly 3,000 machines worldwide. Our processes must be able to both aggregate data from their children and respond to their parent's queries at the same time. Iterators would really help.
Hyper-operators are cool for doing all your work in one fell swoop (even if it takes 10 seconds). Iterators are good for walking a list, stopping for some reason (like the data item you just iterated over required immediate action), then getting back to work on the list. If I had to do a (0..1_000_000) $a[$i] indexing or keys %hash for a hash with 500_000 keys, I could run out of memory or wait 60 seconds just for the copy to finish.
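The memory point is concrete. In Python terms (just as an illustration of what the proposed hash iterator would buy), slicing a dict's item iterator visits X entries at a time without ever materializing the full key list:

```python
# Walk a big hash a few entries at a time, with no full copy of keys.
import itertools

big = {i: i * i for i in range(100_000)}
it = iter(big.items())

chunk = list(itertools.islice(it, 3))   # three entries, then stop
# ... go service the parent's queries, then come back for more ...
more = list(itertools.islice(it, 3))    # picks up where it left off
```

Compare that to copying out all 100,000 keys up front just to resume a loop later.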
Here is a good quote: "Threads are for programmers who don't know how to program state machines" -Alan Cox.
Later.
Re:OT: question about hyper operators. (Score:1)
I do not use nearly as much Perl as I should, so I cannot comment as to specific Perl usages. However, you should never run out of memory with hyper operators, since it is too easy for the language to play swap games itself. This is my opinion, at least.
I too use very large data sets at work tied to databases. I do massive text processing tasks working as a core developer at one of the largest search engines on the web. For this task we use a database called KDB.
<PLUG>
KDB [kx.com] is entirely based around bulk data operations. This database is written in K [kx.com]. It is the fastest database I know of, doing 50,000 transactions a second on 100 million records [kx.com].
</PLUG>
Re:OT: question about hyper operators. (Score:1)
On the other hand, every system for denoting iteration and recursion eventually runs out of steam; it's just a question of how soon. So I don't think it's a problem that K goes further than Perl; remember that K may be intended for great things, but Perl has already achieved great things, and you don't want to kill the baby while trying to improve its training...
Be glad to... (Score:1)
And it seems as if you could do with a more constructive hobby than bashing the Slashdot community? Perhaps, maybe, next time you could try contributing to the discussion. Or if you don't know about the topic, read the level 5 posts and learn. But, don't come in here with ignorance bashing those who know what they're talking about.
Perl Documentation (Score:1, Insightful)
Perl is emphatically not an object-oriented language. Perl's OO features were crudely hacked in after-the-fact. This unfortunate compromise is the equivalent of trying to bolt an internal-combustion engine onto a stagecoach instead of designing an automobile from the ground up.
Too many simple tasks are pointlessly complicated. Take the simple example of creating an array whose elements are arrays. Not only does the developer need to use additional inner brackets for each element, but they must also remember to use the unique @{$a[1]} syntax when referencing. Why all the extra steps? Who knows.
Perl is notoriously impossible to read and maintain. Walk into any bar frequented after-hours by veteran developers and you'll hear story after story being swapped about having to decipher brain-crushing lines of text like: "(my @parsed = $URL =~ m@(\w+)://([^/:]+)(:\d*)?([^#]*)@) || return undef;". This unreadability is in part the result of the fact that:
Perl attempts to be all things to all people and ends up being second-rate at everything. Perl is widely known as the "duct tape of the internet", and it performs superbly in this role. However, just as you cannot build a house out of duct tape alone, so attempting to turn a language that was originally developed for scripting brief, handy utilities into a do-all, be-all programming language will only result in the buggy, bloated, "write-only" mess that Perl has become.
Subroutine signatures, orthogonals, method access, data inheritance: this list could go on and on. But there is no real need. It is now clear that Perl is doomed. At this very moment, Perl 6.0 is being cobbled together, with bulletins about the myriad upcoming features of the new version being issued under titles referring to the Biblical Book of the Apocalypse, the favorite text of messianic streetcorner lunatics. There is no better indicator of the deranged state of mind of the developers behind Perl than this unfortunate choice of imagery. Software developers with any interest in future employment/relevance should seize this opportunity to attain fluency in Ruby or Python and donate their Perl books to the History Department of their local university.
Re:Perl Documentation (Score:1)
2 Too many simple tasks are pointlessly complicated
3 Perl is notoriously impossible to read and maintain.
Re:Perl Documentation (Score:3, Informative)
I happen to disagree with a lot of what the writer says, but I think he was making substantive comments, to which I think I can respond.
Perl is emphatically not an object-oriented language.
Perhaps true depending on the criteria you use, but over the years I've come to care a lot less about this kind of taxonomic issue and more about getting things done without a fuss. The question I'd prefer to ask is, does it support object oriented design? Personally, I'd probably go with Python or Java for a project requiring large scale OO design. However, serious Perl hackers get by pretty well in Perl.
Too many simple tasks are pointlessly complicated.
True, but many complex tasks are very easy in Perl too. A lot of language flamewars unconsciously adopt a desert island scenario: if you had to do ALL your work in one language, what would that language be like? Well, you don't have to get by with one language.
Perl is notoriously impossible to read and maintain.
This is simply untrue. Speaking as a very occasional and non-expert Perl user who has at various points needed to maintain some fairly complex Perl CGIs, well-structured Perl is quite easy for the non-Perl guru to maintain. This is not to say there aren't subtle issues (e.g. confusing scalar and array contexts) that can bite a newbie trying to add substantial new functionality to Perl software, but when code has been written by a Perl expert and it is well structured, Perl is in fact very easy to maintain.
If there is a kernel of truth to this myth, it is perhaps that Perl tempts the inexperienced to create obfuscated code. However, an experienced coder with good habits will produce highly maintainable code.
Perl attempts to be all things to all people and ends up being second-rate at everything.
First of all, Perl beats anything else I've tried hands down for doing filter type programs (i.e. transforming input streams). When you do a lot of this kind of thing, it's very convenient to have a tool that spans the range of complexity from things you would do in awk to things you'd use lex and yacc for. If this is the kind of work you do, then Perl is for you. If this kind of work is not what you do or is just a very small part of what you do, then Perl will probably seem somewhat pointless to you. However, you shouldn't expect your experience to be universal.
Secondly, IMO the fact that a language is general purpose (like Perl) doesn't mean it has to be the best, or even a very good solution for every kind of problem you can imagine. There are some languages out there which are fairly good for a wide variety of problems (Java comes to mind); that is an important niche. However there are niches for languages that are excellent at one or two things. These still need to support styles of work that may not be their strongest suit, however, because real world programs have to address a mix of issues.
Re:Perl Documentation (Score:2)
And the biggest problem with Python is that it is too fundamentally object-oriented. Seriously. Lack of OO features in non-OO languages is a classic newbie complaint.
Too many simple tasks are pointlessly complicated.
You could argue the same for Python. In Perl, you can check for the existence of a file or use regular expressions with built-in operators. In Python you have to import a module and work with the supplied API, which is much more awkward than the Perl way. Both this argument and the "impossible to read and maintain" one can be leveled against *any* language, depending on the examples chosen.
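To make the contrast concrete, here's what those two tasks look like from the Python side; both go through imported modules rather than built-in operators (in Perl they'd be -e $path and $s =~ m{...}):

```python
# File test and regex match in Python: both require module imports.
import os
import re

path_exists = os.path.exists(os.getcwd())            # Perl: -e $path
m = re.match(r"(\w+)://([^/:]+)", "http://use.perl.org/")
scheme, host = m.group(1), m.group(2)                # "http", "use.perl.org"
```

Whether the extra import is "awkward" or "explicit is better than implicit" is, of course, exactly the flamewar in this thread.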
Perl is dead. Long live Perl! (Score:1)
But Perl's creators and users aren't ready to quit.
That's why there's a Perl 6. :-)
Parrot Applets? (Score:2)
It would be a nice replacement for Java. I'd just love to write client-side web applications in Perl.
Gubbins x.y is out!-type posts (Score:1)
I assume we're not talking about the next stage of evolution in brightly coloured birds...
Cheers, Ian
Design arguments (Score:1)
Linux as a platform won't continue to grow relative to
Linus should therefore make a New Year resolution be to 'anoint' a VM as the future target platform and encourage libraries to be built around it. Parrot could be this VM, but from this point of view Parrot doesn't offer any fundamental technical advantage over the Java or
While code distribution requirements alone are sufficient to justify a 'Linux VM', there is another very interesting potential benefit which I haven't seen discussed yet. This applies specifically to Open Source and the principles behind it. This is that a VM could be implemented where the source code and compiled code were semantically equivalent. This means, of course, that all that ever needs to be distributed is the 'compiled' form and, by its very nature, this code is always open.
This is almost the case with Java bytecode, in that decompilers such as JAD can (usually) produce editable code from compiled
Perhaps a better example of sourcecode equivalence goes back to the old days of home computers with interpreted BASIC. Most people will remember that BASIC code was 'tokenized', including keywords, blank lines, punctuation and comments, in a way that could be both edited and executed. Unfortunately, the spread of this useful model was halted when PCs became big enough for 'real' compilers to run.
What's needed, therefore, is a more sophisticated version of the old tokenized program representation, and such systems have been developed for Scheme (and FORTH, I think), usually known as Abstract Syntax Tree interpreters, in contrast to bytecode interpreters like Parrot's current implementation. There may be some middle ground here - I have a few pointers to research in this area, such as Anton Ertl's work [tuwien.ac.at].
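The source/compiled equivalence idea can be shown in miniature. Here's a hedged Python sketch (a toy, not any real AST interpreter design): a tiny expression tree that can be both evaluated and rendered back to editable source, in the spirit of tokenized BASIC.

```python
# A tiny AST that is simultaneously the "compiled" form (evaluable)
# and the "source" form (printable back as editable text).
def ev(node, env):
    kind = node[0]
    if kind == "num":
        return node[1]
    if kind == "var":
        return env[node[1]]
    left, right = ev(node[1], env), ev(node[2], env)
    return left + right if kind == "+" else left * right

def show(node):
    kind = node[0]
    if kind == "num":
        return str(node[1])
    if kind == "var":
        return node[1]
    return f"({show(node[1])} {kind} {show(node[2])})"

prog = ("*", ("+", ("var", "a"), ("num", 1)), ("num", 2))
result = ev(prog, {"a": 4})     # evaluates (a + 1) * 2
source = show(prog)             # prints back as "((a + 1) * 2)"
```

Distribute only `prog` and you have distributed the source: nothing is lost in "compilation", which is the open-source property the parent is after.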
Now, having solved Linux's portability problem and given a big boost to open source, we could perhaps rest there with some satisfaction. However, I'd argue that there's one more important requirement to address and this time it will be one that is familiar to programmers already, if mostly only those working with languages like LISP and Scheme. This is the provision of dynamic reflective capabilities, meaning the ability to treat programs as data.
They say that any large C application includes an interpreter for a higher-level language. In finance, we see applications supporting the entry of complex formulas; in CAD/CAM people build parts which are themselves programmable such as a parameterizable aircraft undercarriage. These features turn users into programmers, but programmers working with a high-level, specialized language.
LISP is a much better language to build such applications (which is why a lot of AutoCAD is in LISP) since LISP programs are themselves data structures that can be added and played with at run-time. Java,
So, to wrap up, it looks like others have left the goalposts wide open for us and there is a significant opportunity to exploit. It will be interesting to see whether this Parrot can do more than mimic other platforms.