Mojolicious 2.0: Modern Perl For the Web 132
Kvorg writes "After a year of rapid development, newly released version 2.0 of Mojolicious, the new generation real-time Perl web framework written by Sebastian Riedel and many others, offers a versatile and elegant web framework that is as good at web scraping and simple scripts as it is at building complex, interactive real-time applications with HTML5 and websockets. It supports easy 0-dependency installs, excellent developer mode, multiple deployment scenarios, many CPAN modules and plugins."
Wasn't that supposed to be Ruby? (Score:3, Interesting)
Not to take anything away from this framework, but now I'm curious since at first I took the post title too literally. Wasn't Ruby supposed to be modern Perl for the web? Whatever happened to that? People get bored? Web developer ADHD?
Re:Useful for just certain applications (Score:4, Interesting)
If I'm reading the docs correctly, the web server part of Mojolicious is optional:
http://mojolicio.us/perldoc/Mojolicious/Guides/Cookbook#DEPLOYMENT [mojolicio.us]
It also supports a pretty substantial list of alternatives to its built-in web server. Want to run it only as a CGI script? Then do that. Run your app as a PSGI script? Yeah, you can do that too. Started as a CGI script but now need a lot more horsepower? It's not so hard to move what you started on something simple over to something with a little more oomph.
So that's, I guess, "neat."
Re:Wasn't that supposed to be Ruby? (Score:1, Interesting)
Hard to do serious web apps in a language that can't even handle the basics of Unicode.
And once you got past the scaffolding, Rails turned out to be a toy MVC framework that 1) monkey-patches base classes and breaks other libraries, 2) has a toy ORM that can't generate joins properly, and 3) has a template language that basically amounts to script-in-page.
mojo (Score:4, Interesting)
Totally apart from being a pretty slick MVC framework, the Mojolicious project has my undying affection for producing the mojo tool.
How many times have you wanted to scrape something out of a web page and you thought "I know, I'll use wget (or curl) and sed! Easy enough." so you write
# get story titles from slashdot
wget slashdot.org -O - 2>/dev/null | sed -e 's/uh, what now?//'
And then you get stuck fiddling with ever-crazier sed expressions to filter down to just the data you want? I know I've been there a dozen times and wound up with various unpleasant solutions or, when necessary, I've broken down and written a proper Perl script that parses the HTML (and taken about 20 times as long as I planned to!). Maybe you try
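The escalating hack the comment alludes to was presumably something like a blanket tag-stripping sed expression; a sketch, run on a made-up HTML fragment rather than the live page:

```shell
# the kind of ever-crazier sed you end up with (HTML fragment is made up)
html='<h2 class="story"><a href="//slashdot.org/story/1">First story</a></h2>'
echo "$html" | sed -e 's/<[^>]*>//g'   # strip every tag, hope the markup never changes
# -> First story
```

It works right up until an attribute contains a `>`, a tag spans two lines, or the site tweaks its markup, which is exactly the treadmill being described.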
And just go with it, because it's good enough. Well, no more! Now I can say
And have my results just like that!
Just as jQuery was a revolution in DOM scripting, to the point where I just won't write JS without it, so is mojo a revolution for these kinds of applications. I can now pull down pages and parse the actual structure and select just what I need. Beautiful.