
Ruby 1.9.1 Released 226

Janwedekind writes "Yuki Sonoda yesterday announced the release of Ruby 1.9.1 (Ruby Inside coverage with lots of links). The VM of Ruby 1.9, formerly known as YARV, was initiated by Koichi Sasada and has been in the making for quite some time. Ruby's creator Yukihiro Matsumoto has already presented many of the upcoming features in his keynote at RubyConf 2007. Most notably, Ruby 1.9 now supports native threads and an implementation of fibers. A lot of work also went into encoding awareness of strings. The 1.9.1 version is said to be twice as fast as the stable 1.8.7. It will take some time, though, before the majority of existing Ruby extensions are ported to 1.9."
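
A minimal Ruby sketch of two of the new features mentioned above (fibers and per-string encodings), assuming a 1.9.1 interpreter and a UTF-8 source file; the commented values are what I'd expect it to print or return:

    # encoding: utf-8

    # Fibers: cooperative coroutines, new in 1.9 (minimal sketch).
    counter = Fiber.new do
      n = 0
      loop { Fiber.yield(n += 1) }   # suspend here and hand the value back to the caller
    end
    counter.resume   # => 1
    counter.resume   # => 2

    # Encoding-aware strings: every string now carries its own encoding.
    s = "résumé"
    s.encoding   # => #<Encoding:UTF-8> (from the magic comment above)
    s.length     # => 6 characters; in 1.8 this would have been a byte count
    s.bytesize   # => 8 bytes
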
  • hmmm... (Score:3, Funny)

    by bsDaemon ( 87307 ) on Sunday February 01, 2009 @12:31AM (#26682227)

    No wifi, garbage collection not as good as Lisp; Lame.

    • Re: (Score:2, Informative)

      by Narmi ( 161370 )

      No wifi, garbage collection not as good as Lisp; Lame.

      Why was this modded as flamebait? And what's that WOOSHing [slashdot.org] sound?

  • According to this video [confreaks.com], there is a lack of direction. This is by Dave Thomas [wikipedia.org], an important figure in the Ruby world.

    On a side note, I will use PHP on my servers before touching Ruby since I see no advantages for using it over PHP.

    • Ruby's speed and scalability haven't compared well in the past, but with each iteration I always try to re-evaluate and give it another shot. The only argument for it in the past for web dev is RAILs and there are plenty of MVC frameworks for PHP now including PHPulse, Cake and Codeigniter.

      My company tried to make the switch and couldn't find enough developers, couldn't find enough module support, and just couldn't get the same functionality for RUBY as with languages that have larger communities. Plus under large loads, it
      • Re: (Score:3, Interesting)

        You do realize neither Ruby nor Rails is an acronym, right? Ok.

        The only argument for it in the past for web dev is RAILs and there are plenty of MVC frameworks for PHP now including PHPulse, Cake and Codeigniter.

        And those don't compare well [slideshare.net], even to Rails, certainly not to Merb.

        And Merb is going to be merged into Rails.

        under large loads, it buckled

        Hardware is cheap. Couldn't you throw more at the problem?

        • Re: (Score:3, Insightful)

          by AuMatar ( 183847 )

          Hardware is only cheap if someone else is paying for it. Why spend money you don't have to? Amazon and other big companies have already come to that realization. When I worked there a year or so ago it was impossible to get new hardware- not because they were cheap, but to encourage you to use the hardware more efficiently. The end result- a lot more use of virtualization for small stuff, a lot more thought into efficiency of services, and millions saved. Smaller organizations won't save millions, bu

          • Hardware is only cheap if someone else is paying for it.

            No, hardware is cheap, relative to programmer time. Moore's Law only reinforces this. In fact, you're making my argument for me:

            Why spend money you don't have to?

            Let's suppose it takes me half the time to code it in Rails that it would take in PHP, but the result requires twice the hardware to run. And let's suppose I make minimum wage.

            Before it even comes close to costing more to run the hardware than it does to run my salary, you're already running six or seven extra-large instances. Go look up the specs for an extra-large instance. And keep in mind, that's additional -- that assumes the optimized version requires six or seven of those, and my inefficient version requires six or seven more.

            You can run the numbers yourself (a rough sketch follows at the end of this comment), but it ultimately tends to work out the same. And all of that assumes the Ruby version is slower -- and, following my link above, it really isn't.

            Smaller organizations won't save millions, but they'll save a significant chunk of cash.

            For smaller organizations, I would think the "just throw hardware at it" argument makes even more sense. The speed of a nonworking app is irrelevant. The speed of an app serving a dozen programmers and testers, before public release, is similarly irrelevant. By the time you're getting hundreds or thousands of requests per second, you're probably making enough money from ads alone to cover the costs -- but while you're still "only" getting dozens of requests per second, a single Rails server might work just fine.

            Now, I agree, throwing hardware at it is not a good long-term solution. The good long-term solution is to optimize the better systems. However, investing in a demonstrably worse architecture to gain a little performance -- maybe -- in the short term, does not seem like a good move.
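
            A back-of-the-envelope Ruby version of the argument above; every figure here is an assumption for illustration only (the $0.80/hour number is roughly what an EC2 "extra large" instance cost around 2009, but treat it as a placeholder, not a price quote):

              extra_instances     = 7        # extra boxes the less efficient version supposedly needs
              instance_per_hour   = 0.80     # assumed hourly price of one "extra-large" instance
              developer_per_month = 8_000.0  # assumed loaded monthly cost of one developer

              extra_hardware_per_month = extra_instances * instance_per_hour * 24 * 30
              puts extra_hardware_per_month                         # => 4032.0
              puts extra_hardware_per_month < developer_per_month   # => true, with these assumptions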

            • Hardware costs in a hosted environment can be pretty outrageous.

              But I really wanted to take issue with the insinuation by Foofoobar, which I have heard so many times, that Ruby is "not scalable". Even if he meant Rails, not just Ruby, he is just plain wrong. This scalability "issue" has never been a real issue at all... as long as you didn't mind getting your hands a little dirty in server configurations.

              Look at some of the "top" 100 Ruby on Rails sites, and try to tell me again that Ruby "doesn't sca
          • Re: (Score:3, Insightful)

            How do you know the tons of manhours put into optimization work won't cost you more than the added hardware?

            • by suggsjc ( 726146 )
              How do you know it's going to take tons of manhours to get efficiency gains? I'd imagine that if you hired jr. programmers to do the initial round of programming it would only take a few hours of an experienced sr. programmer to identify potential bottlenecks, and either come up with a solution or just fix the problem.

              Besides, inefficient code will add up almost exponentially as your load increases. So you could throw hardware at it as a stopgap, but eventually (if you get large enough) you would have saved a
              • by An Onerous Coward ( 222037 ) on Sunday February 01, 2009 @12:18PM (#26685209) Homepage

                > Besides, inefficient code will add up almost exponentially as your load increases.

                Eh?

                We're not talking about poor algorithm selection. We're talking about using a slower language rather than a faster language. Unless you deliberately adopt a "slower languages deserve slower algorithms" mentality, you're talking about a linear increase in hardware.

                In that case, doubling the hardware requirements in exchange for even a 25% cut in coding time is going to be a huge boon for your company. If writing in a cleaner, more elegant language makes the code base smaller and easier to read, those senior developers are going to have a much easier time finding the bugs and the bottlenecks. Plus, if you can write the thing that much faster, your developers are going to have a lot more fun.

                Since we're talking specifically about Rails here, the first optimization pass is usually A) finding pages and partials that can be cached, and B) tweaking your ActiveRecord queries so that the database grabs all the records it needs on the first pass. Both are simple to do, and once you've accomplished them it's going to be quite a while before a normal site is going to run into scaling issues.
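
                A sketch of what that first pass tends to look like in Rails 2.x-era code; the controller, model, and association names here are made up for illustration:

                  class PostsController < ApplicationController
                    caches_page :index                 # A) cache the rendered page after the first request
                                                       #    (takes effect when caching is enabled)
                    def index
                      # B) pull the posts and their authors in one query batch
                      #    instead of one extra query per post (the classic N+1 fix)
                      @posts = Post.find(:all, :include => :author, :limit => 20)
                    end
                  end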

                • by jadavis ( 473492 )

                  We're not talking about poor algorithm selection. We're talking about using a slower language rather than a faster language.

                  You assume that the same algorithm used in two languages will have the same big-O characteristics for CPU and memory consumption. That is not true.

                  For instance, some languages can optimize tail recursion into a loop, and some cannot. For those that can, the memory consumption for a given algorithm may be constant. For those that cannot, the memory consumption may be O(n).
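
                  A small Ruby sketch of that point (MRI does not optimize tail calls unless you explicitly enable it, so the recursive version really does use O(n) stack here):

                    # Same algorithm twice: tail-recursive and iterative.
                    def sum_recursive(n, acc = 0)
                      return acc if n.zero?
                      sum_recursive(n - 1, acc + n)   # tail call; without TCO each call adds a stack frame
                    end

                    def sum_loop(n)
                      acc = 0
                      while n > 0
                        acc += n
                        n -= 1
                      end
                      acc                             # constant stack space regardless of n
                    end

                    sum_loop(1_000_000)        # fine
                    sum_recursive(1_000_000)   # likely SystemStackError on MRI: O(n) memory in practice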

        • Hardware is cheap. Couldn't you throw more at the problem?

          I hear that answer a lot from the RUBY community. Why would I as a business be expected to switch to a language that requires twice as much hardware? What you MAY save with software dev speed, you now lose with day-to-day server maintenance.

          The key to good engineering is to simplify, because the more parts, the more chance of failure. So throwing more hardware at the issue when similar languages such as PHP, PERL and PYTHON can do it with far less

    • by Chandon Seldon ( 43083 ) on Sunday February 01, 2009 @01:30AM (#26682433) Homepage

      On a side note, I will use PHP on my servers before touching Ruby since I see no advantages for using it over PHP.

      Choice of programming language actually matters, and dismissing languages you haven't used much is foolhardy. If this isn't obvious to you, this article [paulgraham.com] may prove enlightening.

      • by AuMatar ( 183847 ) on Sunday February 01, 2009 @01:55AM (#26682539)

        No, it really doesn't. What matters is the availability of libraries, and your personal proficiency with the language. Given C and a regex library, I'll write better, cleaner, and faster string-parsing code than I would in perl. Why? Because I've used C and C++ every day for the past 8 years. Even though I use and know perl, and regex is built into the language, I make more mistakes in it due to using it only a few times a year. And yes, I've actually tried this- I was 5x faster in C with the regex library. Do the test again with a perl maven, and I'm sure the opposite result would occur, even if you gave a problem that's more traditionally a C thing.

        Now there are some languages that better suit individual people than other languages, due to the way they approach problems. Lisp is good for people who think very mathematically. C is good for those who think in a very step-by-step manner. OOP is good for people who think in terms of models and interactions. But you'll always be more efficient in a language you know well than one that's new to you.

        Which isn't to say there's no reason to learn a new language- you may find one that fits you a bit better, especially if you learn a new paradigm like functional. But you'll never solve a problem quicker by using a language you aren't as familiar with, unless a library for a major piece of functionality exists in that language but not yours.

        • Re: (Score:3, Insightful)

          That approach makes sense for small jobs, but for projects that take more than (say) two months, it makes more sense to choose a roughly suitable language, even if your proficiency is lower.

          Also, for any code that isn't throwaway code, you have to assume that some unknown person will eventually inherit and maintain your code. Under that assumption, it's more important for the language to be appropriate for the task than for the language to be convenient for the initial programmer. You wouldn't want to inh

          • Re: (Score:3, Insightful)

            by AuMatar ( 183847 )

            Your maintenance point is good. For that reason alone I wouldn't use C for a web front end (back end service sure, not a front end), because the vast majority of people they'd hire to maintain it wouldn't be experts in it. And for any team project it's not your best language that matters, but the best language of the team as a whole.

            I disagree with your time argument though. If anything, the time just makes it more important to use what you're familiar with. If you're talking about something taking 1 hr

          • by BerntB ( 584621 )

            I agree with your points. That said, the PCRE library seems quite sweet and I could probably live quite happily with it. Sure, Perl has advantages because regexps are built into the language and don't need quoting etc.

            (-: And that from someone totally opposite of you; I was never really happy with C++ but do Perl for fun. :-).

        • Lisp is good for people who think very mathematically.

          Which aspects of mathematical thinking are well aligned with Lisp?

          What I think characterizes mathematical thinking (as opposed to programming thinking) is the declarative and/or pure nature of math: variables don't change, and there's no notion of time.

          I think a pure functional language, such as Haskell (or at least pure code) would fit better with mathematical thinking, because it has the same unchanging nature that math has.

          I'm guessing that a pure logic programming language would also make a go

        • Question for you:

          Which is greater:

          $total_time_using_language_you_know = $time_per_project_using_language_you_know * $nprojects

          $total_time_using_new_language = $time_to_learn_new_language + $time_per_project_using_new_language * $nprojects

          I think you will find that the answer is "it depends". Specifically, it depends on how long it takes to learn the new language, how much more productive that makes you, and how many projects you do. Assuming that a new language makes you more productive, if you do
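
          Plugging illustrative numbers into those formulas (a sketch; every figure below is an assumption, and the break-even obviously moves with them):

            time_to_learn_new_language = 80.0   # hours before you're productive in the new language
            time_per_project_old       = 40.0   # hours per project in the language you already know
            time_per_project_new       = 30.0   # hours per project in the new language, once learned

            break_even = time_to_learn_new_language / (time_per_project_old - time_per_project_new)
            puts break_even   # => 8.0: beyond eight projects, the new language comes out ahead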

          • by AuMatar ( 183847 )

            Not at all. You're assuming that it really will be faster to do a project in $newLanguage. I disagree- unless there's some library that doesn't exist in $oldLanguage, the productivity of languages at equal levels of experience is equal. I have never in my life seen a problem made simpler by changing languages rather than downloading a library. If you want to dispute my point you need to prove the assertion that it isn't so, not just throw some pseudomath in a post.

      • Re: (Score:3, Interesting)

        by Foofoobar ( 318279 )
        That's an oversimplification of the issue. You don't simply use a new tool because it CAN do something; it is also a matter of whether it can do it BETTER than the current tool, whether you can find people who can use that tool, whether you can find information on how to use that tool with other systems and tools, and whether that tool has a robust set of expansions/libraries/modules.

        To date, for a lot of people and companies, RUBY was hype (and this has a lot to do with the community that hyped it). Now th
    • On a side note, I will use PHP on my servers before touching Ruby since I see no advantages for using it over PHP.

      If you see no advantages to using it over PHP, you obviously haven't looked very hard.

      Off the top of my head: Ruby has better syntax, a better object model, runs faster (really [slideshare.net]), and has better standard libraries -- Rails aside, Ruby tends not to pollute the global namespace with bullshit like mysql_escape_magic_quotes_no_really_I_mean_it_this_time...

      PHP's advantage? Lots of unimaginative programmers like you know it, and it's slightly better at mixing code and data, since it's really just a Turing-complete templat

      • It's not just that, either. Ruby is more consistent. Object-orientation was a latecomer to PHP, and the language still shows its non-object roots. Ruby is consistently, 100% object oriented, and it shows.

        The old tales about being able to do things faster and with many fewer lines of code in Ruby are not just fluff. For example being able, in RoR, to take the submission from a 40-element form on a page, and put it in your database with just "my_object.create(params)" is pretty sligging frick, if you ask
        • by Pulzar ( 81031 )

          The old tales about being able to do things faster and with many fewer lines of code in Ruby are not just fluff. For example being able, in RoR, to take the submission from a 40-element form on a page, and put it in your database with just "my_object.create(params)" is pretty sligging frick, if you ask me. Of course that is only a very simple example, but still.

          Well, in Cake it's '$this->Model->create(); $this->Model->save($this->data);'. As examples go, that's not very convincing.
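
          For reference, a sketch of the Rails pattern being described (Rails 2.x-era mass assignment; Widget and the param key are made-up names, and a real app would whitelist attributes):

            class WidgetsController < ApplicationController
              def create
                # every field of the 40-element form arrives in params[:widget]
                @widget = Widget.create(params[:widget])
                redirect_to @widget   # assumes RESTful routes (map.resources :widgets)
              end
            end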

    • I have now seen the video you linked to, and I didn't notice where he said Ruby lacks direction. He said, in fact, that Ruby is great as it is; what he wants to see, however, is new ideas for Ruby experimented on as forks rather than tried within the mainline Ruby. I think the video was quite good and worth seeing, even though it lasts 47 minutes!

      The parent I believe is incorrect in using that video to claim there is a lack of direction.

  • Unicode? (Score:5, Interesting)

    by shutdown -p now ( 807394 ) on Sunday February 01, 2009 @03:20AM (#26682795) Journal

    A lot of work also went into encoding awareness of strings

    That's quite a fancy way to say "a lot of work went into making dealing with strings and encodings as messy as possible".

    So far, Ruby 1.9/2.0 is the only high-level language I know of which allows strings within the same program to be in different encodings (attaching a reference to an encoding to every string). For double fun, the encodings need not be compatible with each other (not even with Unicode). This might also make Ruby the first language in which string comparison and concatenation are not well-defined for two arbitrary strings (as you get an exception if encodings are incompatible). Just wonderful - imagine writing a well-behaved library which does any sort of string processing on input parameters under these constraints...
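
    A minimal sketch of the behaviour in question on 1.9.1, as far as I can tell (concatenation of incompatible encodings raises; plain == just reports the strings as unequal):

      # encoding: utf-8
      utf8 = "héllo"
      sjis = "こんにちは".encode("Shift_JIS")

      utf8 + sjis    # raises Encoding::CompatibilityError (UTF-8 and Shift_JIS)
      utf8 == sjis   # => false; no exception, the strings simply never compare equal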

    • Cocoa and CoreFoundation, though not as high level, do something similar.

      • by _merlin ( 160982 )

        Yeah, but all Cocoa and CoreFoundation string encodings can be mapped to Unicode, it's possible to access any string object as Unicode characters, and all comparisons are effectively done in the Unicode character set. You never end up with two strings for which the comparison results are undefined, which I think was the GP's point. (The GP may or may not actually be correct - Ruby could be just as good as Cocoa/CF for all I know - I'm a C/C++/Objective-C guy and don't really know much about Ruby.)

    • Why should they be comparable? They are fundamentally different things. A "Unicode String" is not the same as an "ASCII String" at all.

      How should your string compare work? All similar characters in all popular encodings should map to each other? Hmmmm... that would be a pretty damned big code base to handle all those permutations. Maybe as big as the rest of Ruby.
      • by _merlin ( 160982 )

        Well that's what Unicode is for - you can map all* characters from all text encodings to it, do your processing in one universal character set, and convert to the required encoding for output. That's how Java, .NET, Cocoa, CoreFoundation and even Windows CE and NewtonOS do things.

        *yes, I know there are a few characters from Big5-HKSCS that aren't in Unicode, and the Apple logo from legacy Macintosh text encodings isn't there, either. But for 99.999% of cases, Unicode contains the character.
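
        In Ruby terms, the "convert at the boundary, process in one universal set" approach looks roughly like this (a sketch using 1.9's String#encode):

          # encoding: utf-8
          legacy = "日本語".encode("EUC-JP")   # pretend this arrived from a legacy source
          utf8   = legacy.encode("UTF-8")      # normalize to Unicode on the way in

          utf8 == "日本語"                 # => true: all processing happens in one encoding
          utf8.encode("EUC-JP") == legacy  # => true: convert back only at the output boundary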

        • I understand what you are saying, but that is not exactly what I was talking about. Unicode encompasses a huge data space. A "string compare" function would be absolutely huge and inefficient unless it were specific to the kind of compare you were doing. E.g., ASCII Unicode, Kanji Unicode, Cyrillic Unicode, etc. One "universal" string compare would be much too large and slow for general-purpose use.

          Of course, what kind of strings you "normally" use could be handled in your internationalization configur
      • Re:So? (Score:4, Interesting)

        by spitzak ( 4019 ) on Sunday February 01, 2009 @11:05AM (#26684639) Homepage

        An ASCII string can be mapped to a Unicode string. For each byte in an ASCII string there is a matching Unicode symbol.

        How should string compare work? Well for simple ==, I think it should fail if the encodings are different, as they really are different. It can also fail if the two strings encode the same Unicode but in different ways (this is possible in some encodings).

        There can however be a more complex call that converts both strings to Unicode and compares them. One huge problem with most current implementations, including Python, is absolutely brain-dead (and "politically correct") handling of invalid UTF-8, where invalid encodings throw errors, which makes use of UTF-8 actually impossible for non-trivial programs. Instead it should never throw errors. Error bytes should represent something unique in Unicode, one popular proposal is U+D8xx (which is also an "error" in UTF-16).

        A problem Python (and a lot of other programs) have is that they think "Unicode" means "some sort of encoding where each Unicode symbol takes the same space". This requires 21 bits per symbol, and the only practical way to support this is to use 32 bits. Almost immediately they run into difficulty in that Windows uses UTF-16, so they punt and use UTF-16, basically abandoning their entire idea. They should instead use UTF-8, which has enormous advantages: UTF-8 errors are not lost and can be translated differently when finally used (i.e. for display turning them into CP1252 is better, but for a filename turning them into the above error codes is better), all other encodings (which are byte based) can be stored in the same object, and no translation is needed when you load UTF-8 data. Also UTF-16 (including invalid UTF-16) can be losslessly translated to UTF-8 and back, so there is no problem supporting Windows backends either.
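
        For comparison, a sketch of how Ruby 1.9 handles the invalid-byte case: the bytes are kept, nothing is thrown away up front, and only operations that actually have to interpret the characters will complain.

          bytes = "abc\xFFdef".force_encoding("UTF-8")   # tag raw bytes as UTF-8 without converting

          bytes.valid_encoding?   # => false: \xFF is not legal UTF-8
          bytes.bytesize          # => 7: the original bytes are all still there
          # Character-level operations (case mapping, regexp matching, etc.) are the
          # ones that raise an "invalid byte sequence" error when they hit the bad byte.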

        • by tuffy ( 10202 )

          There can however be a more complex call that converts both strings to Unicode and compares them. One huge problem with most current implementations, including Python, is absolutely brain-dead (and "politically correct") handling of invalid UTF-8, where invalid encodings throw errors, which makes use of UTF-8 actually impossible for non-trivial programs. Instead it should never throw errors. Error bytes should represent something unique in Unicode, one popular proposal is U+D8xx (which is also an "error" in

        • I understand, but that wasn't my point. The question was why strings in Unicode are not treated exactly the same as strings in the native encoding, so that a "universal" string compare would work.

          Ruby is not limited to UTF-8, though that is the default. But that is my point. If you wanted to implement a "universal" string compare that would map all valid characters in Unicode to every encoding available to your system, you would have a monster on your hands. Not all Kanji character encodings are Unicode.
    • Re: (Score:3, Interesting)

      by ubernostrum ( 219442 )

      So far, Ruby 1.9/2.0 is the only high-level language I know of which allows strings within the same program to be in different encodings

      Perhaps you need to spend time trying more languages; you'd learn not only that other languages have done this, but that many of them later rejected the idea and moved to, typically, a pure Unicode string type with a separate non-string type for working with sequences of bytes in particular encodings.

      • Perhaps you need to spend time trying more languages; you'd learn not only that other languages have done this, but that many of them later rejected the idea and moved to, typically, a pure Unicode string type with a separate non-string type for working with sequences of bytes in particular encodings.

        One of my hobbies is researching the history of PLs, but I guess I never ventured far enough back to stumble onto it. If you could give any examples of languages that did the same thing, I would be grateful.

        Otherwise, I agree that the only thing that makes sense today from a pragmatic perspective is an all-Unicode string type.

    • Re:Unicode? (Score:5, Funny)

      by TheSunborn ( 68004 ) <mtilsted@NoSPAm.gmail.com> on Sunday February 01, 2009 @01:24PM (#26685787)

      PHP also allows strings to use different encodings within the same program. With the extra twist that PHP doesn't keep track of the encoding, so if you want to find the number of characters in a string, you, the developer, must know the current encoding of the string, and then call the right method, based on which encoding the string has.

      Sometimes I think that the developers of php just take all the bad things in other languages, and say "I can make a worse implementation of this."
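
      For contrast, a sketch of the same situation in Ruby 1.9, where the string itself carries its encoding so a single length call is enough:

        # encoding: utf-8
        utf8 = "こんにちは"
        sjis = utf8.encode("Shift_JIS")

        utf8.length     # => 5 characters
        sjis.length     # => 5 characters, same call, no mb_-style variants needed
        utf8.bytesize   # => 15 (what PHP's strlen would report for the UTF-8 bytes)
        sjis.bytesize   # => 10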

       

      • PHP also allows strings to use different encodings within the same program. With the extra twist that PHP doesn't keep track of the encoding, so if you want to find the number of characters in a string, you, the developer, must know the current encoding of the string, and then call the right method, based on which encoding the string has.

        That's rather different, and it's also how C/C++ work. At a low level, that's how anything works, really - a java.lang.String, for example, is really just an array of 16-bit ints at heart. Where the difference appears is when you start performing encoding-aware operations on strings - case-insensitive comparisons, lowercase & uppercase, and so on. At that point, the language operator (or the library functions) that do that have to decide how to figure out the encoding of the string. The most common one to
