Anticipatory Scheduler in Kernel 2.5+ Benchmarked 252

gmuslera points to this article at KernelTrap comparing the available benchmarks for the schedulers available for the 2.5 kernel, with the 2.4's scheduler as a reference poin. "In some cases, the new Anticipatory Scheduler performs several times better than the others, doing a task in a few seconds instead of minutes like the others."
This discussion has been archived. No new comments can be posted.

  • by daeley ( 126313 ) on Wednesday February 26, 2003 @07:23PM (#5390964) Homepage
    Somehow, I just *knew* this was coming. ;)
    • by nairnr ( 314138 ) on Wednesday February 26, 2003 @07:27PM (#5391007)
      Just like they couldn't anticipate the Slashdotting they were about to receive...
    • Man, they should've just anticipated the coming Slashdotting and used it as a real-life benchmark for the scheduler...
    • > Mac OS X: because making Unix user-friendly is easier than debugging Windows.

      Heh. More like "because making user-friendly Unix is easier than making unix-sturdy MacOS".
    • by Anonymous Coward

      Somehow, I just *knew* this was coming. ;)

      Ah, grasshopper, but did you know it will be coming to Slashdot yet again in a day or so? Anyone can predict the future; it takes a sage to predict a Slashdot duplicate.

  • by Sayten241 ( 592677 ) on Wednesday February 26, 2003 @07:29PM (#5391025)
    The anticipation is killing me.
  • by ChangeOnInstall ( 589099 ) on Wednesday February 26, 2003 @07:30PM (#5391033)
    If only kerneltrap.org were running the new schedu... oh, never mind.
  • Oops (Score:3, Funny)

    by AnotherShep ( 599837 ) on Wednesday February 26, 2003 @07:31PM (#5391043)
    Seems to be slashdotted already... Wonder if it saw that coming.
  • by DonkeyJimmy ( 599788 ) on Wednesday February 26, 2003 @07:33PM (#5391065)
    In some cases, the new Anticipatory Scheduler performs several times better than the others, doing a task in a few seconds instead of minutes like the others.

    The task in question was anticipating things, so the test might not be all that fair.
  • by huhmz ( 216967 ) on Wednesday February 26, 2003 @07:33PM (#5391068)
    I have a multithreaded perl app (yes, perl has decent thread support in 5.6 and later) which does a lot of mixed write/read I/O to and from a database. With 2.4 and 40 threads I can hardly use the system (of course I don't have to abuse my computer with 80 threads, but I'm trying to prove a point here).
    Switching to a new tabbed terminal in fluxbox takes ages to redraw, and switching between virtual desktops is an act of futility.
    With 2.5 I get good interactive performance and don't see this effect much at all. For sure this is also a bit due to the new VM code.
    Of course I would probably get the best interactivity with the SFQ scheduler, but that's secondary in this case. At least xmms doesn't skip with this during very heavy I/O. I do not use the new NPTL code, which would help further I suppose.
    • by brer_rabbit ( 195413 ) on Wednesday February 26, 2003 @08:14PM (#5391396) Journal
      For sure this is also a bit due to the new VM code.

      Fer sure.

    • I have noticed that there is a huge latency involved when creating threads in Perl 5.8 - in fact I benchmarked it and it was about 20 times slower than a fork! I think this has something to do with Perl making duplicates of all the variables for each new thread (they aren't shared by default) and possibly not using a copy-on-write algorithm to do so. I don't know. But while the performance of the threads was decent once they got going, the latency to spawn them was really crippling, and I went back to using forks... I wonder if this new scheduler will have any impact on Perl thread performance?
    • Preemption patches (Score:3, Informative)

      by Sits ( 117492 )
      Are you sure that you are not seeing the improved desktop interactivity from the kernel preemption and low latency patches in 2.5? I suspect that they would affect desktop interactivity more than this scheduler...
  • by Penguin2 ( 62270 ) on Wednesday February 26, 2003 @07:36PM (#5391088)
    It reminds me of the old song and Heinz Ketchup commercial:
    "Anticipation is making me wait"...

  • Poin..... (Score:5, Funny)

    by akiy ( 56302 ) on Wednesday February 26, 2003 @07:38PM (#5391117) Homepage
    "with the 2.4's scheduler as a reference poin."

    I'm still anticipating the "t" in "point" myself...

  • by Mr. Sketch ( 111112 ) <`mister.sketch' `at' `gmail.com'> on Wednesday February 26, 2003 @07:39PM (#5391122)
    Too many connections
  • doing a task in a few seconds instead of minutes like the others


    A task? I thought schedulers were for multitasking...

  • by Dark Lord Seth ( 584963 ) on Wednesday February 26, 2003 @07:41PM (#5391143) Journal
    Too many connections

    You know, you can ALMOST feel an admin over there just itching to type in "Fuck you Taco! And your site!" instead of the connection stuff...

  • by DonkeyJimmy ( 599788 ) on Wednesday February 26, 2003 @07:47PM (#5391198)
    Me: Computer, I would like to open Netscape
    Computer: I have anticipated you would like to open IE and have already opened it for you.
    Me: Ok, then I would like to go to the game review site to see what I want to buy.
    Computer: I have already begun the download of the new Age of Empires game, your account has been charged.
    Me: Can I at least go to the bathroom?
    Computer: No.
  • by Anonymous Coward on Wednesday February 26, 2003 @07:50PM (#5391221)
    It's a trap! We've got to give this website more time. Concentrate all firepower on the next article below!
    /ackbar
  • by FyRE666 ( 263011 ) on Wednesday February 26, 2003 @07:50PM (#5391224) Homepage
    ... the Mozilla developers added a special "Slashdotted" plugin, you know. So you could launch a special tab that would keep hammering away at a site in the background until it did bloody well load ;-)
    • which would fit into one of two categories: 1. a solution for an already-solved problem, 2. a killer feature.
    • You know it's already been done. But if *you* had a plugin that meant you could load slashdotted pages when no one else could easily, and you knew it would only work as long as you didn't give it to other people, would you *really* give it to everyone? Then no one would benefit, and the servers would stay hosed even longer...
    • Yeah! Come on, everyone! Let's hammer the slashdotted sites!
      That'll fix the problem. Honestly. Promise! =)
      *looks serious*
    • by ottffssent ( 18387 ) on Wednesday February 26, 2003 @08:22PM (#5391452)
      I know that's Score:5 Funny but there's a serious note here.

      If a site is down, Mozilla could pretty easily grab the Google cache instead. Or, if there's no Google cache or if the cache matches the current page, check archive.org. Mozilla could auto-generate a page offering the user some options. Think about it - it would be the end of 404 errors. Instead of

      404 The requested page could not be found.

      you could get

      The site you requested is currently down. Would you like to use Google's cache [216.239.51.100] instead? I also have a snapshot of the page you requested from August 12, 2002 [archive.org] but older ones are available here [archive.org].
      • I like this idea very much. I hope some kind soul writes it up as a plugin.
      • by Gorphrim ( 11654 ) on Wednesday February 26, 2003 @09:08PM (#5391783)
        I wonder, did anyone bother submitting this idea to the Mozilla people instead of to /.?
        • I wonder, did anyone bother submitting this idea to the Mozilla people instead of to /.?


          I wonder if anyone bothered to ask GOOGLE if it was ok. They did just threaten to sue some guy for using the term GOOGLE as a verb on his website. Had an article on it here just 2 days ago.

          I'm too lazy to look up the original link from 2 days ago. Just wait a day or two and read it then, I'm sure it will get duped ;)
          • by Anonymous Coward
            They did just threaten to sue some guy for using the term GOOGLE as a verb on his website.
            No, they didn't. And it wasn't a cease-and-desist letter. All they said was, "Hi, I'm a google lawyer (TM). We like our trademark, which for obvious reasons you'll understand. Could you please indicate that google is a trademark in your definition, or, if it's easier, just remove it? Thanks.".

            Unlike you, I'm not too lazy to type "google" into the Slashdot search box you probably see at the top-right of this page, click search, and click through to the article [slashdot.org] and from there the (NON-cease-and-desist) letter [linguistlist.org].
            It reads, in full:

            Dear Mr. McFedries:

            I am trademark counsel for Google. I have recently become aware of a
            definition of "google" on your website, www.wordspy.com. This definition
            implies that "google" is a verb synonymous with "search." Please note that
            Google is a trademark of Google Technology Inc. Our brand is very
            important to us, and as I'm sure you'll understand, we want to make sure
            that when people use "Google," they are referring to the services our
            company provides and not to Internet searching in general. I attach a copy
            of a short, informative piece regarding the proper use of "Google" for your
            reference.

            We ask that you help us to protect our brand by deleting the definition of
            "google" found at wordspy.com or revising it to take into account the
            trademark status of Google.
            [basically saying: Note: Google (TM) is a registered trademark of Google Technologies].

            Now quit spreading FUD. I love Google.
        • "I wonder, did anyone bother submitting this idea to the Mozilla people instead of to slashdot?"

          Where do you think the mozilla hackers live?
      • Why not use Gnutella or one of the lovely peer-to-peer swarming protocols to download it instead? That's what they're for. I don't understand why Mozilla doesn't already do this when several of these protocols already work explicitly on URLs.

    • Don't you think mozilla is bloated enough already? If they added that, then they might as well just go ahead and include a kitchen si... wait... err... nevermind.
    • Opera. Right-click. Reload every 30 seconds. Done.
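The fallback idea a few comments up is easy to prototype outside the browser. Below is a minimal sketch: given an unreachable URL, build the alternative locations a browser could offer the user. The Google-cache and archive.org URL shapes are assumptions based on how those services were commonly linked, not a documented API.

```python
from urllib.parse import quote

def fallback_urls(url):
    # Candidate stand-ins for a dead page: Google's cached copy first, then
    # the Wayback Machine's snapshot index. Both URL shapes are assumptions.
    return [
        "http://www.google.com/search?q=cache:" + quote(url, safe=""),
        "http://web.archive.org/web/*/" + url,
    ]
```

A browser-side implementation would try these in order when the original request fails, much as the comment above describes.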
  • how it works (Score:5, Informative)

    by diegocgteleline.es ( 653730 ) on Wednesday February 26, 2003 @08:09PM (#5391355)
    Given that kerneltrap has "Too many connections", I don't know if they have this link: http://www.cs.rice.edu/~ssiyer/r/antsched [rice.edu]

    where it explains what anticipatory scheduling does.


    (BTW, it seems that FreeBSD has had it for ages)
  • by fr2asbury ( 462941 ) on Wednesday February 26, 2003 @08:10PM (#5391361)
    I see you shiver with antici - - - - (Say it!) - - - (Consti!) - - - pation!

    ;-)

  • by !Xabbu ( 1769 ) on Wednesday February 26, 2003 @08:11PM (#5391369) Homepage
    What does it do for kernel performance? Or does it do anything for performance? ...I haven't run Linux on a desktop for almost a year. It hurts mommy.. please make the bad man go away!
    • by Caoch93 ( 611965 ) on Wednesday February 26, 2003 @08:21PM (#5391445)
      In this case, they're talking about an I/O scheduler, which plans I/O through some resource so that activities on that resource complete correctly and as quickly as possible. To be specific, this is the hard disk I/O scheduler, which plans reads and writes to and from your hard disks.

      The Anticipatory Scheduler is designed to optimize your disk I/O based on the assumption that reads of related material tend to happen in short succession, while writes tend to be singular and larger. As a result, when the scheduler encounters a read, it anticipates that there will be other reads in short succession, so it waits and then checks for them and, if they're there, they move to the front of the line. The name comes from the tiny waiting period when it anticipates future reads.

      This is, of course, a condensed version of what I think I've learned from reading KernelTrap for the last few months. Someone's bound to tell you I'm talking arse.

    • by RainbowSix ( 105550 ) on Wednesday February 26, 2003 @11:32PM (#5392778) Homepage
      Other forms of disk scheduling are a little simpler. Assume that the disk is really slow, and lots of requests are coming in that are buffered somewhere. Obviously you want to handle requests that are close to where the disk head currently is, since that is faster and you won't have the head going all over the place.

      FCFS (first come first serve) - the easy, stupid way. Take requests as they come. If the requests arrive as front, end, front, end, performance suffers, because obviously you would rather service them as front, front, end, end.

      SSTF (shortest seek time first) - do the request that is shortest to the head first. The problem with this is if you keep asking for stuff at the front of the disk and have a lone request for the end of the disk, the lone request could get ignored for a long time (starved) since the scheduler does the stuff at the front since seek times are much lower.

      SCAN - the head starts at one end of the disk and goes to the end, servicing requests along the way, but never going back so that that lone request from the previous method does get serviced. When it gets to the end the head moves back toward the front, servicing requests along the way. It keeps going back and forth.

      C-SCAN - A variant of SCAN where it doesn't go back and forth. It goes from front to back servicing requests, then goes all the way back to the front before it starts servicing again. It gives more uniform times: if you're using SCAN and your request at the beginning is just missed by the head, you have to wait until it goes all the way to the other end and comes all the way back, whereas in this method it goes to the end and then you're the next request to be serviced if you are at the beginning.

      LOOK - The same as C-SCAN and SCAN except it doesn't blindly go to the end of the disk; it stops and turns around when there aren't any more requests in that direction. Of course, if you show up right after the head changes direction, sucks to be you :)
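The policies described above can be made concrete with a toy model. This is an illustrative sketch, not kernel code: requests are cylinder numbers, the head starts at an arbitrary position, and we compare the service order and total head movement each policy produces. The request queue and starting cylinder are invented for illustration.

```python
def fcfs(head, requests):
    # First come, first served: take requests in arrival order.
    return list(requests)

def sstf(head, requests):
    # Shortest seek time first: always jump to the nearest pending request.
    # Risk: requests far from a busy region can starve, as noted above.
    pending, order = list(requests), []
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        pending.remove(nearest)
        order.append(nearest)
        head = nearest
    return order

def look(head, requests):
    # LOOK: sweep upward servicing requests, then reverse; unlike SCAN we
    # turn around at the last pending request instead of the disk edge.
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down

def total_seek(head, order):
    # Total head movement for a given service order.
    moved = 0
    for r in order:
        moved += abs(r - head)
        head = r
    return moved

queue = [98, 183, 37, 122, 14, 124, 65, 67]  # invented request queue
head = 53                                    # invented starting cylinder
```

On this queue, FCFS moves the head 640 cylinders, LOOK 299, and SSTF only 236 - though SSTF buys that efficiency at the risk of starving outlying requests.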
  • doing a task in a few seconds instead of minutes like the others

    Excellent point in the article! I'm glad I saw this on Slashdot first. I can't wait for the new kernel to be released.

  • by Wesley Felter ( 138342 ) <wesley@felter.org> on Wednesday February 26, 2003 @08:21PM (#5391444) Homepage
    The blurb didn't mention that the article is comparing disk schedulers, not CPU schedulers.
  • I remember Ingo Molnar introducing this scheduler running in O(1) time months ago, sometime late in 2002... AFAIK it has been part of the 2.5 kernel for quite a long time, and at the time it was first tested there were some benchmarks. I vaguely remember something like "we tried to launch several hundred processes; w/o the scheduler: 15 minutes, w/ the scheduler: 2 seconds." So what is so new about some benchmarks being available?

    Or am I completely off-topic? ;)

  • Good stuff, but... (Score:5, Informative)

    by Corbet ( 5379 ) on Wednesday February 26, 2003 @08:35PM (#5391540) Homepage
    ...of course, LWN readers knew about the anticipatory scheduler back in January [lwn.net]. We also looked at the SFQ and CFQ I/O schedulers two weeks ago [lwn.net].
    • ...of course, LWN readers knew about the anticipatory scheduler back in January

      So did KernelTrap readers:

      http://www.kerneltrap.org/node.php?id=574

      Ouch! Touché!
  • by Miles ( 79172 ) on Wednesday February 26, 2003 @08:56PM (#5391673) Homepage
    If you're really curious, you can check out the mailing list for more info. Try searching for "IO scheduler benchmarking" or "iosched". To save you the trip to the mailing lists, here are a few interesting benchmarks:

    Parallel streaming reads:
    Here we see how well the scheduler can cope with multiple processes reading
    multiple large files. We read ten well laid out 100 megabyte files in
    parallel (ten readers):

    for i in $(seq 0 9)
    do
        time cat 100-meg-file-$i > /dev/null &
    done

    2.4.21-pre4:

    0.00s user 0.18s system 2% cpu 6.115 total
    0.02s user 0.22s system 1% cpu 14.312 total
    ...(up to)
    0.01s user 0.16s system 0% cpu 37.007 total

    2.5.61+hacks:

    0.01s user 0.16s system 0% cpu 2:12.00 total
    0.01s user 0.15s system 0% cpu 2:12.12 total
    ...(up to)
    0.01s user 0.19s system 0% cpu 2:13.51 total

    2.5.61+CFQ:

    0.01s user 0.16s system 0% cpu 50.778 total
    0.01s user 0.16s system 0% cpu 51.067 total
    ...(up to)
    0.01s user 0.18s system 0% cpu 1:32.34 total

    2.5.61+AS:

    0.01s user 0.17s system 0% cpu 27.995 total
    0.01s user 0.18s system 0% cpu 30.550 total
    ...(up to)
    0.01s user 0.16s system 0% cpu 34.832 total

    streaming write and interactivity:
    It peeves me that if a machine is writing heavily, it takes *ages* to get a
    login prompt.

    Here we start a large streaming write, wait for that to reach steady state
    and then see how long it takes to pop up an xterm from the machine under
    test with

    time ssh testbox xterm -e true

    there is quite a lot of variability here.

    2.4.21-4: 62 seconds
    2.5.61+hacks: 14 seconds
    2.5.61+CFQ: 11 seconds
    2.5.61+AS: 12 seconds

    Streaming reads and interactivity:
    Similarly, start a large streaming read on the test box and see how long it
    then takes to pop up an x client running on that box with

    time ssh testbox xterm -e true

    2.4.21-4: 45 seconds
    2.5.61+hacks: 5 seconds
    2.5.61+CFQ: 8 seconds
    2.5.61+AS: 9 seconds

    copy many small files:
    This test is very approximately the "busy web server" workload. We set up a
    number of processes each of which are reading many small files from different
    parts of the disk.

    Set up six separate copies of the 2.4.19 kernel tree, and then run, in
    parallel, six processes which are reading them:

    for i in 1 2 3 4 5 6
    do
        time (find kernel-tree-$i -type f | xargs cat > /dev/null ) &
    done

    With this test we have six read requests in the queue all the time. It's
    what the anticipatory scheduler was designed for.

    2.4.21-pre4:
    6m57.537s
    ...(up to)
    6m57.916s

    2.5.61+hacks:
    3m40.188s
    ...(up to)
    3m56.791s

    2.5.61+CFQ:
    5m15.932s
    ...(others)
    5m50.602s

    2.5.61+AS:
    0m44.573s
    ...(up to)
    0m53.087s

    This was a little unfair to 2.4 because three of the trees were laid out by
    the pre-Orlov ext2. So I reran the test with 2.4.21-pre4 when all six trees
    were laid out by 2.5's Orlov allocator:

    6m12.767s
    ...(up to)
    6m13.085s

    Not much difference there, although Orlov is worth a 4x speedup in this test
    when there is only a single reader (or multiple readers + anticipatory
    scheduler).
  • by AxelTorvalds ( 544851 ) on Wednesday February 26, 2003 @10:40PM (#5392430)
    They should really call it 3.0...

    When I first started hacking on Linux, I was working with a seasoned Linux kernel hacker whom my company hired as a consultant. He helped us with some I/O issues and such, did some other tweaks, and gave us a ton of inspiration to go get after it ourselves. (You'd be amazed at how many people are afraid to just start making changes to kernel code.) He is a wickedly cool individual, and as someone who's had a lot of schooling and experience, it was one of the best learning experiences I can remember.

    The first thing I started dorking with after that experience was the scheduler because I, like all other hackers, know how to schedule stuff. At the time (early 2.x), the scheduler was also a fairly easy-to-digest piece of code that could affect the system in great ways.

    Well, all my stuff got bit-bucketed. I called up our consultant guy, who was my friend by now: "What's the deal? Linus doesn't like my stuff. How do you mail him stuff?" And his answer was that pretty much everybody wants to tweak the scheduler; everybody sends stuff in. Linus is sage in his wisdom: schedulers are freaking hard because there is always a pedantic worst case that sucks and actually shows up in the real world. Linus has always done fairly simple things that aren't best but certainly aren't worst. So 2.0 had pretty straight round robin. In 2.2 and 2.4 they started to add queuing schedulers with niceness. In 2.5 we're going to get a pretty killer scheduler that has taken a ton of effort to tweak, and there are still discussions about exposing parameters to the user via /proc or something, because you can find cases where it doesn't perform as well.

    Now this I/O scheduler is opening up a whole new can of worms: it's a new chunk of code called "scheduler", and all hackers know scheduling. In the past it has been fairly simple. It should be fun to watch, and the kernel is going to kick mucho ass in the end. There will be a lot of talk and debate about this stuff. It's also distilled down to the trusted set that Linus will let play with things called "scheduler".

  • WOW CS 162 Lecture (Score:3, Interesting)

    by vandel405 ( 609163 ) on Wednesday February 26, 2003 @11:54PM (#5392884) Homepage Journal
    I'm currently enrolled in CS 162 (OSs) at UC Berkeley; today's lecture was on different flavors of schedulers, and this scheduler was mentioned briefly. For more theoretical info see http://webcast.berkeley.edu/courses/archive.html?prog=116&group=52 for a webcast of the lecture or http://inst.eecs.berkeley.edu/~cs162/Lectures/L11.pdf for a pdf of the lecture notes.

  • File I/O primitives (Score:4, Informative)

    by anonymous cupboard ( 446159 ) on Wednesday February 26, 2003 @11:56PM (#5392895)
    It seems that the main problem is that the file I/O primitives in Linux are a little too primitive. On some operating systems, there is another layer sitting between the file system and the user which provides record and block management. If I open a file for sequential read or write then the record/block manager knows that it could use read-ahead or write-behind.

    The problem is that such I/O layers need to be implemented at least partially outside user space, so that simultaneous access to a file can be coordinated between processes. Also, to get the best use, everything should use it.

    • by Wesley Felter ( 138342 ) <wesley@felter.org> on Thursday February 27, 2003 @12:12AM (#5392990) Homepage
      On Linux the kernel detects sequential I/O and does the readahead automatically. So it's not really clear what Linux is missing.
      • Yes, but it doesn't seem to be getting it right, hence the problem requiring adaptive I/O scheduling. Certainly ext2 should automatically determine that, for example, our HTML web page read by Apache is sequential and that the data should be read-ahead buffered. However, this doesn't seem to be helping.

        At the same time, I'm not quite sure where a solution like adaptive I/O scheduling would help on a real system because, in our example of an Apache server, whilst it is stalled writing a web page to you over TCP, it can be reading another page off disk for me. Under a true server load there is little think time, because the next request comes right in.

        • Yes, but it doesn't seem to be getting it right, hence the problem requiring adaptive I/O scheduling. Certainly ext2 should automatically determine that, for example, our HTML web page read by Apache is sequential and that the data should be read-ahead buffered. However, this doesn't seem to be helping.

          You're misunderstanding the issue here. Linux does read-ahead on long linear reads just fine. That's not the issue. The issue is primarily not a server issue, in fact. It's primarily a desktop/workstation issue.

          There are often multiple processes on any machine wanting to do I/O simultaneously. Since the hard drive can only process one request at a time, the processes are competing for a limited resource (hard drive throughput). Thus, their access has to be scheduled properly to ensure the most efficient use of the hard drive.

          The reason this is a big win is because of a system's apparent speed, or interactiveness. If you've ever been processing audio at a workstation and tried to open a calculator or an xterm or something like that while streaming media is being written to disk, you would know that it sometimes appears as if the computer has practically locked up, except for the sound of the hard drive. If reads can be pushed to the head of the queue, this helps immensely, for several reasons.

          • Interactive processes are dependent on reads, not writes. If a video game or audio processing application or 3d modeling program or web server or almost anything else needs to read data from the disk, it is almost always the case that no more work will be done by that program until it has the data it is looking for. If the read is being blocked by a large sequential write, then it will appear as though the reading process has locked up until the write is done and the read can be serviced.
          • Reads are often small and non-contiguous. In the case of a web server serving a page, it has to read the text of the page, then process additional requests for all of the graphic content included in the body of that page. If it's 2 am and your backup process is tarring up all the changed files in preparation to burning CDs or tape backup, you don't want the person viewing your site to have to wait an extraordinary amount of time for the backup to finish, do you?

          There are a couple more examples I could give, but I'm sure you get the picture. This doesn't have anything to do with read-ahead. That works just fine :)

  • That's just great. If this is gonna be anything like the db benchmarking boom a year ago on lkml for the new VM, which resulted in some increase in db benches and fscked up interactivity completely (by being a really happy page thrower), then we're looking at a real nice desktop scheduler... :)

    Seriously though, Morton's a great chap and one of the few who has really worked toward a kernel that's also usable on the desktop (low latency patches, lock breaking patches, and the list would go on forever).

    I can barely wait for 61-AS to compile...
  • The article mentions multiple simultaneous writes and reads... Doing two tasks at once is much more expensive than doing them sequentially.

    Why not use the download manager programs... for all file transferring?

    My priorities:
    1. user interface responds effectively in realtime.
    2. CD writes don't fail
    3. Video doesn't skip
    4. files transfer quickly.

    I would actually like the ability to switch the mode of the file scheduler.

    If I am not doing 2 or 3, then why not switch to something that makes 4 happen?

    I saw something ridiculous, like a 10 second wait for a login prompt??!?!

    The system should have that all ready ahead of time, and it should take no more than .5 second to get a login screen.

    --I don't care about spelling enough to spend the vast quantity of time to get this to the spell checker.
  • The static copy of the story [kerneltrap.org] doesn't indicate whether the I/O scheduler blocks on all filesystems or separately on each one. I.e., if I have a read on filesystem A, would another write on filesystem B be affected?
  • by master_p ( 608214 ) on Thursday February 27, 2003 @12:50PM (#5396860)

    Here [rice.edu] is the explanation of what anticipatory scheduling is. From what I have understood (please correct me if I am wrong, I am not a kernel hacker), 'anticipatory scheduling' means the following:

    The I/O subsystem (the part of the operating system that reads/writes to/from the hard disk) waits a little longer before servicing an I/O request from an application other than the current one; if the current application issues another I/O request while the I/O subsystem is waiting, the overall system throughput is higher because the hard disk's head moves less.
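That description can be turned into a toy discrete-event model. Everything below is invented for illustration (block numbers, arrival times, one tick of service per request, seek cost measured as block distance) and is not how the kernel code works - but it shows why a short wait can pay for itself when process A reads nearby blocks with small pauses between them while process B wants a block far across the disk.

```python
def simulate(trace, head, anticipatory, window=3):
    """trace: (arrival, pid, block) tuples sorted by arrival. Returns total
    head movement; seek cost = block distance, one tick per request."""
    pending, i, t, moved, last_pid = [], 0, 0, 0, None
    while i < len(trace) or pending:
        while i < len(trace) and trace[i][0] <= t:   # admit arrivals
            pending.append(trace[i]); i += 1
        if not pending:
            t = trace[i][0]                          # idle until next arrival
            continue
        mine = [r for r in pending if r[1] == last_pid]
        if anticipatory and last_pid is not None and not mine:
            # Anticipate: will the process we just served ask again soon?
            soon = [r for r in trace[i:]
                    if r[1] == last_pid and r[0] <= t + window]
            if soon:
                t = soon[0][0]                       # hold the head still
                continue
        candidates = mine if (anticipatory and mine) else pending
        req = min(candidates, key=lambda r: abs(r[2] - head))
        pending.remove(req)
        moved += abs(req[2] - head)
        head, last_pid = req[2], req[1]
        t += 1                                       # service takes one tick
    return moved

# Process A reads nearby blocks with short think time between reads;
# process B wants a single block far away. All numbers are made up.
trace = [(0, "A", 100), (0, "B", 5000),
         (3, "A", 102), (6, "A", 104), (9, "A", 106)]
```

On this trace the work-conserving scheduler moves the head 9802 blocks, because it seeks across the disk and back between each of A's reads, while the anticipatory one moves it only 4900: it idles briefly, collects A's nearby reads, and makes the long seek to B just once.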
