Linux Gets O(1) SMP Patch As Late Christmas Gift
bodin writes: "Now that the New Year's parties are over, things are getting boring again. For those who want to see and perhaps even try something more complex, Ingo Molnar is announcing this patch, a pretty radical rewrite of the Linux scheduler. This is big stuff!"
Oxymoron (Score:3, Funny)
I liked O Xymoron's enthusiastic response [iu.edu]:
On Fri, 4 Jan 2002, Ingo Molnar wrote:
> this is more than 6 million context switches per second!
Everyone knows scheduling is boring.
:)
Re:Oxymoron (Score:3, Funny)
O(1) == constant time (Score:5, Informative)
Big-O notation is a way to describe how the (asymptotic) execution time of an algorithm depends on the size of its input. For instance, an algorithm that loops over n values is said to have asymptotic execution time O(n): the time is proportional to the number of times the loop is executed.
Similarly, an algorithm that runs in constant time, i.e., one that takes equally long for 10 values and for 1,000,000 values, is said to be O(1): the execution time is proportional to 1.
For the old Linux scheduler, picking the next process to run is O(p), where p is the number of currently runnable processes. The new scheduler does it in O(1) time.
This means that even though the old scheduler may be fast when few processes are running, it takes longer and longer to switch as the number of active processes grows. The new scheduler switches just as fast for 2 processes as for 200. So even if the new scheduler were slightly slower when there are only a few processes, it wins as the number of runnable processes increases.
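As a trivial illustration of the notation (a toy C sketch, not scheduler code; the function names are made up): the first function below does work proportional to n, the second does the same amount of work no matter how large the array is.

    /* O(n): the loop body runs once per element, so time grows with n. */
    int contains(const int *a, int n, int key)
    {
        for (int i = 0; i < n; i++)
            if (a[i] == key)
                return 1;
        return 0;
    }

    /* O(1): a single index operation costs the same for 10 elements or 1,000,000. */
    int first_element(const int *a)
    {
        return a[0];
    }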
What I would like to know. (Score:1, Interesting)
Re:What I would like to know. (Score:2, Insightful)
Re:What I would like to know. (Score:2)
It's also possible to pool processes, but pooling threads is easier and more efficient, because you don't need interprocess communication or special shared-memory segments. Instead, you just have the threads all block on the same job queue.
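For example, here's a minimal pthreads sketch of the "all threads block on the same job queue" idea (the struct and function names are invented for illustration; a real pool would keep FIFO order and handle shutdown):

    #include <pthread.h>
    #include <stdlib.h>

    struct job { void (*run)(void *); void *arg; struct job *next; };

    static struct job *jobs;                      /* the one shared job list */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

    void submit(struct job *j)                    /* producer side */
    {
        pthread_mutex_lock(&lock);
        j->next = jobs;                           /* push at the head for brevity */
        jobs = j;
        pthread_mutex_unlock(&lock);
        pthread_cond_signal(&nonempty);
    }

    static void *worker(void *unused)             /* every pool thread runs this */
    {
        (void)unused;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (jobs == NULL)                  /* block until work shows up */
                pthread_cond_wait(&nonempty, &lock);
            struct job *j = jobs;
            jobs = j->next;
            pthread_mutex_unlock(&lock);
            j->run(j->arg);                       /* run the job outside the lock */
            free(j);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[4];
        for (int i = 0; i < 4; i++)               /* the "pool": identical workers */
            pthread_create(&tid[i], NULL, worker, NULL);
        /* ... submit() malloc'd jobs from here or from other threads ... */
        pthread_join(tid[0], NULL);               /* workers loop forever in this sketch */
        return 0;
    }

No pipes, sockets or shmget() needed: the queue lives in ordinary memory that every thread already sees.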
Re:What I would like to know. (Score:2, Informative)
The real reason threads can be faster than processes, all other things being equal, is context-switch time. Switching between threads that share their page tables is just a matter of restoring the registers; switching between processes also requires flushing the TLB.
Re:What I would like to know. (Score:5, Informative)
But when threads modify memory on SMP, they have to ensure cache coherency between CPUs. Processes don't have this restriction, which is why processes can often be faster than threads. It all depends on your usage patterns and OS. On, e.g., Solaris or Win32, threads are far faster at context switching than processes (mainly because processes are so heavyweight on those systems; Win32 also doesn't offer an API for implementing multiprocess programs efficiently). On identical hardware, Linux's process switches are faster than Solaris or Win32 thread switches, and Linux thread switches aren't much faster than its process switches (because process switching is implemented efficiently).
So on Linux, the fact that memory isn't shared between processes (hence cache coherency isn't the same issue) often makes processes a substantial performance _win_.
This is aside from the fact that threads are extremely difficult to program properly--there's a reason that OSes spent all that time implementing protected memory (remember all the hoopla about that?) and threads throw that concept out the window, plus tend to lure the programmer into architectures that need nasty locking and synchronization.
That said, for some applications threads are the right answer. But using multiple processes, possibly with shared memory for a limited set of data, is generally a more maintainable solution that's easier to architect and implement. It certainly makes it explicit which data structures are meant to be shared, which a multithreaded solution does not.
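For what it's worth, here's a minimal sketch of that "small shared segment, everything else private" approach using plain mmap() and fork() (the struct and its one field are made up for illustration; concurrent writers would need atomic operations or a lock living in the shared page):

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    struct shared_stats { long requests; };       /* the one structure we agree to share */

    int main(void)
    {
        /* Map one small shared region; heaps and stacks stay private per process. */
        struct shared_stats *st = mmap(NULL, sizeof *st,
                                       PROT_READ | PROT_WRITE,
                                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (st == MAP_FAILED)
            return 1;

        if (fork() == 0) {                        /* child: only *st is visible to the parent */
            st->requests++;
            _exit(0);
        }
        wait(NULL);
        printf("requests = %ld\n", st->requests); /* prints 1 */
        return 0;
    }

Everything that isn't in that one mapping is explicitly private, which is exactly the "you can see which data is meant to be shared" property described above.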
Sumner
Re:What I would like to know. (Score:1)
Cache coherency is maintained in a UMA SMP (i.e., all 2- and 4-way x86 platforms) at all times, and it is done by the hardware, not the software, with a few minor exceptions like I/O and the page tables. And the only time you need to modify the page tables in a way that requires updating the other CPUs is when you are mallocing or freeing large chunks of memory, which is an OS call (read: context switch) anyway.
Also keep in mind that x86 Linux process switches are fairly cheap to begin with, because there isn't much x86 state to save compared to other ISAs that carry a lot more architectural state per process. A lot of effort has also gone into making context switches as cheap as possible, since they are a large contributor to OS performance, so process context switches are highly optimized.
What I would consider more interesting is how you schedule on an SMP with a multithreaded processor design (like Intel's upcoming Hyper-Threading Xeon, or some of the MIT research CPUs) where threads 'borrow' resources from each other. It's also called Shared Resource Multiprocessing (ShReMP). And while the virtual CPUs are symmetric, they differ in "effective" computing power based on what is (or isn't) running on the hardware next to them.
  - Mike http://pooh.asric.com/~mikem/index.php [asric.com]
Re:What I would like to know. (Score:2)
Thus, I still don't see any reason that processes are inherently more efficient than threads. Unless you think you can settle the shared-memory-versus-message-passing debate once and for all. :-)
Re:What I would like to know. (Score:2)
Right, that's why I said that for some things threads are better. But a lot of people pick threads over processes for reasons other than "I need to share all my memory". Which is bad. Choosing to use threads should be a conscious decision, made only when you've determined that e.g. processes with a small shared memory segment, message passing, or other approaches are worse.
Sumner
Re:What I would like to know. (Score:2, Insightful)
Apache would be MUCH faster if the Apache developers would give up on the outdated and inefficient method of using a separate process per connection and went with a non-blocking I/O model like Boa [boa.org], thttpd [acme.com] or Zeus [zeus.com]. A non-blocking I/O model is much better from many standpoints: it places the I/O handling back in the kernel where it belongs, reduces memory usage and drastically increases speed.
I've talked to a lot of people about this issue and spent a lot of time reading the Apache developer mailing list. The main reasons that come up are that Apache is meant to be correct first and fast second, and that a non-blocking model makes the code more complicated and therefore harder for people to extend. The first reason is invalid, because it's quite possible to make a web server both correct and fast; the three web servers I mentioned prove this, and there are several more non-blocking web servers out there. As for the second, the I/O is a very small part: given a proper design, everything else can easily work with that model. Extending it is not made any more difficult, because one would very rarely, if ever, modify the base I/O model; you have normal HTTP and HTTPS, and that's it. Anyone extending that critical portion of the web server should be able to understand the relatively simple concept of non-blocking I/O, especially as it is explained very well in a variety of sources.
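For anyone who hasn't seen one, the heart of such a server is just an event loop. Here's a heavily trimmed select()-based sketch (this is not Boa's or thttpd's code; error handling, the HTTP parsing and the non-blocking writes are left as comments, and a production server would use poll() or a platform-specific mechanism rather than select()):

    #include <arpa/inet.h>
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port   = htons(8080),
                                    .sin_addr.s_addr = htonl(INADDR_ANY) };
        bind(lfd, (struct sockaddr *)&addr, sizeof addr);
        listen(lfd, 128);
        fcntl(lfd, F_SETFL, O_NONBLOCK);

        fd_set conns;
        FD_ZERO(&conns);
        FD_SET(lfd, &conns);
        int maxfd = lfd;

        for (;;) {                                /* one process serving every connection */
            fd_set ready = conns;
            select(maxfd + 1, &ready, NULL, NULL, NULL);

            if (FD_ISSET(lfd, &ready)) {          /* new connection */
                int cfd = accept(lfd, NULL, NULL);
                if (cfd >= 0) {
                    fcntl(cfd, F_SETFL, O_NONBLOCK);
                    FD_SET(cfd, &conns);
                    if (cfd > maxfd) maxfd = cfd;
                }
            }
            for (int fd = 0; fd <= maxfd; fd++) { /* readable clients */
                if (fd == lfd || !FD_ISSET(fd, &ready))
                    continue;
                char buf[4096];
                ssize_t n = read(fd, buf, sizeof buf);
                if (n <= 0) { close(fd); FD_CLR(fd, &conns); continue; }
                /* ...parse the request, queue a non-blocking write of the response... */
            }
        }
    }

One process, no per-connection fork, and the kernel does all the waiting inside select().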
Scheduling details: switching process != O(p) (Score:1)
I haven't really worked with Linux, but despite my technical aversions toward this OS, I sure hope that switching processes is not linear in time.
An OS maintains a list of running tasks (processes), and each time a task has used up its timeslice (which is around a tenth of a second) or voluntarily given up the processor (because it has no more work to do), the next task is taken from the list (and the previously running task is appended to the end, unless it voluntarily gave up the processor). This is "task switching", and the principle is often referred to as round-robin. Most people should be able to implement it as a constant-time operation.
Often one would use a priority queue instead of a plain list: the tasks each have a priority, and instead of adding tasks to the end of the list, they are inserted according to their priority (this can still be done in constant time, e.g. by having an array of lists where the index into the array is the priority).
Using priorities, you ensure that tasks with high priority always get the processor when they need it (e.g. a mouse driver) and that tasks with low priority never interfere with the user's work (like a batch job running in the background).
The above is often called static scheduling, because the priorities are set once and never touched. This is often not desirable, though, because it's a tough job adjusting all those priorities, and perhaps not even possible on a multi-user system.
So what do we do? Well... this is where black magic comes into play. One simple solution is to boost the priority of all the tasks that don't use their full timeslice (so they'll have a higher priority next time they run, because we assume they won't run for long).
Many heuristics exist to try to make dynamic scheduling (priority rearranging) fair. Often these heuristics only rearrange priorities a few times per minute, based on statistics about how the tasks have behaved over the last n seconds.
Since this rebalancing involves all the running tasks, it is by nature linear in time (though of course some tricks exist). But as I said, it runs only a few times per minute, and its result affects the overall fairness/responsiveness of the machine. So time is probably better spent crafting a better heuristic than making it O(1) instead of O(n).
Re:Scheduling details: switching process != O(p) (Score:2, Informative)
Ehm... most of the time there are no processes to run; they are all waiting for a message/interrupt. When one arrives, they are moved to the list (priority queue) of running/ready processes.
The scheduler always just picks the next process in this list of running/ready processes, which is O(1). At no time does it need to look at the idle processes to pick one to run, since they don't need to run... this is not Microsoft's cooperative multitasking...
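To make that concrete, here's a rough sketch of a constant-time pick in the spirit of (but not copied from) Ingo's design: one list per priority plus a bitmap of non-empty lists, so the pick never depends on how many tasks are ready. The constants and names below are illustrative only.

    #include <stddef.h>

    #define NPRIO  140                             /* illustrative number of priority levels */
    #define NWORDS ((NPRIO + 63) / 64)

    struct task { struct task *next; int prio; };

    struct runqueue {
        unsigned long long bitmap[NWORDS];         /* bit set => queue[prio] is non-empty */
        struct task       *queue[NPRIO];           /* one list per priority */
    };

    static void mark(struct runqueue *rq, int prio, int on)
    {
        if (on)  rq->bitmap[prio / 64] |=  (1ULL << (prio % 64));
        else     rq->bitmap[prio / 64] &= ~(1ULL << (prio % 64));
    }

    void enqueue(struct runqueue *rq, struct task *t)
    {
        t->next = rq->queue[t->prio];              /* prepend for brevity; FIFO in real life */
        rq->queue[t->prio] = t;
        mark(rq, t->prio, 1);
    }

    /* Pick the highest-priority ready task: a handful of word tests, no matter
     * how many tasks are runnable. __builtin_ctzll is a GCC builtin; the kernel
     * uses its own find-first-bit helpers. */
    struct task *pick_next(struct runqueue *rq)
    {
        for (int w = 0; w < NWORDS; w++)           /* at most NWORDS (= 3) iterations */
            if (rq->bitmap[w]) {
                int prio = w * 64 + __builtin_ctzll(rq->bitmap[w]);
                struct task *t = rq->queue[prio];
                rq->queue[prio] = t->next;         /* dequeue the head */
                if (!rq->queue[prio])
                    mark(rq, prio, 0);
                return t;
            }
        return NULL;                               /* nothing runnable */
    }

The real patch of course does far more (per-CPU runqueues, timeslice accounting, locking), but the constant-time pick is the essence.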
I hope Linus listens (and I'm not making a fool.. (Score:3, Insightful)
I don't know if Linus still likes the "small patch" idea for the 2.x kernels, but this is pretty amazing. I am no kernel coder, but some of these tests showed a 600% improvement and (what seemed to me to be) impressive scaling. To all the kernel gurus out there: what are the chances this will make it into the kernel (2.5)?
Re:I hope Linus listens (and I'm not making a fool (Score:3, Insightful)
Will this help... (Score:3, Interesting)
Any information or direction about this would be very helpful.
Re:Will this help... (Score:2)
I'm looking forward to the day it appears in kernel 2.6.
Re:Will this help... (not really) (Score:1)
The whole point of this patch is that it makes the switch time linear O(1) as opposed to exponential O(p). Most people don't run machines with enough separate processes (I only have 80, and I'm in a Java dev session now) to see a difference.
What it does mean is that the big Oracle/IBM database systems (those are the people who wanted the improvement) running on 8- or 16-CPU units with tens of thousands of database transactions will be the ones to see the benefit.
-matthew
Re:Will this help... (not really) (Score:1)
No, it makes it constant, instead of linear in the number of processes that are ready to run. Processes that are waiting for I/O (or otherwise sleeping) are not on this queue, and thus weren't slowing you down even with the old scheduler. Your 80 processes are _not_ all running.
Besides the algorithmic complexity improvement, Ingo says he's improved CPU affinity, and made fork()+exec() more efficient by running the child on the same processor.
But as you say, the main improvement will be on large multiprocessor systems that are worked to the limit (so they have lots of _runnable_ processes). You won't, of course, see a 600% improvement in Oracle performance or anything like that; that was only in a synthetic benchmark. The 600% improvement is in something that takes up a fairly small (but not insignificant) part of a DB server's time.
Another thing this should help is people who run distributed.net or SETI clients. They run at nice 19, and if I read Ingo's message correctly, such processes will be treated as SCHED_IDLE: that is, they will get _no_ CPU time while you're recompiling something or compressing a Vorbis audio file, instead of the roughly 7% they get now when a nice=0 process wants all the CPU it can get.
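(For reference, getting into that lowest bucket is nothing exotic; a cruncher just lowers its own priority with the standard setpriority() call, roughly like this sketch:)

    #include <sys/resource.h>

    int main(void)
    {
        /* Ask for the weakest priority (nice 19). Under Ingo's patch, as I read it,
         * such a task only runs when nothing at a better priority wants the CPU. */
        setpriority(PRIO_PROCESS, 0, 19);

        for (;;) {
            /* ...crunch keys / work units... */
        }
    }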
Well, I'm impressed. I like how well he presented his work, it was pretty easy to understand.
Warm Fuzzy (Score:5, Interesting)
I have a BP6 dual-celeron Debian machine which already gives me the benefits of countless hours of volunteer time, including the SMP kernel and ReiserFS, along with dozens of free development tools. Now I see this guy working like a dog to tune the heck out of the scheduler for SMP machines, and I know that when I eventually run the 2.6 kernel, I'm again going to reap the benefits of his work.
It's almost enough to make me learn to hack the Linux kernel out of a sense of obligation.
Re:Warm Fuzzy (Score:2)
Re:Warm Fuzzy (Score:1, Insightful)
Why do we want everyone's grandmother to be using Linux? To do that, we'd basically need to design a system like MacOS (pre-X) -- very simple UI, lots of big-name applications (Office, Photoshop, etc), and good marketing. All 3 of these are completely unappealing to me, and are not the strengths of the open source community. Why not just design the best OS we can, the OS we want to use? For me, that means a functional GUI (Gnome or KDE more than suffice), but an absolutely rocking server OS + software development environment.
I'd much rather see Linux replace Solaris/AIX/HPUX/Win2k. As for desktop dominance, who cares?
Re:Warm Fuzzy (Score:2, Insightful)
O(1) and Grandma (Score:2, Funny)
My grandma refuses to use Linux because its scheduler is O(n) and its "goodness" function can blow her L1 cache. If Linus adopts Ingo's O(1) scheduler, she just might reconsider.
Software in the Public Interest Donations (Score:2, Insightful)
Bob-
Playing with patch....... (Score:5, Informative)
[redhat.com]
http://people.redhat.com/mingo/O(1)-scheduler/
I've created diffs between 2.5.1 and 2.5.2pre8 with the O(1) scheduler, and between 2.5.2pre8 and 2.5.2pre8 with the O(1) scheduler.
2.5.2pre8 actually patches pretty well with the original scheduler patch (drivers/char/serial.c.rej can be ignored, and you have to make a few changes to kernel/sched.c in order for it to patch correctly), but because it took me at least ten minutes of fiddling with sched.c I've decided to make a diff for 2.5.2pre8.
No guarantee that either of these works, though
# diff -ru linux-2.5.1 linux-2.5.2pre8schedO1 | grep -v '^Only in ' | gzip -f > /home/web/patch-2.5.1-2.5.2pre8schedO1.patch.gz
[markbach.com]
http://os.markbach.com:8080/patch-2.5.1-2.5.2pr
(396,961 bytes)
# diff -ru linux-orig linux | grep -v '^Only in ' | gzip -f > /home/web/patch-2.5.2pre8-2.5.2pre8schedO1.patch.gz
[markbach.com]
http://os.markbach.com:8080/patch-2.5.pre8-2.5.
(31,124 bytes)
Good luck to anyone who tries to use these
And no, I didn't patch in the kdev_t stuff from people/aeb on kernel.org because there's lots of kdev_t stuff in the Changelog for pre7 and pre8, so I decided to assume (yes, I know, assuming makes an ass out of u and me) I didn't need it... of course, when the system crashes after five seconds, maybe I'll change my mind
And if, for some odd reason, you can't connect on port 8080, just connect on port 80 and let's hope you're not blocked by @home's or my firewall.
Damn, I'm using too many smileys
Ingo Molnar (Score:3, Funny)
Am I the only person that thinks that every time I see his name?
SMP Quality and Speed is a very GoodIdea(tm) (Score:2)
Clustering and SMP are different answers to different questions, but they both lead to multiplying your firepower without multiplying your cost. Add that to the issue of purchase cost for Win and Lin, and the differences become even more severe. Every effective Linux improvement multiplies Linux effectiveness overall.
As an individual user, I have little use for SMP. Games run just fine on one CPU. So for now, I'm just a fan. We shall see what happens.
BTW, is it just me, or is the fact that these two developers, with different ideas of "right", are working together to make something even better lighting up other people's days too?
Bob-
Learning scheduler (Score:2, Interesting)
Each time a process became runnable, the NN could assign a priority, and the process would be placed on a priority queue (this isn't O(1), but it's better than O(p)). It seems to me like this would work; it would slow down wake_up_process() (I can't remember the exact name; I haven't looked at the scheduler since October) a bit, but the payoff as the NN got trained should make up for it...
(If this is a terrible idea, tell me so before you moderate me down.)
Re:Learning scheduler (Score:3, Interesting)
But the real reason it's a terrible idea is because of the cache-line impact. At this level of the system, keeping the CPU cache intact and full of process data is critical to good performance. Otherwise when you switch tasks you wind up having to go out to main memory, which is sloooow. So compact code is more important than fast code in a lot of these situations, and a neural net like you describe is going to be enormous relative to the approaches of Davide and Mingo.
And Linus rightly pointed out that, as simple as the current scheduler is, it's basically the only part of the kernel that's still pretty much the same code it was 8 years ago, mostly because it works very well for the vast majority of cases. That's not to say that Mingo, Davide, Rob Love, et al. aren't doing the kernel a huge service, and on 4-way and bigger systems it's more of a concern, but e.g. the VM and I/O subsystems are areas that could reap far greater performance rewards for the most common cases (and the VM is getting a ton of eyes, while the I/O subsystem has been the focus of most of the early 2.5 work, thanks to Jens Axboe and others).
Sumner Hayes
Linus has included it in 2.5 (Score:4, Interesting)
Cool!