Writing High-Availability Services?
bigattichouse asks: "I have a project coming up that will require some serious load-handling capability accepting socket connections. While I have a design that can be distributed over multiple servers (using queued reads/writes to the DB) and is as low-overhead as I can make it, I am concerned about falling into common problems that may have been overcome in many other projects. What strategies (threading, forks, etc.) give the best capability? What common pitfalls should I avoid?"
NIH? (Score:2, Insightful)
Why do you need to reinvent the wheel? There are plenty of other high-performance web/application servers that connect to databases.
Re:NIH? (Score:1)
I re-read the article several times, and not once did I see the original poster say s/he was writing a web server. Did I miss something?
Re:NIH? (Score:1)
Instead of worrying about connection pools, socket protocols, and the like, you could do something 'nutty' like solving business problems. ...Just an idea.
One common pitfall ... (Score:3, Informative)
All too often I've read the argument: "Oh, performance isn't good, so I'll parallelize it". That doesn't hold much weight, as not all things are efficiently parallelizable.
So, before anyone suggests that you start pthread_create()ing threads everywhere, give some serious thought to maxing out the serial performance first.
When to parallelize (Score:2)
Parallelizing it WAS the answer, and it ran like a dream from then onwards: arrivals were better synchronized, end-to-end time was much lower, and the CPU was better utilized.
Sam
Re:One common pitfall ... (Score:3, Informative)
I wouldn't say this is entirely good advice.
When you carefully optimize your code to achieve maximum serial performance, you get just that: maximum serial performance.
The algorithm that achieves maximal parallel performance, in my experience, is often quite different. What you really need to do is carefully plan your code for maximum benefit given the resources you have available.
If you want to design parallel code, start with that assumption, not from the standpoint of parallelizing serial code.
Beware slow connections (Score:5, Informative)
Fat, dumb, and happy, we figured that the real world couldn't hammer us as hard as we could internally. Wrong! Slow connections require holding connection resources much longer than on an internal network, where a response can be created and disposed of almost instantly.
Maintaining all those simultaneous connections depleted our resources and the app went into full meltdown mere seconds after being released on the public servers.
We beat a hasty retreat to the old code, licked our wounds, and learned a valuable lesson.
The C10K problem (Score:5, Informative)
You probably know about this paper already, but just in case you don't:
The paper deals with web servers handling ten thousand simultaneous TCP connections. Most of it is not specific to HTTP or web problems, though; it covers more general socket I/O issues, particularly the various ways of dealing with readiness/error notifications (e.g. select(), poll(), asynchronous signals), and it also discusses other kinds of limits (threads, processes, descriptors).
It is quite enlightening. It may be a bit outdated (I remember reading it around the time Netcraft was making all that noise about Windows being faster than Linux as a web server), but I'm sure most of it is still very relevant.
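If you haven't run into that style of code before, here is a rough sketch of a poll()-based readiness loop. It is not from the paper; the port number (9000) is arbitrary and most error handling is omitted.

/* Sketch of a poll()-based readiness loop in the C10K spirit.
 * Single-threaded: one listening socket plus up to MAX_FDS-1 clients. */
#include <poll.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <stdio.h>

#define MAX_FDS 1024

int main(void)
{
    struct pollfd fds[MAX_FDS];
    char buf[4096];
    int nfds = 1;

    /* minimal listener setup; error checks omitted for brevity */
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, SOMAXCONN);

    fds[0].fd = lfd;
    fds[0].events = POLLIN;

    for (;;) {
        if (poll(fds, nfds, -1) < 0) {          /* block until something is ready */
            perror("poll");
            break;
        }
        if ((fds[0].revents & POLLIN) && nfds < MAX_FDS) {
            int c = accept(lfd, NULL, NULL);    /* new connection */
            if (c >= 0) {
                fds[nfds].fd = c;
                fds[nfds].events = POLLIN;
                nfds++;
            }
        }
        for (int i = 1; i < nfds; i++) {        /* note: scans every client fd */
            if (fds[i].revents & (POLLIN | POLLERR | POLLHUP)) {
                ssize_t n = read(fds[i].fd, buf, sizeof buf);
                if (n <= 0) {                   /* EOF or error: drop the slot */
                    close(fds[i].fd);
                    fds[i--] = fds[--nfds];
                } else {
                    /* hand buf[0..n) to the application here */
                }
            }
        }
    }
    return 0;
}

Note how the inner loop touches every connection on every pass; that linear scan is exactly the cost the paper (and the later epoll work) is concerned with.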
What kind of system? (Score:3, Interesting)
Are you writing a web app where you have to hold session data across TCP connections?
Are you writing an app that will have sustained connections (more than one request per connection)?
These different situations require different strategies.
Are DB reads or writes more common? How big is the difference?
What kind of system is your target? Can you trade memory for speed (caching)?
Take a look at SEDA http://seda.sourceforge.net. While you probably won't be rewriting your app to use this framework, many of the strategies may be useful and applicable to your app.
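To give a flavour of the SEDA idea without the framework (the real thing is Java-based): each "stage" is basically a bounded queue feeding a small worker pool, and a full queue pushes back on whatever feeds it. The sketch below is an illustration in C with pthreads; names like stage_push are invented for the example.

#include <pthread.h>
#include <stdio.h>

#define QUEUE_CAP 128
#define WORKERS   4

/* One SEDA-style stage: a bounded ring buffer of requests plus a worker pool. */
struct stage {
    int buf[QUEUE_CAP];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t not_empty, not_full;
};

/* Enqueue a request; blocks when the stage is saturated, which is how
 * back-pressure propagates to the previous stage. */
static void stage_push(struct stage *s, int req)
{
    pthread_mutex_lock(&s->lock);
    while (s->count == QUEUE_CAP)
        pthread_cond_wait(&s->not_full, &s->lock);
    s->buf[s->tail] = req;
    s->tail = (s->tail + 1) % QUEUE_CAP;
    s->count++;
    pthread_cond_signal(&s->not_empty);
    pthread_mutex_unlock(&s->lock);
}

static int stage_pop(struct stage *s)
{
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)
        pthread_cond_wait(&s->not_empty, &s->lock);
    int req = s->buf[s->head];
    s->head = (s->head + 1) % QUEUE_CAP;
    s->count--;
    pthread_cond_signal(&s->not_full);
    pthread_mutex_unlock(&s->lock);
    return req;
}

static void *worker(void *arg)
{
    struct stage *s = arg;
    for (;;) {
        int req = stage_pop(s);
        printf("handling request %d\n", req);   /* real work goes here */
    }
    return NULL;
}

int main(void)
{
    struct stage s = { .count = 0 };
    pthread_t tid[WORKERS];

    pthread_mutex_init(&s.lock, NULL);
    pthread_cond_init(&s.not_empty, NULL);
    pthread_cond_init(&s.not_full, NULL);
    for (int i = 0; i < WORKERS; i++)
        pthread_create(&tid[i], NULL, worker, &s);

    for (int req = 0; req < 1000; req++)
        stage_push(&s, req);        /* in a real server this is the previous stage, e.g. the accept loop */

    pthread_exit(NULL);             /* let the workers keep draining the queue */
}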
Also, note the difference between efficient and scalable: some designs will take longer than others under light load, but many of those make tradeoffs that only become noticeable under high stress. Consider what tradeoffs you've made so far: some may be good or bad, and more may need to be made.
All this was said without knowing what your app is, other than a DB app. I am not an expert, but I doubt an expert could say all that much with so little information.
Multiple strategies for HA systems (Score:4, Informative)
Devise a mechanism for dealing with the situation where a component is unavailable for several hours. If that is not possible, you must implement redundancy.
Another (or additional) strategy is to implement self-monitoring. Components should monitor themselves for faults, and optionally monitor other components and restart them if necessary. The gotcha here is not to mask any errors from a higher-level monitoring system.
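As a minimal illustration of the restart idea (not a complete solution), a supervisor can be as small as a fork/waitpid loop; "./worker" below just stands in for whichever component is being monitored.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    for (;;) {
        pid_t pid = fork();
        if (pid == 0) {
            execl("./worker", "worker", (char *)NULL);   /* run the monitored component */
            _exit(127);                                   /* exec failed */
        }
        if (pid < 0) {
            perror("fork");
            sleep(5);
            continue;
        }
        int status;
        waitpid(pid, &status, 0);                         /* block until it dies */
        /* Log loudly rather than silently restarting, so a higher-level
         * monitor still sees the failure. */
        fprintf(stderr, "worker exited with status %d, restarting\n", status);
        sleep(1);                                         /* crude restart back-off */
    }
}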
You also need error detection and recovery in all components.
One thing that sometimes really bites you with TCP is the long time it takes to detect that a connection is broken. You need application-layer keep-alives to detect this rapidly. Changing the kernel parameters for TCP timeouts can be necessary too.
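If changing the system-wide kernel parameters is too blunt, the same timers can be tuned per socket (Linux-specific options; the numbers below are only examples). An application-level ping on top of this is still the most reliable check, since it also catches a peer that is up but wedged.

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Turn on aggressive TCP keep-alive on an already-connected socket so a
 * dead peer is noticed in roughly 45 seconds instead of the default hours. */
static void enable_fast_keepalive(int fd)
{
    int on = 1, idle = 30, interval = 5, count = 3;

    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on);
    /* start probing after 30s idle, probe every 5s, give up after 3 misses */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof interval);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof count);
}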
Finally, you may want to have a look at Self-healing servers [slashdot.org]
In one case.... (Score:2, Interesting)
use erlang (Score:2, Informative)
Check out this tutorial [www.sics.se] on making a fault-tolerant server in Erlang.
Less fluff, more detail (Score:2)
You haven't given any detail about the nature of the application. You also appear more concerned with achieving high performance than high availability (which you only mention in the title). If this is such a big application, why are you even talking about socket connections?
I must assume that you are developing an enterprise application, given your performance and availability needs. Contemporary systems of this nature fall loosely into one of two categories: web technology based, or not.
If you're ba
Re:Less fluff, more detail (Score:1)
All of these technologies are well and good at an enterprise level where latency is not an issue, but move into (say) telecoms and suddenly your drivers are:
1) Availability
2) Latency
When I pick up my prepaid cell phone, dial a number and press send there are milliseconds for the entire
Re:Less fluff, more detail (Score:3, Interesting)
Ah, telecoms :) Is this the industry/application in question, or just hypothetical? There was mention of throughput (indirectly) and availability, but not of latency in the original question. Also there was mention of queuing queries to a back-end database ... this doesn't sound like a minimal-latency scenario.
Anyway, the technologies you mention are not likely to be acceptable in such a scenario -- but MOM is quite likely to be appropriate. In fact many cellular services are based on MOM (conceptuall
Re:Less fluff, more detail (Score:2)
Though you wouldn't know it from our horribly out-of-date website, our primary product at the company I work for (Mission Critical Linux [missioncriticallinux.com]) is a high availability middleware product that can be tightly integrated with custom software so that you don't have to reinvent the wheel when it comes to HA clustering services. I'm talking about things like inter-node communications, distributed lock management, heartbeating, service location management... If you have a tight s
Slow connections, and lots of 'em! (Score:3, Informative)
Never forget how a lot of idle connections can kill you; for example, a thousand people connecting to your fast server over 56k modems, sucking only a packet now and then. If you have a thread/process-per-connection design, like Apache, you'll get screwed real hard when you have a bazillion threads/processes doing *almost* (but not quite) nothing, swamping the scheduler and context switching like mad. If you use a select/poll-based approach, scanning all these inactive file descriptors looking for those that are readable/writable wastes a lot of time. Check out the new epoll stuff or Ben LaHaise's callback-based AIO interface.
You should use something like libevent or liboop to abstract your event loop, so that you can fall back to select/poll on old or unpatched kernels, but use epoll and other fancy event-dispatching mechanisms on your production servers.
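To show what the epoll interface buys you, here is a bare-bones sketch (Linux 2.6; port 9000 is arbitrary and error handling is omitted). The key difference from select/poll is that epoll_wait() hands back only the descriptors that are actually ready, instead of making you scan all of them.

#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
    /* minimal listener setup; error checks omitted */
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, SOMAXCONN);

    int ep = epoll_create(64);                  /* size hint, ignored by modern kernels */
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = lfd };
    epoll_ctl(ep, EPOLL_CTL_ADD, lfd, &ev);

    struct epoll_event ready[64];
    char buf[4096];
    for (;;) {
        int n = epoll_wait(ep, ready, 64, -1);  /* only *ready* fds come back */
        for (int i = 0; i < n; i++) {
            int fd = ready[i].data.fd;
            if (fd == lfd) {
                int c = accept(lfd, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = c };
                epoll_ctl(ep, EPOLL_CTL_ADD, c, &cev);
            } else if (read(fd, buf, sizeof buf) <= 0) {
                close(fd);                      /* dropped from the epoll set on close */
            } else {
                /* hand the data to the application here */
            }
        }
    }
}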
Here are a few URLs for you:
http://kegel.com/c10k.html
http://pl.atyp.us/c