
Simple HA/HP clustering Using Only DNS

Posted by Hemos
from the interesting-concepts dept.
holviala writes "I cooked up a way to achieve high-availability and high-performance clustering using nothing but a few strangely configured DNS zones. In case someone else is interested in an extremely easy clustering solution, I wrote a document about it. It's a bit technical, but the included examples should make it clear for anyone who's used to configuring DNS. And yes, the linked site is clustered too, so... ummm... no need to be gentle :-)."
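The write-up itself is only linked, so as a rough illustration: one plausible shape of the setup (every detail below is an assumption, not taken from the article) is that every node runs an authoritative nameserver for the zone, every node is listed as an NS, the TTL is kept very short, and each node's copy of the zone hands out its own address. When a node dies its nameserver stops answering, resolvers retry the other NS, and the dead address drops out of the rotation. A BIND-style zone file for such a node might look like:

```zone
; Hypothetical zone as served by node1; node2 would serve an identical
; copy except the A records point at node2. Names and addresses are
; placeholders (example.com per RFC 2606, 192.0.2.0/24 per RFC 5737).
$TTL 60                 ; short TTL so resolvers re-query often
@   IN  SOA node1.example.com. hostmaster.example.com. (
            2005012501  ; serial -- kept identical on every node
            3600        ; refresh
            900         ; retry
            60          ; expire -- kept short, per the article
            60 )        ; negative-caching TTL
    IN  NS  node1.example.com.  ; every node is listed as a nameserver
    IN  NS  node2.example.com.
    IN  A   192.0.2.1           ; this node's own address
www IN  A   192.0.2.1
```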
This discussion has been archived. No new comments can be posted.

  • by passthecrackpipe (598773) * on Tuesday January 25, 2005 @08:20AM (#11466661)
    This only guarantees DNS HA, since it will not test for apache being alive, or any other service being alive. It's more of a round-robin setup with automatic dropping of dead addresses. Although it is a nice DNS experiment, I would never use this for HA, as there are better, and - critically - more reliable ways of doing HA, and some of those are pretty affordable.

    Face it, you do HA if your business depends on it, and would you really want to rely on a DNS hack in that case?

    Having said that - Cool Hack Dude!
    • by holviala (124278) on Tuesday January 25, 2005 @09:15AM (#11467104)
      This only guarantees DNS HA, since it will not test for apache being alive, or any other service being alive.

      True, which is why I called it "simple". But with this setup you only need to monitor local processes and services, and if those die, just shut down the nameserver. No need for complicated setups where you have to decide whether it was the application or the network that died.
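      The "monitor local services, then shut down the nameserver" idea can be sketched as a tiny watchdog. This is not the author's code; the port, the probe, and the commented-out systemctl command are all assumptions:

```python
# Hypothetical watchdog: probe the local web server; if it is down,
# stop the local nameserver so resolvers fail over to the other node.
import socket

def service_alive(host="127.0.0.1", port=80, timeout=2.0):
    """Return True if something accepts a TCP connection on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if service_alive():
        print("web server up; leaving nameserver running")
    else:
        print("web server down; stopping nameserver")
        # On a real node you might run something like:
        # subprocess.run(["systemctl", "stop", "named"])
```

      Run from cron or a loop, this keeps the failure decision purely local, which is the point the parent is making: no distributed "is it the app or the network" arbitration is needed.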

      Face it, you do HA if your business depends on it, and would you really want to rely on a DNS hack in that case?

      My business, yes, I'd rely on this. I do "official" HA for a living, for customers who don't like hacks like this. But that's something I'd personally never use, not even if I owned a million billion zillion dollar company.

      Then again, I suffer from the Not Invented Here syndrome. Guess I'd make a bad leader: "You'll use my DNS hack or you're fired!" :-)

      • Then again, I suffer from the Not Invented Here syndrome. Guess I'd make a bad leader: "You'll use my DNS hack or you're fired!" :-)

        There's a nice executive position opening up in less than 4 years. You seem to share a couple of ideas with the current executive; maybe you should apply.

        P.S. If you're anti-Bush, please take no offense, I'm just joking. If you're pro-Bush, well, let's not go there.
      • I must be missing something. Your page says:

        "The serials should always be the same on all nodes." ... "But the most serious limitation are the buggy DNS servers around the world. This setup assumes that a DNS server or resolver obeys the expire time of a zone record (the 60 seconds used above). Unfortunatly, there are a lot of servers out there which don't do that."

        Aren't other DNS servers allowed to look at your SOA serial number, notice it hasn't changed, and not bother doing any other work? Isn't

        • Aren't other DNS servers allowed to look at your SOA serial number, notice it hasn't changed, and not bother doing any other work? Isn't that the point of having serial numbers?

          I'm glad you told me that - now I can go and take down the setup that has proven to work well....

          Yeah, they could check the SOA, but they don't. The reason I want all the serials to be the same is that, no matter what, the serial never decreases. Basically this setup is the same as traditional round-robin DNS, but with dead-node detection.
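          The "round-robin DNS plus dead-node detection" behaviour can be illustrated with a toy simulation. This is pure Python with no real DNS; the callables stand in for UDP queries and are entirely made up:

```python
# Toy model of resolver failover between the nodes' nameservers:
# a dead node simply stops answering, so its address drops out of
# the rotation without any zone or serial change.
def resolve(nameservers):
    """Try each nameserver in order; return the first answer."""
    for ns in nameservers:
        answer = ns()           # a callable standing in for a query
        if answer is not None:  # None models a timeout from a dead node
            return answer
    raise RuntimeError("all nameservers down")

node1 = lambda: "192.0.2.1"     # alive: answers with its own address
node2 = lambda: None            # dead: its nameserver was shut down

print(resolve([node2, node1]))  # prints 192.0.2.1 -- the live node
```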

          It

          • First of all, let me say I love the idea and will be using it myself. It is perfect for a company that uses multiple cheap connections (read: DSL) and needs to deal with the possibility of one going down. I only wonder why I didn't think of it myself; it makes every service work just like SMTP with MX records...

            Second, if you are still a customer of that ISP you could do a test easily enough to see if they still cache the information beyond the expiration. (Maybe even if you are not a customer, depending
    • would you really want to rely on a DNS hack in that case?

      A hack doesn't have to be unreliable. The Debian stable tree has programs with hacks in their configs, but they've been deemed stable and are trusted. Really, the only thing separating a hack from an accepted practice is how widespread its use is.

      Sounds like you just want someone to blame... or flame?
    • For web server availability, this works fine.

      I work at a broker-dealer where we have a set of machines that sit on two different ISPs, and this is the technique we use in case one line goes down.

  • One comment (Score:3, Informative)

    by tdemark (512406) on Tuesday January 25, 2005 @08:54AM (#11466923) Homepage
    Don't use "Domain.dom". There are well-known domains that are reserved explicitly for this purpose [rfc-editor.org].
    • Don't use "Domain.dom". There are well-known domains that are reserved explicitly for this purpose.

      Good point. It's all fixed now...

  • DNS caching? (Score:2, Interesting)

    by Anonymous Coward
    What about client programs that cache DNS lookups (I think some web browsers do this)? I'd hardly call something HA if I have to do something client-side to flush any cached lookups.
    • That's why the TTL is set to a low value (60 seconds), so the caching period is kept short. See yahoo.com, google.com, gmail.com, etc.; they all set low TTL values on their A records too.
      • Browsers ignore the TTL on records. If you have a DNS-based balancing solution, like this or GSLB, it's going to bite you in the ass every time. You have to restart the browser (possibly even reboot the computer) in order to clear the cache.
    • What about client programs that cache DNS lookups (I think some web browsers do this)?

      Many web browsers do, nscd does, DNS caches do...

      Speaking of DNS caches, think about the case when an ISP is providing DNS for their customers - even cycling once per minute isn't good for load-balancing the hits routed via a large DNS cache. Further, when I used to run DNS for a large ISP, I set a minimum timeout for data, because I explicitly did NOT want my caches pulling zone data once per minute. (I set it to

  • Quite clever (Score:2, Insightful)

    Regardless of what the nit-pickers say, I think this is quite a clever idea. The author isn't suggesting this is the best HA solution in the world, but it's certainly simple and effective.
  • Seriously, what is so complex about using something like Heartbeat/ldirectord and setting up an HA/LB cluster?
    Took me about 3 hours to read through the docs, Google for examples, and set up a 2-load-balancer/3-node cluster using packages downloaded from ultramonkey.org.

    With a 30-second deadtime, full takeover takes about 1-2 minutes.
    • As someone who maintains two clusters that run LVS, I'd agree that there's nothing that magical about setting it up. However, for a simple two-node cluster LVS is massive overkill - you've got to have as many director boxes as you have nodes!

      I'm not sure I'd use this guy's method, but it's interesting nonetheless.


  • This is just HA load balancing of your inbound web traffic. Clustering is what happens on the back end between the servers, which the article doesn't cover at all, presumably because in the example case the servers are just serving static content over HTTP, and all that's needed to "cluster" it is to copy your changes to both machines when you change the static data.

    The hard part of clustering is getting real HA and/or load balancing for non-trivial content. Imagine if the web server behind Kimmy's DNS se
  • Per the article: "If the above was common knowledge, I'd be grateful to get links to other docs about it."

    OK, how about this article [rpanetwork.co.uk] from December 2002 (see diagram and description on page 4).
