
Re: Kerberos and Load balancing



You get fault tolerance by configuring your clients with multiple
"kdc = ..." entries in the realm definition.  If a client doesn't get
a response within 1 second from the first entry, it moves on down the
list.  The behavior is good enough that I can routinely take one
or another of my servers down without any concerns about the effect
on anyone else.
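
For example, a realm stanza along these lines (the hostnames are
placeholders, not my actual servers):

    [realms]
        EXAMPLE.COM = {
            kdc = kdc1.example.com
            kdc = kdc2.example.com
            admin_server = kdc1.example.com
        }

The library tries kdc1 first and only moves on to kdc2 when the
first entry times out.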

You can get the same effect by having multiple SRV records as well.
In fact, SRV records have some priority and load-balancing features
you can't specify from the config file.  Since you're doing things
with DNS anyway, perhaps you would prefer to go with a "zero conf"
setup and put all the realm configuration in DNS?

The MIT Kerberos Admin manual has a good description of all the DNS
settings, and most of them apply to Heimdal as well.  Publishing the
configuration in DNS is a good idea anyway because you will
occasionally have clients that don't have your standard config files
installed, and it makes them work properly.
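
Roughly, the zero-conf pieces are a TXT record for the host-to-realm
mapping plus the SRV records above, with the DNS lookups turned on in
[libdefaults] (example.com is a placeholder; check the manual for the
exact behavior of your version):

    _kerberos.example.com.  IN TXT  "EXAMPLE.COM"

    [libdefaults]
        dns_lookup_kdc = true
        dns_lookup_realm = true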

On Jan 31, 2008, at 11:59 AM, Annelise Stighall wrote:

> Ok,
> We already have a working Kerberos realm (which is 5 years old and
> has been working fab), but we are in upgrade mode and we would like
> to replace our software LB with a hardware LB. Does anyone have any
> actual experience with this? There were problems with LVS address
> translation and Kerberos which broke the usability of the system,
> and lbnamed solved that problem. I am not worried about overloading
> the systems; it is fault tolerance that I want to know about.
>
> Thanks,
> Annelise
>
>
> >>> bacchi@rpi.edu 1/31/08 2:22:36 PM >>>
> I agree with Henry that it's hard to overload a modern server.  I'm
> doing over 1 million hits per day on my primary kdc and not having any
> recurring problems.
>
> You could simply create two versions of your krb5.conf file, each
> with a different primary kdc:
>
> kdc = server1
> kdc = server2
>
> -------------------
>
> kdc = server2
> kdc = server1
>
> Then split the distribution to your clients.
>
> Henry B. Hotz wrote:
> > It's not worth it.
> >
> > It's pretty hard to imagine a load that a single, modern server
> > can't handle nicely.  You should run multiple servers for redundancy
> > and reliability, not performance.  I'm running 7 servers, but that's
> > due entirely to disaster recovery, firewall, and network topology
> > *NOT* performance.
> >
> > A single 5-year-old Sun could handle at least twice our total load
> > for the entire service.  I say that because our test framework poops
> > out at that level, not because it couldn't do more than that.
> > That's somewhere well over 25 authentications/second.
> >
> > Running Kerberos through a load balancer may confuse the name
> > resolution code and break a lot of things.  There may be workarounds
> > for these issues, but honestly I don't think it's worth the effort
> > unless you know you need to.
> >
> > I trust you have multiple entries in your krb5.conf files and you're
> > not depending entirely on LB or RRDNS.  In my experience that's
> > better failover than a front end because a front end would need to
> > see some actual failures before it can adjust.  Use CNAME entries
> > for your KDCs so you can replace servers easily without changing the
> > krb5.conf.
> >
> > On Jan 31, 2008, at 9:37 AM, Annelise Stighall wrote:
> >
> >> Hi All,
> >>
> >> Do any of you have any experience with Kerberos and hardware load
> >> balancing?  We are currently running our Kerberos realm using
> >> lbnamed for DNS round-robin LB, but we would like to move to a
> >> hardware-based load balancer to speed things up and also to load
> >> balance many other of our services that currently are running in
> >> an LVS environment.  Opinions?  Thoughts?  Ideas?
> >>
> >> Thanks!

------------------------------------------------------------------------
The opinions expressed in this message are mine,
not those of Caltech, JPL, NASA, or the US Government.
Henry.B.Hotz@jpl.nasa.gov, or hbhotz@oxy.edu