LDAP how is Failover done?

Christopher Browne cbbrowne-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org
Mon Aug 8 16:48:06 UTC 2011


On Mon, Aug 8, 2011 at 12:25 PM, Alejandro Imass <aimass-EzYyMjUkBrFWk0Htik3J/w at public.gmane.org> wrote:
> On Mon, Aug 8, 2011 at 12:03 PM, Christopher Browne <cbbrowne-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org> wrote:
>> On Fri, Aug 5, 2011 at 8:57 AM, Ivan Avery Frey
>> <ivan.avery.frey-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org> wrote:
>>
>>> At first glance I prefer Model 1. Even for the Postgres folk (and Chris
>>> will correct me if I'm wrong), multi-mastering is a "hard" problem.
>>
>> It's *really* hard for the RDBMS case, basically because foreign keys
>> + triggers provide a large amount of "magic" where there may be more
>> going on behind the scenes when you do an update, and keeping that
>> consistent across nodes becomes much harder.
>>
>
> That depends on the type of replication/clustering technology/strategy
> you are using. In Pg, for example, there's stuff like:
>
> - Slony (triggers, etc.)
> - WAL replication (native, warm slave copy)
> - PgCluster - best IMHO, but a complex set-up.

I'm fairly familiar with all three (I'm one of the 'core' devs for
Slony!); none of these are properly multimaster replication systems.

>> Consider the case where you're managing inventory...
>>
>
> [...]
>
> In any distributed computing env you have to deal with the CAP theorem
> and a concept known as eventual consistency.

"Eventual Consistency" is one of the strategies for coping with
the implications of CAP.

It's more or less an outright alternative to multimaster replication.

> If you are selling stuff, it is the business that will drive the
> consistency model. For your inventory examples:
>
> For sales
> ------------
> Ask any business owner and they'll tell you: make the sale first and
> then we'll deal with the back order.
> Other times, you just have to deal with stuff like negative quantities.
> Imagine you're at a supermarket checkout and the cashier can't process
> the sale because the guys in the receiving area haven't yet entered the
> received items, even though those items are already on the shelves.
> Believe it or not, I have seen this happen in supermarket systems that
> don't support negative inventory quantities.

The devil's in the business details.
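(The schema-level version of "supporting negative quantities" is tiny,
by the way. A minimal sketch, assuming psycopg2 and a hypothetical
"inventory" table - the whole trick is that there is no
CHECK (on_hand >= 0) constraint to block the sale:)

    import psycopg2

    conn = psycopg2.connect("dbname=store")  # hypothetical DSN
    cur = conn.cursor()

    def record_sale(upc, qty):
        # The sale always goes through; on_hand may dip below zero
        # until receiving catches up, because nothing constrains it.
        cur.execute(
            "UPDATE inventory SET on_hand = on_hand - %s WHERE upc = %s",
            (qty, upc))
        conn.commit()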

> Anyway, using your inventory examples, it is the business that will
> determine your eventual consistency rules.

Right.

For the grocery store, there's enough shrinkage that you can't depend
on an inventory system having authoritative information in it.  That's
a "technical" aspect, but not of a computing sort.  Rather,
"technically," food goes bad for a number of reasons that haven't much
to do with computers.

And the fuzziness is actually handled pretty appropriately: a grocery
store's computers will use inventory information to control
re-ordering, but it's pretty much irrelevant at the point of sale. If
a customer has a block of Gouda cheese in their shopping cart, it
hardly matters what inventory the system imagines there is of Gouda
cheese.

The case I described wouldn't happen with a grocery store.

> If you need the best of both worlds, you will probably need to
> separate the write and read paths. Supposing you are in a Web-based
> env, you can model your inventory items as real RESTful resources, so
> when you update (POST/PUT) you go through Pg, but when you read
> (GET, HEAD, etc.) you are looking at a de-normalized version of the
> data in a NoSQL DB such as Couch. This will scale very nicely,
> because if you look at any DB's usage you will see that there are
> about 3-10 SELECTs for every INSERT/UPDATE in a typical RDBMS app.
> The trick is to update the NoSQL DB when the RDBMS is updated, and
> you can do this with simple stored procedures.
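(A minimal sketch of that update path - using LISTEN/NOTIFY plus a
small bridge process rather than a stored procedure that writes to
Couch directly. It assumes a Pg trigger that does
PERFORM pg_notify('inventory', NEW.upc) on each write, psycopg2 on
the listening side, and a hypothetical CouchDB database at
localhost:5984/inventory; all the names here are made up:)

    import json, select
    import psycopg2
    import psycopg2.extensions
    import requests

    pg = psycopg2.connect("dbname=store")  # hypothetical DSN
    pg.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cur = pg.cursor()
    cur.execute("LISTEN inventory;")  # channel the trigger notifies on

    while True:
        # Block until Pg signals a change (or 60s passes).
        if select.select([pg], [], [], 60) == ([], [], []):
            continue
        pg.poll()
        while pg.notifies:
            note = pg.notifies.pop(0)
            # Re-read the row and push a de-normalized copy into Couch.
            cur.execute(
                "SELECT upc, descr, on_hand FROM inventory WHERE upc = %s",
                (note.payload,))
            row = cur.fetchone()
            if row:
                doc = {"_id": row[0], "descr": row[1], "on_hand": row[2]}
                # (a real update of an existing doc needs its current
                # _rev; skipped here to keep the sketch short)
                requests.put("http://localhost:5984/inventory/" + row[0],
                             data=json.dumps(doc))

Note that NOTIFYs aren't queued for a listener that's down, so a real
version would periodically reconcile the two - which is exactly the
"eventual consistency" trade-off above.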

It would be pretty plausible, in a grocery store case, to have three
sorts of databases:

a) An LDAP system, with a node at each store, feeding from a master
node at HQ, which associates UPC codes with prices and descriptions
(and probably tax data); there's a sketch of the lookup side after
this list.

b) Some sort of "message queueing" system which collects store data to
push to HQ:
  i) When customers buy stuff, record the cash register information of
what they purchased, and for how much
  ii) When stock comes in, record what's delivered to the store
  iii) When stock folk count cans and such, record how much inventory
is in stock

c) At head office, they might have several RDBMS-based systems to deal with:
  i) Accounting for purchases and sales
  ii) Managing prices (which then feeds to the LDAP "master")
  iii) Inventory management
  Items i) and iii) feed off of the data that comes in from the stores,
and then determine what should get delivered to stores.
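(To make the lookup side of a) concrete: the point of sale only ever
reads from the store's local replica. A minimal sketch using
python-ldap, with a made-up DIT layout (ou=products,dc=example,dc=com)
and made-up attribute names - none of this comes from a real schema:)

    import ldap

    # Each store's POS points at the local replica, which the HQ
    # master feeds; the store never writes to it.
    con = ldap.initialize("ldap://localhost")
    con.simple_bind_s()  # anonymous, read-only

    def price_lookup(upc):
        results = con.search_s(
            "ou=products,dc=example,dc=com",   # hypothetical base DN
            ldap.SCOPE_SUBTREE,
            "(upc=%s)" % upc,                  # hypothetical attribute
            ["description", "retailPrice", "taxCode"])
        if not results:
            return None
        dn, attrs = results[0]
        return dict((k, v[0]) for k, v in attrs.items())

If the link to HQ dies, the store keeps selling off its local replica;
that's the failover the subject line asked about.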

There's not any particular value, in this context, for multimaster
replication.  It's pretty hierarchical, head office being "master" of
everything.

The business case for multimaster LDAP is a bit different.

A characteristic case would be where an organization wants integrated
control over a number of systems that feed off of LDAP, and has
several locations, each of which is sufficiently "trusted" to be
considered an authority.
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"