LDAP how is Failover done?

Alejandro Imass aimass-EzYyMjUkBrFWk0Htik3J/w at public.gmane.org
Mon Aug 8 16:25:01 UTC 2011


On Mon, Aug 8, 2011 at 12:03 PM, Christopher Browne <cbbrowne-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org> wrote:
> On Fri, Aug 5, 2011 at 8:57 AM, Ivan Avery Frey
> <ivan.avery.frey-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org> wrote:
>
>> At first glance I prefer Model 1. Even for the postgres folk and Chris will
>> correct me if I'm wrong, multi-mastering is a "hard" problem.
>
> It's *really* hard for the RDBMS case, basically because foreign keys
> + triggers provide a large amount of "magic" where there may be more
> going on behind the scenes when you do an update, and keeping that
> consistent across nodes becomes much harder.
>

That depends on the type of replication/clustering technology/strategy
you are using. In Pg for example there is stuff like:

- Slony (triggers, etc.)
- WAL replication (native, warm slave copy)
- PgCluster - best IMHO but complex set-up

> Consider the case where you're managing inventory...
>

[...]

In any distributed computing env you have to deal with the CAP theorem
and a concept known as eventual consistency.

If you are selling stuff, it is the business that will drive the
consistency model. For your Inventory examples:

For sales
------------
Ask any business owner and they'll tell you: make the sale first and
then we'll deal with the back order.
Other times, you just have to deal with stuff like negative quantities.
Imagine you're a cashier in a supermarket and you can't process the
sale because the guys in the receiving area did not input the received
items, even though they are already on the shelves.
Believe it or not, I have seen this happen in supermarket systems that
don't support negative inventory quantities.
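A minimal sketch (all names hypothetical) of what "supporting negative
quantities" means in practice: the sale at the register goes through even
though receiving has not yet been entered, and the on-hand count
self-corrects once it is:

```python
class Inventory:
    def __init__(self):
        self.qty = {}  # sku -> on-hand count

    def sell(self, sku, n):
        # Allow the sale even if it drives the count negative
        self.qty[sku] = self.qty.get(sku, 0) - n

    def receive(self, sku, n):
        self.qty[sku] = self.qty.get(sku, 0) + n

inv = Inventory()
inv.sell("milk-1l", 2)          # receiving not entered yet
print(inv.qty["milk-1l"])       # -2: negative, but the sale went through
inv.receive("milk-1l", 24)      # receiving catches up
print(inv.qty["milk-1l"])       # 22
```

The point is that rejecting the sale on `qty < n` would block the
cashier for a data-entry lag, which is exactly the failure described
above.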

For maintenance
-----------------------
If your inventory is for managing replacement parts in a CMMS/EAM then
it's a completely different story: you'd better not accept over-demand
because it may cause a plant shutdown.

Anyway, using your inventory examples, it is the business that will
determine your eventual consistency rules.


If you need the best of both worlds, you will probably need to
separate the write and read paths. Supposing you are in a Web-based
env, you can model your inventory items as real RESTful resources, so
when you update (POST/PUT) you go through Pg, but when you read
(GET, HEAD, etc.) you are looking at a de-normalized version of the
data in a noSQL DB such as Couch. This will scale very nicely because
if you look at any DB's usage you will see that there are about 3-10
SELECTs for every INSERT/UPDATE in a typical RDBMS app. The trick is
to update the noSQL DB when the RDBMS is updated, and you can do this
with simple stored procedures.
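The write/read split above can be sketched in a few lines. This is a toy
model, not Pg or Couch: the "relational" dicts stand in for normalized
tables, the "documents" dict for the denormalized read store, and the
sync inside write_item for the stored procedure that pushes updates
across:

```python
relational = {"item": {}, "stock": {}}   # normalized write-side tables
documents = {}                            # denormalized read-side docs

def write_item(sku, name, qty):
    # Write path (POST/PUT): update the normalized tables...
    relational["item"][sku] = {"name": name}
    relational["stock"][sku] = qty
    # ...then denormalize into one document, as a stored procedure would
    documents[sku] = {"sku": sku, "name": name, "qty": qty}

def read_item(sku):
    # Read path (GET/HEAD): a single document fetch, no joins
    return documents[sku]

write_item("bolt-m8", "M8 bolt", 500)
print(read_item("bolt-m8"))
```

Reads never touch the write-side tables, which is what lets the read
path scale independently.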


In today's cloud computing envs we have to start thinking outside the
RDBMS paradigm and combine our data layers with the best solution for
each case:

In terms of CAP
- For CA use RDBMS like Pg
- For AP use noSQL like Couch, Cassandra etc. - and LDAP where applicable

You can link these worlds but you must think in terms of resources
(de-normalized in the case of noSQL).


LDAP can easily be linked to databases using the operational attribute
entryUUID (RFC 4530).
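A hypothetical sketch of that linkage: the entryUUID is immutable for
the life of the entry, so it survives DN renames and makes a stable
foreign key, unlike the DN itself. (The dicts below just model a
directory entry and a database row; nothing here talks to a real
server.)

```python
import uuid

# Directory side: each entry carries an entryUUID assigned by the server
entry = {
    "dn": "uid=aimass,ou=people,dc=example,dc=org",
    "entryUUID": str(uuid.uuid4()),
}

# Database side: store the entryUUID, never the DN, as the link
people_table = {entry["entryUUID"]: {"salary": 50000}}

# A rename changes the DN but not the entryUUID, so the link holds
entry["dn"] = "uid=alejandro,ou=people,dc=example,dc=org"
assert people_table[entry["entryUUID"]]["salary"] == 50000
```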

>
> LDAP is a bit of a different story; it certainly doesn't include those
> sorts of constraints or triggers, with the attendant consequence that
> people can't model that, and so don't have those sorts of challenges
> in their systems.
>

That's because LDAP is optimized for reads, not writes, just like noSQL.

> As a DIRECTORY service, (the "D" in LDAP), you don't capture balances
> of things - what you're supposed to record are things that other
> systems might want to reference.  And that fits reasonably well with
> the ability to 'go multimaster.'
>

Agreed, but a much better model is to divide your DIT and use
referrals. This is because even with multi-master or master-slave you
will _always_ have a single point of failure. In the multi-master case
it will be your LDAP proxy/balancer.
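A toy sketch (hypothetical layout) of what dividing the DIT buys you:
each naming context lives on its own server, and a query outside the
local context returns a referral the client chases itself, so there is
no central proxy to fail:

```python
partitions = {
    "ou=people,dc=example,dc=org": "ldap://people.example.org",
    "ou=groups,dc=example,dc=org": "ldap://groups.example.org",
}

def lookup(local_context, dn):
    # Find which naming context the DN falls under
    for suffix, server in partitions.items():
        if dn.endswith(suffix):
            if suffix == local_context:
                return ("entry", dn)          # served locally
            return ("referral", server)       # client follows this itself
    raise KeyError(dn)

print(lookup("ou=people,dc=example,dc=org",
             "cn=admins,ou=groups,dc=example,dc=org"))
# -> ('referral', 'ldap://groups.example.org')
```

Losing one partition's server only takes down that subtree, not the
whole directory.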


> Now, the experiences I have had working with LDAP tend to make me wish
> that I had instead done something less unpleasant, like poking burning
> needles in my eyes.  I don't quite know why this is; I don't think
> it's a "relational myopia" or anything such.
>

LDAP is just like any other DB. Again, you must always weigh the
cost/benefit of every solution. Master/slave with a warm standby and a
downtime of a few minutes is probably fine for any organization. Heck,
if they've put up with _hours_ of downtime using MS solutions till now, I
agree with you that the cost/benefit of setting up a MM LDAP is probably
not worth it.

Best,

--
Alejandro Imass
--
The Toronto Linux Users Group.      Meetings: http://gtalug.org/
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://gtalug.org/wiki/Mailing_lists