From 9e58aecdd4265759a1c9aac2817da858849f08a1 Mon Sep 17 00:00:00 2001
From: Thierry Bordaz <tbordaz@redhat.com>
Date: Wed, 10 Feb 2016 15:17:02 +0100
Subject: [PATCH 86/86] Ticket 48445: keep alive entries can break replication

Bug Description:
	On the consumer side, at the end of a total update the replica is enabled and the changelog is recreated.
	When the replica is enabled, the keep alive entry (for that replica) is created.
	There is a race condition (which looks quite systematic in our tests) if the ADD of that entry is written to the changelog
	before the changelog is recreated.
	In that case the ADD is erased from the changelog and will never be replicated.

	The keep alive entry is created (if it does not already exist):
		- during a total update (as supplier)
		- when the keep alive is updated
		- when the replica is enabled

Fix Description:
	It is not strictly necessary to create the keep alive entry when the replica is enabled,
	so its creation can be skipped during that step.
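
Purely as an illustration of the ordering problem (this is not 389-ds-base code, and all names
in it are hypothetical), the sketch below models the changelog as a small in-memory list: with
the old ordering the keep-alive ADD is logged and then wiped when the changelog is recreated,
while with the fix the ADD only happens later (e.g. during the next total update), after the
recreation, so it survives.

	/*
	 * Illustrative model only: a changelog as a fixed-size list of
	 * operation strings, showing why the keep-alive ADD is lost when it
	 * is logged before the changelog is recreated.
	 */
	#include <stdio.h>
	#include <string.h>

	#define CL_MAX 16

	static char changelog[CL_MAX][64];
	static int cl_count = 0;

	/* Append an operation to the changelog. */
	static void cl_log(const char *op)
	{
	    if (cl_count < CL_MAX)
	        strncpy(changelog[cl_count++], op, sizeof(changelog[0]) - 1);
	}

	/* The changelog is recreated: all previously logged entries are lost. */
	static void cl_recreate(void)
	{
	    cl_count = 0;
	}

	static void cl_dump(const char *label)
	{
	    printf("%s:\n", label);
	    for (int i = 0; i < cl_count; i++)
	        printf("  %s\n", changelog[i]);
	    if (cl_count == 0)
	        printf("  (empty)\n");
	}

	int main(void)
	{
	    /* Old ordering: enabling the replica creates the keep-alive entry,
	     * its ADD lands in the changelog, then the changelog is recreated
	     * and the ADD is erased -- it will never be replicated. */
	    cl_log("ADD keep alive entry (rid 1)");
	    cl_recreate();
	    cl_dump("old ordering (ADD lost)");

	    /* Fixed ordering: no keep-alive creation at enable time; the entry
	     * is created later, after the changelog recreation, so the ADD is
	     * kept and can replicate. */
	    cl_recreate();
	    cl_log("ADD keep alive entry (rid 1)");
	    cl_dump("fixed ordering (ADD kept)");
	    return 0;
	}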

https://fedorahosted.org/389/ticket/48445

Reviewed by: Mark Reynolds (thank you Mark)

Platforms tested: F23

Flag Day: no

Doc impact: no

(cherry picked from commit 71a891f0dcfd1aafeb3913279d42e33ed2355312)
(cherry picked from commit 02af085c2a9c23536c8d276ee35794ec6efc81f5)
---
 ldap/servers/plugins/replication/repl5_replica.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/ldap/servers/plugins/replication/repl5_replica.c b/ldap/servers/plugins/replication/repl5_replica.c
index 8b53f3c..31c5f0f 100644
--- a/ldap/servers/plugins/replication/repl5_replica.c
+++ b/ldap/servers/plugins/replication/repl5_replica.c
@@ -3972,7 +3972,6 @@ replica_enable_replication (Replica *r)
         /* What to do ? */
     }
 
-    replica_subentry_check(r->repl_root, replica_get_rid(r));
     /* Replica came back online, Check if the total update was terminated.
        If flag is still set, it was not terminated, therefore the data is
        very likely to be incorrect, and we should not restart Replication threads...
-- 
2.4.3