From f0b41ec12f957612c69ae5be3bbbb6e2d6db2530 Mon Sep 17 00:00:00 2001
From: Ludwig Krispenz <lkrispen@redhat.com>
Date: Thu, 17 May 2018 10:31:58 +0200
Subject: [PATCH] Ticket 49696: replicated operations should be serialized

    Bug: There was a scenario where two threads could process replication operations in parallel.
         The reason was that for a new replication start request the repl conn flag is not yet set and
         the connection is made readable.
         When the start repl op is finished the flag is set, but in a small window the supplier could
         already have sent updates, and more_data would then trigger this thread to continue processing
         repl operations as well.

    Fix: In the situation where a thread has just successfully processed a start repl request and set the
         repl conn flag, do not use more_data.
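
    The decision this patch introduces can be sketched as a small standalone C program. The
    struct and helper below are simplified stand-ins, not the real slapd types; they are only
    meant to illustrate the gating logic added to connection_threadmain() in the hunk below:

        #include <stdio.h>

        /* Simplified stand-in for the slapd Connection struct. */
        struct conn_stub {
            int c_isreplication_session; /* set once the start repl op has completed */
            int buffered_data_avail;     /* stand-in for conn_buffered_data_avail_nolock() */
        };

        /* replication_connection is the value sampled before the operation was
         * processed; conn reflects the connection's state afterwards. */
        static int
        decide_more_data(const struct conn_stub *conn, int replication_connection)
        {
            if (!replication_connection && conn->c_isreplication_session) {
                /* The start repl request finished on this thread and flagged the
                 * connection only after replication_connection was sampled: do not
                 * read ahead, so the replicated ops that follow are not also
                 * processed by this thread. */
                return 0;
            }
            /* Normal connection, or a replication session that was already
             * established before this operation. */
            return conn->buffered_data_avail ? 1 : 0;
        }

        int
        main(void)
        {
            struct conn_stub c = {1, 1};
            /* Connection promoted to a replication session by the op just processed. */
            printf("just promoted: more_data=%d\n", decide_more_data(&c, 0));
            /* Replication session that was already established. */
            printf("established:   more_data=%d\n", decide_more_data(&c, 1));
            return 0;
        }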

    Reviewed by: Thierry, thanks
---
 ldap/servers/slapd/connection.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/ldap/servers/slapd/connection.c b/ldap/servers/slapd/connection.c
index 5ca32a333..b5030f0cb 100644
--- a/ldap/servers/slapd/connection.c
+++ b/ldap/servers/slapd/connection.c
@@ -1822,9 +1822,17 @@ connection_threadmain()
 
             /* If we're in turbo mode, we keep our reference to the connection alive */
             /* can't use the more_data var because connection could have changed in another thread */
-            more_data = conn_buffered_data_avail_nolock(conn, &conn_closed) ? 1 : 0;
-            slapi_log_err(SLAPI_LOG_CONNS, "connection_threadmain", "conn %" PRIu64 " check more_data %d thread_turbo_flag %d\n",
-                          conn->c_connid, more_data, thread_turbo_flag);
+            slapi_log_err(SLAPI_LOG_CONNS, "connection_threadmain", "conn %" PRIu64 " check more_data %d thread_turbo_flag %d "
+                          "repl_conn_bef %d, repl_conn_now %d\n",
+                          conn->c_connid, more_data, thread_turbo_flag,
+                          replication_connection, conn->c_isreplication_session);
+            if (!replication_connection && conn->c_isreplication_session) {
+                /* it's a connection that was just flagged as a replication connection */
+                more_data = 0;
+            } else {
+                /* normal connection or already established replication connection */
+                more_data = conn_buffered_data_avail_nolock(conn, &conn_closed) ? 1 : 0;
+            }
             if (!more_data) {
                 if (!thread_turbo_flag) {
                     /*
-- 
2.13.6