From 8d5bef256e0b58bb6f45910d0e9e724da72e100c Mon Sep 17 00:00:00 2001
Message-Id: <8d5bef256e0b58bb6f45910d0e9e724da72e100c@dist-git>
From: Michal Privoznik <mprivozn@redhat.com>
Date: Wed, 26 Feb 2014 10:05:24 +0100
Subject: [PATCH] virNetDevVethCreate: Serialize callers

https://bugzilla.redhat.com/show_bug.cgi?id=1014604

Consider a dozen LXC domains, each of them having this type of interface:

    <interface type='network'>
      <mac address='52:54:00:a7:05:4b'/>
      <source network='default'/>
    </interface>

When starting these domains in parallel, all workers may meet in
virNetDevVethCreate(), where a race over allocating veth pairs starts,
because the allocation requires two steps (see the sketch after the
list):

  1) find first nonexistent '/sys/class/net/vnet%d/'
  2) run 'ip link add ...' command
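
As a rough, standalone illustration of step 1 only (not the exact
libvirt code; the helper name findFirstFreeVethIndex and the use of
access(2) are assumptions made for this sketch, while the real probe
lives in virNetDevVethExists()):

    #include <stdio.h>
    #include <unistd.h>

    /* Scan /sys/class/net/vnet%d and return the first index that does
     * not exist yet.  Two threads running this concurrently can both
     * get the same answer -- that is the race described below. */
    static int findFirstFreeVethIndex(void)
    {
        char path[256];
        int i;

        for (i = 0; ; i++) {
            snprintf(path, sizeof(path), "/sys/class/net/vnet%d", i);
            if (access(path, F_OK) < 0)
                return i;
        }
    }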

Now consider two threads. Both of them find N as the first unused veth
index, but only one of them succeeds in allocating it; the other one
fails. For such cases, we run the allocation in a loop with 10 rounds.
However, this is very flaky synchronization. It is better suited to
libvirt competing with another process than to libvirt threads fighting
each other. Therefore, internally we should use a mutex to serialize
callers, and keep the allocation loop just in case we are competing
with a different process. By the way, we have had something similar
since 1cf97c87.
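
A minimal sketch of that serialization pattern follows; plain pthreads
stand in for libvirt's virMutex/VIR_ONCE_GLOBAL_INIT machinery used in
the actual patch below, and the function name vethCreateSerialized is
made up for the illustration:

    #include <pthread.h>

    static pthread_mutex_t vethCreateLock = PTHREAD_MUTEX_INITIALIZER;

    static int vethCreateSerialized(void)
    {
        int ret = -1;

        /* Only one libvirt thread probes and creates at a time, so two
         * threads can no longer pick the same free index.  The bounded
         * retry loop is kept only to cope with other processes also
         * creating devices on the host. */
        pthread_mutex_lock(&vethCreateLock);
        /* ... find the first free vnet%d index, run 'ip link add ...',
         * retry a few times on failure, set ret accordingly ... */
        pthread_mutex_unlock(&vethCreateLock);

        return ret;
    }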

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
(cherry picked from commit c0d162c68c2f19af8d55a435a9e372da33857048)
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
---
 src/util/virnetdevveth.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/src/util/virnetdevveth.c b/src/util/virnetdevveth.c
index 25eb282..e698ce2 100644
--- a/src/util/virnetdevveth.c
+++ b/src/util/virnetdevveth.c
@@ -39,6 +39,19 @@
 
 /* Functions */
 
+virMutex virNetDevVethCreateMutex;
+
+static int virNetDevVethCreateMutexOnceInit(void)
+{
+    if (virMutexInit(&virNetDevVethCreateMutex) < 0) {
+        virReportSystemError(errno, "%s", _("unable to init mutex"));
+        return -1;
+    }
+    return 0;
+}
+
+VIR_ONCE_GLOBAL_INIT(virNetDevVethCreateMutex);
+
 static int virNetDevVethExists(int devNum)
 {
     int ret;
@@ -117,6 +130,10 @@ int virNetDevVethCreate(char** veth1, char** veth2)
      * We might race with other containers, but this is reasonably
      * unlikely, so don't do too many retries for device creation
      */
+    if (virNetDevVethCreateMutexInitialize() < 0)
+        return -1;
+
+    virMutexLock(&virNetDevVethCreateMutex);
 #define MAX_VETH_RETRIES 10
 
     for (i = 0; i < MAX_VETH_RETRIES; i++) {
@@ -179,6 +196,7 @@ int virNetDevVethCreate(char** veth1, char** veth2)
                    MAX_VETH_RETRIES);
 
 cleanup:
+    virMutexUnlock(&virNetDevVethCreateMutex);
     virCommandFree(cmd);
     VIR_FREE(veth1auto);
     VIR_FREE(veth2auto);
-- 
1.9.0