Blame SOURCES/libvirt-numa_conf-Properly-check-for-caches-in-virDomainNumaDefValidate.patch

From 8521a431d3da3cc360eb8102eda1c0d649f1ecc3 Mon Sep 17 00:00:00 2001
Message-Id: <8521a431d3da3cc360eb8102eda1c0d649f1ecc3@dist-git>
From: Michal Privoznik <mprivozn@redhat.com>
Date: Wed, 7 Oct 2020 18:45:45 +0200
Subject: [PATCH] numa_conf: Properly check for caches in
 virDomainNumaDefValidate()
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

When adding support for HMAT, in f0611fe8830 I introduced a
check which aims to validate /domain/cpu/numa/interconnects. As
part of that, there is a loop which checks whether every
<latency/> with a @cache attribute refers to an existing cache
level. For instance:

  <cpu mode='host-model' check='partial'>
    <numa>
      <cell id='0' cpus='0-5' memory='512000' unit='KiB' discard='yes'>
        <cache level='1' associativity='direct' policy='writeback'>
          <size value='8' unit='KiB'/>
          <line value='5' unit='B'/>
        </cache>
      </cell>
      <interconnects>
        <latency initiator='0' target='0' cache='1' type='access' value='5'/>
        <bandwidth initiator='0' target='0' type='access' value='204800' unit='KiB'/>
      </interconnects>
    </numa>
  </cpu>

This XML defines that accessing the L1 cache of node #0 from
node #0 has a latency of 5ns.

However, the check inside the loop was wrong: it always compared
against the first cache of the target node instead of iterating
over all of them. Therefore, the following example errors out:

  <cpu mode='host-model' check='partial'>
    <numa>
      <cell id='0' cpus='0-5' memory='512000' unit='KiB' discard='yes'>
        <cache level='3' associativity='direct' policy='writeback'>
          <size value='10' unit='KiB'/>
          <line value='8' unit='B'/>
        </cache>
        <cache level='1' associativity='direct' policy='writeback'>
          <size value='8' unit='KiB'/>
          <line value='5' unit='B'/>
        </cache>
      </cell>
      <interconnects>
        <latency initiator='0' target='0' cache='1' type='access' value='5'/>
        <bandwidth initiator='0' target='0' type='access' value='204800' unit='KiB'/>
      </interconnects>
    </numa>
  </cpu>

This errors out even though it is a valid configuration: the L1
cache under node #0 is still present.

Fixes: f0611fe8830
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
(cherry picked from commit e41ac71fca309b50e2c8e6ec142d8fe1280ca2ad)

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1749518

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Message-Id: <4bb47f9e97ca097cee1259449da4739b55753751.1602087923.git.mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
---
 src/conf/numa_conf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/conf/numa_conf.c b/src/conf/numa_conf.c
index 5a92eb35cc..a20398714e 100644
--- a/src/conf/numa_conf.c
+++ b/src/conf/numa_conf.c
@@ -1423,7 +1423,7 @@ virDomainNumaDefValidate(const virDomainNuma *def)
 
         if (l->cache > 0) {
             for (j = 0; j < def->mem_nodes[l->target].ncaches; j++) {
-                const virDomainNumaCache *cache = def->mem_nodes[l->target].caches;
+                const virDomainNumaCache *cache = &def->mem_nodes[l->target].caches[j];
 
                 if (l->cache == cache->level)
                     break;
-- 
2.29.2