From 9b4bc5423447796134efeabaf76fbd669428a028 Mon Sep 17 00:00:00 2001
From: Yannick Cote <ycote@redhat.com>
Date: Fri, 2 Dec 2022 12:58:38 -0500
Subject: [KPATCH CVE-2022-43945] kpatch fixes for CVE-2022-43945

Kernels:
4.18.0-425.3.1.el8
4.18.0-425.10.1.el8_7


Kpatch-MR: https://gitlab.com/redhat/prdsc/rhel/src/kpatch/rhel-8/-/merge_requests/76
Approved-by: Joe Lawrence (@joe.lawrence)
Changes since last build:
arches: x86_64 ppc64le
callback_xdr.o: changed function: nfs_callback_dispatch
mremap.o: changed function: move_page_tables
nfs3proc.o: changed function: nfsd3_init_dirlist_pages
nfs3proc.o: changed function: nfsd3_proc_read
nfsproc.o: changed function: nfsd_proc_read
nfsproc.o: changed function: nfsd_proc_readdir
nfssvc.o: changed function: nfsd_dispatch
svc.o: changed function: nlmsvc_dispatch
---------------------------

Modifications:
- Drop the changes to the inline header routines and instead roll new kpatch
  versions of them, included at every call site (see the sketch below).
- Wrapped routines: svcxdr_init_decode(), svcxdr_init_encode()
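
A condensed sketch of the call-site pattern, taken from the hunks below
(the header inlines are not modified in place; each dispatcher includes the
new header and calls the kpatch copies instead):

    #include <linux/kpatch_cve_2022_43945.h>

    /* in nlmsvc_dispatch(), nfs_callback_dispatch() and nfsd_dispatch(): */
    kpatch_cve_2022_43945_svcxdr_init_decode(rqstp);   /* was: svcxdr_init_decode(rqstp); */
    /* ... decode arguments, run ->pc_func() ... */
    kpatch_cve_2022_43945_svcxdr_init_encode(rqstp);   /* was: svcxdr_init_encode(rqstp); */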

commit e608b2dbbb132ebed6b63967b0915159cdf52ce4
Author: Scott Mayhew <smayhew@redhat.com>
Date:   Thu Nov 10 13:46:47 2022 -0500

    SUNRPC: Fix svcxdr_init_decode's end-of-buffer calculation

    Bugzilla: https://bugzilla.redhat.com/2143172
    CVE: CVE-2022-43945
    Y-Commit: 804daece339d51a00b7b466cc648139009b1c0fb

    O-Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2141774
    O-CVE: CVE-2022-43945

    commit 90bfc37b5ab91c1a6165e3e5cfc49bf04571b762
    Author: Chuck Lever <chuck.lever@oracle.com>
    Date:   Thu Sep 1 15:09:53 2022 -0400

        SUNRPC: Fix svcxdr_init_decode's end-of-buffer calculation

        Ensure that stream-based argument decoding can't go past the actual
        end of the receive buffer. xdr_init_decode's calculation of the
        value of xdr->end over-estimates the end of the buffer because the
        Linux kernel RPC server code does not remove the size of the RPC
        header from rqstp->rq_arg before calling the upper layer's
        dispatcher.

        The server-side still uses the svc_getnl() macros to decode the
        RPC call header. These macros reduce the length of the head iov
        but do not update the total length of the message in the buffer
        (buf->len).

        A proper fix for this would be to replace the use of svc_getnl() and
        friends in the RPC header decoder, but that would be a large and
        invasive change that would be difficult to backport.

        Fixes: 5191955d6fc6 ("SUNRPC: Prepare for xdr_stream-style decoding on the server-side")
        Reviewed-by: Jeff Layton <jlayton@kernel.org>
        Signed-off-by: Chuck Lever <chuck.lever@oracle.com>

    Signed-off-by: Scott Mayhew <smayhew@redhat.com>
    Signed-off-by: Jarod Wilson <jarod@redhat.com>

commit c43d2ebcf1b3893e044ee76490cbc225212dcd09
Author: Scott Mayhew <smayhew@redhat.com>
Date:   Thu Nov 10 13:46:47 2022 -0500

    SUNRPC: Fix svcxdr_init_encode's buflen calculation

    Bugzilla: https://bugzilla.redhat.com/2143172
    CVE: CVE-2022-43945
    Y-Commit: 0dfe425436b4e11f1cd68ed8cbf4a8c4276d47c6

    O-Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2141774
    O-CVE: CVE-2022-43945

    commit 1242a87da0d8cd2a428e96ca68e7ea899b0f4624
    Author: Chuck Lever <chuck.lever@oracle.com>
    Date:   Thu Sep 1 15:09:59 2022 -0400

        SUNRPC: Fix svcxdr_init_encode's buflen calculation

        Commit 2825a7f90753 ("nfsd4: allow encoding across page boundaries")
        added an explicit computation of the remaining length in the rq_res
        XDR buffer.

        The computation appears to suffer from an "off-by-one" bug. Because
        buflen is too large by one page, XDR encoding can run off the end of
        the send buffer by eventually trying to use the struct page address
        in rq_page_end, which always contains NULL.

        Fixes: bddfdbcddbe2 ("NFSD: Extract the svcxdr_init_encode() helper")
        Reviewed-by: Jeff Layton <jlayton@kernel.org>
        Signed-off-by: Chuck Lever <chuck.lever@oracle.com>

    Signed-off-by: Scott Mayhew <smayhew@redhat.com>
    Signed-off-by: Jarod Wilson <jarod@redhat.com>

commit 2c36509ef08d24a538bca8369ec0e88420813c3b
Author: Scott Mayhew <smayhew@redhat.com>
Date:   Thu Nov 10 13:46:47 2022 -0500

    NFSD: Protect against send buffer overflow in NFSv2 READDIR

    Bugzilla: https://bugzilla.redhat.com/2143172
    CVE: CVE-2022-43945
    Y-Commit: f4e8bb36f1ddd7d2111db324154793c7c6d3cad1

    O-Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2141774
    O-CVE: CVE-2022-43945

    commit 00b4492686e0497fdb924a9d4c8f6f99377e176c
    Author: Chuck Lever <chuck.lever@oracle.com>
    Date:   Thu Sep 1 15:10:05 2022 -0400

        NFSD: Protect against send buffer overflow in NFSv2 READDIR

        Restore the previous limit on the @count argument to prevent a
        buffer overflow attack.

        Fixes: 53b1119a6e50 ("NFSD: Fix READDIR buffer overflow")
        Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
        Reviewed-by: Jeff Layton <jlayton@kernel.org>
        Signed-off-by: Chuck Lever <chuck.lever@oracle.com>

    Signed-off-by: Scott Mayhew <smayhew@redhat.com>
    Signed-off-by: Jarod Wilson <jarod@redhat.com>

commit e7f5ff9960dd6667538bf642a42168473e1d987d
Author: Scott Mayhew <smayhew@redhat.com>
Date:   Thu Nov 10 13:46:47 2022 -0500

    NFSD: Protect against send buffer overflow in NFSv3 READDIR

    Bugzilla: https://bugzilla.redhat.com/2143172
    CVE: CVE-2022-43945
    Y-Commit: c33579e1e26c423888888482a0e1ca9a0980f292

    O-Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2141774
    O-CVE: CVE-2022-43945

    commit 640f87c190e0d1b2a0fcb2ecf6d2cd53b1c41991
    Author: Chuck Lever <chuck.lever@oracle.com>
    Date:   Thu Sep 1 15:10:12 2022 -0400

        NFSD: Protect against send buffer overflow in NFSv3 READDIR

        Since before the git era, NFSD has conserved the number of pages
        held by each nfsd thread by combining the RPC receive and send
        buffers into a single array of pages. This works because there are
        no cases where an operation needs a large RPC Call message and a
        large RPC Reply message at the same time.

        Once an RPC Call has been received, svc_process() updates
        svc_rqst::rq_res to describe the part of rq_pages that can be
        used for constructing the Reply. This means that the send buffer
        (rq_res) shrinks when the received RPC record containing the RPC
        Call is large.

        A client can force this shrinkage on TCP by sending a correctly-
        formed RPC Call header contained in an RPC record that is
        excessively large. The full maximum payload size cannot be
        constructed in that case.

        Thanks to Aleksi Illikainen and Kari Hulkko for uncovering this
        issue.

        Reported-by: Ben Ronallo <Benjamin.Ronallo@synopsys.com>
        Cc: <stable@vger.kernel.org>
        Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
        Reviewed-by: Jeff Layton <jlayton@kernel.org>
        Signed-off-by: Chuck Lever <chuck.lever@oracle.com>

    Signed-off-by: Scott Mayhew <smayhew@redhat.com>
    Signed-off-by: Jarod Wilson <jarod@redhat.com>

commit be58a0f8fe047e3cbc9891133b34cd323d01f8e3
Author: Scott Mayhew <smayhew@redhat.com>
Date:   Thu Nov 10 13:46:47 2022 -0500

    NFSD: Protect against send buffer overflow in NFSv2 READ

    Bugzilla: https://bugzilla.redhat.com/2143172
    CVE: CVE-2022-43945
    Y-Commit: 436325d0734d253f50a547b842a95ed4bb752ae3

    O-Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2141774
    O-CVE: CVE-2022-43945

    commit 401bc1f90874280a80b93f23be33a0e7e2d1f912
    Author: Chuck Lever <chuck.lever@oracle.com>
    Date:   Thu Sep 1 15:10:18 2022 -0400

        NFSD: Protect against send buffer overflow in NFSv2 READ

        Since before the git era, NFSD has conserved the number of pages
        held by each nfsd thread by combining the RPC receive and send
        buffers into a single array of pages. This works because there are
        no cases where an operation needs a large RPC Call message and a
        large RPC Reply at the same time.

        Once an RPC Call has been received, svc_process() updates
        svc_rqst::rq_res to describe the part of rq_pages that can be
        used for constructing the Reply. This means that the send buffer
        (rq_res) shrinks when the received RPC record containing the RPC
        Call is large.

        A client can force this shrinkage on TCP by sending a correctly-
        formed RPC Call header contained in an RPC record that is
        excessively large. The full maximum payload size cannot be
        constructed in that case.

        Cc: <stable@vger.kernel.org>
        Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
        Reviewed-by: Jeff Layton <jlayton@kernel.org>
        Signed-off-by: Chuck Lever <chuck.lever@oracle.com>

    Signed-off-by: Scott Mayhew <smayhew@redhat.com>
    Signed-off-by: Jarod Wilson <jarod@redhat.com>

commit b0be04c547fb3f2c98bda81f9f93e68dfdcad398
Author: Scott Mayhew <smayhew@redhat.com>
Date:   Thu Nov 10 13:46:46 2022 -0500

    NFSD: Protect against send buffer overflow in NFSv3 READ

    Bugzilla: https://bugzilla.redhat.com/2143172
    CVE: CVE-2022-43945
    Y-Commit: cc970cde839098c5dd3d1dd013620ca972d4c9db

    O-Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2141774
    O-CVE: CVE-2022-43945

    commit fa6be9cc6e80ec79892ddf08a8c10cabab9baf38
    Author: Chuck Lever <chuck.lever@oracle.com>
    Date:   Thu Sep 1 15:10:24 2022 -0400

        NFSD: Protect against send buffer overflow in NFSv3 READ

        Since before the git era, NFSD has conserved the number of pages
        held by each nfsd thread by combining the RPC receive and send
        buffers into a single array of pages. This works because there are
        no cases where an operation needs a large RPC Call message and a
        large RPC Reply at the same time.

        Once an RPC Call has been received, svc_process() updates
        svc_rqst::rq_res to describe the part of rq_pages that can be
        used for constructing the Reply. This means that the send buffer
        (rq_res) shrinks when the received RPC record containing the RPC
        Call is large.

        A client can force this shrinkage on TCP by sending a correctly-
        formed RPC Call header contained in an RPC record that is
        excessively large. The full maximum payload size cannot be
        constructed in that case.

        Cc: <stable@vger.kernel.org>
        Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
        Reviewed-by: Jeff Layton <jlayton@kernel.org>
        Signed-off-by: Chuck Lever <chuck.lever@oracle.com>

    Signed-off-by: Scott Mayhew <smayhew@redhat.com>
    Signed-off-by: Jarod Wilson <jarod@redhat.com>

Signed-off-by: Yannick Cote <ycote@redhat.com>
---
 fs/lockd/svc.c                        |  6 ++-
 fs/nfs/callback_xdr.c                 |  6 ++-
 fs/nfsd/nfs3proc.c                    | 11 +++---
 fs/nfsd/nfsproc.c                     |  6 +--
 fs/nfsd/nfssvc.c                      |  6 ++-
 include/linux/kpatch_cve_2022_43945.h | 53 +++++++++++++++++++++++++++
 6 files changed, 74 insertions(+), 14 deletions(-)
 create mode 100644 include/linux/kpatch_cve_2022_43945.h
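 [Reviewer note, ignored by git-am: the helpers in the new header below mirror
 the fixed upstream inlines; the lines that carry the CVE fix, condensed from
 include/linux/kpatch_cve_2022_43945.h, are roughly:]

     /* decode: refresh buf->len, which svc_getnl() and friends leave stale */
     buf->len = buf->head->iov_len + buf->page_len + buf->tail->iov_len;

     /* encode: size the send buffer from the real page array, minus auth slack */
     buf->buflen = PAGE_SIZE * (rqstp->rq_page_end - buf->pages);
     buf->buflen -= rqstp->rq_auth_slack;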

diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
index 31fd841010c2..13f3813b38a8 100644
--- a/fs/lockd/svc.c
+++ b/fs/lockd/svc.c
@@ -767,6 +767,8 @@ static void __exit exit_nlm(void)
 module_init(init_nlm);
 module_exit(exit_nlm);
 
+#include <linux/kpatch_cve_2022_43945.h>
+
 /**
  * nlmsvc_dispatch - Process an NLM Request
  * @rqstp: incoming request
@@ -780,7 +782,7 @@ static int nlmsvc_dispatch(struct svc_rqst *rqstp, __be32 *statp)
 {
 	const struct svc_procedure *procp = rqstp->rq_procinfo;
 
-	svcxdr_init_decode(rqstp);
+	kpatch_cve_2022_43945_svcxdr_init_decode(rqstp);
 	if (!procp->pc_decode(rqstp, &rqstp->rq_arg_stream))
 		goto out_decode_err;
 
@@ -790,7 +792,7 @@ static int nlmsvc_dispatch(struct svc_rqst *rqstp, __be32 *statp)
 	if (*statp != rpc_success)
 		return 1;
 
-	svcxdr_init_encode(rqstp);
+	kpatch_cve_2022_43945_svcxdr_init_encode(rqstp);
 	if (!procp->pc_encode(rqstp, &rqstp->rq_res_stream))
 		goto out_encode_err;
 
diff --git a/fs/nfs/callback_xdr.c b/fs/nfs/callback_xdr.c
index a67c41ec545f..9438f91b9c2d 100644
--- a/fs/nfs/callback_xdr.c
+++ b/fs/nfs/callback_xdr.c
@@ -983,13 +983,15 @@ static __be32 nfs4_callback_compound(struct svc_rqst *rqstp)
 	return rpc_success;
 }
 
+#include <linux/kpatch_cve_2022_43945.h>
+
 static int
 nfs_callback_dispatch(struct svc_rqst *rqstp, __be32 *statp)
 {
 	const struct svc_procedure *procp = rqstp->rq_procinfo;
 
-	svcxdr_init_decode(rqstp);
-	svcxdr_init_encode(rqstp);
+	kpatch_cve_2022_43945_svcxdr_init_decode(rqstp);
+	kpatch_cve_2022_43945_svcxdr_init_encode(rqstp);
 
 	*statp = procp->pc_func(rqstp);
 	return 1;
diff --git a/fs/nfsd/nfs3proc.c b/fs/nfsd/nfs3proc.c
index f1d81704d5bd..233f07958b09 100644
--- a/fs/nfsd/nfs3proc.c
+++ b/fs/nfsd/nfs3proc.c
@@ -146,7 +146,6 @@ nfsd3_proc_read(struct svc_rqst *rqstp)
 {
 	struct nfsd3_readargs *argp = rqstp->rq_argp;
 	struct nfsd3_readres *resp = rqstp->rq_resp;
-	u32 max_blocksize = svc_max_payload(rqstp);
 	unsigned int len;
 	int v;
 
@@ -155,7 +154,8 @@ nfsd3_proc_read(struct svc_rqst *rqstp)
 				(unsigned long) argp->count,
 				(unsigned long long) argp->offset);
 
-	argp->count = min_t(u32, argp->count, max_blocksize);
+	argp->count = min_t(u32, argp->count, svc_max_payload(rqstp));
+	argp->count = min_t(u32, argp->count, rqstp->rq_res.buflen);
 	if (argp->offset > (u64)OFFSET_MAX)
 		argp->offset = (u64)OFFSET_MAX;
 	if (argp->offset + argp->count > (u64)OFFSET_MAX)
@@ -451,13 +451,14 @@ static void nfsd3_init_dirlist_pages(struct svc_rqst *rqstp,
 {
 	struct xdr_buf *buf = &resp->dirlist;
 	struct xdr_stream *xdr = &resp->xdr;
-
-	count = clamp(count, (u32)(XDR_UNIT * 2), svc_max_payload(rqstp));
+	unsigned int sendbuf = min_t(unsigned int, rqstp->rq_res.buflen,
+				     svc_max_payload(rqstp));
 
 	memset(buf, 0, sizeof(*buf));
 
 	/* Reserve room for the NULL ptr & eof flag (-2 words) */
-	buf->buflen = count - XDR_UNIT * 2;
+	buf->buflen = clamp(count, (u32)(XDR_UNIT * 2), sendbuf);
+	buf->buflen -= XDR_UNIT * 2;
 	buf->pages = rqstp->rq_next_page;
 	rqstp->rq_next_page += (buf->buflen + PAGE_SIZE - 1) >> PAGE_SHIFT;
 
diff --git a/fs/nfsd/nfsproc.c b/fs/nfsd/nfsproc.c
index e4b0e0e6d1bc..87d0c443767c 100644
--- a/fs/nfsd/nfsproc.c
+++ b/fs/nfsd/nfsproc.c
@@ -182,6 +182,7 @@ nfsd_proc_read(struct svc_rqst *rqstp)
 		argp->count, argp->offset);
 
 	argp->count = min_t(u32, argp->count, NFSSVC_MAXBLKSIZE_V2);
+	argp->count = min_t(u32, argp->count, rqstp->rq_res.buflen);
 
 	v = 0;
 	len = argp->count;
@@ -561,12 +562,11 @@ static void nfsd_init_dirlist_pages(struct svc_rqst *rqstp,
 	struct xdr_buf *buf = &resp->dirlist;
 	struct xdr_stream *xdr = &resp->xdr;
 
-	count = clamp(count, (u32)(XDR_UNIT * 2), svc_max_payload(rqstp));
-
 	memset(buf, 0, sizeof(*buf));
 
 	/* Reserve room for the NULL ptr & eof flag (-2 words) */
-	buf->buflen = count - XDR_UNIT * 2;
+	buf->buflen = clamp(count, (u32)(XDR_UNIT * 2), (u32)PAGE_SIZE);
+	buf->buflen -= XDR_UNIT * 2;
 	buf->pages = rqstp->rq_next_page;
 	rqstp->rq_next_page++;
 
diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index c3c5613c29a8..b9aaf205ec0f 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -985,6 +985,8 @@ nfsd(void *vrqstp)
 	return 0;
 }
 
+#include <linux/kpatch_cve_2022_43945.h>
+
 /**
  * nfsd_dispatch - Process an NFS or NFSACL Request
  * @rqstp: incoming request
@@ -1006,7 +1008,7 @@ int nfsd_dispatch(struct svc_rqst *rqstp, __be32 *statp)
 	 */
 	rqstp->rq_cachetype = proc->pc_cachetype;
 
-	svcxdr_init_decode(rqstp);
+	kpatch_cve_2022_43945_svcxdr_init_decode(rqstp);
 	if (!proc->pc_decode(rqstp, &rqstp->rq_arg_stream))
 		goto out_decode_err;
 
@@ -1023,7 +1025,7 @@ int nfsd_dispatch(struct svc_rqst *rqstp, __be32 *statp)
 	 * Need to grab the location to store the status, as
 	 * NFSv4 does some encoding while processing
 	 */
-	svcxdr_init_encode(rqstp);
+	kpatch_cve_2022_43945_svcxdr_init_encode(rqstp);
 
 	*statp = proc->pc_func(rqstp);
 	if (*statp == rpc_drop_reply || test_bit(RQ_DROPME, &rqstp->rq_flags))
diff --git a/include/linux/kpatch_cve_2022_43945.h b/include/linux/kpatch_cve_2022_43945.h
new file mode 100644
index 000000000000..d52392485a57
--- /dev/null
+++ b/include/linux/kpatch_cve_2022_43945.h
@@ -0,0 +1,53 @@
+#ifndef _KPATCH_CVE_2022_43945_H
+#define _KPATCH_CVE_2022_43945_H
+
+/*
+ * svcxdr_init_decode - Prepare an xdr_stream for Call decoding
+ * @rqstp: controlling server RPC transaction context
+ *
+ * This function currently assumes the RPC header in rq_arg has
+ * already been decoded. Upon return, xdr->p points to the
+ * location of the upper layer header.
+ */
+static inline void kpatch_cve_2022_43945_svcxdr_init_decode(struct svc_rqst *rqstp)
+{
+	struct xdr_stream *xdr = &rqstp->rq_arg_stream;
+	struct xdr_buf *buf = &rqstp->rq_arg;
+	struct kvec *argv = buf->head;
+
+	/*
+	 * svc_getnl() and friends do not keep the xdr_buf's ::len
+	 * field up to date. Refresh that field before initializing
+	 * the argument decoding stream.
+	 */
+	buf->len = buf->head->iov_len + buf->page_len + buf->tail->iov_len;
+
+	xdr_init_decode(xdr, buf, argv->iov_base, NULL);
+	xdr_set_scratch_page(xdr, rqstp->rq_scratch_page);
+}
+
+/*
+ * svcxdr_init_encode - Prepare an xdr_stream for svc Reply encoding
+ * @rqstp: controlling server RPC transaction context
+ *
+ */
+static inline void kpatch_cve_2022_43945_svcxdr_init_encode(struct svc_rqst *rqstp)
+{
+	struct xdr_stream *xdr = &rqstp->rq_res_stream;
+	struct xdr_buf *buf = &rqstp->rq_res;
+	struct kvec *resv = buf->head;
+
+	xdr_reset_scratch_buffer(xdr);
+
+	xdr->buf = buf;
+	xdr->iov = resv;
+	xdr->p   = resv->iov_base + resv->iov_len;
+	xdr->end = resv->iov_base + PAGE_SIZE - rqstp->rq_auth_slack;
+	buf->len = resv->iov_len;
+	xdr->page_ptr = buf->pages - 1;
+	buf->buflen = PAGE_SIZE * (rqstp->rq_page_end - buf->pages);
+	buf->buflen -= rqstp->rq_auth_slack;
+	xdr->rqst = NULL;
+}
+
+#endif /* _KPATCH_CVE_2022_43945_H */
-- 
2.39.1