author    Szabolcs Nagy <szabolcs.nagy@arm.com>  2017-12-13 15:50:21 +0000
committer Szabolcs Nagy <szabolcs.nagy@arm.com>  2020-10-02 09:57:44 +0100
commit    238032ead6f34c41542890b968d973eb5c839673 (patch)
tree      e75973d3676e26ecb1bfe37c0d7d4947d48c55c7 /nptl
parent    2deb7793907c7995b094b3778017c0ef0bd432d5 (diff)
aarch64: enforce >=64K guard size [BZ #26691]
There are several compiler implementations that allow large stack
allocations to jump over the guard page at the end of the stack and
corrupt memory beyond that. See CVE-2017-1000364.

Compilers can emit code to probe the stack such that the guard page
cannot be skipped, but on aarch64 the probe interval is 64K by default
instead of the minimum supported page size (4K).
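
As an illustration (a hedged sketch, not code from this patch; the
function and buffer are hypothetical), a single large stack frame can
step over a small guard when no probing is emitted, and even with
probing a guard smaller than the probe interval can fall between two
probes:

/* Sketch: entering this function moves the stack pointer ~128K in
   one step.  Without stack probing the first write below can land
   past a 4K guard page instead of faulting in it; with probing at
   64K intervals, a guard of at least 64K guarantees that some probe
   hits the guard.  */
void
handle_input (const char *src)
{
  char buf[128 * 1024];		/* Hypothetical large local array.  */
  buf[0] = src ? src[0] : 0;	/* First touch of the new frame.  */
}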

This patch enforces a guard of at least 64K on aarch64 unless the
guard is disabled by setting its size to 0.  For backward
compatibility reasons the increased guard is not reported, so it is
only observable by exhausting the address space or by parsing
/proc/self/maps on Linux.
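
For example (a minimal standalone sketch, not part of the patch), the
attribute interfaces keep reporting the requested size even when the
effective guard is larger:

/* Sketch: on aarch64 with this change, the thread below gets a 64K
   guard internally, but pthread_getattr_np/pthread_attr_getguardsize
   still report the requested 4K.  */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>

static void *
thread_fn (void *arg)
{
  pthread_attr_t attr;
  size_t guard;
  pthread_getattr_np (pthread_self (), &attr);
  pthread_attr_getguardsize (&attr, &guard);
  printf ("reported guard: %zu\n", guard);  /* 4096, not 65536.  */
  pthread_attr_destroy (&attr);
  return NULL;
}

int
main (void)
{
  pthread_attr_t attr;
  pthread_t th;
  pthread_attr_init (&attr);
  pthread_attr_setguardsize (&attr, 4096);  /* Below the 64K minimum.  */
  pthread_create (&th, &attr, thread_fn, NULL);
  pthread_join (th, NULL);
  pthread_attr_destroy (&attr);
  return 0;
}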

On other targets the patch has no effect.  If the stack probe interval
is larger than the page size on a target, then ARCH_MIN_GUARD_SIZE can
be defined to get a large enough stack guard on libc-allocated stacks.
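
A port would typically supply the constant from its pthreaddef.h; as
a sketch of what an aarch64 definition could look like (the file path
and exact spelling are assumptions based on this description, not a
verified tree):

/* sysdeps/aarch64/nptl/pthreaddef.h (sketch): the default stack
   probe interval on aarch64 is 64K, so require at least that much
   guard on libc-allocated stacks.  */
#define ARCH_MIN_GUARD_SIZE (64 * 1024)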

The patch does not affect threads with user allocated stacks.

Fixes bug 26691.
Diffstat (limited to 'nptl')
-rw-r--r--  nptl/allocatestack.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/nptl/allocatestack.c b/nptl/allocatestack.c
index 4ae4b5a986..4b45f8c884 100644
--- a/nptl/allocatestack.c
+++ b/nptl/allocatestack.c
@@ -521,6 +521,7 @@ allocate_stack (const struct pthread_attr *attr, struct pthread **pdp,
     {
       /* Allocate some anonymous memory.  If possible use the cache.  */
       size_t guardsize;
+      size_t reported_guardsize;
       size_t reqsize;
       void *mem;
       const int prot = (PROT_READ | PROT_WRITE
@@ -531,8 +532,17 @@ allocate_stack (const struct pthread_attr *attr, struct pthread **pdp,
       assert (size != 0);
 
       /* Make sure the size of the stack is enough for the guard and
-	 eventually the thread descriptor.  */
+	 eventually the thread descriptor.  On some targets there is
+	 a minimum guard size requirement, ARCH_MIN_GUARD_SIZE, so
+	 internally enforce it (unless the guard was disabled), but
+	 report the original guard size for backward compatibility:
+	 before POSIX 2008 the guardsize was specified to be one page
+	 by default which is observable via pthread_attr_getguardsize
+	 and pthread_getattr_np.  */
       guardsize = (attr->guardsize + pagesize_m1) & ~pagesize_m1;
+      reported_guardsize = guardsize;
+      if (guardsize > 0 && guardsize < ARCH_MIN_GUARD_SIZE)
+	guardsize = ARCH_MIN_GUARD_SIZE;
       if (guardsize < attr->guardsize || size + guardsize < guardsize)
 	/* Arithmetic overflow.  */
 	return EINVAL;
@@ -740,7 +750,7 @@ allocate_stack (const struct pthread_attr *attr, struct pthread **pdp,
       /* The pthread_getattr_np() calls need to get passed the size
 	 requested in the attribute, regardless of how large the
 	 actually used guardsize is.  */
-      pd->reported_guardsize = guardsize;
+      pd->reported_guardsize = reported_guardsize;
     }
 
   /* Initialize the lock.  We have to do this unconditionally since the
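
To make the guard-size arithmetic in the first hunk concrete, here is
a standalone sketch of the round-up-then-clamp computation (the names
mirror the patch; the page size and ARCH_MIN_GUARD_SIZE values are
illustrative):

/* Worked example of the computation added above.  */
#include <stdio.h>

#define ARCH_MIN_GUARD_SIZE (64 * 1024)	/* Illustrative aarch64 value.  */

int
main (void)
{
  size_t pagesize_m1 = 4096 - 1;	/* 4K pages.  */
  size_t requested = 1;			/* attr->guardsize as set by the user.  */

  /* Round the request up to a whole page: 1 -> 4096.  */
  size_t guardsize = (requested + pagesize_m1) & ~pagesize_m1;
  size_t reported_guardsize = guardsize;	/* Stays 4096.  */

  /* Enforce the architecture minimum unless the guard is disabled:
     4096 -> 65536.  */
  if (guardsize > 0 && guardsize < ARCH_MIN_GUARD_SIZE)
    guardsize = ARCH_MIN_GUARD_SIZE;

  printf ("effective %zu, reported %zu\n", guardsize, reported_guardsize);
  return 0;
}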