author		H.J. Lu <hjl.tools@gmail.com>	2019-01-21 11:35:18 -0800
committer	H.J. Lu <hjl.tools@gmail.com>	2019-01-21 11:35:34 -0800
commit		c7c54f65b080affb87a1513dee449c8ad6143c8b (patch)
tree		7843555220fda4dbbe023b2035a8de081810128a /sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S
parent		ee915088a0231cd421054dbd8abab7aadf331153 (diff)
x86-64 strncpy: Properly handle the length parameter [BZ# 24097]
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with non-zero upper 32 bits.  The string/memory
functions written in assembly must therefore either use only the lower
32 bits of a 64-bit register as the length, or clear the upper 32 bits
before using the full 64-bit register as the length.
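
The RDX_LP and R8_LP names used in the fix are register-size macros:
under the x32 ILP32 ABI they resolve to 32-bit register names, so a
mov through them implicitly zero-extends into the full 64-bit
register.  A minimal sketch of that mapping, assuming the usual
__ILP32__ conditional in sysdeps/x86_64/sysdep.h:

/* Sketch of the LP register macros (cf. sysdeps/x86_64/sysdep.h).
   Under x32 (__ILP32__), long and pointers are 32 bits wide, so the
   LP names pick the 32-bit registers; writing a 32-bit register
   zeroes the upper 32 bits of the corresponding 64-bit register.  */
#ifdef __ILP32__
# define RDX_LP	edx	/* length arrives here; upper bits of rdx are junk */
# define R8_LP	r8d
#else
# define RDX_LP	rdx	/* LP64: the full register holds the length */
# define R8_LP	r8
#endif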

This patch fixes strncpy for x32.  Tested on x86-64 and x32.  On x86-64,
libc.so is the same with and without the fix.
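
A minimal sketch of the kind of check the new test performs
(hypothetical code, not the actual tst-size_t-strncpy.c): on x32,
call strncpy through a function type whose length argument is a full
64-bit integer, so the compiler leaves non-zero upper bits in %rdx
that a correct implementation must ignore:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical wrapper type: the third argument occupies all of
   %rdx, while strncpy on x32 takes only a 32-bit size_t in %edx.  */
typedef char *(*strncpy_u64) (char *, const char *, uint64_t);

int
main (void)
{
  char src[16] = "hello";
  char dst[16];
  strncpy_u64 fn = (strncpy_u64) strncpy;
  /* Real length in the lower 32 bits, deliberate garbage above.  */
  fn (dst, src, ((uint64_t) 0xdeadbeef << 32) | sizeof dst);
  puts (dst);	/* expect "hello", not a crash */
  return 0;
}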

	[BZ# 24097]
	CVE-2019-6488
	* sysdeps/x86_64/multiarch/strcpy-avx2.S: Use RDX_LP for length.
	* sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S: Likewise.
	* sysdeps/x86_64/multiarch/strcpy-ssse3.S: Likewise.
	* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-strncpy.
	* sysdeps/x86_64/x32/tst-size_t-strncpy.c: New file.
Diffstat (limited to 'sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S')
-rw-r--r--	sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S	4
1 file changed, 2 insertions, 2 deletions
diff --git a/sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S b/sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S
index b7c79976ea..0d6914e113 100644
--- a/sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S
+++ b/sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S
@@ -40,8 +40,8 @@
 .text
 ENTRY (STRCPY)
 #  ifdef USE_AS_STRNCPY
-	mov	%rdx, %r8
-	test	%r8, %r8
+	mov	%RDX_LP, %R8_LP
+	test	%R8_LP, %R8_LP
 	jz	L(ExitZero)
 #  endif
 	mov	%rsi, %rcx