author     H.J. Lu <hjl.tools@gmail.com>	2019-01-21 11:36:36 -0800
committer  H.J. Lu <hjl.tools@gmail.com>	2019-01-21 11:36:47 -0800
commit     5165de69c0908e28a380cbd4bb054e55ea4abc95 (patch)
tree       6be5f660262a283870db3ca16d7f07210859b58f /sysdeps/x86_64/multiarch
parent     c7c54f65b080affb87a1513dee449c8ad6143c8b (diff)
x86-64 strnlen/wcsnlen: Properly handle the length parameter [BZ #24097]
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits.  The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.  This patch fixes strnlen/wcsnlen
for x32.  Tested on x86-64 and x32.  On x86-64, libc.so is the same
with and without the fix.

	[BZ #24097]
	CVE-2019-6488
	* sysdeps/x86_64/multiarch/strlen-avx2.S: Use RSI_LP for length.
	Clear the upper 32 bits of RSI register.
	* sysdeps/x86_64/strlen.S: Use RSI_LP for length.
	* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-strnlen
	and tst-size_t-wcsnlen.
	* sysdeps/x86_64/x32/tst-size_t-strnlen.c: New file.
	* sysdeps/x86_64/x32/tst-size_t-wcsnlen.c: Likewise.
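The failure is easiest to see from the caller's side.  Below is a minimal,
self-contained C sketch in the spirit of the tst-size_t-strnlen.c test this
commit adds; it is not the actual glibc harness, and packed_len_t,
do_strnlen, and the junk field are hypothetical names.  On x32 the 8-byte
struct travels in a single 64-bit register, so the 32-bit length in its low
half can reach strnlen with a non-zero pointer occupying the upper half; a
correct implementation must ignore those upper bits.  Build with gcc -mx32
to exercise the x32 calling convention.

	#include <stdio.h>
	#include <string.h>

	/* On x32 this struct is 8 bytes and is passed in one 64-bit
	   register: len in the low 32 bits, junk in the high 32 bits.  */
	typedef struct
	{
	  size_t len;
	  void *junk;
	} packed_len_t;

	static size_t
	__attribute__ ((noinline, noclone))
	do_strnlen (const char *s, packed_len_t n)
	{
	  /* The compiler may pass n.len by reusing the 64-bit register
	     that carries the whole struct, leaving n.junk in the upper
	     32 bits -- the situation the commit message describes.  */
	  return strnlen (s, n.len);
	}

	int
	main (void)
	{
	  char buf[32] = "hello";
	  packed_len_t n = { sizeof buf, buf };
	  size_t res = do_strnlen (buf, n);
	  printf ("strnlen = %zu (expected 5)\n", res);
	  return res == 5 ? 0 : 1;
	}

Against an unfixed libc, an assembly strnlen that consumes the full %rsi
would see an enormous length and could read far past the end of the buffer.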
Diffstat (limited to 'sysdeps/x86_64/multiarch')
-rw-r--r--	sysdeps/x86_64/multiarch/strlen-avx2.S	9
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/sysdeps/x86_64/multiarch/strlen-avx2.S b/sysdeps/x86_64/multiarch/strlen-avx2.S
index aa0ade503f..3e7f14a846 100644
--- a/sysdeps/x86_64/multiarch/strlen-avx2.S
+++ b/sysdeps/x86_64/multiarch/strlen-avx2.S
@@ -42,12 +42,15 @@ ENTRY (STRLEN)
 # ifdef USE_AS_STRNLEN
 	/* Check for zero length.  */
-	testq	%rsi, %rsi
+	test	%RSI_LP, %RSI_LP
 	jz	L(zero)
 #  ifdef USE_AS_WCSLEN
-	shl	$2, %rsi
+	shl	$2, %RSI_LP
+#  elif defined __ILP32__
+	/* Clear the upper 32 bits.  */
+	movl	%esi, %esi
 #  endif
-	movq	%rsi, %r8
+	mov	%RSI_LP, %R8_LP
 # endif
 	movl	%edi, %ecx
 	movq	%rdi, %rdx
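A note on the idioms used here (my gloss, not part of the commit): on
x86-64, writing to a 32-bit register such as %esi implicitly zero-extends
the result into the full 64-bit register, so movl %esi, %esi is the
standard single-instruction way to clear bits 32-63.  The RSI_LP and R8_LP
names come from glibc's LP ("long and pointer") register macros in
sysdeps/x86_64/sysdep.h, which expand to the 32-bit register names on x32
and the 64-bit names on LP64 so that one assembly source serves both ABIs.
An abridged sketch of how those definitions look (the real header covers
all the general-purpose registers):

	/* Registers that carry a long or a pointer: 32-bit on x32
	   (__ILP32__), 64-bit on LP64.  */
	#ifdef __ILP32__
	# define RSI_LP	esi
	# define R8_LP	r8d
	#else
	# define RSI_LP	rsi
	# define R8_LP	r8
	#endif

With these macros, test %RSI_LP, %RSI_LP inspects only the meaningful low
32 bits of the length on x32, and the added movl %esi, %esi zero-extends
the length once so that any later 64-bit use of %rsi sees the true value.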