| |
MicroBlaze has a special version of futimesat.c because it gained the
futimesat syscall later than other non-asm-generic architectures. Now
the minimum kernel is recent enough that this syscall can always be
assumed to be present for MicroBlaze, so this patch removes the
special version and the __ASSUME_FUTIMESAT macro, resulting in the
sysdeps/unix/sysv/linux/futimesat.c version being used.
Untested.
* sysdeps/unix/sysv/linux/microblaze/kernel-features.h
(__ASSUME_FUTIMESAT): Remove macro.
* sysdeps/unix/sysv/linux/microblaze/futimesat.c: Remove file.
| |
Newer Intel processors support Enhanced REP MOVSB/STOSB (ERMS), which
has a feature bit in CPUID. This patch adds the Enhanced REP MOVSB/STOSB
(ERMS) bit to the x86 cpu-features.
* sysdeps/x86/cpu-features.h (bit_cpu_ERMS): New.
(index_cpu_ERMS): Likewise.
(reg_ERMS): Likewise.
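For context, ERMS corresponds to CPUID leaf 7 (subleaf 0), EBX bit 9. A minimal stand-alone check, using GCC's <cpuid.h> rather than the glibc cpu-features machinery, might look like this (illustrative sketch only):

    #include <cpuid.h>
    #include <stdio.h>

    int
    main (void)
    {
      unsigned int eax, ebx, ecx, edx;
      if (__get_cpuid (0, &eax, &ebx, &ecx, &edx) && eax >= 7)
        {
          /* CPUID leaf 7, subleaf 0: EBX bit 9 is Enhanced REP MOVSB/STOSB.  */
          __cpuid_count (7, 0, eax, ebx, ecx, edx);
          puts ((ebx & (1 << 9)) ? "ERMS supported" : "ERMS not supported");
        }
      return 0;
    }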
| |
<sys/personality.h> is out of sync with kernel headers, missing the
UNAME26, FDPIC_FUNCPTRS and PER_LINUX_FDPIC entries. Fix that.
Changelog:
* sysdeps/unix/sysv/linux/sys/personality.h (UNAME26, FDPIC_FUNCPTRS,
PER_LINUX_FDPIC): Add.
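As a hedged illustration (not part of the patch), the current persona can be queried and tested against these constants once the header provides them:

    #include <stdio.h>
    #include <sys/personality.h>

    int
    main (void)
    {
      /* Passing 0xffffffff queries the current persona without changing it.  */
      int persona = personality (0xffffffff);
      if (persona == -1)
        {
          perror ("personality");
          return 1;
        }
    #ifdef UNAME26
      if (persona & UNAME26)
        puts ("uname reports a 2.6.x version");
    #endif
      printf ("persona: %#x\n", persona);
      return 0;
    }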
| |
Since x86-64 memcpy-avx512-no-vzeroupper.S implements memmove, make
__memcpy_avx512_no_vzeroupper an alias of __memmove_avx512_no_vzeroupper
to reduce code size of libc.so.
* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Remove
memcpy-avx512-no-vzeroupper.
* sysdeps/x86_64/multiarch/memcpy-avx512-no-vzeroupper.S: Renamed
to ...
* sysdeps/x86_64/multiarch/memmove-avx512-no-vzeroupper.S: This.
(MEMCPY): Don't define.
(MEMCPY_CHK): Likewise.
(MEMPCPY): Likewise.
(MEMPCPY_CHK): Likewise.
(MEMPCPY_CHK): Renamed to ...
(__mempcpy_chk_avx512_no_vzeroupper): This.
(MEMCPY_CHK): Renamed to ...
(__memmove_chk_avx512_no_vzeroupper): This.
(MEMCPY): Renamed to ...
(__memmove_avx512_no_vzeroupper): This.
(__memcpy_avx512_no_vzeroupper): New alias.
(__memcpy_chk_avx512_no_vzeroupper): Likewise.
| |
Implement the x86-64 multiarch mempcpy in memcpy to share most of the code.
This reduces the code size of libc.so.
[BZ #18858]
* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Remove
mempcpy-ssse3, mempcpy-ssse3-back, mempcpy-avx-unaligned
and mempcpy-avx512-no-vzeroupper.
* sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S (MEMPCPY_CHK):
New.
(MEMPCPY): Likewise.
* sysdeps/x86_64/multiarch/memcpy-avx512-no-vzeroupper.S
(MEMPCPY_CHK): New.
(MEMPCPY): Likewise.
* sysdeps/x86_64/multiarch/memcpy-ssse3-back.S (MEMPCPY_CHK): New.
(MEMPCPY): Likewise.
* sysdeps/x86_64/multiarch/memcpy-ssse3.S (MEMPCPY_CHK): New.
(MEMPCPY): Likewise.
* sysdeps/x86_64/multiarch/mempcpy-avx-unaligned.S: Removed.
* sysdeps/x86_64/multiarch/mempcpy-avx512-no-vzeroupper.S:
Likewise.
* sysdeps/x86_64/multiarch/mempcpy-ssse3-back.S: Likewise.
* sysdeps/x86_64/multiarch/mempcpy-ssse3.S: Likewise.
| |
On AMD processors, memcpy optimized with unaligned SSE load is
slower than memcpy optimized with aligned SSSE3, while other string
functions are faster with unaligned SSE load. A feature bit,
Fast_Unaligned_Copy, is added to select memcpy optimized with
unaligned SSE load.
[BZ #19583]
* sysdeps/x86/cpu-features.c (init_cpu_features): Set
Fast_Unaligned_Copy with Fast_Unaligned_Load for Intel
processors. Set Fast_Copy_Backward for AMD Excavator
processors.
* sysdeps/x86/cpu-features.h (bit_arch_Fast_Unaligned_Copy):
New.
(index_arch_Fast_Unaligned_Copy): Likewise.
* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Check
Fast_Unaligned_Copy instead of Fast_Unaligned_Load.
| |
[BZ #19860]
* sysdeps/x86_64/tst-audit10.c (avx512_enabled): Always return
zero if the compiler does not provide the AVX512F bit.
| |
Bug 19848 reports cases where powl on x86 / x86_64 has error
accumulation, for small integer exponents, larger than permitted by
glibc's accuracy goals, at least in some rounding modes. This patch
further restricts the exponent range for which the
small-integer-exponent logic is used to limit the possible error
accumulation.
Tested for x86_64 and x86 and ulps updated accordingly.
[BZ #19848]
* sysdeps/i386/fpu/e_powl.S (p3): Rename to p2 and change value
from 8 to 4.
(__ieee754_powl): Compare integer exponent against 4 not 8.
* sysdeps/x86_64/fpu/e_powl.S (p3): Rename to p2 and change value
from 8 to 4.
(__ieee754_powl): Compare integer exponent against 4 not 8.
* math/auto-libm-test-in: Add more tests of pow.
* math/auto-libm-test-out: Regenerated.
* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Update.
* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
| |
With the 2.6.32 minimum kernel on x86 and 3.2 on other architectures,
__NR_utimensat is always defined.
Changelog:
* sysdeps/unix/sysv/linux/futimens.c (futimens) [__NR_utimensat]:
Make code unconditional.
[!__NR_utimensat]: Remove conditional code.
* sysdeps/unix/sysv/linux/lutimes.c (lutimes) [__NR_utimensat]:
Make code unconditional.
[!__NR_utimensat]: Remove conditional code.
* sysdeps/unix/sysv/linux/utimensat.c (utimensat) [__NR_utimensat]:
Make code unconditional.
[!__NR_utimensat]: Remove conditional code.
| |
With the 2.6.32 minimum kernel on x86 and 3.2 on other architectures,
__NR_openat is always defined.
Changelog:
* sysdeps/unix/sysv/linux/dl-openat64.c (openat64) [__NR_openat]:
Make code unconditional.
| |
The x86-specific versions of both pthread_cond_wait and
pthread_cond_timedwait have (in their fall-back-to-futex-wait slow
paths) calls to __pthread_mutex_cond_lock_adjust followed by
__pthread_mutex_unlock_usercnt, which load the parameters before the
first call but then assume that the first parameter, in %eax, will
survive unaffected. This happens to have been true before now, but %eax
is a call-clobbered register, and this assumption is not safe: it could
change at any time, at GCC's whim, and indeed the stack-protector canary
checking code clobbers %eax while checking that the canary is
uncorrupted.
So reload %eax before calling __pthread_mutex_unlock_usercnt. (Do this
unconditionally, even when stack-protection is not in use, because it's
the right thing to do, it's a slow path, and anything else is dicing
with death.)
* sysdeps/unix/sysv/linux/i386/pthread_cond_timedwait.S: Reload
call-clobbered %eax on retry path.
* sysdeps/unix/sysv/linux/i386/pthread_cond_wait.S: Likewise.
| |
* sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S (MEMCPY):
Don't set %rcx twice before "rep movsb".
| |
Since only Intel processors with AVX2 have fast unaligned load, we
should set index_arch_AVX_Fast_Unaligned_Load only for Intel processors.
Move AVX, AVX2, AVX512, FMA and FMA4 detection into get_common_indeces
and call get_common_indeces for other processors.
Add CPU_FEATURES_CPU_P and CPU_FEATURES_ARCH_P to avoid loading
GLRO(dl_x86_cpu_features) in cpu-features.c.
[BZ #19583]
* sysdeps/x86/cpu-features.c (get_common_indeces): Remove
inline. Check family before setting family, model and
extended_model. Set AVX, AVX2, AVX512, FMA and FMA4 usable
bits here.
(init_cpu_features): Replace HAS_CPU_FEATURE and
HAS_ARCH_FEATURE with CPU_FEATURES_CPU_P and
CPU_FEATURES_ARCH_P. Set index_arch_AVX_Fast_Unaligned_Load
for Intel processors with usable AVX2. Call get_common_indeces
for other processors with family == NULL.
* sysdeps/x86/cpu-features.h (CPU_FEATURES_CPU_P): New macro.
(CPU_FEATURES_ARCH_P): Likewise.
(HAS_CPU_FEATURE): Use CPU_FEATURES_CPU_P.
(HAS_ARCH_FEATURE): Use CPU_FEATURES_ARCH_P.
| |
This patch removes the __ASSUME_GETDENTS64_SYSCALL macro, as its
definition is constant given the new kernel version requirements (and
was constant anyway before those requirements except for MIPS n32).
Note that the "#ifdef __NR_getdents64" conditional *is* still needed,
because MIPS n64 only has the getdents syscall (being a 64-bit ABI,
that syscall is 64-bit; the difference between the two on 64-bit
architectures is where d_type goes). If MIPS n64 were to gain the
getdents64 syscall and we wanted to use it conditionally on the kernel
version at runtime we'd have to revert this patch, but I think that's
unlikely (and in any case, we could follow the simpler approach of
undefining __NR_getdents64 if the syscall can't be assumed, just like
we do for accept4 / recvmmsg / sendmmsg syscalls on architectures
where socketcall support came first).
Most of the getdents.c changes are reindentation.
Tested for x86_64 and x86 that installed stripped shared libraries are
unchanged by the patch.
* sysdeps/unix/sysv/linux/kernel-features.h
(__ASSUME_GETDENTS64_SYSCALL): Remove macro.
* sysdeps/unix/sysv/linux/getdents.c
[!__ASSUME_GETDENTS64_SYSCALL]: Remove conditional code.
[!have_no_getdents64_defined]: Likewise.
(__GETDENTS): Remove __have_no_getdents64 conditional.
| |
Current Linux kernel version requirements mean the signalfd4 syscall
can always be assumed to be available. This patch removes
__ASSUME_SIGNALFD4 and associated conditionals.
Tested for x86_64 and x86 that installed stripped shared libraries are
unchanged by the patch.
* sysdeps/unix/sysv/linux/kernel-features.h (__ASSUME_SIGNALFD4):
Remove macro.
* sysdeps/unix/sysv/linux/signalfd.c: Do not include
<kernel-features.h>.
(signalfd) [__NR_signalfd4]: Make code unconditional.
(signalfd) [!__ASSUME_SIGNALFD4]: Remove conditional code.
| |
This patch changes the implicit checks added in 2a69f853c to follow the
general convention.
Checked on x86_64.
* sysdeps/unix/sysv/linux/spawni.c (__spawnix): Fix implicit checks
style.
| |
When the PLT may be used, JUMPTARGET should be used instead of calling the
function directly.
* sysdeps/unix/sysv/linux/x86_64/cancellation.S
(__pthread_enable_asynccancel): Use JUMPTARGET to call
__pthread_unwind.
* sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S
(__condvar_cleanup2): Use JUMPTARGET to call _Unwind_Resume.
* sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S
(__condvar_cleanup1): Likewise.
| |
The current Linux posix_spawn implementation does not test whether the pid
argument is valid before trying to update it in the success case. This
patch fixes it.
Tested on x86_64 and i686.
* sysdeps/unix/sysv/linux/spawni.c (__spawnix): Fix invalid memory
access when posix_spawn succeeds and the pid argument is null.
* posix/tst-spawn.c (do_test): Add a posix_spawn call with a null pid
argument for the success case.
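A hedged usage example of the case being tested, a caller that passes a null pid because it does not need the child's pid (assumes /bin/true exists):

    #include <spawn.h>
    #include <stdio.h>
    #include <sys/wait.h>

    extern char **environ;

    int
    main (void)
    {
      char *argv[] = { (char *) "true", NULL };
      /* A null pid pointer is valid; posix_spawn must not dereference it
         on success.  */
      int err = posix_spawn (NULL, "/bin/true", NULL, NULL, argv, environ);
      if (err != 0)
        {
          fprintf (stderr, "posix_spawn failed: %d\n", err);
          return 1;
        }
      wait (NULL);   /* Reap the child even without its pid.  */
      return 0;
    }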
| |
* sysdeps/mach/hurd/i386/c++-types.data: New file.
| |
* sysdeps/mach/hurd/libc-lock.h (_IO_lock_inexpensive): Define to 1.
| |
* sysdeps/generic/dl-fcntl.h: New file, adds attribute_hidden to __open
and __fcntl.
* sysdeps/mach/hurd/dl-fcntl.h: New file, adds attribute_hidden to
__fcntl only.
* include/fcntl.h [IS_IN (rtld)]: Include <dl-fcntl.h> instead of
adding attribute_hidden to __open and __fcntl.
| |
Generating errnos.d does not actually need libc-modules.h.
* sysdeps/mach/hurd/Makefile ($(common-objpfx)errnos.d): Strip
"-include $(common-objpfx)libc-modules.h" from CPPFLAGS, and do not
depend on libc-modules.h.
| |
Given current Linux kernel version requirements, we can assume the
presence of the eventfd2 syscall. This means that __ASSUME_EVENTFD2
can be removed, and a syscalls.list entry suffices for eventfd instead
of needing a .c file. This patch implements those changes.
Tested for x86_64 and x86 (not that that means much, given the lack of
testsuite coverage for eventfd).
* sysdeps/unix/sysv/linux/kernel-features.h (__ASSUME_EVENTFD2):
Remove macro.
* sysdeps/unix/sysv/linux/eventfd.c: Remove file.
* sysdeps/unix/sysv/linux/syscalls.list (eventfd): New syscall
entry.
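For context, eventfd is a thin wrapper around the syscall, which is why a syscalls.list entry suffices. A small hedged usage sketch, unrelated to the patch mechanics:

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/eventfd.h>
    #include <unistd.h>

    int
    main (void)
    {
      int fd = eventfd (0, EFD_CLOEXEC);
      if (fd == -1)
        {
          perror ("eventfd");
          return 1;
        }
      uint64_t val = 3;
      write (fd, &val, sizeof (val));   /* Add 3 to the counter.  */
      read (fd, &val, sizeof (val));    /* Returns 3 and resets the counter.  */
      printf ("counter was %llu\n", (unsigned long long) val);
      close (fd);
      return 0;
    }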
| |
Given current Linux kernel version requirements, we can always assume
the fallocate syscall to be available. This patch removes
__ASSUME_FALLOCATE and a test for whether __NR_fallocate is defined.
Tested for x86_64 and x86 that installed stripped shared libraries are
unchanged by the patch.
* sysdeps/unix/sysv/linux/kernel-features.h (__ASSUME_FALLOCATE):
Remove macro.
* sysdeps/unix/sysv/linux/wordsize-64/posix_fallocate.c: Do not
include <kernel-features.h>.
[!__ASSUME_FALLOCATE]: Remove conditional code.
(posix_fallocate) [__NR_fallocate]: Make code unconditional.
| |
When the PLT may be used, JUMPTARGET should be used instead of calling the
function directly.
* sysdeps/x86_64/fpu/multiarch/svml_d_cos2_core_sse4.S
(_ZGVbN2v_cos_sse4): Use JUMPTARGET to call cos.
* sysdeps/x86_64/fpu/multiarch/svml_d_cos4_core_avx2.S
(_ZGVdN4v_cos_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_cos8_core_avx512.S
(_ZGVdN4v_cos): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_exp2_core_sse4.S
(_ZGVbN2v_exp_sse4): Use JUMPTARGET to call exp.
* sysdeps/x86_64/fpu/multiarch/svml_d_exp4_core_avx2.S
(_ZGVdN4v_exp_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_exp8_core_avx512.S
(_ZGVdN4v_exp): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_log2_core_sse4.S
(_ZGVbN2v_log_sse4): Use JUMPTARGET to call log.
* sysdeps/x86_64/fpu/multiarch/svml_d_log4_core_avx2.S
(_ZGVdN4v_log_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_log8_core_avx512.S
(_ZGVdN4v_log): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_pow2_core_sse4.S
(_ZGVbN2vv_pow_sse4): Use JUMPTARGET to call pow.
* sysdeps/x86_64/fpu/multiarch/svml_d_pow4_core_avx2.S
(_ZGVdN4vv_pow_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_pow8_core_avx512.S
(_ZGVdN4vv_pow): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sin2_core_sse4.S
(_ZGVbN2v_sin_sse4): Use JUMPTARGET to call sin.
* sysdeps/x86_64/fpu/multiarch/svml_d_sin4_core_avx2.S
(_ZGVdN4v_sin_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sin8_core_avx512.S
(_ZGVdN4v_sin): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos2_core_sse4.S
(_ZGVbN2vvv_sincos_sse4): Use JUMPTARGET to call sin and cos.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos4_core_avx2.S
(_ZGVdN4vvv_sincos_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core_avx512.S
(_ZGVdN4vvv_sincos): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_cosf16_core_avx512.S
(_ZGVdN8v_cosf): Use JUMPTARGET to call cosf.
* sysdeps/x86_64/fpu/multiarch/svml_s_cosf4_core_sse4.S
(_ZGVbN4v_cosf_sse4): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_cosf8_core_avx2.S
(_ZGVdN8v_cosf_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core_avx512.S
(_ZGVdN8v_expf): Use JUMPTARGET to call expf.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf4_core_sse4.S
(_ZGVbN4v_expf_sse4): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf8_core_avx2.S
(_ZGVdN8v_expf_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core_avx512.S
(_ZGVdN8v_logf): Use JUMPTARGET to call logf.
* sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core_sse4.S
(_ZGVbN4v_logf_sse4): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core_avx2.S
(_ZGVdN8v_logf_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_powf16_core_avx512.S
(_ZGVdN8vv_powf): Use JUMPTARGET to call powf.
* sysdeps/x86_64/fpu/multiarch/svml_s_powf4_core_sse4.S
(_ZGVbN4vv_powf_sse4): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_powf8_core_avx2.S
(_ZGVdN8vv_powf_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core_avx512.S
(_ZGVdN8vv_powf): Use JUMPTARGET to call sinf and cosf.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core_sse4.S
(_ZGVbN4vvv_sincosf_sse4): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core_avx2.S
(_ZGVdN8vvv_sincosf_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sinf16_core_avx512.S
(_ZGVdN8v_sinf): Use JUMPTARGET to call sinf.
* sysdeps/x86_64/fpu/multiarch/svml_s_sinf4_core_sse4.S
(_ZGVbN4v_sinf_sse4): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sinf8_core_avx2.S
(_ZGVdN8v_sinf_avx2): Likewise.
* sysdeps/x86_64/fpu/svml_d_wrapper_impl.h (WRAPPER_IMPL_SSE2):
Use JUMPTARGET to call callee.
(WRAPPER_IMPL_SSE2_ff): Likewise.
(WRAPPER_IMPL_SSE2_fFF): Likewise.
(WRAPPER_IMPL_AVX): Likewise.
(WRAPPER_IMPL_AVX_ff): Likewise.
(WRAPPER_IMPL_AVX_fFF): Likewise.
(WRAPPER_IMPL_AVX512): Likewise.
(WRAPPER_IMPL_AVX512_ff): Likewise.
* sysdeps/x86_64/fpu/svml_s_wrapper_impl.h (WRAPPER_IMPL_SSE2):
Likewise.
(WRAPPER_IMPL_SSE2_ff): Likewise.
(WRAPPER_IMPL_SSE2_fFF): Likewise.
(WRAPPER_IMPL_AVX): Likewise.
(WRAPPER_IMPL_AVX_ff): Likewise.
(WRAPPER_IMPL_AVX_fFF): Likewise.
(WRAPPER_IMPL_AVX512): Likewise.
(WRAPPER_IMPL_AVX512_ff): Likewise.
(WRAPPER_IMPL_AVX512_fFF): Likewise.
| |
* sysdeps/mach/hurd/openat.c (__openat): Add missing ellipsis.
* resolv/gai_sigqueue.c (__gai_sigqueue): Add missing internal_function
qualifier.
* rt/aio_sigqueue.c (__aio_sigqueue): Add missing attribute_hidden
internal_function qualifiers.
| |
When building on i686, x86_64, and arm with NDEBUG or --with-cpu,
there are various variables and functions which are unused with
these settings.
This patch marks all such variables with __attribute__((unused)) to
avoid the compiler warnings when building with the aforementioned
options.
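A minimal illustration of the annotation (the variable name here is made up, not one of the glibc variables touched by the patch):

    /* 'debug_name' is only referenced by assert, so it becomes unused when
       NDEBUG is defined; the attribute silences -Wunused-variable without
       changing behaviour.  */
    #include <assert.h>

    static const char *debug_name __attribute__ ((unused)) = "resolver";

    int
    check (int x)
    {
      assert (x >= 0 && debug_name != 0);
      return x;
    }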
| |
With current kernel version requirements, the ppoll Linux syscall can
be assumed to be present on all architectures; this patch removes the
__ASSUME_PPOLL macro and conditionals on it and on whether __NR_ppoll
is defined. (Note that the same can't yet be done for pselect,
because MicroBlaze only wired that up in the syscall table in 3.15.)
Tested for x86_64 and x86 that installed stripped shared libraries are
unchanged by the patch.
* sysdeps/unix/sysv/linux/kernel-features.h (__ASSUME_PPOLL):
Remove macro.
* sysdeps/unix/sysv/linux/ppoll.c: Do not include
<kernel-features.h>.
[__NR_ppoll]: Make code unconditional.
[!__ASSUME_PPOLL]: Remove conditional code.
| |
This patch adjusts the defaults for kernel-features.h macros relating
to availability of accept4, recvmmsg and sendmmsg. It is not intended
to affect which macros end up getting defined in any configuration.
At present, all architectures with syscalls for those functions need
to define __ASSUME_*_SYSCALL macros; in particular, any new
architecture needs its own kernel-features.h file for that purpose,
though it may not otherwise need such a header. Those macros are then
used together with __ASSUME_SOCKETCALL to define macros for whether
the functions in question are available.
This patch changes the defaults so that the syscalls are assumed to be
available by default with recent-enough kernels, and it is the
responsibility of architecture headers to undefine the macros if they
are unavailable in supported kernels at least as recent as the version
where the architecture-independent functionality was introduced. The
__ASSUME_<function> macros are defaulted similarly instead of being
defined based on other macros (defining based on other macros would no
longer work because the #undefs appear after the generic header is
included), so where the syscall being unavailable means the function
is unavailable this means the architecture header has to undefine the
__ASSUME_<function> macro; this only affects __ASSUME_ACCEPT4 for
ia64, as other cases where the syscalls were added late enough to be
relevant with current kernel version requirements are all on
socketcall architectures.
As a consequence, the AArch64 and Nios II kernel-features.h header
files are removed, and others simplified. When the minimum kernel
version becomes 4.3 or later on all architectures, the syscalls in
question can just be assumed unconditionally, permitting further
simplification.
Tested for x86_64, x86 and powerpc (that installed shared libraries
are unchanged by the patch, and testsuite for x86_64 and x86).
* sysdeps/unix/sysv/linux/kernel-features.h
(__ASSUME_ACCEPT4_SYSCALL): Define unconditionally.
(__ASSUME_ACCEPT4): Likewise.
[__LINUX_KERNEL_VERSION >= 0x020621] (__ASSUME_RECVMMSG_SYSCALL):
Define.
[__LINUX_KERNEL_VERSION >= 0x020621] (__ASSUME_RECVMMSG):
Likewise.
[__LINUX_KERNEL_VERSION >= 0x030000] (__ASSUME_SENDMMSG_SYSCALL):
Likewise.
[__LINUX_KERNEL_VERSION >= 0x030000] (__ASSUME_SENDMMSG):
Likewise.
* sysdeps/unix/sysv/linux/aarch64/kernel-features.h: Remove file.
* sysdeps/unix/sysv/linux/nios2/kernel-features.h: Likewise.
* sysdeps/unix/sysv/linux/alpha/kernel-features.h
(__ASSUME_RECVMMSG_SYSCALL): Do not define.
(__ASSUME_ACCEPT4_SYSCALL): Likewise.
(__ASSUME_SENDMMSG_SYSCALL): Likewise.
* sysdeps/unix/sysv/linux/arm/kernel-features.h
(__ASSUME_RECVMMSG_SYSCALL): Likewise.
(__ASSUME_ACCEPT4_SYSCALL): Likewise.
(__ASSUME_SENDMMSG_SYSCALL): Likewise.
* sysdeps/unix/sysv/linux/hppa/kernel-features.h
(__ASSUME_ACCEPT4_SYSCALL): Likewise.
(__ASSUME_RECVMMSG_SYSCALL): Likewise.
(__ASSUME_SENDMMSG_SYSCALL): Likewise.
* sysdeps/unix/sysv/linux/i386/kernel-features.h
[__LINUX_KERNEL_VERSION >= 0x020621] (__ASSUME_RECVMMSG_SYSCALL):
Likewise.
[__LINUX_KERNEL_VERSION >= 0x030000] (__ASSUME_SENDMMSG_SYSCALL):
Likewise.
(__ASSUME_ACCEPT4_SYSCALL): Undefine if [__LINUX_KERNEL_VERSION <
0x040300] instead of defining if [__LINUX_KERNEL_VERSION >=
0x040300].
* sysdeps/unix/sysv/linux/ia64/kernel-features.h
(__ASSUME_RECVMMSG_SYSCALL): Do not define.
(__ASSUME_SENDMMSG_SYSCALL): Likewise.
(__ASSUME_ACCEPT4_SYSCALL): Undefine if [__LINUX_KERNEL_VERSION <
0x030300] instead of defining if [__LINUX_KERNEL_VERSION >=
0x030300].
[__LINUX_KERNEL_VERSION < 0x030300] (__ASSUME_ACCEPT4): Undefine.
* sysdeps/unix/sysv/linux/m68k/kernel-features.h
(__ASSUME_ACCEPT4_SYSCALL): Undefine if [__LINUX_KERNEL_VERSION <
0x040300] instead of defining if [__LINUX_KERNEL_VERSION >=
0x040300].
(__ASSUME_RECVMMSG_SYSCALL): Likewise.
(__ASSUME_SENDMMSG_SYSCALL): Likewise.
* sysdeps/unix/sysv/linux/microblaze/kernel-features.h
(__ASSUME_ACCEPT4_SYSCALL): Do not define.
(__ASSUME_RECVMMSG_SYSCALL): Likewise.
(__ASSUME_SENDMMSG_SYSCALL): Undefine if [__LINUX_KERNEL_VERSION <
0x030300] instead of defining if [__LINUX_KERNEL_VERSION >=
0x030300].
* sysdeps/unix/sysv/linux/mips/kernel-features.h
(__ASSUME_ACCEPT4_SYSCALL): Do not define.
(__ASSUME_RECVMMSG_SYSCALL): Likewise.
(__ASSUME_SENDMMSG_SYSCALL): Likewise.
* sysdeps/unix/sysv/linux/powerpc/kernel-features.h
(__ASSUME_ACCEPT4_SYSCALL): Likewise.
(__ASSUME_RECVMMSG_SYSCALL): Likewise.
(__ASSUME_SENDMMSG_SYSCALL): Likewise.
* sysdeps/unix/sysv/linux/s390/kernel-features.h
(__ASSUME_ACCEPT4_SYSCALL): Undefine if [__LINUX_KERNEL_VERSION <
0x040300] instead of defining if [__LINUX_KERNEL_VERSION >=
0x040300].
(__ASSUME_RECVMMSG_SYSCALL): Likewise.
(__ASSUME_SENDMMSG_SYSCALL): Likewise.
* sysdeps/unix/sysv/linux/sh/kernel-features.h
(__ASSUME_ACCEPT4_SYSCALL): Do not define.
(__ASSUME_RECVMMSG_SYSCALL): Likewise.
(__ASSUME_SENDMMSG_SYSCALL): Likewise.
* sysdeps/unix/sysv/linux/sparc/kernel-features.h
(__ASSUME_ACCEPT4_SYSCALL): Likewise.
(__ASSUME_RECVMMSG_SYSCALL): Likewise.
(__ASSUME_SENDMMSG_SYSCALL): Likewise.
* sysdeps/unix/sysv/linux/tile/kernel-features.h
(__ASSUME_ACCEPT4_SYSCALL): Likewise.
(__ASSUME_RECVMMSG_SYSCALL): Likewise.
(__ASSUME_SENDMMSG_SYSCALL): Likewise.
* sysdeps/unix/sysv/linux/x86_64/kernel-features.h
(__ASSUME_ACCEPT4_SYSCALL): Likewise.
[__LINUX_KERNEL_VERSION >= 0x020621] (__ASSUME_RECVMMSG_SYSCALL):
Likewise.
[__LINUX_KERNEL_VERSION >= 0x030000] (__ASSUME_SENDMMSG_SYSCALL):
Likewise.
| |
This patch updates the glibc headers with the defines MADV_FREE,
IPV6_HDRINCL and EPOLLEXCLUSIVE that are added in Linux 4.5.
Tested for x86_64 and x86 (testsuite, and that installed stripped
shared libraries are unchanged by the patch).
* bits/mman-linux.h [__USE_MISC] (MADV_FREE): New macro.
* sysdeps/unix/sysv/linux/hppa/bits/mman.h [__USE_MISC]
(MADV_FREE): Likewise.
* sysdeps/unix/sysv/linux/bits/in.h (IPV6_HDRINCL): Likewise.
* sysdeps/unix/sysv/linux/sys/epoll.h (enum EPOLL_EVENTS): Add
EPOLLEXCLUSIVE.
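A hedged sketch of how the new MADV_FREE constant might be used once the headers provide it (guarded with #ifdef since it needs Linux 4.5 at runtime):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int
    main (void)
    {
      size_t len = 1 << 20;
      char *buf = mmap (NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (buf == MAP_FAILED)
        return 1;
      memset (buf, 1, len);
    #ifdef MADV_FREE
      /* The contents are no longer needed; pages may be reclaimed lazily,
         unlike with MADV_DONTNEED.  */
      if (madvise (buf, len, MADV_FREE) != 0)
        perror ("madvise");
    #endif
      munmap (buf, len);
      return 0;
    }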
| |
* sysdeps/posix/waitid.c (OUR_WAITID): Test against WSTOPPED instead of
WUNTRACED.
| |
This patch rearranges cfi_offset() calls after the last store
so as to avoid extra DW_CFA_advance opcodes in unwind information.
| |
index_* and bit_* macros are used to access the cpuid and feature arrays of
struct cpu_features. It is very easy to use bits and indices of the cpuid
array on the feature array, especially in assembly code. For example,
sysdeps/i386/i686/multiarch/bcopy.S has
HAS_CPU_FEATURE (Fast_Rep_String)
which should be
HAS_ARCH_FEATURE (Fast_Rep_String)
We change index_* and bit_* to index_cpu_*/index_arch_* and
bit_cpu_*/bit_arch_* so that we can catch such errors at build time.
[BZ #19762]
* sysdeps/unix/sysv/linux/x86_64/64/dl-librecon.h
(EXTRA_LD_ENVVARS): Add _arch_ to index_*/bit_*.
* sysdeps/x86/cpu-features.c (init_cpu_features): Likewise.
* sysdeps/x86/cpu-features.h (bit_*): Renamed to ...
(bit_arch_*): This for feature array.
(bit_*): Renamed to ...
(bit_cpu_*): This for cpu array.
(index_*): Renamed to ...
(index_arch_*): This for feature array.
(index_*): Renamed to ...
(index_cpu_*): This for cpu array.
[__ASSEMBLER__] (HAS_FEATURE): Add and use field.
[__ASSEMBLER__] (HAS_CPU_FEATURE)): Pass cpu to HAS_FEATURE.
[__ASSEMBLER__] (HAS_ARCH_FEATURE)): Pass arch to HAS_FEATURE.
[!__ASSEMBLER__] (HAS_CPU_FEATURE): Replace index_##name and
bit_##name with index_cpu_##name and bit_cpu_##name.
[!__ASSEMBLER__] (HAS_ARCH_FEATURE): Replace index_##name and
bit_##name with index_arch_##name and bit_arch_##name.
| |
In makecontext the FDE needs to be terminated before the return
trampoline otherwise backtrace called within a context created by
makecontext yields infinite backtrace.
This bug has been present for a long time; stdlib/tst-makecontext did
not fail until recent commit e535ce25. Tested on mips-linux-gnu and
mips64el-linux-gnuabi64, no regression.
This fixes stdlib/tst-makecontext on MIPS.
Changelog:
[BZ #19792]
* sysdeps/unix/sysv/linux/mips/makecontext.S (__makecontext):
Terminate FDE before return label.
| |
The ldbl-128ibm implementation of nearbyintl uses logic that only
works in round-to-nearest mode. This contrasts with rintl, which
works in all rounding modes.
Now, arguably nearbyintl could simply be aliased to rintl, given that
spurious "inexact" is generally allowed for ldbl-128ibm, even for the
underlying arithmetic operations. But given that the only point of
nearbyintl is to avoid "inexact", this patch follows the more
conservative approach of adding conditionals to the rintl
implementation to make it suitable for use to implement nearbyintl,
then builds it for nearbyintl with USE_AS_NEARBYINTL defined. The
test test-nearbyint-except-2 shows up issues when traps on "inexact"
are enabled, which turn out to be problems with the powerpc
fenv_private.h implementation (two functions that should disable
exception traps potentially failing to do so in some cases); this
patch duly fixes that as well (I don't see any other existing cases
where this would be user-visible; there isn't much use of *_NOEX,
*hold* etc. in libm that requires exceptions to be discarded and not
trapped on).
Tested for powerpc.
[BZ #19790]
* sysdeps/ieee754/ldbl-128ibm/s_rintl.c [USE_AS_NEARBYINTL]
(rintl): Define as macro.
[USE_AS_NEARBYINTL] (__rintl): Likewise.
(__rintl) [USE_AS_NEARBYINTL]: Use SET_RESTORE_ROUND_NOEX instead
of fesetround. Ensure results are evaluated before end of scope.
* sysdeps/ieee754/ldbl-128ibm/s_nearbyintl.c: Define
USE_AS_NEARBYINTL and include s_rintl.c.
* sysdeps/powerpc/fpu/fenv_private.h (libc_feholdsetround_ppc):
Disable exception traps in new environment.
(libc_feholdsetround_ppc_ctx): Likewise.
| |
Since x86 has an optimized mempcpy and GCC can inline mempcpy on x86,
define _HAVE_STRING_ARCH_mempcpy to 1 for x86.
[BZ #19759]
* sysdeps/x86/bits/string.h (_HAVE_STRING_ARCH_mempcpy): New.
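For reference, mempcpy behaves like memcpy but returns a pointer just past the last byte written, which is what makes inlining it attractive for chained copies. A small usage example (independent of the header change):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>

    int
    main (void)
    {
      char buf[32];
      /* Each call returns the position right after the copied bytes, so
         concatenation needs no separate length bookkeeping.  */
      char *p = mempcpy (buf, "foo", 3);
      p = mempcpy (p, "bar", 3);
      *p = '\0';
      puts (buf);   /* Prints "foobar".  */
      return 0;
    }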
| |
The operand modifier %s on powerpc is an undocumented internal implementation
detail of GCC. Besides that, the GCC community wants to remove it. This patch
rewrites the expressions that use this modifier with logically equivalent
expressions that don't require it.
Explanation for the substitution:
The %s modifier takes an immediate operand and prints 32 minus that immediate.
Thus, in the previous code, the expression resulted in:
32 - __builtin_ffs(e)
where e was guaranteed to have exactly a single bit set, by the following
expressions:
((e & (e-1)) == 0) : e has at most one bit set.
(e != 0) : e is not zero, thus it has at least one bit set.
Since we guarantee that there is exactly only one bit set, the following
statement is true:
32 - __builtin_ffs(e) == __builtin_clz(e)
Thus, we can replace __builtin_ffs with __builtin_clz and remove the %s operand
modifier.
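A small self-contained check of the equivalence used above, under the same single-bit-set assumption:

    #include <assert.h>

    int
    main (void)
    {
      for (int i = 0; i < 32; i++)
        {
          unsigned int e = 1u << i;   /* Exactly one bit set.  */
          assert (32 - __builtin_ffs (e) == __builtin_clz (e));
        }
      return 0;
    }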
| |
HWCAP-related code should have been updated when all 32 bits of HWCAP came
into use. This patch updates the code in dl-procinfo.h to loop through all
the 32 bits in HWCAP and updates _dl_powerpc_cap_flags accordingly.
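The same loop-over-all-bits idea can be sketched outside the dynamic loader with getauxval; the output below is a placeholder, not the _dl_powerpc_cap_flags strings:

    #include <stdio.h>
    #include <sys/auxv.h>

    int
    main (void)
    {
      unsigned long hwcap = getauxval (AT_HWCAP);
      /* Walk all 32 bits rather than only the ones known when the code
         was written.  */
      for (int bit = 0; bit < 32; bit++)
        if (hwcap & (1UL << bit))
          printf ("hwcap bit %d set\n", bit);
      return 0;
    }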
| |
The ldbl-128ibm implementation of remainderl has logic resulting in
incorrect tests for equality of the absolute values of the arguments
in the case of zero low parts. If the low parts are both zero but
with different signs, this can wrongly cause equal arguments to be
treated as different, resulting in turn in incorrect signs of zero
result in nondefault rounding modes arising from the subtractions done
when the arguments are not equal.
This patch fixes the logic to convert -0 low parts into +0 before the
comparison (remquo already has separate logic to deal with signs of
zero results, so doesn't need such a change). Tests are added for
remainderl and remquol similar to that for fmodl, and based on a
refactoring of it, since the bug depends on low parts which should not
be relied upon in tests not setting the representation explicitly
(although in fact the bug shows up in test-ldouble with current GCC).
Tested for powerpc.
[BZ #19677]
* sysdeps/ieee754/ldbl-128ibm/e_remainderl.c
(__ieee754_remainderl): Put zero low parts in canonical form.
* sysdeps/ieee754/ldbl-128ibm/test-fmodrem-ldbl-128ibm.c: New
file. Based on
sysdeps/ieee754/ldbl-128ibm/test-fmodl-ldbl-128ibm.c.
* sysdeps/ieee754/ldbl-128ibm/test-fmodl-ldbl-128ibm.c: Replace
with wrapper round test-fmodrem-ldbl-128ibm.c.
* sysdeps/ieee754/ldbl-128ibm/test-remainderl-ldbl-128ibm.c: New
file.
* sysdeps/ieee754/ldbl-128ibm/test-remquol-ldbl-128ibm.c:
Likewise.
* sysdeps/ieee754/ldbl-128ibm/Makefile (tests): Add
test-remainderl-ldbl-128ibm and test-remquol-ldbl-128ibm.
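The underlying pitfall is that -0.0 compares equal to +0.0 yet differs in sign bit, so equality tests built from raw low parts can misbehave. A tiny hedged illustration of the canonicalization idea (not the ldbl-128ibm code itself):

    #include <assert.h>
    #include <math.h>

    int
    main (void)
    {
      double lo = -0.0;
      assert (lo == 0.0);            /* Equal as values ...  */
      assert (signbit (lo) != 0);    /* ... but the sign bit differs.  */
      /* Put a zero low part into canonical (+0) form before comparing
         representations, as the remainderl fix does.  */
      if (lo == 0.0)
        lo = 0.0;
      assert (signbit (lo) == 0);
      return 0;
    }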
| |
This ensures that GCC will not use unsupported instructions before
the run-time check to ensure support.
| |
This patch implements a new posix_spawn{p} implementation for Linux. The main
difference is that it uses the clone syscall directly with the CLONE_VM and
CLONE_VFORK flags and a directly allocated stack. The new stack and start
function solve most of the vfork limitations (possible parent clobbering due
to stack spilling). The remaining issues are related to signal handling:
1. No signal handlers must run in the child context, to avoid corrupting the
parent's state.
2. The child must synchronize with the parent to enforce stack deallocation
and to report possible execv failures.
The first one is solved by blocking all signals in the child, even the
NPTL-internal ones (SIGCANCEL and SIGSETXID). The second issue is handled by
allocating the stack in the parent and synchronizing using a pipe or waitpid
(in case of error). The pipe has the advantage of allowing the child to
signal an exec error (checked with the new tst-spawn2 test).
There is an inherent race condition in pipe2 usage for architectures that do
not support the syscall directly. In such cases a pipe plus fcntl is used
instead, and it may lead to a file descriptor leak in the parent (as
described by the fcntl documentation).
The child process stack is allocated with mmap using the MAP_STACK flag and
the default architecture stack size. Although this is slower than using a
stack buffer from the parent, it allows some slack for the compatibility code
to run scripts with no shebang (which may use a buffer with a size depending
on the argument list count).
Performance should be similar to the vfork-based default posix implementation
and much faster than the fork path (vfork on most Linux ports is basically
clone with CLONE_VM plus CLONE_VFORK). The only difference is the syscalls
required for the stack allocation/deallocation.
It fixes BZ#10354, BZ#14750, and BZ#18433.
Tested on i386, x86_64, powerpc64le, and aarch64.
[BZ #14750]
[BZ #10354]
[BZ #18433]
* include/sched.h (__clone): Add hidden prototype.
(__clone2): Likewise.
* include/unistd.h (__dup): Likewise.
* posix/Makefile (tests): Add tst-spawn2.
* posix/tst-spawn2.c: New file.
* sysdeps/posix/dup.c (__dup): Add hidden definition.
* sysdeps/unix/sysv/linux/aarch64/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/alpha/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/arm/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/hppa/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/i386/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/ia64/clone2.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/m68k/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/microblaze/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/mips/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/nios2/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/clone.S (__clone):
Likewise.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S (__clone):
Likewise.
* sysdeps/unix/sysv/linux/s390/s390-32/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/s390/s390-64/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/sh/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc32/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc64/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/tile/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/x86_64/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/nptl-signals.h
(____nptl_is_internal_signal): New function.
* sysdeps/unix/sysv/linux/spawni.c: New file.
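A heavily simplified, hedged sketch of the clone-based approach described above (CLONE_VM | CLONE_VFORK with a separately mmap'ed stack); it omits the signal blocking, the pipe-based error reporting, and all the attribute handling that the real spawni.c performs:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define STACK_SIZE (256 * 1024)

    static int
    child (void *arg)
    {
      char **argv = arg;
      execvp (argv[0], argv);
      _exit (127);              /* Only reached if execvp failed.  */
    }

    int
    main (void)
    {
      char *argv[] = { (char *) "echo", (char *) "spawned", NULL };
      void *stack = mmap (NULL, STACK_SIZE, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
      if (stack == MAP_FAILED)
        return 1;
      /* CLONE_VM avoids duplicating the address space and CLONE_VFORK makes
         the parent wait until the child execs or exits, as vfork would.
         The child runs on the freshly mapped stack (assumed to grow down).  */
      pid_t pid = clone (child, (char *) stack + STACK_SIZE,
                         CLONE_VM | CLONE_VFORK | SIGCHLD, argv);
      if (pid == -1)
        return 1;
      waitpid (pid, NULL, 0);
      munmap (stack, STACK_SIZE);
      return 0;
    }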
| |
* sysdeps/x86_64/multiarch/memcpy-avx512-no-vzeroupper.S:
Replace .text with .text.avx512.
* sysdeps/x86_64/multiarch/memset-avx512-no-vzeroupper.S:
Likewise.
| |
Changelog:
* sysdeps/generic/libnsl.abilist: New file.
* sysdeps/generic/libutil.abilist: New file.
| |
HAS_ARCH_FEATURE, not HAS_CPU_FEATURE, should be used with
Fast_Rep_String.
[BZ #19762]
* sysdeps/i386/i686/multiarch/bcopy.S (bcopy): Use
HAS_ARCH_FEATURE with Fast_Rep_String.
* sysdeps/i386/i686/multiarch/bzero.S (__bzero): Likewise.
* sysdeps/i386/i686/multiarch/memcpy.S (memcpy): Likewise.
* sysdeps/i386/i686/multiarch/memcpy_chk.S (__memcpy_chk):
Likewise.
* sysdeps/i386/i686/multiarch/memmove_chk.S (__memmove_chk):
Likewise.
* sysdeps/i386/i686/multiarch/mempcpy.S (__mempcpy): Likewise.
* sysdeps/i386/i686/multiarch/mempcpy_chk.S (__mempcpy_chk):
Likewise.
* sysdeps/i386/i686/multiarch/memset.S (memset): Likewise.
* sysdeps/i386/i686/multiarch/memset_chk.S (__memset_chk):
Likewise.
| |
Since we have loaded the address of PREINIT_FUNCTION into %rax, we can
avoid the extra branch to the PLT slot.
[BZ #19745]
* sysdeps/x86_64/crti.S (_init): Replace PREINIT_FUNCTION@PLT
with *%rax in call.
| |
Since __libc_start_main is called very early, lazy binding isn't relevant
here. Use an indirect branch via the GOT to avoid the extra branch to the
PLT slot.
[BZ #19745]
* sysdeps/x86_64/start.S (_start): Replace __libc_start_main@PLT
with *__libc_start_main@GOTPCREL(%rip) in call.
| |
Mention recursive calls when ENTRY is used in _mcount.S.
* sysdeps/x86_64/Makefile (sysdep_noprof): Add a comment.
| |
Check Fast_Unaligned_Load, instead of Slow_BSF, and also check for
Fast_Copy_Backward to enable __memcpy_ssse3_back. The existing selection
order is updated to the following selection order (sketched in code below):
1. __memcpy_avx_unaligned if the AVX_Fast_Unaligned_Load bit is set.
2. __memcpy_sse2_unaligned if the Fast_Unaligned_Load bit is set.
3. __memcpy_sse2 if SSSE3 isn't available.
4. __memcpy_ssse3_back if the Fast_Copy_Backward bit is set.
5. __memcpy_ssse3
[BZ #18880]
* sysdeps/x86_64/multiarch/memcpy.S: Check Fast_Unaligned_Load,
instead of Slow_BSF, and also check for Fast_Copy_Backward to
enable __memcpy_ssse3_back.
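The selection order above amounts to a cascade like the following; the struct fields and strings are stand-ins for the glibc HAS_ARCH_FEATURE/HAS_CPU_FEATURE checks and IFUNC targets, not the actual resolver code:

    #include <stdio.h>

    /* Stand-ins for the feature bits consulted by the memcpy resolver.  */
    struct features
    {
      int avx_fast_unaligned_load, fast_unaligned_load, ssse3,
          fast_copy_backward;
    };

    static const char *
    select_memcpy (const struct features *f)
    {
      if (f->avx_fast_unaligned_load)
        return "__memcpy_avx_unaligned";
      if (f->fast_unaligned_load)
        return "__memcpy_sse2_unaligned";
      if (!f->ssse3)
        return "__memcpy_sse2";
      if (f->fast_copy_backward)
        return "__memcpy_ssse3_back";
      return "__memcpy_ssse3";
    }

    int
    main (void)
    {
      struct features f = { 0, 1, 1, 0 };
      puts (select_memcpy (&f));   /* Prints "__memcpy_sse2_unaligned".  */
      return 0;
    }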
| |
We should turn on bit_Prefer_MAP_32BIT_EXEC in EXTRA_LD_ENVVARS without
overriding other bits.
[BZ #19758]
* sysdeps/unix/sysv/linux/x86_64/64/dl-librecon.h
(EXTRA_LD_ENVVARS): Or bit_Prefer_MAP_32BIT_EXEC.