Due to the GCC bug
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58066
__tls_get_addr may be called with only 8-byte stack alignment.  Although
this bug has been fixed in GCC 4.9.4, 5.3 and 6, we can't assume that
the stack will always be aligned to 16 bytes.  Since SSE-optimized
memory/string functions with aligned SSE register loads and stores are
used in the dynamic linker, we must set DL_RUNTIME_UNALIGNED_VEC_SIZE
to 8 so that _dl_runtime_resolve_sse will align the stack before
calling _dl_fixup:
Dump of assembler code for function _dl_runtime_resolve_sse:
0x00007ffff7deea90 <+0>: push %rbx
0x00007ffff7deea91 <+1>: mov %rsp,%rbx
0x00007ffff7deea94 <+4>: and $0xfffffffffffffff0,%rsp
^^^^^^^^^^^ Align stack to 16 bytes
0x00007ffff7deea98 <+8>: sub $0x100,%rsp
0x00007ffff7deea9f <+15>: mov %rax,0xc0(%rsp)
0x00007ffff7deeaa7 <+23>: mov %rcx,0xc8(%rsp)
0x00007ffff7deeaaf <+31>: mov %rdx,0xd0(%rsp)
0x00007ffff7deeab7 <+39>: mov %rsi,0xd8(%rsp)
0x00007ffff7deeabf <+47>: mov %rdi,0xe0(%rsp)
0x00007ffff7deeac7 <+55>: mov %r8,0xe8(%rsp)
0x00007ffff7deeacf <+63>: mov %r9,0xf0(%rsp)
0x00007ffff7deead7 <+71>: movaps %xmm0,(%rsp)
0x00007ffff7deeadb <+75>: movaps %xmm1,0x10(%rsp)
0x00007ffff7deeae0 <+80>: movaps %xmm2,0x20(%rsp)
0x00007ffff7deeae5 <+85>: movaps %xmm3,0x30(%rsp)
0x00007ffff7deeaea <+90>: movaps %xmm4,0x40(%rsp)
0x00007ffff7deeaef <+95>: movaps %xmm5,0x50(%rsp)
0x00007ffff7deeaf4 <+100>: movaps %xmm6,0x60(%rsp)
0x00007ffff7deeaf9 <+105>: movaps %xmm7,0x70(%rsp)
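
For context, here is a minimal C sketch (illustrative only, not part of
glibc) of the kind of code that ends up calling __tls_get_addr: accessing
a thread-local variable from a shared object built with -fPIC under the
general-dynamic TLS model goes through __tls_get_addr, and with an
affected GCC that call can arrive with only 8-byte stack alignment.

  /* tls-user.c: build with "gcc -fPIC -shared -ftls-model=global-dynamic".
     The access to tls_counter below compiles to a __tls_get_addr call.  */
  __thread int tls_counter;

  int
  bump_tls_counter (void)
  {
    return ++tls_counter;
  }
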
[BZ #19679]
* sysdeps/x86_64/dl-trampoline.S (DL_RUNIME_UNALIGNED_VEC_SIZE):
Renamed to ...
(DL_RUNTIME_UNALIGNED_VEC_SIZE): This. Set to 8.
(DL_RUNIME_RESOLVE_REALIGN_STACK): Renamed to ...
(DL_RUNTIME_RESOLVE_REALIGN_STACK): This. Updated.
(DL_RUNIME_RESOLVE_REALIGN_STACK): Renamed to ...
(DL_RUNTIME_RESOLVE_REALIGN_STACK): This.
* sysdeps/x86_64/dl-trampoline.h
(DL_RUNIME_RESOLVE_REALIGN_STACK): Renamed to ...
(DL_RUNTIME_RESOLVE_REALIGN_STACK): This.
 
When the x86-64 assembler doesn't support AVX512, we should make
_dl_runtime_resolve_avx512/_dl_runtime_profile_avx512 aliases of
_dl_runtime_resolve_avx/_dl_runtime_profile_avx.  Tested on x86-64
using GCC 5.2 with binutils 20151008 and GCC 4.8 with binutils 20130219.
There are no differences in ld.so with binutils 20151008. There are no
unexpected failures with binutils 20130219 and 20151008.
[BZ #19124]
* sysdeps/x86_64/dl-trampoline.S [!HAVE_AVX512_ASM_SUPPORT]
(_dl_runtime_resolve_avx512): Make it a hidden alias of
_dl_runtime_resolve_avx.
(_dl_runtime_profile_avx512): Make it a hidden alias of
_dl_runtime_profile_avx.
 
The change in 0b5395f052ee09cd7e3d219af4e805c38058afb5 replaced calls
to __get_cpu_features@plt followed by a mov from rax to rdx, with a
single macro LOAD_RTLD_GLOBAL_RO_RDX. It is pretty clear that there
was a typo in s_floorf and __nearbyint due to which the (now incorrect)
mov was not removed. This patch removes that mov.
* sysdeps/x86_64/fpu/multiarch/s_floorf.S (__floorf): Remove
unnecessary movq.
* sysdeps/x86_64/fpu/multiarch/s_nearbyint.S (__nearbyint):
Likewise.
 
Since _dl_x86_64_save_sse and _dl_x86_64_restore_sse are removed now,
we don't need to run tst-getpid2 with LD_BIND_NOW=1.
* sysdeps/unix/sysv/linux/Makefile (tst-getpid2-ENV): Removed.
 
Since ld.so preserves vector registers now, we can use the SSE-optimized
strcmp in x86-64 ld.so.
* sysdeps/x86_64/strcmp.S: Remove "#if !IS_IN (libc)".
 
Since ld.so preserves vector registers now, we can use the regular,
non-ifunc string and memory functions in ld.so.
* sysdeps/x86_64/rtld-memcmp.c: Removed.
* sysdeps/x86_64/rtld-memset.S: Likewise.
* sysdeps/x86_64/rtld-strchr.S: Likewise.
* sysdeps/x86_64/rtld-strlen.S: Likewise.
* sysdeps/x86_64/multiarch/rtld-memcmp.c: Likewise.
* sysdeps/x86_64/multiarch/rtld-memset.S: Likewise.
 
Since ld.so preserves vector registers now, we can use %xmm0 to avoid
the REX prefix.
* sysdeps/x86_64/memset.S: Replace %xmm8 with %xmm0.
 
Since ld.so preserves vector registers now, we can use %xmm[0-4] to
avoid the REX prefix.
* sysdeps/x86_64/strlen.S: Replace %xmm[8-12] with %xmm[0-4].
 
Since ld.so preserves vector registers now, we can use SSE in ld.so.
* sysdeps/i386/Makefile [$(subdir) == elf] (CFLAGS-.os): Add
-mno-sse -mno-mmx for $(all-rtld-routines).
[$(subdir) == elf] (tests-special): Add
$(objpfx)tst-ld-sse-use.out.
[$(subdir) == elf] ($(objpfx)tst-ld-sse-use.out): New rule.
* sysdeps/x86/Makefile [$(subdir) == elf] (CFLAGS-.os): Removed.
[$(subdir) == elf] (tests-special): Likewise.
[$(subdir) == elf] ($(objpfx)tst-ld-sse-use.out): Likewise.
* sysdeps/x86_64/Makefile [$(subdir) == elf] (CFLAGS-.os): Add
-mno-mmx for $(all-rtld-routines).
 
This patch initializes GLRO(dl_x86_xstate) in dl_platform_init to
indicate whether the processor supports SSE, AVX or AVX512.  It uses
this information to properly save and restore vector registers in
ld.so.  Now we can use SSE in ld.so and delete the FOREIGN_CALL macros.
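
As a rough, hedged sketch of the selection described above (the real
elf_machine_runtime_setup differs in detail; HAS_ARCH_FEATURE and the
feature names come from glibc's <cpu-features.h> as described in the
related commits below, and this is not the literal source):

  #include <cpu-features.h>   /* glibc-internal header */

  /* Pick the trampoline that saves the widest vector registers the
     processor actually supports.  */
  static void (*select_dl_runtime_resolve (void)) (void)
  {
    extern void _dl_runtime_resolve_sse (void);
    extern void _dl_runtime_resolve_avx (void);
    extern void _dl_runtime_resolve_avx512 (void);

    if (HAS_ARCH_FEATURE (AVX512F_Usable))
      return _dl_runtime_resolve_avx512;
    else if (HAS_ARCH_FEATURE (AVX_Usable))
      return _dl_runtime_resolve_avx;
    else
      return _dl_runtime_resolve_sse;
  }
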
[BZ #15128]
* sysdeps/x86_64/Makefile [$(subdir) == elf] (tests): Add
ifuncmain8.
(modules-names): Add ifuncmod8.
($(objpfx)ifuncmain8): New rule.
* sysdeps/x86_64/dl-machine.h: Include <dl-procinfo.h> and
<cpuid.h>.
(elf_machine_runtime_setup): Use _dl_runtime_resolve_sse,
_dl_runtime_resolve_avx, or _dl_runtime_resolve_avx512,
_dl_runtime_profile_sse, _dl_runtime_profile_avx, or
_dl_runtime_profile_avx512, based on HAS_ARCH_FEATURE.
* sysdeps/x86_64/dl-trampoline.S: Rewrite.
* sysdeps/x86_64/dl-trampoline.h: Likewise.
* sysdeps/x86_64/ifuncmain8.c: New file.
* sysdeps/x86_64/ifuncmod8.c: Likewise.
* sysdeps/x86_64/nptl/tcb-offsets.sym (RTLD_SAVESPACE_SSE):
Removed.
* sysdeps/x86_64/nptl/tls.h (__128bits): Removed.
(tcbhead_t): Change rtld_must_xmm_save to __glibc_unused1.
Change rtld_savespace_sse to __glibc_unused2.
(RTLD_CHECK_FOREIGN_CALL): Removed.
(RTLD_ENABLE_FOREIGN_CALL): Likewise.
(RTLD_PREPARE_FOREIGN_CALL): Likewise.
(RTLD_FINALIZE_FOREIGN_CALL): Likewise.
 
We should align the stack to 16 bytes when calling __errno_location.
[BZ #18661]
* sysdeps/x86_64/fpu/s_cosf.S (__cosf): Align stack to 16 bytes
when calling __errno_location.
* sysdeps/x86_64/fpu/s_sincosf.S (__sincosf): Likewise.
* sysdeps/x86_64/fpu/s_sinf.S (__sinf): Likewise.
 
Subtract 24 bytes from the stack instead of 16 so that the stack is
aligned to 16 bytes, as the psABI requires, when calling __gettimeofday;
with a 16-byte adjustment the stack pointer would be left 8 bytes off.
[BZ #18661]
* sysdeps/unix/sysv/linux/x86_64/lowlevellock.S
(__lll_timedwait_tid): Align stack to 16 bytes when calling
__gettimeofday.
 
Don't use pop to restore %rdi, so that the stack is aligned to 16 bytes
when calling __setcontext.
[BZ #18661]
* sysdeps/unix/sysv/linux/x86_64/__start_context.S
(__start_context): Don't use pop to restore %rdi so that stack
is aligned to 16 bytes when calling __setcontext.
 
{memcpy,strcmp}-sse2-unaligned.S aren't needed in ld.so.
* sysdeps/x86_64/multiarch/memcpy-sse2-unaligned.S: Compile
only for libc.
* sysdeps/x86_64/multiarch/strcmp-sse2-unaligned.S: Likewise.
 
This patch updates x86 elision-conf.c to use the newly defined
HAS_CPU_FEATURE from <cpu-features.h>.
* sysdeps/unix/sysv/linux/x86/elision-conf.c (elision_init):
Replace HAS_RTM with HAS_CPU_FEATURE (RTM).
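
As a hedged before/after sketch of that kind of change (the variable and
function names here are illustrative, not necessarily the exact
elision-conf.c source):

  #include <cpu-features.h>   /* provides HAS_CPU_FEATURE */

  static int elision_available;

  static void
  elision_init_sketch (void)
  {
    /* Old style (init-arch.h):      elision_available = HAS_RTM;
       New style (<cpu-features.h>): */
    elision_available = HAS_CPU_FEATURE (RTM);
  }
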
 
This patch updates libmvec multiarch functions to use the newly defined
HAS_CPU_FEATURE, HAS_ARCH_FEATURE and LOAD_RTLD_GLOBAL_RO_RDX from
<cpu-features.h>.
* math/Makefile ($(addprefix $(objpfx), $(libm-vec-tests))):
Remove $(objpfx)init-arch.o.
* sysdeps/x86_64/fpu/Makefile (libmvec-support): Remove
init-arch.
* sysdeps/x86_64/fpu/math-tests-arch.h (avx_usable): Removed.
(INIT_ARCH_EXT): Defined as empty.
(CHECK_ARCH_EXT): Replace HAS_XXX with HAS_ARCH_FEATURE (XXX).
* sysdeps/x86_64/fpu/multiarch/svml_d_cos2_core.S: Remove
__init_cpu_features call. Replace HAS_XXX with
HAS_CPU_FEATURE/HAS_ARCH_FEATURE (XXX).
* sysdeps/x86_64/fpu/multiarch/svml_d_cos4_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_cos8_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_exp2_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_exp4_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_exp8_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_log2_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_log4_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_log8_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_pow2_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_pow4_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_pow8_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sin2_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sin4_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sin8_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos2_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos4_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_cosf16_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_cosf4_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_cosf8_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf4_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf8_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_powf16_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_powf4_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_powf8_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sinf16_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sinf4_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sinf8_core.S: Likewise.
 
This patch updates i686 multiarch functions to use the newly defined
HAS_CPU_FEATURE, HAS_ARCH_FEATURE, LOAD_GOT_AND_RTLD_GLOBAL_RO and
LOAD_FUNC_GOT_EAX from <cpu-features.h>.
* sysdeps/i386/i686/fpu/multiarch/e_expf.c: Replace HAS_XXX
with HAS_CPU_FEATURE/HAS_ARCH_FEATURE (XXX).
* sysdeps/i386/i686/fpu/multiarch/s_cosf.c: Likewise.
* sysdeps/i386/i686/fpu/multiarch/s_cosf.c: Likewise.
* sysdeps/i386/i686/fpu/multiarch/s_sincosf.c: Likewise.
* sysdeps/i386/i686/fpu/multiarch/s_sinf.c: Likewise.
* sysdeps/i386/i686/multiarch/ifunc-impl-list.c: Likewise.
* sysdeps/i386/i686/multiarch/s_fma.c: Likewise.
* sysdeps/i386/i686/multiarch/s_fmaf.c: Likewise.
* sysdeps/i386/i686/multiarch/bcopy.S: Remove __init_cpu_features
call. Merge SHARED and !SHARED. Add LOAD_GOT_AND_RTLD_GLOBAL_RO.
Use LOAD_FUNC_GOT_EAX to load function address. Replace HAS_XXX
with HAS_CPU_FEATURE/HAS_ARCH_FEATURE (XXX).
* sysdeps/i386/i686/multiarch/bzero.S: Likewise.
* sysdeps/i386/i686/multiarch/memchr.S: Likewise.
* sysdeps/i386/i686/multiarch/memcmp.S: Likewise.
* sysdeps/i386/i686/multiarch/memcpy.S: Likewise.
* sysdeps/i386/i686/multiarch/memcpy_chk.S: Likewise.
* sysdeps/i386/i686/multiarch/memmove.S: Likewise.
* sysdeps/i386/i686/multiarch/memmove_chk.S: Likewise.
* sysdeps/i386/i686/multiarch/mempcpy.S: Likewise.
* sysdeps/i386/i686/multiarch/mempcpy_chk.S: Likewise.
* sysdeps/i386/i686/multiarch/memrchr.S: Likewise.
* sysdeps/i386/i686/multiarch/memset.S: Likewise.
* sysdeps/i386/i686/multiarch/memset_chk.S: Likewise.
* sysdeps/i386/i686/multiarch/rawmemchr.S: Likewise.
* sysdeps/i386/i686/multiarch/strcasecmp.S: Likewise.
* sysdeps/i386/i686/multiarch/strcat.S: Likewise.
* sysdeps/i386/i686/multiarch/strchr.S: Likewise.
* sysdeps/i386/i686/multiarch/strcmp.S: Likewise.
* sysdeps/i386/i686/multiarch/strcpy.S: Likewise.
* sysdeps/i386/i686/multiarch/strcspn.S: Likewise.
* sysdeps/i386/i686/multiarch/strlen.S: Likewise.
* sysdeps/i386/i686/multiarch/strncase.S: Likewise.
* sysdeps/i386/i686/multiarch/strnlen.S: Likewise.
* sysdeps/i386/i686/multiarch/strrchr.S: Likewise.
* sysdeps/i386/i686/multiarch/strspn.S: Likewise.
* sysdeps/i386/i686/multiarch/wcschr.S: Likewise.
* sysdeps/i386/i686/multiarch/wcscmp.S: Likewise.
* sysdeps/i386/i686/multiarch/wcscpy.S: Likewise.
* sysdeps/i386/i686/multiarch/wcslen.S: Likewise.
* sysdeps/i386/i686/multiarch/wcsrchr.S: Likewise.
* sysdeps/i386/i686/multiarch/wmemcmp.S: Likewise.
 
This patch updates x86_64 multiarch functions to use the newly defined
HAS_CPU_FEATURE, HAS_ARCH_FEATURE and LOAD_RTLD_GLOBAL_RO_RDX from
<cpu-features.h>.
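
For illustration, a hedged sketch of what an IFUNC selector looks like when
written against the new macros (the function names below are placeholders,
not actual glibc symbols):

  #include <cpu-features.h>   /* HAS_CPU_FEATURE / HAS_ARCH_FEATURE */

  double my_func_sse2 (double x);
  double my_func_sse41 (double x);
  double my_func_avx (double x);

  /* The resolver runs at relocation time and returns the chosen
     implementation; no __init_cpu_features call is needed any more.  */
  static __typeof (my_func_sse2) *
  my_func_resolver (void)
  {
    if (HAS_ARCH_FEATURE (AVX_Usable))
      return my_func_avx;
    if (HAS_CPU_FEATURE (SSE4_1))
      return my_func_sse41;
    return my_func_sse2;
  }

  double my_func (double x) __attribute__ ((ifunc ("my_func_resolver")));
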
* sysdeps/x86_64/fpu/multiarch/e_asin.c: Replace HAS_XXX with
HAS_CPU_FEATURE/HAS_ARCH_FEATURE (XXX).
* sysdeps/x86_64/fpu/multiarch/e_atan2.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/e_exp.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/e_log.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/e_pow.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_atan.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_fmaf.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_sin.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_tan.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_ceil.S: Use
LOAD_RTLD_GLOBAL_RO_RDX and HAS_CPU_FEATURE (SSE4_1).
* sysdeps/x86_64/fpu/multiarch/s_ceilf.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_floor.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_floorf.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_nearbyint.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_nearbyintf.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_rint.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_rintf.S: Likewise.
* sysdeps/x86_64/multiarch/ifunc-impl-list.c: Likewise.
* sysdeps/x86_64/multiarch/sched_cpucount.c: Likewise.
* sysdeps/x86_64/multiarch/strstr.c: Likewise.
* sysdeps/x86_64/multiarch/memmove.c: Likewise.
* sysdeps/x86_64/multiarch/memmove_chk.c: Likewise.
* sysdeps/x86_64/multiarch/test-multiarch.c: Likewise.
* sysdeps/x86_64/multiarch/memcmp.S: Remove __init_cpu_features
call. Add LOAD_RTLD_GLOBAL_RO_RDX. Replace HAS_XXX with
HAS_CPU_FEATURE/HAS_ARCH_FEATURE (XXX).
* sysdeps/x86_64/multiarch/memcpy.S: Likewise.
* sysdeps/x86_64/multiarch/memcpy_chk.S: Likewise.
* sysdeps/x86_64/multiarch/mempcpy.S: Likewise.
* sysdeps/x86_64/multiarch/mempcpy_chk.S: Likewise.
* sysdeps/x86_64/multiarch/memset.S: Likewise.
* sysdeps/x86_64/multiarch/memset_chk.S: Likewise.
* sysdeps/x86_64/multiarch/strcat.S: Likewise.
* sysdeps/x86_64/multiarch/strchr.S: Likewise.
* sysdeps/x86_64/multiarch/strcmp.S: Likewise.
* sysdeps/x86_64/multiarch/strcpy.S: Likewise.
* sysdeps/x86_64/multiarch/strcspn.S: Likewise.
* sysdeps/x86_64/multiarch/strspn.S: Likewise.
* sysdeps/x86_64/multiarch/wcscpy.S: Likewise.
* sysdeps/x86_64/multiarch/wmemcmp.S: Likewise.
 
This patch adds _dl_x86_cpu_features to rtld_global in the x86 ld.so
and initializes it early, before __libc_start_main is called, so that
cpu_features is always available when it is used and we can avoid
calling __init_cpu_features in IFUNC selectors.
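
A hedged sketch of the flow described above (simplified; based on the file
and function names in the ChangeLog below, not a verbatim copy of the
sources):

  /* In the dynamic linker startup path, before any IFUNC selector runs:  */
  static inline void
  dl_platform_init_sketch (void)
  {
    init_cpu_features (&GLRO(dl_x86_cpu_features));
  }

  /* Afterwards, HAS_CPU_FEATURE (X) and HAS_ARCH_FEATURE (X) simply read
     GLRO(dl_x86_cpu_features), so selectors no longer have to call
     __init_cpu_features themselves.  */
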
* sysdeps/i386/dl-machine.h: Include <cpu-features.c>.
(dl_platform_init): Call init_cpu_features.
* sysdeps/i386/dl-procinfo.c (_dl_x86_cpu_features): New.
* sysdeps/i386/i686/cacheinfo.c
(DISABLE_PREFERRED_MEMORY_INSTRUCTION): Removed.
* sysdeps/i386/i686/multiarch/Makefile (aux): Remove init-arch.
* sysdeps/i386/i686/multiarch/Versions: Removed.
* sysdeps/i386/i686/multiarch/ifunc-defines.sym (KIND_OFFSET):
Removed.
* sysdeps/i386/ldsodefs.h: Include <cpu-features.h>.
* sysdeps/unix/sysv/linux/x86/Makefile
(libpthread-sysdep_routines): Remove init-arch.
* sysdeps/unix/sysv/linux/x86_64/dl-procinfo.c: Include
<sysdeps/x86_64/dl-procinfo.c> instead of
<sysdeps/generic/dl-procinfo.c>.
* sysdeps/x86/Makefile [$(subdir) == csu] (gen-as-const-headers):
Add cpu-features-offsets.sym and rtld-global-offsets.sym.
[$(subdir) == elf] (sysdep-dl-routines): Add dl-get-cpu-features.
[$(subdir) == elf] (tests): Add tst-get-cpu-features.
[$(subdir) == elf] (tests-static): Add
tst-get-cpu-features-static.
* sysdeps/x86/Versions: New file.
* sysdeps/x86/cpu-features-offsets.sym: Likewise.
* sysdeps/x86/cpu-features.c: Likewise.
* sysdeps/x86/cpu-features.h: Likewise.
* sysdeps/x86/dl-get-cpu-features.c: Likewise.
* sysdeps/x86/libc-start.c: Likewise.
* sysdeps/x86/rtld-global-offsets.sym: Likewise.
* sysdeps/x86/tst-get-cpu-features-static.c: Likewise.
* sysdeps/x86/tst-get-cpu-features.c: Likewise.
* sysdeps/x86_64/dl-procinfo.c: Likewise.
* sysdeps/x86_64/cacheinfo.c (__cpuid_count): Removed.
Assume USE_MULTIARCH is defined and don't check it.
(is_intel): Replace __cpu_features with GLRO(dl_x86_cpu_features).
(is_amd): Likewise.
(max_cpuid): Likewise.
(intel_check_word): Likewise.
(__cache_sysconf): Don't call __init_cpu_features.
(__x86_preferred_memory_instruction): Removed.
(init_cacheinfo): Don't call __init_cpu_features. Replace
__cpu_features with GLRO(dl_x86_cpu_features).
* sysdeps/x86_64/dl-machine.h: Include <cpu-features.c>.
(dl_platform_init): Call init_cpu_features.
* sysdeps/x86_64/ldsodefs.h: Include <cpu-features.h>.
* sysdeps/x86_64/multiarch/Makefile (aux): Remove init-arch.
* sysdeps/x86_64/multiarch/Versions: Removed.
* sysdeps/x86_64/multiarch/cacheinfo.c: Likewise.
* sysdeps/x86_64/multiarch/init-arch.c: Likewise.
* sysdeps/x86_64/multiarch/ifunc-defines.sym (KIND_OFFSET):
Removed.
* sysdeps/x86_64/multiarch/init-arch.h: Rewrite.
 
Using a ({ }) construct avoids the "value computed is not used" warning
that a simple () construct causes.
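
A small, self-contained illustration of the difference (hypothetical
macros, not the glibc code this commit touches):

  extern int *slot;

  /* Expression form: the dereference computes a value the caller then
     discards, so GCC's -Wunused-value typically reports
     "value computed is not used".  */
  #define ADVANCE_EXPR()  (*slot++)

  /* Statement-expression form: discarding the result of the ({ }) block
     does not trigger the warning.  */
  #define ADVANCE_STMT()  ({ *slot++; })

  void
  advance_both (void)
  {
    ADVANCE_EXPR ();   /* may warn: value computed is not used */
    ADVANCE_STMT ();   /* no warning */
  }
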
 
[BZ #18740]
* sysdeps/x86_64/fpu/Makefile (double-vlen2-arch-ext-cflags,
float-vlen4-arch-ext-cflags): Removed.
* math/Makefile (CFLAGS-test-double-vlen2-wrappers.c,
CFLAGS-test-float-vlen4-wrappers.c): Likewise.
 
The conform tests flag the "aligned" symbol used inside the attributes,
so rename it to __aligned__ like other headers.
 
This untested patch removes the custom lowlevellock.h on hppa. It seems
to contain an implementation equivalent to the generic lowlevellock.h.
 
This fixes the conform test for the sigaction.h header and makes it match
all the other arches.
 
The semi-recent SYSCALL_CANCEL inclusion broke hppa due to the sysdep.h
headers not including the unix/sysdep.h headers. Rework the includes so
we match the other ports:
* hppa/sysdep.h:
- Do not include sys/syscall.h as the unix sysdep.h headers do it.
- Do not include config.h as libc-symbols.h does it, and it has no
#ifdef multiple-include protection, and it breaks when some files
do things like #undef __OPTIMIZE__.
* sysdeps/unix/sysv/linux/hppa/sysdep-cancel.h:
- Drop the generic/sysdep.h as the unix sysdep.h headers include it.
* sysdeps/unix/sysv/linux/hppa/sysdep.h:
- Change to the unix & core hppa sysdep header stacks.
- Undef a few defines that the core headers already set up for us.
 
The semi-recent SYSCALL_CANCEL macro imposes a slight nuance on the
implementation of INLINE_SYSCALL: the nr argument cannot be expanded
directly but must be passed on to another macro which may expand it.
Most arches don't notice because INLINE_SYSCALL is defined in terms
of INTERNAL_SYSCALL which has the additional layer of expansion, but
on hppa, it was attempting to expand it directly. That causes build
errors like so:
../sysdeps/unix/sysv/linux/sigsuspend.c: In function '__sigsuspend':
../sysdeps/unix/sysv/linux/sigsuspend.c:31:62: error:
implicit declaration of function 'LOAD_ARGS___SYSCALL_NARGS'
../sysdeps/unix/sysv/linux/sigsuspend.c:31:304: error:
called object 'LOAD_ARGS___SYSCALL_NARGS(set, 8)' is not a function
So rewrite hppa's INLINE_SYSCALL to use INTERNAL_SYSCALL like other
arches do. This is also a nice clean up as the two macros had quite
a bit of duplicated logic.
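
For reference, a hedged sketch of an INTERNAL_SYSCALL-based definition
(simplified from the pattern the other Linux ports use; not the literal
hppa code):

  #undef INLINE_SYSCALL
  #define INLINE_SYSCALL(name, nr, args...)                            \
    ({                                                                 \
      INTERNAL_SYSCALL_DECL (err);                                     \
      long int sc_ret = INTERNAL_SYSCALL (name, err, nr, args);        \
      if (__glibc_unlikely (INTERNAL_SYSCALL_ERROR_P (sc_ret, err)))   \
        {                                                              \
          __set_errno (INTERNAL_SYSCALL_ERRNO (sc_ret, err));          \
          sc_ret = -1;                                                 \
        }                                                              \
      sc_ret;                                                          \
    })

Because nr is only passed through to INTERNAL_SYSCALL, the additional
layer of macro expansion that SYSCALL_CANCEL relies on happens there.
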
 
On x86, the linker in binutils 2.26 and newer consolidates R_*_JUMP_SLOT
with R_*_GLOB_DAT relocations against the same symbol.  This patch extends
the local PLT reference check to support alternate relocations.
[BZ #18078]
* scripts/check-localplt.awk: Support alternate relocations.
* scripts/localplt.awk: Also check relocations in DT_RELA/DT_REL
sections.
* sysdeps/unix/sysv/linux/i386/localplt.data: Mark free and
malloc entries with + REL R_386_GLOB_DAT.
* sysdeps/x86_64/localplt.data: New file.
 
[BZ #18731]
* sysdeps/x86_64/fpu/math-tests-arch.h: Added AVX runtime check.
* sysdeps/x86_64/fpu/test-double-vlen4.c: Likewise.
* sysdeps/x86_64/fpu/test-float-vlen8.c: Likewise.
 
This file was updated with an educated guess as to the symbols needed,
but on ia64, we don't have __tls_get_addr calls, so drop it from the
list.
 
Way back in 2005 the atomic_exchange_and_add function was cleaned up to
avoid the explicit size checking and instead let gcc handle things itself.
Unfortunately that change ended up leaving behind a cast to int, even when
the incoming value was a long.  This has flown under the radar for a long
time due to the function not being heavily used in the tree (especially as
a full 64-bit field), but a recent change to semaphores made some nptl tests
fail reliably.  This is due to the code packing two 32-bit values into one
64-bit variable (where the high 32 bits contained the number of waiters), and
then the whole variable being atomically updated between threads.  On ia64,
that meant we never atomically updated the count, so sometimes the sem_post
would not wake up the waiters.
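
A small self-contained illustration of the truncation (not the ia64 atomic
code itself; the packed layout mirrors what is described above):

  #include <stdint.h>

  /* The waiter count lives in the high 32 bits of a packed 64-bit word.  */
  static uint64_t sem_word;

  static void
  add_waiter_broken (void)
  {
    uint64_t one_waiter = (uint64_t) 1 << 32;
    /* The old helper effectively cast the addend to int, so the
       increment is truncated to 0 and the count never changes.  */
    sem_word += (int) one_waiter;
  }

  static void
  add_waiter_fixed (void)
  {
    sem_word += (uint64_t) 1 << 32;   /* adds one waiter as intended */
  }

(The real code performs the addition atomically; plain += is used here only
to show the truncation.)
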
 
This define made more sense in the pre-sanitized kernel headers days,
but since we require kernel versions that are sanitized, we don't need
this hack anymore.
 
and elf/tst-protected1b for Nios II.
 
in setcontext/swapcontext.
 
This patch fixes the AVX512 IFUNC implementations, the wrappers to the
AVX2 versions, and the KNL expf implementation.
* sysdeps/x86_64/fpu/multiarch/svml_d_cos8_core.S: Fixed AVX512 IFUNC.
* sysdeps/x86_64/fpu/multiarch/svml_d_exp8_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_log8_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_pow8_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sin8_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_cosf16_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_powf16_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sinf16_core.S: Likewise.
* sysdeps/x86_64/fpu/svml_d_wrapper_impl.h: Fixed wrappers to AVX2.
* sysdeps/x86_64/fpu/svml_s_wrapper_impl.h: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core_avx512.S: Fixed KNL
implementation.
 
Fixes elf/tst-protected1a and elf/tst-protected1b tests.
Depends on a gcc patch that makes protected visibility data non-local:
https://gcc.gnu.org/ml/gcc-patches/2015-07/msg01871.html
and on a binutils patch so R_*_GLOB_DAT relocs are used for it:
https://sourceware.org/ml/binutils/2015-07/msg00247.html
 
Fixes elf/tst-protected1a and elf/tst-protected1b tests.
Depends on a gcc patch that makes protected visibility data non-local:
https://gcc.gnu.org/ml/gcc-patches/2015-07/msg01871.html
and on a binutils patch so R_*_GLOB_DAT relocs are used for it:
https://sourceware.org/ml/binutils/2015-07/msg00246.html
 
Since ia64 is little endian, sa_flags has to come before the padding
when splitting it from 64 bits to 32 bits.
Reported-by: Joseph Myers <joseph@codesourcery.com>
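
A hedged sketch of the layout constraint (field names are illustrative, not
the actual ia64 header):

  /* On a little-endian 64-bit ABI, the kernel's 64-bit flags slot starts
     at its low-order bytes.  Splitting that slot into a 32-bit sa_flags
     plus 32 bits of padding therefore requires sa_flags to come first so
     it overlays the bytes the kernel actually reads and writes.  */
  struct sigaction_sketch
  {
    void (*handler) (int);
    int sa_flags;              /* low half of the former 64-bit slot */
    int reserved_pad;          /* high half, now padding */
    unsigned long sa_mask[2];  /* placeholder for the signal mask */
  };
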
 
These two fields have dedicated types, so change the ia64 header to match
all the other arches. This fixes the conform test for msg.h.
 
This fixes the conform test for the stat.h header and makes it match
all the other arches.
 
This fixes the conform test for the sigaction.h header and makes it match
all the other arches.
 
This fixes the conform test for the siginfo.h header and makes it match
all the other arches.
 
It turns out tile suffered from the same problem as S390.  However,
disabling CFI information for __startcontext on tile was not
sufficient to fix the problem; I think the backtracer will just
blindly try to follow the link register (lr) in that case.
Instead, the change adds a cfi_undefined directive for "lr"
and then arranges to call __startcontext directly when the new
context starts, rather than just synthesizing a return to it.
In addition to making the control flow a bit easier to understand,
this also allows the cfi_undefined directive to be placed so that
it is in force at the address that the "lr" from the called
function points to.