path: root/sysdeps
Commit message · Author · Date · Files · Lines (-/+)
* hppa: xfail debug/tst-ssp-1 when have-ssp is yes (gcc-12 and later)
  John David Anglin · 2023-07-01 · 1 file · -0/+4
* hurd: Make getrandom return ENOSYS when /dev/random is not set up
  Samuel Thibault · 2023-07-01 · 1 file · -2/+7
  So that callers (e.g. __arc4random_buf) don't try calling it again.
* ld.so: Always use MAP_COPY to map the first segment [BZ #30452]
  H.J. Lu · 2023-06-30 · 3 files · -0/+9
  The first segment in a shared library may be read-only, not executable.
  To support LD_PREFER_MAP_32BIT_EXEC on such shared libraries, we also
  check MAP_DENYWRITE to decide if MAP_32BIT should be passed to mmap.
  Normally the first segment is mapped with MAP_COPY, which is defined as
  (MAP_PRIVATE | MAP_DENYWRITE).  But when the segment alignment is greater
  than the page size, MAP_COPY was not used, so that enough space could be
  allocated to align the segment properly.  Map the first segment with
  MAP_COPY in this case as well to fix BZ #30452.
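  For illustration only, a minimal sketch of the flag choice the commit
  describes (not the actual elf/dl-map-segments.h code; the helper name and
  the read-only mapping are assumptions):

      #include <stddef.h>
      #include <sys/mman.h>

      #ifndef MAP_COPY
      # define MAP_COPY (MAP_PRIVATE | MAP_DENYWRITE)  /* alias quoted above */
      #endif

      /* Hypothetical helper: after the fix, the first load segment is mapped
         with MAP_COPY even when extra space is reserved for over-aligned
         segments, so MAP_DENYWRITE stays visible to the MAP_32BIT decision
         made for LD_PREFER_MAP_32BIT_EXEC.  */
      static void *
      map_first_segment (int fd, size_t maplength)
      {
        return mmap (NULL, maplength, PROT_READ, MAP_COPY, fd, 0);
      }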
* aarch64: Add vector implementations of exp routines
  Joe Ramsay · 2023-06-30 · 15 files · -1/+597
  Optimised implementations for single and double precision, Advanced SIMD
  and SVE, copied from Arm Optimized Routines.  As previously, data tables
  are used via a barrier to prevent overly aggressive constant inlining.
  Special-case handlers are marked NOINLINE to avoid incurring the penalty
  of switching call standards unnecessarily.
  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
* aarch64: Add vector implementations of log routines
  Joe Ramsay · 2023-06-30 · 15 files · -1/+563
  Optimised implementations for single and double precision, Advanced SIMD
  and SVE, copied from Arm Optimized Routines.  Log lookup table added as
  HIDDEN symbol to allow it to be shared between AdvSIMD and SVE variants.
  As previously, data tables are used via a barrier to prevent overly
  aggressive constant inlining.  Special-case handlers are marked NOINLINE
  to avoid incurring the penalty of switching call standards unnecessarily.
  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
* aarch64: Add vector implementations of sin routines
  Joe Ramsay · 2023-06-30 · 13 files · -6/+430
  Optimised implementations for single and double precision, Advanced SIMD
  and SVE, copied from Arm Optimized Routines.  As previously, data tables
  are used via a barrier to prevent overly aggressive constant inlining.
  Special-case handlers are marked NOINLINE to avoid incurring the penalty
  of switching call standards unnecessarily.
  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
* aarch64: Add vector implementations of cos routines
  Joe Ramsay · 2023-06-30 · 13 files · -123/+609
  Replace the loop-over-scalar placeholder routines with optimised
  implementations from Arm Optimized Routines (AOR).  Also add some headers
  containing utilities for aarch64 libmvec routines, and update
  libm-test-ulps.

  Data tables for new routines are used via a pointer with a barrier on it,
  in order to prevent overly aggressive constant inlining in GCC.  This
  allows a single adrp, combined with offset loads, to be used for every
  constant in the table.

  Special-case handlers are marked NOINLINE in order to confine the
  save/restore overhead of switching from vector to normal calling standard.
  This way we only incur the extra memory access in the exceptional cases.
  NOINLINE definitions have been moved to math_private.h in order to reduce
  duplication.

  AOR exposes a config option, WANT_SIMD_EXCEPT, to enable selective masking
  (and later fixing up) of invalid lanes, in order to trigger fp exceptions
  correctly (AdvSIMD only).  This is tested and maintained in AOR, however
  it is configured off at source level here for performance reasons.  We
  keep the WANT_SIMD_EXCEPT blocks in routine sources to greatly simplify
  the upstreaming process from AOR to glibc.
  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
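  As an aside, a minimal sketch of the "pointer with a barrier" idiom
  mentioned in these aarch64 entries (the macro name, table layout and
  polynomial are illustrative, not the exact glibc/AOR code):

      /* Hypothetical data table; the real routines use larger shared tables.  */
      struct data { double poly[4]; };
      static const struct data data = { { 1.0, -0.5, 0.25, -0.125 } };

      /* The empty asm hides the pointer's value from the optimizer, so GCC
         keeps one address materialisation (adrp) plus offset loads instead
         of inlining each constant separately.  */
      #define ptr_barrier(p)                    \
        ({ __typeof__ (p) __p = (p);            \
           __asm__ ("" : "+r" (__p));           \
           __p; })

      static inline double
      eval_poly (double x)
      {
        const struct data *d = ptr_barrier (&data);
        return d->poly[0]
               + x * (d->poly[1] + x * (d->poly[2] + x * d->poly[3]));
      }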
* Update syscall lists for Linux 6.4
  Joseph Myers · 2023-06-28 · 5 files · -2/+7
  Linux 6.4 adds the riscv_hwprobe syscall on riscv and enables memfd_secret
  on s390.  Update syscall-names.list and regenerate the arch-syscall.h
  headers with build-many-glibcs.py update-syscalls.
  Tested with build-many-glibcs.py.
* linux: Return unsupported if procfs cannot be mounted on tst-ttyname-namespace
  Adhemerval Zanella · 2023-06-28 · 1 file · -12/+16
  Trying to mount procfs can fail for multiple reasons: proc is locked by
  the container configuration, the mount syscall is filtered by a Linux
  Security Module, or any other security or hardening mechanism that Linux
  might eventually add.  The test does require a fresh procfs mount that is
  not bound to the parent, and fully fixing this would require changing how
  the container is created (which is out of the scope of the test itself).
  Instead of trying to foresee every possible scenario, fail as unsupported
  if procfs cannot be mounted, as in the sketch below.
  Checked on aarch64-linux-gnu.
  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
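  A rough sketch of that pattern, assuming glibc's test-support helpers (the
  function name and mount arguments here are illustrative, not the actual
  test code):

      #include <stddef.h>
      #include <sys/mount.h>
      #include <support/check.h>

      static void
      mount_procfs_or_skip (const char *target)
      {
        if (mount ("proc", target, "proc", 0, NULL) != 0)
          /* Blocked by a seccomp filter, an LSM, a locked /proc, or some
             other hardening mechanism: the environment cannot run this
             test, so report it as unsupported rather than failing.  */
          FAIL_UNSUPPORTED ("cannot mount procfs on %s: %m", target);
      }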
* linux: Split tst-ttyname
  Adhemerval Zanella · 2023-06-28 · 4 files · -206/+259
  tst-ttyname-direct.c checks ttyname with procfs mounted in bind mode
  (MS_BIND|MS_REC), while tst-ttyname-namespace.c checks it with procfs
  mounted with MS_NOSUID|MS_NOEXEC|MS_NODEV in a new namespace.
  Checked on x86_64-linux-gnu and aarch64-linux-gnu.
  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* x86: Adjust Linux x32 dl-cache inclusion path
  Adhemerval Zanella · 2023-06-26 · 1 file · -1/+1
  It fixes the x32 build failure introduced by 45e2483a6c.
  Checked on an x86_64-linux-gnu-x32 build.
* check_native: Get rid of alloca
  Joe Simmons-Talbott · 2023-06-26 · 1 file · -24/+11
  Use malloc rather than alloca to avoid potential stack overflow.
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* ifaddrs: Get rid of alloca
  Joe Simmons-Talbott · 2023-06-26 · 1 file · -26/+20
  Use scratch_buffer and malloc rather than alloca to avoid potential stack
  overflows.
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
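  Several entries in this log make the same alloca-to-scratch_buffer change;
  a minimal sketch of the pattern, assuming the glibc-internal
  <scratch_buffer.h> helpers (the function and its arguments are
  hypothetical):

      #include <scratch_buffer.h>
      #include <string.h>
      #include <sys/socket.h>

      static int
      copy_addrs (const struct sockaddr *addrs, size_t naddrs)
      {
        struct scratch_buffer buf;
        scratch_buffer_init (&buf);       /* starts as a small on-stack array */

        /* Falls back to the heap when the request does not fit, instead of
           an unbounded alloca on the caller's stack.  */
        if (!scratch_buffer_set_array_size (&buf, naddrs,
                                            sizeof (struct sockaddr)))
          return -1;                      /* errno is set to ENOMEM */

        memcpy (buf.data, addrs, naddrs * sizeof (struct sockaddr));
        /* ... use buf.data ... */

        scratch_buffer_free (&buf);       /* releases heap storage, if any */
        return 0;
      }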
* x86: Make dl-cache.h and readelflib.c not Linux-specific
  Sergey Bugaev · 2023-06-26 · 2 files · -0/+0
  These files could be useful to any port that wants to use ld.so.cache.
  Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* benchtests: fix warn unused result
  Frederic Berat · 2023-06-22 · 1 file · -1/+2
  A few tests needed to properly check the return values of asprintf and
  system calls with _FORTIFY_SOURCE enabled.
  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* sysdeps/powerpc/fpu/tst-setcontext-fpscr.c: Fix warn unused result
  Frederic Berat · 2023-06-22 · 1 file · -1/+3
  The fread return value needs to be checked when fortification is enabled,
  hence use the xfread helper.
  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* sysdeps/{i386, x86_64}/mempcpy_chk.S: fix linknamespace for __mempcpy_chk
  Frederic Berat · 2023-06-22 · 2 files · -2/+2
  On i386 and x86_64, for libc.a specifically, __mempcpy_chk calls mempcpy,
  which leads POSIX routines to call the non-POSIX mempcpy indirectly.  This
  makes the linknamespace test fail when glibc is built with
  _FORTIFY_SOURCE=3.  Since calling mempcpy doesn't bring any benefit for
  libc.a, directly call __mempcpy instead.
  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* hurd: readv: Get rid of alloca
  Joe Simmons-Talbott · 2023-06-20 · 1 file · -16/+12
  Replace alloca with a scratch_buffer to avoid potential stack overflows.
  Checked on i686-gnu and x86_64-linux-gnu.
  Message-Id: <20230619144334.2902429-1-josimmon@redhat.com>
* hurd: writev: Add back cleanup handler
  Joe Simmons-Talbott · 2023-06-20 · 1 file · -3/+7
  There is a potential memory leak for large writes due to writev being a
  "shall occur" cancellation point.  Add back the cleanup handler removed in
  cf30aa43a5917f441c9438aaee201c53c8e1d76b.
  Checked on i686-gnu and x86_64-linux-gnu.
  Message-Id: <20230619143842.2901522-1-josimmon@redhat.com>
* Fix misspellings -- BZ 25337
  Paul Pluzhnikov · 2023-06-19 · 2 files · -2/+2
* tests: replace read by xread
  Frédéric Bérat · 2023-06-19 · 5 files · -7/+9
  With fortification enabled, the return value of read calls needs to be
  checked, as they get the __wur macro applied.

  Note on the read call removed from sysdeps/pthread/tst-cancel20.c and
  sysdeps/pthread/tst-cancel21.c: it is assumed that this second read call
  was there to overcome the race condition between pipe closure and thread
  cancellation that could happen in the original code.  Since this race
  condition was fixed by d0e3ffb7a58854248f1d5e737610d50cd0a60f46, the
  second call seems superfluous.  Hence, instead of checking the return
  value of read, it looks reasonable to simply remove it.
  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* hurd: writev: Get rid of alloca
  Joe Simmons-Talbott · 2023-06-19 · 1 file · -23/+14
  Use a scratch_buffer rather than alloca to avoid potential stack
  overflows.
  Checked on i686-gnu and x86_64-linux-gnu.
  Message-Id: <20230608155844.976554-1-josimmon@redhat.com>
* grantpt: Get rid of alloca
  Joe Simmons-Talbott · 2023-06-18 · 1 file · -1/+11
  Replace alloca with a scratch_buffer to avoid potential stack overflows.
  Message-Id: <20230613191631.1080455-1-josimmon@redhat.com>
* hurd: Add strlcpy, strlcat, wcslcpy, wcslcat to libc.abilist
  Florian Weimer · 2023-06-15 · 1 file · -0/+8
* Add the wcslcpy, wcslcat functions
  Florian Weimer · 2023-06-14 · 35 files · -0/+140
  These functions are about to be added to POSIX, under Austin Group issue
  986.
  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* Implement strlcpy and strlcat [BZ #178]
  Florian Weimer · 2023-06-14 · 35 files · -0/+140
  These functions are about to be added to POSIX, under Austin Group issue
  986.  The fortified strlcat implementation does not raise SIGABRT if the
  destination buffer does not contain a null terminator; it just inherits
  the non-failing behavior of the regular strlcat.
  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
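  A small usage sketch of the expected semantics (the buffer size and
  strings are made up; with this change the declarations come from
  <string.h>):

      #include <stdio.h>
      #include <string.h>

      int
      main (void)
      {
        char buf[8];
        size_t n;

        /* strlcpy always NUL-terminates (for a nonzero size) and returns
           the length of the source string.  */
        n = strlcpy (buf, "hello", sizeof buf);    /* buf = "hello", n = 5 */

        /* strlcat returns the length of the string it tried to create, so
           truncation is detected by comparing against the buffer size.  */
        n = strlcat (buf, ", world", sizeof buf);  /* buf = "hello, ", n = 12 */
        if (n >= sizeof buf)
          puts ("result was truncated");

        printf ("%zu: %s\n", n, buf);
        return 0;
      }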
* tests: replace fgets by xfgets
  Frederic Berat · 2023-06-13 · 1 file · -1/+2
  With fortification enabled, the return value of fgets calls needs to be
  checked, as they get the __wur macro applied.
  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* posix: Handle success in gai_strerror()
  Dridi Boukelmoune · 2023-06-13 · 1 file · -0/+1
  Signed-off-by: Dridi Boukelmoune <dridi.boukelmoune@gmail.com>
  Reviewed-by: Arjun Shankar <arjun@redhat.com>
* LoongArch: Add support for dl_runtime_profile
  caiyinyu · 2023-06-13 · 5 files · -4/+220
  This commit fixes the failing test elf/tst-sprof-basic.
* x86: Make the divisor in setting `non_temporal_threshold` cpu specific
  Noah Goldstein · 2023-06-12 · 4 files · -26/+51
  Different systems prefer different divisors.  From benchmarks [1] so far,
  the following divisors have been found:
      ICX : 2
      SKX : 2
      BWD : 8
  For Intel, we generalize that BWD and older prefer 8 as a divisor, and SKL
  and newer prefer 2.  This number can be further tuned as more benchmarks
  are run.
  [1]: https://github.com/goldsteinn/memcpy-nt-benchmarks
  Reviewed-by: DJ Delorie <dj@redhat.com>
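  A conceptual sketch of that tuning (variable names and the CPU check are
  illustrative, not the actual cacheinfo code):

      /* Hypothetical helper: pick the non-temporal threshold from the
         shared L3 size and a per-microarchitecture divisor.  */
      static unsigned long
      non_temporal_threshold (unsigned long shared_l3_bytes,
                              int is_skylake_or_newer)
      {
        /* Benchmarks so far: Skylake-X/Ice Lake prefer 2, Broadwell and
           older prefer 8.  */
        unsigned long divisor = is_skylake_or_newer ? 2 : 8;
        return shared_l3_bytes / divisor;
      }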
* x86: Refactor Intel `init_cpu_features`
  Noah Goldstein · 2023-06-12 · 1 file · -81/+309
  This patch should have no effect on existing functionality.

  The current code, which has a single switch for model detection and
  setting preferred features, is difficult to follow/extend.  The cases use
  magic numbers and many microarchitectures are missing.  This makes it
  difficult to reason about what is implemented so far and/or how/where to
  add support for new features.

  This patch splits the model detection and preference setting stages so
  that CPU preferences can be set based on a complete list of available
  microarchitectures, rather than based on model magic numbers.
  Reviewed-by: DJ Delorie <dj@redhat.com>
* x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
  Noah Goldstein · 2023-06-12 · 1 file · -27/+43
  Currently `non_temporal_threshold` is set to roughly
  '3/4 * sizeof_L3 / ncores_per_socket'.  This patch updates that value to
  roughly 'sizeof_L3 / 4'.

  The original value (specifically dividing by `ncores_per_socket`) was done
  to limit the amount of other threads' data a `memcpy`/`memset` could
  evict.  Dividing by 'ncores_per_socket', however, leads to exceedingly low
  non-temporal thresholds and to using non-temporal stores in cases where
  REP MOVSB is multiple times faster.

  Furthermore, non-temporal stores are written directly to main memory, so
  using them at a size much smaller than L3 can place soon-to-be-accessed
  data much further away than it otherwise could be.  As well, modern
  machines are able to detect streaming patterns (especially if REP MOVSB is
  used) and provide LRU hints to the memory subsystem.  This in effect caps
  the total amount of eviction at 1/cache_associativity, far below
  meaningfully thrashing the entire cache.

  As best I can tell, the benchmarks that led to this small threshold were
  done comparing non-temporal stores versus standard cacheable stores.  A
  better comparison (linked below) is against REP MOVSB, which, on the
  measured systems, is nearly 2x faster than non-temporal stores at the low
  end of the previous threshold, and within 10% for over 100MB copies (well
  past even the current threshold).  In cases with a low number of threads
  competing for bandwidth, REP MOVSB is ~2x faster up to `sizeof_L3`.

  The divisor of `4` is a somewhat arbitrary value.  From benchmarks it
  seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
  such as Broadwell prefer something closer to `8`.  This patch is meant to
  be followed up by another one to make the divisor cpu-specific, but in the
  meantime (and for easier backporting), this patch settles on `4` as a
  middle ground.  A worked example of the change is shown below.

  Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable stores
  were done using: https://github.com/goldsteinn/memcpy-nt-benchmarks

  Sheets results (also available in pdf on the github):
  https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
  Reviewed-by: DJ Delorie <dj@redhat.com>
  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
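  For scale, a worked example under assumed numbers (a 32 MiB shared L3 and
  16 cores per socket, both hypothetical):

      #include <stdio.h>

      int
      main (void)
      {
        unsigned long l3 = 32UL << 20;   /* 32 MiB shared L3 (assumed) */
        unsigned long ncores = 16;       /* cores per socket (assumed) */

        unsigned long old_thresh = l3 * 3 / 4 / ncores;  /* 3/4 * L3 / ncores */
        unsigned long new_thresh = l3 / 4;               /* L3 / 4 */

        /* Prints: old threshold: 1536 KiB, new threshold: 8192 KiB */
        printf ("old threshold: %lu KiB, new threshold: %lu KiB\n",
                old_thresh >> 10, new_thresh >> 10);
        return 0;
      }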
* pthreads: Use _exit to terminate the tst-stdio1 test
  Florian Weimer · 2023-06-06 · 1 file · -1/+1
  Previously, the exit function was used, but this causes the test to block
  (until the timeout) once exit is changed to lock stdio streams during
  flush.
* linux: Fail as unsupported if personality call is filtered
  Adhemerval Zanella · 2023-06-05 · 1 file · -12/+21
  The default container-management seccomp filter [1] only accepts
  personality(2) with PER_LINUX (0x0), UNAME26 (0x20000), PER_LINUX32
  (0x8), UNAME26 | PER_LINUX32, and 0xffffffff (to query the current
  personality).  Although the documentation only states it is blocked to
  prevent 'enabling BSD emulation' (PER_BSD, not implemented by Linux), the
  repository log shows the real reason is to block the ASLR disable flag
  (ADDR_NO_RANDOMIZE) and other poorly supported emulations.

  So handle EPERM and fail as UNSUPPORTED when we cannot actually check for
  BZ#19408.
  Checked on aarch64-linux-gnu.
  [1] https://github.com/moby/moby/blob/master/profiles/seccomp/default.json
  Reviewed-by: Florian Weimer <fweimer@redhat.com>
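  A simplified sketch of that check, assuming glibc's test-support macros
  (the helper name is hypothetical; the real test does more around it):

      #include <errno.h>
      #include <sys/personality.h>
      #include <support/check.h>

      static void
      set_personality_or_skip (unsigned long persona)
      {
        errno = 0;
        if (personality (persona) == -1 && errno == EPERM)
          /* A container seccomp filter only whitelists a handful of
             personality values; if the call is blocked we cannot exercise
             BZ#19408, so mark the test as unsupported instead of failing.  */
          FAIL_UNSUPPORTED ("personality (0x%lx) is filtered", persona);
      }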
* Remove MAP_VARIABLE from hppa bits/mman.h
  Joseph Myers · 2023-06-05 · 1 file · -1/+0
  As suggested in
  <https://sourceware.org/pipermail/libc-alpha/2023-February/145890.html>,
  remove the MAP_VARIABLE define from the hppa bits/mman.h, for consistency
  with Linux 6.2 which removed the define there.
  Tested with build-many-glibcs.py for hppa-linux-gnu.
* hurd: Fix x86_64 sigreturn restoring bogus reply_port
  Sergey Bugaev · 2023-06-04 · 1 file · -38/+46
  Since the area of the user's stack we use for the registers dump (and
  otherwise as __sigreturn2's stack) can and does overlap the sigcontext, we
  have to be very careful about the order of loads and stores that we do.
  In particular we have to load sc_reply_port before we start clobbering the
  sigcontext.
  Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
* Fix a few more typos I missed in previous round -- BZ 25337
  Paul Pluzhnikov · 2023-06-02 · 5 files · -5/+5
* Use __nonnull for the epoll_wait(2) family of syscalls
  Alejandro Colomar · 2023-06-01 · 1 file · -4/+4
  Signed-off-by: Alejandro Colomar <alx@kernel.org>
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Fix invalid use of NULL in epoll_pwait2(2) test
  Alejandro Colomar · 2023-06-01 · 1 file · -1/+3
  epoll_pwait2(2)'s second argument should be nonnull.  We're going to add
  __nonnull to the prototype, so let's fix the test accordingly.  We can use
  a dummy variable to avoid passing NULL.
  Reported-by: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
  Signed-off-by: Alejandro Colomar <alx@kernel.org>
* getipv4sourcefilter: Get rid of alloca
  Joe Simmons-Talbott · 2023-06-01 · 1 file · -17/+7
  Use a scratch_buffer rather than alloca to avoid potential stack
  overflows.
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* getsourcefilter: Get rid of alloca
  Joe Simmons-Talbott · 2023-06-01 · 1 file · -17/+7
  Use a scratch_buffer rather than alloca to avoid potential stack
  overflows.
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* tests: fix warn unused results
  Frédéric Bérat · 2023-06-01 · 2 files · -3/+9
  With fortification enabled, the return values of a few function calls need
  to be checked, as they get the __wur macro applied.
  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* tests: replace write by xwrite
  Frédéric Bérat · 2023-06-01 · 7 files · -11/+19
  Using write without checking the result leads to warn-unused-result
  warnings when __wur is enabled.
  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* x86-64: Use YMM registers in memcmpeq-evex.S
  H.J. Lu · 2023-06-01 · 1 file · -1/+1
  Since the assembly source file with the -evex suffix should use YMM
  registers, not ZMM registers, include x86-evex256-vecs.h by default to use
  YMM registers in memcmpeq-evex.S.
  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* io: Fix F_GETLK, F_SETLK, and F_SETLKW for powerpc64
  Adhemerval Zanella · 2023-05-31 · 1 file · -0/+6
  Unlike other 64-bit architectures, powerpc64 defines the LFS POSIX lock
  constants with values similar to the 32-bit ABI, which are meant to be
  used with the fcntl64 syscall.  Since the powerpc64 kernel ABI does not
  have fcntl64, the constants are adjusted with the FCNTL_ADJUST_CMD macro.

  Commit 4d0fe291aed3a476a changed the logic so that the generic LFS
  constant values are equal to the default values, which is now wrong for
  powerpc64.  Fix the values by explicitly defining the previous glibc
  constants (powerpc64 does not need to use the 32-bit kABI values, but
  doing so simplifies FCNTL_ADJUST_CMD, which has to be kept for
  compatibility).

  Checked on powerpc64-linux-gnu and powerpc-linux-gnu.
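  A conceptual sketch of what FCNTL_ADJUST_CMD has to do on such a target
  (not the actual glibc macro; the helper name is made up):

      #define _LARGEFILE64_SOURCE 1   /* for the F_*64 names */
      #include <fcntl.h>

      /* Map the 32-bit-style LFS lock commands to the native 64-bit
         commands before issuing the fcntl syscall.  Written as if/else so
         the sketch also compiles on targets where the LFS and non-LFS
         values coincide.  */
      static int
      adjust_lock_cmd (int cmd)
      {
        if (cmd == F_GETLK64)
          return F_GETLK;
        if (cmd == F_SETLK64)
          return F_SETLK;
        if (cmd == F_SETLKW64)
          return F_SETLKW;
        return cmd;
      }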
* Fix misspellings in sysdeps/ -- BZ 25337
  Paul Pluzhnikov · 2023-05-30 · 153 files · -214/+214
* io: Fix record locking constants on 32-bit arch with 64-bit default time_t (BZ#30477)
  Adhemerval Zanella · 2023-05-30 · 1 file · -1/+1
  For architectures with default 64-bit time_t support, the kernel does not
  provide separate LFS and non-LFS values for F_GETLK, F_SETLK, and
  F_SETLKW; the default values used for 64-bit architectures are used
  instead.  This might be considered an ABI break, but the currently
  exported values are bogus anyway.  The POSIX lockf is not affected since
  it is aliased to lockf64, which already uses the LFS values.
  Checked on i686-linux-gnu, and the new tests on riscv32.
  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* LoongArch: Fix inconsistency in SHMLBA macro values between glibc and kernel
  caiyinyu · 2023-05-30 · 1 file · -0/+24
  The LoongArch glibc was using the SHMLBA value from common code, which is
  __getpagesize () (16k), but this is inconsistent with the SHMLBA value in
  the kernel, which is SZ_64K (64k).  This caused several shmat-related
  tests in LTP (Linux Test Project) to fail.  This commit fixes the issue by
  ensuring that glibc's SHMLBA value matches the value used in the kernel,
  as on other architectures.
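  Illustrative only, roughly what the LoongArch-specific definition has to
  provide (this is not the literal header text):

      /* Generic fallback: SHMLBA follows the page size (16 KiB pages on
         LoongArch).  The kernel instead aligns System V shared memory
         attachments to 64 KiB, so the architecture-specific header must
         say: */
      #define SHMLBA (64 * 1024)   /* matches the kernel's SZ_64K */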
* riscv: Add the clone3 wrapper
  Adhemerval Zanella · 2023-05-29 · 2 files · -0/+80
  It follows the internal signature:

      extern int clone3 (struct clone_args *__cl_args, size_t __size,
                         int (*__func) (void *__arg), void *__arg);

  Checked on riscv64-linux-gnu-rv64imafdc-lp64d.
  Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com>
* posix: Add error message for EAI_OVERFLOW
  Dridi Boukelmoune · 2023-05-29 · 1 file · -0/+1
  Signed-off-by: Dridi Boukelmoune <dridi.boukelmoune@gmail.com>
  Reviewed-by: Arjun Shankar <arjun@redhat.com>