path: root/nptl
Each entry below lists the commit message, author, date, number of files changed, and lines changed (-removed/+added).
* linux: Avoid shifting a negative signed on POSIX timer interface (Adhemerval Zanella, 2022-10-20, 1 file, -1/+1)
  The current macros use pid as a signed value, which triggers a compiler warning for process and thread timers. Replace MAKE_PROCESS_CPUCLOCK with a static inline function that expects the pid as unsigned. This is similar to what Linux does internally. Checked on x86_64-linux-gnu. Reviewed-by: Arjun Shankar <arjun@redhat.com>
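  As a rough illustration of the change described above, a static inline helper that takes the pid as unsigned avoids the implementation-defined left shift of a negative value; the encoding mirrors the Linux kernel's CPU-clock ids (a sketch only, names and layout are assumptions):

      /* Build a process CPU-clock id: the 3-bit clock type sits in the low
         bits, the complemented pid in the remaining bits.  Because pid is
         unsigned, the left shift is well defined.  */
      static inline clockid_t
      make_process_cpuclock (unsigned int pid, clockid_t clock)
      {
        return ((~pid) << 3) | clock;
      }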
* nptl: Convert tst-setuid2 to test-driver (Yu Chien Peter Lin, 2022-10-03, 1 file, -37/+15)
  Use <support/test-driver.c> and replace the pthread calls with their xpthread equivalents. Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Use atomic_exchange_release/acquire (Wilco Dijkstra, 2022-09-26, 2 files, -2/+2)
  Rename atomic_exchange_rel/acq to use atomic_exchange_release/acquire since these map to the standard C11 atomic builtins. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
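  For reference, the renamed macros correspond directly to the standard C11 forms; a minimal stand-alone equivalent using <stdatomic.h> (illustrative only, not glibc's internal macros):

      #include <stdatomic.h>

      static int
      unlock_futex (atomic_int *futex)
      {
        /* glibc's atomic_exchange_release (futex, 0) corresponds to:
           store 0 with release ordering and return the previous value.  */
        return atomic_exchange_explicit (futex, 0, memory_order_release);
      }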
* Use C11 atomics instead of atomic_decrement_and_test (Wilco Dijkstra, 2022-09-23, 2 files, -104/+1)
  Replace atomic_decrement_and_test with atomic_fetch_add_relaxed. These are simple counters which do not protect any shared data from concurrent accesses. Also remove the unused file cond-perf.c. Passes regress on AArch64. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
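  In C11 terms, the old decrement-and-test idiom becomes a relaxed fetch-and-add of -1 whose return value (the previous counter) is tested; a hedged sketch with an illustrative counter, not glibc code:

      #include <stdatomic.h>
      #include <stdbool.h>

      static atomic_uint counter = 1;

      static bool
      decrement_and_test (void)
      {
        /* Old style: atomic_decrement_and_test (&counter).
           C11 style: fetch_add returns the previous value, so the counter
           reached zero exactly when the fetched value was 1.  Relaxed MO is
           enough because the counter does not guard other shared data.  */
        return atomic_fetch_add_explicit (&counter, -1, memory_order_relaxed) == 1;
      }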
* Use C11 atomics instead of atomic_increment(_val) (Wilco Dijkstra, 2022-09-23, 3 files, -3/+3)
  Replace atomic_increment and atomic_increment_val with atomic_fetch_add_relaxed. One case in sem_post.c uses release semantics (see comment above it). The others are simple counters and do not protect any shared data from concurrent accesses. Passes regress on AArch64. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Use C11 atomics instead of atomic_and/or (Wilco Dijkstra, 2022-09-23, 4 files, -4/+4)
  Replace the 4 uses of atomic_and and atomic_or with atomic_fetch_and_acquire and atomic_fetch_or_acquire. This preserves the existing implied semantics; however, relaxed MO on the FUTEX_OWNER_DIED accesses may be correct. Passes regress on AArch64. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Use '%z' instead of '%Z' on printf functions (Adhemerval Zanella Netto, 2022-09-22, 2 files, -14/+14)
  The Z modifier is a nonstandard synonym for z (it predates z itself), and a compiler might issue a warning for the invalid conversion specifier. Reviewed-by: Florian Weimer <fweimer@redhat.com>
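  For example, printing a size_t value portably uses the standard z length modifier:

      #include <stdio.h>
      #include <stddef.h>

      int
      main (void)
      {
        size_t n = sizeof (long);
        /* Nonstandard, may trigger -Wformat warnings:  printf ("%Zu\n", n);  */
        printf ("%zu\n", n);   /* Standard C99 form.  */
        return 0;
      }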
* Use relaxed atomics since there is no MO dependence (Wilco Dijkstra, 2022-09-13, 2 files, -2/+3)
  Replace the 3 uses of atomic_bit_set and atomic_bit_test_set with atomic_fetch_or_relaxed. Using relaxed MO is correct since the atomics are used to ensure memory is released only once. Reviewed-by: Florian Weimer <fweimer@redhat.com>
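  The equivalent C11 pattern sets the bit with a relaxed fetch-or and inspects the returned old value to see whether it was already set; a minimal sketch (names are illustrative):

      #include <stdatomic.h>
      #include <stdbool.h>

      #define FREED_BIT 1u

      /* Returns true if this caller is the one that set the bit, i.e. the
         resource had not been released yet.  Relaxed MO suffices when the
         flag only guarantees a single release, as described above.  */
      static bool
      mark_freed_once (atomic_uint *flags)
      {
        unsigned int old = atomic_fetch_or_explicit (flags, FREED_BIT,
                                                     memory_order_relaxed);
        return (old & FREED_BIT) == 0;
      }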
* Use C11 atomics instead of atomic_decrement(_val) (Wilco Dijkstra, 2022-09-09, 2 files, -2/+2)
  Replace atomic_decrement and atomic_decrement_val with atomic_fetch_add_relaxed. Reviewed-by: DJ Delorie <dj@redhat.com>
* stdlib: Add arc4random, arc4random_buf, and arc4random_uniform (BZ #4417) (Adhemerval Zanella Netto, 2022-07-22, 1 file, -1/+2)
  The implementation is based on scalar ChaCha20 with a per-thread cache. It uses getrandom or /dev/urandom as a fallback to get the initial entropy, and reseeds the internal state on every 16 MB of consumed buffer.

  To improve performance and lower memory consumption, the per-thread cache is allocated lazily on the first call to an arc4random function, and if the memory allocation fails getentropy or /dev/urandom is used as a fallback. The cache is also cleared on thread exit only if it was initialized (so if arc4random is not called it is not touched).

  Although it is lock-free, arc4random is still not async-signal-safe (the per-thread state is not updated atomically).

  The ChaCha20 implementation is based on RFC 8439 [1], omitting the final XOR of the keystream with the plaintext because the plaintext is a stream of zeros. This strategy is similar to what OpenBSD arc4random does.

  arc4random_uniform is based on previous work by Florian Weimer, where the algorithm follows Jérémie Lumbroso's paper "Optimal Discrete Uniform Generation from Coin Flips, and Applications" (2013) [2], which credits Donald E. Knuth and Andrew C. Yao, "The complexity of nonuniform random number generation" (1976), for solving the general case. The main advantage of this method is that the unit of randomness is not the uniform random variable (uint32_t), but a random bit. It optimizes the internal buffer sampling by initially consuming a 32-bit random variable and then sampling byte per byte. Depending on the upper bound requested, it might lead to better CPU utilization.

  Checked on x86_64-linux-gnu, aarch64-linux, and powerpc64le-linux-gnu.

  Co-authored-by: Florian Weimer <fweimer@redhat.com>
  Reviewed-by: Yann Droneaud <ydroneaud@opteya.com>

  [1] https://datatracker.ietf.org/doc/html/rfc8439
  [2] https://arxiv.org/pdf/1304.1916.pdf
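  To make the bit-based sampling concrete, below is a hedged sketch of Lumbroso's "Fast Dice Roller" idea, consuming one random bit per step rather than a whole uint32_t. It is not the glibc implementation, which additionally optimizes buffer consumption as described above; random_bit is an assumed helper.

      #include <stdint.h>

      /* Assumed helper: returns one uniformly random bit (0 or 1).  */
      extern unsigned int random_bit (void);

      /* Uniform value in [0, n), n > 0, using random bits as the unit of
         randomness (Lumbroso 2013).  Sketch only.  */
      static uint32_t
      uniform_from_bits (uint32_t n)
      {
        uint64_t v = 1;   /* Size of the candidate range built so far.  */
        uint64_t c = 0;   /* Candidate value built bit by bit.  */
        for (;;)
          {
            v <<= 1;
            c = (c << 1) | random_bit ();
            if (v >= n)
              {
                if (c < n)
                  return c;       /* Accepted: uniform in [0, n).  */
                v -= n;           /* Rejected: recycle the leftover range.  */
                c -= n;
              }
          }
      }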
* nptl: Fix ___pthread_unregister_cancel_restore asynchronous restore (Adhemerval Zanella, 2022-07-13, 1 file, -1/+1)
  This was due to a wrong revert done in commit 404656009b459658. Checked on x86_64-linux-gnu and i686-linux-gnu.
* Replace __libc_multiple_threads with __libc_single_threaded (Adhemerval Zanella, 2022-07-05, 4 files, -36/+1)
  This also fixes the SINGLE_THREAD_P macro for SINGLE_THREAD_BY_GLOBAL: the single-thread.h header was included in the wrong order, since the define needs to come before including sysdeps/unix/sysdep.h. The macro is now moved to a per-arch single-thread.h header, and SINGLE_THREAD_P is used in some more places. Checked on aarch64-linux-gnu and x86_64-linux-gnu.
* Refactor internal-signals.h (Adhemerval Zanella, 2022-06-30, 5 files, -16/+17)
  The main drive is to optimize the internal usage and the required size when sigset_t is embedded in other data structures. On Linux, the currently supported signal set requires up to 8 bytes (16 on MIPS), which is much lower than the user-defined sigset_t (128 bytes). A new internal type internal_sigset_t is added, along with functions to operate on it similar to the ones for sigset_t. internal-signals.h is also refactored to remove unused functions. Besides smaller stack usage in some functions (posix_spawn, abort), it lowers struct pthread by about 120 bytes (112 on MIPS). Checked on x86_64-linux-gnu. Reviewed-by: Arjun Shankar <arjun@redhat.com>
* nptl: Remove unused members from struct pthread (Adhemerval Zanella, 2022-06-29, 1 file, -7/+0)
  It removes both pid_ununsed and cpuclock_offset_ununsed, saving about 12 bytes from struct pthread. Reviewed-by: Arjun Shankar <arjun@redhat.com>
* misc: Optimize internal usage of __libc_single_threaded (Adhemerval Zanella, 2022-06-24, 1 file, -1/+4)
  Add an internal alias to avoid the GOT indirection. On some architectures, __libc_single_threaded may be accessed through copy relocations, and thus the copies also need to be updated. This is done by adding a new internal macro, libc_hidden_data_{proto,def}, which has an additional argument that specifies the alias name (instead of the default __GI_ one). Checked on x86_64-linux-gnu and i686-linux-gnu. Reviewed-by: Fangrui Song <maskray@google.com>
* nptl: Fix __libc_cleanup_pop_restore asynchronous restore (BZ#29214) (Adhemerval Zanella, 2022-06-08, 1 file, -1/+2)
  This was due to a wrong revert done in commit 404656009b459658. Checked on x86_64-linux-gnu.
* nptl: Add backoff mechanism to spinlock loop (Wangyang Guo, 2022-05-09, 1 file, -2/+14)
  When multiple threads are waiting for a lock at the same time, once the lock owner releases the lock, the waiters all see the lock as available and all try to lock it, which may cause an expensive CAS storm. Binary exponential backoff with random jitter is introduced: as try-lock attempts increase, it is more likely that a larger number of threads are competing for the adaptive mutex lock, so the wait time increases exponentially. A random jitter is also added to avoid synchronized try-locks from other threads.

  v2: Remove read-check before try-lock for performance.
  v3: 1. Restore read-check since it works well on some platforms.
      2. Make backoff arch dependent, and enable it for x86_64.
      3. Limit max backoff to reduce latency in large critical sections.
  v4: Fix strict-prototypes error in sysdeps/nptl/pthread_mutex_backoff.h.
  v5: Commit log updated for the regression in large critical sections.

  Result of the pthread-mutex-locks bench. Test platform: Xeon 8280L (2 sockets, 112 CPUs in total). First row: thread number; first column: critical section length. Values: backoff vs upstream, time based, lower is better.

  non-critical-length: 1
         1     2     4     8     16    32    64    112   140
    0    0.99  0.58  0.52  0.49  0.43  0.44  0.46  0.52  0.54
    1    0.98  0.43  0.56  0.50  0.44  0.45  0.50  0.56  0.57
    2    0.99  0.41  0.57  0.51  0.45  0.47  0.48  0.60  0.61
    4    0.99  0.45  0.59  0.53  0.48  0.49  0.52  0.64  0.65
    8    1.00  0.66  0.71  0.63  0.56  0.59  0.66  0.72  0.71
    16   0.97  0.78  0.91  0.73  0.67  0.70  0.79  0.80  0.80
    32   0.95  1.17  0.98  0.87  0.82  0.86  0.89  0.90  0.90
    64   0.96  0.95  1.01  1.01  0.98  1.00  1.03  0.99  0.99
    128  0.99  1.01  1.01  1.17  1.08  1.12  1.02  0.97  1.02

  non-critical-length: 32
         1     2     4     8     16    32    64    112   140
    0    1.03  0.97  0.75  0.65  0.58  0.58  0.56  0.70  0.70
    1    0.94  0.95  0.76  0.65  0.58  0.58  0.61  0.71  0.72
    2    0.97  0.96  0.77  0.66  0.58  0.59  0.62  0.74  0.74
    4    0.99  0.96  0.78  0.66  0.60  0.61  0.66  0.76  0.77
    8    0.99  0.99  0.84  0.70  0.64  0.66  0.71  0.80  0.80
    16   0.98  0.97  0.95  0.76  0.70  0.73  0.81  0.85  0.84
    32   1.04  1.12  1.04  0.89  0.82  0.86  0.93  0.91  0.91
    64   0.99  1.15  1.07  1.00  0.99  1.01  1.05  0.99  0.99
    128  1.00  1.21  1.20  1.22  1.25  1.31  1.12  1.10  0.99

  non-critical-length: 128
         1     2     4     8     16    32    64    112   140
    0    1.02  1.00  0.99  0.67  0.61  0.61  0.61  0.74  0.73
    1    0.95  0.99  1.00  0.68  0.61  0.60  0.60  0.74  0.74
    2    1.00  1.04  1.00  0.68  0.59  0.61  0.65  0.76  0.76
    4    1.00  0.96  0.98  0.70  0.63  0.63  0.67  0.78  0.77
    8    1.01  1.02  0.89  0.73  0.65  0.67  0.71  0.81  0.80
    16   0.99  0.96  0.96  0.79  0.71  0.73  0.80  0.84  0.84
    32   0.99  0.95  1.05  0.89  0.84  0.85  0.94  0.92  0.91
    64   1.00  0.99  1.16  1.04  1.00  1.02  1.06  0.99  0.99
    128  1.00  1.06  0.98  1.14  1.39  1.26  1.08  1.02  0.98

  There is a regression in large critical sections, but adaptive mutexes are aimed at "quick" locks; small critical sections are more common when users choose adaptive pthread_mutex.

  Signed-off-by: Wangyang Guo <wangyang.guo@intel.com>
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
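  A hedged sketch of the general technique, binary exponential backoff with random jitter around the CAS retry; the names, limits, and jitter source below are made up and this is not the glibc code:

      #include <stdatomic.h>

      #define MAX_BACKOFF 512

      static inline unsigned int
      cheap_rand (unsigned int *seed)
      {
        *seed = *seed * 1103515245u + 12345u;   /* Simple LCG for jitter.  */
        return *seed >> 16;
      }

      static void
      spin_lock_backoff (atomic_int *lock, unsigned int *seed)
      {
        int expected = 0;
        unsigned int backoff = 1;
        while (!atomic_compare_exchange_weak_explicit (lock, &expected, 1,
                                                       memory_order_acquire,
                                                       memory_order_relaxed))
          {
            /* Wait an exponentially growing, jittered number of spins before
               retrying, so contending threads do not retry in lockstep.  */
            unsigned int spins = (cheap_rand (seed) % backoff) + 1;
            for (volatile unsigned int i = 0; i < spins; i++)
              ;   /* Busy-wait; real code would use a CPU pause hint.  */
            if (backoff < MAX_BACKOFF)
              backoff <<= 1;
            expected = 0;   /* The failed CAS updated 'expected'; reset it.  */
          }
      }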
* nptl: Fix pthread_cancel cancelhandling atomic operations (Adhemerval Zanella, 2022-04-20, 1 file, -1/+2)
  The 404656009b reversion did not set up the atomic loop to set the cancel bits correctly. The fix is essentially what pthread_cancel did prior to 26cfbb7162ad. Checked on x86_64-linux-gnu and aarch64-linux-gnu.
* nptl: Handle spurious EINTR when thread cancellation is disabled (BZ#29029) (Adhemerval Zanella, 2022-04-14, 10 files, -85/+272)
  Some Linux interfaces never restart after being interrupted by a signal handler, regardless of the use of SA_RESTART [1]. For pthread cancellation this means that if the target thread disables cancellation with pthread_setcancelstate and calls such an interface (like poll or select), it should not see spurious EINTR failures due to the internal SIGCANCEL. However, recent changes made pthread_cancel always send the internal signal, regardless of the target thread's cancellation state or type.

  To fix it, the previous semantics are restored: the cancel signal is only sent if the target thread has cancellation enabled in asynchronous mode. The cancel state and cancel type are moved back into cancelhandling, and atomic operations are used to synchronize between threads. The patch essentially reverts the following commits:

    8c1c0aae20 nptl: Move cancel type out of cancelhandling
    2b51742531 nptl: Move cancel state out of cancelhandling
    26cfbb7162 nptl: Remove CANCELING_BITMASK

  However, the atomic operations were changed to follow the internal C11 semantics and the macro usage was removed, which simplifies the resulting code a bit (and removes another use of the old atomic macros).

  Checked on x86_64-linux-gnu, i686-linux-gnu, aarch64-linux-gnu, and powerpc64-linux-gnu.

  [1] https://man7.org/linux/man-pages/man7/signal.7.html

  Reviewed-by: Florian Weimer <fweimer@redhat.com>
  Tested-by: Aurelien Jarno <aurelien@aurel32.net>
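  The user-visible contract being restored can be illustrated with a hedged sketch (not one of the new regression tests): with cancellation disabled, a blocking call such as poll should not fail with EINTR merely because another thread called pthread_cancel.

      #include <poll.h>
      #include <pthread.h>

      static void *
      worker (void *arg)
      {
        int oldstate;
        pthread_setcancelstate (PTHREAD_CANCEL_DISABLE, &oldstate);

        struct pollfd pfd = { .fd = *(int *) arg, .events = POLLIN };
        /* With cancellation disabled, the internal SIGCANCEL must not be
           sent to this thread, so poll should not spuriously return -1
           with errno == EINTR because of a pending pthread_cancel.  */
        int ret = poll (&pfd, 1, -1);

        pthread_setcancelstate (oldstate, NULL);
        return (void *) (long) ret;
      }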
* Allow for unprivileged nested containers (DJ Delorie, 2022-04-04, 1 file, -0/+4)
  If the build itself is run in a container, we may not be able to fully set up a nested container for test-container testing. Notable is the mounting of /proc, since it is critical that it be mounted from within the same PID namespace as its users, and thus cannot be bind mounted from outside the container like other mounts. This patch defaults to using the parent's PID namespace instead of creating a new one, as this is more likely to be allowed. If a test needs an isolated PID namespace, it should add the "pidns" command to its init script. Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* nptl: Use libc-diag.h with tst-thread-setspecific (Adhemerval Zanella, 2022-03-31, 1 file, -7/+8)
  And also use libsupport. Checked on x86_64-linux-gnu and i686-linux-gnu.
* nptl: Fix cleanups for stack grows up [BZ# 28899] (John David Anglin, 2022-02-28, 1 file, -1/+1)
  _STACK_GROWS_DOWN is defined to 0 when the stack grows up. The code in unwind.c used `#ifdef _STACK_GROWS_DOWN' to select the stack-grows-down definition of FRAME_LEFT. As a result, the _STACK_GROWS_DOWN definition was always selected and cleanups were incorrectly sequenced when the stack grows up.
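  The distinction is between testing whether the macro is defined and testing its value; since _STACK_GROWS_DOWN is always defined (to 0 or 1), its value must be tested. A schematic illustration follows; the FRAME_LEFT bodies here are made up, not the glibc definitions:

      /* Wrong: this branch is taken even when _STACK_GROWS_DOWN is 0.  */
      #ifdef _STACK_GROWS_DOWN
      # define FRAME_LEFT(frame, other) ((uintptr_t) (frame) <= (uintptr_t) (other))
      #endif

      #undef FRAME_LEFT   /* Only so both variants can be shown together.  */

      /* Right: the value decides which comparison is used.  */
      #if _STACK_GROWS_DOWN
      # define FRAME_LEFT(frame, other) ((uintptr_t) (frame) <= (uintptr_t) (other))
      #else
      # define FRAME_LEFT(frame, other) ((uintptr_t) (frame) >= (uintptr_t) (other))
      #endif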
* elf: Fix initial-exec TLS access on audit modules (BZ #28096) (Adhemerval Zanella, 2022-02-01, 1 file, -1/+1)
  For audit modules and dependencies with initial-exec TLS, we cannot set the initial TLS image at default loader initialization because it would already have been set by the audit setup. However, subsequent thread creation needs to follow the default behaviour. This patch fixes it by setting the l_auditing link_map field not only for the audit modules, but also for all their dependencies. This is used in _dl_allocate_tls_init to avoid the static TLS initialization at load time. Checked on x86_64-linux-gnu, i686-linux-gnu, and aarch64-linux-gnu. Reviewed-by: Carlos O'Donell <carlos@redhat.com> Tested-by: Carlos O'Donell <carlos@redhat.com>
* nptl: Effectively skip CAS in spinlock loop (Jangwoong Kim, 2022-01-20, 1 file, -3/+2)
  The commit "Add LLL_MUTEX_READ_LOCK [BZ #28537]" (d672a98a1af106bd68deb15576710cd61363f7a6) introduced LLL_MUTEX_READ_LOCK to skip the CAS in the spinlock loop if the atomic load fails. But "continue" inside a do-while loop does not skip the evaluation of the escape expression, so the CAS is not skipped. Replace the do-while with a while loop and skip LLL_MUTEX_TRYLOCK if LLL_MUTEX_READ_LOCK fails. Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
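  The pitfall is that "continue" in a do-while jumps to the controlling expression, so a CAS placed there still runs. Below is a self-contained, schematic before/after using C11 stand-ins for the glibc macros (try_lock stands in for LLL_MUTEX_TRYLOCK, looks_free for LLL_MUTEX_READ_LOCK); it is not the literal glibc diff, and the fallback to the blocking lock path is omitted.

      #include <stdatomic.h>
      #include <stdbool.h>

      static inline bool
      try_lock (atomic_int *lock)          /* CAS.  */
      {
        int expected = 0;
        return atomic_compare_exchange_strong_explicit (lock, &expected, 1,
                                                        memory_order_acquire,
                                                        memory_order_relaxed);
      }

      static inline bool
      looks_free (atomic_int *lock)        /* Relaxed load.  */
      {
        return atomic_load_explicit (lock, memory_order_relaxed) == 0;
      }

      /* Broken intent: 'continue' jumps to the do-while controlling
         expression, so the CAS in try_lock still runs every iteration.  */
      static void
      spin_buggy (atomic_int *lock, int max_cnt)
      {
        int cnt = 0;
        do
          {
            if (cnt++ >= max_cnt)
              break;
            if (!looks_free (lock))
              continue;                    /* Does NOT skip the CAS below.  */
          }
        while (!try_lock (lock));
      }

      /* Fixed shape: the CAS is attempted only after the relaxed load has
         seen the lock free.  */
      static void
      spin_fixed (atomic_int *lock, int max_cnt)
      {
        int cnt = 0;
        while (cnt++ < max_cnt)
          {
            if (!looks_free (lock))
              continue;                    /* Re-check without a CAS.  */
            if (try_lock (lock))
              break;
          }
      }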
* Update copyright dates with scripts/update-copyrights (Paul Eggert, 2022-01-01, 274 files, -274/+274)
  I used these shell commands:

    ../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
    (cd ../glibc && git commit -am"[this commit message]")

  and then ignored the output, which consisted of lines saying "FOO: warning: copyright statement not found" for each of 7061 files FOO.

  I then removed trailing white space from math/tgmath.h, support/tst-support-open-dev-null-range.c, and sysdeps/x86_64/multiarch/strlen-vec.S, to work around the following obscure pre-commit check failure diagnostics from Savannah. I don't know why I run into these diagnostics whereas others evidently do not.

    remote: *** 912-#endif
    remote: *** 913:
    remote: *** 914-
    remote: *** error: lines with trailing whitespace found
    ...
    remote: *** error: sysdeps/unix/sysv/linux/statx_cp.c: trailing lines
* nptl: rseq failure after registration on main thread is fatal (Florian Weimer, 2021-12-09, 1 file, -1/+2)
  This simplifies the application programming model. Browser sandboxes have already been fixed:

    Sandbox is incompatible with rseq registration
    <https://bugzilla.mozilla.org/show_bug.cgi?id=1651701>

    Allow rseq in the Linux sandboxes. r=gcp
    <https://hg.mozilla.org/mozilla-central/rev/042425712eb1>

    Sandbox needs to support rseq system call
    <https://bugs.chromium.org/p/chromium/issues/detail?id=1104160>

    Linux sandbox: Allow rseq(2)
    <https://chromium.googlesource.com/chromium/src.git/+/230675d9ac8f1>

  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
* nptl: Add glibc.pthread.rseq tunable to control rseq registration (Florian Weimer, 2021-12-09, 1 file, -1/+9)
  This tunable allows applications to register the rseq area instead of glibc. Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com> Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* nptl: Add rseq registration (Florian Weimer, 2021-12-09, 2 files, -0/+17)
  The rseq area is placed directly into struct pthread. rseq registration failure is not treated as an error, so it is possible that threads run with inconsistent registration status. <sys/rseq.h> is not yet installed as a public header. Co-Authored-By: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com> Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* misc, nptl: Remove stray references to __condvar_load_64_relaxed (Florian Weimer, 2021-12-06, 1 file, -1/+1)
  The function was renamed to __atomic_wide_counter_load_relaxed in commit 8bd336a00a5311bf7a9e99b3b0e9f01ff5faa74b ("nptl: Extract <bits/atomic_wide_counter.h> from pthread_cond_common.c").
* nptl: Increase default TCB alignment to 32 (Florian Weimer, 2021-12-03, 2 files, -1/+4)
  rseq support will use a 32-byte aligned field in struct pthread, so the whole struct needs to have at least that alignment. nptl/tst-tls3mod.c uses TCB_ALIGNMENT, therefore include <descr.h> to obtain the fallback definition. Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* nptl: Do not set signal mask on second setjmp return [BZ #28607] (Florian Weimer, 2021-11-24, 1 file, -2/+2)
  __libc_signal_restore_set was in the wrong place: it also ran when setjmp returned the second time (after pthread_exit or pthread_cancel). This is observable with blocked pending signals during thread exit. Fixes commit b3cae39dcbfa2432b3f3aa28854d8ac57f0de1b8 ("nptl: Start new threads with all signals blocked [BZ #25098]"). Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* nptl: Extract <bits/atomic_wide_counter.h> from pthread_cond_common.c (Florian Weimer, 2021-11-17, 3 files, -179/+52)
  And make it an installed header. This addresses a few aliasing violations (which do not seem to result in miscompilation due to the use of atomics), and also enables use of wide counters in other parts of the library. The debug output in nptl/tst-cond22 has been adjusted to print the 32-bit values instead because it avoids a big-endian/little-endian difference. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Move assignment out of the CAS condition (H.J. Lu, 2021-11-15, 2 files, -8/+6)
  Update

    commit 49302b8fdf9103b6fc0a398678668a22fa19574c
    Author: H.J. Lu <hjl.tools@gmail.com>
    Date:   Thu Nov 11 06:54:01 2021 -0800

      Avoid extra load with CAS in __pthread_mutex_clocklock_common [BZ #28537]

      Replace boolean CAS with value CAS to avoid the extra load.

  and

    commit 0b82747dc48d5bf0871bdc6da8cb6eec1256355f
    Author: H.J. Lu <hjl.tools@gmail.com>
    Date:   Thu Nov 11 06:31:51 2021 -0800

      Avoid extra load with CAS in __pthread_mutex_lock_full [BZ #28537]

      Replace boolean CAS with value CAS to avoid the extra load.

  by moving the assignment out of the CAS condition.
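  In generic C11 terms, the point of the value CAS is that a failed compare-and-swap already hands back the observed value, so no separate reload is needed; a hedged sketch of the shape of the change (schematic counters, not the glibc mutex code, which uses its internal atomic macros rather than <stdatomic.h>):

      #include <stdatomic.h>

      /* Boolean-style CAS: the observed value is thrown away on failure,
         so the loop pays for an extra atomic load before retrying.  */
      static void
      bump_with_reload (atomic_int *counter)
      {
        for (;;)
          {
            int oldval = atomic_load_explicit (counter, memory_order_relaxed);
            int expected = oldval;
            if (atomic_compare_exchange_weak_explicit (counter, &expected,
                                                       oldval + 1,
                                                       memory_order_acquire,
                                                       memory_order_relaxed))
              break;
            /* Failure loops back and reloads the counter.  */
          }
      }

      /* Value-style CAS: a failed CAS updates 'expected' to the current
         value, so no reload is needed; the assignment also sits outside
         the condition, mirroring the commit above.  */
      static void
      bump_without_reload (atomic_int *counter)
      {
        int expected = atomic_load_explicit (counter, memory_order_relaxed);
        while (!atomic_compare_exchange_weak_explicit (counter, &expected,
                                                       expected + 1,
                                                       memory_order_acquire,
                                                       memory_order_relaxed))
          ;   /* 'expected' now holds the value observed by the failed CAS.  */
      }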
* Add LLL_MUTEX_READ_LOCK [BZ #28537] (H.J. Lu, 2021-11-12, 1 file, -0/+7)
  A CAS instruction is expensive. From the x86 CPU's point of view, getting a cache line for writing is more expensive than reading. See Appendix A.2 Spinlock in: https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/xeon-lock-scaling-analysis-paper.pdf The full compare-and-swap will grab the cache line exclusive and cause excessive cache line bouncing. Add LLL_MUTEX_READ_LOCK to do an atomic load and skip the CAS in the spinlock loop if the compare may fail, to reduce cache line bouncing on contended locks. Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
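  The general pattern is "test and test-and-set": spin on a plain atomic load (which keeps the cache line in shared state) and only attempt the CAS once the load suggests it can succeed. A hedged generic sketch, not glibc's LLL_MUTEX_* macros:

      #include <stdatomic.h>

      static void
      spin_lock (atomic_int *lock)
      {
        for (;;)
          {
            /* Read-only spin: no exclusive cache-line ownership needed.  */
            while (atomic_load_explicit (lock, memory_order_relaxed) != 0)
              ;   /* A real implementation would insert a CPU pause hint.  */

            /* Only now pay for the CAS, which takes the line exclusive.  */
            int expected = 0;
            if (atomic_compare_exchange_weak_explicit (lock, &expected, 1,
                                                       memory_order_acquire,
                                                       memory_order_relaxed))
              return;
          }
      }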
* Avoid extra load with CAS in __pthread_mutex_clocklock_common [BZ #28537] (H.J. Lu, 2021-11-12, 1 file, -5/+5)
  Replace boolean CAS with value CAS to avoid the extra load. Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
* Avoid extra load with CAS in __pthread_mutex_lock_full [BZ #28537] (H.J. Lu, 2021-11-12, 1 file, -5/+5)
  Replace boolean CAS with value CAS to avoid the extra load. Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
* nptl: Fix tst-cancel7 and tst-cancelx7 pidfile race (Stafford Horne, 2021-10-18, 1 file, -6/+3)
  The check for waiting for the pidfile to be created looks wrong. At the point when ACCESS is run, the pid file will always be created and accessible as it is created during DO_PREPARE. This means that thread cancellation may be performed before the pid is written to the pidfile. This was found to be flaky when testing on my OpenRISC platform. Fix this by using the semaphore to wait for pidfile pid write completion. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* nptl: pthread_kill must send signals to a specific thread [BZ #28407] (Florian Weimer, 2021-10-01, 1 file, -3/+1)
  The choice between the kill vs tgkill system calls is not just about the TID reuse race, but also about whether the signal is sent to the whole process (and any thread in it) or to a specific thread. This was caught by the openposix test suite:

    LTP: openposix test suite - FAIL: SIGUSR1 is member of new thread pendingset.
    <https://gitlab.com/cki-project/kernel-tests/-/issues/764>

  Fixes commit 526c3cf11ee9367344b6b15d669e4c3cb461a2be ("nptl: Fix race between pthread_kill and thread exit (bug 12889)"). Reviewed-by: Carlos O'Donell <carlos@redhat.com> Tested-by: Carlos O'Donell <carlos@redhat.com>
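  The underlying distinction, sketched with raw system calls for illustration (glibc itself uses internal wrappers): kill delivers a process-directed signal that any thread may receive, while tgkill targets one specific thread, which is what pthread_kill requires.

      #include <signal.h>
      #include <sys/syscall.h>
      #include <sys/types.h>
      #include <unistd.h>

      /* Process-directed: any thread with the signal unblocked may get it.  */
      static int
      signal_process (pid_t pid)
      {
        return kill (pid, SIGUSR1);
      }

      /* Thread-directed: only the thread with kernel TID 'tid' in process
         'pid' receives the signal.  */
      static int
      signal_thread (pid_t pid, pid_t tid)
      {
        return (int) syscall (SYS_tgkill, pid, tid, SIGUSR1);
      }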
* nptl: Add CLOCK_MONOTONIC support for PI mutexes (Adhemerval Zanella, 2021-10-01, 2 files, -23/+26)
  Linux added FUTEX_LOCK_PI2 to support clock selection (kernel commit bf22a6976897977b0a3f1aeba6823c959fc4fdae). With the new flag we can now properly support CLOCK_MONOTONIC for pthread_mutex_clocklock with priority inheritance. If the kernel does not support it, EINVAL is returned instead. The difference is that the futex operation will be issued and the kernel will advertise the missing support (instead of a hard-coded error return). Checked on x86_64-linux-gnu and i686-linux-gnu on Linux 5.14, 5.11, and 4.15.
* nptl: Use FUTEX_LOCK_PI2 when available (Adhemerval Zanella, 2021-10-01, 3 files, -2/+67)
  This patch uses the new futex PI operation provided by Linux v5.14 when it is required. futex_lock_pi64() is moved to futex-internal.c (since it is used in two different places and its code size might be large depending on the kernel configuration) and clockid is added as an argument. Co-authored-by: Kurt Kanzenbach <kurt@linutronix.de>
* nptl: Avoid setxid deadlock with blocked signals in thread exit [BZ #28361] (Florian Weimer, 2021-09-23, 1 file, -2/+10)
  As part of the fix for bug 12889, signals are blocked during thread exit, so that application code cannot run on the thread that is about to exit. This would cause problems if the application expected signals to be delivered after the signal handler revealed the thread to still exist, even though pthread_kill can no longer be used to send signals to it.

  However, glibc internally uses the SIGSETXID signal in a way that is incompatible with signal blocking, due to the way the setxid handshake delays thread exit until the setxid operation has completed. With a blocked SIGSETXID, the handshake can never complete, causing a deadlock. As a band-aid, restore the previous handshake protocol by not blocking SIGSETXID during thread exit.

  The new test sysdeps/pthread/tst-pthread-setuid-loop.c is based on a downstream test by Martin Osvald.

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
  Tested-by: Carlos O'Donell <carlos@redhat.com>
* nptl: pthread_kill needs to return ESRCH for old programs (bug 19193) (Florian Weimer, 2021-09-20, 1 file, -8/+29)
  The fix for bug 19193 breaks some old applications which appear to use pthread_kill to probe if a thread is still running, something that is not supported by POSIX.
* nptl: Fix race between pthread_kill and thread exit (bug 12889) (Florian Weimer, 2021-09-13, 4 files, -25/+63)
  A new thread exit lock and flag are introduced. They are used to detect that the thread is about to exit or has exited in __pthread_kill_internal, and the signal is not sent in this case. The test sysdeps/pthread/tst-pthread_cancel-select-loop.c is derived from a downstream test originally written by Marek Polacek. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* nptl: pthread_kill, pthread_cancel should not fail after exit (bug 19193) (Florian Weimer, 2021-09-13, 2 files, -5/+11)
  This closes one remaining race condition related to bug 12889: if the thread already exited on the kernel side, returning ESRCH is not correct because that error is reserved for thread IDs (pthread_t values) whose lifetime has ended. In case of a kernel-side exit and a valid thread ID, no signal needs to be sent and cancellation does not have an effect, so just return 0. sysdeps/pthread/tst-kill4.c triggers undefined behavior and is removed with this commit. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Remove "Contributed by" linesSiddhesh Poyarekar2021-09-03176-177/+0
| | | | | | | | | | | | | | | | | | | | | | | | | | We stopped adding "Contributed by" or similar lines in sources in 2012 in favour of git logs and keeping the Contributors section of the glibc manual up to date. Removing these lines makes the license header a bit more consistent across files and also removes the possibility of error in attribution when license blocks or files are copied across since the contributed-by lines don't actually reflect reality in those cases. Move all "Contributed by" and similar lines (Written by, Test by, etc.) into a new file CONTRIBUTED-BY to retain record of these contributions. These contributors are also mentioned in manual/contrib.texi, so we just maintain this additional record as a courtesy to the earlier developers. The following scripts were used to filter a list of files to edit in place and to clean up the CONTRIBUTED-BY file respectively. These were not added to the glibc sources because they're not expected to be of any use in future given that this is a one time task: https://gist.github.com/siddhesh/b5ecac94eabfd72ed2916d6d8157e7dc https://gist.github.com/siddhesh/15ea1f5e435ace9774f485030695ee02 Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* nptl: Fix tst-cancel7 and tst-cancelx7 race condition (BZ #14232) (Adhemerval Zanella, 2021-08-26, 1 file, -57/+57)
  A mapped temporary file and a semaphore are used to synchronize the pid information in the created file; the semaphore is updated once the file contents are flushed. Checked on x86_64-linux-gnu. Reviewed-by: Florian Weimer <fweimer@redhat.com>
* Fix build of nptl/tst-thread_local1.cc with GCC 12 (Joseph Myers, 2021-08-02, 1 file, -0/+1)
  The test nptl/tst-thread_local1.cc fails to build with GCC mainline because of changes to which other headers the libstdc++ headers implicitly include:

    tst-thread_local1.cc: In function 'int do_test()':
    tst-thread_local1.cc:177:5: error: variable 'std::array<std::pair<const char*, std::function<void(void* (*)(void*))> >, 2> do_thread_X' has initializer but incomplete type
      177 |     do_thread_X
          |     ^~~~~~~~~~~

  Fix this by adding an explicit include of <array>. Tested with build-many-glibcs.py for aarch64-linux-gnu.
* Move malloc hooks into a compat DSO (Siddhesh Poyarekar, 2021-07-22, 1 file, -1/+2)
  Remove all malloc hook uses from core malloc functions and move them into a new library, libc_malloc_debug.so. With this, the hooks now no longer have any effect on the core library.

  libc_malloc_debug.so is a malloc interposer that needs to be preloaded to get hooks functionality back so that the debugging features that depend on the hooks, i.e. malloc-check, mcheck and mtrace, work again. Without the preloaded DSO these debugging features will be nops. These features will be ported away from hooks in subsequent patches. Similarly, legacy applications that need hooks functionality need to preload libc_malloc_debug.so.

  The symbols exported by libc_malloc_debug.so are maintained at exactly the same version as libc.so.

  Finally, static binaries will no longer be able to use malloc debugging features since they cannot preload the debugging DSO.

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
  Tested-by: Carlos O'Donell <carlos@redhat.com>
* Add an internal wrapper for clone, clone2 and clone3 (H.J. Lu, 2021-07-14, 2 files, -70/+27)
  The clone3 system call (since Linux 5.3) provides a superset of the functionality of clone and clone2. It also provides a number of API improvements, including the ability to specify the size of the child's stack area, which can be used by the kernel to compute the shadow stack size when allocating the shadow stack. Add:

    extern int __clone_internal (struct clone_args *__cl_args,
                                 int (*__func) (void *__arg),
                                 void *__arg);

  to provide an abstract interface for clone, clone2 and clone3.

  1. Simplify stack management for thread creation by passing both stack base and size to create_thread.
  2. Consolidate clone vs clone2 differences into a single file.
  3. Call __clone3 if HAVE_CLONE3_WRAPPER is defined. If __clone3 returns -1 with ENOSYS, fall back to clone or clone2.
  4. Use only __clone_internal to clone a thread. Since the stack size argument for create_thread is now unconditional, always pass the stack size to create_thread.
  5. Enable the public clone3 wrapper in the future after it has been added to all targets.

  NB: Sandboxes will return ENOSYS for clone3 in both Chromium:

    The following revision refers to this bug:
    https://chromium.googlesource.com/chromium/src/+/218438259dd795456f0a48f67cbe5b4e520db88b

    commit 218438259dd795456f0a48f67cbe5b4e520db88b
    Author: Matthew Denton <mpdenton@chromium.org>
    Date:   Thu Jun 03 20:06:13 2021

      Linux sandbox: return ENOSYS for clone3

      Because clone3 uses a pointer argument rather than a flags argument, we
      cannot examine the contents with seccomp, which is essential to
      preventing sandboxed processes from starting other processes. So, we
      won't be able to support clone3 in Chromium. This CL modifies the BPF
      policy to return ENOSYS for clone3 so glibc always uses the fallback to
      clone.

      Bug: 1213452
      Change-Id: I7c7c585a319e0264eac5b1ebee1a45be2d782303
      Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/2936184
      Reviewed-by: Robert Sesek <rsesek@chromium.org>
      Commit-Queue: Matthew Denton <mpdenton@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#888980}
      [modify] https://crrev.com/218438259dd795456f0a48f67cbe5b4e520db88b/sandbox/linux/seccomp-bpf-helpers/baseline_policy.cc

  and Firefox:

    https://hg.mozilla.org/integration/autoland/rev/ecb4011a0c76

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
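  A hedged sketch of the fallback logic described in item 3 above (schematic only; clone_from_args is a hypothetical helper standing in for the legacy clone/clone2 path, and the struct shown is the Linux 5.3 clone_args layout, which later kernels extend):

      #include <errno.h>
      #include <stddef.h>
      #include <stdint.h>

      /* Linux 5.3 layout of the clone3 argument block; later kernels
         append further fields.  */
      struct clone_args
      {
        uint64_t flags;
        uint64_t pidfd;
        uint64_t child_tid;
        uint64_t parent_tid;
        uint64_t exit_signal;
        uint64_t stack;
        uint64_t stack_size;
        uint64_t tls;
      };

      /* Assumed declarations for this sketch.  */
      extern int __clone3 (struct clone_args *cl_args, size_t size,
                           int (*func) (void *arg), void *arg);
      extern int clone_from_args (struct clone_args *cl_args,    /* hypothetical */
                                  int (*func) (void *arg), void *arg);

      int
      __clone_internal (struct clone_args *cl_args,
                        int (*func) (void *arg), void *arg)
      {
      #ifdef HAVE_CLONE3_WRAPPER
        int ret = __clone3 (cl_args, sizeof (*cl_args), func, arg);
        if (ret != -1 || errno != ENOSYS)
          return ret;           /* clone3 worked, or failed for a real reason.  */
      #endif
        /* Kernel without clone3: translate cl_args (flags, stack base and
           size, TID pointers) into the legacy clone/clone2 convention.  */
        return clone_from_args (cl_args, func, arg);
      }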
* Reduce <limits.h> pollution due to dynamic PTHREAD_STACK_MIN (Florian Weimer, 2021-07-12, 1 file, -0/+3)
  <limits.h> used to be a header file with no declarations. GCC's libgomp includes it in a #pragma GCC visibility hidden block. Including <unistd.h> from <limits.h> (indirectly) declares everything in <unistd.h> with hidden visibility, resulting in linker failures. This commit avoids C declarations in assembler mode and only declares __sysconf in <limits.h> (and not the entire contents of <unistd.h>). The __sysconf symbol is already part of the ABI.

  PTHREAD_STACK_MIN is no longer defined for __USE_DYNAMIC_STACK_SIZE && __ASSEMBLER__ because there is no possible definition. Additionally, PTHREAD_STACK_MIN is now defined by <pthread.h> for __USE_MISC because this is what developers expect based on the macro name. It also helps to avoid libgomp linker failures in GCC because libgomp includes <pthread.h> before its visibility hacks.

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
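  With a dynamic minimum, portable code can query the value at run time instead of relying on a compile-time constant; for example:

      #include <stdio.h>
      #include <unistd.h>

      int
      main (void)
      {
        /* Under the dynamic-stack-size feature set, PTHREAD_STACK_MIN
           expands to a sysconf-based value; querying _SC_THREAD_STACK_MIN
           directly is equivalent and works everywhere.  */
        long min = sysconf (_SC_THREAD_STACK_MIN);
        printf ("minimum thread stack: %ld bytes\n", min);
        return 0;
      }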