path: root/elf
Commit message (Author, Date, Files changed, Lines -deleted/+added)
* ldconfig: Avoid boolean coercion of opt_chroot (Siddhesh Poyarekar, 2021-05-18, 1 file, -17/+17)
| | | | Generated code is unchanged.
* ldconfig: Fix memory leaks (Siddhesh Poyarekar, 2021-05-18, 1 file, -6/+14)
| | | | | | | Coverity discovered that paths allocated by chroot_canon are not freed in a couple of routines in ldconfig. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* elf/cache.c: Fix resource leaks identified by static analyzers (Siddhesh Poyarekar, 2021-05-18, 1 file, -4/+12)
| | | | | | | | | | A Coverity run identified a number of resource leaks in cache.c. There are a couple of simple memory leaks where a local allocation is not freed before function return. Then there is an mmap leak and a file descriptor leak, where a map is not unmapped in the error case and a file descriptor remains open, respectively. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
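As an illustration of the error-path cleanup pattern such fixes apply, here is a minimal, self-contained C sketch (the function name, file layout, and checks are hypothetical, not the actual elf/cache.c code):
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Read a cache-like file; every exit path releases what it acquired.  */
    static int
    read_cache_file (const char *path)
    {
      int fd = open (path, O_RDONLY);
      if (fd < 0)
        return -1;

      struct stat st;
      if (fstat (fd, &st) < 0 || st.st_size == 0)
        {
          close (fd);               /* Do not leak the descriptor on error.  */
          return -1;
        }

      void *map = mmap (NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
      close (fd);                   /* The mapping keeps the data accessible.  */
      if (map == MAP_FAILED)
        return -1;

      /* ... parse the mapped data here ...  */

      munmap (map, st.st_size);     /* Unmap on all paths, including errors.  */
      return 0;
    }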
* elf: Move static TLS size and alignment into _rtld_global_ro (Florian Weimer, 2021-05-17, 3 files, -14/+20)
| | | | | | | This helps to clarify that the caching of these fields in libpthread (in __static_tls_size, __static_tls_align_m1) is unnecessary. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* elf: Remove DL_STATIC_INIT (Florian Weimer, 2021-05-17, 1 file, -4/+0)
| | | | | | All users have been converted to the __rtld_static_init mechanism. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* elf: Partially initialize ld.so after static dlopen (bug 20802) (Florian Weimer, 2021-05-17, 5 files, -5/+174)
| | | | | | | | | | | | | | | After static dlopen, a copy of ld.so is loaded into the inner namespace, but that copy is not initialized at all. Some architectures run into serious problems as a result, which is why the _dl_var_init mechanism was invented. With libpthread moving into libc and parts into ld.so, more architectures are impacted, so it makes sense to switch to a generic mechanism which performs the partial initialization. As a result, getauxval now works after static dlopen (bug 20802). Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
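By way of a usage example (module and function names are made up), a shared object along these lines, when dlopen'ed from a statically linked program, can now rely on getauxval in the inner namespace:
    /* mod.c: build with something like "gcc -shared -fPIC mod.c -o mod.so"
       and dlopen it from a statically linked program.  Before this fix,
       getauxval in the inner namespace's libc could return 0 because the
       ld.so copy there was never initialized.  */
    #include <stdio.h>
    #include <sys/auxv.h>

    int
    check_auxv (void)
    {
      unsigned long pagesz = getauxval (AT_PAGESZ);
      printf ("AT_PAGESZ = %lu\n", pagesz);
      return pagesz != 0;   /* Expected to be nonzero with bug 20802 fixed.  */
    }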
* Remove all usage of @BASH@ or ${BASH} in installed files, and hardcode /bin/bash instead (Romain GEISSLER, 2021-05-12, 3 files, -5/+3)
| | | | | | | | | | | | | | | | | | | (FYI, this is a repost of https://sourceware.org/pipermail/libc-alpha/2019-July/105035.html now that FSF papers have been signed and confirmed on the FSF side). This trivial patch attempts to fix BZ 24106. Basically the bash locally used when building glibc on the host shall not leak into the installed glibc, as the system where it is installed might be different and use another bash location. So I have looked for all occurrences of @BASH@ or $(BASH) in installed files, and replaced it by /bin/bash. This was suggested by Florian Weimer in the bug report. Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* elf: Fix DTV gap reuse logic [BZ #27135] (Szabolcs Nagy, 2021-05-11, 3 files, -15/+6)
| | | | | | | | | | | | | | | | | | | | | | | For some reason only dlopen failure caused dtv gaps to be reused. It is possible that the intent was to never reuse modids for a different module, but after dlopen failure all gaps are reused, not just the ones caused by the unfinished dlopen. So the code has to handle reused modids already, which seems to work; however, the data races at thread creation and tls access (see bug 19329 and bug 27111) may be more severe if slots are reused, so this fix is scheduled after those fixes. I think fixing the races is not simpler if reuse is disallowed, and reuse has other benefits, so set GL(dl_tls_dtv_gaps) whenever entries are removed from the middle of the slotinfo list. The value does not have to be correct: an incorrect true value causes the next modid query to do a slotinfo walk, an incorrect false value will leave gaps and new entries are added at the end. Fixes bug 27135. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* elf: Add test case for [BZ #19329] (Szabolcs Nagy, 2021-05-11, 3 files, -2/+76)
| | | | | | | | | | | | | Test concurrent dlopen and pthread_create when the loaded modules have TLS. This triggers dl-tls assertion failures more reliably than the nptl/tst-stack4 test. The dlopened module has 100 DT_NEEDED dependencies with TLS, they were reused from an existing TLS test. The number of created threads during dlopen depends on filesystem speed and hardware, but at most 3 threads are alive at a time to limit resource usage. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
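A heavily simplified sketch of the test's shape (thread counts and the module name here are illustrative; the real test and its TLS-heavy dependencies live under elf/):
    #include <dlfcn.h>
    #include <pthread.h>
    #include <stddef.h>

    static __thread int tls_counter;   /* TLS access races with dlopen.  */

    static void *
    thread_func (void *arg)
    {
      for (int i = 0; i < 100000; i++)
        tls_counter++;
      return NULL;
    }

    int
    main (void)
    {
      pthread_t t[3];
      for (int i = 0; i < 3; i++)
        pthread_create (&t[i], NULL, thread_func, NULL);

      /* Hypothetical module that has TLS and many TLS-using DT_NEEDED deps.  */
      void *h = dlopen ("tst-tls-manydeps.so", RTLD_NOW);

      for (int i = 0; i < 3; i++)
        pthread_join (t[i], NULL);
      if (h != NULL)
        dlclose (h);
      return 0;
    }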
* elf: Use relaxed atomics for racy accesses [BZ #19329] (Szabolcs Nagy, 2021-05-11, 3 files, -16/+40)
| | | | | | | | | | | | | | | | | | This is a follow up patch to the fix for bug 19329. This adds relaxed MO atomics to accesses that were previously data races but are now race conditions, and where relaxed MO is sufficient. The race conditions all follow the pattern that the write is behind the dlopen lock, but a read can happen concurrently (e.g. during tls access) without holding the lock. For slotinfo entries the read value only matters if it reads from a synchronized write in dlopen or dlclose, otherwise the related dtv entry is not valid to access so it is fine to leave it in an inconsistent state. The same applies for GL(dl_tls_max_dtv_idx) and GL(dl_tls_generation), but there the algorithm relies on the fact that the read of the last synchronized write is an increasing value. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
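In standard C11 terms (glibc uses its own internal atomic macros rather than <stdatomic.h>), the relaxed-MO pattern for such a counter looks roughly like this; the variable name mirrors GL(dl_tls_max_dtv_idx) but the code is only a model:
    #include <stdatomic.h>
    #include <stddef.h>

    static _Atomic size_t tls_max_dtv_idx;   /* model of GL(dl_tls_max_dtv_idx) */

    /* Writer side: runs under the dlopen lock.  */
    static void
    bump_max_modid (size_t modid)
    {
      if (modid > atomic_load_explicit (&tls_max_dtv_idx, memory_order_relaxed))
        atomic_store_explicit (&tls_max_dtv_idx, modid, memory_order_relaxed);
    }

    /* Reader side: e.g. during TLS access, without holding the lock.
       Relaxed is enough because the algorithm only relies on reading some
       value that is not older than the last synchronized write.  */
    static size_t
    read_max_modid (void)
    {
      return atomic_load_explicit (&tls_max_dtv_idx, memory_order_relaxed);
    }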
* elf: Fix data races in pthread_create and TLS access [BZ #19329] (Szabolcs Nagy, 2021-05-11, 1 file, -16/+47)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | DTV setup at thread creation (_dl_allocate_tls_init) is changed to take the dlopen lock, GL(dl_load_lock). Avoiding data races here without locks would require design changes: the map that is accessed for static TLS initialization here may be concurrently freed by dlclose. That use after free may be solved by only locking around static TLS setup or by ensuring dlclose does not free modules with static TLS; however, currently every link map with TLS has to be accessed at least to see if it needs static TLS. And even if that's solved, still a lot of atomics would be needed to synchronize DTV related globals without a lock. So fix both bug 19329 and bug 27111 with a lock that prevents DTV setup running concurrently with dlopen or dlclose. _dl_update_slotinfo at TLS access still does not use any locks, so CONCURRENCY NOTES are added to explain the synchronization. The early exit from the slotinfo walk when max_modid is reached is not strictly necessary, but does not hurt either. An incorrect acquire load was removed from _dl_resize_dtv: it did not synchronize with any release store or fence, and synchronization is now handled separately at thread creation and TLS access time. There are still a number of racy read accesses to globals that will be changed to relaxed MO atomics in a followup patch. This should not introduce regressions compared to existing behaviour and avoids cluttering the main part of the fix. Not all TLS access related data races got fixed here: there are additional races at lazy tlsdesc relocations; see bug 27137. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
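Conceptually the fix serializes DTV setup with dlopen/dlclose; a toy model of that idea follows (the lock, list, and function names are invented, not glibc's internals):
    #include <pthread.h>
    #include <stddef.h>

    struct module { size_t modid; struct module *next; };

    static pthread_mutex_t load_lock = PTHREAD_MUTEX_INITIALIZER;  /* ~ GL(dl_load_lock) */
    static struct module *modules;   /* mutated by dlopen/dlclose under load_lock */

    /* Model of _dl_allocate_tls_init: the module list must not change, and no
       link map may be freed, while the new thread's DTV is being populated.  */
    static void
    dtv_setup_for_new_thread (void **dtv, size_t dtv_len)
    {
      pthread_mutex_lock (&load_lock);
      for (struct module *m = modules; m != NULL; m = m->next)
        if (m->modid < dtv_len)
          dtv[m->modid] = NULL;   /* placeholder for installing the TLS block */
      pthread_mutex_unlock (&load_lock);
    }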
* _dl_exception_create_format: Add missing va_end (Siddhesh Poyarekar, 2021-05-11, 1 file, -0/+1)
| | | | Coverity discovered a missing va_end.
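For reference, the va_start/va_end pairing the fix restores follows the usual pattern (this is a generic example, not the _dl_exception_create_format code):
    #include <stdarg.h>
    #include <stdio.h>

    static int
    format_message (char *buf, size_t len, const char *fmt, ...)
    {
      va_list ap;
      va_start (ap, fmt);
      int n = vsnprintf (buf, len, fmt, ap);
      va_end (ap);    /* Required on every path after va_start.  */
      return n;
    }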
* nptl: Move changing of stack permissions into ld.so (Florian Weimer, 2021-05-10, 3 files, -6/+10)
| | | | | | | | | All the stack lists are now in _rtld_global, so it is possible to change stack permissions directly from there, instead of calling into libpthread to do the change. Tested-by: Carlos O'Donell <carlos@redhat.com> Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* nptl: Move more stack management variables into _rtld_global (Florian Weimer, 2021-05-10, 1 file, -0/+3)
| | | | | | | | | | | | Permissions of the cached stacks may have to be updated if an object is loaded that requires executable stacks, so the dynamic loader needs to know about these cached stacks. The move of in_flight_stack and stack_cache_actsize is a requirement for merging __reclaim_stacks into the fork implementation in libc. Tested-by: Carlos O'Donell <carlos@redhat.com> Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* elf: Introduce __tls_pre_init_tp (Florian Weimer, 2021-05-10, 3 files, -38/+31)
| | | | | | | | | | | This is an early variant of __tls_init_tp, primarily for initializing thread-related elements of _rtld_global/GL. Some existing initialization code not needed for NPTL is moved into the generic version of this function. Tested-by: Carlos O'Donell <carlos@redhat.com> Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* elf, nptl: Resolve recursive lock implementation early (Florian Weimer, 2021-05-10, 3 files, -1/+39)
| | | | | | | | | | | If libpthread is included in libc, it is not necessary to delay initialization of the lock/unlock function pointers until libpthread is loaded. This eliminates two unprotected function pointers from _rtld_global and removes some initialization code from libpthread. Tested-by: Carlos O'Donell <carlos@redhat.com> Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* nptl: Consolidate async cancel enable/disable implementation in libc (Florian Weimer, 2021-05-05, 1 file, -2/+4)
| | | | | | | | | | | | | | Previously, the source file nptl/cancellation.c was compiled multiple times, for libc, libpthread, and librt. This commit switches to a single implementation, with new __pthread_enable_asynccancel@@GLIBC_PRIVATE, __pthread_disable_asynccancel@@GLIBC_PRIVATE exports. The almost-unused CANCEL_ASYNC and CANCEL_RESET macros are replaced by LIBC_CANCEL_ASYNC and LIBC_CANCEL_RESET macros. They call the __pthread_* functions unconditionally now. The macros are still needed because shared code uses them; Hurd has different definitions. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
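The internal functions are not a public API, but the same idea can be expressed with the POSIX interface: temporarily enable asynchronous cancellation around a blocking call, then restore the previous type (a rough public-API analogue, not glibc's implementation):
    #include <pthread.h>
    #include <unistd.h>

    static ssize_t
    cancellable_read (int fd, void *buf, size_t len)
    {
      int oldtype;
      pthread_setcanceltype (PTHREAD_CANCEL_ASYNCHRONOUS, &oldtype);
      ssize_t n = read (fd, buf, len);     /* cancellation point */
      pthread_setcanceltype (oldtype, NULL);
      return n;
    }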
* elf, nptl: Initialize static TLS directly in ld.so (Florian Weimer, 2021-05-05, 5 files, -5/+46)
| | | | | | | | | | The stack list is available in ld.so since commit 1daccf403b1bd86370eb94edca794dc106d02039 ("nptl: Move stack list variables into _rtld_global"), so it's possible to walk the stack list directly in ld.so and perform the initialization there. This eliminates an unprotected function pointer from _rtld_global and reduces the libpthread initialization code.
* pthread: Introduce __pthread_early_init (Florian Weimer, 2021-04-21, 1 file, -0/+3)
| | | | | | | This function is called from __libc_early_init to initialize the pthread subsystem. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* elf: Introduce __tls_init_tp for second-phase TCB initialization (Florian Weimer, 2021-04-21, 3 files, -13/+27)
| | | | | | | | | | TLS_INIT_TP is processor-specific, so it is not a good place to put thread library initialization code (it would have to be repeated for all CPUs). Introduce __tls_init_tp as a separate function, to be called immediately after TLS_INIT_TP. Move the existing stack list setup code for NPTL to this function. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* dlfcn: dlerror needs to call free from the base namespace [BZ #24773] (Florian Weimer, 2021-04-21, 4 files, -11/+52)
| | | | | | | | | | | | Calling free directly may end up freeing a pointer allocated by the dynamic loader using malloc from libc.so in the base namespace using the allocator from libc.so in a secondary namespace, which results in crashes. This commit redirects the free call through GLRO and the dynamic linker, to reach the correct namespace. It also cleans up the dlerror handling along the way, so that pthread_setspecific is no longer needed (which avoids triggering bug 24774).
* dlfcn: Failures after dlmopen should not terminate process [BZ #24772] (Florian Weimer, 2021-04-21, 5 files, -2/+97)
| | | | | | | | | | | | | | | | | | | Commit 9e78f6f6e7134a5f299cc8de77370218f8019237 ("Implement _dl_catch_error, _dl_signal_error in libc.so [BZ #16628]") has the side effect that distinct namespaces, as created by dlmopen, now have separate implementations of the rtld exception mechanism. This means that the call to _dl_catch_error from libdl in a secondary namespace does not actually install an exception handler because the thread-local variable catch_hook in the libc.so copy in the secondary namespace is distinct from that of the base namespace. As a result, a dlsym/dlopen/... failure in a secondary namespace terminates the process with a dynamic linker error because it looks to the exception handler mechanism as if no handler has been installed. This commit restores GLRO (dl_catch_error) and uses it to set the handler in the base namespace. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
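A usage example of the scenario the fix covers (library and symbol names are arbitrary; dlmopen is a glibc extension and needs _GNU_SOURCE):
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>

    int
    main (void)
    {
      /* Load a library into a fresh namespace.  */
      void *h = dlmopen (LM_ID_NEWLM, "libm.so.6", RTLD_NOW);
      if (h == NULL)
        return 1;

      /* A failed lookup in the secondary namespace must report an error via
         dlerror; before this fix it could instead terminate the process with
         a dynamic linker error.  */
      if (dlsym (h, "no_such_symbol") == NULL)
        printf ("dlsym failed as expected: %s\n", dlerror ());

      dlclose (h);
      return 0;
    }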
* nptl: Move __pthread_unwind_next into libc (Florian Weimer, 2021-04-21, 1 file, -0/+2)
| | | | | | | | | | | | | | | | | | | It's necessary to stub out __libc_disable_asynccancel and __libc_enable_asynccancel via rtld-stubbed-symbols because the new direct references to the unwinder result in symbol conflicts when the rtld exception handling from libc is linked in during the construction of librtld.map. unwind-forcedunwind.c is merged into unwind-resume.c. libc now needs the functions that were previously only used in libpthread. The GLIBC_PRIVATE exports of __libc_longjmp and __libc_siglongjmp are no longer needed, so switch them to hidden symbols. The symbol __pthread_unwind_next has been moved using scripts/move-symbol-to-libc.py. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* elf: Remove lazy tlsdesc relocation related code (Szabolcs Nagy, 2021-04-21, 1 file, -47/+6)
| | | | | | | | | | | Remove generic tlsdesc code related to lazy tlsdesc processing since lazy tlsdesc relocation is no longer supported. This includes removing GL(dl_load_lock) from _dl_make_tlsdesc_dynamic which is only called at load time when that lock is already held. Added a documentation comment too. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* elf: Fix missing include in test case [BZ #27136] (Szabolcs Nagy, 2021-04-15, 1 file, -0/+1)
| | | | | | | Broken test was introduced in commit 8f85075a2e9c26ff7486d4bbaf358999807d215c elf: Add a DTV setup test [BZ #27136]
* elf: Refactor _dl_update_slotinfo to avoid use after free (Szabolcs Nagy, 2021-04-15, 1 file, -16/+5)
| | | | | | | | | | | | map is not valid to access here because it can be freed by a concurrent dlclose: during tls access (via __tls_get_addr) _dl_update_slotinfo is called without holding dlopen locks. So don't check the modid of map. The map == 0 and map != 0 code paths can be shared (avoiding the dtv resize in case of map == 0 is just an optimization: larger dtv than necessary would be fine too). Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* elf: Fix comments and logic in _dl_add_to_slotinfo (Szabolcs Nagy, 2021-04-15, 1 file, -10/+1)
| | | | | | | | | | | | | | | Since commit a509eb117fac1d764b15eba64993f4bdb63d7f3c ("Avoid late dlopen failure due to scope, TLS slotinfo updates [BZ #25112]") the generation counter update is not needed in the failure path. That commit ensures allocation in _dl_add_to_slotinfo happens before the demarcation point in dlopen (it is called twice, first time is for allocation only where dlopen can still be reverted on failure, then second time actual dtv updates are done which then cannot fail). Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* elf: Add a DTV setup test [BZ #27136] (Szabolcs Nagy, 2021-04-15, 3 files, -1/+109)
| | | | | | | | | | | The test dlopens a large number of modules with TLS, they are reused from an existing test. The test relies on the reuse of slotinfo entries after dlclose, without bug 27135 fixed this needs a failing dlopen. With a slotinfo list that has non-monotone increasing generation counters, bug 27136 can trigger. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* elf: Fix a DTV setup issue [BZ #27136] (Szabolcs Nagy, 2021-04-15, 1 file, -1/+1)
| | | | | | | | | | | The max modid is a valid index in the dtv, it should not be skipped. The bug is observable if the last module has modid == 64 and its generation is same or less than the max generation of the previous modules. Then dtv[0].counter implies dtv[64] is initialized but it isn't. Fixes bug 27136. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Fix SXID_ERASE behavior in setuid programs (BZ #27471) (Siddhesh Poyarekar, 2021-04-12, 2 files, -30/+52)
| | | | | | | | | | | | | | | | | | | | | | | | | When parse_tunables tries to erase a tunable marked as SXID_ERASE for setuid programs, it ends up setting the envvar string iterator incorrectly, because of which it may parse the next tunable incorrectly. Given that currently the implementation allows malformed and unrecognized tunables to pass through, it may even allow SXID_ERASE tunables to go through. This change revamps the SXID_ERASE implementation so that: - Only valid tunables are written back to the tunestr string, because of which children of SXID programs will only inherit a clean list of identified tunables that are not SXID_ERASE. - Unrecognized tunables get scrubbed off from the environment and subsequently from the child environment. - This has the side-effect that a tunable that is not identified by the setxid binary will not be passed on to a non-setxid child even if the child could have identified that tunable. This may break applications that expect this behaviour, but expecting such tunables to cross the SXID boundary is wrong. Reviewed-by: Carlos O'Donell <carlos@redhat.com>
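The rewrite idea can be sketched as an in-place filter over the GLIBC_TUNABLES string; everything below (helper predicates, prefixes, function names) is invented for illustration and is not the actual tunables code:
    #include <stdbool.h>
    #include <string.h>

    /* Hypothetical stand-ins for lookups into the real tunable list.  */
    static bool
    tunable_is_known (const char *name, size_t len)
    {
      return len > 6 && strncmp (name, "glibc.", 6) == 0;
    }

    static bool
    tunable_is_sxid_erase (const char *name, size_t len)
    {
      return len >= 12 && strncmp (name, "glibc.malloc", 12) == 0;
    }

    /* Keep only recognized, non-SXID_ERASE "name=value" entries, writing the
       survivors back over the original string so children never see the rest.  */
    static void
    filter_tunables_inplace (char *tunestr)
    {
      char *out = tunestr;
      for (char *p = tunestr; *p != '\0'; )
        {
          size_t entry_len = strcspn (p, ":");
          size_t name_len = strcspn (p, "=");
          if (name_len < entry_len
              && tunable_is_known (p, name_len)
              && !tunable_is_sxid_erase (p, name_len))
            {
              if (out != tunestr)
                *out++ = ':';
              memmove (out, p, entry_len);
              out += entry_len;
            }
          p += entry_len;
          if (*p == ':')
            p++;
        }
      *out = '\0';
    }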
* Enhance setuid-tunables test (Siddhesh Poyarekar, 2021-04-12, 2 files, -23/+69)
| | | | | | | | | | Instead of passing GLIBC_TUNABLES via the environment, pass the environment variable from parent to child. This allows us to test multiple variables to ensure better coverage. The test list currently only includes the case that's already being tested. More tests will be added later. Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* tst-env-setuid: Use support_capture_subprogram_self_sgid (Siddhesh Poyarekar, 2021-04-12, 1 file, -183/+14)
| | | | | Use the support_capture_subprogram_self_sgid to spawn an sgid child. Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* tunables: Fix comparison of tunable values (Siddhesh Poyarekar, 2021-04-07, 3 files, -11/+49)
| | | | | | | | | | The simplification of tunable_set interfaces took care of signed/unsigned conversions while setting values, but comparison with bounds ended up being incorrect; comparing TUNABLE_SIZE_T values for example will fail because SIZE_MAX is seen as -1. Add comparison helpers that take tunable types into account and use them to do comparison instead.
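A small self-contained model of the problem and of a type-aware comparison (the enum and helper names are invented; glibc's actual helpers differ):
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef enum { TYPE_INT_32, TYPE_SIZE_T } tunable_type_t;

    /* Stored values share one signed container; how two of them compare
       depends on the tunable's declared type.  */
    static bool
    tunable_val_lte (intmax_t lhs, intmax_t rhs, tunable_type_t type)
    {
      if (type == TYPE_SIZE_T)
        return (uintmax_t) lhs <= (uintmax_t) rhs;  /* unsigned comparison */
      return lhs <= rhs;                            /* signed comparison */
    }

    int
    main (void)
    {
      intmax_t stored_max = (intmax_t) SIZE_MAX;  /* stored bit pattern reads as -1 */
      /* A plain signed comparison would wrongly reject 4096 <= SIZE_MAX.  */
      printf ("%d\n", tunable_val_lte (4096, stored_max, TYPE_SIZE_T));  /* prints 1 */
      return 0;
    }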
* elf: Fix data race in _dl_name_match_p [BZ #21349] (Maninder Singh, 2021-04-06, 2 files, -2/+20)
| | | | | | | | | | | | | | | | | | | dlopen updates libname_list by writing to lastp->next, but concurrent reads in _dl_name_match_p were not synchronized when it was called without holding GL(dl_load_lock), which can happen during lazy symbol resolution. This patch fixes the race between _dl_name_match_p reading lastp->next and add_name_to_object writing to it. This could cause segfault on targets with weak memory order when lastp->next->name is read, which was observed on an arm system. Fixes bug 21349. (Code is from Maninder Singh, comments and description is from Szabolcs Nagy.) Co-authored-by: Vaneet Narang <v.narang@samsung.com> Co-authored-by: Szabolcs Nagy <szabolcs.nagy@arm.com> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
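A generic model of the publish/observe pattern behind the fix, written with C11 atomics (glibc uses its internal atomic macros; the list type here is simplified and the names are only analogues):
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    struct name_node
    {
      const char *name;
      struct name_node *_Atomic next;
    };

    /* Writer (add_name_to_object analogue): runs under the dlopen lock, but
       must publish the fully initialized node with release semantics because
       readers do not take the lock.  */
    static void
    append_name (struct name_node *lastp, struct name_node *newp)
    {
      atomic_store_explicit (&lastp->next, newp, memory_order_release);
    }

    /* Reader (_dl_name_match_p analogue): acquire pairs with the release
       above, so the new node's name is visible before it is dereferenced.  */
    static bool
    name_match (const struct name_node *head, const char *name)
    {
      for (const struct name_node *n = head; n != NULL;
           n = atomic_load_explicit (&n->next, memory_order_acquire))
        if (strcmp (n->name, name) == 0)
          return true;
      return false;
    }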
* elf: Fix not compiling ifunc tests that need gcc ifunc support (Samuel Thibault, 2021-03-24, 2 files, -21/+12)
|
* elf: Add EM_INTELGT for Intel Graphics Technology (H.J. Lu, 2021-03-19, 1 file, -1/+2)
| | | | | | | Add EM_INTELGT (205) for Intel Graphics Technology which has been added to gABI: https://groups.google.com/g/generic-abi/c/ofBevXA48dM
* Build libc-start with stack protector for SHARED (Siddhesh Poyarekar, 2021-03-15, 1 file, -4/+0)
| | | | | | | | | | This does not change the emitted code since __libc_start_main does not return, but is important for formal flags compliance. This also cleans up the cosmetic inconsistency in the stack protector flags in csu, especially the incorrect value of STACK_PROTECTOR_LEVEL. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Add inhibit_stack_protector to ifuncmain9 [BZ #25680] (David Hughes, 2021-03-15, 1 file, -0/+1)
| | | | | | | | | | | | | | | | | Enabling --enable-stack-protector=all causes the following tests to fail: FAIL: elf/ifuncmain9picstatic FAIL: elf/ifuncmain9static Nick Alcock (who committed the stack protector code) marked the IFUNC resolvers with inhibit_stack_protector when he did the original work and suggested doing so again in BZ #25680. This patch adds inhibit_stack_protector to ifuncmain9. After the patch is applied, --enable-stack-protector=all does not fail the above tests. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
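For context, an IFUNC resolver with the stack protector disabled can be spelled with GCC attributes roughly as follows; glibc's inhibit_stack_protector macro serves this purpose, and the attribute form below is one GCC-compatible way to express it (the function names are made up):
    static int impl_one (void) { return 1; }
    static int impl_two (void) { return 2; }

    /* The resolver runs very early, before the stack guard is fully set up,
       so it must not get a stack-protector prologue even under
       -fstack-protector-all.  */
    __attribute__ ((__optimize__ ("-fno-stack-protector")))
    static int (*resolve_foo (void)) (void)
    {
      return impl_two;
    }

    int foo (void) __attribute__ ((ifunc ("resolve_foo")));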
* elf: ld.so --help calls _dl_init_paths without a main map [BZ #27577] (Florian Weimer, 2021-03-15, 2 files, -3/+23)
| | | | | | | | | | | In this case, use the link map of the dynamic loader itself as a replacement. This is more than just a hack: if we ever support DT_RUNPATH/DT_RPATH for the dynamic loader, reporting it for ld.so --help (without further command line arguments) would be the right thing to do. Fixes commit 332421312576bd7095e70589154af99b124dd2d1 ("elf: Always set l in _dl_init_paths (bug 23462)").
* elf: Always set l in _dl_init_paths (bug 23462) (Carlos O'Donell, 2021-03-12, 3 files, -35/+65)
| | | | | | | | | | | | | | | After d1d5471579eb0426671bf94f2d71e61dfb204c30 ("Remove dead DL_DST_REQ_STATIC code.") we always setup the link map l to make the static and shared cases the same. The bug is that in elf/dl-load.c (_dl_init_paths) we conditionally set l only in the #ifdef SHARED case, but unconditionally use it later. The simple solution is to remove the #ifdef SHARED conditional, because it's no longer needed, and unconditionally setup l for both the static and shared cases. A regression test is added to run a static binary with LD_LIBRARY_PATH='$ORIGIN' which crashes before the fix and runs after the fix. Co-Authored-By: Florian Weimer <fweimer@redhat.com>
* ld.so: Implement the --list-diagnostics option (Florian Weimer, 2021-03-02, 8 files, -8/+380)
|
* elf: Build __dl_iterate_phdr with unwinding support [BZ #27498] (Florian Weimer, 2021-03-02, 1 file, -1/+1)
|
* Reduce the statically linked startup code [BZ #23323] (Florian Weimer, 2021-02-25, 1 file, -6/+2)
| | | | | | | | | | | | | | | | | | | It turns out the startup code in csu/elf-init.c has a perfect pair of ROP gadgets (see Marco-Gisbert and Ripoll-Ripoll, "return-to-csu: A New Method to Bypass 64-bit Linux ASLR"). These functions are not needed in dynamically-linked binaries because DT_INIT/DT_INIT_ARRAY are already processed by the dynamic linker. However, the dynamic linker skipped the main program for some reason. For maximum backwards compatibility, this is not changed, and instead, the main map is consulted from __libc_start_main if the init function argument is a NULL pointer. For statically linked binaries, the old approach based on linker symbols is still used because there is nothing else available. A new symbol version __libc_start_main@@GLIBC_2.34 is introduced because new binaries running on an old libc would not run their ELF constructors, leading to difficult-to-debug issues.
* nptl: Move elision implementations into libc (Florian Weimer, 2021-02-23, 1 file, -0/+6)
| | | | | | | | | | | | | | | | | | The elision interfaces are closely aligned between the targets that implement them, so declare them in the generic <lowlevellock.h> file. Empty .c stubs are provided, so that fewer makefile updates under sysdeps are needed. Also simplify initialization via __libc_early_init. The symbols __lll_clocklock_elision, __lll_lock_elision, __lll_trylock_elision, __lll_unlock_elision, __pthread_force_elision move into libc. For the time being, non-hidden references are used from libpthread to access them, but once that part of libpthread is moved into libc, hidden symbols will be used again. (Hidden references seem desirable to reduce the likelihood of transaction aborts.)
* elf: Do not copy vDSO soname when setting up link map (Florian Weimer, 2021-02-12, 1 file, -12/+5)
| | | | | | | | | | | | The kernel does not put the vDSO at special addresses, so writev can write the name directly. Also remove the incorrect comment about not setting l_name. Andy Lutomirski confirmed in <https://lore.kernel.org/linux-api/442A16C0-AE5A-4A44-B261-FE6F817EAF3C@amacapital.net/> that this copy is not necessary. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* tunables: Disallow negative values for some tunables (Siddhesh Poyarekar, 2021-02-10, 2 files, -1/+7)
| | | | | | | | The glibc.malloc.mmap_max tunable as well as all of the INT_32 tunables don't have use for negative values, so pin the hardcoded limits in the non-negative range of INT. There's no real benefit in any of those use cases for the extended range of unsigned, so I have avoided adding a new type to keep things simple.
* tunables: Simplify TUNABLE_SET interface (Siddhesh Poyarekar, 2021-02-10, 3 files, -108/+61)
| | | | | | | | | | | | | | | The TUNABLE_SET interface took a primitive C type argument, which resulted in inconsistent type conversions internally due to incorrect dereferencing of types, especially on 32-bit architectures. This change simplifies the TUNABLE setting logic along with the interfaces. Now all numeric tunable values are stored as signed numbers in tunable_num_t, which is intmax_t. All calls to set tunables cast the input value to its primitive type and then to tunable_num_t for storage. This relies on gcc-specific (although I suspect other compilers would also do the same) unsigned to signed integer conversion semantics, i.e. the bit pattern is preserved. The reverse conversion is guaranteed by the standard.
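A self-contained illustration of the storage scheme described above (tunable_num_t is the only name taken from the text; the rest is a model):
    #include <stdint.h>
    #include <stdio.h>

    typedef intmax_t tunable_num_t;

    int
    main (void)
    {
      /* An unsigned (size_t) tunable value...  */
      size_t requested = SIZE_MAX;

      /* ...is stored as a signed number; with the usual two's-complement
         conversion the bit pattern is preserved and it reads as -1.  */
      tunable_num_t stored = (tunable_num_t) requested;

      /* Converting back to the primitive type recovers the original value;
         this direction is guaranteed by the C standard.  */
      size_t read_back = (size_t) stored;

      printf ("stored = %jd, round trip ok = %d\n",
              (intmax_t) stored, read_back == requested);
      return 0;
    }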
* Add NT_ARM_TAGGED_ADDR_CTRL from Linux 5.10 to elf.h. (Joseph Myers, 2021-02-05, 1 file, -0/+2)
| | | | | | | This patch adds the new NT_ARM_TAGGED_ADDR_CTRL constant from Linux 5.10 to glibc's elf.h. Tested for x86_64.
* tst-rtld-list-tunables.sh: Unset glibc tunables (H.J. Lu, 2021-02-02, 1 file, -0/+11)
| | | | Unset glibc tunables and their aliases for --list-tunables test.
* sysconf: Add _SC_MINSIGSTKSZ/_SC_SIGSTKSZ [BZ #20305] (H.J. Lu, 2021-02-01, 2 files, -0/+14)
    Add _SC_MINSIGSTKSZ for the minimum signal stack size derived from AT_MINSIGSTKSZ, which is the minimum number of bytes of free stack space required in order to guarantee successful, non-nested handling of a single signal whose handler is an empty function, and _SC_SIGSTKSZ, which is the suggested minimum number of bytes of stack space required for a signal stack. If AT_MINSIGSTKSZ isn't available, sysconf (_SC_MINSIGSTKSZ) returns MINSIGSTKSZ. On Linux/x86 with XSAVE, the signal frame used by the kernel is composed of the following areas and laid out as:

        ------------------------------
        | alignment padding          |
        ------------------------------
        | xsave buffer               |
        ------------------------------
        | fsave header (32-bit only) |
        ------------------------------
        | siginfo + ucontext         |
        ------------------------------

    Compute the AT_MINSIGSTKSZ value as size of xsave buffer + size of fsave header (32-bit only) + size of siginfo and ucontext + alignment padding. If _SC_SIGSTKSZ_SOURCE or _GNU_SOURCE are defined, MINSIGSTKSZ and SIGSTKSZ are redefined as:

        /* Default stack size for a signal handler: sysconf (SC_SIGSTKSZ).  */
        # undef SIGSTKSZ
        # define SIGSTKSZ sysconf (_SC_SIGSTKSZ)

        /* Minimum stack size for a signal handler: SIGSTKSZ.  */
        # undef MINSIGSTKSZ
        # define MINSIGSTKSZ SIGSTKSZ

    Compilation will fail if the source assumes constant MINSIGSTKSZ or SIGSTKSZ. The reason for not simply increasing the kernel's MINSIGSTKSZ #define (apart from the fact that it is rarely used, due to glibc's shadowing definitions) was that userspace binaries will have baked in the old value of the constant and may be making assumptions about it. For example, the type (char [MINSIGSTKSZ]) changes if this #define changes. This could be a problem if a newly built library tries to memcpy() or dump such an object defined by an old binary. Bounds-checking and the stack sizes passed to things like sigaltstack() and makecontext() could similarly go wrong.
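A usage example for the new names (requires glibc 2.34 or later, where _SC_MINSIGSTKSZ and _SC_SIGSTKSZ are defined; the fallback size is arbitrary):
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main (void)
    {
      long minsz = sysconf (_SC_MINSIGSTKSZ);
      long sigsz = sysconf (_SC_SIGSTKSZ);
      printf ("_SC_MINSIGSTKSZ = %ld, _SC_SIGSTKSZ = %ld\n", minsz, sigsz);

      if (sigsz <= 0)
        sigsz = 64 * 1024;   /* fallback if sysconf cannot report a value */

      /* Size the alternate signal stack at run time instead of relying on
         the old compile-time SIGSTKSZ constant.  */
      stack_t ss = { .ss_sp = malloc ((size_t) sigsz),
                     .ss_size = (size_t) sigsz,
                     .ss_flags = 0 };
      if (ss.ss_sp == NULL || sigaltstack (&ss, NULL) != 0)
        {
          perror ("sigaltstack");
          return 1;
        }
      /* The stack stays registered for the life of the process here.  */
      return 0;
    }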