path: root/src
* fix arm __a_barrier_oldkuser when built as thumb (Rich Felker, 2019-09-11, 1 file changed, -2/+2)

  as noted in commit 05870abeaac0588fb9115cfd11f96880a0af2108, mov lr,pc is not a valid method for saving the return address in code that might be built as thumb. this one is unlikely to matter, since any ISA level that has thumb2 should also have native implementations of atomics that don't involve kuser_helper, and the affected code is only used on very old kernels to begin with.

* fix code path where child function returns in arm __clone built as thumb (Rich Felker, 2019-09-11, 1 file changed, -7/+3)

  mov lr,pc is not a valid way to save the return address in thumb mode since it omits the thumb bit. use a chain of bl and bx to emulate blx. this could be avoided by converting to a .S file with preprocessor conditions to use blx if available, but the time cost here is dominated by the syscall anyway.

  while making this change, also remove the remnants of support for pre-bx ISA levels. commit 9f290a49bf9ee247d540d3c83875288a7991699c removed the hack from the parent code paths, but left the unnecessary code in the child. keeping it would require rewriting two code paths rather than one, and is useless for reasons described in that commit.

* synchronously clean up pthread_create failure due to scheduling errors (Rich Felker, 2019-09-06, 1 file changed, -13/+18)

  previously, when pthread_create failed due to inability to set explicit scheduling according to the requested attributes, the nascent thread was detached and made responsible for its own cleanup via the standard pthread_exit code path. this left it consuming resources potentially well after pthread_create returned, in a way that the application could not see or mitigate, and unnecessarily exposed its existence to the rest of the implementation via the global thread list.

  instead, attempt explicit scheduling early and reuse the failure path for __clone failure if it fails. the nascent thread's exit futex is not needed for unlocking the thread list, since the thread calling pthread_create holds the thread list lock the whole time, so it can be repurposed to ensure the thread has finished exiting. no pthread_exit is needed, and freeing the stack, if needed, can happen just as it would if __clone failed.

* set explicit scheduling for new thread from calling thread, not self (Rich Felker, 2019-09-06, 1 file changed, -21/+12)

  if setting scheduling properties succeeds, the new thread may end up with lower priority than the caller, and may be unable to continue running due to another intermediate-priority thread. this produces a priority inversion situation for the thread calling pthread_create, since it cannot return until the new thread reports success.

  originally, the parent was responsible for setting the new thread's priority; commits b8742f32602add243ee2ce74d804015463726899 and 40bae2d32fd6f3ffea437fa745ad38a1fe77b27e changed it as part of trimming down the pthread structure. since then, commit 04335d9260c076cf4d9264bd93dd3b06c237a639 partly reversed the changes, but did not switch responsibilities back. do that now.

* fix unsynchronized decrement of thread count on pthread_create error (Rich Felker, 2019-09-06, 1 file changed, -1/+2)

  commit 8f11e6127fe93093f81a52b15bb1537edc3fc8af wrongly documented that all changes to libc.threads_minus_1 were guarded by the thread list lock, but the decrement for failed SYS_clone took place after the thread list lock was released.

* add public declaration for optreset under appropriate feature profiles (Rich Felker, 2019-08-30, 1 file changed, -0/+1)

  commit 030e52639248ac8417a4934298caa78c21a228d1 added optreset, a BSD extension to getopt duplicating the functionality (also an extension) of setting optind to 0, but failed to provide a public declaration for it. according to the BSD documentation and headers, the application is not supposed to need to provide its own declaration.

* add posix_spawn [f]chdir file actions (Rich Felker, 2019-08-30, 4 files changed, -0/+45)

  these are presently extensions, thus named with _np to match glibc and other implementations that provide them; however they are likely to be standardized in the future without the _np suffix as a result of Austin Group issue 1208. if so, both names will be kept as aliases.

* add copy_file_range system call wrapper (Árni Dagur, 2019-08-23, 1 file changed, -0/+8)

* fix external dummy_lock symbol inadvertently introduced in sigaction (Rich Felker, 2019-08-17, 1 file changed, -1/+1)

  commit 9b14ad541068d4f7d0be9bcd1ff4c70090d868d3 introduced this namespace violation.

* fix accidentally-external cmp symbol introduced with catgets (Rich Felker, 2019-08-13, 1 file changed, -1/+1)

  commit 7590203c486d9002522019045d34ee3dee0a66f5 omitted static here.

* add support for powerpc/powerpc64 unaligned relocations (Samuel Holland, 2019-08-11, 1 file changed, -0/+1)

  R_PPC_UADDR32 (R_PPC64_UADDR64) has the same meaning as R_PPC_ADDR32 (R_PPC64_ADDR64), except that its address need not be aligned. For powerpc64, BFD ld(1) will automatically convert between ADDR<->UADDR relocations when the address is/isn't at its native alignment. This will happen if, for example, there is a pointer in a packed struct.

  gold and lld do not currently generate R_PPC64_UADDR64, but pass through misaligned R_PPC64_ADDR64 relocations from object files, possibly relaxing them to misaligned R_PPC64_RELATIVE. In both cases (relaxed or not) this violates the PSABI, which defines the relevant field type as "a 64-bit field occupying 8 bytes, the alignment of which is 8 bytes unless otherwise specified." All three linkers violate the PSABI on 32-bit powerpc, where the only difference is that the field is 32 bits wide, aligned to 4 bytes.

  Currently musl fails to load executables linked by BFD ld containing R_PPC64_UADDR64, with the error "unsupported relocation type 43". This change provides compatibility with BFD ld on powerpc64, and any static linker on either architecture that starts following the PSABI more closely.

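  A minimal sketch of how an unaligned-address relocation can be applied, assuming the usual trick of writing the computed value with memcpy instead of a typed store; the function name and signature are illustrative, not musl's actual relocation code.

    /* Hedged sketch: applying an ADDR/UADDR-style relocation without
     * assuming the target field is naturally aligned. */
    #include <stdint.h>
    #include <string.h>

    static void apply_addr64_reloc(unsigned char *reloc_addr, uint64_t value)
    {
        /* memcpy is valid for any alignment; a direct store through a
         * uint64_t pointer would be undefined on a misaligned field. */
        memcpy(reloc_addr, &value, sizeof value);
    }
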
* add secure_getenv function (Petr Vaněk, 2019-08-08, 1 file changed, -0/+8)

  This function is a GNU extension introduced in glibc 2.17.

* in clock_getres, check for null pointer before storing result (Rich Felker, 2019-08-07, 1 file changed, -1/+1)

  POSIX allows a null pointer, in which case the function only checks the validity of the clock id argument.

* remove spurious null check in clock_settime (Rich Felker, 2019-08-07, 1 file changed, -1/+1)

  at the point of this check, the pointer has already been dereferenced. clock_settime is not defined for null pointer arguments.

* fix regression in recvmmsg with no timeout (Rich Felker, 2019-08-07, 1 file changed, -1/+1)

  somewhat analogous to commit d0b547dfb5f7678cab6bc39dd736ed6454357ca4, but here the omission of the null timeout check was in the time64 syscall code path. this code is not yet used except on x32.

* add non-stub implementation of catgets localization functions (Rich Felker, 2019-08-07, 3 files changed, -3/+114)

  these accept the netbsd/openbsd message catalog file format, consisting of a sorted list of set headers and a sorted list of message headers for each set, admitting trivial binary search for lookups.

  the gnu format was not chosen because it's unusably bad. it does not admit efficient (log time or better) lookups; rather, it requires linear search or hash table lookups, and the hash function is awful: it's literally set_id*msg_id.

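  To illustrate why sorted headers matter, here is a generic binary search over a sorted header table; the struct layout is hypothetical and is not the actual NetBSD/OpenBSD catalog record format.

    /* Illustrative only: log-time lookup over sorted message headers. */
    struct msg_hdr { int msg_id; unsigned off, len; };

    static const struct msg_hdr *find_msg(const struct msg_hdr *tab, int n, int id)
    {
        int lo = 0, hi = n;
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            if (tab[mid].msg_id == id) return &tab[mid];
            if (tab[mid].msg_id < id) lo = mid + 1;
            else hi = mid;
        }
        return 0;   /* not found: caller falls back to the default string */
    }
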
* fix regression in select with no timeout (Rich Felker, 2019-08-07, 1 file changed, -1/+2)

  commit 722a1ae3351a03ab25010dbebd492eced664853b inadvertently passed a copy of {s,us} to the syscall even if the timeout argument tv was null, thereby causing immediate timeout (polling) in place of unlimited timeout. only archs using SYS_select were affected.

* fix failure of glob to match broken symlinks under some conditions (Rich Felker, 2019-08-07, 1 file changed, -5/+12)

  when the pattern ended with one or more literal path components, or when the GLOB_MARK flag was passed to request that glob flag directory results and the type obtained by readdir was unknown or inconclusive (symlink), the stat function was called to evaluate existence and/or determine type. however, stat fails with ENOENT for broken symlinks, and this caused the match to be omitted from the results.

  instead, use stat only for the unknown/inconclusive cases with GLOB_MARK, and otherwise, or if stat fails, use lstat if existence still needs to be determined. this minimizes the number of costly syscalls, performing both only in the case where GLOB_MARK is in use and there is a final literal path component which is a broken symlink.

  based on/simplified from patch by James Y Knight.

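  A rough sketch of the decision just described, not the literal musl glob code; DT_UNKNOWN/DT_LNK stand in for "readdir result inconclusive" and the helper name is made up.

    #include <sys/stat.h>
    #include <dirent.h>

    static int exists_and_maybe_dir(const char *path, int want_mark,
                                    unsigned char d_type, int *is_dir)
    {
        struct stat st;
        *is_dir = 0;
        if (want_mark && (d_type == DT_UNKNOWN || d_type == DT_LNK)) {
            if (!stat(path, &st)) { *is_dir = S_ISDIR(st.st_mode); return 1; }
            /* stat failed (e.g. broken symlink): fall through to lstat so
             * the entry is still reported as existing. */
        }
        return !lstat(path, &st);
    }
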
* in arm cancellation point asm, don't unnecessarily preserve link register (Patrick Oppenlander, 2019-08-06, 1 file changed, -4/+4)

  The only reason we needed to preserve the link register was because we were using a branch-link instruction to branch to __cp_cancel. Replacing this with a branch means we can avoid the save/restore as the link register is no longer modified.

* glob: implement GLOB_TILDE and GLOB_TILDE_CHECK (Ismael Luceno, 2019-08-06, 1 file changed, -1/+41)

* use setitimer function rather than syscall to implement alarm (Rich Felker, 2019-08-05, 1 file changed, -3/+3)

  otherwise alarm will break on 32-bit archs when time_t is changed to 64-bit. a second itimerval object is introduced for retrieving the old value, since the setitimer function has restrict-qualified arguments.

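  A minimal sketch of alarm() built on the setitimer() function, using a separate object for the old value as described; the name my_alarm and the rounding of a partial remaining second up are illustrative assumptions.

    #include <sys/time.h>

    unsigned my_alarm(unsigned seconds)
    {
        struct itimerval it = { .it_value.tv_sec = seconds };
        struct itimerval old = { 0 };
        /* separate 'old' object because setitimer's arguments are
         * restrict-qualified */
        setitimer(ITIMER_REAL, &it, &old);
        return old.it_value.tv_sec + !!old.it_value.tv_usec;
    }
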
* fix build regression in i386 asm for atan2, atan2f (Rich Felker, 2019-08-05, 2 files changed, -2/+2)

  commit f3ed8bfe8a82af1870ddc8696ed4cc1d5aa6b441 inadvertently removed labels that were still needed.

* fix x87 stack imbalance in corner cases of i386 math asm (Rich Felker, 2019-08-05, 8 files changed, -44/+14)

  commit 31c5fb80b9eae86f801be4f46025bc6532a554c5 introduced underflow code paths for the i386 math asm, along with checks on the fpu status word to skip the underflow-generation instructions if the underflow flag was already raised.

  unfortunately, at least one such path, in log1p, returned with 2 items on the x87 stack rather than just 1 item for the return value. this is a violation of the ABI's calling convention, and could cause subsequent floating point code to produce NANs due to x87 stack overflow. if floating point results are used in flow control, this can lead to runaway wrong code execution.

  rather than reviewing each "underflow already raised" code path for correctness, remove them all. they're likely slower than just performing the underflow code unconditionally, and significantly more complex.

  all of this code should be ripped out and replaced by C source files with inline asm. doing so would preclude this kind of error by having the compiler perform all x87 stack register allocation and stack manipulation, and would produce comparable or better code. however such a change is a much larger project.

* fix regression in clock_gettime on 32-bit archs without vdso (Rich Felker, 2019-08-05, 1 file changed, -0/+1)

  commit 72f50245d018af0c31b38dec83c557a4e5dd1ea8 broke this by creating a code path where r is uninitialized.

* clock_gettime: add support for 32-bit vdso with 64-bit time_t (Rich Felker, 2019-08-02, 1 file changed, -0/+32)

  this fixes a major upcoming performance regression introduced by commit 72f50245d018af0c31b38dec83c557a4e5dd1ea8, whereby 32-bit archs would lose vdso clock_gettime after switching to 64-bit time_t, unless the kernel supports time64 and provides a time64 version of the vdso function. this would incur not just one but two syscalls: first, the failed time64 syscall, then the fallback time32 one.

  overflow of the 32-bit result is detected and triggers a revert to syscalls. normally, on a system that's not Y2038-ready, this would still overflow, but if the process has been migrated to a time64-capable kernel or if the kernel has been hot-patched to add time64 syscalls, it may conceivably work.

* fix missing declarations for pthread_join extensions in source file (Rich Felker, 2019-08-02, 1 file changed, -0/+1)

  per policy, define the feature test macro to get declarations for the pthread_tryjoin_np and pthread_timedjoin_np functions. in the past this has been only for checking; with 32-bit archs getting 64-bit time_t it will also be necessary for symbols to get redirected correctly.

* clock_gettime: add time64 syscall support, decouple 32-bit time_t (Rich Felker, 2019-08-02, 1 file changed, -0/+19)

  the time64 syscall has to be used if time_t is 64-bit, since there's no way of knowing before making a syscall whether the result will fit in 32 bits, and the 32-bit syscalls do not report overflow as an error.

  on 64-bit archs, there is no change to the code after preprocessing. on current 32-bit archs, the result is now read from the kernel through a long[2] array, then copied into the timespec, to remove the assumption that time_t is the same as long.

  vdso clock_gettime is still used in place of a syscall if available. 32-bit archs with 64-bit time_t must use the time64 version of the vdso function; if it's not available, performance will significantly suffer. support for both vdso functions could be added, but would break the ability to move a long-lived process from a pre-time64 kernel to one that can outlast Y2038 with checkpoint/resume, at least without added hacks to identify that the 32-bit function is no longer usable and stop using it (e.g. by seeing negative tv_sec). this possibility may be explored in future work on the function.

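  A simplified sketch of the try-time64-then-fall-back pattern and the long[2] widening described above, using the public syscall() wrapper rather than musl's internal __syscall; the vdso path and the detail that musl arranges its 32-bit timespec layout to match the kernel's time64 layout are glossed over.

    #include <time.h>
    #include <unistd.h>
    #include <errno.h>
    #include <sys/syscall.h>

    int cgt_sketch(clockid_t clk, struct timespec *ts)
    {
    #ifdef SYS_clock_gettime64
        if (sizeof(time_t) > 4) {
            /* 64-bit time_t: try time64 first; a 32-bit result could
             * silently overflow, so it can't be preferred. */
            if (!syscall(SYS_clock_gettime64, clk, ts)) return 0;
            if (errno != ENOSYS) return -1;
        }
    #endif
        long ts32[2];   /* legacy result: { sec, nsec } as 32-bit longs */
        if (syscall(SYS_clock_gettime, clk, ts32)) return -1;
        ts->tv_sec = ts32[0];   /* widen without assuming time_t == long */
        ts->tv_nsec = ts32[1];
        return 0;
    }
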
* clock_adjtime: add time64 support, decouple 32-bit time_t, fix x32 (Rich Felker, 2019-08-02, 1 file changed, -0/+110)

  the 64-bit/time64 version of the syscall is not API-compatible with the userspace timex structure definition; fields specified as long have type long long. so when using the time64 syscall, we have to convert the entire structure. this was always the case for x32 as well, but went unnoticed, meaning that clock_adjtime just passed junk to the kernel on x32. it should be fixed now.

  for the fallback case, we avoid encoding any assumptions about the new location of the time member or naming of the legacy slots by accessing them through a union of the kernel type and the new userspace type. the only assumption is that the non-time members live at the same offsets as in the (non-time64, long-based) kernel timex struct. this property saves us from having to convert the whole thing, and avoids a lot of additional work in compat shims.

  the new code is statically unreachable for now except on x32, where it fixes major brokenness. it is permanently unreachable on 64-bit.

* ioctl: add fallback for new time64 SIOCGSTAMP[NS] (Rich Felker, 2019-07-31, 2 files changed, -1/+31)

  without this, the SIOCGSTAMP and SIOCGSTAMPNS ioctl commands, for obtaining timestamps, would stop working on pre-5.1 kernels after time_t is switched to 64-bit and their values are changed to the new time64 versions.

  new code is written such that it's statically unreachable on 64-bit archs, and on existing 32-bit archs until the macro values are changed to activate 64-bit time_t.

* get/setsockopt: add fallback for new time64 SO_RCVTIMEO/SO_SNDTIMEO (Rich Felker, 2019-07-31, 3 files changed, -2/+64)

  without this, the SO_RCVTIMEO and SO_SNDTIMEO socket options would stop working on pre-5.1 kernels after time_t is switched to 64-bit and their values are changed to the new time64 versions.

  new code is written such that it's statically unreachable on 64-bit archs, and on existing 32-bit archs until the macro values are changed to activate 64-bit time_t.

* make __socketcall analogous to __syscall, error-returning (Rich Felker, 2019-07-31, 1 file changed, -6/+6)

  the __socketcall and __socketcall_cp macros are remnants from a really old version of the syscall-mechanism infrastructure, and don't follow the pattern that the "__" version of the macro returns the raw negated error number rather than setting errno and returning -1. for time64 purposes, some socket syscalls will need to operate on the error value rather than returning immediately, so fix this up so they can use it.

* sysvipc: overhaul {sem,shm,msg}ctl for time64 (Rich Felker, 2019-07-31, 4 files changed, -12/+41)

  being "ctl" functions that take command numbers, these will be handled like ioctl/sockopt/etc., using new command numbers for the time64 variants with an "IPC_TIME64" bit added to their values. to obtain such a reserved bit, we reuse the IPC_64 bit, 0x100, which served only as part of the libc-to-kernel interface, not as a public interface of the libc functions.

  using new command numbers avoids the need for compat shims (in ABIs doing time64 through symbol redirection and compat shims) and, by virtue of having a fixed time64 bit for all commands, we can ensure that libc can perform the appropriate translations, even if the application is using new commands from a newer version of the libc headers than the libc available at runtime.

  for the vast majority of 32-bit archs, the kernel {sem,shm,msq}id64_ds definitions left padding space intended for expanding their time_t fields to 64 bits in-place, and it would have been really nice to be able to do time64 support that way. however the padding was almost always in little-endian order (except on powerpc, and for msqid_ds only on mips, where it matched the arch's byte order), and more importantly, the alignment was overlooked. in semid_ds and msqid_ds, the time_t members were not suitably aligned to be expanded to 64-bit, due to the ipc_perm header consisting of 9 32-bit words -- except on powerpc where ipc_perm contains an extra padding word. in shmid_ds, the time_t members were suitably aligned, except that mips (accidentally?) omitted the padding for them altogether.

  as a result, we're stuck with adding new time_t fields on the end of the structures, and assembling the 32-bit lo/hi parts (or 16-bit hi parts, for mips shmid_ds, which lacked sufficient reserved space for full 32-bit hi parts) to fill them in.

  all of the functional changes here are conditional on the IPC_TIME64 macro having a nonzero definition, which will only happen when IPC_STAT is redefined for 32-bit archs, and on time_t being larger than long, so for now the new code is all dead code.

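  For clarity, assembling a 64-bit time from split 32-bit lo/hi fields is just a shift and an or; the field and function names below are hypothetical, not the actual structure member names.

    #include <stdint.h>

    /* Illustrative only: combine lo/hi halves appended at the end of a
     * kernel sysvipc structure into a full 64-bit time value. */
    static int64_t combine_time(uint32_t lo, uint32_t hi)
    {
        return (int64_t)hi << 32 | lo;
    }
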
* fix semctl with SEM_STAT_ANY (Rich Felker, 2019-07-31, 1 file changed, -1/+1)

  due to the variadic signature, semctl needs to be made aware of any new commands that take arguments. this was overlooked when commit af55070eae5438476f921d827b7ae49e8141c3fe added SEM_STAT_ANY.

* move IPC_64 from public bits/ipc.h to syscall_arch.h (Rich Felker, 2019-07-30, 1 file changed, -0/+6)

  the definition of the IPC_64 macro controls the interface between libc and the kernel through syscalls; it's not a public API. the meaning is rather obscure.

  long ago, Linux's sysvipc *id_ds structures used 16-bit uids/gids and wrong types for a few other fields. this was in the libc5 era, before glibc. the IPC_64 flag (64 is a misnomer; it's more like 32) tells the kernel to use the modern[-ish] versions of the structures.

  the definition of IPC_64 has nothing to do with whether the arch is 32- or 64-bit. rather, due to either historical accident or intentional obnoxiousness, the kernel only accepts and masks off the 0x100 IPC_64 flag conditional on CONFIG_ARCH_WANT_IPC_PARSE_VERSION, i.e. for archs that want to provide, or that accidentally provided, both. for archs which don't define this option, no masking is performed and commands with the 0x100 bit set will fail as invalid. so ultimately, the definition is just a matter of matching an arbitrary switch defined per-arch in the kernel.

* select: overhaul for time64 (Rich Felker, 2019-07-30, 1 file changed, -13/+31)

  major changes are made alongside adding time64 syscall support to account for issues found during research.

  select historically accepts non-normalized (tv_usec not restricted to less than 1000000) timeouts, and the kernel normalizes them, but the normalization code is buggy and subject to integer overflows. since normalization is needed anyway when using SYS_pselect6 or SYS_pselect6_time64 as the backend, simply do it up-front to eliminate both code path complexity and the possibility of kernel bugs.

  as a side effect, select no longer updates the caller's timeout timeval with the remaining time. previously, archs that used SYS_select updated it and archs that used SYS_pselect6 didn't. this change may turn out to be controversial and may need revisiting, but in any case the old behavior was not strictly conforming. POSIX allows modification of the timeout "upon successful completion", but the Linux syscall modifies it upon unsuccessful completion (EINTR) as well (and presumably each time the syscall stops and restarts before it's known whether completion will be successful). it's possible that this language does not reflect the actual intent of the standard, since other historical implementations probably behaved like Linux, but that should be clarified if there's a desire to bring the old behavior back. regardless, programs that are depending on this are not correct and are already broken on some archs we support.

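  A minimal sketch of the up-front normalization idea, assuming ordinary struct timeval semantics; the overflow clamping done by the real code is omitted here.

    #include <sys/time.h>

    /* Fold excess microseconds into seconds so tv_usec < 1000000 before
     * handing the timeout to the kernel. */
    static void normalize_tv(struct timeval *tv)
    {
        if (tv->tv_usec >= 1000000) {
            tv->tv_sec += tv->tv_usec / 1000000;
            tv->tv_usec %= 1000000;
        }
    }
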
* recvmmsg: add time64 syscall support, decouple 32-bit time_t (Rich Felker, 2019-07-29, 1 file changed, -0/+18)

  the time64 syscall is used only if the timeout does not fit in 32 bits. after preprocessing, the code is unchanged on 64-bit archs. for 32-bit archs, the timeout now goes through an intermediate copy, meaning that the caller does not get back the updated timeout. this is based on my reading of the documentation, which does not document the updating as a contract you can rely on, and mentions that the whole recvmmsg timeout mechanism is buggy and unlikely to be useful. if it turns out that there's interest in making the remaining time officially available to callers, such functionality could be added back later.

* setitimer, getitimer: decouple time_t from long (Rich Felker, 2019-07-29, 4 files changed, -0/+44)

  these functions have no new time64 syscall, so the existence of a time64 syscall cannot be used as the condition for the new code. instead, assume the syscall takes timevals as longs, which is true everywhere but x32, and interface with the kernel through long[4] objects. rather than adding new hacks to special-case x32 here, just add x32-specific source files since a trivial syscall wrapper suffices there.

  the new code paths added in this commit are statically unreachable on all current archs, but will become reachable when 32-bit archs get 64-bit time_t.

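  A sketch of the long[4] marshalling described above; the raw syscall invocation itself is elided, and the assumption (stated in the message) is that the kernel's legacy itimerval uses longs for all four fields.

    #include <sys/time.h>

    /* Flatten an itimerval into four longs so the kernel interface does
     * not depend on the width of the userspace time_t. */
    static void itimerval_to_longs(const struct itimerval *in, long out[4])
    {
        out[0] = in->it_interval.tv_sec;
        out[1] = in->it_interval.tv_usec;
        out[2] = in->it_value.tv_sec;
        out[3] = in->it_value.tv_usec;
        /* the kernel is then passed 'out' in place of the itimerval, and a
         * second long[4] is widened back into the caller's old-value struct */
    }
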
* timerfd: add time64 syscall support, decouple 32-bit time_t (Rich Felker, 2019-07-29, 1 file changed, -0/+42)

  the changes here are semantically and structurally identical to those made to timer_settime and timer_gettime for time64 support.

* sched_rr_get_interval: don't assume time_t is 32-bit on 32-bit archs (Rich Felker, 2019-07-29, 1 file changed, -0/+14)

  as with clock_getres, the time64 syscall for this is not necessary or useful, this time since scheduling timeslices are not on the order of 68 years. if there's a 32-bit syscall, use it and expand the result into timespec; otherwise there is only one syscall and it does the right thing to store to timespec directly. on 64-bit archs, there is no change to the code after preprocessing.

* clock_getres: don't assume time_t is 32-bit on 32-bit archs (Rich Felker, 2019-07-29, 1 file changed, -0/+14)

  the time64 syscall for this is not necessary or useful, since clock resolution is generally better than 68-year granularity. if there's a 32-bit syscall, use it and expand the result into timespec; otherwise there is only one syscall and it does the right thing to store to timespec directly. on 64-bit archs, there is no change to the code after preprocessing.

* timer_gettime: add time64 syscall support, decouple 32-bit time_t (Rich Felker, 2019-07-29, 1 file changed, -0/+16)

  the time64 syscall has to be used if time_t is 64-bit, since there's no way of knowing before making a syscall whether the result will fit in 32 bits, and the 32-bit syscalls do not report overflow as an error.

  on 64-bit archs, there is no change to the code after preprocessing. on current 32-bit archs, the result is now read from the kernel through a long[4] array, then copied into the timespec, to remove the assumption that time_t is the same as long.

* remove x32 syscall timespec fixup hacks (Rich Felker, 2019-07-29, 2 files changed, -43/+4)

  the x32 syscall interfaces treat timespec's tv_nsec member as 64-bit despite the API type being long and long being 32-bit in the ABI. this is no problem for syscalls that store timespecs to userspace as results, but caused uninitialized padding to be misinterpreted as the high bits in syscalls that take timespecs as input. since the beginning of the port, we've dealt with this situation with hacks in syscall_arch.h, and injected between __syscall_cp_c and __syscall_cp_asm, to special-case the syscall numbers that involve timespecs as inputs and copy them to a form suitable to pass to the kernel.

  commit 40aa18d55ab763e69ad16d0cf1cebea708ffde47 set the stage for removal of these hacks by letting us treat the "normal" x32 syscalls dealing with timespec as if they're x32's "time64" syscalls, effectively making x32 a "time64-only 32-bit arch" like riscv32 will be when it's added. since then, all users of syscalls that x32's syscall_arch.h had hacks for have been updated to use time64 syscalls, so the hacks can be removed.

  there are still at least a few other timespec-related syscalls broken on x32, which were overlooked when the x32 hacks were done or added later. these include at least recvmmsg, adjtimex/clock_adjtime, and timerfd_settime, and they will be fixed independently later on.

* utimensat: add time64 syscall support, decouple 32-bit time_t (Rich Felker, 2019-07-29, 1 file changed, -6/+31)

  time64 syscall is used only if it's the only one defined for the arch, or if either of the requested times does not fit in 32 bits. care is taken to normalize the inputs to account for UTIME_NOW or UTIME_OMIT in tv_nsec, in which case tv_sec should be ignored. this is needed not only to avoid spurious time64 syscalls that might waste time failing with ENOSYS, but also to accurately decide whether fallback is possible.

  if the requested time cannot be represented, the function fails with ENOTSUP, defined in general as "The implementation does not support the requested feature or value". neither the time64 syscall, nor this error, can happen on current 32-bit archs where time_t is a 32-bit type, and both are statically unreachable.

  on 64-bit archs, there are only superficial changes to the SYS_futimesat fallback path, which has been modified to pass long[4] instead of struct timeval[2] to the kernel, making it suitable for use on 32-bit archs even once time_t is changed to 64-bit. for 32-bit archs, the call to SYS_utimensat has also been changed to copy the timespecs through an array of long[4] rather than passing the timespec[2] in-place.

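  An illustrative version of the "does this time fit in 32 bits" check that has to ignore tv_sec when tv_nsec carries one of the special values, as the message explains; this is simplified from the real code and the function name is made up.

    #include <stdint.h>
    #include <sys/stat.h>   /* UTIME_NOW, UTIME_OMIT */
    #include <time.h>

    static int fits_in_32bit(const struct timespec *ts)
    {
        if (ts->tv_nsec == UTIME_NOW || ts->tv_nsec == UTIME_OMIT)
            return 1;   /* tv_sec is irrelevant for these */
        return ts->tv_sec >= INT32_MIN && ts->tv_sec <= INT32_MAX;
    }
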
* clock_settime: add time64 syscall support, decouple 32-bit time_t (Rich Felker, 2019-07-29, 1 file changed, -0/+17)

  time64 syscall is used only if it's the only one defined for the arch, or if the requested time does not fit in 32 bits. on current 32-bit archs where time_t is a 32-bit type, this makes it statically unreachable.

  if the time64 syscall is needed because the requested time does not fit in 32 bits, we define this as an error ENOTSUP, for "The implementation does not support the requested feature or value".

  on 64-bit archs, there is no change to the code after preprocessing. on current 32-bit archs, the time is moved through an intermediate copy to remove the assumption that time_t is a 32-bit type.

* timer_settime: add support for time64 syscall, decouple 32-bit time_t (Rich Felker, 2019-07-29, 1 file changed, -0/+25)

  time64 syscall is used only if it's the only one defined for the arch, if either component of the itimerspec does not fit in 32 bits, or if time_t is 64-bit and the caller requested the old value, in which case there's a possibility that the old value might not fit in 32 bits. on current 32-bit archs where time_t is a 32-bit type, this makes it statically unreachable.

  on 64-bit archs, there is no change to the code after preprocessing. on current 32-bit archs, the time is moved through an intermediate copy to remove the assumption that time_t is a 32-bit type.

* pselect, ppoll: add time64 syscall support, decouple 32-bit time_t (Rich Felker, 2019-07-28, 2 files changed, -4/+34)

  time64 syscall is used only if it's the only one defined for the arch, or if the requested timeout length does not fit in 32 bits. on current 32-bit archs where time_t is a 32-bit type, this makes it statically unreachable. on 64-bit archs, there are only superficial changes to the code after preprocessing.

  both before and after these changes, these functions copied their timeout arguments to avoid letting the kernel clobber the caller's copies. now, the copying also serves to change the type from userspace timespec to a pair of longs, which makes a difference only in the 32-bit fallback case, not on 64-bit.

* futex wait operations: add time64 syscall support, decouple 32-bit time_t (Rich Felker, 2019-07-28, 2 files changed, -3/+41)

  thanks to the original factorization using the __timedwait function, there are no FUTEX_WAIT calls anywhere else, giving us a single point of change to make nearly all the timed thread primitives time64-ready. the one exception is the FUTEX_LOCK_PI command for PI mutex timedlock. I haven't tried to make these two points share code, since they have different fallbacks (no non-private fallback needed for PI since PI was added later) and FUTEX_LOCK_PI isn't a cancellation point (thus allowing the whole code path to inline into pthread_mutex_timedlock).

  as for other changes in this series, the time64 syscall is used only if it's the only one defined for the arch, or if the requested timeout does not fit in 32 bits. on current 32-bit archs where time_t is a 32-bit type, this makes it statically unreachable.

  on 64-bit archs, there are only superficial changes to the code after preprocessing. on current 32-bit archs, the time is passed via an intermediate copy to remove the assumption that time_t is a 32-bit type.

* semtimedop: add time64 syscall support, decouple 32-bit time_t (Rich Felker, 2019-07-28, 1 file changed, -2/+24)

  time64 syscall is used only if it's the only one defined for the arch, or if the requested timeout does not fit in 32 bits. on current 32-bit archs where time_t is a 32-bit type, this makes it statically unreachable. on 64-bit archs, there is no change to the code after preprocessing. on current 32-bit archs, the time is passed via an intermediate copy to remove the assumption that time_t is a 32-bit type.

  to avoid duplicating SYS_ipc/SYS_semtimedop choice logic, the code for 32-bit archs "falls through" after updating the timeout argument ts to point to a [compound literal] array of longs. in preparation for "time64-only" 32-bit archs, an extra case is added for neither SYS_ipc nor the non-time64 SYS_semtimedop existing; the ENOSYS failure path here should never be reachable, and is added just in case a compiler can't see that it's not reachable, to avoid spurious static analysis complaints.

* sigtimedwait: add time64 syscall support, decouple 32-bit time_t (Rich Felker, 2019-07-28, 1 file changed, -4/+24)

  time64 syscall is used only if it's the only one defined for the arch, or if the requested timeout length does not fit in 32 bits. on current 32-bit archs where time_t is a 32-bit type, this makes it statically unreachable. on 64-bit archs, there are only superficial changes to the code after preprocessing. on current 32-bit archs, the timeout is passed via an intermediate copy to remove the assumption that time_t is a 32-bit type.

* mq_timedsend, mq_timedreceive: add time64, decouple 32-bit time_t (Rich Felker, 2019-07-28, 2 files changed, -0/+34)

  time64 syscall is used only if it's the only one defined for the arch, or if the requested absolute timeout does not fit in 32 bits. on current 32-bit archs where time_t is a 32-bit type, this makes it statically unreachable. on 64-bit archs, there is no change to the code after preprocessing. on current 32-bit archs, the timeout is passed via an intermediate copy to remove the assumption that time_t is a 32-bit type.