author | Adhemerval Zanella <adhemerval.zanella@linaro.org> | 2021-03-31 13:53:34 -0300
---|---|---
committer | Adhemerval Zanella <adhemerval.zanella@linaro.org> | 2021-04-12 18:38:37 -0300
commit | 9d7c5cc38e58fb0923e88901f87174a511b61552 (patch) |
tree | bfd3d255e520814207a27679c8011ad71d71a5dd | /include/time.h
parent | 49a40ba18e2cb948259771317fe6ff6f5eb68683 (diff) |
download | glibc-9d7c5cc38e58fb0923e88901f87174a511b61552.tar.gz glibc-9d7c5cc38e58fb0923e88901f87174a511b61552.tar.xz glibc-9d7c5cc38e58fb0923e88901f87174a511b61552.zip |
linux: Normalize and return timeout on select (BZ #27651)
The commit 2433d39b697, which added time64 support to select, changed the function to use __NR_pselect6 (or __NR_pselect6_time64) on all architectures. However, on architectures where the symbol was implemented with __NR_select, the kernel normalizes the passed timeout instead of returning EINVAL. For instance, the input timeval { 0, 5000000 } is interpreted as { 5, 0 }.

As indicated by BZ #27651, this semantic seems to be expected, and changing it results in some performance issues (most likely the program does not check the return code and keeps issuing select with an unnormalized tv_usec argument).

To avoid a different semantic depending on which syscall the architecture uses, select now always normalizes the timeout input. This is a slight change for some ABIs (for instance aarch64).

Checked on x86_64-linux-gnu and i686-linux-gnu.
Diffstat (limited to 'include/time.h')
-rw-r--r-- | include/time.h | 5 |
1 file changed, 5 insertions(+), 0 deletions(-)
```diff
diff --git a/include/time.h b/include/time.h
index caf2af5e74..e0636132a6 100644
--- a/include/time.h
+++ b/include/time.h
@@ -502,6 +502,11 @@ time_now (void)
   __clock_gettime (TIME_CLOCK_GETTIME_CLOCKID, &ts);
   return ts.tv_sec;
 }
+
+#define NSEC_PER_SEC 1000000000L	/* Nanoseconds per second.  */
+#define USEC_PER_SEC 1000000L		/* Microseconds per second.  */
+#define NSEC_PER_USEC 1000L		/* Nanoseconds per microsecond.  */
+
 #endif
 #endif
```