path: root/string/test-strlen.c
author    Adhemerval Zanella <adhemerval.zanella@linaro.org>  2021-03-31 13:53:34 -0300
committer Adhemerval Zanella <adhemerval.zanella@linaro.org>  2021-04-12 18:38:37 -0300
commit    9d7c5cc38e58fb0923e88901f87174a511b61552 (patch)
tree      bfd3d255e520814207a27679c8011ad71d71a5dd /string/test-strlen.c
parent    49a40ba18e2cb948259771317fe6ff6f5eb68683 (diff)
download  glibc-9d7c5cc38e58fb0923e88901f87174a511b61552.tar.gz
          glibc-9d7c5cc38e58fb0923e88901f87174a511b61552.tar.xz
          glibc-9d7c5cc38e58fb0923e88901f87174a511b61552.zip
linux: Normalize and return timeout on select (BZ #27651)
Commit 2433d39b697, which added time64 support to select, changed
the function to use __NR_pselect6 (or __NR_pselect6_time64) on all
architectures.  However, on architectures where the symbol was
implemented with __NR_select, the kernel normalizes the passed timeout
instead of returning EINVAL.  For instance, the input timeval
{ 0, 5000000 } is interpreted as { 5, 0 }.

And as indicated by BZ #27651, these semantics seem to be expected,
and changing them causes performance issues (most likely programs do
not check the return code and keep issuing select with an
unnormalized tv_usec argument).

To avoid semantics that differ depending on which syscall the
architecture uses, select now always normalizes the timeout input.
This is a slight behavior change for some ABIs (for instance aarch64).

Checked on x86_64-linux-gnu and i686-linux-gnu.
Diffstat (limited to 'string/test-strlen.c')
0 files changed, 0 insertions, 0 deletions