For MicroBlaze, setjmp/check-installed-headers-cxx fails with:
../setjmp/setjmp.h:34:8: error: '__jmp_buf_tag' has a field '__jmp_buf_tag::__jmpbuf' whose type depends on the type '<unnamed struct>' which has no linkage [-Werror=subobject-linkage]
This patch fixes this in the same way as for some other architectures:
the struct used for the internal __jmp_buf type is given the tag
__jmp_buf_internal_tag.
Tested (compilation tests) with build-many-glibcs.py.
* sysdeps/microblaze/bits/setjmp.h (__jmp_buf): Give struct tag
__jmp_buf_internal_tag.
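A minimal illustration of the kind of change described (the member names and
array shape below are placeholders, not MicroBlaze's actual register save area):
/* Before: the typedef'd struct is unnamed, so it has no linkage and C++
   headers using __jmp_buf trip -Wsubobject-linkage.  */
typedef struct
  {
    long __regs[18];                    /* placeholder members */
  } __jmp_buf_demo_old[1];
/* After: giving the struct a tag gives it linkage without changing the
   layout.  */
typedef struct __jmp_buf_internal_tag
  {
    long __regs[18];                    /* placeholder members */
  } __jmp_buf_demo_new[1];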
This corresponds to a patch applied to libgcc. In glibc it doesn't
actually affect much (only fma, I think).
The MIPS sfp-machine.h files have an _FP_CHOOSENAN implementation
which emulates hardware semantics of not preserving signaling NaN
payloads for an operation with two NaN arguments (although that
doesn't suffice to avoid sNaN payload preservation in any case with
just one NaN argument).
However, those are only hardware semantics in the legacy NaN case; in
the NAN2008 case, the architecture documentation says hardware
preserves payloads in such cases. Furthermore, this implementation
assumes legacy NaN semantics, so in the NAN2008 case the
implementation actually has the effect of preserving sNaN payloads but
not preserving qNaN payloads, when both should be preserved.
This patch fixes the code just to copy from the first argument.
Tested for mips64 soft-float.
* sysdeps/mips/mips32/sfp-machine.h (_FP_CHOOSENAN): Always
preserve NaN payload if [__mips_nan2008].
* sysdeps/mips/mips64/sfp-machine.h (_FP_CHOOSENAN): Likewise.
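A hedged sketch of the shape of the change, using the standard soft-fp
_FP_CHOOSENAN conventions (the exact macro body in the tree may differ):
/* sysdeps/mips/.../sfp-machine.h (sketch): in the NAN2008 case the
   architecture preserves NaN payloads, so just copy the first operand's
   NaN; the legacy-NaN emulation remains for !__mips_nan2008.  */
#ifdef __mips_nan2008
# define _FP_CHOOSENAN(fs, wc, R, X, Y, OP)    \
  do                                           \
    {                                          \
      R##_s = X##_s;                           \
      _FP_FRAC_COPY_##wc (R, X);               \
      R##_c = FP_CLS_NAN;                      \
    }                                          \
  while (0)
#endif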
Many linknamespace tests fail for MicroBlaze because __backtrace (as
brought in by libc_fatal.c) uses an inline function get_frame_size
which is not declared static. This patch fixes it to be declared
static.
Tested (compilation tests) with build-many-glibcs.py.
[BZ #21022]
* sysdeps/microblaze/backtrace.c (get_frame_size): Make static.
When testing changes to i386 libm functions (that are shadowed for
i686 builds by i686 versions) recently, I saw that the plain i386
libm-test-ulps (as opposed to the i686 multiarch version) needed
updating for tests that had been added since it was last updated.
This patch updates it accordingly.
* sysdeps/i386/fpu/libm-test-ulps: Update.
Since commit 6e46de42fe16, the default strcat implementation is essentially
the same as the specialized ia64 and powerpc ones. This patch removes the
redundant implementations and adjusts the powerpc64 ifunc code to use the
default one.
Checked on powerpc32-linux-gnu (default and power4) and ia64-linux build
and on powerpc64le-linux-gnu.
* sysdeps/ia64/strcat.c: Remove file.
* sysdeps/powerpc/strcat.c: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strcat-power7.c: Use default
C implementation.
* sysdeps/powerpc/powerpc64/multiarch/strcat-power8.c: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strcat-ppc64.c: Likewise.
The update of *adapt_count after the release of the lock causes a race
condition when thread A unlocks, thread B continues and destroys the
mutex, and thread A writes to *adapt_count.
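Schematically, the hazard and the fix look like this (variable names are
illustrative; atomic_store_release is glibc's internal atomic wrapper):
#include <atomic.h>

static void
unlock_racy (int *futex, short *adapt_count)
{
  atomic_store_release (futex, 0);  /* thread B may now destroy the mutex */
  *adapt_count -= 1;                /* racy: may write into freed memory */
}

static void
unlock_fixed (int *futex, short *adapt_count)
{
  *adapt_count -= 1;                /* update metadata while still owning */
  atomic_store_release (futex, 0);  /* then release the lock word */
}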
Similar to BZ#19387, BZ#21014, and BZ#20971, both x86 SSE2 optimized strncat
assembly implementations do not handle size overflow correctly.
The x86_64 one is in fact an issue in strcpy-sse2-unaligned, but it is
also triggered by the optimized strncat implementation.
This patch uses a strategy similar to the one in commit 3daef2c8ee4df2, where
saturated math is used for the overflow case.
Checked on x86_64-linux-gnu and i686-linux-gnu. It fixes BZ #19390.
[BZ #19390]
* string/test-strncat.c (test_main): Add tests with SIZE_MAX as
maximum string size.
* sysdeps/i386/i686/multiarch/strcat-sse2.S (STRCAT): Avoid overflow
in pointer addition.
* sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S (STRCPY):
Likewise.
The lseek consolidation broke lseek64 for MIPS n32, resulting in
io/test-lfs failing with an incorrect return from ftello64. This
configuration uses the lseek syscall with a 64-bit return value; as
the C syscall macros return long, they cannot be used in this case and
so an assembly implementation is needed; accordingly, this patch adds
lseek64 back to syscalls.list for this configuration.
lseek was also broken, truncating the result without checking for
overflow. lseek however was already broken before the consolidation;
it aliased lseek64 so would return an out-of-range value, resulting in
architecturally undefined behavior in the caller if it tried to use a
non-sign-extended value with a 32-bit instruction. This patch adds a
custom lseek implementation in C for n32, which calls __lseek64 to get
the 64-bit value then checks for overflow.
Because the prior lseek breakage did not show in test results, and the
lseek64 breakage showed only indirectly through tests of ftello64,
test coverage was clearly inadequate. This patch extends
io/test-lfs.c to test the lseek64 return value (at a point where it
has already seeked over 2GB into a file), and then to test the lseek
return value (with the latter's expectations depending on whether
off_t is smaller than off64_t).
Tested for mips64 n32. Also tested test-lfs for x86_64 and x86, where
as expected it passes.
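A hedged sketch of an n32 lseek wrapper along the lines described above
(the real file may differ in details such as internal aliases):
#define _GNU_SOURCE 1
#include <errno.h>
#include <sys/types.h>
#include <unistd.h>

extern off64_t __lseek64 (int fd, off64_t offset, int whence);

off_t
__lseek (int fd, off_t offset, int whence)
{
  off64_t res = __lseek64 (fd, offset, whence);
  if (res != (off_t) res)
    {
      /* The 64-bit result does not fit in off_t; report overflow rather
         than returning a truncated value.  */
      errno = EOVERFLOW;
      return -1;
    }
  return (off_t) res;
}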
[BZ #21019]
* sysdeps/unix/sysv/linux/mips/mips64/n32/syscalls.list (lseek64):
New syscall entry.
* sysdeps/unix/sysv/linux/mips/mips64/n32/lseek.c: New file.
* io/test-lfs.c (do_test): Test offset returned from lseek64 and
lseek.
Testing for MIPS soft float shows that the issue with NaN payload
preservation applies to soft float as well as hard float: the
sfp-machine.h emulates hardware non-preservation semantics, although
only for the case of two NaN arguments.
This patch duly changes the MIPS math-tests.h to expect such
non-preservation for soft float as well as hard float. The issue in
the NAN2008 case for which I posted
<https://gcc.gnu.org/ml/gcc-patches/2017-01/msg00034.html>, of sNaN
payloads being preserved but qNaN payloads not being preserved, is not
currently an issue for glibc tests because we don't have any tests
that check for qNaN payloads being preserved by arithmetic, so a
simple __mips_nan2008 conditional suffices without needing compiler
version checks in the __mips_nan2008 case.
Tested for mips64 soft float.
* sysdeps/mips/math-tests.h (SNAN_TESTS_PRESERVE_PAYLOAD): Do not
condition on [__mips_hard_float].
Similar to BZ#19387 and BZ#20971, both i686 optimized memchr assembly
implementations (memchr-sse2-bsf and memchr-sse2) do not handle size
overflow correctly.
This is shown by the new tests added in commit 3daef2c8ee4df29, where
both implementations fail with a size of SIZE_MAX.
This patch uses a strategy similar to the one in commit 3daef2c8ee4df2, where
saturated math is used for the overflow case.
Checked on i686-linux-gnu.
[BZ #21014]
* sysdeps/i386/i686/multiarch/memchr-sse2-bsf.S (MEMCHR): Avoid overflow
in pointer addition.
* sysdeps/i386/i686/multiarch/memchr-sse2.S (MEMCHR): Likewise.
* sysdeps/sparc/nptl/bits/pthreadtypes.h (pthread_cond_t): Adapt to
new condvar.
I've updated copyright dates in glibc for 2017. This is the patch for
the changes not generated by scripts/update-copyrights and subsequent
build / regeneration of generated files.
Please remember to include 2017 in the dates for any new files added
in future (which means updating any existing uncommitted patches you
have that add new files to use the new copyright dates in them).
* NEWS: Update copyright dates.
* catgets/gencat.c (print_version): Likewise.
* csu/version.c (banner): Likewise.
* debug/catchsegv.sh: Likewise.
* debug/pcprofiledump.c (print_version): Likewise.
* debug/xtrace.sh (do_version): Likewise.
* elf/ldconfig.c (print_version): Likewise.
* elf/ldd.bash.in: Likewise.
* elf/pldd.c (print_version): Likewise.
* elf/sotruss.sh: Likewise.
* elf/sprof.c (print_version): Likewise.
* iconv/iconv_prog.c (print_version): Likewise.
* iconv/iconvconfig.c (print_version): Likewise.
* locale/programs/locale.c (print_version): Likewise.
* locale/programs/localedef.c (print_version): Likewise.
* login/programs/pt_chown.c (print_version): Likewise.
* malloc/memusage.sh (do_version): Likewise.
* malloc/memusagestat.c (print_version): Likewise.
* malloc/mtrace.pl: Likewise.
* manual/libc.texinfo: Likewise.
* nptl/version.c (banner): Likewise.
* nscd/nscd.c (print_version): Likewise.
* nss/getent.c (print_version): Likewise.
* nss/makedb.c (print_version): Likewise.
* posix/getconf.c (main): Likewise.
* scripts/test-installation.pl: Likewise.
* sysdeps/unix/sysv/linux/lddlibc4.c (main): Likewise.
The tunables framework allows us to uniformly manage and expose global
variables inside glibc as switches to users. tunables/README has
instructions for glibc developers to add new tunables.
Tunables support can be enabled by passing the --enable-tunables
configure flag to the configure script. This patch only adds the
framework and does not impose any limitations on how tunable values are
read from the user. It also adds the environment variables used for
tweaking malloc behaviour to the tunables framework as a proof of concept
of the compatibility interface.
* manual/install.texi: Add --enable-tunables option.
* INSTALL: Regenerate.
* README.tunables: New file.
* Makeconfig (CPPFLAGS): Define TOP_NAMESPACE.
(before-compile): Generate dl-tunable-list.h early.
* config.h.in: Add HAVE_TUNABLES.
* config.make.in: Add have-tunables.
* configure.ac: Add --enable-tunables option.
* configure: Regenerate.
* csu/init-first.c (__libc_init_first): Move
__libc_init_secure earlier...
* csu/init-first.c (LIBC_START_MAIN):... to here.
Include dl-tunables.h, libc-internal.h.
(LIBC_START_MAIN) [!SHARED]: Initialize tunables for static
binaries.
* elf/Makefile (dl-routines): Add dl-tunables.
* elf/Versions (ld): Add __tunable_set_val to GLIBC_PRIVATE
namespace.
* elf/dl-support (_dl_nondynamic_init): Unset MALLOC_CHECK_
only when !HAVE_TUNABLES.
* elf/rtld.c (process_envvars): Likewise.
* elf/dl-sysdep.c [HAVE_TUNABLES]: Include dl-tunables.h
(_dl_sysdep_start): Call __tunables_init.
* elf/dl-tunable-types.h: New file.
* elf/dl-tunables.c: New file.
* elf/dl-tunables.h: New file.
* elf/dl-tunables.list: New file.
* malloc/tst-malloc-usable-static.c: New test case.
* malloc/Makefile (tests-static): Add it.
* malloc/arena.c [HAVE_TUNABLES]: Include dl-tunables.h.
Define TUNABLE_NAMESPACE.
(DL_TUNABLE_CALLBACK (set_mallopt_check)): New function.
(DL_TUNABLE_CALLBACK_FNDECL): New macro. Use it to define
callback functions.
(ptmalloc_init): Set tunable values.
* scripts/gen-tunables.awk: New file.
* sysdeps/mach/hurd/dl-sysdep.c: Include dl-tunables.h.
(_dl_sysdep_start): Call __tunables_init.
This is a new implementation for condition variables, required
after http://austingroupbugs.net/view.php?id=609 to fix bug 13165. In
essence, we need to be stricter in which waiters a signal or broadcast
is required to wake up; this couldn't be solved using the old algorithm.
ISO C++ made a similar clarification, so this also fixes a bug in
current libstdc++, for example.
We can't use the old algorithm anymore because futexes do not guarantee
to wake in FIFO order. Thus, when we wake, we can't simply let any
waiter grab a signal, but we need to ensure that one of the waiters
happening before the signal is woken up. This is something the previous
algorithm violated (see bug 13165).
There's another issue specific to condvars: ABA issues on the underlying
futexes. Unlike mutexes that have just three states, or semaphores that
have no tokens or a limited number of them, the state of a condvar is
the *order* of the waiters. A waiter on a semaphore can grab a token
whenever one is available; a condvar waiter must only consume a signal
if it is eligible to do so as determined by the relative order of the
waiter and the signal.
Therefore, this new algorithm maintains two groups of waiters: Those
eligible to consume signals (G1), and those that have to wait until
previous waiters have consumed signals (G2). Once G1 is empty, G2
becomes the new G1. 64b counters are used to avoid ABA issues.
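A rough sketch of the state this implies (the field names below approximate
the idea and are not the exact new pthread_cond_t layout):
#include <stdint.h>

struct condvar_sketch
{
  uint64_t wseq;             /* position handed to each new waiter; new
                                waiters always join group G2 */
  uint64_t g1_start;         /* first waiter position belonging to G1 */
  unsigned int g_signals[2]; /* signals available for consumption, per group */
  unsigned int g_size[2];    /* waiters remaining in each group */
  unsigned int g_refs[2];    /* waiters still blocked on the group futex */
};
/* Signals are only ever made available to G1; when G1 runs empty it is
   closed and G2 becomes the new G1.  The 64-bit counters keep reused futex
   words distinguishable, which avoids the ABA problem.  */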
This condvar doesn't yet use a requeue optimization (ie, on a broadcast,
waking just one thread and requeueing all others on the futex of the
mutex supplied by the program). I don't think doing the requeue is
necessarily the right approach (but I haven't done real measurements
yet):
* If a program expects to wake many threads at the same time and make
that scalable, a condvar isn't great anyway because of how it requires
waiters to operate mutually exclusively (due to the mutex usage). Thus, a
thundering herd problem is a scalability problem with or without the
optimization. Using something like a semaphore might be more
appropriate in such a case.
* The scalability problem is actually at the mutex side; the condvar
could help (and it tries to with the requeue optimization), but it
should be the mutex who decides how that is done, and whether it is done
at all.
* Forcing all but one waiter into the kernel-side wait queue of the
mutex prevents/avoids the use of lock elision on the mutex. Thus, it
prevents the only cure against the underlying scalability problem
inherent to condvars.
* If condvars use short critical sections (ie, hold the mutex just to
check a binary flag or such), which they should do ideally, then forcing
all those waiters to proceed serially with kernel-based hand-off (ie,
futex ops in the mutex' contended state, via the futex wait queues) will
be less efficient than just letting a scalable mutex implementation take
care of it. Our current mutex impl doesn't employ spinning at all, but
if critical sections are short, spinning can be much better.
* Doing the requeue stuff requires all waiters to always drive the mutex
into the contended state. This leads to each waiter having to call
futex_wake after lock release, even if this wouldn't be necessary.
[BZ #13165]
* nptl/pthread_cond_broadcast.c (__pthread_cond_broadcast): Rewrite to
use new algorithm.
* nptl/pthread_cond_destroy.c (__pthread_cond_destroy): Likewise.
* nptl/pthread_cond_init.c (__pthread_cond_init): Likewise.
* nptl/pthread_cond_signal.c (__pthread_cond_signal): Likewise.
* nptl/pthread_cond_wait.c (__pthread_cond_wait): Likewise.
(__pthread_cond_timedwait): Move here from pthread_cond_timedwait.c.
(__condvar_confirm_wakeup, __condvar_cancel_waiting,
__condvar_cleanup_waiting, __condvar_dec_grefs,
__pthread_cond_wait_common): New.
(__condvar_cleanup): Remove.
* nptl/pthread_condattr_getclock.c (pthread_condattr_getclock): Adapt.
* nptl/pthread_condattr_setclock.c (pthread_condattr_setclock):
Likewise.
* nptl/pthread_condattr_getpshared.c (pthread_condattr_getpshared):
Likewise.
* nptl/pthread_condattr_init.c (pthread_condattr_init): Likewise.
* nptl/tst-cond1.c: Add comment.
* nptl/tst-cond20.c (do_test): Adapt.
* nptl/tst-cond22.c (do_test): Likewise.
* sysdeps/aarch64/nptl/bits/pthreadtypes.h (pthread_cond_t): Adapt
structure.
* sysdeps/arm/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
* sysdeps/ia64/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
* sysdeps/m68k/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
* sysdeps/microblaze/nptl/bits/pthreadtypes.h (pthread_cond_t):
Likewise.
* sysdeps/mips/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
* sysdeps/nios2/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
* sysdeps/s390/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
* sysdeps/sh/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
* sysdeps/tile/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
* sysdeps/unix/sysv/linux/alpha/bits/pthreadtypes.h (pthread_cond_t):
Likewise.
* sysdeps/unix/sysv/linux/powerpc/bits/pthreadtypes.h (pthread_cond_t):
Likewise.
* sysdeps/x86/bits/pthreadtypes.h (pthread_cond_t): Likewise.
* sysdeps/nptl/internaltypes.h (COND_NWAITERS_SHIFT): Remove.
(COND_CLOCK_BITS): Adapt.
* sysdeps/nptl/pthread.h (PTHREAD_COND_INITIALIZER): Adapt.
* nptl/pthreadP.h (__PTHREAD_COND_CLOCK_MONOTONIC_MASK,
__PTHREAD_COND_SHARED_MASK): New.
* nptl/nptl-printers.py (CLOCK_IDS): Remove.
(ConditionVariablePrinter, ConditionVariableAttributesPrinter): Adapt.
* nptl/nptl_lock_constants.pysym: Adapt.
* nptl/test-cond-printers.py: Adapt.
* sysdeps/unix/sysv/linux/hppa/internaltypes.h (cond_compat_clear,
cond_compat_check_and_clear): Adapt.
* sysdeps/unix/sysv/linux/hppa/pthread_cond_timedwait.c: Remove file ...
* sysdeps/unix/sysv/linux/hppa/pthread_cond_wait.c
(__pthread_cond_timedwait): ... and move here.
* nptl/DESIGN-condvar.txt: Remove file.
* nptl/lowlevelcond.sym: Likewise.
* nptl/pthread_cond_timedwait.c: Likewise.
* sysdeps/unix/sysv/linux/i386/i486/pthread_cond_broadcast.S: Likewise.
* sysdeps/unix/sysv/linux/i386/i486/pthread_cond_signal.S: Likewise.
* sysdeps/unix/sysv/linux/i386/i486/pthread_cond_timedwait.S: Likewise.
* sysdeps/unix/sysv/linux/i386/i486/pthread_cond_wait.S: Likewise.
* sysdeps/unix/sysv/linux/i386/i586/pthread_cond_broadcast.S: Likewise.
* sysdeps/unix/sysv/linux/i386/i586/pthread_cond_signal.S: Likewise.
* sysdeps/unix/sysv/linux/i386/i586/pthread_cond_timedwait.S: Likewise.
* sysdeps/unix/sysv/linux/i386/i586/pthread_cond_wait.S: Likewise.
* sysdeps/unix/sysv/linux/i386/i686/pthread_cond_broadcast.S: Likewise.
* sysdeps/unix/sysv/linux/i386/i686/pthread_cond_signal.S: Likewise.
* sysdeps/unix/sysv/linux/i386/i686/pthread_cond_timedwait.S: Likewise.
* sysdeps/unix/sysv/linux/i386/i686/pthread_cond_wait.S: Likewise.
* sysdeps/unix/sysv/linux/x86_64/pthread_cond_broadcast.S: Likewise.
* sysdeps/unix/sysv/linux/x86_64/pthread_cond_signal.S: Likewise.
* sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S: Likewise.
* sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S: Likewise.
TS 18661-1 defines fromfp functions (fromfp, fromfpx, ufromfp,
ufromfpx, and float and long double variants) to convert from
floating-point to an integer type with any signedness and any given
width up to that of intmax_t, in any of the five IEEE rounding modes
(the usual four for binary floating point, plus rounding to nearest
with ties rounding away from zero), with control of whether in-range
non-integer values should result in the "inexact" exception being
raised. This patch implements these functions for glibc.
These implementations are (apart from raising exceptions) pure integer
implementations; it's entirely possible optimized versions could be
devised for some architectures. A common math/fromfp.h header
provides various common helper code that can readily be shared between
the implementations for different types. For each type, the bulk of
the implementation is also shared between the four functions, with
wrappers that define UNSIGNED and INEXACT macros appropriately before
including the main implementation.
As the functions return intmax_t and uintmax_t without math.h being
allowed to expose those typedef names, they are declared using
__intmax_t and __uintmax_t as obtained from <bits/types.h>.
The FP_INT_* rounding direction macros are defined as ascending
integers in the order the names are listed in the TS; I see no
significant value in allowing architectures to vary the values of
them.
The libm-test machinery is duly adapted to handle unsigned int
arguments, and intmax_t and uintmax_t results. Because each test
input is generally tested for four functions, five rounding modes and
several different widths, the libm-test.inc additions are very large.
Thus, the diffs in the body of this message exclude the libm-test.inc
changes, with the full patch being attached gzipped. The bulk of the
new tests were generated (expanded from a test input plus rounding
results and information about where it lies in the relevant interval
between integers, to libm-test tests for all relevant combinations of
function, rounding direction and width) by a script that's included in
the patch as math/gen-fromfp-tests.py (input data
math/gen-fromfp-tests-inputs); as an ad hoc script that's not really
expected to be rerun, it's not very polished, but it's at least
plausibly useful for adding any further tests for these functions in
future. I may split the libm-test tests up by function in future (so
both libm-test.inc and auto-libm-test-out are split into separate
files, and the tests for each function are also built and run
separately), but not for 2.25.
For no obvious reason, adding tgmath tests for the new functions
resulted in -Wuninitialized errors from test-tgmath.c about the
variable i being used uninitialized. Those errors were correct - the
variable is read by the frexp version in test-tgmath.c (where real
frexp would write through that pointer instead of reading it) - but I
don't know why this patch would result in the pre-existing issue being
newly detected. The patch initializes the variable to avoid those
errors.
With these changes, glibc 2.25 should have all the library features
from TS 18661-1 other than the functions that round result to narrower
type (and constant rounding directions, but I'm considering those
mainly a compiler feature not a library one).
Tested for x86_64, x86, mips64 and powerpc.
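For illustration, a small usage sketch of the new interfaces (the results in
the comments follow the TS 18661-1 semantics described above):
#define __STDC_WANT_IEC_60559_BFP_EXT__ 1
#include <math.h>
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  /* Nearest, ties away from zero, 32-bit-wide signed range; fromfp does
     not raise "inexact".  */
  intmax_t a = fromfp (2.5, FP_INT_TONEARESTFROMZERO, 32);   /* 3 */
  /* Same conversion, but fromfpx raises "inexact" because the result
     differs from the argument; ties round to even.  */
  intmax_t b = fromfpx (2.5, FP_INT_TONEAREST, 32);          /* 2 */
  /* Unsigned variant, rounding upward into an 8-bit-wide range.  */
  uintmax_t c = ufromfp (3.25, FP_INT_UPWARD, 8);            /* 4 */
  printf ("%jd %jd %ju\n", a, b, c);
  return 0;
}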
* math/bits/mathcalls.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
(fromfp): New declaration.
[__GLIBC_USE (IEC_60559_BFP_EXT)] (fromfpx): Likewise.
[__GLIBC_USE (IEC_60559_BFP_EXT)] (ufromfp): Likewise.
[__GLIBC_USE (IEC_60559_BFP_EXT)] (ufromfpx): Likewise.
* math/tgmath.h (__TGMATH_TERNARY_FIRST_REAL_RET_ONLY): New macro.
[__GLIBC_USE (IEC_60559_BFP_EXT)] (fromfp): Likewise.
[__GLIBC_USE (IEC_60559_BFP_EXT)] (ufromfp): Likewise.
[__GLIBC_USE (IEC_60559_BFP_EXT)] (fromfpx): Likewise.
[__GLIBC_USE (IEC_60559_BFP_EXT)] (ufromfpx): Likewise.
* math/math.h: Include <bits/types.h>.
[__GLIBC_USE (IEC_60559_BFP_EXT)] (FP_INT_UPWARD): New enum
constant and macro.
(FP_INT_DOWNWARD): Likewise.
(FP_INT_TOWARDZERO): Likewise.
(FP_INT_TONEARESTFROMZERO): Likewise.
(FP_INT_TONEAREST): Likewise.
* math/Versions (fromfp): New libm symbol at version GLIBC_2.25.
(fromfpf): Likewise.
(fromfpl): Likewise.
(ufromfp): Likewise.
(ufromfpf): Likewise.
(ufromfpl): Likewise.
(fromfpx): Likewise.
(fromfpxf): Likewise.
(fromfpxl): Likewise.
(ufromfpx): Likewise.
(ufromfpxf): Likewise.
(ufromfpxl): Likewise.
* math/Makefile (libm-calls): Add s_fromfpF, s_ufromfpF,
s_fromfpxF and s_ufromfpxF.
* math/gen-fromfp-tests.py: New file.
* math/gen-fromfp-tests-inputs: Likewise.
* math/libm-test.inc: Include <stdint.h>
(check_intmax_t): New function.
(check_uintmax_t): Likewise.
(struct test_fiu_M_data): New type.
(struct test_fiu_U_data): Likewise.
(RUN_TEST_fiu_M): New macro.
(RUN_TEST_LOOP_fiu_M): Likewise.
(RUN_TEST_fiu_U): Likewise.
(RUN_TEST_LOOP_fiu_U): Likewise.
(fromfp_test_data): New array.
(fromfp_test): New function.
(fromfpx_test_data): New array.
(fromfpx_test): New function.
(ufromfp_test_data): New array.
(ufromfp_test): New function.
(ufromfpx_test_data): New array.
(ufromfpx_test): New function.
(main): Call fromfp_test, fromfpx_test, ufromfp_test and
ufromfpx_test.
* math/gen-libm-test.pl (parse_args): Handle u, M and U descriptor
characters.
* math/test-tgmath-ret.c: Include <stdint.h>.
(rm): New variable.
(width): Likewise.
(CHECK_RET_CONST_TYPE): Take extra arguments and pass them to
called function.
(CHECK_RET_CONST_FLOAT): Take extra arguments and pass them to
CHECK_RET_CONST_TYPE.
(CHECK_RET_CONST_DOUBLE): Likewise.
(CHECK_RET_CONST_LDOUBLE): Likewise.
(CHECK_RET_CONST): Take extra arguments and pass them to calls
macros.
(fromfp): New CHECK_RET_CONST call.
(ufromfp): Likewise.
(fromfpx): Likewise.
(ufromfpx): Likewise.
(do_test): Call check_return_fromfp, check_return_ufromfp,
check_return_fromfpx and check_return_ufromfpx.
* math/test-tgmath.c: Include <stdint.h>
(NCALLS): Increase to 138.
(F(compile_test)): Initialize i. Call fromfp functions.
(F(fromfp)): New function.
(F(fromfpx)): Likewise.
(F(ufromfp)): Likewise.
(F(ufromfpx)): Likewise.
* manual/arith.texi (Rounding Functions): Document FP_INT_UPWARD,
FP_INT_DOWNWARD, FP_INT_TOWARDZERO, FP_INT_TONEARESTFROMZERO,
FP_INT_TONEAREST, fromfp, fromfpf, fromfpl, ufromfp, ufromfpf,
ufromfpl, fromfpx, fromfpxf, fromfpxl, ufromfpx, ufromfpxf and
ufromfpxl.
* manual/libm-err-tab.pl (@all_functions): Add fromfp, fromfpx,
ufromfp and ufromfpx.
* math/fromfp.h: New file.
* sysdeps/ieee754/dbl-64/s_fromfp.c: Likewise.
* sysdeps/ieee754/dbl-64/s_fromfp_main.c: Likewise.
* sysdeps/ieee754/dbl-64/s_fromfpx.c: Likewise.
* sysdeps/ieee754/dbl-64/s_ufromfp.c: Likewise.
* sysdeps/ieee754/dbl-64/s_ufromfpx.c: Likewise.
* sysdeps/ieee754/flt-32/s_fromfpf.c: Likewise.
* sysdeps/ieee754/flt-32/s_fromfpf_main.c: Likewise.
* sysdeps/ieee754/flt-32/s_fromfpxf.c: Likewise.
* sysdeps/ieee754/flt-32/s_ufromfpf.c: Likewise.
* sysdeps/ieee754/flt-32/s_ufromfpxf.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_fromfpl.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_fromfpl_main.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_fromfpxl.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_ufromfpl.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_ufromfpxl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_fromfpl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_fromfpl_main.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_fromfpxl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_ufromfpl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_ufromfpxl.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_fromfpl.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_fromfpl_main.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_fromfpxl.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_ufromfpl.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_ufromfpxl.c: Likewise.
* sysdeps/ieee754/ldbl-opt/Makefile (libnldbl-calls): Add fromfp,
ufromfp, fromfpx and ufromfpx.
(CFLAGS-nldbl-fromfp.c): New variable.
(CFLAGS-nldbl-fromfpx.c): Likewise.
(CFLAGS-nldbl-ufromfp.c): Likewise.
(CFLAGS-nldbl-ufromfpx.c): Likewise.
* sysdeps/ieee754/ldbl-opt/nldbl-compat.h: Include <stdint.h>.
* sysdeps/ieee754/ldbl-opt/nldbl-fromfp.c: New file.
* sysdeps/ieee754/ldbl-opt/nldbl-fromfpx.c: Likewise.
* sysdeps/ieee754/ldbl-opt/nldbl-ufromfp.c: Likewise.
* sysdeps/ieee754/ldbl-opt/nldbl-ufromfpx.c: Likewise.
* sysdeps/nacl/libm.abilist: Update.
* sysdeps/unix/sysv/linux/aarch64/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/alpha/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/arm/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/hppa/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/i386/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/ia64/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/m68k/coldfire/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/m68k/m680x0/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/microblaze/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/mips/mips32/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/mips/mips64/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/nios2/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/fpu/libm.abilist:
Likewise.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/nofpu/libm.abilist:
Likewise.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/libm-le.abilist:
Likewise.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/libm.abilist:
Likewise.
* sysdeps/unix/sysv/linux/s390/s390-32/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/s390/s390-64/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/sh/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc32/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc64/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/tile/tilegx/tilegx32/libm.abilist:
Likewise.
* sysdeps/unix/sysv/linux/tile/tilegx/tilegx64/libm.abilist:
Likewise.
* sysdeps/unix/sysv/linux/tile/tilepro/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/x86_64/64/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/x86_64/x32/libm.abilist: Likewise.
TS 18661-1 defines *fromfp* functions, which are declared in math.h
and whose return types are intmax_t and uintmax_t, without allowing
math.h to define those typedefs. (This is similar to e.g. ISO C
declaring vprintf in stdio.h without allowing that header to define
va_list.) Thus, math.h needs to access those typedefs under internal
names.
This patch accordingly arranges for bits/types.h (which defines only
internal names, not public *_t typedefs) to define __intmax_t and
__uintmax_t. stdint.h is made to use bits/types.h and define intmax_t
and uintmax_t using __intmax_t and __uintmax_t, to avoid duplication
of information. (It would be reasonable to define more of the types
in stdint.h - and in sys/types.h, where it duplicates such types -
using information already available in bits/types.h.) The idea is
that the subsequent addition of fromfp functions would then make
math.h include bits/types.h and use __intmax_t and __uintmax_t as the
return types of those functions.
Tested for x86_64 and x86 (testsuite, and that installed stripped
shared libraries are unchanged by the patch).
* bits/types.h (__intmax_t): New typedef.
(__uintmax_t): Likewise.
* sysdeps/generic/stdint.h: Include <bits/types.h>.
(intmax_t): Define using __intmax_t.
(uintmax_t): Define using __uintmax_t.
This patch adds a direct call to the shmget syscall if it is supported by
the kernel features.
Checked on x86_64, i686, powerpc64le, aarch64, and armhf.
* sysdeps/unix/sysv/linux/alpha/syscalls.list (shmget): Remove.
* sysdeps/unix/sysv/linux/arm/syscalls.list (shmget): Likewise.
* sysdeps/unix/sysv/linux/generic/syscalls.list (shmget): Likewise.
* sysdeps/unix/sysv/linux/hppa/syscalls.list (shmget): Likewise.
* sysdeps/unix/sysv/linux/ia64/syscalls.list (shmget): Likewise.
* sysdeps/unix/sysv/linux/microblaze/syscalls.list (shmget): Likewise.
* sysdeps/unix/sysv/linux/mips/mips64/syscalls.list (shmget):
Likewise.
* sysdeps/unix/sysv/linux/s390/s390-64/syscalls.list (shmget):
Likewise.
* sysdeps/unix/sysv/linux/x86_64/syscalls.list (shmget): Likewise.
* sysdeps/unix/sysv/linux/shmget.c (shmget): Use shmget syscall if it
is defined.
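The shape of the change, sketched in C (simplified: the real file also
handles the IPC_64 flag; the macro spellings below are assumptions rather
than verbatim quotes of the patch):
#include <sys/shm.h>
#include <ipc_priv.h>   /* IPCOP_* multiplexer commands */
#include <sysdep.h>     /* INLINE_SYSCALL */

int
shmget (key_t key, size_t size, int shmflg)
{
#ifdef __NR_shmget
  /* Wired syscall available: call it directly.  */
  return INLINE_SYSCALL (shmget, 3, key, size, shmflg);
#else
  /* Otherwise keep going through the ipc() multiplexer.  */
  return INLINE_SYSCALL (ipc, 5, IPCOP_shmget, key, size, shmflg, NULL);
#endif
}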
This patch adds a direct call to the shmdt syscall if it is supported by
the kernel features.
Checked on x86_64, i686, powerpc64le, aarch64, and armhf.
* sysdeps/unix/sysv/linux/alpha/syscalls.list (shmdt): Remove.
* sysdeps/unix/sysv/linux/arm/syscalls.list (shmdt): Likewise.
* sysdeps/unix/sysv/linux/generic/syscalls.list (shmdt): Likewise.
* sysdeps/unix/sysv/linux/hppa/syscalls.list (shmdt): Likewise.
* sysdeps/unix/sysv/linux/ia64/syscalls.list (shmdt): Likewise.
* sysdeps/unix/sysv/linux/microblaze/syscalls.list (shmdt): Likewise.
* sysdeps/unix/sysv/linux/mips/mips64/syscalls.list (shmdt):
Likewise.
* sysdeps/unix/sysv/linux/s390/s390-64/syscalls.list (shmdt):
Likewise.
* sysdeps/unix/sysv/linux/x86_64/syscalls.list (shmdt): Likewise.
* sysdeps/unix/sysv/linux/shmdt.c (shmdt): Use shmdt syscall if it is
defined.
This patch consolidates the shmctl Linux implementation into only
one default file, sysdeps/unix/sysv/linux/shmctl.c. It tries to use
the direct syscall if it is supported; otherwise it will use the old ipc
multiplex mechanism.
The patch also simplifies header inclusion and reorganizes the internal
compat symbol to be built only if the old ipc mechanism is defined.
Checked on x86_64, i686, powerpc64le, aarch64, and armhf.
* sysdeps/unix/sysv/linux/alpha/Makefile (sysdeps_routines): Remove
oldshmctl.
* sysdeps/unix/sysv/linux/alpha/syscalls.list (shmctl): Remove.
* sysdeps/unix/sysv/linux/arm/syscalls.list (shmctl): Likewise.
* sysdeps/unix/sysv/linux/generic/syscalls.list (shmctl): Likewise.
* sysdeps/unix/sysv/linux/hppa/syscalls.list (shmctl): Likewise.
* sysdeps/unix/sysv/linux/ia64/syscalls.list (shmctl): Likewise.
* sysdeps/unix/sysv/linux/microblaze/syscalls.list (shmctl): Likewise.
* sysdeps/unix/sysv/linux/mips/mips64/syscalls.list (shmctl):
Likewise.
* sysdeps/unix/sysv/linux/s390/s390-64/syscalls.list (shmctl):
Likewise.
* sysdeps/unix/sysv/linux/x86_64/syscalls.list (shmctl): Likewise.
* sysdeps/unix/sysv/linux/alpha/shmctl.c: Remove file.
* sysdeps/unix/sysv/linux/arm/shmctl.c: Likewise.
* sysdeps/unix/sysv/linux/microblaze/shmctl.c: Likewise.
* sysdeps/unix/sysv/linux/mips/mips64/shmctl.c: Use default
implementation.
* sysdeps/unix/sysv/linux/shmctl.c (__new_shmctl): Use shmctl syscall
if it is defined.
This patch adds a direct call to the shmat syscall if it is supported by
the kernel features.
Checked on x86_64, i686, powerpc64le, aarch64, and armhf.
* sysdeps/unix/sysv/linux/alpha/syscalls.list (shmat): Remove.
* sysdeps/unix/sysv/linux/arm/syscalls.list (shmat): Likewise.
* sysdeps/unix/sysv/linux/generic/syscalls.list (shmat): Likewise.
* sysdeps/unix/sysv/linux/hppa/syscalls.list (shmat): Likewise.
* sysdeps/unix/sysv/linux/ia64/syscalls.list (shmat): Likewise.
* sysdeps/unix/sysv/linux/microblaze/syscalls.list (shmat): Likewise.
* sysdeps/unix/sysv/linux/mips/mips64/syscalls.list (shmat):
Likewise.
* sysdeps/unix/sysv/linux/s390/s390-64/syscalls.list (shmat):
Likewise.
* sysdeps/unix/sysv/linux/x86_64/syscalls.list (shmat): Likewise.
* sysdeps/unix/sysv/linux/alpha/kernel-features.h (__NR_shmat):
Define to __NR_osf_shmat.
* sysdeps/unix/sysv/linux/shmat.c (shmat): Use shmat syscall if it is
defined.
This patch consolidates the semtimedop Linux implementation into only
one default file, sysdeps/unix/sysv/linux/semtimedop.c. It tries to use
the direct syscall if it is supported; otherwise it will use the old ipc
multiplex mechanism.
Checked on x86_64, i686, powerpc64le, aarch64, and armhf.
* sysdeps/unix/sysv/linux/alpha/syscalls.list (semtimedop): Remove.
* sysdeps/unix/sysv/linux/arm/syscalls.list (semtimedop): Likewise.
* sysdeps/unix/sysv/linux/generic/syscalls.list (semtimedop):
Likewise.
* sysdeps/unix/sysv/linux/hppa/syscalls.list (semtimedop): Likewise.
* sysdeps/unix/sysv/linux/ia64/syscalls.list (semtimedop): Likewise.
* sysdeps/unix/sysv/linux/microblaze/syscalls.list (semtimedop):
Likewise.
* sysdeps/unix/sysv/linux/mips/mips64/syscalls.list (semtimedop):
Likewise.
* sysdeps/unix/sysv/linux/x86_64/syscalls.list (semtimedop): Likewise.
* sysdeps/unix/sysv/linux/m68k/semtimedop.S: Remove file.
* sysdeps/unix/sysv/linux/s390/semtimedop.c: Reorganize headers and
add a comment about s390 syscall difference from default one.
* sysdeps/unix/sysv/linux/semtimedop.c (semtimedop): Use semtimedop
syscall if it is defined.
This patch adds a direct call to the semop syscall if it is supported by
the kernel headers.
Checked on x86_64, i686, powerpc64le, aarch64, and armhf.
* sysdeps/unix/sysv/linux/alpha/syscalls.list (semop): Remove.
* sysdeps/unix/sysv/linux/arm/syscalls.list (semop): Likewise.
* sysdeps/unix/sysv/linux/generic/syscalls.list (semop): Likewise.
* sysdeps/unix/sysv/linux/hppa/syscalls.list (semop): Likewise.
* sysdeps/unix/sysv/linux/ia64/syscalls.list (semop): Likewise.
* sysdeps/unix/sysv/linux/microblaze/syscalls.list (semop): Likewise.
* sysdeps/unix/sysv/linux/mips/mips64/syscalls.list (semop):
Likewise.
* sysdeps/unix/sysv/linux/s390/s390-64/syscalls.list (semop):
Likewise.
* sysdeps/unix/sysv/linux/x86_64/syscalls.list (semop): Likewise.
* sysdeps/unix/sysv/linux/semop.c (semop): Use semop syscall if it is
defined.
This patch adds a direct call to the semget syscall if it is supported by
the kernel features.
Checked on x86_64, i686, powerpc64le, aarch64, and armhf.
* sysdeps/unix/sysv/linux/alpha/syscalls.list (semget): Remove.
* sysdeps/unix/sysv/linux/arm/syscalls.list (semget): Likewise.
* sysdeps/unix/sysv/linux/generic/syscalls.list (semget): Likewise.
* sysdeps/unix/sysv/linux/hppa/syscalls.list (semget): Likewise.
* sysdeps/unix/sysv/linux/ia64/syscalls.list (semget): Likewise.
* sysdeps/unix/sysv/linux/microblaze/syscalls.list (semget): Likewise.
* sysdeps/unix/sysv/linux/mips/mips64/syscalls.list (semget):
Likewise.
* sysdeps/unix/sysv/linux/s390/s390-64/syscalls.list (semget):
Likewise.
* sysdeps/unix/sysv/linux/x86_64/syscalls.list (semget): Likewise.
* sysdeps/unix/sysv/linux/semget.c (semget): Use semget syscall
if it is defined.
This patch consolidates the semctl Linux implementation into only
one default file, sysdeps/unix/sysv/linux/semctl.c. It tries to use
the direct syscall if it is supported; otherwise it will use the old ipc
multiplex mechanism.
The patch also simplifies header inclusion and reorganizes the internal
compat symbol to be built only if the old ipc mechanism is defined.
Checked on x86_64, i686, powerpc64le, aarch64, and armhf.
* sysdeps/unix/sysv/linux/alpha/Makefile (sysdeps_routines): Remove
oldsemctl.
* sysdeps/unix/sysv/linux/alpha/semctl.c: Remove file.
* sysdeps/unix/sysv/linux/arm/semctl.c: Likewise.
* sysdeps/unix/sysv/linux/microblaze/semctl.c: Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc64/semctl.c: Likewise.
* sysdeps/unix/sysv/linux/mips/mips64/semctl.c: Use default
implementation.
* sysdeps/unix/sysv/linux/semctl.c (__new_semctl): Use semctl
syscall if it is defined.
* sysdeps/unix/sysv/linux/generic/syscalls.list (semctl): Remove.
* sysdeps/unix/sysv/linux/alpha/syscalls.list (semctl): Likewise.
* sysdeps/unix/sysv/linux/hppa/syscalls.list (semctl): Likewise.
* sysdeps/unix/sysv/linux/ia64/syscalls.list (semctl): Likewise.
* sysdeps/unix/sysv/linux/s390/s390-64/syscalls.list (semctl):
Likewise.
* sysdeps/unix/sysv/linux/x86_64/syscalls.list (semctl): Likewise.
This patch adds a direct call to the msgget syscall if it is supported by
the kernel features.
Checked on x86_64, i686, powerpc64le, aarch64, and armhf.
* sysdeps/unix/sysv/linux/alpha/syscalls.list (msgget): Remove.
* sysdeps/unix/sysv/linux/arm/syscalls.list (msgget): Likewise.
* sysdeps/unix/sysv/linux/generic/syscalls.list (msgget): Likewise.
* sysdeps/unix/sysv/linux/hppa/syscalls.list (msgget): Likewise.
* sysdeps/unix/sysv/linux/ia64/syscalls.list (msgget): Likewise.
* sysdeps/unix/sysv/linux/microblaze/syscalls.list (msgget): Likewise.
* sysdeps/unix/sysv/linux/mips/mips64/syscalls.list (msgget):
Likewise.
* sysdeps/unix/sysv/linux/s390/s390-64/syscalls.list (msgget):
Likewise.
* sysdeps/unix/sysv/linux/x86_64/syscalls.list (msgget): Likewise.
* sysdeps/unix/sysv/linux/msgget.c (msgget): Use msgget syscall if it
is defined.
This patch adds a direct call to the msgsnd syscall if it is supported by
the kernel features.
Checked on x86_64, i686, powerpc64le, aarch64, and armhf.
* sysdeps/unix/sysv/linux/alpha/syscalls.list (msgsnd): Remove.
* sysdeps/unix/sysv/linux/arm/syscalls.list (msgsnd): Likewise.
* sysdeps/unix/sysv/linux/generic/syscalls.list (msgsnd): Likewise.
* sysdeps/unix/sysv/linux/hppa/syscalls.list (msgsnd): Likewise.
* sysdeps/unix/sysv/linux/ia64/syscalls.list (msgsnd): Likewise.
* sysdeps/unix/sysv/linux/microblaze/syscalls.list (msgsnd): Likewise.
* sysdeps/unix/sysv/linux/mips/mips64/syscalls.list (msgsnd):
Likewise.
* sysdeps/unix/sysv/linux/s390/s390-64/syscalls.list (msgsnd):
Likewise.
* sysdeps/unix/sysv/linux/x86_64/syscalls.list (msgsnd): Likewise.
* sysdeps/unix/sysv/linux/msgsnd.c (__libc_msgsnd): Use msgsnd syscall
if defined.
This patch consolidates the msgrcv Linux implementation into only
one default file, sysdeps/unix/sysv/linux/msgrcv.c. It tries to use
the direct syscall if it is supported; otherwise it will use the old ipc
multiplex mechanism.
Checked on x86_64, i686, powerpc64le, aarch64, and armhf.
* sysdeps/unix/sysv/linux/alpha/syscalls.list (msgctl): Remove.
* sysdeps/unix/sysv/linux/arm/syscalls.list (msgctl): Likewise.
* sysdeps/unix/sysv/linux/generic/syscalls.list (msgctl): Likewise.
* sysdeps/unix/sysv/linux/hppa/syscalls.list (msgctl): Likewise.
* sysdeps/unix/sysv/linux/ia64/syscalls.list (msgctl): Likewise.
* sysdeps/unix/sysv/linux/microblaze/syscalls.list (msgctl): Likewise.
* sysdeps/unix/sysv/linux/s390/s390-64/syscalls.list (msgctl):
Likewise.
* sysdeps/unix/sysv/linux/x86_64/syscalls.list (msgctl): Likewise.
* sysdeps/unix/sysv/linux/mips/mips64/syscalls.list (msgctl):
Likewise.
* sysdeps/unix/sysv/linux/msgrcv.c (__libc_msgrcv): Use msgrcv syscall
if defined.
* sysdeps/unix/sysv/linux/sparc/sparc64/msgrcv.c: Remove file.
This patch consolidates the msgctl Linux implementation into only
one default file, sysdeps/unix/sysv/linux/msgctl.c. It tries to use
the direct syscall if it is supported; otherwise it will use the old ipc
multiplex mechanism.
The patch also simplifies header inclusion and reorganizes the internal
compat symbol to be built only if the old ipc mechanism is defined.
Checked on x86_64, i686, powerpc64le, aarch64, and armhf.
* sysdeps/unix/sysv/linux/alpha/Makefile (sysdeps_routines): Remove
oldmsgctl.
* sysdeps/unix/sysv/linux/alpha/msgctl.c: Remove file.
* sysdeps/unix/sysv/linux/arm/msgctl.c: Likewise.
* sysdeps/unix/sysv/linux/microblaze/msgctl.c: Likewise.
* sysdeps/unix/sysv/linux/alpha/syscalls.list (oldmsgctl): Remove.
* sysdeps/unix/sysv/linux/generic/syscalls.list (msgctl): Likewise.
* sysdeps/unix/sysv/linux/hppa/syscalls.list (msgctl): Likewise.
* sysdeps/unix/sysv/linux/ia64/syscalls.list (msgctl): Likewise.
* sysdeps/unix/sysv/linux/s390/s390-64/syscalls.list (msgctl):
Likewise.
* sysdeps/unix/sysv/linux/x86_64/syscalls.list (msgctl): Likewise.
* sysdeps/unix/sysv/linux/mips/mips64/msgctl.c: Use default
implementation.
* sysdeps/unix/sysv/linux/msgctl.c (__new_msgctl): Use msgctl syscall
if defined.
Some architectures support the old-style IPC and require IPC_64, equal to
0x100, to be passed along with the SysV IPC syscalls, while new architectures
default to the new IPC version (without the flag being set).
This patch refactors the current Linux ipc_priv.h headers in two directions:
- Remove cross-platform references (for instance alpha including the powerpc
definition) and add the required definitions for each port. The
idea is to avoid tying one architecture's definitions to another's and to
make platform changes independent.
- Move all common definitions (the ipc syscall commands) to a common
header, ipc_ops.h.
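A sketch of the per-port piece (the macro name follows glibc's ipc_priv.h
convention; treat it as an assumption rather than a quote of the patch):
/* e.g. sysdeps/unix/sysv/linux/powerpc/ipc_priv.h (old-style IPC): the
   flag is ORed into the command so the kernel uses the new layouts.  */
#define __IPC_64   0x100

/* e.g. sysdeps/unix/sysv/linux/aarch64/ipc_priv.h (new-style IPC): no
   flag is needed.  */
#define __IPC_64   0x0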
* sysdeps/unix/sysv/linux/aarch64/ipc_priv.h: New file.
* sysdeps/unix/sysv/linux/alpha/ipc_priv.h: Avoid including other arch
definitions and define its own.
* sysdeps/unix/sysv/linux/ipc_ops.h: New file.
* sysdeps/unix/sysv/linux/x86_64/ipc_priv.h: Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc64/ipc_priv.h: Likewise.
* sysdeps/unix/sysv/linux/mips/ipc_priv.h: Remove file.
* sysdeps/unix/sysv/linux/mips/mips64/ipc_priv.h: New file.
* sysdeps/unix/sysv/linux/ipc_priv.h: Move ipc syscall operation
definitions to common header.
* sysdeps/unix/sysv/linux/powerpc/ipc_priv.h: Use common syscall
operation from ipc_ops.h.
On the current minimum supported kernels, SysV IPC on Linux is provided
either by the ipc syscall or by corresponding wired syscalls. Also, for
architectures that support the wired syscalls, all of them are supported
as a set (msgctl, msgrcv, msgsnd, msgget, semctl, semget, semop, semtimedop,
shmctl, shmat, shmget, shmdt).
The architectures that only support the ipc syscall are:
- i386, m68k, microblaze, mips32, powerpc (powerpc32, powerpc64, and
powerpc64le), s390 (32 and 64 bits), sh, sparc32, and sparc64.
And the architectures that only support wired syscalls are:
- aarch64, alpha, hppa, ia64, mips64, mips64n32, nios2, tile
(tilepro, tilegx, and tilegx64), and x86_64
Also, arm is the only one that supports both the wired syscalls and the
ipc syscall, although the ipc one is deprecated.
This patch adds a new define, __ASSUME_DIRECT_SYSVIPC_SYSCALL, indicating
that the wired syscalls are supported on the system; the general idea is
to use it where possible.
I also checked the syscall table for all architectures on Linux 4.9,
and there is no change in the support described for Linux 2.6.32/3.2.
* sysdeps/unix/sysv/linux/kernel-features.h
(__ASSUME_DIRECT_SYSVIPC_SYSCALL): New define.
* sysdeps/unix/sysv/linux/i386/kernel-features.h
(__ASSUME_DIRECT_SYSVIPC_SYSCALL): Undef.
* sysdeps/unix/sysv/linux/m68k/kernel-features.h
(__ASSUME_DIRECT_SYSVIPC_SYSCALL): Likewise.
* sysdeps/unix/sysv/linux/mips/kernel-features.h
(__ASSUME_DIRECT_SYSVIPC_SYSCALL): Likewise.
* sysdeps/unix/sysv/linux/powerpc/kernel-features.h
(__ASSUME_DIRECT_SYSVIPC_SYSCALL): Likewise.
* sysdeps/unix/sysv/linux/s390/kernel-features.h
(__ASSUME_DIRECT_SYSVIPC_SYSCALL): Likewise.
* sysdeps/unix/sysv/linux/sh/kernel-features.h
(__ASSUME_DIRECT_SYSVIPC_SYSCALL): Likewise.
* sysdeps/unix/sysv/linux/sparc/kernel-features.h
(__ASSUME_DIRECT_SYSVIPC_SYSCALL): Likewise.
The same error fixed in commit b224637928e9fc04e3cef3e10d02ccf042d01584
happens in the 32-bit implementation of memchr for power7.
This patch adopts the same solution, with a minimal change: it
implements a saturated addition where an overflow sets the pointer to the
maximum value, UINTPTR_MAX.
The P7 code is used for strings <= 32B; for strings > 32B, vectorized loops
are used. This shows an average 25% improvement, depending on the position of
the search character. The performance is the same for shorter strings.
Tested on ppc64 and ppc64le.
Apply the following spelling fix:
$ git grep -El 'implemetn?ation' |
xargs sed -ri 's/implemetn?ation/implementation/g'
[BZ #19514]
* resolv/res_send.c: Fix typo in comment.
* sysdeps/i386/i386-mcount.S: Likewise.
* sysdeps/s390/s390-32/s390-mcount.S: Likewise.
* sysdeps/s390/s390-64/s390x-mcount.S: Likewise.
* sysdeps/sparc/sparc-mcount.S: Likewise.
This patch removes the powerpc assembly implementations of fmax/fmin.
Based on benchtests, the assembly ones show:
$ ./testrun.sh benchtests/bench-fmax
"fmax": {
"": {
"duration": 5.07586e+09,
"iterations": 2.01676e+09,
"max": 1350.39,
"min": 2.073,
"mean": 2.51684
},
"qNaN": {
"duration": 5.09315e+09,
"iterations": 8.4568e+08,
"max": 2788,
"min": 5.806,
"mean": 6.02255
},
"sNaN": {
"duration": 5.09073e+09,
"iterations": 8.42316e+08,
"max": 4215.84,
"min": 5.737,
"mean": 6.04373
}
And
$ ./testrun.sh benchtests/bench-fmin
"fmin": {
"": {
"duration": 5.07711e+09,
"iterations": 2.02982e+09,
"max": 497.094,
"min": 2.073,
"mean": 2.50126
},
"qNaN": {
"duration": 5.09134e+09,
"iterations": 8.46968e+08,
"max": 2255.14,
"min": 5.807,
"mean": 6.01125
},
"sNaN": {
"duration": 5.09122e+09,
"iterations": 8.4746e+08,
"max": 1969.38,
"min": 5.729,
"mean": 6.00763
}
}
The default implementation (math/s_f{max,min}_template.c) shows slightly
better latency in all cases:
$ ./testrun.sh benchtests/bench-fmax
"fmax": {
"": {
"duration": 5.07044e+09,
"iterations": 2.38695e+09,
"max": 2048.58,
"min": 2.073,
"mean": 2.12423
},
"qNaN": {
"duration": 5.09004e+09,
"iterations": 9.45428e+08,
"max": 3306.93,
"min": 5.138,
"mean": 5.38385
},
"sNaN": {
"duration": 5.08458e+09,
"iterations": 1.15959e+09,
"max": 972.008,
"min": 3.321,
"mean": 4.3848
}
}
And:
$ ./testrun.sh benchtests/bench-fmin
"fmin": {
"": {
"duration": 5.06817e+09,
"iterations": 2.3913e+09,
"max": 1177.9,
"min": 2.073,
"mean": 2.11942
},
"qNaN": {
"duration": 5.08857e+09,
"iterations": 9.45656e+08,
"max": 2658.83,
"min": 5.09,
"mean": 5.38099
},
"sNaN": {
"duration": 5.08093e+09,
"iterations": 1.16725e+09,
"max": 1030.74,
"min": 3.323,
"mean": 4.3529
}
}
Both were run with GCC 5.4 (ubuntu 16 default installation) using default
compiler flags on POWER8E 3.4GHz (powerpc64le-linux-gnu).
For input pointers whose value modulo 64 is in the range [49,63], the current
optimized memchr for x86_64 performs a pointer addition which might overflow
if there is no search char in the rest of the 64-byte block:
* sysdeps/x86_64/memchr.S
77 .p2align 4
78 L(unaligned_no_match):
79 add %rcx, %rdx
Add (uintptr_t)s % 16 to n in %rdx.
80 sub $16, %rdx
81 jbe L(return_null)
This patch fixes it by adding saturated math that sets the pointer to the
maximum value (UINTPTR_MAX) if the addition overflows.
Checked on x86_64-linux-gnu and powerpc64-linux-gnu.
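In C terms, the saturating fix amounts to something like this (a sketch of
the idea, not the actual assembly):
#include <stddef.h>
#include <stdint.h>

/* Add the alignment adjustment to the length; if the addition wraps,
   clamp to the largest representable value instead of wrapping around.  */
static inline size_t
saturating_add (size_t len, size_t adjust)
{
  size_t sum = len + adjust;
  if (sum < len)                 /* overflow detected */
    sum = (size_t) UINTPTR_MAX;  /* saturate */
  return sum;
}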
[BZ# 19387]
* sysdeps/x86_64/memchr.S (memchr): Avoid overflow in pointer
addition.
* string/test-memchr.c (do_test): Remove alignment limitation.
(test_main): Add test that trigger BZ# 19387.
These are called from the kernel with the stack at a carefully-
chosen location so that the stack frame can be restored: they must not
move the stack pointer lest garbage be restored into the registers.
We explicitly inhibit protection for SPARC and for signal/sigreturn.c:
other arches either define their sigreturn stubs in .S files, or (i386,
x86_64, mips) use macros expanding to top-level asm blocks and explicit
labels in the text section to mock up a "function" without telling the
compiler that one is there at all.
Add a hidden __stack_chk_fail_local alias to libc.so,
and make sure that on targets which use __stack_chk_fail,
this does not introduce a local PLT reference into libc.so.
Also compile corresponding routines in the static libc.a with the same
flag.
When dynamically linking, ifunc resolvers are called before TLS is
initialized, so they cannot be safely stack-protected.
We avoid disabling stack-protection on large numbers of files by
using __attribute__ ((__optimize__ ("-fno-stack-protector")))
to turn it off just for the resolvers themselves. (We provide
the attribute even when statically linking, because we will later
use it elsewhere too.)
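A sketch of what that looks like on a hypothetical resolver (the symbol
names are made up; only the attribute is taken from the description above):
extern void impl_fast (void);
extern void impl_generic (void);
extern int cpu_has_fast_path (void);

/* The resolver runs before TLS (and thus the stack-guard slot) is set up,
   so stack protection is disabled for just this one function.  */
__attribute__ ((__optimize__ ("-fno-stack-protector")))
static void *
my_func_resolver (void)
{
  return cpu_has_fast_path () ? (void *) &impl_fast : (void *) &impl_generic;
}

void my_func (void) __attribute__ ((ifunc ("my_func_resolver")));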
The address of the stack canary is stored in a per-thread variable,
which means that we must ensure that the TLS area is initialized before
calling any -fstack-protector'ed functions. For dynamically linked
applications, we ensure this (in a later patch) by disabling
-fstack-protector for the whole dynamic linker, but for static
applications, the AT_ENTRY address is called directly by the kernel, so
we must deal with the problem differently.
In static applications, __libc_setup_tls performs the TCB setup and TLS
initialization, so this commit arranges for it to be called early and
unconditionally. The call (and the stack guard initialization) is
before the DL_SYSDEP_OSCHECK hook, which if set will probably call
functions which are stack-protected (it does on Linux and NaCL too). We
also move apply_irel up, so that we can still safely call functions that
require ifuncs while in __libc_setup_tls (though if stack-protection is
enabled we still have to avoid calling functions that are not
stack-protected at this stage).
Currently strsep calls strpbrk, which is now a veneer for strcspn. Calling
strcspn directly is faster. Since it handles a delimiter string of size
1 as a special case, this is not needed in strsep itself. Although this
means there is slightly higher overhead if the delimiter size is 1,
all other cases are slightly faster. The overall performance gain is 5-10%
on AArch64.
The string/bits/string2.h header contains optimizations for constant
delimiters of size 1-3. Benchmarking these showed similar performance for
size 1 (since in all cases strchr/strchrnul is used), while size 2 and 3
can give up to 2x speedup for small input strings. However if these cases
are common it seems much better to add this optimization to strcspn.
So move these header optimizations to string-inlines.c.
Improve the strsep benchmark so that it actually benchmarks something.
The current version contains a delimiter character at every position in the
input string, so there is very little work to do, and the extremely inefficient
simple_strsep implementation appears fastest in every case. The new version
has no match in the input for the fail case and a match halfway into the
input for the success case. The input is then restored so that each iteration
does exactly the same amount of work. Reduce the number of testcases since
simple_strsep takes a lot of time now.
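The resulting strsep is essentially the textbook strcspn-based version; a
sketch consistent with the description above (the real glibc file differs in
internal naming and aliases):
#include <stddef.h>
#include <string.h>

char *
strsep_sketch (char **stringp, const char *delim)
{
  char *begin = *stringp;
  if (begin == NULL)
    return NULL;
  /* Let strcspn find the end of the token; it already special-cases a
     single-character delimiter string internally.  */
  char *end = begin + strcspn (begin, delim);
  if (*end != '\0')
    {
      *end = '\0';
      *stringp = end + 1;
    }
  else
    *stringp = NULL;
  return begin;
}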
* benchtests/bench-strsep.c (oldstrsep): Add old implementation.
(do_one_test) Restore original string so iteration works.
* string/string-inlines.c (do_test): Create better input strings.
(test_main) Reduce number of testruns.
* string/string-inlines.c (__old_strsep_1c): New function.
(__old_strsep_2c): Likewise.
(__old_strsep_3c): Likewise.
* string/strsep.c (__strsep): Remove case of small delim string.
Call strcspn directly rather than strpbrk.
* string/bits/string2.h (__strsep): Remove define.
(__strsep_1c): Remove.
(__strsep_2c): Remove.
(__strsep_3c): Remove.
(strsep): Remove.
* sysdeps/unix/sysv/linux/internal_statvfs.c
(__statvfs_getflags): Rename to __strsep.
Commit 7a5e3d9d633c828d84a9535f26b202a6179978e7 (elf: Assume TLS is
initialized in _dl_map_object_from_fd) removed the last call of
_dl_tls_setup, but did not remove the function itself.
With stack protection enabled, these files have external symbol
references for the first time, so the fact that they are not compiled
with -fPIE and are then linked into a -pie binary starts to hurt.
TS 18661-1 defines roundeven functions that round a floating-point
number to the nearest integer, in that floating-point type, with ties
rounding to even (whereas the round functions round ties away from
zero). As with other such functions, they raise no exceptions apart
from "invalid" for signaling NaNs. There was a previous user request
for this functionality in glibc in
<https://sourceware.org/ml/libc-help/2015-02/msg00005.html>.
This patch implements these functions for glibc. The implementations
use integer bit-manipulation (or roundeven on the high and low parts,
in the IBM long double case). It's possible that there may be faster
approaches on some architectures (in particular, on AArch64 the frintn
instruction should do exactly what's required); I'll leave it to
architecture maintainers or others interested to implement such
architecture-specific versions if desired. (Where architectures have
instructions to round to nearest integer in the current rounding mode,
implementations saving and restoring the rounding mode - and dealing
with exceptions if those instructions generate "inexact" - are also
possible, though their performance depends on the cost of manipulating
exceptions / rounding mode state.)
Tested for x86_64, x86, mips64 and powerpc.
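A few concrete values make the tie-breaking rule explicit (usage sketch; the
results follow the semantics described above):
#define __STDC_WANT_IEC_60559_BFP_EXT__ 1
#include <math.h>
#include <stdio.h>

int
main (void)
{
  printf ("%g %g %g %g\n",
          roundeven (0.5),   /* 0: ties go to even */
          roundeven (1.5),   /* 2 */
          roundeven (2.5),   /* 2 */
          round (2.5));      /* 3: round ties away from zero, for contrast */
  return 0;
}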
* math/bits/mathcalls.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
(roundeven): New declaration.
* math/tgmath.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (roundeven): New
macro.
* math/Versions (roundeven): New libm symbol at version
GLIBC_2.25.
(roundevenf): Likewise.
(roundevenl): Likewise.
* math/Makefile (libm-calls): Add s_roundevenF.
* math/libm-test.inc (roundeven_test_data): New array.
(roundeven_test): New function.
(main): Call roundeven_test.
* math/test-tgmath.c (NCALLS): Increase to 134.
(F(compile_test)): Call roundeven.
(F(roundeven)): New function.
* manual/arith.texi (Rounding Functions): Document roundeven,
roundevenf and roundevenl.
* manual/libm-err-tab.pl (@all_functions): Add roundeven.
* include/math.h (roundeven): Use libm_hidden_proto.
* sysdeps/ieee754/dbl-64/s_roundeven.c: New file.
* sysdeps/ieee754/dbl-64/wordsize-64/s_roundeven.c: Likewise.
* sysdeps/ieee754/flt-32/s_roundevenf.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_roundevenl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_roundevenl.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_roundevenl.c: Likewise.
* sysdeps/ieee754/ldbl-opt/Makefile (libnldbl-calls): Add
roundeven.
(CFLAGS-nldbl-roundeven.c): New variable.
* sysdeps/ieee754/ldbl-opt/nldbl-roundeven.c: New file.
* sysdeps/nacl/libm.abilist: Update.
* sysdeps/unix/sysv/linux/aarch64/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/alpha/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/arm/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/hppa/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/i386/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/ia64/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/m68k/coldfire/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/m68k/m680x0/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/microblaze/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/mips/mips32/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/mips/mips64/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/nios2/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/fpu/libm.abilist:
Likewise.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/nofpu/libm.abilist:
Likewise.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/libm-le.abilist:
Likewise.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/libm.abilist:
Likewise.
* sysdeps/unix/sysv/linux/s390/s390-32/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/s390/s390-64/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/sh/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc32/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc64/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/tile/tilegx/tilegx32/libm.abilist:
Likewise.
* sysdeps/unix/sysv/linux/tile/tilegx/tilegx64/libm.abilist:
Likewise.
* sysdeps/unix/sysv/linux/tile/tilepro/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/x86_64/64/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/x86_64/x32/libm.abilist: Likewise.
This patch decrements the adapt_count while unlocking the futex
instead of before acquiring the futex, as is done on power, too.
Furthermore a transaction is only started if the futex is currently free.
This check is done after starting the transaction, too.
If the futex is not free and the transaction nesting depth is one,
we can simply end the started transaction instead of aborting it.
The implementation of this check was faulty as it always ended the
started transaction. By using the fallback path, the outermost
transaction was aborted. Now the outermost transaction is aborted
directly.
This patch also adds some commentary and aligns the code in
elision-trylock.c to the code in elision-lock.c as possible.
ChangeLog:
* sysdeps/unix/sysv/linux/s390/lowlevellock.h
(__lll_unlock_elision, lll_unlock_elision): Add adapt_count argument.
* sysdeps/unix/sysv/linux/s390/elision-lock.c:
(__lll_lock_elision): Decrement adapt_count while unlocking
instead of before locking.
* sysdeps/unix/sysv/linux/s390/elision-trylock.c
(__lll_trylock_elision): Likewise.
* sysdeps/unix/sysv/linux/s390/elision-unlock.c:
(__lll_unlock_elision): Likewise.
This patch implements __libc_tbegin_retry macro which is equivalent to
gcc builtin __builtin_tbegin_retry, except the changes which were applied
to __libc_tbegin in the previous patch.
If tbegin aborts with _HTM_TBEGIN_TRANSIENT, then this macro restores
the fpc and fprs and automatically retries up to retry_cnt tbegins.
Further saving of the state is omitted, as it is already saved in the
first round. Before retrying a further transaction, the
transaction-abort-assist instruction is used to support the cpu.
This macro is now used in function __lll_lock_elision.
ChangeLog:
* sysdeps/unix/sysv/linux/s390/htm.h(__libc_tbegin_retry): New macro.
* sysdeps/unix/sysv/linux/s390/elision-lock.c (__lll_lock_elision):
Use __libc_tbegin_retry macro.
This patch defines __libc_tbegin, __libc_tend, __libc_tabort and
__libc_tx_nesting_depth in htm.h which replaces the direct usage of
equivalent gcc builtins.
We have to use an own inline assembly instead of __builtin_tbegin,
as tbegin has to filter program interruptions which can't be done with
the builtin. Before this change, e.g. a segmentation fault within a
transaction led to a coredump where the instruction pointer pointed
behind the tbegin instruction instead of at the real failing one.
Now the transaction aborts and the code should be reexecuted by the
fallback path without transactions. The segmentation fault will
produce a coredump with the real failing instruction.
The fpc is not saved before starting the transaction. If e.g. the
rounding mode is changed and the transaction aborts afterwards,
the builtin will not restore the fpc. This is now done with the
__libc_tbegin macro.
Now the call saved fprs have to be saved / restored in the
__libc_tbegin macro. Using the gcc builtin had forced the saving /
restoring of fprs at begin / end of e.g. __lll_lock_elision function.
The new macro saves these fprs before tbegin instruction and only
restores them on a transaction abort. Restoring is not needed on
a successfully started transaction.
The used inline assembly does not clobber the fprs / vrs!
Clobbering the latter ones would force the compiler to save / restore
the call saved fprs as those overlap with the vrs, but they only
need to be restored if the transaction fails. Thus the user of the
tbegin macros has to compile the file / function with -msoft-float.
It prevents gcc from using fprs / vrs.
ChangeLog:
* sysdeps/unix/sysv/linux/s390/Makefile (elision-CFLAGS):
Add -msoft-float.
* sysdeps/unix/sysv/linux/s390/htm.h: New File.
* sysdeps/unix/sysv/linux/s390/elision-lock.c:
Use __libc_t* transaction macros instead of __builtin_t*.
* sysdeps/unix/sysv/linux/s390/elision-trylock.c: Likewise.
* sysdeps/unix/sysv/linux/s390/elision-unlock.c: Likewise.
Use C11-like atomics instead of plain memory accesses in s390 lock elision code.
This uses atomic operations to access lock elision metadata that is accessed
concurrently (ie, adapt_count fields). The size of the data is less than a
word but accessed only with atomic loads and stores.
See also x86 commit ca6e601a9d4a72b3699cca15bad12ac1716bf49a:
"Use C11-like atomics instead of plain memory accesses in x86 lock elision."
ChangeLog:
* sysdeps/unix/sysv/linux/s390/elision-lock.c
(__lll_lock_elision): Use atomics to load / store adapt_count.
* sysdeps/unix/sysv/linux/s390/elision-trylock.c
(__lll_trylock_elision): Likewise.