| Commit message | Author | Age | Files | Lines |
|
| |
these macros have the same distinct definition on blackfin, frv, m68k,
mips, sparc and xtensa kernels. POLLMSG and POLLRDHUP additionally
differ on sparc.
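a sketch of the generic header's arrangement after this change (values
illustrative; the real bits/poll.h is empty except on the archs above):

  #include <bits/poll.h>
  #ifndef POLLWRNORM
  #define POLLWRNORM 0x100
  #define POLLWRBAND 0x200
  #endif
  #ifndef POLLMSG
  #define POLLMSG    0x400
  #define POLLRDHUP  0x2000
  #endif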
|
| |
the memory model we use internally for atomics permits plain loads of
values which may be subject to concurrent modification without
requiring that a special load function be used. since a compiler is
free to make transformations that alter the number of loads or the way
in which loads are performed, the compiler is theoretically free to
break this usage. the most obvious concern is with atomic cas
constructs: something of the form tmp=*p;a_cas(p,tmp,f(tmp)); could be
transformed to a_cas(p,*p,f(*p)); where the latter is intended to show
multiple loads of *p whose resulting values might fail to be equal;
this would break the atomicity of the whole operation. but even more
fundamental breakage is possible.
with the changes being made now, objects that may be modified by
atomics are modeled as volatile, and the atomic operations performed
on them by other threads are modeled as asynchronous stores by
hardware which happens to be acting on the request of another thread.
such modeling of course does not itself address memory synchronization
between cores/cpus, but that aspect was already handled. this all
seems less than ideal, but it's the best we can do without mandating a
C11 compiler and using the C11 model for atomics.
in the case of pthread_once_t, the ABI type of the underlying object
is not volatile-qualified. so we are assuming that accessing the
object through a volatile-qualified lvalue via casts yields volatile
access semantics. the language of the C standard is somewhat unclear
on this matter, but this is an assumption the linux kernel also makes,
and seems to be the correct interpretation of the standard.
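a minimal sketch of the kind of usage this protects (f and the update
loop are illustrative, not actual musl code):

  int a_cas(volatile int *, int, int);   /* returns the old value */

  static int fetch_apply(volatile int *p, int (*f)(int))
  {
          int tmp;
          /* exactly one (volatile) load of *p per iteration; the
             compiler may no longer duplicate or elide it */
          do tmp = *p;
          while (a_cas(p, tmp, f(tmp)) != tmp);
          return tmp;
  }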
|
| |
this syscall allows fexecve to be implemented without /proc, it is new
in linux v3.19, added in commit 51f39a1f0cea1cacf8c787f652f26dfee9611874
(sh and microblaze do not have allocated syscall numbers yet)
an x32 fix is added as well: the io_setup and io_submit syscalls are no
longer common with x86_64, so use the x32-specific numbers.
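for reference, fexecve can then be implemented roughly as follows
(a sketch; SYS_execveat is absent on archs without a number, and error
handling is omitted):

  #include <unistd.h>
  #include <sys/syscall.h>

  int fexecve_sketch(int fd, char *const argv[], char *const envp[])
  {
          /* 0x1000 is AT_EMPTY_PATH: operate on fd itself, not a path */
          return syscall(SYS_execveat, fd, "", argv, envp, 0x1000);
  }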
|
| |
the definitions are generic for all kernel archs. exposure of these
macros now only occurs on the same feature test as for the function
accepting them, which is believed to be more correct.
|
| |
these syscalls are new in linux v3.18. bpf is present on all supported
archs except sh; kexec_file_load is so far only allocated for x86_64
and x32.
bpf was added in linux commit 99c55f7d47c0dc6fc64729f37bf435abf43f4c60
kexec_file_load syscall number was allocated in commit
f0895685c7fd8c938c91a9d8a6f7c11f22df58d2
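until a wrapper exists, use goes through syscall(); a hedged example
creating a trivial map (not part of this commit):

  #include <unistd.h>
  #include <sys/syscall.h>
  #include <linux/bpf.h>          /* union bpf_attr, BPF_MAP_CREATE, ... */

  int create_array_map(void)
  {
          union bpf_attr attr = {
                  .map_type    = BPF_MAP_TYPE_ARRAY,
                  .key_size    = 4,
                  .value_size  = 8,
                  .max_entries = 16,
          };
          return syscall(SYS_bpf, BPF_MAP_CREATE, &attr, sizeof attr);
  }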
|
| |
these are part of the kernel uapi, and some programs (e.g. nodejs) do use them
|
| |
the register constraints in the non-clang case were tested to work on
clang back to 3.2, and earlier versions of clang have known bugs that
preclude building musl.
there may be other reasons to prefer not to use inline syscalls, but
if so the function-call-based implementations should be added back in
a unified way for all archs.
|
| |
calls to __aeabi_read_tp may be generated by the compiler to access
TLS on pre-v6 targets. previously, this function was hard-coded to
call the kuser helper, which would crash on kernels with kuser helper
removed.
to fix the problem most efficiently, the definition of __aeabi_read_tp
is moved so that it's an alias for the new __a_gettp. however, on v7+
targets, code to initialize the runtime choice of thread-pointer
loading code is not even compiled, meaning that defining
__aeabi_read_tp would have caused an immediate crash due to using the
default implementation of __a_gettp with an HCF instruction.
fortunately there is an elegant solution which reduces overall code
size: putting the native thread-pointer loading instruction in the
default code path for __a_gettp, so that separate default/native code
paths are not needed. this function should never be called before
__set_thread_area anyway, and if it is called early on pre-v6
hardware, the old behavior (crashing) is maintained.
ideally __aeabi_read_tp would not be called at all on v7+ targets
anyway -- in fact, prior to the overhaul, the same problem existed,
but it was never caught by users building for v7+ with kuser disabled.
however, it's possible for calls to __aeabi_read_tp to end up in a v7+
binary if some of the object files were built for pre-v7 targets, e.g.
in the case of static libraries that were built separately, so this
case needs to be handled.
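for reference, the native thread-pointer read that now sits in the
default path is roughly the following (shown as C with inline asm; the
real code is assembly and the surrounding dispatch is omitted):

  static void *read_tp_native(void)
  {
          void *tp;
          __asm__ ("mrc p15,0,%0,c13,c0,3" : "=r"(tp));   /* TPIDRURO */
          return tp;
  }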
|
| |
previously, builds for pre-armv6 targets hard-coded use of the "kuser
helper" system for atomics and thread-pointer access, resulting in
binaries that fail to run (crash) on systems where this functionality
has been disabled (as a security/hardening measure) in the kernel.
additionally, builds for armv6 hard-coded an outdated/deprecated
memory barrier instruction which may require emulation (extremely
slow) on future models.
this overhaul replaces the behavior for all pre-armv7 builds (both of
the above cases) to perform runtime detection of the appropriate
mechanisms for barrier, atomic compare-and-swap, and thread pointer
access. detection is based on information provided by the kernel in
auxv: presence of the HWCAP_TLS bit in AT_HWCAP and the architecture
version encoded in AT_PLATFORM. direct use of the instructions is
preferred when possible, since probing for the existence of the kuser
helper page would be difficult and would incur runtime cost.
for builds targeting armv7 or later, the runtime detection code is not
compiled at all, and much more efficient versions of the non-cas
atomic operations are provided by using ldrex/strex directly rather
than wrapping cas.
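a sketch of the sort of check involved, written with getauxval for
clarity (musl reads auxv itself at startup; HWCAP_TLS is 1<<15 in the
kernel's asm/hwcap.h):

  #include <sys/auxv.h>
  #include <elf.h>

  #define HWCAP_TLS (1UL << 15)

  static int can_use_native_insns(void)
  {
          unsigned long hwcap = getauxval(AT_HWCAP);
          const char *plat = (const char *)getauxval(AT_PLATFORM);
          /* AT_PLATFORM looks like "v5l", "v6l", "v7l", ... */
          return plat && plat[0] == 'v' && plat[1] >= '6'
                 && (hwcap & HWCAP_TLS);
  }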
|
| |
these syscalls are new in linux v3.17 and present on all supported
archs except sh.
seccomp was added in commit 48dc92b9fc3926844257316e75ba11eb5c742b2c
it has operation, flags and pointer arguments (if flags==0 then it is
the same as prctl(PR_SET_SECCOMP,...)), the uapi header for flag
definitions is linux/seccomp.h
getrandom was added in commit c6e9d6f38894798696f23c8084ca7edbf16ee895
it provides an entropy source when open("/dev/urandom",..) would fail,
the uapi header for flags is linux/random.h
memfd_create was added in commit 9183df25fe7b194563db3fec6dc3202a5855839c
it allows an anonymous mapping to have an fd that can be shared and
sealed and needs no mount point; the uapi header for flags is linux/memfd.h
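typical use until wrappers exist is via syscall(), e.g. (sketch; flag
values are from the uapi headers named above):

  #include <unistd.h>
  #include <sys/syscall.h>

  int get_entropy(void *buf, size_t len)
  {
          return syscall(SYS_getrandom, buf, len, 0);
  }

  int make_sealable_fd(const char *name)
  {
          /* 3 == MFD_CLOEXEC|MFD_ALLOW_SEALING */
          return syscall(SYS_memfd_create, name, 3);
  }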
|
| |
based on patch by Jens Gustedt.
mtx_t and cnd_t are defined in such a way that they are formally
"compatible types" with pthread_mutex_t and pthread_cond_t,
respectively, when accessed from a different translation unit. this
makes it possible to implement the C11 functions using the pthread
functions (which will dereference them with the pthread types) without
having to use the same types, which would necessitate either namespace
violations (exposing pthread type names in threads.h) or incompatible
changes to the C++ name mangling ABI for the pthread types.
for the rest of the types, things are much simpler; using identical
types is possible without any namespace considerations.
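the trick, roughly: both headers spell out structurally identical
tag-less types, which C treats as compatible across translation units
(member counts here are illustrative, not the exact headers):

  /* pthread.h (simplified) */
  typedef struct { union { int __i[6]; volatile int __vi[6];
                           volatile void *__p[6]; } __u; } pthread_mutex_t;

  /* threads.h (simplified) -- no pthread_* names leak into this header */
  typedef struct { union { int __i[6]; volatile int __vi[6];
                           volatile void *__p[6]; } __u; } mtx_t;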
|
| |
this was broken by commit ea818ea8340c13742a4f41e6077f732291aea4bc.
|
| |
conceptually, a_spin needs to be at least a compiler barrier, so the
compiler will not optimize out loops (and the load on each iteration)
while spinning. it should also be a memory barrier, or the spinning
thread might keep spinning without noticing stores from other threads,
thus delaying for longer than it should.
ideally, an optimal a_spin implementation that avoids unnecessary
cache/memory contention should be chosen for each arch, but for now,
the easiest thing is to perform a useless a_cas on the calling
thread's stack.
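the fallback described above, as a sketch (a_cas declared for context):

  int a_cas(volatile int *, int, int);

  static inline void a_spin(void)
  {
          int tmp = 0;
          a_cas(&tmp, 0, 0);   /* useless cas on our own stack: acts as a
                                  full barrier without contending on any
                                  shared cache line */
  }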
|
| |
unfortunately this needs to be able to vary by arch, because of a huge
mess GCC made: the GCC definition, which became the ABI, depends on
quirks in GCC's definition of __alignof__, which does not match the
formal alignment of the type.
GCC's __alignof__ unexpectedly exposes an implementation detail,
its "preferred alignment" for the type, rather than the formal/ABI
alignment of the type, which it only actually uses in structures. on
most archs the two values are the same, but on some (at least i386)
the preferred alignment is greater than the ABI alignment.
I considered using _Alignas(8) unconditionally, but on at least one
arch (or1k), the alignment of max_align_t with GCC's definition is
only 4 (even the "preferred alignment" for these types is only 4).
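for reference, the generic definition looks roughly like the following;
i386 then needs _Alignas adjustments to match GCC's larger preferred
alignment:

  typedef struct {
          long long __ll;
          long double __ld;
  } max_align_t;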
|
| |
when manipulating the robust list, the order of stores matters,
because the code may be asynchronously interrupted by a fatal signal
and the kernel will then access the robust list in what is essentially
an async-signal context.
previously, aliasing considerations made it seem unlikely that a
compiler could reorder the stores, but proving that they could not be
reordered incorrectly would have been extremely difficult. instead
I've opted to make all the pointers used as part of the robust list,
including those in the robust list head and in the individual mutexes,
volatile.
in addition, the format of the robust list has been changed to point
back to the head at the end, rather than ending with a null pointer.
this is to match the documented kernel robust list ABI. the null
pointer, which was previously used, only worked because faults during
access terminate the robust list processing.
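for reference, the kernel-side layout (from linux/futex.h) that the
circular form matches; musl's internal copies of these pointers are the
ones now declared volatile:

  struct robust_list { struct robust_list *next; };

  struct robust_list_head {
          struct robust_list list;        /* list.next == &list when empty */
          long futex_offset;
          struct robust_list *list_op_pending;
  };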
|
| |
the a_cas_l, a_swap_l, a_swap_p, and a_store_l operations were
probably used a long time ago when only i386 and x86_64 were
supported. as other archs were added, support for them was
inconsistent, and they are obviously not in use at present. having
them around potentially confuses readers working on new ports, and the
type-punning hacks and inconsistent use of types in their definitions
are not a style I wish to perpetuate in the source tree, so removing
them seems appropriate.
|
| |
it's like rename but with flags, e.g. to allow atomic exchange of two files;
introduced in linux 3.15 commit 520c8b16505236fc82daa352e6c5e73cd9870cff
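example use via the raw syscall (RENAME_EXCHANGE is 2 in linux/fs.h; no
libc wrapper is added here):

  #include <unistd.h>
  #include <fcntl.h>             /* AT_FDCWD */
  #include <sys/syscall.h>

  int exchange_paths(const char *a, const char *b)
  {
          return syscall(SYS_renameat2, AT_FDCWD, a, AT_FDCWD, b, 2);
  }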
|
| |
this was one of the main instances of ugly code duplication: all archs
use basically the same types of relocations, but roughly equivalent
logic was duplicated for each arch to account for the different naming
and numbering of relocation types and variation in whether REL or RELA
records are used.
as an added bonus, both REL and RELA are now supported on all archs,
regardless of which is used by the standard toolchain.
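the unified loop has roughly this shape (a sketch, not the actual
dynlink code; the per-type switch body is arch-provided):

  #include <stddef.h>

  static void do_relocs_sketch(unsigned char *base, size_t *rel,
                               size_t rel_size, size_t stride /* 2=REL, 3=RELA */)
  {
          for (; rel_size; rel += stride, rel_size -= stride * sizeof(size_t)) {
                  size_t *reloc_addr = (void *)(base + rel[0]);
                  size_t type = rel[1] & 0x7fffffff;  /* R_TYPE(); mask illustrative */
                  size_t addend = stride == 3 ? rel[2] : *reloc_addr;
                  (void)type; (void)addend;           /* arch R_* cases go here */
          }
  }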
|
| |
the immediate motivation is supporting TLSDESC relocations which
require allocation and thus may fail (unless we pre-allocate), but
this mechanism should also be used for throwing an error on
unsupported or invalid relocation types, and perhaps in certain cases,
for reporting when a relocation is not satisfiable.
|
| |
linux 3.14 introduced sched_getattr and sched_setattr syscalls in
commit d50dde5a10f305253cbc3855307f608f8a3c5f73
and the related SCHED_DEADLINE scheduling policy in
commit aab03e05e8f7e26f51dee792beddcb5cca9215a5
but struct sched_attr, the "extended scheduling parameters data
structure" needed to use the syscalls, is not yet exported to
userspace, so the related uapi definitions are not added yet.
|
| |
armv7/thumb2 provides a way to do atomics in thumb mode, but for armv6
we need a call to arm mode.
this commit is based on a patch by Stephen Thomas which fixed the
armv7 cases but not the armv6 ones.
all of this should be revisited if/when runtime selection of thread
pointer access and atomics is added.
|
| |
The mips arch is special in that it uses different RLIMIT_
numbers than other archs, so allow bits/resource.h to override
the default RLIMIT_ numbers (empty on all archs except mips).
Reported by orc.
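a sketch of the override scheme (mips values per the kernel uapi; the
real headers define the full set and may differ in mechanism):

  /* sys/resource.h: generic defaults, then the per-arch override */
  #define RLIMIT_NOFILE 7
  #define RLIMIT_AS     9
  #include <bits/resource.h>      /* empty on all archs except mips */

  /* arch/mips/bits/resource.h */
  #undef  RLIMIT_NOFILE
  #define RLIMIT_NOFILE 5
  #undef  RLIMIT_AS
  #define RLIMIT_AS     6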
|
| |
aside from potentially offering better performance, this change is
needed since the old coprocessor-based approach to barriers is
deprecated in arm v7, and some compilers/assemblers issue errors when
using the deprecated instruction for v7 targets.
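an illustration of the two forms (sketch; the deprecated one is the
cp15 write that assemblers reject for v7 targets):

  #if __ARM_ARCH_7A__
  #define BARRIER() __asm__ __volatile__ ("dmb ish" ::: "memory")
  #else
  #define BARRIER() __asm__ __volatile__ ("mcr p15,0,%0,c7,c10,5" :: "r"(0) : "memory")
  #endif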
|
| |
the "m" constraint could give a memory reference with an offset that's
not compatible with ldrex/strex, so the arm-specific "Q" constraint is
needed instead.
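e.g. (sketch): with "Q", the operand expands to a plain [register]
address, which is all ldrex/strex accept:

  static inline int load_exclusive(volatile int *p)
  {
          int v;
          __asm__ __volatile__ ("ldrex %0, %1" : "=r"(v) : "Q"(*p));
          return v;
  }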
|
| |
this is perhaps not the optimal implementation; a_cas still compiles
to nested loops due to the different interface contracts of the kuser
helper cas function (whose contract this patch implements) and the
a_cas function (whose contract mimics the x86 cmpxchg). fixing this
may be possible, but it's more complicated and thus deferred until a
later time.
aside from improving performance and code size, this patch also
provides a means of producing binaries which can run on hardened
kernels where the kuser helpers have been disabled. however, at
present this requires producing binaries for armv6k or later, which
will not run on older cpus. a real solution to the problem of kernels
that omit the kuser helpers would be runtime detection, so that
universal binaries which run on all arm cpu models can also be
compatible with all kernel hardening profiles. robust detection
however is a much harder problem, and will be addressed at a later
time.
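the contract difference, roughly: the inner cas (whether the kuser
helper or the new inline version) returns zero on success, while a_cas
must return the old value, hence an outer loop like the following
sketch (the kuser helper address is shown only for concreteness):

  typedef int (*cas0_t)(int oldval, int newval, volatile int *ptr);
  #define KUSER_CMPXCHG ((cas0_t)0xffff0fc0)

  static inline int a_cas_sketch(volatile int *p, int t, int s)
  {
          int old;
          for (;;) {
                  if (!KUSER_CMPXCHG(t, s, p)) return t;  /* success: old was t */
                  if ((old = *p) != t) return old;        /* genuine mismatch */
                  /* else: spurious failure or race back to t; retry */
          }
  }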
|
| |
in the previous changes, I missed the fact that both the prototype of
the sigaltstack function and the definition of ucontext_t depend on
stack_t.
|
| |
it's different at least on mips; the mips version will be fixed in a
separate commit so that the change is visible on its own.
|
| |
the definition was found to be incorrect at least for powerpc, and
fixing this cleanly requires making the definition arch-specific. this
will allow cleaning up the definition for other archs to make it more
specific, and reversing some of the ugliness (time_t hacks) introduced
with the x32 port.
this first commit simply copies the existing definition to each arch
without any changes. this is intentional, to make it easier to review
changes made on a per-arch basis.
|
| |
the reordering of headers caused some risc archs to not see
the __syscall declaration anymore.
this caused build errors on mips with any compiler,
and on arm and microblaze with clang.
we now declare it locally just like the powerpc port does.
|
| |
the fix should be complete on archs that use the generic definitions
(i386, arm, x86_64, microblaze), but mips and powerpc have not been
checked thoroughly and may need more fixes.
|
| |
definition in linux:
#define O_TMPFILE (__O_TMPFILE | O_DIRECTORY)
where __O_TMPFILE and O_DIRECTORY are arch specific
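e.g. on most archs (arm differs in O_DIRECTORY) __O_TMPFILE is
020000000 and O_DIRECTORY is 0200000, giving:

  #define O_TMPFILE 020200000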
|
| |
previously these macros wrongly had type double rather than long
double. I see no way an application could detect the error in C99, but
C11's _Generic can trivially detect it.
at the same time, even though these archs do not have excess
precision, the number of decimal places used to represent these
constants has been increased to 21 to be consistent with the decimal
representations used for the DBL_* macros.
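the kind of C11 check that catches the old error (a sketch):

  #include <float.h>

  _Static_assert(_Generic(LDBL_MAX, long double: 1, default: 0),
                 "LDBL_MAX must have type long double");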
|
| |
atomic store was lacking a barrier, which was fine for legacy arm with
no real smp and kernel-emulated cas, but unsuitable for more modern
systems. the kernel provides another "kuser" function, at 0xffff0fa0,
which could be used for the barrier, but using that would drop support
for kernels 2.6.12 through 2.6.14 unless an extra conditional were
added to check for barrier availability. just using the barrier in the
kernel cas is easier, and, based on my reading of the assembly code in
the kernel, does not appear to be significantly slower.
at the same time, other atomic operations are adapted to call the
kernel cas function directly rather than using a_cas; due to small
differences in their interface contracts, this makes the generated
code much simpler.
|
| |
it turns out that __SOFTFP__ does not indicate the ABI in use but
rather that fpu instructions are not to be used at all. this is
specified in ARM's documentation so I'm unclear on how I previously
got the wrong idea. unfortunately, this resulted in the 0.9.12 release
producing a dynamic linker with the wrong name. fortunately, there do
not yet seem to be any public toolchain builds using the wrong name.
the __ARM_PCS_VFP macro does not seem to be official from ARM, and in
fact it was missing from the very earliest gcc versions (around 4.5.x)
that added -mfloat-abi=hard. it would be possible on such versions to
perform some ugly linker-based tests instead in hopes that the linker
will reject ABI-mismatching object files, if there is demand for
supporting such versions. I would probably prefer to document which
versions are broken and warn users to manually add -D__ARM_PCS_VFP if
using such a version.
there's definitely an argument to be made that the fenv macros should
be exposed even in -mfloat-abi=softfp mode. for now, I have chosen not
to expose them in this case, since the math library will not
necessarily have the capability to raise exceptions (it depends on the
CFLAGS used to compile it), and since exceptions are officially
excluded from the ARM EABI, which the plain "arm" arch aims to
follow.
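the corrected check is roughly of this form (a reloc.h-style sketch;
the exact macro layering differs):

  #if __ARM_PCS_VFP
  #define LDSO_ARCH "armhf"
  #else
  #define LDSO_ARCH "arm"
  #endif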
|
| |
patch by nsz. I've tested it on an armhf machine and it seems to be
working correctly.
|
| |
without these, calls may be resolved incorrectly if the calling code
has been compiled to thumb instead of arm. it's not clear to me at
this point whether crt_arch.h is even working if crt1.c is built as
thumb; this needs testing. but the _init and _fini issues were known
to cause crashes in static-linked apps when libc was built as thumb,
and this commit should fix that issue.
|
| |
this is needed for recently committed sigaction code
|
| |
the only immediate effect of this commit is enabling PIE support on
some archs that did not previously have any Scrt1.s, since the
existing asm files for crt1 override this C code. so some of the
crt_arch.h files committed are only there for the sake of documenting
what their archs "would do" if they used the new C-based crt1.
the expectation is that new archs should use this new system rather
than using heavy asm for crt1. aside from being easier and less
error-prone, it also ensures that PIE support is available immediately
(since Scrt1.o is generated from the same C source, using -fPIC)
rather than having to be added as an afterthought in the porting
process.
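the generic C crt1 is roughly of this shape (signatures and the
_start_c name are assumed from the description; crt_arch.h supplies the
tiny asm entry that calls into the C code with the initial stack
pointer):

  #include "crt_arch.h"

  int main();
  void _init() __attribute__((weak));
  void _fini() __attribute__((weak));
  _Noreturn int __libc_start_main(int (*)(), int, char **,
          void (*)(), void (*)(), void (*)());

  void _start_c(long *p)
  {
          int argc = p[0];
          char **argv = (void *)(p+1);
          __libc_start_main(main, argc, argv, _init, _fini, 0);
  }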
|
| |
this is necessary to meet the C++ ABI target. alternatives were
considered to avoid the size increase for non-sig jmp_buf objects, but
they seemed to have worse properties. moreover, the relative size
increase is only extreme on x86[_64]; one way of interpreting this is
that, if the size increase from this patch makes jmp_buf use too much
memory, then the program was already using too much memory when built
for non-x86 archs.
|
| |
i386 was done with the big commit but I missed the others
|
| |
rather than moving nlink_t back to the arch-specific file, I've added
a macro _Reg defined to the canonical type for register-size values on
the arch. this is not the same as _Addr for (not-yet-supported)
32-on-64 pseudo-archs like x32 and mips n32, so a new macro was
needed.
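conceptually (a sketch; the real definitions live in the alltypes
machinery):

  #define _Addr int          /* pointer-size type on a 32-bit arch */
  #define _Reg  long long    /* register-size type on a 32-on-64 arch like x32 */

  typedef _Reg nlink_t;      /* simplified; signedness per the real header */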
|
| |
since the old, poorly-thought-out musl approach to init/fini arrays on
ARM (when it was the only arch that needed them) was to put the code
in crti/crtn and have the legacy _init/_fini code run the arrays,
adding proper init/fini array support caused the arrays to get
processed twice on ARM. I'm not sure skipping legacy init/fini
processing is the best solution to the problem, but it works, and it
shouldn't break anything since the legacy init/fini system was never
used for ARM EABI.
|
| |
aside from the obvious C++ ABI purpose for this change, it also brings
musl into alignment with the compiler's idea of the definition of
wint_t (as used by -Wformat), and makes the situation less awkward on ARM,
where wchar_t is unsigned.
internal code using wint_t and WEOF was checked against this change,
and while a few cases of storing WEOF into wchar_t were found, they
all seem to operate properly with the natural conversion from unsigned
to signed.
|
| |
the arch-specific bits/alltypes.h.sh has been replaced with a generic
alltypes.h.in and minimal arch-specific bits/alltypes.h.in.
this commit is intended to have no functional changes except:
- exposing additional symbols that POSIX allows but does not require
- changing the C++ name mangling for some types
- fixing the signedness of blksize_t on powerpc (POSIX requires signed)
- fixing the limit macros for sig_atomic_t on x86_64
- making dev_t an unsigned type (ABI matching goal, and more logical)
in addition, some types that were wrongly defined with long on 32-bit
archs were changed to int, and vice versa; this change is
non-functional except for the possibility of making pointer types
mismatch, and only affects programs that were using them incorrectly,
and only at build-time, not runtime.
the following changes were made in the interest of moving
non-arch-specific types out of the alltypes system and into the
headers they're associated with, and also will tend to improve
application compatibility:
- netdb.h now includes netinet/in.h (for socklen_t and uint32_t)
- netinet/in.h now includes sys/socket.h and inttypes.h
- sys/resource.h now includes sys/time.h (for struct timeval)
- sys/wait.h now includes signal.h (for siginfo_t)
- langinfo.h now includes nl_types.h (for nl_item)
for the types in stdint.h:
- types which are of no interest to other headers were moved out of
the alltypes system.
- fast types for 8- and 64-bit are hard-coded (at least for now); only
the 16- and 32-bit ones have reason to vary by arch.
and the following types have been changed for C++ ABI purposes:
- mbstate_t now has a struct tag, __mbstate_t
- FILE's struct tag has been changed to _IO_FILE
- DIR's struct tag has been changed to __dirstream
- locale_t's struct tag has been changed to __locale_struct
- pthread_t is defined as unsigned long in C++ mode only
- fpos_t now has a struct tag, _G_fpos64_t
- fsid_t's struct tag has been changed to __fsid_t
- idtype_t has been made an enum type (also required by POSIX)
- nl_catd has been changed from long to void *
- siginfo_t's struct tag has been removed
- sigset_t has been given a struct tag, __sigset_t
- stack_t has been given a struct tag, sigaltstack
- suseconds_t has been changed to long on 32-bit archs
- [u]intptr_t have been changed from long to int rank on 32-bit archs
- dev_t has been made unsigned
summary of tests that have been performed against these changes:
- nsz's libc-test (diff -u before and after)
- C++ ABI check symbol dump (diff -u before, after, glibc)
- grepped for __NEED, made sure types needed are still in alltypes
- built gcc 3.4.6
|
| |
this change is both to fix one of the remaining type (and thus C++
ABI) mismatches with glibc/LSB and to allow use of the full range of
uid and gid values, if so desired.
passwd/group access functions were not prepared to deal with unsigned
values, so they too have been fixed with this commit.
|