these syscalls are new in linux v3.17 and are present on all supported
archs except sh.

seccomp was added in commit 48dc92b9fc3926844257316e75ba11eb5c742b2c.
it takes operation, flags and pointer arguments (with flags==0 it is
equivalent to prctl(PR_SET_SECCOMP,...)); the uapi header with the
flag definitions is linux/seccomp.h.

getrandom was added in commit c6e9d6f38894798696f23c8084ca7edbf16ee895.
it provides an entropy source for cases where open("/dev/urandom",...)
would fail; the uapi header with the flag definitions is linux/random.h.

memfd_create was added in commit 9183df25fe7b194563db3fec6dc3202a5855839c.
it allows anonymous memory to be backed by an fd that can be shared
and sealed and needs no mount point; the uapi header with the flag
definitions is linux/memfd.h.
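as a rough usage sketch (assuming only that SYS_getrandom is defined
by the syscall headers), entropy can be read through the raw syscall
before any libc wrapper exists:

    #include <unistd.h>
    #include <sys/syscall.h>

    /* minimal sketch, not the libc implementation: read len bytes of
       entropy; flags==0 blocks until the kernel pool is initialized */
    int get_entropy(void *buf, size_t len)
    {
            long r = syscall(SYS_getrandom, buf, len, 0);
            return r == (long)len ? 0 : -1;
    }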

---

the C11 _Alignas keyword is not present in C++, and despite it being
in the reserved namespace and thus reasonable to support even in
non-C11 modes, compilers seem to fail to support it.
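a hedged sketch of the usual workaround (the ALIGNED macro name is
hypothetical, not the actual header logic): use _Alignas only in C11
mode and fall back to the GNU attribute, which both C and C++ accept:

    /* hypothetical portability macro */
    #if !defined(__cplusplus) && defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L
    #define ALIGNED(n) _Alignas(n)
    #elif defined(__GNUC__)
    #define ALIGNED(n) __attribute__((__aligned__(n)))
    #endif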

---

based on patch by Jens Gustedt.
mtx_t and cnd_t are defined in such a way that they are formally
"compatible types" with pthread_mutex_t and pthread_cond_t,
respectively, when accessed from a different translation unit. this
makes it possible to implement the C11 functions using the pthread
functions (which will dereference them with the pthread types) without
having to use the same types, which would necessitate either namespace
violations (exposing pthread type names in threads.h) or incompatible
changes to the C++ name mangling ABI for the pthread types.
for the rest of the types, things are much simpler; using identical
types is possible without any namespace considerations.
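a toy illustration of the compatible-types rule (the member counts are
made up, not the real layouts): structurally identical definitions in
different translation units are compatible types under C11 6.2.7, so
each header can use its own name:

    /* translation unit A (pthread side) -- illustrative layout only */
    typedef struct { union { int __i[6]; void *__p[6]; } __u; } pthread_mutex_t;

    /* translation unit B (threads.h side): same member sequence, no
       pthread names exposed; the two types are compatible across units */
    typedef struct { union { int __i[6]; void *__p[6]; } __u; } mtx_t;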

---

this was broken by commit ea818ea8340c13742a4f41e6077f732291aea4bc.

---

conceptually, a_spin needs to be at least a compiler barrier, so the
compiler will not optimize out loops (and the load on each iteration)
while spinning. it should also be a memory barrier, or the spinning
thread might keep spinning without noticing stores from other threads,
thus delaying for longer than it should.
ideally, an optimal a_spin implementation that avoids unnecessary
cache/memory contention should be chosen for each arch, but for now,
the easiest thing is to perform a useless a_cas on the calling
thread's stack.
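a sketch of that fallback, assuming a_cas is the arch's atomic
compare-and-swap primitive:

    /* dummy CAS on a stack slot: full barrier semantics without
       generating contention on any shared cache line */
    static inline void a_spin(void)
    {
            int tmp = 0;
            a_cas(&tmp, 0, 0);
    }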

---

unfortunately this needs to be able to vary by arch, because of a huge
mess GCC made: the GCC definition, which became the ABI, depends on
quirks in GCC's definition of __alignof__, which does not match the
formal alignment of the type.
GCC's __alignof__ unexpectedly exposes an implementation detail,
its "preferred alignment" for the type, rather than the formal/ABI
alignment of the type, which it only actually uses in structures. on
most archs the two values are the same, but on some (at least i386)
the preferred alignment is greater than the ABI alignment.
I considered using _Alignas(8) unconditionally, but on at least one
arch (or1k), the alignment of max_align_t with GCC's definition is
only 4 (even the "preferred alignment" for these types is only 4).
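an illustrative shape for the definition (the members and the _Alignas
value are per-arch choices, shown here as assumptions, not quoted from
the tree):

    /* sketch: force the struct up to the alignment that GCC's
       definition happens to produce on a given arch */
    typedef struct {
            _Alignas(8) long long __ll;
            long double __ld;
    } max_align_t;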

---

when manipulating the robust list, the order of stores matters,
because the code may be asynchronously interrupted by a fatal signal
and the kernel will then access the robust list in what is essentially
an async-signal context.
previously, aliasing considerations made it seem unlikely that a
compiler could reorder the stores, but proving that no such reordering
could happen would have been extremely difficult. instead
I've opted to make all the pointers used as part of the robust list,
including those in the robust list head and in the individual mutexes,
volatile.
in addition, the format of the robust list has been changed to point
back to the head at the end, rather than ending with a null pointer.
this is to match the documented kernel robust list ABI. the null
pointer, which was previously used, only worked because faults during
access terminate the robust list processing.
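a sketch of the resulting head layout (it mirrors the kernel's struct
robust_list_head; the volatile placement is the idea being described,
not a verbatim quote):

    /* both the pointer objects and their targets are volatile, so the
       compiler can neither reorder nor elide stores that the kernel
       may observe after an async fatal signal */
    struct robust_list_head {
            volatile void *volatile head;    /* circular: points back here when empty */
            long off;                        /* offset from list node to lock word */
            volatile void *volatile pending; /* lock currently being acquired/released */
    };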

---

for or1k, the kernel expects the offset passed to mmap2 in units of
the 8k page size, not the standard unit of 4k used on most other
archs.
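a sketch of the resulting conversion (UNIT stands for the arch's mmap2
offset unit; the wrapper shown is hypothetical):

    #include <unistd.h>
    #include <sys/syscall.h>
    #include <sys/types.h>

    #define UNIT 8192 /* 4096 on most archs; 8192 on or1k per the above */

    static void *do_mmap2(void *start, size_t len, int prot,
                          int flags, int fd, off_t off)
    {
            /* off must be a multiple of UNIT; the kernel takes it in UNITs */
            return (void *)syscall(SYS_mmap2, start, len, prot, flags,
                                   fd, off/UNIT);
    }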

---

according to Stefan Kristiansson, or1k page size is not actually
variable and the value of 8192 is part of the ABI.

---

this commit changes the names to match the kernel names: the "old"
versions, which work with a smaller termios structure compatible with
the userspace structure, are exposed under the normal names, and the
"new" versions are renamed with a "2" suffix like the kernel uses.
this fixes spurious warnings "Unsupported ioctl: cmd=0x802c542a" from
qemu-sh4 and should be more correct anyway, since our userspace
termios structure does not have meaningful information in the part
which the kernel would interpret as speeds with the new ioctl.

---

this follows the same logic as in the previous commit for other archs.

---

the a_cas_l, a_swap_l, a_swap_p, and a_store_l operations were
probably used a long time ago when only i386 and x86_64 were
supported. as other archs were added, support for them was
inconsistent, and they are obviously not in use at present. having
them around potentially confuses readers working on new ports, and the
type-punning hacks and inconsistent use of types in their definitions
are not a style I wish to perpetuate in the source tree, so removing
them seems appropriate.

---

while other usage I've seen only has the synco instruction after the
atomic operation, I cannot find any documentation indicating that this
is correct. certainly all stores before the atomic need to have been
synchronized before the atomic operation takes place.
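a sketch of the conservative placement (synco is the sh4a
synchronization instruction; the macro framing is an assumption):

    /* issued both before and after each atomic operation, since no
       documentation confirms a trailing synco alone is sufficient */
    #define a_barrier() __asm__ __volatile__ ("synco" ::: "memory")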

---

it's like rename but takes a flags argument, e.g. to allow atomic
exchange of two files; introduced in linux 3.15 commit
520c8b16505236fc82daa352e6c5e73cd9870cff.
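a sketch of the exchange use case through the raw syscall (assuming
SYS_renameat2 is defined; RENAME_EXCHANGE comes from linux/fs.h, and
the wrapper name is made up):

    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/syscall.h>

    #define RENAME_EXCHANGE (1 << 1) /* from linux/fs.h */

    /* atomically swap two paths; both must already exist */
    int exchange_paths(const char *a, const char *b)
    {
            return syscall(SYS_renameat2, AT_FDCWD, a, AT_FDCWD, b,
                           RENAME_EXCHANGE);
    }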

---

due to what was essentially a copy and paste error, the changes made
in commit f61be1f875a2758509d6e9e2cf6f1d9603b28b65 caused syscalls
with 5 or 6 arguments (and syscalls with 2, 3, or 4 arguments when
compiled with clang compatibility) to negate the returned error code a
second time, breaking errno reporting.
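for context, a sketch of the return-path convention these macros feed
into: the kernel returns -errno, which must be negated exactly once:

    #include <errno.h>

    /* values in (-4096, 0) encode an errno; anything else is a
       successful result */
    static long syscall_ret(unsigned long r)
    {
            if (r > -4096UL) {
                    errno = -r;
                    return -1;
            }
            return r;
    }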

---

the mips version of this structure on the kernel side wrongly has a
32-bit type rather than a 64-bit type. fortunately there is adjacent
padding to bring it up to 64 bits, and on little-endian, this allows
us to treat the adjacent kernel st_dev and st_pad0[0] as a single
64-bit dev_t. however, on big endian, such treatment results in the
upper and lower 32-bit parts of the dev_t value being swapped. for the
purpose of just comparing st_dev values this did not break anything,
but it precluded actually processing the device numbers as major/minor
values.
since the broken kernel behavior that needs to be worked around is
isolated to one arch, I put the workarounds in syscall_arch.h rather
than adding a stat fixup path in the common code. on little endian
mips, the added code optimizes out completely.
the changes necessary were incompatible with the way the __asm_syscall
macro was factored so I just removed it and flattened the individual
__syscallN functions. this arguably makes the code easier to read and
understand, anyway.
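an illustrative fixup of the kind described (hypothetical helper): on
big endian, the 64-bit value read from the kernel's {st_dev, pad} pair
has its halves swapped, so they are swapped back:

    #include <stdint.h>

    static uint64_t fixup_dev(uint64_t v)
    {
            return v << 32 | v >> 32;
    }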

---

at the very least, a compiler barrier is required no matter what, and
that was missing. current or1k implementations have strong ordering,
but this is not guaranteed as part of the ISA, so some sort of
synchronizing operation is necessary.
in principle we should use l.msync, but due to misinterpretation of
the spec, it was wrongly treated as an optional instruction and is not
supported by some implementations. if future kernels trap it and treat
it as a nop (rather than illegal instruction) when the
hardware/emulator does not support it, we could consider using it.
in the absence of l.msync support, the l.lwa/l.swa instructions, which
are specified to have a built-in l.msync, need to be used. the easiest
way to use them to implement atomic store is to perform an atomic swap
and throw away the result. using compare-and-swap would be lighter,
and would probably be sufficient for all actual usage cases, but
checking this is difficult and error-prone:
with store implemented in terms of swap, it's guaranteed that, when
another atomic operation is performed at the same time as the store,
either the result of the store followed by the other operation, or
just the store (clobbering the other operation's result) is seen. if
store were implemented in terms of cas, there are cases where this
invariant would fail to hold, and we would need detailed rules for the
situations in which the store operation is well-defined.
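the swap-and-discard idea, sketched (a_swap assumed to be the
l.lwa/l.swa-based atomic swap):

    /* atomic store as a swap whose result is ignored; this inherits
       the built-in synchronization of l.lwa/l.swa */
    static inline void a_store(volatile int *p, int x)
    {
            a_swap(p, x);
    }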

---

as far as I can tell, microblaze is strongly ordered, but this does
not seem to be well-documented and the assumption may need revisiting.
even with strong ordering, however, a volatile C assignment is not
sufficient to implement atomic store, since it does not preclude
reordering by the compiler with respect to non-volatile stores and
loads.
simply flanking a C store with empty volatile asm blocks with memory
clobbers would achieve the desired result, but is likely to result in
worse code generation, since the address and value for the store may
need to be spilled. actually writing the store in asm, so that there's
only one asm block, should give optimal code generation while
satisfying the requirement for having a compiler barrier.
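a sketch of the single-asm-block approach (the exact template is
illustrative, not verbatim):

    /* the "=m" output lets the compiler pick the addressing mode,
       and the memory clobber makes the block a compiler barrier */
    static inline void a_store(volatile int *p, int x)
    {
            __asm__ __volatile__ ("swi %1, %0" : "=m"(*p) : "r"(x) : "memory");
    }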

---

previously I had wrongly assumed the ll/sc instructions also provided
memory synchronization; apparently they do not. this commit adds sync
instructions before and after each atomic operation and changes the
atomic store to simply use sync before and after a plain store, rather
than a useless compare-and-swap.
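the described store path, sketched:

    static inline void a_store(volatile int *p, int x)
    {
            __asm__ __volatile__ ("sync" ::: "memory");
            *p = x;
            __asm__ __volatile__ ("sync" ::: "memory");
    }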

---

despite lacking the semantic content that the asm accesses the
pointed-to object rather than just using its address as a value, the
mips asm was not actually broken. the asm blocks were declared
volatile, meaning that the compiler must treat them as having unknown
side effects.
however, changing the asm to use memory constraints is desirable not
just for semantic correctness and consistency; it also produces
better code. the compiler is able to use base/offset
addressing expressions for the atomic object's address rather than
having to load the address into a single register. this improves
access to global locks in static libc, and access to non-zero-offset
atomic fields in synchronization primitives, etc.

---

due to a mistake in my testing procedure, the changes in the previous
commit were not correctly tested and wrongly assumed to be valid. the
lwarx and stwcx. instructions do not accept general ppc memory address
expressions and thus the argument associated with the memory
constraint cannot be used directly.
instead, the memory constraint can be left as an argument that the asm
does not actually use, and the address can be provided in a separate
register constraint.
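a sketch of the resulting pattern (not the verbatim code): the "+m"
operand exists only to describe the access to the compiler, while the
asm addresses the object through the separate register operand:

    static inline int a_cas(volatile int *p, int t, int s)
    {
            int old;
            __asm__ __volatile__ (
                    "1:     lwarx %0, 0, %4\n"
                    "       cmpw %0, %2\n"
                    "       bne 2f\n"
                    "       stwcx. %3, 0, %4\n"
                    "       bne- 1b\n"
                    "2:\n"
                    : "=&r"(old), "+m"(*p)
                    : "r"(t), "r"(s), "r"(p)
                    : "cc", "memory");
            return old;
    }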

---

the register constraint for the address to be accessed did not convey
that the asm can access the pointed-to object. as far as the compiler
could tell, the result of the asm was just a pure function of the
address and the values passed in, and thus the asm could be hoisted
out of loops or omitted entirely if the result was not used.

---

the erroneous definition was missed because it works with qemu
user-level emulation, which also has the wrong definition. the actual
kernel uses the generic definition from asm-generic.

---

With the exception of a fenv implementation, the port is fully
featured. It has been tested in or1ksim, the golden reference
functional simulator for OpenRISC 1000, and it passes all libc-test
tests except the math tests that require a fenv implementation.
The port assumes an or1k implementation that has support for the
atomic instructions (l.lwa/l.swa).
Although it passes all the libc-test tests, the port is still in an
experimental state and has so far seen very little 'real-world' use.

---

this issue caused the address of functions in shared libraries to
resolve to their PLT thunks in the main program rather than their
correct addresses. it was observed causing crashes, though the
mechanism of the crash was not thoroughly investigated. since the
issue is very subtle, it calls for some explanation:
on all well-behaved archs, GOT entries that belong to the PLT use a
special relocation type, typically called JMP_SLOT, so that the
dynamic linker can avoid having the jump destinations for the PLT
resolve to PLT thunks themselves (they also provide a definition for
the symbol, which must be used whenever the address of the function is
taken so that all DSOs see the same address).
however, the traditional mips PIC ABI lacked such a JMP_SLOT
relocation type, presumably because, due to the way PIC works, the
address of the PLT thunk was never needed and could always be ignored.
prior to commit adf94c19666e687a728bbf398f9a88ea4ea19996, the mips
version of reloc.h contained a hack that caused all symbol lookups to
be treated like JMP_SLOT, inhibiting undefined symbols from ever being
used to resolve symbolic relocations. this hack goes all the way back
to commit babf820180368f00742ec65b2050a82380d7c542, when the mips
dynamic linker was first made usable.
during the recent refactoring to eliminate arch-specific relocation
processing (commit adf94c19666e687a728bbf398f9a88ea4ea19996), this
hack was overlooked and no equivalent functionality was provided in
the new code.
fixing the problem is not as simple as adding back an equivalent hack,
since there is now also a "non-PIC ABI" that can be used for the main
executable, which actually does use a PLT. the closest thing to
official documentation I could find for this ABI is nonpic.txt,
attached to Message-ID: 20080701202236.GA1534@caradoc.them.org, which
can be found in the gcc mailing list archives and elsewhere. per this
document, undefined symbols corresponding to PLT thunks have the
STO_MIPS_PLT bit set in the symbol's st_other field. thus, I have
added an arch-specific rule for mips, applied at the find_sym level
rather than the relocation level, to reject undefined symbols with the
STO_MIPS_PLT bit clear.
the previous hack of treating all mips relocations as JMP_SLOT-like,
rather than rejecting the unwanted symbols in find_sym, probably also
caused dlsym to wrongly return PLT thunks in place of the correct
address of a function under at least some conditions. this should now
be fixed, at least for global-scope symbol lookups.
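a sketch of the find_sym-level rule (STO_MIPS_PLT is 0x8 in elf.h; the
hook name is illustrative):

    /* mips-specific: an undefined symbol is only usable -- it names
       the main program's PLT entry -- if STO_MIPS_PLT is set */
    #define ARCH_SYM_REJECT_UND(s) (!((s)->st_other & STO_MIPS_PLT))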

---

this was one of the main instances of ugly code duplication: all archs
use basically the same types of relocations, but roughly equivalent
logic was duplicated for each arch to account for the different naming
and numbering of relocation types and variation in whether REL or RELA
records are used.
as an added bonus, both REL and RELA are now supported on all archs,
regardless of which is used by the standard toolchain.
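an illustrative skeleton of the unified loop (names and structure are
assumptions, not the actual code): the record stride distinguishes REL
from RELA, and the addend comes from the record when present:

    static void do_relocs(unsigned char *base, size_t *rel,
                          size_t relsz, size_t stride)
    {
            for (; relsz >= stride*sizeof(size_t); rel += stride,
                 relsz -= stride*sizeof(size_t)) {
                    size_t *reloc_addr = (size_t *)(base + rel[0]);
                    size_t addend = stride == 3 ? rel[2] : *reloc_addr;
                    (void)addend; /* ... dispatch on the relocation type ... */
            }
    }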

---

processing of R_PPC_TPREL32 was ignoring the addend provided by the
RELA-style relocation and instead using the inline value as the
addend. this presumably broke dynamic-linked access to initial TLS in
cases where the addend was nonzero.

---

the following issues are fixed:
- R_SH_REL32 was adding the load address of the module being relocated
to the result. this seems to have been a mistake in the original
port, since it does not match other dynamic linker implementations
and since adding a difference between two addresses (the symbol
value and the relocation address) to a load address does not make
sense.
- R_SH_TLS_DTPMOD32 was wrongly accepting an inline addend (i.e. using
+= rather than = on *reloc_addr) which makes no sense; addition is
not an operation that's defined on module ids.
- R_SH_TLS_DTPOFF32 and R_SH_TLS_TPOFF32 were wrongly using inline
addends rather than the RELA-provided addends.
in addition, the handling of R_SH_GLOB_DAT, R_SH_JMP_SLOT, and
R_SH_DIR32 is merged so that all three honor the addend. the first
two should not need it
for correct usage generated by toolchains, but other dynamic linkers
allow addends here, and it simplifies the code anyway.
these issues were spotted while reviewing the code for the purpose of
refactoring this part of the dynamic linker. no testing was performed.
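the merged handling, as a fragment of the type dispatch (variable
names are illustrative):

    case R_SH_GLOB_DAT:
    case R_SH_JMP_SLOT:
    case R_SH_DIR32:
            *reloc_addr = sym_val + addend;
            break;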

---

the immediate motivation is supporting TLSDESC relocations which
require allocation and thus may fail (unless we pre-allocate), but
this mechanism should also be used for throwing an error on
unsupported or invalid relocation types, and perhaps in certain cases,
for reporting when a relocation is not satisfiable.

---

linux 3.14 introduced the sched_getattr and sched_setattr syscalls in
commit d50dde5a10f305253cbc3855307f608f8a3c5f73
and the related SCHED_DEADLINE scheduling policy in
commit aab03e05e8f7e26f51dee792beddcb5cca9215a5,
but struct sched_attr, the "extended scheduling parameters data
structure" (necessary for using the syscalls), is not yet exported to
userspace, so the related uapi definitions are not added yet.

---

On 32-bit mips the kernel uses -1UL/2 to mark RLIM_INFINITY (and
this is the definition in the userspace api), but since it is in
the middle of the valid range of limits, and limits are often
compared with relational operators, various kernel-side logic is
broken if limits larger than -1UL/2 are used. So we truncate the
limits to -1UL/2 in get/setrlimit and prlimit (see the sketch after
this list).
Even if the kernel-side logic consistently treated -1UL/2 as greater
than any other limit value, there wouldn't be any clean workaround
that allowed using large limits:
* using -1UL/2 as RLIM_INFINITY in userspace would mean a different
infinity value for get/setrlimit and prlimit (where infinity is always
-1ULL), userspace logic could break easily (just like the kernel
is broken now), and more special-case code would be needed for mips.
* translating the -1UL/2 kernel-side value to -1ULL in userspace would
mean that a -1UL/2 limit could not be set (e.g. -1UL/2+1 would have to
be passed to the kernel instead).
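the truncation, sketched (helper name hypothetical):

    #define MIPS_RLIM_MAX (-1UL/2)

    /* clamp a limit to the range the mips kernel handles sanely */
    static unsigned long fix_limit(unsigned long lim)
    {
            return lim > MIPS_RLIM_MAX ? MIPS_RLIM_MAX : lim;
    }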

---

armv7/thumb2 provides a way to do atomics in thumb mode, but for armv6
we need a call to arm mode.
this commit is based on a patch by Stephen Thomas which fixed the
armv7 cases but not the armv6 ones.
all of this should be revisited if/when runtime selection of thread
pointer access and atomics are added.

---

the vdso symbol lookup code is based on the original 2011 patch by
Nicholas J. Kain, with some streamlining, pointer arithmetic fixes,
and one symbol version matching fix.
on the consumer side (clock_gettime), per-arch macros for the
particular symbol name and version to lookup are added in
syscall_arch.h, and no vdso code is pulled in on archs which do not
define these macros. at this time, vdso is enabled only on x86_64.
the vdso support at the dynamic linker level is no longer useful to
libc, but is left in place for the sake of debuggers (which may need
the vdso in the link map to find its functions) and possibly use with
dlsym.
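the per-arch macros, as they might look for x86_64 (macro names follow
the description above; the symbol and version strings are the ones the
linux vdso documents):

    /* syscall_arch.h (x86_64): enables the vdso clock_gettime path */
    #define VDSO_CGT_SYM "__vdso_clock_gettime"
    #define VDSO_CGT_VER "LINUX_2.6"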

---

The mips arch is special in that it uses different RLIMIT_
numbers than other archs, so allow bits/resource.h to override
the default RLIMIT_ numbers (empty on all archs except mips).
Reported by orc.

---

it will be needed to implement some things in sysconf, and the syscall
can't easily be used directly because the x32 syscall uses the wrong
structure layout. the l (uncreative, for "linux") prefix is used since
the symbol name __sysinfo is already taken for AT_SYSINFO from the aux
vector.
the way the x32 override of this function works is also changed to be
simpler and avoid the useless jump instruction.

---

aside from potentially offering better performance, this change is
needed since the old coprocessor-based approach to barriers is
deprecated in arm v7, and some compilers/assemblers issue errors when
using the deprecated instruction for v7 targets.
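the v7 barrier, sketched (the ish domain qualifier is a common choice,
shown here as an assumption):

    #define a_barrier() __asm__ __volatile__ ("dmb ish" ::: "memory")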

---

the "m" constraint could give a memory reference with an offset that's
not compatible with ldrex/strex, so the arm-specific "Q" constraint is
needed instead.
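an illustration of the difference (hypothetical helper): "Q" limits
the operand to a bare [rN] address, the only form ldrex/strex accept,
while "m" may emit an offset form they reject:

    static inline int load_ex(volatile int *p)
    {
            int v;
            __asm__ __volatile__ ("ldrex %0, %1" : "=r"(v) : "Q"(*p));
            return v;
    }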

---

this is perhaps not the optimal implementation; a_cas still compiles
to nested loops due to the different interface contracts of the kuser
helper cas function (whose contract this patch implements) and the
a_cas function (whose contract mimics the x86 cmpxchg). fixing this
may be possible, but it's more complicated and thus deferred until a
later time.
aside from improving performance and code size, this patch also
provides a means of producing binaries which can run on hardened
kernels where the kuser helpers have been disabled. however, at
present this requires producing binaries for armv6k or later, which
will not run on older cpus. a real solution to the problem of kernels
that omit the kuser helpers would be runtime detection, so that
universal binaries which run on all arm cpu models can also be
compatible with all kernel hardening profiles. robust detection
however is a much harder problem, and will be addressed at a later
time.

---

the kernel entry point for syscalls on microblaze nominally saves and
restores all registers, and testing on qemu always worked since qemu
behaves this way too. however, the real kernel treats r3:r4 as a
potential 64-bit return value from the syscall function, and copies
both over top of the saved registers before returning to userspace.
thus, we need to treat r4 as always-clobbered.
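an illustrative one-argument syscall block with the added clobber
(register conventions per the microblaze linux ABI; the asm is a
sketch, not the verbatim code):

    static inline long syscall1(long n, long a)
    {
            register long r12 __asm__("r12") = n; /* syscall number */
            register long r5  __asm__("r5")  = a; /* first argument */
            register long r3  __asm__("r3");      /* return value */
            __asm__ __volatile__ ("brki r14, 0x8"
                    : "=r"(r3)
                    : "r"(r12), "r"(r5)
                    : "r4", "r14", "memory"); /* r4: clobbered half of
                                                 a 64-bit kernel return */
            return r3;
    }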

---

in the previous changes, I missed the fact that both the prototype of
the sigaltstack function and the definition of ucontext_t depend on
stack_t.

---

like almost everything on mips, this is gratuitously different.