path: root/arch/or1k

* make all objects used with atomic operations volatile (Rich Felker, 2015-03-03, 1 file, -7/+7)

  the memory model we use internally for atomics permits plain loads of values which may be subject to concurrent modification without requiring that a special load function be used. since a compiler is free to make transformations that alter the number of loads or the way in which loads are performed, the compiler is theoretically free to break this usage. the most obvious concern is with atomic cas constructs: something of the form tmp=*p;a_cas(p,tmp,f(tmp)); could be transformed to a_cas(p,*p,f(*p)); where the latter is intended to show multiple loads of *p whose resulting values might fail to be equal; this would break the atomicity of the whole operation. but even more fundamental breakage is possible.

  with the changes being made now, objects that may be modified by atomics are modeled as volatile, and the atomic operations performed on them by other threads are modeled as asynchronous stores by hardware which happens to be acting on the request of another thread. such modeling of course does not itself address memory synchronization between cores/cpus, but that aspect was already handled. this all seems less than ideal, but it's the best we can do without mandating a C11 compiler and using the C11 model for atomics.

  in the case of pthread_once_t, the ABI type of the underlying object is not volatile-qualified. so we are assuming that accessing the object through a volatile-qualified lvalue via casts yields volatile access semantics. the language of the C standard is somewhat unclear on this matter, but this is an assumption the linux kernel also makes, and seems to be the correct interpretation of the standard.
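
  a minimal sketch of the kind of cas construct the message describes, using the signature of musl's internal a_cas; the helper function name is hypothetical. if p were not volatile-qualified, the compiler could legally merge or repeat the plain load and break the construct's atomicity.

      /* a_cas returns the old value of *p; declaration matches musl's internal API */
      int a_cas(volatile int *p, int t, int s);

      /* hypothetical helper, not musl code */
      static int fetch_and_add_one(volatile int *p)
      {
          int old;
          do old = *p;                       /* volatile: exactly one load per iteration */
          while (a_cas(p, old, old + 1) != old);
          return old;
      }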

* add syscall numbers for the new execveat syscall (Szabolcs Nagy, 2015-02-09, 1 file, -0/+2)

  this syscall allows fexecve to be implemented without /proc. it is new in linux v3.19, added in commit 51f39a1f0cea1cacf8c787f652f26dfee9611874 (sh and microblaze do not have allocated syscall numbers yet).

  added an x32 fix as well: the io_setup and io_submit syscalls are no longer common with x86_64, so use the x32 specific numbers.
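
  a sketch of the idea (not musl's actual implementation), assuming SYS_execveat and AT_EMPTY_PATH are provided by the installed headers: an empty pathname plus AT_EMPTY_PATH executes the file the descriptor refers to, so no /proc/self/fd path is needed.

      #define _GNU_SOURCE
      #include <unistd.h>
      #include <fcntl.h>
      #include <sys/syscall.h>

      static int fexecve_sketch(int fd, char *const argv[], char *const envp[])
      {
          /* only returns on failure, with errno set by the syscall wrapper */
          return syscall(SYS_execveat, fd, "", argv, envp, AT_EMPTY_PATH);
      }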

* move MREMAP_MAYMOVE and MREMAP_FIXED out of bits (Trutz Behn, 2015-01-30, 1 file, -3/+0)

  the definitions are generic for all kernel archs. exposure of these macros now only occurs under the same feature test as for the function accepting them, which is believed to be more correct.

* add new syscall numbers for bpf and kexec_file_load (Szabolcs Nagy, 2014-12-23, 1 file, -0/+2)

  these syscalls are new in linux v3.18. bpf is present on all supported archs except sh; kexec_file_load is so far only allocated for x86_64 and x32.

  bpf was added in linux commit 99c55f7d47c0dc6fc64729f37bf435abf43f4c60. the kexec_file_load syscall number was allocated in commit f0895685c7fd8c938c91a9d8a6f7c11f22df58d2.

* move wint_t definition to the shared part of alltypes.h.in (Rich Felker, 2014-12-21, 1 file, -1/+0)

* unify non-inline version of syscall code across archs (Rich Felker, 2014-11-22, 1 file, -34/+2)

  except for powerpc, which still lacks inline syscalls simply because nobody has written the code, these are all fallbacks used to work around a clang bug that probably does not exist in versions of clang that can compile musl. however, it's useful to have the generic non-inline code anyway, as it eases the task of porting to new archs: writing inline syscall code is now optional. this approach could also help support compilers which don't understand inline asm or lack support for the needed register constraints.

  mips could not be unified because it has special fixup code for the broken layout of the kernel's struct stat.

* fix 64-bit syscall argument passing on or1k (Rich Felker, 2014-11-05, 1 file, -1/+1)

  the kernel syscall interface for or1k does not expect 64-bit arguments to be aligned to "even" register boundaries. this incorrect alignment broke truncate/ftruncate as well as a few less-common syscalls.
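
  an illustrative sketch of the difference (macro names and exact form are assumptions, not the actual or1k change): a 64-bit argument is split into two word-sized syscall arguments, and some kernel ABIs additionally require a pad word so the pair starts in an even register, which or1k does not.

      #include <stdint.h>

      /* split a 64-bit value into two word arguments (order of halves depends on endianness) */
      #define LL_SPLIT(x) (long)((uint64_t)(x) >> 32), (long)((uint64_t)(x) & 0xffffffffu)

      /* an ABI that pairs 64-bit args on even registers would insert a pad word first;
       * the or1k kernel takes the two halves directly, with no pad */
      #define LL_SPLIT_PADDED(x) 0L, LL_SPLIT(x)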

* add explicit barrier operation to internal atomic.h API (Rich Felker, 2014-10-10, 1 file, -1/+3)

* add new syscall numbers for seccomp, getrandom, memfd_create (Szabolcs Nagy, 2014-10-08, 1 file, -0/+6)

  these syscalls are new in linux v3.17 and present on all supported archs except sh.

  seccomp was added in commit 48dc92b9fc3926844257316e75ba11eb5c742b2c. it has operation, flags and pointer arguments (if flags==0 then it is the same as prctl(PR_SET_SECCOMP,...)); the uapi header for flag definitions is linux/seccomp.h.

  getrandom was added in commit c6e9d6f38894798696f23c8084ca7edbf16ee895. it provides an entropy source when open("/dev/urandom",..) would fail; the uapi header for flags is linux/random.h.

  memfd_create was added in commit 9183df25fe7b194563db3fec6dc3202a5855839c. it allows an anonymous mapping to have an fd that can be shared and sealed and needs no mount point; the uapi header for flags is linux/memfd.h.
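
  a minimal usage sketch for getrandom (assuming SYS_getrandom is defined by the installed headers); with flags==0 it blocks until the kernel entropy pool is initialized.

      #include <unistd.h>
      #include <sys/syscall.h>

      static int fill_random(void *buf, size_t len)
      {
          long ret = syscall(SYS_getrandom, buf, len, 0);
          return ret == (long)len ? 0 : -1;
      }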

* add threads.h and needed per-arch types for mtx_t and cnd_t (Rich Felker, 2014-09-06, 1 file, -0/+2)

  based on patch by Jens Gustedt.

  mtx_t and cnd_t are defined in such a way that they are formally "compatible types" with pthread_mutex_t and pthread_cond_t, respectively, when accessed from a different translation unit. this makes it possible to implement the C11 functions using the pthread functions (which will dereference them with the pthread types) without having to use the same types, which would necessitate either namespace violations (exposing pthread type names in threads.h) or incompatible changes to the C++ name mangling ABI for the pthread types.

  for the rest of the types, things are much simpler; using identical types is possible without any namespace considerations.
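
  a toy illustration of the "compatible types" rule being relied on (these are not the real musl definitions): two untagged structure types with identical members, declared in separate translation units, are compatible, so an object defined with one may be accessed through the other.

      /* translation unit A, e.g. code that includes threads.h */
      typedef struct { union { int __i[6]; void *__p[6]; } __u; } mutexlike_a;

      /* translation unit B, e.g. code that includes pthread.h */
      typedef struct { union { int __i[6]; void *__p[6]; } __u; } mutexlike_b;

      /* because the two untagged struct types have identical members, an object
       * defined as mutexlike_a in unit A may be accessed through a mutexlike_b *
       * in unit B; threads.h never has to name the pthread types */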

* add working a_spin() atomic for non-x86 targets (Rich Felker, 2014-08-25, 1 file, -0/+1)

  conceptually, a_spin needs to be at least a compiler barrier, so the compiler will not optimize out loops (and the load on each iteration) while spinning. it should also be a memory barrier, or the spinning thread might keep spinning without noticing stores from other threads, thus delaying for longer than it should.

  ideally, an optimal a_spin implementation that avoids unnecessary cache/memory contention should be chosen for each arch, but for now, the easiest thing is to perform a useless a_cas on the calling thread's stack.
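
  a sketch of the fallback described above (not necessarily the exact musl code): a_cas on a stack variable acts as a full barrier but cannot contend on any shared cache line.

      int a_cas(volatile int *p, int t, int s);   /* musl's internal primitive */

      static inline void a_spin(void)
      {
          volatile int tmp = 0;
          a_cas(&tmp, 0, 0);   /* useless cas: barrier only, result ignored */
      }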

* add max_align_t definition for C11 and C++11 (Rich Felker, 2014-08-20, 1 file, -0/+2)

  unfortunately this needs to be able to vary by arch, because of a huge mess GCC made: the GCC definition, which became the ABI, depends on quirks in GCC's definition of __alignof__, which does not match the formal alignment of the type. GCC's __alignof__ unexpectedly exposes an implementation detail, its "preferred alignment" for the type, rather than the formal/ABI alignment of the type, which it only actually uses in structures. on most archs the two values are the same, but on some (at least i386) the preferred alignment is greater than the ABI alignment.

  I considered using _Alignas(8) unconditionally, but on at least one arch (or1k), the alignment of max_align_t with GCC's definition is only 4 (even the "preferred alignment" for these types is only 4).
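
  a hedged sketch of the shape such a definition typically takes (the real definition is per-arch, which is the point of the message):

      /* illustrative only; an arch where GCC's "preferred" alignment became the
       * de facto ABI value may need an explicit _Alignas on a member */
      typedef struct {
          long long __ll;
          long double __ld;
      } max_align_t_sketch;

      /* _Alignof reports the formal/ABI alignment; GCC's __alignof__ may report
       * a larger "preferred" value for the same members */
      _Static_assert(_Alignof(max_align_t_sketch) >= _Alignof(long long), "");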

* make pointers used in robust list volatile (Rich Felker, 2014-08-17, 1 file, -1/+1)

  when manipulating the robust list, the order of stores matters, because the code may be asynchronously interrupted by a fatal signal and the kernel will then access the robust list in what is essentially an async-signal context.

  previously, aliasing considerations made it seem unlikely that a compiler could reorder the stores, but proving that they could not be reordered incorrectly would have been extremely difficult. instead I've opted to make all the pointers used as part of the robust list, including those in the robust list head and in the individual mutexes, volatile.

  in addition, the format of the robust list has been changed to point back to the head at the end, rather than ending with a null pointer. this is to match the documented kernel robust list ABI. the null pointer, which was previously used, only worked because faults during access terminate the robust list processing.
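
  a simplified sketch of the list shape described above (struct and field names are illustrative, not musl's internals): every pointer the kernel may read asynchronously is volatile, and the list is circular, ending back at the head rather than at a null pointer.

      struct robust_node {
          struct robust_node *volatile next;   /* next node, or back to the head */
      };

      struct robust_head {
          struct robust_node list;             /* list.next points back here when empty */
          long futex_offset;
          struct robust_node *volatile pending;
      };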

* fix broken offset argument to the mmap2 syscall on or1k (Rich Felker, 2014-07-30, 1 file, -0/+2)

  for or1k, the kernel expects the offset passed to mmap2 in units of the 8k page size, not the standard unit of 4k used on most other archs.
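
  a sketch of the unit conversion involved (illustrative, not musl's mmap code): mmap2 takes its offset in fixed units, which on or1k are 8192-byte units rather than the usual 4096.

      #define OR1K_MMAP2_UNIT 8192LL

      /* convert a byte offset (a multiple of the unit) to the or1k mmap2 argument */
      static long mmap2_offset(long long byte_off)
      {
          return (long)(byte_off / OR1K_MMAP2_UNIT);
      }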

* provide PAGE_SIZE as a constant value of 8192 on or1k (Rich Felker, 2014-07-30, 1 file, -0/+1)

  according to Stefan Kristiansson, or1k page size is not actually variable and the value of 8192 is part of the ABI.

* remove unused a_cas_l from or1k atomic.h (Rich Felker, 2014-07-27, 1 file, -5/+0)

  this follows the same logic as in the previous commit for other archs.

* add syscall numbers for the new renameat2 syscall (Szabolcs Nagy, 2014-07-20, 1 file, -0/+6)

  it's like rename but with flags, e.g. to allow atomic exchange of two files. introduced in linux 3.15 commit 520c8b16505236fc82daa352e6c5e73cd9870cff.
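
  a usage sketch for the atomic-exchange case, assuming SYS_renameat2 is defined by the installed headers (RENAME_EXCHANGE is the kernel's value 2):

      #include <unistd.h>
      #include <fcntl.h>
      #include <sys/syscall.h>

      #ifndef RENAME_EXCHANGE
      #define RENAME_EXCHANGE (1 << 1)
      #endif

      /* atomically swap the two paths; both must already exist */
      static int exchange_paths(const char *a, const char *b)
      {
          return syscall(SYS_renameat2, AT_FDCWD, a, AT_FDCWD, b, RENAME_EXCHANGE);
      }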

* fix or1k atomic store (Rich Felker, 2014-07-19, 1 file, -1/+1)

  at the very least, a compiler barrier is required no matter what, and that was missing. current or1k implementations have strong ordering, but this is not guaranteed as part of the ISA, so some sort of synchronizing operation is necessary.

  in principle we should use l.msync, but due to misinterpretation of the spec, it was wrongly treated as an optional instruction and is not supported by some implementations. if future kernels trap it and treat it as a nop (rather than illegal instruction) when the hardware/emulator does not support it, we could consider using it.

  in the absence of l.msync support, the l.lwa/l.swa instructions, which are specified to have a built-in l.msync, need to be used. the easiest way to use them to implement atomic store is to perform an atomic swap and throw away the result. using compare-and-swap would be lighter, and would probably be sufficient for all actual usage cases, but checking this is difficult and error-prone: with store implemented in terms of swap, it's guaranteed that, when another atomic operation is performed at the same time as the store, either the result of the store followed by the other operation, or just the store (clobbering the other operation's result) is seen. if store were implemented in terms of cas, there are cases where this invariant would fail to hold, and we would need detailed rules for the situations in which the store operation is well-defined.
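
  a sketch of the chosen approach expressed in terms of musl's internal atomic API (the real or1k code is inline asm built on l.lwa/l.swa): the store is an atomic swap whose result is thrown away.

      int a_swap(volatile int *p, int v);   /* atomic exchange, returns old value */

      static inline void a_store(volatile int *p, int v)
      {
          (void)a_swap(p, v);   /* swap and discard: store plus full synchronization */
      }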

* add or1k (OpenRISC 1000) architecture port (Stefan Kristiansson, 2014-07-18, 32 files, -0/+1737)

  With the exception of a fenv implementation, the port is fully featured. The port has been tested in or1ksim, the golden reference functional simulator for OpenRISC 1000. It passes all libc-test tests (except the math tests that require a fenv implementation).

  The port assumes an or1k implementation that has support for atomic instructions (l.lwa/l.swa). Although it passes all the libc-test tests, the port is still in an experimental state, and has as yet seen very little 'real-world' use.