path: root/arch/arm
Commit message | Author | Date | Files | Lines
* add MCL_ONFAULT and MLOCK_ONFAULT mlockall and mlock2 flags | Szabolcs Nagy | 2016-01-26 | 1 | -0/+1
  they lock faulted pages into memory (useful when a small part of a large mapped file needs efficient access), new in linux v4.4, commit b0f205c2a3082dd9081f9a94e50658c5fa906ff1
  MLOCK_* is not in the POSIX reserved namespace for sys/mman.h
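  a minimal usage sketch (not from the commit): lock pages only as they are faulted in, rather than pre-faulting a whole large mapping up front.

    #include <stdio.h>
    #include <sys/mman.h>

    int lock_lazily(void)
    {
        /* pages of existing mappings become locked once they fault in */
        if (mlockall(MCL_CURRENT | MCL_ONFAULT) < 0) {
            perror("mlockall");
            return -1;
        }
        return 0;
    }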
* add mlock2 syscall number from linux v4.4 | Szabolcs Nagy | 2016-01-26 | 1 | -0/+2
  this is mlock with a flags argument, new in linux commit a8ca5d0ecbdde5cc3d7accacbd69968b0c98764e
  as usual, microblaze and sh don't have an allocated syscall number yet.
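  until a libc wrapper exists, the new number can be used via syscall(); a sketch (SYS_mlock2 is the number added here, MLOCK_ONFAULT comes from the entry above):

    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>

    /* lock [addr, addr+len) lazily: pages are pinned as they fault in */
    static long mlock2_onfault(void *addr, size_t len)
    {
        return syscall(SYS_mlock2, addr, len, MLOCK_ONFAULT);
    }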
* add new membarrier, userfaultfd and switch_endian syscalls | Szabolcs Nagy | 2016-01-26 | 1 | -0/+4
  new in linux v4.3, added for aarch64, arm, i386, mips, or1k, powerpc, x32 and x86_64.
  membarrier is a system-wide memory barrier that moves most of the synchronization cost to one side, new in kernel commit 5b25b13ab08f616efd566347d809b4ece54570d1
  userfaultfd is useful for qemu and is new in kernel commit 8d2afd96c20316d112e04d935d9e09150e988397
  switch_endian is powerpc-only, for switching endianness, new in commit 529d235a0e190ded1d21ccc80a73e625ebcad09b
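  a usage sketch for membarrier via the raw syscall (MEMBARRIER_CMD_SHARED is assumed to come from the kernel's linux/membarrier.h uapi header):

    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/membarrier.h>

    /* force a memory barrier on every running thread in the system, so
       the calling side can pair it with a cheap compiler-only barrier */
    static long global_barrier(void)
    {
        return syscall(SYS_membarrier, MEMBARRIER_CMD_SHARED, 0);
    }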
* fix arm a_crash for big endian | Rich Felker | 2016-01-25 | 1 | -2/+4
  contrary to commit 89e149d275a7699a4a5e4c98bab267648f64cbba, big endian arm does need the instruction bytes in big endian order. rather than trying to use a special encoding that works as arm or thumb, simply encode the simplest/canonical undefined instructions dependent on whether __thumb__ is defined.
* add native a_crash primitive for arm | Rich Felker | 2016-01-25 | 1 | -0/+10
  the .byte directive encodes a guaranteed-undefined instruction, the same one Linux fills the kuser helper page with when it's disabled. the udf mnemonic and .insn directives are not supported by old binutils versions, and larger-than-byte integer directives would produce the wrong output on big-endian.
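  a hedged sketch of the approach described in the two entries above, using illustrative udf encodings (A32 udf #0, Thumb udf #0xff) rather than the exact bytes committed:

    static inline void a_crash(void)
    {
        /* emit a permanently-undefined instruction; the byte order follows
           the target endianness and the encoding follows the current
           instruction set */
    #if defined(__ARMEB__)
    #ifndef __thumb__
        __asm__ __volatile__ (".byte 0xe7,0xf0,0x00,0xf0" ::: "memory");
    #else
        __asm__ __volatile__ (".byte 0xde,0xff" ::: "memory");
    #endif
    #else
    #ifndef __thumb__
        __asm__ __volatile__ (".byte 0xf0,0x00,0xf0,0xe7" ::: "memory");
    #else
        __asm__ __volatile__ (".byte 0xff,0xde" ::: "memory");
    #endif
    #endif
    }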
* move arm-specific translation units out of arch/arm/src, to src/*/arm | Rich Felker | 2016-01-22 | 8 | -244/+0
  this is possible with the new build system that allows src/*/$(ARCH)/* files which do not shadow a file in the parent directory, and yields a more logical organization. eventually it will be possible to remove arch/*/src from the build system.
* overhaul arm atomics for new atomics framework | Rich Felker | 2016-01-21 | 1 | -142/+38
  switch to ll/sc model so that new atomic.h can provide optimized versions of all the atomic primitives without needing an ll/sc loop written in asm for each one.
  all isa levels which use ldrex/strex now use the inline ll/sc model even if the type of barrier to use is not known until runtime (v6). the cas model is only used for arm v5 and earlier, and it has been optimized to make the call via inline asm with custom constraints rather than as a C function call.
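  a simplified sketch of the ll/sc primitives involved (constraints reduced for clarity; not musl's exact asm):

    /* ldrex loads and marks the address for exclusive access; strex stores
       only if nothing else touched it since, writing 0 to its result
       register on success, so a_sc returns whether the store "won" */
    static inline int a_ll(volatile int *p)
    {
        int v;
        __asm__ __volatile__ ("ldrex %0, [%1]" : "=r"(v) : "r"(p) : "memory");
        return v;
    }

    static inline int a_sc(volatile int *p, int v)
    {
        int r;
        __asm__ __volatile__ ("strex %0, %2, [%1]"
                              : "=&r"(r) : "r"(p), "r"(v) : "memory");
        return !r;
    }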
* refactor internal atomic.h | Rich Felker | 2016-01-21 | 1 | -104/+11
  rather than having each arch provide its own atomic.h, there is a new shared atomic.h in src/internal which pulls arch-specific definitions from arch/$(ARCH)/atomic_arch.h. the latter can be extremely minimal, defining only a_cas or new ll/sc type primitives which the shared atomic.h will use to construct everything else.
  this commit avoids making heavy changes to the individual archs' atomic implementations. definitions which are identical or near-identical to what the new shared atomic.h would produce have been removed, but otherwise the changes made are just hooking up the arch-specific files to the new infrastructure. major changes to take advantage of the new system will come in subsequent commits.
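  a sketch of how the shared atomic.h can build everything else from such a minimal arch layer (barriers omitted for brevity; a_ll/a_sc as in the entry above):

    /* compare-and-swap from ll/sc: retry while the observed value still
       matches t but the exclusive store loses the race */
    static inline int a_cas(volatile int *p, int t, int s)
    {
        int old;
        do old = a_ll(p);
        while (old == t && !a_sc(p, s));
        return old;
    }

    /* and, in turn, e.g. fetch-and-add built on top of a_cas */
    static inline int a_fetch_add(volatile int *p, int v)
    {
        int old;
        do old = *p;
        while (a_cas(p, old, old + v) != old);
        return old;
    }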
* fix build regression for arm pre-v7 from out-of-tree build patch | Rich Felker | 2016-01-20 | 2 | -0/+0
  commit 2f853dd6b9a95d5b13ee8f9df762125e0588df5d failed to replicate the old makefile logic that caused arch/arm/src/arm/atomics.s to be built. since this was the only .s file under arch/*/src, rather than trying to reproduce the old logic, I'm just moving it up a level and adjusting the glob pattern in the makefile to catch it.
  eventually arch/*/src will probably be removed in favor of moving all these files to appropriate src/*/$(ARCH) locations.
* fix dynamic linker path file selection for arm vs armhf | Rich Felker | 2016-01-20 | 1 | -3/+3
  the __SOFTFP__ macro which was wrongly being used does not reflect the ABI (arm vs armhf) but just the availability of floating point instructions/registers, so -mfloat-abi=softfp was wrongly being treated as armhf. __ARM_PCS_VFP is the correct predefined macro to check for the armhf EABI variant.
  this macro usage was corrected for the build process in commit 4918c2bb206bfaaf5a1f7d3448c2f63d5e2b7d56 but reloc.h was apparently overlooked at the time.
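  a sketch of the kind of check this implies in reloc.h (endian suffix handling omitted): the dynamic linker name is keyed on the EABI variant, not on whether FP registers happen to be usable.

    #if __ARM_PCS_VFP
    #define LDSO_ARCH "armhf"
    #else
    #define LDSO_ARCH "arm"
    #endif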
* explicitly assemble all arm asm sources as UAL | Rich Felker | 2015-11-10 | 1 | -0/+1
  these files are all accepted as legacy arm syntax when producing arm code, but legacy syntax cannot be used for producing thumb2 with access to the full ISA. even after switching to UAL, some asm source files contain instructions which are not valid in thumb mode, so these will need to be addressed separately.
* remove non-working pre-armv4t support from arm asm | Rich Felker | 2015-11-09 | 2 | -11/+0
  the idea of the three-instruction sequence being removed was to be able to return to thumb code when used on armv4t+ from a thumb caller, but also to be able to run on armv4 without the bx instruction available (in which case the low bit of lr would always be 0).
  however, without compiler support for generating such a sequence from C code, which does not exist and which there is unlikely to be interest in implementing, there is little point in having it in the asm, and it would likely be easier to add pre-armv4t support via enhanced linker handling of R_ARM_V4BX than at the compiler level.
  removing this code simplifies adding support for building libc in thumb2-only form (for cortex-m).
* properly access mcontext_t program counter in cancellation handler | Rich Felker | 2015-11-02 | 1 | -1/+1
  using the actual mcontext_t definition rather than an overlaid pointer array both improves correctness/readability and eliminates some ugly hacks for archs with 64-bit registers but 32-bit program counter. also fix UB due to comparison of pointers not in a common array object.
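  a sketch of the pattern on arm (assumes _GNU_SOURCE so the full mcontext_t layout is visible):

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdint.h>

    static void handler(int sig, siginfo_t *si, void *ctx)
    {
        ucontext_t *uc = ctx;
        /* use the named register field instead of an overlaid array */
        uintptr_t pc = uc->uc_mcontext.arm_pc;
        (void)sig; (void)si; (void)pc;
    }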
* mark arm thread-pointer-loading inline asm as volatile | Rich Felker | 2015-10-15 | 1 | -3/+3
  this builds on commits a603a75a72bb469c6be4963ed1b55fabe675fe15 and 0ba35d69c0e77b225ec640d2bd112ff6d9d3b2af to ensure that a compiler cannot conclude that it's valid to reorder the asm to a point before the thread pointer is set up, or to treat the inline function as if it were declared with attribute((const)).
  other archs already use volatile asm for thread pointer loading.
* remove attribute((const)) from arm __pthread_self inline function | Rich Felker | 2015-10-15 | 1 | -2/+2
  commit a603a75a72bb469c6be4963ed1b55fabe675fe15 did this for the public pthread_self function but not the internal inline one.
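  a simplified sketch of the v6k/v7 case (the offset arithmetic into the thread structure is omitted); the volatile qualifier is what prevents the compiler from hoisting or CSEing the read:

    static inline void *thread_pointer(void)
    {
        void *p;
        /* read TPIDRURO, the user read-only thread ID register */
        __asm__ __volatile__ ("mrc p15,0,%0,c13,c0,3" : "=r"(p));
        return p;
    }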
* implement arm eabi mem* functions | Timo Teräs | 2015-08-31 | 4 | -0/+36
  these functions are part of the ARM EABI, meaning compilers may generate references to them. known versions of gcc do not use them, but llvm does. they are not provided by libgcc, and the de facto standard seems to be that libc provides them.
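  a sketch of the sort of definitions involved (the EABI routines return void, and __aeabi_memset takes its arguments in a different order than memset):

    #include <string.h>

    void __aeabi_memcpy(void *dest, const void *src, size_t n) { memcpy(dest, src, n); }
    void __aeabi_memmove(void *dest, const void *src, size_t n) { memmove(dest, src, n); }
    void __aeabi_memset(void *dest, size_t n, int c) { memset(dest, c, n); }
    void __aeabi_memclr(void *dest, size_t n) { memset(dest, 0, n); }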
* arm: add vdso support | Szabolcs Nagy | 2015-06-14 | 1 | -0/+4
  vdso will be available on arm in linux v4.2, the user-space code for it is in kernel commit 8512287a8165592466cb9cb347ba94892e9c56a5
* add .text section directive to all crt_arch.h files missing it | Rich Felker | 2015-05-22 | 1 | -0/+1
  i386 and x86_64 versions already had the .text directive; other archs did not. normally, top-level (file scope) __asm__ starts in the .text section anyway, but problems were reported with some versions of clang, and it seems preferable to set it explicitly anyway, at least for the sake of consistency between archs.
* make arm reloc.h CRTJMP macro compatible with thumb | Rich Felker | 2015-05-14 | 1 | -0/+5
  compilers targeting armv7 may be configured to produce thumb2 code instead of arm code by default, and in the future we may wish to support targets where only the thumb instruction set is available.
  the instructions this patch omits in thumb mode are needed only for non-thumb versions of armv4 or earlier, which are not supported by any current compilers/toolchains and thus rather pointless to have. at some point these compatibility return sequences may be removed from all asm source files, and in that case it would make sense to remove them here too and remove the ifdef.
* make arm crt_arch.h compatible with thumb code generation | Rich Felker | 2015-05-14 | 1 | -4/+6
  compilers targeting armv7 may be configured to produce thumb2 code instead of arm code by default, and in the future we may wish to support targets where only the thumb instruction set is available.
  the changes made here avoid operating directly on the sp register, which is not possible in thumb code, and address an issue with the way the address of _DYNAMIC is computed.
  previously, the relative address of _DYNAMIC was stored with an additional offset of -8 versus the pc-relative add instruction, since on arm the pc register evaluates to ".+8". in thumb code, it instead evaluates to ".+4". both are two (normal-size) instructions beyond "." in the current execution mode, so the numbered label 2 used in the relative address expression is simply moved two instructions ahead to be compatible with both instruction sets.
* fix __syscall declaration with wrong visibility in syscall_arch.h | Szabolcs Nagy | 2015-04-30 | 1 | -2/+0
  remove __syscall declaration where it is not needed (aarch64, arm, microblaze, or1k) and add the hidden attribute where it is (mips).
* dynamic linker bootstrap overhaul | Rich Felker | 2015-04-13 | 2 | -34/+27
  this overhaul further reduces the amount of arch-specific code needed by the dynamic linker and removes a number of assumptions, including:
  - that symbolic function references inside libc are bound at link time via the linker option -Bsymbolic-functions.
  - that libc functions used by the dynamic linker do not require access to data symbols.
  - that static/internal function calls and data accesses can be made without performing any relocations, or that arch-specific startup code handled any such relocations needed.
  removing these assumptions paves the way for allowing libc.so itself to be built with stack protector (among other things), and is achieved by a three-stage bootstrap process:
  1. relative relocations are processed with a flat function.
  2. symbolic relocations are processed with no external calls/data.
  3. main program and dependency libs are processed with a fully-functional libc/ldso.
  reduction in arch-specific code is achieved through the following:
  - crt_arch.h, used for generating crt1.o, now provides the entry point for the dynamic linker too.
  - asm is no longer responsible for skipping the beginning of argv[] when ldso is invoked as a command.
  - the functionality previously provided by __reloc_self for heavily GOT-dependent RISC archs is now the arch-agnostic stage-1.
  - arch-specific relocation type codes are mapped directly as macros rather than via an inline translation function/switch statement.
* move O_PATH definition back to arch bits | Rich Felker | 2015-04-01 | 1 | -0/+1
  while it's the same for all presently supported archs, it differs at least on sparc, and conceptually it's no less arch-specific than the other O_* macros. O_SEARCH and O_EXEC are still defined in terms of O_PATH in the main fcntl.h.
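  a usage sketch: an O_PATH descriptor identifies a filesystem location for use with the *at() functions or fchdir(), without granting read or write access to the object itself.

    #include <fcntl.h>

    static int open_location(const char *path)
    {
        return open(path, O_PATH | O_CLOEXEC);
    }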
* fix MINSIGSTKSZ values for archs with large signal contexts | Rich Felker | 2015-03-18 | 1 | -0/+5
  the previous values (2k min and 8k default) were too small for some archs. aarch64 reserves 4k in the signal context for future extensions and requires about 4.5k total, and powerpc reportedly uses over 2k. the new minimums are chosen to fit the saved context and also allow a minimal signal handler to run.
  since the default (SIGSTKSZ) has always been 6k larger than the minimum, it is also increased to maintain the 6k usable by the signal handler. this happens to be able to store one pathname buffer and should be sufficient for calling any function in libc that doesn't involve conversion between floating point and decimal representations.
  x86 (both 32-bit and 64-bit variants) may also need a larger minimum (around 2.5k) in the future to support avx-512, but the values on these archs are left alone for now pending further analysis.
  the value for PTHREAD_STACK_MIN is not increased to match MINSIGSTKSZ at this time. this is so as not to preclude applications from using extremely small thread stacks when they know they will not be handling signals. unfortunately cancellation and multi-threaded set*id() use signals as an implementation detail and therefore require a stack large enough for a signal context, so applications which use extremely small thread stacks may still need to avoid using these features.
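  for reference, these constants matter to any code that sizes its own signal stacks; a minimal sketch:

    #include <signal.h>
    #include <stdlib.h>

    static int setup_signal_stack(void)
    {
        stack_t ss = {
            .ss_sp = malloc(SIGSTKSZ),   /* default: 6k more than MINSIGSTKSZ */
            .ss_size = SIGSTKSZ,
            .ss_flags = 0,
        };
        if (!ss.ss_sp) return -1;
        return sigaltstack(&ss, 0);
    }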
* fix FLT_ROUNDS to reflect the current rounding mode | Szabolcs Nagy | 2015-03-07 | 1 | -1/+0
  implemented as a wrapper around fegetround, introducing a new function to the ABI: __flt_rounds. (fegetround cannot be used directly from float.h)
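  a sketch of such a wrapper, mapping fegetround() onto the FLT_ROUNDS values defined by C (0 toward zero, 1 to nearest, 2 toward +infinity, 3 toward -infinity, -1 indeterminable); archs lacking some rounding modes would drop the corresponding cases:

    #include <fenv.h>

    int __flt_rounds(void)
    {
        switch (fegetround()) {
        case FE_TOWARDZERO: return 0;
        case FE_TONEAREST:  return 1;
        case FE_UPWARD:     return 2;
        case FE_DOWNWARD:   return 3;
        }
        return -1;
    }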
* fix POLLWRNORM and POLLWRBAND on mips | Trutz Behn | 2015-03-04 | 1 | -0/+0
  these macros have the same distinct definition on blackfin, frv, m68k, mips, sparc and xtensa kernels. POLLMSG and POLLRDHUP additionally differ on sparc.
* make all objects used with atomic operations volatile | Rich Felker | 2015-03-03 | 1 | -7/+7
  the memory model we use internally for atomics permits plain loads of values which may be subject to concurrent modification without requiring that a special load function be used. since a compiler is free to make transformations that alter the number of loads or the way in which loads are performed, the compiler is theoretically free to break this usage. the most obvious concern is with atomic cas constructs: something of the form tmp=*p;a_cas(p,tmp,f(tmp)); could be transformed to a_cas(p,*p,f(*p)); where the latter is intended to show multiple loads of *p whose resulting values might fail to be equal; this would break the atomicity of the whole operation. but even more fundamental breakage is possible.
  with the changes being made now, objects that may be modified by atomics are modeled as volatile, and the atomic operations performed on them by other threads are modeled as asynchronous stores by hardware which happens to be acting on the request of another thread. such modeling of course does not itself address memory synchronization between cores/cpus, but that aspect was already handled. this all seems less than ideal, but it's the best we can do without mandating a C11 compiler and using the C11 model for atomics.
  in the case of pthread_once_t, the ABI type of the underlying object is not volatile-qualified. so we are assuming that accessing the object through a volatile-qualified lvalue via casts yields volatile access semantics. the language of the C standard is somewhat unclear on this matter, but this is an assumption the linux kernel also makes, and seems to be the correct interpretation of the standard.
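  the pattern at issue, written out; a_cas here is the internal compare-and-swap returning the old value (as in the text above), and the volatile-qualified object is what keeps the compiler from re-loading *p between the read and the cas:

    int a_cas(volatile int *p, int t, int s);   /* internal primitive */

    static int fetch_add_sketch(volatile int *p, int v)
    {
        int old;
        do old = *p;                    /* one ordinary load per attempt */
        while (a_cas(p, old, old + v) != old);
        return old;
    }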
* add syscall numbers for the new execveat syscall | Szabolcs Nagy | 2015-02-09 | 1 | -0/+2
  this syscall allows fexecve to be implemented without /proc, it is new in linux v3.19, added in commit 51f39a1f0cea1cacf8c787f652f26dfee9611874 (sh and microblaze do not have allocated syscall numbers yet)
  added an x32 fix as well: the io_setup and io_submit syscalls are no longer common with x86_64, so use the x32 specific numbers.
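  a sketch of the fexecve use case mentioned above (raw syscall, since no execveat() wrapper exists at this point; AT_EMPTY_PATH comes from fcntl.h):

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    static int fexecve_sketch(int fd, char *const argv[], char *const envp[])
    {
        /* execute the program referred to by fd; no /proc needed */
        return syscall(SYS_execveat, fd, "", argv, envp, AT_EMPTY_PATH);
    }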
* move MREMAP_MAYMOVE and MREMAP_FIXED out of bits | Trutz Behn | 2015-01-30 | 1 | -3/+0
  the definitions are generic for all kernel archs. exposure of these macros now only occurs on the same feature test as for the function accepting them, which is believed to be more correct.
* add new syscall numbers for bpf and kexec_file_load | Szabolcs Nagy | 2014-12-23 | 1 | -0/+2
  these syscalls are new in linux v3.18; bpf is present on all supported archs except sh, while kexec_file_load is only allocated for x86_64 and x32 so far.
  bpf was added in linux commit 99c55f7d47c0dc6fc64729f37bf435abf43f4c60
  kexec_file_load syscall number was allocated in commit f0895685c7fd8c938c91a9d8a6f7c11f22df58d2
* move wint_t definition to the shared part of alltypes.h.in | Rich Felker | 2014-12-21 | 1 | -1/+0
* add arm private syscall numbers | Timo Teräs | 2014-12-03 | 1 | -0/+5
  they are part of the kernel uapi, and some programs (e.g. nodejs) do use them
* inline 5- and 6-argument syscalls on arm | Rich Felker | 2014-11-22 | 1 | -2/+15
* remove old clang workarounds from arm syscall implementation | Rich Felker | 2014-11-22 | 1 | -31/+0
  the register constraints in the non-clang case were tested to work on clang back to 3.2, and earlier versions of clang have known bugs that preclude building musl. there may be other reasons to prefer not to use inline syscalls, but if so the function-call-based implementations should be added back in a unified way for all archs.
* fix __aeabi_read_tp oversight in arm atomics/tls overhaul | Rich Felker | 2014-11-22 | 2 | -6/+5
  calls to __aeabi_read_tp may be generated by the compiler to access TLS on pre-v6 targets. previously, this function was hard-coded to call the kuser helper, which would crash on kernels with kuser helper removed.
  to fix the problem most efficiently, the definition of __aeabi_read_tp is moved so that it's an alias for the new __a_gettp. however, on v7+ targets, code to initialize the runtime choice of thread-pointer loading code is not even compiled, meaning that defining __aeabi_read_tp would have caused an immediate crash due to using the default implementation of __a_gettp with an HCF instruction.
  fortunately there is an elegant solution which reduces overall code size: putting the native thread-pointer loading instruction in the default code path for __a_gettp, so that separate default/native code paths are not needed. this function should never be called before __set_thread_area anyway, and if it is called early on pre-v6 hardware, the old behavior (crashing) is maintained.
  ideally __aeabi_read_tp would not be called at all on v7+ targets anyway -- in fact, prior to the overhaul, the same problem existed, but it was never caught by users building for v7+ with kuser disabled. however, it's possible for calls to __aeabi_read_tp to end up in a v7+ binary if some of the object files were built for pre-v7 targets, e.g. in the case of static libraries that were built separately, so this case needs to be handled.
* overhaul ARM atomics/tls for performance and compatibility | Rich Felker | 2014-11-19 | 5 | -44/+330
  previously, builds for pre-armv6 targets hard-coded use of the "kuser helper" system for atomics and thread-pointer access, resulting in binaries that fail to run (crash) on systems where this functionality has been disabled (as a security/hardening measure) in the kernel. additionally, builds for armv6 hard-coded an outdated/deprecated memory barrier instruction which may require emulation (extremely slow) on future models.
  this overhaul replaces the behavior for all pre-armv7 builds (both of the above cases) to perform runtime detection of the appropriate mechanisms for barrier, atomic compare-and-swap, and thread pointer access. detection is based on information provided by the kernel in auxv: presence of the HWCAP_TLS bit for AT_HWCAP and the architecture version encoded in AT_PLATFORM. direct use of the instructions is preferred when possible, since probing for the existence of the kuser helper page would be difficult and would incur runtime cost.
  for builds targeting armv7 or later, the runtime detection code is not compiled at all, and much more efficient versions of the non-cas atomic operations are provided by using ldrex/strex directly rather than wrapping cas.
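  a sketch of the kind of runtime check described, reading the same auxv information via getauxval (HWCAP_TLS is taken here as bit 15 of AT_HWCAP, per the kernel's asm/hwcap.h for arm):

    #include <sys/auxv.h>

    static int have_thread_register(void)
    {
        /* if set, the cp15 thread-register read can be used directly
           instead of going through the kuser helper page */
        return !!(getauxval(AT_HWCAP) & (1UL << 15));
    }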
* add explicit barrier operation to internal atomic.h API | Rich Felker | 2014-10-10 | 1 | -1/+3
* add new syscall numbers for seccomp, getrandom, memfd_create | Szabolcs Nagy | 2014-10-08 | 1 | -0/+6
  these syscalls are new in linux v3.17 and present on all supported archs except sh.
  seccomp was added in commit 48dc92b9fc3926844257316e75ba11eb5c742b2c. it has operation, flags and pointer arguments (if flags==0 then it is the same as prctl(PR_SET_SECCOMP,...)); the uapi header for flag definitions is linux/seccomp.h
  getrandom was added in commit c6e9d6f38894798696f23c8084ca7edbf16ee895. it provides an entropy source when open("/dev/urandom",..) would fail; the uapi header for flags is linux/random.h
  memfd_create was added in commit 9183df25fe7b194563db3fec6dc3202a5855839c. it allows an anon mmap to have an fd that can be shared, sealed and needs no mount point; the uapi header for flags is linux/memfd.h
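  a usage sketch for getrandom via the new number (flags 0; the call blocks only until the kernel's entropy pool is initialized):

    #include <unistd.h>
    #include <sys/syscall.h>

    static long get_entropy(void *buf, size_t len)
    {
        return syscall(SYS_getrandom, buf, len, 0);
    }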
* add threads.h and needed per-arch types for mtx_t and cnd_t | Rich Felker | 2014-09-06 | 1 | -0/+2
  based on patch by Jens Gustedt.
  mtx_t and cnd_t are defined in such a way that they are formally "compatible types" with pthread_mutex_t and pthread_cond_t, respectively, when accessed from a different translation unit. this makes it possible to implement the C11 functions using the pthread functions (which will dereference them with the pthread types) without having to use the same types, which would necessitate either namespace violations (exposing pthread type names in threads.h) or incompatible changes to the C++ name mangling ABI for the pthread types.
  for the rest of the types, things are much simpler; using identical types is possible without any namespace considerations.
* fix build error on arm due to new a_spin code | Rich Felker | 2014-08-25 | 1 | -1/+1
  this was broken by commit ea818ea8340c13742a4f41e6077f732291aea4bc.
* add working a_spin() atomic for non-x86 targets | Rich Felker | 2014-08-25 | 1 | -0/+1
  conceptually, a_spin needs to be at least a compiler barrier, so the compiler will not optimize out loops (and the load on each iteration) while spinning. it should also be a memory barrier, or the spinning thread might keep spinning without noticing stores from other threads, thus delaying for longer than it should.
  ideally, an optimal a_spin implementation that avoids unnecessary cache/memory contention should be chosen for each arch, but for now, the easiest thing is to perform a useless a_cas on the calling thread's stack.
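  a sketch matching that description (not necessarily the committed definition); a_cas is the internal compare-and-swap primitive:

    int a_cas(volatile int *p, int t, int s);   /* internal primitive */

    static inline void a_spin(void)
    {
        /* a harmless cas on the caller's own stack acts as both a compiler
           barrier and a memory barrier while spinning */
        volatile int tmp = 0;
        a_cas(&tmp, 0, 0);
    }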
* add max_align_t definition for C11 and C++11 | Rich Felker | 2014-08-20 | 1 | -0/+2
  unfortunately this needs to be able to vary by arch, because of a huge mess GCC made: the GCC definition, which became the ABI, depends on quirks in GCC's definition of __alignof__, which does not match the formal alignment of the type.
  GCC's __alignof__ unexpectedly exposes an implementation detail, its "preferred alignment" for the type, rather than the formal/ABI alignment of the type, which it only actually uses in structures. on most archs the two values are the same, but on some (at least i386) the preferred alignment is greater than the ABI alignment.
  I considered using _Alignas(8) unconditionally, but on at least one arch (or1k), the alignment of max_align_t with GCC's definition is only 4 (even the "preferred alignment" for these types is only 4).
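  on arm this works out to an 8-byte-aligned type; a sketch of a typical definition (the exact member list is per-arch):

    typedef struct {
        long long __ll;
        long double __ld;
    } max_align_t;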
* make pointers used in robust list volatile | Rich Felker | 2014-08-17 | 1 | -1/+1
  when manipulating the robust list, the order of stores matters, because the code may be asynchronously interrupted by a fatal signal and the kernel will then access the robust list in what is essentially an async-signal context.
  previously, aliasing considerations made it seem unlikely that a compiler could reorder the stores, but proving that they could not be reordered incorrectly would have been extremely difficult. instead I've opted to make all the pointers used as part of the robust list, including those in the robust list head and in the individual mutexes, volatile.
  in addition, the format of the robust list has been changed to point back to the head at the end, rather than ending with a null pointer. this is to match the documented kernel robust list ABI. the null pointer, which was previously used, only worked because faults during access terminate the robust list processing.
* clean up unused and inconsistent atomics in arch dirs | Rich Felker | 2014-07-27 | 1 | -5/+0
  the a_cas_l, a_swap_l, a_swap_p, and a_store_l operations were probably used a long time ago when only i386 and x86_64 were supported. as other archs were added, support for them was inconsistent, and they are obviously not in use at present. having them around potentially confuses readers working on new ports, and the type-punning hacks and inconsistent use of types in their definitions is not a style I wish to perpetuate in the source tree, so removing them seems appropriate.
* add syscall numbers for the new renameat2 syscall | Szabolcs Nagy | 2014-07-20 | 1 | -0/+2
  it's like rename but with flags, e.g. to allow atomic exchange of two files, introduced in linux 3.15 commit 520c8b16505236fc82daa352e6c5e73cd9870cff
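  a usage sketch for the atomic-exchange case (raw syscall, since no wrapper exists yet; RENAME_EXCHANGE is assumed to come from the kernel's linux/fs.h uapi header):

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/fs.h>

    static int swap_paths(const char *a, const char *b)
    {
        /* atomically exchange the two directory entries */
        return syscall(SYS_renameat2, AT_FDCWD, a, AT_FDCWD, b, RENAME_EXCHANGE);
    }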
* refactor to remove arch-specific relocation code from dynamic linker | Rich Felker | 2014-06-18 | 1 | -25/+12
  this was one of the main instances of ugly code duplication: all archs use basically the same types of relocations, but roughly equivalent logic was duplicated for each arch to account for the different naming and numbering of relocation types and variation in whether REL or RELA records are used.
  as an added bonus, both REL and RELA are now supported on all archs, regardless of which is used by the standard toolchain.
* dynamic linker: permit error returns from arch-specific reloc function | Rich Felker | 2014-06-16 | 1 | -1/+2
  the immediate motivation is supporting TLSDESC relocations which require allocation and thus may fail (unless we pre-allocate), but this mechanism should also be used for throwing an error on unsupported or invalid relocation types, and perhaps in certain cases, for reporting when a relocation is not satisfiable.
* add sched_{get,set}attr syscall numbers and SCHED_DEADLINE macro | Szabolcs Nagy | 2014-05-30 | 1 | -0/+4
  linux 3.14 introduced the sched_getattr and sched_setattr syscalls in commit d50dde5a10f305253cbc3855307f608f8a3c5f73 and the related SCHED_DEADLINE scheduling policy in commit aab03e05e8f7e26f51dee792beddcb5cca9215a5
  but struct sched_attr, the "extended scheduling parameters data structure", is not yet exported to userspace (which is necessary for using the syscalls), so related uapi definitions are not added yet.
* fix arm thread-pointer/atomic asm when compiling to thumb code | Rich Felker | 2014-04-30 | 2 | -6/+7
  armv7/thumb2 provides a way to do atomics in thumb mode, but for armv6 we need a call to arm mode. this commit is based on a patch by Stephen Thomas which fixed the armv7 cases but not the armv6 ones.
  all of this should be revisited if/when runtime selection of thread pointer access and atomics are added.
* fix RLIMIT_ constants for mips | Szabolcs Nagy | 2014-04-15 | 1 | -0/+0
  The mips arch is special in that it uses different RLIMIT_ numbers than other archs, so allow bits/resource.h to override the default RLIMIT_ numbers (empty on all archs except mips). Reported by orc.