this allows the linker to drop certain weak definitions that are
only used as dummies for static linking. they could be eliminated for
shared library builds using the preprocessor instead, but we are
trying to transition to using the same object files for shared and
static libc, so a link-time solution is preferable.
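
the message does not name the link-time mechanism; a plausible sketch,
assuming GNU ld's --gc-sections and musl configure's tryldflag helper
(variable name approximate), would be:

    # drop unreferenced sections, including dummy weak definitions,
    # when linking libc.so
    tryldflag LDFLAGS_AUTO -Wl,--gc-sections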
based on patch by Denys Vlasenko. sorting sections and common data
symbols by alignment acts as an approximation for optimal packing,
which the linker does not actually support.
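
for reference, a sketch of the corresponding configure probes, assuming
musl configure's tryldflag helper (exact spellings approximate):

    tryldflag LDFLAGS_AUTO -Wl,--sort-section,alignment
    tryldflag LDFLAGS_AUTO -Wl,--sort-common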
based on patch by Denys Vlasenko. the original intent for using these
options was to enable linking optimizations. these are immediately
available for static linking applications to libc.a, and will also be
used for linking libc.so in a subsequent commit.
in addition to the original motives, this change works around a whole
class of toolchain bugs where the compiler generates relative address
expressions using a weak symbol, and the assembler "optimizes out" the
relocation that should result, resolving the expression against the weak
definition instead. (see gas
pr 18561 and gcc pr 66609, 68178, etc. for examples.) by having
different functions and data objects in their own sections, all
relative address expressions are cross-section and thus cannot be
resolved to constants until link time. this allows us to retain
support for affected compiler/assembler versions without invasive
and fragile source-level workarounds.
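
a sketch of the corresponding compiler-flag probes (helper and variable
names as used elsewhere in configure; treat details as approximate):

    tryflag CFLAGS_AUTO -ffunction-sections
    tryflag CFLAGS_AUTO -fdata-sections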
this way, overriding these variables on the make command line (or just
re-passing the originally-passed values when invoking make) won't
suppress use of the flags added by configure.
the option to suppress executable stack tagging was placed in CFLAGS,
which is treated as optional and overridable by the build system. if a
user replaces CFLAGS after configure has run, it could get lost,
resulting in a libc.so that's flagged as needing executable stack,
which would cause the kernel to map the initial stack as executable.
move -Wa,--noexecstack to CFLAGS_C99FSE, the make variable used for
mandatory compiler options.
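
a sketch of the resulting configure line (assuming the tryflag helper;
CFLAGS_C99FSE is the mandatory-flags variable named above):

    # keep the assembler from tagging objects as needing executable stack
    tryflag CFLAGS_C99FSE -Wa,--noexecstack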
we need access to all instructions in order for runtime selection of
atomic model to work correctly. without this patch, some versions of
gcc instruct gas to reject instructions outside the target isa level.
some newer binutils versions print scary warnings about protected data
because most gcc versions fail to produce the right address
references/relocations for such data that might be subject to copy
relocations. originally vis.h explicitly assigned default visibility
to all public data symbols to avoid this issue, but commit
b8dda24fe1caa901a99580f7a52defb95aedb67c removed this treatment for
stdin/out/err to work around a gcc 3.x bug and because they don't
actually need it (taking their addresses is not valid C).
instead, a check for the gcc 3.x bug is added to the configure check
for vis.h preinclude support; this feature will simply be disabled
when using a buggy version of gcc.
this is always an error and usually results from failure to find/link
the compiler runtime library, but it could also result from
implementation errors in libc, using functions that don't (yet) exist.
either way the resulting libc.so will crash mysteriously at runtime.
the crash happens too early to produce a meaningful error, so these
crashes are very confusing to users and waste a lot of debugging time.
this commit should ensure that they do not happen.
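
the message does not say how the failure is enforced; one way to get
this behavior with GNU ld, shown purely as an assumption for
illustration, is the --no-undefined flag:

    # make unresolved references an error when linking the shared libc
    tryldflag LDFLAGS_AUTO -Wl,--no-undefined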
with this commit it should be possible to produce a working
static-linked fdpic libc and application binaries for sh.
the changes in reloc.h are largely unused at this point since dynamic
linking is not supported, but the CRTJMP macro is used in one place
outside of dynamic linking, in __unmapself.
Some functions implemented in asm need to use EBP for purposes other
than acting as a frame pointer. (Notably, it is used for the 6th
argument to syscalls with 6 arguments.) Without frame pointers, GDB
can only show backtraces if it gets CFI information from a
.debug_frame or .eh_frame ELF section.
Rather than littering our asm with ugly .cfi directives, use an awk
script to insert them in the right places during the build process, so
GDB can keep track of where the current stack frame is relative to the
stack pointer. This means GDB can produce beautiful stack traces at
any given point when single-stepping through asm functions.
Additionally, when registers are saved on the stack and later
overwritten, emit .cfi directives so GDB will know where they were
saved relative to the stack pointer. This way, when you look back up
the stack from within an asm function, you can still reliably print
the values of local variables in the caller.
If this awk script were to understand every possible wild and crazy
contortion that an asm programmer can do with the stack and registers,
and always emit the exact .cfi directives needed for GDB to know what
the register values were in the preceding stack frame, it would
necessarily be as complex as a full x86 emulator. That way lies
madness.
Hence, we assume that the stack pointer will _only_ ever be adjusted
using push/pop or else add/sub with a constant. We do not attempt to
detect every possible way that a register value could be saved for
later use, just the simple and common ways.
Thanks to Szabolcs Nagy for suggesting numerous improvements to this
code.
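
roughly, the build pipes each asm source through the script before
handing it to the compiler; an illustrative command (file names and
paths approximate, not the exact make rule):

    awk -f tools/add-cfi.i386.awk src/string/i386/memcpy.s \
        | $CC -x assembler -c -o obj/src/string/i386/memcpy.o -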
musl-clang allows the user to compile musl-powered programs using their
existing clang install, without the need for a special cross compiler.
it achieves this by wrapping around both the system clang install and the
linker and passing them special flags that retarget them to musl at
invocation time.
it only affects invocations done through the special musl-clang wrapper
script, so that the rest of the user's setup remains fully intact.
the clang wrapper consists of the compiler frontend wrapper script,
musl-clang, and the linker wrapper script, ld.musl-clang.
musl-clang makes sure clang invokes ld.musl-clang to link objects; neither
script needs to be in PATH for the wrapper to work.
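
a minimal sketch of the idea behind the compiler-side wrapper (not the
actual script; the install paths and the use of -B/-fuse-ld here are
assumptions):

    #!/bin/sh
    # point clang at musl's headers/libs and at the ld.musl-clang wrapper
    # sitting next to this script, so neither needs to be in PATH.
    thisdir=$(dirname "$0")
    exec clang -B"$thisdir" -fuse-ld=musl-clang \
        -nostdinc -isystem /usr/local/musl/include \
        -L/usr/local/musl/lib "$@"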
the old test was broken in that it would never fail on a toolchain built
without dynamic linking support, leading to the wrapper script possibly
being installed for compilers that do not support it. in addition, the
new test is portable across compilers: the old test only worked with GCC.
the new test works by checking whether the toolchain's libc defines
__GLIBC__: most non-musl Linux libcs define this for compatibility even
when they are not glibc, so this is a reasonably safe way to detect a
toolchain that is not already musl-based. in addition, the compiler
runtime would need to have a somewhat glibc-compatible ABI in the first
place, so the compiler runtime of a non-glibc-compatible libc might not
work. it is safer to disable these cases by default and have the user
enable the wrappers manually there using --enable-wrapper if they are
certain it works.
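
a sketch of the detection logic (not the exact configure code):

    tmpc=./conftest.c
    printf '%s\n' '#include <stdlib.h>' \
        '#ifndef __GLIBC__' '#error not glibc-compatible' '#endif' > "$tmpc"
    if $CC -c -o /dev/null "$tmpc" >/dev/null 2>&1 ; then
        wrapper=yes   # toolchain libc looks glibc-compatible; wrappers useful
    fi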
this overhauls part of the build system in order to support multiple
toolchain wrapper scripts, as opposed to solely the musl-gcc wrapper as
before. it thereby replaces --enable-gcc-wrapper with --enable-wrapper=...,
which has the options 'auto' (the default, detect whether to use wrappers),
'all' (build and install all wrappers), 'no' (don't build any) and finally
the options named after the individual compiler scripts (currently only
'gcc' is available) to build and install only that wrapper.
the old --enable-gcc-wrapper is removed from --help, but still available.
it also modifies the wrappers to use the C compiler specified to the build
system as 'inner' compiler, when applicable. as wrapper detection works by
probing this compiler, it may not work with any other.
some compilers (such as clang) accept unknown options without error,
but then print warnings on each invocation, cluttering the build
output and burying meaningful warnings. this patch makes configure's
tryflag and tryldflag functions use additional options to turn the
unknown-option warnings into errors, if available, but only at check
time. these options are not output in config.mak to avoid the risk of
spurious build breakage; if they work, they will have already done
their job at configure time.
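
a sketch of the approach (variable names approximate): the
error-promoting options accumulate in check-time-only variables that
tryflag/tryldflag add to each probe but never write to config.mak.

    tryflag CFLAGS_TRY -Werror=unknown-warning-option
    tryflag CFLAGS_TRY -Werror=unused-command-line-argument
    tryldflag LDFLAGS_TRY -Werror=unknown-warning-option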
pcc does not search for -include relative to the working directory
unless -I. is used. rather than adding -I., which could be problematic
if there's extra junk in the top-level directory, switch back to the
old method (reverting commit 60ed988fd6c67b489d7cc186ecaa9db4e5c25b8c)
of using -include vis.h and relying on -I./src/internal being present
on the command line (which the Makefile guarantees). to fix the
breakage that was present in trycppif checks with the old method,
$CFLAGS_AUTO is removed from the command line passed to trycppif; this
is valid since $CFLAGS_AUTO should not contain options that alter
compiler semantics or ABI, only optimizations, warnings, etc.
Some build environments pass -march and -mtune as part of CC, therefore
update configure to check both CC and CFLAGS before making the decision
to fall back to generic -march and -mtune options for x86.
Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
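
A sketch of the check (fnmatch is a small glob-matching helper in
configure; flag values and variable names approximate):

    fnmatch '-march=*|*\ -march=*' "$CC $CFLAGS" || tryflag CFLAGS_AUTO -march=i486
    fnmatch '-mtune=*|*\ -mtune=*' "$CC $CFLAGS" || tryflag CFLAGS_AUTO -mtune=generic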
commit de2b67f8d41e08caa56bf6540277f6561edb647f introduced a
regression by adding a -include option to CFLAGS_AUTO which did not
work without additional -I options. this broke subsequent trycppif
tests and caused x86_64 to be misdetected as x32, among other issues.
simply using the full relative pathname to vis.h rather than -I is the
cleanest way to fix the problem.
this is implemented via the build system and does not affect source
files. the idea is to use protected or hidden visibility to prevent
the compiler from pessimizing function calls within a shared (or
position-independent static) libc in the form of overhead setting up
for a call through the PLT. the ld-time symbol binding via the
-Bsymbolic-functions option already optimized out the PLT itself, but
not the code in the caller needed to support a call through the PLT.
on some archs this overhead can be substantial; on others it's
trivial.
this was already essentially possible as a result of the previous
commits changing the dynamic linker/thread pointer bootstrap process.
this commit mainly adds build system infrastructure:
configure no longer attempts to disable stack protector. instead it
simply determines how to do so, so that the makefile can disable stack
protector for a few translation units used during early startup.
stack protector is also disabled for memcpy and memset since compilers
(incorrectly) generate calls to them on some archs to implement
struct initialization and assignment, and such calls may creep into
early initialization.
no explicit attempt to enable stack protector is made by configure at
this time; any stack protector option supported by the compiler can be
passed to configure in CFLAGS, and if the compiler uses stack
protector by default, this default is respected.
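
a sketch of the configure side (variable name approximate); the makefile
then adds this variable to the flags for the early-startup translation
units and for memcpy/memset:

    # learn how to turn stack protector off without forcing it off globally
    tryflag CFLAGS_NOSSP -fno-stack-protector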
This adds complete aarch64 target support including bigendian subarch.
Some of the long double math functions are known to be broken; otherwise
the interfaces should be fully functional, but at this point consider this
port experimental.
Initial work on this port was done by Sireesh Tripurari and Kevin Bortis.
based on patch by Vadim Ushakov. in general overriding LC_ALL rather
than specific categories (here, LC_MESSAGES) is undesirable, but
LC_ALL is easier and in this case there is nothing else that depends
on the locale in this invocation of the compiler.
With the exception of a fenv implementation, the port is fully featured.
The port has been tested in or1ksim, the golden reference functional
simulator for OpenRISC 1000.
It passes all libc-test tests (except the math tests that
require a fenv implementation).
The port assumes an or1k implementation that has support for
atomic instructions (l.lwa/l.swa).
Although it passes all the libc-test tests, the port is still
in an experimental state, and has as yet seen very little
'real-world' use.
previously we detected this bug in configure and issued advice for a
workaround, but this turned out not to work. since then gcc 4.9.0 has
appeared in several distributions, and now 4.9.1 has been released
without a fix despite this being a wrong code generation bug which is
supposed to be a release-blocker, per gcc policy.
since the scope of the bug seems to affect only data objects (rather
than functions) whose definitions are overridable, and there are only
a very small number of these in musl, I am just changing them from
const to volatile for the time being. simply removing the const would
be sufficient to make gcc 4.9.1 work (the non-const case was
inadvertently fixed as part of another change in gcc), and this would
also be sufficient with 4.9.0 if we forced -O0 on the affected files
or on the whole build. however it's cleaner to just remove all the
broken compiler detection and use volatile, which will ensure that
they are never constant-folded. the quality of a non-broken compiler's
output should not be affected except for the fact that these objects
are no longer const and thus possibly add a few bytes to data/bss.
this change can be reconsidered and possibly reverted at some point in
the future when the broken gcc versions are no longer relevant.
this behavior turned out to be counter-intuitive to users and in any
case it's unnecessary. optimization can be disabled explicitly using
the --disable-optimize option, or both can be achieved without any
enable/disable options by passing CFLAGS="-O0 -g".
previously, a warning was issued in this case no matter what, even if
--disable-shared was used. now, the default for --enable-shared is
changed from "yes" to "auto", and the warning is issued by default,
but becomes an error if --enable-shared is used, and the test is
suppressed completely if --disable-shared is used.
this is gcc bug #61144. the broken compiler is detected, but the user
must manually work around it. this is partly to avoid complex logic
for adding workaround CFLAGS and attempting to recheck with them, and
partly for the sake of letting the user know the compiler is broken
(since the workaround will result in less-efficient code production).
some refactoring was also needed to move the check for gcc outside of
the check for whether to build the compiler wrapper.
without this, broken choices of CC/CPPFLAGS/CFLAGS don't show up until
late in the configure process where they are confusingly reported as a
different failure such as incorrect long double type.
As far as gcc3 knows, sh4 is the only processor version that can have an
FPU, so it indicates the FPU's presence by defining __SH4__. This is not
defined if there is no FPU, even if the processor really is an SH4.
Starting with gcc4, there is support for the sh2a processor, which has an
FPU but is not an SH4. gcc4 therefore additionally defines __SH_FPU_ANY__
when there is an FPU, but still doesn't define __SH4__ for an FPU-less sh4.
Therefore, to support all gcc versions, we must look at both preprocessor
symbols.
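
So the configure-side test looks at both macros, e.g. via musl
configure's trycppif helper (subarch suffix shown for illustration):

    # $t holds the flags passed to preprocessor probes
    trycppif "__SH4__ || __SH_FPU_ANY__" "$t" || SUBARCH=${SUBARCH}-nofpu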
otherwise a multilib compiler used with -mx32 will not be detected
properly.
the previous pattern required "x32" to be used as the second field of
the gcc tuple, which is usually reserved for vendor use and not
appropriate as an ABI specifier. with this change, putting "x32" at
the end of the tuple, the way ABI specifiers are normally done, is
also permitted.
most notably, it was failing to match sh4-*, etc., but in general the
explicit matching of hyphens for some archs was problematic because it
failed to accept simply the musl-style arch name (without a gcc-style
tuple) as an input. the original motivation of matching hyphens was to
prevent incorrectly identifying a 64-bit arch as the corresponding
32-bit arch (e.g. mips* matching mips64) but this is easily fixed by
simply checking (and for now, rejecting as unsupported) the relevant
64-bit archs.
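
the resulting matching is roughly of this shape (patterns simplified,
not the exact configure code):

    # fail() is configure's error helper: print a message and exit
    case "$target" in
    mips64*|powerpc64*) fail "unsupported target arch: $target" ;;
    arm*) ARCH=arm ;;
    mips*) ARCH=mips ;;
    sh*|superh*) ARCH=sh ;;
    *) fail "unknown or unsupported target \"$target\"" ;;
    esac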
default endianness for sh on linux is little, and while conventions
vary, "eb" seems to be the most widely used suffix for big endian.
linux, gcc, etc. all use "sh" as the name for the superh arch. there
was already some inconsistency internally in musl: the dynamic linker
was searching for "ld-musl-sh.path" as its path file despite its own
name being "ld-musl-superh.so.1". there was some sentiment in both
directions as to how to resolve the inconsistency, but overall "sh"
was favored.
Userspace emulated floating-point (gcc -msoft-float) is not compatible
with the default mips abi (which assumes an FPU or in-kernel emulation of
it). Soft and hard float ABIs should not be mixed; __mips_soft_float is
checked in musl's configure script and there is no runtime check. The -sf
subarch does not save/restore floating-point registers in setjmp/longjmp
and only provides a dummy fenv implementation.
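
A sketch of the configure check mentioned above (helper and variable
names approximate):

    # $t holds the flags passed to preprocessor probes
    trycppif __mips_soft_float "$t" && SUBARCH=${SUBARCH}-sf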
x32 is the internal arch name, but glibc uses x86_64-x32.
there doesn't exist a specific triple for x32 in gcc and binutils.
you're supposed to build your compiler for x86_64 and configure
it with multilib support for "mx32".
however it turns out that using a triple of x86_64-x32 makes
gcc and binutils pick up the right arch (they detect it as x86_64)
and allows us to have a unique triple for cross-compiler toolchains.
I originally added this warning option based on a misunderstanding of
how it works. it does not warn whenever the destination of the cast
has stricter alignment; it only warns in cases where misaligned
dereference could lead to a fault. thus, it's essentially a no-op for
i386, which had me wrongly believing the code was clean for this
warning level. on other archs, numerous diagnostic messages are
produced, and all of them are false-positives, so it's better just not
to use it.
this will be needed for upcoming commits to the string/mem functions
to correct their unannounced use of aliasing violations for
word-at-a-time search, fill, and copy operations.
in one place, a semicolon (non-portable) was still used in place of
separate -e options (copied over from an old version of this code), and
a literal slash was used in the bracket expression for the final
command, despite slash being used as the delimiter for the s command.
proper shell quoting and pretty-printing (avoiding ugly gratuitous
quoting and bad quoting style) are included.
it turns out that __SOFTFP__ does not indicate the ABI in use but
rather that fpu instructions are not to be used at all. this is
specified in ARM's documentation so I'm unclear on how I previously
got the wrong idea. unfortunately, this resulted in the 0.9.12 release
producing a dynamic linker with the wrong name. fortunately, there do
not yet seem to be any public toolchain builds using the wrong name.
the __ARM_PCS_VFP macro does not seem to be official from ARM, and in
fact it was missing from the very earliest gcc versions (around 4.5.x)
that added -mfloat-abi=hard. it would be possible on such versions to
perform some ugly linker-based tests instead in hopes that the linker
will reject ABI-mismatching object files, if there is demand for
supporting such versions. I would probably prefer to document which
versions are broken and warn users to manually add -D__ARM_PCS_VFP if
using such a version.
there's definitely an argument to be made that the fenv macros should
be exposed even in -mfloat-abi=softfp mode. for now, I have chosen not
to expose them in this case, since the math library will not
necessarily have the capability to raise exceptions (it depends on the
CFLAGS used to compile it), and since exceptions are officially
excluded from the ARM EABI, which the plain "arm" arch aims to
follow.
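
the detection therefore keys off __ARM_PCS_VFP rather than __SOFTFP__;
a sketch (names approximate):

    # hard-float EABI: use the "armhf" subarch name (and dynamic linker name)
    trycppif __ARM_PCS_VFP "$t" && SUBARCH=${SUBARCH}hf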
the default subarch is the one whose full name is just the base arch
name, with no suffixes. normally, either the asm in the default
subarch is suitable for all subarch variants, or separate asm is
mandatory for each variant. however, in the case of asm which is
purely for optimization purposes, it's possible to have asm that only
works (or only performs well) on the default subarch, and not on any of
the other variants.
the default variant, for example "armel" for the default,
little-endian arm. further such default-subarch names can be added in
the future as needed.