| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
| |
the other atomic FD_CLOEXEC interfaces (dup3, pipe2, socket) already
had such emulation in place. the justification for doing the emulation
here is the same as for the other functions: it allows applications to
simply use accept4 rather than needing their own fallback code
for ENOSYS/EINVAL (which one you get is arch-specific!), and there is
no reasonable way an application could benefit from knowing the
operation is emulated/non-atomic since there is no workaround at the
application level for non-atomicity (that is the whole reason these
interfaces were added).
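for illustration, the kind of fallback that applications would otherwise
need (and that the emulation performs internally, minus the atomicity)
looks roughly like this; a sketch, not musl's actual code:

    #define _GNU_SOURCE
    #include <sys/socket.h>
    #include <fcntl.h>
    #include <errno.h>

    static int accept_cloexec(int s, struct sockaddr *addr, socklen_t *len)
    {
        int fd = accept4(s, addr, len, SOCK_CLOEXEC);
        if (fd >= 0 || (errno != ENOSYS && errno != EINVAL)) return fd;
        /* non-atomic fallback: there is a window between accept and
         * fcntl during which the new fd is not close-on-exec */
        fd = accept(s, addr, len);
        if (fd >= 0) fcntl(fd, F_SETFD, FD_CLOEXEC);
        return fd;
    }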
|
|
|
|
| |
based on patch by orc.
|
|
|
|
|
|
|
|
| |
this was unlikely to lead to any crash or dangerous behavior, but
caused adjacent string constants to be treated as part of the
protocols table, possibly returning nonsensical results for unknown
protocol names/numbers or when getprotoent was called in a loop to
enumerate all protocols.
|
|
|
|
| |
this is a requirement in the specification that was overlooked.
|
|
|
|
|
|
|
|
|
|
|
|
| |
The architecture-specific assembly versions of clone did not set errno on
failure, which is inconsistent with glibc. __clone still returns the error
via its return value, and clone is now a wrapper that sets errno as needed.
The public clone has also been moved to src/linux, as it's not directly
related to the pthreads API.
__clone is called by pthread_create, which does not report errors via
errno. Though not strictly necessary, it's nice to avoid clobbering errno
here.
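a minimal sketch of the described split, assuming __clone reports failure
via a negative errno value as raw syscalls do (the signatures here are
illustrative, not the actual musl declarations):

    #include <errno.h>
    #include <stdarg.h>

    /* raw version used by pthread_create: reports failure via a
     * negative return value and never touches errno */
    int __clone(int (*func)(void *), void *stack, int flags, void *arg,
                void *ptid, void *tls, void *ctid);

    /* public wrapper: same call, but failures set errno */
    int clone(int (*func)(void *), void *stack, int flags, void *arg, ...)
    {
        va_list ap;
        va_start(ap, arg);
        /* the extra arguments are read unconditionally, as real
         * implementations do */
        void *ptid = va_arg(ap, void *);
        void *tls  = va_arg(ap, void *);
        void *ctid = va_arg(ap, void *);
        va_end(ap);

        int ret = __clone(func, stack, flags, arg, ptid, tls, ctid);
        if (ret < 0) {
            errno = -ret;   /* errno is touched only in the wrapper */
            return -1;
        }
        return ret;
    }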
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
the default fenv was not set up properly; in particular, the
tag word that indicates the contents of the x87 registers was
set to 0 (used) instead of 0xffff (empty).
this could cause random crashes after setting the default fenv:
it corrupted the fpu stack, and then any float computation
gave a NaN result, breaking the program logic (usually after a
float to integer conversion).
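for reference, a sketch of the x87 environment image involved; the layout
follows the fldenv/fnstenv format for 32-bit protected mode, and the field
names are illustrative, not taken from the musl sources:

    /* 28-byte x87 environment image */
    struct x87_env {
        unsigned short cw, cw_pad;   /* control word */
        unsigned short sw, sw_pad;   /* status word */
        unsigned short tw, tw_pad;   /* tag word: 2 bits per register */
        unsigned int   fip;          /* instruction pointer offset */
        unsigned int   fcs;          /* cs selector + opcode */
        unsigned int   fdp;          /* data pointer offset */
        unsigned int   fds;          /* data pointer selector */
    };

    /* a correct default environment marks all eight registers empty:
     * 11b per register, i.e. tw = 0xffff rather than 0 ("in use") */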
|
|
|
|
|
|
|
| |
this saves a syscall in the case where the underlying open already
took place with O_APPEND, which is common because fopen with append
modes sets O_APPEND at the time of open before passing the file
descriptor to __fdopen.
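a sketch of the idea (the helper name is illustrative, not musl's code):
only issue F_SETFL when O_APPEND is not already set on the descriptor:

    #include <fcntl.h>

    static void set_append_mode(int fd)
    {
        int flags = fcntl(fd, F_GETFL);
        if (flags >= 0 && !(flags & O_APPEND))
            fcntl(fd, F_SETFL, flags | O_APPEND);   /* skipped if set */
    }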
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
when there is unflushed output, ftello (and ftell) compute the logical
stream position as the underlying file descriptor's offset plus an
adjustment for the amount of buffered data. however, this can give the
wrong result for append-mode streams where the unflushed writes should
adjust the logical position to be at the end of the file, as if a seek
to end-of-file takes place before the write.
the solution turns out to be a simple trick: when ftello (indirectly)
calls lseek to determine the current file offset, use SEEK_END instead
of SEEK_CUR if the stream is append-mode and there's unwritten
buffered data.
the ISO C rules regarding switching between reading and writing for a
stream opened in an update mode, along with the POSIX rules regarding
switching "active handles", conveniently leave undefined the
hypothetical usage cases where this fix might lead to observably
incorrect offsets.
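an illustrative sketch of the computation (the helper and its parameters
are made up for illustration; the real code works on the FILE object's
buffer pointers):

    #include <stdio.h>
    #include <unistd.h>

    static off_t logical_pos(int fd, int append_mode,
                             size_t unflushed, size_t unread)
    {
        /* unflushed output on an append-mode stream will land at
         * end-of-file, so measure from there rather than SEEK_CUR */
        int whence = (append_mode && unflushed) ? SEEK_END : SEEK_CUR;
        off_t off = lseek(fd, 0, whence);
        if (off < 0) return -1;
        return off + (off_t)unflushed - (off_t)unread;
    }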
the bug being fixed was discovered via the test case for glibc issue
|
| |
|
|
|
|
|
| |
the incorrect check for crossing device boundaries was preventing nftw
from traversing anything except the initially provided pathname.
|
|
|
|
| |
posix allows a zero-length destination
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
STB_WEAK is only a weak reference for undefined symbols (those with a
section of SHN_UNDEF). otherwise, it's a weak definition. normally
this distinction would not matter, since a relocation referencing a
symbol that also provides a definition (not SHN_UNDEF) will always
succeed in finding the referenced symbol itself. however, in the case
of copy relocations, the referenced symbol itself is ignored in order
to search for another symbol to copy from, and thus it's possible that
no definition is found. in this case, if the symbol being resolved
happened to be a weak definition, it was misinterpreted as a weak
reference, suppressing the error path and causing a crash when the
copy relocation was performed with a null source pointer passed to
memcpy.
there are almost certainly still situations in which invalid
combinations of symbol and relocation types can cause the dynamic
linker to crash (this is pretty much inevitable), but the intent is
that crashes not be possible for symbol/relocation tables produced by
a valid linker.
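a sketch of the distinction being fixed, using the standard ELF
definitions (not the dynamic linker's actual code):

    #include <elf.h>

    static int is_weak_reference(const Elf64_Sym *sym)
    {
        /* STB_WEAK only permits an unresolved result for undefined
         * symbols; a defined STB_WEAK symbol is a weak definition, and
         * a failed lookup for it (as in a copy relocation searching for
         * another object's definition) must still be treated as an error */
        return ELF64_ST_BIND(sym->st_info) == STB_WEAK
               && sym->st_shndx == SHN_UNDEF;
    }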
|
|
|
|
|
|
|
|
|
|
|
| |
setstate could use the results of previous initstate or setstate
calls (they return the old state buffer), but the documentation
requires that a buffer initialized by initstate be usable in
setstate immediately, which means that initstate must save the
generator parameters in it.
I also removed the copyright notice since it is present in the
copyright file.
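the documented usage pattern that the fix guarantees, shown as a small
example:

    #define _XOPEN_SOURCE 700
    #include <stdlib.h>

    static char state[128];

    void example(void)
    {
        initstate(12345, state, sizeof state);
        /* a buffer that has just been through initstate is immediately
         * valid for setstate, because initstate records the generator
         * parameters in it */
        setstate(state);
    }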
|
| |
|
|
|
|
|
| |
this glibc abi compatibility function was missed when the scanf
aliases were added.
|
|
|
|
|
| |
weak_alias was only in the c code, so drem was missing on platforms
where remainder is implemented in asm.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
per POSIX, the variadic argument has type union semun, which may
contain a pointer or int; the type read depends on the command being
issued. this allows the userspace part of the implementation to be
type-correct without requiring special-casing for different commands.
the kernel always expects to receive the argument interpreted as
unsigned long (or equivalently, a pointer), and does its own handling
of extracting the int portion from the representation, as needed.
this change fixes two possible issues: most immediately, reading the
argument as a (signed) long and passing it to the syscall would
perform incorrect sign-extension of pointers on the upcoming x32
target. the other possible issue is that some archs may use a different
(user-space) argument-passing convention for unions, preventing va_arg
from correctly obtaining the argument when the type long (or even
unsigned long or void *) is passed to it.
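a sketch of the described argument handling (the function name and the
abbreviated command list are illustrative, and the syscall invocation is
omitted; this is not musl's code):

    #include <stdarg.h>
    #include <sys/sem.h>

    /* per POSIX, the application declares union semun itself */
    union semun { int val; struct semid_ds *buf; unsigned short *array; };

    int semctl_sketch(int id, int num, int cmd, ...)
    {
        union semun arg = {0};
        /* abbreviated list of commands that carry an argument */
        if (cmd == SETVAL || cmd == GETALL || cmd == SETALL ||
            cmd == IPC_STAT || cmd == IPC_SET) {
            va_list ap;
            va_start(ap, cmd);
            arg = va_arg(ap, union semun);  /* read with the correct type */
            va_end(ap);
        }
        /* the kernel sees the union's representation as an unsigned
         * long / pointer and extracts the int itself when cmd needs one */
        (void)id; (void)num; (void)arg;
        return -1;
    }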
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
really, fcntl should be changed to use the correct type corresponding
to cmd when calling va_arg, and to carry the correct type through
until making the syscall. however, this greatly increases binary size
and does not seem to offer any benefits except formal correctness, so
I'm holding off on that change for now.
the minimal changes made in this patch are in preparation for addition
of the x32 port, where the syscall macros need to know whether their
arguments are pointers or integers in order to properly pass them to
the 64-bit kernel.
|
|
|
|
|
|
|
| |
it's unclear what the historical signature for this function was, but
semantically, the argument should be a pointer to const, and this is
what glibc uses. correct programs should not be using this function
anyway, so it's unlikely to matter.
|
|
|
|
|
| |
both the kernel and glibc agree that this argument is unsigned; the
incorrect type ssize_t came from erroneous man pages.
|
|
|
|
|
|
| |
this change is consistent with the corresponding glibc functions and
is semantically const-correct. the incorrect argument types without
const seem to have been taken from erroneous man pages.
|
|
|
|
|
|
| |
this was wrong since the original commit adding inotify, and I don't
see any explanation for it. not even the man pages have it wrong. it
was most likely a copy-and-paste error.
|
|
|
|
|
|
| |
the type int was taken from seemingly erroneous man pages. glibc uses
in_addr_t (uint32_t), and semantically, the arguments should be
unsigned.
|
|
|
|
|
|
|
|
| |
this practice came from very early, before internal/syscall.h defined
macros that could accept pointer arguments directly and handle them
correctly. aside from being ugly and unnecessary, it looks like it
will be problematic when we add support for 32-bit ABIs on archs where
registers (and syscall arguments) are 64-bit, e.g. x32 and mips n32.
|
|
|
|
|
|
|
| |
this agrees with implementation practice on glibc and BSD systems, and
is the const-correct way to do things; it eliminates warnings from
passing pointers to const. the prototype without const came from
seemingly erroneous man pages.
|
| |
|
|
|
|
|
|
| |
the header is included only as a guard to check that the declaration
and definition match, so the typo didn't cause any breakage aside
from omitting this check.
|
|
|
|
|
|
| |
the reasons are the same as for sbrk. unlike sbrk, there is no safe
usage because brk does not return any useful information, so it should
just fail unconditionally.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
use of sbrk is never safe; it conflicts with malloc, and malloc may be
used internally by the implementation basically anywhere. prior to
this change, applications attempting to use sbrk to do their own heap
management simply caused untrackable memory corruption; now, they will
fail with ENOMEM allowing the errors to be fixed.
sbrk(0) is still permitted as a way to get the current brk; some
misguided applications use this as a measurement of their memory
usage or for other related purposes, and such usage is harmless.
eventually sbrk may be re-added if/when malloc is changed to avoid
using the brk by using mmap for all allocations.
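a sketch of the resulting behavior (not the actual musl source, which
uses its internal syscall macros):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    void *sbrk_sketch(intptr_t inc)
    {
        if (inc) {
            errno = ENOMEM;                  /* moving the brk always fails */
            return (void *)-1;
        }
        return (void *)syscall(SYS_brk, 0);  /* sbrk(0) still reports the brk */
    }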
|
| |
|
|
|
|
| |
based on patch by Timo Teräs; greatly simplified to use fprintf.
|
|
|
|
| |
based on patch by Timo Teräs.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
the workaround/fallback code for supporting O_PATH file descriptors
when the kernel lacks support for performing these operations on them
caused EBADF to get replaced by ENOENT (due to missing entry in
/proc/self/fd). this is unlikely to affect real-world code (calls that
might yield EBADF are generally unsafe, especially in library code)
but it was breaking some test cases.
the fix I've applied is something of a tradeoff: it adds one syscall
to these operations on kernels where the workaround is needed. the
alternative would be to catch ENOENT from the /proc lookup and
translate it to EBADF, but I want to avoid doing that in the interest
of not touching/depending on /proc at all in these functions as long
as the kernel correctly supports the operations. this is following the
general principle of isolating hacks to code paths that are taken on
broken systems, and keeping the code for correct systems completely
hack-free.
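one way to realize the described tradeoff; the validity probe used here
(fcntl F_GETFD) and the callback shape are assumptions for illustration,
not necessarily what musl does:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>

    static int fallback_via_proc(int fd, int (*op)(const char *path))
    {
        char buf[40];
        /* the extra syscall: make sure a bad fd reports EBADF instead of
         * turning into ENOENT from the missing /proc/self/fd entry */
        if (fcntl(fd, F_GETFD) < 0) return -1;
        snprintf(buf, sizeof buf, "/proc/self/fd/%d", fd);
        return op(buf);   /* only reached when the kernel lacks support */
    }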
|
| |
|
|
|
|
|
|
|
|
| |
the ABI allows the callee to clobber stack slots that correspond to
arguments passed in registers, so the caller must adjust the stack
pointer to reserve space appropriately. prior to this fix, the argv
array was possibly clobbered by dynamic linker code before passing
control to the main program.
|
|
|
|
|
|
|
|
| |
our getcwd already (as an extension) supports allocation of a buffer
when the buffer argument is a null pointer, so there's no need to
duplicate the allocation logic in this wrapper function. duplicating
it is actually harmful in that it doubles the stack usage from
PATH_MAX to 2*PATH_MAX.
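a minimal sketch of the simplification, assuming a wrapper with
get_current_dir_name-like semantics (returning a newly allocated string);
the wrapper's identity is an assumption here:

    #include <unistd.h>

    char *alloc_cwd(void)
    {
        /* with the getcwd extension, a null buffer means "allocate",
         * so no PATH_MAX-sized stack buffer is needed at all */
        return getcwd(0, 0);
    }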
|
|
|
|
| |
and thereby remove otherwise-unnecessary inclusion of stddef.h
|
| |
|
|
|
|
|
| |
at most 4 hexadecimal digits are processed in one field so the
value cannot overflow. the netdb.h header was not used.
|
|
|
|
|
| |
this makes the prototypes in math.h visible so they are checked
against the function definitions.
|
|
|
|
|
| |
this is purely a wrapper for close since Linux does not support EINTR
semantics for the close syscall.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
previously this flag was defined and accepted as a no-op, possibly
breaking some software that uses it. given the choice to remove the
definition and possibly break applications that were already working,
or simply implement the feature, the latter turned out to be easy
enough to make the decision straightforward.
in the case where the FNM_PATHNAME flag is also set, this
implementation is clean and essentially optimal. otherwise, it's an
inefficient "brute force" implementation. at some point, when cleaning
up and refactoring this code, I may add a more direct code path for
handling FNM_LEADING_DIR in the non-FNM_PATHNAME case, but at this
point my main interest is avoiding introducing new bugs in the code
that implements the standard fnmatch features specified by POSIX.
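a sketch of the brute-force idea for the non-FNM_PATHNAME case: accept the
name if the pattern matches the whole string or any leading directory
prefix of it (illustrative only; the real code works in place without
copying):

    #define _GNU_SOURCE
    #include <fnmatch.h>
    #include <string.h>

    static int leading_dir_match(const char *pat, const char *str, int flags)
    {
        char buf[4096];
        size_t len = strlen(str);
        flags &= ~FNM_LEADING_DIR;
        if (!fnmatch(pat, str, flags)) return 0;      /* whole-string match */
        if (len >= sizeof buf) return FNM_NOMATCH;    /* sketch-only limit */
        memcpy(buf, str, len + 1);
        for (char *s = buf + len; s > buf; s--) {
            if (*s != '/') continue;
            *s = 0;                      /* try the prefix before this '/' */
            if (!fnmatch(pat, buf, flags)) return 0;
            *s = '/';
        }
        return FNM_NOMATCH;
    }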
|
|
|
|
|
|
|
| |
this is still experimental and subject to change. for git checkouts,
an attempt is made to record the exact revision to aid in bug reports
and debugging. no version information is recorded in the static libc.a
or binaries it's linked into.
|
|
|
|
|
|
| |
the FNM_PATHNAME logic for advancing by /-delimited components was
incorrect when the / character was escaped (i.e. \/), and a final \ at
the end of pattern was not handled correctly.
|
|
|
|
|
|
| |
a '/' in the pattern could be incorrectly matched against the
terminating null byte in the string, causing an arbitrarily long
sequence of out-of-bounds accesses in fnmatch("/","",FNM_PATHNAME).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
a v6 socket will only be used if there is at least one v6 nameserver
address. if the kernel lacks v6 support, the code will fall back to
using a v4 socket and requests to v6 servers will silently fail. when
using a v6 socket, v4 addresses are converted to v4-mapped form and
setsockopt is used to ensure that the v6 socket can accept both v4 and
v6 traffic (this is on-by-default on Linux but the default is
configurable in /proc and so it needs to be set explicitly on the
socket level). this scheme avoids increasing resource usage during
lookups and allows the existing network io loop to be used without
modification.
previously, nameservers whose address family did not match the address
family of the first-listed nameserver were simply ignored. prior to
recent __ipparse fixes, they were not ignored but erroneously parsed.
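a sketch of the described socket setup; error handling, close-on-exec and
non-blocking flags, and the v4-to-v4-mapped address rewriting are omitted,
and this is not the resolver's actual code:

    #include <sys/socket.h>
    #include <netinet/in.h>

    int open_ns_socket(int have_v6_ns)
    {
        int fd;
        if (have_v6_ns) {
            fd = socket(AF_INET6, SOCK_DGRAM, 0);
            if (fd >= 0) {
                /* accept v4 traffic too; the kernel default for this
                 * is configurable via /proc, so set it explicitly */
                int zero = 0;
                setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &zero, sizeof zero);
                return fd;
            }
            /* kernel without v6 support: fall back to a v4 socket;
             * queries to v6 servers will then silently fail */
        }
        return socket(AF_INET, SOCK_DGRAM, 0);
    }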
|
|
|
|
|
|
|
| |
subsequent code assumes the address family requested is either
unspecified or one of IPv4/IPv6, and could malfunction if this
constraint is not met, so other address families should be explicitly
rejected.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
these functions were spuriously failing in the case where the buffer
size was exactly the number of bytes/characters to be written,
including null termination. since these functions do not have defined
error conditions other than buffer size, a reasonable application may
fail to check the return value when the format string and buffer size
are known to be valid; such an application could then attempt to use a
non-terminated buffer.
in addition to fixing the bug, I have changed the error handling
behavior so that these functions always null-terminate the output
except in the case where the buffer size is zero, and so that they
always write as many characters as possible before failing, rather
than dropping whole fields that do not fit. this actually simplifies
the logic somewhat anyway.
|
| |
|