this is a GNU extension, activated by including '-' as the first
character of the options string, whereby non-option arguments are
processed as if they were arguments to an option character '\1' rather
than ending option processing.
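
A usage sketch of this mode (program name and option letters here are
arbitrary): with a leading '-' in the optstring, getopt returns 1 for
each non-option argument and places the argument itself in optarg.

    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int c;
        /* the leading '-' requests the in-order GNU mode described above */
        while ((c = getopt(argc, argv, "-ab:")) != -1) {
            if (c == 1)
                printf("non-option argument: %s\n", optarg);
            else if (c == 'a' || c == 'b')
                printf("option -%c (arg: %s)\n", c, c == 'b' ? optarg : "none");
            /* error cases ('?') are ignored in this sketch */
        }
        return 0;
    }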
|
Processing an option character with an optional argument fails if the
option is last on the command line. This happens because the
if (optind >= argc) check runs before the test for an optional
argument.
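
A usage sketch (GNU "::" optional-argument syntax, arbitrary option
letter): invoked as "prog -b" with -b as the last argument, getopt
should return 'b' with optarg set to NULL rather than failing.

    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int c;
        while ((c = getopt(argc, argv, "b::")) != -1)
            if (c == 'b')   /* works even when -b is the final argument */
                printf("-b with argument: %s\n", optarg ? optarg : "(none)");
        return 0;
    }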
|
The function originates from SunOS 4.x, in which a null argument is
allowed. glibc also handles this case.
|
per the resolution of Austin Group issue #617, these are accepted for
the XSI option in future versions of POSIX, and thus I'm treating them
as standard functions.
|
this function provides a way for third-party library code to use the
same logic that's used internally in libc for suppressing untrusted
input/state (e.g. the environment) when the application is running
with privileges elevated by the setuid or setgid bit or some other
mechanism. its semantics are intended to match the openbsd function of
the same name.
there was some question as to whether this function is necessary:
getauxval(AT_SECURE) was proposed as an alternative. however, this has
several drawbacks. the most obvious is that it asks programmers to be
aware of an implementation detail of ELF-based systems (the aux
vector) rather than simply the semantic predicate to be checked. and
trying to write a safe, reliable version of issetugid in terms of
getauxval is difficult. for example, early versions of the glibc
getauxval did not report ENOENT, which could lead to false negatives
if AT_SECURE was not present in the aux vector (this could probably
only happen when running on non-linux kernels under linux emulation,
since glibc does not support linux versions old enough to lack
AT_SECURE). as for musl, getauxval has always properly reported
errors, but prior to commit 7bece9c2095ee81f14b1088f6b0ba2f37fecb283,
the musl implementation did not emulate AT_SECURE if missing, which
would result in a false positive. since musl actually does partially
support kernels that lack AT_SECURE, this was problematic.
the intent is that library authors will use issetugid if its
availability is detected at build time, and only fall back to the
unreliable alternatives on systems that lack it.
patch by Brent Cook. commit message/rationale by Rich Felker.
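
A sketch of the intended library-side usage; the environment variable
and fallback path are made-up names for illustration, and the
_BSD_SOURCE feature-test macro is assumed to be what exposes the
declaration.

    #define _BSD_SOURCE   /* assumed needed for the issetugid declaration */
    #include <stdlib.h>
    #include <unistd.h>

    /* honor an environment override only when the process is not running
     * with elevated privileges */
    const char *mylib_config_path(void)
    {
        if (!issetugid()) {
            const char *p = getenv("MYLIB_CONFIG");
            if (p) return p;
        }
        return "/etc/mylib.conf";
    }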
|
this could happen on 2.4-series linux kernels that predate AT_SECURE
and possibly on other kernels that are emulating the linux syscall API
but not providing AT_SECURE in the aux vector at startup.
in principle applications should be checking errno anyway, but this
does not really work. to be secure, the caller would have to treat
ENOENT (indeterminate result) as possibly-suid and thereby disable
functionality in the typical non-suid usage case. and since glibc only
runs on kernels that provide AT_SECURE, applications written to the
glibc getauxval API might simply assume it succeeds.
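
A sketch of the errno-based check described above, illustrating why it
is awkward: the caller has to clear errno and then treat ENOENT
(indeterminate result) as possibly-suid.

    #include <errno.h>
    #include <sys/auxv.h>

    int maybe_privileged(void)
    {
        errno = 0;
        unsigned long v = getauxval(AT_SECURE);
        if (v) return 1;               /* definitely running "secure" */
        if (errno == ENOENT) return 1; /* indeterminate: assume the worst */
        return 0;                      /* AT_SECURE present and zero */
    }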
|
this was previously a no-op, somewhat intentionally, because I failed
to understand that it only has an effect when sending to the logging
facility fails, and thus is not the nuisance it would be if it always
sent output to the console.
|
this behavior is no longer valid in general, and was never necessary.
if the LOG_PERROR option is set, output to stderr could still succeed.
also, when the LOG_CONS option is added, it will need syslog to
proceed even if opening the log socket fails.
|
this is a nonstandard feature, but easy and inexpensive to add. since
the corresponding macro has always been defined in our syslog.h, it
makes sense to actually support it. applications may reasonably be
using the presence of the macro to assume that the feature is
supported.
the behavior of omitting the 'header' part of the log message does not
seem to be well-documented, but matches other implementations (at
least glibc) which have this option.
based on a patch by Clément Vasseur, but simplified using %n.
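
A usage sketch (identifier and message text are arbitrary): with
LOG_PERROR set, the message is also written to stderr, minus the
header part, in addition to being submitted to the logging facility.

    #include <syslog.h>

    int main(void)
    {
        openlog("example", LOG_PERROR | LOG_PID, LOG_USER);
        syslog(LOG_WARNING, "low on disk space");
        closelog();
        return 0;
    }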
|
errno must be saved on entry to vsyslog; otherwise its value could be
changed by some libc function before reaching the %m handler in
vsnprintf.
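
An illustration of why the early save matters: %m must expand to the
error of the failing call, not to whatever errno an internal libc call
left behind.

    #include <stdio.h>
    #include <syslog.h>

    void report_open_failure(const char *path)
    {
        FILE *f = fopen(path, "r");
        if (!f)
            syslog(LOG_ERR, "cannot open %s: %m", path);  /* %m reads errno */
        else
            fclose(f);
    }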
|
contributed by Isaac Dunham. this seems to be the last interface that
was missing for complete POSIX 2008 base + XSI coverage.
|
this extension is not incompatible with the standard behavior of the
function, is not expensive, and avoids requiring a replacement getopt
with full GNU extensions for a few important apps, including busybox's
sed with the -i option.
|
On 32-bit mips the kernel uses -1UL/2 to mark RLIM_INFINITY (and
this is the definition in the userspace API), but since it is in
the middle of the valid range of limits, and limits are often
compared with relational operators, various kernel-side logic is
broken if limits larger than -1UL/2 are used. So we truncate the
limits to -1UL/2 in get/setrlimit and prlimit.
Even if the kernel-side logic consistently treated -1UL/2 as greater
than any other limit value, there wouldn't be any clean workaround
that allowed using large limits:
* using -1UL/2 as RLIM_INFINITY in userspace would mean a different
infinity value for get/setrlimit and prlimit (where infinity is always
-1ULL), and userspace logic could break easily (just like the kernel
is broken now); more special-case code would be needed for mips.
* translating the kernel-side -1UL/2 value to -1ULL in userspace would
mean that a limit of -1UL/2 could not be set (e.g. -1UL/2+1 would have
to be passed to the kernel instead).
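
A minimal sketch of the truncation, assuming the 32-bit mips kernel ABI
described above; the macro and function names are illustrative, not the
actual internals.

    /* the kernel's idea of "no limit" on 32-bit mips */
    #define KERNEL_RLIM_INFINITY (-1UL/2)

    /* clamp a userspace 64-bit limit to what the kernel can represent, so
     * kernel-side relational comparisons keep working */
    static unsigned long limit_fixup(unsigned long long lim)
    {
        return lim >= KERNEL_RLIM_INFINITY ? KERNEL_RLIM_INFINITY : lim;
    }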
|
open is handled specially because it is used from so many places, in
so many variants (2 or 3 arguments, setting errno or not, and
cancellable or not). trying to do it as a function would not only
increase bloat, but would also risk subtle breakage.
this is the first step towards supporting "new" archs where linux
lacks "old" syscalls.
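
A sketch of the general idea for such archs, written as a plain
function rather than the internal macro: when SYS_open does not exist,
route the call through SYS_openat with AT_FDCWD.

    #define _GNU_SOURCE   /* for syscall() */
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    static long open_compat(const char *path, int flags, int mode)
    {
    #ifdef SYS_open
        return syscall(SYS_open, path, flags, mode);
    #else
        return syscall(SYS_openat, AT_FDCWD, path, flags, mode);
    #endif
    }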
|
in a sense this implementation is incomplete since it doesn't provide
the HWCAP_* macros for use with AT_HWCAP, which is perhaps the most
important intended usage case for getauxval. they will be added at a
later time.
|
on x32, this change allows programs which use syscall() with pointers
or 64-bit values as arguments to work correctly, i.e. without
truncation or incorrect sign extension. on all other supported archs,
syscall_arg_t is defined as long, so this change is a no-op.
|
the incorrect error codes also made their way into errno when
__ptsname_r was called by plain ptsname, which reports errors via
errno rather than a return value.
|
the incorrect check for crossing device boundaries was preventing nftw
from traversing anything except the initially provided pathname.
|
our getcwd already (as an extension) supports allocation of a buffer
when the buffer argument is a null pointer, so there's no need to
duplicate the allocation logic in this wrapper function. duplicating
it is actually harmful in that it doubles the stack usage from
PATH_MAX to 2*PATH_MAX.
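
A usage sketch of the extension: passing a null buffer asks getcwd to
allocate one, so a wrapper needs no allocation logic of its own.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        char *cwd = getcwd(0, 0);   /* extension: allocate the buffer for us */
        if (cwd) {
            printf("%s\n", cwd);
            free(cwd);
        }
        return 0;
    }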
|
the loop condition was incorrect and confusing, and caused an infinite
loop when (broken) applications reaped the pid from a signal handler or
another thread before wordexp's call to waitpid could do so.
|
when WRDE_NOSPACE is returned, the we_wordv and we_wordc members must
be valid, because the interface contract allows the function to return
partial results.
in the case of zero results (due either to resource exhaustion or a
zero-word input) the we_wordv array should still contain a terminating
null pointer and the initial we_offs null pointers. this is impossible
on resource exhaustion, so a correct application must presumably check
for a null pointer in we_wordv; POSIX, however, seems to ignore the
issue. the previous code may have crashed in this situation.
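
A usage sketch of the contract described above: on WRDE_NOSPACE the
partial results in we_wordv/we_wordc are still meaningful and still
need to be released with wordfree.

    #include <stdio.h>
    #include <wordexp.h>

    int main(void)
    {
        wordexp_t we;
        int r = wordexp("foo bar baz", &we, 0);
        if (r == 0 || r == WRDE_NOSPACE) {
            for (size_t i = 0; i < we.we_wordc; i++)
                if (we.we_wordv[i])           /* defensive null check */
                    printf("%s\n", we.we_wordv[i]);
            wordfree(&we);
        }
        return 0;
    }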
|
avoid using exit status to determine if a shell error occurred, since
broken programs may install SIGCHLD handlers which reap all zombies,
including ones that don't belong to them. using clone and __WCLONE
does not seem to work for avoiding this problem since exec resets the
exit signal to SIGCHLD.
instead, the new code uses a dummy word at the beginning of the
shell's output, which is ignored, to determine whether the command was
executed successfully. this also fixes a corner case where a word
string containing zero words was interpreted as a single zero-length
word rather than no words at all. POSIX does not seem to require this
case to be supported anyway, though.
in addition, the new code uses the correct retry idiom for waitpid to
ensure that spurious STOP/CONT signals in the child and/or EINTR in
the parent do not prevent successful wait for the child, and blocks
signals in the child.
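
The waitpid retry idiom referred to above, shown standalone: restart on
EINTR so a signal delivered to the parent does not abort the wait.

    #include <errno.h>
    #include <sys/wait.h>

    /* wait for a specific child, retrying if interrupted by a signal */
    static int wait_status(pid_t pid)
    {
        int status;
        pid_t r;
        do r = waitpid(pid, &status, 0);
        while (r == -1 && errno == EINTR);
        return r == pid ? status : -1;
    }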
|
rather than allocating a PATH_MAX-sized buffer when the caller does
not provide an output buffer, work first with a PATH_MAX-sized temp
buffer with automatic storage, and either copy it to the caller's
buffer or strdup it on success. this not only avoids massive memory
waste, but also avoids pulling in free (and thus the full malloc
implementation) unnecessarily in static programs.
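
A usage sketch of the allocating path (a null output buffer, so the
result comes back strdup'd and must be freed):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char *abs = realpath(".", 0);   /* null buffer: result is allocated */
        if (abs) {
            printf("%s\n", abs);
            free(abs);
        }
        return 0;
    }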
|
this avoids failure if the file is not readable and avoids odd
behavior for device nodes, etc. on old kernels that lack O_PATH, the
old behavior (O_RDONLY) will naturally happen as the fallback.
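
A sketch of the flag choice, assuming ordinary Linux open semantics:
old kernels ignore flag bits they do not recognize, and O_RDONLY is
zero, so passing O_PATH on a pre-O_PATH kernel naturally degrades to a
read-only open. The exact flag set here is illustrative.

    #define _GNU_SOURCE   /* O_PATH is Linux-specific */
    #include <fcntl.h>

    static int open_for_inspection(const char *name)
    {
        /* no read permission needed, no side effects on device nodes */
        return open(name, O_PATH | O_CLOEXEC);
    }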
|
I intend to add more Linux workarounds that depend on using these
pathnames, and some of them will be in "syscall" functions that, from
an anti-bloat standpoint, should not depend on the whole snprintf
framework.
|
GNU made several extensions that were incompatible with C99 and POSIX,
so glibc exports the standard functions under alternate names.
The result is that we need these alternate names to run
standards-conformant programs that were linked with glibc.
|
1. as reported by William Haddon, the value returned by snprintf was
wrongly used as a length passed to sendto, despite it possibly
exceeding the buffer length. this could lead to invalid reads and
leaking additional data to syslog. (a sketch of the fix appears below.)
2. openlog was storing a pointer to the ident string passed by the
caller, rather than copying it. this bug is shared with (and even
documented in) other implementations like glibc, but such behavior
does not seem to meet the requirements of the standard.
3. extremely long ident provided to openlog, or corrupt ident due to
the above issue, could possibly have resulted in buffer overflows.
despite having the potential for smashing the stack, i believe the
impact is low since ident points to a short string literal in typical
application usage (and per the above bug, other usages will break
horribly on other implementations).
4. when used with LOG_NDELAY, openlog was not connecting the
newly-opened socket; sendto was being used instead. this defeated the
main purpose of LOG_NDELAY: preparing for chroot.
5. the default facility was not being used at all, so all messages
without an explicit facility passed to syslog were getting logged at
the kernel facility.
6. setlogmask was not thread-safe; no synchronization was performed
updating the mask. the fix uses atomics rather than locking to avoid
introducing a lock in the fast path for messages whose priority is not
in the mask.
7. in some code paths, the syslog lock was being unlocked twice; this
could result in releasing a lock that was actually held by a different
thread.
some additional enhancements to syslog such as a default identifier
based on argv[0] or similar may still be desired; at this time, only
the above-listed bugs have been fixed.
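
A minimal sketch of the fix for item 1, assuming a fixed local buffer
and an already-connected log socket: snprintf reports the length the
output would have had, which must be clamped to the buffer size before
being used as a send length.

    #include <stdio.h>
    #include <sys/socket.h>

    static void send_log_line(int fd, int prio, const char *msg)
    {
        char buf[1024];
        int l = snprintf(buf, sizeof buf, "<%d>%s", prio, msg);
        if (l < 0) return;                            /* encoding error */
        if (l >= (int)sizeof buf) l = sizeof buf - 1; /* clamp truncation */
        send(fd, buf, l, 0);
    }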
|
based on proposed patches by Daniel Cegiełka, with minor changes:
- use a weak symbol for optreset so it doesn't clash with namespace
- also reset optpos (position in multi-option arg like -lR)
- also make getopt_long support reset
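
A usage sketch of the reset, assuming the nonstandard BSD-style
optreset variable is visible to the application:

    #include <unistd.h>

    extern int optreset;   /* BSD extension; a weak symbol as noted above */

    /* rescan a second argument vector from the beginning */
    void scan_again(int argc, char **argv)
    {
        int c;
        optind = 1;
        optreset = 1;
        while ((c = getopt(argc, argv, "ab:")) != -1)
            ;   /* handle options here */
    }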
|
also update syslog to use SOCK_CLOEXEC rather than a separate fcntl
step, to make it safe in multithreaded programs that run external
programs.
the emulation is not atomic; it could be made atomic by holding a lock
on forking during the operation, but this seems like overkill. my goal is
not to achieve perfect behavior on old kernels (which have plenty of
other imperfect behavior already) but to avoid catastrophic breakage
in (1) syslog, which would give no output on old kernels with the
change to use SOCK_CLOEXEC, and (2) programs built on a new kernel
where configure scripts detected a working SOCK_CLOEXEC, which later
get run on older kernels (they may otherwise fail to work completely).
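
A sketch of the (non-atomic) emulation idea: try the flag first, and on
kernels that reject it fall back to a separate FD_CLOEXEC step.

    #include <errno.h>
    #include <fcntl.h>
    #include <sys/socket.h>

    static int socket_cloexec(int dom, int type, int proto)
    {
        int fd = socket(dom, type | SOCK_CLOEXEC, proto);
        if (fd < 0 && (errno == EINVAL || errno == EPROTONOSUPPORT)) {
            fd = socket(dom, type, proto);   /* old kernel: flag unsupported */
            if (fd >= 0) fcntl(fd, F_SETFD, FD_CLOEXEC);
        }
        return fd;
    }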
|
also optimized a bit.
|
previously, it was pretty much random which one of these trees a given
function appeared in. they have now been organized into:
src/linux: non-POSIX linux syscalls (possibly shared with other nixen)
src/legacy: various obsolete/legacy functions, mostly wrappers
src/misc: still mostly uncategorized; some misc POSIX, some nonstd
src/crypt: crypt hash functions
further cleanup will be done later.
|
void* does not implicitly convert to function pointer types.
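
A small illustration of the rule (names are arbitrary): the assignment
needs an explicit cast.

    typedef void (*handler_t)(void);

    handler_t handler_from_object_ptr(void *p)
    {
        /* handler_t h = p;  -- invalid: there is no implicit conversion
         * from an object pointer to a function pointer in C */
        return (handler_t)p;   /* explicit cast required */
    }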
|
to deal with the fact that the public headers may be used with pre-c99
compilers, __restrict is used in place of restrict, and defined
appropriately for any supported compiler. we also avoid the form
[restrict] since older versions of gcc rejected it due to a bug in the
original c99 standard, and instead use the form *restrict.
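
A sketch of the arrangement (illustrative, not the actual header
contents): map __restrict onto whatever the compiler accepts, then use
only the *restrict form in prototypes.

    #if defined(__GNUC__)
    /* gcc and compatible compilers accept __restrict as a keyword */
    #elif defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
    #define __restrict restrict
    #else
    #define __restrict            /* pre-C99: define it away */
    #endif

    /* the *restrict form, not the [restrict] array form old gcc rejected */
    char *copy_string(char *__restrict dest, const char *__restrict src);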
|
all of the limits could use review, but err on the side of avoiding
excessive rounds for now.
|
these limits could definitely use review, but for now, i feel
consistency and erring on the side of preventing servers from getting
bogged down by excessively-slow user-provided settings (think
.htpasswd) are the best policy. blowfish should be updated to match.
|
based on versions sent to the list by nsz, with some simplification
and debloating. i'd still like to get them a bit smaller, or ideally
merge them into a single file with most of the code being shared, but
that can be done later.
|
there are still some discussions going on about tweaking the code, but
at least this brings us to the point of having something working in
the repository. hopefully the remaining major hashes (md5, sha) will
follow soon.
|
unfortunately, a large portion of programs which call crypt are not
prepared for its failure and do not check that the return value is
non-null before using it. thus, always "succeeding" but giving an
unmatchable hash is reportedly a better behavior than failing on
error.
it was suggested that we could do this the same way as other
implementations and put the null-to-unmatchable translation in the
wrapper rather than the individual crypt modules like crypt_des, but
when i tried to do it, i found it was making the logic in __crypt_r
for keeping track of which hash type we're working with and whether it
succeeded or failed much more complex, and potentially error-prone.
the way i'm doing it now seems to have essentially zero cost, anyway.
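
A defensive usage sketch: even with the always-return behavior
described above, a careful caller still checks for a null or
unmatchable result before comparing. The assumption that an
unmatchable hash begins with '*' is mine, for illustration only.

    #include <crypt.h>
    #include <string.h>

    int password_matches(const char *password, const char *stored_hash)
    {
        char *h = crypt(password, stored_hash);
        if (!h || h[0] == '*')              /* failed or unmatchable result */
            return 0;
        return strcmp(h, stored_hash) == 0;
    }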