author | Rich Felker <dalias@aerifal.cx> | 2020-06-02 17:37:14 -0400
---|---|---
committer | Rich Felker <dalias@aerifal.cx> | 2020-06-02 19:39:37 -0400
commit | 3e16313f8fe2ed143ae0267fd79d63014c24779f |
tree | 6dd7400ddc4bd53822af62cc82ac43d4ce9bffe6 |
parent | c40157d87ef0b585a90ff5230fdf69cddc824d7f |
fix unbounded heap expansion race in malloc
this has been a longstanding issue reported many times over the years, with it becoming increasingly clear that it could be hit in practice. under concurrent malloc and free from multiple threads, it's possible to hit usage patterns where unbounded amounts of new memory are obtained via brk/mmap despite the total nominal usage being small and bounded.

the underlying cause is that, as a fundamental consequence of keeping locking as fine-grained as possible, the state where free has unbinned an already-free chunk to merge it with a newly-freed one, but has not yet re-binned the combined chunk, is exposed to other threads. this is bad even with small chunks, and leads to suboptimal use of memory, but where it really blows up is where the already-freed chunk in question is the large free region "at the top of the heap". in this situation, other threads momentarily see a state of having almost no free memory, and conclude that they need to obtain more.

as far as I can tell there is no fix for this that does not harm performance. the fix made here forces all split/merge of free chunks to take place under a single lock, which also takes the place of the old free_lock, being held at least momentarily at the time of free to determine whether there are neighboring free chunks that need merging.

as a consequence, the pretrim, alloc_fwd, and alloc_rev operations no longer make sense and are deleted. simplified merging now takes place inline in free (__bin_chunk) and realloc.

as commented in the source, holding the split_merge_lock precludes any chunk transition from in-use to free state. for the most part, it also precludes change to chunk header sizes. however, __memalign may still modify the sizes of an in-use chunk to split it into two in-use chunks. arguably this should require holding the split_merge_lock, but that would necessitate refactoring to expose it externally, which is a mess. and it turns out not to be necessary, at least assuming the existing sloppy memory model malloc has been using, because if free (__bin_chunk) or realloc sees any unsynchronized change to the size, it will also see the in-use bit being set, and thereby can't do anything with the neighboring chunk that changed size.
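To make the locking discipline concrete, here is a minimal sketch in the spirit of the description above. It is not the musl code: the chunk layout and helpers (`bin`, `unbin`, `next_chunk`, a single free list) are simplified stand-ins, only forward merging is shown, psize/footer bookkeeping is omitted, and a pthread mutex stands in for the futex-based `split_merge_lock`. What it illustrates is the invariant the commit introduces: unbinning, merging, and re-binning all happen under one lock, so the intermediate "unbinned but not yet re-binned" state is never visible to other threads.

```c
#include <pthread.h>
#include <stddef.h>

/* Simplified chunk: low bit of csize set means "in use". */
struct chunk {
	size_t psize, csize;
	struct chunk *next, *prev;   /* bin links, valid only while free and binned */
};

/* Single circular free list standing in for musl's size-class bins. */
static struct chunk bin_head = { 0, 0, &bin_head, &bin_head };

/* One lock serializing every split/merge of free chunks (stand-in for the
 * real futex-based split_merge_lock, which also replaces the old free_lock). */
static pthread_mutex_t split_merge_lock = PTHREAD_MUTEX_INITIALIZER;

static int is_free(struct chunk *c) { return !(c->csize & 1); }

static void unbin(struct chunk *c)
{
	c->prev->next = c->next;
	c->next->prev = c->prev;
}

static void bin(struct chunk *c)
{
	c->next = bin_head.next;
	c->prev = &bin_head;
	c->next->prev = c;
	c->prev->next = c;
}

static struct chunk *next_chunk(struct chunk *c)
{
	return (struct chunk *)((char *)c + (c->csize & -2));
}

/* Conceptual free path: the neighboring free chunk is unbinned, merged, and
 * the combined chunk re-binned all while split_merge_lock is held, so no
 * other thread ever observes a moment of "almost no free memory" and goes
 * off to obtain more via brk/mmap. */
void bin_chunk_sketch(struct chunk *self)
{
	pthread_mutex_lock(&split_merge_lock);
	self->csize &= -2;                 /* mark the chunk free */
	struct chunk *n = next_chunk(self);
	if (is_free(n)) {
		unbin(n);                  /* momentarily off the free list... */
		self->csize += n->csize;   /* ...merged... */
	}
	bin(self);                         /* ...and re-binned before unlock */
	pthread_mutex_unlock(&split_merge_lock);
}
```

Per the commit message, the real implementation applies the same rule to every split/merge of free chunks, with the simplified merging done inline in free (__bin_chunk) and realloc rather than through the deleted pretrim/alloc_fwd/alloc_rev helpers.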