author | Rich Felker <dalias@aerifal.cx> | 2011-04-04 17:26:41 -0400
---|---|---
committer | Rich Felker <dalias@aerifal.cx> | 2011-04-04 17:26:41 -0400
commit | b761bd19aa1ae0f95dd2146397b7f39b44a471b6 (patch) |
tree | f12ebb65db40559baa4472c55ef83ddbbcf33349 |
parent | 98c5583ad5d633166e28034c0a3544ad48b532b6 (diff) |
download | musl-b761bd19aa1ae0f95dd2146397b7f39b44a471b6.tar.gz, musl-b761bd19aa1ae0f95dd2146397b7f39b44a471b6.tar.xz, musl-b761bd19aa1ae0f95dd2146397b7f39b44a471b6.zip |
fix rare but nasty under-allocation bug in malloc with large requests
the bug appeared only with requests roughly 2*sizeof(size_t) to 4*sizeof(size_t) bytes smaller than a multiple of the page size, and only for requests large enough to be serviced by mmap instead of the normal heap. it was only ever observed on 64-bit machines but presumably could also affect 32-bit (albeit with a smaller window of opportunity).
-rw-r--r-- | src/malloc/malloc.c | 2 |
1 file changed, 1 insertion, 1 deletion
diff --git a/src/malloc/malloc.c b/src/malloc/malloc.c
index ee6f170b..46cc21fb 100644
--- a/src/malloc/malloc.c
+++ b/src/malloc/malloc.c
@@ -333,7 +333,7 @@ void *malloc(size_t n)
 	if (adjust_size(&n) < 0) return 0;
 
 	if (n > MMAP_THRESHOLD) {
-		size_t len = n + PAGE_SIZE - 1 & -PAGE_SIZE;
+		size_t len = n + OVERHEAD + PAGE_SIZE - 1 & -PAGE_SIZE;
 		char *base = __mmap(0, len, PROT_READ|PROT_WRITE,
 			MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
 		if (base == (void *)-1) return 0;
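For illustration only, and not part of the commit: a minimal standalone sketch of the length arithmetic described in the commit message, assuming musl-style constants OVERHEAD = 2*sizeof(size_t) and SIZE_ALIGN = 4*sizeof(size_t), a 4096-byte page, that adjust_size() rounds the request up to request + OVERHEAD aligned to SIZE_ALIGN, and that the mmap'd chunk needs SIZE_ALIGN - OVERHEAD extra bytes in front of the n-byte chunk. Under those assumptions, a request a few words below a page multiple makes the adjusted n land exactly on a page boundary, so the pre-fix rounding covers n bytes but not the bytes in front of the chunk.

```c
/* Hypothetical sketch of the under-allocation window; the constants and
 * the adjust_size() rounding below are assumptions modeled on musl, not
 * a copy of its code. */
#include <stddef.h>
#include <stdio.h>

#define PAGE_SIZE  4096UL
#define OVERHEAD   (2*sizeof(size_t))   /* assumed per-chunk bookkeeping */
#define SIZE_ALIGN (4*sizeof(size_t))   /* assumed allocation alignment  */

int main(void)
{
	/* A request 3*sizeof(size_t) below a page multiple, i.e. inside the
	 * 2*sizeof(size_t)..4*sizeof(size_t) window from the commit message. */
	size_t request = 8*PAGE_SIZE - 3*sizeof(size_t);

	/* Assumed effect of adjust_size(): add the bookkeeping overhead and
	 * round up to SIZE_ALIGN; here n lands exactly on a page multiple. */
	size_t n = (request + OVERHEAD + SIZE_ALIGN - 1) & ~(SIZE_ALIGN - 1);

	/* Space the mapping has to cover if SIZE_ALIGN - OVERHEAD bytes sit
	 * in front of the n-byte chunk. */
	size_t needed = n + (SIZE_ALIGN - OVERHEAD);

	size_t old_len = (n + PAGE_SIZE - 1) & -PAGE_SIZE;            /* before the fix */
	size_t new_len = (n + OVERHEAD + PAGE_SIZE - 1) & -PAGE_SIZE; /* after the fix  */

	printf("needed=%zu old_len=%zu new_len=%zu\n", needed, old_len, new_len);
	printf("old mapping short by %zu bytes\n",
	       old_len < needed ? needed - old_len : 0);
	return 0;
}
```

On a 64-bit machine, under these assumptions, the sketch prints needed=32784 old_len=32768 new_len=36864: the pre-fix mapping is 16 bytes short of the space the chunk layout needs, while adding OVERHEAD to the rounding, as the patch does, pushes the length to the next page.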