author    | H.J. Lu <hjl.tools@gmail.com> | 2016-04-23 06:05:01 -0700
committer | H.J. Lu <hjl.tools@gmail.com> | 2016-04-23 06:05:15 -0700
commit    | 2bc983b78c215765979a29a2e98b0cc01791c2d1 (patch)
tree      | dd39b724f90f6a70a28ee4fd06d1edf8aae743f2 /wcsmbs/wcsrchr.c
parent    | 00277a3f81bf023d4562da485f62efe5b5b30388 (diff)
Reduce number of mmap calls from __libc_memalign in ld.so
__libc_memalign in ld.so allocates one page at a time and tries to optimize consecutive __libc_memalign calls by hoping that the next mmap lands just after the current memory allocation. However, the kernel hands out mmap addresses in top-down order, so in practice this optimization never kicks in, with the result that we make more mmap calls and waste a bunch of space on each __libc_memalign.

This change makes __libc_memalign mmap one extra page. In the worst case, the kernel never puts a backing page behind it; in the best case, it lets __libc_memalign operate much better. For elf/tst-align --direct, it reduces the number of mmap calls from 12 to 9.

	* elf/dl-minimal.c (__libc_memalign): Mmap one extra page.
Diffstat (limited to 'wcsmbs/wcsrchr.c')
0 files changed, 0 insertions, 0 deletions