author    H.J. Lu <hjl.tools@gmail.com>  2016-04-23 06:05:01 -0700
committer H.J. Lu <hjl.tools@gmail.com>  2016-04-23 06:05:15 -0700
commit    2bc983b78c215765979a29a2e98b0cc01791c2d1 (patch)
tree      dd39b724f90f6a70a28ee4fd06d1edf8aae743f2
parent    00277a3f81bf023d4562da485f62efe5b5b30388 (diff)
download  glibc-2bc983b78c215765979a29a2e98b0cc01791c2d1.tar.gz
          glibc-2bc983b78c215765979a29a2e98b0cc01791c2d1.tar.xz
          glibc-2bc983b78c215765979a29a2e98b0cc01791c2d1.zip
Reduce number of mmap calls from __libc_memalign in ld.so
__libc_memalign in ld.so allocates one page at a time and tries to
optimize consecutive __libc_memalign calls by hoping that the next
mmap is after the current memory allocation.

However, the kernel hands out mmap addresses in top-down order, so
this optimization in practice never happens, with the result that we
have more mmap calls and waste a bunch of space for each __libc_memalign.

This change makes __libc_memalign mmap one extra page.  In the worst
case, the kernel never puts a backing page behind it, but in the best
case it allows __libc_memalign to operate much better.  For
elf/tst-align --direct, it reduces the number of mmap calls from 12 to 9.

	* elf/dl-minimal.c (__libc_memalign): Mmap one extra page.
-rw-r--r--ChangeLog4
-rw-r--r--elf/dl-minimal.c12
2 files changed, 9 insertions, 7 deletions
diff --git a/ChangeLog b/ChangeLog
index 5cd8c4012b..744c0c0fb7 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,7 @@
+2016-04-23   H.J. Lu  <hongjiu.lu@intel.com>
+
+	* elf/dl-minimal.c (__libc_memalign): Mmap one extra page.
+
 2016-04-23  Mike Frysinger  <vapier@gentoo.org>
 
 	* locale/programs/ld-time.c (time_finish): Set week_1stweek to 7
diff --git a/elf/dl-minimal.c b/elf/dl-minimal.c
index 762e65b4d0..c8a8f8dc93 100644
--- a/elf/dl-minimal.c
+++ b/elf/dl-minimal.c
@@ -66,15 +66,13 @@ __libc_memalign (size_t align, size_t n)
 
   if (alloc_ptr + n >= alloc_end || n >= -(uintptr_t) alloc_ptr)
     {
-      /* Insufficient space left; allocate another page.  */
+      /* Insufficient space left; allocate another page plus one extra
+	 page to reduce number of mmap calls.  */
       caddr_t page;
       size_t nup = (n + GLRO(dl_pagesize) - 1) & ~(GLRO(dl_pagesize) - 1);
-      if (__glibc_unlikely (nup == 0))
-	{
-	  if (n)
-	    return NULL;
-	  nup = GLRO(dl_pagesize);
-	}
+      if (__glibc_unlikely (nup == 0 && n != 0))
+	return NULL;
+      nup += GLRO(dl_pagesize);
       page = __mmap (0, nup, PROT_READ|PROT_WRITE,
 		     MAP_ANON|MAP_PRIVATE, -1, 0);
       if (page == MAP_FAILED)