path: root/malloc/arena.c
author    Carlos O'Donell <carlos@systemhalted.org>  2015-10-07 22:21:36 -0400
committer Carlos O'Donell <carlos@systemhalted.org>  2015-10-07 22:21:36 -0400
commit    e4bc326dbbf7328775fe7dd39de1178821363e0a (patch)
tree      0574e1b889b71e501804b823857803348d282fc0 /malloc/arena.c
parent    58a3a98d8f3488b659318b1a1c6efc169a6f06bf (diff)
malloc: Consistently apply trim_threshold to all heaps (Bug 17195)
In the per-thread arenas we apply trim_threshold-based checks
to the extra space between the pad and the top_area. This isn't
quite accurate and instead we should be harmonizing with the way
in which trim_threshold is applied everywhere else, like systrim
and _int_free. The trimming check should be based on the size of
the top chunk and only the size of the top chunk. The following
patch harmonizes the trimming and makes it consistent for the main
arena and thread arenas.

In the old code a large padding request might have meant that
trimming was not triggered. Now trimming is considered first based
on the size of the top chunk, then the pad is subtracted, and the
remainder trimmed. This is how all the other trimming paths operate.
I didn't measure the performance difference of this change because
it corrects what I consider to be a behavioural anomaly. We'll need
some profile-driven optimization to make this code better, and even
there Ondrej and others have better ideas on how to speed up malloc.

Tested on x86_64 with no regressions. Already reviewed by Siddhesh
Poyarekar and Mel Gorman here and discussed here:
https://sourceware.org/ml/libc-alpha/2015-05/msg00002.html
Diffstat (limited to 'malloc/arena.c')
-rw-r--r--  malloc/arena.c | 10
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/malloc/arena.c b/malloc/arena.c
index b44e307ade..cb45a048cd 100644
--- a/malloc/arena.c
+++ b/malloc/arena.c
@@ -696,14 +696,20 @@ heap_trim (heap_info *heap, size_t pad)
     }
 
   /* Uses similar logic for per-thread arenas as the main arena with systrim
-     by preserving the top pad and at least a page.  */
+     and _int_free by preserving the top pad and rounding down to the nearest
+     page.  */
   top_size = chunksize (top_chunk);
+  if ((unsigned long)(top_size) <
+      (unsigned long)(mp_.trim_threshold))
+    return 0;
+
   top_area = top_size - MINSIZE - 1;
   if (top_area < 0 || (size_t) top_area <= pad)
     return 0;
 
+  /* Release in pagesize units and round down to the nearest page.  */
   extra = ALIGN_DOWN(top_area - pad, pagesz);
-  if ((unsigned long) extra < mp_.trim_threshold)
+  if (extra == 0)
     return 0;
 
   /* Try to shrink. */