author     Noah Goldstein <goldstein.w.n@gmail.com>    2023-06-07 13:18:01 -0500
committer  Noah Goldstein <goldstein.w.n@gmail.com>    2023-06-12 11:33:39 -0500
commit     af992e7abdc9049714da76cae1e5e18bc4838fb8 (patch)
tree       d849302de8864185badbeba75cef1f614df45bcf /sysdeps/hppa/nptl/jmpbuf-unwind.h
parent     5e8d1b0328a850c229146f40e18848728b104583 (diff)
x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
The current `non_temporal_threshold` is set to roughly `3/4 * sizeof_L3 / ncores_per_socket`. This patch updates that value to roughly `sizeof_L3 / 4`.

The original value (specifically the division by `ncores_per_socket`) was chosen to limit the amount of other threads' data a `memcpy`/`memset` could evict. Dividing by `ncores_per_socket`, however, leads to exceedingly low non-temporal thresholds and to using non-temporal stores in cases where `REP MOVSB` is multiple times faster. Furthermore, non-temporal stores are written directly to main memory, so using them at a size much smaller than L3 can place soon-to-be-accessed data much further away than it otherwise would be. As well, modern machines are able to detect streaming patterns (especially if `REP MOVSB` is used) and provide LRU hints to the memory subsystem. This in effect caps the total amount of eviction at `1/cache_associativity`, far below meaningfully thrashing the entire cache.

As best I can tell, the benchmarks that led to this small threshold were done comparing non-temporal stores against standard cacheable stores. A better comparison (linked below) is against `REP MOVSB`, which, on the measured systems, is nearly 2x faster than non-temporal stores at the low end of the previous threshold, and within 10% for copies over 100MB (well past even the current threshold). In cases with a low number of threads competing for bandwidth, `REP MOVSB` is ~2x faster up to `sizeof_L3`.

The divisor of `4` is a somewhat arbitrary value. From benchmarks it seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs such as Broadwell prefer something closer to `8`. This patch is meant to be followed up by another one making the divisor CPU-specific, but in the meantime (and for easier backporting) it settles on `4` as a middle ground.

Benchmarks comparing non-temporal stores, `REP MOVSB`, and cacheable stores were done using: https://github.com/goldsteinn/memcpy-nt-benchmarks

Sheets results (also available as PDF on the GitHub repository): https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml

Reviewed-by: DJ Delorie <dj@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
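To make the arithmetic concrete, below is a minimal, hedged C sketch of the old and new heuristics. It is not the actual glibc code (the real logic lives in the x86 cache-info/tunables code); `l3_size` and `ncores_per_socket` are hypothetical stand-ins for the values that code detects.

```c
#include <stddef.h>
#include <stdio.h>

/* Illustrative only: a simplified model of the two heuristics described
   in the commit message, not the glibc implementation.  */

static size_t
old_threshold (size_t l3_size, size_t ncores_per_socket)
{
  /* Old heuristic: roughly 3/4 of each core's share of the L3.  */
  return (l3_size * 3 / 4) / ncores_per_socket;
}

static size_t
new_threshold (size_t l3_size)
{
  /* New heuristic: roughly 1/4 of the whole L3, independent of the
     core count.  */
  return l3_size / 4;
}

int
main (void)
{
  /* Example: a 32 MiB L3 shared by 16 cores per socket.  */
  size_t l3 = 32UL << 20;
  printf ("old threshold: %zu bytes\n", old_threshold (l3, 16));
  printf ("new threshold: %zu bytes\n", new_threshold (l3));
  return 0;
}
```

With these example numbers the old heuristic lands at roughly 1.5 MiB, small enough that non-temporal stores kick in at sizes where `REP MOVSB` is still much faster, while the new heuristic gives 8 MiB.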