author: Rich Felker <dalias@aerifal.cx> 2020-11-19 16:20:45 -0500
committer: Rich Felker <dalias@aerifal.cx> 2020-11-19 16:36:49 -0500
commit: 233bb6972d84e9cb908ff038f78d97e487082225
tree: a830437037601507b71899161e1af8d8ea703ad6 /src/malloc/memalign.c
parent: d26e0774a59bb7245b205bc8e7d8b35cc2037095
protect destruction of process-shared mutexes against robust list races
after a non-normal-type process-shared mutex is unlocked, it's
immediately available to another thread to lock, unlock, and destroy,
but the first unlocking thread may still have a pointer to it in its
robust_list pending slot. this means, on async process termination,
the kernel may attempt to access and modify the memory that used to
contain the mutex -- memory that may have been reused for some other
purpose after the mutex was destroyed.

setting up for this kind of race to occur is difficult to begin with,
requiring dynamic use of shared memory maps, and actually hitting the
race is very difficult even with a suitable setup. so this is mostly a
theoretical fix, but in any case the cost is very low.
Diffstat (limited to 'src/malloc/memalign.c')
0 files changed, 0 insertions, 0 deletions