path: root/src/prng
* fix mrand48/jrand48 return value on 64-bit archs (Rich Felker, 2016-12-16; 1 file, -1/+1)
POSIX specifies the result to have signed 32-bit range. on 32-bit archs, the implicit conversion to long achieved the desired range already, but when long is 64-bit, a cast is needed. patch by Ed Schouten.
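a minimal sketch of the cast in question (x48 and jrand48_result are illustrative stand-ins for the internal 48-bit step and its consumer, not the actual patch):

    #include <stdint.h>

    /* x48 stands for the 48-bit state word produced by the internal
     * LCG step; the result is its high 32 bits. */
    long jrand48_result(uint64_t x48)
    {
        /* (x48 >> 16) is a nonnegative 64-bit value, so on 64-bit
         * archs results above INT32_MAX would come back as large
         * positive longs; casting through int32_t restores the
         * signed 32-bit range POSIX specifies on all archs. */
        return (int32_t)(x48 >> 16);
    }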
* make all objects used with atomic operations volatile (Rich Felker, 2015-03-03; 1 file, -1/+1)
the memory model we use internally for atomics permits plain loads of values which may be subject to concurrent modification without requiring that a special load function be used. since a compiler is free to make transformations that alter the number of loads or the way in which loads are performed, the compiler is theoretically free to break this usage.

the most obvious concern is with atomic cas constructs: something of the form tmp=*p;a_cas(p,tmp,f(tmp)); could be transformed to a_cas(p,*p,f(*p)); where the latter is intended to show multiple loads of *p whose resulting values might fail to be equal; this would break the atomicity of the whole operation. but even more fundamental breakage is possible.

with the changes being made now, objects that may be modified by atomics are modeled as volatile, and the atomic operations performed on them by other threads are modeled as asynchronous stores by hardware which happens to be acting on the request of another thread. such modeling of course does not itself address memory synchronization between cores/cpus, but that aspect was already handled. this all seems less than ideal, but it's the best we can do without mandating a C11 compiler and using the C11 model for atomics.

in the case of pthread_once_t, the ABI type of the underlying object is not volatile-qualified. so we are assuming that accessing the object through a volatile-qualified lvalue via casts yields volatile access semantics. the language of the C standard is somewhat unclear on this matter, but this is an assumption the linux kernel also makes, and seems to be the correct interpretation of the standard.
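a sketch of the cas-loop pattern the message describes; a_cas here is modeled with a gcc builtin rather than the internal primitive, and fetch_inc is a made-up example user:

    /* stand-in for the internal a_cas: returns the old value of *p,
     * storing s only if *p was t. */
    static int a_cas(volatile int *p, int t, int s)
    {
        return __sync_val_compare_and_swap(p, t, s);
    }

    /* the usage being protected: volatile guarantees exactly one load
     * into tmp. on a non-volatile object the compiler could rewrite
     * this as a_cas(p, *p, *p + 1), performing two independent loads
     * and silently breaking the read-modify-write. */
    static int fetch_inc(volatile int *p)
    {
        int tmp;
        do tmp = *p;
        while (a_cas(p, tmp, tmp + 1) != tmp);
        return tmp;
    }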
* fix incorrect sequence generation in *rand48 prng functions (Rich Felker, 2014-09-22; 1 file, -2/+2)
patch by Jens Gustedt. this fixes a bug reported by Nadav Har'El.

the underlying issue was that a left-shift by 16 bits after promotion of unsigned short to int caused integer overflow. while some compilers define this overflow case as "shifting into the sign bit", doing so doesn't help; the sign bit then gets extended through the upper bits in subsequent arithmetic as unsigned long long.

this patch imposes a promotion to unsigned prior to the shift, so that the result is well-defined and matches the specified behavior.
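a sketch of the promotion hazard; pack_seed is a hypothetical helper rather than musl's code, but the +0U idiom is one way to impose the unsigned promotion described:

    #include <stdint.h>

    /* assembling a 48-bit value from three unsigned shorts. s[1]
     * promotes to (signed) int, so with s[1] >= 0x8000 the plain
     * expression s[1]<<16 overflows into the sign bit, and the sign
     * then extends through the 64-bit arithmetic. adding 0U makes the
     * shift happen in unsigned arithmetic, so the result is
     * well-defined. */
    static uint64_t pack_seed(const unsigned short s[3])
    {
        return s[0] | (s[1] + 0U) << 16 | (uint64_t)s[2] << 32;
    }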
* fix initstate to make the state buffer usable in setstate (Szabolcs Nagy, 2014-01-21; 1 file, -12/+2)
setstate could use the results of previous initstate or setstate calls (they return the old state buffer), but the documentation requires that an initialized state buffer should be possible to use in setstate immediately, which means that initstate should save the generator parameters in it.

I also removed the copyright notice since it is present in the copyright file.
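a small usage sketch of the guarantee being restored (an illustrative program, not part of the commit):

    #define _XOPEN_SOURCE 700
    #include <stdlib.h>

    int main(void)
    {
        static char buf[128];

        /* the documented contract: a buffer initialized by initstate
         * must be usable in setstate right away, which requires that
         * initstate record the generator parameters in the buffer. */
        initstate(12345, buf, sizeof buf);
        setstate(buf);
        return (int)(random() & 0x7f);
    }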
* include cleanups: remove unused headers and add feature test macros (Szabolcs Nagy, 2013-12-12; 1 file, -1/+0)
* improve the quality of output from rand_r (Rich Felker, 2013-06-12; 1 file, -1/+10)
due to the interface requirement of having the full state contained in a single object of type unsigned int, it is difficult to provide a reasonable-quality implementation; most good PRNGs are immediately ruled out because they need larger state.

the old rand_r gave very poor output (very short period) in its lower bits; normally, it's desirable to throw away the low bits (as in rand()) when using an LCG, but this is not possible since the state is only 32 bits and we need 31 bits of output.

glibc's rand_r uses the same LCG as musl's, but runs it for 3 iterations and only takes 10-11 bits from each iteration to construct the output value. this partially fixes the period issue, but introduces bias: not all outputs have the same frequency, and many do not appear at all. with such a low period, the bias is likely to be observable.

I tried many approaches to "fix" rand_r, and the simplest I found which made it pass the "dieharder" tests was applying this transformation to the output. the "temper" function is taken from mersenne twister, where it seems to have been chosen for some rigorous properties; here, the only formal property I'm using is that it's one-to-one and thus avoids introducing bias.

should further deficiencies in rand_r be reported, the obvious "best" solution is applying a 32-bit cryptographic block cipher in CTR mode. I identified several possible ciphers that could be used directly or adapted, but as they would be a lot slower and larger, I do not see a justification for using them unless the current rand_r proves deficient for some real-world use.
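a sketch of the resulting construction, assuming the mersenne twister tempering constants and the usual 1103515245 LCG; details may differ from the committed code:

    /* one-to-one output transformation borrowed from mersenne
     * twister; being a bijection on 32-bit values, it cannot
     * introduce bias. */
    static unsigned temper(unsigned x)
    {
        x ^= x >> 11;
        x ^= x << 7  & 0x9D2C5680;
        x ^= x << 15 & 0xEFC60000;
        x ^= x >> 18;
        return x;
    }

    int rand_r(unsigned *seed)
    {
        /* step the LCG, temper the result, and drop one bit so the
         * value fits the nonnegative int range rand_r must return. */
        return temper(*seed = *seed * 1103515245 + 12345) / 2;
    }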
* prng: make rand_r have 2^32 period instead of 2^31 (Szabolcs Nagy, 2013-06-08; 2 files, -2/+2)
this is a minor fix to increase the period of the obsolete rand_r a bit. an include header in __rand48_step.c is fixed as well.
* prng: fix rand() to give good sequence with small state (Szabolcs Nagy, 2013-06-08; 1 file, -2/+4)
some applications rely on the low bits of rand() to be a reasonably good quality prng, so this is now fixed by using the top bits of a 64-bit LCG. this is simple, has small state, and passes statistical tests. D.E. Knuth attributes the multiplier to C.E. Haynes in TAOCP Vol. 2, 3.3.4.
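a sketch of the scheme, assuming the multiplier in question is the familiar 6364136223846793005 from TAOCP with increment 1; only the top 31 bits of the state are exposed:

    #include <stdint.h>

    static uint64_t seed;

    void srand(unsigned s)
    {
        seed = s - 1;
    }

    int rand(void)
    {
        /* 64-bit LCG; low bits of an LCG are weak, so the output is
         * taken from the top 31 bits of the state. */
        seed = 6364136223846793005ULL * seed + 1;
        return seed >> 33;
    }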
* ditch the priority inheritance locks; use malloc's version of lock (Rich Felker, 2012-04-24; 1 file, -9/+9)
i did some testing trying to switch malloc to use the new internal lock with priority inheritance, and my malloc contention test got 20-100 times slower. if priority inheritance futexes are this slow, it's simply too high a price to pay for avoiding priority inversion. maybe we can consider them somewhere down the road once the kernel folks get their act together on this (and preferably don't link it to glibc's inefficient lock API)...

as such, i've switched __lock to use malloc's implementation of lightweight locks, and updated all the users of the code to use an array with a waiter count for their locks. this should give optimal performance in the vast majority of cases, and it's simple.

malloc is still using its own internal copy of the lock code because it seems to yield measurably better performance with -O3 when it's inlined (20% or more difference in the contention stress test).
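a self-contained sketch of the lock-word-plus-waiter-count scheme, with raw futex syscalls and gcc builtins standing in for the internal a_* and __wait/__wake helpers (linux-only; names and details are illustrative):

    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static void futex_wait(volatile int *addr, int val)
    {
        syscall(SYS_futex, addr, FUTEX_WAIT, val, 0, 0, 0);
    }

    static void futex_wake(volatile int *addr)
    {
        syscall(SYS_futex, addr, FUTEX_WAKE, 1, 0, 0, 0);
    }

    /* lk[0] is the lock word, lk[1] counts sleeping waiters so the
     * unlock path can skip the futex syscall when uncontended. */
    static void lock(volatile int lk[2])
    {
        while (__sync_lock_test_and_set(&lk[0], 1)) {
            __sync_fetch_and_add(&lk[1], 1);
            futex_wait(&lk[0], 1);       /* sleep while lk[0] == 1 */
            __sync_fetch_and_sub(&lk[1], 1);
        }
    }

    static void unlock(volatile int lk[2])
    {
        __sync_lock_release(&lk[0]);     /* store 0 with release semantics */
        if (lk[1]) futex_wake(&lk[0]);   /* wake one waiter only if any */
    }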
* locking support for random() prng (Rich Felker, 2011-06-29; 1 file, -7/+28)
these interfaces are required to be thread-safe even though they are not state-free. the random number sequence is shared across all threads.
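illustratively, the resulting shape: every entry point serializes access to the one shared generator (a pthread mutex stands in for the internal lock here, and the step function is arbitrary):

    #include <pthread.h>

    static pthread_mutex_t lk = PTHREAD_MUTEX_INITIALIZER;
    static unsigned long long state = 1;

    long shared_random(void)
    {
        long k;
        pthread_mutex_lock(&lk);
        state = state * 6364136223846793005ULL + 1;  /* any step fn */
        k = state >> 33;
        pthread_mutex_unlock(&lk);
        return k;
    }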
* initial commit of prng implementation by Szabolcs Nagy (Rich Felker, 2011-06-23; 2 files, -12/+107)
* initial check-in, version 0.5.0 (tag: v0.5.0) (Rich Felker, 2011-02-12; 12 files, -0/+126)