Commit log (each entry: commit message, author, date, files changed, lines removed/added)
* e500 port: getcontext / setcontext / swapcontext.  (Joseph Myers, 2013-10-04; 9 files, -0/+190)
* Correct error in iso-3166.def  (Chris Leonard, 2013-10-04; 1 file, -1/+1)
* Copy / modify pap_AN into pap_AW and pap_CW.  (Chris Leonard, 2013-10-04; 5 files, -7/+360)
* Split ar_SD into ar_SD and ar_SS  (Chris Leonard, 2013-10-04; 2 files, -2/+244)
* Update iso-1366.def and related occurrences  (Chris Leonard, 2013-10-04; 2 files, -3/+13)
* Fix typo in manual  (Siddhesh Poyarekar, 2013-10-04; 2 files, -1/+5)
* ARM: Allow building __sigsetjmp as Thumb.  (Will Newton, 2013-10-04; 2 files, -3/+5)

  Convert __sigsetjmp code to allow building as Thumb.

  ports/ChangeLog.arm:
  2013-10-04  Will Newton  <will.newton@linaro.org>
    * sysdeps/arm/setjmp.S (NO_THUMB): Remove define.
      (__sigsetjmp): Use Thumb supported instructions.
* ARM: Allow building __longjmp as Thumb.  (Will Newton, 2013-10-04; 3 files, -5/+10)

  Convert __longjmp code to allow building as Thumb.

  ports/ChangeLog.arm:
  2013-10-04  Will Newton  <will.newton@linaro.org>
    * sysdeps/arm/__longjmp.S (NO_THUMB): Remove define.
      (__longjmp): Use Thumb supported instructions.
    * sysdeps/unix/sysv/linux/arm/____longjmp_chk.S (NO_THUMB): Remove define.
* malloc/tst-valloc.c: Tidy up code.  (Will Newton, 2013-10-04; 2 files, -6/+19)

  Add some comments and call free on all potentially allocated pointers.  Also remove
  duplicate check for NULL pointer.

  ChangeLog:
  2013-10-04  Will Newton  <will.newton@linaro.org>
    * malloc/tst-valloc.c: Add comments.
      (do_test): Add comments and call free on all potentially allocated pointers.
      Remove duplicate check for NULL pointer.  Add space after cast.
* malloc/tst-pvalloc.c: Tidy up code.  (Will Newton, 2013-10-04; 2 files, -6/+19)

  Add some comments and call free on all potentially allocated pointers.  Also remove
  duplicate check for NULL pointer.

  ChangeLog:
  2013-10-04  Will Newton  <will.newton@linaro.org>
    * malloc/tst-pvalloc.c: Add comments.
      (do_test): Add comments and call free on all potentially allocated pointers.
      Remove duplicate check for NULL pointer.  Add space after cast.
* malloc/tst-posix_memalign.c: Tidy up code.  (Will Newton, 2013-10-04; 2 files, -4/+18)

  Add some comments and call free on all potentially allocated pointers.

  ChangeLog:
  2013-10-04  Will Newton  <will.newton@linaro.org>
    * malloc/tst-posix_memalign.c: Add comments.
      (do_test): Add comments and call free on all potentially allocated pointers.
      Add space after cast.
* malloc: Add memalign test.  (Will Newton, 2013-10-04; 3 files, -1/+105)

  ChangeLog:
  2013-10-04  Will Newton  <will.newton@linaro.org>
    * malloc/Makefile: Add tst-memalign.
    * malloc/tst-memalign.c: New file.
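  For illustration, a minimal sketch of the kind of checks such a test performs (this
  is not the actual malloc/tst-memalign.c; the helper name and parameters are made up):

      #include <malloc.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <stdlib.h>

      /* Allocate with a power-of-two alignment, verify the alignment of the
         returned pointer, and free every potentially allocated pointer.  */
      static int
      check_memalign (size_t align, size_t size)
      {
        void *p = memalign (align, size);
        if (p == NULL)
          {
            printf ("memalign (%zu, %zu) failed\n", align, size);
            return 1;
          }
        if (((uintptr_t) p & (align - 1)) != 0)
          {
            printf ("pointer %p is not %zu-byte aligned\n", p, align);
            free (p);
            return 1;
          }
        free (p);
        return 0;
      }

  A test driver would call such a helper for a range of alignments and sizes.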
* Use stdint.h types in union unaligned.  (Alan Modra, 2013-10-04; 3 files, -6/+12)

    * sysdeps/powerpc/powerpc32/dl-machine.c (__process_machine_rela): Use stdint
      types rather than __attribute__((mode())).
    * sysdeps/powerpc/powerpc64/dl-machine.h (elf_machine_rela): Likewise.
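  A sketch of the shape of that change (the member names below are illustrative, not
  necessarily those used in the powerpc dl-machine code):

      #include <stdint.h>

      /* Unaligned relocation target, expressed with fixed-width stdint.h
         types instead of __attribute__ ((mode (...))) integer modes.  */
      union unaligned
        {
          uint16_t u2;   /* was: unsigned int __attribute__ ((mode (HI))) */
          uint32_t u4;   /* was: unsigned int __attribute__ ((mode (SI))) */
          uint64_t u8;   /* was: unsigned int __attribute__ ((mode (DI))) */
        } __attribute__ ((packed));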
* Correct little-endian relocation of UADDR64,32,16.  (Alan Modra, 2013-10-04; 3 files, -24/+24)

    * sysdeps/powerpc/powerpc32/dl-machine.c (__process_machine_rela): Correct
      handling of unaligned relocs for little-endian.
    * sysdeps/powerpc/powerpc64/dl-machine.h (elf_machine_rela): Likewise.
* fix changelog date  (Alan Modra, 2013-10-04; 1 file, -1/+1)
* PowerPC LE configury  (Alan Modra, 2013-10-04; 5 files, -4/+13)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00096.html

  This adds the basic configury bits for powerpc64le and powerpcle.

    * configure.in: Map powerpc64le and powerpcle to base_machine/machine.
    * configure: Regenerate.
    * nptl/shlib-versions: Powerpc*le starts at 2.18.
    * shlib-versions: Likewise.
* string/tester memrchr test  (Alan Modra, 2013-10-04; 2 files, -3/+7)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00095.html

  I found this useful at one stage when I was seeing a huge number of memrchr
  failures, all of test number 10.

    * string/tester.c (test_memrchr): Increment reported test cycle.
* string/test-memcpy error reporting  (Alan Modra, 2013-10-04; 2 files, -2/+7)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00094.html

  Using plain %s here runs the risk of segfaulting when displaying the string.  src
  and dst aren't zero terminated strings.

    * string/test-memcpy.c (do_one_test): When reporting errors, print string address
      and don't overrun end of string.
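  A sketch of that safer reporting style, with made-up names (the point is that the
  buffers are not NUL-terminated, so their addresses are printed rather than their
  contents):

      #include <stddef.h>
      #include <stdio.h>

      /* Report a mismatch without passing non-terminated buffers to %s.  */
      static void
      report_mismatch (const char *impl_name, const unsigned char *dst,
                       const unsigned char *src, size_t len)
      {
        printf ("Wrong result in function %s dst %p src %p len %zu\n",
                impl_name, (const void *) dst, (const void *) src, len);
      }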
* PowerPC LE memchr and memrchr  (Alan Modra, 2013-10-04; 7 files, -382/+422)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00105.html

  Like strnlen, memchr and memrchr had a number of defects fixed by this patch as well
  as adding little-endian support.  The first one I noticed was that the entry to the
  main loop needlessly checked for "are we done yet?" when we know the size is large
  enough that we can't be done.  The second defect I noticed was that the main loop
  count was wrong, which in turn meant that the small loop needed to handle an extra
  word.  Thirdly, there is nothing to say that the string can't wrap around zero,
  except of course that we'd normally hit a segfault on trying to read from address
  zero.  Fixing that simplified a number of places:

      - /* Are we done already? */
      - addi r9,r8,8
      - cmpld r9,r7
      - bge L(null)

  becomes

      + cmpld r8,r7
      + beqlr

  However, the exit gets an extra test because I test for being on the last word then
  if so whether the byte offset is less than the end.  Overall, the change is a win.
  Lastly, memrchr used the wrong cache hint.

    * sysdeps/powerpc/powerpc64/power7/memchr.S: Replace rlwimi with insrdi.  Make
      better use of reg selection to speed exit slightly.  Schedule entry path a
      little better.  Remove useless "are we done" checks on entry to main loop.
      Handle wrapping around zero address.  Correct main loop count.  Handle single
      left-over word from main loop inline rather than by using loop_small.  Remove
      extra word case in loop_small caused by wrong loop count.  Add little-endian
      support.
    * sysdeps/powerpc/powerpc32/power7/memchr.S: Likewise.
    * sysdeps/powerpc/powerpc64/power7/memrchr.S: Likewise.  Use proper cache hint.
    * sysdeps/powerpc/powerpc32/power7/memrchr.S: Likewise.
    * sysdeps/powerpc/powerpc64/power7/rawmemchr.S: Add little-endian support.
      Avoid rlwimi.
    * sysdeps/powerpc/powerpc32/power7/rawmemchr.S: Likewise.
* PowerPC LE memset  (Alan Modra, 2013-10-04; 8 files, -34/+45)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00104.html

  One of the things I noticed when looking at power7 timing is that rlwimi is cracked
  and the two resulting insns have a register dependency.  That makes it a little
  slower than the equivalent rldimi.

    * sysdeps/powerpc/powerpc64/memset.S: Replace rlwimi with insrdi.  Formatting.
    * sysdeps/powerpc/powerpc64/power4/memset.S: Likewise.
    * sysdeps/powerpc/powerpc64/power6/memset.S: Likewise.
    * sysdeps/powerpc/powerpc64/power7/memset.S: Likewise.
    * sysdeps/powerpc/powerpc32/power4/memset.S: Likewise.
    * sysdeps/powerpc/powerpc32/power6/memset.S: Likewise.
    * sysdeps/powerpc/powerpc32/power7/memset.S: Likewise.
* PowerPC LE memcpy  (Alan Modra, 2013-10-04; 10 files, -410/+941)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00103.html

  Little-endian support for memcpy.  I spent some time cleaning up the 64-bit power7
  memcpy, in order to avoid the extra alignment traps power7 takes for little-endian.
  It probably would have been better to copy the linux kernel version of memcpy.

    * sysdeps/powerpc/powerpc32/power4/memcpy.S: Add little endian support.
    * sysdeps/powerpc/powerpc32/power6/memcpy.S: Likewise.
    * sysdeps/powerpc/powerpc32/power7/memcpy.S: Likewise.
    * sysdeps/powerpc/powerpc32/power7/mempcpy.S: Likewise.
    * sysdeps/powerpc/powerpc64/memcpy.S: Likewise.
    * sysdeps/powerpc/powerpc64/power4/memcpy.S: Likewise.
    * sysdeps/powerpc/powerpc64/power6/memcpy.S: Likewise.
    * sysdeps/powerpc/powerpc64/power7/memcpy.S: Likewise.
    * sysdeps/powerpc/powerpc64/power7/mempcpy.S: Likewise.  Make better use of regs.
      Use power7 mtocrf.  Tidy function tails.
* PowerPC LE memcmp  (Alan Modra, 2013-10-04; 5 files, -1893/+3464)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00102.html

  This is a rather large patch due to formatting and renaming.  The formatting changes
  were to make it possible to compare power7 and power4 versions of memcmp.  Using
  different register defines came about while I was wrestling with the code, trying to
  find spare registers at one stage.  I found it much simpler if we refer to a reg by
  the same name throughout a function, so it's better if short-term multiple use regs
  like rTMP are referred to using their register number.  I made the cr field usage
  changes when attempting to reload rWORDn regs in the exit path to byte swap before
  comparing when little-endian.  That proved a bad idea due to the pipelining involved
  in the main loop; offsets to reload the regs were different first time around the
  loop.  Anyway, I left the cr field usage changes in place for consistency.  Aside
  from these more-or-less cosmetic changes, I fixed a number of places where an early
  exit path restores regs unnecessarily, removed some dead code, and optimised one or
  two exits.

    * sysdeps/powerpc/powerpc64/power7/memcmp.S: Add little-endian support.
      Formatting.  Consistently use rXXX register defines or rN defines.  Use early
      exit labels that avoid restoring unused non-volatile regs.  Make cr field use
      more consistent with rWORDn compares.  Rename regs used as shift registers for
      unaligned loop, using rN defines for short lifetime/multiple use regs.
    * sysdeps/powerpc/powerpc64/power4/memcmp.S: Likewise.
    * sysdeps/powerpc/powerpc32/power7/memcmp.S: Likewise.  Exit with addi 1,1,64 to
      pop stack frame.  Simplify return value code.
    * sysdeps/powerpc/powerpc32/power4/memcmp.S: Likewise.
* PowerPC LE strchr  (Alan Modra, 2013-10-04; 7 files, -74/+228)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00101.html

  Adds little-endian support to optimised strchr assembly.  I've also tweaked the
  big-endian code a little.  In power7/strchr.S there's a check in the tail of the
  function that we didn't match 0 before finding a c match, done by comparing leading
  zero counts.  It's just as valid, and quicker, to compare the raw output from cmpb.
  Another little tweak is to use rldimi/insrdi in place of rlwimi for the power7
  strchr functions.  Since rlwimi is cracked, it is a few cycles slower.  rldimi can
  be used on the 32-bit power7 functions too.

    * sysdeps/powerpc/powerpc64/power7/strchr.S (strchr): Add little-endian support.
      Correct typos, formatting.  Optimize tail.  Use insrdi rather than rlwimi.
    * sysdeps/powerpc/powerpc32/power7/strchr.S: Likewise.
    * sysdeps/powerpc/powerpc64/power7/strchrnul.S (__strchrnul): Add little-endian
      support.  Correct typos.
    * sysdeps/powerpc/powerpc32/power7/strchrnul.S: Likewise.  Use insrdi rather than
      rlwimi.
    * sysdeps/powerpc/powerpc64/strchr.S (rTMP4, rTMP5): Define.  Use in loop and
      entry code to keep "and." results.
      (strchr): Add little-endian support.  Comment.  Move cntlzd earlier in tail.
    * sysdeps/powerpc/powerpc32/strchr.S: Likewise.
* PowerPC LE strcpy  (Alan Modra, 2013-10-04; 5 files, -3/+85)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00100.html

  The strcpy changes for little-endian are quite straight-forward, just a matter of
  rotating the last word differently.  I'll note that the powerpc64 version of stpcpy
  is just begging to be converted to use 64-bit loads and stores..

    * sysdeps/powerpc/powerpc64/strcpy.S: Add little-endian support:
    * sysdeps/powerpc/powerpc32/strcpy.S: Likewise.
    * sysdeps/powerpc/powerpc64/stpcpy.S: Likewise.
    * sysdeps/powerpc/powerpc32/stpcpy.S: Likewise.
* PowerPC LE strcmp and strncmp  (Alan Modra, 2013-10-04; 9 files, -81/+393)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00099.html

  More little-endian support.  I leave the main strcmp loops unchanged, (well, except
  for renumbering rTMP to something other than r0 since it's needed in an addi insn)
  and modify the tail for little-endian.  I noticed some of the big-endian tail code
  was a little untidy so have cleaned that up too.

    * sysdeps/powerpc/powerpc64/strcmp.S (rTMP2): Define as r0.
      (rTMP): Define as r11.
      (strcmp): Add little-endian support.  Optimise tail.
    * sysdeps/powerpc/powerpc32/strcmp.S: Similarly.
    * sysdeps/powerpc/powerpc64/strncmp.S: Likewise.
    * sysdeps/powerpc/powerpc32/strncmp.S: Likewise.
    * sysdeps/powerpc/powerpc64/power4/strncmp.S: Likewise.
    * sysdeps/powerpc/powerpc32/power4/strncmp.S: Likewise.
    * sysdeps/powerpc/powerpc64/power7/strncmp.S: Likewise.
    * sysdeps/powerpc/powerpc32/power7/strncmp.S: Likewise.
* PowerPC LE strnlen  (Alan Modra, 2013-10-04; 3 files, -102/+125)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00098.html

  The existing strnlen code has a number of defects, so this patch is more than just
  adding little-endian support.  The changes here are similar to those for memchr.

    * sysdeps/powerpc/powerpc64/power7/strnlen.S (strnlen): Add little-endian support.
      Remove unnecessary "are we done" tests.  Handle "s" wrapping around zero and
      extremely large "size".  Correct main loop count.  Handle single left-over word
      from main loop inline rather than by using small_loop.  Correct comments.
      Delete "zero" tail, use "end_max" instead.
    * sysdeps/powerpc/powerpc32/power7/strnlen.S: Likewise.
* PowerPC LE strlen  (Alan Modra, 2013-10-04; 5 files, -47/+140)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00097.html

  This is the first of nine patches adding little-endian support to the existing
  optimised string and memory functions.  I did spend some time with a power7
  simulator looking at cycle by cycle behaviour for memchr, but most of these patches
  have not been run on cpu simulators to check that we are going as fast as possible.
  I'm sure PowerPC can do better.  However, the little-endian support mostly leaves
  main loops unchanged, so I'm banking on previous authors having done a good job on
  big-endian..  As with most code you stare at long enough, I found some improvements
  for big-endian too.

  Little-endian support for strlen.  Like most of the string functions, I leave the
  main word or multiple-word loops substantially unchanged, just needing to modify
  the tail.  Removing the branch in the power7 functions is just a tidy.  .align
  produces a branch anyway.  Modifying regs in the non-power7 functions is to suit
  the new little-endian tail.

    * sysdeps/powerpc/powerpc64/power7/strlen.S (strlen): Add little-endian support.
      Don't branch over align.
    * sysdeps/powerpc/powerpc32/power7/strlen.S: Likewise.
    * sysdeps/powerpc/powerpc64/strlen.S (strlen): Add little-endian support.
      Rearrange tmp reg use to suit.  Comment.
    * sysdeps/powerpc/powerpc32/strlen.S: Likewise.
* PowerPC SIGSTKSZ  (Alan Modra, 2013-10-04; 2 files, -0/+58)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00093.html

  This copies the sparc version of sigstack.h, which gives powerpc

      #define MINSIGSTKSZ 4096
      #define SIGSTKSZ 16384

  Before the VSX changes, struct rt_sigframe size was 1920 plus 128 for
  __SIGNAL_FRAMESIZE giving ppc64 exactly the default MINSIGSTKSZ of 2048.  After VSX,
  ucontext increased by 256 bytes.  Oops, we're over MINSIGSTKSZ, so powerpc has been
  using the wrong value for quite a while.  Add another ucontext for TM and
  rt_sigframe is now at 3872, giving actual MINSIGSTKSZ of 4000.

  The glibc testcase that I was looking at was tst-cancel21, which allocates
  2*SIGSTKSZ (not because the test is trying to be conservative, but because the test
  actually has nested signal stack frames).  We blew the allocation by 48 bytes when
  using current mainline gcc to compile glibc (le ppc64).  The required stack depth in
  _dl_lookup_symbol_x from the top of the next signal frame was 10944 bytes.  I guess
  you'd want to add 288 to that, implying an actual SIGSTKSZ of 11232.

    * sysdeps/unix/sysv/linux/powerpc/bits/sigstack.h: New file.
* PowerPC makecontext  (Alan Modra, 2013-10-04; 3 files, -4/+16)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00092.html

  Use conditional form of branch and link to avoid destroying the cpu link stack used
  to predict blr return addresses.

    * sysdeps/unix/sysv/linux/powerpc/powerpc32/makecontext.S: Use conditional form
      of branch and link when obtaining pc.
    * sysdeps/unix/sysv/linux/powerpc/powerpc64/makecontext.S: Likewise.
* PowerPC LE _dl_hwcap access  (Alan Modra, 2013-10-04; 4 files, -16/+23)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00091.html

  More LE support, correcting word accesses to _dl_hwcap.

    * sysdeps/unix/sysv/linux/powerpc/powerpc32/getcontext-common.S: Use
      HIWORD/LOWORD.
    * sysdeps/unix/sysv/linux/powerpc/powerpc32/setcontext-common.S: Ditto.
    * sysdeps/unix/sysv/linux/powerpc/powerpc32/swapcontext-common.S: Ditto.
* PowerPC ugly symbol versioning  (Alan Modra, 2013-10-04; 11 files, -38/+40)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00090.html

  This patch fixes symbol versioning in setjmp/longjmp.  The existing code uses raw
  versions, which results in wrong symbol versioning when you want to build glibc
  with a base version of 2.19 for LE.  Note that merging the 64-bit and 32-bit
  versions in novmx-longjmp.c and pt-longjmp.c doesn't result in GLIBC_2.0 versions
  for 64-bit, due to the base in shlib-versions.

    * sysdeps/powerpc/longjmp.c: Use proper symbol versioning macros.
    * sysdeps/powerpc/novmx-longjmp.c: Likewise.
    * sysdeps/powerpc/powerpc32/bsd-_setjmp.S: Likewise.
    * sysdeps/powerpc/powerpc32/bsd-setjmp.S: Likewise.
    * sysdeps/powerpc/powerpc32/fpu/__longjmp.S: Likewise.
    * sysdeps/powerpc/powerpc32/fpu/setjmp.S: Likewise.
    * sysdeps/powerpc/powerpc32/mcount.c: Likewise.
    * sysdeps/powerpc/powerpc32/setjmp.S: Likewise.
    * sysdeps/powerpc/powerpc64/setjmp.S: Likewise.
    * nptl/sysdeps/unix/sysv/linux/powerpc/pt-longjmp.c: Likewise.
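  For context, a hedged sketch of what "proper symbol versioning macros" look like in
  glibc C code (the function and version names here are placeholders, not the actual
  setjmp/longjmp symbols touched by this patch):

      #include <shlib-compat.h>

      extern void __current_impl (void);
      extern void __old_impl (void);

      /* Export the current implementation at the library's baseline version,
         and keep the old one only for binaries linked against older glibc.  */
      versioned_symbol (libc, __current_impl, example_symbol, GLIBC_2_19);
      #if SHLIB_COMPAT (libc, GLIBC_2_0, GLIBC_2_19)
      compat_symbol (libc, __old_impl, example_symbol, GLIBC_2_0);
      #endif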
* PowerPC LE setjmp/longjmp  (Anton Blanchard, 2013-10-04; 6 files, -85/+92)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00089.html

  Little-endian fixes for setjmp/longjmp.  When writing these I noticed the setjmp
  code corrupts the non-volatile VMX registers when using an unaligned buffer.  Anton
  fixed this, and also simplified it quite a bit.

  The current code uses boilerplate for the case where we want to store 16 bytes to
  an unaligned address.  For that we have to do a read/modify/write of two aligned
  16 byte quantities.  In our case we are storing a bunch of back to back data
  (consecutive VMX registers), and only the start and end of the region need the
  read/modify/write.

  [BZ #15723]
    * sysdeps/powerpc/jmpbuf-offsets.h: Comment fix.
    * sysdeps/powerpc/powerpc32/fpu/__longjmp-common.S: Correct _dl_hwcap access for
      little-endian.
    * sysdeps/powerpc/powerpc32/fpu/setjmp-common.S: Likewise.  Don't destroy vmx
      regs when saving unaligned.
    * sysdeps/powerpc/powerpc64/__longjmp-common.S: Correct CR load.
    * sysdeps/powerpc/powerpc64/setjmp-common.S: Likewise CR save.  Don't destroy vmx
      regs when saving unaligned.
* PowerPC floating point little-endian [15 of 15]  (Alan Modra, 2013-10-04; 2 files, -12/+14)

  http://sourceware.org/ml/libc-alpha/2013-07/msg00206.html

  The union loses when little-endian.

    * sysdeps/powerpc/powerpc32/power4/hp-timing.h (HP_TIMING_NOW): Don't use a union
      to pack hi/low value.
* PowerPC floating point little-endian [14 of 15]  (Anton Blanchard, 2013-10-04; 7 files, -7/+29)

  http://sourceware.org/ml/libc-alpha/2013-07/msg00205.html

  These all wrongly specified float constants in a 64-bit word.

    * sysdeps/powerpc/powerpc64/fpu/s_ceilf.S: Correct float constants for
      little-endian.
    * sysdeps/powerpc/powerpc64/fpu/s_floorf.S: Likewise.
    * sysdeps/powerpc/powerpc64/fpu/s_nearbyintf.S: Likewise.
    * sysdeps/powerpc/powerpc64/fpu/s_rintf.S: Likewise.
    * sysdeps/powerpc/powerpc64/fpu/s_roundf.S: Likewise.
    * sysdeps/powerpc/powerpc64/fpu/s_truncf.S: Likewise.
* PowerPC floating point little-endian [13 of 15]  (Alan Modra, 2013-10-04; 3 files, -13/+19)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00088.html

    * sysdeps/powerpc/powerpc32/fpu/s_roundf.S: Increase alignment of constants to
      usual value for .cst8 section, and remove redundant high address load.
    * sysdeps/powerpc/powerpc32/power4/fpu/s_llround.S: Use float constant for
      0x1p52.  Load little-endian words of double from correct stack offsets.
* PowerPC floating point little-endian [12 of 15]  (Alan Modra, 2013-10-04; 20 files, -38/+71)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00087.html

  Fixes for little-endian in 32-bit assembly.

    * sysdeps/powerpc/sysdep.h (LOWORD, HIWORD, HISHORT): Define.
    * sysdeps/powerpc/powerpc32/fpu/s_copysign.S: Load little-endian words of double
      from correct stack offsets.
    * sysdeps/powerpc/powerpc32/fpu/s_copysignl.S: Likewise.
    * sysdeps/powerpc/powerpc32/fpu/s_lrint.S: Likewise.
    * sysdeps/powerpc/powerpc32/fpu/s_lround.S: Likewise.
    * sysdeps/powerpc/powerpc32/power4/fpu/s_llrint.S: Likewise.
    * sysdeps/powerpc/powerpc32/power4/fpu/s_llrintf.S: Likewise.
    * sysdeps/powerpc/powerpc32/power5+/fpu/s_llround.S: Likewise.
    * sysdeps/powerpc/powerpc32/power5+/fpu/s_lround.S: Likewise.
    * sysdeps/powerpc/powerpc32/power5/fpu/s_isnan.S: Likewise.
    * sysdeps/powerpc/powerpc32/power6/fpu/s_isnan.S: Likewise.
    * sysdeps/powerpc/powerpc32/power6/fpu/s_llrint.S: Likewise.
    * sysdeps/powerpc/powerpc32/power6/fpu/s_llrintf.S: Likewise.
    * sysdeps/powerpc/powerpc32/power6/fpu/s_llround.S: Likewise.
    * sysdeps/powerpc/powerpc32/power7/fpu/s_finite.S: Likewise.
    * sysdeps/powerpc/powerpc32/power7/fpu/s_isinf.S: Likewise.
    * sysdeps/powerpc/powerpc32/power7/fpu/s_isnan.S: Likewise.
    * sysdeps/powerpc/powerpc64/power7/fpu/s_finite.S: Use HISHORT.
    * sysdeps/powerpc/powerpc64/power7/fpu/s_isinf.S: Likewise.
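  The idea behind those HIWORD/LOWORD offsets, expressed as a hedged C sketch (the
  real definitions are assembler macros in sysdeps/powerpc/sysdep.h; the names and
  exact values below are only illustrative):

      #include <endian.h>

      /* Byte offsets, within an 8-byte double stored in memory, of its
         most- and least-significant 32-bit words.  */
      #if __BYTE_ORDER == __LITTLE_ENDIAN
      # define DBL_HIWORD_OFFSET 4   /* high word sits in the upper four bytes */
      # define DBL_LOWORD_OFFSET 0
      #else
      # define DBL_HIWORD_OFFSET 0   /* big-endian: high word comes first */
      # define DBL_LOWORD_OFFSET 4
      #endif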
* PowerPC floating point little-endian [11 of 15]  (Alan Modra, 2013-10-04; 3 files, -57/+70)

  http://sourceware.org/ml/libc-alpha/2013-07/msg00202.html

  Another little-endian fix.

    * sysdeps/powerpc/fpu_control.h (_FPU_GETCW): Rewrite using 64-bit int/double
      union.
      (_FPU_SETCW): Likewise.
    * sysdeps/powerpc/fpu/tst-setcontext-fpscr.c (_GET_DI_FPSCR): Likewise.
      (_SET_DI_FPSCR, _GET_SI_FPSCR, _SET_SI_FPSCR): Likewise.
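  A hedged sketch of the technique (not the exact glibc macro): read the FPSCR with
  mffs into a double, then view the same 64 bits through a union, so nothing depends
  on which 32-bit half of the double holds the data:

      /* PowerPC-specific illustration; the status/control bits are in the
         low half of the 64-bit FPSCR image that mffs returns.  */
      typedef union
      {
        double d;
        unsigned long long ll;
      } di_union;

      static unsigned int
      read_fpscr_low_word (void)
      {
        di_union u;
        __asm__ volatile ("mffs %0" : "=f" (u.d));
        return (unsigned int) u.ll;
      }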
* PowerPC floating point little-endian [10 of 15]  (Alan Modra, 2013-10-04; 3 files, -34/+37)

  http://sourceware.org/ml/libc-alpha/2013-07/msg00201.html

  These two functions oddly test x+1>0 when a double x is >= 0.0, and similarly when
  x is negative.  I don't see the point of that since the test should always be true.
  I also don't see any need to convert x+1 to integer rather than simply using xr+1.
  Note that the standard allows these functions to return any value when the input is
  outside the range of long long, but it's not too hard to prevent xr+1 overflowing
  so that's what I've done.  (With rounding mode FE_UPWARD, x+1 can be a lot more
  than what you might naively expect, but perhaps that situation was covered by the
  x - xrf < 1.0 test.)

    * sysdeps/powerpc/fpu/s_llround.c (__llround): Rewrite.
    * sysdeps/powerpc/fpu/s_llroundf.c (__llroundf): Rewrite.
* PowerPC floating point little-endian [9 of 15]  (Alan Modra, 2013-10-04; 2 files, -25/+35)

  http://sourceware.org/ml/libc-alpha/2013-07/msg00200.html

  This works around the fact that vsx is disabled in current little-endian gcc.
  Also, float constants take 4 bytes in memory vs. 16 bytes for vector constants, and
  we don't need to write one lot of masks for double (register format) and another
  for float (mem format).

    * sysdeps/powerpc/fpu/s_float_bitwise.h (__float_and_test28): Don't use vector
      int constants.
      (__float_and_test24, __float_and8, __float_get_exp): Likewise.
* PowerPC floating point little-endian [8 of 15]  (Anton Blanchard, 2013-10-04; 15 files, -40/+57)

  http://sourceware.org/ml/libc-alpha/2013-07/msg00199.html

  Corrects floating-point environment code for little-endian.

    * sysdeps/powerpc/fpu/fenv_libc.h (fenv_union_t): Replace int array with long
      long.
    * sysdeps/powerpc/fpu/e_sqrt.c (__slow_ieee754_sqrt): Adjust.
    * sysdeps/powerpc/fpu/e_sqrtf.c (__slow_ieee754_sqrtf): Adjust.
    * sysdeps/powerpc/fpu/fclrexcpt.c (__feclearexcept): Adjust.
    * sysdeps/powerpc/fpu/fedisblxcpt.c (fedisableexcept): Adjust.
    * sysdeps/powerpc/fpu/feenablxcpt.c (feenableexcept): Adjust.
    * sysdeps/powerpc/fpu/fegetexcept.c (__fegetexcept): Adjust.
    * sysdeps/powerpc/fpu/feholdexcpt.c (feholdexcept): Adjust.
    * sysdeps/powerpc/fpu/fesetenv.c (__fesetenv): Adjust.
    * sysdeps/powerpc/fpu/feupdateenv.c (__feupdateenv): Adjust.
    * sysdeps/powerpc/fpu/fgetexcptflg.c (__fegetexceptflag): Adjust.
    * sysdeps/powerpc/fpu/fraiseexcpt.c (__feraiseexcept): Adjust.
    * sysdeps/powerpc/fpu/fsetexcptflg.c (__fesetexceptflag): Adjust.
    * sysdeps/powerpc/fpu/ftestexcept.c (fetestexcept): Adjust.
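  A sketch of the adjusted union, based only on the one-line description above (the
  real definition lives in sysdeps/powerpc/fpu/fenv_libc.h):

      #include <fenv.h>

      /* View the environment both as the ABI type and as a single 64-bit
         integer, instead of as two 32-bit words whose order differs between
         big- and little-endian.  */
      typedef union
      {
        fenv_t fenv;
        unsigned long long l;   /* was: unsigned int l[2] */
      } fenv_union_t;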
* PowerPC floating point little-endian [7 of 15]  (Anton Blanchard, 2013-10-04; 2 files, -12/+22)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00086.html

    * sysdeps/powerpc/bits/mathinline.h (__signbitf): Use builtin.
      (__signbit): Likewise.  Correct for little-endian.
      (__signbitl): Call __signbit.
      (lrint): Correct for little-endian.
      (lrintf): Call lrint.
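  A sketch of the builtin-based approach (the wrapper names are illustrative; the
  real change is to the inlines in the powerpc bits/mathinline.h):

      /* Let the compiler extract the sign bit, so the result does not depend
         on the word order of the in-memory representation.  */
      static inline int
      signbit_float (float x)
      {
        return __builtin_signbitf (x);
      }

      static inline int
      signbit_double (double x)
      {
        return __builtin_signbit (x);
      }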
* PowerPC floating point little-endian [6 of 15]  (Alan Modra, 2013-10-04; 2 files, -28/+30)

  http://sourceware.org/ml/libc-alpha/2013-07/msg00197.html

  A rewrite to make this code correct for little-endian.

    * sysdeps/ieee754/ldbl-128ibm/e_sqrtl.c (mynumber): Replace union 32-bit int
      array member with 64-bit int array.
      (t515, tm256): Double rather than long double.
      (__ieee754_sqrtl): Rewrite using 64-bit arithmetic.
* PowerPC floating point little-endian [5 of 15]  (Alan Modra, 2013-10-04; 3 files, -56/+8)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00085.html

  Rid ourselves of ieee854.

    * sysdeps/ieee754/ldbl-128ibm/ieee754.h (union ieee854_long_double): Delete.
      (IEEE854_LONG_DOUBLE_BIAS): Delete.
    * sysdeps/ieee754/ldbl-128ibm/math_ldbl.h: Don't include ieee854 version of
      math_ldbl.h.
* PowerPC floating point little-endian [4 of 15]  (Alan Modra, 2013-10-04; 6 files, -134/+198)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00084.html

  Another batch of ieee854 macros and union replacement.  These four files also have
  bugs fixed with this patch.  The fact that the two doubles in an IBM long double
  may have different signs means that negation and absolute value operations can't
  just twiddle one sign bit as you can with ieee854 style extended double.  fmodl,
  remainderl, erfl and erfcl all had errors of this type.  erfl also returned +1 for
  large magnitude negative input where it should return -1.  The hypotl error is
  innocuous since the value adjusted twice is only used as a flag.  The e_hypotl.c
  tests for large "a" and small "b" are mutually exclusive because we've already
  exited when x/y > 2**120.  That allows some further small simplifications.

  [BZ #15734], [BZ #15735]
    * sysdeps/ieee754/ldbl-128ibm/e_fmodl.c (__ieee754_fmodl): Rewrite all uses of
      ieee854 long double macros and unions.  Simplify test for 0.0L.  Correct |x|<|y|
      and |x|=|y| test.  Use ldbl_extract_mantissa value for ix,iy exponents.
      Properly normalize after ldbl_extract_mantissa, and don't add hidden bit
      already handled.  Don't treat low word of ieee854 mantissa like low word of
      IBM long double and mask off bit when testing for zero.
    * sysdeps/ieee754/ldbl-128ibm/e_hypotl.c (__ieee754_hypotl): Rewrite all uses of
      ieee854 long double macros and unions.  Simplify tests for 0.0L and inf.
      Correct double adjustment of k.  Delete dead code adjusting ha,hb.  Simplify
      code setting kld.  Delete two600 and two1022, instead use their values.
      Recognise that tests for large "a" and small "b" are mutually exclusive.
      Rename vars.  Comment.
    * sysdeps/ieee754/ldbl-128ibm/e_remainderl.c (__ieee754_remainderl): Rewrite all
      uses of ieee854 long double macros and unions.  Simplify test for 0.0L and nan.
      Correct negation.
    * sysdeps/ieee754/ldbl-128ibm/s_erfl.c (__erfl): Rewrite all uses of ieee854 long
      double macros and unions.  Correct output for large magnitude x.  Correct
      absolute value calculation.
      (__erfcl): Likewise.
    * math/libm-test.inc: Add tests for errors discovered in IBM long double versions
      of fmodl, remainderl, erfl and erfcl.
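  A minimal sketch of the sign issue described above, assuming a platform where long
  double is the IBM double-double format and the high double is stored first
  (illustrative code, not the glibc fix):

      typedef union
      {
        long double ld;
        double d[2];   /* assumed layout: d[0] = high double, d[1] = low double */
      } ibm128_view;

      /* Negating an IBM long double must negate both halves: the value is
         d[0] + d[1], and the two doubles may carry different signs.  */
      static long double
      ibm128_negate (long double x)
      {
        ibm128_view u = { .ld = x };
        u.d[0] = -u.d[0];
        u.d[1] = -u.d[1];
        return u.ld;
      }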
* PowerPC floating point little-endian [3 of 15]  (Alan Modra, 2013-10-04; 23 files, -240/+322)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00083.html

  Further replacement of ieee854 macros and unions.  These files also have some
  optimisations for comparison against 0.0L, infinity and nan.  Since the ABI
  specifies that the high double of an IBM long double pair is the value rounded to
  double, a high double of 0.0 means the low double must also be 0.0.  The ABI also
  says that infinity and nan are encoded in the high double, with the low double
  unspecified.  This means that tests for 0.0L, +/-Infinity and +/-NaN need only
  check the high double.

    * sysdeps/ieee754/ldbl-128ibm/e_atan2l.c (__ieee754_atan2l): Rewrite all uses of
      ieee854 long double macros and unions.  Simplify tests for long doubles that
      are fully specified by the high double.
    * sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c (__ieee754_gammal_r): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/e_ilogbl.c (__ieee754_ilogbl): Likewise.  Remove
      dead code too.
    * sysdeps/ieee754/ldbl-128ibm/e_jnl.c (__ieee754_jnl): Likewise.
      (__ieee754_ynl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/e_log10l.c (__ieee754_log10l): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/e_logl.c (__ieee754_logl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/e_powl.c (__ieee754_powl): Likewise.  Remove dead
      code too.
    * sysdeps/ieee754/ldbl-128ibm/k_tanl.c (__kernel_tanl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_expm1l.c (__expm1l): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_frexpl.c (__frexpl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_isinf_nsl.c (__isinf_nsl): Likewise.  Simplify.
    * sysdeps/ieee754/ldbl-128ibm/s_isinfl.c (___isinfl): Likewise.  Simplify.
    * sysdeps/ieee754/ldbl-128ibm/s_log1pl.c (__log1pl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_modfl.c (__modfl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_nextafterl.c (__nextafterl): Likewise.  Comment
      on variable precision.
    * sysdeps/ieee754/ldbl-128ibm/s_nexttoward.c (__nexttoward): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_nexttowardf.c (__nexttowardf): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_remquol.c (__remquol): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_scalblnl.c (__scalblnl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_scalbnl.c (__scalbnl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_tanhl.c (__tanhl): Likewise.
    * sysdeps/powerpc/fpu/libm-test-ulps: Adjust tan_towardzero ulps.
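  A sketch of the simplification described above: since the ABI makes the high double
  the value rounded to double, casting to double yields the half that fully
  determines zero, infinity and NaN (hypothetical helper, for illustration only):

      #include <math.h>

      /* Classify an IBM long double by looking only at its high double.  */
      static int
      ibm128_high_is_special (long double x)
      {
        double hi = (double) x;   /* high double of the pair, per the ABI */
        return hi == 0.0 || isinf (hi) || isnan (hi);
      }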
* PowerPC floating point little-endian [2 of 15]  (Alan Modra, 2013-10-04; 27 files, -89/+176)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00082.html

  This patch replaces occurrences of GET_LDOUBLE_* and SET_LDOUBLE_* macros, and
  union ieee854_long_double_shape_type in ldbl-128ibm/, and a stray one in the 32-bit
  fpu support.  These files have no significant changes apart from rewriting the long
  double bit access.

    * sysdeps/ieee754/ldbl-128ibm/math_ldbl.h (ldbl_high): Define.
    * sysdeps/ieee754/ldbl-128ibm/e_acoshl.c (__ieee754_acoshl): Rewrite all uses of
      ieee854 long double macros and unions.
    * sysdeps/ieee754/ldbl-128ibm/e_acosl.c (__ieee754_acosl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/e_asinl.c (__ieee754_asinl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/e_atanhl.c (__ieee754_atanhl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/e_coshl.c (__ieee754_coshl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/e_log2l.c (__ieee754_log2l): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/e_rem_pio2l.c (__ieee754_rem_pio2l): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/e_sinhl.c (__ieee754_sinhl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/k_cosl.c (__kernel_cosl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/k_sincosl.c (__kernel_sincosl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/k_sinl.c (__kernel_sinl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_asinhl.c (__asinhl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_atanl.c (__atanl): Likewise.  Simplify sign and
      nan test too.
    * sysdeps/ieee754/ldbl-128ibm/s_cosl.c (__cosl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_fabsl.c (__fabsl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_finitel.c (___finitel): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_fpclassifyl.c (___fpclassifyl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_isnanl.c (___isnanl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_issignalingl.c (__issignalingl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_logbl.c (__logbl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_signbitl.c (___signbitl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_sincosl.c (__sincosl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_sinl.c (__sinl): Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_tanl.c (__tanl): Likewise.
    * sysdeps/powerpc/powerpc32/power7/fpu/s_logbl.c (__logbl): Likewise.
* PowerPC floating point little-endian [1 of 15]  (Alan Modra, 2013-10-04; 11 files, -250/+131)

  http://sourceware.org/ml/libc-alpha/2013-08/msg00081.html

  This is the first of a series of patches to ban ieee854_long_double and the
  ieee854_long_double macros when using IBM long double.  union ieee854_long_double
  just isn't correct for IBM long double, especially when little-endian, and
  pretending it is OK has allowed a number of bugs to remain undetected in
  sysdeps/ieee754/ldbl-128ibm/.  This changes the few places in generic code that
  use it.

    * stdio-common/printf_size.c (__printf_size): Don't use union
      ieee854_long_double in fpnum union.
    * stdio-common/printf_fphex.c (__printf_fphex): Likewise.  Use signbit macro to
      retrieve sign from long double.
    * stdio-common/printf_fp.c (___printf_fp): Use signbit macro to retrieve sign
      from long double.
    * sysdeps/ieee754/ldbl-128ibm/printf_fphex.c: Adjust for fpnum change.
    * sysdeps/ieee754/ldbl-128/printf_fphex.c: Likewise.
    * sysdeps/ieee754/ldbl-96/printf_fphex.c: Likewise.
    * sysdeps/x86_64/fpu/printf_fphex.c: Likewise.
    * math/test-misc.c (main): Don't use union ieee854_long_double.

  ports/
    * sysdeps/ia64/fpu/printf_fphex.c: Adjust for fpnum change.
* Fix for [BZ #15680] IBM long double inaccuracy  (Alan Modra, 2013-10-04; 6 files, -106/+176)

  http://sourceware.org/ml/libc-alpha/2013-06/msg00919.html

  I discovered a number of places where denormals and other corner cases were being
  handled wrongly.

  - printf_fphex.c: Testing for the low double exponent being zero is unnecessary.
    If the difference in exponents is less than 53 then the high double exponent must
    be nearing the low end of its range, and the low double exponent hit rock bottom.
  - ldbl2mpn.c: A denormal (ie. exponent of zero) value is treated as if the exponent
    was one, so shift mantissa left by one.  Code handling normalisation of the low
    double mantissa lacked a test for shift count greater than bits in type being
    shifted, and lacked anything to handle the case where the difference in exponents
    is less than 53 as in printf_fphex.c.
  - math_ldbl.h (ldbl_extract_mantissa): Same as above, but worse, with code testing
    for exponent > 1 for some reason, probably a typo for >= 1.
  - math_ldbl.h (ldbl_insert_mantissa): Round the high double as per mpn2ldbl.c (hi
    is odd or explicit mantissas non-zero) so that the number we return won't change
    when applying ldbl_canonicalize().  Add missing overflow checks and normalisation
    of high mantissa.  Correct misleading comment: "The hidden bit of the lo mantissa
    is zero" is not always true as can be seen from the code rounding the hi
    mantissa.  Also by inspection, lzcount can never be less than zero so remove that
    test.  Lastly, masking bitfields to their widths can be left to the compiler.
  - mpn2ldbl.c: The overflow checks here on rounding of high double were just plain
    wrong.  Incrementing the exponent must be accompanied by a shift right of the
    mantissa to keep the value unchanged.  Above notes for ldbl_insert_mantissa are
    also relevant.

  [BZ #15680]
    * sysdeps/ieee754/ldbl-128ibm/e_rem_pio2l.c: Comment fix.
    * sysdeps/ieee754/ldbl-128ibm/printf_fphex.c (PRINT_FPHEX_LONG_DOUBLE): Tidy code
      by moving -53 into ediff calculation.  Remove unnecessary test for denormal
      exponent.
    * sysdeps/ieee754/ldbl-128ibm/ldbl2mpn.c (__mpn_extract_long_double): Correct
      handling of denormals.  Avoid undefined shift behaviour.  Correct normalisation
      of low mantissa when low double is denormal.
    * sysdeps/ieee754/ldbl-128ibm/math_ldbl.h (ldbl_extract_mantissa): Likewise.
      Comment.  Use uint64_t* for hi64.
      (ldbl_insert_mantissa): Make both hi64 and lo64 parms uint64_t.  Correct
      normalisation of low mantissa.  Test for overflow of high mantissa and
      normalise.
      (ldbl_nearbyint): Use more readable constant for two52.
    * sysdeps/ieee754/ldbl-128ibm/mpn2ldbl.c (__mpn_construct_long_double): Fix test
      for overflow of high mantissa and correct normalisation.  Avoid undefined
      shift.
* IBM long double mechanical changes to support little-endian  (Alan Modra, 2013-10-04; 11 files, -184/+161)

  http://sourceware.org/ml/libc-alpha/2013-07/msg00001.html

  This patch starts the process of supporting powerpc64 little-endian long double in
  glibc.  IBM long double is an array of two ieee doubles, so making union
  ibm_extended_long_double reflect this fact is the correct way to access fields of
  the doubles.

    * sysdeps/ieee754/ldbl-128ibm/ieee754.h (union ibm_extended_long_double): Define
      as an array of ieee754_double.
      (IBM_EXTENDED_LONG_DOUBLE_BIAS): Delete.
    * sysdeps/ieee754/ldbl-128ibm/printf_fphex.c: Update all references to
      ibm_extended_long_double and IBM_EXTENDED_LONG_DOUBLE_BIAS.
    * sysdeps/ieee754/ldbl-128ibm/e_exp10l.c: Likewise.
    * sysdeps/ieee754/ldbl-128ibm/e_expl.c: Likewise.
    * sysdeps/ieee754/ldbl-128ibm/ldbl2mpn.c: Likewise.
    * sysdeps/ieee754/ldbl-128ibm/math_ldbl.h: Likewise.
    * sysdeps/ieee754/ldbl-128ibm/mpn2ldbl.c: Likewise.
    * sysdeps/ieee754/ldbl-128ibm/s_nearbyintl.c: Likewise.
    * sysdeps/ieee754/ldbl-128ibm/strtold_l.c: Likewise.
    * sysdeps/ieee754/ldbl-128ibm/x2y2m1l.c: Likewise.
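  A sketch of such a union, following the description above (illustrative only; the
  actual definition is in sysdeps/ieee754/ldbl-128ibm/ieee754.h):

      #include <ieee754.h>

      /* Represent an IBM long double as two ieee754_double halves so each
         half's sign, exponent and mantissa fields are accessed the same way
         on big- and little-endian.  */
      union ibm128_pair
      {
        long double ld;
        union ieee754_double d[2];   /* assumed: d[0] high double, d[1] low double */
      };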
* Hardcode locale archive page size as 4096.  (Joseph Myers, 2013-10-03; 2 files, -1/+9)