`ax_sign ^ (!same + same * (ax_abs - bx_abs));` returns an incorrect signed result if `int` is not 32-bit. Certainly a problem if `int` is 16-bit and likely a problem if `int` is 64-bit. If `unsigned`/`int` needs to be 32-bit, use `(u)int32_t`.

The `,` operator reduces clarity here. Suggest 2 lines of code:

```c
// ax.f = a->x, bx.f = b->x;
ax.f = a->x;
bx.f = b->x;
```

I ran test cases and found OP's `x_cmp()` functionally correct for finite `float` over 12,000,000,000 test cases.
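To illustrate the fixed-width point, here is a minimal sketch (my naming and code, not OP's) of a bit-pattern compare done entirely in `uint32_t` and reduced to -1/0/+1 only at the end, so the result is correct for any width of `int`. It assumes `float` is 32-bit IEEE-754 and that the elements are plain `float` objects:

```c
#include <stdint.h>
#include <string.h>

// Hypothetical width-safe variant, not OP's x_cmp(): map each float's
// sign-magnitude bit pattern to a monotonically ordered unsigned key,
// then compare the keys.
static int x_cmp_fixed_width(const void *av, const void *bv) {
    uint32_t au, bu;
    memcpy(&au, av, sizeof au);   // assumes 32-bit IEEE-754 float
    memcpy(&bu, bv, sizeof bu);
    uint32_t ak = (au & 0x80000000u) ? ~au : (au | 0x80000000u);
    uint32_t bk = (bu & 0x80000000u) ? ~bu : (bu | 0x80000000u);
    return (ak > bk) - (ak < bk); // -1, 0, +1 regardless of int width
}
```

Note this sketch shares the caveats discussed below: it orders -0.0 before +0.0, and it splits NaN by sign bit rather than collecting all NaN on one side.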
As a test case, I tried OP's original `x_cmp()` versus the code below; the new code was at least 10% faster. Of course, that is just one platform comparison, yet aside from NaN issues, the code below is functionally similar to OP's and, as a plus, is highly portable - unlike OP's. The point being that OP's compare method needs some reference point to justify the bit magic.

```c
static int x_cmp_ref(const void *av, const void *bv) {
    return (*(float*)av > *(float*)bv) - (*(float*)av < *(float*)bv);
}
```
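For what it is worth, a hedged sketch of the kind of timing comparison meant here, using `qsort()` and `clock()`. The element count, the test data, and the use of my hypothetical `x_cmp_fixed_width()` above as a stand-in for OP's `x_cmp()` are all my assumptions, not OP's setup:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

// Time one qsort() call with a given comparator.
static double time_sort(float *data, size_t n,
                        int (*cmp)(const void *, const void *)) {
    clock_t t0 = clock();
    qsort(data, n, sizeof *data, cmp);
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void) {
    enum { N = 1000000 };                          // element count: my choice
    static float base[N], work[N];
    for (size_t i = 0; i < N; i++) {
        base[i] = (float)rand() - RAND_MAX / 2.0f; // finite test data only
    }

    memcpy(work, base, sizeof base);
    printf("x_cmp_ref:         %.3f s\n", time_sort(work, N, x_cmp_ref));

    memcpy(work, base, sizeof base);               // same input for both runs
    // Swap in OP's x_cmp() here, if the element type matches, to reproduce
    // the comparison described in this review.
    printf("x_cmp_fixed_width: %.3f s\n", time_sort(work, N, x_cmp_fixed_width));
    return 0;
}
```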
OP has not stated the desired compare behavior for Not-a-Number floats. A desirable aspect is that all NaN sort to one side, either all greater than or all less than any other number, regardless of the NaN's "sign". `x_cmp()` considers the sign first, without regard to NaN-ness.
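Should that behavior be wanted, a minimal sketch (my naming, not OP's code) that classifies NaN first and only then falls back to the ordinary compare:

```c
#include <math.h>

// Hypothetical example: every NaN compares greater than any non-NaN value,
// and NaN compares equal to NaN, regardless of the NaN's sign bit.
static int x_cmp_nan_high(const void *av, const void *bv) {
    float a = *(const float *)av;
    float b = *(const float *)bv;
    int a_nan = isnan(a);
    int b_nan = isnan(b);
    if (a_nan || b_nan) {
        return a_nan - b_nan;     // NaN vs. non-NaN: NaN is "greater"
    }
    return (a > b) - (a < b);     // ordinary finite/infinite compare
}
```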
As a reference, I used the following to generate random `float`, rejecting bit patterns that are not finite:

```c
#include <math.h>    // isfinite()
#include <stdlib.h>  // rand()

float randf(void) {
    union {
        float f;
        unsigned char uc[sizeof (float)];
    } u;
    do {
        // Fill the float's bytes with random bits ...
        for (unsigned i = 0; i < sizeof u.uc; i++) {
            u.uc[i] = (unsigned char) rand();
        }
    } while (!isfinite(u.f));     // ... and retry on NaN or infinity.
    return u.f;
}
```
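And a hedged sketch of the kind of brute-force check meant above, wired to `randf()` and `x_cmp_ref()` from this answer. The hypothetical `x_cmp_fixed_width()` from the earlier sketch stands in for OP's `x_cmp()`, and the iteration count is far smaller than the 12,000,000,000 quoted:

```c
#include <stdio.h>
#include <stdlib.h>

static int sign_of(int x) { return (x > 0) - (x < 0); }

int main(void) {
    srand(42);   // fixed seed so a failure is reproducible
    for (unsigned long long i = 0; i < 10000000; i++) {
        float a = randf();
        float b = randf();
        if (a == b) {
            continue;  // equal floats need no cross-check; this also skips
                       // -0.0 vs +0.0, which the bit compare orders.
        }
        if (sign_of(x_cmp_fixed_width(&a, &b)) != sign_of(x_cmp_ref(&a, &b))) {
            printf("Mismatch: %a vs %a\n", a, b);
            return EXIT_FAILURE;
        }
    }
    puts("No mismatches found");
    return EXIT_SUCCESS;
}
```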
At a later time, I may try to implement the idea in this comment.