If your variables are floating-point numbers, IEEE754 (the floating-point standard, which is supported by most modern processors and languages) has your back: it is a little-known feature, but the standard defines not one, but a whole family of NaN (not-a-number) values, which can be used for arbitrary application-defined meanings. In single-precision floats, for instance, you have 22 free bits that you can use to distinguish between 2^22 types of invalid values.
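
For concreteness, here is a minimal sketch of what that looks like at the bit level (assuming the standard binary32 layout of 1 sign bit, 8 exponent bits and 23 mantissa bits, and using Python's struct module to peek at the bits):

```python
import struct

# Reinterpret the default NaN as a 32-bit integer and print its bit pattern.
# On typical platforms this is the quiet NaN 0x7FC00000.
bits = struct.unpack('<I', struct.pack('<f', float('nan')))[0]
print(f'{bits:032b}')
# 01111111110000000000000000000000
# sign = 0, exponent = 11111111 (all ones marks a NaN),
# mantissa = 1 followed by 22 zeros: the leading 1 is the "quiet" flag,
# and the 22 bits after it are free for an application-defined payload.
```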

Normally, programming interfaces expose only one of them (e.g., NumPy's nan); I don't know of a built-in way to generate the others short of explicit bit manipulation, but it's just a matter of writing a couple of low-level routines. (You will also need one to tell them apart, because, by design, a == b always returns false when either operand is a NaN.)
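
Something along these lines would do (make_nan and nan_payload are hypothetical helpers, not part of any library; the sketch assumes the binary32 layout above, and that the payload survives the float-to-double round trip through Python's own float objects, as it does on typical hardware):

```python
import struct

QNAN = 0x7FC00000  # base bit pattern of a positive quiet NaN in binary32

def make_nan(payload):
    """Build a single-precision quiet NaN carrying a 22-bit payload."""
    if not 0 <= payload < (1 << 22):
        raise ValueError("payload must fit in 22 bits")
    return struct.unpack('<f', struct.pack('<I', QNAN | payload))[0]

def nan_payload(x):
    """Return x's payload, or None if x is not a quiet NaN."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return bits & ((1 << 22) - 1) if (bits & QNAN) == QNAN else None

missing   = make_nan(1)   # e.g. "sensor offline"
corrupted = make_nan(2)   # e.g. "checksum failed"
print(missing == corrupted)                           # False: NaNs never compare equal
print(nan_payload(missing), nan_payload(corrupted))   # 1 2
```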

Using them is better than reinventing your own "magic number" to signal invalid data, because they propagate correctly and signal invalid-ness: for instance, you don't risk shooting yourself in the foot if you use an average() function and forget to check for your special values.
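
A toy illustration of that foot-gun (hypothetical numbers, with NumPy's built-in nan standing in for any of the NaN variants):

```python
import numpy as np

magic  = np.array([1.0, 2.0, -9999.0])   # -9999.0 as a home-made "invalid" marker
proper = np.array([1.0, 2.0, np.nan])    # an IEEE754 NaN as the marker

print(magic.mean())        # -3332.0: plausible-looking, silently wrong
print(proper.mean())       # nan: the invalid entry propagated, impossible to miss
print(np.nanmean(proper))  # 1.5: skipping invalid entries is now an explicit choice
```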

The only risk is libraries not supporting them correctly, since they are quite an obscure feature: for instance, a serialization library may 'flatten' them all to the same NaN (which, as far as the library can tell, is equivalent for most purposes).
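
One concrete example of that flattening (this uses Python's built-in json module with its default settings, which writes every NaN as the bare literal NaN, so no payload ever reaches the serialized text):

```python
import json
import struct

# Build a NaN carrying payload 3 (same bit-level trick as above) and
# round-trip it through JSON.
x = struct.unpack('<f', struct.pack('<I', 0x7FC00003))[0]
y = json.loads(json.dumps(x))   # the serialized text is just 'NaN'

payload = lambda v: struct.unpack('<I', struct.pack('<f', v))[0] & 0x3FFFFF
print(payload(x), payload(y))   # 3 0 -- the round trip collapsed it to the default NaN
```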
