If your variables are floating-point numbers, IEEE 754 (the floating-point standard supported by most modern processors and languages) has your back: it is a little-known feature, but the standard defines not one but a whole family of NaN (not-a-number) values, whose payload bits can carry arbitrary application-defined meanings. In single-precision floats, for instance, after the bit that distinguishes quiet from signaling NaNs you have 22 free mantissa bits, enough to distinguish 2^22 kinds of invalid value.
Normally, programming interfaces expose only one of them (e.g., NumPy's nan); I don't know of a built-in way to generate the others short of explicit bit manipulation, but it is just a matter of writing a couple of low-level routines. (You will also need one to tell them apart, because, by design, a == b always returns false when either operand is a NaN.)
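As a sketch of what those low-level routines could look like: since Python floats are double-precision rather than single, this version packs the payload into the low 51 mantissa bits of a quiet double NaN (the function names are my own, not from any library):

```python
import math
import struct

QUIET_BIT = 1 << 51  # quiet-NaN flag bit in an IEEE 754 double

def nan_with_payload(payload: int) -> float:
    """Build a quiet NaN carrying `payload` in its low 51 mantissa bits."""
    assert 0 <= payload < QUIET_BIT
    # exponent all ones (0x7FF), quiet bit set, payload in the remaining bits
    bits = (0x7FF << 52) | QUIET_BIT | payload
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

def nan_payload(x: float) -> int:
    """Recover the payload from such a NaN, or -1 if x is not a NaN."""
    if not math.isnan(x):
        return -1
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    return bits & (QUIET_BIT - 1)
```

The struct round trip copies raw bytes, so no conversion touches the payload; `nan_payload` is the "tell them apart" routine, needed precisely because == cannot compare NaNs.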
Using them is better than inventing your own "magic number" to signal invalid data, because NaNs propagate through arithmetic and keep signalling invalidity: you don't risk shooting yourself in the foot if, say, you pass the data to an average() function and forget to filter out your special values first.
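A tiny illustration of that propagation, with a hypothetical average() over one invalid sample:

```python
import math

def average(xs):
    """Naive mean with no special handling of invalid values."""
    return sum(xs) / len(xs)

readings = [20.0, 21.5, float("nan")]  # NaN marks the invalid sample
magic = [20.0, 21.5, -999.0]           # a magic number marking the same

# The NaN poisons the sum, so the result is visibly invalid...
assert math.isnan(average(readings))
# ...while the magic number is silently averaged in, corrupting the result.
assert average(magic) < 0
```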
The only risk is libraries that do not handle them correctly, since this is quite an obscure feature: a serialization library, for instance, may 'flatten' them all to the same canonical NaN (which looks equivalent for most purposes).
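The standard json module is one example of this flattening, at least on CPython: it serializes every NaN as the bare token NaN, so the payload bits do not survive a round trip (the tagged value below is hand-built for illustration):

```python
import json
import math
import struct

def bits(x: float) -> int:
    """Raw IEEE 754 bit pattern of a Python float (a double)."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

# A quiet double NaN with a nonzero payload (0x42) in its low mantissa bits.
tagged = struct.unpack("<d", struct.pack("<Q", 0x7FF8000000000042))[0]

restored = json.loads(json.dumps(tagged))  # serialized as the token 'NaN'
assert math.isnan(restored)                # still a NaN after the round trip...
assert bits(restored) != bits(tagged)      # ...but the payload has been lost
```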