Mostly, it's just the repr of numpy arrays that's fooling you.
Consider your example above:
import numpy as np
x = float(1) - np.array([1e-10, 1e-5])
print(x)
print(x[0])
print(x[0] == 1.0)
This yields:
[1.      0.99999]
0.9999999999
False
So the first element isn't actually 1.0; it's just the pretty-printing of numpy arrays, which rounds to 8 significant digits by default, that's showing it that way.
This can be controlled by numpy.set_printoptions.
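For example, raising the display precision makes the stored value visible (the precision value here is just illustrative):

```python
import numpy as np

x = float(1) - np.array([1e-10, 1e-5])

# Default display rounds to 8 significant digits,
# so 0.9999999999 is shown as 1.
print(x)

# Raise the displayed precision to see the actual stored values.
np.set_printoptions(precision=16)
print(x)

# The underlying data never changed; only the display did.
print(x[0] == 1.0)
```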
Of course, numpy is fundamentally using limited-precision floats. The whole point of numpy is to be a memory-efficient container for arrays of homogeneous data, so there's no equivalent of Python's decimal class in numpy.
However, 64-bit floats carry about 15-16 significant decimal digits of precision, so you won't hit too many problems with values like 1e-10 and 1e-5. If you need more, there's also a numpy.float128 dtype on many platforms (it's typically 80-bit extended precision rather than true quad precision), but operations on it will be much slower than using native floats.
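Relatedly, when you do need to compare floating-point results, test with a tolerance instead of ==. One common approach is numpy.isclose; the tolerances below are chosen for this example, not defaults you must use:

```python
import numpy as np

x = float(1) - np.array([1e-10, 1e-5])

# Exact comparison fails because of the rounding error.
print(x[0] == 1.0)                                   # False

# Tolerance-based comparison distinguishes "off by 1e-10"
# from "off by 1e-5" with an absolute tolerance of 1e-8.
print(np.isclose(x[0], 1.0, rtol=0.0, atol=1e-8))    # True
print(np.isclose(x[1], 1.0, rtol=0.0, atol=1e-8))    # False
```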