Suppose I'm using a 32-bit float to store a bit-string (don't ask). Suppose further I'd like to serialize this float to a file as decimal text, and will employ banker's rounding (round half to even) on the decimal representation before writing it out. When I read the value back into the program, the system will (naturally) parse it into the 32-bit float closest to the serialized decimal number.
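For concreteness, here is a minimal sketch of the round-trip I have in mind (Python used purely for illustration; `struct` stands in for the 32-bit float storage, and `PRECISION` is the unknown digit count my question is about):

```python
import struct

PRECISION = 9  # trial value -- the question is what this must be

def to_binary32(x: float) -> float:
    """Force a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

original = to_binary32(0.1)  # some binary32 value holding my bit-string

# Serialize: format with PRECISION significant digits. CPython's float
# formatting is correctly rounded and breaks ties with round half to even,
# i.e. banker's rounding.
serialized = format(original, f'.{PRECISION - 1}e')

# Deserialize: parse the decimal string, then snap to the nearest binary32.
restored = to_binary32(float(serialized))

# The property I need: bit-for-bit equality of the two binary32 values.
assert struct.pack('<f', original) == struct.pack('<f', restored)
```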
How many significant decimal digits must the serialized representation retain, after banker's rounding, to guarantee that the float read back is bit-for-bit identical to the float I serialized?