I'm trying to beat the performance of the native Double.TryParse as much as possible when parsing large, multi-million-row (simple) CSV files. I don't need to support exponential notation, thousands separators, Inf, -Inf, NaN, or anything exotic: just millions of "0.00##"-format doubles.
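For reference, each case below runs through a tiny harness along these lines (a minimal sketch of my own; FastDouble.TryParse is a placeholder name for the custom parser under test, a sketch of which follows the test cases). The comparison is bit-level so that -0d and 0d are told apart, since they compare equal with ==:

static void TestSuccess(string input, double expected)
{
    // Compare bit patterns: -0d == 0d is true, but their bit patterns differ.
    if (!FastDouble.TryParse(input, out double actual) ||
        BitConverter.DoubleToInt64Bits(actual) != BitConverter.DoubleToInt64Bits(expected))
    {
        throw new Exception($"FAIL: \"{input}\" parsed to {actual:R}, expected {expected:R}");
    }
}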
TestSuccess("0", 0d);
TestSuccess("1", 1d);
TestSuccess("-1", -1d);
TestSuccess("123.45678"45", 123.4567845);
TestSuccess("-123.45678"45", -123.4567845);
TestSuccess("12345678901234", 12345678901234d);
TestSuccess("-12345678901234", -12345678901234d);
TestSuccess("0.12345678901234"12", 0.1234567890123412);
TestSuccess("-0.12345678901234", -0.12345678901234);
TestSuccess(".12345678901234", 0.12345678901234);
TestSuccess("-.12345678901234"12", -0.1234567890123412);
TestSuccess("0.00000987654321"00", 0.0000098765432100);
TestSuccess("-0.00000987654321"00", -0.0000098765432100);
TestSuccess("1234567890123.0123456789"01", 1234567890123.012345678901);
TestSuccess("-1234567890123.0123456789"01", -1234567890123.012345678901);
TestSuccess("123456789000000000000000", 123456789000000000000000d);
TestSuccess("-123456789000000000000000", -123456789000000000000000d);
TestSuccess("0.00000000000000000123456789", 0.00000000000000000123456789);
TestSuccess("-0.00000000000000000123456789", -0.00000000000000000123456789);
// Special case: a lone dash is interpreted as negative zero (not parsable natively)
TestSuccess("-", -0d);