
I'm looking for a function in C++ that can determine how accurately all floating point numbers in a given range can be represented as strings, without using external libraries such as Boost.

Please take a look at my code and tell me how it can be improved.

#include <iostream>
#include <string>     // std::string
#include <algorithm>  // std::max
#include <cmath>
#include <cfloat>

/**
 * Calculates the scale of a given string.
 *
 * @param str the string to calculate the scale of
 *
 * @return the scale of the string
 *
 * @throws None
 */
int get_scale_of(const std::string &str)
{
    int scale = 0;
    for (int i = str.length() - 1; i >= 0; i--)
    {
        if (str[i] != '0')
        {
            break;
        }
        scale++;
    }
    return str.length() - str.find('.') - scale - 1;
}

/**
 * Calculates the maximum precision in the range of floating-point numbers.
 *
 * @param min The minimum value of the range.
 * @param max The maximum value of the range.
 *
 * @return The maximum precision in the range.
 *
 * @throws None.
 */
int get_max_precision_in_range_for_float(const int min, const int max)
{
    int precision = 100;
    float f = min;
    while (f < max)
    {
        float n = std::nextafter(f, FLT_MAX);
        std::string s1 = std::to_string(f);
        std::string s2 = std::to_string(n);
        while (s1 == s2)
        {
            n = std::nextafter(n, FLT_MAX);
            s2 = std::to_string(n);
        }
        int p = std::max(get_scale_of(s1), get_scale_of(s2));
        if (p < precision)
        {
            precision = p;
            // Debug outputs:
            std::cout << "s1 = " << s1 << "\n";
            std::cout << "s2 = " << s2 << "\n";
        }
        f = n;
    }
    return precision;
}

int main(int argc, char const *argv[])
{
    // Some examples:
    std::cout << get_max_precision_in_range_for_float(1, 2) << " digits accuracy\n";
    std::cout << get_max_precision_in_range_for_float(7, 8) << " digits accuracy\n";
    std::cout << get_max_precision_in_range_for_float(500000, 600000) << " digits accuracy\n";
    return 0;
}
  • Relevant reading: Float Precision–From Zero to 100+ Digits by Bruce Dawson. Commented Jan 11, 2024 at 10:32
  • Tobias Grothe, what was the (non-debug) output from your machine? Commented Jul 5, 2024 at 17:50

1 Answer


Precision vs digits

While converting a floating point number to decimal may produce a certain number of digits, that's not the same as the precision of the floating point number itself: only some of those digits are significant. Keep in mind that digits before the decimal point also count toward the significant digits.
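To illustrate the difference with a small sketch (not part of the reviewed code): printing 0.1f with far more decimal places than a float can hold shows the exact decimal value of the stored binary number, but only roughly the first 7 of those digits are significant.

#include <iomanip>
#include <iostream>

int main()
{
    // 0.1f is stored as the nearest binary fraction; printing it with many
    // decimal places reveals digits that are exact for the stored value but
    // not significant for the intended decimal 0.1.
    std::cout << std::setprecision(30) << 0.1f << "\n";
    // Typically prints 0.100000001490116119384765625
}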

It's easy to mistake the number of digits for precision, but we should try to avoid that. Your function might still be useful as it is, but then it should be renamed. Maybe get_max_decimal_representation_digits_in_range_for_float()? That's awfully long though, maybe get_max_decimals_in_range() would be better.

Converting floating point numbers to strings

You are using std::to_string() to convert floating point numbers to strings. However, this always limits the number of digits after the decimal separator to 6, regardless of how large or small the number is.

floats have a 24-bit significand (23 stored bits plus an implicit leading bit), which is a bit more than 7 decimal digits. To represent the value of a float in decimal notation exactly, you might even need more digits than that. You can use std::to_chars() to do that.
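For example, a small sketch of a replacement conversion (the helper name float_to_string is mine; when called without a precision argument, std::to_chars produces the shortest string that round-trips back to the same float):

#include <charconv>   // std::to_chars, C++17
#include <iostream>
#include <string>

// Sketch: shortest decimal string that converts back to exactly the same float.
std::string float_to_string(float value)
{
    char buffer[64];
    auto result = std::to_chars(buffer, buffer + sizeof(buffer), value);
    return std::string(buffer, result.ptr);
}

int main()
{
    std::cout << float_to_string(0.1f) << "\n";        // prints "0.1"
    std::cout << float_to_string(1.0000001f) << "\n";  // prints "1.0000001"
}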

Improve the interface

Despite the name mentioning float, the two arguments are ints. What if I want to know the max number of decimals in the range 0.1 to 0.2? Or from 1.0e41 to 1.0e42? What about doubles? I would take the parameters as floating point numbers. And to make it also handle doubles, make it a template:

template<typename T>
int get_max_decimals_in_range(const T min, const T max)
{
    int precision = 100;
    T f = min;
    while (f < max) {
        …
    }
    return precision;
}

Also consider that there are several ways to convert floating point values to strings. Instead of hardcoding which method to use, you can let the caller pass in the conversion function they want tested, or use a default function if they don't provide anything. It could look like:

template<typename T>
int get_max_decimals_in_range(const T min, const T max,
                              std::function<std::string(T)> convert = [](T value){
                                  return std::to_string(value);
                              })
{
    …
    std::string s1 = convert(f);
    std::string s2 = convert(n);
    …
}

And then call it like:

std::cout << get_max_decimals_in_range<float>(1, 2) << " digits accuracy\n";

Runtime

Your program will run for a long time. It will do about \$2^{23}\$ string conversions and comparisons to check the maximum number of decimals in the range of 1 to 2. But that's a lot of wasted time, as std::to_string() is capped at 6 decimals, and you reach that number after just a few iterations of your loops. So there is a lot of potential for optimization here. Even better:

You don't need string conversion

The smallest difference between consecutive floating point numbers in the given range is std::nextafter(min, max) - min. In binary, that difference is of the form 0.000…001, so you can use std::log2() of it to find how many binary digits there are after the binary point. From that you can in principle calculate the number of decimal digits you need. I'll leave that as an exercise for the reader.
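A rough sketch of how that calculation could look (the helper decimals_needed is only an illustration, assuming 0 < min < max and that the range stays within normal float values):

#include <cmath>
#include <iostream>

// Sketch: the smallest step between consecutive floats in [min, max) is the
// ULP at min. -log2 of that step is the number of binary digits after the
// binary point; multiplying by log10(2) converts that to decimal digits.
int decimals_needed(float min, float max)
{
    float ulp = std::nextafter(min, max) - min;
    double binary_digits = -std::log2(ulp);   // 23 for the range [1, 2)
    return static_cast<int>(std::ceil(binary_digits * std::log10(2.0)));
}

int main()
{
    std::cout << decimals_needed(1.0f, 2.0f) << "\n";   // 7
    std::cout << decimals_needed(7.0f, 8.0f) << "\n";   // 7
}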

