
I was having some issues with this code. It was working in debug mode but would erroneously output 0.0 for some of the CSV lines in release mode. It was only when I added the preprocessor commands that turn off optimisation around line 19 that everything started to work in release mode. Does anyone have any insight into why this is the case? I've never come across this sort of behavior before.

#include <vector>
#include <complex>
#include <fstream>

size_t constexpr kBins = 64;
double constexpr kFrequency = 16.0;

double Triangle(double const bin)
{
    return abs(bin - floor(bin + 1.0 / 2.0));
}

int main()
{
    std::vector<std::complex<double>> input{kBins, {0.0, 0.0}};
    for (size_t i = 0; i < kBins; ++i)
    {
#pragma optimize("" off)
        double value = sin(2.0 * M_PI * kFrequency * Triangle(static_cast<double>(i) / kBins)) / (2.0 * M_PI * kFrequency * Triangle(static_cast<double>(i) / kBins));
#pragma optimize("" on)
        input[i] = fpclassify(value) == FP_NAN ? 1.0 : value;
    }

    std::ofstream output_file{"output.csv"};
    if (output_file.is_open())
    {
        for (size_t i = 0; i < kBins; ++i)
        {
            output_file << (static_cast<double>(i) / kBins) << ", " << input[i].real() << ", " << input[i].imag() << std::endl;
        }
        output_file.close();
    }
}
  • Which compiler? Commented Dec 2, 2020 at 22:04
  • 2
    Please provide a minimal reproducible example. When you feel like the optimizer is breaking your code, 99% of the time you have UB, 0.9% of the time is when you have misconstrued assumptions on floating point accuracy. The remaining is a compiler bug. Commented Dec 2, 2020 at 22:05
  • Clang. The minimal reproducible example is just when the #pragma lines are removed. Commented Dec 2, 2020 at 22:08
  • @JoshuaWilliams The ofstream stuff is irrelevant (unless you are being rounded down by the formatting), and you don't need 64 values to see one is broken. Commented Dec 2, 2020 at 22:10
  • Simplify your giant expression on line 19 into sub-expressions, and add checks for each of them to ensure they're correct. Turn on the strictest compiler warning level as well. Usually, "my code doesn't work when optimized" types of errors mean you've used an undefined value somewhere, or modified a value within an expression (between sequence points) Commented Dec 2, 2020 at 22:13

1 Answer


Your input vector is not being initialized as you expect. It always has size two, and all the strange results you observe come from undefined behaviour: you are reading the vector out of bounds. You could have found this with your debugger, by checking the index against input.size(), or by using input.at(i).
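
For illustration, a bounds-checked probe along these lines (a minimal sketch, not part of your program) makes the problem visible immediately:

#include <complex>
#include <cstddef>
#include <iostream>
#include <stdexcept>
#include <vector>

std::size_t constexpr kBins = 64;

int main()
{
    std::vector<std::complex<double>> input{kBins, {0.0, 0.0}};

    // Prints 2, not 64: only two elements were ever constructed.
    std::cout << "size = " << input.size() << '\n';

    // at() turns the silent out-of-bounds read into a thrown exception.
    try
    {
        input.at(2);
    }
    catch (std::out_of_range const& e)
    {
        std::cout << "caught: " << e.what() << '\n';
    }
}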

It turns out that size_t is convertible to std::complex<double>, and that this line:

std::vector<std::complex<double>> input{kBins, {0.0, 0.0}};

is constructing a vector with only two elements, even though kBins = 64! The constructor taking a std::initializer_list<std::complex<double>> is being chosen, and kBins is implicitly converted to std::complex<double> to form the first element of that list.
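
You can verify which elements actually ended up in the vector by printing them: the first is kBins converted to a complex number, the second is your fill value (a small sketch, assuming the same kBins as above):

#include <complex>
#include <cstddef>
#include <iostream>
#include <vector>

std::size_t constexpr kBins = 64;

int main()
{
    // Braces pick the initializer_list constructor, so the list holds
    // exactly two values: kBins converted to a complex, and {0.0, 0.0}.
    std::vector<std::complex<double>> input{kBins, {0.0, 0.0}};
    for (auto const& element : input)
    {
        std::cout << element << '\n';   // prints (64,0) then (0,0)
    }
}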

One way to get around this is to initialize your vector differently, using the auto keyword to avoid the Most Vexing Parse and using parentheses instead of curly braces to avoid accidentally passing a std::initializer_list:

auto input = std::vector<std::complex<double>>(kBins, {0.0, 0.0});

With this change, the (range-checked) code runs without errors.
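
As an aside, std::complex<double> value-initializes to (0,0), so the explicit fill value could be dropped entirely; something along these lines (a sketch, either form works) also produces the 64 zeroed elements you intended:

#include <complex>
#include <cstddef>
#include <vector>

std::size_t constexpr kBins = 64;

int main()
{
    // Parentheses select the (count, value) constructor: 64 elements, all (0,0).
    auto filled = std::vector<std::complex<double>>(kBins, {0.0, 0.0});

    // Value-initialization gives (0,0) for std::complex<double>, so the
    // fill value can be omitted.
    std::vector<std::complex<double>> zeroed(kBins);

    return filled.size() == zeroed.size() ? 0 : 1;   // both contain 64 elements
}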


2 Comments

You don't get the vexing parse because of {0.0, 0.0}.
@PasserBy good catch, although it's a good habit to be aware of MVP and to know how to avoid it
