  1. As the title suggests, is there a way in C to detect such errors? For example, if I try a complex computation like int a = b*c + d*e/f; can I detect whether the result has gone beyond the range of int?

  2. Suppose I decide that handling such errors is too much of a performance hit and leave them unchecked: what are the security implications, and what is the worst that could happen? Can a malware author exploit this bug?

  • 1) No. 2) Yes, and also it doesn't really make sense to ask about the behaviour of undefined behaviour. You can make a correct program fast, but it's very hard to make a fast program correct. Commented Jan 11, 2016 at 19:45
  • Possible duplicate of stackoverflow.com/questions/199333/… Commented Jan 11, 2016 at 19:48
  • @KerrekSB So do you actually mean to say I'm helpless if I wanted to make a program that's "arithmetic overflow"-proof? I mean I'm ready to do anything to make the program robust. Commented Jan 11, 2016 at 20:02
  • You can always test for potential overflow. The problem is how much overhead/additional complexity (source code and runtime) you can accept. Commented Jan 11, 2016 at 20:07
  • Q2: if it can overflow, your code will probably break before any malware can take advantage anyway. Commented Jan 11, 2016 at 20:17

2 Answers


In the simpler case of unsigned int, you can test for overflow before the multiplication like this:

#include <stdio.h>
#include <limits.h>

void multiply(unsigned a, unsigned b)
{
    /* a * b exceeds UINT_MAX exactly when b > UINT_MAX / a; check a
       against zero first to avoid dividing by zero. */
    if (a == 0 || UINT_MAX / a >= b)
        printf("%u * %u = %u\n", a, b, a * b);
    else
        printf("%u * %u is out of range\n", a, b);
}

int main(void)
{
    multiply(70000, 60000);
    multiply(70000, 80000);

    return 0;
}

Program output:

70000 * 60000 = 4200000000
70000 * 80000 is out of range

And you can employ more extensive tests for signed int, where the checks must be done before the operation because signed overflow is undefined behaviour; see the sketch below.
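
A minimal sketch of such a test for signed multiplication (the helper name checked_mul is mine, not part of the original answer) could look like this, using the usual pre-condition comparisons against INT_MAX and INT_MIN:

#include <stdio.h>
#include <limits.h>
#include <stdbool.h>

/* Stores a * b in *result and returns true if the product fits in an int,
   returns false otherwise. All checks happen before the multiplication,
   because signed overflow itself is undefined behaviour. */
bool checked_mul(int a, int b, int *result)
{
    if (a > 0) {
        if (b > 0) {
            if (a > INT_MAX / b) return false;
        } else {
            if (b < INT_MIN / a) return false;
        }
    } else {
        if (b > 0) {
            if (a < INT_MIN / b) return false;
        } else {
            if (a != 0 && b < INT_MAX / a) return false;
        }
    }
    *result = a * b;
    return true;
}

int main(void)
{
    int r;
    if (checked_mul(70000, 60000, &r))
        printf("70000 * 60000 = %d\n", r);
    else
        printf("70000 * 60000 does not fit in an int\n");
    return 0;
}

GCC 5 and later and recent Clang also provide __builtin_mul_overflow(a, b, &r), which performs the multiplication and returns true if it overflowed, so the manual checks above can be replaced with a single call where those compilers are available.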



Realistically, the answer to your first question is no. You can add checks for simple operations, but for larger expressions the checks quickly become arduous. Data types like int are only defined to handle a finite range of values. If you're worried about overflow, the simple answer is to use a bigger data type (e.g. int64_t).
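
As a hedged sketch of that suggestion (the input values below are made up, and f is assumed non-zero), the expression from the question can be evaluated in int64_t and range-checked before narrowing back to int:

#include <stdio.h>
#include <stdint.h>
#include <limits.h>

int main(void)
{
    int b = 100000, c = 50000, d = 7, e = 3, f = 2;   /* hypothetical inputs, f != 0 */

    /* With 32-bit int, each product of two ints fits in int64_t; only
       pathological combinations of extreme operands could still overflow
       the 64-bit sum, so this is a mitigation rather than a proof. */
    int64_t wide = (int64_t)b * c + (int64_t)d * e / f;

    if (wide >= INT_MIN && wide <= INT_MAX) {
        int a = (int)wide;
        printf("a = %d\n", a);
    } else {
        printf("%lld is out of range for int\n", (long long)wide);
    }
    return 0;
}

With these inputs, b * c alone is 5,000,000,000, so the program takes the out-of-range branch instead of silently truncating the value.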

To answer your latter question, it depends on context. If a space shuttle's navigation software encountered an overflow in the right part of its calculations, it could change course erratically and crash. What I'm getting at is that technically anything can happen, and it's very hard to be "completely" safe. Hedge your bets and make sure you don't have overflow by using properly sized data types. It helps to find bounds on the possible input values and use those as a guide, but if bounds aren't really applicable you can just use the largest types available. uint64_t can handle positive integers up to about 1.8446744e+19, which is huge.
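
As a hedged illustration of the "find bounds on possible input values" idea (the limit FACTOR_LIMIT and the variable names are mine): if two int factors are each known to be at most 46340 in magnitude, their product cannot exceed INT_MAX on a 32-bit int (46340 * 46340 = 2,147,395,600 < 2,147,483,647), so the multiplication itself needs no runtime check:

#include <stdio.h>

#define FACTOR_LIMIT 46340   /* floor(sqrt(INT_MAX)) for 32-bit int */

int main(void)
{
    int b = 12345, c = 23456;   /* hypothetical inputs */

    /* Reject inputs outside the proven-safe bound once, up front. */
    if (b > FACTOR_LIMIT || b < -FACTOR_LIMIT ||
        c > FACTOR_LIMIT || c < -FACTOR_LIMIT) {
        fprintf(stderr, "input out of supported range\n");
        return 1;
    }

    printf("%d * %d = %d\n", b, c, b * c);   /* cannot overflow here */
    return 0;
}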

