
In the example below I try to set the stack size to 1 KB.

Why is it then still possible to allocate an 8 KB array of ints on the stack in foo()?

#include <stdio.h>
#include <sys/resource.h>

void foo(void);

int main(void) {
    struct rlimit lim = {1024, 1024};   /* soft and hard limit: 1 KB */

    if (setrlimit(RLIMIT_STACK, &lim) == -1)
        return 1;

    foo();

    return 0;
}

void foo(void) {
    unsigned ints[2048];                /* 8 KB, assuming 4-byte unsigned */

    printf("foo: %u\n", ints[2047] = 42);
}
  • Thank you, I am now addicted to finding out why this doesn't work as advertised in man(2) setrlimit. Fortunately, gcc lets you specify the stack size :) Commented Nov 7, 2010 at 16:26
  • A question favorited more often than it was upvoted—at this time. Interesting. Commented Nov 7, 2010 at 17:39

2 Answers


The limit is set immediately but only checked when trying to allocate a new stack or trying to grow the existing one. A grep for RLIMIT_STACK (or an LXR identifier search) in the kernel sources should confirm this.
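
For instance, this minimal sketch (a trimmed variant of the program in the question) shows that the new soft limit is already visible through getrlimit() right after the call, even though nothing has faulted yet:

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit lim = {1024, 1024};

    if (setrlimit(RLIMIT_STACK, &lim) == -1)
        return 1;

    /* the limit is updated immediately; it just is not enforced
       until the stack actually tries to grow */
    if (getrlimit(RLIMIT_STACK, &lim) == -1)
        return 1;

    printf("soft: %llu hard: %llu\n",
           (unsigned long long)lim.rlim_cur,
           (unsigned long long)lim.rlim_max);
    return 0;
}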

Apparently, the initial size of the stack is whatever is needed for the filename + env strings + arg strings, plus some extra pages allocated in setup_arg_pages() (20 pages in 2.6.33 [1][2], 128 KB in 2.6.34 [3]).

In summary:

initial stack size = MIN(size for filename + arg strings + env strings + extra pages, MAX(size for filename + arg strings + env strings, RLIMIT_STACK))

where

size for filename + arg strings + env strings <= MAX(ARG_MAX (32 pages), RLIMIT_STACK/4)

Additionally, kernels with Ingo Molnar's exec-shield patch (Fedora, Ubuntu, ...) apply an extra EXEC_STACK_BIAS ("2MB more to cover randomization effects"); see the call to the new function over_stack_limit() from acct_stack_growth() ([Ubuntu1], [Ubuntu2], [Ubuntu3]).
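
To make the arithmetic concrete, here is a small sketch of the summary formula above. It is not kernel code; the page size, the 20 extra pages (the 2.6.33 case) and the size of the filename/arg/env strings are assumed values:

#include <stdio.h>

static unsigned long min_ul(unsigned long a, unsigned long b) { return a < b ? a : b; }
static unsigned long max_ul(unsigned long a, unsigned long b) { return a > b ? a : b; }

int main(void) {
    const unsigned long page         = 4096;        /* assumed page size */
    const unsigned long extra_pages  = 20;          /* setup_arg_pages() extra, 2.6.33 */
    const unsigned long arg_max      = 32 * page;   /* ARG_MAX: 32 pages */
    const unsigned long rlimit_stack = 1024;        /* the 1 KB limit from the question */
    unsigned long strings            = 6 * page;    /* assumed filename + args + env size */
    unsigned long initial;

    /* the strings themselves are capped by MAX(ARG_MAX, RLIMIT_STACK/4) */
    strings = min_ul(strings, max_ul(arg_max, rlimit_stack / 4));

    /* initial stack size = MIN(strings + extra pages, MAX(strings, RLIMIT_STACK)) */
    initial = min_ul(strings + extra_pages * page,
                     max_ul(strings, rlimit_stack));

    printf("initial stack size: %lu bytes\n", initial);
    return 0;
}

With these assumed numbers the initial stack already spans 24576 bytes, which is why the 8 KB array in foo() fits without ever triggering a growth check.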

I've edited the original program to show this:

#include <stdio.h>
#include <sys/resource.h>

void foo(void);

int main(int argc, char *argv[]) {
        struct rlimit lim = {1, 1};

        if (argc > 1 && argv[1][0] == '-' && argv[1][1] == 'l') {
                printf("limiting stack size\n");
                if (setrlimit(RLIMIT_STACK, &lim) == -1) {
                        printf("rlimit failed\n");
                        return 1;
                }
        }

        foo();

        return 0;
}

void foo() {
        unsigned ints[32768];    /* 128 KB, assuming 4-byte unsigned */

        printf("foo: %u\n", ints[2047]=42);
}

Which results in:

$./rl
foo: 42
$./rl -l
limiting stack size
Segmentation fault
$  

12 Comments

No, actually, I was able to grow an existing stack. I am now like a dog that won't let go of a bone with this problem.
@Tim Post: are you sure the stack did grow? See my edited answer, there is some extra space on the initial stack.
Yes, I expanded both cases to 16k, same result.
@Tim Post: expand it to > 80 KB on 2.6.33 and earlier (x86) and > 128 KB on 2.6.34+
Attempting to set rlimit_stack after Stack Clash remediations may result in failure or related problems. Also see Red Hat Issue 1463241

I think setrlimit moves the "resource pointers" but doesn't apply the new limits until you exec a new copy of the program.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/resource.h>

void foo(int chk) {
  unsigned ints[2048];
  ints[2047] = 42;
  printf("foo %d: %u\n", chk, ints[2047]);
}

int main(int argc, char **argv) {
  char *newarg[] = { "argv[0]", "one", "two", NULL };  /* execve() needs a NULL-terminated argv */
  char *newenv[] = { NULL };
  struct rlimit lim;

  newarg[0] = argv[0];
  getrlimit(RLIMIT_STACK, &lim);
  printf("lim: %d / %d\n", (int)lim.rlim_cur, (int)lim.rlim_max);
  switch (argc) {
    case 1: /* first call from command line */
      lim.rlim_cur = 65536;
      lim.rlim_max = 65536;
      if (setrlimit(RLIMIT_STACK, &lim) == -1) return EXIT_FAILURE;
      newarg[2] = NULL;
      foo(1);
      execve(argv[0], newarg, newenv);
      break;
    case 2: /* second call */
      lim.rlim_cur = 1024;
      lim.rlim_max = 1024;
      if (setrlimit(RLIMIT_STACK, &lim) == -1) return EXIT_FAILURE;
      foo(2);
      execve(argv[0], newarg, newenv);
      break;
    default: /* third call */
      foo(3);
      break;
  }
  return 0;
}

And a test run:

$ ./a.out 
lim: 8388608 / -1
foo 1: 42
lim: 65536 / 65536
foo 2: 42
Killed

Why the process gets killed before printing the limits (and before calling foo), I don't know.

13 Comments

I suspected similar and just tried with fork(), which made no difference. I can't understand why setrlimit() only affects processes spawned via exec and not the parent, but that does appear to be the case.
With GDB I get 'Program exited normally' after the "foo 2: 42" line - no killed, no segfault
@tur1ng: try adding newarg[0] = argv[0]; at the beginning of main. I suspect your binary is not called "a.out"
It's the correct a.out. Even a ulimit -s 1 will not result in an error.
@pmg, what kernel version / OS special sauce are you using? I think we might be talking about a moving target here :) I also can't reproduce your results with 2.6.31 (Ubuntu)
