22

Is there a (technical or practical) limit to how large you can configure the maximum number of open files in Linux? Are there some adverse effects if you configure it to a very large number (say 1-100M)?

I'm thinking server usage here, not embedded systems. Programs using huge numbers of open files can of course eat memory and be slow, but I'm interested in adverse effects when the limit is configured much larger than necessary (e.g. memory consumed just by the configuration).

1
  • 1
    In theory, you could calculate how many file-descriptors your system could handle based on available memory and the assertion that each fd consumes 1K of memory: serverfault.com/questions/330795/… Commented Jan 3, 2020 at 11:01

5 Answers

16

I suspect the main reason for the limit is to avoid excess memory consumption (each open file descriptor uses kernel memory). It also serves as a safeguard against buggy applications leaking file descriptors and consuming system resources.

But given how absurdly much RAM modern systems have compared to systems 10 years ago, I think the defaults today are quite low.

In 2011 the default hard limit for file descriptors on Linux was increased from 1024 to 4096.

Some software (e.g. MongoDB) uses many more file descriptors than the default limit. The MongoDB folks recommend raising this limit to 64,000. I've used an rlimit_nofile of 300,000 for certain applications.

As long as you keep the soft limit at the default (1024), it's probably fairly safe to increase the hard limit. Programs have to call setrlimit() in order to raise their limit above the soft limit, and are still capped by the hard limit.
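For illustration, here is a minimal C sketch of that mechanism (my own example, not taken from any particular application): it reads the current limits with getrlimit() and then raises the soft limit up to, but not beyond, the hard limit with setrlimit().

/* Minimal sketch: raise the soft RLIMIT_NOFILE up to the hard limit,
 * which an unprivileged process is always allowed to do. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    rl.rlim_cur = rl.rlim_max;   /* soft limit may go up to, but not past, the hard limit */
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");     /* unexpected here: only the soft limit changed */
        return 1;
    }
    return 0;
}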


2
  • 7
    This hasn't actually answered the question, though, which asked if there was a technical or practical limit to how high one can set the hard limit. There is, but this answer does not mention it at all. Commented Jan 2, 2017 at 8:19
  • 1
    I'm finding it impossible to raise the limit beyond about 1million. I think it might be hard-coded in the kernel, because despite changing many configurations, I can't raise beyond this. superuser.com/questions/1468436/… Commented Aug 7, 2019 at 16:16
11

It is technically limited to the maximum value of unsigned long in C, i.e. 4,294,967,295 on a system with 32-bit longs.

Reference: struct files_stat_struct in include/linux/fs.h

/* And dynamically-tunable limits and defaults: */
struct files_stat_struct {
  unsigned long nr_files;   /* read only */
  unsigned long nr_free_files;  /* read only */
  unsigned long max_files;    /* tunable THIS IS OUR VALUE */
};
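If I am reading the kernel right, these counters are what /proc/sys/fs/file-nr reports (and max_files is the fs.file-max sysctl), so the live values can be inspected from userspace, e.g. with a small sketch like this:

/* Minimal sketch: print the kernel's files_stat counters as exposed
 * through /proc/sys/fs/file-nr (allocated handles, free handles, max_files). */
#include <stdio.h>

int main(void)
{
    unsigned long nr_files, nr_free_files, max_files;
    FILE *f = fopen("/proc/sys/fs/file-nr", "r");

    if (!f || fscanf(f, "%lu %lu %lu",
                     &nr_files, &nr_free_files, &max_files) != 3) {
        perror("/proc/sys/fs/file-nr");
        return 1;
    }
    fclose(f);
    printf("allocated=%lu free=%lu max=%lu\n",
           nr_files, nr_free_files, max_files);
    return 0;
}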
3
  • 2
    Do you have any references for this? Commented May 25, 2019 at 10:15
  • Also that's the max value for 32-bit signed integer, 32-bit unsigned integer max value is 4,294,967,295. Commented May 25, 2019 at 14:59
  • You are right, Sampo. My mistake. Commented May 26, 2019 at 11:04
3

The impact wouldn't normally be observable, but the kernel's I/O subsystem has to keep track of all those open file descriptors, and they can also have an impact on cache efficiency.

Such limits have the advantage of protecting the user from their own (or third parties') mistakes. For example, if you run a small program or script that forks indefinitely, it will eventually hit one of the ulimits, preventing a more severe (possibly unrecoverable) system freeze.

Unless you have precise reasons to increase any of those limits, you should avoid it and sleep better.
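To make that safeguard concrete for the file-descriptor case, here is a small, deliberately leaky sketch (my own example): it opens /dev/null in a loop and is stopped by EMFILE as soon as it hits its soft RLIMIT_NOFILE, long before it can exhaust system-wide resources.

/* Minimal sketch of the safeguard: a process that leaks descriptors is
 * stopped by EMFILE once it reaches its soft RLIMIT_NOFILE. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    unsigned long opened = 0;

    for (;;) {
        int fd = open("/dev/null", O_RDONLY);   /* deliberately never closed */
        if (fd < 0) {
            if (errno == EMFILE)
                printf("stopped by the per-process limit after %lu descriptors\n", opened);
            else
                perror("open");
            return 0;
        }
        opened++;
    }
}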

2

Quite late, but this should help anyone else looking for an answer to this question. The practical limit for the number of open files in Linux can also be counted as the maximum number of file descriptors a process can open.

I have seen the limits vary from system to system. From the getrlimit man page: RLIMIT_NOFILE specifies a value one greater than the maximum file descriptor number that can be opened by the process (so the highest usable descriptor is RLIMIT_NOFILE - 1).

To check the RLIMIT_NOFILE value, you can use the statement below to get a tuple:

python -c "import resource; print(resource.getrlimit(resource.RLIMIT_NOFILE))"

The tuple is returned as (soft limit, hard limit). For me, running on multiple systems, the results look like this:

(1024, 1048576)             # on Ubuntu Linux
(65536, 65536)              # on Amazon Linux
(1024, 9223372036854775807) # on macOS

Note: 9223372036854775807 (the largest 64-bit signed integer) effectively means unlimited; you will always hit other resource limits before you ever reach it. If you have to raise the hard limit on a system beyond its current value, you will have to modify kernel parameters.
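On Linux, the kernel parameter in question is fs.nr_open (the next answer goes into detail). Here is a small C sketch of what happens when you ask for a hard limit above it; the value 1 << 40 is just an arbitrarily large number for illustration.

/* Minimal sketch: asking for a hard RLIMIT_NOFILE above fs.nr_open fails
 * with EPERM, which is why the kernel parameter has to be raised first. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    rl.rlim_max = (rlim_t)1 << 40;      /* far above any default fs.nr_open */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        perror("setrlimit");            /* expect EPERM until fs.nr_open is raised */
    return 0;
}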

0

include/asm-generic/resource.h defines the boot time defaults of RLIMIT_NOFILE:

[RLIMIT_NOFILE]     = {   INR_OPEN_CUR,   INR_OPEN_MAX },

INR_OPEN_CUR and INR_OPEN_MAX are defined in include/uapi/linux/fs.h:

#define INR_OPEN_CUR 1024   /* Initial setting for nfile rlimits */
#define INR_OPEN_MAX 4096   /* Hard limit for nfile rlimits */

But that's not the maximum possible value of RLIMIT_NOFILE.

To answer the question, we need to take a look at kernel/sys.c, where do_prlimit is the function responsible for handling our request to bump RLIMIT_NOFILE.

/* make sure you are allowed to change @tsk limits before calling this */
static int do_prlimit(struct task_struct *tsk, unsigned int resource,
              struct rlimit *new_rlim, struct rlimit *old_rlim)
{
    struct rlimit *rlim;
    int retval = 0;

    if (resource >= RLIM_NLIMITS)
        return -EINVAL;
    resource = array_index_nospec(resource, RLIM_NLIMITS);

    if (new_rlim) {
        if (new_rlim->rlim_cur > new_rlim->rlim_max)
            return -EINVAL;
        if (resource == RLIMIT_NOFILE &&
                new_rlim->rlim_max > sysctl_nr_open)
            return -EPERM;
    }
    ...
}

We can see that the maximum possible value of RLIMIT_NOFILE is capped by sysctl_nr_open, which is defined in fs/file.c (quoted below) with a default of 1024*1024.

So by kernel default, the maximum value of RLIMIT_NOFILE is 1048576.

But the fs.nr_open sysctl can be raised, up to sysctl_nr_open_max.

sysctl fs.nr_open returns 1073741816 on my laptop; I don't know which program set it.

Now let's calculate the maximum possible value of sysctl_nr_open on the x86_64 architecture:

unsigned int sysctl_nr_open __read_mostly = 1024*1024;
unsigned int sysctl_nr_open_min = BITS_PER_LONG;
/* our min() is unusable in constant expressions ;-/ */
#define __const_min(x, y) ((x) < (y) ? (x) : (y))
unsigned int sysctl_nr_open_max =
    __const_min(INT_MAX, ~(size_t)0/sizeof(void *)) & -BITS_PER_LONG;
  • BITS_PER_LONG is 64 on x86_64, so -BITS_PER_LONG is 0xffffffffffffffc0
  • ~(size_t)0/sizeof(void *) is 2305843009213693951 (0x1fffffffffffffff)
  • That is bigger than INT_MAX (2147483647), so __const_min picks INT_MAX.
  • So sysctl_nr_open_max is INT_MAX & 0xffffffffffffffc0 = 2147483584 on x86_64.

This could be verified by running:

$ sudo sysctl fs.nr_open=2147483584 
fs.nr_open = 2147483584
$ sudo sysctl fs.nr_open=2147483585
sysctl: setting key "fs.nr_open": Invalid argument
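The same arithmetic can also be reproduced in userspace, independent of any particular kernel build; a small sketch assuming x86_64 (64-bit long, 8-byte pointers):

/* Recompute the kernel's sysctl_nr_open_max expression in userspace to
 * sanity-check the arithmetic above; prints 2147483584 on x86_64. */
#include <limits.h>
#include <stddef.h>
#include <stdio.h>

#define BITS_PER_LONG ((int)(sizeof(long) * CHAR_BIT))
#define __const_min(x, y) ((x) < (y) ? (x) : (y))

int main(void)
{
    unsigned int nr_open_max =
        __const_min(INT_MAX, ~(size_t)0 / sizeof(void *)) & -BITS_PER_LONG;

    printf("%u\n", nr_open_max);
    return 0;
}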

So the maximum possible value of RLIMIT_NOFILE is 2147483584, and it can be reached only after setting sysctl fs.nr_open=2147483584.

This answer is based on the Linux source code at commit 34ac1e82e5a78d5ed7f647766f5b1b51ca4d983a. Future or past Linux versions might have a different limit.
