
Using Oracle Java 1.7.0_05 on Ubuntu Linux 3.2.0-25-virtual, on an Amazon EC2 instance with 7.5 GB of memory, we start three instances of Java, each using the switch -Xmx2000m.

We use the default Ubuntu EC2 AMI configuration of no swap space.

After running these instances for some weeks, one of them freezes -- possibly out of memory. But my question isn't about finding our memory leak.

When we try to restart the app, Java gives us a message that it cannot allocate the 2000 MB of memory. We solved the problem by rebooting the server.

In other words, 2000 + 2000 + 2000 > 7500?

We have seen this issue twice, and I'm sorry to report we don't have good diagnostics. How could we run out of space with only two remaining Java processes, each using a max of 2000 MB? How should we proceed to diagnose this problem the next time it occurs? I wish I had a "free -h" output, taken while we could not start the program, to show here.
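Next time it happens, we plan to capture a snapshot before rebooting, roughly along these lines (a rough sketch; the script name, the log path and the "java" process pattern are just placeholders for our setup):

    #!/bin/sh
    # memory-snapshot.sh -- capture the memory state while the restart is
    # failing, so we have something concrete to post instead of wishing
    # we had kept the "free" output.
    OUT=/var/tmp/memory-snapshot-$(date +%Y%m%d-%H%M%S).log
    {
      date
      echo "== free =="
      free -m
      echo "== top memory consumers =="
      ps aux --sort=-rss | head -n 15
      echo "== per-process details for java =="
      for pid in $(pgrep -f java); do
        echo "--- pid $pid ---"
        grep -E 'Vm(Size|RSS|Swap)' /proc/$pid/status
      done
    } > "$OUT" 2>&1
    echo "wrote $OUT"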

TIA.

2 Comments
  • try running top to see what's running. Something else may be using memory. Commented Sep 12, 2012 at 13:46
  • In a typical Linux server, lots of other things are using memory. Some of which is read-only (and doesn't count), some of which is read-write (and does). All of which will vary over a multi-week period. Commented Sep 12, 2012 at 14:05

2 Answers


-Xmx sets the maximum size of the JVM heap, not the maximum size of the Java process. The process allocates more memory beyond the heap that is available to the application: the JVM's own memory, the permanent generation, whatever is allocated inside JNI libraries, and so on.
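For example, you can compare the heap ceiling you configured with the footprint the kernel actually sees for the process (a rough sketch, assuming a Linux /proc filesystem and that "pgrep -f java" matches your processes):

    # Compare the configured -Xmx with the real process footprint. VmRSS is
    # what the process actually holds in RAM, and it is normally noticeably
    # larger than -Xmx once the permanent generation, thread stacks, code
    # cache and native allocations are counted.
    for pid in $(pgrep -f java); do
      echo "--- pid $pid ---"
      tr '\0' ' ' < /proc/$pid/cmdline | grep -o -e '-Xmx[^ ]*'
      grep -E 'Vm(Size|RSS)' /proc/$pid/status
    done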


6 Comments

@gigadot Because not all memory is allocated during the application startup? The OP doesn't say anything about his -Xms parameter, or -XX:PermSize / -XX:MaxPermSize.
We start with -Xms2000m -Xmx2000m -XX:MaxPermSize=256m
Since we allocated all the heap up front, wouldn't this indicate a memory leak within "its own memory" (as we don't use any JNI and the permgen space is 256 MB per process)?
@FrankPavageau your comment was really illuminating, thank you. I suspect the real answer was another process growing its memory consumption, since -Xms == -Xmx. Your answer clarified for me that the Java process size is bigger than -Xmx but still should not grow after startup.
Not necessarily, Java might need more memory for itself during startup than it does after everything has been loaded. It's not a memory leak, just a peak. You could try stracing the process during your failed restart to see if a malloc fails (though stracing java does not usually give much), but even if you find it, I'm not sure it'll help.
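Roughly what I mean (a sketch, assuming strace is installed; "yourapp.jar" is a placeholder, and the heap flags are simply the ones quoted above):

    # Trace only memory-related syscalls during the failed startup so the
    # output stays readable; a failing mmap/brk shows up as -1 ENOMEM.
    strace -f -e trace=memory -o /tmp/java-start.strace \
      java -Xms2000m -Xmx2000m -XX:MaxPermSize=256m -jar yourapp.jar
    grep -n ENOMEM /tmp/java-start.strace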

There may be other processes using memory, which is why the JVM cannot be started with 2 GB. If you really need that much memory for each of the 3 Java processes and you only have 7.5 GB in total, you might want to change your EC2 configuration to have more memory. You're only leaving 1.5 GB for everything else, including the kernel, Oracle, etc.
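To see what is actually using the memory before resizing the instance, something like this (a rough sketch using standard procps tools) shows the biggest consumers and the overall totals:

    # Overall picture plus the ten biggest resident-memory consumers.
    free -m
    ps aux --sort=-rss | awk 'NR>1 && NR<=11 {printf "%-10s %10d KB  %s\n", $1, $6, $11}'
    # Rough total of resident memory across all user processes, in MB:
    ps -e -o rss= | awk '{sum += $1} END {print sum/1024, "MB resident"}'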

1 Comment

OK, we do have another important process running. My bad for omitting this from my question: we have nginx running as a reverse proxy. It could be growing in memory consumption.
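To check whether it grows, we'll log its footprint over time, roughly like this (a sketch; the log path and the 10-minute interval are arbitrary, and it would run under nohup or in a screen session rather than cron):

    # Append an RSS sample for nginx and java every 10 minutes so that
    # growth over days or weeks becomes visible in one log file.
    while true; do
      echo "$(date '+%F %T') $(ps -C nginx,java -o comm=,rss= | tr '\n' ' ')" \
        >> /var/tmp/rss-history.log
      sleep 600
    done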
