
I have a big Spring Boot application composed of multiple microservices using Gradle buildSrc. It contains a lot of tests separated into separate tasks (unit, integration, acceptance...).

These tests run in Jenkins pipelines, each in a dedicated pod with 64GB of memory and 8 cores, using the Gradle cache and 8 workers.

When running the integration tests without the cache and with -PforceRunTests=true, we run into an OutOfMemoryError (Java heap space) in 50% of the runs.

I tried adding -XX:+HeapDumpOnOutOfMemoryError to the tasks, but no dump is generated. Reproducing locally and monitoring with VisualVM also does not show any JVM using more than its allocated memory.

Any idea why HeapDumpOnOutOfMemoryError wouldn't generate a dump? Does that indicate something?

If not, how can we pinpoint the issue? Is it possible that the integration tests and their contexts should be optimized? If so, how can we prove that the tests are the issue?

I tried changing maxHeapSize (-Xmx) and minHeapSize (-Xms) and settled on both being set to 7GB, which is more than enough.

Adding the JVM args -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath still does not generate a heap dump on error.
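Two things worth checking here. First, the flags must reach the forked test JVM, not the Gradle daemon: if they are set via `org.gradle.jvmargs`, they only affect the daemon, and the test worker that actually fails will never write a dump. Second, "no dump at all" can also mean the process was SIGKILLed by the pod's cgroup OOM killer before a `java.lang.OutOfMemoryError` was ever thrown; the dump is only written on a Java-level OOM. A minimal Kotlin-DSL sketch (the task name `integrationTest` and the dump path are assumptions, not from your build):

```kotlin
// build.gradle.kts — sketch; "integrationTest" is an assumed task name
tasks.named<Test>("integrationTest") {
    maxHeapSize = "7g"
    // These args must be on the *test* JVM; setting them in
    // org.gradle.jvmargs only configures the Gradle daemon.
    jvmArgs(
        "-XX:+HeapDumpOnOutOfMemoryError",
        "-XX:HeapDumpPath=${layout.buildDirectory.dir("heap-dumps").get().asFile}"
    )
}
```

If the pod is killed by the OOM killer rather than a heap exhaustion inside one JVM, no `-XX:` flag will help; checking the pod's termination reason (e.g. `OOMKilled` in the Kubernetes pod status) distinguishes the two cases.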

  • Using @SpringBootTest with a lot of different configurations, properties and/or combinations of @MockBean and friends? Commented May 22 at 7:51
  • Hello, naturally yes (note: using the newest Spring Boot 3.4.5, so @MockitoBean) Commented May 22 at 7:56
  • Don't. This will start a new application for each unique combination, which will fill up your memory. Really check whether you need @SpringBootTest in every situation; don't use it for things that should be a simple unit test. If you really must use it, try to reduce the differences, and also reduce the number of contexts being cached by setting the spring.test.context.cache.maxSize system property (default is 32). Commented May 22 at 8:02
  • Yes, of course. It's only used for integration tests, but there are a large number of tests since the application is quite big. How can I demonstrate that the performance issues are caused by the tests, so I can justify a costly refactoring? Commented May 22 at 8:03
  • Each combination will start a new application, which takes time instead of reusing an existing one. You basically end up with 32 active applications in memory due to the use of various combinations of @MockitoBean. Commented May 22 at 8:14
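To turn the cache-size suggestion above into evidence, the Spring TestContext framework logs context cache statistics (size, hit/miss counts) at DEBUG level under the `org.springframework.test.context.cache` logger, which shows concretely how many distinct ApplicationContexts the suite creates. A hedged Kotlin-DSL sketch (the `integrationTest` task name is an assumption; the `logging.level.*` system property route relies on Spring Boot's logging support):

```kotlin
// build.gradle.kts — sketch; "integrationTest" is an assumed task name
tasks.named<Test>("integrationTest") {
    // Keep fewer ApplicationContexts alive at once (Spring's default is 32)
    systemProperty("spring.test.context.cache.maxSize", "4")
    // Spring prints context cache statistics at DEBUG on this logger —
    // evidence of how many distinct contexts the tests actually create.
    systemProperty("logging.level.org.springframework.test.context.cache", "DEBUG")
}
```

If the logged miss count is close to the number of test classes, that is a strong argument that @SpringBootTest configurations differ too much to share contexts, which justifies the refactoring.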

1 Answer


I resolved the issue by reducing the minHeapSize from 7GB to 2GB. The reasoning is that setting a high minHeapSize (like 7GB) causes the JVM to reserve that amount of memory upfront, regardless of whether it's actually needed. This can lead to resource contention, especially when multiple Gradle workers are running simultaneously. As a result, other processes (including Gradle itself) may struggle to allocate memory, leading to failures or degraded performance. Lowering the minHeapSize to 2GB allowed the system to allocate memory more flexibly, which resolved the problem.
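The arithmetic supports this: with minHeapSize at 7GB and 8 parallel workers, roughly 56GB of heap is reserved upfront on a 64GB pod, leaving little headroom for the Gradle daemon, metaspace, and off-heap memory. A sketch of the resulting configuration (Kotlin DSL; the `integrationTest` task name is an assumption):

```kotlin
// build.gradle.kts — sketch of the fix; "integrationTest" is assumed
tasks.named<Test>("integrationTest") {
    minHeapSize = "2g"   // was 7g: 8 workers × 7 GB ≈ 56 GB reserved upfront
    maxHeapSize = "7g"   // workers can still grow to 7 GB when they need it
    maxParallelForks = 8
}
```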
