Java 8 (since update 131) and Java 9 brought support for -XX:+UseCGroupMemoryLimitForHeap (together with -XX:+UnlockExperimentalVMOptions). This sets -XX:MaxRAM to the cgroup memory limit. By default, the JVM then caps the heap at roughly 25% of MaxRAM, because -XX:MaxRAMFraction defaults to 4.
Example:

    MaxRAM = 1g
    MaxRAMFraction = 4
    JVM is allowed to allocate: MaxRAM / MaxRAMFraction = 1g / 4 = 256m
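For reference, the computed ceiling can be verified by letting the JVM print its final flag values. This is just a quick sketch; it assumes Docker and the stock openjdk:8 image (both placeholders for whatever runtime you actually use):

    # 1g cgroup limit, default MaxRAMFraction=4 -> MaxHeapSize should be ~256m
    docker run -m 1g --rm openjdk:8 java \
        -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap \
        -XX:+PrintFlagsFinal -version | grep -i maxheapsize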
Using only 25% of the quota seems wasteful for a deployment that (usually) consists of a single JVM process. So people now set -XX:MaxRAMFraction=1, which in theory lets the JVM use 100% of MaxRAM.
For the 1g example, this often results in a heap of around 900m. That seems a bit high: it leaves little headroom for the rest of the JVM (metaspace, thread stacks, GC overhead) or for other things like remote shells and out-of-process tasks.
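The ~900m figure can be reproduced with the same flag check, just with the fraction overridden (again assuming Docker and openjdk:8):

    # 1g cgroup limit, MaxRAMFraction=1 -> MaxHeapSize close to the full limit
    docker run -m 1g --rm openjdk:8 java \
        -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap \
        -XX:MaxRAMFraction=1 -XX:+PrintFlagsFinal -version | grep -i maxheapsize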
So is this configuration (-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1) considered safe for prod, or even best practice? Or should I still hand-pick -Xmx, -Xms, -Xss, and so on?
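For comparison, the hand-picked variant I have in mind would look something like this (the values and app.jar are purely illustrative, not a recommendation):

    # explicit sizing: initial heap, max heap, thread stacks, metaspace cap
    java -Xms512m -Xmx768m -Xss512k -XX:MaxMetaspaceSize=128m -jar app.jar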