Java and Large Memory Pages on Linux

Recently I helped configure a system for an application running under Tomcat on Linux with very large memory requirements: a minimum heap of 6GB and a maximum of 11GB. The JVM was initially configured to use the Parallel garbage collector. With this configuration, garbage collection of the “Young Generation” was fine, but “Old Generation” collections were taking over 30 seconds (and blocking all other threads while doing so). We looked into enabling Large Memory Pages, a feature of modern CPUs that allows memory-hungry applications to allocate memory in 2MB chunks instead of the standard 4KB. Documentation on the web on how to do this is sparse and missing some details we ran into. Here’s the sequence of steps we had to take:

  1. configure the kernel’s maximum shared memory to span the whole address space (via the kernel.shmmax and kernel.shmall parameters)
  2. configure the kernel’s allocated large memory pages (via the vm.nr_hugepages parameter)
  3. configure the user limits to ensure that the user running Tomcat can lock the necessary memory (via the memlock parameter)
  4. ensure that PAM applies the security limits to users who “login” via su and sudo
  5. configure the JVM for Large Memory Pages

Add the following lines to /etc/sysctl.conf and use sysctl -p to reload the changes into the running kernel. I recommend rebooting the system, though, so that the Large Memory Pages can be properly allocated (they have to be contiguous).

# Maximum size of a shared memory segment (in bytes): here 16GB
kernel.shmmax = 17179869184
# Maximum total size of all shared memory segments (in pages of 4KB): here 12GB
kernel.shmall = 3145728
# Number of allocated Large Memory Pages (each one takes up 2MB): here 12GB
vm.nr_hugepages = 6144
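For reference, here is how the shmall and nr_hugepages figures can be computed from the desired huge-page pool size (a sketch, assuming the 12GB pool from our setup):

```shell
# Compute sysctl values for a 12GB huge-page pool
POOL_BYTES=$((12 * 1024 * 1024 * 1024))    # 12GB to be backed by huge pages
PAGE_SIZE=4096                             # standard x86 page size
HUGE_PAGE_SIZE=$((2 * 1024 * 1024))        # 2MB huge pages

echo "kernel.shmall = $((POOL_BYTES / PAGE_SIZE))"        # total, in 4KB pages
echo "vm.nr_hugepages = $((POOL_BYTES / HUGE_PAGE_SIZE))" # number of 2MB pages
```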

Edit /etc/security/limits.conf so that the user running the Java application can lock the correct amount of memory.

tomcat soft memlock 12884901888
tomcat hard memlock 12884901888
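The 12884901888 figure is simply 12GB expressed in bytes, matching the huge-page pool; once the limit is configured you can check what is actually in effect for a shell running as that user:

```shell
# 12GB in bytes -- the figure used for the memlock limit above
echo $((12 * 1024 * 1024 * 1024))    # prints 12884901888
# Show the max locked-memory limit in effect for the current shell
# (run this in a session belonging to the tomcat user):
ulimit -l
```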

Edit /etc/pam.d/su and /etc/pam.d/sudo and ensure that they contain the following line so that the above memory limits are applied:

session required pam_limits.so

Next add the relevant options to the JVM’s command-line:

-XX:+UseLargePages -Xmx11g -Xms6g
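For Tomcat specifically, these flags would typically go into the CATALINA_OPTS environment variable, for example in a bin/setenv.sh file (a sketch; the exact file location depends on your installation):

```shell
# bin/setenv.sh -- picked up by Tomcat's startup scripts (location assumed)
export CATALINA_OPTS="-XX:+UseLargePages -Xms6g -Xmx11g"
```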


5 thoughts on “Java and Large Memory Pages on Linux”

  1. Thanks for posting this, you’re dead on that doing this correctly is scattered across many, many sources.

    What you don’t answer is, did using large pages improve your GC throughput? Was it worth all the trouble in any way you could quantify?

  2. Yes. Erik’s question is _exactly_ what I asked myself after reading your post, too. Is it worth all the hassle?

  3. I don’t have real numbers at hand now, partly because of other improvements we made at the same time. However I remember that our full GC time went down consistently from 30 seconds to less than 5 seconds.

  4. How did you arrive at the values you selected for SHMMAX and SHMALL? It would appear that your SHMMAX is set to 16GB and SHMALL is set to 12GB. I’m slightly confused by those choices… memlock = 12GB, SHMALL = 12GB, HugePage Space = 12GB, why is SHMMAX = 16GB? SHMMAX is the largest size a single segment can be, I don’t understand why you’d have it set higher than SHMALL, the maximum amount of all shared memory.

    We are configuring JVMs similarly but our servers have 96GB of RAM, and we’re using NUMA to run multiple instances bound to specific cores on the CPUs.

  5. Sorry for taking so long to reply, I completely forgot about it.
    The values you see there come from an installation where we changed the values around a bit while experimenting with different loads, so they are not totally consistent: that is, we forgot to lower SHMMAX to 12GB, as you correctly point out.
