Friday, June 21, 2013

Making the JVM release memory

It's well known that a Java application (the JVM) typically won't release much memory once it has warmed up, even when the application is lightly loaded or idle later on.

If you plot the memory usage (or the resident set size) of a Java application, it typically looks like a mostly flat line after an initial upslope. At a low level, this corresponds to the memory pages of the Java heap gradually being allocated, and once all the pages are allocated, the memory usage stays mostly flat even when a large portion of the heap is not used * **.

This can be a problem if a Java application runs on a non-dedicated system (a server or desktop) where it co-exists with other (non-Java) applications. On such a system, one application that doesn't play nice with the others by dominating memory can slow down the other applications, or prevent them from running at all.

This is where an experimental JVM feature, DeallocateHeapPages, that I worked on comes in. It causes the underlying memory pages that correspond to the unused (free) parts of the heap to be deallocated (released) and helps reduce the memory usage of a Java application. Internally, it calls the system call madvise(MADV_DONTNEED) for the bodies of free chunks in the old generation without unmapping the heap address space.

Another way to look at this is that this feature makes the memory usage of a Java application behave more like that of a C/C++ application where the process memory usage is more in line with the memory actually used by the application.

This has been very useful for servers and desktop tools we have at Google, and has saved a lot of memory (RAM).

The implementation currently supports the concurrent mark sweep (CMS) collector and the Linux platform.

Here's the email thread on the OpenJDK mailing list and a link to the JVM patch:

  http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2013-January/005664.html

  http://cr.openjdk.java.net/~hiroshi/webrevs/dhp/webrev.00/

The patch hasn't been accepted (yet) because support for all the other OS platforms, which it currently lacks, is deemed necessary for acceptance. I might be able to address that at some point, if I have the time and resources to make it happen.

* For simplicity, I am ignoring memory use other than the heap, such as the native C heap and the thread stacks, as the heap usually accounts for by far the largest amount of memory.

** Though the serial garbage collector (-XX:+UseSerialGC) of the JVM can occasionally shrink the heap and return memory, it's almost never used in production, for obvious performance reasons. The parallel collector and the concurrent mark sweep (CMS) collector, which are often used in production, almost never shrink the heap and return memory, in my experience.

8 comments:

Dave Minter said...

I'm curious - do you know of any particular reason why it's necessary to specify a maximum heap size up front when kicking off the JVM rather than just allowing it to consume resources up to the OS enforced limits? Or is it just a historical accident? It's often been a frustration to me when really I want to say "use as much memory as possible" rather than constraining it.

Anyway, nice work on the new feature, thanks!

Hiroshi Yamauchi said...

Dave,

I don't know the reason for a fact, as I wasn't there when the JVM was originally designed.

But I think one major factor might be that the Java heap was implemented as a single contiguous memory region. As I understand it, the advantages of a single contiguous region are: 1) it keeps the JVM simple and quick (e.g., it's easy to check whether an address points to an object in the Java heap with a simple address range check, and it's easy to arrange a data structure that has to be parallel to the heap space, such as the card table); and 2) it reduces address space fragmentation and leaves enough room for other things like memory-mapped files, thread stacks, the JIT code cache, etc. (especially on 32-bit systems).

If the heap has to be a single contiguous memory region, it has to be reserved upfront at startup, since it may not be possible to expand it later (e.g., the subsequent address range may already be taken by something else). Hence the maximum heap size option. But I don't think there's any reason, in theory, that the Java heap has to be a single contiguous memory region.

There might be other reasons.

Dave Minter said...

That makes sense - thanks for the illuminating response.

Anonymous said...

This is very interesting and, from my point of view, a feature that has been painfully missing. It's great you took the time to do it. Is it possible to apply your patch to my default JVM? It would certainly be useful for a current project.

Anonymous said...

To follow up a bit... can anyone apply the patch, or does one need some serious expertise and tools? Thank you.

Hiroshi Yamauchi said...

Sorry about the delayed response. I'd say it might take some expertise to apply a patch to OpenJDK and build it, but it shouldn't be too hard with the right skills and mindset.

Anonymous said...

A FreeBSD implementation of this would be much appreciated.

Anonymous said...

Hi, did you continue with the development of the patch or do you know of any other efforts to make the JVM actually free memory to the OS?