Java memory mystery (do I have a leak)?


A couple of potential problems:

- Direct allocated buffers and memory-mapped files are allocated outside of the Java heap, and can't conveniently be disposed of.
- An area of stack is reserved for each new thread.
- The permanent generation (code and interned strings) is outside of the usual heap. It can be a problem if class loaders leak (usually when reloading webapps).
- It's possible that the C heap is leaking.

pmap -x should show how your memory has disappeared.
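To make the first point concrete, here is a minimal sketch (the class name and sizes are arbitrary) of native memory that never shows up in the Java heap:

    import java.nio.ByteBuffer;

    // Minimal sketch: each direct buffer is backed by native memory, so it shows
    // up in the process RSS (and in pmap -x output) but not in the Java heap
    // governed by -Xmx. The pool can be capped with -XX:MaxDirectMemorySize.
    public class DirectBufferDemo {
        public static void main(String[] args) throws Exception {
            ByteBuffer[] buffers = new ByteBuffer[16];
            for (int i = 0; i < buffers.length; i++) {
                buffers[i] = ByteBuffer.allocateDirect(16 * 1024 * 1024); // 16 MB each, off-heap
            }
            long heapUsed = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
            System.out.println("Heap used: " + heapUsed
                    + " bytes (the ~256 MB of direct memory is not counted here)");
            Thread.sleep(60000); // keep the process alive so it can be inspected with pmap -x <pid>
        }
    }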

There's basically a long, long list of these in my pmap -x output: pastie.org/629976. Any ideas? – erotsppa Sep 25 '09 at 4:18

I had a similar problem and the same kind of output. It turned out to be the stack space allocated for new threads (I had a thread leak). – Cristian Vat May 31 '10 at 23:47
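A minimal sketch of that failure mode (deliberately broken; the class name is arbitrary): every thread that is started and never exits keeps its stack, sized by -Xss, reserved outside the Java heap, so RSS climbs while heap usage stays flat.

    // Deliberately broken sketch: threads are started and never finish, so each
    // one keeps its stack (sized by -Xss) reserved outside the Java heap.
    public class ThreadLeakDemo {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            Thread.sleep(Long.MAX_VALUE); // never exits, stack never released
                        } catch (InterruptedException ignored) {
                        }
                    }
                }).start();
                Thread.sleep(100); // watch RSS grow in top or pmap while the heap stays flat
            }
        }
    }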

Sounds like you have a leak. Can't you profile it to see what is driving the memory up? I'm not sure, though.
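If attaching a full profiler isn't practical, even the built-in java.lang.management beans can narrow things down; a rough sketch (class name and interval are arbitrary):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    // Rough sketch: periodically log heap vs. non-heap usage as the JVM sees it.
    // If the process RSS keeps climbing while both of these stay flat, the growth
    // is in native memory (direct buffers, thread stacks, JNI, the C heap).
    public class MemoryLogger {
        public static void main(String[] args) throws InterruptedException {
            MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
            while (true) {
                MemoryUsage heap = bean.getHeapMemoryUsage();
                MemoryUsage nonHeap = bean.getNonHeapMemoryUsage();
                System.out.printf("heap used=%d committed=%d | non-heap used=%d committed=%d%n",
                        heap.getUsed(), heap.getCommitted(),
                        nonHeap.getUsed(), nonHeap.getCommitted());
                Thread.sleep(5000);
            }
        }
    }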

If I had to take a stab in the dark, I would say that the JVM you are using has a memory leak.

Swap the Sun and IBM JVMs to test. RES will include code plus non-heap data. Also, some things that you'd think would be stored in the heap aren't, such as the thread stacks and "class data". (It's a matter of definition, but code and class data are controlled by -XX:MaxPermSize=.) This one sounds like a memory leak in either the JVM implementation, the Linux kernel, or library JNI code.

If using the Sun JVM, try IBM, or vice versa. I'm not sure exactly how dlopen works, but code accessing system libraries might be remapping the same thing repeatedly, if that's possible. Finally, you should use ulimit to make the system fail earlier, so you can repeat tests easily.
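As an illustration (the jar name and sizes are placeholders), capping the process up front and sizing the non-heap areas explicitly makes a native leak fail fast and the memory arithmetic easier to check:

    # Limit the virtual memory of this shell and its children (value in KB),
    # so a leaking process dies quickly instead of slowly exhausting the machine.
    ulimit -v 2097152

    # Explicit sizing makes the non-heap contributions easier to account for:
    # -Xmx caps the Java heap, -Xss the per-thread stack, and -XX:MaxPermSize
    # the permanent generation (class data and interned strings).
    java -Xmx512m -Xss256k -XX:MaxPermSize=128m -jar yourapp.jar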

WRT #1, it's normal for your RSS to be larger than your heap. This is because system libraries and non-Java code are included in the RSS but not in the heap size. WRT #2, yes, it sounds like you have a leak of some sort.

If the system itself is crashing, you are likely exhausting some system resource, such as sockets, threads, or file handles. Try using lsof to see what files the JVM has open. Run it a few times as your memory increases.
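For example (the PID is a placeholder for the JVM's process id):

    # Count the file descriptors the JVM is holding; repeat the command as the
    # process grows to see whether the number climbs along with the memory.
    lsof -p <jvm-pid> | wc -l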

If the JVM is crashing, be sure to set the -XX:+HeapDumpOnOutOfMemoryError option.
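Something along these lines (the jar name and dump path are placeholders); the resulting .hprof file can then be examined with jhat or the Eclipse Memory Analyzer:

    # Write a heap dump when the Java heap overflows; the dump goes to the
    # directory named by -XX:HeapDumpPath (the working directory by default).
    java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/tmp -jar yourapp.jar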

