You need to distinguish between memory leaks (memory that was allocated but to which you have lost all references) and memory hogs (memory that was allocated, that you still hold references to, but that you forgot to deallocate). Valgrind cannot detect the latter, since it has no way of knowing that you no longer intend to use that memory. To get some statistics about your program's memory usage, you can use valgrind's massif tool, which will show you in more detail where your memory gets allocated.
This might be helpful in finding memory hogs.
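For illustration, here is a minimal C++ sketch of the difference (the names and sizes are made up): valgrind's leak check will flag the first pattern, but stays silent about the second, even though both keep growing your memory footprint during the run.

    #include <cstddef>
    #include <vector>

    // Leak: the only pointer to the block is dropped immediately, so
    // valgrind reports it as "definitely lost".
    void leak() {
        new int[1000];
    }

    // Hog: every block stays referenced in a pool "for later" and is only
    // deleted at exit, so the leak check reports nothing, yet memory usage
    // keeps climbing while the program runs.
    std::vector<int*> pool;

    void hog() {
        pool.push_back(new int[1000]);
    }

    int main() {
        for (int i = 0; i < 1000; ++i) {
            leak();
            hog();
        }
        for (std::size_t i = 0; i < pool.size(); ++i)
            delete[] pool[i];   // freed only at the very end
        return 0;
    }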
Actually, valgrind will detect memory that is still allocated at program exit but to which references still exist. – Hasturkun Oct 11 at 13:13

@Hasturkun: Yes, but it will not detect memory that you forgot to deallocate during the program run and only deallocate at the very end. For example, when you always new and then, instead of deleting, push things into a pool that you thought you were allocating from. – PlasmaHH Oct 11 at 13:22

I have a map with a vector as key and a set<...> as value, which I constantly clear and recreate, and a set<...> which I constantly clear and add values to. Could these two cause the memory increase? – Yotam Oct 11 at 13:32

@Yotam: That is hard to tell without a more detailed analysis, which is exactly what you could use massif for. Note that, depending on the tools you use to determine your program's memory usage, they might show a different amount of memory than the sum of all new/malloc calls, due to things like caching or fragmentation. Use massif to get a better idea of whether it is your program's fault. – PlasmaHH Oct 11 at 13:41
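In case it is useful, a typical massif run looks like this (the program name is a placeholder); massif writes a massif.out.<pid> file that ms_print turns into a readable allocation profile:

    valgrind --tool=massif ./yourprog
    ms_print massif.out.<pid>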
A small increase in memory usage is not necessarily something to worry about - it may be that your program is ramping up and will peak at some point. Without knowing the logic of the application, it's hard to tell. However, valgrind is adamant that all allocated blocks were freed, and it's usually pretty reliable on that point.
You may want to consider letting it run for longer, increasing the work it has to do somehow (again, this depends on the application), to see if it ever peaks or continues to rise forever (or until it runs out of virtual memory, anyway). I'd also look at those last two lines:

    ==4492== For counts of detected and suppressed errors, rerun with: -v
    ==4492== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 4 from 4)

You may want to run it with -v just to check what those suppressions were. They may be nothing, but it doesn't hurt to look into it.
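A rerun along those lines might look like the following (the program name and the extra --leak-check=full flag are just an example invocation, not taken from the original report):

    valgrind -v --leak-check=full ./yourprog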
Depending on the behavior of your program, if you allocate and free memory continually you may introduce fragmentation that causes the address space to grow. If you let the process run long enough it may stabilize and stop growing.
Valgrind can detect memory leaks, but not poor usage of memory. It's possible that a bug in your code is continually allocating memory for no apparent reason, and that defensive code is then cleaning it all up afterwards anyway. That said, I wouldn't trust your mechanism for determining your process's memory usage, either.
There's a lot that goes on behind the scenes: caching, for one. I'd call this "inconclusive".
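As a rough cross-check (a Linux-only sketch, and an assumption on my part rather than anything from the answer above), you can ask the kernel what it actually charges the process, instead of summing your own new/malloc calls:

    #include <fstream>
    #include <iostream>
    #include <string>

    // Print the kernel's view of this process's memory: resident set size
    // (VmRSS) and virtual size (VmSize), as reported in /proc/self/status.
    int main() {
        std::ifstream status("/proc/self/status");
        std::string line;
        while (std::getline(status, line)) {
            if (line.compare(0, 5, "VmRSS") == 0 ||
                line.compare(0, 6, "VmSize") == 0)
                std::cout << line << '\n';
        }
        return 0;
    }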