System memory usage and JVM object heap increasing while application is not being used

0
Hi, I'm working on a project that is deployed to the Mendix Cloud (XS21 with 1,024 MB of system memory). It is a relatively simple application with hardly any complex logic. The app is not yet in official use, but its system memory usage has been climbing steadily since deployment, causing warnings, and at this rate it will run out of memory. Below you can see the two most notable metrics. Since deployment on 18 February there have been almost no user logins, yet there is an upward trend in memory usage (probably correlated with the JVM object heap?). There are only 3 scheduled events in this project (related to clean-up and SAML/SSO, all running daily).

Things I have tried, with no results:

- Sporadically placing breakpoints in the scheduled events and after-startup flows to see if something is stuck in a loop.
- The project uses a lot of animations via the Lottie Widget; I removed most uses of this widget to see if it had any effect, but the steady increase in memory usage persists (on a similar test cloud environment).
- Turning off scheduled events and certain parts of the after-startup flow.
- Reading the Mendix documentation about system memory, but it is quite limited.

At the moment it is very hard for me to see what exactly is running or consuming memory, so that more and more is needed while no one is using the app. Isn't the garbage collector supposed to do something here? Any tips on how to tackle this problem? Is the answer simply more system memory?
asked
7 answers
1

You could ask for a memory dump of your environment, but analyzing those can be tricky. I'm currently doing that for one of our environments with the Eclipse Memory Analyzer (https://www.eclipse.org/mat/). It already gives you some hints about memory leakage. Currently it is pointing me to two suspects: com.mendix.modules.microflowengine.internal.MicroflowEngineModuleImpl and java.util.concurrent.ConcurrentHashMap$Node.

I will probably still need Mendix Expert Services to analyze it fully. My first hunch, though, is that there is nothing I can do about it myself. Especially those ConcurrentHashMaps look suspicious to me; I cannot figure out why they are kept in memory for so long.
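If you first want to experiment locally before asking for a dump of the cloud environment, the JDK can write the same kind of .hprof file programmatically via its HotSpotDiagnosticMXBean. A minimal sketch (the file name is arbitrary); the resulting file can be opened in MAT:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        // Obtain the platform MBean server and the HotSpot diagnostic bean
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic", HotSpotDiagnosticMXBean.class);

        // Write a dump of live objects only to heap.hprof (fails if the file already exists);
        // open the file in Eclipse MAT to inspect retained sizes and GC roots.
        bean.dumpHeap("heap.hprof", true);
        System.out.println("Heap dump written to heap.hprof");
    }
}
```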

Regards,

Ronald

answered
0

Does the application make use of custom Java actions that are not part of a standard module from the Marketplace?

Note: not all modules in the Marketplace are standard modules; read "standard" as modules supported by Mendix.
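A typical way such a custom Java action leaks memory is by keeping state in a static collection that is never cleared; a hypothetical, simplified example (not taken from any specific module):

```java
import java.util.ArrayList;
import java.util.List;

public class AuditBuffer {
    // Static collections like this are a classic leak in custom Java actions:
    // every entry added here stays reachable for the lifetime of the JVM,
    // so the heap (and with it system memory) only ever grows.
    private static final List<String> ENTRIES = new ArrayList<>();

    public static void record(String entry) {
        ENTRIES.add(entry); // never removed, never bounded
    }
}
```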

answered
0

Just another hypothesis: Are you using any caching?
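An unbounded in-memory cache is a common culprit, because every cached entry stays reachable and can never be collected. A hypothetical sketch of a bounded (LRU) alternative, just to illustrate the idea:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedCache(int maxEntries) {
        // access-order = true turns this LinkedHashMap into a simple LRU cache
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry once the cap is reached,
        // so the cache cannot grow the heap indefinitely.
        return size() > maxEntries;
    }
}
```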

answered
0

Have you looked at the Running Now section of Metrics? Look for long-running requests that could be stuck; these can keep objects alive.

 

This is a screenshot of a free app, but you should have access to this feature on a licensed cloud node.
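If you want to check the same thing outside the portal, a thread dump shows which requests or threads are stuck: locally you can take one with JVisualVM or jstack, or programmatically along these lines (a generic JVM sketch, not Mendix-specific):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadSnapshot {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // Dump all threads including lock information; stuck request threads
        // usually appear in the same stack frame on every snapshot you take.
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
            System.out.println(info);
        }
    }
}
```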

answered
0

Valentijn, have you found a solution to the problem of increasing memory usage in the unused app?

We are facing the same problem with one of our apps (also running on a small container with max. 1 GB of memory). The graphs are taken from the unused production version:

 

answered
0

Try to see if the problem can be reproduced locally. I often use JVisualVM, available in one of the JDK subfolders, to monitor what the memory does on my local system. I believe it also comes with options to create heap dumps and such.
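Alongside JVisualVM, you can also log the heap numbers yourself while running locally; a minimal sketch using the standard MemoryMXBean:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapLogger {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        while (true) {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            // used vs. committed vs. max gives a rough picture of heap growth over time
            System.out.printf("heap used=%d MB committed=%d MB max=%d MB%n",
                    heap.getUsed() / (1024 * 1024),
                    heap.getCommitted() / (1024 * 1024),
                    heap.getMax() / (1024 * 1024));
            Thread.sleep(10_000); // sample every 10 seconds
        }
    }
}
```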

answered
0

I have the same problem with the system memory.

After some research and testing I could confirm that the JVM heap garbage collector works fine. It only cleans up the tenured generation after it hits 2/3 of the total available space; I tested this and it works as expected.
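You can verify this yourself by watching the tenured/old generation pool directly; a rough sketch (the pool name filter is an assumption, since the name differs per garbage collector, e.g. "G1 Old Gen" or "PS Old Gen"):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class OldGenWatcher {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // Only look at heap pools; the tenured/old generation pool is typically
            // named "G1 Old Gen", "PS Old Gen" or similar depending on the collector.
            if (pool.getType() == MemoryType.HEAP && pool.getName().contains("Old")) {
                long used = pool.getUsage().getUsed();
                long max = pool.getUsage().getMax();
                System.out.printf("%s: used=%d MB of max=%d MB%n",
                        pool.getName(), used / (1024 * 1024), max / (1024 * 1024));
            }
        }
    }
}
```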

The system memory is a different problem. Even when the JVM heap is cleaned and drops back down, the system memory won't decrease; it climbs to about 95% and then stays there. Even when the application is being used, the memory won't increase further.

 

We implemented a workaround where we stop and start the environment via the Deploy API and a script.
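Roughly what such a script does, as a sketch in Java; the endpoint paths, headers and placeholder values below are assumptions based on the Deploy API pattern, so check the current Mendix Deploy API documentation before using anything like this:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EnvironmentRestart {
    // Placeholders: fill in your own app name, environment and API credentials.
    private static final String APP = "your-app-name";
    private static final String ENV = "acceptance";
    private static final String USER = "your-mendix-username";
    private static final String API_KEY = "your-api-key";

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Stop, then start the environment. A real script would poll the environment
        // status between the two calls instead of firing them back to back.
        for (String action : new String[] {"stop", "start"}) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://deploy.mendix.com/api/1/apps/" + APP
                            + "/environments/" + ENV + "/" + action))
                    .header("Mendix-Username", USER)
                    .header("Mendix-ApiKey", API_KEY)
                    .POST(HttpRequest.BodyPublishers.noBody())
                    .build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(action + " -> HTTP " + response.statusCode());
        }
    }
}
```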

answered