Hi Team, I am working on an application where I have a microflow running over 13 million records, processed in batches of 500 using an offset. This process throws "Maximum run time exceeded, framework is now terminating" on my local system, and on the server:

CRITICAL - ActionManager: Error in execution of monitored action 'Integration.SUB_ClearLog' (execution id: 34-a2f9-4944-a0f7-7bfb0aab1b90, execution type: SCHEDULED_EVENT)
2021-04-06T13:50:41.476742 [APP/PROC/WEB/0] CRITICAL - ActionManager: java.lang.OutOfMemoryError: Java heap space
2021-04-06T13:50:41.476748 [APP/PROC/WEB/0] at java.util.Arrays.copyOfRange(Arrays.java:3664)
2021-04-06T13:50:41.476776 [APP/PROC/WEB/0] at java.lang.String.<init>(String.java:207)

Can you please guide me on what is going wrong here?
The root cause here is java.lang.OutOfMemoryError: Java heap space: the JVM ran out of heap memory while processing your batches.

To resolve this, either decrease your batch size (in this case, lower than 500), or give your app more Java heap memory. How to do the latter depends on where the app is running:

Mendix public cloud: purchase a larger container or more memory, then resize/scale the app.
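To see why the batch size bounds memory, here is a minimal sketch of offset-based batch processing in plain Python. This is an illustration of the pattern only; the function names (`fetch_batch`, `process_in_batches`) are made up and are not the Mendix API:

```python
# Illustrative offset/limit batching: only `batch_size` records are
# held in memory at any one time. Hypothetical names, not Mendix.

def fetch_batch(records, offset, batch_size):
    """Stand-in for a database retrieve with an offset and a limit."""
    return records[offset:offset + batch_size]

def process_in_batches(records, batch_size, handle):
    """Process all records in fixed-size batches; returns the count."""
    offset = 0
    count = 0
    while True:
        batch = fetch_batch(records, offset, batch_size)
        if not batch:
            break
        for record in batch:
            handle(record)
        count += len(batch)
        # The batch goes out of scope here, so its memory can be
        # reclaimed before the next one is fetched.
        offset += batch_size
    return count

if __name__ == "__main__":
    processed = []
    total = process_in_batches(list(range(1300)), 500, processed.append)
    print(total)
```

The peak memory per iteration is proportional to the batch size, which is why lowering it (or raising the heap) stops the OutOfMemoryError.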
In my opinion, increasing memory is not the best solution here. If you want to clear a database table of 13 million records, you can also use a lower-level database approach.
For example, I use the native SQL script below (MySQL syntax) to clear a table with millions of records in under a second:
-- Create an empty table with the same structure
CREATE TABLE new_integration_run_logs LIKE integration_run_logs;
-- Atomically swap the empty table in place of the full one
RENAME TABLE integration_run_logs TO old_integration_run_logs, new_integration_run_logs TO integration_run_logs;
-- Drop the old table, and all its rows, in one operation
DROP TABLE old_integration_run_logs;
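The same drop-instead-of-delete idea can be tried out with SQLite from Python. This is a self-contained sketch only: the table and column names are illustrative, SQLite stands in for the actual database, and the rename syntax differs slightly from the MySQL script above:

```python
import sqlite3

# Clear a table by swapping in an empty copy and dropping the old one,
# instead of deleting rows one by one. Illustrative names and schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE integration_run_logs (id INTEGER PRIMARY KEY, message TEXT)"
)
conn.executemany(
    "INSERT INTO integration_run_logs (message) VALUES (?)",
    [("log entry",)] * 10_000,
)

# Move the full table aside, recreate it empty, then drop the old one.
conn.execute("ALTER TABLE integration_run_logs RENAME TO old_integration_run_logs")
conn.execute(
    "CREATE TABLE integration_run_logs (id INTEGER PRIMARY KEY, message TEXT)"
)
conn.execute("DROP TABLE old_integration_run_logs")

count = conn.execute("SELECT COUNT(*) FROM integration_run_logs").fetchone()[0]
print(count)  # the table is empty again
```

Dropping a table is a metadata operation, so it avoids the per-row work (and per-row memory) that makes deleting millions of records through a microflow so expensive.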