The CommunityCommons module contains a DeleteAll Java action that removes all objects of an entity, deleting them in batches. You may be better off using that if you need to delete everything.
Mendix recommends not pulling more than 3,000 objects into memory at once. Unfortunately, I cannot point you to the exact source for that right now.
If you are aggregating values, a retrieve followed by an aggregate function will only pull the aggregated value into memory, not the entire list (but only if the retrieved list is not used again elsewhere in the same microflow!).
Hope that helps!
In this case, a better option is to retrieve objects in batches of 2,000-3,000 and then delete them. In the retrieve action, use the 'Custom' range instead of 'All'.
See this for more information https://docs.mendix.com/refguide/retrieve#3-3-2-range
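The batched retrieve-and-delete pattern can be sketched in plain Java. Note that the repository below is only a stand-in for a Mendix retrieve with a 'Custom' range (amount = batch size, offset = 0), not the actual Mendix API; the class and method names are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of batched deletion. 'store' models the database table;
// retrieveBatch models a retrieve with range Custom (amount = BATCH_SIZE, offset = 0).
public class BatchDelete {
    static final int BATCH_SIZE = 2000;

    // Simulated data store holding object ids.
    static List<Long> store = new ArrayList<>();

    // Retrieve at most 'amount' objects starting at offset 0.
    static List<Long> retrieveBatch(int amount) {
        int end = Math.min(amount, store.size());
        return new ArrayList<>(store.subList(0, end));
    }

    // Delete the retrieved objects from the store.
    static void delete(List<Long> batch) {
        store.subList(0, batch.size()).clear();
    }

    public static void main(String[] args) {
        for (long i = 0; i < 10_000; i++) store.add(i);

        int batches = 0;
        List<Long> batch;
        // Keep retrieving at offset 0: each delete shifts the remaining
        // objects forward, so offset 0 always sees unprocessed data.
        while (!(batch = retrieveBatch(BATCH_SIZE)).isEmpty()) {
            delete(batch);
            batches++;
        }
        System.out.println("batches=" + batches + " remaining=" + store.size());
    }
}
```

The key detail is keeping the offset at 0: because each pass deletes the objects it retrieved, the next retrieve at offset 0 automatically sees the not-yet-processed ones.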
Thanks for your input!
The Java action turned out to be very slow for 1,600,000 objects. Is there a better, faster way to handle data volumes like this?
You could use the executeWithConnection API to run a SQL query directly against the database:
Be careful with this API! It allows any SQL statement to be executed.
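A minimal sketch of such a Java action body, assuming the Core.dataStorage().executeWithConnection API and that the entity maps to a table named "mymodule$myentity" (the table name is an assumption; verify it against your actual database schema). This cannot run outside the Mendix runtime, so treat it as a sketch, not a drop-in implementation:

```java
import java.sql.SQLException;
import java.sql.Statement;

import com.mendix.core.Core;

public class DeleteAllRowsDirect {
    public static int deleteAllRows() {
        // executeWithConnection hands us a raw JDBC connection to the app database.
        return Core.dataStorage().executeWithConnection(connection -> {
            try (Statement stmt = connection.createStatement()) {
                // A single DELETE is far faster than deleting objects one by one,
                // but it bypasses Mendix delete events, security, and the object
                // cache entirely -- make sure none of those matter here.
                return stmt.executeUpdate("DELETE FROM \"mymodule$myentity\"");
            } catch (SQLException e) {
                throw new RuntimeException(e);
            }
        });
    }
}
```

Because this skips the Mendix object lifecycle, any before/after delete events on the entity will not fire, and objects already retrieved in memory will be stale.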