I have a process that loops through a set of users in the system and, for each user, handles a large number of related objects. These objects are retrieved only once at the beginning of the parent microflow; then, in a sub-microflow, they are copied and assigned to each user. They are committed in the sub-microflow for each user, but because the sub-microflow is part of the same transaction as the parent microflow, the objects are not cleared for garbage collection until the end of the parent microflow. So when executing this flow for, say, 50 users, I very quickly run out of heap space in the JVM and cannot complete the process, because something like 100,000-200,000 objects are created for each user. I also cannot see how to reorganize this flow to do the same thing but have the objects cleared after each user, for instance.

Is there any way to decouple the sub-microflow from the parent transaction so that it really commits the objects and clears them for garbage collection? If not, this is a huge limitation for us and will create major problems. I've considered batch processing, but the batching would still have to happen in the sub-microflow, so I don't think it will help me at all in this case.
Even if you manage to clear the list, as you said, all the commits are still part of the same DB transaction.
This means that when your microflow finishes and the DB transaction has to be resolved and committed, it will most likely hang and time out.
So you need to make sure the objects are processed in separate transactions. You have two options:
1) The EndTransaction Java action from Community Commons - simpler, but not so nice, since it interferes with the default Mendix transaction behaviour.
2) The ProcessQueue module, to process the objects in batches of e.g. 100 objects each.
I always prefer ProcessQueue for heavy-duty tasks, although it takes some minutes to get the module set up properly.
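To make the memory effect concrete, here is a plain-Java sketch of the pattern both options aim for. This is not the Mendix API, and the class and method names are made up: the point is that each user's objects are created and "committed" inside the loop as their own unit of work, so nothing keeps the whole batch reachable until the parent flow ends and the GC can reclaim each batch as it goes. In Mendix, the stand-in commit below would be a real commit followed by the Community Commons EndTransaction/StartTransaction pair, or a ProcessQueue job per user.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only (not Mendix code): per-user units of work so each
// batch of objects becomes garbage-collectable after its own commit, instead
// of all batches being held until the end of one big parent transaction.
public class BatchCommitSketch {

    // Stand-in for committing one user's objects in a separate DB transaction.
    static int commitInOwnTransaction(List<String> objects) {
        return objects.size(); // pretend the objects are persisted here
    }

    // Process all users, building and committing the copies per user.
    static int processAll(List<String> users, int objectsPerUser) {
        int total = 0;
        for (String user : users) {
            // Build this user's copies inside the loop, not once up front.
            List<String> copies = new ArrayList<>();
            for (int i = 0; i < objectsPerUser; i++) {
                copies.add(user + "-object-" + i);
            }
            total += commitInOwnTransaction(copies);
            // After the per-user "transaction" closes, nothing references
            // this batch any more, so the garbage collector can reclaim it
            // before the next user is processed.
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(processAll(List.of("alice", "bob", "carol"), 100));
    }
}
```

The same shape applies however you split the transactions: the key is that object creation, commit, and release of references all happen inside the per-user iteration.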