As a first measure when processing large amounts of data, I use offset/limit on the retrieve so the Mendix Business Server (MBS) memory stays under control. To avoid spooling up the database server's memory (the default behaviour is a single full commit or rollback at the end), you can use EndTransaction, also available in Community Commons, to create 'hard' commit points along the way (if your transaction scope allows this). Alternatively, you could move to more asynchronous processing with the Process Queue module, or is that already in place? That is not clear to me.
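The offset/limit batching pattern with intermediate commit points can be sketched in plain Java. This is only an analogy, not the Mendix Core API: `fetchBatch` is a hypothetical stand-in for a retrieve with limit/offset, and the commit-point comment marks where an EndTransaction call would go.

```java
import java.util.List;

public class BatchProcessor {
    static final int LIMIT = 2; // batch size; small enough to bound memory use

    // Hypothetical stand-in for a database retrieve with limit/offset.
    static List<String> fetchBatch(List<String> source, int offset, int limit) {
        if (offset >= source.size()) {
            return List.of(); // nothing left to retrieve
        }
        return source.subList(offset, Math.min(offset + limit, source.size()));
    }

    public static int processAll(List<String> source) {
        int processed = 0;
        int offset = 0;
        while (true) {
            List<String> batch = fetchBatch(source, offset, LIMIT);
            if (batch.isEmpty()) {
                break; // all batches done
            }
            for (String item : batch) {
                processed++; // process one object here
            }
            // Here you would create a 'hard' commit point (e.g. EndTransaction
            // from Community Commons) so the database server does not hold the
            // whole run in one open transaction.
            offset += LIMIT;
        }
        return processed;
    }

    public static void main(String[] args) {
        System.out.println(processAll(List.of("a", "b", "c", "d", "e"))); // prints 5
    }
}
```

The key point is that memory use is bounded by the batch size, not by the total number of objects.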
We also ran into issues with asynchronous processing. It is important to have the latest version of Process Queue, as it includes fixes for session issues - check the release notes: https://appstore.home.mendix.com/link/app/393/Mendix/Process-Queue
If you want to have multiple objects processed in a single run, you can use the QueuedAction object:
1. Create an association between the QueuedAction and the entity that you are processing in the queue.
2. Then, before scheduling the process, associate the batch of objects you want to process with the QueuedAction. Use $Limit and $Offset to retrieve the objects in batches, just as Remco suggests.
3. In the processing microflow, retrieve the objects over the association from the QueuedAction and process them.
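The three steps above can be illustrated with a rough plain-Java analogy. In real Mendix the QueuedAction, its association, and the processing microflow are modeled visually, so everything here (the `QueuedAction` class, `schedule`, `process`) is a hypothetical stand-in, not actual Process Queue code.

```java
import java.util.ArrayList;
import java.util.List;

// Step 1: the association between the QueuedAction and the entity being
// processed is modeled here as a simple list of object references.
class QueuedAction {
    final List<String> batch = new ArrayList<>();
}

public class QueueDemo {
    // Step 2: before scheduling, associate the batch of objects to process
    // with the QueuedAction.
    static QueuedAction schedule(List<String> objects) {
        QueuedAction action = new QueuedAction();
        action.batch.addAll(objects);
        return action;
    }

    // Step 3: the processing "microflow" retrieves the associated objects
    // from the QueuedAction and processes them.
    static int process(QueuedAction action) {
        int count = 0;
        for (String obj : action.batch) {
            count++; // process each associated object here
        }
        return count;
    }

    public static void main(String[] args) {
        QueuedAction action = schedule(List.of("order-1", "order-2", "order-3"));
        System.out.println(process(action)); // prints 3
    }
}
```

The design benefit is that one queued action carries a whole batch, so the queue overhead is paid once per batch rather than once per object.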
Hope this helps,