I am still in batch-wrestling mode and have not found a solution yet. Maybe one of you can give me new insight into how to tackle this. My problem: I want to process 20,000 objects, but just retrieving the full list already uses a lot of memory. So the first solution, committing the items in a batch, does not work: by the time I change the objects in the list, I have already run out of memory.

The second solution was to do a partial retrieve, as in the post "best way to deal with a huge list", and use the Community Commons module to add the items to a batch and commit the batch. I even used separate microflows with custom error handling set to "continue". This worked slightly better, because at least the items processed before the out-of-memory error get committed, so the second time you run it, it completes. But this is still unsatisfactory to me.

How can I avoid this out-of-memory error completely? When, according to Mendix, is the database transaction completed? My assumption was that it completed when the single microflow ended, since custom error handling was set, but that is obviously wrong: Mendix still keeps the objects in memory until the whole chain of microflows has ended, even if I use the CommitBatch Java action from Community Commons.
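The limit/offset pattern described above (retrieve one page, commit it, drop it, move to the next) can be sketched in plain Java, independent of the Mendix runtime. This is a hedged illustration, not Mendix API: `fetchPage` stands in for a retrieve with limit and offset, and the `commit` callback stands in for a Commit action or Community Commons CommitBatch. The point is that only one page of objects is referenced at a time, so memory stays bounded regardless of the total count.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class BatchProcessor {

    // Stand-in for a paged database retrieve: returns at most 'limit' ids
    // starting at 'offset', and an empty list once the source is exhausted.
    static List<Integer> fetchPage(int total, int offset, int limit) {
        List<Integer> page = new ArrayList<>();
        for (int i = offset; i < Math.min(offset + limit, total); i++) {
            page.add(i);
        }
        return page;
    }

    // Walks the whole source one page at a time, handing each page to
    // 'commit'; returns the total number of items processed. After each
    // iteration the page goes out of scope and can be garbage collected.
    static int processInBatches(int total, int batchSize, Consumer<List<Integer>> commit) {
        int processed = 0;
        int offset = 0;
        while (true) {
            List<Integer> page = fetchPage(total, offset, batchSize);
            if (page.isEmpty()) {
                break;
            }
            commit.accept(page);
            processed += page.size();
            offset += batchSize;
        }
        return processed;
    }

    public static void main(String[] args) {
        int[] batches = {0};
        int n = processInBatches(20000, 1000, page -> batches[0]++);
        System.out.println(n + " items in " + batches[0] + " batches");
    }
}
```

As the question notes, in Mendix this alone is not enough: the runtime keeps committed objects in memory until the surrounding transaction ends, which is why the transaction-per-batch approach in the answer below matters.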
The way to do this is by creating a new context. Within this context you can call startTransaction() and endTransaction() to control your transaction. Do not end the transaction started in your regular context; that could lead to weird behavior.
So, in pseudocode (typing this from memory):

IContext newContext = currentContext.getSession().getContext();
newContext.startTransaction();
// do stuff with your new context (retrieve, change, commit a batch)
newContext.endTransaction();
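Combining this with the limit/offset retrieve from the question gives a transaction per batch, so committed objects can be released before the next batch starts. A hedged sketch in the same Java-flavored pseudocode (`retrieveBatch` is a placeholder, not exact Mendix API; `Core.commit`, `startTransaction`, and `endTransaction` are from the Mendix Core API, but check the docs for the real signatures in your version):

```
int offset = 0;
while (true) {
    newContext.startTransaction();
    // placeholder for a retrieve with limit 1000 and the given offset
    List<IMendixObject> batch = retrieveBatch(newContext, offset, 1000);
    if (batch.isEmpty()) {
        newContext.endTransaction();
        break;
    }
    // change the objects, then commit them inside this transaction
    Core.commit(newContext, batch);
    // the transaction ends here, so the batch can be garbage collected
    newContext.endTransaction();
    offset += 1000;
}
```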
Sebastiaan van den Broek
Super valuable info. I was banging my head against this problem for over a day, wondering why records weren't getting written to the database.