When you say you tried it with batching, I assume you took 100 objects, processed them, committed them, cleared your list, and then started processing the next 100.
Does the problem still happen when you process and commit 100 objects at a time?
Also, clearing your list does not necessarily free the memory; the objects only become reclaimable once the garbage collector runs.
Can you also check the size of the objects you are processing? For example, if each of the 100 objects has many attributes, and those attributes are mostly strings, the size of each object grows, which also has a negative impact on memory.
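To make the batching pattern from the question concrete, here is a minimal Java sketch of "process 100, commit, clear the list, repeat". The class and method names (`BatchProcessor`, `commitBatch`) are hypothetical stand-ins, not Mendix APIs; in a real microflow the commit would be a Commit activity (or `Core.commit` in a Java action).

```java
import java.util.ArrayList;
import java.util.List;

public class BatchProcessor {
    // Hypothetical counter standing in for a real database commit
    static int commits = 0;

    static void commitBatch(List<String> batch) {
        // In Mendix this would be a commit of the batch; here we just count
        commits++;
    }

    public static void main(String[] args) {
        // 250 dummy objects to process
        List<String> all = new ArrayList<>();
        for (int i = 0; i < 250; i++) all.add("obj" + i);

        List<String> batch = new ArrayList<>();
        for (String obj : all) {
            batch.add(obj);          // "process" the object
            if (batch.size() == 100) {
                commitBatch(batch);  // commit the full batch of 100
                batch.clear();       // clearing the list alone does not free
                                     // memory until the garbage collector runs
            }
        }
        if (!batch.isEmpty()) {      // commit the remaining partial batch
            commitBatch(batch);
            batch.clear();
        }
        System.out.println(commits); // 3 commits for 250 objects
    }
}
```

Note that even with this pattern, memory can still grow if something else (the database's rollback bookkeeping, or refresh events in the loop) holds on to the committed objects, which is what the answers below address.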
The problem you might be facing is that both the runtime and the database require memory.
The database keeps the changed information in case a rollback is needed, so the already processed objects are kept in memory by the database.
A possible solution could be to do the processing in a sub-microflow and then, in the calling microflow, set the error handler on the sub-microflow activity to 'Custom without rollback'. That way the objects are released from memory sooner once they are committed.
Can you also check that you don't have any refresh events inside the loop?
A possible solution is to use a sub-microflow with batching.