I implemented the same microflow with both the Database connector and a Fetch file from URL action. Fetching from a URL works as expected, but with the Database connector there is a memory leak.
I wonder what would happen if you use this workaround:
Create an entity where you store the limit and offset values. Create a scheduled event that processes only the file documents within the current limit and offset, and update the limit and offset after each run. If you run this scheduled event every x minutes (depending on how long a batch takes), it should hopefully finish. It is of course not very efficient, but I wonder whether the memory is released again this way. If so, a bug report would be the way to go.
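The batching workaround above can be sketched roughly as follows. This is a hedged, language-agnostic sketch in Python, not Mendix code: `fetch_batch` and the `state` dict are hypothetical stand-ins for the Database connector query and the entity that persists the offset, and the processing step is where the file document commit would happen.

```python
# Sketch of the limit/offset batching workaround (assumptions labeled):
# - fetch_batch() stands in for the Database connector retrieve, e.g. a
#   query ending in "... LIMIT %s OFFSET %s" against the source table.
# - 'state' stands in for the entity that stores the current offset, so
#   each scheduled-event run picks up where the previous one stopped.

BATCH_LIMIT = 100
TOTAL_ROWS = 1000  # hypothetical size of the source table

def fetch_batch(offset, limit):
    # Placeholder for the Database connector query; returns one window
    # of rows so earlier batches can be garbage-collected between runs.
    return list(range(offset, min(offset + limit, TOTAL_ROWS)))

def run_scheduled_event(state):
    """One scheduled-event run: process a single window, advance offset."""
    batch = fetch_batch(state["offset"], BATCH_LIMIT)
    for doc in batch:
        pass  # process/commit each file document here
    state["offset"] += len(batch)
    # A short batch means the table is exhausted; stop scheduling.
    return len(batch) == BATCH_LIMIT

state = {"offset": 0}
while run_scheduled_event(state):
    pass  # in Mendix this loop is the scheduled event firing every x minutes
```

The key design point is that each run holds only one batch in memory and persists nothing but the next offset, so whatever the connector retains is at least bounded per run.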
Behaviour reproduced in a separate project with a local PostgreSQL DB. Filed a ticket with Mendix to look into the issue.
Thanks Ronald for your reply. If I run it in separate microflows with different offsets, the memory is indeed released by the garbage collector, so it looks like a memory leak in the file document commit or the Database connector.
I'm trying to reproduce the behavior in an isolated project, so I can send it to Mendix.