Performance issue with large input data sets in a microflow
When using a large data collection (~17,000 objects) as an input list for a microflow, performance degrades dramatically. It appears that Mendix is doing a large number of pre-loads, possibly in memory, which slows the application down. With a much smaller data set, e.g. 100 objects, the application performs fine. Is there a way to avoid this slowdown? Can I split the input into separate batches?
The best way is to process the list in smaller batches so the approach stays scalable. Split your list into chunks of 100 objects and pass each chunk to your microflow in turn. This keeps the working set small and lets you handle any number of objects.
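Microflows are modeled visually, so there is no literal code for the batching step, but the idea can be sketched in plain Java (the language Mendix custom actions are written in). The `toBatches` helper below is hypothetical, not a Mendix API; it just shows how to cut a large list into fixed-size chunks that would each be handed to the microflow separately:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchProcessor {

    // Split a list into consecutive sublists of at most batchSize elements.
    static <T> List<List<T>> toBatches(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            batches.add(items.subList(i, Math.min(i + batchSize, items.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        // Simulate the ~17,000-object input list from the question.
        List<Integer> data = new ArrayList<>();
        for (int i = 0; i < 17000; i++) {
            data.add(i);
        }

        // Process each batch of 100 in turn instead of the whole list at once.
        for (List<Integer> batch : toBatches(data, 100)) {
            // ...call the microflow (or equivalent logic) with this batch...
        }
    }
}
```

Each iteration then only holds 100 objects in scope, so memory use stays bounded regardless of the total input size.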
See this answer for an example of how to split a list into batches.