Instead of splitting the CSV file, you can also split the processing of the import rows in your microflow. In other words: process the import rows in batches.
Okay, sounds great. But how do you do that? For this you need Java, because it's currently not possible to pass an amount and offset to a list/retrieve activity in a microflow. That's no problem, though, because you can retrieve the limited list via Java. You could also do the complete job in Java, but in my opinion a solution with as much logic as possible in microflows is better.
So do the following:
Use Core.retrieveXPathQuery(IContext context, String xPathQuery, int amount, int offset, Map<String, String> sortMap) to retrieve the objects you want.
NOTE: When you have to deal with very many objects (> 10,000), consider using separate transactions for the processing. Otherwise you may run into database/memory trouble.
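The batching loop could look roughly like the sketch below. Since Core.retrieveXPathQuery only runs inside the Mendix runtime, the sketch uses a hypothetical fetchBatch stand-in with the same amount/offset semantics; in a real Java action you would replace it with the actual Core.retrieveXPathQuery call and process each batch (e.g. by calling a microflow). The names BatchProcessor, fetchBatch, and BATCH_SIZE are my own, not from the Mendix API.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchProcessor {
    static final int BATCH_SIZE = 100;

    // Stand-in for Core.retrieveXPathQuery(context, xpath, amount, offset, sortMap):
    // returns at most `amount` items starting at `offset` from a backing list.
    static List<Integer> fetchBatch(List<Integer> source, int amount, int offset) {
        int end = Math.min(offset + amount, source.size());
        if (offset >= end) {
            return new ArrayList<>();
        }
        return new ArrayList<>(source.subList(offset, end));
    }

    public static void main(String[] args) {
        // Simulated import rows; in Mendix these would be objects retrieved by XPath.
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 250; i++) {
            rows.add(i);
        }

        int offset = 0;
        int processed = 0;
        while (true) {
            List<Integer> batch = fetchBatch(rows, BATCH_SIZE, offset);
            if (batch.isEmpty()) {
                break; // no more rows to process
            }
            // ... process the batch here, e.g. pass it to a microflow and commit ...
            processed += batch.size();
            offset += batch.size();
        }
        System.out.println(processed); // prints 250
    }
}
```

The key point is that only BATCH_SIZE objects are retrieved at a time, so memory use stays flat regardless of the total row count.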
I've got the same scenario, except that I am dealing with up to 400,000 records.
I notice that this is a fairly old post, is it still the best solution for the problem?
With a little bit of Java knowledge I think it is quite feasible to use stream processing to read your CSV file: open a file stream to the CSV file, read a couple of lines, open a new Mendix context, parse the lines into objects, commit, and close the context.
The advantage of reading from a filestream is that no data is kept in memory, except for the current line.
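A minimal sketch of that stream approach, using plain java.io so it runs outside Mendix: it reads the CSV line by line and "commits" every few lines, so at most one small batch is in memory at once. The class and method names are my own, the CSV parsing is a naive comma split (real CSV files with quoted fields need a proper parser), and the commit step is just a counter where a Mendix context create/commit/close would go.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class CsvStreamImport {
    static final int BATCH_SIZE = 2;

    // Parse one CSV line into fields (naive split; quoted fields need a real parser).
    static String[] parseLine(String line) {
        return line.split(",");
    }

    // Stream the CSV, committing every BATCH_SIZE lines so only the current
    // batch is ever held in memory. Returns the number of committed batches.
    static int importCsv(BufferedReader reader) throws IOException {
        List<String[]> batch = new ArrayList<>();
        int commits = 0;
        String line;
        while ((line = reader.readLine()) != null) {
            batch.add(parseLine(line));
            if (batch.size() == BATCH_SIZE) {
                // In Mendix: open a fresh context, create the objects, commit, close.
                commits++;
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            commits++; // commit the remaining partial batch
        }
        return commits;
    }

    public static void main(String[] args) throws IOException {
        // StringReader stands in for a FileReader over the uploaded CSV file.
        String csv = "a,1\nb,2\nc,3\nd,4\ne,5\n";
        System.out.println(importCsv(new BufferedReader(new StringReader(csv)))); // prints 3
    }
}
```

With a real file you would wrap a FileReader (or an InputStreamReader over the FileDocument's contents) in the BufferedReader, and pick a larger batch size such as 1,000.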