I don't know the exact solution, but here is what I would try:
1 – Schedule a sync request that determines the number of changes per table on your master DB, using a query over the database connector.
2 – Create a process queue mechanism that fetches the updates per batch and updates your Mx db.table.
3 – Scale up the process queue (N threads; watch out for overloading your Mx server).
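The batching and thread-pool idea in the steps above can be sketched in plain Java. This is only an illustration under assumptions: the class and method names are hypothetical, the actual fetch/commit would go through the database connector and Process Queue module, and the thread count should stay bounded as step 3 warns:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class BatchSync {
    // Step 1: split the total change count per table into [offset, limit] batches.
    static List<long[]> planBatches(long totalRecords, int batchSize) {
        List<long[]> batches = new ArrayList<>();
        for (long offset = 0; offset < totalRecords; offset += batchSize) {
            long limit = Math.min(batchSize, totalRecords - offset);
            batches.add(new long[] { offset, limit });
        }
        return batches;
    }

    // Steps 2–3: process all batches on a bounded pool of N threads,
    // capped so the Mx server is not overloaded.
    static long processAll(long totalRecords, int batchSize, int threads)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicLong processed = new AtomicLong();
        for (long[] batch : planBatches(totalRecords, batchSize)) {
            pool.submit(() -> {
                // Here you would fetch batch[1] rows starting at batch[0]
                // from the master DB and commit them to the Mx table.
                processed.addAndGet(batch[1]);
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return processed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 1M records in batches of 1000 on 4 worker threads.
        System.out.println(processAll(1_000_000, 1000, 4));
    }
}
```

The key design point is that the pool size is the only knob you scale; the batch plan itself stays fixed, so you can tune threads without reworking the sync logic.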
Thank you for sharing your issue. I am having exactly the same problem: trying to fill the Postgres database of Mendix from a SQL Server database hosted in Google Cloud. The database contains several tables with over a million records. At the current speed these actions take 5 days to complete.
I also applied the tips from Sinan's reply: the processes are split into 5 steps, where each step is created as a process in the process queue. That gives me the opportunity to experiment with the number of threads. I tried 1 up to 10 threads, with little difference in speed, to be honest.
So the speed problem continues, and besides this there is another issue in play: Java heap space. When you replicate very large database tables, the Java heap space tends to fill up, followed by a full server restart, meaning you have to start over again from the beginning.
On top of this I added a little extra to lower the Java heap usage between the different table mapping actions, by adding an end transaction after each replicate action. This does seem to lower the heap memory usage. However, as said before, when the table itself is big enough it will still run out of memory. Is this a bug in the Database Replication module? I would expect every set of 1000 records to be cleaned from memory after being replicated, but it does not appear to work that way.
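The behaviour I would expect — each set of 1000 records released after it is replicated — can be sketched in plain Java. This is a sketch under assumptions, not the module's actual code: the fetcher and class names are hypothetical, and in a real Mendix action the commit/end-transaction would take the place of the marked comment:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.LongFunction;

public class ChunkedReplicate {
    static final int CHUNK = 1000;

    // Replicate `total` records in chunks of 1000. Each chunk is loaded,
    // processed, and dereferenced before the next chunk is fetched, so
    // peak heap usage stays at one chunk regardless of table size.
    static long replicate(long total, LongFunction<List<String>> fetchChunk) {
        long done = 0;
        while (done < total) {
            List<String> rows = fetchChunk.apply(done); // fetch next 1000 rows
            done += rows.size();
            // commit + end transaction would go here in a Mendix action;
            // after this point `rows` is unreachable and can be collected
            rows = null;
        }
        return done;
    }

    public static void main(String[] args) {
        // Hypothetical fetcher: generates up to 1000 dummy rows per call.
        long total = 2500;
        long done = replicate(total, offset -> {
            int n = (int) Math.min(CHUNK, total - offset);
            List<String> rows = new ArrayList<>(n);
            for (int i = 0; i < n; i++) rows.add("row-" + (offset + i));
            return rows;
        });
        System.out.println(done);
    }
}
```

If the module instead keeps a reference to every replicated batch until the whole table is done, no amount of end-transaction calls in between will stop the heap from eventually filling on a big enough table, which would match the behaviour described above.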
Does anyone have another suggestion for how to replicate a very large database?