Database delete rolls back

We have an on-premises deployment where we are trying to delete a large dataset of ~350k records together with their related data. The delete process is rather slow (due to the related data): it removes roughly 10k records per hour, so at that rate the full delete would take ~35 hours. At 5:30 in the morning we get a connection pool reset error in the logs, and it seems as if the database rolls back the delete after this.

Connection pool error message (some stack frames were truncated when copying the log):

    2020-01-28 05:30:51.547 ERROR - ConnectionBus_Retrieve: JdbcDataStore::execRetrieveQuery failed with SQLState: 08S01, error code: 0, message: Connection reset
    2020-01-28 05:30:51.547 ERROR - ConnectionBus_Retrieve: Connection reset
        at$StmtExecCmd.doExecute(
        at org.apache.commons.dbcp2.DelegatingStatement.executeQuery(
        at org.apache.commons.dbcp2.DelegatingStatement.executeQuery(
        at com.mendix.connectionbus.connections.jdbc.JdbcDataStore.execRetrieveQuery(
        at com.mendix.connectionbus.connections.jdbc.JdbcDataStore.execRetrieveQueries(
        at com.mendix.connectionbus.connections.jdbc.JdbcDataStore.retrieve(
        at com.mendix.connectionbus.connections.jdbc.JdbcDataStore.retrieve(
        at Source)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$ Source)
        at Source)

According to this Stack Overflow post, SQLState 08S01 possibly indicates a badly configured firewall. Nothing in the connection pool settings immediately jumps out as a likely culprit (see documentation here).

What have we tried so far (without success)?

  • Implemented our delete inside a loop, deleting 1000 records at a time
  • Used the CommunityCommons deleteAll Java action
  • Used the CommunityCommons startTransaction and endTransaction Java actions inside our loop, deleting 1000 records at a time

With the delete process likely taking ~35 hours to complete, there is no way we will be able to delete all our records before the next connection pool reset.
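For clarity, the batch-commit pattern our loop is aiming for looks roughly like this (a minimal JDBC sketch for illustration only, not our actual microflow; the table name `MyModule$LargeRecord` and the SQL Server `DELETE TOP` syntax are assumptions):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchedDelete {

    /** Number of batches needed, rounding up (e.g. 350k records / 1000 = 350 batches). */
    static int batchCount(int totalRecords, int batchSize) {
        return (totalRecords + batchSize - 1) / batchSize;
    }

    /** Delete in batches, committing after each batch so a connection
     *  reset only loses the batch that was in flight, not all progress. */
    static void deleteInBatches(Connection con, int batchSize) throws SQLException {
        con.setAutoCommit(false);
        // Hypothetical table name; TOP (?) is SQL Server syntax.
        String sql = "DELETE TOP (?) FROM MyModule$LargeRecord";
        int deleted;
        do {
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, batchSize);
                deleted = ps.executeUpdate();
            }
            con.commit(); // each batch is its own transaction
        } while (deleted == batchSize);
    }
}
```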
Has anyone else experienced this before?  Do you have a way around the problem?
2 answers

Some options are:

  1. Use the ProcessQueue module to delete in batches. This will not improve performance, but it lets you delete in separate transactions.
  2. Run every delete in a separate transaction (there are Java actions for this in CommunityCommons) – however, be careful when fiddling around with transactions.
  3. Delete the related data in a separate step instead of letting delete behaviour handle it.

regards, Fabian

  • Just deleting in a loop will still run in a single large transaction.
  • Starting a new transaction within the main transaction only creates a subtransaction.

You don't necessarily need to set up ProcessQueue for this; you can simply use the 'executeMicroflowInBackground' or 'executeMicroflowInBatches' Java action from the CommunityCommons module. The ProcessQueue module is a bit heavyweight for this if you are not already using it for something else.

If you use executeMicroflowInBackground, create your own batching and deletion logic: emulate a loop with a counter variable and trigger the executeMicroflowInBackground Java action as many times as needed. The calls run asynchronously, so they will queue up in the background.
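The counter-driven queuing idea can be sketched in plain Java (this is an illustration of the pattern, not the CommunityCommons implementation; the single-thread executor stands in for the background microflow queue):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BackgroundBatches {

    /** Split a total record count into batch sizes, mirroring a
     *  counter-driven loop: e.g. 3500 records -> 1000, 1000, 1000, 500. */
    static List<Integer> batchSizes(int total, int batchSize) {
        List<Integer> sizes = new ArrayList<>();
        for (int remaining = total; remaining > 0; remaining -= batchSize) {
            sizes.add(Math.min(batchSize, remaining));
        }
        return sizes;
    }

    /** Submit one task per batch; a single-thread executor makes the
     *  tasks queue up and run one after another in the background. */
    static void runInBackground(List<Integer> sizes) throws InterruptedException {
        ExecutorService queue = Executors.newSingleThreadExecutor();
        for (int size : sizes) {
            queue.submit(() -> {
                // the batch-deletion microflow (deleting `size` records
                // in its own transaction) would run here
            });
        }
        queue.shutdown();
        queue.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```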

Furthermore, consider retrieving and deleting your data manually from the bottom up, rather than deleting the top-level objects and letting delete behaviour cascade. Retrieving and deleting whole lists manually is much, much faster than deleting objects one by one via delete behaviour.
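To make the "delete lists, bottom up" idea concrete, here is a small plain-Java sketch (the Parent/Child model is invented for illustration; in a real Mendix Java action the list deletes would be Core API calls, not the loop shown here):

```java
import java.util.ArrayList;
import java.util.List;

public class BottomUpDelete {
    record Parent(int id) {}
    record Child(int parentId) {}

    /** Instead of deleting each parent and letting delete behaviour
     *  cascade object by object, gather the whole child list for a
     *  batch of parents and delete it as one list, then delete the
     *  parent list. Returns the number of objects handled. */
    static int deleteBatch(List<Parent> parents, List<Child> allChildren) {
        List<Child> children = new ArrayList<>();
        for (Child c : allChildren) {
            for (Parent p : parents) {
                if (c.parentId() == p.id()) {
                    children.add(c);
                    break;
                }
            }
        }
        // In Mendix this is where you would delete `children` first and
        // `parents` second, one list-delete call each.
        return children.size() + parents.size();
    }
}
```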