Hi, I have a big microflow that does a lot of create and commit actions. I use batching in this microflow because I am iterating over 20,000 items. However, when it reaches around 15,000 items, the same microflow is run again. So when the first run finishes, a duplicate run is still going, which then throws an error message, and at the end of both runs I can see that all the objects I created are duplicated.

Can anyone help me with this? I have tried enabling blocking in the microflow settings, but even with blocking enabled, the microflow still starts itself again from the beginning when it reaches around 15,000 items (sometimes at 13,000, 14,000, or 16,000). What can I do here? Also, this only happens in production, never on localhost.

Note: I get this warning in the live log after importing the document, just before the create and commit actions start: "Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use."
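Side note on that S3 warning: it generally means an input stream from S3 was closed before all of its bytes were consumed, so the underlying HTTP connection had to be aborted. One common remedy is to read the stream to the end before closing it. Here is a minimal plain-Java sketch of that principle (not Mendix- or AWS-specific code; `drain` is a hypothetical helper name):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class DrainExample {
    // Read and discard any remaining bytes so the underlying
    // connection can be reused instead of aborted.
    static long drain(InputStream in) throws IOException {
        byte[] buf = new byte[8192];
        long discarded = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            discarded += n;
        }
        return discarded;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[10_000];
        try (InputStream in = new ByteArrayInputStream(data)) {
            in.read(new byte[1_000]);  // consume only part of the stream
            long rest = drain(in);     // drain the remainder before close
            System.out.println(rest);
        }
    }
}
```

Whether the warning is actually related to the duplicate runs is a separate question, but it is worth cleaning up either way.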
I had this issue with a long-running microflow (the microflow would run twice, which resulted in duplicate objects being created). After spending a great deal of time debugging my microflow, I changed it to run asynchronously instead of synchronously and it worked.
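For context on why the async change may help: if the microflow runs synchronously inside an HTTP request, a long run can exceed a gateway or load-balancer timeout, and the retried request triggers the microflow a second time. Handing the heavy work to a background thread and returning immediately avoids that. A plain-Java sketch of the idea (this is not the actual Mendix API; `processBatch` is a hypothetical stand-in for the long-running work):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncExample {
    static final AtomicInteger processed = new AtomicInteger();

    // Hypothetical stand-in for the long-running batch work.
    static void processBatch(int size) {
        processed.addAndGet(size);
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService worker = Executors.newSingleThreadExecutor();
        // Hand the work off and return to the caller immediately,
        // so the original request finishes well within any timeout.
        worker.submit(() -> processBatch(20_000));
        worker.shutdown();
        worker.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(processed.get());
    }
}
```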
"When it reaches around 15,000, the same microflow is run again" — what makes it run again?
Does your production server have multiple instances?
Tim van Steenbergen
In my observation, whenever an error occurs in a microflow, it runs twice before terminating and throwing the error message. I have seen this while debugging. Please check why your microflow is generating that warning; I think that will solve the issue.
Be careful: "disallow concurrent execution" applies to all users, so if two or more users want to run the same microflow at the same time, that will not be possible with this option enabled.
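If the built-in setting doesn't fit your case, another pattern sometimes used is a manual guard: mark the job as running when it starts, and make any duplicate trigger skip the work. A minimal plain-Java sketch of the idea (in a Mendix app you would typically persist such a flag on an entity instead; this in-memory version is illustrative only):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class RunGuard {
    private final AtomicBoolean running = new AtomicBoolean(false);

    // Returns true only for the first caller; duplicate triggers
    // see false and skip the work.
    public boolean tryStart() {
        return running.compareAndSet(false, true);
    }

    // Clear the flag when the run completes (or fails).
    public void finish() {
        running.set(false);
    }
}
```

A duplicate trigger arriving while the first run is still in progress gets `false` from `tryStart()` and can exit without creating anything.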