I use Task Queues (TQs) for updating records, where each update performs its own commit. If one of the tasks fails, I can replay the updates for the missed records from both the TQ management data and the data itself. That way, you don’t get a functional inconsistency even if queue entries are deleted; it does require some extra logic.
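As a minimal sketch of that replay idea (plain Python, not the Mendix API; `Record`, `updated`, and `enqueue_update` are hypothetical stand-ins): because each task commits its own record, the data itself tells you which records were missed, so you can re-enqueue them even when the original queue entries are gone.

```python
from dataclasses import dataclass

@dataclass
class Record:
    id: int
    updated: bool  # set by the task's own commit when it succeeds

def replay_missed(records, enqueue_update):
    """Re-enqueue a task for every record the queue never finished.

    The replay is driven by the data (the 'updated' flag), so deleted
    queue entries cause no functional inconsistency: missed records
    are simply found again and re-enqueued.
    """
    missed = [r for r in records if not r.updated]
    for r in missed:
        enqueue_update(r.id)  # each new task commits exactly one record
    return [r.id for r in missed]

records = [Record(1, True), Record(2, False), Record(3, False)]
queued = []
print(replay_missed(records, queued.append))  # → [2, 3]
```

The "extra logic" mentioned above is essentially this check: a data-level flag or timestamp that the task sets on commit, independent of the queue's own bookkeeping.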
The batch size of a TQ commit is part of the design and follows from the purpose of the task. For updating objects via a REST API that returns the details of one object, it makes sense to give each task a single object. For a large data transformation, I would put many objects in one commit.
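The sizing choice can be sketched as a simple chunking step (illustrative Python, not Mendix; the helper name is hypothetical): the same set of object IDs is split into single-object tasks for the REST case, or larger per-commit chunks for the transformation case.

```python
def chunk(ids, batch_size):
    """Split the workload into tasks of `batch_size` objects each."""
    return [ids[i:i + batch_size] for i in range(0, len(ids), batch_size)]

ids = list(range(1, 11))
# One object per task: a REST call fetching the details of one object.
print(chunk(ids, 1))
# Many objects per commit: a large data transformation action.
print(chunk(ids, 5))  # → [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]
```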
Last but not least, I only put actions into TQs that have no relational or sequence dependencies, unless I can bundle them in the same task (creating a logical unit of work) with an all-or-nothing way of committing.
I use the “old” batch processing when I need control over the commits and rollbacks in the microflow, when I need sequencing, or when I have complex relations between objects that I cannot bundle (or bundling takes too much effort). With this way of working, you can still interact with the client; logically, that is not possible while the task runs in the TQ.
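The batch-processing pattern above can be sketched as a loop that keeps commit points, rollback, and ordering under its own control (plain Python, not a microflow; `apply_change`, `commit`, and `rollback` are hypothetical callbacks standing in for microflow activities):

```python
def process_in_batches(items, apply_change, commit, rollback, batch_size=2):
    """Apply changes in strict input order, committing every `batch_size` items."""
    batch = []
    try:
        for item in items:              # sequencing: items run in order
            batch.append(apply_change(item))
            if len(batch) == batch_size:
                commit(list(batch))     # commit point chosen by the flow
                batch.clear()
        if batch:
            commit(list(batch))         # commit the final partial batch
    except Exception:
        rollback(batch)                 # roll back the uncommitted batch
        raise

commits = []
process_in_batches([1, 2, 3, 4, 5], lambda x: x * 10,
                   commits.append, lambda b: None, batch_size=2)
print(commits)  # → [[10, 20], [30, 40], [50]]
```

This is exactly the control a TQ takes away from you: in the loop version, the flow decides when to commit, what to roll back on failure, and in what order items are processed.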
Does it make sense to you? Maybe there are other reasons to use TQs or batches, so I will stay tuned.
Go Make It