Suppose I have an app which logs events and links them to categories. I created the following domain model. Based on the logged events, I need to create a report showing counts and sums per category per month:

CategoryName | Year | Month | Number of events | Sum of NrOfReports

The user must be able to sort and filter on all columns in the report. What is the best approach to fill the MonthReport entity? Approaches I could think of:

1. An after-commit event handler on "Event" which updates "MonthReport". This has a performance impact, and it feels awkward and risky: two simultaneous updates of the same MonthReport record could result in miscounts.
2. Creating microflows that do a separate retrieve and count for each month. The microflows will become rather complex and can be slow as the number of events grows. And what would be the trigger to run the microflow?
3. Using OQL, but:
   - The reporting widget is deprecated.
   - The OQL app store component is not "platform supported".
   - I see no way in which Mendix Studio validates that the OQL script has consistent field names, which I see as a major risk to maintainability.
4. Exporting all event data to create the reports in Excel or Power BI, but the users are not tech-savvy enough to do this.

It sounds like a simple task for a database engine, but in Mendix it is not that obvious to me. Am I missing something?
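To make the requirement concrete: the report is essentially a group-by over (category, year, month). Here is a minimal plain-Java sketch of that aggregation, using a hypothetical flattened `Event` record; the names are illustrative and not the actual Mendix entities or API:

```java
import java.time.LocalDate;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class MonthReportSketch {

    // Hypothetical flattened view of the Event entity.
    record Event(String categoryName, LocalDate loggedOn, int nrOfReports) {}

    // Key for one report row: category + year + month.
    record ReportKey(String categoryName, int year, int month) {}

    // One aggregated row of the MonthReport.
    record MonthReport(ReportKey key, long eventCount, long sumOfNrOfReports) {}

    static List<MonthReport> aggregate(List<Event> events) {
        // Group events by (category, year, month)...
        Map<ReportKey, List<Event>> grouped = events.stream()
            .collect(Collectors.groupingBy(e -> new ReportKey(
                e.categoryName(), e.loggedOn().getYear(), e.loggedOn().getMonthValue())));
        // ...then count and sum within each group.
        return grouped.entrySet().stream()
            .map(entry -> new MonthReport(
                entry.getKey(),
                entry.getValue().size(),
                entry.getValue().stream().mapToLong(Event::nrOfReports).sum()))
            .toList();
    }
}
```

In SQL this would be a one-line `GROUP BY`; the question is really about where and when Mendix should run this logic.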
I would go for option 2 and use scheduled events to trigger the microflow. For the current month, this could be an SE that runs frequently: every minute, every 5 minutes, every hour, etc., depending on your requirements.
A monthly scheduled event would then produce the final update to your report, because after that last run it's no longer necessary to check previous months. After all, they're closed. Right?
It depends a bit on how real-time the result should be. If real-time is not an issue, use a scheduled event that runs at the start of each new day. I would simply process all events created the day before and update the reports accordingly.
If real-time is an issue, you would indeed need event handlers. But you could use the event handler to trigger a Java action in the background, or schedule a task that performs the update of the report data. That way it will not be heavy on your system.
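The idea of keeping the handler cheap by handing the work to a background worker can be sketched in plain Java. A single worker thread also serializes the updates, which avoids the concurrent-update miscounts mentioned in option 1. This is just an illustration with hypothetical names; in a real app you would use Mendix's task queue rather than your own executor:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ReportUpdater {

    // Single worker thread: updates are serialized, so two events for the
    // same month can never race on the same report row.
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    // In-memory stand-in for the MonthReport rows (key = "category/year/month",
    // value = {event count, sum of NrOfReports}).
    private final ConcurrentHashMap<String, long[]> reports = new ConcurrentHashMap<>();

    // Called from the (hypothetical) after-commit handler; returns immediately,
    // so the commit itself stays fast.
    public void onEventCommitted(String category, int year, int month, int nrOfReports) {
        worker.submit(() -> {
            String key = category + "/" + year + "/" + month;
            long[] row = reports.computeIfAbsent(key, k -> new long[2]);
            row[0] += 1;            // event count
            row[1] += nrOfReports;  // sum of NrOfReports
        });
    }

    // Drain the queue and return one report row (for demonstration only).
    public long[] report(String category, int year, int month) throws Exception {
        worker.shutdown();
        worker.awaitTermination(5, TimeUnit.SECONDS);
        return reports.get(category + "/" + year + "/" + month);
    }
}
```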
I would create a queue that runs at the end of each event or event action. Queues run in the background, and you can break them down so each one performs one specific action (object orientation).
I would also use keys and foreign keys in all my entities, not just Mendix associations.
Why I say this:
If you break the queues or microflows down to do one specific task, you can run many of them at a time as your application grows. This means you can scale at any time: you can run a queue per CPU thread, so many of the same queues run at the same time. Parallel processing :-)
These microflows will not hold a lot of resources, so the process will be faster.
If they run at the end of each event action, you can have real-time reports, not just monthly reports.
And lastly, if you start with keys and foreign keys in your entities, it will be easier at a later stage, if the data grows too large for Mendix reports, to push these entities to more powerful reporting platforms. You would still use Mendix to wrap all these reports and control row-level security, but let Power BI or another reporting engine display and manipulate the data, grids, etc.
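The "one queue per CPU thread" idea can be sketched in plain Java: partition tasks by report key, so all tasks for one MonthReport row always land on the same worker (keeping updates for that row sequential) while different rows are processed in parallel. The names are hypothetical; this is not Mendix's task queue API:

```java
import java.util.concurrent.*;

public class PartitionedQueues {

    private final ExecutorService[] workers;

    public PartitionedQueues(int threads) {
        workers = new ExecutorService[threads];
        for (int i = 0; i < threads; i++) {
            // Each worker owns its own single-threaded queue.
            workers[i] = Executors.newSingleThreadExecutor();
        }
    }

    // Route every task for the same report key to the same worker, so tasks
    // for one report row run sequentially while different rows run in parallel.
    public void submit(String reportKey, Runnable task) {
        int worker = Math.floorMod(reportKey.hashCode(), workers.length);
        workers[worker].submit(task);
    }

    public void shutdownAndWait() throws InterruptedException {
        for (ExecutorService w : workers) w.shutdown();
        for (ExecutorService w : workers) w.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

With this layout, adding CPU threads (or queue instances) scales throughput without introducing races on individual report rows.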