This is the thread pool for the built-in Jetty server. More information is on the Mendix monitoring page:
Threadpool for Handling External Requests
The application server thread pool graph shows the number of concurrent requests that are being handled by the Mendix Runtime, but only when they are initiated by a remote API, like the way the normal web-based client communicates, or by calling web services. Because creating a new thread that can concurrently process a request is an expensive operation, a pool of threads is kept around that can quickly start processing new incoming requests. This pool automatically grows and shrinks according to the number of requests flowing through the application.
When I look at the graph in your post, the 'threadpool size' did reach the 'max thread size'. However, it is not clear from the documentation or the graph how the 'threadpool size' is determined. You would expect the idle and active threads together to determine the 'threadpool size', but the idle threads are not visible in the graph.
In an on-premises runtime environment it is possible to get more information from m2ee, where the idle_threads are also visible. Also see:
So, possibly there were a lot of idle threads: something triggered from the client may have built up a large number of threads. By default, threads have an idle timeout.
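To illustrate the grow-and-shrink behavior described above, here is a minimal sketch using the JDK's `ThreadPoolExecutor`, which behaves analogously to Jetty's `QueuedThreadPool` (a minimum and maximum number of threads, plus an idle timeout after which surplus threads are reclaimed). The pool sizes and timeout values here are hypothetical, not Mendix defaults:

```java
import java.util.concurrent.*;

public class PoolGrowShrink {
    public static void main(String[] args) throws Exception {
        // Analogous to Jetty's QueuedThreadPool: min 2 threads, max 8,
        // idle threads reclaimed after 200 ms (hypothetical numbers).
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 8, 200, TimeUnit.MILLISECONDS, new SynchronousQueue<>());
        pool.allowCoreThreadTimeOut(true); // let even core threads expire when idle

        CountDownLatch running = new CountDownLatch(8);
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 8; i++) {
            pool.submit(() -> {
                running.countDown();
                try { release.await(); } catch (InterruptedException ignored) {}
            });
        }
        running.await();
        // Under load the pool grows to its maximum
        System.out.println("threads under load: " + pool.getPoolSize());

        release.countDown();
        Thread.sleep(1000); // wait well past the idle timeout
        // Idle threads have been reclaimed
        System.out.println("threads after idle: " + pool.getPoolSize());
        pool.shutdown();
    }
}
```

This is why a spike in the graph can be followed by a slow decline: the threads created for the spike linger until their idle timeout expires.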
I would advise asking Mendix support.
More detailed information about the QueuedThreadPool:
Do you know if there are many simultaneous connections that remain open to the Mendix application (e.g. synchronously running microflows, database connections, API calls)?
Each connection uses a thread from the Jetty thread pool.
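The effect of such open connections can be sketched with a small JDK example: while long-running tasks hold all the pool's threads, a new task has to wait, just as a new request waits when every Jetty thread is tied up. The pool size of 2 is hypothetical, chosen only to make exhaustion easy to show:

```java
import java.util.concurrent.*;

public class PoolExhaustion {
    public static void main(String[] args) throws Exception {
        // Hypothetical small pool of 2 threads; extra work waits in the queue.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CountDownLatch release = new CountDownLatch(1);

        // Two "connections" that stay open (e.g. long synchronous microflows)
        for (int i = 0; i < 2; i++) {
            pool.submit(() -> {
                try { release.await(); } catch (InterruptedException ignored) {}
            });
        }

        // A third request cannot start until a thread frees up.
        Future<String> third = pool.submit(() -> "handled");
        System.out.println("third done early? " + third.isDone()); // false: pool exhausted

        release.countDown(); // the open connections finish
        System.out.println("third result: " + third.get());
        pool.shutdown();
    }
}
```

So a handful of slow, synchronous requests can keep the whole pool occupied even though overall traffic is low.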