Sort of. We had something similar a year ago: we got inconsistent results once the number of objects grew past 10,000. We could work around it by retrieving in steps of 2,000, but that is probably not an option for you.
It is very likely caused by the OQL module, so file a support request with Mendix, providing them a test project that reproduces the behavior, and hopefully they will look into it.
Otherwise, try the SaferOQL module and see whether it gives a more reliable result (though I doubt it will).
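The stepping workaround mentioned above boils down to a plain offset/amount paging loop. Here is a hypothetical, self-contained Java sketch of that pattern; `fetchPage` is a stand-in for whatever retrieval call you actually use in your Java action (in Mendix, something like a `Core` retrieve call that accepts amount and offset parameters):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchedRetrieval {
    // Batch size of 2,000, as in the workaround described above.
    static final int BATCH_SIZE = 2000;

    // Hypothetical stand-in for a paged retrieval call: returns at most
    // 'amount' items from 'source', starting at 'offset'.
    static List<Integer> fetchPage(List<Integer> source, int offset, int amount) {
        int end = Math.min(offset + amount, source.size());
        if (offset >= end) {
            return new ArrayList<>();
        }
        return new ArrayList<>(source.subList(offset, end));
    }

    // Keep requesting pages until an empty page signals the end of the data.
    static List<Integer> fetchAll(List<Integer> source) {
        List<Integer> all = new ArrayList<>();
        int offset = 0;
        while (true) {
            List<Integer> page = fetchPage(source, offset, BATCH_SIZE);
            if (page.isEmpty()) {
                break;
            }
            all.addAll(page);
            offset += BATCH_SIZE;
        }
        return all;
    }

    public static void main(String[] args) {
        // Simulate a dataset larger than the problematic 10,000 threshold.
        List<Integer> source = new ArrayList<>();
        for (int i = 0; i < 10500; i++) {
            source.add(i);
        }
        List<Integer> result = fetchAll(source);
        System.out.println(result.size());
    }
}
```

The trade-off is one round trip per batch, which is why a batch size that is too small hurts performance while one that is too large can hit memory or query limits.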
I have found a workaround. Tim's comment about problems occurring after 10,000 records led me to suspect that a batch process somewhere was causing the issue.
In the ExportOQLtoCSV Java action, I found that there is indeed a batch-like process that uses 10,000 records as its batch size.
I increased that to 100,000 as a test, and now my export comes out without issues.
Of course, the batching is there for a reason, but as long as I don't run into major performance issues I won't look into it any further for now.