I've dug around further but haven't been able to find a way to adjust the queue size.
Instead, I've modified our code to keep a count of in-progress processing beans. When too many files are in flight, new analysis requests are queued and handed off to the beans as earlier analyses complete. This has solved the problem: I can now drop 10,000+ files in the directory at once without issue.
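For anyone hitting the same issue, the workaround amounts to bounding the number of in-flight files yourself instead of relying on the framework's queue. Here is a minimal sketch of that idea in Java; the class and method names are my own invention, and I've used a `Semaphore` in place of a hand-rolled in-progress counter:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedDispatcher {
    private final Semaphore inProgress;   // free slots = max in-flight files
    private final ExecutorService workers;

    public BoundedDispatcher(int maxInProgress, int threads) {
        this.inProgress = new Semaphore(maxInProgress);
        this.workers = Executors.newFixedThreadPool(threads);
    }

    // Blocks when maxInProgress files are already being analyzed, so a large
    // drop of files waits here instead of overflowing a downstream queue.
    public void submit(Runnable analysis) throws InterruptedException {
        inProgress.acquire();             // wait for a free slot
        workers.execute(() -> {
            try {
                analysis.run();           // the per-file processing work
            } finally {
                inProgress.release();     // free the slot as analysis completes
            }
        });
    }

    public void shutdown() throws InterruptedException {
        workers.shutdown();
        workers.awaitTermination(1, TimeUnit.MINUTES);
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a big batch: only 4 files are ever in progress at once.
        BoundedDispatcher dispatcher = new BoundedDispatcher(4, 4);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 100; i++) {
            dispatcher.submit(done::incrementAndGet);
        }
        dispatcher.shutdown();
        System.out.println("processed=" + done.get());
    }
}
```

The key design point is that `acquire()` applies backpressure at submission time, so the only unbounded buffer left is the directory itself.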
I'd still be interested to know whether the queue size can be adjusted, though, since this sounds like a problem that could pop up again in the future.