To give you an idea of how fast it is, we recently ran a test on a modern PC, batching blog posts (2.4 KB on average)
through 5 large classifiers. The throughput was over 100 posts/second, including the communication overhead.
That is 360,000 posts/hour! On a single core.
It also handles multiple requests in parallel, which is very nice if you have multiple cores!
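To make the batching idea concrete, here is a minimal sketch of what a client-side benchmark like the one above could look like. The `classify_batch` function is a hypothetical stand-in for a round-trip to the server (the real call would go over a socket), and the post size and batch size are illustrative assumptions:

```python
import time

# Hypothetical stand-in for a batched call to the classification server.
# A real client would serialize the batch and send it over a socket.
def classify_batch(posts):
    # Pretend each post gets a score from 5 classifiers.
    return [{f"classifier_{i}": len(p) % 2 for i in range(5)} for p in posts]

posts = ["blog post body " * 150] * 1000  # roughly 2 KB each (illustrative)
BATCH = 100  # assumed batch size

start = time.time()
results = []
for i in range(0, len(posts), BATCH):
    results.extend(classify_batch(posts[i:i + BATCH]))
elapsed = time.time() - start

print(f"classified {len(results)} posts in {elapsed:.3f}s")
```

Batching amortizes the per-request communication cost, which is why the measured throughput includes it and still stays above 100 posts/second.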
It uses transactional behavior to ensure that classifiers are never left in an undefined state if a write operation
fails unexpectedly. For example, if the server runs out of memory while training a class, the training is rolled back and
an error message is returned.
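The rollback idea can be sketched with a toy word-count classifier. This is an illustration of the principle (snapshot before the write, restore on failure), not the server's actual implementation; the class and method names are made up:

```python
import copy

class ToyClassifier:
    """Toy word-count classifier illustrating transactional training:
    a sketch of the rollback principle, not the server's real code."""

    def __init__(self):
        self.classes = {}

    def train(self, label, documents):
        # Snapshot the current state so a failed write can be reverted.
        snapshot = copy.deepcopy(self.classes)
        try:
            counts = self.classes.setdefault(label, {})
            for doc in documents:
                for word in doc.split():
                    counts[word] = counts.get(word, 0) + 1
        except MemoryError:
            # Revert to the pre-training state and surface an error.
            self.classes = snapshot
            raise RuntimeError("out of memory during training; state reverted")

clf = ToyClassifier()
clf.train("tech", ["fast classifier server", "fast server"])

def exploding_docs():
    yield "one good document"
    raise MemoryError  # simulate the server running out of memory mid-training

try:
    clf.train("spam", exploding_docs())
except RuntimeError:
    pass

# The failed training left no half-written "spam" class behind.
print("spam" in clf.classes)  # False
```

The key point is that the failure leaves no partially trained class: either the whole training batch is applied, or none of it is.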