The rules for the size of a
ThreadPoolExecutor's pool are generally misunderstood, because the pool doesn't grow the way you might expect, or the way you might want it to.
Take this example: the starting pool size is 1, the core pool size is 5, the max pool size is 10 and the queue capacity is 100.
Sun's way: as requests come in, threads will be created up to the core pool size of 5, then tasks will be added to the queue until it holds 100. When the queue is full, new threads will be created up to
maxPoolSize. Once all the threads are in use and the queue is full, tasks will be rejected. As the queue drains, the number of active threads drops back down.
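To make Sun's behaviour concrete, here is a small runnable sketch of the example above (the task count and the latch are illustrative): the tasks block so we can inspect the pool, and only the five core threads are created even though fifteen tasks are waiting in a queue that is nowhere near full.

```java
import java.util.concurrent.*;

public class PoolSizeDemo {
    public static void main(String[] args) throws Exception {
        // The pool from the example: core 5, max 10, bounded queue of 100.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                5, 10, 60, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(100));

        // Submit 20 tasks that block on a latch: only the 5 core threads
        // are created; the other 15 tasks sit in the queue.
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 20; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) {}
            });
        }
        System.out.println("poolSize=" + pool.getPoolSize());   // 5, not 10
        System.out.println("queued=" + pool.getQueue().size()); // 15

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```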
User-anticipated way: as requests come in, threads will be created up to 10, then tasks will be added to the queue until it holds 100, at which point they are rejected. The number of threads will remain at max until the queue is empty; when the queue is empty, the idle threads will die off.
The difference is that users want to start increasing the pool size earlier and keep the queue smaller, whereas the Sun method wants to keep the pool size small and only increase it once the load becomes too much.
Here are Sun's rules for thread creation in simple terms:
- If the number of threads is less than corePoolSize, create a new thread to run the new task.
- If the number of threads is equal to (or greater than) corePoolSize, put the task into the queue.
- If the queue is full, and the number of threads is less than maxPoolSize, create a new thread to run the task.
- If the queue is full, and the number of threads is greater than or equal to maxPoolSize, reject the task.
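All four rules can be watched firing in order with a deliberately tiny pool and queue (the sizes here are illustrative, chosen so each rule triggers within a few submissions):

```java
import java.util.concurrent.*;

public class QueueFullDemo {
    public static void main(String[] args) throws Exception {
        // Core 1, max 3, queue capacity 2: the queue must fill up
        // before the pool grows past the core size.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 3, 60, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(2));
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) {}
        };

        pool.execute(blocker);                  // rule 1: thread 1 (core)
        pool.execute(blocker);                  // rule 2: queued (1/2)
        pool.execute(blocker);                  // rule 2: queued (2/2, now full)
        System.out.println("before=" + pool.getPoolSize()); // 1
        pool.execute(blocker);                  // rule 3: queue full -> thread 2
        pool.execute(blocker);                  // rule 3: queue full -> thread 3
        System.out.println("after=" + pool.getPoolSize());  // 3
        try {
            pool.execute(blocker);              // rule 4: full pool + full queue
        } catch (RejectedExecutionException e) {
            System.out.println("rejected");
        }

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```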
The long and the short of it is that new threads are only created when the queue fills up, so if you're using an unbounded queue then the number of threads will not exceed corePoolSize.
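You can see this with an unbounded queue: however many tasks you throw at the pool, and however large maxPoolSize is, the pool never grows past the core size. The sizes and task count below are illustrative:

```java
import java.util.concurrent.*;

public class UnboundedQueueDemo {
    public static void main(String[] args) throws Exception {
        // Core 2, max 10, but an unbounded queue: the queue is never
        // "full", so the pool never grows past the 2 core threads.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 10, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>()); // unbounded

        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 50; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) {}
            });
        }
        System.out.println("poolSize=" + pool.getPoolSize()); // 2, never 10

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```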
For a fuller explanation, get it from the horse's mouth: the ThreadPoolExecutor API documentation.
There is a really good forum post which talks you through the way that the ThreadPoolExecutor works, with code examples.
Most people want it the other way around: increase the number of threads first, to avoid adding to the queue. Only when all the threads are in use does the queue start to fill up.
Using Sun's way, I think you end up with a system that runs slower when the load is light and a bit quicker as the load increases. Using the other way, you run flat out all the time to process the outstanding work.
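One common workaround for getting the "other way" is to make the queue pretend to be full whenever the executor offers it a task, so the pool keeps growing to max first, and then catch the resulting rejection and do the real queueing there. This is a sketch, not a production implementation (shutdown races and saturation handling need more care):

```java
import java.util.concurrent.*;

public class ThreadsFirstDemo {
    public static void main(String[] args) throws Exception {
        // A queue whose offer() always claims to be full, so the
        // executor creates threads up to maxPoolSize before queueing.
        LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>(100) {
            @Override public boolean offer(Runnable r) { return false; }
        };
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                5, 10, 60, TimeUnit.SECONDS, queue,
                // Once all 10 threads are busy, "rejected" tasks are
                // put into the queue for real (put() bypasses offer()).
                (r, ex) -> {
                    try {
                        ex.getQueue().put(r);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        throw new RejectedExecutionException(e);
                    }
                });

        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 15; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) {}
            });
        }
        System.out.println("poolSize=" + pool.getPoolSize());   // 10, not 5
        System.out.println("queued=" + pool.getQueue().size()); // 5

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```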
I just don't see why they have done it this way. So far I have not seen a satisfactory explanation of why Sun's implementation works the way it does. Does anyone out there know?
Comment from: Fabian [Visitor]
Thank you very much for this. To implement it the way you suggested, I did this:
“There is an interesting method allowCoreThreadTimeOut(boolean) which allows core threads to be killed after given idle time. Setting this to true and setting core threads = max threads allows the thread pool to scale between 0 and max threads”
as written by Jaroslaw Pawlak in: Stackoverflow: Core Pool Size vs Maximum Pool Size in ThreadPoolExecutor
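A minimal sketch of the quoted configuration: core = max with allowCoreThreadTimeOut(true), so the pool scales between 0 and max threads. The max of 10 and the very short keep-alive are illustrative, chosen so the scale-down is observable quickly:

```java
import java.util.concurrent.*;

public class ScaleToZeroDemo {
    public static void main(String[] args) throws Exception {
        // Core = max = 10, with a short keep-alive, and core threads
        // allowed to time out: the pool scales between 0 and 10.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10, 10, 100, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());
        pool.allowCoreThreadTimeOut(true);

        for (int i = 0; i < 10; i++) pool.execute(() -> {});
        System.out.println("busy=" + pool.getPoolSize());  // 10
        Thread.sleep(2000);                                // let threads idle out
        System.out.println("idle=" + pool.getPoolSize());  // 0

        pool.shutdown();
    }
}
```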