I just can't understand why Fork/Join is better in terms of multicore utilisation.
An example to illustrate this (just a theoretical one):
I have an array of webservice endpoints: [E1, E2, E3, E4]
Let's assume each endpoint returns a number.
I have to then sum up the total and return the result.
With this simple story in mind, I have 2 options:
- An ExecutorService fixed thread pool of 4, spawning these 4 calls in parallel.
- The Fork/Join framework with 4 tasks.
Assume I have 4 cores.
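To make the two options concrete, here is a minimal sketch. The fetch method is a hypothetical stand-in for the web service call (it just returns a fixed number), so both variants can run without a network:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class SumEndpoints {

    // Hypothetical stand-in for calling an endpoint and getting a number back
    static int fetch(String endpoint) {
        return endpoint.length(); // pretend each endpoint returns 2
    }

    // Option 1: fixed thread pool of 4, one Callable per endpoint
    static int sumWithExecutor(List<String> endpoints) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Callable<Integer>> calls = new ArrayList<>();
            for (String e : endpoints) calls.add(() -> fetch(e));
            int sum = 0;
            for (Future<Integer> f : pool.invokeAll(calls)) sum += f.get();
            return sum;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    // Option 2: Fork/Join, recursively splitting the endpoint list
    static class SumTask extends RecursiveTask<Integer> {
        final List<String> endpoints;
        SumTask(List<String> endpoints) { this.endpoints = endpoints; }

        @Override
        protected Integer compute() {
            if (endpoints.size() == 1) return fetch(endpoints.get(0));
            int mid = endpoints.size() / 2;
            SumTask left = new SumTask(endpoints.subList(0, mid));
            SumTask right = new SumTask(endpoints.subList(mid, endpoints.size()));
            left.fork();                         // schedule left half asynchronously
            return right.compute() + left.join(); // compute right half in this thread
        }
    }

    static int sumWithForkJoin(List<String> endpoints) {
        return ForkJoinPool.commonPool().invoke(new SumTask(endpoints));
    }

    public static void main(String[] args) {
        List<String> endpoints = List.of("E1", "E2", "E3", "E4");
        System.out.println(sumWithExecutor(endpoints)); // 8
        System.out.println(sumWithForkJoin(endpoints)); // 8
    }
}
```

In both cases the final sum is the same; the question is only about how the work lands on cores.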
With ExecutorService, 4 JVM threads are created and, AFAIK, it's completely up to the OS to schedule them on the same core or on multiple cores.
If they're scheduled on the same core, then we have the problem of under-utilised cores.
If they're scheduled on different cores, then we're laughing!
All I'm trying to get at is this bit of uncertainty around using multiple cores.
How does Fork Join get around this? Does it internally pass some kind of magical instructions to the OS to use multiple cores?
If my above example is not relevant for comparing Fork/Join with Executors, how does Fork/Join claim to utilise cores more efficiently than traditional multithreading?
ForkJoinPool is an ExecutorService, right? And ForkJoinPool.commonPool() (in Java 8) actually returns an ExecutorService.
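As a quick type check, the assignment below compiles because ForkJoinPool extends AbstractExecutorService, which implements ExecutorService:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ForkJoinPool;

public class CommonPoolCheck {
    public static void main(String[] args) {
        // ForkJoinPool is-an ExecutorService, so this assignment is legal
        ExecutorService es = ForkJoinPool.commonPool();
        System.out.println(es instanceof ForkJoinPool); // prints true
    }
}
```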