Threads vs coroutines for IO-intensive tasks?

I am having trouble understanding coroutines.

The problem I am trying to solve goes like this (simplified for the sake of discussion):

  1. The user creates an item in microservice A
  2. Microservice A has to ping microservice B (over HTTP, blocking) until the item is done. That can take anywhere from a few seconds to a few minutes, so A pings every second for a specific item until B responds that it is done.
  3. There is no upper limit on the number of items, meaning A can be pinging for 1 item or for 1000 of them every second.

My first idea was to create an ExecutorService, give it some number of threads (finding the optimal number with a bit of experimenting), and hope for the best. If I give the pool 20 threads, I wouldn't be able to "ping" for 100 items at the same time, but it is what it is.
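To make the executor idea concrete, here is a minimal sketch of that approach. It assumes a hypothetical `pingB(itemId, attempt)` stand-in for the blocking HTTP call (here the item is simply "done" after three polls), and uses a 50 ms sleep where the real service would wait a second:

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit
import java.util.concurrent.atomic.AtomicInteger

// Hypothetical stand-in for the blocking HTTP call to B: the item is
// "done" after three polls.
fun pingB(itemId: Int, attempt: Int): Boolean = attempt >= 3

// Poll every item until B says it is done; returns the number completed.
fun pollItems(itemIds: List<Int>, threads: Int): Int {
    val pool = Executors.newFixedThreadPool(threads)
    val completed = AtomicInteger(0)
    for (id in itemIds) {
        pool.submit {
            var attempt = 0
            // Each task holds one pool thread for the item's whole lifetime:
            // it blocks in pingB and sleeps between polls, so with 20 threads
            // at most 20 items are being polled at any moment.
            while (!pingB(id, attempt)) {
                attempt++
                Thread.sleep(50) // 1 second in the real service
            }
            completed.incrementAndGet()
        }
    }
    pool.shutdown()
    pool.awaitTermination(30, TimeUnit.SECONDS)
    return completed.get()
}

fun main() {
    println(pollItems((1..100).toList(), 20)) // prints 100
}
```

Note the cost this sketch makes visible: a thread is occupied even while it is only sleeping between polls, which is exactly why the pool size caps how many items can be polled concurrently.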

Now I read about coroutines and found this:

The IO dispatcher has a large number of threads in its thread pool, allowing for many parallel blocking tasks to run on this dispatcher. This makes it suitable for IO-intensive tasks that involve blocking operations such as reading and writing files, performing database queries, or making network requests. 

I assume I could do something like:

```kotlin
suspend fun itemsPing() {
    withContext(Dispatchers.IO) {
        val jobs = listOf(
            async { pingB(item1) },
            async { pingB(item2) },
            // ...
        )
        jobs.awaitAll()
    }
}

suspend fun pingB(item: Item) {
    // HTTP call
}
```

Would this give me any benefit over an executor service? I assume I could write it in a way that dynamically creates a coroutine for each item I need to ping, but how does it work under the hood? What happens if all the IO threads are waiting for B to reply?

Dispatchers.IO has 64 threads by default, so what happens if I am pinging for 65 items? Will the 65th item be served (the call made and the coroutine parked while it waits for IO to resolve), or will the 65th item have to wait for one of the previous items to finish so it can take a thread?
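For what it's worth, a blocking HTTP call holds its Dispatchers.IO thread for the call's whole duration, so if all 64 are blocked inside pingB, a 65th coroutine waits for a free thread. But the one-second wait between polls can be a suspending delay(), which parks the coroutine and releases the thread. A sketch under those assumptions (kotlinx.coroutines on the classpath, the same hypothetical pingB stand-in, 50 ms standing in for the one-second poll interval):

```kotlin
import kotlinx.coroutines.*

// Hypothetical stand-in for the blocking call to B: done after three polls.
fun pingB(itemId: Int, attempt: Int): Boolean = attempt >= 3

// One coroutine per item. The blocking pingB call holds an IO thread only
// while it actually runs; delay() suspends the coroutine and releases the
// thread, so far more items than threads can be "in flight" between polls.
suspend fun pollItem(itemId: Int) = withContext(Dispatchers.IO) {
    var attempt = 0
    while (!pingB(itemId, attempt)) {
        attempt++
        delay(50) // 1 second in the real service
    }
}

suspend fun pollAll(itemIds: List<Int>) = coroutineScope {
    itemIds.map { id -> async { pollItem(id) } }.awaitAll()
}

fun main() = runBlocking {
    // 200 items, far more than the 64 default IO threads, all make progress
    // because no thread is held across a delay().
    pollAll((1..200).toList())
    println("all done")
}
```

If the calls themselves block for a long time, the IO pool can also be resized, e.g. via the `kotlinx.coroutines.io.parallelism` system property, or capped per use with `Dispatchers.IO.limitedParallelism(n)`.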

submitted by /u/manyManyLinesOfCode