Concurrent Coroutines – Concurrency is not Parallelism

On Kotlin Coroutines and how concurrency is different from parallelism

The official docs describe Kotlin Coroutines as a tool “for asynchronous programming and more”; in particular, coroutines are supposed to support us with “asynchronous or non-blocking programming”. What exactly does this mean? How is “asynchrony” related to the terms “concurrency” and “parallelism”, two terms we hear a lot about in this context as well? In this article, we will see that coroutines are mostly concerned with concurrency and not primarily with parallelism. Coroutines provide sophisticated means that help us structure code so it becomes highly concurrently executable, which also enables parallelism, although that isn’t the default behavior. If you don’t understand the difference yet, don’t worry; it will become clearer throughout the article. Many people, myself included, struggle to use these terms correctly. Let’s learn more about coroutines and how they relate to these topics.

(You can find a general introduction to Kotlin coroutines in this article)

Asynchrony – A programming model

Asynchronous programming is a topic we’ve been reading and hearing about a lot in the last couple of years. It mainly refers to “the occurrence of events independent of the main program flow” and “ways to deal with these events” (Wikipedia). One crucial aspect of asynchronous programming is that asynchronously started actions do not immediately block the program and run concurrently with it. When programming asynchronously, we often find ourselves triggering some subroutine that immediately returns to the caller, letting the main program flow continue without waiting for the subroutine’s result. Once the result is needed, you may run into two scenarios: 1) the result has already been fully processed and can simply be requested, or 2) you need to block your program until it is available. That is how futures and promises work. Another popular example of asynchrony is how reactive streams work, as described in the Reactive Manifesto:

Reactive Systems rely on asynchronous message-passing to establish a boundary between components that ensures loose coupling, isolation and location transparency. […] Non-blocking communication allows recipients to only consume resources while active, leading to less system overhead.

Altogether, we can describe asynchrony, in the domain of software engineering, as a programming model that enables non-blocking and concurrent programming. We dispatch tasks and let our program continue doing something else until we receive a signal that the results are available. The following image illustrates this:

We want to continue reading a book and therefore let a machine do the washing for us.

Disclaimer: I took this and also the two following images from this Quora post which also describes the discussed terms.
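To make the future scenario sketched above a bit more concrete, here is a minimal example of the dispatch-and-continue pattern using Java’s CompletableFuture from Kotlin (doWashingAsync and its timing are made up for illustration):

import java.util.concurrent.CompletableFuture

// Hypothetical long-running task that is dispatched asynchronously.
fun doWashingAsync(): CompletableFuture<String> =
    CompletableFuture.supplyAsync {
        Thread.sleep(2000) // the washing machine takes its time
        "clean laundry"
    }

fun main() {
    val laundry = doWashingAsync()                  // returns immediately
    println("Reading a book while the machine runs…")
    println("Result: ${laundry.get()}")             // scenario 2: block until the result is available
}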

Concurrency – It’s about structure

After we learned what asynchrony refers to, let’s see what concurrency is. Concurrency is not, as many people mistakenly believe, about running things “in parallel” or “at the same time”. Rob Pike, a Google engineer, best known for his work on Go, describes concurrency as a “composition of independently executing tasks” and he emphasizes that concurrency really is about structuring a program. That means that a concurrent program handles multiple tasks being in progress at the same time but not necessarily being executed simultaneously. The work on all tasks may be interleaved in some arbitrary order, as nicely illustrated in this little image:

Concurrency is not parallelism. It tries to break down tasks which we don’t necessarily need to execute at the same time. Its primary goal is structure, not parallelism.
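As a small, hand-made illustration of this idea (my own sketch, not from the referenced talk), the following program interleaves two coroutines on a single thread; both tasks are in progress at the same time, but they never execute simultaneously:

import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    launch {
        repeat(3) {
            println("washing, step $it on ${Thread.currentThread().name}")
            delay(100) // suspends and lets the other coroutine make progress
        }
    }
    launch {
        repeat(3) {
            println("reading, page $it on ${Thread.currentThread().name}")
            delay(100)
        }
    }
}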

Parallelism – It’s about execution

Parallelism, often mistakenly used synonymously for concurrency, is about the simultaneous execution of multiple things. If concurrency is about structure, then parallelism is about the execution of multiple tasks. We can say that concurrency makes the use of parallelism easier, but it is not even a prerequisite since we can have parallelism without concurrency.

To sum it up, as Rob Pike puts it: “Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once”. You can watch his talk “Concurrency is not Parallelism” on YouTube.

Coroutines in terms of concurrency and parallelism

Coroutines are about concurrency first of all. They provide great tools that let us break down tasks into various chunks which are not executed simultaneously by default. A simple example illustrating this is part of the Kotlin coroutines documentation:


fun main() = runBlocking<Unit> {
    val time = measureTimeMillis {
        val one = async { doSomethingUsefulOne() }
        val two = async { doSomethingUsefulTwo() }
        println("The answer is ${one.await() + two.await()}")
    }
    println("Completed in $time ms")
}

suspend fun doSomethingUsefulOne(): Int {
    delay(1000L)
    return 13
}

suspend fun doSomethingUsefulTwo(): Int {
    delay(1000L)
    return 29
}

The example terminates in roughly 1000 milliseconds since both “somethingUseful” tasks take about 1 second each and we execute them asynchronously with the help of the async coroutine builder. Both tasks simply use a non-blocking delay to simulate a reasonably long-running action. Let’s see whether the framework really executes these tasks simultaneously. To find out, we add some log statements that tell us which threads the actions run on:

[main] DEBUG logger - in runBlocking
[main] DEBUG logger - in doSomethingUsefulOne
[main] DEBUG logger - in doSomethingUsefulTwo
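For reference, a minimal sketch of how such log statements could be added (assuming an SLF4J logger named logger; the logging setup is my own and not part of the original example):

import kotlinx.coroutines.async
import kotlinx.coroutines.delay
import kotlinx.coroutines.runBlocking
import kotlin.system.measureTimeMillis
import org.slf4j.LoggerFactory

val logger = LoggerFactory.getLogger("logger")

fun main() = runBlocking<Unit> {
    logger.debug("in runBlocking")
    val time = measureTimeMillis {
        val one = async { doSomethingUsefulOne() }
        val two = async { doSomethingUsefulTwo() }
        println("The answer is ${one.await() + two.await()}")
    }
    println("Completed in $time ms")
}

suspend fun doSomethingUsefulOne(): Int {
    logger.debug("in doSomethingUsefulOne")
    delay(1000L)
    return 13
}

suspend fun doSomethingUsefulTwo(): Int {
    logger.debug("in doSomethingUsefulTwo")
    delay(1000L)
    return 29
}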

Since we call runBlocking from the main thread, it also runs on this thread. The async builders do not specify a separate CoroutineScope or CoroutineContext, so they inherit the context from runBlocking and also run on main.
We have two tasks running on the same thread, and both finish after a single 1-second delay. That is possible because delay only suspends the coroutine and does not block main. The example is, as correctly described in the docs, an example of concurrency that does not utilize parallelism. Let’s change the functions to something that really takes time and see what happens.

Parallel Coroutines

Instead of just delaying the coroutines, we let the doSomethingUseful functions calculate the next probable prime based on a randomly generated BigInteger, which happens to be a fairly expensive task (since this calculation is based on a random number, it will not run in deterministic time):

fun doSomethingUsefulOne(): BigInteger {
    log.debug("in doSomethingUsefulOne")
    return BigInteger(1500, Random()).nextProbablePrime()
}

Note that the suspend keyword is no longer necessary and would actually be misleading: the function does not call any other suspending functions and blocks the calling thread for as long as the calculation takes. Running the code results in the following logs:

22:22:04.716 [main] DEBUG logger - in runBlocking
22:22:04.749 [main] DEBUG logger - in doSomethingUsefulOne
22:22:05.595 [main] DEBUG logger - Prime calculation took 844 ms
22:22:05.602 [main] DEBUG logger - in doSomethingUsefulOne
22:22:08.241 [main] DEBUG logger - Prime calculation took 2638 ms
Completed in 3520 ms

As we can easily see, the tasks still run concurrently, as they are wrapped in async coroutines, but they don’t execute simultaneously anymore. The overall runtime is roughly the sum of both sub-calculations. After changing the suspending code to blocking code, the result changes and we no longer save any execution time.


Note on the example

Let me note that I find the example provided in the documentation slightly misleading, as it concludes with “This is twice as fast, because we have concurrent execution of two coroutines” after applying async coroutine builders to the previously sequentially executed code. It is only “twice as fast” because the concurrently executed coroutines merely delay in a non-blocking way. The example gives the impression that we get “parallelism” for free, although, as I see it, it is only meant to demonstrate asynchronous programming.


Now how can we make coroutines run in parallel? To fix our prime example from above, we need to dispatch these tasks onto some worker threads so they no longer block the main thread. We have a few possibilities to make this work.

Making coroutines run in parallel

1. Run in GlobalScope

We can spawn a coroutine in the GlobalScope. That means the coroutine is not bound to any Job and is only limited by the lifetime of the whole application. That is the behavior we know from spawning new threads. It’s hard to keep track of global coroutines, and the whole approach seems naive and error-prone. Nonetheless, running in this global scope dispatches a coroutine onto Dispatchers.Default, a shared thread pool managed by the kotlinx.coroutines library. By default, the maximal number of threads used by this dispatcher is equal to the number of available CPU cores, but it is at least two.

Applying this approach to our example is simple. Instead of running async in the scope of runBlocking, i.e., on the main thread, we spawn the coroutines in GlobalScope:

val time = measureTimeMillis {
    val one = GlobalScope.async { doSomethingUsefulOne() }
    val two = GlobalScope.async { doSomethingUsefulTwo() }
    println("The answer is ${one.await() + two.await()}")
}

The output verifies that we now run in roughly max(time(calc1), time(calc2)):

22:42:19.375 [main] DEBUG logger - in runBlocking
22:42:19.393 [DefaultDispatcher-worker-1] DEBUG logger - in doSomethingUsefulOne
22:42:19.408 [DefaultDispatcher-worker-4] DEBUG logger - in doSomethingUsefulOne
22:42:22.640 [DefaultDispatcher-worker-1] DEBUG logger - Prime calculation took 3245 ms
22:42:23.330 [DefaultDispatcher-worker-4] DEBUG logger - Prime calculation took 3922 ms
Completed in 3950 ms

We successfully applied parallelism to our concurrent example. As I said though, this fix is naive and can be improved further.

2. Specify a coroutine dispatcher

Instead of spawning async coroutines in the GlobalScope, we can still let them run in the scope of, i.e., as children of, runBlocking. To get the same result, we explicitly set a coroutine dispatcher this time:

val time = measureTimeMillis {
    val one = async(Dispatchers.Default) { doSomethingUsefulOne() }
    val two = async(Dispatchers.Default) { doSomethingUsefulTwo() }
    println("The answer is ${one.await() + two.await()}")
}

This adjustment leads to the same result as before while not losing the desired parent-child structure. We can still do better though. Wouldn’t it be most desirable to have real suspending functions again? Instead of taking care not to block the main thread while executing blocking functions, it would be best to only call suspending functions that don’t block the caller.

3. Make the blocking function suspending

We can use withContext which “immediately applies dispatcher from the new context, shifting execution of the block into the different thread inside the block, and back when it completes”:

suspend fun doSomethingUsefulOne(): BigInteger = withContext(Dispatchers.Default) {
    executeAndMeasureTimeMillis {
        log.debug("in doSomethingUsefulOne")
        BigInteger(1500, Random()).nextProbablePrime()
    }
}.also {
    log.debug("Prime calculation took ${it.second} ms")
}.first

With this approach, we confine the dispatched execution to the prime calculation inside the suspending function. The output nicely demonstrates that only the actual prime calculation happens on a different thread while everything else stays on main. When has multi-threading ever been that easy? I like this solution the most.
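With this suspending version in place, the call site can go back to plain async builders inside runBlocking; the heavy work is still shifted onto Dispatchers.Default by withContext. A sketch of how I would call it (not taken verbatim from the docs):

fun main() = runBlocking<Unit> {
    log.debug("in runBlocking")
    val time = measureTimeMillis {
        // No explicit dispatcher needed here anymore; the suspending
        // functions move themselves onto Dispatchers.Default internally.
        val one = async { doSomethingUsefulOne() }
        val two = async { doSomethingUsefulTwo() }
        println("The answer is ${one.await() + two.await()}")
    }
    println("Completed in $time ms")
}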

(The function executeAndMeasureTimeMillis is a custom helper that measures the execution time of a block and returns a pair of the result and the elapsed time.)
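Since this helper is not part of any library, here is one possible implementation, assuming a simple wall-clock measurement:

inline fun <T> executeAndMeasureTimeMillis(block: () -> T): Pair<T, Long> {
    val start = System.currentTimeMillis()
    val result = block()
    return result to (System.currentTimeMillis() - start)
}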

23:00:20.591 [main] DEBUG logger - in runBlocking
23:00:20.648 [DefaultDispatcher-worker-1] DEBUG logger - in doSomethingUsefulOne
23:00:20.714 [DefaultDispatcher-worker-2] DEBUG logger - in doSomethingUsefulOne
23:00:21.132 [main] DEBUG logger - Prime calculation took 413 ms
23:00:23.971 [main] DEBUG logger - Prime calculation took 3322 ms
Completed in 3371 ms

Caution: We use Concurrency and Parallelism interchangeably although we should not

As already mentioned in the introductory part of this article, we often use the terms parallelism and concurrency as synonyms. I want to show you that even the Kotlin documentation does not clearly differentiate between the two. The section on “Shared mutable state and concurrency” (as of 11/5/2018; it may change in the future) opens with:

Coroutines can be executed concurrently using a multi-threaded dispatcher like the Dispatchers.Default. It presents all the usual concurrency problems. The main problem being synchronization of access to shared mutable state. Some solutions to this problem in the land of coroutines are similar to the solutions in the multi-threaded world, but others are unique.

This sentence should really read “Coroutines can be executed in parallel using multi-threaded dispatchers like Dispatchers.Default…”

Conclusion

It’s important to know the difference between concurrency and parallelism. We learned that concurrency is mainly about dealing with many things at once while parallelism is about executing many things at once. Coroutines provide sophisticated tools to enable concurrency but don’t give us parallelism for free. In some situations, it will be necessary to dispatch blocking code onto worker threads to let the main program flow continue. Please remember that we mostly need parallelism for CPU-intensive and performance-critical tasks. In most scenarios, it is just fine not to worry about parallelism and to be happy about the fantastic concurrency we get from coroutines.

Lastly, let me say Thank you to Roman Elizarov who discussed these topics with me before I wrote the article. 🙏🏼

The post Concurrent Coroutines – Concurrency is not Parallelism appeared first on Kotlin Expertise Blog.

Continue ReadingConcurrent Coroutines – Concurrency is not Parallelism

Concurrent Coroutines – Concurrency is not Parallelism

On Kotlin Coroutines and how concurrency is different from parallelism

The official docs describe Kotlin Coroutines as a tool “for asynchronous programming and more”, especially are coroutines supposed to support us with “asynchronous or non-blocking programming”. What exactly does this mean? How is “asynchrony” related to the terms “concurrency” and “parallelism”, tags we hear about a lot in this context as well. In this article, we will see that coroutines are mostly concerned about concurrency and not primarily about parallelism. Coroutines provide sophisticated means which help us structure code to make it highly concurrently executable, also enabling parallelism, which isn’t the default behavior though. If you don’t understand the difference yet, don’t worry about it, it will get clearer throughout the article. Many people, I included, struggle to make use of these terms correctly. Let’s learn more about coroutines and how they relate to the discussed topics.

(You can find a general introduction to Kotlin coroutines in this article)

Asynchrony – A programming model

Asynchronous programming is a topic we’ve been reading and hearing about a lot in the last couple of years. It mainly refers to “the occurrence of events independent of the main program flow” and also “ways to deal with these events” (Wikipedia). One crucial aspect of asynchronous programming is the fact that asynchronously started actions do not immediately block the program and take place concurrently. When programming asynchronously, we often find ourselves triggering some subroutine that immediately returns to the caller to let the main program flow continue without waiting for the subroutine’s result. Once the result is needed, you may run into two scenarios: 1) the result has been fully processed and can just be requested or 2) You need to block your program until it is available. That is how futures or promises work. Another popular example of asynchrony is how reactive streams work like as described in the Reactive Manifesto:

Reactive Systems rely on asynchronous message-passing to establish a boundary between components that ensures loose coupling, isolation and location transparency. […] Non-blocking communication allows recipients to only consume resources while active, leading to less system overhead.

Altogether, we can describe asynchrony, defined in the domain of software engineering, as a programming model that enables non-blocking and concurrent programming. We dispatch tasks to let our program continue doing something else until we receive a signal that the results are available. The following image illustrated this:

We want to continue reading a book and therefore let a machine do the washing for us.

Disclaimer: I took this and also the two following images from this Quora post which also describes the discussed terms.

Concurrency – It’s about structure

After we learned what asynchrony refers to, let’s see what concurrency is. Concurrency is not, as many people mistakenly believe, about running things “in parallel” or “at the same time”. Rob Pike, a Google engineer, best known for his work on Go, describes concurrency as a “composition of independently executing tasks” and he emphasizes that concurrency really is about structuring a program. That means that a concurrent program handles multiple tasks being in progress at the same time but not necessarily being executed simultaneously. The work on all tasks may be interleaved in some arbitrary order, as nicely illustrated in this little image:

Concurrency is not parallelism. It tries to break down tasks which we don’t necessarily need to execute at the same time. Its primary goal is structure, not parallelism.

Parallelism – It’s about execution

Parallelism, often mistakenly used synonymously for concurrency, is about the simultaneous execution of multiple things. If concurrency is about structure, then parallelism is about the execution of multiple tasks. We can say that concurrency makes the use of parallelism easier, but it is not even a prerequisite since we can have parallelism without concurrency.

Conclusively, as Rob Pike describes it: “Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once”. You can watch his talk “Concurrency is not Parallelism” on YouTube.

Coroutines in terms of concurrency and parallelism

Coroutines are about concurrency first of all. They provide great tools that let us break down tasks into various chunks which are not executed simultaneously by default. A simple example illustrating this is part of the Kotlin coroutines documentation:


fun main() = runBlocking<Unit> { val time = measureTimeMillis { val one = async { doSomethingUsefulOne() } val two = async { doSomethingUsefulTwo() } println("The answer is ${one.await() + two.await()}") } println("Completed in $time ms") } suspend fun doSomethingUsefulOne(): Int { delay(1000L) return 13 } suspend fun doSomethingUsefulTwo(): Int { delay(1000L) return 29 }

The example terminates in roughly 1000 milliseconds since both “somethingUseful” tasks take about 1 second each and we execute them asynchronously with the help of the async coroutine builder. Both tasks just use a simple non-blocking delay to simulate some reasonably long-running action. Let’s see if the framework executes these tasks truly simultaneously. Therefore we add some log statements that tell us the threads the actions run on:

[main] DEBUG logger - in runBlocking
[main] DEBUG logger - in doSomethingUsefulOne
[main] DEBUG logger - in doSomethingUsefulTwo

Since we use runBlocking from the main thread, it also runs on this one. The async builders do not specify a separate CoroutineScope or CoroutineContext and therefore also inherently run on main.
We have two tasks run on the same thread, and they finish after a 1-second delay. That is possible since delay only suspends the coroutine and does not block main. The example is, as correctly described, an example of concurrency, not utilizing parallelism. Let’s change the functions to something that really takes its time and see what happens.

Parallel Coroutines

Instead of just delaying the coroutines, we let the functions doSomethingUseful calculate the next probable prime based on a randomly generated BigInteger which happens to be a fairly expensive task (since this calculation is based on a random it will not run in deterministic time):

fun doSomethingUsefulOne(): BigInteger {
    log.debug("in doSomethingUsefulOne")
    return BigInteger(1500, Random()).nextProbablePrime()
}

Note that the suspend keyword is not necessary anymore and would actually be misleading. The function does not make use of other suspending functions and blocks the calling thread for the needed time. Running the code results in the following logs:

22:22:04.716 [main] DEBUG logger - in runBlocking
22:22:04.749 [main] DEBUG logger - in doSomethingUsefulOne
22:22:05.595 [main] DEBUG logger - Prime calculation took 844 ms
22:22:05.602 [main] DEBUG logger - in doSomethingUsefulOne
22:22:08.241 [main] DEBUG logger - Prime calculation took 2638 ms
Completed in 3520 ms

As we can easily see, the tasks still run concurrently as in with async coroutines but don’t execute at the same time anymore. The overall runtime is the sum of both sub-calculations (roughly). After changing the suspending code to blocking code, the result changes and we don’t win any time while execution anymore.


Note on the example

Let me note that I find the example provided in the documentation slightly misleading as it concludes with “This is twice as fast, because we have concurrent execution of two coroutines” after applying async coroutine builders to the previously sequentially executed code. It only is “twice as fast” since the concurrently executed coroutines just delay in a non-blocking way. The example gives the impression that we get “parallelism” for free although it’s only meant to demonstrate asynchronous programming as I see it.


Now how can we make coroutines run in parallel? To fix our prime example from above, we need to dispatch these tasks on some worker threads to not block the main thread anymore. We have a few possibilities to make this work.

Making coroutines run in parallel

1. Run in GlobalScope

We can spawn a coroutine in the GlobalScope. That means that the coroutine is not bound to any Job and only limited by the lifetime of the whole application. That is the behavior we know from spawning new threads. It’s hard to keep track of global coroutines, and the whole approach seems naive and error-prone. Nonetheless, running in this global scope dispatches a coroutine onto Dispatchers.Default, a shared thread pool managed by the kotlinx.coroutines library. By default, the maximal number of threads used by this dispatcher is equal to the number of available CPU cores, but is at least two.

Applying this approach to our example is simple. Instead of running async in the scope of runBlocking, i.e., on the main thread, we spawn them in GlobalScope:

val time = measureTimeMillis {
    val one = GlobalScope.async { doSomethingUsefulOne() }
    val two = GlobalScope.async { doSomethingUsefulTwo() }
}

The output verifies that we now run in roughly max(time(calc1), time(calc2)):

22:42:19.375 [main] DEBUG logger - in runBlocking
22:42:19.393 [DefaultDispatcher-worker-1] DEBUG logger - in doSomethingUsefulOne
22:42:19.408 [DefaultDispatcher-worker-4] DEBUG logger - in doSomethingUsefulOne
22:42:22.640 [DefaultDispatcher-worker-1] DEBUG logger - Prime calculation took 3245 ms
22:42:23.330 [DefaultDispatcher-worker-4] DEBUG logger - Prime calculation took 3922 ms
Completed in 3950 ms

We successfully applied parallelism to our concurrent example. As I said though, this fix is naive and can be improved further.

2. Specify a coroutine dispatcher

Instead of spawning async in the GlobalScope, we can still let them run in the scope of, i.e., as a child of, runBlocking. To get the same result, we explicitly set a coroutine dispatcher now:

val time = measureTimeMillis {
    val one = async(Dispatchers.Default) { doSomethingUsefulOne() }
    val two = async(Dispatchers.Default) { doSomethingUsefulTwo() }
    println("The answer is ${one.await() + two.await()}")
}

This adjustment leads to the same result as before while not losing the child-parent structure we want. We can still do better though. Wouldn’t it be most desirable to have real suspending functions again? Instead of taking care of not blocking the main thread while executing blocking functions, it would be best only to call suspending functions that don’t block the caller.

3. Make blocking function suspending

We can use withContext which “immediately applies dispatcher from the new context, shifting execution of the block into the different thread inside the block, and back when it completes”:

suspend fun doSomethingUsefulOne(): BigInteger = withContext(Dispatchers.Default) {
    executeAndMeasureTimeMillis {
        log.debug("in doSomethingUsefulOne")
        BigInteger(1500, Random()).nextProbablePrime()
    }
}.also {
    log.debug("Prime calculation took ${it.second} ms")
}.first

With this approach, we confine the execution of dispatched tasks to the prime calculation inside the suspending function. The output nicely demonstrates that only the actual prime calculation happens on a different thread while everything else stays on main. When has multi-threading ever been that easy? I really like this solution the most.

(The function executeAndMeasureTimeMillis is a custom one that measures execution time and returns a pair of result and execution time)

23:00:20.591 [main] DEBUG logger - in runBlocking
23:00:20.648 [DefaultDispatcher-worker-1] DEBUG logger - in doSomethingUsefulOne
23:00:20.714 [DefaultDispatcher-worker-2] DEBUG logger - in doSomethingUsefulOne
23:00:21.132 [main] DEBUG logger - Prime calculation took 413 ms
23:00:23.971 [main] DEBUG logger - Prime calculation took 3322 ms
Completed in 3371 ms

Caution: We use Concurrency and Parallelism interchangeably although we should not

As already mentioned in the introductory part of this article, we often use the terms parallelism and concurrency as synonyms of each other. I want to show you that even the Kotlin documentation does not clearly differentiate between both terms. The section on “Shared mutable state and concurrency” (as of 11/5/2018, may be changed in future) introduces with:

Coroutines can be executed concurrently using a multi-threaded dispatcher like the Dispatchers.Default. It presents all the usual concurrency problems. The main problem being synchronization of access to shared mutable state. Some solutions to this problem in the land of coroutines are similar to the solutions in the multi-threaded world, but others are unique.

This sentence should really read “Coroutines can be executed in parallel using multi-threaded dispatchers like Dispatchers.Default…”

Conclusion

It’s important to know the difference between concurrency and parallelism. We learned that concurrency is mainly about dealing with many things at once while parallelism is about executing many things at once. Coroutines provide sophisticated tools to enable concurrency but don’t give us parallelism for free. In some situations, it will be necessary to dispatch blocking code onto some worker threads to let the main program flow continue. Please remember that we mostly need parallelism for CPU intensive and performance critical tasks. In most scenarios, it might be just fine to don’t worry about parallelism and be happy about the fantastic concurrency we get from coroutines.

Lastly, let me say Thank you to Roman Elizarov who discussed these topics with me before I wrote the article. 🙏🏼

The post Concurrent Coroutines – Concurrency is not Parallelism appeared first on Kotlin Expertise Blog.

Continue ReadingConcurrent Coroutines – Concurrency is not Parallelism

Concurrent Coroutines – Concurrency is not Parallelism

On Kotlin Coroutines and how concurrency is different from parallelism

The official docs describe Kotlin Coroutines as a tool “for asynchronous programming and more”, especially are coroutines supposed to support us with “asynchronous or non-blocking programming”. What exactly does this mean? How is “asynchrony” related to the terms “concurrency” and “parallelism”, tags we hear about a lot in this context as well. In this article, we will see that coroutines are mostly concerned about concurrency and not primarily about parallelism. Coroutines provide sophisticated means which help us structure code to make it highly concurrently executable, also enabling parallelism, which isn’t the default behavior though. If you don’t understand the difference yet, don’t worry about it, it will get clearer throughout the article. Many people, I included, struggle to make use of these terms correctly. Let’s learn more about coroutines and how they relate to the discussed topics.

(You can find a general introduction to Kotlin coroutines in this article)

Asynchrony – A programming model

Asynchronous programming is a topic we’ve been reading and hearing about a lot in the last couple of years. It mainly refers to “the occurrence of events independent of the main program flow” and also “ways to deal with these events” (Wikipedia). One crucial aspect of asynchronous programming is the fact that asynchronously started actions do not immediately block the program and take place concurrently. When programming asynchronously, we often find ourselves triggering some subroutine that immediately returns to the caller to let the main program flow continue without waiting for the subroutine’s result. Once the result is needed, you may run into two scenarios: 1) the result has been fully processed and can just be requested or 2) You need to block your program until it is available. That is how futures or promises work. Another popular example of asynchrony is how reactive streams work like as described in the Reactive Manifesto:

Reactive Systems rely on asynchronous message-passing to establish a boundary between components that ensures loose coupling, isolation and location transparency. […] Non-blocking communication allows recipients to only consume resources while active, leading to less system overhead.

Altogether, we can describe asynchrony, defined in the domain of software engineering, as a programming model that enables non-blocking and concurrent programming. We dispatch tasks to let our program continue doing something else until we receive a signal that the results are available. The following image illustrated this:

We want to continue reading a book and therefore let a machine do the washing for us.

Disclaimer: I took this and also the two following images from this Quora post which also describes the discussed terms.

Concurrency – It’s about structure

After we learned what asynchrony refers to, let’s see what concurrency is. Concurrency is not, as many people mistakenly believe, about running things “in parallel” or “at the same time”. Rob Pike, a Google engineer, best known for his work on Go, describes concurrency as a “composition of independently executing tasks” and he emphasizes that concurrency really is about structuring a program. That means that a concurrent program handles multiple tasks being in progress at the same time but not necessarily being executed simultaneously. The work on all tasks may be interleaved in some arbitrary order, as nicely illustrated in this little image:

Concurrency is not parallelism. It tries to break down tasks which we don’t necessarily need to execute at the same time. Its primary goal is structure, not parallelism.

Parallelism – It’s about execution

Parallelism, often mistakenly used synonymously for concurrency, is about the simultaneous execution of multiple things. If concurrency is about structure, then parallelism is about the execution of multiple tasks. We can say that concurrency makes the use of parallelism easier, but it is not even a prerequisite since we can have parallelism without concurrency.

Conclusively, as Rob Pike describes it: “Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once”. You can watch his talk “Concurrency is not Parallelism” on YouTube.

Coroutines in terms of concurrency and parallelism

Coroutines are about concurrency first of all. They provide great tools that let us break down tasks into various chunks which are not executed simultaneously by default. A simple example illustrating this is part of the Kotlin coroutines documentation:


fun main() = runBlocking<Unit> { val time = measureTimeMillis { val one = async { doSomethingUsefulOne() } val two = async { doSomethingUsefulTwo() } println("The answer is ${one.await() + two.await()}") } println("Completed in $time ms") } suspend fun doSomethingUsefulOne(): Int { delay(1000L) return 13 } suspend fun doSomethingUsefulTwo(): Int { delay(1000L) return 29 }

The example terminates in roughly 1000 milliseconds since both “somethingUseful” tasks take about 1 second each and we execute them asynchronously with the help of the async coroutine builder. Both tasks just use a simple non-blocking delay to simulate some reasonably long-running action. Let’s see if the framework executes these tasks truly simultaneously. Therefore we add some log statements that tell us the threads the actions run on:

[main] DEBUG logger - in runBlocking
[main] DEBUG logger - in doSomethingUsefulOne
[main] DEBUG logger - in doSomethingUsefulTwo

Since we use runBlocking from the main thread, it also runs on this one. The async builders do not specify a separate CoroutineScope or CoroutineContext and therefore also inherently run on main.
We have two tasks run on the same thread, and they finish after a 1-second delay. That is possible since delay only suspends the coroutine and does not block main. The example is, as correctly described, an example of concurrency, not utilizing parallelism. Let’s change the functions to something that really takes its time and see what happens.

Parallel Coroutines

Instead of just delaying the coroutines, we let the functions doSomethingUseful calculate the next probable prime based on a randomly generated BigInteger which happens to be a fairly expensive task (since this calculation is based on a random it will not run in deterministic time):

fun doSomethingUsefulOne(): BigInteger {
    log.debug("in doSomethingUsefulOne")
    return BigInteger(1500, Random()).nextProbablePrime()
}

Note that the suspend keyword is not necessary anymore and would actually be misleading. The function does not make use of other suspending functions and blocks the calling thread for the needed time. Running the code results in the following logs:

22:22:04.716 [main] DEBUG logger - in runBlocking
22:22:04.749 [main] DEBUG logger - in doSomethingUsefulOne
22:22:05.595 [main] DEBUG logger - Prime calculation took 844 ms
22:22:05.602 [main] DEBUG logger - in doSomethingUsefulOne
22:22:08.241 [main] DEBUG logger - Prime calculation took 2638 ms
Completed in 3520 ms

As we can easily see, the tasks still run concurrently as in with async coroutines but don’t execute at the same time anymore. The overall runtime is the sum of both sub-calculations (roughly). After changing the suspending code to blocking code, the result changes and we don’t win any time while execution anymore.


Note on the example

Let me note that I find the example provided in the documentation slightly misleading as it concludes with “This is twice as fast, because we have concurrent execution of two coroutines” after applying async coroutine builders to the previously sequentially executed code. It only is “twice as fast” since the concurrently executed coroutines just delay in a non-blocking way. The example gives the impression that we get “parallelism” for free although it’s only meant to demonstrate asynchronous programming as I see it.


Now how can we make coroutines run in parallel? To fix our prime example from above, we need to dispatch these tasks on some worker threads to not block the main thread anymore. We have a few possibilities to make this work.

Making coroutines run in parallel

1. Run in GlobalScope

We can spawn a coroutine in the GlobalScope. That means that the coroutine is not bound to any Job and only limited by the lifetime of the whole application. That is the behavior we know from spawning new threads. It’s hard to keep track of global coroutines, and the whole approach seems naive and error-prone. Nonetheless, running in this global scope dispatches a coroutine onto Dispatchers.Default, a shared thread pool managed by the kotlinx.coroutines library. By default, the maximal number of threads used by this dispatcher is equal to the number of available CPU cores, but is at least two.

Applying this approach to our example is simple. Instead of running async in the scope of runBlocking, i.e., on the main thread, we spawn them in GlobalScope:

val time = measureTimeMillis {
    val one = GlobalScope.async { doSomethingUsefulOne() }
    val two = GlobalScope.async { doSomethingUsefulTwo() }
}

The output verifies that we now run in roughly max(time(calc1), time(calc2)):

22:42:19.375 [main] DEBUG logger - in runBlocking
22:42:19.393 [DefaultDispatcher-worker-1] DEBUG logger - in doSomethingUsefulOne
22:42:19.408 [DefaultDispatcher-worker-4] DEBUG logger - in doSomethingUsefulOne
22:42:22.640 [DefaultDispatcher-worker-1] DEBUG logger - Prime calculation took 3245 ms
22:42:23.330 [DefaultDispatcher-worker-4] DEBUG logger - Prime calculation took 3922 ms
Completed in 3950 ms

We successfully applied parallelism to our concurrent example. As I said though, this fix is naive and can be improved further.

2. Specify a coroutine dispatcher

Instead of spawning async in the GlobalScope, we can still let them run in the scope of, i.e., as a child of, runBlocking. To get the same result, we explicitly set a coroutine dispatcher now:

val time = measureTimeMillis {
    val one = async(Dispatchers.Default) { doSomethingUsefulOne() }
    val two = async(Dispatchers.Default) { doSomethingUsefulTwo() }
    println("The answer is ${one.await() + two.await()}")
}

This adjustment leads to the same result as before while not losing the child-parent structure we want. We can still do better though. Wouldn’t it be most desirable to have real suspending functions again? Instead of taking care of not blocking the main thread while executing blocking functions, it would be best only to call suspending functions that don’t block the caller.

3. Make blocking function suspending

We can use withContext which “immediately applies dispatcher from the new context, shifting execution of the block into the different thread inside the block, and back when it completes”:

suspend fun doSomethingUsefulOne(): BigInteger = withContext(Dispatchers.Default) {
    executeAndMeasureTimeMillis {
        log.debug("in doSomethingUsefulOne")
        BigInteger(1500, Random()).nextProbablePrime()
    }
}.also {
    log.debug("Prime calculation took ${it.second} ms")
}.first

With this approach, we confine the execution of dispatched tasks to the prime calculation inside the suspending function. The output nicely demonstrates that only the actual prime calculation happens on a different thread while everything else stays on main. When has multi-threading ever been that easy? I really like this solution the most.

(The function executeAndMeasureTimeMillis is a custom one that measures execution time and returns a pair of result and execution time)

23:00:20.591 [main] DEBUG logger - in runBlocking
23:00:20.648 [DefaultDispatcher-worker-1] DEBUG logger - in doSomethingUsefulOne
23:00:20.714 [DefaultDispatcher-worker-2] DEBUG logger - in doSomethingUsefulOne
23:00:21.132 [main] DEBUG logger - Prime calculation took 413 ms
23:00:23.971 [main] DEBUG logger - Prime calculation took 3322 ms
Completed in 3371 ms

Caution: We use Concurrency and Parallelism interchangeably although we should not

As already mentioned in the introductory part of this article, we often use the terms parallelism and concurrency as synonyms of each other. I want to show you that even the Kotlin documentation does not clearly differentiate between both terms. The section on “Shared mutable state and concurrency” (as of 11/5/2018, may be changed in future) introduces with:

Coroutines can be executed concurrently using a multi-threaded dispatcher like the Dispatchers.Default. It presents all the usual concurrency problems. The main problem being synchronization of access to shared mutable state. Some solutions to this problem in the land of coroutines are similar to the solutions in the multi-threaded world, but others are unique.

This sentence should really read “Coroutines can be executed in parallel using multi-threaded dispatchers like Dispatchers.Default…”

Conclusion

It’s important to know the difference between concurrency and parallelism. We learned that concurrency is mainly about dealing with many things at once while parallelism is about executing many things at once. Coroutines provide sophisticated tools to enable concurrency but don’t give us parallelism for free. In some situations, it will be necessary to dispatch blocking code onto some worker threads to let the main program flow continue. Please remember that we mostly need parallelism for CPU intensive and performance critical tasks. In most scenarios, it might be just fine to don’t worry about parallelism and be happy about the fantastic concurrency we get from coroutines.

Lastly, let me say Thank you to Roman Elizarov who discussed these topics with me before I wrote the article. 🙏🏼

The post Concurrent Coroutines – Concurrency is not Parallelism appeared first on Kotlin Expertise Blog.

Continue ReadingConcurrent Coroutines – Concurrency is not Parallelism

Concurrent Coroutines – Concurrency is not Parallelism

On Kotlin Coroutines and how concurrency is different from parallelism

The official docs describe Kotlin Coroutines as a tool “for asynchronous programming and more”, especially are coroutines supposed to support us with “asynchronous or non-blocking programming”. What exactly does this mean? How is “asynchrony” related to the terms “concurrency” and “parallelism”, tags we hear about a lot in this context as well. In this article, we will see that coroutines are mostly concerned about concurrency and not primarily about parallelism. Coroutines provide sophisticated means which help us structure code to make it highly concurrently executable, also enabling parallelism, which isn’t the default behavior though. If you don’t understand the difference yet, don’t worry about it, it will get clearer throughout the article. Many people, I included, struggle to make use of these terms correctly. Let’s learn more about coroutines and how they relate to the discussed topics.

(You can find a general introduction to Kotlin coroutines in this article)

Asynchrony – A programming model

Asynchronous programming is a topic we’ve been reading and hearing about a lot in the last couple of years. It mainly refers to “the occurrence of events independent of the main program flow” and also “ways to deal with these events” (Wikipedia). One crucial aspect of asynchronous programming is the fact that asynchronously started actions do not immediately block the program and take place concurrently. When programming asynchronously, we often find ourselves triggering some subroutine that immediately returns to the caller to let the main program flow continue without waiting for the subroutine’s result. Once the result is needed, you may run into two scenarios: 1) the result has been fully processed and can just be requested or 2) You need to block your program until it is available. That is how futures or promises work. Another popular example of asynchrony is how reactive streams work like as described in the Reactive Manifesto:

Reactive Systems rely on asynchronous message-passing to establish a boundary between components that ensures loose coupling, isolation and location transparency. […] Non-blocking communication allows recipients to only consume resources while active, leading to less system overhead.

Altogether, we can describe asynchrony, defined in the domain of software engineering, as a programming model that enables non-blocking and concurrent programming. We dispatch tasks to let our program continue doing something else until we receive a signal that the results are available. The following image illustrated this:

We want to continue reading a book and therefore let a machine do the washing for us.

Disclaimer: I took this and also the two following images from this Quora post which also describes the discussed terms.

Concurrency – It’s about structure

After we learned what asynchrony refers to, let’s see what concurrency is. Concurrency is not, as many people mistakenly believe, about running things “in parallel” or “at the same time”. Rob Pike, a Google engineer, best known for his work on Go, describes concurrency as a “composition of independently executing tasks” and he emphasizes that concurrency really is about structuring a program. That means that a concurrent program handles multiple tasks being in progress at the same time but not necessarily being executed simultaneously. The work on all tasks may be interleaved in some arbitrary order, as nicely illustrated in this little image:

Concurrency is not parallelism. It tries to break down tasks which we don’t necessarily need to execute at the same time. Its primary goal is structure, not parallelism.

Parallelism – It’s about execution

Parallelism, often mistakenly used synonymously for concurrency, is about the simultaneous execution of multiple things. If concurrency is about structure, then parallelism is about the execution of multiple tasks. We can say that concurrency makes the use of parallelism easier, but it is not even a prerequisite since we can have parallelism without concurrency.

Conclusively, as Rob Pike describes it: “Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once”. You can watch his talk “Concurrency is not Parallelism” on YouTube.

Coroutines in terms of concurrency and parallelism

Coroutines are about concurrency first of all. They provide great tools that let us break down tasks into various chunks which are not executed simultaneously by default. A simple example illustrating this is part of the Kotlin coroutines documentation:


fun main() = runBlocking<Unit> { val time = measureTimeMillis { val one = async { doSomethingUsefulOne() } val two = async { doSomethingUsefulTwo() } println("The answer is ${one.await() + two.await()}") } println("Completed in $time ms") } suspend fun doSomethingUsefulOne(): Int { delay(1000L) return 13 } suspend fun doSomethingUsefulTwo(): Int { delay(1000L) return 29 }

The example terminates in roughly 1000 milliseconds since both “somethingUseful” tasks take about 1 second each and we execute them asynchronously with the help of the async coroutine builder. Both tasks just use a simple non-blocking delay to simulate some reasonably long-running action. Let’s see if the framework executes these tasks truly simultaneously. Therefore we add some log statements that tell us the threads the actions run on:

[main] DEBUG logger - in runBlocking
[main] DEBUG logger - in doSomethingUsefulOne
[main] DEBUG logger - in doSomethingUsefulTwo

Since we use runBlocking from the main thread, it also runs on this one. The async builders do not specify a separate CoroutineScope or CoroutineContext and therefore also inherently run on main.
We have two tasks run on the same thread, and they finish after a 1-second delay. That is possible since delay only suspends the coroutine and does not block main. The example is, as correctly described, an example of concurrency, not utilizing parallelism. Let’s change the functions to something that really takes its time and see what happens.

Parallel Coroutines

Instead of just delaying the coroutines, we let the functions doSomethingUseful calculate the next probable prime based on a randomly generated BigInteger which happens to be a fairly expensive task (since this calculation is based on a random it will not run in deterministic time):

fun doSomethingUsefulOne(): BigInteger {
    log.debug("in doSomethingUsefulOne")
    return BigInteger(1500, Random()).nextProbablePrime()
}

Note that the suspend keyword is not necessary anymore and would actually be misleading. The function does not make use of other suspending functions and blocks the calling thread for the needed time. Running the code results in the following logs:

22:22:04.716 [main] DEBUG logger - in runBlocking
22:22:04.749 [main] DEBUG logger - in doSomethingUsefulOne
22:22:05.595 [main] DEBUG logger - Prime calculation took 844 ms
22:22:05.602 [main] DEBUG logger - in doSomethingUsefulOne
22:22:08.241 [main] DEBUG logger - Prime calculation took 2638 ms
Completed in 3520 ms

As we can easily see, the tasks still run concurrently as in with async coroutines but don’t execute at the same time anymore. The overall runtime is the sum of both sub-calculations (roughly). After changing the suspending code to blocking code, the result changes and we don’t win any time while execution anymore.


Note on the example

Let me note that I find the example provided in the documentation slightly misleading as it concludes with “This is twice as fast, because we have concurrent execution of two coroutines” after applying async coroutine builders to the previously sequentially executed code. It only is “twice as fast” since the concurrently executed coroutines just delay in a non-blocking way. The example gives the impression that we get “parallelism” for free although it’s only meant to demonstrate asynchronous programming as I see it.


Now how can we make coroutines run in parallel? To fix our prime example from above, we need to dispatch these tasks on some worker threads to not block the main thread anymore. We have a few possibilities to make this work.

Making coroutines run in parallel

1. Run in GlobalScope

We can spawn a coroutine in the GlobalScope. That means that the coroutine is not bound to any Job and only limited by the lifetime of the whole application. That is the behavior we know from spawning new threads. It’s hard to keep track of global coroutines, and the whole approach seems naive and error-prone. Nonetheless, running in this global scope dispatches a coroutine onto Dispatchers.Default, a shared thread pool managed by the kotlinx.coroutines library. By default, the maximal number of threads used by this dispatcher is equal to the number of available CPU cores, but is at least two.

Applying this approach to our example is simple. Instead of running async in the scope of runBlocking, i.e., on the main thread, we spawn them in GlobalScope:

val time = measureTimeMillis {
    val one = GlobalScope.async { doSomethingUsefulOne() }
    val two = GlobalScope.async { doSomethingUsefulTwo() }
}

The output verifies that we now run in roughly max(time(calc1), time(calc2)):

22:42:19.375 [main] DEBUG logger - in runBlocking
22:42:19.393 [DefaultDispatcher-worker-1] DEBUG logger - in doSomethingUsefulOne
22:42:19.408 [DefaultDispatcher-worker-4] DEBUG logger - in doSomethingUsefulOne
22:42:22.640 [DefaultDispatcher-worker-1] DEBUG logger - Prime calculation took 3245 ms
22:42:23.330 [DefaultDispatcher-worker-4] DEBUG logger - Prime calculation took 3922 ms
Completed in 3950 ms

We successfully applied parallelism to our concurrent example. As I said though, this fix is naive and can be improved further.

2. Specify a coroutine dispatcher

Instead of spawning async in the GlobalScope, we can still let them run in the scope of, i.e., as a child of, runBlocking. To get the same result, we explicitly set a coroutine dispatcher now:

val time = measureTimeMillis {
    val one = async(Dispatchers.Default) { doSomethingUsefulOne() }
    val two = async(Dispatchers.Default) { doSomethingUsefulTwo() }
    println("The answer is ${one.await() + two.await()}")
}

This adjustment leads to the same result as before while not losing the child-parent structure we want. We can still do better though. Wouldn’t it be most desirable to have real suspending functions again? Instead of taking care of not blocking the main thread while executing blocking functions, it would be best only to call suspending functions that don’t block the caller.

3. Make blocking function suspending

We can use withContext which “immediately applies dispatcher from the new context, shifting execution of the block into the different thread inside the block, and back when it completes”:

suspend fun doSomethingUsefulOne(): BigInteger = withContext(Dispatchers.Default) {
    executeAndMeasureTimeMillis {
        log.debug("in doSomethingUsefulOne")
        BigInteger(1500, Random()).nextProbablePrime()
    }
}.also {
    log.debug("Prime calculation took ${it.second} ms")
}.first

With this approach, we confine the execution of dispatched tasks to the prime calculation inside the suspending function. The output nicely demonstrates that only the actual prime calculation happens on a different thread while everything else stays on main. When has multi-threading ever been that easy? I really like this solution the most.

(The function executeAndMeasureTimeMillis is a custom one that measures execution time and returns a pair of result and execution time)

23:00:20.591 [main] DEBUG logger - in runBlocking
23:00:20.648 [DefaultDispatcher-worker-1] DEBUG logger - in doSomethingUsefulOne
23:00:20.714 [DefaultDispatcher-worker-2] DEBUG logger - in doSomethingUsefulOne
23:00:21.132 [main] DEBUG logger - Prime calculation took 413 ms
23:00:23.971 [main] DEBUG logger - Prime calculation took 3322 ms
Completed in 3371 ms

Caution: We use Concurrency and Parallelism interchangeably although we should not

As already mentioned in the introductory part of this article, we often use the terms parallelism and concurrency as synonyms of each other. I want to show you that even the Kotlin documentation does not clearly differentiate between both terms. The section on “Shared mutable state and concurrency” (as of 11/5/2018, may be changed in future) introduces with:

Coroutines can be executed concurrently using a multi-threaded dispatcher like the Dispatchers.Default. It presents all the usual concurrency problems. The main problem being synchronization of access to shared mutable state. Some solutions to this problem in the land of coroutines are similar to the solutions in the multi-threaded world, but others are unique.

This sentence should really read “Coroutines can be executed in parallel using multi-threaded dispatchers like Dispatchers.Default…”

Conclusion

It’s important to know the difference between concurrency and parallelism. We learned that concurrency is mainly about dealing with many things at once while parallelism is about executing many things at once. Coroutines provide sophisticated tools to enable concurrency but don’t give us parallelism for free. In some situations, it will be necessary to dispatch blocking code onto some worker threads to let the main program flow continue. Please remember that we mostly need parallelism for CPU intensive and performance critical tasks. In most scenarios, it might be just fine to don’t worry about parallelism and be happy about the fantastic concurrency we get from coroutines.

Lastly, let me say Thank you to Roman Elizarov who discussed these topics with me before I wrote the article. 🙏🏼

The post Concurrent Coroutines – Concurrency is not Parallelism appeared first on Kotlin Expertise Blog.


Introduction to Coroutines: What Problems Do They Solve?

Problem                                        Solution
---------------------------------------------  ----------------------
Simplify Callbacks                              Coroutines
Get results from a potentially infinite list    BuildSequence
Get a promise for a future result               Async/Await
Work with streams of data                       Channels and Pipelines
Act on multiple asynchronous inputs             Select

The purpose of coroutines is to take care of the complications of asynchronous programming. You write code sequentially, as you usually do, and leave the hard work to the coroutines. Coroutines are a low-level mechanism; the end objective is to build accessible mechanisms on top of them, like async/await in C#.

Coroutines are an experimental feature, introduced with Kotlin version 1.1. They have been created to manage operations with a long execution time (e.g., input/output operations with a remote resource) without blocking a thread. They allow suspending a computation without keeping a thread occupied. In practical terms, they behave as if they were light threads with little overhead, which means that you can have many of them.

In fact, traditional threads have a flaw: they are costly to maintain. Thus it is not practical to have more than a few available, and they are mostly controlled by the system. A coroutine offers a lightweight alternative that is cheap in terms of resources and easier for the developer to control.
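As a rough illustration of how cheap they are, the following sketch (using the experimental launch builder and CommonPool discussed later in this article, with an arbitrary count of 100 000) starts far more coroutines than you could reasonably start threads:

import kotlinx.coroutines.experimental.*

fun main(args: Array<String>) = runBlocking {
    // 100 000 coroutines that each suspend for a second; with one thread
    // per task this would exhaust the system long before finishing
    val jobs = List(100_000) {
        launch(CommonPool) {
            delay(1000L)
            print(".")
        }
    }
    jobs.forEach { it.join() }
}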

Suspension Points

You have limited control over a thread: you can create a new one or terminate it, but that is basically it. If you want to do more, you have to deal with system libraries and all the low-level issues that come with them. For instance, you have to check for deadlock problems.

Coroutines are easier to use, but there are a few rules. The basic idea is that coroutines are blocks of code that can be suspended without blocking a thread. The difference is that blocking a thread means the thread cannot do anything else, while suspending means the thread can do other things while waiting for the completion of the suspended block.

However, you cannot suspend a coroutine at arbitrary positions. A coroutine can be suspended only at certain points, called suspension points, that is to say, at calls to functions with the modifier suspend.

suspend fun answer() {
      println("Hello to you!")
}

The behavior is controlled by the library: there might be a suspension at these points, but it is not a certainty. For example, the library can decide to proceed without suspending if the result of the call in question is already available.

Functions with suspension points work normally, except for one thing: they can only be called from coroutines or from other functions with suspension points. They cannot be called from normal code. The calling function can also be a (suspending) lambda.
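A small sketch of that rule (the function names are invented for illustration):

import kotlinx.coroutines.experimental.delay

suspend fun fetchGreeting(): String {
    delay(100L)                   // a suspension point
    return "Hello to you!"
}

suspend fun politeCaller() {
    println(fetchGreeting())      // fine: we are inside another suspending function
}

fun ordinaryCaller() {
    // println(fetchGreeting())   // does not compile: suspend function called outside a coroutine
}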

Launching a Suspension Function

The simplest way to launch a suspension function is with the launch function. It requires a thread pool as an argument; usually you pass the default CommonPool.

import kotlinx.coroutines.experimental.* // notice that it is under an experimental package

suspend fun answer() {
      println("Hello to you!")
}

fun main(args: Array<String>) {
    launch(CommonPool) {
        answer() // it prints this second
    }

    println("Hello, dude!") // it prints this first
    Thread.sleep(2000L) // it simulates real work being done
}

Notice how the coroutines live inside a package called experimental, because the API could change. This is also the suggested approach for your own library functions built on coroutines: put them under an experimental package to warn users of the instability and to prepare for future changes.

This is the basic way, but in many cases you do not need to deal directly with the low-level API. Instead you can use wrappers available for the most common situations.

In fact, coroutines are a low-level feature, so the kotlin.coroutines module itself offers few high-level functionalities, that is to say, features made for direct use by developers. It mostly contains functions meant for creators of libraries based upon coroutines (e.g., to create a coroutine or to manage its lifecycle). Most of the functionalities that you would want to use are in the package kotlinx.coroutines.

From Callbacks to Coroutines

Coroutines are useful to get rid of callbacks. Assume that you have a function that must perform some expensive computation: solveAllProblems. This function calls a callback that receives the result of the computation and does whatever is needed with it (e.g., it saves the result in a database).

fun solveAllProblems(params: Params, callback: (Result) -> Unit)

You can easily eliminate the callback using the suspendCoroutine function. It works by relying on a Continuation type (cont in the following example), which offers a resume method used to return the expected result.

suspend fun solveAllProblems(params: Params): Result = suspendCoroutine { cont ->
    solveAllProblems(params) { cont.resume(it) }
} 

The advantage of this solution is that the return type of the computation is now explicit, but the computation itself is still asynchronous and does not block a thread.
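As a usage sketch (assuming the launch builder used elsewhere in this article and a hypothetical saveToDatabase step standing in for the old callback body), the wrapped function now reads like ordinary sequential code:

launch(CommonPool) {
    val result = solveAllProblems(params)  // suspends here, the thread is free meanwhile
    saveToDatabase(result)                 // hypothetical follow-up, runs once the result is ready
}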

Coroutines And Sequences

The function buildSequence is one of the high-level functionalities of the basic kotlin.coroutines package. It is used to create lazy sequences and relies on coroutines: each time it runs, it returns a new result. It is designed for cases in which you can obtain partial results that are immediately usable, without having the complete data. For instance, you can use it to obtain a list of items from a database to display in a UI with infinite scrolling.

Essentially, the power of the function rests on the yield mechanism. Each time you ask for an element, the function executes until it can provide that element.

In practice, the function executes until it finds a call to the yield function, then it returns the argument of that call. The next execution continues from the point where it stopped previously. This goes on until the calls to yield are exhausted, so the yield calls are the suspension points.

Of course, the yield calls can continue indefinitely if you need them to, as in the following example, which calculates a series of powers of 2.

val powerOf2 = buildSequence {
    var a = 1
    yield(a) // the first execution stops here
    while (true) { // the second to N executions continue here
        a = a + a
        yield(a) // the second to N executions stop here
    }
}

println(powerOf2.take(5).toList()) // it prints [1, 2, 4, 8, 16]

It is important to understand that the function itself has no memory. This means that once a run is completed, the next execution will start from scratch.

println(powerOf2.take(5).toList()) // it prints [1, 2, 4, 8, 16]
println(powerOf2.take(5).toList()) // it still prints [1, 2, 4, 8, 16]

If you want to keep getting new results, you have to use the iterator() and next() methods on the sequence directly, as in the following example.

var memory = powerOf2.iterator()
println("${memory.next()} ${memory.next()} ${memory.next()}") // it prints "1 2 4"

The Async and Await Mechanism

Let’s see some of the functionalities of the package kotlinx.coroutines. These are the heart of coroutines: the primitive parts exist to implement these functionalities, but you typically do not want to use them directly. The exception is if you are a library developer, in which case you can implement similar high-level functionalities for your own library.

For instance, it provides async and await, which work similarly to their homonyms in C#. Typically they are used together, in cases where an async function returns a meaningful value: first you invoke a function with async, then you read the result with await.

There is no official documentation of these features, but you can find information about them in the GitHub project.

fun asyncAFunction(): Deferred<Int> = async(CommonPool) {
    10
}

You can see that:

  • the name of the function starts with async; this is the suggested naming convention
  • it does not have the modifier suspend, thus it can be called from anywhere
  • it returns Deferred<Int> instead of Int directly.

An example of how it can be used.

fun main(args: Array<String>) {

    // runBlocking prevents the closing of the program until execution is completed
    runBlocking {
        // one is of type Deferred
        val one = asyncAFunction()
        // we use await to wait for the result
        println("The value is ${one.await()")
    }
}

The first interesting fact is that one.await() returns Int. So we can say that it unpacks the deferred type and makes it usable as usual. The second is the function runBlocking, which prevents the premature end of the program. In this simple case it is just an alternative to the old sleep trick (i.e., blocking the end of the program with Thread.sleep) to wait for the result of the asynchronous call. However, the function is also useful in a few other situations in asynchronous programming; for instance, it allows you to call a suspend function from ordinary blocking code.

The function essentially behaves like the following.

// example without await and runBlocking

fun main(args: Array<String>) {
    // one is of type Deferred;
    val one = asyncAFunction()
    // wait completion
    while(one.isActive) {}
  
    println("The value is ${one.getCompleted()}")
}

In the previous example we basically simulated the await function:

  • we did nothing while the asynchronous function was active
  • we read the result once we were certain it was finished

Channels

Coroutines can also implement Go-like channels: a source that sends content and a destination that receives it. Essentially, channels are like non-blocking queues that send and receive data asynchronously.

import kotlinx.coroutines.experimental.*
import kotlinx.coroutines.experimental.channels.*

fun main(args: Array<String>) {
    runBlocking {
        val channel = Channel<Int>()
        launch(CommonPool) {            
            var y = 1
            for (x in 1..5) {
                y = y + y
                channel.send(y)
            }
            // we close the channel when finished
            channel.close()
        }
        // here we print the received data
        // you could also use channel.receive() to get the messages one by one
        for (y in channel)
            println(y)
    }
}

This code also produces the sequence of powers of 2, but it does it asynchronously.

A cleaner way to produce the same results is to use the produce function. Basically, it brings a little order and takes care of making everything run smoothly.

fun produceAFewNumbers() = produce<Int>(CommonPool) {
    var y = 1
    for (x in 1..5) {
        y = y + y
        send(y)
    }
}

fun main(args: Array<String>) {
    runBlocking {
        val numbers = produceAFewNumbers()
        numbers.consumeEach { println(it) }
    }
}

The end result is the same, but it is much cleaner.

Pipelines

A common pattern with channels is the pipeline. A pipeline is made up of two coroutines: one that sends and one that receives a stream of data. It is the most natural way to deal with an infinite stream of data.

import kotlinx.coroutines.experimental.*
import kotlinx.coroutines.experimental.channels.*

fun produceNumbers() = produce<Int>(CommonPool) {
    var x = 1
    while (true)
        // we send an infinite stream of numbers
        send(x++)
}

The first coroutine is produceNumbers, which sends an infinite stream of numbers, starting from 1. The function relies on the aptly named produce(CoroutineContext), which accepts a thread pool and must send some data.

fun divisibleBy2(numbers: ReceiveChannel<Int>) = produce<Int>(CommonPool) {
    for (x in numbers)
    {
        if ( x % 2 == 0)
            // we filter out the numbers that are not divisible by 2
            send(x)
    }
}

The second coroutine receives the data from one channel, but also sends it to another one, if the numbers are divisible by 2. So a pipeline can actually be made up of several channels linked together in a cascade (a sketch of an extra stage follows at the end of this section).

fun main(args: Array<String>) {
    runBlocking {
        val numbers = produceNumbers()
        val divisibles = divisibleBy2(numbers)
        // we print the first 5 numbers
        for (i in 1..5)
            println(divisibles.receive())

        println("Finished!") // we have finished
        // let's cancel the coroutines for good measure
        divisibles.cancel()
        numbers.cancel()
    }
}

The example is quite straightforward. In this last part we just put everything together to get the first five numbers produced by the pipeline. Note that the first channel has to send roughly twice as many numbers as the five that come out of the complete pipeline, because the odd ones are dropped along the way.

Notice also that we did not close the channels, but directly cancelled the coroutines. That is because we are completely ending their usage instead of politely informing their users about the end of transmissions.
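As a sketch of the cascade mentioned above, a hypothetical third stage could be appended after divisibleBy2, for instance one that squares everything it receives:

fun square(numbers: ReceiveChannel<Int>) = produce<Int>(CommonPool) {
    for (x in numbers)
        // forward every incoming number, squared
        send(x * x)
}

// wiring the longer pipeline: produceNumbers() -> divisibleBy2() -> square()
// val squares = square(divisibleBy2(produceNumbers()))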

The Select Expression

Select is an expression that is able to wait for multiple suspending functions and to pick the result from the first one that becomes available. An example of a scenario in which it can be useful is the backend of a queue: there are multiple producers sending messages, and the backend processes them in the order in which they arrive.

Let’s see a simple example: we have two kids, John and Mike, trading insults. We are going to see how John insults his friend; Mike is equally (un)skilled.

fun john() = produce<String>(CommonPool) {
    while (true) {
        val insults = listOf("stupid", "idiot", "stinky")
        val random = Random()
        delay(random.nextInt(1000).toLong())
        send(insults[random.nextInt(3)])
    }
}

All they do is send a random insult whenever they are ready.
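The article only lists john(); a matching mike() (an assumption, with made-up insults) would simply mirror it:

fun mike() = produce<String>(CommonPool) {
    while (true) {
        val insults = listOf("dummy", "silly", "smelly")   // invented for this sketch
        val random = Random()
        delay(random.nextInt(1000).toLong())
        send(insults[random.nextInt(3)])
    }
}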

suspend fun selectInsult(john: ReceiveChannel<String>, mike: ReceiveChannel<String>) {
    select<Unit> { // <Unit> means that this select expression does not produce any result
        john.onReceive { value ->  // this is the first select clause
            println("John says '$value'")
        }
        mike.onReceive { value ->  // this is the second select clause
            println("Mike says '$value'")
        }
    }
}

Their insults are processed by the selectInsult function, which accepts the two channels as parameters and prints the insult whenever it receives one. Our select expression does not produce any result, but it could return one if needed.

The power of the select expression is in its clauses. In this example each clause is based upon onReceive, but it could also use onReceiveOrNull. The latter delivers null when the channel is closed, but since we never close the channels we do not need it.
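If the channels could be closed, a clause using onReceiveOrNull might look roughly like this (a sketch, assuming the experimental API shape):

john.onReceiveOrNull { value ->
    if (value == null)
        println("John has run out of insults")   // the channel was closed
    else
        println("John says '$value'")
}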

fun main(args: Array<String>)
{
    runBlocking {
        val john = john()
        val mike = mike()
        repeat(6) {
            selectInsult(john, mike)
        }
    }
}

Putting all the pieces together is child’s play. We could print an infinite list of insults, but we are satisfied with 6.

The select expression clauses can also depend upon onAwait of a Deferred type. For example, we can modify our select expression to accept the input of an adult.

fun adult(): Deferred<String> = async(CommonPool) {
    // the adult stops the exchange after a while
    delay(Random().nextInt(2000).toLong())
    "Stop it!"
}

suspend fun selectInsult(john: ReceiveChannel<String>, mike: ReceiveChannel<String>,
                         adult: Deferred<String>) {
    select {
        // [..] the rest is like before
        adult.onAwait { value ->
            println("Exasperated adult says '$value'")
        }
    }
}

fun main(args: Array<String>) {
    runBlocking {
        val john = john()
        val mike = mike()
        val adult = adult()
        repeat(6) {
            selectInsult(john, mike, adult)
        }
    }
}

Notice that you should be wary of mixing the two kinds of clauses. That is because the result of onAwait, once available, stays available. So in our example, after it fires the first time, the adult takes over and keeps shouting Stop it! whenever the function selectInsult is called. In real usage you would probably make the select expression return a result, check whether the onAwait clause fired, and end the cycle the first time it does (as sketched below).
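One way to realize that last suggestion, sketched on top of the functions above (the Boolean return value and the loop are my own assumption):

suspend fun selectInsult(john: ReceiveChannel<String>, mike: ReceiveChannel<String>,
                         adult: Deferred<String>): Boolean =
    select<Boolean> {
        john.onReceive { value -> println("John says '$value'"); false }
        mike.onReceive { value -> println("Mike says '$value'"); false }
        adult.onAwait { value -> println("Exasperated adult says '$value'"); true }
    }

fun main(args: Array<String>) {
    runBlocking {
        val john = john()
        val mike = mike()
        val adult = adult()
        // keep exchanging insults until the adult intervenes
        while (!selectInsult(john, mike, adult)) { }
        john.cancel()
        mike.cancel()
    }
}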

Summary

In this article we have seen what coroutines are and what they bring to Kotlin. We have also shown how to use the main functionalities provided around them and how to exploit their potential to make your asynchronous code easier to write. Writing asynchronous software is hard; this library solves the most common problems for you. However, we have just scratched the surface of what you can do with coroutines.

For example, we have not talked about all the low-level features aimed at creators of libraries. That is because we chose to concentrate on the knowledge that will be useful to most people. However, if you want to know more, you can read the ample documentation on kotlin.coroutines and kotlinx.coroutines. You will also find a reference for the module and a thorough guide on how to apply coroutines to UIs (both Android and desktop).

The post Introduction to Coroutines: What Problems Do They Solve? appeared first on SuperKotlin.

Continue ReadingIntroduction to Coroutines: What Problems Do They Solve?

Introduction to Coroutines: What Problems Do They Solve?

Problem Solution
Simplify Callbacks Coroutines
Get results from a potentially infinite list BuildSequence
Get a promise for a future result Async/Await
Work with streams of data Channels and Pipelines
Act on multiple asynchronous inputs Select

The purpose of coroutines is to take care of the complications in working with asynchronous programming. You write code sequentially, like you usually do, and then leave to the coroutines the hard work. Coroutines are a low-level mechanism. The end objective is to build  accessible mechanisms like async/await in C#

Coroutines are an experimental feature, introduced with Kotlin version 1.1. They have been created to manage operations with long execution time (e.g., Input/Output operations with a remote resource) without blocking a thread. They allows to suspend a computation without keeping occupied a thread. In practical terms, they behave as if they were light threads with little overhead, which means that you can have many of them.

In fact, traditional threads have a flaw: they are costly to maintain. Thus it is not practical to have more than a few available and they are mostly controlled by the system. A coroutine offers a lightweight alternative that is cheap in terms of resources and it is easier to control by the developer.

Suspension Points

You have limited control over a thread: you can create a new one or terminate it, but that is basically it. If you want to do more you have to deal with system libraries and all the low-level issues that comes with them. For instance, you have to check for deadlock problems.

Coroutines are a easier to use, but there are a few rules. The basic ideas is that coroutines are blocks of code that can be suspended, without blocking a thread. The difference is that blocking a thread means the thread cannot do anything else, while suspending it means that it can do other things while waiting the completion of the suspended block.

However you cannot suspend a coroutine at arbitrary positions. A coroutine can be suspended only at certain points, called suspension points. That is to say functions with the modifier suspend.

suspend fun answer() {
      println("Hello to you!")
}

The behavior is controlled by the library: there might be a suspension in these points, but this is not a certainty. For example, the library can decide to proceed without suspension, if the result for the call in question is already available.

Functions with suspension points works normally except for one thing: they can only be called from coroutines or other functions with suspension points. They cannot be called by normal code. The calling function can be a (suspending) lambda.

Launching a Suspension Function

The simplest way to launch a suspension function is with the launch function. It requires as an argument a thread pool. Usually you pass the default CommonPool.

import kotlin.coroutines.experimental.* // notice that it is under a package experimental

suspend fun answer() {
      println("Hello to you!")
}

fun main(args: Array<String>) {
    launch(CommonPool) {
        answer() // it prints this second
    }

    println("Hello, dude!") // it prints this first
    Thread.sleep(2000L) // it simulates real work being done
}

Notice how the coroutines are inside a package called experimental, because the API could change. This is also the suggested approach for your own library functions that implement coroutines. You should put them under an experimental package to warn them of the instability and to prepare for future changes.

This is the basic way, but in many cases you do not need to deal directly with the low-level API. Instead you can use wrappers available for the most common situations.

In fact, coroutines are actually a low-level feature, so the module itself kotlin.coroutines offers few high-level functionalities. That is to say, features made for direct use by developers. It contains mostly functions meant for creators of libraries based upon coroutines (e.g., to create a coroutine or to manage its life). Most of the functionalities that you would want to use are in the package kotlinx.coroutines.

From Callbacks to Coroutines

Coroutines are useful to get rid of callbacks. Assume that you have a function that must perform some expensive computation: solveAllProblems. This function calls a callback that receives the result of the computation and performs what is needed (e.g., it saves the result in a database).

fun solveAllProblems(params: Params, callback: (Result) -> Unit)

You can easily eliminate the callback using the suspendCoroutine function. This function works by relying on a Continuation type (cont in the following example), that offers a resume method, which is used to return the expected result.

suspend fun solveAllProblems(params: Params): Result = suspendCoroutine { cont ->
    solveAllProblems(params) { cont.resume(it) }
} 

The advantage of this solution is that now the return type of this computation is explicit., but the computation itself is still asynchronous and does not block a thread.

Coroutines And Sequences

The function buildSequence in one of the high-level functionalities of the basic kotlin.coroutines package. It is used to create lazy sequences and relies on coroutines: each time it runs it return a new result. It is designed for when you can obtain partial results that you can use instantly, without having the complete data. For instance, you can use it to obtain a list of items from a database to display in a UI with infinite scrolling.

Essentially the power of the function rests on the yield mechanism. Each time you asks for an element the function executes until it can give me that element.

In practice, the function executes until it find the yield function, then it returns with the argument of that function. The next execution continues from the point it stopped previously. This happens until the end of the calls of the yield functions. So the yield functions are the suspension points.

Of course, the yield functions can actually continue indefinitely, if you need it. For instance, as in the following example, which calculates a series of powers of 2.

val powerOf2 = buildSequence {
    var a = 1
    yield(a) // the first execution stops here
    while (true) { // the second to N executions continue here
        a = a + a
        yield(a) // the second to N executions stop here
    }
}

println(powerOf2.take(5).toList()) // it prints [1, 2, 4, 8, 16]

It is important to understand that the function itself has no memory. This means that once a run it is completed, the next execution will start from scratch.

println(powerOf2.take(5).toList()) // it prints [1, 2, 4, 8, 16]
println(powerOf2.take(5).toList()) // it still prints [1, 2, 4, 8, 16]

If you want to keep getting new results, you have to use directly iterator() and next()methods on the sequence, as in the following example.

var memory = powerOf2.iterator()
println("${memory.next()} ${memory.next()} ${memory.next()}") // it prints "1 2 4"

The Async and Await Mechanism

Let’s see some of the functionalities of the package kotlinx.coroutines. These are are the heart of the coroutines: the primitive parts are there to implement these functionalities, but you typically do not want to use. The exception is if you are a library developer, so that you  can implement similar high-level functionalities for your own library.

For instance, it provides async and await, that works similarly to the homonym in C#. Typically they are used together, in the case in which an async function returns a meaningful value. First you invoke a function with async, then you read the result with await.

There is not an official documentation of these features, but you can find info about it on the GitHub project.

fun asyncAFunction(): Deferred<Int> = async(CommonPool) {
    10
}

You can see that:

  • the name of the function starts with async. This is the suggested naming convention.
  • it does not have the modifier suspend, thus it can be called everywhere
  • it returns Deferred<Int> instead of Int directly.

An example of how it can be used.

fun main(args: Array<String>) {

    // runBlocking prevents the closing of the program until execution is completed
    runBlocking {
        // one is of type Deferred
        val one = asyncAFunction()
        // we use await to wait for the result
        println("The value is ${one.await()")
    }
}

The first interesting fact is that one.await returns Int. So we can say that it unpacks the deferred type and make it usable as usual. The second one is the function runBlocking, that prevents the premature end of the program. In this simple case it is just an alternative to the usage of the old sleep trick (i.e., blocking the ending of the program with Thread.Sleep) to wait for the result of the asynchronous call. However the function can also be useful in a few situations with asynchronous programming. For instance, it allows to call a suspend function from anywhere.

The function essentially behaves like the following.

// example without await and runBlocking

fun main(args: Array<String>) {
    // one is of type Deferred;
    val one = asyncAFunction()
    // wait completion
    while(one.isActive) {}
  
    println("The value is ${one.getCompleted()}")
}

In the previous example we basically simulated the await function:

  • we did nothing while the asynchronous function was active
  • we read the result once we were certain it was finished

Channels

Coroutines can also implement Go-like channels: a source that send content and a destination that receive it. Essentially channels are like non-blocking queues, that send and operate data asynchronously.

import kotlin.coroutines.experimental.*
import kotlinx.coroutines.experimental.channels.*

fun main(args: Array<String>) {
    runBlocking {
        val channel = Channel<Int>()
        launch(CommonPool) {            
            var y = 1
            for (x in 1..5) {
                y = y + y
                channel.send(y)
            }
            // we close the channel when finished
            channel.close()
        }
        // here we print the received data
        // you could also use channel.receive() to get the messages one by one
        for (y in channel)
            println(y)
    }
}

This code also produces the sequence of powers of 2, but it does it asynchronously.

A more correct way to produce the same results is using produce function. Basically it bring a little order and take care of making everything running smoothly.

fun produceAFewNumbers() = produce<Int>(CommonPool) {
    var y = 1
    for (x in 1..5) {
        y = y + y
        send(y)
    }
}

fun main(args: Array<String>) {
    runBlocking {
        val numbers = produceAFewNumbers()
        numbers.consumeEach { println(it) }
}

The end result is the same, but it is much cleaner.

Pipelines

A common pattern with channels is the pipeline. A pipeline is made up of two coroutines: one that send and one that receive a stream of data. This is the more natural way to deal with an infinite stream of data.

import kotlin.coroutines.experimental.*
import kotlinx.coroutines.experimental.channels.*

fun produceNumbers() = produce<Int>(CommonPool) {
    var x = 1
    while (true)
        // we send an infinite stream of numbers
        send(x++)
}

The first coroutine is produceNumbers which sends an infinite stream of numbers, starting from 1. The function rely on the aptly named produce(CouroutineContext) , that accepts a thread pool and must send some data.

fun divisibleBy2(numbers: ReceiveChannel<Int>) = produce<Int>(CommonPool) {
    for (x in numbers)
    {
        if ( x % 2 == 0)
            // we filter out the number not divisible by 2
            send(x)
    }
}

The second coroutine receive the data from a channel, but also send it to another one, if they are divisible by 2. So a pipeline can actually be made up of more channels linked together in a cascade.

fun main(args: Array<String>) {
    runBlocking {
        val numbers = produceNumbers()
        val divisibles = divisibleBy2(numbers)
        // we print the first 5 numbers
        for (i in 1..5)
            println(divisibles.receive())

        println("Finished!") // we have finished
        // let's cancel the coroutines for good measure
        divisibles.cancel()
        numbers.cancel()
    }
}

The example is quite straightforward. In this last part we just put everything together to get the first five numbers that are produced by the pipeline. In this case we have sent 9 numbers from the first channel, but only 5 have come out of the complete pipeline.

Notice also that we did not close the channels but cancelled the coroutines directly. That is because we are completely done with them, so we stop them immediately instead of politely informing their consumers that the transmission has ended.

The Select Expression

Select is an expression that can wait on multiple suspending functions and pick the result of the first one that becomes available. One scenario in which it is useful is the backend of a queue: multiple producers send messages and the backend processes them in the order they arrive.

Let’s see a simple example: we have two kids, John and Mike, trading insults. We are going to see how John insults his friend; Mike is equally (un)skilled.

fun john() = produce<String>(CommonPool) {
    val insults = listOf("stupid", "idiot", "stinky")
    val random = Random() // java.util.Random
    while (true) {
        // wait a random amount of time, then send a random insult
        delay(random.nextInt(1000).toLong())
        send(insults[random.nextInt(3)])
    }
}

All they do is send a random insult whenever they are ready; mike, sketched below, behaves just like john.
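The article only shows john, so here is a minimal sketch of mike, under the assumption that it is built exactly like john:

fun mike() = produce<String>(CommonPool) {
    val insults = listOf("stupid", "idiot", "stinky")
    val random = Random() // java.util.Random
    while (true) {
        delay(random.nextInt(1000).toLong())
        send(insults[random.nextInt(3)])
    }
}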

suspend fun selectInsult(john: ReceiveChannel<String>, mike: ReceiveChannel<String>) {
    select<Unit> { // <Unit> means that this select expression does not produce any result
        john.onReceive { value ->  // this is the first select clause
            println("John says '$value'")
        }
        mike.onReceive { value ->  // this is the second select clause
            println("Mike says '$value'")
        }
    }
}

Their insults are processed by the selectInsult function, which accepts the two channels as parameters and prints each insult it receives. Our select expression does not produce any result, but it could return one if needed.

The power of the select expression is in its clauses. In this example each clause is based upon onReceive, but it could also use onReceiveOrNull. The latter yields null once the channel is closed; since we never close the channels, we do not need it here, but a sketch of how it could look follows.
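Just to illustrate, here is a hedged sketch of an onReceiveOrNull clause for the case in which John's channel did get closed; selectOrQuit is a hypothetical helper, and it also shows a select expression that returns a result:

suspend fun selectOrQuit(john: ReceiveChannel<String>): Boolean =
    select<Boolean> {
        john.onReceiveOrNull { value ->
            if (value == null) {
                // a null value means the channel has been closed
                println("John has nothing left to say")
                false
            } else {
                println("John says '$value'")
                true
            }
        }
    }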

fun main(args: Array<String>) {
    runBlocking {
        val john = john()
        val mike = mike()
        repeat(6) {
            selectInsult(john, mike)
        }
    }
}

Putting all the pieces together is child’s play. We could print an infinite list of insults, but we are satisfied with 6.

The select expression clauses can also depend upon the onAwait clause of a Deferred value. For example, we can modify our select expression to accept the input of an adult.

fun adult(): Deferred<String> = async(CommonPool) {
    // the adult stops the exchange after a while
    delay(Random().nextInt(2000).toLong())
    "Stop it!"
}

suspend fun selectInsult(john: ReceiveChannel<String>, mike: ReceiveChannel<String>,
                         adult: Deferred<String>) {
    select {
        // [..] the rest is like before
        adult.onAwait { value ->
            println("Exasperated adult says '$value'")
        }
    }
}

fun main(args: Array<String>) {
    runBlocking {
        val john = john()
        val mike = mike()
        val adult = adult()
        repeat(6) {
            selectInsult(john, mike, adult)
        }
    }
}

Notice that you should be wary of mixing the two kinds of clauses. Once the Deferred completes, its onAwait result stays available forever, so after it fires the first time the adult takes over and keeps shouting Stop it! every time selectInsult is called. In real usage you would probably make the select expression return a result, check whether the onAwait clause fired, and end the loop the first time it does, as in the sketch below.
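A hedged sketch of that last suggestion: selectInsult returns false once the adult has spoken, the caller loops on the result, and the two channels are cancelled at the end (the cancel calls are cleanup I added, not part of the original example):

suspend fun selectInsult(john: ReceiveChannel<String>, mike: ReceiveChannel<String>,
                         adult: Deferred<String>): Boolean =
    select<Boolean> {
        john.onReceive { value ->
            println("John says '$value'")
            true
        }
        mike.onReceive { value ->
            println("Mike says '$value'")
            true
        }
        adult.onAwait { value ->
            println("Exasperated adult says '$value'")
            false // signal the caller that the exchange is over
        }
    }

fun main(args: Array<String>) {
    runBlocking {
        val john = john()
        val mike = mike()
        val adult = adult()
        // keep exchanging insults until the adult intervenes
        while (selectInsult(john, mike, adult)) { }
        john.cancel()
        mike.cancel()
    }
}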

Summary

With this article we have seen what coroutines are and what they bring to Kotlin. We have also shown how to use the main functionalities built around them and how to exploit their potential to make your asynchronous code easier to write. Writing asynchronous software is hard; this library solves the most common problems for you. Still, we have just scratched the surface of what you can do with coroutines.

For example, we have not covered the low-level features aimed at library authors, because we chose to concentrate on the knowledge that will be useful to most people. If you want to know more, you can read the ample documentation on kotlin.coroutines and kotlinx.coroutines, where you will also find a module reference and an in-depth guide on applying coroutines to UI code (both Android and desktop).

The post Introduction to Coroutines: What Problems Do They Solve? appeared first on SuperKotlin.

Continue ReadingIntroduction to Coroutines: What Problems Do They Solve?

Introduction to Coroutines: What Problems Do They Solve?

Problem Solution
Simplify Callbacks Coroutines
Get results from a potentially infinite list BuildSequence
Get a promise for a future result Async/Await
Work with streams of data Channels and Pipelines
Act on multiple asynchronous inputs Select

The purpose of coroutines is to take care of the complications in working with asynchronous programming. You write code sequentially, like you usually do, and then leave to the coroutines the hard work. Coroutines are a low-level mechanism. The end objective is to build  accessible mechanisms like async/await in C#

Coroutines are an experimental feature, introduced with Kotlin version 1.1. They have been created to manage operations with long execution time (e.g., Input/Output operations with a remote resource) without blocking a thread. They allows to suspend a computation without keeping occupied a thread. In practical terms, they behave as if they were light threads with little overhead, which means that you can have many of them.

In fact, traditional threads have a flaw: they are costly to maintain. Thus it is not practical to have more than a few available and they are mostly controlled by the system. A coroutine offers a lightweight alternative that is cheap in terms of resources and it is easier to control by the developer.

Suspension Points

You have limited control over a thread: you can create a new one or terminate it, but that is basically it. If you want to do more you have to deal with system libraries and all the low-level issues that comes with them. For instance, you have to check for deadlock problems.

Coroutines are a easier to use, but there are a few rules. The basic ideas is that coroutines are blocks of code that can be suspended, without blocking a thread. The difference is that blocking a thread means the thread cannot do anything else, while suspending it means that it can do other things while waiting the completion of the suspended block.

However you cannot suspend a coroutine at arbitrary positions. A coroutine can be suspended only at certain points, called suspension points. That is to say functions with the modifier suspend.

suspend fun answer() {
      println("Hello to you!")
}

The behavior is controlled by the library: there might be a suspension in these points, but this is not a certainty. For example, the library can decide to proceed without suspension, if the result for the call in question is already available.

Functions with suspension points works normally except for one thing: they can only be called from coroutines or other functions with suspension points. They cannot be called by normal code. The calling function can be a (suspending) lambda.

Launching a Suspension Function

The simplest way to launch a suspension function is with the launch function. It requires as an argument a thread pool. Usually you pass the default CommonPool.

import kotlin.coroutines.experimental.* // notice that it is under a package experimental

suspend fun answer() {
      println("Hello to you!")
}

fun main(args: Array<String>) {
    launch(CommonPool) {
        answer() // it prints this second
    }

    println("Hello, dude!") // it prints this first
    Thread.sleep(2000L) // it simulates real work being done
}

Notice how the coroutines are inside a package called experimental, because the API could change. This is also the suggested approach for your own library functions that implement coroutines. You should put them under an experimental package to warn them of the instability and to prepare for future changes.

This is the basic way, but in many cases you do not need to deal directly with the low-level API. Instead you can use wrappers available for the most common situations.

In fact, coroutines are actually a low-level feature, so the module itself kotlin.coroutines offers few high-level functionalities. That is to say, features made for direct use by developers. It contains mostly functions meant for creators of libraries based upon coroutines (e.g., to create a coroutine or to manage its life). Most of the functionalities that you would want to use are in the package kotlinx.coroutines.

From Callbacks to Coroutines

Coroutines are useful to get rid of callbacks. Assume that you have a function that must perform some expensive computation: solveAllProblems. This function calls a callback that receives the result of the computation and performs what is needed (e.g., it saves the result in a database).

fun solveAllProblems(params: Params, callback: (Result) -> Unit)

You can easily eliminate the callback using the suspendCoroutine function. This function works by relying on a Continuation type (cont in the following example), that offers a resume method, which is used to return the expected result.

suspend fun solveAllProblems(params: Params): Result = suspendCoroutine { cont ->
    solveAllProblems(params) { cont.resume(it) }
} 

The advantage of this solution is that now the return type of this computation is explicit., but the computation itself is still asynchronous and does not block a thread.

Coroutines And Sequences

The function buildSequence in one of the high-level functionalities of the basic kotlin.coroutines package. It is used to create lazy sequences and relies on coroutines: each time it runs it return a new result. It is designed for when you can obtain partial results that you can use instantly, without having the complete data. For instance, you can use it to obtain a list of items from a database to display in a UI with infinite scrolling.

Essentially the power of the function rests on the yield mechanism. Each time you asks for an element the function executes until it can give me that element.

In practice, the function executes until it find the yield function, then it returns with the argument of that function. The next execution continues from the point it stopped previously. This happens until the end of the calls of the yield functions. So the yield functions are the suspension points.

Of course, the yield functions can actually continue indefinitely, if you need it. For instance, as in the following example, which calculates a series of powers of 2.

val powerOf2 = buildSequence {
    var a = 1
    yield(a) // the first execution stops here
    while (true) { // the second to N executions continue here
        a = a + a
        yield(a) // the second to N executions stop here
    }
}

println(powerOf2.take(5).toList()) // it prints [1, 2, 4, 8, 16]

It is important to understand that the function itself has no memory. This means that once a run it is completed, the next execution will start from scratch.

println(powerOf2.take(5).toList()) // it prints [1, 2, 4, 8, 16]
println(powerOf2.take(5).toList()) // it still prints [1, 2, 4, 8, 16]

If you want to keep getting new results, you have to use directly iterator() and next()methods on the sequence, as in the following example.

var memory = powerOf2.iterator()
println("${memory.next()} ${memory.next()} ${memory.next()}") // it prints "1 2 4"

The Async and Await Mechanism

Let’s see some of the functionalities of the package kotlinx.coroutines. These are are the heart of the coroutines: the primitive parts are there to implement these functionalities, but you typically do not want to use. The exception is if you are a library developer, so that you  can implement similar high-level functionalities for your own library.

For instance, it provides async and await, that works similarly to the homonym in C#. Typically they are used together, in the case in which an async function returns a meaningful value. First you invoke a function with async, then you read the result with await.

There is not an official documentation of these features, but you can find info about it on the GitHub project.

fun asyncAFunction(): Deferred<Int> = async(CommonPool) {
    10
}

You can see that:

  • the name of the function starts with async. This is the suggested naming convention.
  • it does not have the modifier suspend, thus it can be called everywhere
  • it returns Deferred<Int> instead of Int directly.

An example of how it can be used.

fun main(args: Array<String>) {

    // runBlocking prevents the closing of the program until execution is completed
    runBlocking {
        // one is of type Deferred
        val one = asyncAFunction()
        // we use await to wait for the result
        println("The value is ${one.await()")
    }
}

The first interesting fact is that one.await returns Int. So we can say that it unpacks the deferred type and make it usable as usual. The second one is the function runBlocking, that prevents the premature end of the program. In this simple case it is just an alternative to the usage of the old sleep trick (i.e., blocking the ending of the program with Thread.Sleep) to wait for the result of the asynchronous call. However the function can also be useful in a few situations with asynchronous programming. For instance, it allows to call a suspend function from anywhere.

The function essentially behaves like the following.

// example without await and runBlocking

fun main(args: Array<String>) {
    // one is of type Deferred;
    val one = asyncAFunction()
    // wait completion
    while(one.isActive) {}
  
    println("The value is ${one.getCompleted()}")
}

In the previous example we basically simulated the await function:

  • we did nothing while the asynchronous function was active
  • we read the result once we were certain it was finished

Channels

Coroutines can also implement Go-like channels: a source that send content and a destination that receive it. Essentially channels are like non-blocking queues, that send and operate data asynchronously.

import kotlin.coroutines.experimental.*
import kotlinx.coroutines.experimental.channels.*

fun main(args: Array<String>) {
    runBlocking {
        val channel = Channel<Int>()
        launch(CommonPool) {            
            var y = 1
            for (x in 1..5) {
                y = y + y
                channel.send(y)
            }
            // we close the channel when finished
            channel.close()
        }
        // here we print the received data
        // you could also use channel.receive() to get the messages one by one
        for (y in channel)
            println(y)
    }
}

This code also produces the sequence of powers of 2, but it does it asynchronously.

A more correct way to produce the same results is using produce function. Basically it bring a little order and take care of making everything running smoothly.

fun produceAFewNumbers() = produce<Int>(CommonPool) {
    var y = 1
    for (x in 1..5) {
        y = y + y
        send(y)
    }
}

fun main(args: Array<String>) {
    runBlocking {
        val numbers = produceAFewNumbers()
        numbers.consumeEach { println(it) }
}

The end result is the same, but it is much cleaner.

Pipelines

A common pattern with channels is the pipeline. A pipeline is made up of two coroutines: one that send and one that receive a stream of data. This is the more natural way to deal with an infinite stream of data.

import kotlin.coroutines.experimental.*
import kotlinx.coroutines.experimental.channels.*

fun produceNumbers() = produce<Int>(CommonPool) {
    var x = 1
    while (true)
        // we send an infinite stream of numbers
        send(x++)
}

The first coroutine is produceNumbers which sends an infinite stream of numbers, starting from 1. The function rely on the aptly named produce(CouroutineContext) , that accepts a thread pool and must send some data.

fun divisibleBy2(numbers: ReceiveChannel<Int>) = produce<Int>(CommonPool) {
    for (x in numbers)
    {
        if ( x % 2 == 0)
            // we filter out the number not divisible by 2
            send(x)
    }
}

The second coroutine receive the data from a channel, but also send it to another one, if they are divisible by 2. So a pipeline can actually be made up of more channels linked together in a cascade.

fun main(args: Array<String>) {
    runBlocking {
        val numbers = produceNumbers()
        val divisibles = divisibleBy2(numbers)
        // we print the first 5 numbers
        for (i in 1..5)
            println(divisibles.receive())

        println("Finished!") // we have finished
        // let's cancel the coroutines for good measure
        divisibles.cancel()
        numbers.cancel()
    }
}

The example is quite straightforward. In this last part we just put everything together to get the first five numbers that are produced by the pipeline. In this case we have sent 9 numbers from the first channel, but only 5 have come out of the complete pipeline.

Notice also that we did not close the channels, but directly cancelled the coroutines. That is because we are completely ending their usage instead of politely informing their users about the end of transmissions.

The Select Expression

Select is an expression that is able to wait for multiple suspending functions and to select the result from the first one that becomes available. An example of a scenario in which it can be useful is the backend of a queue. There are multiple producers sending messages and the backend processes them in the order they are arrived.

Let’s see a simple example: we have two kids, John and Mike, trading insults. We are going to see how John insult his friend, but Mike is equally (un)skilled.

fun john() = produce<String>(CommonPool) {
    while (true) {
        val insults = listOf("stupid", "idiot", "stinky")
        val random = Random()
        delay(random.nextInt(1000).toLong())
        send(insults[random.nextInt(3)])
    }
}

All they do is sending a random insult whenever they are ready.

suspend fun selectInsult(john: ReceiveChannel<String>, mike: ReceiveChannel<String>) {
    select { //   means that this select expression does not produce any result
        john.onReceive { value ->  // this is the first select clause
            println("John says '$value'")
        }
        mike.onReceive { value ->  // this is the second select clause
            println("Mike says '$value'")
        }
    }
}

Their insults are processed by the selectInsult function, that accepts the two channels as parameters and prints the insult whenever it receive one. Our select expression does not produce any result, but the expression could return one if needed.

The power of the select expression is in its clauses. In this example each clause is based upon onReceive, but it could also use OnReceiveOrNull. The second one is used to get a message when the channel is closed, but since we never close the channel we do not need it.

fun main(args: Array<String>)
{
    runBlocking {
        val john = john()
        val mike = mike()
        repeat(6) {
            selectInsult(john, mike)
        }
    }

Putting all the pieces together is child play. We could print an infinite list of insults, but we are satisfied with 6.

The select expression clauses can also depend upon onAwait of a Deferred type. For example, we can modify our select expression to accept the input of an adult.

fun adult(): Deferred<String> = async(CommonPool) {
    // the adult stops the exchange after a while
    delay(Random().nextInt(2000).toLong())
    "Stop it!"
}

suspend fun selectInsult(john: ReceiveChannel<String>, mike: ReceiveChannel<String>,
                         adult: Deferred<String>) {
    select {
        // [..] the rest is like before
        adult.onAwait { value ->
            println("Exasperated adult says '$value'")
        }
    }
}

fun main(args: Array<String>) {
    runBlocking {
        val john = john()
        val mike = mike()
        val adult = adult()
        repeat(6) {
            selectInsult(john, mike, adult)
        }
    }

Notice that you should be wary of mixing the two kinds of clauses. That is because the result of onAwait is always there. So in our example, after it fires the first time the adult takes over and keep shouting Stop it! whenever the function selectInsult is called. Of course in real usage you would probably make the select expression returns a result to check for the firing of the onAwait clause and end the cycle when it fired the first time.

Summary

With this article we have seen what are coroutines and what they bring to Kotlin. We have also shown how to use the main functionalities provided around them and exploit their potential to make your asynchronous easier to write. Writing asynchronous software is hard, this library solves for you the most common problems. However we have just scratched the surface of what you can do with coroutines.

For example, we have not talked about all the low-level features for creators of libraries. That is because we choose to concentrate on the knowledge that will be useful to most people. However if you want to know more you can read the ample documentation on kotlin.coroutines and kotlinx.coroutines. You will also find a reference for the module and a deep guide on how to apply coroutines on UI (both Android and Desktop).

The post Introduction to Coroutines: What Problems Do They Solve? appeared first on SuperKotlin.

Continue ReadingIntroduction to Coroutines: What Problems Do They Solve?

Introduction to Coroutines: What Problems Do They Solve?

Problem Solution
Simplify Callbacks Coroutines
Get results from a potentially infinite list BuildSequence
Get a promise for a future result Async/Await
Work with streams of data Channels and Pipelines
Act on multiple asynchronous inputs Select

The purpose of coroutines is to take care of the complications in working with asynchronous programming. You write code sequentially, like you usually do, and then leave to the coroutines the hard work. Coroutines are a low-level mechanism. The end objective is to build  accessible mechanisms like async/await in C#

Coroutines are an experimental feature, introduced with Kotlin version 1.1. They have been created to manage operations with long execution time (e.g., Input/Output operations with a remote resource) without blocking a thread. They allows to suspend a computation without keeping occupied a thread. In practical terms, they behave as if they were light threads with little overhead, which means that you can have many of them.

In fact, traditional threads have a flaw: they are costly to maintain. Thus it is not practical to have more than a few available and they are mostly controlled by the system. A coroutine offers a lightweight alternative that is cheap in terms of resources and it is easier to control by the developer.

Suspension Points

You have limited control over a thread: you can create a new one or terminate it, but that is basically it. If you want to do more you have to deal with system libraries and all the low-level issues that comes with them. For instance, you have to check for deadlock problems.

Coroutines are a easier to use, but there are a few rules. The basic ideas is that coroutines are blocks of code that can be suspended, without blocking a thread. The difference is that blocking a thread means the thread cannot do anything else, while suspending it means that it can do other things while waiting the completion of the suspended block.

However you cannot suspend a coroutine at arbitrary positions. A coroutine can be suspended only at certain points, called suspension points. That is to say functions with the modifier suspend.

suspend fun answer() {
      println("Hello to you!")
}

The behavior is controlled by the library: there might be a suspension in these points, but this is not a certainty. For example, the library can decide to proceed without suspension, if the result for the call in question is already available.

Functions with suspension points works normally except for one thing: they can only be called from coroutines or other functions with suspension points. They cannot be called by normal code. The calling function can be a (suspending) lambda.

Launching a Suspension Function

The simplest way to launch a suspension function is with the launch function. It requires as an argument a thread pool. Usually you pass the default CommonPool.

import kotlin.coroutines.experimental.* // notice that it is under a package experimental

suspend fun answer() {
      println("Hello to you!")
}

fun main(args: Array<String>) {
    launch(CommonPool) {
        answer() // it prints this second
    }

    println("Hello, dude!") // it prints this first
    Thread.sleep(2000L) // it simulates real work being done
}

Notice how the coroutines are inside a package called experimental, because the API could change. This is also the suggested approach for your own library functions that implement coroutines. You should put them under an experimental package to warn them of the instability and to prepare for future changes.

This is the basic way, but in many cases you do not need to deal directly with the low-level API. Instead you can use wrappers available for the most common situations.

In fact, coroutines are actually a low-level feature, so the module itself kotlin.coroutines offers few high-level functionalities. That is to say, features made for direct use by developers. It contains mostly functions meant for creators of libraries based upon coroutines (e.g., to create a coroutine or to manage its life). Most of the functionalities that you would want to use are in the package kotlinx.coroutines.

From Callbacks to Coroutines

Coroutines are useful to get rid of callbacks. Assume that you have a function that must perform some expensive computation: solveAllProblems. This function calls a callback that receives the result of the computation and performs what is needed (e.g., it saves the result in a database).

fun solveAllProblems(params: Params, callback: (Result) -> Unit)

You can easily eliminate the callback using the suspendCoroutine function. This function works by relying on a Continuation type (cont in the following example), that offers a resume method, which is used to return the expected result.

suspend fun solveAllProblems(params: Params): Result = suspendCoroutine { cont ->
    solveAllProblems(params) { cont.resume(it) }
} 

The advantage of this solution is that now the return type of this computation is explicit., but the computation itself is still asynchronous and does not block a thread.

Coroutines And Sequences

The function buildSequence in one of the high-level functionalities of the basic kotlin.coroutines package. It is used to create lazy sequences and relies on coroutines: each time it runs it return a new result. It is designed for when you can obtain partial results that you can use instantly, without having the complete data. For instance, you can use it to obtain a list of items from a database to display in a UI with infinite scrolling.

Essentially the power of the function rests on the yield mechanism. Each time you asks for an element the function executes until it can give me that element.

In practice, the function executes until it find the yield function, then it returns with the argument of that function. The next execution continues from the point it stopped previously. This happens until the end of the calls of the yield functions. So the yield functions are the suspension points.

Of course, the yield functions can actually continue indefinitely, if you need it. For instance, as in the following example, which calculates a series of powers of 2.

val powerOf2 = buildSequence {
    var a = 1
    yield(a) // the first execution stops here
    while (true) { // the second to N executions continue here
        a = a + a
        yield(a) // the second to N executions stop here
    }
}

println(powerOf2.take(5).toList()) // it prints [1, 2, 4, 8, 16]

It is important to understand that the function itself has no memory. This means that once a run it is completed, the next execution will start from scratch.

println(powerOf2.take(5).toList()) // it prints [1, 2, 4, 8, 16]
println(powerOf2.take(5).toList()) // it still prints [1, 2, 4, 8, 16]

If you want to keep getting new results, you have to use directly iterator() and next()methods on the sequence, as in the following example.

var memory = powerOf2.iterator()
println("${memory.next()} ${memory.next()} ${memory.next()}") // it prints "1 2 4"

The Async and Await Mechanism

Let’s see some of the functionalities of the package kotlinx.coroutines. These are are the heart of the coroutines: the primitive parts are there to implement these functionalities, but you typically do not want to use. The exception is if you are a library developer, so that you  can implement similar high-level functionalities for your own library.

For instance, it provides async and await, that works similarly to the homonym in C#. Typically they are used together, in the case in which an async function returns a meaningful value. First you invoke a function with async, then you read the result with await.

There is not an official documentation of these features, but you can find info about it on the GitHub project.

fun asyncAFunction(): Deferred<Int> = async(CommonPool) {
    10
}

You can see that:

  • the name of the function starts with async. This is the suggested naming convention.
  • it does not have the modifier suspend, thus it can be called everywhere
  • it returns Deferred<Int> instead of Int directly.

An example of how it can be used.

fun main(args: Array<String>) {

    // runBlocking prevents the closing of the program until execution is completed
    runBlocking {
        // one is of type Deferred
        val one = asyncAFunction()
        // we use await to wait for the result
        println("The value is ${one.await()")
    }
}

The first interesting fact is that one.await returns Int. So we can say that it unpacks the deferred type and make it usable as usual. The second one is the function runBlocking, that prevents the premature end of the program. In this simple case it is just an alternative to the usage of the old sleep trick (i.e., blocking the ending of the program with Thread.Sleep) to wait for the result of the asynchronous call. However the function can also be useful in a few situations with asynchronous programming. For instance, it allows to call a suspend function from anywhere.

The function essentially behaves like the following.

// example without await and runBlocking

fun main(args: Array<String>) {
    // one is of type Deferred;
    val one = asyncAFunction()
    // wait completion
    while(one.isActive) {}
  
    println("The value is ${one.getCompleted()}")
}

In the previous example we basically simulated the await function:

  • we did nothing while the asynchronous function was active
  • we read the result once we were certain it was finished

Channels

Coroutines can also implement Go-like channels: a source that send content and a destination that receive it. Essentially channels are like non-blocking queues, that send and operate data asynchronously.

import kotlin.coroutines.experimental.*
import kotlinx.coroutines.experimental.channels.*

fun main(args: Array<String>) {
    runBlocking {
        val channel = Channel<Int>()
        launch(CommonPool) {            
            var y = 1
            for (x in 1..5) {
                y = y + y
                channel.send(y)
            }
            // we close the channel when finished
            channel.close()
        }
        // here we print the received data
        // you could also use channel.receive() to get the messages one by one
        for (y in channel)
            println(y)
    }
}

This code also produces the sequence of powers of 2, but it does it asynchronously.

A more correct way to produce the same results is using produce function. Basically it bring a little order and take care of making everything running smoothly.

fun produceAFewNumbers() = produce<Int>(CommonPool) {
    var y = 1
    for (x in 1..5) {
        y = y + y
        send(y)
    }
}

fun main(args: Array<String>) {
    runBlocking {
        val numbers = produceAFewNumbers()
        numbers.consumeEach { println(it) }
}

The end result is the same, but it is much cleaner.

Pipelines

A common pattern with channels is the pipeline. A pipeline is made up of two coroutines: one that send and one that receive a stream of data. This is the more natural way to deal with an infinite stream of data.

import kotlin.coroutines.experimental.*
import kotlinx.coroutines.experimental.channels.*

fun produceNumbers() = produce<Int>(CommonPool) {
    var x = 1
    while (true)
        // we send an infinite stream of numbers
        send(x++)
}

The first coroutine is produceNumbers which sends an infinite stream of numbers, starting from 1. The function rely on the aptly named produce(CouroutineContext) , that accepts a thread pool and must send some data.

fun divisibleBy2(numbers: ReceiveChannel<Int>) = produce<Int>(CommonPool) {
    for (x in numbers)
    {
        if ( x % 2 == 0)
            // we filter out the number not divisible by 2
            send(x)
    }
}

The second coroutine receive the data from a channel, but also send it to another one, if they are divisible by 2. So a pipeline can actually be made up of more channels linked together in a cascade.

fun main(args: Array<String>) {
    runBlocking {
        val numbers = produceNumbers()
        val divisibles = divisibleBy2(numbers)
        // we print the first 5 numbers
        for (i in 1..5)
            println(divisibles.receive())

        println("Finished!") // we have finished
        // let's cancel the coroutines for good measure
        divisibles.cancel()
        numbers.cancel()
    }
}

The example is quite straightforward. In this last part we just put everything together to get the first five numbers that are produced by the pipeline. In this case we have sent 9 numbers from the first channel, but only 5 have come out of the complete pipeline.

Notice also that we did not close the channels, but directly cancelled the coroutines. That is because we are completely ending their usage instead of politely informing their users about the end of transmissions.

The Select Expression

Select is an expression that is able to wait for multiple suspending functions and to select the result from the first one that becomes available. An example of a scenario in which it can be useful is the backend of a queue. There are multiple producers sending messages and the backend processes them in the order they are arrived.

Let’s see a simple example: we have two kids, John and Mike, trading insults. We are going to see how John insult his friend, but Mike is equally (un)skilled.

fun john() = produce<String>(CommonPool) {
    while (true) {
        val insults = listOf("stupid", "idiot", "stinky")
        val random = Random()
        delay(random.nextInt(1000).toLong())
        send(insults[random.nextInt(3)])
    }
}

All they do is sending a random insult whenever they are ready.

suspend fun selectInsult(john: ReceiveChannel<String>, mike: ReceiveChannel<String>) {
    select { //   means that this select expression does not produce any result
        john.onReceive { value ->  // this is the first select clause
            println("John says '$value'")
        }
        mike.onReceive { value ->  // this is the second select clause
            println("Mike says '$value'")
        }
    }
}

Their insults are processed by the selectInsult function, that accepts the two channels as parameters and prints the insult whenever it receive one. Our select expression does not produce any result, but the expression could return one if needed.

The power of the select expression is in its clauses. In this example each clause is based upon onReceive, but it could also use OnReceiveOrNull. The second one is used to get a message when the channel is closed, but since we never close the channel we do not need it.

fun main(args: Array<String>)
{
    runBlocking {
        val john = john()
        val mike = mike()
        repeat(6) {
            selectInsult(john, mike)
        }
    }

Putting all the pieces together is child play. We could print an infinite list of insults, but we are satisfied with 6.

The select expression clauses can also depend upon onAwait of a Deferred type. For example, we can modify our select expression to accept the input of an adult.

fun adult(): Deferred<String> = async(CommonPool) {
    // the adult stops the exchange after a while
    delay(Random().nextInt(2000).toLong())
    "Stop it!"
}

suspend fun selectInsult(john: ReceiveChannel<String>, mike: ReceiveChannel<String>,
                         adult: Deferred<String>) {
    select {
        // [..] the rest is like before
        adult.onAwait { value ->
            println("Exasperated adult says '$value'")
        }
    }
}

fun main(args: Array<String>) {
    runBlocking {
        val john = john()
        val mike = mike()
        val adult = adult()
        repeat(6) {
            selectInsult(john, mike, adult)
        }
    }

Notice that you should be wary of mixing the two kinds of clauses. That is because the result of onAwait is always there. So in our example, after it fires the first time the adult takes over and keep shouting Stop it! whenever the function selectInsult is called. Of course in real usage you would probably make the select expression returns a result to check for the firing of the onAwait clause and end the cycle when it fired the first time.

Summary

With this article we have seen what are coroutines and what they bring to Kotlin. We have also shown how to use the main functionalities provided around them and exploit their potential to make your asynchronous easier to write. Writing asynchronous software is hard, this library solves for you the most common problems. However we have just scratched the surface of what you can do with coroutines.

For example, we have not talked about all the low-level features for creators of libraries. That is because we choose to concentrate on the knowledge that will be useful to most people. However if you want to know more you can read the ample documentation on kotlin.coroutines and kotlinx.coroutines. You will also find a reference for the module and a deep guide on how to apply coroutines on UI (both Android and Desktop).

The post Introduction to Coroutines: What Problems Do They Solve? appeared first on SuperKotlin.

Continue ReadingIntroduction to Coroutines: What Problems Do They Solve?

Introduction to Coroutines: What Problems Do They Solve?

Problem Solution
Simplify Callbacks Coroutines
Get results from a potentially infinite list BuildSequence
Get a promise for a future result Async/Await
Work with streams of data Channels and Pipelines
Act on multiple asynchronous inputs Select

The purpose of coroutines is to take care of the complications in working with asynchronous programming. You write code sequentially, like you usually do, and then leave to the coroutines the hard work. Coroutines are a low-level mechanism. The end objective is to build  accessible mechanisms like async/await in C#

Coroutines are an experimental feature, introduced with Kotlin version 1.1. They have been created to manage operations with long execution time (e.g., Input/Output operations with a remote resource) without blocking a thread. They allows to suspend a computation without keeping occupied a thread. In practical terms, they behave as if they were light threads with little overhead, which means that you can have many of them.

In fact, traditional threads have a flaw: they are costly to maintain. Thus it is not practical to have more than a few available and they are mostly controlled by the system. A coroutine offers a lightweight alternative that is cheap in terms of resources and it is easier to control by the developer.

Suspension Points

You have limited control over a thread: you can create a new one or terminate it, but that is basically it. If you want to do more you have to deal with system libraries and all the low-level issues that comes with them. For instance, you have to check for deadlock problems.

Coroutines are a easier to use, but there are a few rules. The basic ideas is that coroutines are blocks of code that can be suspended, without blocking a thread. The difference is that blocking a thread means the thread cannot do anything else, while suspending it means that it can do other things while waiting the completion of the suspended block.

However you cannot suspend a coroutine at arbitrary positions. A coroutine can be suspended only at certain points, called suspension points. That is to say functions with the modifier suspend.

suspend fun answer() {
      println("Hello to you!")
}

The behavior is controlled by the library: there might be a suspension in these points, but this is not a certainty. For example, the library can decide to proceed without suspension, if the result for the call in question is already available.

Functions with suspension points works normally except for one thing: they can only be called from coroutines or other functions with suspension points. They cannot be called by normal code. The calling function can be a (suspending) lambda.

Launching a Suspension Function

The simplest way to launch a suspension function is with the launch function. It requires as an argument a thread pool. Usually you pass the default CommonPool.

import kotlin.coroutines.experimental.* // notice that it is under a package experimental

suspend fun answer() {
      println("Hello to you!")
}

fun main(args: Array<String>) {
    launch(CommonPool) {
        answer() // it prints this second
    }

    println("Hello, dude!") // it prints this first
    Thread.sleep(2000L) // it simulates real work being done
}

Notice how the coroutines are inside a package called experimental, because the API could change. This is also the suggested approach for your own library functions that implement coroutines. You should put them under an experimental package to warn them of the instability and to prepare for future changes.

This is the basic way, but in many cases you do not need to deal directly with the low-level API. Instead you can use wrappers available for the most common situations.

In fact, coroutines are actually a low-level feature, so the module itself kotlin.coroutines offers few high-level functionalities. That is to say, features made for direct use by developers. It contains mostly functions meant for creators of libraries based upon coroutines (e.g., to create a coroutine or to manage its life). Most of the functionalities that you would want to use are in the package kotlinx.coroutines.

From Callbacks to Coroutines

Coroutines are useful to get rid of callbacks. Assume that you have a function that must perform some expensive computation: solveAllProblems. This function calls a callback that receives the result of the computation and performs what is needed (e.g., it saves the result in a database).

fun solveAllProblems(params: Params, callback: (Result) -> Unit)

You can easily eliminate the callback using the suspendCoroutine function. This function works by relying on a Continuation type (cont in the following example), that offers a resume method, which is used to return the expected result.

suspend fun solveAllProblems(params: Params): Result = suspendCoroutine { cont ->
    solveAllProblems(params) { cont.resume(it) }
} 

The advantage of this solution is that now the return type of this computation is explicit., but the computation itself is still asynchronous and does not block a thread.

Coroutines And Sequences

The function buildSequence in one of the high-level functionalities of the basic kotlin.coroutines package. It is used to create lazy sequences and relies on coroutines: each time it runs it return a new result. It is designed for when you can obtain partial results that you can use instantly, without having the complete data. For instance, you can use it to obtain a list of items from a database to display in a UI with infinite scrolling.

Essentially the power of the function rests on the yield mechanism. Each time you asks for an element the function executes until it can give me that element.

In practice, the function executes until it find the yield function, then it returns with the argument of that function. The next execution continues from the point it stopped previously. This happens until the end of the calls of the yield functions. So the yield functions are the suspension points.

Of course, the yield functions can actually continue indefinitely, if you need it. For instance, as in the following example, which calculates a series of powers of 2.

val powerOf2 = buildSequence {
    var a = 1
    yield(a) // the first execution stops here
    while (true) { // the second to N executions continue here
        a = a + a
        yield(a) // the second to N executions stop here
    }
}

println(powerOf2.take(5).toList()) // it prints [1, 2, 4, 8, 16]

It is important to understand that the function itself has no memory. This means that once a run it is completed, the next execution will start from scratch.

println(powerOf2.take(5).toList()) // it prints [1, 2, 4, 8, 16]
println(powerOf2.take(5).toList()) // it still prints [1, 2, 4, 8, 16]

If you want to keep getting new results, you have to use directly iterator() and next()methods on the sequence, as in the following example.

var memory = powerOf2.iterator()
println("${memory.next()} ${memory.next()} ${memory.next()}") // it prints "1 2 4"

The Async and Await Mechanism

Let’s see some of the functionalities of the package kotlinx.coroutines. These are are the heart of the coroutines: the primitive parts are there to implement these functionalities, but you typically do not want to use. The exception is if you are a library developer, so that you  can implement similar high-level functionalities for your own library.

For instance, it provides async and await, that works similarly to the homonym in C#. Typically they are used together, in the case in which an async function returns a meaningful value. First you invoke a function with async, then you read the result with await.

There is not an official documentation of these features, but you can find info about it on the GitHub project.

fun asyncAFunction(): Deferred<Int> = async(CommonPool) {
    10
}

You can see that:

  • the name of the function starts with async. This is the suggested naming convention.
  • it does not have the modifier suspend, thus it can be called everywhere
  • it returns Deferred<Int> instead of Int directly.

An example of how it can be used.

fun main(args: Array<String>) {

    // runBlocking prevents the closing of the program until execution is completed
    runBlocking {
        // one is of type Deferred
        val one = asyncAFunction()
        // we use await to wait for the result
        println("The value is ${one.await()")
    }
}

The first interesting fact is that one.await returns Int. So we can say that it unpacks the deferred type and make it usable as usual. The second one is the function runBlocking, that prevents the premature end of the program. In this simple case it is just an alternative to the usage of the old sleep trick (i.e., blocking the ending of the program with Thread.Sleep) to wait for the result of the asynchronous call. However the function can also be useful in a few situations with asynchronous programming. For instance, it allows to call a suspend function from anywhere.

The function essentially behaves like the following.

// example without await and runBlocking

fun main(args: Array<String>) {
    // one is of type Deferred;
    val one = asyncAFunction()
    // wait completion
    while(one.isActive) {}
  
    println("The value is ${one.getCompleted()}")
}

In the previous example we basically simulated the await function:

  • we did nothing while the asynchronous function was active
  • we read the result once we were certain it was finished

Channels

Coroutines can also implement Go-like channels: a source that send content and a destination that receive it. Essentially channels are like non-blocking queues, that send and operate data asynchronously.

import kotlin.coroutines.experimental.*
import kotlinx.coroutines.experimental.channels.*

fun main(args: Array<String>) {
    runBlocking {
        val channel = Channel<Int>()
        launch(CommonPool) {            
            var y = 1
            for (x in 1..5) {
                y = y + y
                channel.send(y)
            }
            // we close the channel when finished
            channel.close()
        }
        // here we print the received data
        // you could also use channel.receive() to get the messages one by one
        for (y in channel)
            println(y)
    }
}

This code also produces the sequence of powers of 2, but it does it asynchronously.

A more correct way to produce the same results is using produce function. Basically it bring a little order and take care of making everything running smoothly.

fun produceAFewNumbers() = produce<Int>(CommonPool) {
    var y = 1
    for (x in 1..5) {
        y = y + y
        send(y)
    }
}

fun main(args: Array<String>) {
    runBlocking {
        val numbers = produceAFewNumbers()
        numbers.consumeEach { println(it) }
}

The end result is the same, but it is much cleaner.

Pipelines

A common pattern with channels is the pipeline. A pipeline is made up of two coroutines: one that send and one that receive a stream of data. This is the more natural way to deal with an infinite stream of data.

import kotlin.coroutines.experimental.*
import kotlinx.coroutines.experimental.channels.*

fun produceNumbers() = produce<Int>(CommonPool) {
    var x = 1
    while (true)
        // we send an infinite stream of numbers
        send(x++)
}

The first coroutine is produceNumbers which sends an infinite stream of numbers, starting from 1. The function rely on the aptly named produce(CouroutineContext) , that accepts a thread pool and must send some data.

fun divisibleBy2(numbers: ReceiveChannel<Int>) = produce<Int>(CommonPool) {
    for (x in numbers)
    {
        if ( x % 2 == 0)
            // we filter out the number not divisible by 2
            send(x)
    }
}

The second coroutine receive the data from a channel, but also send it to another one, if they are divisible by 2. So a pipeline can actually be made up of more channels linked together in a cascade.

fun main(args: Array<String>) {
    runBlocking {
        val numbers = produceNumbers()
        val divisibles = divisibleBy2(numbers)
        // we print the first 5 numbers
        for (i in 1..5)
            println(divisibles.receive())

        println("Finished!") // we have finished
        // let's cancel the coroutines for good measure
        divisibles.cancel()
        numbers.cancel()
    }
}

The example is quite straightforward. In this last part we just put everything together to get the first five numbers that are produced by the pipeline. In this case we have sent 9 numbers from the first channel, but only 5 have come out of the complete pipeline.

Notice also that we did not close the channels, but directly cancelled the coroutines. That is because we are completely ending their usage instead of politely informing their users about the end of transmissions.

The Select Expression

Select is an expression that is able to wait for multiple suspending functions and to select the result from the first one that becomes available. An example of a scenario in which it can be useful is the backend of a queue. There are multiple producers sending messages and the backend processes them in the order they are arrived.

Let’s see a simple example: we have two kids, John and Mike, trading insults. We are going to see how John insult his friend, but Mike is equally (un)skilled.

fun john() = produce<String>(CommonPool) {
    while (true) {
        val insults = listOf("stupid", "idiot", "stinky")
        val random = Random()
        delay(random.nextInt(1000).toLong())
        send(insults[random.nextInt(3)])
    }
}

All they do is sending a random insult whenever they are ready.

suspend fun selectInsult(john: ReceiveChannel<String>, mike: ReceiveChannel<String>) {
    select { //   means that this select expression does not produce any result
        john.onReceive { value ->  // this is the first select clause
            println("John says '$value'")
        }
        mike.onReceive { value ->  // this is the second select clause
            println("Mike says '$value'")
        }
    }
}

Their insults are processed by the selectInsult function, that accepts the two channels as parameters and prints the insult whenever it receive one. Our select expression does not produce any result, but the expression could return one if needed.

The power of the select expression is in its clauses. In this example each clause is based upon onReceive, but it could also use OnReceiveOrNull. The second one is used to get a message when the channel is closed, but since we never close the channel we do not need it.

fun main(args: Array<String>)
{
    runBlocking {
        val john = john()
        val mike = mike()
        repeat(6) {
            selectInsult(john, mike)
        }
    }

Putting all the pieces together is child play. We could print an infinite list of insults, but we are satisfied with 6.

The select expression clauses can also depend upon onAwait of a Deferred type. For example, we can modify our select expression to accept the input of an adult.

fun adult(): Deferred<String> = async(CommonPool) {
    // the adult stops the exchange after a while
    delay(Random().nextInt(2000).toLong())
    "Stop it!"
}

suspend fun selectInsult(john: ReceiveChannel<String>, mike: ReceiveChannel<String>,
                         adult: Deferred<String>) {
    select {
        // [..] the rest is like before
        adult.onAwait { value ->
            println("Exasperated adult says '$value'")
        }
    }
}

fun main(args: Array<String>) {
    runBlocking {
        val john = john()
        val mike = mike()
        val adult = adult()
        repeat(6) {
            selectInsult(john, mike, adult)
        }
    }

Notice that you should be wary of mixing the two kinds of clauses. That is because the result of onAwait is always there. So in our example, after it fires the first time the adult takes over and keep shouting Stop it! whenever the function selectInsult is called. Of course in real usage you would probably make the select expression returns a result to check for the firing of the onAwait clause and end the cycle when it fired the first time.

Summary

With this article we have seen what are coroutines and what they bring to Kotlin. We have also shown how to use the main functionalities provided around them and exploit their potential to make your asynchronous easier to write. Writing asynchronous software is hard, this library solves for you the most common problems. However we have just scratched the surface of what you can do with coroutines.

For example, we have not talked about all the low-level features for creators of libraries. That is because we choose to concentrate on the knowledge that will be useful to most people. However if you want to know more you can read the ample documentation on kotlin.coroutines and kotlinx.coroutines. You will also find a reference for the module and a deep guide on how to apply coroutines on UI (both Android and Desktop).

The post Introduction to Coroutines: What Problems Do They Solve? appeared first on SuperKotlin.

Continue ReadingIntroduction to Coroutines: What Problems Do They Solve?

Introduction to Coroutines: What Problems Do They Solve?

Problem Solution
Simplify Callbacks Coroutines
Get results from a potentially infinite list BuildSequence
Get a promise for a future result Async/Await
Work with streams of data Channels and Pipelines
Act on multiple asynchronous inputs Select

The purpose of coroutines is to take care of the complications in working with asynchronous programming. You write code sequentially, like you usually do, and then leave to the coroutines the hard work. Coroutines are a low-level mechanism. The end objective is to build  accessible mechanisms like async/await in C#

Coroutines are an experimental feature, introduced with Kotlin version 1.1. They have been created to manage operations with a long execution time (e.g., input/output operations with a remote resource) without blocking a thread. They allow you to suspend a computation without keeping a thread occupied. In practical terms, they behave like light threads with little overhead, which means that you can have many of them.

In fact, traditional threads have a flaw: they are costly to maintain. Thus it is not practical to have more than a few available, and they are mostly controlled by the system. A coroutine offers a lightweight alternative that is cheap in terms of resources and easier for the developer to control.
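
As a rough sketch of that difference in cost (it uses the launch builder introduced later in this article, and the numbers are only indicative), the following program starts one hundred thousand coroutines, something that would be impractical with plain threads:

import kotlinx.coroutines.experimental.*

fun main(args: Array<String>) = runBlocking {
    // starting 100,000 threads would exhaust memory on most machines;
    // 100,000 coroutines are cheap because delay suspends without blocking a thread
    val jobs = List(100_000) {
        launch(CommonPool) {
            delay(1000L)
            print(".")
        }
    }
    jobs.forEach { it.join() } // wait for all of them to finish
}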

Suspension Points

You have limited control over a thread: you can create a new one or terminate it, but that is basically it. If you want to do more you have to deal with system libraries and all the low-level issues that come with them. For instance, you have to check for deadlock problems.

Coroutines are easier to use, but there are a few rules. The basic idea is that coroutines are blocks of code that can be suspended without blocking a thread. The difference is that blocking a thread means the thread cannot do anything else, while suspending a coroutine means the underlying thread can do other work while waiting for the completion of the suspended block.

However, you cannot suspend a coroutine at arbitrary positions. A coroutine can be suspended only at certain points, called suspension points: calls to functions marked with the modifier suspend.

suspend fun answer() {
      println("Hello to you!")
}

The behavior is controlled by the library: there might be a suspension at these points, but this is not a certainty. For example, the library can decide to proceed without suspension if the result of the call in question is already available.

Functions with suspension points work normally except for one thing: they can only be called from coroutines or from other functions with suspension points. They cannot be called by normal code. The calling function can also be a (suspending) lambda.
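
For instance, in this small sketch (the politeAnswer function is made up for illustration and reuses answer() from above), a suspending function can call another suspending function, but calling one from plain code does not compile:

// fine: a suspending function can call another suspending function
suspend fun politeAnswer() {
    println("Let me think...")
    answer()
}

// fun main(args: Array<String>) {
//     answer() // does not compile: suspend functions cannot be called from normal code
// }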

Launching a Suspending Function

The simplest way to launch a suspending function is with the launch function. It takes a coroutine context as an argument, which determines the thread pool used; usually you pass the default CommonPool.

import kotlinx.coroutines.experimental.* // notice that it is under an experimental package

suspend fun answer() {
    println("Hello to you!")
}

fun main(args: Array<String>) {
    launch(CommonPool) {
        answer() // it prints this second
    }

    println("Hello, dude!") // it prints this first
    Thread.sleep(2000L) // it simulates real work being done
}

Notice how the coroutines are inside a package called experimental, because the API could change. This is also the suggested approach for your own library functions that build upon coroutines: you should put them under an experimental package to warn users of the instability and to prepare for future changes.

This is the basic way, but in many cases you do not need to deal directly with the low-level API. Instead you can use wrappers available for the most common situations.

In fact, coroutines are a low-level feature, so the kotlin.coroutines module itself offers few high-level functionalities, that is to say, features made for direct use by developers. It contains mostly functions meant for creators of libraries based upon coroutines (e.g., to create a coroutine or to manage its life cycle). Most of the functionalities that you will want to use are in the package kotlinx.coroutines.

From Callbacks to Coroutines

Coroutines are useful to get rid of callbacks. Assume that you have a function that must perform some expensive computation: solveAllProblems. This function calls a callback that receives the result of the computation and performs whatever is needed (e.g., it saves the result in a database).

fun solveAllProblems(params: Params, callback: (Result) -> Unit)

You can easily eliminate the callback using the suspendCoroutine function. It relies on a Continuation type (cont in the following example), which offers a resume method used to return the expected result.

suspend fun solveAllProblems(params: Params): Result = suspendCoroutine { cont ->
    solveAllProblems(params) { cont.resume(it) }
} 

The advantage of this solution is that the return type of the computation is now explicit, but the computation itself is still asynchronous and does not block a thread.
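
To make this concrete, here is a minimal, hedged sketch with a made-up callback-based function findUser and a made-up User class; the suspending wrapper hides the callback behind an ordinary-looking return value:

import kotlin.coroutines.experimental.suspendCoroutine
import kotlinx.coroutines.experimental.runBlocking

class User(val name: String)

// the hypothetical callback-style function we want to wrap
fun findUser(id: Int, callback: (User) -> Unit) {
    Thread {
        Thread.sleep(100) // simulate slow I/O
        callback(User("user-$id"))
    }.start()
}

// the suspending wrapper: the callback disappears from the signature
suspend fun findUser(id: Int): User = suspendCoroutine { cont ->
    findUser(id) { user -> cont.resume(user) }
}

fun main(args: Array<String>) = runBlocking {
    println(findUser(42).name) // reads like sequential code, prints "user-42"
}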

Coroutines And Sequences

The function buildSequence is one of the high-level functionalities of the basic kotlin.coroutines package. It is used to create lazy sequences and relies on coroutines: each time it runs it returns a new result. It is designed for when you can obtain partial results that you can use immediately, without having the complete data. For instance, you can use it to obtain a list of items from a database to display in a UI with infinite scrolling.

Essentially the power of the function rests on the yield mechanism. Each time you ask for an element, the function executes until it can give you that element.

In practice, the function executes until it finds a call to yield, then it returns with the argument of that call. The next execution continues from the point where it stopped previously. This happens until there are no more calls to yield. So the yield calls are the suspension points.

Of course, the sequence can continue indefinitely if you need it to, as in the following example, which calculates a series of powers of 2.

val powerOf2 = buildSequence {
    var a = 1
    yield(a) // the first execution stops here
    while (true) { // the second to N executions continue here
        a = a + a
        yield(a) // the second to N executions stop here
    }
}

println(powerOf2.take(5).toList()) // it prints [1, 2, 4, 8, 16]

It is important to understand that the sequence itself has no memory. This means that once a run is completed, the next execution will start from scratch.

println(powerOf2.take(5).toList()) // it prints [1, 2, 4, 8, 16]
println(powerOf2.take(5).toList()) // it still prints [1, 2, 4, 8, 16]

If you want to keep getting new results, you have to use the iterator() and next() methods directly on the sequence, as in the following example.

val memory = powerOf2.iterator()
println("${memory.next()} ${memory.next()} ${memory.next()}") // it prints "1 2 4"

The Async and Await Mechanism

Let's see some of the functionalities of the package kotlinx.coroutines. These are the heart of the coroutines story: the primitives exist to implement these functionalities, but you typically do not want to use them directly. The exception is if you are a library developer and you want to implement similar high-level functionalities for your own library.

For instance, the package provides async and await, which work similarly to their namesakes in C#. Typically they are used together, when an async function returns a meaningful value: first you invoke a function with async, then you read the result with await.

There is no official documentation of these features yet, but you can find information about them on the GitHub project.

fun asyncAFunction(): Deferred<Int> = async(CommonPool) {
    10
}

You can see that:

  • the name of the function starts with async. This is the suggested naming convention.
  • it does not have the modifier suspend, thus it can be called from anywhere
  • it returns Deferred<Int> instead of Int directly.

Here is an example of how it can be used.

fun main(args: Array<String>) {

    // runBlocking prevents the closing of the program until execution is completed
    runBlocking {
        // one is of type Deferred
        val one = asyncAFunction()
        // we use await to wait for the result
        println("The value is ${one.await()")
    }
}

The first interesting fact is that one.await() returns Int. So we can say that it unpacks the deferred type and makes it usable as usual. The second one is the function runBlocking, which prevents the premature end of the program. In this simple case it is just an alternative to the old sleep trick (i.e., blocking the end of the program with Thread.sleep) to wait for the result of the asynchronous call. However, the function is also useful in a few other situations with asynchronous programming. For instance, it allows you to call a suspend function from regular blocking code.

The function essentially behaves like the following.

// example without await and runBlocking

fun main(args: Array<String>) {
    // one is of type Deferred;
    val one = asyncAFunction()
    // wait for completion (busy-waiting, just for illustration)
    while (one.isActive) {}
  
    println("The value is ${one.getCompleted()}")
}

In the previous example we basically simulated the await function:

  • we did nothing while the asynchronous function was active
  • we read the result once we were certain it was finished
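
The real benefit shows up when you have more than one Deferred in flight. As a hedged sketch (the function names and delays are invented), the two computations below run concurrently, so the total time is roughly the longer delay rather than the sum of the two:

import kotlinx.coroutines.experimental.*
import kotlin.system.measureTimeMillis

fun asyncFirst(): Deferred<Int> = async(CommonPool) { delay(1000L); 10 }
fun asyncSecond(): Deferred<Int> = async(CommonPool) { delay(1000L); 32 }

fun main(args: Array<String>) = runBlocking {
    val time = measureTimeMillis {
        val first = asyncFirst()   // both computations start immediately
        val second = asyncSecond()
        // await suspends until each result is ready
        println("The sum is ${first.await() + second.await()}")
    }
    println("Completed in about $time ms") // roughly 1000 ms, not 2000 ms
}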

Channels

Coroutines can also implement Go-like channels: a source that sends content and a destination that receives it. Essentially, channels are like non-blocking queues that transfer data asynchronously.

import kotlinx.coroutines.experimental.*
import kotlinx.coroutines.experimental.channels.*

fun main(args: Array<String>) {
    runBlocking {
        val channel = Channel<Int>()
        launch(CommonPool) {            
            var y = 1
            for (x in 1..5) {
                y = y + y
                channel.send(y)
            }
            // we close the channel when finished
            channel.close()
        }
        // here we print the received data
        // you could also use channel.receive() to get the messages one by one
        for (y in channel)
            println(y)
    }
}

This code also produces a sequence of powers of 2, but it does so asynchronously.

A cleaner way to produce the same results is the produce function. Basically, it brings a little order and takes care of making everything run smoothly: it creates the channel for you and closes it when the coroutine completes.

fun produceAFewNumbers() = produce<Int>(CommonPool) {
    var y = 1
    for (x in 1..5) {
        y = y + y
        send(y)
    }
}

fun main(args: Array<String>) {
    runBlocking {
        val numbers = produceAFewNumbers()
        numbers.consumeEach { println(it) }
    }
}

The end result is the same, but it is much cleaner.

Pipelines

A common pattern with channels is the pipeline. A pipeline is made up of at least two coroutines: one that sends a stream of data and one that receives it. This is the most natural way to deal with an infinite stream of data.

import kotlinx.coroutines.experimental.*
import kotlinx.coroutines.experimental.channels.*

fun produceNumbers() = produce<Int>(CommonPool) {
    var x = 1
    while (true)
        // we send an infinite stream of numbers
        send(x++)
}

The first coroutine is produceNumbers, which sends an infinite stream of numbers, starting from 1. The function relies on the aptly named produce(CoroutineContext), which accepts a coroutine context and must send some data.

fun divisibleBy2(numbers: ReceiveChannel<Int>) = produce<Int>(CommonPool) {
    for (x in numbers) {
        if (x % 2 == 0)
            // we only forward the numbers divisible by 2
            send(x)
    }
}

The second coroutine receives the data from one channel, but also sends it to another one, if the numbers are divisible by 2. So a pipeline can actually be made up of several channels linked together in a cascade.

fun main(args: Array<String>) {
    runBlocking {
        val numbers = produceNumbers()
        val divisibles = divisibleBy2(numbers)
        // we print the first 5 numbers
        for (i in 1..5)
            println(divisibles.receive())

        println("Finished!") // we have finished
        // let's cancel the coroutines for good measure
        divisibles.cancel()
        numbers.cancel()
    }
}

The example is quite straightforward. In this last part we just put everything together to get the first five numbers that come out of the pipeline. To deliver them, the first channel has produced roughly ten numbers, but only 5 have come out of the complete pipeline.

Notice also that we did not close the channels, but directly cancelled the coroutines. That is because we are completely ending their usage instead of politely informing their users about the end of transmissions.
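
To see the cascade in action, here is a hedged sketch (the divisibleBy3 stage is made up) that chains one more filter onto the same pipeline, so that only multiples of 6 come out at the end:

// hypothetical extra stage: it keeps only the numbers also divisible by 3
fun divisibleBy3(numbers: ReceiveChannel<Int>) = produce<Int>(CommonPool) {
    for (x in numbers)
        if (x % 3 == 0)
            send(x)
}

fun main(args: Array<String>) {
    runBlocking {
        val numbers = produceNumbers()
        val evens = divisibleBy2(numbers)
        val multiplesOf6 = divisibleBy3(evens)
        for (i in 1..5)
            println(multiplesOf6.receive()) // it prints 6, 12, 18, 24, 30
        // cancel every stage, as before
        multiplesOf6.cancel()
        evens.cancel()
        numbers.cancel()
    }
}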

The Select Expression

Select is an expression that is able to wait for multiple suspending functions and to select the result of the first one that becomes available. An example of a scenario in which it can be useful is the backend of a queue: there are multiple producers sending messages, and the backend processes them in the order they arrive.

Let's see a simple example: we have two kids, John and Mike, trading insults. We are going to see how John insults his friend; Mike is equally (un)skilled.

fun john() = produce<String>(CommonPool) {
    while (true) {
        val insults = listOf("stupid", "idiot", "stinky")
        val random = Random()
        delay(random.nextInt(1000).toLong())
        send(insults[random.nextInt(3)])
    }
}
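
The article never shows mike(), but since Mike is just as (un)skilled we can assume it mirrors john(); a plausible sketch (the insults are invented) could look like this:

// assumed counterpart of john(), not shown in the original article
fun mike() = produce<String>(CommonPool) {
    while (true) {
        val insults = listOf("dumb", "silly", "smelly")
        val random = Random()
        delay(random.nextInt(1000).toLong())
        send(insults[random.nextInt(3)])
    }
}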

All they do is send a random insult whenever they are ready.

suspend fun selectInsult(john: ReceiveChannel<String>, mike: ReceiveChannel<String>) {
    select<Unit> { // <Unit> means that this select expression does not produce any result
        john.onReceive { value ->  // this is the first select clause
            println("John says '$value'")
        }
        mike.onReceive { value ->  // this is the second select clause
            println("Mike says '$value'")
        }
    }
}

Their insults are processed by the selectInsult function, which accepts the two channels as parameters and prints an insult whenever it receives one. Our select expression does not produce any result, but it could return one if needed.

The power of the select expression is in its clauses. In this example each clause is based upon onReceive, but it could also use onReceiveOrNull. The latter also fires, with a null value, when the channel is closed, but since we never close the channels we do not need it.

fun main(args: Array<String>) {
    runBlocking {
        val john = john()
        val mike = mike()
        repeat(6) {
            selectInsult(john, mike)
        }
    }
}

Putting all the pieces together is child's play. We could print an infinite list of insults, but we are satisfied with 6.

The select expression clauses can also depend upon the onAwait clause of a Deferred value. For example, we can modify our select expression to accept the input of an adult.

fun adult(): Deferred<String> = async(CommonPool) {
    // the adult stops the exchange after a while
    delay(Random().nextInt(2000).toLong())
    "Stop it!"
}

suspend fun selectInsult(john: ReceiveChannel<String>, mike: ReceiveChannel<String>,
                         adult: Deferred<String>) {
    select {
        // [..] the rest is like before
        adult.onAwait { value ->
            println("Exasperated adult says '$value'")
        }
    }
}

fun main(args: Array<String>) {
    runBlocking {
        val john = john()
        val mike = mike()
        val adult = adult()
        repeat(6) {
            selectInsult(john, mike, adult)
        }
    }
}

Notice that you should be wary of mixing the two kinds of clauses. That is because, once the Deferred completes, the result of onAwait is always immediately available. So in our example, after it fires the first time, the adult takes over and keeps shouting Stop it! whenever selectInsult is called. In real usage you would probably make the select expression return a result, check whether the onAwait clause fired, and end the cycle the first time it does, as in the sketch below.
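
Here is a hedged sketch of that idea: selectInsult returns a Boolean that tells the caller whether the adult has intervened, and the loop stops as soon as it has.

suspend fun selectInsult(john: ReceiveChannel<String>, mike: ReceiveChannel<String>,
                         adult: Deferred<String>): Boolean = select {
    john.onReceive { value ->
        println("John says '$value'")
        false // keep going
    }
    mike.onReceive { value ->
        println("Mike says '$value'")
        false // keep going
    }
    adult.onAwait { value ->
        println("Exasperated adult says '$value'")
        true // stop the exchange
    }
}

fun main(args: Array<String>) {
    runBlocking {
        val john = john()
        val mike = mike()
        val adult = adult()
        while (!selectInsult(john, mike, adult)) {
            // keep trading insults until the adult intervenes
        }
        john.cancel()
        mike.cancel()
    }
}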

Summary

With this article we have seen what coroutines are and what they bring to Kotlin. We have also shown how to use the main functionalities provided around them and how to exploit their potential to make your asynchronous code easier to write. Writing asynchronous software is hard; this library solves the most common problems for you. However, we have just scratched the surface of what you can do with coroutines.

For example, we have not talked about all the low-level features for creators of libraries. That is because we chose to concentrate on the knowledge that will be useful to most people. However, if you want to know more you can read the ample documentation on kotlin.coroutines and kotlinx.coroutines. You will also find a reference for the module and an in-depth guide on how to use coroutines with UIs (both Android and desktop).

The post Introduction to Coroutines: What Problems Do They Solve? appeared first on SuperKotlin.
