How We Test Concurrent Primitives in Kotlin Coroutines

Today we would like to share how we test concurrency primitives in Kotlin Coroutines.

Many of our users are delighted with the experience of using coroutines to write asynchronous code. This does not come as a surprise, since with coroutines we are able to write simple and straightforward code, with almost all the asynchronicity happening under the hood. For simplicity, we can think of coroutines in Kotlin as super-lightweight threads with some additional power, such as cancellability and structured concurrency. However, coroutines also make the code much safer. Traditional concurrent programming involves manipulating a shared mutable state, which is arguably error-prone. As an alternative, coroutines provide special communicating primitives, such as Channel, to perform the synchronization. Using channels and a couple of other primitives as building blocks, we can construct extremely powerful things, such as the recently introduced Flows. However, nothing is free. While the message passing approach is typically safer, the required primitives are not trivial to implement and, therefore, have to be tested as thoroughly as possible. At the same time, testing concurrent code may be as complicated as writing it.

That is why we have Lincheck – a special framework for testing concurrent data structures on the JVM. This framework’s main advantage is that it provides a simple and declarative way to write concurrent tests. Instead of describing how to perform the test, you specify what to test by declaring all the operations to examine along with the required correctness property (linearizability is the default one, but various extensions are supported as well) and restrictions (e.g. “single-consumer” for queues).

Lincheck Overview

Let’s consider a simple concurrent data structure, such as the stack algorithm presented below. In addition to the standard push(value) and pop() operations, we also implement a non-linearizable (thus, incorrect) size(), which increases and decreases the corresponding field following successful push and pop invocations.

class Stack<T> {
    private val top = atomic<Node<T>?>(null)
    private val _size = atomic(0)

    fun push(value: T): Unit = top.loop { cur ->
        val newTop = Node(cur, value)
        if (top.compareAndSet(cur, newTop)) { // try to add
            _size.incrementAndGet() // <-- INCREMENT SIZE
            return
        }
    }

    fun pop(): T? = top.loop { cur ->
        if (cur == null) return null // is stack empty?
        if (top.compareAndSet(cur, cur.next)) { // try to retrieve
            _size.decrementAndGet() // <-- DECREMENT SIZE
            return cur.value
        }
    }

    val size: Int get() = _size.value
}

class Node<T>(val next: Node<T>?, val value: T)
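The listing above relies on the kotlinx.atomicfu library for atomic { } and loop { }. For readers who want to run it without that dependency, here is an equivalent sketch (hypothetical class name) built on plain java.util.concurrent.atomic, preserving the same non-linearizable size():

```kotlin
import java.util.concurrent.atomic.AtomicInteger
import java.util.concurrent.atomic.AtomicReference

// An equivalent Treiber stack on top of plain j.u.c.atomic primitives.
// The size counter is updated *after* the CAS, so size() is still
// non-linearizable -- exactly the bug discussed in the text.
class PlainStack<T> {
    private class Node<T>(val next: Node<T>?, val value: T)

    private val top = AtomicReference<Node<T>?>(null)
    private val _size = AtomicInteger(0)

    fun push(value: T) {
        while (true) {
            val cur = top.get()
            if (top.compareAndSet(cur, Node(cur, value))) {
                _size.incrementAndGet() // not atomic with the CAS above
                return
            }
        }
    }

    fun pop(): T? {
        while (true) {
            val cur = top.get() ?: return null
            if (top.compareAndSet(cur, cur.next)) {
                _size.decrementAndGet() // not atomic with the CAS above
                return cur.value
            }
        }
    }

    val size: Int get() = _size.get()
}
```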

To write a concurrent test for this stack without any tool, you have to manually run parallel threads, invoke the stack operations, and finally check that some sequential history can explain the obtained results. We have used such manual tests in the past, and all of them contained at least a hundred lines of boilerplate code. But with Lincheck, this machinery is automated, and tests become short and informative.

To write a concurrent test with Lincheck, you need to list the data structure operations and mark them with a special @Operation annotation. The initial state is specified in the constructor (here, we create a new Stack<Int> instance). After that, we need to configure the testing modes, which can be done using special annotations on the test class – @StressCTest for stress testing and @ModelCheckingCTest for the model checking mode. Finally, we can run the analysis by invoking the LinChecker.check(..) function on the testing class.

Let’s write a test for our incorrect stack:

@StressCTest
@ModelCheckingCTest
class StackTest {
    private val s = Stack<Int>()

    @Operation fun push(value: Int) = s.push(value)
    @Operation fun pop() = s.pop()
    @Operation fun size() = s.size

    @Test fun runTest() = LinChecker.check(this::class.java)
}

There are just 12 lines of easy-to-follow code that describe the testing data structure, and that is all you need to do with Lincheck! It automatically:

  1. Generates several random concurrent scenarios.
  2. Examines each of them using either the stress or the model checking strategy, performing the number of scenario invocations specified by the user.
  3. Verifies that each of the invocation results satisfies the required correctness property (e.g. linearizability). For this step, we use a sequential specification, which is defined by the testing data structure itself by default; however, a custom sequential specification can be set.

If an invocation hangs, fails with an exception, or returns an incorrect result, the test fails with an error similar to the one below. Here we smoothly come to the main advantage of the model checking mode. While Lincheck always provides a failing scenario with the incorrect results (if any are found), the model checking mode also provides a trace to reproduce the error. In the example below, the failing execution starts in the first thread, pushes 7 onto the stack, and stops before incrementing the size. The execution switches to the second thread, which retrieves the already pushed 7 from the stack and decreases the size to -1. The following size() invocation returns -1, which is incorrect since the size cannot be negative. While the error seems obvious even without a trace, having the trace helps considerably when you are working on real-world concurrent algorithms, such as the synchronization primitives in Kotlin Coroutines.

= Invalid execution results =
| push(7): void | pop():  7  |
|               | size(): -1 |

= The following interleaving leads to the error =
| push(7)                                |                      |
|   top.READ: null                       |                      |
|       at Stack.push(Stack.kt:5)        |                      |
|   top.compareAndSet(null,Node@1): true |                      |
|       at Stack.push(Stack.kt:7)        |                      |
|   switch                               |                      |
|                                        | pop(): 7             |
|                                        | size(): -1           |
|                                        |   thread is finished |
|   _size.incrementAndGet(): 0           |                      |
|       at Stack.push(Stack.kt:8)        |                      |
|   result: void                         |                      |
|   thread is finished                   |                      |
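For this particular failure, we can verify by brute force that no sequential execution explains the results. The sketch below (a toy check, not Lincheck's actual verifier, whose implementation is far more involved) replays every sequential order that respects per-thread program order on a trivially correct stack and compares the outcomes:

```kotlin
// Thread 1 runs push(7); thread 2 runs pop() and then size().
// The observed results were pop() == 7 and size() == -1.
sealed class Op { object Push7 : Op(); object Pop : Op(); object Size : Op() }

// Replay one sequential order on a trivially correct sequential stack.
fun replay(order: List<Op>): Map<Op, Any?> {
    val stack = ArrayDeque<Int>()
    val results = mutableMapOf<Op, Any?>()
    for (op in order) when (op) {
        Op.Push7 -> { stack.addLast(7); results[op] = Unit }
        Op.Pop -> results[op] = stack.removeLastOrNull()
        Op.Size -> results[op] = stack.size
    }
    return results
}

// All sequential orders that respect per-thread program order
// (pop() must precede size() in thread 2).
val orders = listOf(
    listOf(Op.Push7, Op.Pop, Op.Size),
    listOf(Op.Pop, Op.Push7, Op.Size),
    listOf(Op.Pop, Op.Size, Op.Push7),
)

// Is there any sequential order producing the observed results?
fun observedResultsAreLinearizable(): Boolean =
    orders.any { order ->
        val r = replay(order)
        r[Op.Pop] == 7 && r[Op.Size] == -1
    }
```

None of the three orders yields both pop() == 7 and size() == -1, so the results have no sequential explanation.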

Scenario Generation
The numbers of parallel threads and of operations within them can be configured in the @<Mode>CTest annotations. Lincheck generates the specified number of scenarios by filling the threads with randomly chosen operations. Note that the push(..) operation on the stack above takes a parameter value, which specifies the element to be inserted; accordingly, the error message above shows a specific scenario with an input value for each push(..) invocation. Such parameters are also generated randomly, in a range that can likewise be configured. It is also possible to add restrictions like “single-consumer” so that the corresponding operations cannot be invoked concurrently.
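Conceptually, scenario generation can be sketched as follows (hypothetical names; Lincheck's real generator also handles operation restrictions and richer parameter types):

```kotlin
import kotlin.random.Random

// A sketch of scenario generation: fill each of `threads` threads with
// `actorsPerThread` randomly chosen operations; integer parameters are
// drawn from a configurable range.
data class Actor(val name: String, val arg: Int? = null)

fun generateScenario(
    threads: Int,
    actorsPerThread: Int,
    argRange: IntRange = 1..10,
    random: Random = Random(42),
): List<List<Actor>> = List(threads) {
    List(actorsPerThread) {
        when (random.nextInt(3)) {
            0 -> Actor("push", random.nextInt(argRange.first, argRange.last + 1))
            1 -> Actor("pop")
            else -> Actor("size")
        }
    }
}
```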

Testing Modes
Lincheck runs each generated scenario in either stress testing or model checking mode. In stress testing mode, the scenario is executed on parallel threads many times in the hope of hitting an interleaving that yields incorrect results. In model checking mode, Lincheck systematically examines many different interleavings with a bounded number of context switches. Compared to stress testing, model checking both increases test coverage and provides a failing execution trace for any incorrect behavior it finds. However, it assumes the sequentially consistent memory model, which means it ignores weak memory model effects. In particular, it cannot find a bug caused by a missed @Volatile. Therefore, in practice we use both the stress testing and model checking modes.

Minimizing Failing Scenarios
When writing a test, it makes sense to configure it so that several threads are executed, each with several operations. However, most bugs can be reproduced with fewer threads and operations, so Lincheck “minimizes” the failing scenario: it repeatedly tries to remove an operation and checks whether the test still fails, stopping once no operation can be removed without making the test pass. While this greedy approach is not theoretically optimal, we find that it works well enough in practice.
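The greedy minimization loop can be sketched like this (a simplification: the List<T> stands for a flattened scenario and the stillFails predicate stands for re-running the whole test, which is the expensive part in practice; Lincheck's real implementation works on per-thread scenarios):

```kotlin
// Greedy scenario minimization: repeatedly try to drop one operation;
// keep the smaller scenario whenever the test still fails. Stop when no
// single operation can be removed without making the test pass.
fun <T> minimize(scenario: List<T>, stillFails: (List<T>) -> Boolean): List<T> {
    var current = scenario
    var progress = true
    while (progress) {
        progress = false
        for (i in current.indices) {
            val candidate = current.filterIndexed { j, _ -> j != i }
            if (stillFails(candidate)) {
                current = candidate
                progress = true
                break // restart from the shrunk scenario
            }
        }
    }
    return current
}
```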

Model Checking Details

Let’s dig deeper into model checking, one of Lincheck’s most impressive features, and see how everything works under the hood. If you want to try Lincheck before going further, just add the org.jetbrains.kotlinx:lincheck:2.12 dependency to your Gradle or Maven project!

Before we started working on model checking, we already had Lincheck with the stress testing mode. While it provided a nice way to write tests and significantly improved their quality, we were still spending countless hours trying to understand how to reproduce a discovered bug. All this time spent motivated us to find a way to automate the process, and that is how we came to adopt the bounded model checking approach in Lincheck. The idea was simple – once we implemented model checking, Lincheck would be able to provide an interleaving that reproduced the found bug automatically, so we would no longer need to spend hours trying to understand it. One of the main challenges was finding a way to make everything work without having to change the user’s code.

In our experience, most bugs in complicated concurrent algorithms can be reproduced using the sequentially consistent memory model. At the same time, model checking approaches for weak memory models are very complicated, so we decided to use bounded model checking under the sequentially consistent memory model. Our approach was originally inspired by the CHESS framework for C#, which studies all possible schedules with a bounded number of context switches by fully controlling the execution and inserting context switches at different locations in threads. In contrast to CHESS, Lincheck bounds the number of schedules to be studied instead of the number of context switches. This way, the test time is predictable independently of the scenario size and algorithm complexity.

In short, Lincheck starts by studying all interleavings with one context switch, but does this evenly, trying to explore a variety of interleavings simultaneously. This way, we increase the total coverage if the number of available invocations is not sufficient to cover all the interleavings. Once all the interleavings with one context switch have been explored, Lincheck starts examining interleavings with two context switches, and repeats the process, increasing the number of context switches each time, until the invocation budget is exhausted or all possible interleavings are covered. This strategy both increases the testing coverage and allows Lincheck to find an incorrect schedule with the lowest possible number of context switches, which is a significant improvement for bug investigation.

Switch Points
Lincheck inserts switch points into the testing code in order to control the execution. These points identify where a context switch can be performed. The interesting switch point locations are shared memory accesses, such as field and array element reads and updates on the JVM. These accesses can be performed either by the corresponding bytecode instructions or by special methods, like the ones in the AtomicFieldUpdater or Unsafe classes. To insert a switch point, we transform the testing code with the ASM framework, adding our internal function invocations before each shared memory access. The transformation is performed on the fly using a custom class loader, which means that, technically, the analysis runs on a transformed copy of the testing code.
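Lincheck injects these calls via bytecode transformation, but the effect of a switch point can be simulated by hand. The sketch below (hypothetical names; plain java.util.concurrent.atomic instead of the article's atomicfu) adds an explicit switchPoint hook right after the successful CAS in push; running the second "thread" inside that hook reproduces the size() == -1 result deterministically, with no real parallelism:

```kotlin
import java.util.concurrent.atomic.AtomicInteger
import java.util.concurrent.atomic.AtomicReference

// `switchPoint` stands in for the hook that Lincheck injects before and
// after shared memory accesses: the scheduler may switch threads there.
class HookedStack(private val switchPoint: (HookedStack) -> Unit = {}) {
    private class Node(val next: Node?, val value: Int)

    private val top = AtomicReference<Node?>(null)
    private val _size = AtomicInteger(0)

    fun push(value: Int) {
        while (true) {
            val cur = top.get()
            if (top.compareAndSet(cur, Node(cur, value))) {
                switchPoint(this)       // <-- a context switch may happen here
                _size.incrementAndGet()
                return
            }
        }
    }

    fun pop(): Int? {
        while (true) {
            val cur = top.get() ?: return null
            if (top.compareAndSet(cur, cur.next)) {
                _size.decrementAndGet()
                return cur.value
            }
        }
    }

    val size: Int get() = _size.get()
}

// Run thread 2 (pop + size) exactly at the switch point inside push(7):
// pop decrements the counter before push has incremented it.
fun reproduceBug(): Int {
    var observedSize = 0
    val stack = HookedStack { s ->
        s.pop()               // second thread runs at the switch point
        observedSize = s.size // observes -1
    }
    stack.push(7)
    return observedSize
}
```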

Interleaving Tree
To explore different possible schedules, Lincheck constructs an interleaving tree whose edges represent choices that can be performed by the scheduler. The figure below presents a partially built interleaving tree with one context switch for the scenario from the overview. First, Lincheck decides where the execution should be started. From there, several switch points are available to be examined. In the figure, only the first switch point, related to top.READ, is fully explored.

To explore interleavings evenly, Lincheck tracks the percentage of explored interleavings for each subtree, using weighted randomness to make its choices about where to go. The weights are proportional to the ratios of unobserved interleavings. In our example above, Lincheck is more likely to start the next interleaving from the second thread. Since the exact number of interleavings is unknown, all children have equal weights initially. After all the current tree’s interleavings have been explored, Lincheck increases the number of context switches, thus increasing the tree depth.
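The weighted choice described above can be sketched as follows (a simplification; in Lincheck the bookkeeping lives in the interleaving tree nodes, and the weights are updated as subtrees are explored):

```kotlin
import kotlin.random.Random

// Pick the next child subtree to explore: each child's weight is
// proportional to its fraction of still-unexplored interleavings,
// and the choice is drawn with those probabilities.
fun chooseChild(unexploredFraction: List<Double>, random: Random): Int {
    val total = unexploredFraction.sum()
    require(total > 0.0) { "all subtrees are fully explored" }
    var r = random.nextDouble() * total
    for (i in unexploredFraction.indices) {
        r -= unexploredFraction[i]
        if (r <= 0.0) return i
    }
    return unexploredFraction.lastIndex // numerical safety net
}
```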

Execution Trace
The key advantage of the model checking mode is that it provides an execution trace that reproduces the error. To increase readability, Lincheck captures the arguments and results of each shared variable access (such as reads, writes, and CAS-es), along with information about the accessed fields. For example, in the trace in the listing from the overview, the first event is a read of null from the top field, and the source code location is provided as well. At the same time, if a function with multiple shared variable accesses is executed without a context switch in the middle, Lincheck shows this function invocation with the corresponding arguments and result instead of the events inside it. This also makes the trace easier to follow, as can be seen in the second thread of the trace.

To sum up

We hope you enjoyed reading this post and found Lincheck interesting and useful. In Kotlin, we successfully use Lincheck both to test existing concurrent data structures in Kotlin Coroutines and to develop new algorithms more productively and enjoyably. As a final note, we would like to say that Lincheck tests should be treated like regular unit tests – they check your data structure against the specified correctness contract, nothing more and nothing less. So even if you have Lincheck tests, do not hesitate to add integration tests for the code where your data structure is used in the real world.

Want to try Lincheck? Just add the org.jetbrains.kotlinx:lincheck:2.12 dependency to your Gradle or Maven project and enjoy!

Lincheck on GitHub

The post How We Test Concurrent Primitives in Kotlin Coroutines first appeared on JetBrains Blog.


1.4.30 Is Released With a New JVM Backend and Language and Multiplatform Features

Kotlin 1.4.30 is now available. This is the last 1.4 incremental release, so it comes with lots of new experimental features that we plan to make stable in 1.5.0. We would really appreciate it if you would try them out and share your feedback with us.

What’s changed in this release:

Language features and compiler

We’ve decided to cover two of the significant updates in separate blog posts so that we can provide more details on these features.

Compiler

The new JVM backend reached Beta, and it now produces stable binaries. This means you can safely use it in your projects.

More details about the update, ways to enable the new JVM IR backend, and how you can help stabilize it can be found here.

New language features preview

Among the new language features we plan to release in Kotlin 1.5.0 are inline value classes, JVM records, and sealed interfaces. You can read more details about them in this post, and here’s a brief overview:

Inline classes. Inline classes were previously a separate language feature, but they have now become a specific JVM optimization for a value class with one parameter. Value classes represent a more general concept and will support different optimizations in the future: inline classes now, and Valhalla primitive classes once project Valhalla becomes available.

Java records. Another upcoming improvement in the JVM ecosystem is Java records. They are analogous to Kotlin data classes, being mainly used as simple holders of data. Interoperability with Java always has been and always will be a priority for Kotlin. Kotlin code “understands” the new Java records and sees them as classes with Kotlin properties.

Sealed interfaces. Interfaces can be declared sealed as well as classes. The sealed modifier works on interfaces the same way: all implementations of a sealed interface are known at compile time. Once a module with a sealed interface is compiled, no new implementations can appear.

So now we kindly ask you to give these language features a try and share your feedback with us. We would like to know what expectations you have for them, the use cases where you want to apply these features, and any thoughts or ideas you have about them.

You can find a detailed description of the new language features and instructions on how to try them out in this blog post.

Build tools

Configuration cache support in Kotlin Gradle Plugin

As of Kotlin 1.4.30, the Kotlin Gradle plugin is fully compatible with the Gradle configuration cache, which speeds up the build process. For example, Square, which uses Kotlin for Android, has a mixed Android, Java, and Kotlin build of 1800 modules. Its team reports the following numbers:

  • The very first build took 16 minutes 30 seconds.
  • The second one was much shorter; it took 5 minutes 45 seconds.

More specifically, for Square, the configuration cache saves 1 minute 10 seconds of configuration and task graph creation per build.

When you run a build, Gradle executes the configuration phase and calculates the task graph. With the configuration cache enabled, Gradle caches the result and reuses it in subsequent builds, saving you time.

To start using this feature, use the Gradle command or set up your IntelliJ-based IDE. And if anything doesn’t work as expected, please report it via YouTrack.

Please note that this feature is still in Alpha for multiplatform.

Kotlin/Native

Compilation time improvements

We’ve improved compilation time in 1.4.30. The time taken to rebuild the KMM Networking and data storage sample framework has been reduced from 9.5 seconds (in 1.4.10) to 4.5 seconds (in 1.4.30).

We plan to continue optimizing the compiler; you can follow the progress in the issue in YouTrack.

64-bit watchOS simulator support

With the Kotlin 1.3.60 release, we introduced support for building Kotlin apps for Apple Watch simulators. Last November, the Apple Watch simulator architecture was changed from i386 to x86_64, creating issues for developers relying on this feature. The new Kotlin/Native watchosX64 target can be used to run the watchOS simulator on the 64-bit architecture, and it works on watchOS starting from version 7.0.

Xcode 12.2 SDK support

Kotlin/Native now supports Xcode 12.2. The macOS frameworks that have been added to the Xcode 12.2 release can be used with this Kotlin update. For example, the MLCompute framework is now available for users developing applications for macOS.

Kotlin/JS

Prototype lazy initialization for top-level properties

We’ve made lazy initialization of top-level properties available as Experimental. You can read more about this in What’s new.

Standard library

Locale-agnostic API for upper/lowercasing text

This release introduces an experimental locale-agnostic API for changing the case of strings and characters. The current toLowerCase(), toUpperCase(), capitalize(), and decapitalize() API functions are locale-sensitive, which is not obvious and can be inconvenient: different platform locale settings affect the code’s behavior. For example, in the Turkish locale, converting the string “kotlin” with toUpperCase results in “KOTLİN”, not “KOTLIN”. The new functions are locale-agnostic by default, so the conversion works as expected regardless of the platform locale.
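The Turkish-locale pitfall is easy to reproduce with the locale-sensitive JDK case mapping (here via Kotlin's uppercase(Locale) overload, contrasted with the root locale):

```kotlin
import java.util.Locale

// In the Turkish locale, "i" uppercases to the dotted capital "İ" (U+0130),
// so "kotlin" becomes "KOTLİN" rather than "KOTLIN".
fun upperInTurkish(s: String): String = s.uppercase(Locale("tr", "TR"))

// The root locale gives the locale-agnostic result.
fun upperRootLocale(s: String): String = s.uppercase(Locale.ROOT)
```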

You can find the full list of changes to the text processing functions in the KEEP. Remember that this API is experimental, and please share your feedback with us in YouTrack.

Unambiguous API for Char conversion

The current conversion functions from Char to numbers, which return its UTF-16 code expressed in different numeric types, are often confused with the similar String-to-Int conversion, which returns the numeric value of the string.

To avoid this confusion, we have decided to separate Char conversions into the following two sets of clearly named functions: functions to get the integer code of a Char and to construct a Char from a code, and functions to convert a Char to the numeric value of the digit it represents.

This feature is also Experimental, but we plan to make it stable for the 1.5.0 release. See more details in KEEP.
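For illustration, here is how the two directions look with the functions that were eventually stabilized in Kotlin 1.5:

```kotlin
// Char.code returns the UTF-16 code of the character, while digitToInt()
// returns the numeric value of the digit it represents -- two different
// questions that the old toInt() conflated.
fun charConversions(): List<Any> = listOf(
    '5'.code,         // UTF-16 code of '5'
    '5'.digitToInt(), // numeric value of the digit
    7.digitToChar(),  // the digit 7 as a Char
)
```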

Find more about all the updates for 1.4.30 in What’s new and New JVM Backend and Language Features blog posts.

How to update

IntelliJ IDEA will suggest updating the Kotlin plugin to 1.4.30 automatically, or you can update it manually by following these instructions. The Kotlin plugin for Android Studio Arctic Fox will be released later.

If you want to work on existing projects created with previous versions of Kotlin, use the 1.4.30 Kotlin version in your project configuration. For more information, see the docs for Gradle and Maven.

You can download the command-line compiler from the GitHub release page.

The release details and the list of compatible libraries are available here.

If you run into any problems with the new release, you can find help on Slack (get an invite here) and report issues in our YouTrack.

Before updating your projects to the latest version of Kotlin, you can try the new language and standard library features online at play.kotl.in.

External contributors

We want to thank all of our external contributors whose pull requests were included in this release:

Jinseong Jeon
Toshiaki Kameyama
pyos
Mads Ager
Steven Schäfer
Mark Punzalan
Ivan Gavrilovic
Kristoffer Andersen
Bingran
Juan Chen
zhelenskiy
Kris
Hung Nguyen
Victor Turansky
AJ
Louis CAD
Kevin Bierhoff
Hollow Man
Francesco Vasco
Uzi Landsmann
Dominik Wuttke
Derek Bodin
Ciaran Treanor
rbares
Martin Petrov
Yuya Urano
KotlinIsland
Jiaxiang Chen
Jake Wharton
Sam Wang
MikeKulasinski-visa
Matthew Gharrity
Mikhail Likholetov


New Language Features Preview in Kotlin 1.4.30

We’re planning to add new language features in Kotlin 1.5, and you can already try them out in Kotlin 1.4.30.

To try these new features, you need to specify the 1.5 language version.

The new release cadence means that Kotlin 1.5 is going to be released in a few months, but new features are already available for preview in 1.4.30. Your early feedback is crucial to us, so please give these new features a try now!

Stabilization of inline value classes

Inline classes have been available in Alpha since Kotlin 1.3, and in 1.4.30 they are promoted to Beta.

Kotlin 1.5 stabilizes the concept of inline classes but makes it a part of a more general feature, value classes, which we will describe later in this post.

We’ll begin with a refresher on how inline classes work. If you are already familiar with inline classes, you can skip this section and go directly to the new changes.

As a quick reminder, an inline class eliminates a wrapper around a value:

inline class Color(val rgb: Int)

An inline class can be a wrapper both for a primitive type and for any reference type, like String.

The compiler replaces inline class instances (in our example, the Color instance) with the underlying type (Int) in the bytecode, when possible:

fun changeBackground(color: Color) 
val blue = Color(255)
changeBackground(blue)

Under the hood, the compiler generates the changeBackground function with a mangled name taking Int as a parameter, and it passes the 255 constant directly without creating a wrapper at the call site:

fun changeBackground-euwHqFQ(color: Int) 
changeBackground-euwHqFQ(255) // no extra object is allocated! 

The name is mangled to allow the seamless overload of functions taking instances of different inline classes and to prevent accidental invocations from the Java code that could violate the internal constraints of an inline class. Read below to find out how to make it usable from Java.

The wrapper is not always eliminated in the bytecode. This happens only when possible, and it works very similarly to built-in primitive types. When you define a variable of the Color type or pass it directly into a function, it gets replaced with the underlying value:

val color = Color(0)        // primitive
changeBackground(color)     // primitive

In this example, the color variable has the type Color during compilation, but it’s replaced with Int in the bytecode.

If you store it in a collection or pass it to a generic function, however, it gets boxed into a regular object of the Color type:

genericFunc(color)         // boxed
val list = listOf(color)   // boxed
val first = list.first()   // unboxed back to primitive

Boxing and unboxing is done automatically by the compiler. You don’t need to do anything about it, but it’s useful to understand the internals.

Changing JVM name for Java calls

Starting from 1.4.30, you can change the JVM name of a function taking an inline class as a parameter to make it usable from Java. By default, such names are mangled to prevent accidental usages from Java or conflicting overloads (like changeBackground-euwHqFQ in the example above).

If you annotate a function with @JvmName, it changes the name of this function in the bytecode, making it possible to call it from Java and pass a value directly:

// Kotlin declarations
inline class Timeout(val millis: Long)

val Int.millis get() = Timeout(this.toLong())
val Int.seconds get() = Timeout(this * 1000L)

@JvmName("greetAfterTimeoutMillis")
fun greetAfterTimeout(timeout: Timeout)

// Kotlin usage
greetAfterTimeout(2.seconds)

// Java usage
greetAfterTimeoutMillis(2000);

As always with a function annotated with @JvmName, from Kotlin you call it by its Kotlin name. The Kotlin usage is type-safe, since you can only pass a value of the Timeout type as an argument, and the units are obvious from the usage.

From Java, you can pass a long value directly. It’s no longer type-safe, and that’s why it doesn’t work by default. If you see greetAfterTimeout(2) in the code, it’s not immediately obvious whether it’s 2 seconds, 2 milliseconds, or 2 years.

By providing the annotation you explicitly emphasize that you intend this function to be called from Java. A descriptive name helps avoid confusion: adding the “Millis” suffix to the JVM name makes the units clear for Java users.

Init blocks

Another improvement for inline classes in 1.4.30 is that you can now define initialization logic in the init block:

inline class Name(val s: String) {
    init {
        require(s.isNotEmpty())
    }
}

This was previously forbidden.

You can read more details about inline classes in the corresponding KEEP, in the documentation, and in the discussion under this issue.

Inline value classes

Kotlin 1.5 stabilizes the concept of inline classes and makes it a part of a more general feature: value classes.

Until now, “inline” classes constituted a separate language feature, but they are now becoming a specific JVM optimization for a value class with one parameter. Value classes represent a more general concept and will support different optimizations: inline classes now, and Valhalla primitive classes in the future when project Valhalla becomes available (more about this below).

The only thing that changes for you at the moment is syntax. Since an inline class is an optimized value class, you have to declare it differently than you used to:

@JvmInline
value class Color(val rgb: Int)

You define a value class with one constructor parameter and annotate it with @JvmInline. We expect everyone to use this new syntax starting from Kotlin 1.5. The old inline class syntax will continue to work for some time: it will be deprecated with a warning in 1.5, together with an option to migrate all your declarations automatically, and it will later be deprecated with an error.

Value classes

A value class represents an immutable entity with data. At the moment, a value class can contain only one property, to support the use case of “old” inline classes.

In future Kotlin versions with full support for this feature, it will be possible to define value classes with many properties. All the values should be read-only vals:

value class Point(val x: Int, val y: Int)

Value classes have no identity: they are completely defined by the data stored and === identity checks aren’t allowed for them. The == equality check automatically compares the underlying data.

This “identityless” quality of value classes allows significant future optimizations: project Valhalla’s arrival to the JVM will allow value classes to be implemented as JVM primitive classes under the hood.

The immutability constraint, and therefore the possibility of Valhalla optimizations, makes value classes different from data classes.

Future Valhalla optimization

Project Valhalla introduces a new concept to Java and the JVM: primitive classes.

The main goal of primitive classes is to combine performant primitives with the object-oriented benefits of regular JVM classes. Primitive classes are data holders whose instances can be stored in variables and on the computation stack, and operated on directly, without headers and pointers. In this regard, they are similar to primitive values like int and long (in Kotlin, you don’t work with primitive types directly, but the compiler generates them under the hood).

An important advantage of primitive classes is that they allow a flat and dense layout of objects in memory. Currently, Array<Point> is an array of references. With Valhalla support, when Point is defined as a primitive class (in Java terminology) or as a value class with the underlying optimization (in Kotlin terminology), the JVM can store an Array<Point> in a “flat” layout, as an array of many xs and ys directly, rather than as an array of references.

We’re really looking forward to the upcoming JVM changes, and we want Kotlin to benefit from them. At the same time, we don’t want to force our community to depend on new JVM versions to use value classes, so we are going to support them on earlier JVM versions as well. When code is compiled for a JVM with Valhalla support, the latest JVM optimizations will work for value classes.

Mutating methods

There’s much more to say about the functionality of value classes. Since value classes represent “immutable” data, mutating methods, like those in Swift, are possible for them. With a mutating method, a member function or property setter effectively returns a new instance rather than updating the existing one, and the main benefit is that you use it with a familiar syntax. This still needs to be prototyped in the language.

More details

The @JvmInline annotation is JVM-specific. On other backends, value classes can be implemented differently, for instance, as Swift structs in Kotlin/Native.

You can read the details about value classes in the Design Note for Kotlin value classes, or watch an extract from Roman Elizarov’s “A look into the future” talk.

Support for JVM records

Another upcoming improvement in the JVM ecosystem is Java records. They are analogous to Kotlin data classes and are mainly simple holders of data.

Java records don’t follow the JavaBeans convention, and they have ‘x()’ and ‘y()’ methods instead of the familiar ‘getX()’ and ‘getY()’.

Interoperability with Java has always been and remains a priority for Kotlin. Thus, Kotlin code “understands” new Java records and sees them as classes with Kotlin properties. This works just like it does for regular Java classes that follow the JavaBeans convention:

// Java
record Point(int x, int y) { }
// Kotlin
fun foo(point: Point) {
    point.x // seen as property
    point.x() // also works
}

Mainly for interoperability reasons, you can annotate your

data

class with

@JvmRecord

to have new JVM record methods generated:

@JvmRecord
data class Point(val x: Int, val y: Int)

The

@JvmRecord

annotation makes the compiler generate

x()

and

y()

methods instead of the standard

getX()

and

getY()

methods. We assume that you only need to use this annotation to preserve the API of the class when converting it from Java to Kotlin. In all the other use cases, Kotlin’s familiar

data

classes can be used instead without issue.

This annotation is only available if you compile Kotlin code to JVM version 15 or higher. You can read more about this feature in the corresponding KEEP or in the documentation, as well as in the discussion in this issue.

Sealed interfaces and sealed classes improvements

When you make a class sealed, it restricts the hierarchy to defined subclasses, which allows exhaustive checks in

when

branches. In Kotlin 1.4, the sealed class hierarchy comes with two constraints. First, the top of the hierarchy must be a class; it can’t be a sealed interface. Second, all the subclasses must be located in the same file.

Kotlin 1.5 removes both constraints: you can now make an interface sealed. The subclasses (of both sealed classes and sealed interfaces) must still be located in the same compilation unit and in the same package as their superclass, but they can now be located in different files.

sealed interface Expr
data class Const(val number: Double) : Expr
data class Sum(val e1: Expr, val e2: Expr) : Expr
object NotANumber : Expr

fun eval(expr: Expr): Double = when(expr) {
    is Const -> expr.number
    is Sum -> eval(expr.e1) + eval(expr.e2)
    NotANumber -> Double.NaN
}

Sealed classes, and now interfaces, are useful for defining abstract data type (ADT) hierarchies.

Another important use case that can now be nicely addressed with sealed interfaces is closing an interface for inheritance outside a library. Defining an interface as sealed restricts its implementation to the same compilation unit and the same package, which, in the case of a library, makes it impossible to implement outside of that library.

For example, the

Job

interface from the

kotlinx.coroutines

package is only intended to be implemented inside the

kotlinx.coroutines

library. Making it

sealed

makes this intention explicit:

package kotlinx.coroutines
sealed interface Job { ... }

As a user of the library, you are no longer allowed to define your own subclass of

Job

. This was always “implied”, but with sealed interfaces, the compiler can formally forbid that.
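To make the restriction concrete, here is a sketch of both sides of a hypothetical library boundary (JobImpl and cancel are illustrative names):

```kotlin
// Inside the library, in package kotlinx.coroutines:
sealed interface Job {
    fun cancel()
}

// Allowed: the implementation lives in the same package and compilation unit.
internal class JobImpl : Job {
    override fun cancel() { /* ... */ }
}

// In client code outside the library, this no longer compiles:
// class MyJob : Job { ... } // error: inheritance from a sealed interface
//                           // declared in another package/module is forbidden
```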

Using JVM support in the future

Preview support for sealed classes was introduced in Java 15 and in the JVM. In the future, we’re going to use the native JVM support for sealed classes when you compile Kotlin code to the latest JVM (most likely JVM 17 or later, once this feature becomes stable).

In Java, you explicitly list all the subclasses of the given sealed class or interface:

// Java
public sealed interface Expression
    permits Const, Sum, NotANumber { ... }

This information is stored in the class file using the new

PermittedSubclasses

attribute. The JVM recognizes sealed classes at runtime and prevents their extension by unauthorized subclasses.

In the future, when you compile Kotlin to the latest JVM, this new JVM support for sealed classes will be used. Under the hood, the compiler will generate the permitted subclasses list in the bytecode, so that the JVM itself can perform the additional runtime checks.

// for JVM 17 or later
Expr::class.java.permittedSubclasses // [Const, Sum, NotANumber]

In Kotlin, you don’t need to specify the subclasses list! The compiler will generate the list based on the declared subclasses in the same package.

The ability to explicitly specify the subclasses for a super class or interface might be added later as an optional specification. At the moment we suspect it won’t be necessary, but we’ll be happy to hear about your use cases, and whether you need this functionality!

Note that for older JVM versions it’s theoretically possible to define a Java subclass to the Kotlin sealed interface, but don’t do it! Since JVM support for permitted subclasses is not yet available, this constraint is enforced only by the Kotlin compiler. We’ll add IDE warnings to prevent doing this accidentally. In the future, the new mechanism will be used for the latest JVM versions to ensure there are no “unauthorized” subclasses from Java.

You can read more about sealed interfaces and the loosened sealed classes restrictions in the corresponding KEEP or in the documentation, and see discussion in this issue.

How to try the new features

You need to use Kotlin 1.4.30. Specify language version 1.5 to enable the new features:

compileKotlin {
    kotlinOptions {
        languageVersion = "1.5"
        apiVersion = "1.5"
    }
}

To try JVM records, you additionally need to use jvmTarget 15 and enable JVM preview features: add the compiler options

-language-version 1.5

and

-Xjvm-enable-preview

.

Pre-release notes

Note that support for the new features is experimental and the 1.5 language version support is in pre-release status. Setting the language version to 1.5 in the Kotlin 1.4.30 compiler is equivalent to trying the 1.5-M0 preview. The backward compatibility guarantees do not cover pre-release versions: the features and the API may change in subsequent releases. When we reach a final Kotlin 1.5-RC, all binaries produced by pre-release versions will be rejected by the compiler, and you will be required to recompile everything that was compiled by 1.5‑Mx.

Share your feedback

Please try the new features described in this post and share your feedback! You can find further details in KEEPs and take part in the discussions in YouTrack, and you can report new issues if something doesn’t work for you. Please share your findings regarding how well the new features address use cases in your projects!

Further reading and discussion:


The JVM Backend Is in Beta: Let’s Make It Stable Together

We’ll make the new backend Stable soon, and we need each of you to adopt it. Let’s see how to do it.

We have been working to implement a new JVM IR backend as part of our ongoing project to rewrite the whole compiler. This new compiler will boost performance both for Kotlin users and for the Kotlin team itself by providing a versatile infrastructure that makes it easy to add new language features.

Our work on the JVM IR backend is nearly complete, and we’ll be making it Stable soon. Before we can do that, though, we need you to use it. In Kotlin 1.4.30, we’re making the new backend produce stable binaries, which means you will be able to safely use it in your projects. Read on to learn about the changes this new backend brings, as well as how to contribute to the process of finalizing this part of the compiler.

What changes with the new backend:

  • We’ve fixed a number of bugs that were present in the old backend.
  • The development of new language features will be much faster.
  • We will add all future performance improvements to the new JVM backend.
  • The new Jetpack Compose will only work with the new backend.

Another point in favor of starting to use the new JVM IR backend now is that it will become the default in Kotlin 1.5.0. Before we make it the default, we want to make sure we fix as many bugs as we can, and by adopting the new backend early you will help ensure that the migration is as smooth as possible.

To start using the new JVM IR backend

  1. Update the Kotlin dependency to 1.4.30 in your project.
  2. In the build configuration file, add the following lines to the target platform block of your project/module to turn the new compiler on.
    For Gradle add the following:

    • In Groovy
      compileKotlin {
          kotlinOptions.useIR = true
      }
      
    • In Kotlin
          import org.jetbrains.kotlin.gradle.tasks.KotlinCompile
          // ...
          val compileKotlin: KotlinCompile by tasks
          compileKotlin.kotlinOptions.useIR = true
      

    And for Maven add the following:

     
    <configuration>
        <args>
            <arg>-Xuse-ir</arg>
        </args>
    </configuration>
    
  3. Please make a clean build and run tests after enabling the new backend to verify that your project compiles successfully.

You shouldn’t notice any difference, but if you do, please report it in YouTrack or send us a message in this Slack channel (get an invite here). When you do, please attach a list of steps to reproduce the problem and a code sample if possible.

You can switch back to the old backend anytime simply by removing the line from step two and rebuilding the project.


Kotlin Plugin Released With IDEA 2020.3

We’ve changed the release cycle of the Kotlin plugin, so that all major updates are now synced with IntelliJ IDEA releases.

In this release:

  • Inline refactoring is now cross-language: you can apply it to any Kotlin elements defined in Java and the conversion will be performed automatically.
  • Now you can search and replace parts of your code based on its structure.
  • Building performant and beautiful user interfaces is now easy with the new experimental Jetpack Compose templates.

New infrastructure and release cycle

We now ship new versions of the Kotlin plugin with every release of the IntelliJ Platform and every new version of Kotlin. IDE support for the latest version of the language will be available for the latest two versions of IntelliJ IDEA and for the latest version of Android Studio.

We’ve made this change to minimize the time it takes us to apply platform changes, and to make sure that the IntelliJ team gets the latest Kotlin IDE fixes quickly and with enough time to test them.

Inline refactoring

The Kotlin plugin now supports cross-language conversion. Starting with version 2020.3 of the Kotlin plugin, you can use inline refactoring actions for Kotlin elements defined in Java.

You can apply them via Refactor / Inline… or ⌥⌘N on Mac or Ctrl+Alt+N on Windows and Linux.

We’ve improved the inlining of lambda expressions. You no longer have to rewrite your code after you inline it; the IDE now analyzes the lambda syntax more thoroughly and formats the lambdas correctly.

Structural Search and Replace for Kotlin

Structural search and replace (SSR) actions are now available for Kotlin. Now you can find and replace code patterns, whilst taking the syntax and semantics of the source code into account.

To use the feature, go to Edit | Find | Search Structurally…. You can write your own search template or choose

Existing Templates…

by clicking the tools icon in the top right corner of the window.

You can add filters for variables to narrow down your search, for example, the

Type

filter:

Full support of EditorConfig

Starting from 2020.3, the Kotlin plugin fully supports storing code formatting settings in .editorconfig files.

Desktop and Multiplatform templates for JetPack Compose for Desktop

Jetpack Compose is a modern UI framework for Kotlin that makes it easy and enjoyable to build performant and beautiful user interfaces. The new experimental Jetpack Compose for Desktop templates are now available in the Kotlin Project Wizard. You can create a project using the Desktop or Multiplatform template. The Desktop template targets the desktop JVM platform, while the Multiplatform template targets both the desktop JVM platform and Android, with shared code in common modules.

You can read more about the Jetpack Compose features in this blog post, look through the examples of Compose applications, and try them out in the latest version of the Kotlin plugin.

Learn about the Kotlin Plugin updates in What’s new.


We’ve added a lot of new IDE features that are all designed for greater productivity and to make working with the code more fun.

  • Now when you run an application in debug mode, you get clickable inline hints that you can expand to see all the fields that belong to the variable. Moreover, you can change the variable values inside the drop-down list.
  • Work together on your code with your colleagues. In IntelliJ IDEA 2020.3 you can use Code With Me, a tool for collaborative development and remote pair programming.
  • In IntelliJ IDEA 2020.3, we made it easier to start analyzing snapshots from the updated Profiler tool window. The Profiler allows you to view and study snapshots to track down performance and memory issues. You can open a snapshot file in the Recent Snapshots area and analyze it from there.
  • Evaluate math expressions and find Git branches and commits by their hashes in Search Everywhere.
  • Split the editor with drag and drop tabs.

These and other updates are described in detail in What’s new for IDEA 2020.3

Don’t forget to share your feedback!

You can do that by


Revamped Kotlin Documentation – Give It a Try

We’re revamping our Kotlin documentation to bring you more helpful features and improve your user experience. To name just a few advantages, the new Kotlin documentation:

  • Is mobile friendly.
  • Has a completely new look and improved structure.
  • Provides easy navigation on each page.
  • Lets you share feedback on every page.
  • Lets you copy code with a single click.

More features such as dark theme support are coming soon.

Kotlin revamped documentation

Before going to production and deprecating the current version of the documentation, we want to ensure that we haven’t missed any critical issues. We’d be extremely grateful if you would look through the new revamped documentation and share your feedback with us.

View new documentation 👀

When you view the Kotlin documentation you’ll still see it in the old format. To view the documentation in the new revamped format, click this link. This will add a cookie to your browser which will enable you to see the documentation in the new format.

New Kotlin documentation

If you want to check anything in the old version of the documentation, open the usual link in another browser or use an incognito window in your current browser.

If you find that there are problems with the new documentation and you would like to revert to the old format, please let us know about these issues and click this link. This link will remove the cookie and you will only see the documentation in the old format.

Share your feedback 🗣

Your feedback is very important for us. You can:

  • Add comments to this blog post.
  • Share feedback in the #docs-revamped channel in our Kotlin public Slack (get an invite).
  • Report an issue to our issue tracker.
  • Email us at doc-feedback@kotlinlang.org.
  • Share your feedback with us at the bottom of a specific documentation page by answering No to the question Was this page helpful? and filling in the feedback form.

Widget for providing feedback

What’s next? 👣

We will collect your feedback over the next two weeks and analyze it. If there are critical issues that many of you point out, we will do our best to address them before going to production and deprecating the existing documentation.

We will also communicate our plan to address all the feedback that we receive from you. We can’t promise that we will be able to do everything at once, but we will strive to improve the Kotlin documentation and to bring you the best possible user experience! Your feedback will help us achieve this.

Check the #docs-revamped channel in our public Slack (get an invite) for more information and updates.


Try out the 🆕 documentation now!


Kotlin 1.4.20 Released

Kotlin 1.4.20 is here with new experimental features for you to try. Being open to community feedback is one of the Kotlin team’s basic principles, and we need your thoughts about the prototypes of the new features. Give them a try and share your feedback on Slack (get an invite here) or YouTrack.

Kotlin 1.4.20

Here are some of the key highlights:

  • Support for new JVM features, like string concatenation via invokedynamic.
  • Improved performance and exception handling for KMM projects.
  • Extensions for JDK Path: Path("dir") / "file.txt".

We are also shipping numerous fixes and improvements for existing features, including those added in 1.4.0. So if you encountered problems with any of those features, now’s a good time to give them another try.

Read on to learn more about the features of Kotlin 1.4.20. You can also find a brief overview of the release on the What’s new in Kotlin 1.4.20 page in Kotlin docs. The complete list of changes is available in the change log.

As always, we’d like to thank our external contributors who helped us with this release.

Now let’s dive into the details!

Kotlin/JVM

On the JVM, we’ve added the new JVM 15 target and mostly focused on improving existing functionality and performance, as well as fixing bugs.

invokedynamic string concatenation

Since Java 9, string concatenation on the JVM has been done via dynamic method invocation (the

invokedynamic

instruction in the bytecode). This works faster and consumes less memory than the previous implementation, and it leaves space for future optimizations without requiring bytecode changes.

We have started to implement this mechanism in Kotlin for better performance, and it can now compile string concatenations into dynamic invocations on JVM 9+ targets.

Currently, this feature is experimental and covers the following cases:

  • String.plus

    in the operator (

    a + b

    ), explicit (

    a.plus(b)

    ), and reference (

    (a::plus)(b)

    ) forms.

  • toString

    on inline and data classes.

  • String templates, except for those with a single non-constant argument (see KT-42457).

To enable

invokedynamic

string concatenation, add the

-Xstring-concat

compiler option. The supported values are described on the What’s new in Kotlin 1.4.20 page in the Kotlin docs.

Kotlin/JS

Kotlin/JS continues to evolve at a rapid pace, and this release brings a variety of improvements, including new templates for its project wizard, improved DSL for better control over project configuration, and more. The new IR compiler has also received a brand-new way to compile projects ignoring errors in their code.

Gradle DSL changes

The Kotlin/JS Gradle DSL has received a number of updates that simplify project setup and customization, including webpack configuration adjustments, modifications to the auto-generated

package.json

file, and improved control over transitive dependencies.

Single point for webpack configuration

Kotlin 1.4.20 introduces a new configuration block for the

browser

target called

commonWebpackConfig

. Inside it, you can adjust common settings from a single point, instead of duplicating configurations for

webpackTask

,

runTask

, and

testTask

.

To enable CSS support by default for all three tasks, you just need to include the following snippet in the

build.gradle(.kts)

of your project:

kotlin {
	browser {
		commonWebpackConfig {
			cssSupport.enabled = true
		}
		binaries.executable()
	}
}

package.json customization from Gradle

The

package.json

file generally defines how a JavaScript project should behave, identifying the scripts that are available to run, dependencies, and more. It is automatically generated for Kotlin/JS projects at build time. Because the contents of

package.json

vary from case to case, we’ve received numerous requests for an easy way to customize this file.

Starting with Kotlin 1.4.20, you can add entries to the

package.json

project file from the Gradle build script. To add custom fields to your

package.json

, use the

customField

function in the compilations

packageJson

block:

kotlin {
    js(BOTH) {
        compilations["main"].packageJson {
            customField("hello", mapOf("one" to 1, "two" to 2))
        }
    }
}

When you build the project, this will add the following block to the configuration file

build/js/packages/projectName/package.json

:

"hello": {
  "one": 1,
  "two": 2
}

Whether you want to add a scripts field to the configuration, making it easy to run your project from the command line, or want to include information for other post-processing tools, we hope that you will find this new way of specifying custom fields useful.
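For example, a scripts field could be added through the same customField function; the script name and file path below are purely illustrative:

```kotlin
kotlin {
    js(BOTH) {
        compilations["main"].packageJson {
            // Hypothetical entry: lets you run `npm start` in the generated package
            customField("scripts", mapOf("start" to "node kotlin/app.js"))
        }
    }
}
```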

Selective yarn dependency resolutions (experimental)

When including dependencies from npm, there are times when you want to have fine-grained control over their dependencies (transitive dependencies). There are numerous reasons this could be the case. You might want to apply an important upgrade to one of the dependencies of a library you are using. Or you may want to roll back an update of a transitive dependency which currently breaks your application. Yarn’s selective dependency resolutions allow you to override the dependencies specified by the original author, so you can keep on developing.

With Kotlin 1.4.20, we are providing a preliminary (experimental) way to configure this feature from a project’s Gradle build script. While we are still working on a smooth API integration with the rest of the Kotlin/JS options, you can already use the feature through the

YarnRootExtension

inside the

YarnPlugin

. To affect the resolved version of a package for your project, use the

resolution

function. In its arguments, specify the package name selector (as specified by Yarn) and the desired version.

An example configuration for selective dependency resolution in your

build.gradle.kts

file would look like this:

rootProject.plugins.withType<YarnPlugin> {
    rootProject.the<YarnRootExtension>().apply {
        resolution("react", "16.0.0")
        resolution("processor/decamelize", "3.0.0")
    }
}

Here, all of your npm dependencies that require

react

will receive version

16.0.0

, and

processor

will receive its

decamelize

dependency as version

3.0.0

. Additionally, you can also pass

include

and

exclude

invocations to the

resolution

block, which allows you to specify constraints about acceptable versions.
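A rough sketch of what such a constrained resolution might look like in build.gradle.kts; since the API is experimental, the exact shape of the include and exclude calls may differ:

```kotlin
rootProject.plugins.withType<YarnPlugin> {
    rootProject.the<YarnRootExtension>().apply {
        resolution("react") {
            include("16.*")   // illustrative: accept any 16.x version...
            exclude("16.9.0") // ...except this one
        }
    }
}
```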

Disabling granular workspaces (experimental)

To speed up build times, the Kotlin/JS Gradle plugin only installs the dependencies that are required for a particular Gradle task. For example, the

webpack-dev-server

package is only installed when you execute one of the

*Run

tasks, and not when you execute the

assemble

task. While this means unnecessary downloads are avoided, it can create problems when running multiple Gradle processes in parallel: when their dependency requirements clash, the two concurrent installations of npm packages can cause errors.

To resolve this issue, Kotlin 1.4.20 includes a new (experimental) option to disable these so-called granular workspaces. Like the experimental support for selective dependency resolutions, this feature is currently also accessible through the

YarnRootExtension

, but it will likely be integrated more closely with the rest of the Kotlin/JS Gradle DSL. To use it, add the following snippet to your

build.gradle.kts

file:

rootProject.plugins.withType<YarnPlugin> {
    rootProject.the<YarnRootExtension>().disableGranularWorkspaces()
}

With this configuration, the Kotlin/JS Gradle plugin will install all npm dependencies that may be used by your project, including those used by tasks that are not currently being executed. This means that the first Gradle build might take a bit longer, but the downloaded dependencies will be up to date for all tasks you run. This way, you can avoid conflicts when running multiple Gradle processes in parallel.

New Wizard templates

To give you more convenient ways to customize your project during creation, the project wizard for Kotlin comes with new adjustable templates for Kotlin/JS applications. There are templates for both the browser and Node.js runtime environments. They serve as a good starting point for your project and make it possible to fine-tune the initial configuration, including settings like enabling the new IR compiler or setting up additional library support.

With Kotlin 1.4.20, there are three templates available:

  • Browser Application allows you to set up a barebones Kotlin/JS Gradle project that runs in the browser.
  • React Application contains everything you need to start building a React app using the appropriate kotlin-wrappers. It provides options to enable integrations for style-sheets, navigational components, and state containers.
  • Node.js Application preconfigures your project to run in a Node.js runtime. It comes with the option to directly include the experimental kotlinx-nodejs package, which we introduced in a previous post.

Ignoring compilation errors (experimental)

With Kotlin 1.4.20, we are also excited to showcase a brand-new feature available in the Kotlin/JS IR compiler: ignoring compilation errors. This feature allows you to try out your application even while it is in a state where it usually wouldn’t compile, for example, when you’re doing a complex refactoring or working on a part of the system that is completely unrelated to a compilation error. With this new compiler mode, the compiler ignores any erroneous code and replaces it with runtime exceptions instead of refusing to compile.

Kotlin 1.4.20 comes with two tolerance policies for ignoring compilation errors in your code:

  • In
    SEMANTIC

    mode, the compiler will accept code that is syntactically correct but doesn’t make sense semantically. An example of this would be a statement containing a type mismatch (like

    val x: String = 3

    ).

  • In
    SYNTAX

    mode, the compiler will accept any and all code, even if it contains syntax errors. Regardless of what you write, the compiler will still try to generate a runnable executable.

As an experimental feature, ignoring compilation errors requires an opt-in through a compiler option. It’s only available for the Kotlin/JS IR compiler. To enable it, add the following snippet to your

build.gradle.kts

file:

kotlin {
   js(IR) {
       compilations.all {
           compileKotlinTask.kotlinOptions.freeCompilerArgs += listOf("-Xerror-tolerance-policy=SYNTAX")
       }
   }
}

We hope that compilation with errors will help you tighten feedback loops and increase your iteration speed when working on Kotlin/JS projects. We look forward to receiving your feedback, and any issues you find while trying this feature, in our YouTrack.

As we continue to refine the implementation of this feature, we will also offer a deeper integration for it with the Kotlin/JS Gradle DSL and its tasks at a later point.

Kotlin/Native

Performance remains one of Kotlin/Native’s main priorities in 1.4.20. A key feature in this area is a prototype of the new escape analysis mechanism that we plan to polish and improve in coming releases. And of course, there are also smaller performance improvements, such as faster range checks (

in

).

Another aspect of the improvements to Kotlin/Native development in 1.4.20 is polishing and bug fixing. We’ve addressed a number of old issues, as well as those found in new 1.4 features, for example the code sharing mechanism. One set of improvements fixes behavior inconsistencies between Kotlin/Native and Kotlin/JVM in corner cases, such as property initialization or the way

equals

and

hashCode

work on functional references.

Finally, we have extended Objective-C interop capabilities with an option to wrap Objective-C exceptions into Kotlin exceptions, making it possible to handle them in the Kotlin code.

Escape analysis

Escape analysis is a technique that the compiler uses to decide whether an object can be allocated on the stack or should “escape” to the heap. Allocation on the stack is much faster and doesn’t require garbage collection in the future.
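To illustrate the idea, consider an object that never leaves the function that creates it. Vector is an arbitrary example class here, and whether any particular allocation actually ends up on the stack is, of course, up to the analysis:

```kotlin
import kotlin.math.sqrt

class Vector(val dx: Double, val dy: Double) {
    fun length() = sqrt(dx * dx + dy * dy)
}

fun distance(x1: Double, y1: Double, x2: Double, y2: Double): Double {
    // `v` is used only inside this function and never escapes it,
    // so escape analysis may allocate it on the stack instead of the heap.
    val v = Vector(x2 - x1, y2 - y1)
    return v.length()
}

fun main() {
    println(distance(0.0, 0.0, 3.0, 4.0)) // prints 5.0
}
```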

Although Kotlin/Native already had a local escape analysis, we are now introducing a prototype implementation of a new, more efficient global escape analysis. This is performed in a separate compilation phase for the release builds (with the

-opt

compiler option).

This prototype has already yielded some promising results, such as a 10% average performance increase on our benchmarks. We’re researching ways to optimize the algorithm so that it finds more objects for stack allocation and speeds up the program even more.

While we continue to work on the prototype, you can greatly help us by trying it out and sharing the results you get on your real-life projects.

If you want to disable the escape analysis phase, use the

-Xdisable-phases=EscapeAnalysis

compiler option.

Opt-in wrapping of Objective-C exceptions

The purpose of exceptions in Objective-C is quite different from that in Kotlin. Their use is normally limited to finding errors during development. But technically, Objective-C libraries can throw exceptions at runtime. Previously, there was no option to handle such exceptions in Kotlin/Native, and encountering an

NSException

thrown from a library caused the termination of the whole Kotlin/Native program.

In 1.4.20, we’ve added an option to handle such exceptions at runtime to avoid program crashes. You can opt in to wrap

NSException

s into Kotlin’s

ForeignException

s for further handling in the Kotlin code. Such a

ForeignException

holds the reference to the original

NSException

, which lets you get information about the root cause.
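With wrapping enabled, such an exception can be handled like any other Kotlin exception. A sketch, assuming the wrapper type is exposed as kotlinx.cinterop.ForeignException and someLibraryCall() is a cinterop-generated function that may throw an NSException:

```kotlin
import kotlinx.cinterop.ForeignException

fun callSafely() {
    try {
        someLibraryCall() // hypothetical Objective-C interop call
    } catch (e: ForeignException) {
        // The ForeignException holds a reference to the original NSException,
        // so you can inspect the root cause here.
        println("Objective-C exception caught: ${e.message}")
    }
}
```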

To enable the wrapping of Objective-C exceptions, specify the

-Xforeign-exception-mode objc-wrap

option in the

cinterop

call or add the

foreignExceptionMode = objc-wrap

property to the

.def

file. If you use the CocoaPods integration, specify the option in the

pod {}

build script block of a dependency like this:

pod("foo") {
    extraOpts = listOf("-Xforeign-exception-mode", "objc-wrap")
}

The default behavior remains unchanged: the program terminates when an exception is thrown from the Objective-C code.

CocoaPods plugin improvements

Improved task execution

In this release, we’ve significantly improved the flow of task execution. For example, if you add a new CocoaPods dependency, existing dependencies are not rebuilt. Adding an extra target also doesn’t trigger the rebuilding of dependencies for existing targets.

Extended DSL

In 1.4.20, we’ve extended the DSL for adding CocoaPods dependencies to your Kotlin project.

In addition to local Pods and Pods from the CocoaPods repository, you can add dependencies on the following types of libraries:

  • A library from a custom spec repository.
  • A remote library from a Git repository.
  • A library from an archive (also available by arbitrary HTTP address).
  • A static library.
  • A library with custom cinterop options.

The previous DSL syntax is still supported.

Let’s examine a couple of DSL changes in the following examples:

  • A dependency on a remote library from a Git repository.
    You can specify a tag, commit, or branch by using corresponding keywords, for example:

    pod("JSONModel") {
        source = git("https://github.com/jsonmodel/jsonmodel.git") {
            branch = "key-mapper-class"
        }
    }
       

    You can also combine these keywords to get the necessary version of a Pod.

  • A dependency on a library from a custom spec repository.
    Use the special

    specRepos

    parameter for it:

        specRepos {
            url("https://github.com/Kotlin/kotlin-cocoapods-spec.git")
        }
        pod("example")
    

You can find more examples in the Kotlin with CocoaPods sample.

Updated integration with Xcode

To work correctly with Xcode, Kotlin requires some Podfile changes:

  • If your Kotlin Pod has any Git, HTTP, or specRepo pod dependencies, you should also specify them in the Podfile. For example, if you add a dependency on

    AFNetworking

    from the CocoaPods repository, declare it in the Podfile, as well:

    pod 'AFNetworking'
  • When you add a library from the custom spec, you also should specify the location of specs at the beginning of your Podfile:

    source 'https://github.com/Kotlin/kotlin-cocoapods-spec.git'
    
    target 'kotlin-cocoapods-xcproj' do
      # ... other Pods ...
      pod 'example'
    end

Integration errors now have detailed descriptions in IntelliJ IDEA, so if you have any problems with your Podfile you will immediately get information about how to fix them.

Take a look at the

withXcproject

branch of the Kotlin with CocoaPods sample. It contains an example of Xcode integration with the existing Xcode project named

kotlin-cocoapods-xcproj

.

Support for Xcode 12 libraries

We have added support for new libraries delivered with Xcode 12. Feel free to use them in your Kotlin code!

Updated structure of multiplatform library publications

Before Kotlin 1.4.20, multiplatform library publications included platform-specific publications and a metadata publication. However, there was no need to depend solely on the metadata publication, so this artifact was never used explicitly.

Starting from Kotlin 1.4.20, there is no longer a separate metadata publication. Metadata artifacts are now included in the root publication, which stands for the whole library and is automatically resolved to the appropriate platform-specific artifacts when added as a dependency to the common source set.
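For example, consuming such a library from the common source set now only needs the root coordinate (a minimal sketch; the coordinates below are hypothetical):

```kotlin
// build.gradle.kts — the library coordinates are illustrative, not a real library
kotlin {
    sourceSets {
        val commonMain by getting {
            dependencies {
                // A single root coordinate; Gradle resolves it to the
                // matching platform-specific artifact for each target
                implementation("com.example:my-multiplatform-lib:1.0.0")
            }
        }
    }
}
```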

Note that you must not add an empty artifact without a classifier to the root module of your library to meet the requirements of repositories such as Maven Central, as this will result in a conflict with metadata artifacts that are now included in this module.

Compatibility with libraries published in 1.4.20

If you have enabled hierarchical project structure support and want to use a multiplatform library that was published with such support in Kotlin 1.4.20 or higher, you will need to upgrade Kotlin in your project to version 1.4.20 or higher, as well.

If you are a library author and you publish your multiplatform library in Kotlin 1.4.20+ with hierarchical project structure support, keep in mind that users with earlier Kotlin versions who also have hierarchical project structure support enabled will not be able to use your library. They will need to upgrade Kotlin to 1.4.20 or higher.

However, if you or your library’s users do not enable hierarchical project structure support, those with earlier Kotlin versions will still be able to use your library.

Learn more about publishing a multiplatform library.

Standard library changes

Extensions for java.nio.file.Path

Starting from 1.4.20, the standard library provides experimental extensions for

java.nio.file.Path

.

Working with the modern JVM file API in an idiomatic Kotlin way is now similar to working with

java.io.File

extensions from the

kotlin.io

package. There is no longer any need to call static methods of

Files

, because most of them are now available as extensions on the

Path

type.

The extensions are located in the

kotlin.io.path

package. Since

Path

itself is available in JDK 7 and higher, the extensions are placed in the

kotlin-stdlib-jdk7

module. In order to use them, you need to opt in to the experimental annotation

ExperimentalPathApi

.

import java.nio.file.Path
import kotlin.io.path.*

// construct path with the div (/) operator
val baseDir = Path("/base")
val subDir = baseDir / "subdirectory"

// list files in a directory
val kotlinFiles: List<Path> = Path("/home/user").listDirectoryEntries("*.kt")

We especially want to thank our contributor AJ Alt for submitting the initial PR with these extensions.

Improved performance of the

String.replace

function

We are always thrilled when the Kotlin community suggests improvements, and the following is one such case. In this release, we’ve changed the implementation of the

String.replace()

function.

The case-sensitive variant uses a manual replacement loop based on

indexOf

, while the case-insensitive one uses regular expression matching.

This improvement speeds up the execution of the function in certain cases.
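Both variants go through the same public API; only the `ignoreCase` argument decides which implementation is used. A small illustration (not from the original post):

```kotlin
fun main() {
    val text = "Kotlin is fun. KOTLIN is fast."

    // Case-sensitive variant: an indexOf-based replacement loop under the hood
    println(text.replace("Kotlin", "K"))                     // K is fun. KOTLIN is fast.

    // Case-insensitive variant: regular-expression matching under the hood
    println(text.replace("Kotlin", "K", ignoreCase = true))  // K is fun. K is fast.
}
```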

Deprecation of Kotlin Android Extensions

Ever since we created Kotlin Android Extensions, they have played a huge role in the growth of Kotlin’s popularity in the Android ecosystem. With these extensions, we provided developers with convenient and efficient tools for reducing boilerplate code:

  • Synthetic views (
    kotlinx.android.synthetics

    ) for UI interaction.

  • Parcelable

    implementation generator (

    @Parcelize

    ) for passing objects around as

    Parcel

    s.

Initially, we thought about adding more components to

kotlin-android-extensions

. But this didn’t happen, and we’ve even received user requests to split the plugin into independent parts.

On the other hand, the Android ecosystem is always evolving, and developers are getting new tools that make their work easier. Some gaps that Kotlin Android Extensions were filling have now been covered by native mechanisms from Google. For example, regarding the concise syntax for UI interaction, there is now Android Jetpack, which has view binding that replaces

findViewById

, just like Kotlin synthetics.

Given these two factors, we’ve decided to retire synthetics in favor of view binding and move the Parcelable implementation generator to a separate plugin.

In 1.4.20, we’ve extracted the Parcelable implementations generator from

kotlin-android-extensions

and started the deprecation cycle for the rest of it, which currently is only synthetics. For now, they will continue to work with a deprecation warning. In the future, you’ll need to switch your project to another solution. We will soon add the link to the guidelines for migrating Android projects from synthetics to view bindings here, so stay tuned.

The Parcelable implementation generator is now available in the new

kotlin-parcelize

plugin. Apply this plugin instead of

kotlin-android-extensions

. The

@Parcelize

annotation is moved to the

kotlinx.parcelize

package. Note that

kotlin-parcelize

and

kotlin-android-extensions

can’t be applied together in one module.
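As a minimal migration sketch (the module setup is illustrative), the plugin change looks like this:

```kotlin
// build.gradle.kts of an Android module:
// replace kotlin("android.extensions") with the new plugin
plugins {
    id("com.android.application")
    kotlin("android")
    id("kotlin-parcelize")
}
```

In the code, update imports from the old `kotlinx.android.parcel.Parcelize` to `kotlinx.parcelize.Parcelize`; the annotated classes themselves don’t need to change.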

How to update

Before updating your projects to the latest version of Kotlin, you can try the new language and standard library features online at play.kotl.in. It will be updated to 1.4.20 soon.

In IntelliJ IDEA and Android Studio, you can update the Kotlin Plugin to version 1.4.20 – learn how to do this here.

If you want to work on existing projects that were created with previous versions of Kotlin, use the

1.4.20

Kotlin version in your project configuration. For more information, see the docs for Gradle and for Maven.

You can download the command-line compiler from the GitHub release page.

You can use the following library versions with this release:

The versions of libraries from

kotlin-wrappers

(

kotlin-react

etc.) can be found in the corresponding repository.

The release details and the list of compatible libraries are also available here.

If you run into any problems with the new release, you can find help on Slack (get an invite here) and report issues in our YouTrack.

External contributors

We’d like to thank all of our external contributors whose pull requests were included in this release:

Jinseong Jeon
Toshiaki Kameyama
Steven Schäfer
Mads Ager
Mark Punzalan
Ivan Gavrilovic
pyos
Jim Sproch
Kristoffer Andersen
Aleksandrina Streltsova
cketti
Konstantin Virolainen
AJ Alt
Henrik Tunedal
Juan Chen
KotlinIsland
Valeriy Vyrva
Alex Chmyr
Alexey Kudravtsev
Andrey Matveev
Aurimas Liutikas
Dat Trieu
Dereck Bridie
Efeturi Money
Elijah Verdoorn
Enteerman
fee1-dead
Francesco Vasco
Gia Thuan Lam
Guillaume Darmont
Jake Wharton
Julian Kotrba
Kevin Bierhoff
Matthew Gharrity
Matts966
Raluca Sauciuc
Ryan Nett
Sebastian Kaspari
Vladimir Krivosheev
n-p-s
Pavlos-Petros Tournaris
Robert Bares
Yoshinori Isogai
Kris
Derek Bodin
Dominik Wuttke
Sam Wang
Uzi Landsmann
Yuya Urano
Norbert Nogacki
Alexandre Juca


Roman Elizarov is the new Project Lead for Kotlin

TL;DR: I am stepping down as the Project Lead for Kotlin. Roman Elizarov is the new Project Lead.

Kotlin has just had the 10th anniversary of the first commit which I made back in November 2010. In the last ten years we went from a few notes on a whiteboard to millions of users, and from a handful of part-time people to a team of about 100. I’m excited about Kotlin doing so well, thanks to every member of our wonderful community, including every person on the team at JetBrains, every contributor, every speaker and teacher, everybody active on Slack or StackOverflow. Together, we’ll make the future of Kotlin an exciting reality.

Personally, I’ve chipped in with ten years of doing my best. I worked on the design of the language along with Max Shafirov and many other great people who joined later. I wrote the first reference for the language, a big part of the first compiler front-end and the initial IDE integration. I managed the team as it grew bigger and bigger. It was an exciting journey that I’ll never forget.

Now it’s time for me to take a break and think about what I want to do next. I’m not a lifetime project type of person, and the backlog of ideas I’d like to explore is getting a bit too long, many of them outside of the realm of engineering and computer science. So, I’m stepping down as the lead of Kotlin passing the duty and privilege of driving the project to its further success to my excellent colleagues.

Roman Elizarov is the new Project Lead. This includes the authority to finalize all language-related decisions including new features and evolution/retirement of the old ones. Roman is an outstanding engineer and had influenced the design of Kotlin even before he joined the team. We’ve been working together for over 4 years now, and I value every minute of it. Roman is the mastermind of Kotlin coroutines and is now leading all our language research activities. I’m sure he’s a perfect fit for this role. With Roman as the Project Lead, Stanislav Erokhin in charge of the development team, and Egor Tolstoy leading Product Management, I know Kotlin is in good hands. My best wishes! Keep up the great work you folks have been doing all these years.

What’s going to change now for the Kotlin community? Nothing much, really. We’ve been preparing this transition very carefully, and don’t expect any noticeable bumps in the road. I will be in touch with Roman and the team as much as needed, will participate in strategy discussions, and share any and all of my knowledge and experience that the team might need.

JetBrains is 100% committed to Kotlin, and the Kotlin Foundation (created by JetBrains and Google) is doing great. With such a strong backing and such an outstanding community, Kotlin is bound to live long and prosper.

My best memories about the ten years of Kotlin are all about people. I’ve met extraordinary folks at JetBrains, among the early adopters, at Google, Gradle and Spring whom we’ve been partnering with so closely and fruitfully, among the KotlinConf speakers and all the passionate supporters we are so proud to have. I’m so grateful for having the privilege to know you folks, it’s the best part of Kotlin, if you ask me.

I will always remain loyal to Kotlin and our community, it’s a huge and beloved part of my life. You can reach me on Twitter or Kotlin Slack any time. Expect to see you folks in person when offline conferences are a thing again 🙂

All the best, and

Have a nice Kotlin!


Kotlin 1.4 Online Event Recap: Materials and QuizQuest Winners

The Kotlin 1.4 Online Event materials are available for you to watch any time. Recordings of the talks and Q&A sessions are available on the event web page, along with slides. You can also watch the recordings sequentially using the Kotlin 1.4 Online Event playlist:

Kotlin 1.4 talks

At the time of writing, the recordings from the Kotlin 1.4 playlist have been viewed more than 120 thousand times in total. The five most viewed talks are:

If you’d like to get a better feel for the atmosphere of the event, as well as more insights from the speakers’ interviews, check out the recordings of the streams in the event playlist.

Q&A sessions and the Kotlin team’s AMA session on Reddit

We received more than 700 questions and managed to answer about half of them during the event. The remaining questions have since been covered during the Kotlin team’s recent AMA session on Reddit. Check it out to find the answers to your questions.

About the Kotlin 1.4 Online Event

The Kotlin 1.4 Online Event took place on October 12–15, 2020. Over 23,000 people from 69 countries joined the live stream to watch the talks, chat with the Kotlin community, and pose questions for the speakers. We received 2090 applications for the virtual booth and more than 5000 submissions in the QuizQuest.

Thanks a million to everyone who joined and filled those four days with knowledge, attention, and support for each other, including all of our speakers, organizers, guests, and partners!

We particularly want to thank those who sent us videos for our community compilation. It is a true gem from our event! Enjoy watching:

QuizQuest results

QuizQuest, one of the most exciting activities of the event, is finished and we have winners in three categories:

  1. Quick Thinker. In this category we raffled off 45 stylish bottles among attendees who solved at least 1 quiz correctly and submitted it before the deadline (16:00 CEST the day after the quiz was posted).
  2. Explorer. All QuizQuesters who submitted at least one quiz and answered 50% of questions correctly were enrolled in this raffle. We randomly chose 50 winners who are getting a T-shirt with the event logo.
  3. Captain Kotlin. In this challenging category, players had to work hard and answer as many questions as possible from all the quizzes. The top 12 scorers are getting a branded backpack from JetBrains, as well as special kudos from the Kotlin team and the community! Here are their names and scores:

    • Mikhail Islentyev – 100.0
    • Aakarshit Uppal – 99.007
    • Ivan Mikhnovich – 98.440
    • Jean-François Michel – 98.355
    • Zoltán Kadlecsik – 98.128
    • Sergey Ryabov – 96.794
    • Nicola Corti – 96.142
    • Maciej Uniejewski – 96.0
    • Bogdan Popov – 95.765
    • Mayank Kharbanda – 95.680
    • KAUSHIK N SANJI – 95.461
    • Dmitri Maksimov – 95.433

All the winners will receive an email with instructions on how to redeem their prizes.

The Kotlin 1.4 Online Event was a great success thanks to the vibrant Kotlin community. We would love for you to join our future events, so if you’re interested, please register to be notified about them.


Jetpack Compose Screenshot Testing with Shot

Weeks ago, the Jetpack Compose team announced their first alpha release full of useful features, a lot of reference code and some documentation. It was a great moment for the Android ecosystem! We know Compose is going to change the way Android devs build apps around the world and we also wanted to help. That’s why today we announce official Screenshot Testing support for Jetpack Compose in Shot.

Right after the release, José Alcérreca wrote to us to mention there could be an interesting opportunity to implement screenshot testing support as part of our already implemented screenshot testing library.

Screenshot testing is one of the best testing strategies you can use in your UI layer. We have been using it for years, and we love it ❤️ It takes a picture of your views in the state you want and saves it as a reference in your repository. Once you consider these screenshots correct, they are used to ensure there are no regressions in your UI layer on subsequent test runs. If there is an error, you’ll get a report with a screenshot diff showing you where the failure is. Here is an example of a test that failed because, by mistake, the account number is no longer shown with the * mask it had before:

Look at the third image! The “diff” column shows where the error is using pixel-perfect precision 📸

original screenshot – new screenshot – diff report

In case you would like to know more about this testing strategy, you can find a detailed blog post here by Eduardo Pascua.

However, at this point, you might be wondering what a Jetpack Compose screenshot test looks like 🤔 Don’t worry, it’s really simple. We’ve provided a simple interface named “ScreenshotTest” for you. You can add this interface to any instrumentation test and use it to take a screenshot of your components as follows:


class OverviewBodyTest : ScreenshotTest {

    @get:Rule
    val composeTestRule = createComposeRule()

    @Test
    fun rendersDefaultOverviewBody() {
        renderOverviewBody()

        compareScreenshot(composeTestRule)
    }

    private fun renderOverviewBody() {
        composeTestRule.setContent {
            RallyTheme {
                OverviewBody()
            }
        }
    }

}

Easy, isn’t it? And fully compatible with all the testing tools provided by the Jetpack Compose team! Once you run your tests, Shot will generate a detailed report for you with all the screenshots recorded as follows:

You will also find a useful plain console report whenever you execute your tests in CI or from your terminal.

Now it is time to test your components using screenshot tests! If you want to review more information about Shot, you can check the official documentation here. In case you would like to review a sneak peek of what a Compose app with screenshot tests would look like, we’ve prepared a branch of the official Google code samples with some tests written using Shot.

We hope you find Shot super useful and you can start using it soon. Remember it can also be used with regular Android apps if you can’t use Jetpack Compose right now!


The Dark Secrets of Fast Compilation for Kotlin

Compiling a lot of code fast is a hard problem, especially when the compiler has to perform complex analyses such as overload resolution and type inference with generics. In this post, I’ll tell you about a huge and largely invisible part of Kotlin which makes it compile much faster on relatively small changes that happen a lot in an everyday run-test-debug loop.

Also, we are looking for Senior Developers to join the team at JetBrains working on fast compilation for Kotlin, so if you are interested, look at the bottom of this post.

Let’s start with the obligatory XKCD comic #303

XKCD comic 303: Compiling

This post is about one very important aspect of every developer’s life: How long does it take to run a test (or just hit the first line of your program) after you make a change in the code. This is often referred to as time-to-test.

Why this is so important:

  • If time-to-test is too short, you never get an excuse to grab some coffee (or have a sword fight).
  • If time-to-test is too long, you start browsing social media or get distracted in some other way, and lose track of what you were doing.

While both situations arguably have their pros and cons, I believe it’s best to take breaks consciously and not when your compiler tells you to. Compilers are smart pieces of software, but healthy human work schedules are not what compilers are smart about.

Developers tend to be happier when they feel productive. Compilation pauses break the flow and make us feel stuck, stopped in our tracks, unproductive. Hardly anybody enjoys that.

Why does compilation take so long?

There are generally three big reasons for long compilation times:

  1. Codebase size: compiling 1 MLOC usually takes longer than 1 KLOC.
  2. How optimized your toolchain is, including the compiler itself and any build tools you are using.
  3. How smart your compiler is: whether it figures many things out without bothering the user with ceremony or constantly requires hints and boilerplate code.

The first two factors are kind of obvious, let’s talk about the third one: the smartness of the compiler. It is usually a complicated tradeoff, and in Kotlin, we decided in favor of clean readable type-safe code. This means that the compiler has to be pretty smart, and here’s why.

Where Kotlin stands

Kotlin is designed to be used in an industrial setting where projects live long, grow big, and involve a lot of people. So, we want static type safety to catch bugs early and get precise tooling (completion, refactorings and find usages in the IDE, precise code navigation, and such). Then, we also want clean readable code without unneeded noise or ceremony. Among other things, this means that we don’t want types all over the code. And this is why we have smart type inference and overload resolution algorithms that support lambdas and extension function types, we have smart casts (flow-based typing), and so on. The Kotlin compiler figures out a lot of stuff on its own to keep the code clean and type-safe at the same time.

Can one be smart and fast at the same time?

To make a smart compiler run fast you certainly need to optimize every bit of the toolchain, and this is something we are constantly working on. Among other things, we are working on a new-generation Kotlin compiler that will run much faster than the current one. But this post is not about that.

However fast a compiler is, it won’t be too fast on a big project. And it’s a huge waste to recompile the entire codebase on every little change you make while debugging. So, we are trying to reuse as much as we can from previous compilations and only compile what we absolutely have to.

There are two general approaches to reducing the amount of code to recompile:

  • Compile avoidance — only recompile affected modules,
  • Incremental compilation — only recompile affected files.

(One could think of an even finer-grained approach that would track changes in individual functions or classes and thus recompile even less than a file, but I’m not aware of practical implementations of such an approach in industrial languages, and altogether it doesn’t seem necessary.)

Now let’s look into compile avoidance and incremental compilation in more details.

Compile avoidance

The core idea of compile avoidance is:

  • Find “dirty” (=changed) files
  • Recompile the modules these files belong to (use the results of the previous compilation of other modules as binary dependencies)
  • Determine which other modules may be affected by the changes
    • Recompile those as well, check their ABIs too
    • Repeat until all affected modules are recompiled

The algorithm is more or less straightforward, provided that you know how to compare ABIs. Either way, we get to recompile all the modules affected by the changes. Of course, a change in a module that nobody depends on will compile faster than a change in the ‘util’ module that everybody depends on (if it affects its ABI).

Tracking ABI changes

ABI stands for Application Binary Interface, and it’s kind of the same as API but for the binaries. Essentially, the ABI is the only part of the binary that dependent modules care about (this is because Kotlin has separate compilation, but we won’t go into this here).

Roughly speaking, a Kotlin binary (be it a JVM class file or a KLib) contains declarations and bodies. Other modules can reference declarations, but not all declarations. So, private classes and members, for example, are not part of the ABI. Can a body be part of an ABI? Yes, if this body is inlined at the call site. Kotlin has inline functions and compile-time constants (const val’s). If a body of an inline function or a value of a const val changes, dependent modules may need to be recompiled.

So, roughly speaking, the ABI of a Kotlin module consists of declarations, bodies of inline functions and values of const vals visible from other modules.

A straightforward way to detect a change in the ABI is

  • Store the ABI from the previous compilation in some form (you might want to store hashes for efficiency),
  • After compiling a module, compare the result with the stored ABI:
    • If it’s the same, we are done;
    • If it’s changed, recompile dependent modules.
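As a rough sketch (the names and the string-based ABI representation here are invented for illustration), the hash-based comparison could look like this:

```kotlin
import java.security.MessageDigest

// Hypothetical ABI representation: each exported declaration (plus inline bodies
// and const values) rendered to a stable string.
fun abiHash(abiEntries: List<String>): String {
    val digest = MessageDigest.getInstance("SHA-256")
    // Sort so that merely reordering declarations doesn't change the hash;
    // a newline separator keeps entry boundaries unambiguous
    abiEntries.sorted().forEach { digest.update((it + "\n").toByteArray()) }
    return digest.digest().joinToString("") { "%02x".format(it) }
}

// Compare against the hash stored from the previous compilation
fun abiChanged(previousHash: String?, currentEntries: List<String>): Boolean =
    previousHash != abiHash(currentEntries)

fun main() {
    val previous = abiHash(listOf("fun foo(Int): Int", "const val VERSION = 42"))
    // Body-only changes leave the exported entries, and thus the hash, intact
    check(!abiChanged(previous, listOf("const val VERSION = 42", "fun foo(Int): Int")))
    // A new public declaration changes the hash => recompile dependent modules
    check(abiChanged(previous, listOf("fun foo(Int): Int", "const val VERSION = 42", "fun bar()")))
    println("ABI comparison works")
}
```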

Pros and cons of compile avoidance

The biggest advantage of compile avoidance is its relative simplicity.

This approach really helps when modules are small, because the unit of recompilation is an entire module. If your modules are big, recompilations will be long. So, basically, compile avoidance dictates that we have a lot of small modules, and as developers we may or may not want this. Small modules don’t necessarily sound like a bad design but I’d rather structure my code for people, not machines.

Another observation is that many projects have something like a ‘util’ module where many small useful functions reside. And virtually every other module depends on ‘util’ at least transitively. Now, let’s say I want to add another tiny useful function that is used three times across my codebase. It adds to the module ABI, so all dependent modules are affected, and I get into a long corridor sword fight because my entire project is being recompiled.

On top of that, having a lot of small modules (each of which depends on multiple others) means that the configuration of my project may become huge because for each module it includes its unique set of dependencies (source and binary). Configuring each module in Gradle normally takes some 50-100ms. It is not uncommon for large projects to have more than 1000 modules, so total configuration time may be well over a minute. And it has to run on every build and every time the project is imported into the IDE (for example, when a new dependency is added).

There are a number of features in Gradle that mitigate some of the downsides of compile avoidance: configurations can be cached, for example. Still, there’s quite a bit of room for improvement here, and this is why, in Kotlin, we use incremental compilation.

Incremental compilation

Incremental compilation is more granular than compile avoidance: it works on individual files rather than modules. As a consequence, it does not care about module sizes, nor does it recompile the whole project when the ABI of a “popular” module is changed insignificantly. In general, this approach does not restrict the user as much and leads to a shorter time-to-test. Also, developers’ swords get neglected and rusty and beg to be used at least once in a while.

Incremental compilation has been supported in JPS, IntelliJ IDEA’s built-in build system, since forever. Gradle only supports compile avoidance out of the box. As of 1.4, the Kotlin Gradle plugin brings a somewhat limited implementation of incremental compilation to Gradle, and there’s still a lot of room for improvement.

Ideally, we’d just look at the changed files, determine exactly which files depend on them, and recompile all those files. Sounds nice and easy, but in reality it’s highly non-trivial to determine this set of dependent files precisely. For one thing, there can be circular dependencies between source files, something that’s not allowed for modules in most modern build systems. And dependencies of individual files are not declared explicitly. Note that imports are not enough to determine dependencies because of same-package references and chain calls: for A.b.c() we need to import at most A, but changes in the type of A.b will affect us too.

Because of all these complications, incremental compilation tries to approximate the set of affected files by going in multiple rounds, here’s the outline of how it’s done:

  • Find “dirty” (=changed) files
  • Recompile them (use the results of the previous compilation as binary dependencies instead of compiling other source files)
  • Check if the ABI corresponding to these files has changed
    • If not, we are done!
    • If yes, find files affected by the changes, add them to the dirty files set, recompile
    • Repeat until the ABI stabilizes (this is called a “fixpoint”)
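The rounds above can be sketched as a fixpoint loop (the data structures and callbacks are invented for illustration; the real implementation is much more involved):

```kotlin
typealias SourceFile = String

// file -> fingerprint of the ABI it produced
typealias AbiSnapshot = Map<SourceFile, String>

fun incrementalCompile(
    changed: Set<SourceFile>,
    previousAbi: AbiSnapshot,
    compile: (Set<SourceFile>) -> AbiSnapshot,        // recompile a set of files together
    affectedBy: (Set<SourceFile>) -> Set<SourceFile>, // files that looked up names from the changed ABIs
): Set<SourceFile> {
    var dirty = changed
    while (true) {
        val newAbi = compile(dirty)
        val abiChanged = dirty.filterTo(mutableSetOf()) { newAbi[it] != previousAbi[it] }
        val widened = dirty + affectedBy(abiChanged)
        if (widened == dirty) return dirty // fixpoint: the ABI has stabilized
        dirty = widened                    // widen the dirty set and recompile
    }
}

fun main() {
    // Simulate renaming changeMe -> foo: dirty.kt's ABI changes, and clean.kt
    // looked up the name foo, so it must be recompiled in the next round too.
    val previous = mapOf("dirty.kt" to "changeMe(Int)", "clean.kt" to "foo(Any), bar()")
    val recompiled = incrementalCompile(
        changed = setOf("dirty.kt"),
        previousAbi = previous,
        compile = { files -> files.associateWith { if (it == "dirty.kt") "foo(Int)" else "foo(Any), bar()!" } },
        affectedBy = { files -> if (files.isNotEmpty()) setOf("clean.kt") else emptySet() },
    )
    check(recompiled == setOf("dirty.kt", "clean.kt"))
    println("recompiled: $recompiled")
}
```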

Since we already know how to compare ABIs, there are basically only two tricky bits here:

  1. using results of the previous compilation to compile an arbitrary subset of sources, and
  2. finding files affected by a given set of ABI changes.

Both are at least in part features of Kotlin’s incremental compiler. Let’s look at them one-by-one.

Compiling the dirty files

The compiler knows how to use a subset of the previous compilation results to skip compiling non-dirty files and just load the symbols defined in them to produce the binaries for the dirty files. This is not something a compiler would necessarily be able to do if not for incrementality: producing one big binary from a module instead of a small binary per source file is not so common outside of the JVM world. And it’s not a feature of the Kotlin language, it’s an implementation detail of the incremental compiler.

When we compare ABIs of the dirty files with the previous results, we may find out that we got lucky and no more rounds of recompilation are needed. Here are some examples of changes that only require recompilation of dirty files (because they don’t change the ABI):

  • Comments, string literals (except for const val’s) and such
    • Example: change something in the debug output
  • Changes confined to function bodies that are not inline and don’t affect return type inference
    • Example: add/remove debug output, or change the internal logic of a function
  • Changes confined to private declarations (they can be private to classes or files)
    • Example: introduce or rename a private function
  • Reordering of declarations

As you can see, these cases are quite common when debugging and iteratively improving code.

Widening the dirty file set

If we are not so lucky and some declaration has been changed, it means that some files that depend on the dirty ones could produce different results upon recompilation even though not a single line was changed in their code.

A straightforward strategy would be to give up at this point and recompile the whole module. This will bring all the issues with compile avoidance to the table: big modules become a problem as soon as you modify a declaration, and tons of small modules have performance costs too, as described above. So, we need to be more granular: find the affected files and recompile them.

So, we want to find the files that depend on the parts of the ABI that actually changed. For example, if the user renamed

foo

to

bar

, we only want to recompile the files that care about names

foo

and

bar

, and leave other files alone even if they refer to some other parts of this ABI. The incremental compiler remembers which files depend on which declaration from the previous compilation, and we can use this data sort of like a module dependency graph. This, again, is not something non-incremental compilers normally do.

Ideally, for every file we should store which files depend on it, and which parts of the ABI they care about. In practice, it’s too costly to store all dependencies so precisely. And in many cases, there’s no point in storing full signatures.

Consider this example:

File: dirty.kt

// rename this to be 'fun foo(i: Int)'
fun changeMe(i: Int) = if (i == 1) 0 else bar().length

File: clean.kt

fun foo(a: Any) = ""
fun bar() = foo(1)

Let’s assume that the user renamed the function changeMe to foo. Note that, although clean.kt is not changed, the body of bar() will change upon recompilation: it will now be calling foo(Int) from dirty.kt, not foo(Any) from clean.kt, and its return type will change too. This means that we have to recompile both dirty.kt and clean.kt. How can the incremental compiler find this out?

We start by recompiling the changed file: dirty.kt. Then we see that something in the ABI has changed:

  • there’s no function changeMe any more,
  • there’s a function foo that takes an Int and returns an Int.

Now we see that clean.kt depends on the name foo. This means that we have to recompile both clean.kt and dirty.kt once again. Why? Because types cannot be trusted.

Incremental compilation must produce the same result as a full recompilation of all sources would. Consider the return type of the newly appeared foo in dirty.kt. It’s inferred, and in fact it depends on the type of bar from clean.kt, which is a circular dependency between files. So the return type could change when we add clean.kt to the mix. In this case we’ll get a compilation error, but until we recompile clean.kt together with dirty.kt, we don’t know about it.

The big rule of the state-of-the-art incremental compilation for Kotlin: all you can trust is names. And this is why for each file, we store

  • the ABI it produces, and
  • the names (not full declarations) that were looked up during its compilation.

Some optimizations are possible in how we store all this. For example, some names are never looked up outside the file, e.g. names of local variables and in some cases local functions. We could omit them from the index. To make the algorithm more precise, we record which files were consulted when each of the names was looked up. And to compress the index we use hashing. There’s some space for more improvements here.
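As a rough sketch (the names and shapes here are invented for illustration, not the compiler’s actual data structures), the per-file index could store an ABI fingerprint per file plus the set of names each file looked up, so that an ABI diff maps back to the files that must be recompiled:

```kotlin
// Hypothetical per-file index entry: not the compiler's real storage format.
data class FileIndex(
    val abiFingerprint: Int,        // hash of the ABI this file produces
    val lookedUpNames: Set<String>  // names resolved while compiling this file
)

// Map an ABI diff (a set of affected names) back to the files to recompile.
fun affectedFiles(index: Map<String, FileIndex>, changedNames: Set<String>): Set<String> =
    index.filterValues { entry -> entry.lookedUpNames.any { it in changedNames } }.keys

fun main() {
    val index = mapOf(
        "clean.kt" to FileIndex(abiFingerprint = 17, lookedUpNames = setOf("foo", "bar")),
        "other.kt" to FileIndex(abiFingerprint = 42, lookedUpNames = setOf("baz"))
    )
    // Renaming changeMe to foo affects the names changeMe and foo;
    // only clean.kt looked either of them up.
    println(affectedFiles(index, setOf("changeMe", "foo")))  // prints [clean.kt]
}
```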

As you have probably noticed, we have to recompile the initial set of dirty files multiple times. Alas, there’s no way around this: there can be circular dependencies, and only compiling all the affected files at once will yield a correct result. In the worst case this gets quadratic and incremental compilation may do more work than compile avoidance would, so there should be heuristics in place guarding against it.
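The round structure described above can be sketched as the following loop (a simplified model; the three callbacks stand in for the real compiler and index machinery, and their names are invented):

```kotlin
// Simplified driver for the dirty-set widening loop. The callbacks stand in
// for the real compiler and index machinery; names are invented.
fun compileIncrementally(
    initialDirty: Set<String>,
    recompile: (Set<String>) -> Unit,
    abiChangedNames: () -> Set<String>,          // ABI diff after the last round
    filesLookingUp: (Set<String>) -> Set<String> // reverse lookup in the index
) {
    var dirty = initialDirty
    while (true) {
        recompile(dirty)
        val changed = abiChangedNames()
        if (changed.isEmpty()) return      // ABI is stable: fixpoint reached
        val widened = dirty + filesLookingUp(changed)
        if (widened == dirty) return       // no files outside the set are affected
        dirty = widened                    // recompile the wider set together
    }
}

fun main() {
    // Replay the changeMe -> foo example: the first round changes the ABI,
    // clean.kt looked up the name foo, and the second round is stable.
    var firstRound = true
    compileIncrementally(
        initialDirty = setOf("dirty.kt"),
        recompile = { files -> println("compiling $files") },
        abiChangedNames = {
            if (firstRound) { firstRound = false; setOf("changeMe", "foo") } else emptySet()
        },
        filesLookingUp = { setOf("clean.kt") }
    )
}
```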

Incremental compilation across module boundaries

The biggest challenge to date is incremental compilation that can cross module boundaries.

Say, we have dirty files in one module, we do some rounds and reach a fixpoint there. Now we have the new ABI of this module, and need to do something about the dependent modules.

Of course, we know which names were affected in our initial module’s ABI, and we know which files in the dependent modules looked these names up. Now, we could apply essentially the same incremental algorithm, but starting from the ABI change rather than from a set of dirty files. By the way, if there are no circular dependencies between modules, it’s enough to recompile the dependent files alone. But then, if their ABI has changed, we’d need to add more files from the same module to the set and recompile the same files again.

It’s an open challenge to implement this fully in Gradle. It will probably require some changes to the Gradle architecture, but we know from past experience that such things are possible and welcomed by the Gradle team.

Things not covered in this blog post

My goal here was to give you a taste of the fascinating machinery of fast compilation for Kotlin. There’re many more things at play there that I deliberately left out, including but not limited to

  • Build Cache
  • Configuration Cache
  • Task Configuration Avoidance
  • Efficiently storing incremental compilation indexes and other caches on disk
  • Incremental compilation for mixed Kotlin+Java projects
  • Reusing javac data structures in memory to avoid reading Java dependencies twice
  • Incrementality in KAPT and KSP
  • File watchers for quickly finding dirty files

Summary

Now, you have a basic idea of the challenges that fast compilation in a modern programming language poses. Note that some languages deliberately chose to make their compilers not that smart to avoid having to do all this. For better or worse, Kotlin went another way, and it seems that the features that make the Kotlin compiler so smart are the ones the users love the most because they provide powerful abstractions, readability and concise code at the same time.

While we are working on a new-generation compiler front-end that will make compilation much faster by rethinking the implementation of the core type-checking and name resolution algorithms, we know that everything described in this blog post will never go away. One reason for this is the experience with the Java programming language, which enjoys the incremental compilation capabilities of IntelliJ IDEA even though its compiler is much faster than kotlinc is today. Another reason is that our goal is to get as close as possible to the development roundtrip of interpreted languages, which enjoy instantaneous pick-up of changes without any compilation at all. So, Kotlin’s strategy for fast compilation is: optimized compiler + optimized toolchain + sophisticated incrementality.

Join the team!

If you are interested in working on these kinds of problems, please consider the job opening we currently have on the Kotlin fast compilation team at JetBrains. Here’s the job listing in English, and one in Russian. Prior experience working on compilers or build tools is NOT required. We are hiring in all JetBrains offices (Saint Petersburg, Munich, Amsterdam, Boston, Prague, Moscow, Novosibirsk), or you can work remotely from anywhere in the world. It would be great to hear from you!

Kotlin 1.4 Online Event

The event of the year is on! Join us on October 12–15 for 4 days of Kotlin as we host the Kotlin 1.4 Online Event. You can tune in at any point during the event to listen to the talks, chat with the team, and ask the speakers questions. In addition to this, there will be lots of activities and entertainment including:

  1. Q&A sessions
  2. QuizQuest, with prize raffles for participants
  3. Virtual Kotlin Booth
  4. Event chat

Read the details below to find out about how to participate in these activities.

Register for Kotlin 1.4 Online Event

Program and speakers

Prepare yourself for four days of exciting talks and discussions about the latest Kotlin release.

Speakers from JetBrains together with Florina Muntenescu from Google and Sébastien Deleuze from Spring will shed light on how we came up with all these new features, and reveal some of the secrets hidden in the depths of the technology.

  • The first day will be about Kotlin in general. There will be a keynote speech, a community overview, and talks on language features and tooling achievements.
  • The second day is dedicated to Kotlin libraries.
  • The third day will be all about using Kotlin everywhere – on Android, for Mobile Multiplatform projects, and Kotlin/JS.
  • The fourth day will cover server-side Kotlin and the future of the language.

The schedule is available on the website, with times shown in your computer’s timezone, so you don’t have to figure out exactly when to log in to watch the talks you are interested in!

I want to join

How to ask the Kotlin team your questions

Each day will be followed by a live Q&A session with the speakers from that day. They will try to answer as many of your questions as they can. You can already start preparing yours now. There are three ways to ask us:

  • You can ask on Twitter using the hashtag #kotlin14ask, so we can find your question and include it in the discussion.
  • You can ask us directly via this form.
  • You can also ask your questions during the event in the live chat.

The most interesting questions will be rewarded with Kotlin swag!

How to join the virtual Kotlin booth

You can chat with members of the Kotlin team directly – simply go to the registration form and sign up for a Virtual Booth session!

Share your Kotlin story with us, chat about any issues in your project, talk to us about how we can help you to convince your manager to use Kotlin, or even just come by and let us know you love Kotlin! We will do our best to cover as many inquiries as possible, but please keep in mind that we can’t guarantee everyone a one-on-one. So the sooner you send in your request, the greater your chance of getting a Virtual Booth appointment will be. To ensure that we can see as many people as possible, we may extend this opportunity, which means your appointment could be scheduled for a time slot before the event starts or sometime after the event concludes.

So, feel free to register for a Virtual Booth and wait for a confirmation email from the team.

QuizQuest: how to get Kotlin swag

Join the QuizQuest for a chance to win Kotlin swag, JetBrains product licenses, and other great prizes!

The QuizQuest is a series of 16 small quizzes that check what you’ve learned from the talks. After each talk, a quiz on its material will become available. Answer at least 3 of the questions correctly, and you will be entered into a raffle.

There are three raffle categories with different entry criteria:

  1. Quick Thinker. Take the quiz right after the sessions, submit your answers before the beginning of the next day, and if they are all correct you will be entered into a raffle for one of 12 Corkcicle canteen bottles! The more different quizzes you take, the greater your chances of winning.
  2. Explorer. Participate in at least one quiz before October 26 and you will be entered into a raffle where you can win one of 50 special conference t-shirts.
  3. Captain Kotlin. Solve all the quizzes during the event and you will have the chance to win an awesome backpack! We have 12 backpacks to raffle off. To be in with a chance of winning this raffle, you have to submit all your quiz answers by October 26.

The more quizzes you complete, the better your chances to win!

Live chat

During all the event talks and discussions there will be a live chat for audience members. The Kotlin team will be in the live chat too, so if you have any questions for them, you can ask them right there!

Register for Kotlin 1.4 Online Event

After the event

All the videos will be available on the event’s website and our YouTube channel right after the event, so even if you miss something, you can watch it later at any time. You are also encouraged to use the videos for your community events to watch and discuss the recordings together.
