Kotlin Plugin Released With IDEA 2020.3

We’ve changed the release cycle of the Kotlin plugin, so that all major updates are now synced with IntelliJ IDEA releases.

In this release:

  • Inline refactoring is now cross-language: you can apply it to Kotlin elements defined in Java, and the conversion will be performed automatically.
  • Now you can search and replace parts of your code based on its structure.
  • Building performant and beautiful user interfaces is now easy with the new experimental Jetpack Compose templates.

New infrastructure and release cycle

We now ship new versions of the Kotlin plugin with every release of the IntelliJ Platform and every new version of Kotlin. IDE support for the latest version of the language will be available for the latest two versions of IntelliJ IDEA and for the latest version of Android Studio.

We’ve made this change to minimize the time it takes us to apply platform changes, and to make sure that the IntelliJ team gets the latest Kotlin IDE fixes quickly and with enough time to test them.

Inline refactoring

The Kotlin plugin now supports cross-language conversion. Starting with version 2020.3 of the Kotlin plugin, you can use inline refactoring actions for Kotlin elements defined in Java.

You can apply them via Refactor | Inline… or by pressing ⌥⌘N on macOS or Ctrl+Alt+N on Windows and Linux.

We’ve improved the inlining of lambda expressions. You no longer have to rewrite your code after you inline it; the IDE now analyzes the lambda syntax more thoroughly and formats the lambdas correctly.

Structural Search and Replace for Kotlin

Structural search and replace (SSR) actions are now available for Kotlin. You can find and replace code patterns while taking the syntax and semantics of the source code into account.

To use the feature, go to Edit | Find | Search Structurally…. You can write a search template yourself or choose Existing Templates… by clicking the tools icon in the top right corner of the window.

You can add filters for variables to narrow down your search.

Full support of EditorConfig

Starting from 2020.3, the Kotlin plugin fully supports storing code formatting settings in .editorconfig files.

Desktop and Multiplatform templates for Jetpack Compose for Desktop

Jetpack Compose is a modern UI framework for Kotlin that makes it easy and enjoyable to build performant and beautiful user interfaces. The new experimental Jetpack Compose for Desktop templates are now available in the Kotlin Project Wizard. You can create a project using the Desktop or Multiplatform template. The Desktop template targets the desktop JVM platform, and the Multiplatform template targets the desktop JVM platform and Android, with shared code in common modules.

You can read more about the Jetpack Compose features in this blog post, look through the examples of Compose applications, and try them out in the latest version of the Kotlin plugin.

Learn about the Kotlin Plugin updates in What’s new.

We’ve added a lot of new IDE features that are all designed for greater productivity and to make working with the code more fun.

  • Now when you run an application in debug mode, you get clickable inline hints that you can expand to see all the fields that belong to the variable. Moreover, you can change the variable values inside the drop-down list.
  • Work together on your code with your colleagues. In IntelliJ IDEA 2020.3 you can use Code With Me, a tool for collaborative development and remote pair programming.
  • In IntelliJ IDEA 2020.3, we made it easier to start analyzing snapshots from the updated Profiler tool window. The Profiler allows you to view and study snapshots to track down performance and memory issues. You can open a snapshot file in the Recent Snapshots area and analyze it from there.
  • Evaluate math expressions and find Git branches and commits by their hashes in Search Everywhere.
  • Split the editor with drag and drop tabs.

These and other updates are described in detail in What’s new for IDEA 2020.3.

Don’t forget to share your feedback!

Continue Reading Kotlin Plugin Released With IDEA 2020.3

Revamped Kotlin Documentation – Give It a Try

We’re revamping our Kotlin documentation to bring you more helpful features and improve your user experience. To name just a few advantages, the new Kotlin documentation:

  • Is mobile friendly.
  • Has a completely new look and improved structure.
  • Provides easy navigation on each page.
  • Lets you share feedback on every page.
  • Lets you copy code with a single click.

More features such as dark theme support are coming soon.

Kotlin revamped documentation

Before going to production and deprecating the current version of the documentation, we want to ensure that we haven’t missed any critical issues. We’d be extremely grateful if you would look through the new revamped documentation and share your feedback with us.

View new documentation 👀

When you view the Kotlin documentation you’ll still see it in the old format. To view the documentation in the new revamped format, click this link. This will add a cookie to your browser which will enable you to see the documentation in the new format.

New Kotlin documentation

If you want to check anything in the old version of the documentation, open the usual link in another browser or use an incognito window in your current browser.

If you find that there are problems with the new documentation and you would like to revert to the old format, please let us know about these issues and click this link. This link will remove the cookie and you will only see the documentation in the old format.

Share your feedback 🗣

Your feedback is very important for us. You can:

  • Add comments to this blog post.
  • Share feedback in the #docs-revamped channel in our Kotlin public Slack (get an invite).
  • Report an issue to our issue tracker.
  • Email us at doc-feedback@kotlinlang.org.
  • Share your feedback with us at the bottom of a specific documentation page by answering No to the question Was this page helpful? and filling in the feedback form.

Widget for providing feedback

What’s next? 👣

We will collect your feedback over the next two weeks and analyze it. If there are critical issues that many of you point out, we will do our best to address them before going to production and deprecating the existing documentation.

We will also communicate our plan to address all the feedback that we receive from you. We can’t promise that we will be able to do everything at once, but we will strive to improve the Kotlin documentation and to bring you the best possible user experience! Your feedback will help us achieve this.

Check the #docs-revamped channel in our public Slack (get an invite) for more information and updates.

Try out the 🆕 documentation now!

Continue Reading Revamped Kotlin Documentation – Give It a Try

Kotlin 1.4.20 Released

Kotlin 1.4.20 is here with new experimental features for you to try. Being open to community feedback is one of the Kotlin team’s basic principles, and we need your thoughts about the prototypes of the new features. Give them a try and share your feedback on Slack (get an invite here) or YouTrack.

Kotlin 1.4.20

Here are some of the key highlights:

  • Support for new JVM features, like string concatenation via invokedynamic.
  • Improved performance and exception handling for KMM projects.
  • Extensions for JDK Path:
    Path("dir") / "file.txt"


We are also shipping numerous fixes and improvements for existing features, including those added in 1.4.0. So if you encountered problems with any of those features, now’s a good time to give them another try.

Read on to learn more about the features of Kotlin 1.4.20. You can also find a brief overview of the release on the What’s new in Kotlin 1.4.20 page in Kotlin docs. The complete list of changes is available in the change log.

As always, we’d like to thank our external contributors who helped us with this release.

Now let’s dive into the details!


Kotlin/JVM

On the JVM, we’ve added the new JVM 15 target and mostly focused on improving existing functionality and performance, as well as fixing bugs.

invokedynamic string concatenation

Since Java 9, string concatenation on the JVM has been done via dynamic method invocation (the invokedynamic instruction in the bytecode). This works faster and consumes less memory than the previous implementation, and it leaves space for future optimizations without requiring bytecode changes.

We have started to implement this mechanism in Kotlin for better performance, and it can now compile string concatenations into dynamic invocations on JVM 9+ targets.

Currently, this feature is experimental and covers the following cases:

  • String.plus in the operator (a + b), explicit (a.plus(b)), and reference ((a::plus)(b)) forms.

  • toString on inline and data classes.

  • String templates, except for those with a single non-constant argument (see KT-42457).

To enable invokedynamic string concatenation, add the -Xstring-concat compiler option with one of the following values:

  • indy-with-constants to perform invokedynamic concatenation on strings with StringConcatFactory.makeConcatWithConstants().
  • indy to perform invokedynamic concatenation on strings with StringConcatFactory.makeConcat().
  • inline to switch back to the classic concatenation via StringBuilder.append().

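As a rough sketch of how such an option can be passed, here is a Gradle (Kotlin DSL) fragment; the task and option names follow the standard Kotlin Gradle plugin conventions, and the chosen value is our own example:

```kotlin
import org.jetbrains.kotlin.gradle.tasks.KotlinCompile

tasks.withType<KotlinCompile> {
    kotlinOptions {
        // invokedynamic concatenation requires a JVM 9+ bytecode target.
        jvmTarget = "9"
        freeCompilerArgs = freeCompilerArgs + "-Xstring-concat=indy-with-constants"
    }
}
```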

Kotlin/JS

Kotlin/JS continues to evolve at a rapid pace, and this release brings a variety of improvements, including new templates for its project wizard, an improved DSL for better control over project configuration, and more. The new IR compiler has also received a brand-new way to compile projects while ignoring errors in their code.

Gradle DSL changes

The Kotlin/JS Gradle DSL has received a number of updates that simplify project setup and customization, including webpack configuration adjustments, modifications to the auto-generated package.json file, and improved control over transitive dependencies.

Single point for webpack configuration

Kotlin 1.4.20 introduces a new configuration block for the browser target called commonWebpackConfig. Inside it, you can adjust common settings from a single point, instead of duplicating configurations for webpackTask, runTask, and testTask.

To enable CSS support by default for all three tasks, you just need to include the following snippet in the build.gradle(.kts) file of your project:

kotlin {
    browser {
        commonWebpackConfig {
            cssSupport.enabled = true
        }
    }
}

package.json customization from Gradle

The package.json file generally defines how a JavaScript project should behave, identifying scripts that are available to run, dependencies, and more. It is automatically generated for Kotlin/JS projects during build time. Because the contents of package.json vary from case to case, we’ve received numerous requests for an easy way to customize this file.

Starting with Kotlin 1.4.20, you can add entries to the package.json project file from the Gradle build script. To add custom fields to your package.json, use the customField function in the compilation’s packageJson block:

kotlin {
    js(BOTH) {
        compilations["main"].packageJson {
            customField("hello", mapOf("one" to 1, "two" to 2))
        }
    }
}

When you build the project, this will add the following block to the generated package.json file:

"hello": {
  "one": 1,
  "two": 2
}

Whether you want to add a scripts field to the configuration, making it easy to run your project from the command line, or want to include information for other post-processing tools, we hope that you will find this new way of specifying custom fields useful.
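A hedged sketch of the scripts idea mentioned above; the field contents here are our own example, not from the release notes:

```kotlin
kotlin {
    js(BOTH) {
        compilations["main"].packageJson {
            // Adds a "scripts" block so the generated package.json can be
            // driven from the command line, e.g. via `npm run clean`.
            customField("scripts", mapOf("clean" to "rm -rf build"))
        }
    }
}
```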

Selective yarn dependency resolutions (experimental)

When including dependencies from npm, there are times when you want to have fine-grained control over their dependencies (transitive dependencies). There are numerous reasons this could be the case. You might want to apply an important upgrade to one of the dependencies of a library you are using. Or you may want to roll back an update of a transitive dependency which currently breaks your application. Yarn’s selective dependency resolutions allow you to override the dependencies specified by the original author, so you can keep on developing.

With Kotlin 1.4.20, we are providing a preliminary (experimental) way to configure this feature from a project’s Gradle build script. While we are still working on a smooth API integration with the rest of the Kotlin/JS options, you can already use the feature through the YarnRootExtension inside the YarnPlugin. To affect the resolved version of a package for your project, use the resolution function. In its arguments, specify the package name selector (as specified by Yarn) and the desired version.

An example configuration for selective dependency resolution in your build.gradle.kts file would look like this:

rootProject.plugins.withType<YarnPlugin> {
    rootProject.the<YarnRootExtension>().apply {
        resolution("react", "16.0.0")
        resolution("processor/decamelize", "3.0.0")
    }
}

Here, all of your npm dependencies that require react will receive version 16.0.0, and processor will receive its decamelize dependency as version 3.0.0. Additionally, you can also pass include and exclude invocations to the resolution block, which allows you to specify constraints about acceptable versions.

Disabling granular workspaces (experimental)

To speed up build times, the Kotlin/JS Gradle plugin only installs the dependencies that are required for a particular Gradle task. For example, the webpack-dev-server package is only installed when you execute one of the *Run tasks, and not when you execute the assemble task. While this means unnecessary downloads are avoided, it can create problems when running multiple Gradle processes in parallel. When the dependency requirements clash, the two installations of npm packages can cause errors.

To resolve this issue, Kotlin 1.4.20 includes a new (experimental) option to disable these so-called granular workspaces. Like the experimental support for selective dependency resolutions, this feature is currently also accessible through the YarnRootExtension, but it will likely be integrated more closely with the rest of the Kotlin/JS Gradle DSL. To use it, add the following snippet to your build.gradle.kts file:

rootProject.plugins.withType<YarnPlugin> {
    rootProject.the<YarnRootExtension>().disableGranularWorkspaces()
}

With this configuration, the Kotlin/JS Gradle plugin will install all npm dependencies that may be used by your project, including those used by tasks that are not currently being executed. This means that the first Gradle build might take a bit longer, but the downloaded dependencies will be up to date for all tasks you run. This way, you can avoid conflicts when running multiple Gradle processes in parallel.

New Wizard templates

To give you more convenient ways to customize your project during creation, the project wizard for Kotlin comes with new adjustable templates for Kotlin/JS applications. There are templates for both the browser and Node.js runtime environments. They serve as a good starting point for your project and make it possible to fine-tune the initial configuration. This includes settings like enabling the new IR compiler or setting up additional library support.

With Kotlin 1.4.20, there are three templates available:

  • Browser Application allows you to set up a barebones Kotlin/JS Gradle project that runs in the browser.
  • React Application contains everything you need to start building a React app using the appropriate kotlin-wrappers. It provides options to enable integrations for style-sheets, navigational components, and state containers.
  • Node.js Application preconfigures your project to run in a Node.js runtime. It comes with the option to directly include the experimental kotlinx-nodejs package, which we introduced in a previous post.

Ignoring compilation errors (experimental)

With Kotlin 1.4.20, we are also excited to showcase a brand-new feature available in the Kotlin/JS IR compiler: ignoring compilation errors. This feature allows you to try out your application even while it is in a state where it usually wouldn’t compile, for example when you’re doing a complex refactoring or working on a part of the system that is completely unrelated to a compilation error. With this new compiler mode, the compiler ignores any erroneous code and replaces it with runtime exceptions instead of refusing to compile.

Kotlin 1.4.20 comes with two tolerance policies for ignoring compilation errors in your code:

  • In SEMANTIC mode, the compiler will accept code that is syntactically correct but doesn’t make sense semantically. An example of this would be a statement containing a type mismatch (like val x: String = 3).

  • In SYNTAX mode, the compiler will accept any and all code, even if it contains syntax errors. Regardless of what you write, the compiler will still try to generate a runnable executable.

As an experimental feature, ignoring compilation errors requires an opt-in through a compiler option. It’s only available for the Kotlin/JS IR compiler. To enable it, add the following snippet to your build.gradle.kts file:

kotlin {
    js(IR) {
        compilations.all {
            compileKotlinTask.kotlinOptions.freeCompilerArgs += listOf("-Xerror-tolerance-policy=SYNTAX")
        }
    }
}

We hope that compilation with errors will help you tighten feedback loops and increase your iteration speed when working on Kotlin/JS projects. We look forward to receiving your feedback, and any issues you find while trying this feature, in our YouTrack.

As we continue to refine the implementation of this feature, we will also offer a deeper integration for it with the Kotlin/JS Gradle DSL and its tasks at a later point.


Kotlin/Native

Performance remains one of Kotlin/Native’s main priorities in 1.4.20. A key feature in this area is a prototype of the new escape analysis mechanism that we plan to polish and improve in coming releases. And of course, there are also smaller performance improvements, such as faster range checks.
Another aspect of the improvements to Kotlin/Native development in 1.4.20 is polishing and bug fixing. We’ve addressed a number of old issues, as well as those found in new 1.4 features, for example in the code sharing mechanism. One set of improvements fixes behavior inconsistencies between Kotlin/Native and Kotlin/JVM in corner cases, such as property initialization or the way equals and hashCode work on functional references.

Finally, we have extended Objective-C interop capabilities with an option to wrap Objective-C exceptions into Kotlin exceptions, making it possible to handle them in the Kotlin code.

Escape analysis

Escape analysis is a technique that the compiler uses to decide whether an object can be allocated on the stack or should “escape” to the heap. Allocation on the stack is much faster and doesn’t require garbage collection in the future.
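To build intuition, here is a hypothetical Kotlin sketch; the names and the analysis outcome are our own illustration, not actual compiler output:

```kotlin
class Point(val x: Double, val y: Double)

// Both Point instances are used only inside this function; they are never
// stored in a field, captured by a closure, or returned, so an escape
// analysis pass could allocate them on the stack instead of the heap.
fun distance(x1: Double, y1: Double, x2: Double, y2: Double): Double {
    val a = Point(x1, y1)
    val b = Point(x2, y2)
    return kotlin.math.hypot(b.x - a.x, b.y - a.y)
}

// By contrast, this object "escapes" through the return value, so it must
// remain heap-allocated.
fun origin(): Point = Point(0.0, 0.0)
```

Whether a given allocation is actually moved to the stack depends on the analysis itself, so treat this only as an illustration of the concept.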

Although Kotlin/Native already had a local escape analysis, we are now introducing a prototype implementation of a new, more efficient global escape analysis. It is performed in a separate compilation phase for release builds (with the -opt compiler option).

This prototype has already yielded some promising results, such as a 10% average performance increase on our benchmarks. We’re researching ways to optimize the algorithm so that it finds more objects for stack allocation and speeds up the program even more.

While we continue to work on the prototype, you can greatly help us by trying it out and sharing the results you get on your real-life projects.

If you want to disable the escape analysis phase, use the -Xdisable-phases=EscapeAnalysis compiler option.

Opt-in wrapping of Objective-C exceptions

The purpose of exceptions in Objective-C is quite different from their purpose in Kotlin. Their use is normally limited to finding errors during development. But technically, Objective-C libraries can throw exceptions at runtime. Previously, there was no way to handle such exceptions in Kotlin/Native, and an NSException thrown from a library caused the termination of the whole Kotlin/Native program.

In 1.4.20, we’ve added an option to handle such exceptions at runtime to avoid program crashes. You can opt in to wrap NSExceptions into Kotlin’s ForeignExceptions for further handling in the Kotlin code. Such a ForeignException holds the reference to the original NSException, which lets you get information about the root cause.

To enable the wrapping of Objective-C exceptions, specify the -Xforeign-exception-mode objc-wrap option in the cinterop call, or add the foreignExceptionMode = objc-wrap property to the .def file. If you use the CocoaPods integration, specify the option in the pod {} build script block of a dependency like this:

pod("foo") {
    extraOpts = listOf("-Xforeign-exception-mode", "objc-wrap")
}

The default behavior remains unchanged: the program terminates when an exception is thrown from the Objective-C code.

CocoaPods plugin improvements

Improved task execution

In this release, we’ve significantly improved the flow of task execution. For example, if you add a new CocoaPods dependency, existing dependencies are not rebuilt. Adding an extra target also doesn’t trigger the rebuilding of dependencies for existing targets.

Extended DSL

In 1.4.20, we’ve extended the DSL for adding CocoaPods dependencies to your Kotlin project.

In addition to local Pods and Pods from the CocoaPods repository, you can add dependencies on the following types of libraries:

  • A library from a custom spec repository.
  • A remote library from a Git repository.
  • A library from an archive (also available by arbitrary HTTP address).
  • A static library.
  • A library with custom cinterop options.

The previous DSL syntax is still supported.

Let’s examine a couple of DSL changes in the following examples:

  • A dependency on a remote library from a Git repository.
    You can specify a tag, commit, or branch by using the corresponding keywords, for example:

        pod("JSONModel") {
            source = git("https://github.com/jsonmodel/jsonmodel.git") {
                branch = "key-mapper-class"
            }
        }

    You can also combine these keywords to get the necessary version of a Pod.

  • A dependency on a library from a custom spec repository.
    Use the special specRepos parameter for it:

        specRepos {
            url("https://github.com/Kotlin/kotlin-cocoapods-spec.git")
        }
        pod("example")
You can find more examples in the Kotlin with CocoaPods sample.

Updated integration with Xcode

To work correctly with Xcode, Kotlin requires some Podfile changes:

  • If your Kotlin Pod has any Git, HTTP, or specRepo Pod dependencies, you should also specify them in the Podfile. For example, if you add a dependency on AFNetworking from the CocoaPods repository, declare it in the Podfile as well:

    pod 'AFNetworking'
  • When you add a library from a custom spec, you should also specify the location of the specs at the beginning of your Podfile:

    source 'https://github.com/Kotlin/kotlin-cocoapods-spec.git'
    target 'kotlin-cocoapods-xcproj' do
      # ... other Pods ...
      pod 'example'
    end

Integration errors now have detailed descriptions in IntelliJ IDEA, so if you have any problems with your Podfile you will immediately get information about how to fix them.

Take a look at the Kotlin with CocoaPods sample. It contains an example of Xcode integration with an existing Xcode project named kotlin-cocoapods-xcproj.

Support for Xcode 12 libraries

We have added support for new libraries delivered with Xcode 12. Feel free to use them in your Kotlin code!

Updated structure of multiplatform library publications

Before Kotlin 1.4.20, multiplatform library publications included platform-specific publications and a metadata publication. However, there was no need to depend solely on the metadata publication, so this artifact was never used explicitly.

Starting from Kotlin 1.4.20, there is no longer a separate metadata publication. Metadata artifacts are now included in the root publication, which stands for the whole library and is automatically resolved to the appropriate platform-specific artifacts when added as a dependency to the common source set.
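For instance, a consumer can now depend on the library’s root coordinates directly from the common source set. A minimal Gradle (Kotlin DSL) sketch, with hypothetical library coordinates:

```kotlin
kotlin {
    sourceSets {
        val commonMain by getting {
            dependencies {
                // Hypothetical coordinates: the single root artifact is
                // resolved to the right platform-specific artifact for
                // each target of the consuming project.
                implementation("com.example:my-mpp-lib:1.0.0")
            }
        }
    }
}
```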

Note that you must not add an empty artifact without a classifier to the root module of your library to meet the requirements of repositories such as Maven Central, as this will result in a conflict with metadata artifacts that are now included in this module.

Compatibility with libraries published in 1.4.20

If you have enabled hierarchical project structure support and want to use a multiplatform library that was published with such support in Kotlin 1.4.20 or higher, you will need to upgrade Kotlin in your project to version 1.4.20 or higher, as well.

If you are a library author and you publish your multiplatform library in Kotlin 1.4.20+ with hierarchical project structure support, keep in mind that users with earlier Kotlin versions who also have hierarchical project structure support enabled will not be able to use your library. They will need to upgrade Kotlin to 1.4.20 or higher.

However, if you or your library’s users do not enable hierarchical project structure support, those with earlier Kotlin versions will still be able to use your library.

Learn more about publishing a multiplatform library.

Standard library changes

Extensions for java.nio.file.Path

Starting from 1.4.20, the standard library provides experimental extensions for java.nio.file.Path.
Working with the modern JVM file API in an idiomatic Kotlin way is now similar to working with the File extensions from the kotlin.io package. There is no longer any need to call static methods of Files, because most of them are now available as extensions on the Path type.

The extensions are located in the kotlin.io.path package. Since Path itself is available in JDK 7 and higher, the extensions are placed in the kotlin-stdlib-jdk7 module. In order to use them, you need to opt in to the experimental annotation @ExperimentalPathApi:

// construct path with the div (/) operator
val baseDir = Path("/base")
val subDir = baseDir / "subdirectory" 

// list files in a directory
val kotlinFiles: List<Path> = Path("/home/user").listDirectoryEntries("*.kt")

We especially want to thank our contributor AJ Alt for submitting the initial PR with these extensions.

Improved performance of the String.replace function

We are always thrilled when the Kotlin community suggests improvements, and the following is one such case. In this release, we’ve changed the implementation of the String.replace() function.

The case-sensitive variant now uses a manual replacement loop based on indexOf, while the case-insensitive one uses regular expression matching.

This improvement speeds up the execution of the function in certain cases.
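The observable behavior of the function is unchanged; as a rough illustration of the two variants (our own example, not from the release notes):

```kotlin
// Case-sensitive replace: backed by an indexOf-based loop in 1.4.20.
fun exact(s: String): String = s.replace("Kotlin", "K")

// Case-insensitive replace: backed by regular expression matching.
fun anyCase(s: String): String = s.replace("Kotlin", "K", ignoreCase = true)
```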

Deprecation of Kotlin Android Extensions

Ever since we created Kotlin Android Extensions, they have played a huge role in the growth of Kotlin’s popularity in the Android ecosystem. With these extensions, we provided developers with convenient and efficient tools for reducing boilerplate code:

  • Synthetic views (kotlinx.android.synthetic) for UI interaction.

  • A Parcelable implementation generator (@Parcelize) for passing objects around as Parcelable instances.
Initially, we thought about adding more components to kotlin-android-extensions. But this didn’t happen, and we’ve even received user requests to split the plugin into independent parts.

On the other hand, the Android ecosystem is always evolving, and developers are getting new tools that make their work easier. Some gaps that Kotlin Android Extensions were filling have now been covered by native mechanisms from Google. For example, regarding the concise syntax for UI interaction, there is now Android Jetpack, which has view binding that replaces findViewById, just like Kotlin synthetics.

Given these two factors, we’ve decided to retire synthetics in favor of view binding and move the Parcelable implementation generator to a separate plugin.

In 1.4.20, we’ve extracted the Parcelable implementation generator from kotlin-android-extensions and started the deprecation cycle for the rest of it, which currently is only synthetics. For now, they will continue to work with a deprecation warning. In the future, you’ll need to switch your project to another solution. We will soon add a link to the guidelines for migrating Android projects from synthetics to view binding here, so stay tuned.

The Parcelable implementation generator is now available in the new kotlin-parcelize plugin. Apply this plugin instead of kotlin-android-extensions. The @Parcelize annotation has moved to the kotlinx.parcelize package. Note that kotlin-parcelize and kotlin-android-extensions can’t be applied together in one module.

How to update

Before updating your projects to the latest version of Kotlin, you can try the new language and standard library features online at play.kotl.in. It will be updated to 1.4.20 soon.

In IntelliJ IDEA and Android Studio, you can update the Kotlin Plugin to version 1.4.20 – learn how to do this here.

If you want to work on existing projects that were created with previous versions of Kotlin, use the 1.4.20 Kotlin version in your project configuration. For more information, see the docs for Gradle and for Maven.
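With the Gradle Kotlin DSL, for example, the version is pinned in the plugins block; a minimal sketch for a JVM project:

```kotlin
plugins {
    // Pins the Kotlin Gradle plugin, and therefore the compiler, to 1.4.20.
    kotlin("jvm") version "1.4.20"
}
```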

You can download the command-line compiler from the GitHub release page.

The versions of the kotlinx libraries (kotlinx.coroutines, kotlinx.serialization, etc.) can be found in the corresponding repositories.

The release details and the list of compatible libraries are also available here.

If you run into any problems with the new release, you can find help on Slack (get an invite here) and report issues in our YouTrack.

External contributors

We’d like to thank all of our external contributors whose pull requests were included in this release:

Jinseong Jeon
Toshiaki Kameyama
Steven Schäfer
Mads Ager
Mark Punzalan
Ivan Gavrilovic
Jim Sproch
Kristoffer Andersen
Aleksandrina Streltsova
Konstantin Virolainen
AJ Alt
Henrik Tunedal
Juan Chen
Valeriy Vyrva
Alex Chmyr
Alexey Kudravtsev
Andrey Matveev
Aurimas Liutikas
Dat Trieu
Dereck Bridie
Efeturi Money
Elijah Verdoorn
Francesco Vasco
Gia Thuan Lam
Guillaume Darmont
Jake Wharton
Julian Kotrba
Kevin Bierhoff
Matthew Gharrity
Raluca Sauciuc
Ryan Nett
Sebastian Kaspari
Vladimir Krivosheev
Pavlos-Petros Tournaris
Robert Bares
Yoshinori Isogai
Derek Bodin
Dominik Wuttke
Sam Wang
Uzi Landsmann
Yuya Urano
Norbert Nogacki
Alexandre Juca

Continue Reading Kotlin 1.4.20 Released

Roman Elizarov is the new Project Lead for Kotlin

TL;DR: I am stepping down as the Project Lead for Kotlin. Roman Elizarov is the new Project Lead.

Kotlin has just had the 10th anniversary of the first commit which I made back in November 2010. In the last ten years we went from a few notes on a whiteboard to millions of users, and from a handful of part-time people to a team of about 100. I’m excited about Kotlin doing so well, thanks to every member of our wonderful community, including every person on the team at JetBrains, every contributor, every speaker and teacher, everybody active on Slack or StackOverflow. Together, we’ll make the future of Kotlin an exciting reality.

Personally, I’ve chipped in with ten years of doing my best. I worked on the design of the language along with Max Shafirov and many other great people who joined later. I wrote the first reference for the language, a big part of the first compiler front-end and the initial IDE integration. I managed the team as it grew bigger and bigger. It was an exciting journey that I’ll never forget.

Now it’s time for me to take a break and think about what I want to do next. I’m not a lifetime-project type of person, and the backlog of ideas I’d like to explore is getting a bit too long, many of them outside the realm of engineering and computer science. So, I’m stepping down as the lead of Kotlin, passing the duty and privilege of driving the project to further success to my excellent colleagues.

Roman Elizarov is the new Project Lead. This includes the authority to finalize all language-related decisions including new features and evolution/retirement of the old ones. Roman is an outstanding engineer and had influenced the design of Kotlin even before he joined the team. We’ve been working together for over 4 years now, and I value every minute of it. Roman is the mastermind of Kotlin coroutines and is now leading all our language research activities. I’m sure he’s a perfect fit for this role. With Roman as the Project Lead, Stanislav Erokhin in charge of the development team, and Egor Tolstoy leading Product Management, I know Kotlin is in good hands. My best wishes! Keep up the great work you folks have been doing all these years.

What’s going to change now for the Kotlin community? Nothing much, really. We’ve been preparing this transition very carefully, and don’t expect any noticeable bumps in the road. I will be in touch with Roman and the team as much as needed, will participate in strategy discussions, and share any and all of my knowledge and experience that the team might need.

JetBrains is 100% committed to Kotlin, and the Kotlin Foundation (created by JetBrains and Google) is doing great. With such a strong backing and such an outstanding community, Kotlin is bound to live long and prosper.

My best memories about the ten years of Kotlin are all about people. I’ve met extraordinary folks at JetBrains, among the early adopters, at Google, Gradle and Spring whom we’ve been partnering with so closely and fruitfully, among the KotlinConf speakers and all the passionate supporters we are so proud to have. I’m so grateful for having the privilege to know you folks, it’s the best part of Kotlin, if you ask me.

I will always remain loyal to Kotlin and our community, it’s a huge and beloved part of my life. You can reach me on Twitter or Kotlin Slack any time. Expect to see you folks in person when offline conferences are a thing again 🙂

All the best, and

Have a nice Kotlin!


Kotlin 1.4 Online Event Recap: Materials and QuizQuest Winners

The Kotlin 1.4 Online Event materials are available for you to watch at any time. Recordings of the talks and Q&A sessions are available on the event web page, along with the slides. You can also watch the recordings sequentially using the Kotlin 1.4 Online Event playlist:

Kotlin 1.4 talks

At the time of writing, the recordings from the Kotlin 1.4 playlist have been viewed more than 120 thousand times in total. The five most viewed talks are:

If you’d like to get a better feel for the atmosphere of the event, as well as more insights from the speakers’ interviews, check out the recordings of the streams in the event playlist.

Q&A sessions and the Kotlin team’s AMA session on Reddit

We received more than 700 questions and managed to answer about half of them during the event. The remaining questions have since been covered during the Kotlin team’s recent AMA session on Reddit. Check it out to find the answers to your questions.

About the Kotlin 1.4 Online Event

The Kotlin 1.4 Online Event took place on October 12–15, 2020. Over 23,000 people from 69 countries joined the live stream to watch the talks, chat with the Kotlin community, and pose questions for the speakers. We received 2090 applications for the virtual booth and more than 5000 submissions in the QuizQuest.

Thanks a million to everyone who joined and filled those four days with knowledge, attention, and support for each other, including all of our speakers, organizers, guests, and partners!

We particularly want to thank those who sent us videos for our community compilation. It is a true gem from our event! Enjoy watching:

QuizQuest results

QuizQuest, one of the most exciting activities of the event, is finished and we have winners in three categories:

  1. Quick Thinker. In this category we raffled off 45 stylish bottles among attendees who solved at least 1 quiz correctly and submitted it before the deadline (16:00 CEST the day after the quiz was posted).
  2. Explorer. All QuizQuesters who submitted at least one quiz and answered 50% of questions correctly were enrolled in this raffle. We randomly chose 50 winners who are getting a T-shirt with the event logo.
  3. Captain Kotlin. In this challenging category, players had to work hard and answer as many questions as possible from all the quizzes. The top 12 scorers are getting a branded backpack from JetBrains, as well as special kudos from the Kotlin team and the community! Here are their names and scores:

    • Mikhail Islentyev – 100.0
    • Aakarshit Uppal – 99.007
    • Ivan Mikhnovich – 98.440
    • Jean-François Michel – 98.355
    • Zoltán Kadlecsik – 98.128
    • Sergey Ryabov – 96.794
    • Nicola Corti – 96.142
    • Maciej Uniejewski – 96.0
    • Bogdan Popov – 95.765
    • Mayank Kharbanda – 95.680
    • KAUSHIK N SANJI – 95.461
    • Dmitri Maksimov – 95.433

All the winners will receive an email with instructions on how to redeem their prizes.

The Kotlin 1.4 Online Event was a great success thanks to the vibrant Kotlin community. We would love for you to join our future events, so if you’re interested, please register to be notified about them.


Jetpack Compose Screenshot Testing with Shot


Weeks ago, the Jetpack Compose team announced their first alpha release, full of useful features, a lot of reference code, and some documentation. It was a great moment for the Android ecosystem! We know Compose is going to change the way Android devs build apps around the world, and we wanted to help. That’s why today we are announcing official screenshot testing support for Jetpack Compose in Shot.


Right after the release, José Alcérreca wrote to us to mention there could be an interesting opportunity to implement screenshot testing support as part of our already implemented screenshot testing library.

Screenshot testing is one of the best testing strategies you can use in your UI layer. We have been using it for years, and we love it ❤️ It takes a picture of your views in the state you want and saves it as a reference in your repository. Once you approve these screenshots as correct, they will be used to ensure there are no bugs in your UI layer in subsequent test executions. If there is an error, you’ll get a report with the screenshot diff showing you where the failure is. Here is an example of a test that failed because, by mistake, the account number is not shown using the * mask it had before:

Jetpack Compose Screenshot Testing with Shot

Look at the third image! The “diff” column shows where the error is using pixel-perfect precision 📸

Jetpack Compose Screenshot Testing with Shot
original screenshot – new screenshot – diff report

In case you would like to know more about this testing strategy, you can find a detailed blog post by Eduardo Pascua here.

However, at this point, you might be wondering what a Jetpack Compose screenshot test looks like 🤔 Don’t worry, it’s really simple. We’ve provided a simple interface named “ScreenshotTest” for you. You can add this interface to any instrumentation test and use it to take a screenshot of your components as follows:

class OverviewBodyTest : ScreenshotTest {

    @get:Rule
    val composeTestRule = createComposeRule()

    @Test
    fun rendersDefaultOverviewBody() {
        renderOverviewBody()
        compareScreenshot(composeTestRule)
    }

    private fun renderOverviewBody() {
        composeTestRule.setContent {
            RallyTheme {
                OverviewBody()
            }
        }
    }
}
Easy, isn’t it? And fully compatible with all the testing tools provided by the Jetpack Compose team! Once you run your tests, Shot will generate a detailed report for you with all the screenshots recorded as follows:

Jetpack Compose Screenshot Testing with Shot

You will also find a useful plain console report whenever you execute your tests in CI or from your terminal:

Jetpack Compose Screenshot Testing with Shot

Now it is time to test your components using screenshot tests! If you want to review more information about Shot, you can check the official documentation here. In case you would like to review a sneak peek of what a Compose app with screenshot tests would look like, we’ve prepared a branch of the official Google code samples with some tests written using Shot.

We hope you find Shot super useful and you can start using it soon. Remember it can also be used with regular Android apps if you can’t use Jetpack Compose right now!


The Dark Secrets of Fast Compilation for Kotlin

Compiling a lot of code fast is a hard problem, especially when the compiler has to perform complex analyses such as overload resolution and type inference with generics. In this post, I’ll tell you about a huge and largely invisible part of Kotlin which makes it compile much faster on relatively small changes that happen a lot in an everyday run-test-debug loop.

Also, we are looking for Senior Developers to join the team at JetBrains working on fast compilation for Kotlin, so if you are interested, look at the bottom of this post.

Let’s start with the obligatory XKCD comic, #303:

XKCD comic 303: Compiling

This post is about one very important aspect of every developer’s life: How long does it take to run a test (or just hit the first line of your program) after you make a change in the code. This is often referred to as time-to-test.

Why this is so important:

  • If time-to-test is too short, you are never forced to get some coffee (or have a sword fight).
  • If time-to-test is too long, you start browsing social media or get distracted in some other way, and lose track of what the change you made was.

While both situations arguably have their pros and cons, I believe it’s best to take breaks consciously and not when your compiler tells you to. Compilers are smart pieces of software, but healthy human work schedules are not what compilers are smart at.

Developers tend to be happier when they feel productive. Compilation pauses break the flow and make us feel stuck, stopped in our tracks, unproductive. Hardly anybody enjoys that.

Why does compilation take so long?

There are generally three big reasons for long compilation times:

  1. Codebase size: compiling 1 MLOC usually takes longer than 1 KLOC.
  2. How well your toolchain is optimized, including the compiler itself and any build tools you are using.
  3. How smart your compiler is: whether it figures many things out without bothering the user with ceremony or constantly requires hints and boilerplate code.

The first two factors are kind of obvious, let’s talk about the third one: the smartness of the compiler. It is usually a complicated tradeoff, and in Kotlin, we decided in favor of clean readable type-safe code. This means that the compiler has to be pretty smart, and here’s why.

Where Kotlin stands

Kotlin is designed to be used in an industrial setting where projects live long, grow big and involve a lot of people. So, we want static type safety to catch bugs early and get precise tooling (completion, refactorings and find usages in the IDE, precise code navigation and such). Then, we also want clean readable code without unneeded noise or ceremony. Among other things, this means that we don’t want types all over the code. And this is why we have smart type inference and overload resolution algorithms that support lambdas and extensions function types, we have smart casts (flow-based typing), and so on. The Kotlin compiler figures out a lot of stuff on its own to keep the code clean and type-safe at the same time.
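As a tiny illustration of that smartness, here is a sketch of code the compiler accepts without a single explicit type annotation or cast (the function and value names are made up for this example):

```kotlin
// No explicit types or casts below: the compiler infers and smart-casts everything.
fun describe(x: Any): String =
    if (x is String) "a string of length ${x.length}"  // smart cast: x is a String here
    else "something else"

// Lambda parameter and result types are inferred, too.
val doubled = listOf(1, 2, 3).map { it * 2 }
```

All of this inference is exactly the kind of work that makes the compiler's job harder, which is what the next section is about.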

Can one be smart and fast at the same time?

To make a smart compiler run fast you certainly need to optimize every bit of the toolchain, and this is something we are constantly working on. Among other things, we are working on a new-generation Kotlin compiler that will run much faster than the current one. But this post is not about that.

However fast a compiler is, it won’t be too fast on a big project. And it’s a huge waste to recompile the entire codebase on every little change you make while debugging. So, we are trying to reuse as much as we can from previous compilations and only compile what we absolutely have to.

There are two general approaches to reducing the amount of code to recompile:

  • Compile avoidance — only recompile affected modules,
  • Incremental compilation — only recompile affected files.

(One could think of an even finer grained approach that would track changes in individual functions or classes and thus recompile even less than a file, but I’m not aware of practical implementations of such an approach in industrial languages, and altogether it doesn’t seem necessary.)

Now let’s look into compile avoidance and incremental compilation in more detail.

Compile avoidance

The core idea of compile avoidance is:

  • Find “dirty” (=changed) files
  • Recompile the modules these files belong to (use the results of the previous compilation of other modules as binary dependencies)
  • Determine which other modules may be affected by the changes
    • Recompile those as well, check their ABIs too
    • Repeat until all affected modules are recompiled
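The loop above can be sketched in a few lines. This is only an illustration with invented names (`dependents`, `compile`), not how any real build tool spells it:

```kotlin
// Sketch of compile avoidance: recompile dirty modules, then propagate along
// the reverse dependency graph whenever a module's ABI changed.
fun recompileAffected(
    dirtyModules: Set<String>,
    dependents: Map<String, List<String>>,  // module -> modules that depend on it
    compile: (String) -> Boolean,           // recompiles a module; true if its ABI changed
) {
    val queue = ArrayDeque(dirtyModules)
    val done = mutableSetOf<String>()
    while (queue.isNotEmpty()) {
        val module = queue.removeFirst()
        if (!done.add(module)) continue     // already recompiled in this build
        if (compile(module)) {
            // The ABI changed, so dependent modules may be affected as well.
            dependents[module].orEmpty().forEach(queue::add)
        }
    }
}
```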

The algorithm is more or less straightforward, provided you know how to compare ABIs. Other than that, we simply recompile the modules affected by the changes. Of course, a change in a module that nobody depends on will compile faster than a change in the ‘util’ module that everybody depends on (if it affects its ABI).

Tracking ABI changes

ABI stands for Application Binary Interface, and it’s kind of the same as API but for the binaries. Essentially, the ABI is the only part of the binary that dependent modules care about (this is because Kotlin has separate compilation, but we won’t go into this here).

Roughly speaking, a Kotlin binary (be it a JVM class file or a KLib) contains declarations and bodies. Other modules can reference declarations, but not all declarations. So, private classes and members, for example, are not part of the ABI. Can a body be part of an ABI? Yes, if this body is inlined at the call site. Kotlin has inline functions and compile-time constants (const val’s). If a body of an inline function or a value of a const val changes, dependent modules may need to be recompiled.

So, roughly speaking, the ABI of a Kotlin module consists of declarations, bodies of inline functions and values of const vals visible from other modules.
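In code, this could look as follows (a toy file, just to mark which pieces count):

```kotlin
// Which parts of this file belong to the module's ABI:
const val DEBUG = false                  // const val: its value is part of the ABI
inline fun log(message: () -> String) {  // inline fun: its body is part of the ABI
    if (DEBUG) println(message())
}
private fun helper() = 42                // private: not part of the ABI at all
fun api(): Int = helper()                // public: the signature is in the ABI, the body is not
```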

A straightforward way to detect a change in the ABI is

  • Store the ABI from the previous compilation in some form (you might want to store hashes for efficiency),
  • After compiling a module, compare the result with the stored ABI:
    • If it’s the same, we are done;
    • If it’s changed, recompile dependent modules.
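A hedged sketch of the hashing idea: fingerprint the ABI declarations (sorting them first, so that mere reordering doesn't count as a change) and compare fingerprints between compilations. The names and the string representation of declarations are made up for illustration:

```kotlin
import java.security.MessageDigest

// Fingerprint a module's ABI declarations so only the hash needs to be stored.
fun abiFingerprint(abiDeclarations: List<String>): String {
    val digest = MessageDigest.getInstance("SHA-256")
    // Sort first: reordering declarations should not look like an ABI change.
    abiDeclarations.sorted().forEach { digest.update(it.toByteArray()) }
    return digest.digest().joinToString("") { "%02x".format(it) }
}

fun dependentsNeedRecompilation(stored: String, current: String): Boolean =
    stored != current
```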

Pros and cons of compile avoidance

The biggest advantage of compile avoidance is its relative simplicity.

This approach really helps when modules are small, because the unit of recompilation is an entire module. If your modules are big, recompilations will be long. So, basically, compile avoidance dictates that we have a lot of small modules, and as developers we may or may not want this. Small modules don’t necessarily amount to bad design, but I’d rather structure my code for people, not machines.

Another observation is that many projects have something like a ‘util’ module where many small useful functions reside. And virtually every other module depends on ‘util’ at least transitively. Now, let’s say I want to add another tiny useful function that is used three times across my codebase. It adds to the module ABI, so all dependent modules are affected, and I get into a long corridor sword fight because my entire project is being recompiled.

On top of that, having a lot of small modules (each of which depends on multiple others) means that the configuration of my project may become huge because for each module it includes its unique set of dependencies (source and binary). Configuring each module in Gradle normally takes some 50-100ms. It is not uncommon for large projects to have more than 1000 modules, so total configuration time may be well over a minute. And it has to run on every build and every time the project is imported into the IDE (for example, when a new dependency is added).

There are a number of features in Gradle that mitigate some of the downsides of compile avoidance: configurations can be cached, for example. Still, there’s quite a bit of room for improvement here, and this is why, in Kotlin, we use incremental compilation.

Incremental compilation

Incremental compilation is more granular than compile avoidance: it works on individual files rather than modules. As a consequence, it does not care about module sizes, nor does it recompile the whole project when the ABI of a “popular” module changes insignificantly. In general, this approach does not restrict the user as much and leads to a shorter time-to-test. Also, developers’ swords get neglected and rusty and beg to be used at least once in a while.

Incremental compilation has been supported in JPS, IntelliJ IDEA’s built-in build system, since forever. Gradle only supports compile avoidance out of the box. As of 1.4, the Kotlin Gradle plugin brings a somewhat limited implementation of incremental compilation to Gradle, and there’s still a lot of room for improvement.

Ideally, we’d just look at the changed files, determine exactly which files depend on them, and recompile all of those files. Sounds nice and easy, but in reality it’s highly non-trivial to determine this set of dependent files precisely. For one thing, there can be circular dependencies between source files, something that’s not allowed for modules in most modern build systems. And the dependencies of individual files are not declared explicitly. Note that imports are not enough to determine dependencies, because of references within the same package and chained calls: for A.b.c() we need to import at most A, but changes in the type of B will affect us too.
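To make the A.b.c() point concrete, here is a hedged toy example (all names invented):

```kotlin
// This file never mentions B in an import, yet it depends on it:
object B { fun c() = "result" }
object A { val b = B }

// Resolving the chained call requires knowing the declared type of A.b,
// so a change to B affects this code even though no import reveals that.
fun caller() = A.b.c()
```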

Because of all these complications, incremental compilation tries to approximate the set of affected files by going in multiple rounds, here’s the outline of how it’s done:

  • Find “dirty” (=changed) files
  • Recompile them (use the results of the previous compilation as binary dependencies instead of compiling other source files)
  • Check if the ABI corresponding to these files has changed
    • If not, we are done!
    • If yes, find files affected by the changes, add them to the dirty files set, recompile
    • Repeat until the ABI stabilizes (this is called a “fixpoint”)
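The rounds above can be sketched roughly like this; `compile` and `filesLookingUp` are invented stand-ins for the real machinery:

```kotlin
// Widen the dirty set until the ABI stabilizes (a fixpoint).
fun compileIncrementally(
    initiallyDirty: Set<String>,
    compile: (Set<String>) -> Set<String>,    // recompiles files; returns names whose ABI changed
    filesLookingUp: (String) -> Set<String>,  // files that looked a name up last time
): Set<String> {
    var dirty = initiallyDirty
    while (true) {
        val changedNames = compile(dirty)
        val affected = changedNames.flatMapTo(mutableSetOf(), filesLookingUp)
        if (dirty.containsAll(affected)) return dirty  // fixpoint reached: we are done
        dirty = dirty + affected  // add affected files and recompile everything together
    }
}
```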

Since we already know how to compare ABIs, there are basically only two tricky bits here:

  1. using results of the previous compilation to compile an arbitrary subset of sources, and
  2. finding files affected by a given set of ABI changes.

Both are at least in part features of Kotlin’s incremental compiler. Let’s look at them one-by-one.

Compiling the dirty files

The compiler knows how to use a subset of the previous compilation results to skip compiling non-dirty files and just load the symbols defined in them to produce the binaries for the dirty files. This is not something a compiler would necessarily be able to do if not for incrementality: producing one big binary from a module instead of a small binary per source file is not so common outside of the JVM world. And it’s not a feature of the Kotlin language, it’s an implementation detail of the incremental compiler.

When we compare ABIs of the dirty files with the previous results, we may find out that we got lucky and no more rounds of recompilation are needed. Here are some examples of changes that only require recompilation of dirty files (because they don’t change the ABI):

  • Comments, string literals (except for const val’s) and such
    • Example: change something in the debug output
  • Changes confined to function bodies that are not inline and don’t affect return type inference
    • Example: add/remove debug output, or change the internal logic of a function
  • Changes confined to private declarations (they can be private to classes or files)
    • Example: introduce or rename a private function
  • Reordering of declarations

As you can see, these cases are quite common when debugging and iteratively improving code.
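For instance, in a sketch like this one (names illustrative), toggling the debug line touches only the function body, so only this file is dirty and a single round suffices:

```kotlin
// Adding or removing the debug output below does not change the ABI:
// the signature `fun total(items: List<Int>): Int` stays the same.
fun total(items: List<Int>): Int {
    // println("summing ${items.size} items")  // debug output: an ABI-neutral change
    return items.sum()
}
```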

Widening the dirty file set

If we are not so lucky and some declaration has been changed, it means that some files that depend on the dirty ones could produce different results upon recompilation even though not a single line was changed in their code.

A straightforward strategy would be to give up at this point and recompile the whole module. This will bring all the issues with compile avoidance to the table: big modules become a problem as soon as you modify a declaration, and tons of small modules have performance costs too, as described above. So, we need to be more granular: find the affected files and recompile them.

So, we want to find the files that depend on the parts of the ABI that actually changed. For example, if the user renamed a function, we only want to recompile the files that care about that name, and leave other files alone even if they refer to some other parts of this ABI. The incremental compiler remembers which files depend on which declaration from the previous compilation, and we can use this data sort of like a module dependency graph. This, again, is not something non-incremental compilers normally do.

Ideally, for every file we should store which files depend on it, and which parts of the ABI they care about. In practice, it’s too costly to store all dependencies so precisely. And in many cases, there’s no point in storing full signatures.

Consider this example:

File: dirty.kt

// rename this to be 'fun foo(i: Int)'
fun changeMe(i: Int) = if (i == 1) 0 else bar().length

File: clean.kt

fun foo(a: Any) = ""
fun bar() = foo(1)

Let’s assume that the user renamed the function changeMe to foo. Note that, although clean.kt is not changed, the body of bar() will change upon recompilation: it will now be calling foo(Int) from dirty.kt, not foo(Any) from clean.kt, and its return type will change too. This means that we have to recompile both dirty.kt and clean.kt. How can the incremental compiler find this out?

We start by recompiling the changed file: dirty.kt. Then we see that something in the ABI has changed:

  • there’s no function changeMe any more,
  • there’s a function foo that takes an Int and returns an Int.

Now we see that clean.kt depends on the name foo. This means that we have to recompile both clean.kt and dirty.kt once again. Why? Because types cannot be trusted.

Incremental compilation must produce the same result as a full recompilation of all sources would. Consider the return type of the newly appeared foo in dirty.kt. It’s inferred, and in fact it depends on the type of bar() from clean.kt, which makes for a circular dependency between the files. So the return type could change when we add clean.kt to the mix. In this case we’ll get a compilation error, but until we recompile clean.kt together with dirty.kt, we don’t know about it.

The big rule of the state-of-the-art incremental compilation for Kotlin: all you can trust is names. And this is why for each file, we store

  • the ABI it produces, and
  • the names (not full declarations) that were looked up during its compilation.

Some optimizations are possible in how we store all this. For example, some names are never looked up outside the file, e.g. names of local variables and in some cases local functions. We could omit them from the index. To make the algorithm more precise, we record which files were consulted when each of the names was looked up. And to compress the index we use hashing. There’s some space for more improvements here.
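Conceptually, the per-file data could be modeled like this (a deliberately simplified, hypothetical shape, not the compiler's actual representation):

```kotlin
// What the incremental compiler remembers about each file.
data class FileIndex(
    val abiHash: String,            // (a hash of) the ABI this file produces
    val lookedUpNames: Set<String>, // names resolved while compiling this file
)

// Given the names whose ABI changed, find the files to add to the dirty set.
fun affectedBy(changedNames: Set<String>, index: Map<String, FileIndex>): Set<String> =
    index.filterValues { entry -> entry.lookedUpNames.any(changedNames::contains) }.keys
```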

As you have probably noticed, we have to recompile the initial set of dirty files multiple times. Alas, there’s no way around this: there can be circular dependencies, and only compiling all the affected files at once will yield a correct result. In the worst case this gets quadratic and incremental compilation may do more work than compile avoidance would, so there should be heuristics in place guarding from it.

Incremental compilation across module boundaries

The biggest challenge to date is incremental compilation that can cross module boundaries.

Say, we have dirty files in one module, we do some rounds and reach a fixpoint there. Now we have the new ABI of this module, and need to do something about the dependent modules.

Of course, we know what names were affected in our initial module’s ABI, and we know which files in the dependent modules looked these names up. Now, we could apply essentially the same incremental algorithm, but starting from an ABI change rather than from a set of dirty files. By the way, if there are no circular dependencies between modules, it’s enough to recompile the dependent files alone. But then, if their ABI changes, we’d need to add more files from the same module to the set and recompile the same files again.

It’s an open challenge to implement this fully in Gradle. It will probably require some changes to the Gradle architecture, but we know from past experience that such things are possible and welcomed by the Gradle team.

Things not covered in this blog post

My goal here was to give you a taste of the fascinating machinery of fast compilation for Kotlin. There are many more things at play there that I deliberately left out, including but not limited to:

  • Build Cache
  • Configuration Cache
  • Task Configuration Avoidance
  • Efficiently storing incremental compilation indexes and other caches on disk
  • Incremental compilation for mixed Kotlin+Java projects
  • Reusing javac data structures in memory to avoid reading Java dependencies twice
  • Incrementality in KAPT and KSP
  • File watchers for quickly finding dirty files


Now, you have a basic idea of the challenges that fast compilation in a modern programming language poses. Note that some languages deliberately chose to make their compilers not that smart to avoid having to do all this. For better or worse, Kotlin went another way, and it seems that the features that make the Kotlin compiler so smart are the ones the users love the most because they provide powerful abstractions, readability and concise code at the same time.

While we are working on a new-generation compiler front-end that will make compilation much faster by rethinking the implementation of the core type-checking and name resolution algorithms, we know that everything described in this blog post will never go away. One reason for this is the experience with the Java programming language, which enjoys the incremental compilation capabilities of IntelliJ IDEA even though its compiler is much faster than kotlinc is today. Another reason is that our goal is to get as close as possible to the development roundtrip of interpreted languages, which enjoy instantaneous pickup of changes without any compilation at all. So, Kotlin’s strategy for fast compilation is: optimized compiler + optimized toolchain + sophisticated incrementality.

Join the team!

If you are interested in working on these kinds of problems, please consider the job opening we currently have on Kotlin’s fast compilation team at JetBrains. Here’s the job listing in English, and here’s one in Russian. Prior experience working on compilers or build tools is NOT required. We are hiring in all JetBrains offices (Saint Petersburg, Munich, Amsterdam, Boston, Prague, Moscow, Novosibirsk), or you can work remotely from anywhere in the world. It will be great to hear from you!


Kotlin 1.4 Online Event

The event of the year is on! Join us on October 12–15 for 4 days of Kotlin as we host the Kotlin 1.4 Online Event. You can tune in at any point during the event to listen to the talks, chat with the team, and ask the speakers questions. In addition to this, there will be lots of activities and entertainment including:

  1. Q&A sessions
  2. QuizQuest, with prize raffles for participants
  3. Virtual Kotlin Booth
  4. Event chat

Read the details below to find out about how to participate in these activities.

Register for Kotlin 1.4 Online Event

Program and speakers

Prepare yourself for four days of exciting talks and discussions about the latest Kotlin release.

Speakers from JetBrains, together with Florina Muntenescu from Google and Sébastien Deleuze from Spring, will shed light on how we came up with all these new features and reveal some of the secrets hidden in the depths of the technology.

  • The first day will be about Kotlin in general. There will be a keynote speech, a community overview, and talks on language features and tooling achievements.
  • The second day is dedicated to Kotlin libraries.
  • The third day will be all about using Kotlin everywhere – on Android, for Mobile Multiplatform projects, and Kotlin/JS.
  • The fourth day will cover server-side Kotlin and the future of the language.

The schedule is available on the website. The timetable is shown according to your computer’s timezone, so you don’t have to worry about figuring out exactly when to log in to watch the talks you are interested in!

I want to join

How to ask the Kotlin team your questions

Each day will be followed by a live Q&A session with the speakers from that day. They will try to answer as many of your questions as they can. You can already start preparing yours now. There are three ways to ask us:

  • You can ask on Twitter using the hashtag #kotlin14ask, so we can find your question and include it in the discussion.
  • You can ask us directly via this form.
  • You can also ask your questions during the event in the live chat.

The most interesting questions will be rewarded with Kotlin swag!

How to join the virtual Kotlin booth

You can chat with members of the Kotlin team directly – simply go to the registration form and sign up for a Virtual Booth session!

Share your Kotlin story with us, chat about any issues in your project, talk to us about how we can help you to convince your manager to use Kotlin, or even just come by and let us know you love Kotlin! We will do our best to cover as many inquiries as possible, but please keep in mind that we can’t guarantee everyone a one-on-one. So the sooner you send in your request, the greater your chance of getting a Virtual Booth appointment will be. To ensure that we can see as many people as possible, we may extend this opportunity, which means your appointment could be scheduled for a time slot before the event starts or sometime after the event concludes.

So, feel free to register for a Virtual Booth and wait for a confirmation email from the team.

QuizQuest: how to get Kotlin swag

Join the QuizQuest for a chance to win Kotlin swag, JetBrains product licenses, and other great prizes!

The QuizQuest is a series of 16 small quizzes that check what you’ve learned from the talks. After each talk, a quiz will be available on the material from the talk. Answer at least 3 of the questions correctly, and you will be entered into a raffle.

There are three raffle categories with different entry criteria:

  1. Quick Thinker. Take the quiz right after the sessions, submit your answers before the beginning of the next day, and if they are all correct you will be entered into a raffle for one of 12 Corkcicle canteen bottles! The more different quizzes you take, the greater your chances of winning.
  2. Explorer. Participate in at least one quiz before October 26 and you will be entered into a raffle where you can win one of 50 special conference t-shirts.
  3. Captain Kotlin. Solve all the quizzes over the event and you will have the chance to win an awesome backpack! We have 12 backpacks to raffle off. To be in with a chance of winning this raffle, you have to submit all your quiz answers by October 26.

The more quizzes you complete, the better your chances to win!

Live chat

During all the event talks and discussions there will be a live chat for audience members. The Kotlin team will be in the live chat too, so if you have any questions for them, you can ask them right there!

Register for Kotlin 1.4 Online Event

After the event

All the videos will be available on the event’s website and our YouTube channel right after the event, so even if you miss something, you can watch it later at any time. You are also encouraged to use the videos for your community events to watch and discuss the recordings together.


The State of Kotlin Support in Spring

This is a transcript of the “The State of Kotlin Support in Spring” talk by Sébastien Deleuze from KotlinConf:

Do you find this format useful? Please share your feedback with us!


Why Kotlin? (1:30)

First, I think that Kotlin improves the signal-to-noise ratio compared to Java. It’s super important to express our ideas and algorithms with something that is easier to understand.

Second, safety, and especially nullability. Kotlin has turned Java’s billion-dollar mistake into a powerful feature for dealing with the absence of a value. The trick is to represent “null” in the type system. Java does not do that, and it makes a big difference.
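To make that concrete, here is a tiny sketch (the greet function is hypothetical) of how nullability lives in the type system:

```kotlin
// Null is part of the type system: String and String? are different types.
fun greet(name: String?): String {
    // The compiler forces the null case to be handled before `name` is used.
    return if (name != null) "Hello, $name" else "Hello, stranger"
}

fun main() {
    // A plain String parameter would reject null at compile time.
    println(greet(null))
}
```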

Third, discoverability. You’re using APIs and new APIs are coming and it’s super important to make these APIs discoverable. I’m lazy and I like to use Ctrl + Space to discover new things! I prefer this way rather than using conventions, or things that you need to know before using the framework.

Pleasure is important as well! We mostly use Kotlin for work, but since we spend so much time doing it, I think it’s really important for it to be a pleasurable experience. And I enjoy using Kotlin much more than Java.

The Android ecosystem is huge and it’s coming to Kotlin very fast. Obviously the server side is moving slower, but I think the Android ecosystem could have an impact on the server side ecosystem, just given the number of developers. Sometimes mobile developers do server side things, not that much, but it happens. Maybe later we could see similar stories with the front end.

Spring❤Kotlin. How much? (3:13)

Framework documentation in Kotlin

At Spring, we decided to translate all of the framework reference documentation from Java into Kotlin. That means that for each code sample in the Spring reference documentation, you have two tabs. One is the Java tab, which is obviously still selected by default, but now there is also a Kotlin tab:

And it’s not a line-by-line or an automated translation! When an extension or a DSL is available, it is shown in the Kotlin code sample. I did the translation sample by sample and it’s idiomatic Kotlin using Spring.

I think that’s a good way to learn Kotlin and the idiomatic way of using Kotlin in the context of Spring. This is available in Spring framework 5.2 documentation. You can go to the current reference docs and you’ll see it!

I’ve tried to summarize the current status and the next steps in this table:

Obviously, translating the Spring framework documentation was a huge undertaking, it took many hours!

Next year we should be able to provide translated Spring Boot documentation as well. And if you want to help – feel free to do so! That would let me focus on other features 🙂 Translating the Spring Boot documentation is not that huge, it’s smaller than the Spring framework documentation and the work has already started via community contributions.

Spring security will have Kotlin documentation as well. And Spring Data is a pretty huge project with 17 sub-projects. Some parts of Spring Data will have Kotlin documentation and some parts won’t.

Gradle Kotlin DSL on start.spring.io (5:04)

Another thing that we have done to increase Kotlin support in the Spring ecosystem is to provide Gradle Kotlin DSL support on start.spring.io:

When you start a Kotlin project with the Gradle build system, we now automatically use the new Kotlin DSL in the project that you generated. Feel free to use that!

More DSLs (5:28)

Spring MVC is provided with MockMvc, which is pretty nice, but it’s not easily discoverable and it relies on a lot of static imports – it’s “Java-ish”.

MockMvc DSL by Clint Checketts and JB Nizet (5:28)

Thanks to community contributions again, specifically to Clint Checketts and Jean-Baptiste Nizet, we now have a MockMvc Kotlin DSL which you can trigger by using these kinds of extensions:

mockMvc.request(HttpMethod.GET, "/person/{name}", "Lee") {
  secure = true
  headers {
     contentLanguage = Locale.FRANCE
  }
  principal = Principal { "foo" }
}.andExpect {
  status { isOk }
  content { contentType(APPLICATION_JSON) }
  jsonPath("$.name") { value("Lee") }
  content { json("""{"someBoolean": false}""", false) }
}.andDo {
  print()
}

When you use these kinds of extensions, it triggers the use of the MockMvc DSL and allows you to describe that in a more Kotlin way. We tried to stay pretty close to the Java way. So if you know MockMVC in Java, it’s not a totally different thing with Kotlin idioms. We have tried to find the right balance between something idiomatic in Kotlin and something which is still close to the Java API.

Spring Cloud Contract DSL by Tim Ysewyn (6:32)

The team from Pivotal has also contributed to the Spring Cloud Contract DSL, so feel free to use that if you’re using Spring Cloud Contract to test your APIs:

contract {
  request {
    url = url("/foo")
    method = PUT
    headers {
        header("foo", "bar")
    }
    body = body("foo" to "bar")
  }
  response {
    status = OK
  }
}

Spring Security DSL by Eleftheria Stein (6:45)

I’m always happy when someone other than me creates some Kotlin features in the Spring ecosystem. Big thanks to the Spring Security team and especially Eleftheria Stein who has done an amazing job to introduce Spring Security Kotlin DSL:

http {
    formLogin {
        loginPage = "/log-in"
    }
    authorizeRequests {
        authorize("/css/**", permitAll)
        authorize("/user/**", hasAuthority("ROLE_USER"))
    }
}

Spring Security is configurable with the Java DSL, which is pretty extensive. The Spring Security team has also created an official Spring Security Kotlin DSL, which is currently experimental, and next year it should be integrated by default in Spring Security 5.3.

Spring MVC DSL and functional API (7:21)

Another big part of the work in Spring Framework 5.2 has been to introduce the router DSL and the functional API for Spring MVC. I don’t know if you have seen my previous presentation, but basically with Spring Framework 5 we introduced a new way, in addition to annotated controllers, to write your web components via a functional API and a Kotlin DSL. It was previously available only with WebFlux, so to use this kind of DSL you had to switch stacks. Now we provide this DSL for both Spring MVC and Spring WebFlux.

Defining Handler & Router (8:02)

The DSL is very similar to the WebFlux version. You are writing your handlers with a ServerRequest input parameter and a ServerResponse return value:

fun hello(request: ServerRequest): ServerResponse

You have this API to define the status codes, the body, and use converters. Everything that you can do with Spring MVC and Annotations, you can do with this kind of API:

fun hello(request: ServerRequest) =
     ServerResponse.ok().body("Hello world!")

You can define the router. Here we have a distinct router and handler, and the router usually refers to the handler using callable references:

router {
  GET("/hello", ::hello)
}

fun hello(request: ServerRequest) =
     ServerResponse.ok().body("Hello world!")

Then you can expose the router to Spring Boot simply by returning the router DSL value and exposing it as a Bean. Spring Boot will automatically take these routes into account:

@Configuration
class RoutesConfiguration {

  @Bean
  fun routes(): RouterFunction<ServerResponse> = router {
     GET("/hello", ::hello)
  }

  fun hello(request: ServerRequest) =
        ServerResponse.ok().body("Hello world!")
}

More realistic Handler & Router (8:52)

If we take a look at a more realistic handler, you will create a dedicated class where you will inject your repositories and your services. Obviously, in Kotlin we always use constructor-based injection:

class PersonHandler(private val repository: PersonRepository) {

  fun listPeople(request: ServerRequest): ServerResponse {
     // ...
  }

  fun createPerson(request: ServerRequest): ServerResponse {
     // ...
  }

  fun getPerson(request: ServerRequest): ServerResponse {
     // ...
  }
}

You then create your router, defining your routes and using callable references to point at these handler methods:

@Configuration
class RouteConfiguration {

  @Bean
  fun routes(handler: PersonHandler) = router {
     accept(APPLICATION_JSON).nest {
        GET("/person/{id}", handler::getPerson)
        GET("/person", handler::listPeople)
     }
     POST("/person", handler::createPerson)
  }
}

You can create as many routers as you want. It’s not monolithic.

It supports nested predicates. You can create your own request predicate; the whole mechanism is very extensible.

Creating routes dynamically (9:35)

You can even create routes dynamically:

@Configuration
class RouteConfiguration {

  @Bean
  fun routes(routeRepository: RouteRepository) = router {
     for (route in routeRepository.listRoutes()) {
        GET("/$route") { request ->
           hello(request, route)
        }
     }
  }

  fun hello(request: ServerRequest, message: String) =
        ServerResponse.ok().body("Hello $message!")
}

It’s not just syntactic sugar or a more Kotlin-ish way to write your web components. It also provides new capabilities, because annotations are statically defined: with annotations you can’t create routes that depend on something in your database, which is not uncommon if you are building a CMS or an e-commerce application.

Obviously, you need to take care of caching and stuff like that. It’s not as easy as it is here, but it opens up new capabilities and I think that’s a pretty interesting way of writing your web components.

Status Update (10:10)

Beans DSL and WebFlux router DSL have been supported for quite a long time now. Spring Boot 2.2 brings the Coroutines router DSL – we are going to see it just after that. The WebMvc router DSL, MockMvc DSL, and the Spring Security DSL are available in experimental mode. Next year all of these components should be available in production.

Obviously, that’s an estimation and it’s possible that not all of these features will get done in time.

Coroutines (10:44)

I guess you have heard quite a lot about coroutines – there is a talk by Roman Elizarov, among various others. I will not go into too much detail, but I would like to share my view of coroutines from the Spring and server-side points of view.

Coroutines are lightweight threads; you can create hundreds of thousands of them, whereas you cannot create that many threads. They provide you with building blocks that allow better scalability and allow new constructs, especially if you deal with asynchronous calls and applications with a lot of I/O, which is typically the case on the server side.
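A minimal sketch of that claim, assuming the kotlinx.coroutines library (the countCompleted helper is hypothetical):

```kotlin
import kotlinx.coroutines.*

// Launch n coroutines that each suspend briefly. With platform threads,
// 100,000 of them would exhaust memory; with coroutines this is cheap.
fun countCompleted(n: Int): Int = runBlocking {
    val jobs = List(n) {
        launch { delay(10) }
    }
    jobs.joinAll()
    jobs.count { it.isCompleted }
}

fun main() {
    println(countCompleted(100_000))
}
```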

The foundations are in the Kotlin language. However, only a very small part is at the language level; most of the implementation is on the library side, in kotlinx.coroutines.

When you talk about coroutines, be careful to define what you are talking about. Maybe you are talking about these lightweight threads. Maybe you are talking about the language feature. Maybe you are talking about the library, which is pretty extensive and contains a lot of things.

I have seen that usually people talk about coroutines but sometimes refer to different things.

From the Spring point of view, Coroutines let you consume the Spring Reactive stack with a nice balance between imperative and declarative styles. Obviously we have a huge investment in reactive APIs in Spring, but there are two levels. There is the whole stack that we are building, so that’s the whole reactive infrastructure. We have built WebFlux, and we are currently working on R2DBC which is reactive SQL and the GA has been released just a few days ago. We are working on RSocket, which is a way to bring reactive outside of Java, outside of the JVM, and this can be used for mobile and frontend in a reactive way. We have a really huge investment in Reactive, and coroutines compete with Reactor at the API level, but that’s not an issue. Our Reactive core is already designed to be exposed to RxJava, for example, or other reactive/asynchronous APIs. Coroutines is just a new way to consume our reactive stack.

With coroutines, operations are sequential by default. Concurrency is possible but explicit.
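A sketch of what “sequential by default, concurrency explicit” means in practice, assuming kotlinx.coroutines (step is a hypothetical suspending call):

```kotlin
import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

// A hypothetical suspending call standing in for an HTTP or database request.
suspend fun step(): Int { delay(100); return 1 }

// Sequential by default: the two calls run one after the other (~200 ms).
fun sequentialMillis(): Long = runBlocking {
    measureTimeMillis { step(); step() }
}

// Concurrency is explicit: async starts both steps at once (~100 ms).
fun concurrentMillis(): Long = runBlocking {
    measureTimeMillis {
        val a = async { step() }
        val b = async { step() }
        a.await() + b.await()
    }
}

fun main() {
    // The sequential version takes roughly twice as long as the concurrent one.
    println(sequentialMillis() > concurrentMillis())
}
```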

With Coroutines 1.3 there are three main building blocks you need to know. There are in fact more pieces in Coroutines, but if you only need to learn three things, these are the ones I’m going to show.

The first one is the suspending function. This is the basic block from the beginning:

suspend fun hello() {
  delay(1000)
  println("Hello world!")
}

You are writing your function, you add the suspend keyword, and it allows you to call an asynchronous operation. Here I’m using the delay suspending function, but imagine you are calling an HTTP client requesting remote data, or maybe a database that provides a Coroutines API. The suspend keyword allows us to define which functions are suspending and are designed to request data asynchronously.

An important design point is that suspending functions “color” your API (as explained by Bob Nystrom in his great blog post What Color is Your Function?). That means you have regular functions and suspending functions. Other upcoming technologies, like Loom for example, take a different approach, but here we have a distinct kind of function. Suspending functions can call regular functions, but regular functions can’t call suspending functions directly; you need to use dedicated capabilities to do that. That’s not necessarily a bad thing in terms of design, but it is something you need to know. In that respect it’s pretty similar to reactive and asynchronous APIs, where you use wrappers like Flux/Mono or CompletableFuture. You have two different worlds – regular functions and coroutines.
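A small sketch of this “coloring”, assuming kotlinx.coroutines (the function names are hypothetical):

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.runBlocking

// A suspending function: only callable from another suspending function
// or from inside a coroutine.
suspend fun fetchGreeting(): String {
    delay(10) // stands in for an asynchronous call (HTTP client, database...)
    return "hello"
}

// A regular function cannot call fetchGreeting() directly; runBlocking is
// one of the dedicated bridges between the two "colors".
fun greetingBlocking(): String = runBlocking { fetchGreeting() }

fun main() {
    println(greetingBlocking())
}
```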

Structured concurrency

Another building block is structured concurrency. Structured concurrency allows you to define asynchronous boundaries:

suspend fun loadAndCombine(name1: String, name2: String) = coroutineScope {
  val deferred1: Deferred<Image> = async { loadImage(name1) }
  val deferred2: Deferred<Image> = async { loadImage(name2) }
  combineImages(deferred1.await(), deferred2.await())
}

When you are writing asynchronous code in an imperative fashion like this – in this example we are loading two remote images – we want to take care of the case where loading one of the images fails. The behavior we want is that the other load is cancelled as well and an exception is thrown.

We have to define these kinds of asynchronous boundaries like we define transactions with databases. Structured concurrency provides building blocks like CoroutineScope that allow us to define asynchronous boundaries to make coroutines aware of what to do when there is an error, and define this kind of behavior.

Flow<T> (15:58)

A big feature of Coroutines 1.3 – I pushed a lot to get this type, and I will explain why – is called “Flow”. Flow is a new type that was experimental in Coroutines 1.2 and became stable in Coroutines 1.3. And here I’m referring to kotlinx.coroutines.flow.Flow, not the Flow type in Java 9+ (java.util.concurrent.Flow), which is a kind of container type for the Reactive Streams interfaces. It might be a little bit confusing, but “Flow” is a very attractive name, so both Oracle and JetBrains like it.

Coroutines 1.3 introduces a new reactive type, and it’s super important because Flow is the coroutines equivalent of Flux in the Reactor world or Flowable in the RxJava world.

When we did the exercise of seeing whether we could translate our reactive APIs to coroutines, suspending functions were great for translating Mono, but we were really missing something for the streams. And I don’t like Channels; I don’t think they are a good building block for server-side applications, so we really missed that piece. I’m thankful to the coroutines team, who did an amazing job providing Flow in time for Spring Framework 5.2.

Flow does not implement Reactive Streams directly because it is based on coroutines building blocks instead of regular Java constructs. But it mostly preserves the semantics and behavior of Reactive Streams, even if not completely, and it provides some support for back pressure.

Basically, “back pressure” is a way for a consumer not to be overwhelmed by a very fast producer. It’s a way for the consumer to say: “OK, you can send me 10 chunks of data and I will process that”, and the producer will send only those 10 chunks, not more. This allows us to build applications that are scalable at the architecture level.
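With Flow, back pressure falls out of suspension: emit suspends until the collector is ready. A minimal sketch, assuming kotlinx.coroutines (collectSlowly is a hypothetical helper):

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

// The producer cannot overwhelm the consumer: emit() suspends until the
// collector below has finished processing the previous value.
fun collectSlowly(): List<Int> = runBlocking {
    val received = mutableListOf<Int>()
    flow {
        for (i in 1..3) emit(i) // suspends until the collector is ready
    }.collect { value ->
        delay(20) // simulate a slow consumer
        received += value
    }
    received
}

fun main() {
    println(collectSlowly()) // [1, 2, 3]
}
```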

That’s the Flow API:

interface Flow<T> {
   suspend fun collect(collector: FlowCollector<T>)
}

interface FlowCollector<T> {
   suspend fun emit(value: T)
}

The Flow API provides a “collect” suspending method with a FlowCollector that itself provides a suspending “emit” method, and that’s it. The core API is really minimal.

Operators – I mean operators similar to what’s available in the RxJava/Reactor world, like map, filter, etc. – are provided as extensions, and some of them are pretty easy to implement:

fun <T> Flow<T>.filter(predicate: suspend (T) -> Boolean): Flow<T> =
   transform { value ->
      if (predicate(value)) return@transform emit(value)
   }

This is the real implementation of the filter operator that is built-in in the coroutines API.

In the reactive world of RxJava and Reactor, if you want to implement a reactive operator you need to have some special skills. I don’t have them myself. It’s pretty hard to implement a reactive operator.

In coroutines – it depends. It’s not always easy. But if you don’t have any concurrency involved, I would say that it’s pretty easy. If you are implementing an operator with concurrency, it’s much easier to do that if you’re in the Kotlin team 🙂
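For instance, here is a sketch of a custom operator with no concurrency involved (runningSum is hypothetical, not part of the library), written the same way as the built-in filter:

```kotlin
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

// A hypothetical operator emitting the running sum of an Int flow.
// Like filter, it is just an extension function built on collect/emit.
fun Flow<Int>.runningSum(): Flow<Int> = flow {
    var sum = 0
    collect { value ->
        sum += value
        emit(sum)
    }
}

fun main() = runBlocking {
    println(flowOf(1, 2, 3).runningSum().toList()) // [1, 3, 6]
}
```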

When you want to create a Flow, you will use this kind of flow builder. You can see that the code inside it is pretty imperative – you just use a for loop:

val flow = flow {
  for (i in 1..3) {
    delay(100)
    emit(i)
  }
}

You can call any kind of suspending function inside the flow builder. If you want to request remote data from your HTTP client or a database, you can do that, because the flow builder takes a suspending lambda, and inside it you can call any suspending function. Here I’m just emitting a number every 100 ms, and that’s it.

When you consume a “flow” it’s more declarative, more functional:

flow.filter { it < 2 }.map(::asyncOperation).collect()
I don’t like it when people say that Reactive is declarative and coroutines are imperative. I think that’s wrong. I think that coroutines provide a nice balance between imperative and declarative. Inside operators like “map” you can use more imperative constructs. I like this balance, where you can use imperative code when you retrieve a single asynchronous value and a declarative, functional approach when you want to describe how you process streams of data. I think that’s the perfect balance.

What about Spring support for coroutines? (21:15)

Spring provides official Coroutines support for Spring WebFlux, Spring Data (Redis, MongoDB, Cassandra, R2DBC), Spring Vault, and RSocket.

On the Spring Data side, we support Redis, MongoDB, Cassandra, and R2DBC. R2DBC is regular SQL, so we provide support for PostgreSQL, H2, MySQL, and Microsoft SQL Server as well.

There are no new types introduced, because I’m lazy and I don’t want to duplicate the whole Spring API!

WebFlux suspending methods (21:55)

And we provide seamless support with the annotation programming model. That means that with WebFlux (not Spring MVC, not for now – that will come later) you can add the suspend keyword to methods annotated with @RequestMapping, @GetMapping, etc., and that will enable you to call any suspending function or any kind of coroutines API inside the handler:
suspend fun suspendingEndpoint(): Banner {
  return Banner("title", "Lorem ipsum")
}

Conceptually, this is like a function that returns a Mono<Banner> if you are using Reactor, or a CompletableFuture<Banner>. It’s just that you don’t have to use the wrapper; you can return the Banner type directly.

It also works for suspending view rendering:

suspend fun render(model: Model): String {
  model["banner"] = Banner("title", "Lorem ipsum")
  return "index"
}

Reactive types extensions for Coroutines APIs (22:35)

And for functional APIs we provide Coroutines support via extensions.

And for functional APIs we provide Coroutines support via extensions. We provide various reactive APIs. WebClient is our reactive web client, and it natively provides methods like bodyToFlux to return a Flux of something and bodyToMono to return a Mono of something. What we have done, in order to avoid duplicating the whole API, is to provide extensions like awaitBody and bodyToFlow, which allow you to use a coroutines API without involving any reactive parts – in other words, without exposing the reactive API directly:

suspend fun flow(): Flow<Banner> = client.get()
     .uri("/messages") // hypothetical endpoint
     .retrieve()
     .bodyToFlow<String>()
     .map { Banner("title", it) }

That allows you to get a Flow or suspending functions, with all of the reactive capabilities underneath handled in a “coroutines” way. That’s our strategy.

We provide this kind of support for WebFlux Router with these kinds of extensions:

coRouter {
  GET("/hello", ::hello)
}

suspend fun hello(request: ServerRequest) =
     ServerResponse.ok().bodyValueAndAwait("Hello world!")

RSocket (23:35)

We provide support for RSocket. RSocket is a way to be reactive between, for example, your server side and your mobile application:

private val requester: RSocketRequester = ... // obtained from RSocketRequester.Builder

fun findRadars(request: MapRequest): Flow<Radar> = requester
    .route("find.radars") // hypothetical route name
    .data(request)
    .retrieveFlow()

I think RSocket has huge potential on Android, but that will be another talk, maybe next year.

You have this kind of reactive flow, you can use all of the Flow operators. You can think of it as all of the reactive capabilities in Spring being exposed in Coroutines, in addition to being exposed as a Reactor API.

Spring Data R2DBC with transactions (24:12)

We even support Coroutines transactions with databases:

class PersonRepository(private val client: DatabaseClient,
                       private val operator: TransactionalOperator) {

   suspend fun initDatabase() = operator.executeAndAwait {
      save(User("smaldini", "Stéphane", "Maldini"))
      save(User("sdeleuze", "Sébastien", "Deleuze"))
      save(User("bclozel", "Brian", "Clozel"))
   }

   suspend fun save(user: User) {
      // ...
   }
}

We have worked a lot on the R2DBC support. R2DBC is reactive SQL. The database client is a kind of fluent API for R2DBC. It’s totally possible for parts of the Kotlin ecosystem to use R2DBC, because R2DBC has no dependencies except Reactive Streams. That makes it possible for Exposed, or any kind of pure Kotlin API, to leverage the R2DBC API without involving any part of Spring.

Here I’m using Spring Data R2DBC, which is our way to expose R2DBC by default. However, R2DBC is really designed as a very low-level thing that is usable in another context, and I really hope that the Coroutines Kotlin ecosystem will leverage all the work we have done at the driver level. Because exposing R2DBC as Coroutines is just using a single extension. There is almost no performance loss and I think that’s a good opportunity for the Kotlin and Coroutines ecosystem.

But here I am using Spring capabilities. I have this transactional operator which is kind of a functional transaction. It allows me to define the transaction boundaries using a reactive stack. There is a transactional extension which is applied to Flow in order to use transactions on top of the Flow type.

We provide support for Spring Data reactive MongoDB and various other things:

class UserRepository(private val mongo: ReactiveFluentMongoOperations) {

  fun findAll(): Flow<User> =
        mongo.query<User>().flow()

  suspend fun findOne(id: String): User = mongo.query<User>()
        .matching(query(where("id").isEqualTo(id))).awaitOne()

  suspend fun insert(user: User): User =
        mongo.insert<User>().oneAndAwait(user)

  suspend fun update(user: User): User =
        mongo.update<User>().replaceWith(user).findReplaceAndAwait()
}

Coroutines are now the default way to go Reactive in Kotlin (26:00)

Coroutines dependency added by default on start.spring.io:

WebFlux & RSocket Kotlin code samples provided with Coroutines API:

Coroutines are now the default way to go Reactive in Kotlin. It’s perfectly fine to use the Reactor API if you want, but I think that people are waiting for some kind of a guideline and opinions. And my thinking is that when you are using the Reactive stack in Kotlin, it’s very likely that you would like to use this Coroutines API.

When you select Kotlin and one of the Reactive dependencies on start.spring.io, the Coroutines dependency will be automatically added. You don’t have to add another dependency manually.

Also, in the reference documentation, all of the reactive examples are written using Coroutines rather than Flux and Mono. Basically, this is now the default way to consume the stack – if you agree, and I hope you do.

Status (26:55)

In terms of scope, we have plenty of things that are already supported: the Spring WebFlux functional APIs, WebClient, the annotation-based programming model with @RequestMapping, RSocket @MessageMapping, the Spring Data Reactive APIs, and functional transactions.

What is missing is Spring MVC @RequestMapping. That’s because Spring MVC has some support for Mono and Flux. It’s less advanced than WebFlux, but it’s usable.

I think that in the Spring framework 5.3, Spring MVC will support Coroutines as well. That’s pretty useful when you want to use WebClient in Spring MVC and obviously that is a missing part.

Spring Data repositories will be supported – a first implementation was done in less than one day, so hopefully that will land soon. @Transactional is not supported yet, but I hope we will be able to support it as well. For now you have to use functional transactions.

Spring Boot (28:04)

What are the new features on the Spring Boot side?


These are the kind of configuration properties that I was writing last year.

class BlogProperties {

  lateinit var title: String
  val banner = Banner()

  class Banner {
     var title: String? = null
     lateinit var content: String
  }
}

@ConfigurationProperties is a type-safe way to expose the properties that you write in .properties or .yml files. Since it was not designed to be used with immutable classes, we had to use these lateinit vars, and I was crying every time I wrote that.

Now we have the @ConstructorBinding annotation, which allows us to instantiate ConfigurationProperties via the constructor with full immutability support, using read-only properties:

@ConstructorBinding
data class BlogProperties(val title: String, val banner: Banner) {
   data class Banner(val title: String?, val content: String)
}

We don’t need mutable properties, and we don’t have this kind of crappy lateinit var; we can just use a class or a data class with read-only properties. Feel free to use it.

start.spring.io helps to share Kotlin projects (29:01)

Another useful thing in the context of Kotlin is that start.spring.io now has two new buttons.

An “Explore” button lets you peek at the contents of the generated project directly in the UI. This avoids the overhead of generating the zip, extracting it, and so on. It’s pretty useful.

The other one is “Share”, which allows you to share the current configuration. If you go to start.spring.io, select Kotlin and Gradle, and want to share it with your co-workers to increase Kotlin’s reach – feel free to use that. Configuration via URL parameters has actually been supported for two years now, but people (including me) keep forgetting about it. Anyway, now you have an easy way to get this kind of customized link.

Kofu: the mother of all DSLs (29:43)

Kofu is another topic and Kofu is experimental. Up until now I was speaking about production-ready stuff, but now we’re entering the dangerous land of experimental stuff.

Kofu is basically the application DSL. We have seen the router DSL, there is a Beans DSL, and there is the MockMvc DSL. We try not to overdo it with DSLs, but it was tempting to do an application one, and I will explain why.

Explicit configuration for Spring Boot using Kotlin DSLs

Spring Boot’s main design is centered around conventions. Basically, with Spring in Java it is too verbose to define everything explicitly, and so Spring Boot provides conventions. I think that’s a pretty widely used and accepted model for configuring Spring Boot, and that’s perfectly fine. There is nothing wrong with using that in Kotlin.

However, since we have this great DSL support and since we have all these other DSLs, it kind of makes sense to try and see what it means for Spring Boot to have a configuration based on a DSL. The interesting thing is that it’s explicit configuration. That means you won’t get Jackson support just because a transitive dependency brings Jackson – which is how Spring Boot currently behaves, based on conditions that trigger on properties and classes present on the classpath. If you are a control freak like me, it’s possible that you will like this kind of thing. It’s experimental, so don’t use it in production yet!

It leverages existing DSLs like Beans, Router, and Security. It also provides composable configuration and discoverability via auto-complete instead of convention.

It currently provides faster startup and less memory consumption, although to me that’s just a detail, because I think that with the upcoming Spring Boot version we should be close to Kofu’s performance even with annotations:

Currently we have a decrease of 40% in startup time and less memory consumption when you configure your Spring Boot application with Kofu instead of Auto-configuration.

But again, I tend to think that this gap will be reduced by the work that we are currently doing so that is not the main point. The main point is getting explicit configuration.

Demo (32:05)

Here’s a quick demo. I’m just going to create a regular Spring Boot application. It’s important to understand that a Spring Boot application configured with Kofu is just a regular Spring Boot application. You still create it via start.spring.io.

I’m going to the spring-fu website. It’s on spring-projects-experimental. I’m going to just copy and paste that. I’m adding the milestone repository. I’m adding the Kofu dependency and it’s just a regular dependency that provides application DSL. We need to import the changes, and if everything goes fine, I should be able to use my application DSL.

Here I’m going to create a Spring MVC application. So, I’ll select the “SERVLET” type and then I’ll have access to a DSL where I can configure things. For example I am going to add SpringMvc support, I am going to use my router, and I’m going to create a route which will just return an “OK” status and return a body with “Hello KotlinConf”.

Everything is explicit so we then need to run this application. So it’s really a way to use existing things. This router is exactly the one that is provided with the Spring framework, but here I’m reusing it in the context of an application configured in a functional style.

Then if I want to add some kind of Jackson support – for example, let’s say I’m creating a class Sample with a val title property, and I want to return this class serialized with Jackson. In Spring Boot this would be configured automatically; here, since the support is explicit, I have to declare it, but that’s on purpose.

When I talk in terms of discoverability: here I can see what can be configured without having to read the documentation. Here I’m using Jackson. If I want to configure pretty print – again, I’m lazy and I don’t have a very good memory – I just need to use my autocomplete to enable the pretty print option, and I can also discover all the other options that Spring Boot provides to configure Jackson. It’s a way to expose what you can do in Spring Boot as a discoverable DSL.
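The configuration built up in this demo looks roughly like the following sketch, based on the spring-fu/Kofu samples (Kofu is experimental, so the exact DSL surface may differ between versions):

```kotlin
// Sketch of the demo: a Spring MVC application configured explicitly with
// the Kofu application DSL (experimental; names may vary across versions).
val app = application(WebApplicationType.SERVLET) {
    webMvc {
        router {
            GET("/") { ServerResponse.ok().body("Hello KotlinConf") }
        }
        converters {
            jackson {
                indentOutput = true // the "pretty print" option from the demo
            }
        }
    }
}

fun main(args: Array<String>) {
    app.run(args)
}
```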

A very important point is that you can create modular configurations and that is one of the main purposes of this kind of construct – it is not monolithic, otherwise it would not be very interesting.

We are going to create a web configuration, we are going to define a logging configuration, and then you select explicitly what you enable because, again, everything is explicit. And if, let’s say, you only want to select the web config, but not the data config for your tests, and mock other things, you will just declare a new customized application just for your tests that will trigger and run just what you need, which means faster integration testing and things like that.

I don’t have the time to describe everything today, but I guess you can understand the potential of this philosophy. You can find more in the spring-fu project, and I will continue to evolve it. Feel free to give feedback – not so much about the scope of the support, but more about the root principles!

My goal is to make Kofu the Spring Boot Kotlin DSL integrated in a production-ready way. We are not there yet. I think the current step is to have more Kotlin market share in regular Spring Boot applications. Which is kind of a chicken and egg issue, but I need more people using Spring Boot in Kotlin in order to trigger the major task of exposing every Spring Boot configuration feature as a DSL.

GraalVM Native (38:10)

The last topic is GraalVM native support. There was a nice talk by Oleg in this room just before me. GraalVM Native in one slide:

That’s a way to compile your Java or JVM application into a static executable. It provides different characteristics: instant startup time, very low memory consumption for small applications, and smaller container images, because the statically linked executables it produces let you build Docker images from scratch. But you still use OpenJDK on the development side, so the development and production runtimes are not fully consistent. In terms of maturity and throughput, we are not at the same level as OpenJDK. What I want to say is that it’s an interesting platform with very different characteristics – it could make it possible to lower cloud hosting costs, but it also comes with a lot of drawbacks – so be careful and evaluate all of the criteria.

Spring Framework 5.3 should be able to configure reflection, proxies, resources, etc. dynamically with GraalVM native: https://github.com/spring-projects-experimental/spring-graal-native.

Kotlin with Spring MVC or Spring WebFlux works on GraalVM native. Coroutines support is broken because of graal#366.

We are going to work on some out-of-the-box setup capabilities for Spring Framework 5.3, and Kotlin is supported. The work is currently being done on spring-graal-native, so feel free to take a look.

It allows you to start your application almost instantly.

It allows you to create scale-to-zero applications. Imagine that you have a back office application that is used 10% of the time, and you deploy it as 20 microservices running all the time, with 40 instances because you want high availability. With this kind of mechanism, each microservice instance will consume less, and you can basically shut a service down when it is not used.

This comes at the cost of a lot of other constraints, and not all of the ecosystem is supported and it’s still super-super early, so be careful. But we are interested in that and we’re working on such support.

It works with Kotlin except for Coroutines. There is an issue with Coroutines that we discussed with the GraalVM team last week; it should be possible to fix it in an upcoming release.

More resources (41:05)


I have covered a lot of topics, but if you want to start more slowly, there is the official Kotlin Spring Boot tutorial, which is continuously updated. This is regular Spring Boot: Spring MVC, annotations, JPA – nothing fancy. So you can start with that – I think that’s a good resource.

There are also my talks at Devoxx. The first one is a two- to three-hour deep dive into Coroutines with Spring, RSocket, and R2DBC, which is much more detailed than what I was able to show today.

There is a dedicated GitHub repository with the front end in Kotlin.js which is another interesting topic. I was not able to go into details, but this topic is covered in more detail in my talk.

The other talk is about running Spring Boot applications as native images. If you want more detailed information about that, or to learn more about GraalVM native from a neutral standpoint – meaning I try to show the advantages and the drawbacks of the platform in a balanced way – you should take a look at that talk.

Thanks a lot!


Kotlin/Native Memory Management Roadmap

TL;DR: The current automatic memory management implementation in Kotlin/Native has limitations when it comes to concurrency and we are working on a replacement. Existing code will continue to work and will be supported. Read on for the full story.


A bit of history

Kotlin/Native is designed to be the Kotlin solution for smooth integration with native platform-specific environments. In essence, its vision is to become to C-compatible languages what Kotlin/JVM is to JVM languages – a pragmatic, concise, safe, and tooling-friendly language for writing code. A very important part of this story is the Objective-C ecosystem of frameworks on various Apple platforms. Kotlin/Native support for the Objective-C ecosystem makes it possible to efficiently share Kotlin code between mobile applications with the ability to naturally and precisely use all the vendor APIs in a way that is hard or impossible to achieve with JVM-based solutions.

When the Kotlin/Native project started back in 2016, it was necessary to devise a memory management scheme. With Objective-C interoperability on the table, a lot of thought and attention was paid to the way automated memory management works there – using reference counting. We even experimented with an approach that completely mimicked that model, which would have offered the benefit of providing the most seamless fit. However, in the Objective-C ecosystem, object graphs with cycles are not managed automatically by the runtime. Programmers have to identify cyclic references and mark them in a special way in the source code. Having toyed with this approach, we quickly came to the conclusion that it comes into stark conflict with the core Kotlin philosophy of making development more enjoyable. Kotlin does require developers to be more explicit where this precision is needed for safety, but not where this extra code is just boilerplate that could have been managed by the language. Still, the reference-counting memory manager was easy to write to get the Kotlin/Native project started, and a cyclic garbage collector based on a trial-deletion algorithm was added to provide the kind of development experience that Kotlin programmers expect.
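
The cycle problem that motivates the trial-deletion collector can be shown in a few lines. This is a minimal, hypothetical sketch: once the caller drops its reference to the returned node, each object is still referenced by the other, so under pure reference counting neither count ever reaches zero.

```kotlin
// A two-object reference cycle. With pure reference counting, each node
// keeps the other's count at 1 even when no external reference remains,
// so the pair would leak without a cycle collector (such as the
// trial-deletion one used by Kotlin/Native's original runtime).
class Node(var next: Node? = null)

fun makeCycle(): Node {
    val a = Node()
    val b = Node(next = a)
    a.next = b // a -> b -> a
    return a
}
```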

As the Kotlin/Native project matured and became more widely adopted, the limitations of such a reference counting-based automated memory management scheme started to become more apparent. For one thing, it’s hard to get high throughput for memory allocation-intensive applications. But while performance is important, it is not the sole factor in Kotlin’s design.

The limitations become worse, though, when you throw multithreading and concurrency into the mix. In all the ecosystems where fully automated reference-counting memory management is successfully used (most notably in Python), concurrency is severely restricted via something like the “Global interpreter lock” mechanism. This approach is not an option for Kotlin. It is imperative for mobile applications to be able to offload CPU-intensive operations into background threads that run in parallel with the main thread.

The current approach

To tackle this, a unique set of restrictions was developed for Kotlin/Native both to make it efficient at running single-thread code and to make it possible to share data between threads. The requirement was added that the object graph must first be frozen to prevent its modification. Only then could it be shared with other threads. Alternatively, it could be fully transferred to another thread as a detached object graph if no references to it remained in the original thread. As a bonus, by only sharing immutable data, the dreaded problem of “shared mutable state” was avoided for the most part. This scheme worked well, for a time.
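
On Kotlin/Native this rule is enforced with `freeze()` from `kotlin.native.concurrent`, which is Native-only. The JVM sketch below (with hypothetical names) illustrates the underlying idea instead: only a deeply immutable object graph is handed to another thread, so no data race is possible.

```kotlin
import kotlin.concurrent.thread

// A deeply immutable object graph: vals of immutable types only.
// On Kotlin/Native, calling freeze() on such a graph makes the same
// guarantee explicit at runtime before sharing it across threads.
data class Config(val host: String, val port: Int)

// Hand the shared immutable config to a background thread and return
// what the worker observed. Since Config cannot be mutated, reading it
// concurrently is safe by construction.
fun readFromWorker(shared: Config): String {
    var seen = ""
    val worker = thread { seen = "${shared.host}:${shared.port}" }
    worker.join()
    return seen
}
```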

While conceptually appealing, the current memory management approach in Kotlin/Native has a number of deficiencies that hinder wider Kotlin/Native adoption. Mobile developers are used to being able to freely share their objects between threads, and they have already developed a number of approaches and architectural patterns to avoid data races while doing so. It is possible to write efficient applications that do not block the main thread using Kotlin/Native, as many early adopters have shown, but the ability to do so comes with a steep learning curve.

There’s also another aspect of Kotlin – the goal of being able to share Kotlin code between platforms. However, some concurrent code, even if it is safe and race-free to begin with, is virtually impossible to share between Kotlin/JVM and Kotlin/Native. In particular, various concurrent data structures and synchronization primitives, which could be both generic and domain-specific, turned out to be notoriously hard to share between the two.

We encountered particular challenges when we attempted to implement multithreaded kotlinx.coroutines for Kotlin/Native. Synchronization primitives must internally share a mutable state, which is supported in Kotlin/Native via special atomic references. Yet the existing memory management algorithm does not track cycles through such references. Even after a considerable amount of work, it still suffers from memory leaks in some concurrent execution scenarios, and we don’t have a clear solution to address them.


New memory manager for Kotlin/Native

To solve these problems, we’ve started working on an alternative memory manager for Kotlin/Native that would allow us to lift restrictions on object sharing in Kotlin/Native, automatically track and reclaim all unused Kotlin memory, improve performance, and provide fully leak-free concurrent programming primitives that are safe and don’t require any special management or annotations from the developers. The new memory manager will be used for the whole compiled binary. We plan to introduce it in a way that is mostly compatible with existing code, so code that is currently working will continue to work. Namely, we plan to continue to support object freezing as a safety mechanism for race-free data sharing, and we will be looking at ways to improve Kotlin’s approach to working with immutable data in the whole Kotlin language, not just in Kotlin/Native. Existing annotations that are related to memory management will have an appropriate behavior with the new memory manager to ensure that old code still works. Meanwhile, we’ll continue to support the existing memory manager, and we’ll release multithreaded libraries for Kotlin/Native so you can develop your applications on top of them.


We will share more details in the future as the project develops and as we finalize various design decisions. Stay tuned, and have fun with Kotlin!


Zolla Project – Part 2

In our latest blog post, we talked about Zolla, a fintech product we built for a US company based in California. In that blog post, we described the main challenges we faced and how we solved them. However, today we are going to review the most visible part of the project, the iOS app we built for Zolla.

Our client needed us to implement an iOS app letting users log in through an invitation-based system using either invitation codes or QR codes. Once logged in, users can link their bank account to make deposits and withdrawals from the app. Additionally, they can review their activity on the platform, including all the money movements performed from the app.


When we started to build this app we decided to base our project scaffolding on the following libraries:

  • SwiftLint – A powerful and fully configurable linter.
  • Bitrise – Our favorite continuous integration system.
  • Fabric – Fatal and nonfatal error reporting made easy.
  • SwiftGen – A tool to generate Swift code for accessing the project resources.
  • BothamUI – A Model View Presenter framework for Swift we built.
  • BrightFutures – A framework for using Promises in Swift.
  • Sourcery – Meta-programming for Swift, to stop writing boilerplate code.
  • SwiftyBeaver – A powerful logger we linked to Crashlytics for nonfatal error reporting.
  • SwiftDate – All you need to deal with Dates in Swift.
  • Lottie – An iOS library to natively render After Effects vector animations.
  • SDWebImage – Asynchronous image downloader with cache support as a UIImageView category.
  • XCTest – The popular testing framework Apple created.
  • KIF – An iOS Functional Testing Framework.
  • iOSSnapshotTestCase – Snapshot view unit tests for iOS.
  • SwiftCheck – Property-based testing for Swift.
  • OHHTTPStubs – A library for stubbing HTTP requests.
  • Swagger Code Gen – A tool to generate clients based on a Swagger spec.

These are the most relevant tools we used to build this product. But including a bunch of GitHub repositories is not the only thing we did. How we used these tools was the key to success in this project.

Thanks to Sourcery and all the meta-programming templates it already provides, we were able to write tests with mocks automatically generated at build time:

class CredentialStorageMock: CredentialStorage {

    // MARK: - save

    var saveSignInInfoHasSignedInCallsCount = 0
    var saveSignInInfoHasSignedInCalled: Bool {
        return saveSignInInfoHasSignedInCallsCount > 0
    }
    var saveSignInInfoHasSignedInReceivedArguments: (signInInfo: SignInInfo, hasSignedIn: Bool)?

    func save(signInInfo: SignInInfo, hasSignedIn: Bool) {
        saveSignInInfoHasSignedInCallsCount += 1
        saveSignInInfoHasSignedInReceivedArguments = (signInInfo: signInInfo, hasSignedIn: hasSignedIn)
    }

    // MARK: - read

    var readCallsCount = 0
    var readCalled: Bool {
        return readCallsCount > 0
    }
    var readReturnValue: (signInInfo: SignInInfo, hasSignedIn: Bool)?

    func read() -> (signInInfo: SignInInfo, hasSignedIn: Bool)? {
        readCallsCount += 1
        return readReturnValue
    }

    // MARK: - clear

    var clearCallsCount = 0
    var clearCalled: Bool {
        return clearCallsCount > 0
    }

    func clear() {
        clearCallsCount += 1
    }
}

Thanks to Lottie, we could add impressive animations implemented in After Effects to our application. The walkthrough we built with these animations is stunning.


The usage of different testing strategies, such as property-based testing and screenshot testing, let us write automated tests like these:

    // Screenshot test example
    func test_verify_email_try_to_sign_in_once_it_is_opened() {
        // ...
    }

    // Property-based test
    property("NextAutoDepositDateCalculator should return the same day if it's today") <- forAll(Date.arbitrary) { today in
        let calculator = NextAutoDepositDateCalculator(dateProvider: { today })

        let nextAutoDepositDate = calculator.execute(repeating: Repeating.Weekly(at: WeekDay(rawValue: today.weekday)!))

        return nextAutoDepositDate == today
    }

Swagger Code Gen, combined with an already documented API, allowed us to generate the API client code automatically from the command line. This sped up our development considerably and reduced the number of bugs found in our API layer.


The usage of all these tools, together with our excellent engineers, resulted in an awesome project you can already enjoy in the App Store. If you need to develop a similar product and need help, we’ll always be happy to review your project’s needs and find the best way to create the platform your users are going to love! Contact us at hello@karumi.com

