
How Do You Use Stack Overflow? The Kotlin Community Survey

Stack Overflow is an essential resource when it comes to learning something new about programming. It is easily searchable, used by millions of people, and extremely popular in the software engineering community. During the Kotlin Census 2020, 55% of respondents mentioned Stack Overflow as a helpful learning resource.

However, sometimes we hear that finding Kotlin-related information on Stack Overflow is not easy. Given the importance of the platform, we’re keen to fix this, and increase the number of questions answered there.

It will help us a lot if you could share how you search for Kotlin-related information on Stack Overflow by filling out our survey.

Fill out the survey

The survey will take around 10 minutes to complete. As a thank you for your help, you will have a chance to win a JetBrains All Product Pack subscription or a $100 Amazon eGift voucher. Winners will be picked at random from among the surveys that are filled out completely.

If you are interested in helping us with our research, please share your email and check the box at the end of the survey. We will keep you posted about our research and how you can participate in future activities through the newsletter.

Continue Reading How Do You Use Stack Overflow? The Kotlin Community Survey

Kotlin Kernel for Jupyter Notebook, v0.9.0

This update of the Kotlin kernel for Jupyter Notebook primarily targets library authors and enables them to easily integrate Kotlin libraries with Jupyter notebooks. It also includes an upgrade of the Kotlin compiler to version 1.5.0, as well as bug fixes and performance improvements.


The old way to add library integrations

As you may know, it was already possible to integrate a library by creating a JSON file, which we call a library descriptor. In the kernel repository, we have a number of predefined descriptors. You can find the full list of them here.

Creating library descriptors is rather easy. Just create a JSON file and provide a "description" section with a library description and a "link" section with a link to the library's web page. Then add the "repositories" and "dependencies" sections, describing which repositories to use for dependency resolution and which artifacts the library includes. You can also add an "imports" section, where you list imports that will be automatically added to the notebook when the descriptor is loaded, as well as "init" code snippets, "renderers", and so on. When you are finished, save the created file and refer to it from the kernel in whatever way is most convenient for you. In this release, we've added some more ways to load descriptors. You can read more about how to create library descriptors here.
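Putting the sections described above together, a minimal descriptor might look something like this (the description, URL, and artifact coordinates are illustrative, not those of a real library):

```json
{
  "description": "My library for Jupyter notebooks",
  "link": "https://example.com/my-library",
  "repositories": [
    "https://repo.maven.apache.org/maven2/"
  ],
  "dependencies": [
    "org.example:library:1.0"
  ],
  "imports": [
    "org.example.*"
  ]
}
```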

This method for integrating libraries is still supported and works particularly well when you are not the author of the library you want to integrate. But it does have some limitations:

  1. It requires additional version synchronization. The integration breaks if a new version of the library is released and a class that was used in the integration is renamed.
  2. It’s not that easy to write Kotlin code in JSON without any IDE support. So if your library provides renderers or initialization code, then you have to go through a potentially long process of trial and error.
  3. Transitive dependencies are not supported. If libraries A and B provide descriptors and library A depends on library B, then adding just the descriptor for library A is not enough, and you also need to run

    %use B


  4. Advanced integration techniques are not available. See below for more info.

The new way to add library integrations

One of the best things about Kotlin notebooks (compared to Python notebooks) is that you do not have to think about dependencies. You just load the library you need with the %use line magic and use it. All transitive dependencies are loaded automatically, and you do not have to worry about environments or dependency version clashes. An added bonus is that it will work the same way on all computers. So far, however, there hasn't been a way to define the descriptor mentioned above and to attach it to a library so you don't have to create and load it separately.

Now there is such a way. You can now define the descriptor inside your library code and use a Gradle plugin so that the kernel automatically finds and loads it. This means you do not have to write and load a separate JSON descriptor.
If you are the maintainer of a library and can change its code, you may like the new method of integration. It currently utilizes Gradle as a build system, but if you use something else, feel free to open an issue and we will work on adding support for it.

Suppose you have the following Gradle build script written in the Kotlin DSL:

plugins {
    kotlin("jvm") version "1.5.0"
}

group = "org.example"
version = "1.0"

// ...

The published artifact of your library should then have the coordinates org.example:<artifactName>:1.0, where <artifactName> is your Gradle project name.

You normally add your library to the notebook using the @file:DependsOn file annotation:

    @file:DependsOn("org.example:<artifactName>:1.0")
Now suppose you need to add a default import and a renderer for this library in notebooks. First, you apply the Gradle plugin to your build:

plugins {
    kotlin("jupyter.api") version "<jupyterApiVersion>"
}

Then, you write an integration class and mark it with the @JupyterLibrary annotation:
package org.example

import org.jetbrains.kotlinx.jupyter.api.annotations.JupyterLibrary
import org.jetbrains.kotlinx.jupyter.api.*
import org.jetbrains.kotlinx.jupyter.api.libraries.*

@JupyterLibrary
internal class Integration : JupyterIntegration() {
    override fun Builder.onLoaded() {
        render<MyClass> { HTML(it.toHTML()) }
    }
}

Here, MyClass is assumed to be a class from your library that has a toHTML() method, which returns an HTML snippet represented as a string.
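For reference, such a class might look like the following sketch. The class name and markup are illustrative stand-ins, not part of the kernel API:

```kotlin
// A hypothetical library class whose instances should render as HTML.
class MyClass(private val value: Int) {
    // Returns an HTML snippet represented as a string, as the integration expects.
    fun toHTML(): String = "<p>MyClass(value=$value)</p>"
}

fun main() {
    println(MyClass(42).toHTML()) // prints <p>MyClass(value=42)</p>
}
```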

After re-publishing your library, restart the kernel and import the library via @file:DependsOn again. Now you can use all the packages added by the integration without specifying additional qualifiers, and you can see rendered HTML in cells that return MyClass instances.
Advanced integration features

Let’s take a look at some advanced techniques you can use to improve the integration. We’ll use the following set of classes for reference:

data class Person(
    val name: String,
    val lastName: String,
    val age: Int,
    val cars: MutableList<Car> = mutableListOf(),

data class Car(
    val model: String,
    val inceptionYear: Int,
    val owner: Person,

annotation class MarkerAnnotation

Subtype-aware renderers

In descriptors using the old style, you can define renderers that transform cell results of a specific type. The main problem with this approach is that type matching is done by fully qualified type names. So if you define a renderer for some type, it will not be triggered for instances of that type's subtypes.
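The mismatch is easy to demonstrate in plain Kotlin. Base and Derived below are made-up stand-ins for library types; comparing fully qualified names misses the subtype, while a subtype-aware check succeeds:

```kotlin
open class Base
class Derived : Base()

fun main() {
    val value: Any = Derived()
    // Matching by fully qualified name, as old-style descriptors do, misses the subtype:
    println(value::class.qualifiedName == Base::class.qualifiedName) // false
    // A subtype-aware check succeeds:
    println(Base::class.isInstance(value)) // true
}
```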
The new API offers you two solutions to this problem. First, you can implement the Renderable interface:

import org.jetbrains.kotlinx.jupyter.api.*

class MyClass : Renderable {
    fun toHTML(): String {
        return "<p>Instance of MyClass</p>"
    }

    override fun render(notebook: Notebook) = HTML(toHTML())
}

It yields the following result:

Rendered MyClass

Another way to do the same thing has actually already been presented above:

render<MyClass> { HTML(it.toHTML()) }

This option is preferable if you want to keep integration logic away from the main code.

Variable converters

Variable converters allow you to add callbacks for variables of a specific type:

    FieldHandlerByClass(Person::class) { host, person, kProperty ->
        person as Person
        if (person.name != kProperty.name) {
            host.execute("val `${person.name}` = ${kProperty.name}")
        }
    }

This converter creates a new variable named after the person's name for each Person variable defined in the cell. Here's how it works:

Paul created

Annotations callbacks

You can also add callbacks for file annotations (such as the aforementioned @file:DependsOn) and for classes marked with specific annotations.

onClassAnnotation<MarkerAnnotation> { classes ->
    classes.forEach {
        println("Class ${it.simpleName} was marked!")
    }
}

Here we are simply logging the definition of each class marked with MarkerAnnotation.

Cell callbacks

Descriptors allow you to add callbacks that are executed when the library loads (the "init" section) and before each cell is executed (the "initCell" section). The new integration method also allows you to add these callbacks with ease and provides support for callbacks triggered after cell execution. Let's see how it works.

beforeCellExecution {
    println("Before cell callback")
}

afterCellExecution { _, result ->
    println("Cell [${notebook.currentCell?.id}] was evaluated, result is $result")
}

Here you see a usage of the notebook variable, which provides some information about the current notebook.

Dependencies, renderers, and more

There are some other methods you can use to improve Jupyter integration, such as methods for adding dependencies, repositories, and renderers. See the JupyterIntegration source code for a full list.

Note that you can mark with @JupyterLibrary any class that implements LibraryDefinition or LibraryDefinitionProducer. There's no need to extend JupyterIntegration.
All of the code from this section is provided here. You can also find a more complex example of integration in the Dataframe library.

Maven artifacts for your use case

We now publish a set of artifacts to Maven Central, and you are welcome to use them in your own libraries.

The kotlin-jupyter-api and kotlin-jupyter-api-annotations artifacts are used in the code-based integration scenario that was described above. You will not usually need to add them manually – the Gradle plugin does it for you. These artifacts may help in some situations, for example, if you don't use Gradle or just want to use some classes from the API without integrating it.
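In case you do need to add the API manually, a dependency declaration along these lines should work in a Gradle Kotlin DSL build script. The version is a placeholder; check the actual coordinates on Maven Central:

```kotlin
dependencies {
    // Compile against the integration API without using the Gradle plugin.
    compileOnly("org.jetbrains.kotlinx:kotlin-jupyter-api:<version>")
    compileOnly("org.jetbrains.kotlinx:kotlin-jupyter-api-annotations:<version>")
}
```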

If you just want to use the Kotlin REPL configuration and the compiling-related features that are used in the kernel, you may be interested in the kotlin-jupyter-shared-compiler artifact. This artifact was designed to be consistent with the IntelliJ Platform, so you can use it to make IntelliJ plugins.


The lib-ext artifact is a general-purpose library that includes functions for images and HTML rendering. You can load it from the notebook with %use lib-ext. It is not included in the kernel distribution because it may require additional dependencies in the future, and it is not a good idea to bundle them by default.

And finally, you can depend on the kotlin-jupyter-kernel artifact if you need the whole kernel bundled into your application. You can use the embedKernel method to start the kernel server.

The other artifacts have no clear standalone use case and are just transitive dependencies of the ones listed above.

If your use case is not covered, please open an issue or contact us in the #datascience channel of Kotlin Slack.

Kotlin 1.5.0 and bug fixes

The underlying Kotlin compiler version was updated to 1.5.0 pre-release. It doesn’t use the new JVM IR backend at the moment, but we’ll make that happen soon. The main thing is that we’ve fixed a bug in the REPL compiler that was affecting the updating of scripts’ implicit receivers, so performance should now be better for notebooks with a large number of executed cells.

A number of additional bugs have also been fixed, including these particularly weird ones:

  • Irrelevant error pop-ups in the Notebook client (#109)
  • Incorrect parsing of magics (#110)

  • Resolution of transitive dependencies with runtime scope didn’t work
  • Leaking of kernel stdlib into script classpath (#27)

Check out the release changelog for further details.

Let’s Kotlin!

Continue Reading Kotlin Kernel for Jupyter Notebook, v0.9.0

Server-side With Kotlin Webinar Series, Vol 2

We continue our series of Kotlin for server-side webinars. Between February 18 and March 18, we will host four webinars exploring applied server-side software development with Kotlin through live-coding sessions.

Speakers from JetBrains, VMware, Confluent, and Oracle will cover reactive programming, asynchronous applications with the Ktor framework, building microservices with Helidon, and other aspects of using Kotlin for server-side development.

The hosts and speakers will also make sure to answer your questions during the webinars.

Going Reactive




February 18, 2021
17:00 – 18:00 CET (8:00 am – 9:00 am PST).

What is it about?

In this one-hour live-coding session, our two speakers will demonstrate how to migrate a standard Spring Boot application implemented with Spring MVC to Spring WebFlux using Kotlin coroutines. They will also add non-blocking database access and RSocket for streaming events from the backend to the UI to make the application fully reactive.


  • Anton Arhipov, Developer Advocate at JetBrains, the company that creates Kotlin and IntelliJ IDEA.
  • Oleh Dokuka, RSocket maintainer and Project Reactor Team Member at VMware.

Building Microservices




March 10, 2021
17:00 – 18:00 CET (8:00 am – 9:00 am PST).

What is it about?

Helidon is a collection of libraries that run on a fast web core powered by Netty. It offers full support for CDI, a reactive DB client, and full GraalVM native image support. All this makes Helidon a great choice for building microservices.

In this session you will learn about the language features that make developing Helidon applications a breeze. Along the way, the presenter will also take a closer look at how to use the Reactive API to boost application performance.

The webinar is hosted by Dmitry Alexandrov, Senior Principal Developer at Project Helidon.

Creating Asynchronous Web Servers and Clients With Ktor




March 17, 2021
18:30 – 19:30 CET (8:30 am – 9:30 am PST).

What is it about?

Ktor is a framework that enables developers to create both server and client applications targeting a variety of platforms, including JVM, JavaScript, macOS, Windows, and Linux. In this webinar, Anton will focus primarily on Ktor for backend development. He will show how to create robust asynchronous server-side applications with Ktor as well as look at its deployment models, the features it provides out of the box, and its extensibility model.

The webinar is hosted by Anton Arhipov, Developer Advocate at JetBrains.

Going Further With Ktor and Kafka




March 18, 2021
17:00 – 18:00 CET (8:00 am – 9:00 am PST).

What is it about?

This session will provide an introduction to using Kafka, Kotlin, and Ktor, based on the example of building an application that shares geographical coordinates among clients. The presenters will also demonstrate how event streaming works with Kafka, and what other features the platform provides for scaling the solution.


  • Anton Arhipov, Developer Advocate at JetBrains, the company that creates Kotlin and IntelliJ IDEA.
  • Viktor Gamov, Developer Advocate at Confluent, the maker of an event-streaming platform based on Apache Kafka.

Save the date

Once you’ve registered, we’ll send you a confirmation email with calendar invitations, and we’ll also send you a reminder one day before each of the webinars begins.

All webinars are free to attend and will be recorded and published on YouTube after streaming. Subscribe to the Kotlin by JetBrains YouTube channel to be notified about each uploaded recording.

Learn more about server-side Kotlin

We’ve created a page with lots of useful reference materials about using Kotlin on the backend. Follow the link below for tips, tricks, and best practices, all carefully curated by the Kotlin team.

Go to Kotlin server-side page

Continue Reading Server-side With Kotlin Webinar Series, Vol 2

Lets-Plot, in Kotlin

You can understand a lot about data from metrics, checks, and basic statistics. However, as humans, we grasp trends and patterns way quicker when we see them with our own eyes. If there was ever a moment you wished you could easily and quickly visualize your data, and you were not sure how to do it in Kotlin, this post is for you!

Today I’d like to talk to you about Lets-Plot for Kotlin, an open-source plotting library for statistical data written entirely in Kotlin. You’ll learn about its API, the kinds of plots you can build with it, and what makes this library unique. Let’s start with the API.

ggplot-like API

The Lets-Plot Kotlin API is built with layered graphics principles in mind. You may be familiar with this approach if you have ever used the ggplot2 package for R.

“This grammar […] is made up of a set of independent components that can be composed in many different ways. This makes [it] very powerful because you are not limited to a set of pre-specified graphics, but you can create new graphics that are precisely tailored for your problem.” Hadley Wickham, ggplot2: Elegant Graphics for Data Analysis

If you have worked with ggplot2 before, you may recognize the style of the API. If not, let's unpack what's going on here. In Lets-Plot, a plot is represented by at least one layer. Layers are responsible for creating the objects painted on the ‘canvas’ and contain the following elements:

  • Data – the subset of data specified either once for all layers or on a per-layer basis. One plot can combine multiple different datasets (one per layer).
  • Aesthetic mapping – describes how variables in the dataset are mapped to the visual properties of the layer, such as color, shape, size, or position.
  • Geometric object – the geometry that represents a particular type of chart.
  • Statistical transformation – computes some kind of statistical summary on the raw input data. For example, the bin statistic is used for histograms and the smooth statistic is used for regression lines.

  • Position adjustment – a method used to compute the final coordinates of geometry. Used to build variants of the same geometric object or to avoid overplotting.

To combine all these parts together, you need to use the following simple formula:

p = lets_plot(<dataframe>) 
p + geom_<chart_type>(stat=<stat>, position=<adjustment>) { <aesthetics mapping> }

You can learn more about the Lets-Plot basics and get a better understanding of what the individual building blocks do by checking out the Getting Started Guide.
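As a concrete instance of that formula, a minimal scatter plot might look like the snippet below. It assumes a notebook where Lets-Plot is already loaded (for example, via %use lets-plot), and the dataset and column names are invented for illustration:

```kotlin
val data = mapOf(
    "height" to listOf(160, 165, 178, 188),
    "weight" to listOf(55.0, 62.0, 70.5, 83.0)
)
val p = lets_plot(data)
// geom_point uses the identity stat and default position adjustment
p + geom_point { x = "height"; y = "weight" }
```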

Customizable plots

Out of the box, Lets-Plot supports numerous visualization types – histograms, box plots, scatter plots, line plots, contour plots, maps, and more!

All of the plots are flexible and highly customizable, yet the library manages to keep the balance between powerful customization capabilities and ease of use. You can start with simple but useful visualizations like data distribution:

Histogram plot

You have all the tools you need to create complex and nuanced visualizations, like this plot illustrating custom tooltips for the Iris dataset:

Customisable tooltips

Check out these tutorials to explore the available Lets-Plot visualizations and learn how to use them:

Integration with the Kotlin kernel for Jupyter Notebook

You may have noticed from the screenshots that these plots were created in Jupyter Notebook. Indeed, Lets-Plot integrates with the Kotlin kernel for Jupyter Notebook out of the box. If you have the Kotlin kernel installed (see the instructions on how to do so), all you need to do to start plotting is add the following line magic in your notebook:

%use lets-plot

That’s it! Plot away 🙂
Kotlin notebooks are also supported in JetBrains Datalore, an Online Data Science Notebook with smart coding assistance. Check out an example Datalore notebook that uses Lets-Plot.

Lets-Plot Internals

Finally, I wanted to share with you a little bit about the implementation of Lets-Plot, because it is a one-of-a-kind multiplatform library. Due to the unique multiplatform nature of Kotlin, the plotting functionality is written once in Kotlin and can then be packaged as a JavaScript library, JVM library, and a native Python extension.

Lets-Plot Internals

Whichever environment you prefer, you can use the same functionality and API to visualize your data with Lets-Plot!

The Kotlin API is built on top of the JVM jar; however, you can also use the JVM jar independently. For instance, you can embed the plots into a JVM application using either JavaFX or the Apache Batik SVG Toolkit for graphics rendering.

Lets-Plot truly is an amazing example of Kotlin’s multiplatform potential and a great visualization tool for your data needs. I hope this post has sparked your interest and you’ll give it a go!

Continue Reading Lets-Plot, in Kotlin

Kotlin Heroes 5: ICPC Round is Approaching

Welcome to Kotlin Heroes 5: ICPC Round, the new round of our regular competitive programming contest co-hosted by JetBrains and Codeforces! Register now and save the date – November 12, 2020.

This is a special round for which we’ve joined forces with the world-famous ICPC, the International Collegiate Programming Contest. It’s a rare chance to test your skills and see how you compare to some of the brightest coding minds.

Kotlin Heroes is fun for everyone regardless of their level of programming experience, and every participant has an equal chance of winning a prize!

What is Kotlin Heroes

Kotlin Heroes is a Kotlin-only coding contest created and hosted by JetBrains and Codeforces, the most popular platform for coding competitions. The previous four rounds attracted more than 4000 participants, including tourist, Egor, Benq, eatmore, Golovanov399, Petr, ecnerwala, and other famous competitive programming champions.

This is a great opportunity to learn about Kotlin’s capabilities, as well as a chance to practice coding and adopt a powerful modern programming language.

I like a challenge, sign me up!


Participants have 2 hours and 30 minutes to solve as many problems as they can. Each problem set includes tasks of varying difficulty to suit everyone from beginners to the most advanced programmers. Participants are faced with up to 10 problems of increasing complexity and are ranked according to the number of correct solutions they come up with.


  • The top three winners will be awarded $512, $256, and $128, respectively.
  • The top 50 contestants win Kotlin Heroes t-shirts and stickers.
  • Everyone who gets through the first task will be entered into a raffle to win one of 50 Kotlin Heroes t-shirts.
  • To mark this special round, we’ll be issuing participation certificates from the Kotlin team, JetBrains, Codeforces, and ICPC to everyone who takes part.
  • There is also an outstanding special recognition prize from the ICPC: the ICPC experience, an invitation to the Moscow World Finals 2021, all-inclusive on-site (hotel, meals, ceremonies, and swag are included; visas, flights, and transportation to the contest location are not).

Join Kotlin Heroes!

Who can attend

Kotlin Heroes welcomes everyone. The contest has no limitations on experience or professional background.

How to prepare

We hope to see you at Kotlin Heroes 5: ICPC Round on November 12!

Continue Reading Kotlin Heroes 5: ICPC Round is Approaching

Dark Theme Is Now Available in Toolbox App 1.18.

It’s been a while since you made this wish, and now we’ve finally made it come true! We are happy to introduce the frequently requested Dark Theme. 🎉

Don’t have the Toolbox App yet? Click the link below to download the free Toolbox App and start working with the theme you like the most.

Download now

You shouldn’t wait any longer to see it in action. Just update your Toolbox App to version 1.18 (if you haven’t set the Toolbox App to update automatically) and select the Dark Theme in the “Appearance & Behavior” section of the Toolbox App Settings.

Dark Theme

Currently, the app offers two options – the Light and Dark themes – which you can change manually. Go to the Toolbox App Settings and choose the theme you like under “Appearance & Behavior”.
Theme Settings

Bug Fixes 🛠

In the same release, we’ve fixed the following issues:

TBX-4898 – Generation of shell scripts on macOS for Android Studio 4.0 and 4.1 now works correctly.
TBX-4985 – Toolbox now correctly updates taskbar shortcuts on Windows.
TBX-5031, TBX-5066 – The uninstall process on Windows now works correctly.
TBX-5199 – Toolbox now updates Linux .desktop files if there is a broken symlink.
TBX-5233 – We’ve fixed a bug that caused Android Studio 4.1 not to start from the Toolbox App on macOS.

See the full list of fixed issues here.

As always, the Toolbox App team is happy to get your feedback! Leave us a message in our issue tracker or on Twitter by mentioning @JBToolbox.

Stay safe, and stay productive!
The Toolbox App team

Continue Reading Dark Theme Is Now Available in Toolbox App 1.18.

Introducing Kotlin for Apache Spark Preview

Apache Spark is an open-source unified analytics engine for large-scale distributed data processing. Over the last few years, it has become one of the most popular tools used for processing large amounts of data. It covers a wide range of tasks – from data batch processing and simple ETL (Extract/Transform/Load) to streaming and machine learning.

Due to Kotlin’s interoperability with Java, Kotlin developers can already work with Apache Spark via Java API. This way, however, they cannot use Kotlin to its full potential, and the general experience is far from smooth.

Today, we are happy to share the first preview of the Kotlin API for Apache Spark. This project adds a missing layer of compatibility between Kotlin and Apache Spark. It allows you to write idiomatic Kotlin code using familiar language features such as data classes and lambda expressions.

Kotlin for Apache Spark also extends the existing APIs with a few nice features.

withSpark and withCached functions


withSpark is a simple and elegant way to work with a SparkSession; it automatically takes care of calling spark.stop() at the end of the block for you.
You can pass parameters to it that may be required to run Spark, such as the master location, log level, or app name. It also comes with a convenient set of defaults for running Spark locally.

Here’s a classic example of counting occurrences of letters in lines:

val logFile = "a/path/to/logFile.txt"
withSpark(master = "yarn", logLevel = SparkLogLevel.DEBUG) {
    spark.read().textFile(logFile).withCached {
        val numAs = filter { it.contains("a") }.count()
        val numBs = filter { it.contains("b") }.count()
        println("Lines with a: $numAs, lines with b: $numBs")
    }
}
Another useful function in the example above is withCached. In other APIs, if you want to fork computations into several paths but compute things only once, you would call the cache method. However, this quickly becomes difficult to track, and you have to remember to unpersist the cached data. Otherwise, you risk taking up more memory than intended or even breaking things altogether. withCached takes care of tracking and unpersisting for you.

Null safety

Kotlin for Spark adds leftJoin, rightJoin, and other aliases to the existing methods; these are null safe by design.

fun main() {

    data class Coordinate(val lon: Double, val lat: Double)
    data class City(val name: String, val coordinate: Coordinate)
    data class CityPopulation(val city: String, val population: Long)

    withSpark(appName = "Find biggest cities to visit") {
        val citiesWithCoordinates = dsOf(
                City("Moscow", Coordinate(37.6155600, 55.7522200)),
                // ...
        )

        val populations = dsOf(
                CityPopulation("Moscow", 11_503_501L),
                // ...
        )

        citiesWithCoordinates.rightJoin(populations, citiesWithCoordinates.col("name") `==` populations.col("city"))
                .filter { (_, citiesPopulation) ->
                    citiesPopulation.population > 15_000_000L
                }
                .map { (city, _) ->
                    // A city may potentially be null in this right join!!!
                    city
                }
    }
}

Note the map line in the example above. A city may potentially be null in this right join. This would’ve caused a NullPointerException in other JVM Spark APIs, and it would’ve been rather difficult to debug the source of the problem.
Kotlin for Apache Spark takes care of null safety for you, and you can conveniently filter out null results.
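The nullability involved is the same one you would model in plain Kotlin. This self-contained sketch (not the Spark API; the rightJoin helper and data are invented) shows why the left side of a right join must be typed as nullable:

```kotlin
data class City(val name: String)
data class CityPopulation(val city: String, val population: Long)

// Every right-side row is kept; the left side may have no match, hence City?.
fun rightJoin(cities: List<City>, populations: List<CityPopulation>): List<Pair<City?, CityPopulation>> =
    populations.map { p -> cities.find { it.name == p.city } to p }

fun main() {
    val cities = listOf(City("Moscow"))
    val populations = listOf(
        CityPopulation("Moscow", 11_503_501L),
        CityPopulation("Tokyo", 13_929_286L) // no matching City on the left
    )
    // The compiler forces a null check before the city can be used:
    val named = rightJoin(cities, populations).mapNotNull { (city, pop) ->
        city?.let { "${it.name}: ${pop.population}" }
    }
    println(named) // [Moscow: 11503501]
}
```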

What’s supported

This initial version of Kotlin for Apache Spark supports Apache Spark 3.0 with the core compiled against Scala 2.12.

The API covers all the methods needed for creating self-contained Spark applications best suited for batch ETL.

Getting started with Kotlin for Apache Spark

To help you quickly get started with Kotlin for Apache Spark, we have prepared a Quick Start Guide that will help you set up the environment, correctly define dependencies for your project, and run your first self-contained Spark application written in Kotlin.

What’s next

We understand that it takes a while to upgrade any existing framework to a newer version, and Spark is no exception. That is why in the next update we are going to add support for the earlier Spark versions: 2.4.2 – 2.4.6.

We are also working on the Kotlin Spark shell so that you can enjoy working with your data in an interactive manner, and perform exploratory data analysis with it.

Currently, Spark Streaming and Spark MLlib are not covered by this API, but we will be closely listening to your feedback and will address it in our roadmap accordingly.

In the future, we hope to see Kotlin join the official Apache Spark project as a first-class citizen. We believe that it can add value both for Kotlin, and for the Spark community. That is why we have opened a Spark Project Improvement Proposal: Kotlin support for Apache Spark. We encourage you to voice your opinions and join the discussion.

Go ahead and try Kotlin for Apache Spark and let us know what you think!

Continue Reading Introducing Kotlin for Apache Spark Preview

Dokka Preview Based on Kotlin 1.4.0-RC

The following post is written by Paweł Marks and Kamil Doległo.

Every programming ecosystem needs documentation to thrive. Kotlin has its roots in the JVM ecosystem, where Javadoc is a standard and universally accepted documentation engine. It was only natural to expect Kotlin to have a similarly seamless tool. That was the initial goal of Dokka – to provide a reliable and simple documentation engine. But the increasing diversity of Kotlin, with features like multiplatform projects, Native support, and so on, requires Dokka to be more complex.
The ongoing development of Kotlin 1.4 gave us a chance to rethink, redesign, and reimplement Dokka from scratch (its version number is now aligned with the Kotlin embedded compiler). In this post, we give you an overview of Dokka’s new features and announce its preview release. We would appreciate it if you could try the preview and share your feedback.


How to Try

Dokka is distributed as a plugin for two of the most popular Kotlin build tools – Maven and Gradle. For advanced use cases, Dokka can be used as a standalone command-line application. For a Gradle-based project, just add the following to your project’s build.gradle or build.gradle.kts files:

plugins {
    id("org.jetbrains.dokka") version "1.4.0-rc"
}

repositories {
    mavenCentral()
}
After running

./gradlew dokkaHtml

you should see the generated documentation in the dokka/html directory inside your project’s build directory.

New HTML format

One of our main goals was to produce good-looking, modern documentation without any need for tweaking and configuration from the user. That is why the default look of Dokka is no longer a simple static web page. Let’s take a quick look at the most important features of the new HTML format. Note that we’ve used the coroutines documentation to demonstrate the new Dokka format, but we won’t actually be converting this documentation until a later time.

Navigation tree

On the left side of your screen, you can see all your project modules, packages, and types organized in a hierarchical menu. This allows you to not only see the project structure at a glance, but also to quickly navigate between different parts of your codebase.

Search bar

If your project is particularly complex, or you don’t know its codebase structure and you need to find some type or function, you can use the built-in search bar. It knows all symbols in the project so it can offer IDE-like autocompletion for search queries.

Linking to sources

Dokka can link from any symbol to its definition in code if your project’s code is hosted online. Just configure your repo URL using the sourceLink option as a generation parameter, and all your functions and types will get links pointing to the exact line of code where each of them was defined.

Information about platforms

Kotlin is a multiplatform language and its documentation engine needs to reflect all platform information that can be important for end-users. In the documentation for a multiplatform project, you will notice that every symbol is marked with a badge indicating to which platform it applies. Moreover, if there are any differences between documentation or signatures of the same symbol on different platforms you can use the tabs to switch between them.

On every page you can choose to show or hide symbols defined in specific platforms.

Runnable samples

Kotlin has a great tool called Kotlin Playground that allows you to run simple code snippets from your browser. This tool is now also integrated with Dokka. You can specify the source of your samples and they will be included in your documentation, allowing end users to check how to properly use your library. All you have to do is create an ordinary Kotlin file with your code, include it in the documentation using

samples = listOf(<paths to the files>)

and link to the desired method using

@sample <fully qualified name of your method>

and Dokka will automatically copy the source code and use it to create a runnable block.

Other formats


If you want to host your documentation on GitHub Pages or other sites that use Markdown formatting, Dokka has you covered. Out of the box, it supports two different flavors of Markdown – Jekyll and GFM (GitHub Flavored Markdown). To use them, you only need to run the dokkaGfm or dokkaJekyll task, respectively.
Kotlin-as-java and javadoc

Kotlin is interoperable with Java, which means that Kotlin libraries that target the JVM can be used from Java projects. Dokka can generate documentation for them too: you just need to add the kotlin-as-java plugin to the corresponding Dokka task for the HTML, GFM, or Jekyll format. You will then see your classes and functions in a Java-like format; for example, class properties will be changed to the appropriate getter and setter methods. Thanks to our new architecture, the kotlin-as-java plugin works with HTML, GFM, and Jekyll Markdown, as well as some user-defined formats.

You can go one step further and not only desugar your properties to getters and setters but also generate documentation in the same way Javadoc does. You just need to run the dokkaJavadoc task. The new Javadoc generation is completely independent of JDK artifacts, so it is guaranteed to work with every modern version of the JDK (8 and newer).

Multimodule projects

By default, Dokka generates one set of documentation per Kotlin module.

  • Run the dokkaHtmlCollector task to collect all definitions for all modules and document them as if they were a single module. This way the code can be divided into small modules while still having one documentation set.

  • Run the dokkaHtmlMultiModule task to generate documentation for each module in a separate directory and then create a common page with an index that links to all modules with brief previews of their documentation. You can customize this page by providing your own template in a Markdown file.

Other features

Even though Dokka 1.4 is completely rewritten, we wanted to preserve all the useful features and configuration options from previous releases. You can still:

  • Generate documentation for mixed Java and Kotlin sources.
  • Include Markdown documentation for the project, module, and package pages.
  • Generate documentation for non-public symbols.
  • Receive a report after generation about each undocumented symbol.
  • Specify all generation options on a per-package basis.


The new Dokka has a flexible and powerful plugin system. Jekyll, GFM, kotlin-as-java, and Javadoc are only a few examples of the plugins that can be created for it. Anyone can now provide a custom format or transform the documentation in any way imaginable. Thanks to its robust framework, the plugin API is intuitive to use and hard to accidentally misuse. If you are interested in plugin development, check out our developers’ guide.

Continue Reading Dokka Preview Based on Kotlin 1.4.0-RC

Kotlin 1.4.0-RC: Debugging coroutines

We continue to highlight the upcoming changes in the 1.4 release. In this blog post, we want to describe a couple of important features related to coroutines:

  • New functionality to conveniently debug coroutines
  • The ability to define deep recursive functions

These changes are already available for you to try in the 1.4.0-RC release!

Let’s dive into details.

Debugging coroutines

Coroutines are great for asynchronous programming (but not only for that), and many people already use them or are starting to use them. When you write code with coroutines, however, trying to debug them can be a real pain. Coroutines jump between threads, so it can be difficult to understand what a specific coroutine is doing or to check its context. And in some cases, stepping through the code over breakpoints simply doesn’t work. As a result, you have to rely on logging or mental effort to debug code with coroutines. To address this issue, we’re introducing new functionality in the Kotlin plugin that aims to make debugging coroutines much more convenient.

The Debug Tool Window now contains a new Coroutines tab. It is visible by default, and you can switch it on and off:

In this tab, you can find information about both currently running and suspended coroutines. The coroutines are grouped by the dispatcher they are running on. If you started a coroutine with a custom name, you can find it by this name in the Tool Window. In the following example, you can see that the main coroutine is running (we’ve stopped on a breakpoint inside it), and the other four coroutines are suspended:

import kotlinx.coroutines.*

fun main() = runBlocking {
   repeat(4) {
       launch(Dispatchers.Default + CoroutineName("Default-${'a' + it}")) {
           val name = coroutineContext[CoroutineName.Key]?.name
           println("I'm '$name' coroutine")
           delay(1000) // assumed suspension point, so the coroutine shows up as suspended
       }
   }
   // breakpoint
   println("I'm the main coroutine")
}

With the new functionality, you can check the state of each coroutine and see the values of local and captured variables. This also works for suspended coroutines!

In this example, we check the values of the local variables of suspended coroutines:

import kotlinx.coroutines.*

fun main() = runBlocking<Unit> {
   launch {
       val a = 3
       delay(1000) // assumed suspension point
   }
   launch {
       val b = 2
       delay(1000)
   }
   launch {
       val c = 1
       // breakpoint here:
       delay(1000)
   }
}

Choose a suspended coroutine (click it to see its state at that point) and the Variables tab will show you the state of the local variables:

You can now see a full coroutine creation stack, as well as a call stack inside the coroutine:

Use the ‘Get Coroutines Dump’ option to get a full report containing the state of each coroutine and its stack:

At the moment, the coroutines dump is still rather simple, but we’re going to make it more readable and helpful in future versions.

Note that to make the debugger stop at a given breakpoint inside a coroutine, this breakpoint should have the “Suspend: All” option chosen for it:

To try this new functionality for debugging coroutines, you need to use the latest version of kotlinx.coroutines, 1.3.8-1.4.0-rc, and the latest version of the Kotlin plugin (e.g. 1.4.0-rc-release-IJ2020.1-2).

The functionality is available only for Kotlin/JVM. If you encounter any problems (please don’t forget to share the details with us!), you can switch it off by opening Build, Execution, Deployment | Debugger | Data Views | Kotlin in Preferences and choosing Disable coroutines agent. For now, we’re releasing this functionality for debugging coroutines in the experimental state, and we’re looking forward to your feedback!

Defining deep recursive functions using coroutines

In Kotlin 1.4, you can define recursive functions and invoke them even when the call depth is greater than 100,000, using the standard library support based on coroutines!

Let’s first look at an ordinary recursive function, whose usage results in a StackOverflowError when the recursion depth gets too high. After that, we’ll discuss how you can fix the problem and rewrite the function using the Kotlin standard library.

We’ll use a simple binary tree, where each Tree node has a reference to its left and right children:
class Tree(val left: Tree?, val right: Tree?)

The depth of the tree is the length of the longest path from its root to its child nodes. It can be computed using the following recursive function:

fun depth(t: Tree?): Int =
   if (t == null) 0 else maxOf(
       depth(t.left),
       depth(t.right)
   ) + 1

The tree depth is the maximum of the depths of the left and right children, increased by one. When the tree is empty, the depth is zero.

This function works fine when the recursion depth is small:

class Tree(val left: Tree?, val right: Tree?)

fun depth(t: Tree?): Int =
   if (t == null) 0 else maxOf(
       depth(t.left),
       depth(t.right)
   ) + 1

fun main() {
    val tree = Tree(Tree(Tree(null, null), null), null)
    println(depth(tree)) // 3
}
However, if you create a tree with a depth greater than 100,000, which in practice is not so uncommon, you’ll get a StackOverflowError as a result:

class Tree(val left: Tree?, val right: Tree?)

fun depth(t: Tree?): Int =
   if (t == null) 0 else maxOf(
       depth(t.left),
       depth(t.right)
   ) + 1

fun main() {
   val n = 100_000
   val deepTree = generateSequence(Tree(null, null)) { prev ->
       Tree(prev, null)
   }.take(n).last()
   println(depth(deepTree))
}

Exception in thread "main" java.lang.StackOverflowError
  at FileKt.depth(File.kt:5)

The problem is that the call stack gets too large. To solve this issue, you can use a VM option (such as -Xss) to increase the maximum stack size. However, while this might work for specific use cases, it’s not a practical solution for the general case.

Alternatively, you can rewrite the code and store results for intermediate calls by hand in the heap rather than on the stack. This solution works in most cases and is common in other languages. However, the resulting code becomes non-trivial and complicated, and the beauty and simplicity of the initial function are lost. You can find an example here.
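As a minimal sketch of that manual approach (our own illustration, not the example linked above), the depth function can be rewritten with an explicit, heap-allocated stack of (node, depth) pairs; the name depthIterative is ours:

```kotlin
class Tree(val left: Tree?, val right: Tree?)

// Compute the depth iteratively: the pending nodes live in a heap-allocated
// deque instead of the JVM call stack, so deep trees cannot overflow it.
fun depthIterative(root: Tree?): Int {
    var max = 0
    val stack = ArrayDeque<Pair<Tree?, Int>>()
    stack.addLast(root to 0)
    while (stack.isNotEmpty()) {
        val (node, d) = stack.removeLast()
        if (node == null) continue
        if (d + 1 > max) max = d + 1      // count this node's level
        stack.addLast(node.left to d + 1) // process children later
        stack.addLast(node.right to d + 1)
    }
    return max
}

fun main() {
    val deepTree = generateSequence(Tree(null, null)) { prev -> Tree(prev, null) }
        .take(100_000).last()
    println(depthIterative(deepTree)) // 100000
}
```

It works for arbitrarily deep trees, but as the surrounding text notes, the simplicity of the recursive version is lost.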

Kotlin now provides a clean way to solve this problem based on the coroutines machinery.

The Kotlin standard library now includes the DeepRecursiveFunction definition, which models recursive calls using the suspension mechanism:

class Tree(val left: Tree?, val right: Tree?)

val depthFunction = DeepRecursiveFunction<Tree?, Int> { t ->
   if (t == null) 0 else maxOf(
       callRecursive(t.left),
       callRecursive(t.right)
   ) + 1
}

fun depth(t: Tree?) = depthFunction(t)

fun main() {
   val n = 100_000
   val deepTree = generateSequence(Tree(null, null)) { prev ->
       Tree(prev, null)
   }.take(n).last()
   println(depth(deepTree)) // 100000
}

You can compare the two versions, the initial one and the one using DeepRecursiveFunction, to make sure that the logic remains the same. Your new function now becomes a variable of type DeepRecursiveFunction<Tree?, Int>, which you can call using the ‘invoke’ convention as depthFunction(t). The function body now becomes the body of the lambda argument of DeepRecursiveFunction, and each recursive call is replaced with callRecursive. These changes are straightforward and easy to make. Note that while the new depth function uses coroutines under the hood, it is not itself a suspend function.
Understanding how DeepRecursiveFunction is implemented is interesting, but it is not necessary in order for you to use it and benefit from it. You can find the implementation details described in this blog post.

DeepRecursiveFunction is a part of the Kotlin standard library, not part of the kotlinx.coroutines library, since it’s not about asynchronous programming. At the moment this API is still experimental, so we’re looking forward to your feedback!
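As another usage example (our own, with the hypothetical names Node and countNodes), the same mechanism can count the nodes of an extremely unbalanced tree without overflowing the call stack:

```kotlin
class Node(val left: Node?, val right: Node?)

// callRecursive suspends instead of growing the JVM call stack,
// so even a 200,000-deep chain of nodes is handled safely.
val countNodes = DeepRecursiveFunction<Node?, Int> { n ->
    if (n == null) 0 else callRecursive(n.left) + callRecursive(n.right) + 1
}

fun main() {
    // build a left-leaning chain 200,000 nodes deep
    var chain: Node? = null
    repeat(200_000) { chain = Node(chain, null) }
    println(countNodes(chain)) // 200000
}
```

The pattern is identical to the depth example: the function becomes a DeepRecursiveFunction value, and every self-call becomes callRecursive.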

How to try it

As always, you can try Kotlin online at play.kotlinlang.org.

In IntelliJ IDEA and Android Studio, you can update the Kotlin Plugin to version 1.4.0-RC. See how to do this.

If you want to work on existing projects that were created before installing the preview version, you need to configure your build for the preview version in Gradle or Maven. Note that unlike the previous preview versions, Kotlin 1.4.0-RC is also available directly from Maven Central. This means you won’t have to manually add an extra repository to your build files.

You can download the command-line compiler from the GitHub release page.

Share your feedback

We’ll be very thankful if you find and report bugs to our issue tracker. We’ll try to fix all the important issues before the final release, which means you won’t need to wait until the next Kotlin release for your issues to be addressed.

You are also welcome to join the #eap channel in Kotlin Slack (get an invite here). In this channel, you can ask questions, participate in discussions, and get notifications about new preview builds.

Let’s Kotlin!

Continue Reading Kotlin 1.4.0-RC: Debugging coroutines

Kotlin 1.4.0-RC Released

We’re almost there! We’re happy to unveil Kotlin 1.4.0-RC – the release candidate for the next major version of our programming language. Read on to learn about what has changed in Kotlin 1.4.0-RC, and make sure to try its new features before they are officially released with Kotlin 1.4.0.

A special thanks to everyone who tried our milestone releases (1.4-M1, 1.4-M2, and 1.4-M3), shared their feedback, and helped us improve this version of Kotlin!

This post highlights the new features and key improvements that are available in Kotlin 1.4.0-RC:

Improved *.gradle.kts IDE support

We significantly improved the IDE support for Gradle Kotlin DSL scripts (*.gradle.kts files) in Kotlin 1.3.70, and we’ve continued to improve it for Kotlin 1.4.0-RC. Here is what this new version brings:

Loading script configuration explicitly for better performance

Previously, when you added a new plugin to the plugins block of your build.gradle.kts, the new script configuration was loaded automatically in the background. Then, after it was applied, you could use code assistance for the newly added plugin.

To improve performance, we’ve removed this automatic behavior of applying changes to the script configuration upon typing. For Gradle 6.0 and above, you need to explicitly apply changes to the configurations by clicking Load Gradle Changes or by reimporting the Gradle project.

In earlier versions of Gradle, you need to manually load the script configuration by clicking Load Configuration in the editor.

We’ve added one more action in IntelliJ IDEA 2020.1 with Gradle 6.0+, Load Script Configurations, which loads changes to the script configurations without updating the whole project. This takes much less time than reimporting the whole project.

Better error reporting

Previously you could only see errors from the Gradle Daemon (a process that runs in the background and is responsible for all Gradle-related tasks and activities) in separate log files. Now if you use Gradle 6.0 or above, the Gradle Daemon returns all the information about errors directly and shows it in the Build tool window. This saves you both time and effort.

Less boilerplate in your project’s configuration

With improvements to the Kotlin Gradle plugin, you can write less code in your Gradle build files: one of the most common scenarios is now enabled by default.

Making the standard library a default dependency

An overwhelming majority of projects require the Kotlin standard library. Starting from 1.4.0-RC, you no longer need to declare a dependency on the standard library (kotlin-stdlib and its platform-specific counterparts) in each source set manually – it will now be added by default. The automatically added version of the standard library will be the same as the version of the Kotlin Gradle plugin, since they have the same versioning.

This is how a typical multiplatform project configuration with Android, iOS, and JavaScript targets looked before 1.4:

sourceSets {
    commonMain {
        dependencies {
            implementation(kotlin("stdlib-common"))
        }
    }
    androidMain {
        dependencies {
            implementation(kotlin("stdlib-jdk8"))
        }
    }
    jsMain {
        dependencies {
            implementation(kotlin("stdlib-js"))
        }
    }
    iosMain {
        dependencies {
            // platform-specific dependencies
        }
    }
}
Now, you don’t need to explicitly declare a dependency on the standard library at all, and with hierarchical project structure support, announced in 1.4-M2, you have to specify other dependencies only once. So your Gradle build file will become much more concise and easy to read:



sourceSets {
    commonMain {
        dependencies {
            // your other dependencies – no stdlib declarations needed
        }
    }
}


For platform source sets and backend-shared source sets, the corresponding standard library will be added, while a common standard library will be added to the rest. The Kotlin Gradle plugin will select the appropriate JVM standard library depending on the kotlinOptions.jvmTarget setting of your build.
If you declare a standard library dependency explicitly (for example, if you need a different version), the Kotlin Gradle plugin won’t override it or add a second standard library. And if you do not need a standard library at all, you can add the opt-out flag to the Gradle properties:

kotlin.stdlib.default.dependency=false
Simplified management of CocoaPods dependencies

Previously, once you integrated your project with the dependency manager CocoaPods, you could build an iOS, macOS, watchOS, or tvOS part of your project only in Xcode, separate from other parts of your multiplatform project. These other parts could be built in IntelliJ IDEA.

Moreover, every time you added a dependency on an Objective-C library stored in CocoaPods (a Pod library), you had to switch from IntelliJ IDEA to Xcode, run pod install, and run the Xcode build there.

Now you can manage Pod dependencies right in IntelliJ IDEA while enjoying the benefits it provides for working with code, such as code highlighting and completion. You can also build the whole Kotlin project with Gradle, without having to switch to Xcode. This means you only have to go to Xcode when you need to write Swift/Objective-C code or run your application on a simulator or device.

Now you can also work with Pod libraries stored locally.

Depending on your needs, you can add dependencies between:

  • A Kotlin project and Pod libraries from the CocoaPods repository.
  • A Kotlin project and Pod libraries stored locally.
  • A Kotlin Pod (Kotlin project used as a CocoaPods dependency) and an Xcode project with one or more targets.

Complete the initial configuration, and when you add a new dependency to CocoaPods, just re-import the project in IntelliJ IDEA. The new dependency will be added automatically. No additional steps are required.

Below you can find instructions on how to add dependencies on Pod libraries from the CocoaPods repository. The Kotlin 1.4 documentation will cover all scenarios.

How to use the CocoaPods integration

Install the CocoaPods dependency manager and plugin
  1. Install the CocoaPods dependency manager (sudo gem install cocoapods).
  2. Install the cocoapods-generate plugin (sudo gem install cocoapods-generate).
  3. In the build.gradle.kts (or build.gradle) file of your project, apply the CocoaPods plugin alongside the Kotlin Multiplatform plugin:

    plugins {
       kotlin("multiplatform") version "1.4.0-rc"
       kotlin("native.cocoapods") version "1.4.0-rc"
    }
Add dependencies on Pod libraries from the CocoaPods repository
  1. Add dependencies on the Pod libraries that you want to use from the CocoaPods repository with pod(). You can also add dependencies as subspecs.

    kotlin {
        cocoapods {
            summary = "CocoaPods test library"
            homepage = ""
            pod("AFNetworking", "~> 4.0.0")
            // Remote Pod added as a subspec
            pod("SDWebImage/MapKit")
        }
    }
  2. Re-import the project.
    To use the dependencies from Kotlin code, import the packages:

    import cocoapods.AFNetworking.*
    import cocoapods.SDWebImage.*
We’re also happy to share a sample project with you that demonstrates how to add dependencies on Pod libraries stored both remotely in the CocoaPods repository and locally.

Generate release .dSYMs on Apple targets by default

Debugging an iOS application crash sometimes involves analyzing crash reports, and crash reports generally require symbolication to become properly readable. To symbolicate addresses in Kotlin, the .dSYM bundle for Kotlin code is required. Starting with 1.4-M3, the Kotlin/Native compiler produces .dSYMs for release binaries on Darwin platforms by default. This can be disabled with the -Xadd-light-debug=disable compiler flag. On other platforms, this option is disabled by default. To toggle this option in Gradle, use:

kotlin {
    targets.withType<org.jetbrains.kotlin.gradle.plugin.mpp.KotlinNativeTarget> {
        binaries.all {
            freeCompilerArgs += "-Xadd-light-debug={enable|disable}"
        }
    }
}

Performance improvements

We continue to focus on optimizing the overall performance of the Kotlin/Native development process:

  • In 1.3.70 we introduced two new features for improving the performance of Kotlin/Native compilation: caching project dependencies and running the compiler from the Gradle daemon. Thanks to your feedback, we’ve managed to fix numerous issues and improve the overall stability of these features, and we will continue to do so.
  • There are also some runtime performance improvements as well. Overall runtime performance has improved because of optimizations in GC. This improvement will be especially apparent in projects with a large number of long-lived objects.



  • Some collections now work faster thanks to avoiding redundant boxing.


With Kotlin 1.4.0-RC, we are making the @JsExport annotation compatible with the default compiler backend. We are also providing more robust and fine-grained control over npm dependency management and the Dukat integration for Gradle projects, refining our support for CSS, and offering a first look at our integration with the Node.js APIs, among other things.


@JsExport annotation for the default compiler backend

In the previous milestones for Kotlin 1.4, we introduced the @JsExport annotation, which is used to make a top-level declaration available from JavaScript or TypeScript when using the new IR compiler backend. Starting with Kotlin 1.4-M3, it is now also possible to use this annotation with the current default compiler backend. Annotating a top-level declaration with @JsExport when using the current default compiler backend turns off name mangling for the declaration. Having this annotation in both compiler backends allows you to transition between them without having to adjust your logic for exporting top-level declarations. Please note that the generation of TypeScript definitions is still only available when using the new IR compiler backend.

Changes to npm dependency management

Explicit version requirement for dependency declarations

Declaring dependencies on npm packages without specifying a version number makes it harder to reliably manage the packages you use. This is why you are required from now on to explicitly specify a version or version range based on npm’s semver syntax for dependencies. The Gradle DSL now also supports multiple ranges for dependencies, allowing you to pinpoint exactly which versions you want to accept in your project, for example:

dependencies {
    implementation(npm("react", "> 14.0.0 <=16.9.0"))
}

Additional types of npm dependencies

Besides regular dependencies from npm, which you can specify using npm(...) inside your dependencies block, there are now three more types of dependencies that you can use: devNpm(...), optionalNpm(...), and peerNpm(...).

To learn more about when each type of dependency can best be used, have a look at the official documentation linked from npm.
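A hedged sketch of how these dependency types can be declared in a Kotlin/JS Gradle build file – the package names and versions are illustrative only:

```kotlin
dependencies {
    implementation(npm("react", "16.13.1"))         // regular npm dependency
    implementation(devNpm("typescript", "3.9.5"))   // development-time only
    implementation(optionalNpm("is-odd", "3.0.1"))  // optional dependency
    implementation(peerNpm("react-dom", "16.13.1")) // peer dependency
}
```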

Automatic inclusion and resolution of transitive npm dependencies

Previously, if you depended on a library whose author did not manually add a package.json file to its artifacts, you would sometimes be required to import its npm dependencies manually, declaring them in your Gradle build file for the package to work on Kotlin/JS.

Now the Gradle plugin automatically generates package.json files for libraries and includes them in the published artifacts. When including a library of this sort, the file is automatically parsed and the required dependencies are automatically included, removing the need to add them to your Gradle build file manually.

Adjustments for CSS support

With Kotlin 1.4-M2, we introduced support for webpack’s CSS and style loaders directly from Gradle. In order to more closely reflect its actual tasks and effects, we have since renamed the configuration parameter to cssSupport. Going forward, the Gradle plugin no longer enables CSS support by default – a setting we had experimented with in 1.4-M2. We hope that this change will prevent confusion for those who include their own settings for how style sheets should be handled (for example by using Sass or Less loaders). In these situations, it would not be immediately obvious that a project’s default configuration already injects some CSS settings that could lead to a conflict.

To turn on CSS support in your project, set the cssSupport.enabled flag in your Gradle build file for webpackTask, runTask, and testTask. When creating a new project using the wizards included in IntelliJ IDEA, these settings will automatically be included in the generated build.gradle.kts:
webpackTask {
   cssSupport.enabled = true
}
runTask {
   cssSupport.enabled = true
}
testTask {
   useKarma {
      // . . .
      webpackConfig.cssSupport.enabled = true
   }
}

We realize that having to adjust these settings individually for each task is not very convenient. We are looking into adding a central point of configuration for CSS support in the plugin’s DSL (you can follow our progress here).

Improvements for Dukat integration

The Kotlin/JS Gradle plugin adds more fine-grained control to its integration with Dukat, the tool for automatically converting TypeScript declaration files (.d.ts) into Kotlin external declarations. You now have two different ways to select if and when Dukat should generate declarations:

Generating external declarations at build time

The npm dependency function now takes a third parameter after the package name and version: generateExternals. This allows you to individually control whether Dukat should generate declarations for a specific dependency, like so:

dependencies {
  implementation(npm("decamelize", "4.0.0", generateExternals = true))
}

You can use the kotlin.js.generate.externals flag in your gradle.properties file to set the generator’s behavior for all npm dependencies simultaneously. As usual, individual explicit settings take precedence over this general flag.

Manually generating external declarations via Gradle task

If you want to have full control over the declarations generated by Dukat, if you want to apply manual adjustments, or if you’re running into trouble with the auto-generated externals, you can also trigger the creation of the declarations for all your npm dependencies manually using the generateExternals Gradle task. This will generate declarations in a directory titled externals in your project root. Here, you can review the generated code and copy any parts you would like to use to your source directories. (Please be advised that manually providing external declarations in your source folder and enabling the generation of external declarations at build time for the same dependency can result in resolution issues.)

Migration preparation for kotlin.dom and kotlin.browser to separate artifacts

In order to evolve our browser and DOM bindings for Kotlin/JS faster and decouple them from the release cycle of the language itself, we are deprecating the current APIs located in the kotlin.dom and kotlin.browser packages. We provide replacements for these APIs in the kotlinx.dom and kotlinx.browser packages, which will be extracted to separate artifacts in a future release. Migrating to these new APIs is straightforward: simply adjust the imports used in your project to point to the new kotlinx packages. Quick-fixes in IntelliJ IDEA, accessible via Alt-Enter, can help with this migration.

Preview: kotlinx-nodejs

We are excited to share a preview of our official bindings for the Node.js APIs: kotlinx-nodejs. While it has been possible to target Node.js with Kotlin for a long time, the full potential of the target is unlocked when you have typesafe access to its API. You can check out the kotlinx-nodejs bindings on GitHub.

To add kotlinx-nodejs to your project, make sure the repository hosting it is added to your repositories. You can then simply add a dependency on the artifact:

dependencies {
    // . . .
    implementation("org.jetbrains.kotlinx:kotlinx-nodejs:<version>")
}
After loading the Gradle changes, you can then experiment with the API provided by Node.js, for example by making use of their DNS resolution package:

fun main() {
    dns.lookup("") { err, address, family ->
        console.log("address: $address, family IPv$family")
    }
}

Especially because this is still a preview version, we encourage you to give kotlinx-nodejs a try and report any issues you encounter in the repository’s issue tracker.

Deprecation of kotlin2js and kotlin-dce-js Gradle plugins

Starting with Kotlin 1.4, the old Gradle plugins for targeting JavaScript with Kotlin (kotlin2js and kotlin-dce-js) will be officially deprecated in favor of the org.jetbrains.kotlin.js Gradle plugin.
The key functionality that was available in these plugins, alongside the kotlin-frontend-plugin (which was already deprecated previously), has been condensed into the new plugin, allowing you to configure your Kotlin/JS target using a unified DSL that is also compatible with Kotlin/Multiplatform projects.

Since Kotlin 1.3.70, dead code elimination (DCE) has been applied automatically when using the browserProductionRun and browserProductionWebpack tasks, which run and create optimized bundles of your program. (Please note that dead code elimination is currently only available when targeting the browser for production output, not for Node.js or tests. If you have additional use cases you’d like to see addressed, feel free to share them with us on YouTrack.)

Additional quality-of-life improvements and notable fixes

  • We have added more compiler errors for prohibited usages of the @JsExport annotation to highlight such problems.

  • When using the IR compiler backend, we have enabled a new strategy that includes incremental compilation for klibs, which is one of many steps we are taking to improve compilation time.

  • The configuration for the webpack development server has been adjusted, preventing errors like ENOENT: no such file or directory when using the hot reload functionality.

Evolving the Kotlin Standard Library API

Kotlin 1.4 is a feature release in terms of Kotlin’s evolution, so it brings a lot of new features that you already know about from previous blog posts. However, another important aspect of a feature release is that it includes significant evolutionary changes in the existing API. Here’s a brief overview of the changes you can expect with the 1.4 release.

Stabilization of the experimental API

In order to ship the new things you want to see in Kotlin libraries as fast as possible, we provide experimental versions of them. This status indicates that work on the API is still in progress and that it could be changed incompatibly in the future. When you try to use an experimental API, the compiler warns you about its status and requires an opt-in (for example, via the @OptIn annotation).
In feature releases, experimental APIs can be promoted to stable. At this point, we guarantee that their form and behavior won’t change suddenly (changes are only possible with a proper deprecation cycle). Once an API is officially stable, you can use the API safely without warnings or opt-ins.
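As a concrete illustration of the opt-in mechanism, here is a minimal, self-contained sketch that defines its own experimental marker; the names MyExperimentalApi and experimentalGreeting are ours, not from the standard library:

```kotlin
// A library-style marker: calling APIs annotated with it requires an opt-in.
@RequiresOptIn(message = "This API is experimental and may change.", level = RequiresOptIn.Level.WARNING)
@Retention(AnnotationRetention.BINARY)
annotation class MyExperimentalApi

@MyExperimentalApi
fun experimentalGreeting(): String = "hello from an experimental API"

// The caller acknowledges the experimental status with @OptIn;
// without it, the compiler reports the usage at the marker's level.
@OptIn(MyExperimentalApi::class)
fun main() {
    println(experimentalGreeting())
}
```

Once a library promotes such an API to stable, the author removes the marker and callers can drop their @OptIn annotations.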

With 1.4, we are promoting a number of experimental functions in the Kotlin libraries to stable. Here are some examples, along with versions in which they were introduced:

More API functions and classes are becoming stable in 1.4. Starting from this version (1.4.0-RC), using them in your project won’t require an opt-in.
Deprecation cycles

Feature releases also involve taking the next steps in existing deprecation cycles. While in incremental releases we only start new deprecation cycles with the WARNING level, in feature releases we tighten them to ERROR. In turn, API elements that already have the ERROR level can be completely hidden from new uses in code and only remain in binary form to preserve compatibility for already compiled code. Together, these steps ensure the gradual removal of deprecated API elements.
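The levels in this cycle can be seen in miniature with the @Deprecated annotation; oldApi and newApi below are hypothetical names for illustration:

```kotlin
fun newApi(): String = "result"

// The cycle starts at DeprecationLevel.WARNING; a later feature release
// would tighten this to DeprecationLevel.ERROR and finally HIDDEN.
@Deprecated(
    message = "Use newApi() instead.",
    replaceWith = ReplaceWith("newApi()"),
    level = DeprecationLevel.WARNING
)
fun oldApi(): String = newApi()

fun main() {
    println(newApi()) // prints "result"
}
```

With ReplaceWith provided, the IDE can rewrite call sites of oldApi() to newApi() automatically, which is exactly the migration path described above.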

If your code uses API elements with a deprecation level of WARNING, the compiler warns you about such usages. When you update to Kotlin 1.4.0-RC, some of these warnings will turn into errors. Use the IDE prompts to properly replace erroneous usages with the provided alternatives and make sure that your code compiles again.

Detailed information about breaking changes in the Kotlin Standard Library API can be found in the Compatibility Guide for Kotlin 1.4.


Scripting

We skipped this section in a couple of previous blog posts, but we haven’t stopped working on Kotlin scripting to make it more stable, faster, and easier to use in 1.4. In the RC version, you can already observe better performance along with numerous fixes and functional improvements.

Artifacts renaming

In order to avoid confusion about artifact names, we’ve renamed the scripting artifacts that carried the `-embeddable` suffix, such as `kotlin-scripting-jsr223-embeddable` and `kotlin-scripting-jvm-host-embeddable`, to just `kotlin-scripting-jsr223` and `kotlin-scripting-jvm-host`. These artifacts depend on the `kotlin-scripting-compiler-embeddable` artifact, which shades the bundled third-party libraries to avoid usage conflicts. With this renaming, we’re making the usage of the shaded compiler (which is safer in general) the default for scripting artifacts.

If, for some reason, you need artifacts that depend on the unshaded compiler, use the artifact versions with the `-unshaded` suffix, such as `kotlin-scripting-jsr223-unshaded`. Note that this renaming affects only the scripting artifacts that are supposed to be used directly; the names of other artifacts remain unchanged.
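In a Gradle build, switching to the renamed artifacts might look like the following sketch (Kotlin DSL). The specific coordinates shown here are assumptions based on the renaming scheme described above, so verify them against your dependency tree:

```kotlin
dependencies {
    // New default: the plain name now resolves to the shaded
    // (formerly "-embeddable") variant of the scripting artifact.
    implementation("org.jetbrains.kotlin:kotlin-scripting-jsr223:1.4.0-RC")

    // Only if you explicitly need the unshaded compiler dependency:
    // implementation("org.jetbrains.kotlin:kotlin-scripting-jsr223-unshaded:1.4.0-RC")
}
```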

CLion IDE plugin is now deprecated

We’ve launched a deprecation cycle for the CLion IDE plugin. It was originally intended for debugging Kotlin/Native executables, and that capability is now available in IntelliJ IDEA Ultimate. We’ll stop publishing the CLion IDE plugin after the 1.4 release. Please contact us if this deprecation causes any problems, and we will do our best to help you solve them.


As in all major releases, some deprecation cycles of previously announced changes are coming to an end with Kotlin 1.4. All of these cases were carefully reviewed by the language committee and are listed in the Compatibility Guide for Kotlin 1.4. You can also explore these changes on YouTrack.

Release candidate notes

Now that we’ve reached the final release candidate for Kotlin 1.4, it is time for you to start compiling and publishing! Unlike previous milestone releases, binaries created with Kotlin 1.4.0-RC are guaranteed to be compatible with Kotlin 1.4.0.

How to try the latest features

As always, you can try Kotlin online at play.kotlinlang.org.

In IntelliJ IDEA and Android Studio, you can update the Kotlin Plugin to version 1.4.0-RC. See how to do this.

If you want to work on existing projects that were created before you installed the preview version, you need to configure your build for the preview version in Gradle or Maven. Note that unlike the previous preview versions, Kotlin 1.4.0-RC is also available directly from Maven Central, so you won’t have to manually add a dedicated EAP repository to your build files.
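As a sketch, configuring a Gradle build (Kotlin DSL) for the preview version might look like this; only the plugin version and a standard repository declaration are needed:

```kotlin
// build.gradle.kts — targeting the preview compiler
plugins {
    kotlin("jvm") version "1.4.0-RC"
}

repositories {
    // 1.4.0-RC is published to Maven Central,
    // so no extra EAP repository is required.
    mavenCentral()
}
```

Maven users set the same version in the `kotlin.version` property and the `kotlin-maven-plugin` configuration.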

You can download the command-line compiler from the GitHub release page.

You can use the following versions of the libraries published together with this release:

The release details and the list of compatible libraries are also available here.

Share your feedback

We’ll be very thankful if you find and report bugs to our issue tracker. We’ll try to fix all the important issues before the final release, which means you won’t need to wait until the next Kotlin release for your issues to be addressed.

You are also welcome to join the #eap channel in Kotlin Slack (get an invite here). In this channel, you can ask questions, participate in discussions, and get notifications about new preview builds.

Let’s Kotlin!

External contributions

We’d like to thank all of our external contributors whose pull requests were included in this release:

Continue Reading Kotlin 1.4.0-RC Released
