Amper Update – February 2024

Amper is an experimental project configuration tool by JetBrains. With the 0.2.0 release and its accompanying tooling, we have some exciting feature updates and highlights to share.

Amper now supports Gradle version catalogs, completion for adding new dependencies, and more! Read on for more details.

Gradle version catalogs

To make it easier to add Amper to an existing project that uses Gradle version catalogs, Amper can now access dependencies declared in these catalogs, using the $libs.library.name syntax.

The IDE offers completion for libraries declared in a catalog:

You can also navigate from an Amper manifest to referenced catalog entries and find the usages of catalog entries in Amper modules:

Intention actions make it easy to add a new dependency to a catalog, and inspections will warn you if you are using a dependency directly when it’s also available as a catalog entry:

Thanks to the declarative nature of both Amper manifests and the version catalog file, completion and inspections update in real time as you edit the catalog file, without requiring you to re-import the project.

Completion support for dependencies

To make it easier to find dependencies and their versions, you will now get completion for dependencies when editing an Amper manifest, powered by Package Search:

This completion feature is aware of which dependencies block you’re working in, and will only suggest compatible dependencies:

In this example, searching for Coil for all platforms offers only the KMP-ready 3.x releases, while searching for Coil for Android also shows the 2.x releases that are Android-only.

Creating Amper projects in IntelliJ IDEA

Starting from IntelliJ IDEA 2024.1 (currently available in EAP), you can create a new Kotlin project based on Amper when using the New Project wizard:

IDE features

Amper is supported in Fleet, IntelliJ IDEA, and Android Studio. These IDEs offer dedicated tooling for working with Amper module manifests.

You can easily run any Amper application from gutter icons in the manifest file:

On top of regular completion, nested completion of the schema makes it easy to find specific configuration values that you might want to set:

The quick documentation shortcut can help you discover the correct syntax and the available options for various configuration entries:

Get started with Amper

To get started, take a look at the tutorial and the sample projects. You can also check out the KMP template apps with shared UI and native UI.

If you’re using Fleet, the features shown here are available in version 1.30 and later. You can download the latest version using the JetBrains Toolbox App.

To get access to these Amper features in IntelliJ IDEA, use the 2024.1 EAP version or later and make sure you have the latest version of the Amper plugin installed.

Update to the latest version

If you’re already using Amper in a project, update it to the latest version:

plugins {
    id("org.jetbrains.amper.settings.plugin").version("0.2.0")
}

This new version also requires some additional repositories to be added:

pluginManagement {
    repositories {
        …

        // Amper repositories
 
        maven("https://maven.pkg.jetbrains.space/public/p/amper/amper")
        maven("https://www.jetbrains.com/intellij-repository/releases")
        maven("https://packages.jetbrains.team/maven/p/ij/intellij-dependencies")
    }
}

Share your feedback

Amper is still experimental and under active development. While you should not use it in production at this early stage, we encourage you to try Amper and share your thoughts about the tool with us. Your feedback will help shape the future of Amper!

To provide feedback about your experience, join the discussion in the #amper channel on the Kotlinlang Slack, or share your suggestions and ideas in an issue on YouTrack.


2023 Kotlin Chinese Developer Conference

2023 was an exciting year for Kotlin developers. Besides new language features and continued progress on the K2 compiler, KotlinConf officially returned. Kotlin Multiplatform, the most closely watched highlight, was officially declared stable in November, followed by a series of ecosystem updates and tool releases, including Compose Multiplatform support for iOS, Fleet support for KMP development, and Amper, a build tool focused on the user experience, once again expanding developers' ideas of what the Kotlin ecosystem can be.

The Kotlin Chinese Developer Conference is a free online live-streamed event co-organized by the JetBrains team and the Chinese-speaking Kotlin User Groups. At the event, the Kotlin team shares the latest Kotlin news, and community experts are invited to present cutting-edge technology and practical development experience. Since its first edition in 2021, the conference has become the best platform for Chinese-speaking Kotlin developers to present, learn, and exchange ideas. Every year it receives praise from many viewers, and as the end of the year approaches, many community members eagerly ask about the plans for this year's conference.

This year we are continuing the tradition and holding the Kotlin Chinese Developer Conference again. The conference will be streamed live over two days, December 16 (Saturday) and December 17 (Sunday), featuring 16 Kotlin experts from various companies with first-hand Kotlin Multiplatform case studies, a deep dive into the Kotlin compiler internals, development assisted by KSP, Kotlin on the backend, and more. For the first time there will be a live coding session in which we write a small game together using Compose. There will also be guests from the Kotlin education program and the KUG communities, sharing the Kotlin team's investment in education, a KUG member's experience participating in a Kotlin Multiplatform contest, and KUG community organization. With such a rich program, this is an opportunity not to be missed!

Reserve the live stream now

The event will be streamed simultaneously on the official JetBrains China Bilibili channel and WeChat Channels account, where you can interact with other viewers through on-screen comments and the live-stream chat group, with a chance to win prizes provided by JetBrains, including IntelliJ IDEA license grand prizes. If you don't have time to attend, don't worry: the whole conference will be recorded, and the edited videos will be published afterwards. Follow the JetBrains WeChat official account for first-hand event information, and don't forget to reserve the stream in advance on the conference event page so that you receive an email reminder before it starts.

Conference Program

Saturday, December 16

The first day of the conference focuses on the latest Kotlin Multiplatform news and case studies, along with compiler internals and KSP use cases. The sessions are as follows:

14:00-14:30 2023 in Review and the Latest on Kotlin Multiplatform

The opening session pairs JetBrains developer advocate 范圣佑 with Pamela Hill, the Kotlin team's developer advocate dedicated to KMP. 范圣佑 will first review the key updates in the Kotlin ecosystem in 2023, and Pamela will then bring first-hand Kotlin Multiplatform news. If you want to know the latest in Kotlin, this is the session to watch!

Update: this session has ended. Watch the replay and download the speaker's slides.

14:30-15:10 Kotlin Multiplatform: A Rising Star in Cross-Platform Development

The second session features 刘银龙, a multiplatform development expert at Meituan, who will introduce his team's experience in choosing a multiplatform technology, explain how Meituan practices multiplatform development internally with Kotlin Multiplatform, and cover the problems encountered along the way and their solutions.

Update: this session has ended. Watch the replay and download the speaker's slides.

15:10-15:50 Be a Different Kind of Kotlin Developer: Getting Started With Kotlin Compiler Plugins

The Kotlin compiler also supports plugin development. In other words, through plugins, developers can change the compiler's behavior, which greatly expands what is possible. The third session features 黄惠勤, who works on internal build tooling at Meta's London office, giving an introductory course on Kotlin compiler plugin development. Her talk will deepen your understanding of compiler internals and the skills they require.

Update: this session has ended. Watch the replay and download the speaker's slides.

15:50-16:30 Backend Development With Reactive Quarkus, Coroutines, and MongoDB

In the fourth session, 苏芃翰, deputy CEO of 电獭, will introduce how his team combined Kotlin, coroutines, the Quarkus framework, MongoDB, and related technologies to build the backend of their digital advertising network service, and share what it is like to develop with fully reactive programming.

Update: this session has ended. Watch the replay and download the speaker's slides.

16:30-17:10 KSP (Kotlin Symbol Processing) Makes Everyone's Work Easier

Tired of writing large amounts of repetitive code? In the fifth session, Android GDE 叶楠 will show how to generate this tedious code automatically with KSP, reducing repetitive day-to-day tasks and making your development more efficient.

Update: this session has ended. Watch the replay and download the speaker's slides.

17:10-17:50 Building a Safer Gson and a Faster Moshi With KCP

In the first day's closing session, 江军祥, a Kotlin developer at 猿辅导, will present the open-source projects his team built with Kotlin compiler plugins, focusing on how compiler plugins can add null-safety and primary-constructor default-value support to Gson, and how they can simplify working with JSONReader.

Update: this session has ended. Watch the replay and download the speaker's slides.

Sunday, December 17

The second day brings a more diverse set of topics. Besides technical sessions on Android, the backend, and Compose, there are sessions on the Kotlin team's investment in education and on running Kotlin User Group communities. The sessions are as follows:

14:00-14:40 The Kotlin Education Program and a Multiplatform Contest Journey

The second day opens with JetBrains education advocate Ksenia Shneyveys and Nanjing KUG organizer 于瑞. Ksenia will first introduce the learning resources that JetBrains' Kotlin education program offers, and 于瑞 will then share his experience participating in last year's Kotlin Multiplatform contest and the open-source project he entered.

Update: this session has ended. Watch the replay and download Ksenia Shneyveys' slides and 于瑞's slides.

14:40-15:20 Squeezing Everything Out of Kotlin: The Magic Behind Compose

The second session features 朱凯, a GDE in both Android and Kotlin and a popular Bilibili creator, with an Android-focused talk on the Kotlin language features related to Jetpack Compose that developers are often not entirely clear about, including @DslMarker, StateFlow, and the implicit receivers of extensions. This talk will seriously level you up!

Update: this session has ended. Watch the replay and download the speaker's slides.

15:20-16:05 Demystifying How Kotlin Tech Communities Are Run

The third session brings together members of the Wuhan, Beijing, and Taiwan KUGs to talk about their experience organizing online and offline community events in 2023, including planning agendas, inviting speakers, calling for and reviewing talks, and preparing offline booths: all the techniques that helped their communities grow noticeably in quality. At the end, Kotlin community manager Maria Krishtal will preview KotlinConf 2024 and explain how to apply to host a KotlinConf Global event.

Update: this session has ended. Watch the replay and download the slides from 刘清晨, 江军祥, 廖健智, and 杨楚伶.

16:05-16:45 Bringing Java Projects Into the Microservices Era With Kotlin

Kotlin's interoperability with Java lets existing Java projects migrate to Kotlin smoothly, which not only lowers the cost of the transition but also lets development teams gradually get familiar with Kotlin on the job. Kotlin features such as coroutines, null safety, and extension functions also bring unique advantages to microservice development. The fourth session features Brandy Chang, technical deputy manager at TSMC, with a deep dive into how to put Kotlin's strengths to full use in practice.

Update: this session has ended. Watch the replay and download the speaker's slides.

16:45-17:25 Live Coding: Your First Mini-Game With Compose Multiplatform

Since its debut, Compose has shown developers its potential for UI rendering. Given its powerful rendering capabilities, can it be used to write games? In the final session of day two, JetBrains developer advocate 范圣佑 and senior engineer 刘长炯 will team up to demonstrate how to write a classic space-shooter mini-game with Compose Multiplatform!

Update: this session has ended. Watch the replay and download the speaker's slides.

17:25-17:50 Conference Panel Q&A

Before the event ends, the conference speakers will be invited to answer the audience's questions about their talks. You can also submit questions to the Kotlin team when you register, and the host will relay the answers during the event. Be sure to seize this rare opportunity to ask!

Reserve the live stream now

There's a prize draw, too!

The conference is sponsored by JetBrains, which is providing 2 one-year IntelliJ IDEA personal licenses per day, 4 in total across the two days. Winners will be drawn through quiz questions in the live-stream room at the end of each day, so be sure to attend the entire event to qualify for the draw.


Amper Update – December 2023

Last month JetBrains introduced Amper, a tool to improve the project configuration user experience. It offers concise, declarative configuration with sensible defaults for common use cases and carefully considered extension points.

Since then, we’ve received a lot of feedback from the community and have continued our development work as well. In this post, we’ll recap some of the highlights of what we’ve heard from you and talk a little about where we’re headed next.

Your feedback

At this early stage of the project, receiving feedback is of utmost importance, as it helps us build Amper in a way that meets real needs and solves real problems. We’re grateful to everyone who has already shared their thoughts with us, and we hope to hear more from you. Your comments have reaffirmed that the simplicity offered by Amper is indeed something you want.

We’ve also received a lot of feedback about our choice of YAML as the language of Amper’s module definition files. Rest assured that we’ve taken this feedback to heart. As stated in our initial announcement, this language choice is not final. We’re still evaluating the available options, and we look forward to sharing more on this topic soon.

New releases

Since the initial announcement, we’ve released a few updates, adding support for configuring Kover (a community contribution from Landry Norris, which we’re grateful for) and delivering bug fixes (AMPER-222, AMPER-256).

We’ve also published the source code for the current prototype implementation, which you can now browse in the GitHub repository of the project.

Amper is being actively developed, and we also have other exciting additions coming soon. These include one of the most commonly requested features: support for version catalogs. We’re also working on improved IDE capabilities, such as better typing assistance and code completion in your Amper manifest files.

How to try Amper

If you haven’t tried Amper yet, we encourage you to give it a go and share your thoughts on it with us. 

To get started, open your project in the latest version of IntelliJ IDEA or Fleet, and follow the setup instructions. You can also take a look at the tutorial, sample projects, and documentation to learn more.


A good approach for trying Amper in your own project is to create a separate branch where you replace your existing configuration with Amper modules. You can see this in action in our KMP app template repositories with shared Compose Multiplatform UI and with native UI implementations, which now have separate branches configured with Amper.

For example, here’s how the shared module of the template with native UI was converted from the original build file to its new module manifest that uses Amper.

Before / after converting a shared KMP module in the template project

Because Amper offers interoperability with Gradle, you can still use Gradle plugins and write custom Gradle tasks if necessary. For example, the module above uses this interop to include the Gradle plugin for SKIE in the project. You also have the option to keep the file layout that Gradle projects use to simplify migration.

To provide feedback about your experience, join the discussion in the #amper channel on the Kotlin Slack, or share your suggestions and ideas in an issue on YouTrack. Your input will really help shape Amper’s future, and we look forward to hearing from you.




We Are Improving Library Authors’ Experience!

A modern programming language ecosystem includes everything from testing frameworks, to libraries for machine learning, to web development frameworks. These options are usually provided to the community by library authors. 

The Kotlin team understands how essential authors’ work is for every user. That is why we want to support them by providing tools and documentation. Library authors’ experience is one of our key priorities for the Kotlin roadmap 2023.

In this post, we will tell you more about our plans and what’s already been done, particularly the Dokka documentation update.

Improving library authors’ experience

We’re focusing on improving the main pain points in documenting public APIs, helping library authors with API design, and providing a convenient development environment setup, including project templates and CI scripts. Read more about our plans and feel free to discuss them in dedicated tickets on YouTrack.

KDoc experience improvements (KT-55073):

  • Improve the formatting of KDoc and multiline comments.
  • Support links to specific overloads in KDoc comments.
  • Support highlights and suggestions for sample code in KDoc comments.
  • Provide an inspection for missing KDoc comments for the public API when the explicit-api mode is on. 

Dokka stable release (KT-48998):

  • Stabilize Dokka with the HTML output.
  • Release the fully revamped Dokka documentation. 

Kotlin API guide for library authors (KT-55077):

  • Provide a complete API guide for library authors.
  • Describe best practices for JVM and multiplatform library development.
  • List some tools that help in library development, including project setup, ensuring backward compatibility, and publishing.

Introducing the new Dokka documentation!

Take a look at the rewritten, more user-friendly Dokka documentation, and let us know your opinion. 

We added Groovy DSL examples for the Gradle project configuration and examples for Maven projects. This will help developers who use these scripting and build tools.

We also reorganized the structure of the pages, added more configuration examples, and provided descriptions of the configuration options to better onboard beginners. 

To learn more about planned improvements for library authors and other Kotlin plans, take a look at the Kotlin Roadmap.



What to Expect From the Kotlin Team in 2022–23: Key Projects and Productivity Features

Kotlin is commonly used for writing server-side, multiplatform, and Android apps, but there are tons of lesser known use cases. Among them are Minecraft plugin development, writing software for robots, or even creating PowerPoint presentations using Compose for Desktop. The number of use cases to address and potential improvements to implement is huge, and it’s important for us to focus on the things that would be most beneficial for you. Every year we select a number of key projects and work hard to deliver them. For 2022–2023, our key projects are:

  • Improve the quality and stability of Kotlin releases
  • Release the Beta version of the K2 compiler
  • Release the Stable version of Kotlin Multiplatform Mobile
  • Release the Alpha version of the Kotlin IntelliJ IDEA plugin with K2 support
  • Release the Stable version of the Kotlin/JS IR backend

These key projects, together with our other plans, constitute about 50 roadmap tickets. Some of them will affect your experience directly, while others might go unnoticed. To help you navigate through the Kotlin roadmap, we’ve divided the improvements into a few categories. We’d greatly appreciate it if you vote for the most important tickets and leave your feedback!

Please note that this is just a selection, not the whole roadmap.

If you want to save more time

Consider these improvements:

If you write multiplatform mobile apps

The following improvements particularly deserve your attention:

If you’re a library developer

Be sure to check out these improvements:

  • The stabilization of the Compiler Plugin API will give you a solid foundation for building your tools.
  • Namespaces support will help you build cleaner APIs by grouping declarations together under a common prefix.
  • The Dokka Stable release will improve your onboarding experience and polish Dokka’s API and layouts.

Pick the new features you’re most eagerly anticipating and share them in the comments section of this blog post or on Twitter!



Looking to the Future

Since Karumi's beginnings we have tried to bet on a way of working and on market trends we identified with. In 2013, when the idea of Karumi came about, there were no "boutique" consultancies in Spain, only a few freelancers, but nothing like what we intended to build. For example, from the start we were "remote first", or what is now called "full remote", and over the years we made other bets, such as seeking perks and care for our employees that took their mental health into account and allowed them to disconnect from work.

Some companies have told us they took us as a reference, and certainly some have followed in our footsteps, allowing us, in turn, to learn from them. Besides showing what we did and constantly re-evaluating ourselves as a company, one of Karumi's pillars is honesty: toward our clients, never lying to them or hiding truths; toward our employees, explaining what we do and why we do it; and toward the people who follow us in general. We have always been sincere in blog posts and at events, talking about the things we actually do rather than the things we merely claim to do.

For some time now we have found ourselves needing to be honest with ourselves, to think about what we believe and how we should act on it. In recent years some colleagues have sought new challenges outside Karumi, and we have opened new job positions. But since last year we have struggled to find profiles that fit what we are looking for, not because they don't exist, but because we cannot pay them what they deserve. Over the last two years we have watched the salary we were paying these people double. This is not remotely a bad thing, nor a criticism that it is happening; it is simply a reflection of reality, and we have said as much to the people who left Karumi: "You are doing the right thing, and you should seize these 'opportunities'. If we could, we would pay you that money too, because you deserve it."

Faced with this situation, we had two options. The first was to try to raise our client rates, something we have been working on for years and have gradually achieved through effort and hard work. Sadly, that increase is today nowhere near enough to raise salaries to competitive levels for the caliber of the people who work at Karumi.

The other option was to start hiring more junior profiles with the potential to grow into the kind of engineers we look for. The challenge is that this would change the company's identity: as a small group of people offering senior, high-quality work, we can afford a couple of more junior people who help out, learn, and grow with us, but by Karumi's nature we cannot be a group of juniors led by Davide and Jorge. We are also seeing that average tenure at companies has become very short, hovering around a year. That would not allow us to develop the seniors who would help us train new juniors, making the medium-to-long-term plan unviable.

We believe all these factors are here to stay, since the globalization of companies and the rise of remote work will make hiring easier for many companies, opening up a much wider range of possibilities. That is a great benefit for the industry, but it is also a challenge we would have to face in the coming years.

At this point we had to be honest with ourselves. We do not want to change Karumi's principles, become a kind of company we are not, or watch what we built slowly degrade.

On the other hand, this lack of hiring was causing internal problems that affected our clients, such as having to delay timelines or take on unnecessary workloads to deliver efficient work that met our quality standards, something we do not like and are not comfortable with.

With all of this, we concluded that it no longer made sense to continue Karumi as a consultancy. This means this eight-year adventure is coming to an end. It was a very difficult decision to make, but we are convinced it is the right one.

We spoke with our employees at the end of September and told them about the situation. We then spoke with our clients: we would stop working with them at the end of November.

With our clients, we have closed every workstream we had open and tried to find and recommend other companies of the same quality as Karumi for them to work with. In some cases we also trained their employees so they could maintain or extend our work.

As for Karumi's employees, the ones who would really suffer most from this situation, we took the following steps. The main one was talking to them early enough that they knew where things stood. A month before that conversation, we spoke with friendly companies in the industry, explained the situation, and sent them our people's CVs, so they would have a job alternative without having to look for one themselves. Next, although the projects would end in November, every employee will be paid their December salary, so they can spend that month however they prefer, whether resting or actively job hunting. We also told them that anyone who wanted to leave earlier could do so without any problem, since the important thing was for them to look after themselves above all. Our biggest concern over these last months has been that the Karumis were okay, and we have done everything in our power to make that happen.

As for us, Davide and Jorge, we still don't know what we will do. The first thing is to rest after this excellent eight-year adventure that life has given us, in which we met fascinating people and proved to ourselves that things can be done well, with care, treating employees and clients with honesty. We fervently believe that an honest, different kind of consultancy is possible, and that we must remember that software companies are made of people and that we must be "employee first".

We still have things to say with Karumi. It will remain as a brand, since we want to keep giving the occasional course, albeit with a different model. We also have an idea we want to develop under the brand, hopefully sooner rather than later, and we hope you'll like it!

And above all, thank you to Alberto Gragera, Miguel Lara, Irene Herranz, Pedro Vicente Gómez, Sergio Gutiérrez, Fran Fernández, Laura Perandones, Antonio López, Elena Mateos, Beatriz Hernández, Antonio Estévez, and Sergio Arroyo, for being part of this little family, for supporting us, helping us grow, and being the best people to share these wonderful eight years with. We love you!


How We Test Concurrent Primitives in Kotlin Coroutines

Today we would like to share how we test concurrency primitives in Kotlin Coroutines.

Many of our users are delighted with the experience of using coroutines to write asynchronous code. This does not come as a surprise, since with coroutines we are able to write it in a simple, direct style, with almost all the asynchronicity happening under the hood. For simplicity, we can think of coroutines in Kotlin as super-lightweight threads with some additional power, such as cancellability and structured concurrency. However, coroutines also make the code much safer. Traditional concurrent programming involves manipulating a shared mutable state, which is arguably error-prone. As an alternative, coroutines provide special communication primitives, such as Channel, to perform the synchronization. Using channels and a couple of other primitives as building blocks, we can construct extremely powerful things, such as the recently introduced Flows. However, nothing is free. While the message-passing approach is typically safer, the required primitives are not trivial to implement and, therefore, have to be tested as thoroughly as possible. At the same time, testing concurrent code may be as complicated as writing it.
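As a tiny illustration of this message-passing style (a sketch using kotlinx.coroutines, not code from the Coroutines codebase itself), the producer and consumer below synchronize exclusively through a Channel rather than through shared mutable state:

```kotlin
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

// Sum the squares of 1..n, with a producer and a consumer
// communicating over a Channel instead of sharing mutable state.
fun sumOfSquares(n: Int): Int = runBlocking {
    val channel = Channel<Int>()
    launch {
        for (x in 1..n) channel.send(x * x) // producer
        channel.close() // no more elements
    }
    var sum = 0
    for (y in channel) sum += y // consumer suspends until the next value arrives
    sum
}

fun main() {
    println(sumOfSquares(5)) // prints 55
}
```

The channel carries both the data and the synchronization: neither coroutine ever touches the other's state directly.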

That is why we have Lincheck – a special framework for testing concurrent data structures on the JVM. This framework’s main advantage is that it provides a simple and declarative way to write concurrent tests. Instead of describing how to perform the test, you specify what to test by declaring all the operations to examine along with the required correctness property (linearizability is the default one, but various extensions are supported as well) and restrictions (e.g. “single-consumer” for queues).

Lincheck Overview

Let’s consider a simple concurrent data structure, such as the stack algorithm presented below. In addition to the standard push(value) and pop() operations, we also implement a non-linearizable (thus, incorrect) size(), which increases and decreases the corresponding field following successful push and pop invocations.

// atomic() and loop {} come from the kotlinx.atomicfu library
import kotlinx.atomicfu.*

class Stack<T> {
    private val top = atomic<Node<T>?>(null)
    private val _size = atomic(0)

    fun push(value: T): Unit = top.loop { cur ->
        val newTop = Node(cur, value)
        if (top.compareAndSet(cur, newTop)) { // try to add
            _size.incrementAndGet() // <-- INCREMENT SIZE
            return
        }
    }

    fun pop(): T? = top.loop { cur ->
        if (cur == null) return null // is stack empty?
        if (top.compareAndSet(cur, cur.next)) { // try to retrieve
            _size.decrementAndGet() // <-- DECREMENT SIZE
            return cur.value
        }
    }

    val size: Int get() = _size.value
}

class Node<T>(val next: Node<T>?, val value: T)

To write a concurrent test for this stack without any tool, you have to manually run parallel threads, invoke the stack operations, and finally check that some sequential history can explain the obtained results. We have used such manual tests in the past, and all of them contained at least a hundred lines of boilerplate code. But with Lincheck, this machinery is automated, and tests become short and informative.

To write a concurrent test with Lincheck, you need to list the data structure operations and mark them with a special @Operation annotation. The initial state is specified in the constructor (here, we create a new TreiberStack<Int> instance). After that, we need to configure the testing modes, which can be done using special annotations on the test class – @StressCTest for stress testing and @ModelCheckingCTest for model checking mode. Finally, we can run the analysis by invoking the LinChecker.check(..) function on the testing class.

Let’s write a test for our incorrect stack:

@StressCTest
@ModelCheckingCTest
class StackTest {
    private val s = TreiberStack<Int>()

    @Operation fun push(value: Int) = s.push(value)
    @Operation fun pop() = s.pop()
    @Operation fun size() = s.size

    @Test fun runTest() = LinChecker.check(this::class)
}

There are just 12 lines of easy-to-follow code that describe the testing data structure, and that is all you need to do with Lincheck! It automatically:

  1. Generates several random concurrent scenarios.
  2. Examines each of them using either the stress or the model checking strategy, performing the number of scenario invocations specified by the user.
  3. Verifies that each of the invocation results satisfies the required correctness property (e.g. linearizability). For this step, we use a sequential specification, which is by default defined by the testing data structure itself, though a custom sequential specification can be set.

If an invocation hangs, fails with an exception, or produces an incorrect result, the test fails with an error similar to the one below. Here we smoothly come to the main advantage of the model checking mode. While Lincheck always provides a failure scenario with the incorrect results (if any are found), the model checking mode also provides a trace to reproduce the error. In the example below, the failing execution starts in the first thread, pushes 7 onto the stack, and stops before incrementing the size. The execution switches to the second thread, which retrieves the already pushed 7 from the stack and decreases the size to -1. The following size() invocation returns -1, which is incorrect since the size can never be negative. While the error seems obvious even without a trace, having the trace helps considerably when you are working on real-world concurrent algorithms, such as the synchronization primitives in Kotlin Coroutines.

= Invalid execution results =
| push(7): void | pop():  7  |
|               | size(): -1 |

= The following interleaving leads to the error =
| push(7)                                |                      |
|   top.READ: null                       |                      |
|       at Stack.push(Stack.kt:5)        |                      |
|   top.compareAndSet(null,Node@1): true |                      |
|       at Stack.push(Stack.kt:7)        |                      |
|   switch                               |                      |
|                                        | pop(): 7             |
|                                        | size(): -1           |
|                                        |   thread is finished |
|   _size.incrementAndGet(): 0           |                      |
|       at Stack.push(Stack.kt:8)        |                      |
|   result: void                         |                      |
|   thread is finished                   |                      |

Scenario Generation

The number of parallel threads and the number of operations in each of them can be configured via the @<Mode>CTest annotations. Lincheck generates the specified number of scenarios by filling the threads with randomly chosen operations. Note that the push(..) operation on the stack above takes an integer parameter value, which specifies the element to be inserted; accordingly, the error message above shows a specific scenario with an input value for the push(..) invocation. This parameter is also generated randomly, from a range that can likewise be configured. It is also possible to add restrictions like “single-consumer” so that the corresponding operations cannot be invoked concurrently.
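As a rough sketch of what such generation involves (plain Kotlin with hypothetical names, not Lincheck's actual implementation), each thread is filled with randomly chosen operations, and integer parameters are drawn from a configurable range:

```kotlin
import kotlin.random.Random

// Hypothetical model of a generated scenario: each thread gets a list of
// operations; push gets a random integer argument, pop/size take none.
data class Op(val name: String, val args: List<Int>)

fun generateScenario(
    threads: Int,
    opsPerThread: Int,
    paramRange: IntRange,
    random: Random = Random(42),
): List<List<Op>> = List(threads) {
    List(opsPerThread) {
        when (random.nextInt(3)) { // pick one of the three stack operations
            0 -> Op("push", listOf(random.nextInt(paramRange.first, paramRange.last + 1)))
            1 -> Op("pop", emptyList())
            else -> Op("size", emptyList())
        }
    }
}
```

Each generated scenario is then handed to the chosen testing mode, which runs it many times looking for incorrect results.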

Testing Modes

Lincheck runs each generated scenario in either stress testing or model checking mode. In stress testing mode, the scenario is executed on parallel threads many times in order to detect interleavings that yield incorrect results. In model checking mode, Lincheck examines many different interleavings with a bounded number of context switches. Compared to stress testing, model checking both increases test coverage and provides a failing execution trace for any incorrect behavior it finds. However, it assumes the sequentially consistent memory model, which means it ignores weak-memory effects. In particular, it cannot find a bug caused by a missed @Volatile. Therefore, in practice we use both stress testing and model checking.

Minimizing Failing Scenarios

When writing a test, it makes sense to configure it so that several threads are executed, each with several operations. However, most bugs can be reproduced with fewer threads and operations. Thus, Lincheck “minimizes” the failing scenario by trying to remove an operation from it and checking whether the test still fails. It continues removing operations until it is no longer possible to do so without making the test pass. While this greedy approach is not theoretically optimal, we find that it works well enough in practice.
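The greedy minimization loop described above can be sketched as follows (a simplified model; the hypothetical stillFails predicate stands in for actually re-running the test on the reduced scenario):

```kotlin
// Greedy scenario minimization: repeatedly try dropping one operation, and
// keep the smaller scenario whenever the test still fails on it.
fun <T> minimize(scenario: List<T>, stillFails: (List<T>) -> Boolean): List<T> {
    var current = scenario
    var shrunk = true
    while (shrunk) {
        shrunk = false
        for (i in current.indices) {
            val candidate = current.filterIndexed { j, _ -> j != i }
            if (stillFails(candidate)) {
                current = candidate
                shrunk = true
                break // restart the search from the smaller scenario
            }
        }
    }
    return current // no single operation can be removed anymore
}
```

With a failure that needs both a push and a pop to reproduce, this loop strips away every unrelated operation and returns just the essential pair.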

Model Checking Details

Let’s dig deeper into model checking, one of Lincheck’s most impressive features, and see how everything works under the hood. If you want to try Lincheck before going further, just add the org.jetbrains.kotlinx:lincheck:2.12 dependency to your Gradle or Maven project!

Before we started working on model checking, we already had Lincheck with stress testing mode. While it provided a nice way to write tests and significantly improved their quality, we were still spending countless hours trying to understand how to reproduce a discovered bug. Spending all this time motivated us to find a way to automate the process. That is how we came to adapt the bounded model checking approach for Lincheck. The idea was simple: once we implemented model checking, Lincheck would be able to automatically provide an interleaving that reproduced the found bug, meaning we would no longer need to spend hours trying to understand it. One of the main challenges was finding a way to make everything work without requiring any changes to the user's code.

In our experience, most bugs in complicated concurrent algorithms can be reproduced using the sequential consistency memory model. At the same time, model checking approaches for weak memory models are very complicated, so we decided to use bounded model checking under the sequential consistency memory model. Our approach was originally inspired by the CHESS framework for C#, which studies all possible schedules with a bounded number of context switches by fully controlling the execution and placing context switches at different locations in threads. In contrast to CHESS, Lincheck bounds the number of schedules to be studied instead of the number of context switches. This way, the testing time is predictable independently of the scenario size and algorithm complexity.

In short, Lincheck starts by studying all interleavings with one context switch, but does this evenly, trying to explore a variety of interleavings simultaneously. This way, we increase the total coverage if the number of available invocations is insufficient to cover all the interleavings. Once all the interleavings with one context switch have been examined, Lincheck starts examining interleavings with two context switches, and repeats the process, increasing the number of context switches each time, until the number of invocations exceeds the maximum available or all possible interleavings are covered. This strategy increases the testing coverage and allows you to find an incorrect schedule with the lowest possible number of context switches, which is a significant improvement for bug investigation.
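To make the bound concrete, here is a toy function (not Lincheck code) that counts how many interleavings of two threads fit within a given number of context switches:

```kotlin
// Count interleavings of two threads (with n1 and n2 operations) that use
// at most `maxSwitches` context switches. `running` is the thread that
// executed the previous operation (0 = none yet).
fun countInterleavings(n1: Int, n2: Int, maxSwitches: Int): Int {
    fun go(r1: Int, r2: Int, running: Int, switches: Int): Int {
        if (r1 == 0 && r2 == 0) return 1 // a complete interleaving
        var total = 0
        if (r1 > 0) {
            val s = if (running == 0 || running == 1) switches else switches + 1
            if (s <= maxSwitches) total += go(r1 - 1, r2, 1, s)
        }
        if (r2 > 0) {
            val s = if (running == 0 || running == 2) switches else switches + 1
            if (s <= maxSwitches) total += go(r1, r2 - 1, 2, s)
        }
        return total
    }
    return go(n1, n2, 0, 0)
}

fun main() {
    // With two operations per thread, only 1-1-2-2 and 2-2-1-1 fit in one switch.
    println(countInterleavings(2, 2, 1)) // 2
}
```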

Switch Points
Lincheck inserts switch points into the testing code to control the execution. These points identify where a context switch can be performed. The interesting switch point locations are shared memory accesses, such as field and array element reads or updates in the JVM. These reads or updates can be performed by either the corresponding bytecode instructions or special methods, like the ones in the AtomicFieldUpdater or Unsafe classes. To insert a switch point, we transform the testing code via the ASM framework, adding our internal function invocations before each shared memory access. The transformation is performed on the fly using a custom class loader, which means that, technically, we create a transformed copy of the testing code.

Interleaving Tree
To explore different possible schedules, Lincheck constructs an interleaving tree, the edges of which represent choices that can be performed by the scheduler. The figure below presents a partially built interleaving tree with one context switch for the scenario from the overview. First, Lincheck decides where the execution should be started. From there, several switch points are available to be examined. In the figure, only the first switch point related to top.READ is fully explored.

To explore interleavings evenly, Lincheck tracks the percentage of explored interleavings for each subtree, using weighted randomness to make its choices about where to go. The weights are proportional to the ratios of unobserved interleavings. In our example above, Lincheck is more likely to start the next interleaving from the second thread. Since the exact number of interleavings is unknown, all children have equal weights initially. After all the current tree’s interleavings have been explored, Lincheck increases the number of context switches, thus increasing the tree depth.
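A weighted random choice proportional to the unexplored fractions can be sketched like this (a simplification; Lincheck's actual bookkeeping is more involved):

```kotlin
import kotlin.random.Random

// Pick a child subtree with probability proportional to its estimated fraction
// of unexplored interleavings. Returns -1 when everything has been explored.
fun chooseSubtree(unexploredFractions: List<Double>, random: Random = Random.Default): Int {
    val total = unexploredFractions.sum()
    if (total == 0.0) return -1
    var r = random.nextDouble() * total
    for ((i, w) in unexploredFractions.withIndex()) {
        r -= w
        if (r < 0) return i
    }
    return unexploredFractions.lastIndex // guard against rounding error
}

fun main() {
    // The fully explored first child (fraction 0.0) is never chosen.
    println(chooseSubtree(listOf(0.0, 0.7, 0.3)))
}
```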

Execution Trace
The key advantage of model checking mode is that it provides an execution trace that reproduces the error. To increase its readability, Lincheck captures the arguments and results of each shared variable access (such as reads, writes, and CAS operations), along with information about the accessed fields. For example, in the trace in the listing from the overview, the first event is a read of null from the top field, and the source code location is also provided. At the same time, if a function with multiple shared variable accesses is executed without a context switch in the middle, Lincheck shows this function invocation with the corresponding arguments and result instead of the events inside it. This also makes the trace easier to follow, as the second thread in the trace demonstrates.

To sum up

We hope you enjoyed reading this post and found Lincheck interesting and useful. On the Kotlin team, we successfully use Lincheck both to test existing concurrent data structures in Kotlin Coroutines and to develop new algorithms more productively and enjoyably. As a final note, we would like to say that Lincheck tests should be treated like regular unit tests — they can check your data structure against the specified correctness contract, nothing more, nothing less. So, even if you have Lincheck tests, do not hesitate to add integration tests that cover how your data structure is used in the real world.

Want to try Lincheck? Just add the org.jetbrains.kotlinx:lincheck:2.12 dependency to your Gradle or Maven project and enjoy!

Lincheck on GitHub

The post How We Test Concurrent Primitives in Kotlin Coroutines first appeared on JetBrains Blog.


1.4.30 Is Released With a New JVM Backend and Language and Multiplatform Features

Kotlin 1.4.30 is now available. This is the last incremental release in the 1.4 series, and it includes many new experimental features that we plan to make stable in 1.5.0. We would really appreciate it if you would try them out and share your feedback with us.

What’s changed in this release:

Language features and compiler

We’ve decided to cover two of the significant updates in separate blog posts so that we can provide more details on these features.

Compiler

The new JVM backend reached Beta, and it now produces stable binaries. This means you can safely use it in your projects.

More details about the update, ways to enable the new JVM IR backend, and how you can help stabilize it can be found here.

New language features preview

Among the new language features we plan to release in Kotlin 1.5.0 are inline value classes, JVM records, and sealed interfaces. You can read more details about them in this post, and here’s a brief overview:

Inline classes. Inline classes were previously a separate language feature, but they are now becoming a specific JVM optimization for a value class with one parameter. Value classes represent a more general concept and will support different optimizations in the future: inline classes now, and Valhalla primitive classes later, when Project Valhalla becomes available.

Java records. Another upcoming improvement in the JVM ecosystem is Java records. They are analogous to Kotlin data classes, mainly used as simple holders of data. Interoperability with Java always has been and always will be a priority for Kotlin. Kotlin code “understands” the new Java records and sees them as classes with Kotlin properties.

Sealed interfaces. Interfaces can be declared sealed as well as classes. The sealed modifier works on interfaces the same way: all implementations of a sealed interface are known at compile time. Once a module with a sealed interface is compiled, no new implementations can appear.

So now we kindly ask you to give these language features a try and share your feedback with us. We would like to know what expectations you have for them, the use cases where you want to apply these features, and any thoughts or ideas you have about them.

You can find a detailed description of the new language features and instructions on how to try them out in this blog post.

Build tools

Configuration cache support in Kotlin Gradle Plugin

As of Kotlin 1.4.30, the Kotlin Gradle plugin is fully compatible with the Gradle configuration cache, which speeds up the build process. For example, Square, which uses Kotlin for Android, has a build (Android, Java, Kotlin) of 1,800 modules. Its team reports the following numbers:

  • The very first build took 16 minutes 30 seconds.
  • The second one was much shorter; it took 5 minutes 45 seconds.
    More specifically, for Square, Configuration Cache saves 1 minute 10 seconds of configuration and task graph creation per build.

When you run a build, Gradle executes the configuration phase and calculates the task graph. Gradle caches the result and reuses it for subsequent builds, saving you time.

To start using this feature, use the Gradle command or set up your IntelliJ-based IDE. And if anything doesn’t work as expected, please report it via YouTrack.
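For example, it can be enabled from the command line or persistently in gradle.properties. The exact flag and property names depend on your Gradle version; the ones below existed in Gradle 6.x, so treat this as a sketch and check your Gradle documentation:

```
# From the command line:
gradle build --configuration-cache

# Or persistently, in gradle.properties:
org.gradle.unsafe.configuration-cache=true
```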

Please note that this feature is still in Alpha for multiplatform.

Kotlin/Native

Compilation time improvements

We’ve improved compilation time in 1.4.30. The time taken to rebuild the KMM Networking and data storage sample framework has been reduced from 9.5 seconds (in 1.4.10) to 4.5 seconds (in 1.4.30).

We plan to continue optimizing the compiler; you can follow the progress in this YouTrack issue.

64-bit watchOS simulator support

In Kotlin 1.3.60, we introduced support for building Kotlin apps for Apple Watch simulators. Since then, the Apple Watch simulator architecture has changed from i386 to x86_64, creating issues for developers targeting it. The new Kotlin/Native watchosX64 target can be used to run the watchOS simulator on the 64-bit architecture, and it works with watchOS starting from version 7.0.

Xcode 12.2 SDK support

Kotlin/Native now supports Xcode 12.2. The macOS frameworks that have been added to the Xcode 12.2 release can be used with this Kotlin update. For example, the MLCompute framework is now available for users developing applications for macOS.

Kotlin/JS

Prototype lazy initialization for top-level properties

We’ve made lazy initialization of top-level properties available as Experimental. You can read more about this in What’s new.

Standard library

Locale-agnostic API for upper/lowercasing text

This release introduces an experimental locale-agnostic API for changing the case of strings and characters. The current toLowerCase(), toUpperCase(), capitalize(), and decapitalize() functions are locale-sensitive, which is not obvious and can be inconvenient: different platform locale settings affect the code's behavior. For example, in the Turkish locale, converting the string “kotlin” with toUpperCase results in “KOTLİN”, not “KOTLIN”. The new functions use the root locale by default, so they work as expected regardless of platform settings.
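For example, the replacement functions (shown here with the names they eventually stabilized under in Kotlin 1.5) use the root locale by default:

```kotlin
fun main() {
    // Locale-agnostic by default: the result does not depend on platform locale.
    println("kotlin".uppercase()) // KOTLIN
    println("KOTLIN".lowercase()) // kotlin
}
```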

You can find the full list of changes of text processing functions in KEEP. Remember that this API is experimental, and please share your feedback with us in YouTrack.

Unambiguous API for Char conversion

The current conversion functions from Char to numbers, which return its UTF-16 code expressed in different numeric types, are often confused with the similar String-to-Int conversion, which returns the numeric value of a string.

To avoid this confusion, we’ve decided to separate Char conversions into the following two sets of clearly named functions: functions to get the integer code of a Char and to construct a Char from a code, and functions to convert a Char to the numeric value of the digit it represents.

This feature is also Experimental, but we plan to make it stable for the 1.5.0 release. See more details in KEEP.
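The split looks roughly like this (function names as they later stabilized in Kotlin 1.5):

```kotlin
fun main() {
    val c = '7'
    println(c.code)         // 55: the UTF-16 code of the character
    println(c.digitToInt()) // 7: the numeric value of the digit it represents
    println(Char(55))       // 7: constructing a Char back from its code
}
```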

Find more about all the updates for 1.4.30 in What’s new and New JVM Backend and Language Features blog posts.

How to update

IntelliJ IDEA will suggest updating the Kotlin plugin to 1.4.30 automatically, or you can update it manually by following these instructions. The Kotlin plugin for Android Studio Arctic Fox will be released later.

If you want to work on existing projects created with previous versions of Kotlin, use the 1.4.30 Kotlin version in your project configuration. For more information, see the docs for Gradle and Maven.

You can download the command-line compiler from the GitHub release page.

The release details and the list of compatible libraries are available here.

If you run into any problems with the new release, you can find help on Slack (get an invite here) and report issues in our YouTrack.

Before updating your projects to the latest version of Kotlin, you can try the new language and standard library features online at play.kotl.in.

External contributors

We want to thank all of our external contributors whose pull requests were included in this release:

Jinseong Jeon
Toshiaki Kameyama
pyos
Mads Ager
Steven Schäfer
Mark Punzalan
Ivan Gavrilovic
Kristoffer Andersen
Bingran
Juan Chen
zhelenskiy
Kris
Hung Nguyen
Victor Turansky
AJ
Louis CAD
Kevin Bierhoff
Hollow Man
Francesco Vasco
Uzi Landsmann
Dominik Wuttke
Derek Bodin
Ciaran Treanor
rbares
Martin Petrov
Yuya Urano
KotlinIsland
Jiaxiang Chen
Jake Wharton
Sam Wang
MikeKulasinski-visa
Matthew Gharrity
Mikhail Likholetov


New Language Features Preview in Kotlin 1.4.30

We’re planning to add new language features in Kotlin 1.5, and you can already try them out in Kotlin 1.4.30:

To try these new features, you need to specify the 1.5 language version.

The new release cadence means that Kotlin 1.5 is going to be released in a few months, but new features are already available for preview in 1.4.30. Your early feedback is crucial to us, so please give these new features a try now!

Stabilization of inline value classes

Inline classes have been available in Alpha since Kotlin 1.3, and in 1.4.30 they are promoted to Beta.

Kotlin 1.5 stabilizes the concept of inline classes but makes it a part of a more general feature, value classes, which we will describe later in this post.

We’ll begin with a refresher on how inline classes work. If you are already familiar with inline classes, you can skip this section and go directly to the new changes.

As a quick reminder, an inline class eliminates a wrapper around a value:

inline class Color(val rgb: Int)

An inline class can be a wrapper both for a primitive type and for any reference type, like String.

The compiler replaces inline class instances (in our example, the Color instance) with the underlying type (Int) in the bytecode, when possible:

fun changeBackground(color: Color) { /* ... */ }
val blue = Color(255)
changeBackground(blue)

Under the hood, the compiler generates the changeBackground function with a mangled name taking Int as a parameter, and it passes the 255 constant directly without creating a wrapper at the call site:

fun changeBackground-euwHqFQ(color: Int) { /* ... */ }
changeBackground-euwHqFQ(255) // no extra object is allocated!

The name is mangled to allow the seamless overload of functions taking instances of different inline classes and to prevent accidental invocations from the Java code that could violate the internal constraints of an inline class. Read below to find out how to make it usable from Java.

The wrapper is not always eliminated in the bytecode. This happens only when possible, and it works very similarly to built-in primitive types. When you define a variable of the Color type or pass it directly into a function, it gets replaced with the underlying value:

val color = Color(0)        // primitive
changeBackground(color)     // primitive

In this example, the color variable has the type Color during compilation, but it’s replaced with Int in the bytecode.

If you store it in a collection or pass it to a generic function, however, it gets boxed into a regular object of the Color type:

genericFunc(color)         // boxed
val list = listOf(color)   // boxed
val first = list.first()   // unboxed back to primitive

Boxing and unboxing are done automatically by the compiler. You don’t need to do anything about it yourself, but it’s useful to understand the internals.

Changing JVM name for Java calls

Starting from 1.4.30, you can change the JVM name of a function taking an inline class as a parameter to make it usable from Java. By default, such names are mangled to prevent accidental usages from Java or conflicting overloads (like changeBackground-euwHqFQ in the example above).

If you annotate a function with @JvmName, it changes the name of this function in the bytecode and makes it possible to call it from Java and pass a value directly:

// Kotlin declarations
inline class Timeout(val millis: Long)

val Int.millis get() = Timeout(this.toLong())
val Int.seconds get() = Timeout(this * 1000L)

@JvmName("greetAfterTimeoutMillis")
fun greetAfterTimeout(timeout: Timeout) { /* ... */ }

// Kotlin usage
greetAfterTimeout(2.seconds)

// Java usage
greetAfterTimeoutMillis(2000);

As always with a function annotated with @JvmName, from Kotlin you call it by its Kotlin name. Kotlin usage is type-safe, since you can only pass a value of the Timeout type as an argument, and the units are obvious from the usage.

From Java you can pass a long value directly. It’s no longer type-safe, and that’s why it doesn’t work by default. If you see greetAfterTimeout(2) in the code, it’s not immediately obvious whether it’s 2 seconds, 2 milliseconds, or 2 years.

By providing the annotation you explicitly emphasize that you intend this function to be called from Java. A descriptive name helps avoid confusion: adding the “Millis” suffix to the JVM name makes the units clear for Java users.

Init blocks

Another improvement for inline classes in 1.4.30 is that you now can define initialization logic in the init block:

inline class Name(val s: String) {
   init {
       require(s.isNotEmpty())
   }
}

This was previously forbidden.

You can read more details about inline classes in the corresponding KEEP, in the documentation, and in the discussion under this issue.

Inline value classes

Kotlin 1.5 stabilizes the concept of inline classes and makes it a part of a more general feature: value classes.

Until now, “inline” classes constituted a separate language feature, but they are now becoming a specific JVM optimization for a value class with one parameter. Value classes represent a more general concept and will support different optimizations: inline classes now, and Valhalla primitive classes in the future when project Valhalla becomes available (more about this below).

The only thing that changes for you at the moment is syntax. Since an inline class is an optimized value class, you have to declare it differently than you used to:

@JvmInline
value class Color(val rgb: Int)

You define a value class with one constructor parameter and annotate it with @JvmInline. We expect everyone to use this new syntax starting from Kotlin 1.5. The old syntax inline class will continue to work for some time. It will be deprecated with a warning in 1.5 that will include an option to migrate all your declarations automatically. It will later be deprecated with an error.

Value classes

A value class represents an immutable entity with data. At the moment, a value class can contain only one property to support the use-case of “old” inline classes.

In future Kotlin versions with full support for this feature, it will be possible to define value classes with many properties. All the values should be read-only vals:

value class Point(val x: Int, val y: Int)

Value classes have no identity: they are completely defined by the data stored and === identity checks aren’t allowed for them. The == equality check automatically compares the underlying data.
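For instance, with the current single-property syntax (requires Kotlin 1.5+):

```kotlin
@JvmInline
value class Color(val rgb: Int)

fun main() {
    val a = Color(0xFF0000)
    val b = Color(0xFF0000)
    println(a == b)    // true: == compares the underlying data
    // println(a === b) // does not compile: identity checks are forbidden
}
```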

This “identityless” quality of value classes allows significant future optimizations: project Valhalla’s arrival to the JVM will allow value classes to be implemented as JVM primitive classes under the hood.

The immutability constraint, and therefore the possibility of Valhalla optimizations, makes value classes different from data classes.

Future Valhalla optimization

Project Valhalla introduces a new concept to Java and the JVM: primitive classes.

The main goal of primitive classes is to combine performant primitives with the object-oriented benefits of regular JVM classes. Primitive classes are data holders whose instances can be stored in variables, on the computation stack, and operated on directly, without headers and pointers. In this regard, they are similar to primitive values like int, long, etc. (in Kotlin, you don’t work with primitive types directly but the compiler generates them under the hood).

An important advantage of primitive classes is that they allow the flat and dense layout of objects in memory. Currently, Array<Point> is an array of references. With Valhalla support, when defining Point as a primitive class (in Java terminology) or as a value class with the underlying optimization (in Kotlin terminology), the JVM can optimize it and store an array of Points in a “flat” layout, as an array of many xs and ys directly, not as an array of references.

We’re really looking forward to the upcoming JVM changes and we want Kotlin to benefit from them. At the same time, we don’t want to force our community to depend on new JVM versions to use value classes, and so we are going to support it for earlier JVM versions as well. When compiling the code to the JVM with Valhalla support, the latest JVM optimizations will work for value classes.

Mutating methods

There’s much more to say about the functionality of value classes. Since value classes represent “immutable” data, mutating methods, like those in Swift, become possible for them. A mutating method is a member function or property setter that returns a new instance rather than updating the existing one, and its main benefit is that you use it with familiar syntax. This still needs to be prototyped in the language.

More details

The @JvmInline annotation is JVM-specific. On other backends, value classes can be implemented differently, for instance, as Swift structs in Kotlin/Native.

You can read the details about value classes in the Design Note for Kotlin value classes, or watch an extract from Roman Elizarov’s “A look into the future” talk.

Support for JVM records

Another upcoming improvement in the JVM ecosystem is Java records. They are analogous to Kotlin data classes and are mainly simple holders of data.

Java records don’t follow the JavaBeans convention, and they have ‘x()’ and ‘y()’ methods instead of the familiar ‘getX()’ and ‘getY()’.

Interoperability with Java always was and remains a priority for Kotlin. Thus, Kotlin code “understands” new Java records and sees them as classes with Kotlin properties. This works like it does for regular Java classes following the JavaBeans convention:

// Java
record Point(int x, int y) { }
// Kotlin
fun foo(point: Point) {
    point.x // seen as property
    point.x() // also works
}

Mainly for interoperability reasons, you can annotate your data class with @JvmRecord to have new JVM record methods generated:

@JvmRecord
data class Point(val x: Int, val y: Int)

The @JvmRecord annotation makes the compiler generate x() and y() methods instead of the standard getX() and getY() methods. We assume that you only need this annotation to preserve the API of a class when converting it from Java to Kotlin. In all other use cases, Kotlin’s familiar data classes can be used instead without issue.

This annotation is only available if you compile Kotlin code to JVM version 15 or higher. You can read more about this feature in the corresponding KEEP, in the documentation, and in the discussion in this issue.

Sealed interfaces and sealed classes improvements

When you make a class sealed, it restricts the hierarchy to a set of defined subclasses, which allows exhaustive checks in when branches. In Kotlin 1.4, the sealed class hierarchy comes with two constraints. First, the root of the hierarchy must be a class: an interface can’t be sealed. Second, all the subclasses must be located in the same file.

Kotlin 1.5 removes both constraints: you can now make an interface sealed. The subclasses (of both sealed classes and sealed interfaces) must be located in the same compilation unit and in the same package as the superclass, but they can now be located in different files.

sealed interface Expr
data class Const(val number: Double) : Expr
data class Sum(val e1: Expr, val e2: Expr) : Expr
object NotANumber : Expr

fun eval(expr: Expr): Double = when(expr) {
    is Const -> expr.number
    is Sum -> eval(expr.e1) + eval(expr.e2)
    NotANumber -> Double.NaN
}

Sealed classes, and now interfaces, are useful for defining abstract data type (ADT) hierarchies.

Another important use case that can now be nicely addressed with sealed interfaces is closing an interface for inheritance outside the library. Defining an interface as sealed restricts its implementation to the same compilation unit and the same package, which in the case of a library, makes it impossible to implement outside of the library.

For example, the Job interface from the kotlinx.coroutines package is only intended to be implemented inside the kotlinx.coroutines library. Making it sealed makes this intention explicit:

package kotlinx.coroutines
sealed interface Job { ... }

As a user of the library, you are no longer allowed to define your own subclass of Job. This was always “implied”, but with sealed interfaces, the compiler can formally forbid that.

Using JVM support in the future

Preview support for sealed classes was introduced in Java 15 and the corresponding JVM. In the future, we’re going to use the native JVM support for sealed classes if you compile Kotlin code to the latest JVM (most likely JVM 17 or later, when this feature becomes stable).

In Java, you explicitly list all the subclasses of the given sealed class or interface:

// Java
public sealed interface Expression
    permits Const, Sum, NotANumber { ... }

This information is stored in the class file using the new PermittedSubclasses attribute. The JVM recognizes sealed classes at runtime and prevents their extension by unauthorized subclasses.

In the future, when you compile Kotlin to the latest JVM, this new JVM support for sealed classes will be used. Under the hood, the compiler will generate the permitted subclasses list in the bytecode, enabling JVM-level enforcement and additional runtime checks.

// for JVM 17 or later
Expr::class.java.permittedSubclasses // [Const, Sum, NotANumber]

In Kotlin, you don’t need to specify the subclasses list! The compiler will generate the list based on the declared subclasses in the same package.

The ability to explicitly specify the subclasses for a super class or interface might be added later as an optional specification. At the moment we suspect it won’t be necessary, but we’ll be happy to hear about your use cases, and whether you need this functionality!

Note that for older JVM versions it’s theoretically possible to define a Java subclass of a Kotlin sealed interface, but don’t do it! Since JVM support for permitted subclasses is not yet available, this constraint is enforced only by the Kotlin compiler. We’ll add IDE warnings to prevent this from being done accidentally. In the future, the new mechanism will be used for the latest JVM versions to ensure there are no “unauthorized” subclasses from Java.

You can read more about sealed interfaces and the loosened sealed classes restrictions in the corresponding KEEP or in the documentation, and see discussion in this issue.

How to try the new features

You need to use Kotlin 1.4.30. Specify language version 1.5 to enable the new features:

compileKotlin {
    kotlinOptions {
        languageVersion = "1.5"
        apiVersion = "1.5"
    }
}

To try JVM records, you additionally need to use jvmTarget 15 and enable JVM preview features: add the compiler options -language-version 1.5 and -Xjvm-enable-preview.
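Assuming the Groovy DSL snippet above, the combined configuration might look like this (a sketch; adapt it to your build setup):

```
compileKotlin {
    kotlinOptions {
        languageVersion = "1.5"
        apiVersion = "1.5"
        jvmTarget = "15"
        freeCompilerArgs += ["-Xjvm-enable-preview"]
    }
}
```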

Pre-release notes

Note that support for the new features is experimental and the 1.5 language version support is in the pre-release status. Setting the language version to 1.5 in the Kotlin 1.4.30 compiler is equivalent to trying the 1.5-M0 preview. The backward compatibility guarantees do not cover pre-release versions. The features and the API may change in subsequent releases. When we reach a final Kotlin 1.5-RC, all binaries produced by pre-release versions will be outlawed by the compiler, and you will be required to recompile everything that was compiled by 1.5‑Mx.

Share your feedback

Please try the new features described in this post and share your feedback! You can find further details in KEEPs and take part in the discussions in YouTrack, and you can report new issues if something doesn’t work for you. Please share your findings regarding how well the new features address use cases in your projects!

Further reading and discussion:


The JVM Backend Is in Beta: Let’s Make It Stable Together

We’ll make the new backend Stable soon, and we need each of you to adopt it. Let’s see how to do that.

We have been working to implement a new JVM IR backend as part of our ongoing project to rewrite the whole compiler. This new compiler will boost performance both for Kotlin users and for the Kotlin team itself by providing a versatile infrastructure that makes it easy to add new language features.

Our work on the JVM IR backend is nearly complete, and we’ll be making it Stable soon. Before we can do that, though, we need you to use it. In Kotlin 1.4.30, we’re making the new backend produce stable binaries, which means you will be able to safely use it in your projects. Read on to learn about the changes this new backend brings, as well as how to contribute to the process of finalizing this part of the compiler.

What changes with the new backend:

  • We’ve fixed a number of bugs that were present in the old backend.
  • The development of new language features will be much faster.
  • We will add all future performance improvements to the new JVM backend.
  • Jetpack Compose will only work with the new backend.

Another point in favor of starting to use the new JVM IR backend now is that it will become the default in Kotlin 1.5.0. Before we make it the default, we want to make sure we fix as many bugs as we can, and by adopting the new backend early you will help ensure that the migration is as smooth as possible.

To start using the new JVM IR backend

  1. Update the Kotlin dependency to 1.4.30 in your project.
  2. In the build configuration file, add the following lines to the target platform block of your project/module to turn the new compiler on.
    For Gradle add the following:

    • In Groovy
      compileKotlin {
          kotlinOptions.useIR = true
      }
      
    • In Kotlin
          import org.jetbrains.kotlin.gradle.tasks.KotlinCompile
          // ...
          val compileKotlin: KotlinCompile by tasks
          compileKotlin.kotlinOptions.useIR = true
      

    And for Maven, add the -Xuse-ir compiler argument to the Kotlin plugin's configuration:

    <configuration>
        <args>
            <arg>-Xuse-ir</arg>
        </args>
    </configuration>
  3. Make a clean build and run your tests after enabling the new backend to verify that your project still compiles and behaves as expected.
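If a module has several Kotlin compile tasks (for example, for main and test sources), you can enable the backend for all of them at once instead of configuring each task by name. A minimal Gradle Kotlin DSL sketch, assuming the module applies a Kotlin JVM plugin:

```kotlin
import org.jetbrains.kotlin.gradle.tasks.KotlinCompile

// Enable the new JVM IR backend for every Kotlin compile task in the
// module (compileKotlin, compileTestKotlin, and so on).
tasks.withType<KotlinCompile> {
    kotlinOptions.useIR = true
}
```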

You shouldn’t notice any difference, but if you do, please report it in YouTrack or send us a message in this Slack channel (get an invite here). When you do, please attach a list of steps to reproduce the problem and a code sample if possible.

You can switch back to the old backend at any time simply by removing the lines added in step 2 and rebuilding the project.

Continue reading: The JVM Backend Is in Beta: Let’s Make It Stable Together

Kotlin Plugin Released With IDEA 2020.3

We’ve changed the release cycle of the Kotlin plugin, so that all major updates are now synced with IntelliJ IDEA releases.

In this release:

  • Inline refactoring is now cross-language: you can apply it to Kotlin elements that are referenced from Java, and the conversion will be performed automatically.
  • Now you can search and replace parts of your code based on its structure.
  • Building performant and beautiful user interfaces is now easy with the new experimental Jetpack Compose templates.

New infrastructure and release cycle

We now ship new versions of the Kotlin plugin with every release of the IntelliJ Platform and every new version of Kotlin. IDE support for the latest version of the language will be available for the latest two versions of IntelliJ IDEA and for the latest version of Android Studio.

We’ve made this change to minimize the time it takes us to apply platform changes, and to make sure that the IntelliJ team gets the latest Kotlin IDE fixes quickly and with enough time to test them.

Inline refactoring

The Kotlin plugin now supports cross-language conversion. Starting with version 2020.3 of the Kotlin plugin, you can use inline refactoring actions for Kotlin elements referenced from Java.

You can apply them via Refactor | Inline… or with ⌥⌘N on macOS or Ctrl+Alt+N on Windows and Linux.

We’ve improved the inlining of lambda expressions. You no longer have to rewrite your code after you inline it; the IDE now analyzes the lambda syntax more thoroughly and formats the lambdas correctly.
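To illustrate what the refactoring does (the function names here are made up for the example), inlining a small helper replaces each call site with the helper’s body, including inside lambdas:

```kotlin
// Before: a helper used inside a lambda.
fun twice(x: Int): Int = x * 2

fun beforeInline(values: List<Int>): List<Int> = values.map { twice(it) }

// After applying Refactor | Inline… on `twice`, the IDE substitutes
// the body at the call site and formats the lambda correctly:
fun afterInline(values: List<Int>): List<Int> = values.map { it * 2 }

fun main() {
    // Both versions behave identically; only the source changes.
    println(beforeInline(listOf(1, 2, 3)))  // prints [2, 4, 6]
    println(afterInline(listOf(1, 2, 3)))   // prints [2, 4, 6]
}
```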

Structural Search and Replace for Kotlin

Structural search and replace (SSR) actions are now available for Kotlin: you can find and replace code patterns while taking the syntax and semantics of the source code into account.

To use the feature, go to Edit | Find | Search Structurally…. There you can write your own search template or choose Existing Templates… by clicking the tools icon in the top right corner of the window.

You can add filters for variables to narrow down your search, for example, the Type filter:
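As a sketch of what a search template can look like (this particular pattern is illustrative, not one of the bundled templates), $-delimited variables stand for the parts of the code that may vary between matches:

```kotlin
// Search template: finds filter-then-first chains
$sequence$.filter { $condition$ }.first()

// Replace template: collapses them into a single call
$sequence$.first { $condition$ }
```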

Full support of EditorConfig

Starting from 2020.3, the Kotlin plugin fully supports storing code formatting settings in .editorconfig files.
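For example, a .editorconfig section for Kotlin files could look like the following; the ij_kotlin_-prefixed property is an IntelliJ-specific extension, and the exact names and values here are illustrative:

```ini
# Applies to Kotlin sources and build scripts
[*.{kt,kts}]
indent_size = 4
max_line_length = 120
# IntelliJ-specific Kotlin settings use the ij_kotlin_ prefix
ij_kotlin_name_count_to_use_star_import = 5
```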

Desktop and Multiplatform templates for Jetpack Compose for Desktop

Jetpack Compose is a modern UI framework for Kotlin that makes it easy and enjoyable to build performant and beautiful user interfaces. The new experimental Jetpack Compose for Desktop templates are now available in the Kotlin Project Wizard. The Desktop template targets the desktop JVM platform, while the Multiplatform template targets both the desktop JVM platform and Android, with shared code in common modules.

You can read more about the Jetpack Compose features in this blog post, look through the examples of Compose applications, and try them out in the latest version of the Kotlin plugin.

Learn about the Kotlin Plugin updates in What’s new.


We’ve added a lot of new IDE features that are all designed for greater productivity and to make working with the code more fun.

  • Now when you run an application in debug mode, you get clickable inline hints that you can expand to see all the fields that belong to the variable. Moreover, you can change the variable values inside the drop-down list.
  • Work together on your code with your colleagues. In IntelliJ IDEA 2020.3 you can use Code With Me, a tool for collaborative development and remote pair programming.
  • In IntelliJ IDEA 2020.3, we made it easier to start analyzing snapshots from the updated Profiler tool window. The Profiler allows you to view and study snapshots to track down performance and memory issues. You can open a snapshot file in the Recent Snapshots area and analyze it from there.
  • Evaluate math expressions and find Git branches and commits by their hashes in Search Everywhere.
  • Split the editor with drag and drop tabs.

These and other updates are described in detail in What’s new for IDEA 2020.3.

Don’t forget to share your feedback!

You can do that by

Continue reading: Kotlin Plugin Released With IDEA 2020.3
