
Compose Multiplatform 1.6.0 – Resources, UI Testing, iOS Accessibility, and Preview Annotation

Compose Multiplatform is a declarative UI framework built by JetBrains that allows developers to share UI implementations across different platforms. The 1.6.0 release brings several powerful features, as well as compatibility with the latest Kotlin version and changes from Google’s latest Jetpack Compose update.

Get Started with Compose Multiplatform

This release of Compose Multiplatform:

  • Revamps the resource management library.
  • Introduces a UI testing API.
  • Adds iOS accessibility support.
  • Brings a host of other features and improvements.

For a description of all notable changes, see our What’s new in Compose Multiplatform 1.6.0 page or check the release notes on GitHub.

Common resources API

The biggest and most anticipated change in Compose Multiplatform 1.6.0 is the improvement of the API for sharing and accessing resources in common Kotlin code. This API now allows you to include and access more types of resources in your Compose Multiplatform applications.

The resources are organized in a number of directories as part of the commonMain source set:

  • composeResources/drawable contains images.
  • composeResources/font contains fonts.
  • composeResources/values contains strings (in the format of strings.xml).
  • composeResources/files contains any other files.

Compose Multiplatform generates type-safe accessors for all of these resource types (excluding the files directory). For example, after placing a vector image compose-multiplatform.xml in the composeResources/drawable directory, you can access it in your Compose Multiplatform code using the generated Res object:

import androidx.compose.foundation.Image
import androidx.compose.runtime.Composable
import kotlinproject.composeapp.generated.resources.*
import org.jetbrains.compose.resources.ExperimentalResourceApi
import org.jetbrains.compose.resources.painterResource

@OptIn(ExperimentalResourceApi::class)
@Composable
fun Logo() {
    Image(
        painterResource(Res.drawable.compose_multiplatform),
        contentDescription = "CMP Logo"
    )
}

The new resources API also allows you to provide variations of the same resource for different use cases, including locale, screen density, or theme. Whether you’re localizing text, changing the colors of your icons in dark mode, or providing alternative images based on screen resolution, you can express these constraints by adding qualifiers to the directory names.
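
For example, a string declared in composeResources/values/strings.xml (with translated copies in qualified directories such as values-de, used here purely as an illustration) resolves automatically for the current locale. Here is a minimal sketch, assuming a greeting string resource and a Material 3 dependency, as in the Kotlin Multiplatform wizard template:

import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import kotlinproject.composeapp.generated.resources.*
import org.jetbrains.compose.resources.ExperimentalResourceApi
import org.jetbrains.compose.resources.stringResource

@OptIn(ExperimentalResourceApi::class)
@Composable
fun Greeting() {
    // Picks the variant from the qualified directory matching the current locale,
    // falling back to the default composeResources/values/strings.xml.
    Text(stringResource(Res.string.greeting))
}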

For a closer look at qualifiers for resources, as well as an in-depth overview of the new resources API in Compose Multiplatform 1.6.0, take a look at the official documentation!

Common API for UI testing

UI testing helps you make sure that your application behaves the way it is supposed to. With Compose Multiplatform 1.6.0, we’re introducing an experimental API that allows you to write common UI tests for Compose Multiplatform that validate the behavior of your application’s user interface across the different platforms supported by the framework.

For example, you may want to ensure that a custom component correctly shows an informative string with the proper prefix:

@Composable
fun MyInfoComposable(info: String, modifier: Modifier) {
    Text(modifier = modifier, text = "[IFNO] $info")
}

In the latest version of Compose Multiplatform, you can now use UI tests to validate that the component does indeed correctly prefix the text when rendered. To do so, you can use the same finders, assertions, actions, and matchers as you would with Jetpack Compose on Android. After following the setup instructions from the documentation you can write a test that ensures the prefix is added properly:

import androidx.compose.ui.test.ExperimentalTestApi
import androidx.compose.ui.test.runComposeUiTest
...

class InformationTest {
    @OptIn(ExperimentalTestApi::class)
    @Test
    fun shouldPrefixWithInfoTag() = runComposeUiTest {
        setContent {
            MyInfoComposable("Important things!", modifier = Modifier.testTag("info"))
        }
        onNodeWithTag("info").assertTextContains("[INFO]", substring = true)
    }
}

Running this test on any of your target platforms will show you the test results (and in this case, help you spot and correct a typo!):

iOS accessibility support

Compose Multiplatform for iOS now allows people with disabilities to interact with the Compose UI with the same level of comfort as with the native UI:

  • Screen readers and VoiceOver can access the content of the Compose Multiplatform UI.
  • Compose Multiplatform UI supports the same gestures as the native UIs for navigation and interaction.

This is possible because the Compose Multiplatform semantic data is automatically mapped into an accessibility tree. You can also make this data available to Accessibility Services and use it for testing with the XCTest framework.
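
The semantic properties you already set in common code are what feed that tree. For instance, a content description attached via Modifier.semantics is what VoiceOver reads when the element gains focus. A minimal sketch (the description text is illustrative):

import androidx.compose.foundation.layout.Box
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.semantics.contentDescription
import androidx.compose.ui.semantics.semantics

@Composable
fun CloseIndicator() {
    // VoiceOver announces this description when the element is focused on iOS.
    Box(modifier = Modifier.semantics { contentDescription = "Close the dialog" })
}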

For details on the implementation and limitations of the current accessibility support, see the documentation page.

@Preview annotation for Fleet

With 1.6.0, Compose Multiplatform introduces the common @Preview annotation (previously available only for Android and Desktop). This annotation is supported by JetBrains Fleet (starting with Fleet 1.31). Add @Preview to your @Composable function, and you’ll be able to open the preview via the gutter icon:

Try it out with a project generated by the Kotlin Multiplatform wizard!

Fleet currently supports the @Preview annotation for @Composable functions without parameters. To use this common annotation, add the experimental compose.components.uiToolingPreview library to your dependencies (as opposed to compose.uiTooling, used for desktop and Android).
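
For example, annotating the parameterless Logo composable from the resources example above is enough for the preview gutter icon to show up in Fleet. A minimal sketch; the import assumes the compose.components.uiToolingPreview artifact mentioned above:

import androidx.compose.runtime.Composable
import org.jetbrains.compose.ui.tooling.preview.Preview

@Preview
@Composable
fun LogoPreview() {
    Logo()
}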

Separate platform views for popups, dialogs, and dropdown menus

When mixing Compose Multiplatform with SwiftUI, you may only want to have some widgets of a given screen rendered using Compose. Starting with version 1.6, when you create Dialog, Popup, or Dropdown composables in these scenarios, they can expand beyond the bounds of the individual widget and even span the entire screen!

This is also possible for desktop targets, although as an experimental feature for now.

Dialog, Popup, or Dropdown in Compose Multiplatform

Other notable changes

To learn about the rest of the changes included in this release, see the What’s new in Compose Multiplatform 1.6.0 page or the release notes on GitHub.


Kotlin Dataframe 0.9.1 released!

It’s time for another Kotlin Dataframe update to start off the new year.
There have been a lot of exciting changes since the last 0.8.0 preview release. So without any further ado, let’s jump right in!


Kotlin DataFrame on GitHub

OpenAPI Type Schemas

JSON schema inference is great, but it’s not perfect. DataFrame has had the ability to generate data schemas based on given data for a while now, but this can lead to errors in types or nullability when the sample doesn’t correctly reflect how future data might look.
Today, more and more APIs offer OpenAPI (Swagger) specifications. Aside from API endpoints, they also hold Data Models (Schemas), which include all the information about the types that can be returned from or supplied to the API. Obviously, we don’t want to reinvent the wheel with our own schema inference when we can use the schemas provided by the API. Not only will we get the proper names of the types, but we will also get enums, correct inheritance, and overall better type safety.

From DataFrame 0.9.1 onward, we will support the automatic generation of data schemas based on OpenAPI 3.0 type schemas.

To get started, simply import the OpenAPI specification file (.json or .yaml) as you would any other data you want to generate data schemas for. An OpenAPI file can contain any number of type schemas, all of which will be converted to data schemas.
We’ll use the pet store example from OpenAPI itself.

Your project does need an extra dependency for this to work:

implementation("org.jetbrains.kotlinx:dataframe-openapi:{VERSION}")

Importing data schemas can be done using a file annotation:

@file:ImportDataSchema(
    path = "https://petstore3.swagger.io/api/v3/openapi.json",
    name = "PetStore",
)

import org.jetbrains.kotlinx.dataframe.annotations.ImportDataSchema

Or using Gradle:

dataframes {
    schema {
        data = "https://petstore3.swagger.io/api/v3/openapi.json"
        name = "PetStore"
    }
}

And in Jupyter:

val PetStore = importDataSchema(
    "https://petstore3.swagger.io/api/v3/openapi.json"
)

After generating the data schemas, all type schemas from the OpenAPI spec file will have a corresponding data schema in Kotlin that’s ready to parse any JSON content adhering to it.
These will be grouped together under the name you give, which in this case is PetStore. Since the pet store OpenAPI schema has the type schemas Order, Customer, Pet, etc., you will have access to the data schemas PetStore.Order, PetStore.Customer, PetStore.Pet, etc., which you can use to read and parse JSON data. (Hint: You can explore this generated code in your IDE and see what it looks like.)

For example:

val df = PetStore.Pet.readJson(
   "https://petstore3.swagger.io/api/v3/pet/findByStatus?status=available"
)
val names: DataColumn<String> = df
    .filter { /* this: DataRow<Pet>, it: DataRow<Pet> */
        category.name == "Dogs" &&
            status == Status1.AVAILABLE
    }
    .name

If you’re interested in the specifics of how this is done, I’ll break down an example below. Otherwise, you can continue to the next section.

OpenAPI Deep Dive

We can compare and see how, for instance, Pet is converted from the OpenAPI spec to Kotlin DataSchema interfaces (examples have been cleaned up a bit):

Pet, in the OpenAPI spec, is defined as:

"Pet": {
  "required": [ "name", "photoUrls" ],
  "type": "object",
  "properties": {
    "id": {
      "type": "integer",
      "format": "int64",
      "example": 10
    },
    "name": {
      "type": "string",
      "example": "doggie"
    },
    "category": { "$ref": "#/components/schemas/Category" },
    "photoUrls": {
      "type": "array",
      "items": { "type": "string" }
    },
    "tags": {
      "type": "array",
      "items": { "$ref": "#/components/schemas/Tag" }
    },
    "status": {
      "type": "string",
      "description": "pet status in the store",
      "enum": [ "available", "pending", "sold" ]
    }
  }
}

As you can see, it’s an object type that has multiple properties. Some properties are required, like name and photoUrls. Others, like id and category, are not. No properties are nullable in this particular example, but since Kotlin has no concept of undefined properties, non-required properties will be treated as nullable too. There are primitive properties, such as id and name, but also references to other types, like Category and Tag. Let’s see what DataFrame generates using this example:

enum class Status1(override val value: String) : DataSchemaEnum {
    AVAILABLE("available"),
    PENDING("pending"),
    SOLD("sold");
}
    
@DataSchema(isOpen = false)
interface Pet {
    val id: Long?
    val name: String
    val category: Category?
    val photoUrls: List<String>
    val tags: DataFrame<Tag?>
    val status: Status1?
    
    companion object {
      val keyValuePaths: List<JsonPath>
        get() = listOf()
      fun DataFrame<*>.convertToPet(convertTo: ConvertSchemaDsl<Pet>.() -> Unit = {}): DataFrame<Pet> = convertTo<Pet> {
          convertDataRowsWithOpenApi()
          convertTo()
      }
      fun readJson(url: java.net.URL): DataFrame<Pet> = 
        DataFrame.readJson(url, typeClashTactic = ANY_COLUMNS, keyValuePaths = keyValuePaths)
          .convertToPet()
      fun readJson(path: String): DataFrame<Pet> = ...
      ...
    }
}

Let’s look at the generated interface Pet. All properties from the OpenAPI JSON appear to be there: id, name, and so on. Non-required or nullable properties are correctly marked with a ?. References to other types, like Category and Tag, are working too and are present elsewhere in the generated file.
Interestingly, since tags is supposed to come in the form of an array of objects, this is represented as a List of DataRows, or more specifically, a data frame. Thus, when Pet is used as a DataFrame type, tags will become a FrameColumn.
Finally, status was an enum that was defined inline in the OpenAPI JSON. We cannot define a type inline like that in Kotlin, so it’s generated outside of Pet.
Since DataSchemaEnum is used here, this might also be a good opportunity to introduce it properly. Enums can implement this interface to control how their values are read/written from/to data frames. This allows enums to be created with names that might be illegal in Kotlin (such as numbers or empty strings) but legal in other languages.
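
As a hypothetical illustration of DataSchemaEnum (the type and its values below are made up), an enum whose raw values would not be valid Kotlin identifiers can still round-trip through a data frame by carrying them in value:

// "1.1" and "2" are not legal Kotlin enum constant names, but DataSchemaEnum
// lets DataFrame read and write them through the `value` property instead.
enum class HttpVersion(override val value: String) : DataSchemaEnum {
    V1_1("1.1"),
    V2("2");
}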

To be able to quickly read data as a certain type, the generated types have specific .readJson() methods. The example only shows the URL case in full, but the others are very similar. After calling one of them, the data frame is converted to the right type (in this case, using convertToPet(), which applies, corrects, and converts all the properties to the expected types). Those conversion functions can also be used to convert your own data frames to one of these generated types.

Adding support for OpenAPI type schemas was a difficult task. OpenAPI is very flexible in ways Kotlin and DataFrame cannot always follow. We’re certain it will not work with 100% of the OpenAPI specifications out there, so if you notice some strange behavior with one of your APIs, please let us know on GitHub or Slack so we can improve the support. 🙂

JSON Options

To make the OpenAPI integration work better, we made several changes to how JSON is read in DataFrame. While the default behavior is the same, we added some extra options that might be directly beneficial to you too!

Key/Value Paths

Have you ever encountered a JSON file that, when read into a data frame, resulted in hundreds of columns? This can happen if your JSON data contains an object with many properties (key/value pairs). Unlike a large list of data, a huge map like this is not easily stored in a column-based fashion, making it easy to lose your grip on the data. Plus, if you’re generating data schemas, the compiler will most likely run out of memory due to the sheer number of interfaces it needs to create.

It would make more sense to convert all these columns into just two columns: “key” and “value”. This is exactly what the new key/value paths achieve.

Let’s look at an example:

By calling the API from APIS.GURU (a website/API that holds a collection of OpenAPI APIs), we get a data frame with 2,366 columns:

DataFrame.read("https://api.apis.guru/v2/list.json")

Inspecting the JSON as a data frame, we can find two places where conversion to keys/values might be useful: the root of the JSON and the versions property inside each website’s object. Let’s read it again, but now with these key/value paths. We can use the JsonPath class to help construct these paths (available in Gradle too, but not in KSP). Since we have a key/value object at the root, we’ll need to unpack the result by taking the first row and first column:

DataFrame.readJson(
    path = "https://api.apis.guru/v2/list.json",
    keyValuePaths = listOf(
        JsonPath(), // generates '$'
        JsonPath() // generates '$[*]["versions"]'
            .appendWildcard()
            .append("versions"),
    ),
)[0][0] as AnyFrame

Way more manageable, right? To play around more with this example, check out the Jupyter notebook or Datalore. This notebook contains examples of key/value paths and examples of the new OpenAPI functionality.

Type Clash Tactics

A little-known feature of DataFrame is how type clashes are handled when creating data frames from JSON. Let’s look at an example:

Using the default type clash tactic ARRAY_AND_VALUE_COLUMNS, JSON is read as follows:

[
    { "a": "text" },
    { "a": { "b": 2 } },
    { "a": [6, 7, 8] }
]
⌌----------------------------------------------⌍
|  | a:{b:Int?, value:String?, array:List<Int>}|
|--|-------------------------------------------|
| 0|         { b:null, value:"text", array:[] }|
| 1|              { b:2, value:null, array:[] }|
| 2|    { b:null, value:null, array:[6, 7, 8] }|
⌎----------------------------------------------⌏

Clashes between array elements, value elements, and object elements are solved by creating a ColumnGroup in the data frame with the columns array (containing all arrays), value (containing all values), and a column for each property in all of the objects. For non-array elements, the array column will contain an empty list. For non-value elements, the value column will contain null. This also applies to elements that don’t contain a property of one of the objects.

If you’re not very fond of this conversion and would rather have a more direct representation of the JSON data, you could use the type clash tactic ANY_COLUMNS. This tactic is also used by OpenAPI to better represent the provided type schema. Using this tactic to read the same JSON sample as above results in the following data frame:

⌌-------------⌍
|  |     a:Any|
|--|----------|
| 0|    "text"|
| 1|   { b:2 }|
| 2| [6, 7, 8]|
⌎-------------⌏

We could consider more type clash tactics in the future. Let us know if you have any ideas!

How to use JSON Options

Both of these JSON options can be used when reading JSON using the DataFrame.readJson() functions and (for generating data schemas) using the Gradle- and KSP plugins:

Functions:

DataFrame.readJson(
    path = "src/main/resources/someData.json",
    keyValuePaths = listOf(
        JsonPath()
            .appendArrayWithWildcard()
            .append("data"),
    ),
    typeClashTactic = JSON.TypeClashTactic.ARRAY_AND_VALUE_COLUMNS,
)

Gradle:

dataframes {
    schema {
        data = "src/main/resources/someData.json"
        name = "com.example.package.SomeData"
        jsonOptions {
            keyValuePaths = listOf(
                JsonPath()
                    .appendArrayWithWildcard()
                    .append("data"),
            )
            typeClashTactic = JSON.TypeClashTactic.ARRAY_AND_VALUE_COLUMNS
        }
    }
}

KSP:

@file:ImportDataSchema(
    path = "src/main/resources/someData.json",
    name = "SomeData",
    jsonOptions = JsonOptions(
        keyValuePaths = [
            """$[*]["data"]""",
        ],
        typeClashTactic = JSON.TypeClashTactic.ARRAY_AND_VALUE_COLUMNS,
    ),
)

Apache Arrow

Thanks to @Kopilov, our support for Apache Arrow files has further improved!

To use it, add the following dependency:

implementation("org.jetbrains.kotlinx:dataframe-arrow:{VERSION}")

On the reading side, this includes better reading of Date and Time types, UInts, and configurable nullability options. For more information, check out the docs.
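
Reading follows the same pattern as the other supported formats. A minimal sketch, assuming a local .feather file:

import org.jetbrains.kotlinx.dataframe.DataFrame
import org.jetbrains.kotlinx.dataframe.io.readArrowFeather
import java.io.File

// Read an Arrow Random Access (.feather) file into a data frame.
val df = DataFrame.readArrowFeather(File("data.feather"))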

On the writing side, well, this is completely new! DataFrame gained the ability to write to both the Arrow IPC Streaming format (.ipc) and the Arrow Random Access format (.feather). You can use both formats to save the data to a file, stream, byte channel, or byte array:

df.writeArrowIPC(file) // writes df to an .ipc file
df.writeArrowFeather(file) // writes df to a .feather file
 
val ipcByteArray: ByteArray = df.saveArrowIPCToByteArray()
val featherByteArray: ByteArray = df.saveArrowFeatherToByteArray()

If you need more configuration, then you can use arrowWriter. For example:

// Get schema from anywhere you want. It can be deserialized from JSON, generated from another dataset
// (including the DataFrame.columns().toArrowSchema() method), created manually, and so on.
val schema = Schema.fromJSON(schemaJson)

df.arrowWriter(

    // Specify your schema
    targetSchema = schema,

    // Specify desired behavior mode
    mode = ArrowWriter.Mode(
        restrictWidening = true,
        restrictNarrowing = true,
        strictType = true,
        strictNullable = false,
    ),

    // Specify mismatch subscriber
    mismatchSubscriber = { message: ConvertingMismatch ->
        System.err.println(message)
    },

).use { writer: ArrowWriter ->

    // Save to any format and sink, like in the previous example
    writer.writeArrowFeather(file)
}

For more information, check out the docs.

Other New Stuff

Let’s finish this blog with a quick-fire round of some bug fixes and new features. Of course, there are far too many to mention, so we’ll stick to the ones that stand out. For the full list, see the release notes on GitHub.

Have a nice Kotlin!


KotlinDL 0.5 Has Come to Android!

Version 0.5 of our deep learning library, KotlinDL, is now available!

This release focuses on the new API for the flexible and easy-to-use deployment of ONNX models on Android. We have reworked the Preprocessing DSL, introduced support for ONNX runtime execution providers, and more. Here’s a summary of what you can expect from this release:

  1. Android support
  2. Preprocessing DSL
  3. Inference on accelerated hardware
  4. Custom models inference

KotlinDL on GitHub
Android demo app


Android support

We introduced ONNX support in KotlinDL 0.3. ONNX is an open-source format for representing deep learning models with a flexible and extensible specification, and it is supported by many different frameworks. It was designed to be fast, portable, and interoperable with existing toolsets such as TensorFlow or PyTorch. With this release of KotlinDL, you can now run ONNX models on Android devices using the concise Kotlin API!

The most convenient way to start with KotlinDL on Android is to load the model through ModelHub. You can easily instantiate models included in our ModelHub through the ONNXModels API, and then you’ll be ready to run inference on Bitmap or ImageProxy.
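
For example, loading a pretrained detector and running it on a Bitmap might look roughly like the sketch below (it mirrors the ImageProxy example later in this post, so the same caveats apply):

fun detect(context: Context, bitmap: Bitmap): List<DetectedObject> {
    val modelHub = ONNXModelHub(context)
    val model = ONNXModels.ObjectDetection.EfficientDetLite0.pretrainedModel(modelHub)
    // Detect the top 3 objects on the given Bitmap.
    return model.detectObjects(bitmap, topK = 3)
}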

Use cases supported by the Android ONNX ModelHub include:

  • Object Detection
  • Image Classification
  • Pose Detection
  • Face Detection
  • Face Alignment

Models from Android ONNX ModelHub used in the sample app

KotlinDL ONNX is ready to be used with the ImageAnalysis API. This allows you, for example, to directly infer models shipped via ModelHub on the ImageProxy object.

class KotlinDLAnalyzer(
    context: Context,
    private val uiUpdateCallBack: (List<DetectedObject>) -> Unit
) : ImageAnalysis.Analyzer {
    val hub = ONNXModelHub(context)
    val model = ONNXModels.ObjectDetection.EfficientDetLite0.pretrainedModel(hub)
    override fun analyze(image: ImageProxy) {
        val detections = model.detectObjects(image, topK = 5)
        uiUpdateCallBack(detections)
    }
}

Inference of the EfficientDetLite0 model directly on the ImageProxy input. Check out the demo here.

Note that the orientation of the retrieved camera image will be corrected automatically.


Preprocessing DSL

When working with images, it is often necessary to perform some preprocessing steps before feeding them to the model. KotlinDL provides a convenient DSL for preprocessing, allowing you to easily apply a sequence of transformations to the input image. The DSL is based on the concept of a pipeline, where each transformation is a pipeline stage. Each stage is described by an input and output data type. Stages whose output and input types match can be chained into a pipeline, and this compatibility is checked at compile time.

This approach allows you to implement different transformations for BufferedImage on the desktop and Bitmap on Android while utilizing a single DSL.

// Preprocessing pipeline for an Android Bitmap
val preprocessing = pipeline<Bitmap>()
    .resize {
        outputHeight = 224
        outputWidth = 224
    }
    .rotate { degrees = 90f }
    .toFloatArray { layout = TensorLayout.NCHW }

// Preprocessing pipeline for a BufferedImage
val preprocessing = pipeline<BufferedImage>()
    .resize {
        outputHeight = 224
        outputWidth = 224
    }
    .rotate { degrees = 90f }
    .toFloatArray {}
    .transpose { axes = intArrayOf(2, 0, 1) }

Bitmap preprocessing vs. BufferedImage preprocessing

Note that the DSL is not limited to image preprocessing. You can use it to implement any preprocessing pipeline for your data.

We implemented the following set of operations for an Android Bitmap:

  • Resize
  • Rotate
  • Crop
  • Rescale
  • Normalize
  • ConvertToFloatArray

The ConvertToFloatArray operation supports two popular layouts for the low-level representation of a tensor: channels-first (TensorLayout.NCHW) and channels-last (TensorLayout.NHWC).


Inference on accelerated hardware

With KotlinDL 0.5, it is possible to run models on optimized hardware using the ONNX Runtime Execution Providers (EP) framework. This interface gives you the flexibility to deploy your ONNX models in different environments, in the cloud and at the edge, and to optimize execution by taking advantage of the platform’s computational capabilities.

KotlinDL currently supports the following EPs:

  • CPU (default)
  • CUDA (for devices with GPU and CUDA support)
  • NNAPI (for Android devices with API 27+)

NNAPI is a framework that allows you to run inference on Android devices using hardware acceleration. With NNAPI, resource-intensive computations can be performed up to 9 times as fast as with the CPU execution provider on the same device. However, it is important to note that NNAPI cannot accelerate all models, as it supports only a subset of operators. If the model contains unsupported operators, the runtime falls back to the CPU. Therefore, you may not get any performance improvements, and you may even encounter performance degradation due to data transfer between the CPU and the accelerator. The ONNX runtime provides a tool for checking whether NNAPI can accelerate your model inference.

One option for defining execution providers for inference is to use the initializeWith function.

val model = modelHub.loadModel(ONNXModels.CV.MobilenetV1())
model.initializeWith(NNAPI())

Loading and initialization of the model with NNAPI execution provider

Another option is to use the convenience functions described below.


Custom models inference

KotlinDL ONNX ModelHub is a great way to start with KotlinDL on Android. However, if you have your own ONNX model, you can easily use it with KotlinDL.

Any ONNX model can be loaded and inferred using a lower-level API.

val modelBytes = resources.openRawResource(R.raw.model).readBytes()
val model = OnnxInferenceModel(modelBytes)
val preprocessing = pipeline<Bitmap>()
    .resize {
        outputHeight = 224
        outputWidth = 224
    }
    .toFloatArray { layout = TensorLayout.NCHW }
    .call(InputType.TORCH.preprocessing(channelsLast = false))
val (tensor, shape) = preprocessing.apply(bitmap)
val logits = model.predictSoftly(tensor)
val labelId = logits.argmax()

Loading and inference of custom model using the OnnxInferenceModel API. Check out the demo here.

As we see here, it’s possible to instantiate OnnxInferenceModel from the byte representation of the model file. The model file can be stored in the application’s resources or retrieved over the network.


Additional details

Breaking changes in the Preprocessing DSL

Starting with version 0.5, KotlinDL has a new syntax for describing preprocessing pipelines.

val preprocessing: Preprocessing = preprocess {
    transformImage {
        resize {
            outputHeight = 224
            outputWidth = 224
            interpolation = InterpolationType.BILINEAR
            save {
                dirLocation = "image.jpg"
            }
        }
        convert { colorMode = ColorMode.BGR }
    }
    transformTensor {
        sharpen {
            modelTypePreprocessing = TFModels.CV.ResNet50()
        }
    }
}

val preprocessing = pipeline<BufferedImage>()
    .resize {
        outputHeight = 224
        outputWidth = 224
    }
    .onResult { ImageIO.write(it, "jpg", File("image.jpg")) }
    .convert { colorMode = ColorMode.BGR }
    .toFloatArray { }
    .call(ResNet50().preprocessor)

The old DSL vs. the new one

Before version 0.5, the Preprocessing DSL had several limitations. The DSL described a preprocessing pipeline with a fixed structure, namely the BufferedImage processing stage (transformImage) and the subsequent tensor representation processing stage (transformTensor). We’ve changed this approach in version 0.5. From now on, the Preprocessing DSL allows you to build a pipeline from an arbitrary set of operations.

The save operation is no longer supported, but you can use the onResult operation as an alternative. This operation allows you to apply a lambda to the output of the previous transformation, which can be useful for debugging purposes.

Another feature of the Preprocessing DSL in KotlinDL 0.5 is the ability to reuse entire chunks of the pipeline in different places using the call function. Many models require input data with identical preprocessing, and the call function can reduce code duplication in such scenarios.
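
For instance, a shared chunk could be defined once and dropped into other pipelines via call(). This is only a sketch with illustrative names, and it assumes that a pipeline built this way can be passed to call() like any other operation:

// A shared preprocessing chunk defined once...
val imagenetInput = pipeline<Bitmap>()
    .resize {
        outputHeight = 224
        outputWidth = 224
    }
    .toFloatArray { layout = TensorLayout.NCHW }

// ...and reused in another pipeline instead of repeating the same steps.
val rotatedImagenetInput = pipeline<Bitmap>()
    .rotate { degrees = 90f }
    .call(imagenetInput)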

Convenience functions for inference with execution providers

KotlinDL provides convenience extension functions for the inference of ONNX models using different execution providers:

  • inferUsing
  • inferAndCloseUsing

Those functions explicitly declare the EPs to be used for inference in their scope. Although these two functions have the same goal of explicitly initializing the model with the given execution providers, they behave slightly differently. inferAndCloseUsing has Kotlin’s use scope function semantics, which means that it closes the model at the end of the block; meanwhile, inferUsing is designed for repeated use and has Kotlin’s run scope function semantics.
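
A rough sketch of the difference, reusing the model and preprocessed tensor from the earlier snippets. It assumes the lambda receives the initialized model; check the linked examples for the exact signatures:

// Repeated use: the model stays initialized between calls (run-like semantics).
val probabilities = model.inferUsing(NNAPI()) {
    it.predictSoftly(tensor)
}

// One-off use: the model is closed when the block returns (use-like semantics).
val probabilitiesOnce = model.inferAndCloseUsing(NNAPI()) {
    it.predictSoftly(tensor)
}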

You can find some examples here.

Implementing custom preprocessing operations

If you need to implement a custom preprocessing operation, you can do so by implementing the Operation interface and corresponding extension functions.

Check out this example implementation of custom operations.

New API for composite output parsing

Most of the models in KotlinDL have a single output tensor. However, some models have multiple outputs.

For example, the SSD model has three output tensors. In KotlinDL 0.5, we introduce a new API for parsing composite outputs and add a bunch of convenience functions for processing the OrtSession.Result, for example:

  • getFloatArrayWithShape
  • get2DFloatArray
  • getByteArrayWithShape
  • getShortArray

This allows you to write more explicit and readable code and reduces the number of unchecked casts.

For example, functions like these are used to parse the three output tensors of the SSD model in the KotlinDL examples.


Learn more and share your feedback

We hope you enjoyed this brief overview of the new features in KotlinDL 0.5! Visit the project’s home on GitHub for more information, including the up-to-date Readme file.

If you have previously used KotlinDL, use the changelog to find out what has changed.

We’d be very thankful if you would report any bugs you find to our issue tracker. We’ll try to fix all the critical issues in the 0.5.1 release.

You are also welcome to join the #kotlindl channel in Kotlin Slack (get an invite here). In this channel, you can ask questions, participate in discussions, and receive notifications about new preview releases and models in ModelHub.


Kotlin 1.5.0-RC Released: Changes to the Standard and Test Libraries

Kotlin 1.5.0-RC is available with all the features planned for 1.5.0 – check out the entire scope of the upcoming release! New language features, stdlib updates, an improved testing library, and many more changes are receiving a final polish. The only additional changes before the release will be fixes.

Try the modern Kotlin APIs on your real-life projects with 1.5.0-RC and help us make the release version better! Report any issues you find to our issue tracker, YouTrack.

Install 1.5.0-RC

In this post, we’ll walk you through the changes to the Kotlin standard and test libraries in 1.5.0-RC.

You can find all the details below!

Stable unsigned integer types

The standard library includes the unsigned integer API that comes in useful for dealing with non-negative integer operations. It includes:

  • Unsigned number types: UInt, ULong, UByte, UShort, and related functions, such as conversions.
  • Aggregate types: arrays, ranges, and progressions of unsigned integers: UIntArray, UIntRange, and similar containers for other types.

Unsigned integer types have been available in Beta since Kotlin 1.3. Now we are classifying the unsigned integer types and operations as stable, making them available without opt-in and safe to use in real-life projects.

Namely, the new stable APIs are:

  • Unsigned integer types
  • Ranges and progressions of unsigned integer types
  • Functions that operate with unsigned integer types

fun main() {
//sampleStart
    val zero = 0U // Define unsigned numbers with literal suffixes
    val ten = 10.toUInt() // or by converting non-negative signed numbers
    //val minusOne: UInt = -1U // Error: unary minus is not defined
    val range: UIntRange = zero..ten // Separate types for ranges and progressions

    for (i in range) print(i)
    println()
    println("UInt covers the range from ${UInt.MIN_VALUE} to ${UInt.MAX_VALUE}") // UInt covers the range from 0 to 4294967295
//sampleEnd
}

Arrays of unsigned integers remain in Beta. So do unsigned integer varargs that are backed by arrays. If you want to use them in your code, you can opt in with the @ExperimentalUnsignedTypes annotation.
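
For example, a minimal sketch:

@ExperimentalUnsignedTypes
fun main() {
    val sizes: UIntArray = uintArrayOf(1u, 2u, 3u)
    for (size in sizes) print(size) // prints 123
}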

Learn more about unsigned integers in Kotlin.

Extensions for java.nio.file.Path API

Kotlin now provides a way to use the modern non-blocking Java IO in a Kotlin-idiomatic style out of the box via the extension functions for java.nio.file.Path.

Here is a small example:

import kotlin.io.path.*
import java.nio.file.Path

fun main() {
    // construct path with the div (/) operator
    val baseDir = Path("/base")
    val subDir = baseDir / "subdirectory"

    // list files in a directory
    val kotlinFiles = Path("/home/user").listDirectoryEntries("*.kt")
    // count lines in all kotlin files
    val totalLines = kotlinFiles.sumOf { file -> file.useLines { lines -> lines.count() } }
}

These extensions were introduced as an experimental feature in Kotlin 1.4.20, and are now available without an opt-in. Check out the kotlin.io.path package for the list of functions that you can use.

The existing extensions for File API remain available, so you are free to choose the API you like best.

Locale-agnostic API for uppercase and lowercase

Many of you are familiar with the stdlib functions for changing the case of strings and characters: toUpperCase(), toLowerCase(), toTitleCase(). They generally work fine, but they can cause a headache when it comes to dealing with different platform locales – they are all locale-sensitive, which means their result can differ depending on the locale. For example, what does "Kotlin".toUpperCase() return? “Obviously KOTLIN”, you would say. But in the Turkish locale, the capital i is İ, so the result is different: KOTLİN.

Now there is a new locale-agnostic API for changing the case of strings and characters: uppercase(), lowercase(), titlecase() extensions, and their *Char() counterparts. You may have already tried its preview in 1.4.30.

The new functions work the same way regardless of the platform locale settings. Just call these functions and leave the rest to the stdlib.

fun main() {
//sampleStart
    // replace the old API
    println("Kotlin".toUpperCase()) // KOTLIN or KOTLİN or?..

    // with the new API
    println("Kotlin".uppercase()) // Always KOTLIN
//sampleEnd
}

On the JVM, you can perform locale-sensitive case change by calling the new functions with the current locale as an argument:

"Kotlin".uppercase(Locale.getDefault()) // Locale-sensitive uppercasing

The new functions will completely replace the old ones, which we’re deprecating now.

Clear Char-to-code and Char-to-digit conversions

The operation for getting the UTF-16 code of a character – the toInt() function – was a common pitfall, because it looks pretty similar to String.toInt(), which on a one-digit string produces the Int represented by that digit.

"4".toInt() // returns 4
'4'.toInt() // returns 52

Additionally, there was no common function that would return the numeric value 4 for Char '4'.

To solve these issues, there is now a set of new functions for conversion between characters and their integer codes and numeric values:

  • Char(code) and Char.code convert between a char and its code.
  • Char.digitToInt(radix: Int) and its *OrNull version create an integer from a digit in the specified radix.
  • Int.digitToChar(radix: Int) creates a char from a digit that represents an integer in the specified radix.

These functions have clear names and make the code more readable:

fun main() {
//sampleStart
    val capsK = Char(75) // ‘K’
    val one = '1'.digitToInt(10) // 1
    val digitC = 12.digitToChar(16) // hexadecimal digit ‘C’

    println("${capsK}otlin ${one}.5.0-R${digitC}") // “Kotlin 1.5.0-RC”
    println(capsK.code) // 75
//sampleEnd
}

The new functions have been available since Kotlin 1.4.30 in the preview mode and are now stable. The old functions for char-to-number conversion (Char.toInt() and similar functions for other numeric types) and number-to-char conversion (Long.toChar() and similar except for Int.toChar()) are now deprecated.

Extended multiplatform char API

We’re continuing to extend the multiplatform part of the standard library to provide all of its capabilities to the multiplatform project common code.

Now we’ve made a number of Char functions available on all platforms and in common code. These functions are:

  • Char.isDigit(), Char.isLetter(), Char.isLetterOrDigit() that check if a char is a letter or a digit.
  • Char.isLowerCase(), Char.isUpperCase(), Char.isTitleCase() that check the case of a char.
  • Char.isDefined() that checks whether a char has a Unicode general category other than Cn (undefined).
  • Char.isISOControl() that checks whether a char is an ISO control character, that is, one with a code in the range \u0000..\u001F or \u007F..\u009F.

The property Char.category and its return type enum class CharCategory, which indicates a character’s general category according to Unicode, are now available in multiplatform projects.

fun main() {
//sampleStart
    val array = "Kotlin 1.5.0-RC".toCharArray()
    val (letterOrDigit, punctuation) = array.partition { it.isLetterOrDigit() }
    val (upperCase, notUpperCase ) = array.partition { it.isUpperCase() }

    println("$letterOrDigit, $punctuation") // [K, o, t, l, i, n, 1, 5, 0, R, C], [ , ., ., -]
    println("$upperCase, $notUpperCase") // [K, R, C], [o, t, l, i, n, , 1, ., 5, ., 0, -]

    if (array[0].isDefined()) println(array[0].category)
//sampleEnd
}

Strict versions of String?.toBoolean()

Kotlin’s String?.toBoolean() function is widely used for creating boolean values from strings. It works pretty simply: it’s true on a string “true” regardless of its case and false on all other strings, including null.

While this behavior seems natural, it can hide potentially erroneous situations. Whatever you convert with this function, you get a boolean even if the string has some unexpected value.

New case-sensitive strict versions of the String?.toBoolean() are here to help avoid such mistakes:

  • String.toBooleanStrict() throws an exception for all inputs except literals “true” and “false”.
  • String.toBooleanStrictOrNull() returns null for all inputs except literals “true” and “false”.

fun main() {
//sampleStart
    println("true".toBooleanStrict()) // True
    // println("1".toBooleanStrict()) // Exception
    println("1".toBooleanStrictOrNull()) // null
    println("True".toBooleanStrictOrNull()) // null: the function is case-sensitive
//sampleEnd
}

Duration API changes

The experimental duration and time measurement API has been available in the stdlib since version 1.3.50. It offers tools for the precise measurement of time intervals.

One of the key classes of this API is Duration. It represents the amount of time between two time instants. In 1.5.0, Duration receives significant changes both in the API and internal representation.

Duration now uses a Long value for the internal representation instead of Double. The range of Long values enables representing more than a hundred years with nanosecond precision or a hundred million years with millisecond precision. However, the previously supported sub-nanosecond durations are no longer available.

We are also introducing new properties for retrieving a duration as a Long value. They are available for various time units: Duration.inWholeMinutes, Duration.inWholeSeconds, and others. These properties replace the Double-based ones, such as Duration.inMinutes.

Another change is a set of new factory functions for creating Duration instances from integer values. They are defined directly in the Duration type and replace the old extension properties of numeric types such as Int.seconds.

@ExperimentalTime
fun main() {
//sampleStart
    val duration = Duration.milliseconds(120000)
    println("There are ${duration.inWholeSeconds} seconds in ${duration.inWholeMinutes} minutes")
//sampleEnd
}

Given such major changes, the whole duration and time measurement API remains experimental in 1.5.0 and requires an opt-in with the @ExperimentalTime annotation.

Please try the new version and share your feedback in our issue tracker, YouTrack.

Math operations: floored division and the mod operator

In Kotlin, the division operator (/) on integers represents truncated division, which drops the fractional part of the result. In modular arithmetic, there is also an alternative, floored division, which rounds the result down (towards the lesser integer) and produces a different result on negative numbers.

Previously, floored division required a custom function like:

fun floorDivision(i: Int, j: Int): Int {
    var result = i / j
    // Round toward negative infinity when the signs differ and the division isn't exact
    if ((i xor j) < 0 && result * j != i) result--
    return result
}

In 1.5.0-RC, we present the floorDiv() function that performs floored division on integers.

fun main() {
//sampleStart
    println("Truncated division -5/3: ${-5 / 3}")
    println("Floored division -5/3: ${-5.floorDiv(3)}")
//sampleEnd
}

In 1.5.0, we’re introducing the new mod() function. It works exactly as its name suggests: it returns the modulus, that is, the remainder of the floored division.

It differs from Kotlin’s rem() (or the % operator). The modulus is the difference between a and a.floorDiv(b) * b. A non-zero modulus always has the same sign as b, while a % b can have a different sign. This can be useful, for example, when implementing cyclic lists:

fun main() {
//sampleStart
    fun getNextIndexCyclic(current: Int, size: Int ) = (current + 1).mod(size)
    fun getPreviousIndexCyclic(current: Int, size: Int ) = (current - 1).mod(size)
    // unlike %, mod() produces the expected non-negative value even if (current - 1) is less than 0

    val size = 5
    for (i in 0..(size * 2)) print(getNextIndexCyclic(i, size))
    println()
    for (i in 0..(size * 2)) print(getPreviousIndexCyclic(i, size))
//sampleEnd
}

Collections: firstNotNullOf() and firstNotNullOfOrNull()

The Kotlin collections API covers a range of popular operations on collections with built-in functions. For less common cases, you usually combine calls to these functions. This works, but it doesn’t always look very elegant and can cause overhead.

For example, to get the first non-null result of a selector function on the collection elements, you could call mapNotNull() and first(). In 1.5.0, you can do this in a single call of a new function firstNotNullOf(). Together with firstNotNullOf(), we’re adding its *orNull() counterpart that produces null if there is no value to return.

Here is an example of how it can shorten your code.

Assume that you have a class with a nullable property and you need its first non-null value from a list of the class instances.

class Item(val name: String?)

You can implement this by iterating over the collection and checking whether the property is not null:

// Option 1: manual implementation
for (element in collection) {
    val itemName = element.name
    if (itemName != null) return itemName
}
return null

Another way is to use the existing functions mapNotNull() and firstOrNull(). Note that mapNotNull() builds an intermediate collection, which requires additional memory, especially for big collections. And so, a transformation to a sequence may also be needed here.

// Option 2: old stdlib functions
return collection
    // .asSequence() // Avoid creating intermediate list for big collections
    .mapNotNull { it.name }
    .firstOrNull()

And this is how it looks with the new function:

// Option 3: new firstNotNullOfOrNull()
return collection.firstNotNullOfOrNull { it.name }

Test library changes

We haven’t shipped major updates to the Kotlin test library kotlin-test for several releases, but now we’re providing some long-awaited changes. With 1.5.0-RC, you can try a number of new features:

  • Single kotlin-test dependency in multiplatform projects.
  • Automatic choice of a testing framework for Kotlin/JVM source sets.
  • Assertion function updates.

kotlin-test dependency in multiplatform projects

We’re continuing to improve the configuration process for multiplatform projects. In 1.5.0, we’ve made it easier to set up a dependency on kotlin-test for all source sets.

Now the kotlin-test dependency in the common test source set is the only one you need to add. The Gradle plugin will infer the corresponding platform dependency for other source sets:

  • kotlin-test-junit for JVM source sets. You can also switch to kotlin-test-junit5 or kotlin-test-testng if you enable them explicitly (read on to learn how).
  • kotlin-test-js for Kotlin/JS source sets.
  • kotlin-test-common and kotlin-test-annotations-common for common source sets.
  • No extra artifact for Kotlin/Native source sets because Kotlin/Native provides built-in implementations of the kotlin-test API.

Automatic choice of a testing framework for Kotlin/JVM source sets

Once you specify the kotlin-test dependency in the common test source set as described above, the JVM source sets automatically receive the dependency on JUnit 4. That’s it! You can write and run tests right away!

This is how it looks in the Groovy DSL:

kotlin {
    sourceSets {
        commonTest {
            dependencies {
                 // This brings the dependency
                // on JUnit 4 transitively
                implementation kotlin('test')
            }
        }
    }
}

And in the Kotlin DSL it is:

kotlin {
    sourceSets {
        val commonTest by getting {
            dependencies {
                // This brings the dependency
                // on JUnit 4 transitively
                implementation(kotlin("test"))
            }
        }
    }
}

You can also switch to JUnit 5 or TestNG by simply calling a function in the test task: useJUnitPlatform() or useTestNG().

kotlin {
    jvm {
        testRuns["test"].executionTask.configure {
            // enable TestNG support
            useTestNG()
            // or
            // enable JUnit Platform (a.k.a. JUnit 5) support
            useJUnitPlatform()
        }
    }
}

The same works in JVM-only projects when you add the kotlin-test dependency.

Assertion function updates

For 1.5.0, we’ve prepared a number of new assertion functions along with improvements to existing ones.

First, let’s take a quick look at the new functions:

  • assertIs<T>() and assertIsNot<T>() check the value’s type.
  • assertContentEquals() compares the container content for arrays, sequences, and any Iterable. More precisely, it checks whether expected and actual contain the same elements in the same order.
  • assertEquals() and assertNotEquals() for Double and Float have new overloads with a third parameter – precision.
  • assertContains() checks the presence of an item in any object with the contains() operator defined: array, list, range, and so on.

Here is a brief example that shows the usage of these functions:

@Test
fun test() {
    val expectedArray = arrayOf(1, 2, 3)
    val actualArray = Array(3) { it + 1 }

    assertIs<Int>(actualArray[0])
    assertContentEquals(expectedArray, actualArray)
    assertContains(expectedArray, 2)

    val x = sin(PI)

    // precision parameter
    val tolerance = 0.000001

    assertEquals(0.0, x, tolerance)
}

Regarding the existing assertion functions – it’s now possible to call suspending functions inside the lambda passed to assertTrue(), assertFalse(), and expect() because these functions are now inline.
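
For example, a suspending check can now sit directly inside the assertion lambda. This is a minimal sketch that assumes kotlinx-coroutines (for runBlocking) is available on the test classpath; the serviceIsHealthy function is just a stand-in:

import kotlinx.coroutines.runBlocking
import kotlin.test.Test
import kotlin.test.assertTrue

suspend fun serviceIsHealthy(): Boolean = true // stand-in for a real suspending check

class HealthTest {
    @Test
    fun healthCheck() = runBlocking {
        // Allowed because the lambda parameter of assertTrue() is now inline.
        assertTrue { serviceIsHealthy() }
    }
}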

Try all the features of Kotlin 1.5.0

Bring all these modern Kotlin APIs to your real-life projects with 1.5.0-RC!

In IntelliJ IDEA or Android Studio, install the Kotlin plugin 1.5.0-RC. Learn how to get the EAP plugin versions.

Build your existing projects with 1.5.0-RC to check how they will work with 1.5.0. With the new simplified configuration for preview releases, you just need to change the Kotlin version to 1.5.0-RC and adjust the dependency versions, if necessary.

Install 1.5.0-RC

The latest version will be available online in the Kotlin Playground soon.

Compatibility

As with all feature releases, some deprecation cycles of previously announced changes are coming to an end with Kotlin 1.5.0. All of these cases were carefully reviewed by the language committee and are listed in the Compatibility Guide for Kotlin 1.5. You can also explore these changes on YouTrack.

Release candidate notes

Now that we’ve reached the release candidate for Kotlin 1.5.0, it is time for you to start compiling and publishing! Unlike previous milestone releases, binaries created with Kotlin 1.5.0-RC are guaranteed to be compatible with Kotlin 1.5.0.

Share feedback

This is the final opportunity for you to affect the next feature release! Share any issues you find with us in the issue tracker. Make Kotlin 1.5.0 better for you and the community!

Install 1.5.0-RC


Kotlin Plugin 2021.1 Released: Improved IDE Performance and Better Support for Refactorings

The newest release of IntelliJ IDEA, version 2021.1, comes with the improved Kotlin plugin. Enjoy faster code completion and highlighting, change signatures with better IDE support, benefit from a better debugging experience when evaluating properties, and more.

Here is what you’ll get by installing the new plugin:

Auto-update to this new release

IntelliJ IDEA will give you the option to automatically update to the new release once it is out.

If you are not yet an IntelliJ IDEA user, you can download the newest version and it will already come bundled with the latest Kotlin plugin.

Enjoy quick code completion and highlighting

Sometimes writing code may not be as much fun as it could be, especially if you need to wait for the IDE to highlight your code and help you with completion. Our goal is to make code highlighting and completion seamless for you.

The new Kotlin plugin significantly improves performance for Kotlin code highlighting and completion.

Based on our tests, highlighting speed has improved by about 25% and code completion is now more than 50% faster, which brings it much closer to the performance level offered for Java.

Note that these numbers are just based on our tests. The improvements in your actual projects may not be as significant as our tests show, but you should notice much better performance.

Kotlin code highlighting in the new plugin

Here you can see a comparison of the speed of Kotlin code highlighting between the previous Kotlin plugin, version 2020.3, and the new one, version 2021.1. The results are based on our benchmark tests that check code highlighting in complex Kotlin files.

Faster Kotlin code highlighting

Kotlin code completion in the new plugin

And here is a comparison of the performance of Kotlin code completion between version 2020.3 and version 2021.1. The results are based on our benchmark tests that check code completion in complex Kotlin files.

Faster Kotlin code completion

Change signature with better IDE support

If you’ve used the Change Signature refactoring in previous versions of the Kotlin plugin, you may have encountered issues and limitations, as well as a lack of visibility regarding what went wrong in those cases.

Update to the new Kotlin plugin to reap the benefits of more than 40 bug fixes and improvements to the Change Signature refactoring.

Here are some of the most important improvements:

Evaluate custom getters right in the Variables view

Based on our research, the Kotlin debugging experience needs significant improvements, and we already have a number of plans in the works to provide them.

This release provides a small but important improvement for evaluating properties in the Variables view.

Previously, during a debug session, you could only see the properties that didn’t have a custom getter and those with a backing field. Properties with a custom getter didn’t appear because they are represented as regular methods on the JVM. In version 2021.1 of the Kotlin plugin, you can see all such properties and evaluate them on demand by clicking on get() near the property name.

For example, when debugging the following code, you can execute the get() method to see the versionString value:

class LanguageVersion(val major: Int, val minor: Int) {
    val isStable: Boolean
        get() = major <= 1 && minor <= 4
    val isExperimental: Boolean
        get() = !isStable
    val versionString: String
        get() = "$major.$minor"
    override fun toString() = versionString
}
fun main() {
    val version = LanguageVersion(1, 4)
}

In the Debug window, you can see the values of the properties:

Debugging in Variables view

We would greatly appreciate it if you could try this feature out and provide your feedback in this ticket or as comments to this blog post.

Use code completion for type parameters

The new Kotlin plugin can now complete code for type parameters. Previously, you had to write this code manually without the benefit of the IDE’s assistance features.

Now code completion suggests functions and properties after generic functions and restores type arguments where needed. When you select such a function from the list, the IDE adds the correct type parameter to the preceding code.

In the following example, the IDE automatically adds the <String> type argument:

Code completion for type parameters

After you apply the IDE’s suggestion, you’ll get the following code:

fun typeParametersAtCodeCompletion() {
    //  Function definition from stdlib:
    // public fun <T> emptyList(): List<T>
    val listA: List<String> = emptyList() // T is inferred from the context (explicit variable type)
    val listB: List<String> =
        emptyList<String>().reversed() // type argument for emptyList() is required to evaluate expression type
}

Review the structure of your Kotlin code with class diagrams

With the new release, you can review the structure of your Kotlin code via UML Class diagrams. To build a diagram, select Diagrams | Show Diagram… | Kotlin Classes in the Project View.

UML Class diagrams for Kotlin code

Currently, the diagrams only show inheritance and nesting relationships. Additional, more detailed association connections, like aggregation, construction, dependency, and others, will become available in future releases.

Benefit from other IDE improvements

Since the Kotlin plugin and the IntelliJ Platform have been moved to the same codebase and now ship simultaneously, you will also have the ability to do the following to benefit your Kotlin experience:

See also


Kotlin 1.5.0-M2 Released – Ensure Smooth Migration to Kotlin 1.5.0

Kotlin 1.5.0-M2 is the last milestone release for Kotlin 1.5.0, which is coming this spring. So this is the last chance to make sure that your projects will successfully work with Kotlin 1.5.0.

Install 1.5.0-M2

If you migrate your projects now, you’ll save yourself time and energy when Kotlin 1.5.0 comes out, and you’ll help us deliver urgent fixes before the release.

For example, if you try the new JVM IR backend, which is becoming the default in 1.5.0, and discover any issues now, we’ll try to deliver fixes before the release. You can report any issues you encounter to our issue tracker, YouTrack.

Note that Jetpack Compose only works with the new JVM IR backend. So if you’ve tried Jetpack Compose, you’ve already used the new backend.
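
If you’d like to opt in, a minimal sketch for a Gradle Kotlin DSL build might look like this (assuming the standard Kotlin JVM Gradle plugin, where kotlinOptions exposes a useIR flag for these compiler versions):

// build.gradle.kts
tasks.withType<org.jetbrains.kotlin.gradle.tasks.KotlinCompile>().configureEach {
    kotlinOptions {
        useIR = true // compile this module with the new JVM IR backend
    }
}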

Enjoy simplified configuration for preview releases

Previously, if you decided to use a preview release in your existing projects, you had to specify an additional Bintray repository in your Gradle files. Now all Kotlin preview artifacts are stored in Maven Central and there is no need to add the repository manually.

Save time! Install the M2 release, change the Kotlin version in your projects to 1.5.0-M2, and adjust any library dependencies if necessary.
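
For a typical Gradle project, the switch might look something like this minimal build.gradle.kts sketch (the plugin and dependencies shown are assumptions; adjust them to your own build):

// build.gradle.kts
plugins {
    kotlin("jvm") version "1.5.0-M2" // point the Kotlin Gradle plugin at the milestone release
}

repositories {
    mavenCentral() // preview artifacts are now published to Maven Central
}

dependencies {
    implementation(kotlin("stdlib"))
}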

Install 1.5.0-M2

Share your real-world cases with new language features

Have you had a chance to test out the experimental language features in Kotlin 1.4.30?

If you haven’t, give them a try now! And if you’ve already worked with them or have just been experimenting, you might have a good use case for us. 😉 Please don’t keep it a secret. Share it with us!

Our documentation team often gets feedback like this 🗣:

“Please add more samples to documentation”.

“It’s not clear without real-world examples”.

“Could you provide samples with detailed explanations?”

Please help us provide more real-world examples in our language docs and make them more helpful for developers like you.

Share a language feature sample

Don’t miss the Kotlin 1.5.0 videos

The Kotlin YouTube channel is available for those of you who like watching videos. We are continually updating the channel with new videos, and we plan to publish a video series dedicated to Kotlin 1.5.0. Be the first to watch 🎥!

Subscribe to Kotlin YouTube


Haven’t validated your projects with Kotlin 1.5.0 yet? Install 1.5.0-M2 now to avoid issues in the future.

Kotlin 1.4-M3: Generating Default Methods in Interfaces

In Kotlin 1.4, we’re adding new experimental ways of generating default methods in interfaces in the bytecode for the Java 8 target. Later, we’re going to deprecate the @JvmDefault annotation in favor of generating all interface method bodies directly when the code is compiled in a special mode. Read on for details of how this currently works and what will change.

In Kotlin, you can define methods with bodies in interfaces. This works even if your code runs on Java 6 or 7, which predate JVM support for default methods.

interface Alien {
   fun speak() = "Wubba lubba dub dub"
}

class BirdPerson : Alien

To make it work for older Java versions, the Kotlin compiler generates an additional class that contains an implementation of a default method as a static member. This is what the generated code looks like under the hood, at the bytecode level:

public interface Alien {
  String speak();

  public static final class DefaultImpls {
     public static String speak(Alien obj) {
        return "Wubba lubba dub dub";
     }
  }
}
public final class BirdPerson implements Alien {
  public String speak() {
    return Alien.DefaultImpls.speak(this);
  }
}

The Kotlin compiler generates the DefaultImpls class with the speak method, which contains the default implementation. It takes an instance of the interface as a parameter and treats it as this (so that calls to other members of the interface from the default implementation resolve correctly). The BirdPerson class implementing the interface contains the same method, which simply delegates to the implementation in DefaultImpls, passing the actual this as an argument.

In Kotlin 1.2, we added experimental support for the @JvmDefault annotation, which works if your code targets Java 8. You can annotate each interface method that has a default implementation with @JvmDefault to have the default method generated in the bytecode:

interface Alien {
   @JvmDefault
   fun speak() = "Wubba lubba dub dub"
}

class BirdPerson : Alien

That only works in a special compiler mode: you can only use it when you specify the -Xjvm-default compiler argument.

The @JvmDefault annotation is going to be deprecated later. With the new modes, there’s no need to annotate each member individually; previously, you most likely had to annotate every interface method that has a body, which was quite verbose.

Eventually, we want to generate method bodies in interfaces by default when your code targets Java 8 or higher. It’s not easy to quickly make this change: we want to make sure you don’t have problems when you mix the libraries or modules of your application that are compiled with different Kotlin versions and different modes. The Kotlin compiler for future versions will continue to “understand” the old scheme of default methods, but we’ll slowly migrate to the new scheme.

New modes for generating default methods in interfaces

If your code targets Java 8 and you want to generate default methods in interfaces, you can use one of two new modes in Kotlin 1.4: -Xjvm-default=all or -Xjvm-default=all-compatibility.
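
These modes are compiler flags, so in a Gradle build you would typically pass them through freeCompilerArgs. A minimal sketch (assuming the Kotlin JVM Gradle plugin and the Java 8 bytecode target) might look like this:

// build.gradle.kts
tasks.withType<org.jetbrains.kotlin.gradle.tasks.KotlinCompile>().configureEach {
    kotlinOptions {
        jvmTarget = "1.8" // default methods require the Java 8 (or higher) target
        freeCompilerArgs = freeCompilerArgs + "-Xjvm-default=all" // or "-Xjvm-default=all-compatibility"
    }
}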

In all mode, the compiler generates only default methods: there are no more DefaultImpls objects, and there is no need to annotate individual methods. This is the generated code for our initial sample:

// -Xjvm-default=all
public interface Alien {
  default String speak() {
     return "Wubba lubba dub dub";
 }
}
public final class BirdPerson implements Alien {}

Note that the BirdPerson class implementing the interface doesn’t contain the speak method: it automatically reuses the “super” implementation thanks to JVM support for default methods.

Newer versions of the Kotlin compiler will “understand” the old scheme. If your class compiled with the new scheme implements an interface compiled with the old scheme (with DefaultImpls), the compiler will recognize this and generate a hidden method in the class that delegates to the corresponding DefaultImpls method, as before.

The only problem that may arise is if you recompile old code containing a default method implementation while some other code that depends on it is not recompiled. In this case, use the all-compatibility mode. Then both default method bodies and DefaultImpls classes are generated:

// -Xjvm-default=all-compatibility
public interface Alien {
  default String speak() {
     return "Wubba lubba dub dub";
  }

  public static final class DefaultImpls {
     public static String speak(Alien obj) {
        // Calling the default method from the interface:
        return obj.$default$speak();
     }
  }
}
public final class BirdPerson implements Alien {}

Inside DefaultImpls, the Kotlin compiler specifically calls the default method defined in the interface. (To make this a non-virtual call, the compiler uses a special trick: it generates an additional synthetic method inside the interface and calls that instead.)

With all-compatibility mode you don’t need to recompile the classes that already use your interface; they continue to work correctly:

public final class Moopian implements Alien {
  public String speak() {
    return Alien.DefaultImpls.speak(this);
  }
}

all-compatibility mode guarantees binary compatibility for Kotlin clients but generates more methods and classes in the bytecode.

Fixing an issue with delegates

Before, it was a bit confusing to use an interface with @JvmDefault methods together with the “implementation by delegation” feature. If you used an interface with @JvmDefault as a delegate, the default method implementations were called even if the actual delegate type provided its own implementation:

interface Producer {
   fun produce() = "in interface"
}

class ProducerImpl : Producer {
   override fun produce() = "in class"
}

class DelegatedProducer(val p: Producer) : Producer by p

fun main() {
   val prod = ProducerImpl()
   // prints "in interface" if 'produce()' is annotated with @JvmDefault
   // prints "in class" in new jvm-default modes
   println(DelegatedProducer(prod).produce())
}

With the new jvm-default modes, it works as you would expect: the overridden version of produce is called when you delegate your implementation to the ProducerImpl class.

@JvmDefaultWithoutCompatibility

If you compile your code in all-compatibility mode and add a new interface, you can annotate it with the @JvmDefaultWithoutCompatibility annotation. This turns on “no compatibility mode” (-Xjvm-default=all) for that specific declaration, so no DefaultImpls objects are generated for it. Since you’ve just added the interface, no existing code calls it via the old scheme, and nothing can break.

To be precise, in all-compatibility mode you can use @JvmDefaultWithoutCompatibility to annotate all interfaces that aren’t part of the public API (more precisely, the public binary interface, or ABI) and therefore aren’t used by existing clients.
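
As a rough sketch, a newly added interface in a library compiled in all-compatibility mode could opt out like this (the interface itself is hypothetical and not from the original post):

@JvmDefaultWithoutCompatibility
interface Greeter { // new in this release: no existing binaries call it through DefaultImpls
    fun greet(name: String): String = "Hello, $name!"
}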

More about all-compatibility mode for library authors

The all-compatibility mode is designed specifically for library authors, allowing them to switch to the new scheme gradually while guaranteeing binary compatibility for their libraries. The following details and compatibility issues are therefore aimed mainly at library authors.

Guaranteeing binary compatibility between the new and old schemes is not entirely “seamless”. To prevent compatibility issues that might arise, the compiler reports an error in specific corner cases, and the @JvmDefaultWithoutCompatibility annotation suppresses this error. The following section describes the reasons for this and the relevant use cases.

Consider a class that inherits from a generic interface:

interface LibGeneric<T> {
   fun foo(p: T): T = p
}

open class LibString : LibGeneric<String>

In -Xjvm-default=all-compatibility mode, the Kotlin compiler generates an error. Let’s first see why and then discuss how you can fix it.

Under the hood, to make such code work with the DefaultImpls scheme, the Kotlin compiler of previous versions (or of the current version when no -Xjvm-default flag is used) generates an additional method with a specialized signature in the class:

open class LibString : LibGeneric<String> {
   // Generated implicitly by the compiler (Kotlin pseudocode for the
   // specialized method that delegates to LibGeneric.DefaultImpls.foo):
   fun foo(p: String): String { ... }
}

Sometimes, this specialized method is called in the generated bytecode. In pure Kotlin, this happens only in rare cases, such as when your LibString class is open and you call foo from a subclass of LibString via super.foo(). In mixed projects, if you use this code from Java, the specialized version gets called every time you call foo on a LibString instance!
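
For illustration, a minimal sketch of that pure-Kotlin case (the subclass is hypothetical):

class MyLibString : LibString() {
    // This super call is compiled against the specialized foo(String)
    // method that the compiler generated inside LibString
    fun fooTwice(p: String): String = super.foo(p) + super.foo(p)
}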

Without such an override, you could easily break binary compatibility: if you recompiled your LibString class with the new all-compatibility mode and ran it against old binaries, you could get a NoSuchMethodError!

The goal of all-compatibility mode is to guarantee binary compatibility at least for Kotlin clients, so unexpected NoSuchMethodError errors are unacceptable. To prevent them, the Kotlin compiler could, in principle, generate the same hidden specialized method as before. However, that would cause problems when updating from all-compatibility to all mode, and it would also create issues with default methods in diamond hierarchies. Generating such auxiliary implicit methods was necessary with the DefaultImpls scheme, but it is not needed once default methods are supported at the JVM level, and it can cause additional confusion (for more details, see Appendix: Why we don’t like implicit methods).

We decided to prevent this binary compatibility problem by making your choice explicit.

Fixing the compiler error

One option you have is to provide an explicit override:

interface LibGeneric<T> {
   fun foo(p: T): T = p
}

open class LibString : LibGeneric<String> {
   override fun foo(p: String): String = super.foo(p)
}

Yes, it’s a bit verbose, but for a good reason! If this code can be used from subclasses in Kotlin or from Java, adding an explicit override guarantees that older binaries will continue to work with new versions of your library compiled in all-compatibility mode.

Another option is to annotate your class with the @JvmDefaultWithoutCompatibility annotation. It turns on “no compatibility mode” for this specific class. Then an explicit override method is not required and no implicit methods are generated:

interface LibGeneric<T> {
   fun foo(p: T): T = p
}

@JvmDefaultWithoutCompatibility
open class LibString : LibGeneric<String> {
    // no implicit member
}

Appendix: Why we don’t like implicit methods

Why don’t we generate hidden methods like in the old scheme? Consider the following diagram which represents a diamond hierarchy – the Java class JavaClass implements the Kotlin Base interface both through extending KotlinClass and implementing Derived interface:

Let’s imagine that Kotlin continues to generate implicit overrides (as was necessary before with the DefaultImpls scheme). Then the code JavaClass().foo() prints 0, not 42! That becomes a new puzzler: in the source there are only two methods (one returning 0 and one returning 42), and it’s genuinely confusing why the method from the base interface is called instead of the more specific one from Derived. Once you take the implicit method in KotlinClass into account, the result makes sense. But we really want to avoid such puzzlers by not generating implicit methods in the first place, and instead have developers provide explicit methods when they’re necessary for compatibility reasons.

Conclusion

If you used the @JvmDefault annotation before, you can safely remove it and use one of the new modes. If you already used -Xjvm-default=enable, which generated only the default method implementations, you can now replace it with -Xjvm-default=all.

So far this support remains experimental, but we’re going to switch the default mode gradually, first to all-compatibility and then to all, in future major Kotlin versions. If no -Xjvm-default flag is specified now, the generated code will continue to use DefaultImpls.

How to try it

You can already try these new modes with the Kotlin 1.4-M3 version. See here for instructions on updating the Kotlin plugin to it.

Share your feedback

We’re grateful for all your bug reports in our issue tracker, and we’ll do our best to fix all the most important issues before the final release.

You are also welcome to join the #eap channel in our Kotlin Slack (get an invite here). In this channel, you can ask questions, participate in discussions, and get notifications of new preview builds.

Let’s Kotlin!
