
KotlinDL 0.2: Functional API, Model Zoo With ResNet and MobileNet, Idiomatic Kotlin DSL for Image Preprocessing, and Many New Layers

Introducing version 0.2 of our deep learning library, KotlinDL.

KotlinDL 0.2 is now available on Maven Central with a variety of new features: new layers, a special Kotlin-idiomatic DSL for image preprocessing, several types of datasets, a Model Zoo with support for the ResNet and MobileNet model families, and much more. Check out all the changes that made it into this release!

KotlinDL on GitHub

In this post, we’ll walk you through the changes to the Kotlin Deep Learning library in the 0.2 release:

Functional API

With the previous version of the library, you could only use the Sequential API to describe your model. Using the Sequential.of(..) method call, you could build a sequence of layers to describe models in a style similar to VGG.

Since 2014, many new architectures have addressed the disadvantages inherent in simple layer sequences, such as vanishing gradients or the degradation (accuracy saturation) problem. The famous residual neural networks (ResNet) use skip connections or shortcuts to jump over layers. In version 0.2, we’ve added a new Functional API that makes it possible for you to build models such as ResNet or MobileNet.

The Functional API provides a way to create models that are more flexible than the Sequential API. The Functional API can handle models with non-linear topology, shared layers, and even multiple inputs or outputs.

The main idea behind it is that a deep learning model is usually a directed acyclic graph (DAG) of layers. So the Functional API is a way to build graphs of layers.

Let’s build a ToyResNet model for the FashionMnist dataset to demonstrate this:

val (train, test) = fashionMnist()

val inputs = Input(28, 28, 1)
val conv1 = Conv2D(32)(inputs)
val conv2 = Conv2D(64)(conv1)
val maxPool = MaxPool2D(poolSize = intArrayOf(1, 3, 3, 1), 
                        strides = intArrayOf(1, 3, 3, 1))(conv2)

val conv3 = Conv2D(64)(maxPool)
val conv4 = Conv2D(64)(conv3)
val add1 = Add()(conv4, maxPool)

val conv5 = Conv2D(64)(add1)
val conv6 = Conv2D(64)(conv5)
val add2 = Add()(conv6, add1)

val conv7 = Conv2D(64)(add2)
val globalAvgPool2D = GlobalAvgPool2D()(conv7)
val dense1 = Dense(256)(globalAvgPool2D)
val outputs = Dense(10, activation = Activations.Linear)(dense1)

val model = Functional.fromOutput(outputs)

model.use {
   it.compile(
       optimizer = Adam(),
       loss = Losses.SOFT_MAX_CROSS_ENTROPY_WITH_LOGITS,
       metric = Metrics.ACCURACY
   )

   it.summary()

   it.fit(dataset = train, epochs = 3, batchSize = 1000)

   val accuracy = it.evaluate(dataset = test, batchSize = 1000)
                    .metrics[Metrics.ACCURACY]

   println("Accuracy after: $accuracy")
}

Here’s a summary of the model:

And here is a representation of the model architecture typical of the whole ResNet model family:

The main design of this API is borrowed from the Keras library, but it is not a complete copy. If you find any discrepancies between our API and the Keras library API, please refer to our documentation.

Model Zoo: support for the ResNet and MobileNet model families

Starting with the 0.2 release, Kotlin DL will include a Model Zoo, a collection of deep convolutional networks pre-trained on a large image dataset known as ImageNet.

The Model Zoo is important because modern architectures of convolutional neural networks can have hundreds of layers and tens of millions of parameters. Training models to an acceptable accuracy level (~70-80%) on ImageNet may require hundreds or thousands of hours of computation on a cluster of GPUs. With the Model Zoo, there is no need to train a model from scratch every time you need one. You can get a ready, pre-trained model from our repository of models and immediately use it for image recognition or transfer learning.

The following models are currently supported:

  • VGG’16
  • VGG’19
  • ResNet50
  • ResNet101
  • ResNet152
  • ResNet50v2
  • ResNet101v2
  • ResNet152v2
  • MobileNet
  • MobileNetv2

All the models in the Model Zoo include a special loader of model configs and model weights, as well as the special data preprocessing function that was applied when the models were trained on the ImageNet dataset.

Here’s an example of how to use one of these models, ResNet50, for prediction:

// specify the model type to be loaded, ResNet50, for example
val loader =
   ModelZoo(commonModelDirectory = File("cache/pretrainedModels"), modelType = ModelType.ResNet_50)

// obtain the model configuration
val model = loader.loadModel() as Functional

// load class labels (from ImageNet dataset in ResNet50 case)
val imageNetClassLabels = loader.loadClassLabels()

// load weights if required (for Transfer Learning purposes)
val hdfFile = loader.loadWeights()

Now you’ve got a model and weights, and you can use them in KotlinDL.

NOTE: Don’t forget to apply model-specific preprocessing for the new data. All the preprocessing functions are included in the Model Zoo and can be called via the preprocessInput function.
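To make this concrete, here is a rough sketch of running prediction, continuing from the snippet above (model, hdfFile, and imageNetClassLabels). The preprocessInput call and the inputDimensions property are assumptions based on the Model Zoo documentation, so the exact signatures may differ:

model.use {
   it.compile(
       optimizer = Adam(),
       loss = Losses.SOFT_MAX_CROSS_ENTROPY_WITH_LOGITS,
       metric = Metrics.ACCURACY
   )

   // Load the ImageNet weights downloaded by the Model Zoo loader.
   it.loadWeights(hdfFile)

   // floatImage is your input image decoded into a FloatArray.
   // Apply the model-specific ImageNet preprocessing before prediction
   // (this signature is an assumption; see the Model Zoo docs).
   val input = preprocessInput(floatImage, it.inputDimensions)

   val label = it.predict(input)
   println("Predicted class: ${imageNetClassLabels[label]}")
}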

If you want to train VGG or ResNet models from scratch, you can simply load the model configuration or start from the full model code written in Kotlin. All Model Zoo models are available via top-level functions located in the org.jetbrains.kotlinx.dl.api.core.model package.

val model = resnet50Light(imageSize = 28, 
                          numberOfClasses = 10, 
                          numberOfChannels = 1, 
                          lastLayerActivation = Activations.Linear)
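Such a model can then be compiled and trained exactly like the ToyResNet example above, for instance on the FashionMnist data (a minimal sketch reusing the calls shown earlier):

val (train, test) = fashionMnist()

model.use {
   it.compile(
       optimizer = Adam(),
       loss = Losses.SOFT_MAX_CROSS_ENTROPY_WITH_LOGITS,
       metric = Metrics.ACCURACY
   )

   it.fit(dataset = train, epochs = 3, batchSize = 1000)
}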

A full example of how to use VGG’19 for prediction and transfer learning with additional training on a custom dataset can be found in this tutorial.

DSL for image preprocessing

Python developers have access to a huge number of utilities and libraries for data preprocessing. However, in JVM languages there are specific difficulties with preprocessing images, videos, and music. Most libraries for image preprocessing in Java and Kotlin use the BufferedImage class, whose methods are sometimes inconsistent and at a very low level of abstraction. We decided to simplify the lives of Kotlin developers by making an easy and straightforward DSL using lambdas with receivers for setting the image preprocessing pipeline.

The DSL for image preprocessing can use the following operations:

  • Load
  • Crop
  • Resize
  • Rotate
  • Rescale
  • Sharpen
  • Save

Here is an example of a pipeline assembled with this DSL:

val preprocessing: Preprocessing = preprocess {
   transformImage {
       load {
           pathToData = imageDirectory
           imageShape = ImageShape(224, 224, 3)
           colorMode = ColorOrder.BGR
       }
       rotate {
           degrees = 30f
       }
       crop {
           left = 12
           right = 12
           top = 12
           bottom = 12
       }
       resize {
           outputWidth = 400
           outputHeight = 400
           interpolation = InterpolationType.NEAREST
       }
   }
   transformTensor {
       rescale {
           scalingCoefficient = 255f
       }
   }
}

As a result, basic augmentation can be implemented manually, for example by applying the pipeline above with different rotation angles and output sizes:

If you need additional image preprocessing operations, please feel free to make a feature request in our issue tracker.

New layers

We’ve implemented a variety of new layers that are required for ResNet and MobileNet models: 

  • BatchNorm
  • ActivationLayer
  • DepthwiseConv2D
  • SeparableConv2D
  • Merge (Add, Subtract, Multiply, Average, Concatenate, Maximum, Minimum)
  • GlobalAvgPool2D
  • Cropping2D
  • Reshape
  • ZeroPadding2D*

* Kudos to Anton Kosyakov for implementing ZeroPadding2D! 

If you would like to contribute a layer, we would be delighted to take a look at your pull requests.

Dataset API and its implementations: OnHeapDataset & OnFlyDataset

The standard way to run data through a neural network in forward mode is to load batches one by one into RAM and then into the memory area controlled by the TensorFlow computational graph.

We support this approach with an on-the-fly dataset (OnFlyDataset). It sequentially loads batch after batch into RAM during each training epoch, applying the preprocessing that was defined in advance (if any).

But what if our data fits into RAM? You can use an OnHeapDataset to load and keep all this data in RAM without reading it repeatedly from the disk at each epoch.
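As an illustrative sketch (the OnFlyDataset.create signature and the labels helper here are assumptions; check the Dataset API docs for the exact factory methods):

// OnHeapDataset: the whole dataset is loaded into RAM once.
// The embedded dataset helpers below return datasets of this kind.
val (train, test) = fashionMnist()

// OnFlyDataset: batches are read from disk and preprocessed on the fly
// during each epoch, using a Preprocessing pipeline like the one above.
val labels: FloatArray = loadLabels() // hypothetical helper producing your labels
val trainOnFly = OnFlyDataset.create(preprocessing, labels)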

Embedded datasets

For those of you who are just starting your journey in deep learning, we recommend practicing building your first neural networks on well-known datasets, such as a set of handwritten numbers (MNIST dataset), a similar set of images of fashion items from Zalando (FashionMNIST), the famous Cifar’10 dataset (50,000 images), or a collection of photos of cats and dogs from one of the most popular Kaggle competitions (25,000 images of various sizes).

All these datasets are stored remotely and are downloaded to a folder on your disk when needed. If a dataset has already been downloaded, it will not be downloaded again; instead, it will be loaded immediately from the disk.
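For example, the embedded datasets are exposed as top-level functions; fashionMnist() appears earlier in this post, and mnist() is assumed to follow the same pattern:

// Downloads on first use, then loads from the on-disk cache.
val (mnistTrain, mnistTest) = mnist()
val (fashionTrain, fashionTest) = fashionMnist()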

Adding KotlinDL to your project

To use KotlinDL in your project, you need to add the following dependency to your build.gradle file:

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.jetbrains.kotlinx:kotlin-deeplearning-api:0.2.0'
}

You can also take advantage of Kotlin DL functionality in any existing Java project, even if you don’t have any other Kotlin code in it yet. Here is an example of the LeNet-5 model written completely in Java.

Learn more and give feedback

We hope you enjoyed this brief overview of the new features in KotlinDL version 0.2!

  • For more information, see GitHub.
  • Check out the KotlinDL guide, which covers the library’s basic and advanced features.
  • Join the #kotlindl channel in Kotlin Slack (get an invite here).
  • If you have previously used KotlinDL, use the changelog for migration.
  • Check out this talk from Alexey Zinoviev, which offers a closer look at the library’s design and ideology.
  • The issue tracker is here.

Let’s Kotlin!


Kotlin Kernel for Jupyter Notebook, v0.9.0

This update of the Kotlin kernel for Jupyter Notebook primarily targets library authors and enables them to easily integrate Kotlin libraries with Jupyter notebooks. It also includes an upgrade of the Kotlin compiler to version 1.5.0, as well as bug fixes and performance improvements.


The old way to add library integrations

As you may know, it was already possible to integrate a library by creating a JSON file, which we call a library descriptor. In the kernel repository, we have a number of predefined descriptors. You can find the full list of them here.

Creating library descriptors is rather easy. Just create a JSON file and provide a description section with a library description and a link section with a link to the library’s web page. Then add the repositories and dependencies sections, describing which repositories to use for dependency resolution and which artifacts the library includes. You can also add an imports section, where you list imports that will be automatically added to the notebook when the descriptor is loaded, as well as init and initCell code snippets, renderers, and so on. When you are finished, save the created file and refer to it from the kernel in whatever way is most convenient for you. In this release, we’ve added some more ways to load descriptors. You can read more about how to create library descriptors here.

This method for integrating libraries is still supported and works particularly well when you are not the author of the library you want to integrate. But it does have some limitations:

  1. It requires additional version synchronization. The integration breaks if a new version of the library is released and a class that was used in the integration is renamed.
  2. It’s not that easy to write Kotlin code in JSON without any IDE support. So if your library provides renderers or initialization code, then you have to go through a potentially long process of trial and error.
  3. Transitive dependencies are not supported. If libraries A and B provide descriptors and library A depends on library B, then adding just the descriptor for library A is not enough; you also need to run %use B.

  4. Advanced integration techniques are not allowed. See below for more info.

The new way to add library integrations

One of the best things about Kotlin notebooks (compared to Python notebooks) is that you do not have to think about dependencies. You just load the library you need with the @DependsOn annotation and use it. All transitive dependencies are loaded automatically, and you do not have to worry about environments or dependency version clashes. An added bonus is that it works the same way on all computers. So far, however, there hasn’t been a way to define the descriptor mentioned above and attach it to a library so you don’t have to create and load it separately.

Now there is such a way. You can define the descriptor inside your library code and use a Gradle plugin to automatically find and load it. This means you do not have to write a separate JSON file and %use directive.

If you are the maintainer of a library and can change its code, you may like the new method of integration. It currently utilizes Gradle as a build system, but if you use something else, feel free to open an issue and we will work on adding support for it.

Suppose you have the following Gradle build script written in the Kotlin DSL:

plugins {
    kotlin("jvm")
}

group = "org.example"
version = "1.0"

// ...

The published artifact of your library should then have these coordinates:

org.example:library:1.0

You normally add your library to the notebook using the DependsOn file annotation:

@file:DependsOn("org.example:library:1.0")

Now suppose you need to add a default import and a renderer for this library in notebooks. First, you apply the Gradle plugin to your build:

plugins {
    kotlin("jvm")
    kotlin("jupyter.api") version "<jupyterApiVersion>"
}

Then, you write an integration class and mark it with the JupyterLibrary annotation:

package org.example

import org.jetbrains.kotlinx.jupyter.api.annotations.JupyterLibrary
import org.jetbrains.kotlinx.jupyter.api.*
import org.jetbrains.kotlinx.jupyter.api.libraries.*

@JupyterLibrary
internal class Integration : JupyterIntegration() {
    override fun Builder.onLoaded() {
        import("org.example.*")
        render<MyClass> { HTML(it.toHTML()) }
    }
}

Here it is assumed that MyClass is a class from your library that has a toHTML() method, which returns an HTML snippet represented as a string.

After re-publishing your library, restart the kernel and import the library via DependsOn again. Now you can use all the packages from org.example without specifying additional qualifiers, and you can see rendered HTML in cells that return MyClass instances!

Advanced integration features

Let’s take a look at some advanced techniques you can use to improve the integration. We’ll use the following set of classes for reference:

data class Person(
    val name: String,
    val lastName: String,
    val age: Int,
    val cars: MutableList<Car> = mutableListOf(),
)

data class Car(
    val model: String,
    val inceptionYear: Int,
    val owner: Person,
)

annotation class MarkerAnnotation

Subtype-aware renderers

In the descriptors using the old style, you can define renderers that transform cell results of a specific type. The main problem with this approach is that type matching is done by fully qualified type names. So if you define a renderer for a type A that has a subtype B, the renderer will not be triggered for instances of type B.

The new API offers two solutions to this problem. First, you can implement the org.jetbrains.kotlinx.jupyter.api.Renderable interface:

import org.jetbrains.kotlinx.jupyter.api.*

class MyClass: Renderable {
    fun toHTML(): String {
        return "<p>Instance of MyClass</p>"
    }

    override fun render(notebook: Notebook) = HTML(toHTML())
}

It yields a rendered MyClass instance as the cell output.

Another way to do the same thing has actually already been presented above:

render<MyClass> { HTML(it.toHTML()) }

This option is preferable if you want to keep integration logic away from the main code.

Variable converters

Variable converters allow you to add callbacks for variables of a specific type:

addTypeConverter(
    FieldHandlerByClass(Person::class) { host, person, kProperty ->
        person as Person
        if (person.name != kProperty.name) {
            host.execute("val ${person.name} = ${kProperty.name}")
        }
    }
)

This converter creates a new variable with the name stored in person.name for each Person variable defined in a cell. For example, after you define a Person whose name property is "Paul", a new variable called Paul appears, referring to the same instance.

Annotation callbacks

You can also add callbacks for file annotations (such as the aforementioned DependsOn) and for classes marked with specific annotations.

onClassAnnotation<MarkerAnnotation> { classes ->
    classes.forEach {
        println("Class ${it.simpleName} was marked!")
    }
}

Here we are simply logging the definition of each class marked with MarkerAnnotation.

Cell callbacks

Descriptors allow you to add callbacks that are executed when the library loads (init) and before each cell is executed (initCell). The new integration method also allows you to add these callbacks with ease, and it additionally supports callbacks triggered after cell execution. Let’s see how it works.

beforeCellExecution {
    println("Before cell callback")
}

afterCellExecution { _, result ->
    println("Cell [${notebook.currentCell?.id}] was evaluated, result is $result")
}


Here you see a usage of the notebook variable, which provides some information about the current notebook.

Dependencies, renderers, and more

There are some other methods you can use to improve Jupyter integration, such as render, import, dependencies, repositories, and more. See the JupyterIntegration code for a full list.

Note that you can mark any class that implements LibraryDefinition or LibraryDefinitionProducer with @JupyterLibrary. There’s no need to extend JupyterIntegration.

All of the code from this section is provided here. You can also find a more complex example of integration in the Dataframe library.

Maven artifacts for your use case

We now publish a set of artifacts to Maven Central, and you are welcome to use them in your own libraries.

The kotlin-jupyter-api and kotlin-jupyter-api-annotations artifacts are used in the code-based integration scenario described above. You will not usually need to add them manually – the Gradle plugin does it for you. Still, these artifacts may help in some situations, for example if you don’t use Gradle or just want to use some classes from the API without integrating it.
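For reference, their coordinates can be added explicitly (shown here in Gradle Kotlin DSL syntax; the version placeholder matches the plugin example above):

dependencies {
    implementation("org.jetbrains.kotlinx:kotlin-jupyter-api:<jupyterApiVersion>")
    implementation("org.jetbrains.kotlinx:kotlin-jupyter-api-annotations:<jupyterApiVersion>")
}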

If you just want to use the Kotlin REPL configuration and the compilation-related features used in the kernel, you may be interested in the kotlin-jupyter-shared-compiler artifact. It was designed to be consistent with the IntelliJ Platform, so you can use it to build IntelliJ plugins.

kotlin-jupyter-lib-ext is a general-purpose library that includes functions for image and HTML rendering. You can load it from a notebook with %use lib-ext. It is not included in the kernel distribution because it may require additional dependencies in the future, and it is not a good idea to bundle them by default.

And finally, you can depend on the kotlin-jupyter-kernel artifact if you need the whole kernel bundled into your application. You can use the embedKernel method to start the kernel server.

Other artifacts have no clear use case and are just transitive dependencies of other ones.

If your use case is not covered, please open an issue or contact us in the #datascience channel of Kotlin Slack.

Kotlin 1.5.0 and bug fixes

The underlying Kotlin compiler was updated to a 1.5.0 pre-release version. It doesn’t use the new JVM IR backend at the moment, but we’ll make that happen soon. Most importantly, we’ve fixed a bug in the REPL compiler that affected the updating of scripts’ implicit receivers, so performance should now be better for notebooks with a large number of executed cells.

A number of additional bugs have also been fixed, including these particularly weird ones:

  • Irrelevant error pop-ups in the Notebook client (#109)
  • Incorrect parsing of the %use magic (#110)

  • Resolution of transitive dependencies with runtime scope didn’t work
  • Leaking of kernel stdlib into script classpath (#27)

Check out the release changelog for further details.

Let’s Kotlin!


Lets-Plot, in Kotlin

You can understand a lot about data from metrics, checks, and basic statistics. However, as humans, we grasp trends and patterns way quicker when we see them with our own eyes. If there was ever a moment you wished you could easily and quickly visualize your data, and you were not sure how to do it in Kotlin, this post is for you!

Today I’d like to talk to you about Lets-Plot for Kotlin, an open-source plotting library for statistical data written entirely in Kotlin. You’ll learn about its API, the kinds of plots you can build with it, and what makes this library unique. Let’s start with the API.

ggplot-like API

The Lets-Plot Kotlin API is built with layered graphic principles in mind. You may be familiar with this approach if you have ever used the ggplot2 package for R.

“This grammar […] is made up of a set of independent components that can be composed in many different ways. This makes [it] very powerful because you are not limited to a set of pre-specified graphics, but you can create new graphics that are precisely tailored for your problem.” Hadley Wickham, ggplot2: Elegant Graphics for Data Analysis

If you have worked with ggplot2 before, you may recognize the API’s style.
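For instance, a simple scatter plot reads like a direct translation of ggplot2 into Kotlin (an illustrative sketch; the lets_plot and geom_point names follow the formula given below):

val data = mapOf(
    "weight" to listOf(65.2, 72.8, 58.1, 80.4),
    "height" to listOf(170.0, 182.5, 160.2, 188.0)
)

// In a Jupyter cell, the plot renders when it is the last expression.
lets_plot(data) + geom_point(size = 5.0) { x = "weight"; y = "height" }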

If not, let’s unpack what’s going on here. In Lets-Plot, a plot is represented by at least one layer. Layers are responsible for creating the objects painted on the ‘canvas’ and contain the following elements:

  • Data – the subset of data specified either once for all layers or on a per-layer basis. One plot can combine multiple different datasets (one per layer).
  • Aesthetic mapping – describes how variables in the dataset are mapped to the visual properties of the layer, such as color, shape, size, or position.
  • Geometric object – represents a particular type of chart.
  • Statistical transformation – computes some kind of statistical summary on the raw input data. For example, the bin statistic is used for histograms, and smooth is used for regression lines.

  • Position adjustment – a method used to compute the final coordinates of geometry. Used to build variants of the same geometric object or to avoid overplotting.

To combine all these parts together, you need to use the following simple formula:

p = lets_plot(<dataframe>) 
p + geom_<chart_type>(stat=<stat>, position=<adjustment>) { <aesthetics mapping> }

You can learn more about the Lets-Plot basics and get a better understanding of what the individual building blocks do by checking out the Getting Started Guide.

Customizable plots

Out of the box, Lets-Plot supports numerous visualization types – histograms, box plots, scatter plots, line plots, contour plots, maps, and more!

All of the plots are flexible and highly customizable, yet the library manages to keep the balance between powerful customization capabilities and ease of use. You can start with simple but useful visualizations of data distribution, such as a histogram plot.

You also have all the tools you need to create complex and nuanced visualizations, such as a plot of the Iris dataset with customized tooltips.

Check out these tutorials to explore the available Lets-Plot visualizations and learn how to use them:

Integration with the Kotlin kernel for Jupyter Notebook

You may have noticed from the screenshots that these plots were created in Jupyter Notebook. Indeed, Lets-Plot integrates with the Kotlin kernel for Jupyter Notebook out of the box. If you have the Kotlin kernel installed (see the instructions on how to do so), all you need to do to start plotting is add the following line magic in your notebook:

%use lets-plot

That’s it! Plot away 🙂

Kotlin notebooks are also supported in JetBrains Datalore, an online data science notebook with smart coding assistance. Check out an example Datalore notebook that uses Lets-Plot.

Lets-Plot Internals

Finally, I wanted to share with you a little bit about the implementation of Lets-Plot, because it is a one-of-a-kind multiplatform library. Due to the unique multiplatform nature of Kotlin, the plotting functionality is written once in Kotlin and can then be packaged as a JavaScript library, JVM library, and a native Python extension.


Whichever environment you prefer, you can use the same functionality and API to visualize your data with Lets-Plot!

The Kotlin API is built on top of the JVM JAR; however, you can also use the JVM JAR independently. For instance, you can embed the plots into a JVM application using either JavaFX or the Apache Batik SVG Toolkit for graphics rendering.

Lets-Plot truly is an amazing example of Kotlin’s multiplatform potential and a great visualization tool for your data needs. I hope this post has sparked your interest and you’ll give it a go!


Deep Learning With Kotlin: Introducing KotlinDL-alpha

Hi folks!
Today we would like to share with you the first preview of KotlinDL (v0.1.0), a high-level deep learning framework written in Kotlin and inspired by Keras. It offers simple APIs for building, training, and deploying deep learning models in a JVM environment. High-level APIs and sensible defaults for many parameters make it easy to get started with KotlinDL. You can create and train your first simple neural network with only a few lines of Kotlin code:

private val model = Sequential.of(
    Input(28, 28, 1),
    Flatten(),
    Dense(300),
    Dense(100),
    Dense(10)
)

fun main() {
    val (train, test) = Dataset.createTrainAndTestDatasets(
        trainFeaturesPath = "datasets/mnist/train-images-idx3-ubyte.gz",
        trainLabelsPath = "datasets/mnist/train-labels-idx1-ubyte.gz",
        testFeaturesPath = "datasets/mnist/t10k-images-idx3-ubyte.gz",
        testLabelsPath = "datasets/mnist/t10k-labels-idx1-ubyte.gz",
        numClasses = 10,
        ::extractImages,
        ::extractLabels
    )
    val (newTrain, validation) = train.split(splitRatio = 0.95)

    model.use {
        it.compile(
            optimizer = Adam(),
            loss = Losses.SOFT_MAX_CROSS_ENTROPY_WITH_LOGITS,
            metric = Metrics.ACCURACY
        )

        it.summary()

        it.fit(
            dataset = newTrain,
            epochs = 10,
            batchSize = 100,
            verbose = false
        )

        val accuracy = it.evaluate(
            dataset = validation,
            batchSize = 100
        ).metrics[Metrics.ACCURACY]

        println("Accuracy: $accuracy")
        it.save(File("src/model/my_model"))
    }
}

GPU support

Training deep learning models can be resource-heavy, and you may wish to accelerate the process by running it on a GPU. This is easily achievable with KotlinDL!
With just one additional dependency, you can run the above code without any modifications on an NVIDIA GPU device.
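For reference, the extra dependency might look like this in a Gradle Kotlin DSL build (the TensorFlow GPU artifact coordinates are an assumption; check the KotlinDL README for the exact ones):

dependencies {
    implementation("org.jetbrains.kotlinx:kotlin-deeplearning-api:0.1.0")
    // Assumed artifact carrying the TensorFlow GPU native binaries:
    implementation("org.tensorflow:libtensorflow_jni_gpu:1.15.0")
}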

Rich API

KotlinDL comes with all the necessary APIs for building and training feedforward neural networks, including Convolutional Neural Networks. It provides reasonable defaults for most hyperparameters and offers a wide range of optimizers, weight initializers, activation functions, and all the other necessary levers for you to tweak your model.
With KotlinDL, you can save the resulting model, and import it for inference in your JVM backend application.

Keras model import

Out of the box, KotlinDL offers APIs for building, training, and saving deep learning models, as well as for loading them to run inference. When importing a model for inference, you can use a model trained with KotlinDL, or you can import a model trained in Python with Keras (versions 2.*).

For models trained with KotlinDL or Keras, KotlinDL supports transfer learning methods that allow you to make use of an existing pre-trained model and fine-tune it to your task.
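As a rough sketch of that flow, assuming a Keras model exported as a JSON config plus an HDF5 weights file (the package paths and exact method names here are assumptions based on KotlinDL’s examples):

import org.jetbrains.kotlinx.dl.api.core.Sequential
import io.jhdf.HdfFile
import java.io.File

val model = Sequential.loadModelConfiguration(File("modelConfig.json"))

model.use {
    it.compile(
        optimizer = Adam(),
        loss = Losses.SOFT_MAX_CROSS_ENTROPY_WITH_LOGITS,
        metric = Metrics.ACCURACY
    )

    // Load the pre-trained Keras weights instead of random initialization.
    it.loadWeights(HdfFile(File("weights.h5")))

    // From here you can fine-tune on your own data or run inference.
}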

Temporary limitations

In this first alpha release, only a limited number of layers are available: Input(), Flatten(), Dense(), Dropout(), Conv2D(), MaxPool2D(), and AvgPool2D(). This limitation means that not all Keras models are currently supported. You can import and fine-tune a pre-trained VGG-16 or VGG-19 model, but not, for example, a ResNet50 model. We are working hard on bringing you more layers in the upcoming releases.

Another temporary limitation concerns deployment. You can deploy a model in a server-side JVM environment; however, inference on Android devices is not yet supported. It is coming in later releases.

What’s under the hood?

KotlinDL is built on top of the TensorFlow Java API, which is being actively developed by the open-source community.

Give it a try!

We’ve prepared some tutorials to help you get started with KotlinDL.

Feel free to share your feedback through GitHub issues, create your own pull requests, and join the #deeplearning channel in Kotlin Slack.

