Jetpack Compose Screenshot Testing with Shot

Weeks ago, the Jetpack Compose team announced their first alpha release full of useful features, a lot of reference code and some documentation. It was a great moment for the Android ecosystem! We know Compose is going to change the way Android devs build apps around the world and we also wanted to help. That’s why today we announce official Screenshot Testing support for Jetpack Compose in Shot.

Right after the release, José Alcérreca wrote to us to mention there could be an interesting opportunity to implement screenshot testing support as part of our already implemented screenshot testing library.

Screenshot testing is one of the best testing strategies you can use in your UI layer. We have been using it for years, and we love it ❤️ It takes a picture of your views in the state you want and saves it as a reference in your repository. Once you confirm these screenshots are correct, they will be used to ensure there are no bugs in your UI layer in subsequent test executions. If there is any error, you’ll get a report with the screenshot diff showing you where the failure is. Here is an example of a failed test where, by mistake, the account number is not shown using the * mask it showed before:

Look at the third image! The “diff” column shows where the error is using pixel-perfect precision 📸

original screenshot – new screenshot – diff report

In case you would like to know more about this testing strategy, you can find a detailed blog post about it here by Eduardo Pascua.

However, at this point, you might be wondering what a Jetpack Compose screenshot test might look like 🤔 Don’t worry, it’s really simple. We’ve provided a simple interface named “ScreenshotTest” for you. You can add this interface to any instrumentation test and use it to take a screenshot of your components as follows:


class OverviewBodyTest : ScreenshotTest {

    @get:Rule
    val composeTestRule = createComposeRule()

    @Test
    fun rendersDefaultOverviewBody() {
        renderOverviewBody()

        compareScreenshot(composeTestRule)
    }

    private fun renderOverviewBody() {
        composeTestRule.setContent {
            RallyTheme {
                OverviewBody()
            }
        }
    }

}

Easy, isn’t it? And fully compatible with all the testing tools provided by the Jetpack Compose team! Once you run your tests, Shot will generate a detailed report for you with all the screenshots recorded as follows:
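As a quick reference, recording and verifying screenshots is done with Shot’s Gradle tasks (task names as documented by Shot; adjust for your own module setup):

```shell
# Record the baseline screenshots and store them in your repository
./gradlew executeScreenshotTests -Precord

# Later runs: compare the current UI against the recorded baselines
# and generate the report on failure
./gradlew executeScreenshotTests
```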

You will also find a useful plain-text console report whenever you execute your tests in CI or from your terminal.

Now it is time to test your components using screenshot tests! If you want to review more information about Shot, you can check the official documentation here. In case you would like to review a sneak peek of what a Compose app with screenshot tests would look like, we’ve prepared a branch of the official Google code samples with some tests written using Shot.

We hope you find Shot super useful and you can start using it soon. Remember it can also be used with regular Android apps if you can’t use Jetpack Compose right now!

The Dark Secrets of Fast Compilation for Kotlin

Compiling a lot of code fast is a hard problem, especially when the compiler has to perform complex analyses such as overload resolution and type inference with generics. In this post, I’ll tell you about a huge and largely invisible part of Kotlin which makes it compile much faster on relatively small changes that happen a lot in an everyday run-test-debug loop.

Also, we are looking for Senior Developers to join the team at JetBrains working on fast compilation for Kotlin, so if you are interested, look at the bottom of this post.

Let’s start with the obligatory XKCD comic #303:

XKCD comic 303: Compiling

This post is about one very important aspect of every developer’s life: how long it takes to run a test (or just hit the first line of your program) after you make a change in the code. This is often referred to as time-to-test.

Why this is so important:

  • If time-to-test is too short, you are never forced to get some coffee (or a sword fight),
  • If time-to-test is too long, you start browsing social media or get distracted in some other way, and lose track of what that change you made was.

While both situations arguably have their pros and cons, I believe it’s best to take breaks consciously and not when your compiler tells you to. Compilers are smart pieces of software, but healthy human work schedules are not what compilers are smart at.

Developers tend to be happier when they feel productive. Compilation pauses break the flow and make us feel stuck, stopped in our tracks, unproductive. Hardly anybody enjoys that.

Why does compilation take so long?

There are generally three big reasons for long compilation times:

  1. Codebase size: compiling 1 MLOC usually takes longer than 1 KLOC.
  2. How optimized your toolchain is, including the compiler itself and any build tools you are using.
  3. How smart your compiler is: whether it figures many things out without bothering the user with ceremony or constantly requires hints and boilerplate code.

The first two factors are kind of obvious, let’s talk about the third one: the smartness of the compiler. It is usually a complicated tradeoff, and in Kotlin, we decided in favor of clean readable type-safe code. This means that the compiler has to be pretty smart, and here’s why.

Where Kotlin stands

Kotlin is designed to be used in an industrial setting where projects live long, grow big and involve a lot of people. So, we want static type safety to catch bugs early and get precise tooling (completion, refactorings and find usages in the IDE, precise code navigation and such). Then, we also want clean readable code without unneeded noise or ceremony. Among other things, this means that we don’t want types all over the code. And this is why we have smart type inference and overload resolution algorithms that support lambdas and extension function types, we have smart casts (flow-based typing), and so on. The Kotlin compiler figures out a lot of stuff on its own to keep the code clean and type-safe at the same time.

Can one be smart and fast at the same time?

To make a smart compiler run fast you certainly need to optimize every bit of the toolchain, and this is something we are constantly working on. Among other things, we are working on a new-generation Kotlin compiler that will run much faster than the current one. But this post is not about that.

However fast a compiler is, it won’t be too fast on a big project. And it’s a huge waste to recompile the entire codebase on every little change you make while debugging. So, we are trying to reuse as much as we can from previous compilations and only compile what we absolutely have to.

There are two general approaches to reducing the amount of code to recompile:

  • Compile avoidance — only recompile affected modules,
  • Incremental compilation — only recompile affected files.

(One could think of an even finer grained approach that would track changes in individual functions or classes and thus recompile even less than a file, but I’m not aware of practical implementations of such an approach in industrial languages, and altogether it doesn’t seem necessary.)

Now let’s look into compile avoidance and incremental compilation in more detail.

Compile avoidance

The core idea of compile avoidance is:

  • Find “dirty” (=changed) files
  • Recompile the modules these files belong to (use the results of the previous compilation of other modules as binary dependencies)
  • Determine which other modules may be affected by the changes
    • Recompile those as well, check their ABIs too
    • Repeat until all affected modules are recompiled

The algorithm is more or less straightforward if you know how to compare ABIs. Altogether, we get to recompile only those modules that are affected by the changes. Of course, a change in a module that nobody depends on will compile faster than a change in the ‘util’ module that everybody depends on (if it affects its ABI).

Tracking ABI changes

ABI stands for Application Binary Interface, and it’s kind of the same as API but for the binaries. Essentially, the ABI is the only part of the binary that dependent modules care about (this is because Kotlin has separate compilation, but we won’t go into this here).

Roughly speaking, a Kotlin binary (be it a JVM class file or a KLib) contains declarations and bodies. Other modules can reference declarations, but not all declarations. So, private classes and members, for example, are not part of the ABI. Can a body be part of an ABI? Yes, if this body is inlined at the call site. Kotlin has inline functions and compile-time constants (const val’s). If a body of an inline function or a value of a const val changes, dependent modules may need to be recompiled.

So, roughly speaking, the ABI of a Kotlin module consists of declarations, bodies of inline functions and values of const vals visible from other modules.
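As an illustration, here is a tiny hypothetical module showing which declarations end up in the ABI (the names are invented for this example):

```kotlin
// Which declarations are part of the module's ABI?

const val MAX_RETRIES = 3        // value is inlined at call sites -> part of the ABI

inline fun twiceRetries(): Int { // inline body is copied to call sites -> part of the ABI
    return MAX_RETRIES * 2
}

fun publicApi(x: Int): Int =     // only the signature is part of the ABI,
    helper(x) * MAX_RETRIES      // the body is not (it's not inline)

private fun helper(x: Int) = x + 1   // private -> not part of the ABI at all
```

Changing the body of publicApi would not affect dependent modules, but changing MAX_RETRIES or the body of twiceRetries would.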

A straightforward way to detect a change in the ABI is

  • Store the ABI from the previous compilation in some form (you might want to store hashes for efficiency),
  • After compiling a module, compare the result with the stored ABI:
    • If it’s the same, we are done;
    • If it’s changed, recompile dependent modules.
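A minimal sketch of this detection step, hashing declaration signatures per name (the names here are hypothetical, not the compiler’s actual data structures):

```kotlin
// Sketch: a module's ABI as a map from declaration name to a hash of its signature
// (inline bodies and const val values would be hashed too, omitted for brevity).
data class ModuleAbi(val declarationHashes: Map<String, Int>)

fun abiOf(signatures: Map<String, String>): ModuleAbi =
    ModuleAbi(signatures.mapValues { (_, signature) -> signature.hashCode() })

// Compare against the stored ABI from the previous compilation.
fun abiChanged(previous: ModuleAbi, current: ModuleAbi): Boolean =
    previous.declarationHashes != current.declarationHashes
```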

Pros and cons of compile avoidance

The biggest advantage of compile avoidance is its relative simplicity.

This approach really helps when modules are small, because the unit of recompilation is an entire module. If your modules are big, recompilations will be long. So, basically, compile avoidance dictates that we have a lot of small modules, and as developers we may or may not want this. Small modules don’t necessarily sound like a bad design but I’d rather structure my code for people, not machines.

Another observation is that many projects have something like a ‘util’ module where many small useful functions reside. And virtually every other module depends on ‘util’ at least transitively. Now, let’s say I want to add another tiny useful function that is used three times across my codebase. It adds to the module ABI, so all dependent modules are affected, and I get into a long corridor sword fight because my entire project is being recompiled.

On top of that, having a lot of small modules (each of which depends on multiple others) means that the configuration of my project may become huge because for each module it includes its unique set of dependencies (source and binary). Configuring each module in Gradle normally takes some 50-100ms. It is not uncommon for large projects to have more than 1000 modules, so total configuration time may be well over a minute. And it has to run on every build and every time the project is imported into the IDE (for example, when a new dependency is added).

There are a number of features in Gradle that mitigate some of the downsides of compile avoidance: configurations can be cached, for example. Still, there’s quite a bit of room for improvement here, and this is why, in Kotlin, we use incremental compilation.

Incremental compilation

Incremental compilation is more granular than compile avoidance: it works on individual files rather than modules. As a consequence, it does not care about module sizes, nor does it recompile the whole project when the ABI of a “popular” module is changed insignificantly. In general, this approach does not restrict the user as much and leads to shorter time-to-test. Also, developers’ swords get neglected and rusty and beg to be used at least once in a while.

Incremental compilation has been supported in JPS, IntelliJ’s built-in build system, since forever. Gradle only supports compile avoidance out of the box. As of 1.4, the Kotlin Gradle plugin brings a somewhat limited implementation of incremental compilation to Gradle, and there’s still a lot of room for improvement.

Ideally, we’d just look at the changed files, determine exactly which files depended on them, and recompile all these files. Sounds nice and easy but in reality it’s highly non-trivial to determine this set of dependent files precisely. For one thing, there can be circular dependencies between source files, something that’s not allowed for modules in most modern build systems. And dependencies of individual files are not declared explicitly. Note that imports are not enough to determine dependencies because of references to the same package and chain calls: for A.b.c() we need to import at most A, but changes in the type of B will affect us too.
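Here is a tiny hypothetical example of that chain-call case: the last file imports only A, yet a change to B still affects it:

```kotlin
// b.kt: change c's return type from Int to String and...
class B {
    fun c(): Int = 42
}

// a.kt
object A {
    val b = B()
}

// clean.kt: imports only A and never mentions B...
fun useIt() = A.b.c() + 1   // ...yet this line stops compiling if B.c changes
```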

Because of all these complications, incremental compilation tries to approximate the set of affected files by going in multiple rounds, here’s the outline of how it’s done:

  • Find “dirty” (=changed) files
  • Recompile them (use the results of the previous compilation as binary dependencies instead of compiling other source files)
  • Check if the ABI corresponding to these files has changed
    • If not, we are done!
    • If yes, find files affected by the changes, add them to the dirty files set, recompile
    • Repeat until the ABI stabilizes (this is called a “fixpoint”)
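The loop above can be sketched as follows; compile and affectedBy are hypothetical stand-ins for the real compiler machinery:

```kotlin
// Recompile dirty files in rounds until the ABI stabilizes (a fixpoint).
fun recompileToFixpoint(
    initialDirty: Set<String>,
    compile: (Set<String>) -> Map<String, Int>,    // files -> their new ABI hashes
    previousAbi: MutableMap<String, Int>,          // file -> ABI hash from the last build
    affectedBy: (Set<String>) -> Set<String>,      // files depending on the changed ones
): Set<String> {
    var dirty = initialDirty
    val recompiled = mutableSetOf<String>()
    while (true) {
        recompiled += dirty
        val newAbi = compile(recompiled)
        // Which of the recompiled files produced a different ABI than before?
        val changed = newAbi.filterKeys { previousAbi[it] != newAbi[it] }.keys
        previousAbi.putAll(newAbi)
        val next = affectedBy(changed) - recompiled
        if (next.isEmpty()) return recompiled      // fixpoint: the ABI stabilized
        dirty = next                               // widen the dirty set, go again
    }
}
```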

Since we already know how to compare ABIs, there are basically only two tricky bits here:

  1. using results of the previous compilation to compile an arbitrary subset of sources, and
  2. finding files affected by a given set of ABI changes.

Both are at least in part features of Kotlin’s incremental compiler. Let’s look at them one-by-one.

Compiling the dirty files

The compiler knows how to use a subset of the previous compilation results to skip compiling non-dirty files and just load the symbols defined in them to produce the binaries for the dirty files. This is not something a compiler would necessarily be able to do if not for incrementality: producing one big binary from a module instead of a small binary per source file is not so common outside of the JVM world. And it’s not a feature of the Kotlin language, it’s an implementation detail of the incremental compiler.

When we compare ABIs of the dirty files with the previous results, we may find out that we got lucky and no more rounds of recompilation are needed. Here are some examples of changes that only require recompilation of dirty files (because they don’t change the ABI):

  • Comments, string literals (except for const val’s) and such
    • Example: change something in the debug output
  • Changes confined to function bodies that are not inline and don’t affect return type inference
    • Example: add/remove debug output, or change the internal logic of a function
  • Changes confined to private declarations (they can be private to classes or files)
    • Example: introduce or rename a private function
  • Reordering of declarations

As you can see, these cases are quite common when debugging and iteratively improving code.
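For instance, swapping the first version below for the second is a body-only change with an identical signature, so the ABI stays the same and only the containing file needs recompiling (both versions are shown side by side, renamed only so they can coexist in one snippet):

```kotlin
// Before: concise body.
fun total(items: List<Int>): Int = items.sum()

// After: same signature, different internals -> the ABI is unchanged.
fun totalVerbose(items: List<Int>): Int {
    var acc = 0
    for (i in items) {
        println("adding $i")   // added debug output: a body-only change
        acc += i
    }
    return acc
}
```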

Widening the dirty file set

If we are not so lucky and some declaration has been changed, it means that some files that depend on the dirty ones could produce different results upon recompilation even though not a single line was changed in their code.

A straightforward strategy would be to give up at this point and recompile the whole module. This will bring all the issues with compile avoidance to the table: big modules become a problem as soon as you modify a declaration, and tons of small modules have performance costs too, as described above. So, we need to be more granular: find the affected files and recompile them.

So, we want to find the files that depend on the parts of the ABI that actually changed. For example, if the user renamed foo to bar, we only want to recompile the files that care about the names foo and bar, and leave other files alone even if they refer to some other parts of this ABI. The incremental compiler remembers which files depend on which declaration from the previous compilation, and we can use this data sort of like a module dependency graph. This, again, is not something non-incremental compilers normally do.

Ideally, for every file we should store which files depend on it, and which parts of the ABI they care about. In practice, it’s too costly to store all dependencies so precisely. And in many cases, there’s no point in storing full signatures.

Consider this example:

File: dirty.kt

// rename this to be 'fun foo(i: Int)'
fun changeMe(i: Int) = if (i == 1) 0 else bar().length

File: clean.kt

fun foo(a: Any) = ""
fun bar() = foo(1)

Let’s assume that the user renamed the function changeMe to foo. Note that, although clean.kt is not changed, the body of bar() will change upon recompilation: it will now be calling foo(Int) from dirty.kt, not foo(Any) from clean.kt, and its return type will change too. This means that we have to recompile both dirty.kt and clean.kt. How can the incremental compiler find this out?

We start by recompiling the changed file: dirty.kt. Then we see that something in the ABI has changed:

  • there’s no function changeMe any more,
  • there’s a function foo that takes an Int and returns an Int.

Now we see that clean.kt depends on the name foo. This means that we have to recompile both clean.kt and dirty.kt once again. Why? Because types cannot be trusted.

Incremental compilation must produce the same result as a full recompilation of all sources would. Consider the return type of the newly appeared foo in dirty.kt. It’s inferred, and in fact it depends on the type of bar from clean.kt, which is a circular dependency between files. So the return type could change when we add clean.kt to the mix. In this case we’ll get a compilation error, but until we recompile clean.kt together with dirty.kt, we don’t know about it.

The big rule of the state-of-the-art incremental compilation for Kotlin: all you can trust is names. And this is why for each file, we store

  • the ABI it produces, and
  • the names (not full declarations) that were looked up during its compilation.
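One hypothetical shape for that per-file index, and how it answers “which files are affected by these changed names?”:

```kotlin
// Per-file record: the ABI the file produces and the names it looked up.
data class FileIndex(
    val producedAbi: Map<String, Int>,   // declaration name -> hash
    val lookedUpNames: Set<String>,      // names consulted during compilation
)

// A file is affected if it looked up any of the changed names.
fun affectedFiles(changedNames: Set<String>, index: Map<String, FileIndex>): Set<String> =
    index.filterValues { file -> file.lookedUpNames.any { it in changedNames } }.keys
```

With the rename from the example above, clean.kt is picked up because it looked up the name foo, while a file that never consulted foo or changeMe is left alone.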

Some optimizations are possible in how we store all this. For example, some names are never looked up outside the file, e.g. names of local variables and in some cases local functions. We could omit them from the index. To make the algorithm more precise, we record which files were consulted when each of the names was looked up. And to compress the index we use hashing. There’s some space for more improvements here.

As you have probably noticed, we have to recompile the initial set of dirty files multiple times. Alas, there’s no way around this: there can be circular dependencies, and only compiling all the affected files at once will yield a correct result. In the worst case this gets quadratic, and incremental compilation may do more work than compile avoidance would, so there should be heuristics in place guarding against it.

Incremental compilation across module boundaries

The biggest challenge to date is incremental compilation that can cross module boundaries.

Say, we have dirty files in one module, we do some rounds and reach a fixpoint there. Now we have the new ABI of this module, and need to do something about the dependent modules.

Of course, we know what names were affected in our initial module’s ABI, and we know which files in the dependent modules looked these names up. Now, we could apply essentially the same incremental algorithm, but starting from the ABI change rather than from a set of dirty files. By the way, if there are no circular dependencies between modules, it’s enough to recompile the dependent files alone. But then, if their ABI has changed, we’d need to add more files from the same module to the set and recompile the same files again.

It’s an open challenge to implement this fully in Gradle. It will probably require some changes to the Gradle architecture, but we know from past experience that such things are possible and welcomed by the Gradle team.

Things not covered in this blog post

My goal here was to give you a taste of the fascinating machinery of fast compilation for Kotlin. There are many more things at play there that I deliberately left out, including but not limited to:

  • Build Cache
  • Configuration Cache
  • Task Configuration Avoidance
  • Efficiently storing incremental compilation indexes and other caches on disk
  • Incremental compilation for mixed Kotlin+Java projects
  • Reusing javac data structures in memory to avoid reading Java dependencies twice
  • Incrementality in KAPT and KSP
  • File watchers for quickly finding dirty files

Summary

Now, you have a basic idea of the challenges that fast compilation in a modern programming language poses. Note that some languages deliberately chose to make their compilers not that smart to avoid having to do all this. For better or worse, Kotlin went another way, and it seems that the features that make the Kotlin compiler so smart are the ones the users love the most because they provide powerful abstractions, readability and concise code at the same time.

While we are working on a new-generation compiler front-end that will make compilation much faster by rethinking the implementation of the core type-checking and name resolution algorithms, we know that everything described in this blog post will never go away. One reason for this is the experience with the Java programming language, which enjoys the incremental compilation capabilities of IntelliJ IDEA even though it has a much faster compiler than kotlinc is today. Another reason is that our goal is to get as close as possible to the development roundtrip of interpreted languages, which enjoy instantaneous pick-up of changes without any compilation at all. So, Kotlin’s strategy for fast compilation is: optimized compiler + optimized toolchain + sophisticated incrementality.

Join the team!

If you are interested in working on these kinds of problems, please consider the job opening we currently have on the Kotlin fast compilation team at JetBrains. Here’s the job listing in English, and one in Russian. Prior experience working on compilers or build tools is NOT required. We are hiring in all JetBrains offices (Saint Petersburg, Munich, Amsterdam, Boston, Prague, Moscow, Novosibirsk), or you can work remotely from anywhere in the world. It will be good to hear from you!

Kotlin 1.4 Online Event

The event of the year is on! Join us on October 12–15 for 4 days of Kotlin as we host the Kotlin 1.4 Online Event. You can tune in at any point during the event to listen to the talks, chat with the team, and ask the speakers questions. In addition to this, there will be lots of activities and entertainment including:

  1. Q&A sessions
  2. QuizQuest, with prize raffles for participants
  3. Virtual Kotlin Booth
  4. Event chat

Read the details below to find out about how to participate in these activities.

Register for Kotlin 1.4 Online Event

Program and speakers

Prepare yourself for four days of exciting talks and discussions about the latest Kotlin release.

Speakers from JetBrains together with Florina Muntenescu from Google and Sébastien Deleuze from Spring will throw light on how we came up with all these new features, and reveal some of the secrets hidden in the depths of the technology.

  • The first day will be about Kotlin in general. There will be a keynote speech, a community overview, and talks on language features and tooling achievements.
  • The second day is dedicated to Kotlin libraries.
  • The third day will be all about using Kotlin everywhere – on Android, for Mobile Multiplatform projects, and Kotlin/JS.
  • The fourth day will cover server-side Kotlin and the future of the language.

The schedule is available on the website. The timetable for the schedule is shown according to your computer’s timezone, so you don’t have to worry about having to figure out exactly when you have to log in to watch the talks you are interested in!

How to ask the Kotlin team your questions

Each day will be followed by a live Q&A session with the speakers from that day. They will try to answer as many of your questions as they can. You can already start preparing yours now. There are three ways to ask us:

  • You can ask on Twitter using the hashtag #kotlin14ask, so we can find your question and include it in the discussion.
  • You can ask us directly via this form.
  • You can also ask your questions during the event in the live chat.

The most interesting questions will be rewarded with Kotlin swag!

How to join the virtual Kotlin booth

You can chat with members of the Kotlin team directly – simply go to the registration form and sign up for a Virtual Booth session!

Share your Kotlin story with us, chat about any issues in your project, talk to us about how we can help you to convince your manager to use Kotlin, or even just come by and let us know you love Kotlin! We will do our best to cover as many inquiries as possible, but please keep in mind that we can’t guarantee everyone a one-on-one. So the sooner you send in your request, the greater your chance of getting a Virtual Booth appointment will be. To ensure that we can see as many people as possible, we may extend this opportunity, which means your appointment could be scheduled for a time slot before the event starts or sometime after the event concludes.

So, feel free to register for a Virtual Booth and wait for a confirmation email from the team.

QuizQuest: how to get Kotlin swag

Join the QuizQuest for a chance to win Kotlin swag, JetBrains product licenses, and other great prizes!

The QuizQuest is a series of 16 small quizzes that check what you’ve learned from the talks. After each talk, a quiz will be available on the material from the talk. Answer at least 3 of the questions correctly, and you will be entered into a raffle.

There are three raffle categories with different entry criteria:

  1. Quick Thinker. Take the quiz right after the sessions, submit your answers before the beginning of the next day, and if they are all correct you will be entered into a raffle for one of 12 Corkcicle canteen bottles! The more different quizzes you take, the greater your chances of winning.
  2. Explorer. Participate in at least one quiz before October 26 and you will be entered into a raffle where you can win one of 50 special conference t-shirts.
  3. Captain Kotlin. Solve all the quizzes over the event and you will have the chance to win an awesome backpack! We have 12 backpacks to raffle off. To be in with a chance of winning this raffle, you have to submit all your quiz answers by October 26.

The more quizzes you complete, the better your chances to win!

Live chat

During all the event talks and discussions there will be a live chat for audience members. The Kotlin team will be in the live chat too, so if you have any questions for them, you can ask them right there!

After the event

All the videos will be available on the event’s website and our YouTube channel right after the event, so even if you miss something, you can watch it later at any time. You are also encouraged to use the videos for your community events to watch and discuss the recordings together.

The State of Kotlin Support in Spring

This is a transcript of the “The State of Kotlin Support in Spring” talk by Sébastien Deleuze from KotlinConf:

Do you find this format useful? Please share your feedback with us!

Why Kotlin? (1:30)

First, I think that Kotlin improves the signal-to-noise ratio compared to Java. It’s super important to express our ideas and algorithms with something that is easier to understand.

Second, safety, and especially nullability. Kotlin has turned Java’s billion-dollar mistake into a powerful feature that allows it to deal with the absence of a value. The trick is to deal with “null” in the type system. Java is not doing that, and it makes a lot of difference.

Third, discoverability. You’re using APIs and new APIs are coming and it’s super important to make these APIs discoverable. I’m lazy and I like to use Ctrl + Space to discover new things! I prefer this way rather than using conventions, or things that you need to know before using the framework.

Pleasure is important as well! We mostly use Kotlin for work, but since we spend so much time doing it, I think it’s really important for it to be a pleasurable experience. And I enjoy using Kotlin much more than Java.

The Android ecosystem is huge and it’s coming to Kotlin very fast. Obviously the server side is moving slower, but I think the Android ecosystem could have an impact on the server side ecosystem, just given the number of developers. Sometimes mobile developers do server side things, not that much, but it happens. Maybe later we could see similar stories with the front end.

Spring❤Kotlin. How much? (3:13)

Framework documentation in Kotlin

At Spring, we decided to translate all of the framework reference documentation from Java into Kotlin. That means that for each code sample in the Spring reference documentation, you have two tabs. One is the Java tab, which is obviously still selected by default, but now there is also a Kotlin tab:

And it’s not a line-by-line or an automated translation! When an extension or a DSL is available, it is shown in the Kotlin code sample. I did the translation sample by sample and it’s idiomatic Kotlin using Spring.

I think that’s a good way to learn Kotlin and the idiomatic way of using Kotlin in the context of Spring. This is available in Spring framework 5.2 documentation. You can go to the current reference docs and you’ll see it!

I’ve tried to summarize the current status and the next steps in this table:

Obviously, translating the Spring framework documentation was a huge undertaking, it took many hours!

Next year we should be able to provide translated Spring Boot documentation as well. And if you want to help – feel free to do so! That would let me focus on other features 🙂 Translating the Spring Boot documentation is not that huge, it’s smaller than the Spring framework documentation and the work has already started via community contributions.

Spring Security will have Kotlin documentation as well. And Spring Data is a pretty huge project with 17 sub-projects. Some parts of Spring Data will have Kotlin documentation and some parts won’t.

Gradle Kotlin DSL on start.spring.io (5:04)

Another thing that we have done to increase Kotlin support in the Spring ecosystem is to provide Gradle Kotlin DSL support on start.spring.io:

When you start a Kotlin project with the Gradle build system, we now automatically use the new Kotlin DSL in the project that you generated. Feel free to use that!

More DSLs (5:28)

Spring MVC is provided with MockMvc, which is pretty nice, but it’s not easily discoverable and it employs a lot of static imports – it’s “Java-ish”.

MockMvc DSL by Clint Checketts and JB Nizet (5:28)

Thanks to community contributions again, specifically to Clint Checketts and Jean-Baptiste Nizet, we now have a MockMvc Kotlin DSL which you can trigger by using these kinds of MockMvc.request extensions:

mockMvc.request(HttpMethod.GET, "/person/{name}", "Lee") {
  secure = true
  accept = APPLICATION_JSON
  headers {
     contentLanguage = Locale.FRANCE
  }
  principal = Principal { "foo" }
}.andExpect {
  status { isOk }
  content { contentType(APPLICATION_JSON) }
  jsonPath("$.name") { value("Lee") }
  content { json("""{"someBoolean": false}""", false) }
}.andDo {
  print()
}

When you use these kinds of extensions, it triggers the use of the MockMvc DSL and allows you to describe that in a more Kotlin way. We tried to stay pretty close to the Java way. So if you know MockMVC in Java, it’s not a totally different thing with Kotlin idioms. We have tried to find the right balance between something idiomatic in Kotlin and something which is still close to the Java API.

Spring Cloud Contract DSL by Tim Ysewyn (6:32)

The team from Pivotal has also contributed to the Spring Cloud Contract DSL, so feel free to use that if you’re using Spring Cloud Contract to test your APIs:

contract {
  request {
    url = url("/foo")
    method = PUT
    headers {
        header("foo", "bar")
    }
    body = body("foo" to "bar")
  }
  response {
    status = OK
  }
}

Spring Security DSL by Eleftheria Stein (6:45)

I’m always happy when someone other than me creates some Kotlin features in the Spring ecosystem. Big thanks to the Spring Security team and especially Eleftheria Stein who has done an amazing job to introduce Spring Security Kotlin DSL:

http {
    formLogin {
        loginPage = "/log-in"
    }
    authorizeRequests {
        authorize("/css/**", permitAll)
        authorize("/user/**", hasAuthority("ROLE_USER"))
    }
}

Spring Security is configurable with the Java DSL, which is pretty extensive. The Spring Security team has also created an official Spring Security Kotlin DSL, which is currently experimental. It’s available at this URL, and next year it should be integrated by default with Spring Security 5.3.

Spring MVC DSL and functional API (7:21)

Another big part of the work in Spring Framework 5.2 has been to introduce the router DSL and the functional API for Spring MVC. I don’t know if you have seen the previous presentation I did, but basically with Spring Framework 5 we introduced a new way to write your web components via a functional API and a Kotlin DSL, as an alternative to @RequestMapping annotated controllers. It was previously available only with WebFlux, so to use this kind of DSL you had to switch stacks. Now we provide this DSL for both Spring MVC and Spring WebFlux.

Defining Handler & Router (8:02)

The DSL is very similar to the WebFlux version. You write your handlers with a ServerRequest input parameter and a ServerResponse return value:

fun hello(request: ServerRequest): ServerResponse

You have this API to define the status codes, the body, and use converters. Everything that you can do with Spring MVC and Annotations, you can do with this kind of API:

fun hello(request: ServerRequest) =
     ServerResponse.ok().body("Hello world!")

You can define the router. Here we have a distinct router and handler, and the router refers to the handler using callable references:

router {
  GET("/hello", ::hello)
}

fun hello(request: ServerRequest) =
     ServerResponse.ok().body("Hello world!")

Then you can expose the router to Spring Boot simply by returning the router DSL value and exposing it as a Bean. Spring Boot will automatically take these routes into account:

@Configuration
class RoutesConfiguration {

  @Bean
  fun routes(): RouterFunction<ServerResponse> = router {
     GET("/hello", ::hello)
  }

  fun hello(request: ServerRequest) =
        ServerResponse.ok().body("Hello world!")
}

More realistic Handler & Router (8:52)

If we take a look at a more realistic handler, you will create a dedicated class where you will inject your repositories and your services. Obviously, in Kotlin we always use constructor-based injection:

@Component
class PersonHandler(private val repository: PersonRepository) {

  fun listPeople(request: ServerRequest): ServerResponse {
     // ...
  }

  fun createPerson(request: ServerRequest): ServerResponse {
     // ...
  }

  fun getPerson(request: ServerRequest): ServerResponse {
     // ...
  }
}

You write your handlers as methods of this class. And you create your router, defining your routes and using callable references to these handlers:

@Configuration
class RouteConfiguration {

  @Bean
  fun routes(handler: PersonHandler) = router {
     accept(APPLICATION_JSON).nest {
        GET("/person/{id}", handler::getPerson)
        GET("/person", handler::listPeople)
     }
     POST("/person", handler::createPerson)
  }
 
}

You can create as many routers as you want. It’s not monolithic.

It supports nested predicates. You can create your own request predicate; the whole mechanism is very extensible.

Creating routes dynamically (9:35)

You can even create routes dynamically:

@Configuration
class RouteConfiguration {

  @Bean
  fun routes(routeRepository: RouteRepository) = router {
     for (route in routeRepository.listRoutes()) {
        GET("/$route") { request ->
          hello(request, route)
        }
     }
  }

  fun hello(request: ServerRequest, message: String) =
        ServerResponse.ok().body("Hello $message!")
}

It’s not just syntactic sugar or a more Kotlin-ish way to write your web components. It also provides new capabilities, because annotations are statically defined: if you need to create routes depending on something that is in your database – which is not uncommon if you build a CMS or an e-commerce site – you can now do that.

Obviously, you need to take care of caching and stuff like that. It’s not as easy as it is here, but it opens up new capabilities and I think that’s a pretty interesting way of writing your web components.

Status Update (10:10)

Beans DSL and WebFlux router DSL have been supported for quite a long time now. Spring Boot 2.2 brings the Coroutines router DSL – we are going to see it just after that. The WebMvc router DSL, MockMvc DSL, and the Spring Security DSL are available in experimental mode. Next year all of these components should be available in production.

Obviously, that’s an estimation and it’s possible that not all of these features will get done in time.

Coroutines (10:44)

I guess you have heard quite a lot about coroutines. There are talks by Roman Elizarov and various others. I will not go into too much detail, but I would like to share my view of coroutines from the Spring point of view and the server-side point of view.

Coroutines are lightweight threads; you can create hundreds of thousands of them, whereas you cannot create that many threads. They provide you with building blocks that allow better scalability and allow new constructs, especially if you deal with asynchronous calls and applications with a lot of I/O, which is typically the case on the server side.

The foundations are in the Kotlin language. However, only a very small part is at the language level, and most of the implementation is on the library side in kotlinx.coroutines.

When you talk about coroutines, be careful about defining what you are talking about. Maybe you are talking about these lightweight threads. Maybe you are talking about the language feature. Maybe you are talking about the library which is pretty extensive and contains a lot of things.

I have seen that usually people talk about coroutines but sometimes refer to different things.

From the Spring point of view, Coroutines let you consume the Spring Reactive stack with a nice balance between imperative and declarative styles. Obviously we have a huge investment in reactive APIs in Spring, but there are two levels. There is the whole stack that we are building, so that’s the whole reactive infrastructure. We have built WebFlux, and we are currently working on R2DBC which is reactive SQL and the GA has been released just a few days ago. We are working on RSocket, which is a way to bring reactive outside of Java, outside of the JVM, and this can be used for mobile and frontend in a reactive way. We have a really huge investment in Reactive, and coroutines compete with Reactor at the API level, but that’s not an issue. Our Reactive core is already designed to be exposed to RxJava, for example, or other reactive/asynchronous APIs. Coroutines is just a new way to consume our reactive stack.

With coroutines, operations are sequential by default. Concurrency is possible but explicit.

With Coroutines 1.3 there are three main building blocks you need to know. There are in fact more pieces in Coroutines, but if you just need to learn three things, these are the ones I’m going to show.

The first one is the suspending function. This has been the basic building block from the beginning:

suspend fun hello() {
  delay(1000)
  println("Hello world!")
}

You write your function, you add the suspend keyword, and it allows you to call an asynchronous operation. Here I’m using the delay suspending function, but imagine you are calling an HTTP client requesting remote data, or maybe a database that provides a Coroutines API. The suspend keyword lets us define which functions are suspending and designed to request data asynchronously.

An important design point is that suspending functions “color” your API (as explained by Bob Nystrom in his great blog post What Color is Your Function?). That means you have regular functions and suspending functions. Other upcoming technologies, like Project Loom for example, take a different approach, but here we have a distinct kind of function. Suspending functions can call regular functions, but regular functions can’t call suspending functions directly; you need to use dedicated capabilities to do that. That’s not necessarily a bad thing in terms of design, but it’s something you need to know. In that respect it’s pretty similar to reactive and asynchronous APIs, where you use wrappers like Flux, Mono or CompletableFuture. You have two different worlds – regular functions and suspending functions.
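This coloring can be demonstrated with nothing but the Kotlin standard library. In the sketch below (the names are mine, not from the talk), a regular function cannot call the suspending greet() directly; it has to bridge explicitly, here via the low-level startCoroutine from kotlin.coroutines:

```kotlin
import kotlin.coroutines.Continuation
import kotlin.coroutines.EmptyCoroutineContext
import kotlin.coroutines.startCoroutine

// A suspending function: callable from other suspending functions,
// but not directly from regular ones.
suspend fun greet(name: String): String = "Hello $name!"

fun main() {
    // greet("KotlinConf")  // <- would not compile here: wrong "color"
    var result = ""
    // The explicit bridge from the regular world into the suspending world.
    suspend { greet("KotlinConf") }.startCoroutine(object : Continuation<String> {
        override val context = EmptyCoroutineContext
        override fun resumeWith(r: Result<String>) { result = r.getOrThrow() }
    })
    // greet() never actually suspends, so the coroutine completed synchronously.
    println(result) // prints Hello KotlinConf!
}
```

In application code you would use a higher-level builder from kotlinx.coroutines (or, with Spring, simply annotate a suspending handler), but the mechanism is the same: some explicit construct must cross the boundary between the two worlds.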

Structured concurrency

Another building block is structured concurrency. Structured concurrency allows you to define asynchronous boundaries:

suspend fun loadAndCombine(name1: String, name2: String) = coroutineScope {
  val deferred1: Deferred<Image> = async { loadImage(name1) }
  val deferred2: Deferred<Image> = async { loadImage(name2) }
  combineImages(deferred1.await(), deferred2.await())
}

When you are writing asynchronous code in an imperative fashion like this – in this example we are loading two remote images – we want to take care of cases where loading the image fails.

The behavior that we want is that both image loading processes are cancelled and we throw an exception.

We have to define these kinds of asynchronous boundaries like we define transactions with databases. Structured concurrency provides building blocks like CoroutineScope that allow us to define asynchronous boundaries to make coroutines aware of what to do when there is an error, and define this kind of behavior.

Flow<T> (15:58)

A big feature of Coroutines 1.3 – I pushed a lot to get this type, and I will explain why – is called “Flow”. Flow is a new type, available in experimental mode in Coroutines 1.2 and with a stable base API in Coroutines 1.3. Here I’m referring to [kotlinx.coroutines.flow.Flow](https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines.flow/-flow/), not the Flow type in Java 9+, which is a kind of container type for the Reactive Streams interfaces. It might be a little bit confusing, but “Flow” is a very attractive name, so both Oracle and JetBrains like it.

Coroutines 1.3 introduces a new reactive type, and it’s super important because Flow is the equivalent of Flux in the Reactor world, or Flowable in the RxJava world, but for coroutines.

When we did the exercise of seeing whether we could translate our reactive APIs to coroutines, suspending functions were great for translating Mono, but we were really missing something for streams. And I don’t like [Channel](https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines.channels/-channel/); I don’t think it’s a good building block for server-side applications, so we were really missing that. I’m thankful to the coroutines team, who did an amazing job providing Flow in time for Spring Framework 5.2.

Flow does not implement Reactive Streams directly, because it is based on coroutines building blocks instead of regular Java constructs. But it captures most of the semantics and behavior of Reactive Streams, even if not completely, and it provides some support for back pressure.

Basically, “back pressure” is a way for a consumer not to be overwhelmed by a very fast producer. It’s a way for the consumer to say: “OK, you can send me 10 chunks of data and I will process that”. And the sender will only send 10 chunks of data, not more. Which allows us to have scalable applications at the architecture level.
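The request-n contract described above can be sketched in plain Kotlin, without any reactive library. Producer and consumeAll are illustrative names of mine, not a real Reactive Streams API – the point is only that the consumer asks for at most n items and the producer never sends more:

```kotlin
// A deliberately simplified, pull-based sketch of back pressure:
// the consumer requests batches, so a fast producer cannot flood it.
class Producer(private val items: List<Int>) {
    private var cursor = 0

    // The consumer asks for up to n items; the producer sends no more than that.
    fun request(n: Int): List<Int> {
        val batch = items.drop(cursor).take(n)
        cursor += batch.size
        return batch
    }
}

fun consumeAll(producer: Producer, batchSize: Int): List<Int> {
    val received = mutableListOf<Int>()
    while (true) {
        val batch = producer.request(batchSize)
        if (batch.isEmpty()) break
        received += batch // processed at the consumer's own pace
    }
    return received
}

fun main() {
    val producer = Producer((1..25).toList())
    val all = consumeAll(producer, batchSize = 10) // batches of 10, 10, 5
    println(all.size) // prints 25
}
```

Real Reactive Streams (and Flow underneath) add asynchrony, cancellation, and error signals on top, but the scalability argument is this same bounded-demand idea.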

That’s the flow API:

interface Flow<T> {
   suspend fun collect(collector: FlowCollector<T>)
}

interface FlowCollector<T> {
   suspend fun emit(value: T)
}

The Flow API provides a “collect” suspending method with a FlowCollector that itself provides a suspending “emit” method and that’s it.

The building blocks – the core API – are really constrained.

Operators – I mean operators similar to what’s available in the RxJava/Reactor world, like filter, map, zip, etc. – are provided as extensions, and some of them are pretty easy to implement:

fun <T> Flow<T>.filter(predicate: suspend (T) -> Boolean): Flow<T> =
   transform { value ->
      if (predicate(value)) return@transform emit(value)
   }

This is the real implementation of the filter operator that is built-in in the coroutines API.

In the reactive world of RxJava and Reactor, if you want to implement a reactive operator you need to have some special skills. I don’t have them myself. It’s pretty hard to implement a reactive operator.

In coroutines – it depends. It’s not always easy. But if you don’t have any concurrency involved, I would say that it’s pretty easy. If you are implementing an operator with concurrency, it’s much easier to do that if you’re in the Kotlin team 🙂
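To see why such operators come cheap, here is a simplified, non-suspending analog of Flow (SimpleFlow, SimpleCollector, and the helpers are my names, not the real kotlinx.coroutines API): collect() pushes values into a collector, transform() builds a new flow that intercepts each value, and filter/map fall out in a couple of lines each.

```kotlin
// A sketch of Flow's shape without suspension or kotlinx.coroutines.
fun interface SimpleCollector<T> { fun emit(value: T) }
fun interface SimpleFlow<T> { fun collect(collector: SimpleCollector<T>) }

// transform: build a new flow that intercepts every upstream value.
fun <T, R> SimpleFlow<T>.transform(block: SimpleCollector<R>.(T) -> Unit): SimpleFlow<R> =
    SimpleFlow { downstream -> collect { value -> downstream.block(value) } }

// filter and map are tiny once transform exists – same idea as the real operators.
fun <T> SimpleFlow<T>.filter(predicate: (T) -> Boolean): SimpleFlow<T> =
    transform { value -> if (predicate(value)) emit(value) }

fun <T, R> SimpleFlow<T>.map(mapper: (T) -> R): SimpleFlow<R> =
    transform { value -> emit(mapper(value)) }

// A builder analogous to the flow { } builder: the lambda emits values.
fun <T> simpleFlow(block: SimpleCollector<T>.() -> Unit): SimpleFlow<T> =
    SimpleFlow { collector -> collector.block() }

fun main() {
    val flow = simpleFlow { for (i in 1..5) emit(i) }
    val result = mutableListOf<Int>()
    flow.filter { it % 2 == 1 }.map { it * 10 }.collect { result.add(it) }
    println(result) // prints [10, 30, 50]
}
```

What the real Flow adds is the suspend modifier on collect and emit – which is exactly where the hard concurrency-aware operators live, and why those are best left to the Kotlin team.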

When you want to create a Flow, you use this kind of flow builder. You can see that the code inside it is pretty imperative – you just use a for loop:

val flow = flow {
  for (i in 1..3) {
     delay(100)
     emit(i)
  }
}

You can call any kind of suspending function inside the flow builder. If you want to request remote data from your HTTP client or a database, you can do that, because the flow builder takes a suspending lambda, and inside that lambda you can call any suspending function. Here I’m just emitting a number every 100 ms, and that’s it.

When you consume a “flow” it’s more declarative, more functional:

flow.filter { it < 2 }.map(::asyncOperation).collect()
I don’t like it when people say that Reactive is declarative and coroutines are imperative. I think that’s wrong. I think that Coroutines 1.3 provides a nice balance between imperative and declarative. Inside operators like “map” you can use more imperative constructs. I like this balance where you can use imperative code when you retrieve a single asynchronous value, and a declarative-functional approach when you want to describe how you process streams of data. I think that’s the perfect balance.

What about Spring support for coroutines? (21:15)

Spring provides official Coroutines support for Spring WebFlux, Spring Data (Redis, MongoDB, Cassandra, R2DBC), Spring Vault, and RSocket.

On the Spring Data side, we support Redis, MongoDB, Cassandra, and R2DBC. R2DBC is regular SQL, so we provide support for SQL: H2, MySQL, and Microsoft SQL Server as well.

There are no new types introduced, because I’m lazy and I don’t want to duplicate the whole Spring API!

WebFlux suspending methods (21:55)

And we provide seamless support with the annotation programming model. That means with WebFlux (not Spring MVC, not for now – that will come later) you can add the suspend keyword on methods annotated with @RequestMapping, @GetMapping, etc., and that will enable you to call any suspending function or any kind of coroutines API inside this handler:
@GetMapping("/api/banner")
suspend fun suspendingEndpoint(): Banner {
  delay(10)
  return Banner("title", "Lorem ipsum")
}

Conceptually, this is like a function that returns a Mono<Banner> if you are using Reactor, or a CompletableFuture<Banner>. It’s just that you don’t have to use the wrapper; you can just use the suspend keyword.

It also works for suspending view rendering:

@GetMapping("/banner")
suspend fun render(model: Model): String {
  delay(10)
  model["banner"] = Banner("title", "Lorem ipsum")
  return "index"
}

Reactive types extensions for Coroutines APIs (22:35)

And for functional APIs, we provide Coroutines support via extensions.

We provide various reactive APIs. WebClient is our reactive web client, and it natively provides methods like bodyToFlux to return a Flux of something, and bodyToMono to return a Mono of something. What we have done, in order to avoid duplicating the whole API, is to provide extensions like bodyToFlow and awaitBody, which allow you to use a coroutines API without involving any reactive parts – in other words, without exposing the reactive API directly:

@GetMapping("/banners")
suspend fun flow(): Flow<Banner> = client.get()
     .uri("/messages")
     .accept(MediaType.TEXT_EVENT_STREAM)
     .retrieve()
     .bodyToFlow<String>()
     .map { Banner("title", it) }

That allows you to get a Flow or suspending functions, and deal with all of the reactive capabilities underneath in a “coroutines” way. That’s our strategy.

We provide this kind of support for WebFlux Router with these kinds of extensions:

coRouter {
  GET("/hello", ::hello)
}

suspend fun hello(request: ServerRequest) =
     ServerResponse.ok().bodyValueAndAwait("Hello world!")

RSocket (23:35)

We provide support for RSocket. RSocket is a way to be reactive between, for example, your server side and your mobile application:

private val requester: RSocketRequester = ...

@MessageMapping("locate.radars.within")
fun findRadars(request: MapRequest): Flow<AirportLocation> = requester
     .route("locate.radars.within")
     .data(request.viewBox)
     .retrieveFlow<AirportLocation>()
     .take(request.maxRadars)

I think RSocket has huge potential on Android, but that will be another talk, maybe next year.

You have this kind of reactive flow, you can use all of the Flow operators. You can think of it as all of the reactive capabilities in Spring being exposed in Coroutines, in addition to being exposed as a Reactor API.

Spring Data R2DBC with transactions (24:12)

We even support Coroutines transactions with databases:

class PersonRepository(private val client: DatabaseClient,
                       private val operator: TransactionalOperator) {

   suspend fun initDatabase() = operator.executeAndAwait {
      save(User("smaldini", "Stéphane", "Maldini"))
      save(User("sdeleuze", "Sébastien", "Deleuze"))
      save(User("bclozel", "Brian", "Clozel"))
   }

   suspend fun save(user: User) {
      client.insert().into<User>().table("users").using(user).await()
   }
}

We have worked a lot on the R2DBC support. R2DBC is reactive SQL. The database client is a kind of fluent API for R2DBC. It’s totally possible for other parts of the Kotlin ecosystem to use R2DBC, because R2DBC has no dependencies except Reactive Streams. That makes it possible for Exposed, or any kind of pure Kotlin API, to leverage the R2DBC API without involving any part of Spring.

Here I’m using Spring Data R2DBC, which is our way to expose R2DBC by default. However, R2DBC is really designed as a very low-level thing that is usable in another context, and I really hope that the Coroutines Kotlin ecosystem will leverage all the work we have done at the driver level. Because exposing R2DBC as Coroutines is just using a single extension. There is almost no performance loss and I think that’s a good opportunity for the Kotlin and Coroutines ecosystem.

But here I am using Spring capabilities. I have this transactional operator which is kind of a functional transaction. It allows me to define the transaction boundaries using a reactive stack. There is a transactional extension which is applied to Flow in order to use transactions on top of the Flow type.

We provide support for Spring Data reactive MongoDB and various other things:

class UserRepository(private val mongo: ReactiveFluentMongoOperations) {

  fun findAll(): Flow<User> =
      mongo.query<User>().flow()

  suspend fun findOne(id: String): User = mongo.query<User>()
      .matching(query(where("id").isEqualTo(id))).awaitOne()

  suspend fun insert(user: User): User =
      mongo.insert<User>().oneAndAwait(user)

  suspend fun update(user: User): User =
      mongo.update<User>().replaceWith(user)
          .asType<User>().findReplaceAndAwait()
}

Coroutines are now the default way to go Reactive in Kotlin (26:00)

Coroutines dependency added by default on start.spring.io.

WebFlux & RSocket Kotlin code samples provided with Coroutines API.

Coroutines are now the default way to go Reactive in Kotlin. It’s perfectly fine to use the Reactor API if you want, but I think that people are waiting for some kind of a guideline and opinions. And my thinking is that when you are using the Reactive stack in Kotlin, it’s very likely that you would like to use this Coroutines API.

When you select Kotlin and one of the Reactive dependencies on start.spring.io, the Coroutines dependency will be automatically added. You don’t have to add another dependency manually.

Also, in the reference documentation, all of the examples are written using Coroutines and not with Flux. Basically, this is now the default way to consume the reactive stack – if you agree, and I hope you do.

Status (26:55)

In terms of scope, we have plenty of things that are already supported: Spring WebFlux functional APIs, WebClients, the annotational programming model with @RequestMapping, RSocket @MessageMapping, Spring Data Reactive APIs, and functional transactions.

What is missing is Spring MVC @RequestMapping. That’s because Spring MVC has some support for Mono and Flux. It’s less advanced than WebFlux, but it’s usable.

I think that in the Spring framework 5.3, Spring MVC will support Coroutines as well. That’s pretty useful when you want to use WebClient in Spring MVC and obviously that is a missing part.

Spring Data repositories will be supported; a first implementation was done in less than one day, so hopefully that will be taken care of soon. @Transactional is not supported yet, but I hope we will be able to support it as well. For now, you have to use functional transactions.

Spring Boot (28:04)

What are the new features on the Spring Boot side?

@ConfigurationProperties

These are the kind of configuration properties that I was writing last year.

@ConfigurationProperties("blog")
class BlogProperties {

  lateinit var title: String
  val banner = Banner()

  class Banner {
     var title: String? = null
     lateinit var content: String
  }
}

ConfigurationProperties is a type-safe way to expose the properties that you write in .properties or .yml files. Since it was not designed to be used with immutable classes, we had to use this lateinit var, and I was crying every time I wrote that.

Now we have the @ConstructorBinding annotation that allows us to instantiate ConfigurationProperties via the constructor, with full immutability support and read-only properties:

@ConstructorBinding
@ConfigurationProperties("blog")
data class BlogProperties(val title: String, val banner: Banner) {
   data class Banner(val title: String?, val content: String)
}

We don’t need mutable properties and we don’t have this kind of crappy lateinit var; we can just use a class or a data class with read-only properties. Feel free to use it.
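The practical difference can be shown in plain Kotlin, with no Spring involved (the class names here are illustrative): the mutable style can blow up at runtime if a property was never bound, while a data class with read-only properties cannot even be constructed incompletely.

```kotlin
// The old, mutable style: a property read before binding throws at runtime.
class MutableProps {
    lateinit var title: String
}

// The constructor-bound style: every field must be supplied up front.
data class ImmutableProps(val title: String, val subtitle: String?)

fun main() {
    val safe = ImmutableProps("My blog", null) // all fields bound at construction
    println(safe.title) // prints My blog

    val unsafe = MutableProps()
    try {
        println(unsafe.title) // never bound
    } catch (e: UninitializedPropertyAccessException) {
        println("not bound") // prints not bound
    }
}
```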

start.spring.io helps to share Kotlin projects (29:01)

Another useful thing in the context of Kotlin is that start.spring.io now has two new buttons.

An “Explore” button that lets you peek at the information of the generated project directly in the UI. This avoids the overhead of generating the zip, extracting, etc. This is pretty useful.

The other one is “Share” which allows you to share the current configuration. If you go to start.spring.io, select Kotlin Gradle and you want to share it with your co-workers to increase Kotlin reach – feel free to use that. I keep forgetting about this URL which has been supported for two years now. Anyway, now you have a way to get this kind of customized link.

Kofu: the mother of all DSL (29:43)

Kofu is another topic and Kofu is experimental. Up until now I was speaking about production-ready stuff, but now we’re entering the dangerous land of experimental stuff.

Kofu is basically the application DSL. We have seen the Router DSL, there is a Beans DSL, and there is the MockMvc DSL. We try not to overdo it with DSLs, but it was tempting to do an application one, and I will explain why.

Explicit configuration for Spring Boot using Kotlin DSLs

Spring Boot’s main design is centered around conventions. Basically, with Spring in Java it would be too verbose to define everything explicitly, so Spring Boot provides conventions. I think that’s a pretty widely used and accepted model for configuring Spring Boot, and that’s perfectly fine. There is nothing wrong with using that in Kotlin.

However, since we have this great DSL support and since we have all these other DSLs, it kind of makes sense to try and see what it means for Spring Boot to have its configuration as a DSL. The interesting thing is that it’s explicit configuration. That means you won’t get Jackson support just because a transitive dependency brings Jackson on the classpath, which is how Spring Boot currently behaves – it’s based on conditions that trigger on properties and classes present on the classpath. If you are a control freak like me, it’s possible that you will like this kind of thing. It’s experimental, so don’t use it in production yet!

It leverages existing DSLs like Beans, Router, and Security. It also provides composable configuration and discoverability via auto-complete instead of convention.

To me that’s just a detail, because I think that with the upcoming Spring Boot version we should be close to Kofu’s performance even with annotations. It currently provides faster startup and less memory consumption:

Currently we have a decrease of 40% in startup time and less memory consumption when you configure your Spring Boot application with Kofu instead of Auto-configuration.

But again, I tend to think that this gap will be reduced by the work that we are currently doing so that is not the main point. The main point is getting explicit configuration.

Demo (32:05)

Here’s a quick demo. I’m just going to create a regular Spring Boot application. It’s important to understand that a Spring Boot application configured with Kofu is just a regular Spring Boot application. You still create it via start.spring.io.

I’m going to the spring-fu website. It’s on spring-projects-experimental. I’m going to just copy and paste that. I’m adding the milestone repository. I’m adding the Kofu dependency and it’s just a regular dependency that provides application DSL. We need to import the changes, and if everything goes fine, I should be able to use my application DSL.

Here I’m going to create a Spring MVC application. So, I’ll select the “SERVLET” type and then I’ll have access to a DSL where I can configure things. For example, I am going to add Spring MVC support, I am going to use my router, and I’m going to create a route which will just return an “OK” status and a body with “Hello KotlinConf”.

Everything is explicit so we then need to run this application. So it’s really a way to use existing things. This router is exactly the one that is provided with the Spring framework, but here I’m reusing it in the context of an application configured in a functional style.

Then if I want to add some kind of Jackson support – for example, let’s say I’m creating a class “Sample” with a “val title” property – I want to return this class and use it with Jackson. In Spring Boot it’s automatically configured; here, since the support is explicit, I will have to declare it, but that’s on purpose.

When I talk in terms of discoverability: here I can see what can be configured without having to read the documentation. Here I’m using Jackson. If I want to configure pretty print – again, I’m lazy and I don’t have a very good memory – I just need to use my autocomplete to enable the pretty-print option, and I can also discover all the great options available in Spring Boot to configure Jackson. It’s a way to expose what you can do in Spring Boot as a discoverable DSL.
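Putting the demo together, the application built here looks roughly like this with the Kofu application DSL. This is a sketch of the experimental spring-fu API; the exact DSL surface may differ between milestones:

```kotlin
// Illustrative Kofu configuration – everything is declared explicitly.
val app = application(WebApplicationType.SERVLET) {
    webMvc {
        router {
            GET("/") { ok().body("Hello KotlinConf") }
        }
        converters {
            jackson {
                indentOutput = true // the "pretty print" option, found via auto-complete
            }
        }
    }
}

fun main(args: Array<String>) {
    app.run(args)
}
```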

A very important point is that you can create modular configurations and that is one of the main purposes of this kind of construct – it is not monolithic, otherwise it would not be very interesting.

We are going to create a web configuration, we are going to define a logging configuration, and then you select explicitly what you enable because, again, everything is explicit. And if, let’s say, you only want to select the web config, but not the data config for your tests, and mock other things, you will just declare a new customized application just for your tests that will trigger and run just what you need, which means faster integration testing and things like that.

I don’t have the time to describe everything today, but I guess you can understand the potential in the philosophy. You can find more on the spring-fu project, and I will continue to evolve it. Feel free to give feedback! Not so much about the scope of the support, but more on the root principles.

My goal is to make Kofu the Spring Boot Kotlin DSL integrated in a production-ready way. We are not there yet. I think the current step is to have more Kotlin market share in regular Spring Boot applications. Which is kind of a chicken and egg issue, but I need more people using Spring Boot in Kotlin in order to trigger the major task of exposing every Spring Boot configuration feature as a DSL.

GraalVM Native (38:10)

The last topic is GraalVM native support. There was a nice talk by Oleg in this room just before me. GraalVM Native in one slide:

That’s a way to compile your Java or JVM application into a static executable. It provides different characteristics: instant startup time, very low memory consumption for small applications, and smaller container images, because it allows you to create Docker images from scratch thanks to statically linked executables. But you still develop with OpenJDK, so the development and runtime platforms are not hugely consistent, and in terms of maturity and throughput we are not at the same level as OpenJDK. What I want to say is that it’s an interesting platform with very different characteristics – it could make it possible to lower cloud hosting costs, but it also comes with a lot of drawbacks, so be careful and evaluate all of the criteria.

Spring Framework 5.3 should be able to configure reflection, proxies, resources, etc. dynamically with GraalVM native: https://github.com/spring-projects-experimental/spring-graal-native.

Kotlin with Spring MVC or Spring WebFlux works on GraalVM native. Coroutines support is broken because of graal#366.

We are going to work on some out-of-the-box setup capabilities for Spring Framework 5.3, and Kotlin is supported. The work is currently being done in spring-graal-native, so feel free to take a look.

It allows you to start your application pretty fast; basically you can start your application almost instantly.

It allows you to create scale-to-zero applications. Imagine a back-office application that is used 10% of the time, deployed as 20 microservices with 40 instances running all the time because you want high availability. With this kind of mechanism, each microservice instance will consume less, and you can basically shut a service down when it is not used.

This comes at the cost of a lot of other constraints, and not all of the ecosystem is supported and it’s still super-super early, so be careful. But we are interested in that and we’re working on such support.

It works with Kotlin except for Coroutines. There is an issue with Coroutines that should be fixed in the upcoming months: we discussed it with the GraalVM team last week, and it should be possible to fix it in an upcoming release.

More resources (41:05)

https://spring.io/guides/tutorials/spring-boot-kotlin/

I have covered a lot of topics, but if you want to start more slowly, there is the official Kotlin Spring Boot tutorial, which is continuously updated. This is regular Spring Boot, Spring MVC, annotations, JPA – nothing fancy. So you can start with that; I think that’s a good resource.

There are also my talks at Devoxx. The first one is a two-to-three-hour deep dive into Coroutines with Spring, RSocket, and R2DBC, which is much more detailed than what I was able to show today.

There is a dedicated GitHub repository with the front end in Kotlin.js which is another interesting topic. I was not able to go into details, but this topic is covered in more detail in my talk.

The other talk is about running Spring Boot applications as native images. If you want more detailed information about that, or to learn more about GraalVM native from a neutral standpoint – I try to show the advantages and the drawbacks of the platform in a balanced way – you should take a look at that talk.

Thank you a lot!

Kotlin/Native Memory Management Roadmap

TL;DR: The current automatic memory management implementation in Kotlin/Native has limitations when it comes to concurrency and we are working on a replacement. Existing code will continue to work and will be supported. Read on for the full story.

A bit of history

Kotlin/Native is designed to be the Kotlin solution for smooth integration with native platform-specific environments. In essence, its vision is to become to C-compatible languages what Kotlin/JVM is to JVM languages – a pragmatic, concise, safe, and tooling-friendly language for writing code. A very important part of this story is the Objective-C ecosystem of frameworks on various Apple platforms. Kotlin/Native support for the Objective-C ecosystem makes it possible to efficiently share Kotlin code between mobile applications with the ability to naturally and precisely use all the vendor APIs in a way that is hard or impossible to achieve with JVM-based solutions.

When the Kotlin/Native project started back in 2016, it was necessary to devise a memory management scheme. With Objective-C interoperability on the table, a lot of thought and attention was paid to the way automated memory management works there – using reference counting. We even experimented with an approach that completely mimicked that model, which would have offered the benefit of providing the most seamless fit. However, in the Objective-C ecosystem, object graphs with cycles are not managed automatically by the runtime. Programmers have to identify cyclic references and mark them in a special way in the source code. Having toyed with this approach, we quickly came to the conclusion that it comes into stark conflict with the core Kotlin philosophy of making development more enjoyable. Kotlin does require developers to be more explicit where this precision is needed for safety, but not where this extra code is just boilerplate that could have been managed by the language. Still, the reference-counting memory manager was easy to write to get the Kotlin/Native project started, and a cyclic garbage collector based on a trial-deletion algorithm was added to provide the kind of development experience that Kotlin programmers expect.

As the Kotlin/Native project matured and became more widely adopted, the limitations of such a reference counting-based automated memory management scheme started to become more apparent. For one thing, it’s hard to get high throughput for memory allocation-intensive applications. But while performance is important, it is not the sole factor in Kotlin’s design.

The limitations become worse, though, when you throw multithreading and concurrency into the mix. In all the ecosystems where fully automated reference-counting memory management is successfully used (most notably in Python), concurrency is severely restricted via something like the “Global interpreter lock” mechanism. This approach is not an option for Kotlin. It is imperative for mobile applications to be able to offload CPU-intensive operations into background threads that run in parallel with the main thread.

The current approach

To tackle this, a unique set of restrictions was developed for Kotlin/Native, both to make it efficient at running single-threaded code and to make it possible to share data between threads. The requirement was added that an object graph must first be frozen to prevent its modification. Only then could it be shared with other threads. Alternatively, it could be fully transferred to another thread as a detached object graph if no references to it remained in the original thread. As a bonus, by only sharing immutable data, the dreaded problem of “shared mutable state” was avoided for the most part. This scheme worked well, for a time.
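The freezing and sharing described above can be sketched with the `kotlin.native.concurrent` APIs. This is a minimal illustration that compiles only on Kotlin/Native targets (not on the JVM), and the `Config` type is invented for the example:

```kotlin
import kotlin.native.concurrent.Worker
import kotlin.native.concurrent.TransferMode
import kotlin.native.concurrent.freeze

data class Config(val endpoint: String)

fun main() {
    // Freeze the object graph, then share it freely between threads.
    // After freeze(), any attempt to mutate the graph throws
    // InvalidMutabilityException.
    val shared = Config("https://example.com").freeze()

    val worker = Worker.start()
    worker.execute(TransferMode.SAFE, { shared }) { config ->
        // Safe: frozen objects may be read from any thread.
        println(config.endpoint)
    }.result

    // The alternative path – transferring a still-mutable graph wholesale –
    // uses DetachedObjectGraph and requires that no references to the graph
    // remain in the original thread.
}
```

It is precisely this obligation to decide up front between freezing and detaching that the post goes on to identify as a steep learning curve.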

While conceptually appealing, the current memory management approach in Kotlin/Native has a number of deficiencies that hinder wider Kotlin/Native adoption. Mobile developers are used to being able to freely share their objects between threads, and they have already developed a number of approaches and architectural patterns to avoid data races while doing so. It is possible to write efficient applications that do not block the main thread using Kotlin/Native, as many early adopters have shown, but the ability to do so comes with a steep learning curve.

There’s also another aspect of Kotlin – the goal of being able to share Kotlin code between platforms. However, some concurrent code, even if it is safe and race-free to begin with, is virtually impossible to share between Kotlin/JVM and Kotlin/Native. In particular, various concurrent data structures and synchronization primitives, which could be both generic and domain-specific, turned out to be notoriously hard to share between the two.

We encountered particular challenges when we attempted to implement multithreaded kotlinx.coroutines for Kotlin/Native. Synchronization primitives must internally share a mutable state, which is supported in Kotlin/Native via special atomic references. Yet the existing memory management algorithm does not track cycles through such references. Even after a considerable amount of work, it still suffers from memory leaks in some concurrent execution scenarios, and we don’t have a clear solution to address them.

New memory manager for Kotlin/Native

To solve these problems, we’ve started working on an alternative memory manager for Kotlin/Native that would allow us to lift restrictions on object sharing in Kotlin/Native, automatically track and reclaim all unused Kotlin memory, improve performance, and provide fully leak-free concurrent programming primitives that are safe and don’t require any special management or annotations from the developers. The new memory manager will be used for the whole compiled binary. We plan to introduce it in a way that is mostly compatible with existing code, so code that is currently working will continue to work. Namely, we plan to continue to support object freezing as a safety mechanism for race-free data sharing, and we will be looking at ways to improve Kotlin’s approach to working with immutable data in the whole Kotlin language, not just in Kotlin/Native. Existing annotations that are related to memory management will have an appropriate behavior with the new memory manager to ensure that old code still works. Meanwhile, we’ll continue to support the existing memory manager, and we’ll release multithreaded libraries for Kotlin/Native so you can develop your applications on top of them.

We will share more details in the future as the project develops and as we finalize various design decisions. Stay tuned, and have fun with Kotlin!

Zolla Project – Part 2

In our latest blog post, we talked about Zolla, a fintech product we built for a US company based in California. In that blog post, we described the main challenges we faced and how we solved them. However, today we are going to review the most visible part of the project, the iOS app we built for Zolla.

Our client needed to implement an iOS app letting the user log into the application using an invitation-based system that used either invitation codes or QR codes. Once logged in, users can link their bank account to make deposits and withdrawals from the app. Additionally, they can review their activity in the platform, including all their money movements performed from the app.

Zolla Project - Part 2

When we started to build this app we decided to base our project scaffolding on the following libraries:

  • SwiftLint – A powerful and fully configurable linter.
  • Bitrise – Our favorite continuous integration system.
  • Fabric – Fatal and nonfatal error reporting made easy.
  • SwiftGen – A tool to generate Swift code for accessing the project resources.
  • BothamUI – A Model View Presenter framework for Swift we built.
  • BrightFutures – A framework for using Promises in Swift.
  • Sourcery – Meta-programming for Swift, to stop writing boilerplate code.
  • SwiftyBeaver – A powerful logger we linked to Crashlytics for nonfatal error reporting.
  • SwiftDate – All you need to deal with Dates in Swift.
  • Lottie – An iOS library to natively render After Effects vector animations.
  • SDWebImage – Asynchronous image downloader with cache support as a UIImageView category.
  • XCTest – The popular testing framework Apple created.
  • KIF – An iOS Functional Testing Framework.
  • iOSSnapshotTestCase – Snapshot view unit tests for iOS.
  • SwiftCheck – Property-based testing for Swift.
  • OHHTTPStubs – A library for stubbing HTTP requests.
  • Swagger Code Gen – A tool to generate clients based on a Swagger spec.

These are the most relevant tools we used to build this product. But including a bunch of GitHub repositories is not the only thing we did. How we used these tools was the key to success in this project.

Thanks to Sourcery and the meta-programming templates already implemented for us, we were able to write tests with mocks generated automatically at build time.


class CredentialStorageMock: CredentialStorage {

    // MARK: - save

    var saveSignInInfoHasSignedInCallsCount = 0
    var saveSignInInfoHasSignedInCalled: Bool {
        return saveSignInInfoHasSignedInCallsCount > 0
    }
    var saveSignInInfoHasSignedInReceivedArguments: (signInInfo: SignInInfo, hasSignedIn: Bool)?

    func save(signInInfo: SignInInfo, hasSignedIn: Bool) {
        saveSignInInfoHasSignedInCallsCount += 1
        saveSignInInfoHasSignedInReceivedArguments = (signInInfo: signInInfo, hasSignedIn: hasSignedIn)
    }

    // MARK: - read

    var readCallsCount = 0
    var readCalled: Bool {
        return readCallsCount > 0
    }
    var readReturnValue: (signInInfo: SignInInfo, hasSignedIn: Bool)?

    func read() -> (signInInfo: SignInInfo, hasSignedIn: Bool)? {
        readCallsCount += 1
        return readReturnValue
    }

    // MARK: - clear

    var clearCallsCount = 0
    var clearCalled: Bool {
        return clearCallsCount > 0
    }

    func clear() {
        clearCallsCount += 1
    }

}
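A mock like the one above is what Sourcery’s stock AutoMockable template emits from a plain protocol declaration. The input side might look like this (the empty `AutoMockable` marker protocol is the template’s convention, and the declarations simply mirror the generated mock above):

```swift
// Conforming to the empty AutoMockable marker protocol tells Sourcery's
// AutoMockable.stencil template to generate CredentialStorageMock for us.
protocol AutoMockable {}

protocol CredentialStorage: AutoMockable {
    func save(signInInfo: SignInInfo, hasSignedIn: Bool)
    func read() -> (signInInfo: SignInInfo, hasSignedIn: Bool)?
    func clear()
}
```

The protocol is all we maintain by hand; the counters, flags, and captured arguments are regenerated on every build.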

Thanks to Lottie, we could add impressive animations implemented in After Effects to our application. The walkthrough we built with these animations looks great.

Zolla Project - Part 2

Using different testing strategies, such as property-based testing and screenshot testing, let us write automated tests like these:


    // Screenshot test example
    func test_verify_email_try_to_sign_in_once_it_is_opened() {
        givenThereIsAnErrorVerifyingTheAccount()

        serviceLocator.rootNavigator.goToVerifyEmail()

        verify(serviceLocator.currentWindow?.rootViewController?.view)
    }
   
    // Property-based tests
    property("NextAutoDepositDateCalculator should return the same day if it's today") <- forAll(Date.arbitrary) { today in
            let calculator = NextAutoDepositDateCalculator(dateProvider: { today })

            let nextAutoDepositDate = calculator.execute(repeating: Repeating.Weekly(at: WeekDay(rawValue: today.weekday)!))

            return nextAutoDepositDate == today
        }

Swagger Code Gen, combined with an already documented API, allowed us to generate the API client code automatically from the command line. This sped up our development considerably and reduced the number of bugs found in our API layer.
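The generation step itself boils down to a single command. An illustrative invocation (the spec path, language target, and output directory here are assumptions, not our actual setup):

```shell
swagger-codegen generate \
  -i api-spec.yaml \
  -l swift4 \
  -o Generated/APIClient
```

Running this in CI whenever the spec changes keeps the client code in lockstep with the documented API.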

Zolla Project - Part 2

The combination of all these tools and our excellent engineers resulted in an awesome project you can already enjoy on the App Store. If you need to develop a similar product and need help, we’ll always be happy to review your project needs and find the best way to create the platform your users are going to love! Contact us at hello@karumi.com


Zolla Project - Part 2

The usage of all these tools and our excellent engineers ended up with an awesome project you can already enjoy in the Apple Store. If you need to develop any similar product and you need help, we’ll always be happy to review your project needs and try to find the best way to create the platform your users are going to love! Contact us at hello@karumi.com

Continue Reading Zolla Project – Part 2

Zolla Project – Part 2

In our latest blog post, we talked about Zolla, a fintech product we built for a US company based in California. In that blog post, we described the main challenges we faced and how we solved them. However, today we are going to review the most visible part of the project, the iOS app we built for Zolla.

Our client needed an iOS app that lets users log in through an invitation-based system using either invitation codes or QR codes. Once logged in, users can link their bank account to make deposits and withdrawals from the app. They can also review their activity on the platform, including every money movement performed from the app.


When we started to build this app, we decided to base our project scaffolding on the following libraries:

  • SwiftLint – A powerful and fully configurable linter.
  • Bitrise – Our favorite continuous integration system.
  • Fabric – Fatal and nonfatal error reporting made easy.
  • SwiftGen – A tool to generate Swift code for accessing the project resources.
  • BothamUI – A Model View Presenter framework for Swift we built.
  • BrightFutures – A framework for using Promises in Swift.
  • Sourcery – Meta-programming for Swift, to stop writing boilerplate code.
  • SwiftyBeaver – A powerful logger we linked to Crashlytics for nonfatal error reporting.
  • SwiftDate – All you need to deal with Dates in Swift.
  • Lottie – An iOS library to natively render After Effects vector animations.
  • SDWebImage – Asynchronous image downloader with cache support as a UIImageView category.
  • XCTest – The popular testing framework Apple created.
  • KIF – An iOS Functional Testing Framework.
  • iOSSnapshotTestCase – Snapshot view unit tests for iOS.
  • SwiftCheck – Property-based testing for Swift.
  • OHHTTPStubs – A library for stubbing HTTP requests.
  • Swagger Code Gen – A tool to generate clients based on a Swagger spec.

These are the most relevant tools we used to build this product. But pulling in a bunch of GitHub repositories is not what made the difference; how we used these tools was the key to this project's success.

Thanks to Sourcery and the meta-programming templates it already provides, we were able to write tests with mocks generated automatically at build time.


class CredentialStorageMock: CredentialStorage {

    // MARK: - save

    var saveSignInInfoHasSignedInCallsCount = 0
    var saveSignInInfoHasSignedInCalled: Bool {
        return saveSignInInfoHasSignedInCallsCount > 0
    }
    var saveSignInInfoHasSignedInReceivedArguments: (signInInfo: SignInInfo, hasSignedIn: Bool)?

    func save(signInInfo: SignInInfo, hasSignedIn: Bool) {
        saveSignInInfoHasSignedInCallsCount += 1
        saveSignInInfoHasSignedInReceivedArguments = (signInInfo: signInInfo, hasSignedIn: hasSignedIn)
    }

    // MARK: - read

    var readCallsCount = 0
    var readCalled: Bool {
        return readCallsCount > 0
    }
    var readReturnValue: (signInInfo: SignInInfo, hasSignedIn: Bool)?

    func read() -> (signInInfo: SignInInfo, hasSignedIn: Bool)? {
        readCallsCount += 1
        return readReturnValue
    }

    // MARK: - clear

    var clearCallsCount = 0
    var clearCalled: Bool {
        return clearCallsCount > 0
    }

    func clear() {
        clearCallsCount += 1
    }

}
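For context, Sourcery generates that mock from the protocol itself. Here is a minimal sketch of what the annotated protocol might look like, assuming Sourcery's stock AutoMockable template; the `SignInInfo` model and the `InMemoryCredentialStorage` conformance are hypothetical, added only to make the sketch self-contained:

```swift
// Hypothetical model, assumed for illustration.
struct SignInInfo {
    let email: String
    let token: String
}

// Annotating the protocol lets Sourcery's AutoMockable template
// generate CredentialStorageMock at build time.
// sourcery: AutoMockable
protocol CredentialStorage {
    func save(signInInfo: SignInInfo, hasSignedIn: Bool)
    func read() -> (signInInfo: SignInInfo, hasSignedIn: Bool)?
    func clear()
}

// A trivial in-memory conformance, just to show the contract the
// generated mock stands in for during tests.
final class InMemoryCredentialStorage: CredentialStorage {
    private var stored: (signInInfo: SignInInfo, hasSignedIn: Bool)?

    func save(signInInfo: SignInInfo, hasSignedIn: Bool) {
        stored = (signInInfo, hasSignedIn)
    }

    func read() -> (signInInfo: SignInInfo, hasSignedIn: Bool)? {
        return stored
    }

    func clear() {
        stored = nil
    }
}
```

Because the mock is regenerated on every build, it can never drift out of sync with the protocol it doubles for.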

Thanks to Lottie, we were able to add rich animations designed in After Effects to our application. The walkthrough we built with these animations is impressive.


The usage of different testing strategies, such as property-based testing and screenshot testing, allowed us to write automated tests like these:


    // Screenshot test example
    func test_verify_email_try_to_sign_in_once_it_is_opened() {
        givenThereIsAnErrorVerifyingTheAccount()

        serviceLocator.rootNavigator.goToVerifyEmail()

        verify(serviceLocator.currentWindow?.rootViewController?.view)
    }

    // Property-based test example
    property("NextAutoDepositDateCalculator should return the same day if it's today") <- forAll(Date.arbitrary) { today in
        let calculator = NextAutoDepositDateCalculator(dateProvider: { today })

        let nextAutoDepositDate = calculator.execute(repeating: Repeating.Weekly(at: WeekDay(rawValue: today.weekday)!))

        return nextAutoDepositDate == today
    }
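To make the property test above easier to follow, here is a hedged sketch of the kind of calculator it exercises. This is not the project's actual implementation: the real code uses a `WeekDay` type, while this sketch assumes a plain `Calendar` weekday index (1 = Sunday … 7 = Saturday):

```swift
import Foundation

// Hypothetical sketch of the calculator the property test exercises.
enum Repeating {
    case Weekly(at: Int) // Calendar weekday index, 1...7
}

struct NextAutoDepositDateCalculator {
    let dateProvider: () -> Date

    func execute(repeating: Repeating) -> Date {
        switch repeating {
        case .Weekly(let weekday):
            precondition((1...7).contains(weekday), "weekday must be in 1...7")
            let calendar = Calendar.current
            var candidate = dateProvider()
            // Walk forward one day at a time until the weekday matches;
            // if today already matches, today is returned unchanged,
            // which is exactly the property asserted by the test above.
            while calendar.component(.weekday, from: candidate) != weekday {
                candidate = calendar.date(byAdding: .day, value: 1, to: candidate)!
            }
            return candidate
        }
    }
}
```

Injecting `dateProvider` instead of calling `Date()` directly is what makes the calculator deterministic under SwiftCheck: the generated `today` can be pinned for each run of the property.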

Swagger Code Gen, combined with an already documented API, allowed us to generate the API client code automatically from the command line. This sped up development considerably and reduced the number of bugs found in our API layer.


The combination of all these tools and our excellent engineers resulted in an awesome project you can already enjoy in the App Store. If you need to develop a similar product, we'll always be happy to review your project needs and find the best way to create the platform your users are going to love! Contact us at hello@karumi.com
