Saturday, June 24, 2017

Kotlin: Getting Started - Part 1

I've been using Kotlin for a month or so now, and here are some of the smaller things that I found very useful.

Some Basic Patterns

Here are some basic patterns that you probably (hopefully) use in Java, and how to do them with Kotlin.


In Java you may have classes with many overloaded constructors, or with constructors that have lots of nullable parameters. I think we can all agree that this is tedious for both the producer of the class and the consumer / caller. You might decide to make a builder, which allows you to have a single constructor and immutable fields in your target class. While this can make things a bit easier for the consumer / caller, it's still a lot of boilerplate code.

Kotlin provides two features that can help with this in some cases:
  1. Optional parameters with default values
  2. Named parameters
Of course, you can still make a builder if you want.  Also, you can define secondary constructors.
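As a quick sketch of how the two features work together (the Customer class and its fields here are hypothetical, not from the post):

```kotlin
// A single constructor with defaulted, optional parameters replaces
// a pile of overloads or a builder (hypothetical example class).
class Customer(
    val name: String,
    val email: String? = null,   // optional, defaults to null
    val active: Boolean = true   // optional, defaults to true
)

fun main() {
    // Pass only the required parameter...
    val a = Customer("Alice")
    // ...or skip parameters in the middle by naming the ones you pass.
    val b = Customer("Bob", active = false)
    println(a.active)   // true
    println(b.email)    // null
}
```

Named parameters also make call sites self-documenting, which is most of what a builder buys you anyway.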

Caveat for Java integration: These features are not available for Java code calling Kotlin, so don't expect this to magically improve Java.


It's really easy to create an 'initialize once' field in Kotlin: just use by lazy { ... }

You can chain these together, because 'by lazy' properties act just like normal properties.
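A minimal sketch of both points (the Config class here is hypothetical):

```kotlin
// 'by lazy' runs the initializer on first access, then caches the value.
class Config {
    val settings: Map<String, String> by lazy {
        println("loading settings...")       // runs only once
        mapOf("host" to "localhost")
    }

    // Lazy properties chain naturally: this one reads another lazy property.
    val host: String by lazy { settings["host"] ?: "unknown" }
}

fun main() {
    val config = Config()
    println(config.host)   // triggers both lazy initializers
    println(config.host)   // cached; "loading settings..." is not printed again
}
```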

Integrating with Existing Java Code

If you're considering Kotlin, but you have a bunch of existing Java code, you might be concerned about introducing Kotlin into your code base.   Fortunately, Kotlin support is really easy to add to your build, and it's very easy to interoperate between Kotlin and Java.

Adding Kotlin Support - Gradle

To add Kotlin compilation support to an existing Gradle build:

  1. Put the Kotlin plugins in the buildscript class path:

    buildscript {
        ext {
            kotlinVersion = '1.1.2-4'
        }
        repositories {
            maven { url "" }
        }
        dependencies {    // Gradle Plugin classpath.
            classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:${kotlinVersion}"
        }
    }
  2. Enable the Kotlin plugins:
        apply plugin: 'kotlin'
        apply plugin: 'kotlin-spring'
  3. Set the target JVM (optional, but I prefer to do this):

        compileKotlin {
            kotlinOptions.jvmTarget = "1.8"
        }
        compileTestKotlin {
            kotlinOptions.jvmTarget = "1.8"
        }

  4. Add the Kotlin runtime library dependencies:

        dependencies {
            // Kotlin/JVM standard library
            compile "org.jetbrains.kotlin:kotlin-stdlib-jre8:${kotlinVersion}"
            // Kotlin SLF4J utility
            compile 'io.github.microutils:kotlin-logging:1.4.4'
        }

Calling Kotlin Functions from Java

Java will see top-level Kotlin functions as static methods in a class named like this: <package>.<KotlinFileName>Kt.   Basically, just the class name you might expect, plus 'Kt' on the end.
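For example, given a hypothetical file Utils.kt in a package com.example:

```kotlin
// File: Utils.kt (hypothetical), package com.example
package com.example

// A top-level Kotlin function...
fun greet(name: String): String = "Hello, $name"

// ...which Java sees as a static method on the generated 'UtilsKt' class:
//
//     String s = com.example.UtilsKt.greet("World");
```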

Thursday, February 16, 2017

Tips for Building Docker Images

A few tips for building Docker images, from my experience so far.

Stick with the conventions

As with most tools, it's best to start with the conventions and stick with them unless you have a very compelling reason to customize ("you are not special").   Some of the important conventions with Docker images:

  • Put your Dockerfile in the root directory of your project (git repo).
  • Base your image on another image! This allows you to inherit all the environment variables and such from the parent. Also, if it's in docker hub, you can refer to the documentation.
  • Add ENV, ENTRYPOINT and EXPOSE instructions in your Dockerfile.  This will tell image users how to configure your image.
  • Add comments to indicate what files / directories can be overridden with 'volumes' for configuration.
  • Use ARG to allow you to pass in a variable during build time.   This is really good for version numbers, etc.

Create The Image

To create the image, just do docker build from the root directory of the project:
docker build -t test-image --force-rm .

  • -t test-image : gives the image a name (tag) in the local docker environment.
  • --force-rm : removes intermediate containers

Parameterized Image Building with ARG

If you have an image where you need to download a version of some file, and you don't want to update the Dockerfile for every version, you can use ARG to define a variable that you can pass in to docker build like this:


FROM openjdk:8-jre-alpine

ARG ELASTICMQ_VERSION

ADD "${ELASTICMQ_VERSION}.jar" /elasticmq/server.jar
COPY custom.conf /elasticmq/custom.conf

CMD ["java", "-jar", "-Dconfig.file=/elasticmq/custom.conf", "/elasticmq/server.jar"]

  • The ARG defines ELASTICMQ_VERSION as an expected argument at build time.
You can then build this image, overriding the ELASTICMQ_VERSION, like this:
docker build -t my-elasticmq:${VER} --force-rm --build-arg ELASTICMQ_VERSION=${VER} .
  • -t my-elasticmq:${VER} : gives the image a name (tag) in the local docker environment.
  • --build-arg ELASTICMQ_VERSION=${VER} : passes the version through to the ARG in the Dockerfile.

Explore The Image

So, if you want to shell around and look at what is in the image, you can do that easily with:

docker run -it --rm --entrypoint /bin/bash test-image
  • -it : runs an interactive terminal session
  • --rm : removes the container on exit (this is really useful! Saves on having to clean up containers all the time.)
  • --entrypoint /bin/bash : the shell you want to use. We want to override the entry point so the container won't fully start whatever it usually does.
  • test-image : The image we want to start, if you gave it a name.

Tuesday, February 7, 2017

Install Groovy in an Alpine-based Docker Image

If you're making a custom image based on an Alpine Linux image, you may have a little trouble installing things that require bash, like Groovy.   I tried using SDKMAN, but unfortunately I encountered a lot of compatibility problems with unzip and other tools.   In my case I'm creating an image based on Tomcat, and I want Groovy for doing some configuration work.

First, we install the Alpine packages we need:
  1. bash
  2. curl
  3. zip
  4. libstdc++ (Gradle needed this, but I don't think Groovy does :shrug:)

RUN apk add --update bash libstdc++ curl zip && \
    rm -rf /var/cache/apk/*

Now we need a workaround for the fact that Groovy's shell scripts start with #!/bin/sh :

# Workaround for scripts that assume bash, and other 'busybox' related issues.
RUN rm /bin/sh && ln -s /bin/bash /bin/sh

Now we can install Groovy. This could probably be done a little more optimally, but it works:
# Install groovy
# Use curl -L to follow redirects
RUN curl -L -o /tmp/ && \
    cd /usr/local && \
    unzip /tmp/ && \
    rm /tmp/ && \
    ln -s /usr/local/groovy-2.4.8 groovy && \
    /usr/local/groovy/bin/groovy -v && \
    cd /usr/local/bin && \
    ln -s /usr/local/groovy/bin/groovy groovy

As always, any suggestions about how to make it better, let me know.

Wednesday, January 25, 2017

Git-fu: How to merge without actually merging

Sometimes in your life with git, you'll encounter a situation where you try to merge, for example, a hotfix branch back into develop, and the merge ends up:
  1. Having a huge number of conflicts, and/or...
  2. Backing out changes in the target branch that should remain.
The reason for this usually has something to do with rebasing or cherry-picking in a way that Git can't follow, but that's not really important if you're in this situation and you need to finish up some merges quickly.

A simple solution is to have git think the merge has happened, but not actually merge the files.   This is actually very simple:

First, merge without auto-committing or fast-forwarding:

$ git merge hotfix/1.2.3 --no-commit --no-ff

This will do all the merging, but it will not create the merge commit.   You can then discard all the changes, or only some of them, and commit: 

$ git commit

Subsequent merges to the target branch will not try to re-apply any of the changes, as it thinks everything has been merged.

Friday, April 22, 2016

Spring Gotchas - Default value expressions not working for @Value

The @Value annotation is very useful in Spring, and the default value syntax also comes in handy.  However, when working on a new project and setting up your initial configuration, or when setting up a test fixture bean configuration, you may encounter situations where the default value syntax simply doesn't work.   For example:

    @Value("${some.setting:8}")
    private int mySetting;

So here, we wanted a default value of 8 if the some.setting property is not found. Simple enough, but still... you end up getting this kind of error:

org.springframework.beans.factory.BeanCreationException: Error creating bean with name '.... blah blah blah ...' 
Caused by: org.springframework.beans.factory.BeanCreationException: Could not autowire field: private int; nested exception is org.springframework.beans.TypeMismatchException: Failed to convert value of type [java.lang.String] to required type [int]; nested exception is java.lang.NumberFormatException: For input string: "${some.setting:8}"
Caused by: org.springframework.beans.TypeMismatchException: Failed to convert value of type [java.lang.String] to required type [int]; nested exception is java.lang.NumberFormatException: For input string: "${some.setting:8}"
Caused by: java.lang.NumberFormatException: For input string: "${some.setting:8}"
This means that Spring does not know how to resolve the placeholder and its default value. To enable placeholder resolution in @Value, just add a PropertySourcesPlaceholderConfigurer bean to the configuration. In Java annotations:
@Configuration
public class MyConfig {

    @Bean
    public static PropertySourcesPlaceholderConfigurer getPropertySourcesPlaceholderConfigurer() {
        return new PropertySourcesPlaceholderConfigurer();
    }
}

In XML, this is usually not a problem because you've got:
<context:property-placeholder location=""/>

Thanks to Mkyong for the solution!

Saturday, December 13, 2014

Server Side Development Environment - VirtualBox and Vagrant on OSX

If you're doing server-side development, you probably want to take a look at using the VirtualBox / Vagrant combination.   This allows your team to share standardized dev server configurations through your version control system; that is, you can define a standard server OS with provisioned software right in the Git project.   Developers can then easily create a 'production like' environment right on their workstations, or on a cloud provider like AWS or Rackspace.   This frees up your devops team from having to worry about supporting the server-side software packages on whatever OS the developers like to use.   Quirks of MySQL, Java, Rails, or Python on Windows or OSX?  Forget it!   Just install and provision the same software versions you are using in production on a virtual machine.

Basically, your 'developer setup' page (and you DO have one of these, don't you?) goes from some long list of steps (with different sections for different OS's) to:
  1. Install VirtualBox
  2. Install Vagrant
  3. Clone the project repo
  4. 'vagrant up' from the command-line
After that, all that's left to figure out is how best to deploy.

Why VirtualBox?

It's free, supports most common platforms, and Vagrant has built in support for it.

To install, just download and run the installer.   You probably won't be using VirtualBox directly.   Vagrant will be creating and starting the VirtualBox hosts.    However, you may want to just launch the application once to make sure it's installed properly.

The second step is to install Vagrant.

Why Vagrant? 

Lots of reasons!
  • Share the machine configs with your team, by checking in a Vagrant file into version control.
  • By default, the Vagrant machines share a directory with the main host.   This is much more convenient than scp-ing files to and from the virtual machine.
  • Share the running machine on the internet - Vagrant can expose the virtual machines on the internet for other people to test and such.  This is done via HashiCorp's Atlas service.
  • Provisioning - Not only does Vagrant start up the hosts, it can configure them.  You can use:
    • Shell
    • Chef
    • Puppet
    • Docker (new and cool - but probably not quite ready for production use at this point)
  • Providers - You can use VirtualBox, AWS, or any number of supported providers.  :)
My main purpose for using Vagrant is to start learning about Chef.

To install, just download and run the installer.

Vagrant IDEA Plugin

IntelliJ IDEA has a Vagrant plugin.  At the moment, this seems to mainly just provide a convenient way to do 'vagrant up', but it could come in handy.

What's in the Vagrantfile?

Basically, this file sits at the root of your project and defines the server OS, and provisioning mechanism for installing the required software.   Here are the important parts (IMO):
  1. The VM 'box' definition. This is equivalent to the 'AMI' (Amazon Machine Image) in AWS.  The Hashicorp Atlas service provides a whole bunch of 'box' definitions for most common Linux distros.
  2. Port mappings - This allows you to map ports on the outer host to ports on the guest OS.   You can use this to forward web server ports and ports for debugging, so you can attach your favorite IDE to the server process in the guest OS.
  3.  Shared folders.   By default, the folder that has the Vagrantfile in it is shared under /vagrant.   This is a very convenient way to transfer files to and view files on the guest.
  4. Provisioning - This is how Vagrant will install and configure the required software on the machine.  Start with a simple shell provisioner.   Basically, it's just a shell script that Vagrant will run after bringing up the machine.

Sunday, November 30, 2014

Spring for Java EE Developers - Part 3

Related to Factory Objects - Prototype Scope

In the previous post, I mentioned a few ways to make a factory or provider object.

  1. A configuration bean - The bean class is annotated with @Configuration, and you can add various @Bean methods that get called to create the instances.
  2. Factory Bean / Factory Method - Implement Spring's FactoryBean interface, or point the bean definition at a factory method that creates the instances.
A related technique is Spring's prototype scope.   This tells Spring to make a new instance of the bean for every injection and every lookup.   In XML, it looks like this:

<bean id="makeOne" class="" scope="prototype"/>

Similarly, with annotations:

@Component
@Scope("prototype")
public class SomeBean {
}


Application Events

Spring also has an event framework, along with some standard events that the framework produces, allowing you to extend the framework more easily.    While this is not as annotation driven and fully decoupled as the CDI event framework, it functions in pretty much the same way.

To create your own event, simply extend ApplicationEvent.

public class MyEvent extends ApplicationEvent {
    private final String message;

    public MyEvent(Object source, String message) {
        super(source);
        this.message = message;
    }

    public String getMessage() {
        return message;
    }
}

To produce events, beans must implement ApplicationEventPublisherAware.    Usually this class will store the ApplicationEventPublisher and use it later on to publish events.

@Component
public class MyEventProducer implements ApplicationEventPublisherAware {
    private ApplicationEventPublisher applicationEventPublisher;

    @Override
    public void setApplicationEventPublisher(ApplicationEventPublisher applicationEventPublisher) {
        this.applicationEventPublisher = applicationEventPublisher;
    }

    public void someBusinessMethod() {
        applicationEventPublisher.publishEvent(new MyEvent(this, "Hey!  Something happened!"));
    }
}

NOTE: It is important to understand that all of the listeners will be called on the caller's thread unless you configure the application event system to be asynchronous.   I'll cover that in another blog post.   The benefit of having the listeners execute on the caller's thread is that the Spring transactional context will propagate to the listeners.

To observe events, have a component implement ApplicationListener<T>, where T is the event class.

@Component
public class MyListener implements ApplicationListener<MyEvent> {
    @Autowired
    private SomeBusinessLogic logic;

    @Override
    public void onApplicationEvent(MyEvent event) {
        // React to the event, e.g. by invoking the injected business logic.
    }
}

The Downside of ApplicationEvent

One noticeable downside of using Spring's ApplicationEvents is that IDEA does not recognize them as it does with CDI events.   This is kind of a bummer, but it's no worse than using Guava's EventBus, for example.

Mitigation?   I think that using the event class (the subclass of ApplicationEvent) for one and only one purpose is probably sufficient.   It's a good idea to have purpose built DTOs anyway.

The Benefits of ApplicationEvent

The benefits of using ApplicationEvent over other possibilities can make them very worthwhile:
  1. De-coupling excessively coupled components - Often, a business logic component will trigger many different actions that don't need to be tightly coupled.   For example, notifying users via email / SMS and IM is best left de-coupled from the actual business logic.   The notification channels don't need to know about the business logic, and vice versa.   Also, you can much more easily add new notification channels without modifying the business logic at all!

    This was a very useful technique in improving the architecture of an existing Spring application that I have been working on.
  2. Zero additional libraries - You're already using Spring, so there's nothing to add.  No additional dependencies.
  3. Listen for Spring's own events - You can hook into events that Spring itself fires, which can be very useful.   Application start and stop, for example.

Request and Session Scope

Request and Session scopes are not hard to understand: each scope defines a set of objects that exist for the duration of the scope and are destroyed when the scope ends.   The challenge comes when a longer lived scope wants to inject a bean from a shorter lived scope (e.g. an application scoped bean wants to inject a session or request scoped bean); this gets a little more complicated.

In implementing this, Spring takes a very different approach than that of CDI and Seam.  In CDI and Seam, an application scoped component is injected with request / session / conversation scoped beans on every method call (and un-injected when the method completes!).

Spring takes a different approach:  rather than inject the beans on every single method call, Spring injects a proxy and that proxy is modified to refer to the bean instance in the proper scope by the framework.

@Component
@Scope(value = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
public class RequestBean {
    private final long createdOn = System.currentTimeMillis();

    public long getCreatedOn() {
        return createdOn;
    }
}

Of course, this only works when Spring MVC is enabled, as otherwise there is no request context.
