Friday, April 22, 2016

Spring Gotchas - Default value expressions not working for @Value

The @Value annotation is very useful in Spring, and the default value syntax also comes in handy.  However, when working on a new project and setting up your initial configuration, or when setting up a test fixture bean configuration, you may encounter situations where the default value syntax simply doesn't work.   For example:

    @Value("${some.setting:8}")
    private int mySetting;

Here, we want a default value of 8 if the some.setting property is not found.  Simple enough, but you still end up getting this kind of error:

org.springframework.beans.factory.BeanCreationException: Error creating bean with name '.... blah blah blah ...' 
Caused by: org.springframework.beans.factory.BeanCreationException: Could not autowire field: private int; nested exception is org.springframework.beans.TypeMismatchException: Failed to convert value of type [java.lang.String] to required type [int]; nested exception is java.lang.NumberFormatException: For input string: "${some.setting:8}"
Caused by: org.springframework.beans.TypeMismatchException: Failed to convert value of type [java.lang.String] to required type [int]; nested exception is java.lang.NumberFormatException: For input string: "${some.setting:8}"
Caused by: java.lang.NumberFormatException: For input string: "${some.setting:8}"
This means that Spring is injecting the raw placeholder string instead of resolving it.   To enable placeholder resolution (including the default value syntax) for @Value, just add a PropertySourcesPlaceholderConfigurer bean to the configuration.   With Java config:
@Configuration
public class MyConfig {

    @Bean
    public static PropertySourcesPlaceholderConfigurer getPropertySourcesPlaceholderConfigurer() {
        return new PropertySourcesPlaceholderConfigurer();
    }
}

In XML, this is usually not a problem because you've got:
<context:property-placeholder location=""/>

Thanks to Mkyong for the solution!

Saturday, December 13, 2014

Server Side Development Environment - VirtualBox and Vagrant on OSX

If you're doing server-side development you probably want to take a look at using the VirtualBox / Vagrant combination.   This will allow your team to share standardized dev server configurations through your version control system; that is, you can define a standard server OS with provisioned software right in the Git project.   Developers can then easily create a 'production-like' environment right on their workstations, or on any cloud provider like AWS or Rackspace.   This frees up your devops team from having to worry about supporting the server-side software packages on whatever OS the developers like to use.   Quirks of MySQL, Java, Rails, or Python on Windows or OSX?  Forget it!   Just install and provision the same software versions you are using in production on a virtual machine.

Basically, your 'developer setup' page (and you DO have one of these, don't you?) goes from some long list of steps (with different sections for different OS's) to:
  1. Install VirtualBox
  2. Install Vagrant
  3. Clone the project repo
  4. 'vagrant up' from the command-line
From there, the only thing left to figure out is how best to deploy.

Why VirtualBox?

It's free, supports most common platforms, and Vagrant has built in support for it.

To install, just download and run the installer.   You probably won't be using VirtualBox directly; Vagrant will be creating and starting the VirtualBox machines.   However, you may want to launch the application once to make sure it's installed properly.

The second step is to install Vagrant.

Why Vagrant? 

Lots of reasons!
  • Share the machine configs with your team, by checking in a Vagrant file into version control.
  • By default, the Vagrant machines share a directory with the main host.   This is much more convenient than scp-ing files to and from the virtual machine.
  • Share the running machine on the internet - Vagrant can expose the virtual machines on the internet for other people to test against.  This is done via HashiCorp's Atlas service.
  • Provisioning - Not only does Vagrant start up the hosts, it can configure them.  You can use:
    • Shell
    • Chef
    • Puppet
    • Docker (new and cool - but probably not quite ready for production use at this point)
  • Providers - You can use VirtualBox, AWS, or any number of supported providers.  :)
My main purpose for using Vagrant is to start learning about Chef.

To install, just download and run the installer.

Vagrant IDEA Plugin

IntelliJ IDEA has a Vagrant plugin.  At the moment, this seems to mainly just provide a convenient way to do 'vagrant up', but it could come in handy.

What's in the Vagrantfile?

Basically, this file sits at the root of your project and defines the server OS and the provisioning mechanism for installing the required software.   Here are the important parts (IMO), with a minimal example after the list:
  1. The VM 'box' definition. This is equivalent to the 'AMI' (Amazon Machine Image) in AWS.  The Hashicorp Atlas service provides a whole bunch of 'box' definitions for most common Linux distros.
  2. Port mappings - This allows you to map ports on the outer host to ports on the guest OS.   You can use this to forward web server ports and ports for debugging, so you can attach your favorite IDE to the server process in the guest OS.
  3.  Shared folders.   By default, the folder that has the Vagrantfile in it is shared under /vagrant.   This is a very convenient way to transfer files to and view files on the guest.
  4. Provisioning - This is how Vagrant will install and configure the required software on the machine.  Start with a simple shell provisioner.   Basically, it's just a shell script that Vagrant will run after bringing up the machine.
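
To make those parts concrete, here is a minimal Vagrantfile sketch (the box name, ports, and provisioning script path are placeholders, not from a real project):

# Minimal Vagrantfile sketch -- box name, ports, and script are illustrative.
Vagrant.configure("2") do |config|
  # 1. The base 'box' (the AMI equivalent), pulled from the public box catalog.
  config.vm.box = "ubuntu/trusty64"

  # 2. Port mappings: forward a web port and a remote-debugging port to the host.
  config.vm.network "forwarded_port", guest: 8080, host: 8080
  config.vm.network "forwarded_port", guest: 5005, host: 5005

  # 3. Shared folders: the project directory is mounted at /vagrant by default.

  # 4. Provisioning: a simple shell script run after the machine boots.
  config.vm.provision "shell", path: "provision.sh"
end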

Sunday, November 30, 2014

Spring for Java EE Developers - Part 3

Related to Factory Objects - Prototype Scope

In the previous post, I mentioned a few ways to make a factory or provider object.

  1. A configuration bean - The bean class is annotated with @Configuration, and you can add various @Bean methods that get called to create the instances.
  2. Factory Bean / Factory Method - A factory bean is registered, and Spring calls one of its methods (factory-bean / factory-method) to produce the instance.

A related technique is Spring's prototype scope.   This tells Spring to make a new instance of the bean for every injection and every lookup.   In XML, it looks like this:

<bean id="makeOne" class="" scope="prototype"/>

Similarly, with annotations:

@Component @Scope("prototype")
public class SomeBean { }
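
A quick way to see the effect (a rough sketch; the AppConfig class is made up): every lookup from the context returns a distinct instance.

// Sketch: each getBean() call yields a new SomeBean because of prototype scope.
AnnotationConfigApplicationContext ctx =
        new AnnotationConfigApplicationContext(AppConfig.class);  // AppConfig is hypothetical

SomeBean first = ctx.getBean(SomeBean.class);
SomeBean second = ctx.getBean(SomeBean.class);

System.out.println(first == second);  // prints false; a singleton-scoped bean would print true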


Application Events

Spring also has an event framework, along with some standard events that the framework itself publishes, allowing you to extend the framework more easily.    While this is not as annotation-driven and fully decoupled as the CDI event framework, it functions in pretty much the same way.

To create your own event, simply extend ApplicationEvent.

public class MyEvent extends ApplicationEvent {
    private final String message;

    public MyEvent(Object source, String message) {
        super(source);
        this.message = message;
    }

    public String getMessage() {
        return message;
    }
}

To produce events, beans must implement ApplicationEventPublisherAware.    The bean usually stores the injected ApplicationEventPublisher and uses it later on to publish events.

public class MyEventProducer implements ApplicationEventPublisherAware {
    private ApplicationEventPublisher applicationEventPublisher;

    @Override
    public void setApplicationEventPublisher(ApplicationEventPublisher applicationEventPublisher) {
        this.applicationEventPublisher = applicationEventPublisher;
    }

    public void someBusinessMethod() {
        applicationEventPublisher.publishEvent(new MyEvent(this, "Hey!  Something happened!"));
    }
}

NOTE: It is important to understand that all of the listeners will be called on the caller's thread unless you configure the application event system to be asynchronous.   I'll cover that in another blog post.   The benefit of having the listeners execute on the caller's thread is that the Spring transactional context will propagate to the listeners.

To observe events, have a component implement ApplicationListener<T>, where T is the event class.

public class MyListener implements ApplicationListener<MyEvent> {
    private SomeBusinessLogic logic;

    @Override
    public void onApplicationEvent(MyEvent event) {
        // ... hand event.getMessage() off to the business logic ...
    }
}

The Downside of ApplicationEvent

One noticeable downside of using Spring's ApplicationEvents is that IDEA does not recognize them as it does with CDI events.   This is kind of a bummer, but it's no worse than using Guava's EventBus, for example.

Mitigation?   I think that using the event class (the subclass of ApplicationEvent) for one and only one purpose is probably sufficient.   It's a good idea to have purpose built DTOs anyway.

The Benefits of ApplicationEvent

The benefits of using ApplicationEvent over other possibilities can make them very worthwhile:
  1. De-coupling excessively coupled components - Often, a business logic component will trigger many different actions that don't need to be tightly coupled.   For example, notifying users via email / SMS and IM is best left de-coupled from the actual business logic.   The notification channels don't need to know about the business logic, and vice versa.   Also, you can much more easily add new notification channels without modifying the business logic at all!

    This was a very useful technique in improving the architecture of an existing Spring application that I have been working on.
  2. Zero additional libraries - You're already using Spring, so there's nothing to add.  No additional dependencies.
  3. Listen for Spring's own events - You can hook into events that Spring itself fires, which can be very useful.   Application start and stop, for example.

Request and Session Scope

Request and Session scopes are not hard to understand - each scope defines a set of objects that exist for the duration of the scope and are destroyed when the scope ends.   Things get a little more complicated, however, when a longer-lived bean wants to inject a bean from a shorter-lived scope (e.g. an application-scoped bean injecting a session- or request-scoped bean).

In implementing this, Spring takes a very different approach than that of CDI and Seam.  In CDI and Seam, an application scoped component is injected with request / session / conversation scoped beans on every method call (and un-injected when the method completes!).

Spring takes a different approach:  rather than inject the beans on every single method call, Spring injects a proxy and that proxy is modified to refer to the bean instance in the proper scope by the framework.

@Component
@Scope(value = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
public class RequestBean {
    private final long createdOn = System.currentTimeMillis();

    public long getCreatedOn() {
        return createdOn;
    }
}

Of course, this only works when Spring MVC is enabled, as otherwise there is no request context.
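
For illustration, a singleton bean can simply hold on to the injected proxy (a sketch; AuditService is a made-up name):

@Service
public class AuditService {  // singleton (application) scope

    @Autowired
    private RequestBean requestBean;  // Spring injects a scoped proxy here, once

    public long requestStartedAt() {
        // The proxy routes this call to the RequestBean bound to the current HTTP request.
        return requestBean.getCreatedOn();
    }
}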


Wednesday, August 13, 2014

Spring for Java EE Developers - Part 2

The second installment in my series of blog posts about transitioning to Spring when coming from Java EE (or maybe other DI frameworks).    See  Spring for Java EE Developers for the first post.   This time I'll be diving in to some more details.


Factory / Provider Objects

In CDI there is the @Produces annotation, and in Guice there is the Provider<T> interface.   These are very useful when you have some run-time decisions to make about what object to produce or how to configure it.   So, how do you make a factory in Spring?

Method 1 - Make a configuration bean

One simple way to create a factory in Spring is to add a @Configuration bean.   Factory methods can be annotated with @Bean, and the factory method parameters will be injected.   You will need to add CGLIB to your (run time) dependencies if you want this to work properly.   A short sketch follows the steps below.

  1. Make sure you have cglib in your dependency list.
  2. Add <context:annotation-config/> to your applicationContext.xml (or other XML configuration).
  3. Create a class in a package that is scanned for annotations, and annotate it with @Configuration.
  4. Each method in the @Configuration class that produces a bean should be annotated with @Bean.   Parameters to the @Bean methods will be injected automatically, and can have @Value and @Qualifier annotations.
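
A minimal sketch of such a configuration bean (ReportConfig, ReportGenerator, and the property/bean names are invented for illustration):

@Configuration
public class ReportConfig {

    // Parameters of @Bean methods are injected; @Value and @Qualifier work on them too.
    @Bean
    public ReportGenerator reportGenerator(@Value("${report.batchSize:100}") int batchSize,
                                           @Qualifier("reportDataSource") DataSource reportDataSource) {
        // Run-time decisions about what to build and how to configure it go here.
        return new ReportGenerator(batchSize, reportDataSource);
    }
}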

Method 2 - Make a factory bean / factory method

Another way is to use factory-bean and factory-method (a sketch of the factory class follows these steps).
  1. Register the factory bean.  For example:

    <bean id="thingFactory" class="eg.ThingFactory"/>

    Where eg.ThingFactory has a method public Thing getThing()
  2. Register the produced object by referencing a method on the factory bean.
    <bean id="thing" factory-bean="thingFactory" factory-method="getThing"/>
    Spring will then call the getThing() method on the ThingFactory to get the instance.
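
For completeness, eg.ThingFactory itself is just a plain class; a sketch (the Thing type and its setup are assumptions):

package eg;

public class ThingFactory {

    // Spring calls this method (factory-method="getThing") to create the 'thing' bean.
    public Thing getThing() {
        Thing thing = new Thing();
        // ... run-time configuration of the instance goes here ...
        return thing;
    }
}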

Injecting values vs beans

In other DI frameworks, injecting a String is the same as injecting any other component.

In the Spring bean XML format, there is a difference between injecting a "value" vs injecting another bean.    To inject a bean, use ref="someBeanId" (a.k.a. bean 'name').   To inject a value, use value="some value or Spring EL".

Using Spring annotations, you can add @Qualifier to pick a named bean implementation (if there is more than one), and @Value to specify a value or Spring EL expression.
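
In annotation form, the same distinction looks something like this (a sketch with made-up bean and property names):

@Component
public class PaymentService {

    @Autowired
    @Qualifier("primaryGateway")              // inject a specific bean, selected by name
    private PaymentGateway gateway;

    @Value("${payment.timeoutMillis:5000}")   // inject a value / Spring EL expression, not a bean
    private int timeoutMillis;
}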

Transactional Beans

In EJB3, there are some simple transaction annotations that allow you to declare the transaction support you want for your business logic.   Spring has a very similar feature.

@Transactional - provides transaction control.   Very similar to EJB3 - class level and method level control. 

<tx:annotation-driven/> enables the transaction annotation support.

You can also use TransactionTemplate for programmatic control when needed, as in the sketch below.
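
A short sketch of both styles (OrderService, OrderRepository, and their methods are hypothetical):

@Service
public class OrderService {

    @Autowired
    private OrderRepository orderRepository;          // hypothetical DAO

    @Autowired
    private TransactionTemplate transactionTemplate;

    // Declarative: Spring begins and commits (or rolls back) a transaction around the method.
    @Transactional
    public void placeOrder(Order order) {
        orderRepository.save(order);
    }

    // Programmatic: handy when only part of the work should be transactional.
    public void archiveOldOrders() {
        transactionTemplate.execute(new TransactionCallbackWithoutResult() {
            @Override
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                orderRepository.archiveOlderThanDays(90);
            }
        });
    }
}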

Post-Commit Actions and Transaction Synchronization

Use TransactionSynchronizationManager for behavior similar to JTA's Transaction.registerSynchronization().   Something like this:

TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
    @Override
    public void afterCommit() {
        // ... do stuff ...
    }
});

A few notes on this:
  • If this is used outside of a transaction, the method will fail.   You can have it call the 'after commit' immediately if not inside a transaction, or just let it throw an error and fix the problem.
  • TransactionSynchronizationManager is not an injectable thing.   You have to use the static methods.
  • TransactionSynchronizationAdapter is an empty implementation of TransactionSynchronization that you can use to override specific methods.   Pretty handy.
See this question on SO.

Next Time...

In the next post I'll try to cover Extended Persistence Contexts and some of the web MVC stuff. 

Sunday, July 20, 2014

My first attempt at using AWS EC2

There are lots of cloud hosting services out there.   AWS is one of the most popular (if not the most popular), so I decided to set myself up with a free account so I could learn how to use it.   This blog post covers my initial experiences.
  • Signing up is very easy, just go to aws.amazon.com.   I signed in with my personal Amazon account, and created an AWS account.
  • I will probably be using EC2, and RDS -  An EC2 instance (VM) to host server-side web applications (Java) and RDS for the database.    I will probably use EBS as well, so I can have some durable filesystem storage for the EC2 instance.
  • I started with the "Basic" free tier.    You need to enter your credit card information though, in case you go over the limitations of the free tier.   Since I'm mostly just going to be creating VMs for learning, most likely I won't be keeping too many instances running.

The free tier

Currently the AWS free usage tier gives you the following for one year:
  •  EC2 (virtual machines) - 750 hours/month on a 't2.micro' instance that is Amazon Linux, RHEL, or SLES
  • EBS (file system storage) - 30GB, 2 million I/O ops, 1G of snapshot storage
  • RDS (Relational db) - 750 hours/month on a 'micro' instance, 20G of storage, 20G of backup, 10M I/O ops

What's a t2.micro instance?

T2 is Amazon's instance type that is optimized for 'burstable performance'.   A t2.micro instance has:
  • 1 CPU and 1G of RAM.
  • Only EBS for durable storage (i.e. anything not on EBS will be lost when the machine is shut down).
750 hours per month?   Should I start and stop my instances?

You probably shouldn't start and stop instances too often.   The billing granularity is hourly, so if you start an instance, you might as well keep it running for an hour.    If you stop an instance, you might as well keep it stopped for at least an hour.

Also, if you start and stop an instance three times in an hour, Amazon will bill you for three hours.   So, you need to think about whether you really need to shut down or not.   This makes sense because Amazon probably doesn't want everybody to be constantly starting and stopping machines all the time.

See this page for more.

It is also a good idea to enable billing alerts.

Launching an Instance

Go to the AWS console, click on EC2.   Click 'Launch Instance'.
  1. Choose a machine image - Make sure you check the 'Free tier only' box if you want to stay in the free tier.   I chose Amazon Linux.
  2. Choose an instance type - t2.micro is the only free tier instance type, so I chose that.
  3. Configure instance - leave the defaults
  4. Add storage - leave the defaults
  5. Tag instance - leave the defaults
  6. Configure Security Group - Since I'm doing this for the first time, I created a new security group called "Administrators".   I chose 'My IP' for SSH access.   Should be good enough for today, and I suppose that I can change that access rule via the AWS console later to add new IP addresses.  Click 'Review and launch'

    Boot from General Purpose (SSD) prompt: keep the default choice.  Click Next.
  7. Review - This should all look okay, so just go ahead and launch it.

    Create a new key pair: Select 'Create a new key pair' and enter the key pair name.   You'll need to download the private key (.pem file) and store it somewhere.   I put mine in a Google Drive folder so I could get to it later.

Connect to the new Linux instance with SSH

See this page for Windows/Putty and this page for Linux/OSX ssh.

You'll need the private key, the instance id, and the public DNS address of the instance.
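
On Linux/OSX it ends up looking roughly like this (the key file name and hostname are placeholders; ec2-user is the default user on Amazon Linux):

chmod 400 my-keypair.pem
ssh -i my-keypair.pem ec2-user@ec2-54-xx-xx-xx.compute-1.amazonaws.com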

Amazon Linux

This Linux distro is in the Red Hat family - it uses yum and rpm.   Many packages are available to install.   I saw that mine had a Java 7 JRE installed, and that the yum repo had Tomcat 7 available, as well as MySQL and other things.

What's next?

  • Set up Tomcat, enable HTTPS access from the outside.
  • Set up MySQL on RDS - Connect Tomcat to MySQL.
  • Look into making my own machine images (AMIs) that have everything pre-installed and set up.
Once I get Tomcat->MySQL going, hopefully I can begin installing webapps to see how well the t2.micro instance works.    If it works well, I might consider moving my home wiki to AWS.

I may also consider doing the same thing with OpenShift, to compare and contrast the costs and ease of use.

Friday, July 18, 2014

Eclipse for IDEA Users

If you are an IntelliJ IDEA user, there's a good chance you'll be working on a team with people who use Eclipse.   Here are some basics that will help you get the two environments working together smoothly.

Main Differences

  • The concept of a project is basically the same as it is in IDEA.
  • Eclipse has the concept of a 'workspace', which contains multiple projects.   You might make one workspace for your workplace, and another for an open source project or experiments.
  • Most of the features of IDEA exist in Eclipse, but they may be in unexpected places in the UI.  For example:
    • The plugins are installed / managed under the Help menu (and sort of under the About dialog?).  This will certainly generate a few WTFs.
    • 'Team' = version control.   That kept making me whinge.
    • Perspectives - this is kind of a 'mode' concept.   Mostly maps to the side/bottom panels in IDEA.

Install Eclipse

Best to just follow the directions.   Installation is usually not a big deal, but it's a good idea to:
  • Install the same version that everyone has on your team.
  • Install the package solution that is appropriate for the kind of development you do.   For me, this is 'Eclipse IDE for Java EE Developers'.
  • Here is an example of installing Eclipse on OSX.
    Basically: Download, unpack, drag the 'eclipse' folder into Applications (not the Eclipse application, but the whole folder).

Eclipse Plugins

  • Install plugins with Eclipse Marketplace, which is (oddly enough) under the Help menu.
  • Uninstalling plugins is done in the About Eclipse menu item, which is in the Eclipse menu on OSX.   See this wiki page for more.
  • You'll probably need to install a version control plugin (e.g. 'subclipse' if you're using subversion) and you'll need to install a dependency management plugin as well (e.g. gradle).
  • Often, plugins won't work until you delete the project from the workspace and re-import it into the workspace.

Getting Around

The Workspace

The workspace is a collection of references to project directories.   These show up as top-level items once you have some in your workspace.  Eclipse will prompt you for a workspace when you start it up:

If you select 'Use this as the default', then you can easily switch workspaces using File -> Switch Workspace.

The Project Explorer will show all the projects added to the workspace.    At first there will be none, so you will typically import a project that is already on your disk.

Importing a project

This is how you get a project into the workspace, and it can be found under File -> Import, or by right clicking in the Project Explorer.  If you already have a git clone / svn checkout, the workspace will basically link to the existing location.   If you clone / checkout from version control, the default behavior is to put the files in the workspace directory.

To import a typical VCS clone/checkout that already has Eclipse project files in it, choose Generic/existing project:

The project should import successfully if you have all of the right plugins installed.   At this point you will probably want some additional views of the project: version control, etc.   This is where 'perspectives' come in.


Perspectives

Perspectives are basically different modes you can work in.   IDEA has similar tool windows, but it doesn't force the whole UI into a 'mode' like Eclipse does.   To access perspectives, click the perspectives button:
The most important perspectives (from my perspective, at least ;-> ):
  • Team Synchronizing - This is similar to the VCS window in IDEA.
  • Java EE - This is basically the main project view in IDEA.

Project Files

Eclipse stores its project information in two files, .classpath and .project, plus a .settings directory.   These are roughly equivalent to the .idea directory and the IML files.

These can all be added to version control so the project can just be cloned/checked out and opened by other team members.

Things that you'll miss

So here are the things that you'll probably miss coming from IDEA:

  • Deep navigation and code 'grokking' - Eclipse just doesn't know as much about your project as IDEA does, so it can't help with some more advanced referencing and navigation.
  • Refactoring - Yeah, Eclipse has refactoring but it's very basic in terms of features and in terms of how thorough it is.   IDEA knows much more about the project, so it can refactor very completely.   With Eclipse, be prepared to complete many of the refactorings by hand.   It gets the basics done though: renaming local vars, extracting methods, etc.
  • Multiple classpaths - IDEA has separate class paths for testing vs runtime.   In Eclipse, there is only one classpath, so you may encounter some strange results when running tests or non-test programs from within Eclipse as compared to running them from IDEA.   My advice is to not rely on running your code from the IDE.   Always know how to do things from the command line as a fallback.
  • Change lists - If you're using Git, you won't notice this.   However, if you're (still) using Subversion, change lists don't seem to be there in Eclipse.    Maybe they are, but I haven't been able to find them yet.

Thursday, June 26, 2014

Migrating from ANT and IVY to Gradle

Related to the previous post, Migrating from Maven to Gradle, here are some things I found when attempting to migrate an ANT / IVY build to Gradle.

Advantages over ANT/IVY
  • XML is not for humans - Gradle's DSL is much more readable and more concise.   No need for 'ivy.xml' and 'build.xml' and tons of 'properties files'.
  • Conventions -  Avoid re-inventing the wheel.   If you use the conventions for the Gradle plugins, this eliminates a great deal of code and makes your project look 'normal' to other people.  They can just dive right in and be productive.  "You are not special"  ;)
  • Declarative - Gradle is more declarative and eliminates a ton of boring, boilerplate code compared to ANT.
  • Plugins - Eliminate even more boilerplate code, and gain some conventions. 
    • Get dependencies.
    • Compile  the main code and the test code.  Process any resources.   Compile dependencies (multi-module).
    • Run the test suite and generate reports.
    • Jar up the main code.
  • Self install - Gradle self-installs from VCS via the gradle wrapper.
  • 'one kind of stuff' - Dependencies are declared right in the build file.
  • Daemon mode!
Getting started
  • Add build.gradle and settings.gradle to the root directory.   Can be empty files at first.
  • Gotcha #1: If you are using Subversion with the standard layout, Gradle will think that the project is named 'trunk' (or whatever the branch directory is... Subversion really sucks at branches!).

    To fix this, simply add rootProject.name = 'the-real-project-name' to settings.gradle.
  • Re-open the IDEA project.  IDEA will import the gradle project.
    Eclipse probably has something similar.
  • For a Java project, apply the Java plugin in build.gradle: apply plugin: 'java'
    This will automatically add the expected tasks for compiling, running tests, packaging as a jar, etc.  You don't have to write this boring stuff!
  • Custom source locations - Let's say the project has the sources in src and test_src.  This is not the standard layout for the java plugin, so we'll need to configure that in build.gradle:

    sourceSets {
        main {
            java {
                srcDir 'src'
            }
            resources {
                srcDir 'conf'
            }
        }
        test {
            java {
                srcDir 'test_src'
            }
        }
    }
  • Now we need to add the dependencies.   Since Gradle is based on Groovy, it's easy to make a simple converter:
    task convertIvyDeps << {
        def ivyXml = new XmlParser().parse(new File("ivy.xml"))
        println "dependencies {"
        ivyXml.dependencies.dependency.each {
            def scope = it.@conf?.contains("test") ? "testCompile" : "compile"
            println("\t$scope \"${it.@org}:${it.@name}:${it.@rev}\"")
        }
        println "}"
    }

    Just run the task and paste the output into the dependencies closure.

    We can also do something more radical: Parse the ivy.xml and populate the dependencies that way, see this post.
  • Gotcha #2: If you are using a properties file to define versions in ivy.xml, this will be a little different in Gradle.
    • Gradle supports 'extra properties' typically defined in an 'ext' closure.   These can be referenced inside double quoted strings in the dependencies closure.
    • Gradle doesn't like dots in the extra property names.   I changed them to underscore.  For example:
      ext {
          version_junit = '4.11'   // example version; was a dotted property like 'version.junit' in Ivy
      }
      dependencies {
         ... blah blah blah...
         testCompile "junit:junit:${version_junit}"
      }
    • It's nice having everything defined in one place. :)
  • At this point, you have a basic build with compilation, testing, test reports, and all that.  A consolidated sketch of the resulting build.gradle follows.
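
Putting the pieces together, the resulting build.gradle ends up looking roughly like this (the repository and the JUnit version are illustrative):

apply plugin: 'java'

repositories {
    mavenCentral()      // or your internal repository
}

ext {
    version_junit = '4.11'   // example version; originally a dotted Ivy property
}

sourceSets {
    main {
        java      { srcDir 'src' }
        resources { srcDir 'conf' }
    }
    test {
        java { srcDir 'test_src' }
    }
}

dependencies {
    // compile / testCompile lines pasted from the convertIvyDeps output, e.g.:
    testCompile "junit:junit:${version_junit}"
}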