Konstruct Computers: Computer Calesthenics and Orthodontia

<h1>Install just the MongoDB Shell on Mac OSX</h1>
<i>2019-10-10</i>

If you're going through the 'MongoDB University' tutorials, they suggest installing all of MongoDB on your Mac. This is probably not what you want if you are running a Docker-based local environment: you probably want to run the server in a container, and connect to that container, or to your Atlas cloud-based servers, from the command line.<br />
<br />
Fortunately, there's <a href="https://github.com/mongodb/homebrew-brew" target="_blank">the MongoDB brew tap</a>!<br />
<br />
To use this, just:<br />
<br />
<ol>
<li>Make sure you have Homebrew installed.</li>
<li>Install the tap: <tt>brew tap mongodb/brew</tt></li>
<li>Install the MongoDB command line: <tt>brew install mongodb-community-shell</tt></li>
</ol>
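With the shell installed, you can point it at a local container or at Atlas. A hedged sketch (the container name, port, and connection string below are illustrative, not from the tutorials):

```shell
# Run the MongoDB server in a container (name and port are illustrative):
docker run -d --name local-mongo -p 27017:27017 mongo

# Connect the locally-installed shell to that container:
mongo --host localhost --port 27017

# ...or to an Atlas cluster (placeholder connection string):
mongo "mongodb+srv://cluster0.example.mongodb.net/test" --username myuser
```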
Credit to <a href="https://dba.stackexchange.com/a/250507/9402">this answer</a> on dba.stackexchange!<br />
<br />
<br />
<br />

<h1>Looking for OpenJDK for Windows?</h1>
<i>2018-11-10</i>

If you're looking for a Windows build of OpenJDK, you will be happy to know that several organizations provide pre-built versions of OpenJDK for Windows:<div>
<ul>
<li>Azul - https://www.azul.com/downloads/zulu/zulu-windows/</li>
<li>Red Hat - https://developers.redhat.com/products/openjdk/overview/</li>
</ul>
<div>
As always, you should read the terms and conditions to see if these binary distributions are right for your use case.</div>
<div>
<br /></div>
<div>
Or, you could build it yourself.</div>
</div>
<div>
<br /></div>
<h1>Upgrading Java - TL;DR: Stick with OpenJDK LTS versions</h1>
<i>2018-11-03</i>

With the new, faster release cycle of Java, there is a lot of confusion about when to upgrade. Upgrading Java is non-trivial for a large system, so it needs to be thought through carefully: you have to think about the IDE, all your tools, and all the dependencies.<br />
<br />
<a href="https://blog.joda.org/2018/10/adopt-java-12-or-stick-on-11.html" target="_blank">This awesome blog post covers a lot of the details.</a><br />
<br />
<a href="https://blog.codefx.org/java/java-11-migration-guide" target="_blank">Here's another awesome post about upgrading to Java 11</a><br />
<br />
However, if you are using the Spring Framework, this decision is made simpler: Spring is only officially supporting LTS releases. I think this makes a lot of sense. For example: <a href="https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-2.1-Release-Notes#java-11-support" target="_blank">Spring Boot 2.1 Released - Java 11 Support</a><br />
<br />
In my day job, the Spring Boot apps lean heavily on Spring's dependencies, so that we can leverage all the testing done by the Spring community. Yes, that means working with something that's not the absolute bleeding edge, but using bleeding edge stuff is not the goal for us. The goal is to produce applications that delight our users and add value to the business.<br />
<br />

<h1>Slimming down a Spring Boot app for testing</h1>
<i>2018-10-28</i>

One of my favorite things about Spring Boot is the ability to launch an application in embedded mode and do some pseudo-integration testing (that is, it's integration testing because I'm able to call the embedded app over the loopback network, as if the test were running on a different machine). Of course you can launch your 'real' application that lives in your 'main' source folder, and you can enable / disable parts of the application with Spring profiles.<br />
<br />
But what if you want to create a specialized Spring Boot app, just for testing? Well, you can!<br />
<br />
For example, in src/test/org/example/test/app/TestServer.kt we can make an app class (btw, this is Kotlin, just so you know):<br />
<br />
<pre>package org.example.test.app

...imports blah...

@SpringBootApplication
@Import(SomeConfig::class)
class TestGraphQLClientApp {
    /**
     * Defines the main resolvers: Query and Mutation.
     */
    @Bean
    fun resolvers(query: GraphQLQueryResolver) = listOf(query)
}
</pre>
<br />
<ul>
<li>This is a GraphQL server, and I'm going to test a simple GraphQL client with it. There are more components behind the scenes.</li>
<li>I'm putting it in the app sub-package to avoid loading any component in the main app or anywhere else unintentionally. Remember that a Spring Boot app class implies a 'component scan' of the package it lives in, and all sub-packages!</li>
<li>SomeConfig is imported, and this brings in whatever components from the main code or elsewhere.</li>
<li>Specialized test components can be defined in packages or sub-packages.</li>
</ul>
<div>
We can make a test like this:</div>
<div>
<br />
<pre>package org.example.test

... imports blah ...

@RunWith(SpringJUnit4ClassRunner::class)
@SpringBootTest(classes = [TestGraphQLClientApp::class, HttpClientConfiguration::class],
        webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class GraphQLClientTest {
    companion object : KLogging()

    @LocalServerPort
    private val port: Int = 0

    @Autowired
    private lateinit var factory: RestTemplateFactory

    // These have to be 'by lazy' because Spring will inject the fields they rely on after init.
    private val template by lazy { factory.createRestTemplate() }
    private val baseUrl by lazy { "http://localhost:$port/graphql" }
    private val client by lazy { GraphQLClient(baseUrl, template) }

    @Test
    fun basicClientTest() {
        client.query("query { foo }").also { value ->
            logger.info { prettyPrint(value) }
            assertEquals("foo", assertHasField(value, "data", "foo").asText())
        }
        client.query("query { getThing(id: \"12345-ABC\") { one two } }").also {
            logger.info { prettyPrint(it) }
        }
    }
}
</pre>
</div>
<ul>
<li>Note that the default 'properties' will be loaded from application.properties or application.yml. If we want to override this, we should probably make a profile and use it from the test.</li>
</ul>
<div>
So what's the problem? The problem is that, in this particular context I have JPA and a few other Spring Boot 'starter' dependencies. So, when the test class starts the Spring Boot app, it launches:</div>
<div>
<ol>
<li>A JDBC Data Source</li>
<li>Liquibase (my preferred data migration tool)</li>
<li>JPA and Hibernate</li>
</ol>
<div>
Those are all great tools, and it's super convenient to have all these 'auto starters', but they are not needed in this particular test. How do we turn them off and "slim" down the application? There are two approaches:</div>
</div>
<div>
<ol>
<li>Create a profile, and disable some things in that profile - This works for the autostart modules that support it, but not all of them do.</li>
<li>Use 'exclude' in SpringBootApplication to disable the autostart modules.</li>
</ol>
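As a sketch of approach 1, auto-configuration can also be disabled via the <code>spring.autoconfigure.exclude</code> property in a profile-specific properties file (the profile name 'slim' is my invention here):

```properties
# src/test/resources/application-slim.properties
spring.autoconfigure.exclude=\
  org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration,\
  org.springframework.boot.autoconfigure.jdbc.DataSourceTransactionManagerAutoConfiguration,\
  org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration,\
  org.springframework.boot.autoconfigure.liquibase.LiquibaseAutoConfiguration
```

The test would then activate it with <code>@ActiveProfiles("slim")</code>.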
<div>
Using exclude is easy, and since it prevents Spring from loading the autostart modules in the first place, it can reduce startup time. So for a simple web app, we can disable all the database modules:</div>
</div>
<div>
<br /></div>
<pre>@SpringBootApplication(exclude = [
    LiquibaseAutoConfiguration::class,
    DataSourceAutoConfiguration::class,
    DataSourceTransactionManagerAutoConfiguration::class,
    HibernateJpaAutoConfiguration::class])
</pre>
<div>
<br />
That's all we need to do! Now the test app starts up in about 9 seconds.<br />
<br />
See also:<br />
<br />
<ul>
<li><a href="https://docs.spring.io/spring-boot/docs/current/reference/html/using-boot-using-springbootapplication-annotation.html#using-boot-using-springbootapplication-annotation" target="_blank">Using the @SpringBootApplication Annotation</a></li>
<li><a href="https://docs.spring.io/spring-boot/docs/current/api/org/springframework/boot/autoconfigure/SpringBootApplication.html" target="_blank">SpringBootApplication annotation</a></li>
<li><a href="https://stackoverflow.com/a/49276219/266167" target="_blank">https://stackoverflow.com/a/49276219/266167</a></li>
</ul>
<br />
<br /></div>
<h1>Kotlin: Getting Started - Part 1</h1>
<i>2017-06-24</i>

I've been using Kotlin for a month or so now, and here are some of the smaller things that I found very useful.<br />
<br />
<h2>
Some Basic Patterns</h2>
<div>
Here are some basic patterns that you probably (hopefully) use in Java, and how to do them with Kotlin.</div>
<h3>
Builder/Constructor</h3>
<div>
In Java you may have classes with many overloaded constructors, or with constructors that have lots of nullable parameters. I think we can all agree that this is just all around tedious for both the producer of the class and the consumer / caller. You might decide to make a builder, which allows you to have a single constructor, and have immutable fields in your target class. While this can make things a bit easier for the consumer/caller, it's still a lot of boilerplate code.</div>
<div>
<br /></div>
<div>
Kotlin provides two features that can help with this in some cases:</div>
<div>
<ol>
<li>Optional parameters with default values</li>
<li>Named parameters</li>
</ol>
</div>
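A small sketch of both features together (all names here are invented for illustration):

```kotlin
// Optional parameters with defaults replace most builder boilerplate.
data class ServerConfig(
    val host: String,                // required
    val port: Int = 8080,            // optional, with default
    val useTls: Boolean = false      // optional, with default
)

fun main() {
    // Named parameters let the caller skip defaults and self-document:
    val config = ServerConfig(host = "localhost", useTls = true)
    println(config)  // port keeps its default of 8080
}
```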
<div>
Of course, you can still make a builder if you want. Also, you can define secondary constructors.</div>
<div>
<br /></div>
<div>
Caveat for Java integration: These features are not available for Java code calling Kotlin, so don't expect this to magically improve Java.</div>
<h3>
Memoize</h3>
<div>
It's really easy to create an 'initialize once' field in Kotlin: just use <span style="font-family: 'courier new', courier, monospace;">by lazy { ... }</span></div>
<div>
<span style="font-family: inherit;"><br /></span></div>
<div>
<br /></div>
<div>
You can chain these together, because 'by lazy' properties act just like normal properties.</div>
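For instance (names invented for illustration), one lazy property can build on another, and neither runs until first access:

```kotlin
class Config {
    // Nothing is computed until first access; reading 'home' triggers 'root'.
    val root: String by lazy { System.getProperty("user.home") ?: "/tmp" }
    val home: String by lazy { "$root/app" }
}

fun main() {
    val c = Config()
    println(c.home)  // forces both lazy initializers, in dependency order
}
```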
<div>
<br /></div>
<div>
<br /></div>
<div>
<span style="font-family: inherit;"><br /></span></div>
<h2>
Integrating with Existing Java Code</h2>
<div>
If you're considering Kotlin, but you have a bunch of existing Java code, you might be concerned about introducing Kotlin into your code base. Fortunately, Kotlin support is really easy to add to your build, and it's very easy to interoperate between Kotlin and Java.</div>
<div>
<br /></div>
<h3>
Adding Kotlin Support - Gradle</h3>
<div>
To add Kotlin compilation support to an existing Gradle build:<br />
<br />
<br />
<ol>
<li>
Put the Kotlin plugins in the buildscript class path:
<br />
<br />
<pre>buildscript {
    ext {
        kotlinVersion = '1.1.2-4'
    }
    repositories {
        jcenter()
        maven { url "https://plugins.gradle.org/m2/" }
    }
    dependencies { // Gradle plugin classpath.
        classpath("org.jetbrains.kotlin:kotlin-gradle-plugin:${kotlinVersion}")
        classpath("org.jetbrains.kotlin:kotlin-allopen:${kotlinVersion}")
    }
}
</pre>
<br />
<i>NOTE: I'm avoiding the newer Gradle plugin configuration syntax because it does not support string interpolation. You have to repeat the <code>kotlinVersion</code> information over and over.</i>
<br />
</li>
<li>Enable the Kotlin plugins:
<br />
<pre>apply plugin: 'kotlin'
apply plugin: 'kotlin-spring'
</pre>
</li>
<br />
<li>
Set the target JVM (optional, but I prefer to do this):<br />
<br />
<pre>compileKotlin {
    kotlinOptions.jvmTarget = "1.8"
}
compileTestKotlin {
    kotlinOptions.jvmTarget = "1.8"
}
</pre>
<br />
</li>
<li>
Add the Kotlin runtime library dependencies:<br />
<br />
<pre>dependencies {
    // Kotlin/JVM libraries
    compile("org.jetbrains.kotlin:kotlin-stdlib:${kotlinVersion}")
    compile("org.jetbrains.kotlin:kotlin-stdlib-jre8:${kotlinVersion}")
    compile("org.jetbrains.kotlin:kotlin-reflect:${kotlinVersion}")
    // Kotlin SLF4J utility
    compile 'io.github.microutils:kotlin-logging:1.4.4'
}
</pre>
<br />
</li>
</ol>
</div>
<div>
</div>
<div>
</div>
<h3>
Calling Kotlin Functions from Java</h3>
<div>
<br /></div>
<div>
Java will see Kotlin top-level functions as static methods in a class named like this: <span style="font-family: 'courier new', courier, monospace;">&lt;package&gt;.&lt;KotlinFileName&gt;Kt</span>. Basically, just the class name you might expect, plus 'Kt' on the end.<br />
<br />
<br /></div>
<br />
<br /></div>
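For example (file and function names are invented): a top-level function in a file named <code>StringUtils.kt</code> compiles to a static method on a class called <code>StringUtilsKt</code>:

```kotlin
// StringUtils.kt - the file name determines the generated class name.
fun shout(s: String): String = s.toUpperCase() + "!"

// From Java, the same function is called as a static method:
//
//   String result = StringUtilsKt.shout("hello");  // "HELLO!"
```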
<div>
<br /></div>
<h1>Tips for Building Docker Images</h1>
<i>2017-02-16</i>

A few tips for building Docker images, from my experience so far.<br />
<br />
<h2>
Stick with the conventions</h2>
<div>
As with most tools, it's best to start with the conventions and stick with them unless you have a very compelling reason to customize ("you are not special"). Some of the important conventions with Docker images:</div>
<div>
<br /></div>
<div>
<ul>
<li>Put your Dockerfile in the root directory of your project (git repo).</li>
<li>Base your image on another image! This allows you to inherit all the environment variables and such from the parent. Also, if the parent is on Docker Hub, you can refer to its documentation.</li>
<li>Add ENV, ENTRYPOINT and EXPOSE instructions in your Dockerfile. This will tell image users how to configure your image.</li>
<li>Add comments to indicate what files / directories can be overridden with 'volumes' for configuration.</li>
<li>Use ARG to allow you to pass in a variable during build time. <i>This is really good for version numbers</i>, etc.</li>
</ul>
<div>
<br /></div>
</div>
<h2>
Create The Image</h2>
<div>
To create the image, just do <tt>docker build</tt> from the root directory of the project:
<br />
<pre><span style="background-color: #fff2cc;">docker build -t test-image --force-rm .</span>
</pre>
<br />
Where:<br />
<ul>
<li><tt>-t test-image</tt> : gives the image a name (tag) in the local docker environment.</li>
<li><tt>--force-rm</tt> : removes intermediate containers</li>
</ul>
<h3>
Parameterized Image Building with ARG</h3>
If you have an image that needs to download a versioned file, and you don't want to update the Dockerfile for every version, you can use ARG to define a variable that you pass to <tt>docker build</tt> like this:<br />
<br />
<tt><i>Dockerfile</i></tt>
<br />
<tt><br /></tt>
<pre><span style="background-color: #cfe2f3;">FROM openjdk:8-jre-alpine
EXPOSE 9324
ARG ELASTICMQ_VERSION=0.13.2
CMD ["java", "-jar", "-Dconfig.file=/elasticmq/custom.conf", "/elasticmq/server.jar"]
COPY custom.conf /elasticmq/custom.conf
ADD "https://s3-eu-west-1.amazonaws.com/softwaremill-public/elasticmq-server-${ELASTICMQ_VERSION}.jar" /elasticmq/server.jar
</span></pre>
<br />
<ul>
<li>The <tt>ARG</tt> defines <tt>ELASTICMQ_VERSION</tt> as an expected argument at build time.</li>
</ul>
You can then build this image, overriding the <tt>ELASTICMQ_VERSION</tt>, like this:
<br />
<pre><span style="background-color: #fff2cc;">docker build -t my-elasticmq:${VER} --force-rm --build-arg ELASTICMQ_VERSION=${VER} .</span>
</pre>
Where:<br />
<ul>
<li><tt>-t my-elasticmq:${VER}</tt> : gives the image a name and tag in the local docker environment.</li>
<li><tt>--force-rm</tt> : removes intermediate containers</li>
<li><tt>--build-arg ELASTICMQ_VERSION=${VER}</tt> : supplies the build-time variable declared with <tt>ARG</tt></li>
</ul>
<br />
<br />
<br /></div>
<h2>
Explore The Image</h2>
<div>
So, if you want to shell around and look at what is in the image, you can do that easily with:</div>
<div>
<br /></div>
<pre><span style="background-color: #fff2cc;">docker run -it --rm --entrypoint /bin/bash test-image</span>
</pre>
<div>
Where
<br />
<ul>
<li><tt>-it</tt> : runs an interactive terminal session</li>
<li><tt>--rm</tt> : removes the container on exit (this is really useful! Saves on having to clean up containers all the time.)</li>
<li><tt>--entrypoint /bin/bash</tt> : the shell you want to use. We want to override the entry point so the container won't fully start whatever it usually does.</li>
<li><tt>test-image</tt> : The image we want to start, if you gave it a name.</li>
</ul>
</div>
<h1>Install Groovy in an Alpine-based Docker Image</h1>
<i>2017-02-07</i>

If you're making a custom image based on an Alpine Linux image, you may have a little trouble installing things that require bash, like Groovy. I tried using SDKMAN, but I encountered a lot of compatibility problems with unzip and other tools. In my case I'm creating an image based on Tomcat, and I want Groovy for doing some configuration work.<br />
<br />
First, we install the Alpine packages we need:<br />
<ol>
<li>bash</li>
<li>curl</li>
<li>zip</li>
<li>libstdc++ (Gradle needed this, but I don't think Groovy does :shrug:)</li>
</ol>
<br />
<pre>RUN apk add --update bash libstdc++ curl zip && \
rm -rf /var/cache/apk/*
</pre>
<br />
Now we need a workaround for the fact that Groovy's shell scripts start with #!/bin/sh, which on Alpine is BusyBox's shell:
<br />
<br />
<pre># Workaround https://issues.apache.org/jira/browse/GROOVY-7906 and other 'busybox' related issues.
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
</pre>
<br />
Now we can install Groovy. This could probably be done a little more optimally, but it works:
<br />
<pre># Install groovy
# Use curl -L to follow redirects
# Also, use sed to make a workaround for https://issues.apache.org/jira/browse/GROOVY-7906
RUN curl -L https://bintray.com/artifact/download/groovy/maven/apache-groovy-binary-2.4.8.zip -o /tmp/groovy.zip && \
cd /usr/local && \
unzip /tmp/groovy.zip && \
rm /tmp/groovy.zip && \
ln -s /usr/local/groovy-2.4.8 groovy && \
/usr/local/groovy/bin/groovy -v && \
cd /usr/local/bin && \
ln -s /usr/local/groovy/bin/groovy groovy
</pre>
<br/>
As always, if you have any suggestions about how to make it better, let me know.

<h1>Git-fu: How to merge without actually merging</h1>
<i>2017-01-25</i>

Sometimes in your life with git, you'll encounter a situation where you try to merge, for example, a hotfix branch back into develop, and the merge ends up:<br />
<ol>
<li>Having a huge number of conflicts, and/or...</li>
<li>Backing out changes in the target branch that should remain.</li>
</ol>
The reason for this usually has something to do with rebasing or cherry-picking in a way that Git can't follow, but that's not really important if you're in this situation and you need to finish up some merges quickly.<br />
<br />
A simple solution is to have git think the merge has happened, but not actually merge the files. This is actually very simple:<br />
<br />
First, merge without auto-committing or fast-forwarding:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">$ git merge hotfix/1.2.3 --no-commit --no-ff</span> <br />
<br />
<span style="font-family: inherit;">This will do all the merging, but it will not create the merge commit. You can then discard all the changes, or only some of them, and commit:</span><br />
<br />
<span style="font-family: 'Courier New', Courier, monospace;">$ git commit</span><br />
<br />
<span style="font-family: inherit;">Subsequent merges to the target branch will not try to re-apply any of the changes, as git thinks everything has been merged.</span><br />
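The whole sequence can be sketched in a throwaway repo (branch and file names are illustrative); here the incoming changes are discarded entirely:

```shell
# Throwaway demo repo; branch and file names are illustrative.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev

echo main-content > file.txt
git add file.txt
git commit -qm "initial commit"
git checkout -qb develop

git checkout -qb hotfix                # hotfix branches off develop
echo hotfix-content > file.txt
git commit -qam "hotfix change"

git checkout -q develop
git merge hotfix --no-commit --no-ff   # merge, but don't commit yet
git checkout HEAD -- .                 # discard ALL incoming changes
git commit -qm "record merge of hotfix; changes intentionally discarded"

cat file.txt   # still main-content; git now considers hotfix merged
```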
<br />
<h1>Spring Gotchas - Default value expressions not working for @Value</h1>
<i>2016-04-22</i>

The @Value annotation is very useful in Spring, and the default value syntax also comes in handy. However, when working on a new project and setting up your initial configuration, or when setting up a test fixture bean configuration, you may encounter situations where the default value syntax simply doesn't work. For example:<br />
<br />
<pre> @Value("${some.setting:8}")
private int mySetting;
</pre>
<br />
So here, we wanted a default value of <code>8</code> if the <code>some.setting</code> property is not found. Simple enough, but still... you end up getting this kind of error:<br />
<br />
<pre>org.springframework.beans.factory.BeanCreationException: Error creating bean with name '.... blah blah blah ...' </pre>
<pre>Caused by: org.springframework.beans.factory.BeanCreationException: Could not autowire field: private int com.foo.MyBean.mySetting; nested exception is org.springframework.beans.TypeMismatchException: Failed to convert value of type [java.lang.String] to required type [int]; nested exception is java.lang.NumberFormatException: For input string: "${some.setting:8}"
...
Caused by: org.springframework.beans.TypeMismatchException: Failed to convert value of type [java.lang.String] to required type [int]; nested exception is java.lang.NumberFormatException: For input string: "${some.setting:8}"
...
Caused by: java.lang.NumberFormatException: For input string: "${some.setting:8}"
</pre>
This means that Spring <i>does not know how to resolve the placeholder and its default value</i>. To enable placeholder resolution for <code>@Value</code>, just add a <code>PropertySourcesPlaceholderConfigurer</code> bean to the configuration.
In Java configuration:
<br />
<pre>@Configuration
public class MyConfig
{
...
@Bean
public static PropertySourcesPlaceholderConfigurer getPropertySourcesPlaceholderConfigurer()
{
return new PropertySourcesPlaceholderConfigurer();
}
...
}
</pre>
In XML, this is usually not a problem because you've got:
<br />
<pre>
&lt;context:property-placeholder location="classpath:defaults.properties"/&gt;
</pre>
<br />
Thanks to Mkyong for the solution!<br />
See <a href="http://www.mkyong.com/spring3/spring-value-default-value/">http://www.mkyong.com/spring3/spring-value-default-value/</a>
<h1>Server Side Development Environment - VirtualBox and Vagrant on OSX</h1>
<i>2014-12-13</i>

If you're doing server-side development, you probably want to take a look at the VirtualBox / Vagrant combination. It allows your team to share standardized dev server configurations through your version control system; that is, you can define a standard server OS with provisioned software right in the Git project. Developers can then easily create a 'production like' environment right on their workstations, or on any cloud provider like AWS or Rackspace. This frees up your devops team from having to worry about supporting the server-side software packages on whatever OS the developers like to use. Quirks of MySQL, Java, Rails, or Python on Windows or OSX? Forget it! Just install and provision the same software versions you are using in production on a virtual machine.<br />
<br />
Basically, your 'developer setup' page (and you DO have one of these, don't you?) goes from some long list of steps (with different sections for different OS's) to:<br />
<ol>
<li>Install VirtualBox</li>
<li>Install Vagrant</li>
<li>Clone the project repo</li>
<li>'vagrant up' from the command-line</li>
</ol>
Now, you have to figure out how best to deploy.<br />
<h2>
Why VirtualBox?</h2>
It's free, supports most common platforms, and Vagrant has built in support for it.<br />
<br />
https://www.virtualbox.org<br />
<br />
To install, just download and run the installer. You probably won't be using VirtualBox directly; Vagrant will be creating and starting the VirtualBox hosts. However, you may want to launch the application once to make sure it's installed properly.<br />
<br />
The second step is to install Vagrant.<br />
<h2>
Why Vagrant? </h2>
Lots of reasons!<br />
<ul>
<li>Share the machine configs with your team, by checking in a Vagrant file into version control.</li>
<li>By default, the Vagrant machines share a directory with the main host. This is <i>much</i> more convenient than scp-ing files to and from the virtual machine.</li>
<li>Share the running machine on the internet - Vagrant can expose the virtual machines on the internet for other people to test and such. This is done via HashiCorp's Atlas service.</li>
<li>Provisioning - Not only does Vagrant start up the hosts, it can configure them. You can use:</li>
<ul>
<li>Shell</li>
<li>Chef</li>
<li>Puppet</li>
<li>Docker (new and cool - but probably not quite ready for production use at this point)</li>
</ul>
<li>Providers - You can use VirtualBox, AWS, or any number of supported providers. :) </li>
</ul>
My main purpose for using Vagrant is to start learning about Chef.<br />
<br />
https://www.vagrantup.com<br />
<br />
To install, just download and run the installer.<br />
<h2>
Vagrant IDEA Plugin</h2>
<br />
IntelliJ IDEA has a Vagrant plugin. At the moment, this seems to mainly just provide a convenient way to do 'vagrant up', but it could come in handy.<br />
<br />
<h2>
What's in the Vagrantfile?</h2>
Basically, this file sits at the root of your project and defines the server OS, and provisioning mechanism for installing the required software. Here are the important parts (IMO):<br />
<ol>
<li>The VM 'box' definition. This is equivalent to the 'AMI' (Amazon Machine Image) in AWS. The Hashicorp Atlas service provides a whole bunch of 'box' definitions for most common Linux distros.</li>
<li>Port mappings - This allows you to map ports on the outer host to ports on the guest OS. You can use this to forward web server ports and ports for debugging, so you can attach your favorite IDE to the server process in the guest OS.</li>
<li> Shared folders. By default, the folder that has the Vagrantfile in it is shared under /vagrant. This is a very convenient way to transfer files to and view files on the guest.</li>
<li>Provisioning - This is how Vagrant will install and configure the required software on the machine. Start with a simple shell provisioner. Basically, it's just a shell script that Vagrant will run after bringing up the machine.</li>
</ol>
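Putting those four parts together, a minimal Vagrantfile might look like this (the box name, ports, paths, and script are illustrative):

```ruby
# Vagrantfile - all specific values here are illustrative.
Vagrant.configure("2") do |config|
  # 1. The 'box' (base image), pulled by name:
  config.vm.box = "ubuntu/trusty64"

  # 2. Port mappings: host port 8080 forwards to guest port 80:
  config.vm.network "forwarded_port", guest: 80, host: 8080

  # 3. Shared folders: the project root is shared at /vagrant by default;
  #    extra folders can be added explicitly:
  config.vm.synced_folder "./data", "/srv/data"

  # 4. Provisioning with a simple shell script:
  config.vm.provision "shell", path: "provision.sh"
end
```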
<br />
<br />

<h1>Spring for Java EE Developers - Part 3</h1>
<i>2014-11-30</i>

<h2>
Related to Factory Objects - Prototype Scope</h2>
In the <a href="http://konstructcomputers.blogspot.com/2014/08/spring-for-java-ee-developers-part-2.html">previous post</a>, I mentioned a few ways to make a factory or provider object. <br />
<br />
<ol>
<li>A configuration bean - The bean class is annotated with @Configuration, and you can add various @Bean methods that get called to create the instances.</li>
<li>Factory Bean / Factory Method - </li>
</ol>
A related technique is Spring's prototype scope. This tells Spring to make a new instance of the bean for every injection and every lookup. In XML, it looks like this:<br />
<br />
<pre>&lt;bean id="makeOne" class="com.foo.SomeBean" scope="prototype"/&gt;
</pre>
<br />
Similarly, with annotations:<br />
<br />
<pre>@Component
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public class SomeBean
{
...
}
</pre>
<br />
<h2>
Events</h2>
Spring also has an event framework, along with some standard events that the framework produces, allowing you to extend the framework more easily. While this is not as annotation driven and fully decoupled as the CDI event framework, it functions in pretty much the same way.<br />
<br />
To create your own event, simply extend <span style="font-family: "Courier New",Courier,monospace;">ApplicationEvent</span>.<br />
<br />
<pre>public class MyEvent extends ApplicationEvent
{
    private final String message;

    public MyEvent(Object source, String message)
    {
        super(source);
        this.message = message;
    }

    public String getMessage()
    {
        return message;
    }
}
</pre>
<br />
To produce events, beans must implement <span style="font-family: "Courier New",Courier,monospace;">ApplicationEventPublisherAware</span>. Usually this class will store the <span style="font-family: "Courier New",Courier,monospace;">ApplicationEventPublisher </span>and use it later on to publish events.<br />
<br />
<pre>@Component
public class MyEventProducer implements ApplicationEventPublisherAware
{
    private ApplicationEventPublisher applicationEventPublisher;

    @Override
    public void setApplicationEventPublisher(ApplicationEventPublisher applicationEventPublisher)
    {
        this.applicationEventPublisher = applicationEventPublisher;
    }

    public void someBusinessMethod()
    {
        ...
        applicationEventPublisher.publishEvent(new MyEvent(this, "Hey! Something happened!"));
        ...
    }
}
</pre>
<br />
<i>NOTE: It is important to understand that all of the listeners will be called on the caller's thread unless you configure the application event system to be asynchronous. I'll cover that in another blog post. The benefit of having the listeners execute on the caller's thread is that the Spring transactional context will propagate to the listeners.</i><br />
<br />
To observe events, have a component implement <span style="font-family: 'Courier New', Courier, monospace;">ApplicationListener&lt;T&gt;</span>, where T is the event class.<br />
<br />
<pre>@Component
public class MyListener implements ApplicationListener&lt;MyEvent&gt;
{
    @Autowired
    private SomeBusinessLogic logic;

    @Override
    @Transactional
    public void onApplicationEvent(MyEvent event)
    {
        logic.doSomething(event.getMessage());
    }
}
</pre>
<br />
<h3>
The Downside of ApplicationEvent</h3>
One noticeable downside of using Spring's ApplicationEvents is that IDEA does not recognize them as it does with CDI events. This is kind of a bummer, but it's no worse than using Guava's EventBus, for example.<br />
<br />
Mitigation? I think that using the event class (the subclass of ApplicationEvent) for one and only one purpose is probably sufficient. It's a good idea to have purpose built DTOs anyway.<br />
<br />
<h3>
The Benefits of ApplicationEvent</h3>
The benefits of using ApplicationEvent over other possibilities can make them very worthwhile:<br />
<ol>
<li><b>De-coupling excessively coupled components</b> - Often, a business logic component will trigger many different actions that don't need to be tightly coupled. For example, notifying users via email / SMS and IM is best left de-coupled from the actual business logic. The notification channels don't need to know about the business logic, and vice versa. Also, you can much more easily add new notification channels without modifying the business logic at all!<br /><br /><i>This was a very useful technique in improving the architecture of an existing Spring application that I have been working on.</i></li>
<li><b>Zero additional libraries</b> - You're already using Spring, so there's nothing to add. No additional dependencies.</li>
<li><b>Listen for Spring's own events</b> - You can hook into events that Spring itself fires, which can be very useful. Application start and stop, for example.</li>
</ol>
<h2>
Request and Session Scope</h2>
Request and Session scopes are not hard to understand - each scope defines a set of objects that exist for the duration of the scope and are destroyed when the scope ends. The challenge comes when a <i>longer lived</i> scope wants to inject a bean from a <i>shorter lived</i> scope (e.g. an application scoped bean wants to inject a session or request scoped bean) - this is where things get a little more complicated.<br />
<br />
In implementing this, Spring takes a very different approach than Seam 2's bijection. In Seam, an application scoped component is injected with request / session / conversation scoped beans on every method call (and <b>un-injected</b> when the method completes!).<br />
<br />
Spring instead injects a <i>proxy</i>: rather than re-injecting the beans on every single method call, the framework routes each call through the proxy to the bean instance in the current scope. (CDI's client proxies for normal-scoped beans work in much the same way.)<br />
<br />
<pre>@Component
@Scope(value = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
public class RequestBean
{
    private final long createdOn = System.currentTimeMillis();

    public long getCreatedOn()
    {
        return createdOn;
    }
}
</pre>
<br />
Of course, this only works when Spring MVC is enabled, as otherwise there is no request context.<br />
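The proxy trick itself is easy to sketch. Spring's TARGET_CLASS mode really uses CGLIB subclassing, but a JDK dynamic proxy shows the same routing idea: the long-lived bean holds a single proxy, and every call is forwarded to whatever instance the current scope supplies (simulated below with a ThreadLocal). All names here are hypothetical:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.function.Supplier;

// A stand-in contract for a request scoped bean.
interface RequestData {
    String requestId();
}

// Builds a proxy that looks up the "current" instance on every call,
// instead of binding to one instance at injection time.
class ScopedProxyFactory {
    static <T> T create(Class<T> iface, Supplier<T> currentInstance) {
        InvocationHandler handler = (proxy, method, args) ->
                method.invoke(currentInstance.get(), args);
        return iface.cast(Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[]{iface}, handler));
    }
}
```

A singleton can hold the returned proxy forever; as the "current request" changes, calls transparently reach the new instance.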
<br />
See also:<br />
<ul>
<li><a href="http://slackspace.de/articles/test-request-scoped-beans-with-spring/">Testing request and session scope </a></li>
</ul>
<br />
<br />joshejoshhttp://www.blogger.com/profile/13081804761104798727noreply@blogger.com0tag:blogger.com,1999:blog-2382621833358905624.post-33668927377502337962014-08-13T05:13:00.000-07:002014-08-13T05:13:48.215-07:00Spring for Java EE Developers - Part 2The second installment in my series of blog posts about transitioning to Spring when coming from Java EE (or maybe other DI frameworks). See <a class="GCUXF0KCPB" href="http://konstructcomputers.blogspot.com/2014/06/spring-for-java-ee-developers.html">Spring for Java EE Developers</a> for the first post. This time I'll be diving into some more details.<br />
<h2>
Factories</h2>
In CDI there is @Produces, and in Guice there is the Provider&lt;T&gt; interface. These are very useful when you have some run-time decisions to make about what object to produce or how to configure it. So, how do you make a factory in Spring?<br />
<h3>
Method 1 - Make a configuration bean</h3>
One simple way to create a factory in Spring is to add a @Configuration bean. Factory methods can be annotated with @Bean, and the factory method parameters will be injected. You will need to add CGLIB to your (run time) dependencies if you want this to work properly.<br />
<br />
<ol>
<li>Make sure you have cglib in your dependency list.</li>
<li>Add <span class="tag">&lt;context:annotation-config/&gt;</span> to your applicationContext.xml (or other XML configuration).</li>
<li><span class="tag">Create a class in a package that is scanned for annotations, and annotate it with @Configuration.</span></li>
<li><span class="tag">Each method in the @Configuration class that produces a bean should be annotated with @Bean. Parameters to the @Bean methods will be injected automatically, and can have @Value and @Qualifier annotations. </span><span class="pln"></span> </li>
</ol>
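Conceptually, a @Configuration class is just a set of factory methods whose parameters are resolved from other beans. Here is a toy plain-Java container that captures the idea (MiniBean, MiniContainer, and the demo classes are made-up names - this is emphatically not Spring's implementation):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Marker annotation standing in for Spring's @Bean.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface MiniBean {}

// Toy container: invokes @MiniBean methods, injecting each parameter from
// beans already produced, and retries until everything resolves.
class MiniContainer {
    private final Map<Class<?>, Object> beans = new HashMap<>();

    void register(Object config) throws Exception {
        List<Method> pending = new ArrayList<>();
        for (Method m : config.getClass().getDeclaredMethods()) {
            if (m.isAnnotationPresent(MiniBean.class)) {
                m.setAccessible(true);
                pending.add(m);
            }
        }
        while (!pending.isEmpty()) {
            boolean progress = false;
            for (Iterator<Method> it = pending.iterator(); it.hasNext(); ) {
                Method m = it.next();
                Object[] args = new Object[m.getParameterCount()];
                boolean ready = true;
                for (int i = 0; i < args.length; i++) {
                    args[i] = beans.get(m.getParameterTypes()[i]);
                    if (args[i] == null) ready = false;
                }
                if (!ready) continue; // dependency not built yet; try later
                beans.put(m.getReturnType(), m.invoke(config, args));
                it.remove();
                progress = true;
            }
            if (!progress)
                throw new IllegalStateException("unresolvable @MiniBean dependencies");
        }
    }

    <T> T get(Class<T> type) { return type.cast(beans.get(type)); }
}

// Demo "configuration" class: repository() depends on dataSource().
class DemoDataSource { String url = "jdbc:demo"; }
class DemoRepository {
    final DemoDataSource ds;
    DemoRepository(DemoDataSource ds) { this.ds = ds; }
}
class DemoConfig {
    @MiniBean DemoRepository repository(DemoDataSource ds) { return new DemoRepository(ds); }
    @MiniBean DemoDataSource dataSource() { return new DemoDataSource(); }
}
```

Note that the declaration order of the factory methods doesn't matter - the retry loop plays the role of Spring's dependency resolution.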
<h3>
Method 2 - Make a factory bean / factory method</h3>
Another way is to use factory-bean and factory-method.<br />
<ol>
<li>Register the factory bean. For example:<br /><br />
<pre>&lt;bean id="thingFactory" class="eg.ThingFactory"/&gt;</pre>
<br />
Where <span style="font-family: "Courier New",Courier,monospace;">eg.ThingFactory</span> has a method <span style="font-family: "Courier New",Courier,monospace;">public Thing getThing()<br /> </span>
</li>
<li>Register the produced object by referencing a method on the factory bean.
<br />
<pre>&lt;bean id="thing" factory-bean="thingFactory" factory-method="getThing"/&gt;</pre>
Spring will then call the getThing() method on the ThingFactory to get the instance.
</li>
</ol>
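Under the hood, factory-bean / factory-method boils down to a reflective call: look up the named method on the factory bean and invoke it. A minimal sketch, reusing the Thing / ThingFactory names from the example above (not Spring's real wiring, of course):

```java
import java.lang.reflect.Method;

// Mirrors the example above: a factory with a no-arg producer method.
class Thing { }
class ThingFactory {
    public Thing getThing() { return new Thing(); }
}

// What factory-bean / factory-method amounts to: resolve the method by
// name on the factory instance and call it to obtain the bean.
class FactoryMethodResolver {
    static Object produce(Object factoryBean, String factoryMethod) throws Exception {
        Method m = factoryBean.getClass().getMethod(factoryMethod);
        return m.invoke(factoryBean);
    }
}
```

Spring also caches the produced instance for singleton-scoped beans; this sketch just shows the lookup-and-invoke step.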
<h2>
Injecting values vs beans</h2>
In other DI frameworks, injecting a String is the same as injecting any other component.<br />
<br />
In the Spring bean XML format, there is a difference between injecting a "value" vs injecting another bean. To inject a bean, use ref="someBeanId" (a.k.a. bean 'name'). To inject a value, use value="some value or Spring EL".<br />
<br />
Using Spring annotations, you can add @Qualifier to select a named bean implementation (if there is more than one), and @Value to specify a Spring EL expression. <br />
<h2>
Transactional Beans</h2>
In EJB3, there are some simple transaction annotations that allow you to declare the transaction support you want for your business logic. Spring has a very similar feature.<br />
<div>
<br /></div>
<div>
@Transactional - provides transaction control. Very similar to EJB3 - class level and method level control. </div>
<div>
<br /></div>
<div>
&lt;tx:annotation-driven/&gt; enables the transaction annotation support.</div>
<div>
<br /></div>
<div>
You can also use TransactionTemplate for programmatic control when needed.</div>
<div>
<h2>
Post-Commit Actions and Transaction Synchronization</h2>
Use TransactionSynchronizationManager to get an interface that is similar to JTA Transaction.registerSynchronization(). Something like this:<br />
<br />
<pre>TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter()
{
    @Override
    public void afterCommit()
    {
        // ... do stuff ...
    }
});
</pre>
<br />
A few notes on this:
<br />
<ul>
<li>If registerSynchronization() is called outside of a transaction, it will fail. You can either invoke the 'after commit' logic immediately when no transaction is active, or just let it throw an error and fix the calling code. </li>
<li>TransactionSynchronizationManager is not an injectable thing. You have to use the static methods.</li>
<li>TransactionSynchronizationAdapter is an empty implementation of TransactionSynchronization that you can use to override specific methods. Pretty handy.</li>
</ul>
See <a href="http://stackoverflow.com/questions/15026142/creating-a-post-commit-when-using-transaction-in-spring">this question on SO</a>. <br />
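To make the registration semantics concrete, here is a toy per-thread registry (a made-up TxSync class, far simpler than Spring's real TransactionSynchronizationManager): callbacks registered while a "transaction" is active run only on commit, and registering outside a transaction fails fast, matching the first note above.

```java
import java.util.ArrayList;
import java.util.List;

// Toy after-commit registry keyed by thread, echoing the behavior of
// TransactionSynchronizationManager described above.
class TxSync {
    private static final ThreadLocal<List<Runnable>> AFTER_COMMIT = new ThreadLocal<>();

    static void begin() {
        AFTER_COMMIT.set(new ArrayList<>());
    }

    static void registerAfterCommit(Runnable action) {
        List<Runnable> actions = AFTER_COMMIT.get();
        if (actions == null) {
            // No active transaction: fail fast (or you could run immediately).
            throw new IllegalStateException("no transaction active");
        }
        actions.add(action);
    }

    static void commit() {
        List<Runnable> actions = AFTER_COMMIT.get();
        AFTER_COMMIT.remove();
        if (actions != null) {
            for (Runnable action : actions) action.run();
        }
    }
}
```

The real thing also supports beforeCommit, afterCompletion with a status code, and so on - but the thread-bound registration model is the same.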
<h2>
Next Time...</h2>
<div>
In the next post I'll try to cover Extended Persistence Contexts and some of the web MVC stuff. </div>
</div>
joshejoshhttp://www.blogger.com/profile/13081804761104798727noreply@blogger.com0tag:blogger.com,1999:blog-2382621833358905624.post-42822092272076893002014-07-20T10:36:00.002-07:002014-07-20T10:36:54.793-07:00My first attempt at using AWS EC2There are lots of cloud hosting services out there. AWS is one of the most popular (if not <i>the</i> most popular), so I decided to set myself up with a free account so I could learn how to use it. This blog post covers my initial experiences.<br />
<ul>
<li>Signing up is very easy, just go to aws.amazon.com. I signed in with my personal Amazon account, and created an AWS account.</li>
<li>I will probably be using EC2, and RDS - An EC2 instance (VM) to host server-side web applications (Java) and RDS for the database. I will probably use EBS as well, so I can have some durable filesystem storage for the EC2 instance.</li>
<li>I started with the "Basic" free tier. You need to enter your CC information though, in case you go over the limitations of the free tier. Since I'm mostly just going to be creating VMs for learning, most likely I won't be keeping too many instances running.</li>
</ul>
<h2>
The free tier</h2>
Currently the AWS free usage tier gives you the following for one year:<br />
<ul>
<li> EC2 (virtual machines) - 750 hours/month on a 't2.micro' instance that is Amazon Linux, RHEL, or SLES</li>
<li>EBS (file system storage) - 30GB, 2 million I/O ops, 1G of snapshot storage</li>
<li>RDS (Relational db) - 750 hours/month on a 'micro' instance, 20G of storage, 20G of backup, 10M I/O ops</li>
</ul>
See http://aws.amazon.com/free<br />
<h3>
What's a t2.micro instance?</h3>
T2 is Amazon's instance type optimized for 'burstable performance'. A t2.micro instance has:<br />
<ul>
<li>1 CPU and 1G of RAM.</li>
<li>Only EBS for durable storage (i.e. anything not on EBS will be lost when the machine is shut down).</li>
</ul>
<h3>
750 hours per month? Should I start and stop my instances?</h3>
<br />You probably shouldn't start and stop instances too often. The billing granularity is hourly, so if you start an instance, you might as well keep it running for an hour. If you stop an instance, you might as well keep it stopped for at least an hour.<br />
<br />
Also,<b> if you start and stop an instance three times in an hour, Amazon will bill you for three hours</b>. So, you need to think about whether you really need to shut down or not. This makes sense because Amazon probably doesn't want everybody to be constantly starting and stopping machines all the time.<br />
<br />
See <a href="http://docs.aws.amazon.com/gettingstarted/latest/awsgsg-intro/gsg-aws-free-tier-usage-limits.html">this page</a> for more.<br />
<br />
It is also a good idea to <a href="http://docs.aws.amazon.com/gettingstarted/latest/awsgsg-intro/gsg-aws-billing-alert.html">enable billing alerts</a>.<br />
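The hourly-granularity rule is easy to model: each separate run of an instance rounds up to whole instance-hours. This is just illustrative arithmetic based on the description above, not Amazon's actual billing logic:

```java
// Sketch of hourly-granularity billing: every run is rounded up to a
// whole number of instance-hours, so three short runs in one hour
// bill as three hours. (Illustrative model only.)
class InstanceBilling {
    // durations of each separate run, in minutes
    static long billedHours(double... runMinutes) {
        long hours = 0;
        for (double minutes : runMinutes) {
            hours += (long) Math.ceil(minutes / 60.0);
        }
        return hours;
    }
}
```

So a single long-running instance is the cheapest way to consume the 750 free hours, while frequent restarts burn through them faster.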
<br />
<h2>
Launching an Instance</h2>
Go to the AWS console, click on EC2. Click 'Launch Instance'.<br />
<ol>
<li>Choose a machine image - Make sure you check the 'Free tier only' box if you want to stay in the free tier. I chose Amazon Linux.</li>
<li>Choose an instance type - t2.micro is the only free tier instance type, so I chose that.</li>
<li>Configure instance - leave the defaults</li>
<li>Add storage - leave the defaults</li>
<li>Tag instance - leave the defaults</li>
<li>Configure Security Group - Since I'm doing this for the first time, I created a new security group called "Administrators". I chose 'My IP' for SSH access. Should be good enough for today, and I suppose that I can change that access rule via the AWS console later to add new IP addresses. Click 'Review and launch'<br /><br />Boot from General Purpose (SSD) prompt: keep the default choice. Click Next.</li>
<li>Review - This should all look okay, so just go ahead and launch it.<br /><br />Create a new key pair: Select 'Create a new key pair' and enter the key pair name. You'll need to download the private key (.pem file) and store it somewhere. I put mine in a Google Drive folder so I could get to it later.</li>
</ol>
<h3>
Connect to the new Linux instance with SSH</h3>
<br />
See <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html">this page for Windows/Putty</a> and <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html">this page for Linux/OSX ssh</a>.<br />
<br />
You'll need the private key, the instance id, and the public DNS address of the instance.<br />
<br />
<h3>
Amazon Linux</h3>
<br />
This Linux distro is in the Red Hat family - it uses yum and rpm. Many packages are available to install. I saw that mine had a Java 7 JRE installed, and that the yum repo had Tomcat 7 available, as well as MySQL and other things.<br />
<br />
<h2>
What's next?</h2>
<ul>
<li>Set up Tomcat, enable HTTPS access from the outside.</li>
<li>Set up MySQL on RDS - Connect Tomcat to MySQL.</li>
<li>Look into making my own machine images (AMIs) that have everything pre-installed and set up.</li>
</ul>
Once I get Tomcat->MySQL going, hopefully I can begin installing webapps to see how well the t2.micro instance works. If it works well, I might consider moving my home wiki to AWS.<br />
<br />
I may also consider doing the same thing with Open Shift, to compare and contrast the costs and ease of use.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />joshejoshhttp://www.blogger.com/profile/13081804761104798727noreply@blogger.com0tag:blogger.com,1999:blog-2382621833358905624.post-62960536053074961342014-07-18T10:15:00.000-07:002014-07-18T10:15:48.208-07:00Eclipse for IDEA UsersIf you are an IntelliJ IDEA user, there's a good chance you'll be working on a team with people who use Eclipse. Here are some basics that will help you get the two environments working together smoothly.<br />
<h2>
Main Differences</h2>
<ul>
<li>The concept of a <i>project</i> is basically the same as it is in IDEA. </li>
<li>Eclipse has the concept of a 'workspace', which contains multiple projects. You might make one workspace for your workplace, and another for an open source project or experiments.</li>
<li>Most of the features of IDEA exist in Eclipse, but they may be in unexpected places in the UI. For example:</li>
<ul>
<li>The plugins are installed / managed under the Help menu (and sort of under the About dialog?). This will certainly generate a few WTFs.</li>
<li><b>'Team' = version control</b>. That kept making me whinge.</li>
<li>Perspectives - this is kind of a 'mode' concept. Mostly maps to the side/bottom panels in IDEA.</li>
</ul>
</ul>
<h2>
Install Eclipse</h2>
Best to just follow the directions. Installation is usually not a big deal, but it's a good idea to:<br />
<ul>
<li>Install the same <i>version</i> that everyone has on your team.</li>
<li>Install the <b>package solution</b> that is appropriate for the kind of development you do. For me, this is <b>'Eclipse IDE for Java EE Developers'</b>.</li>
<li>Here is <a href="http://www.cs.dartmouth.edu/~cs5/install/eclipse-osx/">an example of installing Eclipse on OSX</a>.<br />Basically: Download, unpack, drag the 'eclipse' <i>folder</i> into Applications (<i>not</i> the Eclipse application, but the <i>whole folder</i>).</li>
</ul>
<ol>
</ol>
<h3>
Eclipse Plugins</h3>
<ul>
<li>Install plugins with Eclipse Marketplace, which is (oddly enough) under the Help menu.</li>
<li>Uninstalling plugins is done in the About Eclipse menu item, which is in the Eclipse menu on OSX. See <a href="http://wiki.eclipse.org/FAQ_How_do_I_remove_a_plug-in%3F">this wiki page</a> for more. </li>
<li>You'll probably need to install a version control plugin (e.g. 'subclipse' if you're using subversion) and you'll need to install a dependency management plugin as well (e.g. gradle).</li>
<li>Often, plugins won't work until you delete the project from the workspace and re-import it into the workspace. </li>
</ul>
<h2>
Getting Around</h2>
<h3>
The Workspace</h3>
<br />
The workspace is a collection of references to project directories. These show up as top level items once you have some in your workspace. Eclipse will prompt you for a workspace when you start it up:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-NH9vtI2C6rc/U8lKz_MG3dI/AAAAAAAAB8g/D-9Mu8Z43eA/s1600/Screen+Shot+2014-07-18+at+12.21.29+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-NH9vtI2C6rc/U8lKz_MG3dI/AAAAAAAAB8g/D-9Mu8Z43eA/s1600/Screen+Shot+2014-07-18+at+12.21.29+PM.png" height="141" width="320" /></a></div>
If you select 'Use this as the default', then you can easily switch workspaces using <i>File -> Switch Workspace.</i><br />
<br />
The Project Explorer will show all the projects added to the workspace. At first there will be none, so you will typically import a project that is already on your disk.<br />
<div>
<h3>
Importing a project</h3>
This is how you get a project into the workspace, and it can be found under <i>File -> Import,</i> or by<i> right clicking</i> in the<i> Project Explorer.</i> If you already have a git clone / svn checkout, the workspace will basically link to the existing location. If you clone / checkout from version control, the default behavior is to put the files in the workspace directory.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-MLrGfJUVR_c/U8lMtG6dT5I/AAAAAAAAB8s/eGRsPBKt6ro/s1600/Screen+Shot+2014-07-18+at+12.32.07+PM+(2).png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-MLrGfJUVR_c/U8lMtG6dT5I/AAAAAAAAB8s/eGRsPBKt6ro/s1600/Screen+Shot+2014-07-18+at+12.32.07+PM+(2).png" height="320" width="291" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
To import a typical VCS clone/checkout that already has Eclipse project files in it, choose Generic/existing project:</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-BKw1rG4MwfE/U8lQC4PSrwI/AAAAAAAAB88/pHIwWxKAoDo/s1600/Screen+Shot+2014-07-18+at+12.44.19+PM+(2).png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-BKw1rG4MwfE/U8lQC4PSrwI/AAAAAAAAB88/pHIwWxKAoDo/s1600/Screen+Shot+2014-07-18+at+12.44.19+PM+(2).png" height="153" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
The project should import successfully if you have all of the right plugins installed. At this point you will probably want some additional views of the project: version control, etc. This is where 'perspectives' come in.<br />
<h3>
Perspectives</h3>
</div>
<div>
Perspectives are basically different modes you can work in. IDEA has similar windows, but it doesn't force the whole UI into a 'mode' like Eclipse does. To access perspectives, click the perspectives button:</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-FnLS1vjXPM4/U8lSmtUaPfI/AAAAAAAAB9I/82-oYsQgE2U/s1600/perspectives1.tiff" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-FnLS1vjXPM4/U8lSmtUaPfI/AAAAAAAAB9I/82-oYsQgE2U/s1600/perspectives1.tiff" height="205" width="320" /></a></div>
<div>
The most important perspectives (from my perspective, at least ;-> ):</div>
<div>
<ul>
<li>Team Synchronizing - This is similar to the VCS window in IDEA.</li>
<li>Java EE - This is basically the main project view in IDEA.</li>
</ul>
</div>
<div>
<h2>
Project Files</h2>
Eclipse stores its project information in two files, .classpath and .project, plus a .settings directory. These are roughly equivalent to the .idea directory and the IML files.<br />
<br />
These can all be added to version control so the project can just be cloned/checked out and opened by other team members.<br />
<br />
<h2>
Things that you'll miss</h2>
So here are the things that you'll probably miss coming from IDEA:<br />
<br />
<ul>
<li>Deep navigation and code 'grokking' - Eclipse just doesn't know as much about your project as IDEA does, so it can't help with some more advanced referencing and navigation.</li>
<li>Refactoring - Yeah, Eclipse has refactoring but it's very basic in terms of features and in terms of how thorough it is. IDEA knows much more about the project, so it can refactor very completely. With Eclipse, be prepared to complete many of the refactorings by hand. It gets the basics done though: renaming local vars, extracting methods, etc.</li>
<li>Multiple classpaths - IDEA has separate class paths for testing vs runtime. In Eclipse, there is only one classpath, so you may encounter some strange results when running tests or non-test programs from within Eclipse as compared to running them from IDEA. My advice is to not rely on running your code from the IDE. Always know how to do things from the command line as a fallback.</li>
<li>Change lists - If you're using Git, you won't notice this. However, if you're (still) using Subversion, change lists don't seem to be there in Eclipse. Maybe they are, but I haven't been able to find them yet.</li>
</ul>
<br />
<br />
<br />
<br /></div>
joshejoshhttp://www.blogger.com/profile/13081804761104798727noreply@blogger.com0tag:blogger.com,1999:blog-2382621833358905624.post-51501783755274199802014-06-26T08:24:00.003-07:002014-06-26T08:34:27.331-07:00Migrating from ANT and IVY to GradleRelated to the previous post, <a href="http://konstructcomputers.blogspot.com/2014/06/migrating-from-maven-to-gradle.html">Migrating from Maven to Gradle</a>, here are some things I found when attempting to migrate an ANT / IVY build to Gradle.<br />
<br />
<h2>
Advantages over ANT/IVY</h2>
<ul>
<li><b>XML is not for humans</b> - Gradle's DSL is much more readable and more concise. No need for 'ivy.xml' and 'build.xml' and tons of 'properties files'. </li>
<li><b>Conventions</b> - Avoid re-inventing the wheel. If you use the conventions for the Gradle plugins, this eliminates a great deal of code and makes your project look 'normal' to other people. They can just dive right in and be productive.<b> </b>"You are not special" ;)</li>
<li><b>Declarative</b> - Gradle is more declarative and eliminates a ton of boring, boilerplate code compared to ANT.</li>
<li><b>Plugins</b> - Eliminate even more boilerplate code, and gain some conventions. </li>
<ul>
<li>Get dependencies. </li>
<li>Compile the main code and the test code. Process any resources. Compile dependencies (multi-module).</li>
<li>Run the test suite and generate reports.</li>
<li>Jar up the main code. </li>
</ul>
<li><b>Self install</b> - Gradle self-installs from VCS via the gradle wrapper.</li>
<li><b>'one kind of stuff'</b> - Dependencies are declared right in the build file. </li>
<li>Daemon mode!</li>
</ul>
<h2>
Getting started</h2>
<ul>
<li>Add build.gradle and settings.gradle to the root directory. Can be empty files at first.</li>
<li>Gotcha #1: <i>If you are using Subversion with the standard layout, Gradle will think that the project is named 'trunk'</i> (or whatever the branch directory is... Subversion really sucks at branches!). <br /><br /><i>To fix this, simply add <span style="font-family: "Courier New",Courier,monospace;">rootProject.name='the-real-project-name'</span> in settings.gradle.</i></li>
<li>Re-open the IDEA project. IDEA will import the gradle project.<br />Eclipse probably has something similar.</li>
<li>For a Java project, apply the Java plugin in build.gradle: <span style="font-family: "Courier New",Courier,monospace;">apply plugin: 'java'</span><br />This will automatically add the expected tasks for compiling, running tests, packaging as a jar, etc. <i>You don't have to write this boring stuff!</i></li>
<li>Custom source locations - Let's say the project has the sources in src and test_src. This is not the standard layout for the java plugin, so we'll need to configure that in build.gradle:<br /><br />
<pre>sourceSets {
    main {
        java {
            srcDir 'src'
        }
        resources {
            srcDir 'conf'
        }
    }
    test {
        java {
            srcDir 'test_src'
        }
    }
}
</pre>
</li>
<li>Now we need to add the dependencies. Since Gradle is based on Groovy, it's easy to make a simple converter:
<br />
<pre>task convertIvyDeps << {
    def ivyXml = new XmlParser().parse(new File("ivy.xml"))
    println "dependencies {"
    ivyXml.dependencies.dependency.each {
        def scope = it.@conf?.contains("test") ? "testCompile" : "compile"
        println("\t$scope \"${it.@org}:${it.@name}:${it.@rev}\"")
    }
    println "}"
}
</pre>
<br />
Just run the task and paste the output into the dependencies closure. <br /><br />We can also do something more radical: Parse the ivy.xml and populate the dependencies that way, see <a href="http://technicallypossible.blogspot.com/2009/12/using-ivyxml-with-gradle.html">this post</a>.</li>
<li> Gotcha #2: If you are using a properties file to define versions in ivy.xml, this will be a little different in Gradle.</li>
<ul>
<li>Gradle supports 'extra properties' typically defined in an 'ext' closure. These can be referenced inside double quoted strings in the dependencies closure.</li>
<li>Gradle doesn't like dots in the extra property names. I changed them to underscore. For example:<br />
<pre>ext {
    version_junit = "4.11"
}

dependencies {
    // ... blah blah blah ...
    testCompile "junit:junit:${version_junit}"
}
</pre>
</li>
<li>It's nice having everything defined in one place. :)</li>
</ul>
<li>At this point, you have a basic build with compilation, testing, test reports, and all that.</li>
<ul>
</ul>
</ul>
joshejoshhttp://www.blogger.com/profile/13081804761104798727noreply@blogger.com0tag:blogger.com,1999:blog-2382621833358905624.post-82928648123557899222014-06-20T07:03:00.002-07:002014-06-20T07:03:28.531-07:00Migrating from Maven to GradleI thought I'd share some of my experiences with migrating from Maven to Gradle for a small Java open source project.<br />
<h2>
</h2>
<h2>
The Strategy</h2>
First, what's the best way to do this? The project is a fairly straightforward Java project without complex Maven pom.xml files, so maybe the best way forward is to just create a Gradle build along side the Maven one.<br />
<h2>
Some advantages over Maven</h2>
<br />
Here are some of the advantages I found when using Gradle:<br />
<ul>
<li>The 'java' plugin does almost all the work. It defines something equivalent to the Maven lifecycle in terms of compilation, testing, and packaging. </li>
<li><b>Much smaller configuration.</b> No more verbose pom.xml files!</li>
<li>A multi-module project can be configured from the top-level build.gradle file.</li>
<li>Dependency specifications are more terse and also more readable.</li>
<li>It's much more straightforward to get Gradle to use libraries that are not in the Maven repositories, e.g. in version control. (However, I do believe that it's best to make a private repository with Artifactory or Nexus and install the libraries there, rather than keeping them in version control.)</li>
<li>Declaring dependencies between sub-modules is also very easy.</li>
<li>The whole parent/aggregator/dep-management thing in Maven is a bit clunky. Gradle makes this much easier. You can even do a multi-module build with a single Gradle build file if you want.</li>
</ul>
<h2>
First Attempt</h2>
Here are the steps I took.<br />
<ul>
<li>Using IDEA, create a new Gradle project where the existing sources are. Set the location of the Gradle installation. You should see the Gradle tab on the right side panel.</li>
<li> Create a build.gradle file and a settings.gradle file in the project root directory.</li>
<li>The basic multi-module structure can be the same as a Maven multi-module build:<br /><ul>
<li>A 'main' build.gradle file in the root directory. Along with a settings.gradle file that has the overall settings.</li>
<li>Sub-directories for each module.</li>
<li>Each module directory has it's own build.gradle file.</li>
<li>NOTE: If the module dependencies are defined correctly, building a module will also build the other dependent modules when you are in the module sub-directory! Major win over Maven here, IMO. </li>
</ul>
</li>
<li>Apply the plugins for a Java project, set the group and version, add repositories. In this case I have a multi-module project so I'm putting all of that in the <span style="font-family: "Courier New",Courier,monospace;">allprojects </span>closure:<br /><br />
<pre>allprojects {
    apply plugin: 'java'
    group = 'org.jegrid'
    version = '1.0-SNAPSHOT'

    repositories {
        mavenCentral()
        maven {
            url 'http://repository.jboss.org/nexus/content/groups/public'
        }
        flatDir {
            dirs "$rootDir/lib" // If we use just 'lib', the dir will be relative.
        }
    }
}
</pre>
<br />
I also have some libraries in the <code>lib</code> directory at the top level because they are not in the global Maven repos, or in the JBoss repo. The <code>flatDir</code> closure will allow Gradle to look in this directory to resolve dependencies. </li>
<li>Add dependencies. For a multi-module build this is done inside each project closure. Use the 'compileJava' task to make sure they are right.</li>
</ul>
In the end, this project didn't really work with Gradle because the dependencies are too old. So, I will need to rebuild the project from the ground up anyway. Some of the basic libraries have undergone many significant changes since the project started, so it's time to upgrade!<br />
<h2>
Basic Gradle Multi-Module Java Project Structure</h2>
Okay, so in creating a brand new project, the canonical structure is much like a Maven project.<br />
<br />
<ul>
<li>In the root directory (an 'aggregator' project) there is a main build.gradle file and a settings.gradle file. This is roughly equivalent to the root pom.xml file.</li>
<li>In each sub-project directory (module) there is a build.gradle file. This is roughly equivalent to the module pom.xml files.</li>
<li>The settings.gradle file has an include for each sub-project. This is roughly equivalent to the '&lt;modules&gt;' section of the root pom.xml file.</li>
<li>An allprojects closure in the root build.gradle file can contain dependencies to be used for all modules. This is similar to a 'parent pom.xml' (but much easier to read!).</li>
</ul>
One thing I wanted to do right away is to create the source directories in a brand new module. This is pretty darn easy with Gradle. Just add a new task that iterates through the source sets and creates the directories:<br />
<br />
<pre>task createSourceDirectories << {
    sourceSets.all { set ->
        set.allSource.srcDirs.each {
            println "creating $it ... "
            it.mkdirs()
        }
    }
}
</pre>
<br />
I added this in the allprojects closure, and boom! - I have the task for all of the modules. Neato! I can now run this on each sub-project as needed.<br />
<h2>
Porting The Code</h2>
<br />
Once I had the directory layout and basic project files, I could begin moving in some of the code. I started with the basic utility code for the project and the unit tests. Like I mentioned, this was using a very old version of JUnit, so I needed to upgrade the tests.<br />
<br />
<h3>
Diversion One - Upgrading to JUnit 4.x</h3>
Upgrading to JUnit 4.x is actually pretty easy. For the most part it retains backwards compatibility. There are a few reasons you might want to upgrade the tests.<br />
<ul>
<li>I prefer annotations over extending TestCase. This is a pretty simple transform:<br /><ol>
<li>Remove 'extends TestCase'</li>
<li>Remove the constructor that calls super.</li>
<li>Remove the import for TestCase</li>
<li>Add 'import static org.junit.Assert.*'</li>
<li>Add @Test to each test method. </li>
</ol>
</li>
<li>(already mentioned) Take advantage of 'import static'! import static org.junit.Assert.*</li>
<li>Expected exceptions: <br /><pre>@Test(expected=java.lang.ArrayIndexOutOfBoundsException.class)</pre>
</li>
<li>@Before and @After annotations to replace setUp() and tearDown() (with @BeforeClass and @AfterClass for one-time setup and teardown). </li>
</ul>
<h3>
Diversion Two - Using Guice or Dagger instead of PicoContainer?</h3>
I really enjoy using DI containers. It takes so much of the boilerplate 'factory pattern' code out of the project and makes for easy de-coupling and configuring of components. In the previous version of the project I had used PicoContainer. <br />
<br />
<ul>
<li>Pico - Pro: Good lifecycle support. Really small JAR file. Con: Not as type safe. Project seems to have stalled.</li>
<li>Guice - Pro: Not as small as Pico, but still very small. More type safe. Large community. Con: Bigger jar than Pico (but not too bad... without AOP its smaller). No real lifecycle support.</li>
<li>Dagger - Pro: Really small, with a compiler! Con: Gradle doesn't have a built in plugin for running the dagger compiler (well, as far as I can tell).</li>
</ul>
I think I'll give Dagger a try as it will cause me to learn how to make a Gradle plugin. Even if I don't succeed, I'll learn more about how Gradle works. <br />
<h2>
See also:</h2>
<ul>
<li><div class="post-title entry-title">
<a href="http://jcavallotti.blogspot.com/2013/12/migration-of-maven-based-project-to.html">Migration of a Maven-based project to Gradle</a></div>
</li>
<li><a href="http://forums.gradle.org/gradle/topics/source_directory_creation_for_java_projects_in_gradle_1_0_milestone_9">Source directory creation with Gradle</a></li>
</ul>
joshejoshhttp://www.blogger.com/profile/13081804761104798727noreply@blogger.com0tag:blogger.com,1999:blog-2382621833358905624.post-25348912453693829592014-06-08T10:30:00.000-07:002014-06-08T10:36:24.636-07:00Spring for Java EE DevelopersSpring has been around for a long time now, and has had a significant impact on the newer Java EE standards such as JSF, CDI, and EJB3. In some ways, Spring could be considered a 'legacy' at this point, but since it's out there it is good to know the basics in case you find yourself working with a Spring-based system (like I have).<br />
<br />
<br />
<br />
I'll post more as I learn, but here are my initial thoughts...<br />
<br />
<br />
<br />
<h2>
1. Transitioning to Spring - It's not that bad</h2>
In addition to influencing the newer Java EE standards, Spring itself has been influenced by the newer standards. I'm sure there are some people who will want to argue about which came first, etc. This is not interesting, IMO. Both communities benefit from the influences.<br />
<ul>
<li><b>Annotation-based configuration</b> - Spring no longer requires all components to be defined in a separate XML file (which is considered 'old school' at this point, although IDEs make this much easier to deal with).<br /><ul>
<li>You can actually use a combination of XML config and annotations in a manner very similar to Seam 2 and CDI.</li>
<li>You can also do "Java based" configuration like Guice or Pico. I'm not really that keen on this approach, but it could come in handy in certain cases.</li>
<li>You still need a main configuration XML file, but that's no big deal. In CDI you need META-INF/beans.xml, and in Seam you need components.xml. The main difference is that you can configure the scanning, which could be useful.</li>
</ul>
</li>
<li><b>Supports JSR 330 @Inject and JSR-250 lifecycle annotations </b>- If you are already familiar with CDI and EJB3, this can make the transition easier. The Spring-specific annotations offer some additional control <a href="http://docs.spring.io/spring/docs/4.1.0.BUILD-SNAPSHOT/spring-framework-reference/htmlsingle/#beans-standard-annotations-limitations">(the standard annotations have limitations</a>), but these can really help ease the transition.</li>
<li><b>No need for a separate POJO DI mechanism</b> - One issue that I did experience with EJB3 / CDI is that I found I needed a POJO level underneath the EJBs to share very basic services. I used Guice for this, as at the time Guice was very small and light. With Spring, you can use it as your POJO DI framework too, although it's significantly slower (instantiation time) and heavier (bigger jar files) than some others. In any case, you can use it if you have POJO Java processes that are not part of your application server cluster. 'One Kind Of Stuff' and all that.</li>
<li><b>JSF Integration</b> - Spring Web Flow can be configured to integrate the Spring contexts with JSF EL, similar to Seam and CDI.</li>
<li><b>Spring Web Flow ~= Conversation</b> - Having a 'sub-session' concept to allow the developer to retain state between pages is <i>essential</i> nowadays. A "flow" is fairly similar to a "conversation" in Seam and CDI. There are some significant differences in how a 'flow' is controlled, but the overall concept is the same.</li>
<li><b>LOTS of boilerplate-code-eliminating features! - </b>This is something that Seam2 had a bit of, but Spring has taken this much further:<br /><ul>
<li>Spring Data - Define interfaces for DAOs, and Spring Data writes all the boilerplate JPA code.</li>
<li>Defining a DAO service that provides a RESTful JSON interface can be done with hardly any code at all.</li>
<li>Spring Roo - Generate baseline code and add components easily. Like the 'forge' stuff in JBoss. Not sure how useful this really is with an existing project, but it could be a quick way to get the skeleton code in there. </li>
</ul>
</li>
</ul>
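As a sketch of the Spring Data idea: Customer, its fields, and findByLastName below are hypothetical names I made up; CrudRepository and the method-name query derivation are the Spring Data parts. You'd need spring-data-jpa and a JPA provider on the classpath for this to actually run.

```java
import java.util.List;

import javax.persistence.Entity;
import javax.persistence.Id;

import org.springframework.data.repository.CrudRepository;

// A minimal hypothetical JPA entity.
@Entity
class Customer {
    @Id
    Long id;
    String lastName;
}

// No implementation class anywhere - Spring Data generates one at runtime.
// The method name is parsed into the query "where lastName = ?1".
interface CustomerRepository extends CrudRepository<Customer, Long> {
    List<Customer> findByLastName(String lastName);
}
```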
<h2>
2. The Bad News</h2>
NOTE: This is not an anti-Spring rant. I'm just pointing out a few facts. <br />
<ul>
<li> <b>Spring is big</b> - It is no longer the case that Spring is 'lighter' than Java EE - Both systems are highly modular, and very comprehensive. There are so many Spring add-ons now, expect to spend time wading through them. At this point, it might as well be an application server.<br /><br />On the other hand, it is well documented and very modular, so that mitigates things.<br /></li>
<li><b>Spring is not a standard, it's an implementation</b> - This is perhaps the biggest problem I have with Spring. It is like an alternate universe where there is only one implementation of the standard, and no independent community defining the standards. Sure JSRs and all that have their disadvantages, but Spring does have a considerable 'vendor lock in' problem (although it is OSS, so it's partially mitigated). Sometimes it can be good to know you can pick a different vendor without re-writing the whole thing.<br /><br />On the other hand, if you use Spring, you have a "container within the container", so the idea of porting is that you would port your inner container as well.<br /></li>
<li><b>Spring AoP is more complex than EJB3 and CDI</b> - Also a big pet peeve of mine. It's relatively easy to make interceptors in Seam, EJB3, and CDI. Granted, Spring AoP is much more powerful, but it's also got a lot of things that seem (to me) like they wouldn't get a lot of use. In my experience, this kind of complexity results in two problems:<br /><ol>
<li>Longer learning curve - Developers take more time to get familiar with the technique.</li>
<li>A whole new kind of spaghetti code - This often happens when a developer gets through the learning curve and then proceeds to use AoP as a "golden hammer".</li>
</ol>
<br />On the other hand, if you really need to do fancy stuff with AoP, (um... do you <i>really </i>need that?), it's there if you want it. AoP can really be great when used wisely.<br /></li>
<li><b>Lots of references to JSP in the documentation </b>- JSP is now deprecated. It's a <i>huge </i>step backward from JSF 1.2 & Facelets or JSF2.</li>
</ul>
<h2>
3. Things I'm Still Figuring Out</h2>
<ul>
<li><b>Transaction / Hibernate Session management </b>- In an older version of Spring, there were some really serious problems with Hibernate Session management and JTA. Maybe this is no longer relevant, but I do remember looking at the Spring session management code and thinking "ugh! How did this ever work?" (sorry guys). This is probably addressed, but I do want to know if the 'extended persistence context' concept exists with Spring and/or Spring Web Flow. This is very important to making simple, transactionally sound, high performance web apps! </li>
<li><b>JSF Integration</b> - I'm wondering just how deep this is.</li>
</ul>
joshejoshhttp://www.blogger.com/profile/13081804761104798727noreply@blogger.com0tag:blogger.com,1999:blog-2382621833358905624.post-5128374926725879572014-05-24T12:48:00.000-07:002014-05-24T12:48:00.093-07:00Thinking about Java 8With all the fanfare of the impending Java 8 release, I thought it would be a good opportunity to brush up on some of the new features and think about how useful they might be at work. Here's what I've come up with so far:<br />
<br />
<ul>
<li><a href="http://download.java.net/jdk8/docs/api/java/lang/FunctionalInterface.html">@FunctionalInterface</a> - I like this as it allows me to lock down interfaces that I want to have only one method (which is what makes them functional, or function-like). I know a co-worker or two who will really like this.</li>
<li><a href="http://download.java.net/jdk8/docs/api/java/time/package-summary.html">java.time</a> - Finally! Joda-Time users (like me) will find this to be very familiar looking. </li>
<li><a href="http://openjdk.java.net/projects/lambda/">Lambdas</a> - I think any Groovy user will say "finally, something like groovy closures!". This will probably come in handy, but...<ol>
<li>As with anything concise and powerful, it could be misused. Golden hammer problems might happen (suddenly everything has to be a Lambda).</li>
<li>The syntax is close to what Groovy does, so it might be a little confusing to those of us who switch back and forth between Groovy and Java.</li>
<li>The combination of Lambdas and function/method reference shorthand can result in some very 'tight' code. </li>
</ol>
</li>
<li><a href="http://openjdk.java.net/jeps/122">No more Permanent Generation</a> - Class metadata moves to native 'Metaspace' memory, while interned strings and static fields move to the regular heap. Sounds good to me initially, since I'm a big fan of 'one kind of stuff'. However, I'm not sure about how this will affect GC configurations such as the one that I use frequently at work (ParNew + CMS).</li>
</ul>
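Here's a little program that exercises a few of these together. The Discount interface and the sample data are my own inventions; the lambda, method reference, and java.time usage are standard Java 8:

```java
import java.time.LocalDate;
import java.time.Month;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class Java8Demo {
    // @FunctionalInterface makes the compiler reject a second abstract method.
    @FunctionalInterface
    interface Discount {
        double apply(double price);
    }

    public static void main(String[] args) {
        // A lambda implements the single method of the functional interface.
        Discount tenPercent = price -> price * 0.9;
        System.out.println(tenPercent.apply(100.0));          // 90.0

        // Method reference shorthand - the 'tight' code mentioned above.
        List<String> names = Arrays.asList("carol", "alice", "bob");
        List<String> upper = names.stream()
                                  .sorted()
                                  .map(String::toUpperCase)
                                  .collect(Collectors.toList());
        System.out.println(upper);                            // [ALICE, BOB, CAROL]

        // java.time reads a lot like Joda-Time.
        LocalDate release = LocalDate.of(2014, Month.MARCH, 18);
        System.out.println(release.plusDays(14));             // 2014-04-01
    }
}
```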
These features reduce the gap between Java and Scala. As good as Scala is, it's not easy to justify using it in many cases, and with Java 8, I think that set of cases got quite a bit smaller. I'll probably learn Scala anyway, just because, but for production code I'm thinking Java 8 would be a safer bet.<br />
<br />
joshejoshhttp://www.blogger.com/profile/13081804761104798727noreply@blogger.com0tag:blogger.com,1999:blog-2382621833358905624.post-60612953876934235602014-05-23T12:41:00.000-07:002014-05-23T12:41:04.076-07:00Job Hunting - Networking and RecruitersI figured I'd write about my job search here, since that's part of being in the software engineering world. Hopefully some people will benefit from some of my experiences.<br />
<br />
<ol>
<li><b>Keep a list of opportunities </b>- Company name, position, status (applied, first interview, etc.) any notes about each interview. This will come in handy when talking to your network or recruiters. I use a Google apps spreadsheet for this. Keep the active jobs at the top, and the 'no' list at the bottom. </li>
<li><b>Use your network</b> - Don't be afraid to reach out to your industry friends and former co-workers. I used to feel that this was kind of... 'cheating', but that is a big mistake. These are people that know you, that have worked with you. You don't need to convince them of anything really. Your friends will be happy to help you out if they can. You would do the same for them, wouldn't you? If they don't have anything suitable at their companies, maybe someone they know will. </li>
<li><b>Go to some tech meetups in your area </b>- In addition to maybe learning about some new things, it's a great way to meet other technical people. Often, if a company is hiring they will encourage their engineers to go to these events and look for talent. It might be a good idea to print up some personal business cards to hand out. </li>
<li><b>Use recruiters to gain access to other opportunities</b> - A good recruiter will have access to some opportunities that you may not know about. They will also handle the interview scheduling, and give you more insight into the structure of the hiring company. When you're interviewing through your network, you have to do all this yourself. <br /><ul>
<li>Make sure the recruiter lets you know about any job <i>before </i>sending your resume anywhere. Check your list to make sure you haven't already applied.</li>
<li>Remember, <i>recruiters are getting paid by the hiring company</i>, usually as a
percentage of the yearly comp. So, they will put much more effort
into a senior level position than any junior position. </li>
</ul>
</li>
<li><b>Filter the opportunities</b>, especially when going through your network - If the hiring manager requires certain technical skills that you don't have, don't just send your resume. If the job sounds really interesting, but your skills are not a great match, maybe a short conversation with the hiring manager is in order. Sometimes, the hiring company wants to hire "good people" who can learn the technology specifics. Other times, the company really wants something very specific (which IMO is a bit of a red flag), so if that's the case don't waste everyone's time by applying.</li>
<li><b>Filter recruiters</b> - If a recruiter is not showing you anything exciting, isn't efficient at scheduling interviews, or doesn't prepare you well for the interviews then move on. No point in wasting time. </li>
</ol>
joshejoshhttp://www.blogger.com/profile/13081804761104798727noreply@blogger.com0tag:blogger.com,1999:blog-2382621833358905624.post-70810834847328059292014-04-22T14:52:00.000-07:002014-04-22T14:52:16.854-07:00Upgrading Fedora - Notes<i>A few notes on upgrading Fedora installations.</i><br />
<br />
<br />
<br />
<h2>
Fresh Install </h2>
Probably the safest way to get a working upgrade is to back up any home directories or important configurations and go with a fresh installation.<br />
<br />
Upgrades often leave undesirable configurations in home directories (GNOME configs, for example). This often leads to strange desktop / display issues that can't easily be found or fixed.<br />
<br />
<i>You are not using OSX here. Migrating settings and applications may or may not work. :)</i><br />
<h3>
1. Create a USB Stick</h3>
On Fedora 19, <a href="https://fedoraproject.org/wiki/How_to_create_and_use_Live_USB#Linux_.28GNOME.29_quick_start_.28direct_write.29">these instructions</a> didn't work for me. Here is what I ended up doing. Get a USB stick that doesn't have anything important on it. <br />
<ol>
<li>Download the ISO image. </li>
<li>Insert the USB stick.</li>
<li>Start the <i>Disks</i> application and select the USB drive in the left panel. </li>
<li>Unmount the USB disk filesystem if it is mounted. </li>
<li>Up at the top of the right panel, click on the gear icon and select <i>Restore Disk Image.</i></li>
<li>Select the downloaded ISO image file, and click <i>Start Restoring...</i>.</li>
</ol>
<br />
<h3>
2. Boot using the USB Stick, complete the installation</h3>
Shut down the machine, and re-start. If you need to, use the BIOS to select the USB as the boot drive.<br />
<br />
Go through the install process. Best to dedicate a HDD to the install, that way you can boot from that drive via BIOS boot selector if you want multiple OS's on your computer without too much hassle. In my case, I've got a dual boot workstation, with a HDD dedicated to booting Linux. I use the BIOS boot drive selector to boot up Fedora instead of WinDoze.<br />
<br />
<i>NOTE: I've found that EZbcd doesn't play nice with UEFI boot partitions that Fedora 20 installs. Best to just use the BIOS to select a boot disk.</i> <br />
<h2>
Using FedUp</h2>
<i>WARNING, THIS DOESN'T ALWAYS WORK. Almost every time I've done this, there were some strange after-effects with GNOME at least. </i><br />
<br />
For newer versions of Fedora (newer than 17), <a href="https://fedoraproject.org/wiki/FedUp">FedUp</a> with the network upgrade is the way to go:<br />
<br />
<pre>$ sudo yum install fedup
$ sudo yum update fedup fedora-release
<code>$ sudo fedup --network 20</code>
</pre>
<br />
Where <code>20</code> is the version you want to upgrade to. Fedup will automatically reboot the system when it's done downloading everything.<br />
<br />
<br />
The Fedora site says: "Prior to Fedora 17, the DVD/ISO/USB Drive option is recommended."<br />
<br />
Yeah, well... what they really mean is that FedUp will probably get you something that boots and runs some things, but you may discover later on that many settings are just plain broken.<br />
<br />
<br />joshejoshhttp://www.blogger.com/profile/13081804761104798727noreply@blogger.com0tag:blogger.com,1999:blog-2382621833358905624.post-66397691676639671992014-04-08T11:12:00.001-07:002014-04-08T11:12:14.255-07:00MySQL - Making Snapshots and Loading SnapshotsJust a quick note on how to make database snapshots with MySQL.<br />
<br />
Create a compressed snapshot:<br />
<br />
<pre>$ mysqldump --single-transaction -udbuser -pdbpass somedb | bzip2 > somedb.sql.bz2
</pre>
<br />
<ul>
<li>The --single-transaction option can be left out if you are not using InnoDB.</li>
<li>In newer versions of MySQL/MariaDB, --opt is the default, so there's no need to specify it. </li>
</ul>
Load a compressed snapshot:<br />
<br />
<pre>$ bunzip2 -c somedb.sql.bz2 | mysql -u dbuser -pdbpass somedb </pre>
<br />
These commands are usually best done as a background job, as they can take some time to complete. Also, they may cause long delays for any applications using the database, so it's a good idea to shut the application servers down before creating a snapshot.
joshejoshhttp://www.blogger.com/profile/13081804761104798727noreply@blogger.com0tag:blogger.com,1999:blog-2382621833358905624.post-409832951464619152014-03-31T07:54:00.001-07:002014-03-31T07:54:22.132-07:00Seam 2 Gotchas - Part 1A few common mistakes I've seen made with Seam 2:<br />
<h2>
Referencing EJB Components Directly</h2>
I think it's pretty easy to know that referencing a Seam component that happens to be an EJB directly (with @EJB or with a JNDI lookup) is probably not going to end well, but a novice Seam developer might make this mistake. This mistake is more likely when writing code in a non-JSF part of the system (e.g. a Servlet).<br />
<br />
Here's what you can do about it:<br />
<ol>
<li>Make sure EJB Seam components are injected using @In, or look them up using Seam. Make sure you have a clear distinction between 'regular' EJBs and Seam component EJBs. Here are the two main things to avoid: <br /><ul>
<li>INJECTING SEAM COMPONENTS WITH @EJB - <b>Use @In instead!</b></li>
<li>LOOKING UP SEAM COMPONENTS WITH JNDI -<b> Use Component.getInstance(name) or Contexts.lookupInStatefulContexts(name) instead!</b></li>
</ul>
</li>
<li>In a non-JSF environment, use ContextualHttpServletRequest, for example, in a Servlet:<br />
<pre> @Override
protected void service(HttpServletRequest request, final HttpServletResponse response) throws ServletException, IOException
{
//... do some stuff...
new ContextualHttpServletRequest(request)
{
@Override
public void process() throws Exception
{
// Access the components, do work. The contexts will be properly set up here.
MyComponent component = (MyComponent)Component.getInstance("myComponent");
}
}.run(); // Run the request.
}
</pre>
See <a href="https://community.jboss.org/thread/182194">this JBoss community post</a>. <i>Personally, I strongly prefer using ContextFilter.</i></li>
<li>In a non-JSF environment, apply the ContextFilter. This will automatically wrap all requests in ContextualHttpServletRequest().<br /><br />For example: <web:context-filter url-pattern="/servlet/*"/> in components.xml.</li>
</ol>
<i>Note that when using ContextFilter or ContextualHttpServletRequest, exceptions may be handled differently than you might expect!</i><br />
<br />
If anything inside the ContextFilter/ContextualHttpServletRequest throws an exception, <b>then all the contexts will be torn down</b>. You may get other filters throwing <code>java.lang.IllegalStateException: No active event context!</code> <i>after</i> the ContextFilter/ContextualHttpServletRequest has finished!<br />
<br />
<h2>
Component Lookup - Component is not automatically created?</h2>
While injecting Seam components with @In is the simplest way to access another component, there are cases where a lookup is needed (e.g. in a Servlet). The problem is, there is more than one way to look up components, and the method used to look up the component will determine the behavior:<br />
<br />
<ol>
<li><span style="font-family: "Courier New",Courier,monospace;">Contexts.lookupInStatefulContexts(name)</span> - This is similar to @In : <b>It will not create components automatically!</b></li>
<li><span style="font-family: "Courier New",Courier,monospace;">Component.getInstance(name)</span> - This is similar to @In(autocreate=true) : <b>Components will be created automatically if they don't exist.</b></li>
</ol>
Make sure you use the appropriate method for your use case.<br />
<h2>
Injected values are null?!?</h2>
If you are used to other DI frameworks, you may be expecting injected values to stick around. That's not always the case with Seam:<br />
<br />
<blockquote class="tr_bq">
<i>Seam will set injected fields to null when the request is finished. Injection / uninjection happen before and after every method invocation.</i></blockquote>
<br />
So, if you access an instance outside of Seam's control, then the injected values might be null.<br />
<br />
<br />joshejoshhttp://www.blogger.com/profile/13081804761104798727noreply@blogger.com0tag:blogger.com,1999:blog-2382621833358905624.post-50403471274324723352014-03-25T13:06:00.000-07:002014-03-25T13:06:40.427-07:00JBoss AS 7 and SLF4JAs side project at work, I'm porting a Java Enterprise 5 Seam 2 application to JBoss AS 7 (7.2, to be precise). This application uses SLF4J for logging, and I quickly realized that without some careful configuration, the SLF4J log messages can get discarded. That's not so great when trying to troubleshoot deployment problems!<br />
<br />
(Side note: to post XML inside a <pre> tag on Blogger, use an HTML Encoder like <a href="http://www.opinionatedgeek.com/dotnet/tools/htmlencode/Encode.aspx">this one</a>)<br />
<br />
Anyway, here's what I ended up doing to get it to work:<br />
<ol>
<li>Don't use the provided SLF4J module from the container. This will allow the application to use its own version of SLF4J and logging implementation (e.g. Log4J). I did this by adding the following exclusions to jboss-deployment-structure.xml (like <a href="https://community.jboss.org/message/797921#797921">this</a>):<br />
<pre> <exclusions>
... other modules ...
<module name="org.apache.log4j" />
<module name="org.slf4j" />
<module name="org.slf4j.impl" />
</exclusions>
</pre>
<ul>
<li>Make sure to <i>exclude the implementation <code>org.slf4j.impl</code> as well, otherwise the app server will supply its own.</i> </li>
<li>For EAR deployments, this needs to be repeated in the sub-deployment for the WAR as well. See <a href="https://community.jboss.org/message/746799">this JBoss community post</a>.</li>
</ul>
<i><br /></i></li>
<li>Include the slf4j-api, and slf4j implementation jars (e.g. slf4j-log4j12 and log4j) in the <code>lib</code> directory of the EAR. In my case, this is just making sure that the Maven module for the EAR doesn't exclude these. Verify by locating the files in the target EAR. <br /><br />In the EAR pom.xml, I added the following dependencies:<br /><br />
<pre> <dependency>
 <groupId>org.slf4j</groupId>
 <artifactId>slf4j-log4j12</artifactId>
 <scope>runtime</scope>
 </dependency>
 <dependency>
 <groupId>log4j</groupId>
 <artifactId>log4j</artifactId>
 </dependency>
</pre>
<br />In this case, the versions are specified in a dependency management pom.xml. Also, you may need to change the scope to 'runtime' if the scope is set in the dependency management (to something else, like 'test').</li>
<li>Put your log implementation config files where the implementation can see them. For Log4J, you can make a jar with log4j.xml in it, and put this in the lib directory of the ear.</li>
</ol>
<h2>
Troubleshooting</h2>
Various things I encountered while setting this up... <br />
<h3>
No SLF4J Implementation</h3>
<br />
If you manage to exclude the SLF4J implementation, but the EAR doesn't contain one you may get this:<br />
<br />
<pre>ERROR [stderr] (ServerService Thread Pool -- 65) SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
ERROR [stderr] (ServerService Thread Pool -- 65) SLF4J: Defaulting to no-operation (NOP) logger implementation
ERROR [stderr] (ServerService Thread Pool -- 65) SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
</pre>
<br />
In this case, just make sure the desired SLF4J implementation class is in the EAR lib directory. For example, add it as a dependency in your pom.xml.<br />
<br />
<h3>
JBoss-Specific Logging Configuration</h3>
<br />
I had an old log4j.xml that had some references to some older JBoss-specific logging features like this:<br />
<br />
<pre> <category name="javax">
<priority value="INFO" class="org.jboss.logging.log4j.JDKLevel"/>
</category>
</pre>
<br />
These references caused ClassNotFoundExceptions when Log4J was initializing. To resolve this, I simply commented out these elements.<br />
<br />
Also, I replaced <span style="font-family: "Courier New",Courier,monospace;">org.jboss.logging.appender.RollingFileAppender</span> with <span style="font-family: "Courier New",Courier,monospace;">org.apache.log4j.RollingFileAppender</span><b>.</b><br />
joshejoshhttp://www.blogger.com/profile/13081804761104798727noreply@blogger.com0tag:blogger.com,1999:blog-2382621833358905624.post-116055867447842672014-01-12T17:11:00.000-08:002014-01-12T17:11:07.776-08:00Replacing a bad hard drive in a ZFS pool - Linux/zfs-fuseThought I'd re-post this here, for convenience. I've got a home-brew NAS server that is running Fedora, zfs-fuse, and CIFS. The situation:<br />
<ul>
<li>"Disk Utility" reports that drive /dev/sde has many bad sectors.</li>
<li><span style="font-family: "Courier New",Courier,monospace;">zpool status</span> shows a degraded state for the main pool. The drive is listed in the main pool by its ID.
<br />
<pre># zpool status -v
pool: nasdata
state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid. Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using 'zpool replace'.
see: http://www.sun.com/msg/ZFS-8000-4J
scrub: resilver completed after 0h28m with 0 errors on Sat Feb 23 23:54:32 2013
config:
NAME STATE READ WRITE CKSUM
nasdata DEGRADED 0 0 0
raidz1-0 DEGRADED 0 0 0
disk/by-id/ata-ST31500541AS_5XW0PDZ1 ONLINE 0 0 0
disk/by-id/ata-ST31500541AS_5XW0PZJQ ONLINE 0 0 0
disk/by-id/ata-ST31500541AS_6XW1MKZZ UNAVAIL 0 193 6 experienced I/O failures
disk/by-id/ata-ST31500541AS_6XW1KRR9 ONLINE 0 0 0
errors: No known data errors
</pre>
</li>
</ul>
Well that pretty much sums it up. No data errors in the array itself, but the disk is unavailable. Here's the process for replacing it:<br />
<br />
<ol>
<li>Tell zfs to take the disk offline:<br />
<pre># zpool offline nasdata /dev/disk/by-id/ata-ST31500541AS_6XW1MKZZ
</pre>
<br />
Note that I'm using /dev/disk/by-id here. This is because that is how it is listed in the pool. </li>
<li>Shut the machine down.</li>
<li>Add the new disk. I also removed the failing disk because it was causing problems during POST.<br />
<i><span style="background-color: #fff2cc;">NOTE: REMEMBER TO LABEL YOUR DISKS! This really helps when the time comes to replace them! </span></i></li>
<li>Start the machine up.</li>
<li>Tell zfs about the new disk:<br />
<pre># zpool replace nasdata /dev/disk/by-id/ata-ST31500541AS_6XW1MKZZ /dev/disk/by-id/ata-SAMSUNG_HD204UI_S2HGJ90BA09450
</pre>
<br />
Note: I had to use the disk IDs because the pool set itself up that way in the first place (I had switched the drives to a new SATA card).
</li>
<li>Immediately ZFS begins replacing the disk:<br />
<pre># zpool status
pool: nasdata
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h0m, 0.00% done, 2695h59m to go
config:
NAME STATE READ WRITE CKSUM
nasdata DEGRADED 0 0 0
raidz1-0 DEGRADED 0 0 0
disk/by-id/ata-ST31500541AS_5XW0PDZ1 ONLINE 0 0 0
disk/by-id/ata-ST31500541AS_5XW0PZJQ ONLINE 0 0 0
replacing-2 DEGRADED 0 0 0
disk/by-id/ata-ST31500541AS_6XW1MKZZ OFFLINE 0 193 6
disk/by-id/ata-SAMSUNG_HD204UI_S2HGJ90BA09450 ONLINE 0 0 0 2.34M resilvered
disk/by-id/ata-ST31500541AS_6XW1KRR9 ONLINE 0 0 0
errors: No known data errors
</pre>
<br />Now hopefully this won't take 2695 hours to complete! :) Later on the status goes down to 11h. Okay, that's doable. </li>
<li>Several hours later, the new drive is incorporated into the pool:<br />
<pre># zpool status -v
pool: nasdata
state: ONLINE
scrub: resilver completed after 9h2m with 0 errors on Sun Feb 24 10:30:21 2013
config:
NAME STATE READ WRITE CKSUM
nasdata ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
disk/by-id/ata-ST31500541AS_5XW0PDZ1 ONLINE 0 0 0
disk/by-id/ata-ST31500541AS_5XW0PZJQ ONLINE 0 0 0
disk/by-id/ata-SAMSUNG_HD204UI_S2HGJ90BA09450 ONLINE 0 0 0 916G resilvered
disk/by-id/ata-ST31500541AS_6XW1KRR9 ONLINE 0 0 0
errors: No known data errors</pre>
</li>
</ol>
So that's it. While it was resilvering, the ZFS filesystem was completely available. How nice! My SMB/CIFS shares were working just fine.
joshejoshhttp://www.blogger.com/profile/13081804761104798727noreply@blogger.com2tag:blogger.com,1999:blog-2382621833358905624.post-24169017270328659522013-12-31T09:21:00.000-08:002013-12-31T09:21:12.214-08:00Groovy: Line-by-line process output - stdout and stderr It is very easy to start up command line processes with Groovy. Just use the execute() method on a string, which returns a Java <span style="font-family: "Courier New",Courier,monospace;">Process</span> object:<br />
<br />
<pre>def proc = "./some-shell-script.sh".execute()
</pre>
<br />
But what if we want to capture the stdout and stderr of the process? Turns out this is also very easy:<br />
<br />
<pre>def proc = "./some-shell-script.sh".execute()
def rc = proc.waitFor()
def output = proc.text
</pre>
<br />
This is all well and good, but like anything it has a few limitations. The output is buffered and not available until the process completes. That makes it less useful for, say, logging to the console in real time as you might want to do in a Jenkins job. Also, it might be handy to prefix each line of output as it comes from the process with a timestamp or something like that. The good news is that Groovy comes with some things that can be used for this purpose.<br />
<br />
Groovy extends the Process object with some interesting methods that make it easy to handle stdout and stderr. We can also use the ANT LineOrientedOutputStream to give us line-by-line behavior:<br />
<br />
<br />
<pre>import org.apache.tools.ant.util.LineOrientedOutputStream

class LineOutput extends LineOrientedOutputStream
{
    String prefix
    List<String> lines

    @Override
    protected void processLine(String line) throws IOException
    {
        lines.add(line)
        println "${new Date().format('yyyy-MM-dd HH:mm:ss.SSS')} ${prefix} : ${line}"
    }
}
</pre>
<br />
We can start threads for stdout and stderr on the process like this:
<br />
<pre>def outLines = []
def errLines = []
def proc = "./some-shell-script.sh".execute()
def outThread = proc.consumeProcessOutputStream(new LineOutput(prefix: "out", lines: outLines))
def errThread = proc.consumeProcessErrorStream(new LineOutput(prefix: "err", lines: errLines))
</pre>
<br />
The threads will automatically start, and begin recording and echoing every line as soon as it happens. Then we can wait for the process to terminate and clean up:
<br />
<pre>try { outThread.join(); } catch (InterruptedException ignore) {}
try { errThread.join(); } catch (InterruptedException ignore) {}
try { proc.waitFor(); } catch (InterruptedException ignore) {}
</pre>
<br />
All of this can be combined into a class for convenient access, adding some configuration options to make it more useful:
<br />
<pre>class ShellCommand
{
    private final String cmd
    private final boolean echo
    private final Process proc
    private final Thread outThread
    private final Thread errThread
    private final List<String> outLines = []
    private final List<String> errLines = []

    private class LineOutput extends LineOrientedOutputStream
    {
        boolean echo
        String prefix
        List<String> lines

        @Override
        protected void processLine(String line) throws IOException
        {
            lines.add(line)
            if (echo)
                println "${new Date().format('yyyy-MM-dd HH:mm:ss.SSS')} ${prefix} : ${line}"
        }
    }

    ShellCommand(String cmd, boolean echo = false, String outPrefix = "stdout", String errPrefix = "stderr")
    {
        this.cmd = cmd
        this.echo = echo
        // Start the process.
        this.proc = cmd.execute()
        // Start the stdout, stderr spooler threads
        outThread = proc.consumeProcessOutputStream(new LineOutput(echo: echo, prefix: outPrefix, lines: outLines))
        errThread = proc.consumeProcessErrorStream(new LineOutput(echo: echo, prefix: errPrefix, lines: errLines))
    }

    def waitForOrKill(int millis)
    {
        proc.waitForOrKill(millis)
        _done()
    }

    private void _done()
    {
        try { outThread.join(); } catch (InterruptedException ignore) {}
        try { errThread.join(); } catch (InterruptedException ignore) {}
        try { proc.waitFor(); } catch (InterruptedException ignore) {}
        proc.closeStreams()
    }

    def getRc()
    {
        def rc = null
        try { rc = proc.exitValue() } catch (IllegalThreadStateException e) {}
        return rc
    }
}
</pre>
<br />
This also adds the waitForOrKill(millis) method, which is useful when running shell commands that are expected to complete in a reasonable amount of time (or something is wrong).
joshejoshhttp://www.blogger.com/profile/13081804761104798727noreply@blogger.com0