NullPointerException, probably…

Archive for the ‘Friendly Java Blogs’ Category

Integrating MongoDB with Spring

with one comment

Apparently, most of the visitors to my “Integrating MongoDB with Spring Batch” post can’t find what they are looking for: instructions on integrating MongoDB with plain Spring Core.
Well, the source includes that integration, but it’s on github, and anyway that wasn’t the focus of that post.
So, here’s the integration – short, plain and simple:

  1. Properties file with server and database details (resides in the classpath in this example; host and database name here are just example values):

        db.host=localhost
        db.port=27017
        db.name=mydb

  2. application-config.xml (or whatever you call it):

        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:context="http://www.springframework.org/schema/context"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                   http://www.springframework.org/schema/beans/spring-beans.xsd
                   http://www.springframework.org/schema/context
                   http://www.springframework.org/schema/context/spring-context.xsd">
            <context:property-placeholder
                        location="classpath:db.properties"/>
            <bean id="mongo" class="com.mongodb.Mongo">
                <constructor-arg value="${db.host}"/>
                <constructor-arg value="${db.port}"/>
            </bean>
            <bean id="db"
                  class="com.mongodb.spring.config.DbFactoryBean">
                <property name="mongo" ref="mongo"/>
                <property name="name" value="${db.name}"/>
            </bean>
        </beans>
  3. The com.mongodb.spring.config.DbFactoryBean class:

        import com.mongodb.DB;
        import com.mongodb.Mongo;
        import org.springframework.beans.factory.FactoryBean;

        public class DbFactoryBean implements FactoryBean<DB> {

            private Mongo mongo;
            private String name;

            @Override
            public DB getObject() throws Exception {
                return mongo.getDB(name);
            }

            @Override
            public Class<?> getObjectType() {
                return DB.class;
            }

            @Override
            public boolean isSingleton() {
                return true;
            }

            public void setMongo(Mongo mongo) {
                this.mongo = mongo;
            }

            public void setName(String name) {
                this.name = name;
            }
        }
Or, if you prefer Java config over XML:

    @Configuration
    public class ApplicationConfiguration {

        @Value("${db.name}")
        private String appDbName;

        @Value("${db.host}")
        private String dbHost;

        @Value("${db.port}")
        private int dbPort;

        @Bean
        public DB db() throws UnknownHostException {
            return mongo().getDB(appDbName);
        }

        @Bean
        public Mongo mongo() throws UnknownHostException {
            return new Mongo(dbHost, dbPort);
        }
    }

That’s actually it – enjoy. If you feel some part of the puzzle is missing, please leave a comment.

Written by JBaruch

30/05/2010 at 16:25

Posted in Frameworks, Friendly Java Blogs


Integrating MongoDB with Spring Batch

with 5 comments

Update (May 30th 2010):
If you are looking for plain core Spring integration with MongoDB, here’s a post for you.

Spring Batch is a superb batch framework from, well, Spring. It covers all the concepts of batch architecture and, generally, spares you from reinventing the wheel. It’s cool, really. If you have a batch-oriented application, you must go and take a look at Spring Batch. And if you don’t know what a batch-oriented application is, just think about reading-validating-saving-to-db a zillion text files every night, unattended. Now that you know what a batch-oriented application is, go and look at Spring Batch.

Welcome back. As you’ve seen, Spring Batch constantly saves its state in order to be able to recover/restart exactly where it stopped. JobRepository is the bean in charge of saving the state, and its sole implementation uses a data access object layer, which currently has two implementations – in-memory maps and JDBC. It looks like this:

JobRepository class diagram

Of course, the maps are for testing; the JDBC implementation is the one to use in your production environment, since you have an RDBMS in your application anyway, right? Or not…

Today, when NoSQL is gaining momentum (justifiably, if you ask me), the assumption that “you always have an RDBMS in an enterprise application” is not true anymore. So, how can you work with Spring Batch now? Using in-memory DAOs? Not good enough. Installing, setting up, maintaining and baby-sitting an RDBMS only for Spring Batch meta-data? Hmm, you’d rather not. There is a great solution – just keep the meta-data in the NoSQL database you use for the application itself. Thanks to Spring, the Spring Batch architecture is modular and loosely coupled, and all you have to do to make it work is re-implement the four DAOs.

So, here’s the plan:

  • Implement *Dao with NoSqlDb*Dao
  • Add them to Spring application context
  • Create new SimpleJobRepository, injecting your new NoSqlDb DAOs into it
  • Use it instead of the one you would create from JobRepositoryFactoryBean
  • Profit
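Wired up in Spring’s XML style, the plan above might look like the following sketch. SimpleJobRepository takes the four DAOs as constructor arguments; the mongo* bean names are assumptions for illustration, not the actual bean ids from the published sources:

```xml
<bean id="jobRepository"
      class="org.springframework.batch.core.repository.support.SimpleJobRepository">
    <!-- the four DAOs, re-implemented on top of MongoDB (bean names hypothetical) -->
    <constructor-arg ref="mongoJobInstanceDao"/>
    <constructor-arg ref="mongoJobExecutionDao"/>
    <constructor-arg ref="mongoStepExecutionDao"/>
    <constructor-arg ref="mongoExecutionContextDao"/>
</bean>
```

You then point your jobs at this jobRepository bean instead of the one you would normally get from JobRepositoryFactoryBean.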

That was exactly what I did for our customer, implementing the DAOs using MongoDB. Guess what, you must go and take a look at MongoDB. It’s a lightning-fast, schema-less, document-oriented database that kicks ass. When you suddenly have a strange feeling that an RDBMS might not be the best solution for whatever you do, chances are you’d love MongoDB, as I do now. There are use cases in which you just can’t implement what you need with relational storage. Well, I lied. You can. It will take a year, it will look ugly and perform even worse. That’s my case, and I am just happy the year is 2010 and we know by now that one size doesn’t fit all.

I have to admit – implementing Spring Batch DAOs with MongoDB was fun. Even the Spring Batch meta-data model, which was designed with relational storage in mind, persists nicely in MongoDB. Should I even mention that the code is cleaner compared to JDBC? Even on top of JdbcTemplate?

Now go and grab the Spring Batch over MongoDB implementation and the reference configuration: I have used the samples and the tests from the original Spring Batch distribution, trying to make as few changes as necessary. You’ll need a MongoDB build for your platform and Gradle 0.9p1 to build and run. (Why Gradle? Because it is truly a better way to build.)

If you use MongoDB – enjoy the implementation as is. If you use some other document-oriented DB, the conversion should be straightforward. In any case, I’ll be glad to hear your feedback.

Written by JBaruch

27/04/2010 at 15:10

Maven2 to Gradle Convertor

with 21 comments

Update (04/05/2010):
The code has been refactored from script to class, and it is now hosted on github.
Update (31/07/2010):
Thanks to @lilithapp I have discovered a limitation – your project will be considered multi-module only if your reactor is also a parent of at least one of your modules. I probably won’t fix it, since that’s the case in most projects and since gradle-m2metadata-plugin doesn’t have that limitation.

Last JavaEdge I delivered a session about the Java build tools landscape. My impression from this overview is solid – Gradle rocks. It is best of breed, taking the best from Ant and Maven2 and leaving the downsides of both behind. Take a look, it is worth it (Prezi rocks too, but that’s another blog post).

The only fly in the ointment I found is the lack of a good Maven2-to-Gradle conversion. Gradle has good Maven support. First of all, it can use dependencies’ POMs to determine their transitive dependencies. Second, it has a Maven plugin, but it works in the opposite direction – it can generate a POM for your project built with Gradle. I need the other side – something similar to what Gradle has for Ant: ant.importBuild() imports an Ant build into the Gradle project, and each Ant target is treated as a Gradle task. This is cool! Frankly, I need much less with Maven.

Here’s the shopping list – I need to generate the following settings from pom.xml:

  • Dependencies (inc. scopes and exclusions)
  • Correct plugins selection (e.g. war for web application module)
  • GroupId
  • Version
  • Repositories
  • Compiler level settings
  • All those with full multi-module support
  • All those with reuse support from inheritance and settings.xml
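For a web-application module, the kind of build.gradle I’m after would look roughly like this (Gradle 0.9-era syntax; the coordinates are made up for illustration):

```groovy
apply plugin: 'war'          // chosen because the module's packaging is 'war'

group = 'com.example'        // from the POM's groupId
version = '1.0-SNAPSHOT'     // from the POM's version
sourceCompatibility = '1.6'  // from the maven-compiler-plugin settings

repositories {
    mavenCentral()           // plus any repositories declared in the POM
}

dependencies {
    compile 'org.springframework:spring-core:3.0.2.RELEASE'
    providedCompile 'javax.servlet:servlet-api:2.5'   // Maven 'provided' scope
    testCompile 'junit:junit:4.8.1'                   // Maven 'test' scope
}
```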

After a short search I discovered JIRA issue GRADLE-154, in which Antony Stubbs asks for a subset of such functionality, and finally attaches a small Groovy script that parses a given pom.xml and dumps the dependencies to the console in Gradle format. That was a great start for me, but the drawbacks were obvious – no support for multi-module projects (I can’t recall when I last saw a single-module project), no support for parts coming from settings.xml, etc. One specific pom.xml file on disk has very little to do with the effective pom at runtime. You’ve already got it, right? The parsing should be done on the effective pom, which is easily obtained using maven-help-plugin. So, with the effective pom in hand, I can rip it apart and build a nice set of build.gradle files, plus a settings.gradle for the multi-module support, and they include all the items from the list above!
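The mechanical core of such a conversion – mapping each <dependency> element of the effective pom to a Gradle dependency line – can be sketched in a few lines of plain Java (an illustration of the idea only, not the actual Groovy script; exclusions and multi-module handling are left out):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class PomDeps {

    // Maven scope -> Gradle (0.9-era) configuration name
    static String conf(String scope) {
        if ("test".equals(scope)) return "testCompile";
        if ("provided".equals(scope)) return "providedCompile"; // war plugin only
        if ("runtime".equals(scope)) return "runtime";
        return "compile"; // default scope
    }

    // First child element's text, or null if the tag is absent
    static String text(Element dep, String tag) {
        NodeList nodes = dep.getElementsByTagName(tag);
        return nodes.getLength() == 0 ? null : nodes.item(0).getTextContent();
    }

    // Turns every <dependency> of an (effective) pom into a Gradle dependency line
    public static String toGradle(String pomXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(pomXml.getBytes("UTF-8")));
        NodeList deps = doc.getElementsByTagName("dependency");
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < deps.getLength(); i++) {
            Element dep = (Element) deps.item(i);
            out.append(conf(text(dep, "scope")))
               .append(" '").append(text(dep, "groupId"))
               .append(':').append(text(dep, "artifactId"))
               .append(':').append(text(dep, "version"))
               .append("'\n");
        }
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String pom = "<project><dependencies><dependency>"
                + "<groupId>junit</groupId><artifactId>junit</artifactId>"
                + "<version>4.8.1</version><scope>test</scope>"
                + "</dependency></dependencies></project>";
        System.out.print(toGradle(pom)); // prints: testCompile 'junit:junit:4.8.1'
    }
}
```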

I can assure you there are some bugs here and there in this script, but generally it works, and I managed to migrate a fairly complicated project with war assembly, transitive dependencies, pom inheritance, artifact exclusions, etc. in a single click. “Is this cool or is this cool?”

So, grab the script and give it a shot. It has two flags: -verbose prints out the Maven output during effective pom resolution, and -keepFile keeps the effective pom file for you.

Note the new task in the generated build.gradle – replacePoms. The idea is to work around the lack of IntelliJ Gradle integration when it comes to dependency management (IDEA only knows how to run the build). Gradle generates poms for your modules, and the replacePoms task copies them to the place where IntelliJ needs them. Just run “build replacePoms”, and IDEA will recognize dependencies from Gradle! Yup!


P.S. You should check out the new gradle-m2metadata-plugin, it’s the real thing – Maven3 embedded into a Gradle plugin. It gets all the metadata at runtime!

P.P.S. Sorry for my Groovy, it’s not my mother tongue.

Written by JBaruch

23/02/2010 at 04:29

Posted in Build, Friendly Java Blogs


Artifactory as Training Labs Provisioning Platform

with 5 comments

As AlphaCSP‘s training guy I deliver a lot of trainings. They come in different flavors, on different topics, and in different companies. The common thread in all of them is the problem I encounter with lab setups.

First, problem definition:
Let’s take, for example, some serious global financial company which needs training in Spring, Hibernate and JAX-RS. My contact point is a nice Training Activities Administrator. She just organized “Micro-Expressions Training for HR” and her next task is to organize my Java course. What do you say, will she be able to install IntelliJ IDEA (or Eclipse?), Spring 3 bundles, Hibernate dependencies and Jersey? And yes, the classroom network is detached from both the Intranet and the Internet (BIG financial company, remember?). Oh, I almost forgot a bonus – the training workstations roll back all changes after every restart.

Now, here are two possible provisioning solutions:

  1. Come over the day before for installs. Well, probably the classroom is occupied by another course. If not, the guy who should let me into the classroom, the network, and the computers is busy, sick or in Thailand – probably all three at the same time. Ah, and I have other work to do that day! And the customer won’t pay for it anyway. You get the point – bad idea.
  2. Prepare the labs on CDs. Well, it generally works; most software courses are delivered that way, and one successful example is SpringSource trainings. You get a nicely branded CD full of all you need – the IDE, the dependencies, and the labs source code. Good stuff, really. It works for SpringSource because of the high volume of catalog courses they deliver. They have a stock of identical CDs they use during every single training, worldwide. When it comes to tailor-made courses, things are different. No course is similar to any other; the topics, installs, dependencies and exercises are a unique set each time. That rather complicates the CD craft – composing, burning, labeling. I don’t say it’s impossible – I did it for each and every course, but it’s a real PITA. And thanks to the reverting workstations, students have to copy, extract, set up and define variables every day from scratch, over and over again. Did I mention PITA?

And there is a third solution. The best one. You can use an enterprise repository manager to recreate the students’ environment in a couple of minutes at any given time. Now, that rocks. It really does. Watch the steps:

  1. Customer Requirements
    1. Two simple installs that every Training Administrator and/or sysadmin can manage:
      1. IntelliJ IDEA (next-next-next) from the JetBrains site. Or Eclipse?
      2. Maven2 (unzip) from the Apache site.
    2. Create a .m2 directory under the user home for Maven user settings.
    3. Permission to connect your notebook to the class’ Intranet. It is isolated and the machines revert themselves, so it shouldn’t be a problem.
  2. Exercises development
    1. Develop the exercises on your notebook with all the Maven goodies – pom.xml, dependencies, superb IntelliJ–Maven integration (or Eclipse?).
    2. Install Artifactory locally (you’ll see why Artifactory and not Nexus in the following steps). I mean – download and unzip, heh. Run it (not even as a service).
    3. Import your local repository into Artifactory (can’t do that in Nexus #1) – zip it and make half a dozen clicks in the Artifactory UI.
    4. Deploy the exercises to the local repository. They probably won’t compile – they are exercises, right? Then just zip them and deploy from the UI. Students will download them through the Artifactory UI.
    5. Take Artifactory down. You are ready to go to class.
  3. Exercises delivery
    1. “Good morning, students!” – deliver the hell out of the course, get to the hands-on part.
    2. Connect your notebook to the class’ Intranet and get a dynamic IP (yep, dynamic is good enough).
    3. Get Artifactory up and running.
    4. Let the students browse to Artifactory’s homepage. There they will find the Maven Settings Generator (can’t do that in Nexus #2), which will generate a settings.xml to work with your instance of Artifactory from their machines. All they need to do is check “Mirror-Any”, select “repo” and save the generated file under the .m2 directory. That’s all – their machines are fully configured to get all the dependencies needed for Spring, Hibernate, Jersey, and whatever you need for your training.
    5. Let the students browse the repository to download the exercises zip, unzip it, import the Maven project into IntelliJ IDEA (or Eclipse?) and just start working!
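The generated settings.xml boils down to a single mirror entry pointing at the instructor’s machine – something like this sketch (the host and port are placeholders; the real file comes from Artifactory’s generator):

```xml
<settings>
  <mirrors>
    <mirror>
      <id>artifactory</id>
      <name>Class Artifactory</name>
      <!-- "Mirror-Any": every dependency request goes to the instructor's instance -->
      <mirrorOf>*</mirrorOf>
      <url>http://instructor-host:8081/artifactory/repo</url>
    </mirror>
  </mirrors>
</settings>
```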

As you saw, using Artifactory as a labs delivery platform dramatically simplifies both the lecturer’s and the students’ lives, enabling rapid exercise development and rollout without any preparation on the students’ part and minimal preparation on the training organizer’s part – all thanks to Maven2’s dependency management capabilities, good IDE Maven integration and, of course, Artifactory’s ease of use. And frankly, there is nothing I love more than ease of use. Maybe only chocolate ice-cream.

Written by JBaruch

18/01/2010 at 18:23