Adding a hook to Liferay

In this article, we elaborate on Liferay hooks, a type of plugin for the Java-based Liferay portal. We discuss what they are and then build one: we set up the project structure, add all the code, and deploy the result to Liferay.

Liferay hooks

After playing around a bit with a portal, and maybe writing a few portlets for it, you probably start wondering how you can override some of the portal features. You might want to use a branded theme for the whole portal or integrate the portal login with your own database of users/own authentication system.
Liferay has quite a few types of so-called “plugins”, such as themes and layout templates, which enable developers to achieve the aforementioned effects. The most powerful type of plugin is the “hook”. Liferay hooks let you change the functionality of the portal itself.

Note: There is also the even more powerful, older EXT Liferay extension model, which has become rather unpopular during the last few years. It basically allows you to override specific classes of the Liferay source code. EXT plugins are usually tied to one version of Liferay, since they rely on specific implementation classes of a particular build instead of on interfaces, which change a lot less. The only common case in which EXT is still used is to override Liferay Struts action classes, like LoginAction and LayoutAction. However, as we will see, from the next versions of Liferay on it will be possible to override these action classes with hooks as well, which in my opinion will all but deprecate the EXT model.

Building a hook

We are going to build a simple hook that logs info about every request to Liferay: before and after every request we will log something. This could be a useful hook if you need to log the IPs of clients or want to measure how long every Liferay request takes.

Project Structure

The project structure of a hook looks very similar to that of a regular webapp. Hooks, just like portlets, are packaged as wars. You can even deploy portlets and hooks in the same war file, although this is conceptually not a good idea: hooks override the behavior of the whole portal, while portlets are supposed to be small pluggable components that don’t impact the portal itself.

Since we are using Maven, we use a typical Maven project structure.

Note: It is possible to build hooks using the Eclipse Liferay IDE plugin, but for this article, we choose to explain building a hook from scratch.

The files in our hook war will be the following:

  • /src/main/java/com/integratingstuff/liferay/hooks/CustomPostEventAction.java
  • /src/main/java/com/integratingstuff/liferay/hooks/CustomPreEventAction.java
  • /src/main/resources/
  • /src/main/webapp/WEB-INF/liferay-hook.xml
  • /src/main/webapp/WEB-INF/
  • /pom.xml

We see all the typical Maven directories: src/main/java for the Java files, src/main/resources for the resource file and src/main/webapp for the web content. The file that defines our hook, liferay-hook.xml, resides in this last directory.

Note: It is not necessary to have a web.xml file in your source code. Upon hook deployment, Liferay will automatically create one for the deployed war.

Defining hooks

Hooks are defined in a liferay-hook.xml file.

The liferay-hook.xml for our hook looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hook PUBLIC "-//Liferay//DTD Hook 6.0.0//EN" "http://www.liferay.com/dtd/liferay-hook_6_0_0.dtd">

<hook>
	<portal-properties>portal.properties</portal-properties>
</hook>

Our hook only overrides some portal properties, but for completeness, we also comment on the most common and important other uses of hooks in this section.

Overriding portal properties

We are only supplying a file that overrides some properties of our portal (this can be done on the portal level also, but note that pointing to hook classes from the portal’s own properties file is not a good idea, since these classes are not on the classpath of the portal itself).
Not all portal properties can be overridden with a hook. You can read which properties can be overridden by taking a look at the Liferay hook DTD wiki page.

Overriding portal jsps

There are some other tags that can be used in a liferay-hook.xml file.
It is possible to override portal-specific JSPs by supplying a custom-jsp-dir and then putting JSPs in that directory. For example, if we define a custom-jsp-dir of “/WEB-INF/jsps” and then create /WEB-INF/jsps/html/portlet/blogs/view.jsp in our project structure, this portal JSP will be overridden: the one from the hook will be used instead of the portal one.
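As a sketch, the corresponding entry in liferay-hook.xml would look something like this (the directory name is our own choice):

```xml
<hook>
	<!-- JSPs placed under /WEB-INF/jsps, mirroring the portal's folder layout,
	     will replace the portal's own JSPs once the hook is deployed -->
	<custom-jsp-dir>/WEB-INF/jsps</custom-jsp-dir>
</hook>
```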

Overriding portal services

Liferay contains a lot of services that are defined as Spring beans in … of the portal source code. All these services can be overridden by using the service tag of liferay-hook.xml. For example, we could replace the portal implementation of the com.liferay.portal.service.UserLocalService interface – the service that enables one to save/update/look up information about portal users – with our own this way.
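A sketch of such a service override in liferay-hook.xml (the implementation class name is our own choice):

```xml
<hook>
	<service>
		<!-- the portal interface to override and the replacing implementation -->
		<service-type>com.liferay.portal.service.UserLocalService</service-type>
		<service-impl>com.integratingstuff.liferay.hooks.CustomUserLocalServiceImpl</service-impl>
	</service>
</hook>
```

Typically, such an implementation extends the corresponding generated wrapper class (UserLocalServiceWrapper) and overrides only the methods it needs to change.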

Soon: overriding actions

As stated in this blog article, in the next releases of Liferay, it will be possible to override Liferay Struts actions with hooks as well.

Unlike Liferay services, these Struts actions are not defined as Spring beans, and in the past they required the EXT model to be overridden. These Struts actions are concrete classes, not interfaces; they are part of the internal Liferay code and depend heavily on its implementation. It is often not a good idea to override them, since upgrading your version of Liferay would likely break them. Still, it is not uncommon to override some of the important ones, like LayoutAction (to get custom rendering behavior of any page fragment) and LoginAction (to get custom portal login functionality that cannot be covered by the Liferay event model), if need be.

Soon: adding servlet filters

Apparently, soon it will also become possible to add servlet-filter and servlet-mapping declarations to your hook, making it possible to add custom servlet filters to Liferay itself.

Our hook

Our portal.properties override file (in src/main/resources) looks like this:

servlet.service.events.pre=com.integratingstuff.liferay.hooks.CustomPreEventAction
servlet.service.events.post=com.integratingstuff.liferay.hooks.CustomPostEventAction

Basically, we are adding events before Liferay starts to process a request (but after any Liferay servlet filter) and after the processing of every request.

As stated in the comments of the portal.properties documentation, when overriding the portal event properties, we have to point to classes that extend com.liferay.portal.kernel.events.Action.

So when implementing these classes, we have to extend this Action class:

public class CustomPreEventAction extends Action {
	private static final Log log = LogFactoryUtil.getLog(CustomPreEventAction.class);

	public void run(HttpServletRequest request, HttpServletResponse response)
			throws ActionException {
		log.info("Request from ip: " + request.getRemoteAddr());
		request.setAttribute("startOfRequest", new Date());
	}
}

public class CustomPostEventAction extends Action {
	private static final Log log = LogFactoryUtil.getLog(CustomPostEventAction.class);

	public void run(HttpServletRequest request, HttpServletResponse response)
			throws ActionException {
		Date now = new Date();
		Date startOfRequest = (Date) request.getAttribute("startOfRequest");
		if (startOfRequest != null) {
			log.info("Request took: " + (now.getTime() - startOfRequest.getTime()) + "ms");
		}
	}
}

Note though that the request and response that are passed to these actions are actually wrapper objects managed by Liferay itself. Some things, such as invalidating or modifying the session of the request, cannot be done with these wrapper objects.

Other files


There is also the optional liferay-plugin-package.properties file. It is good practice to add it, and it offers some extra features.

author=Liferay, Inc.

#portal.dependency.jars=portal-impl.jar, struts.jar, commons-logging.jar, log4j.jar, slf4j-api.jar, slf4j-log4j12.jar, commons-codec.jar

Notice the commented line. With portal.dependency.jars you can supply jars that have to be copied from the portal root to the lib folder of the hook. This way, you can easily write a hook that depends on the portal implementation (for example, if you want to override one of the Struts actions that are part of portal-impl).

Even if you do not specify any portal.dependency.jars, Liferay will still copy its log4j.jar, commons-logging.jar and util-java.jar into the lib folder of the hook, so adding these this way (or having them in the lib folder of the war) is not necessary.


If you are using Maven, you will need dependencies like the following in your pom.xml file, of which some are Liferay specific (the versions shown are illustrative):

<dependency>
	<groupId>com.liferay.portal</groupId>
	<artifactId>portal-service</artifactId>
	<version>6.0.5</version>
	<scope>provided</scope>
</dependency>
<dependency>
	<groupId>javax.servlet</groupId>
	<artifactId>servlet-api</artifactId>
	<version>2.4</version>
	<scope>provided</scope>
</dependency>

Deploying the hook

Deploying the hook is very straightforward. You can just drop the resulting war in the deploy directory of your Liferay install or add it to your server in your Eclipse/other IDE.

Getting started with portals: writing a portlet with Spring Portlet Mvc and deploying it to a portal

In this article, we first talk about portals and when they should be used. Then we talk about Java portlets and their specification, after which we write a portlet with Spring Portlet Mvc. Finally, we deploy the portlet we wrote on Liferay, the leading Java portal.


A portal is a collection of windowed mini web applications, called portlets, which support features like personalization, content aggregation, integration into a foreign portal, authentication and customization. The portal itself usually offers things like an eventing system, single sign on, a portal search, tons of community stuff and facilitates easy communication between the different windows/portlets.

In the screenshot below we see an example of a portal page. Notice all the different windows, each one representing a small web application.

Note: Our definition is the definition of an enterprise portal. A web portal is something different: it is just a portal to other web pages and does not necessarily have the windowed mini web applications; in theory it could be just a long list of external links, however unsexy that may be.


Some years ago, portals were heavily hyped, as the future of the web even, but these days, not anymore. I even met some people who think they are plain useless and should be buried. Then I met some enthusiasts again who see more uses for them than I do.


Community websites

A portal like Liferay offers tons of features out of the box. If you need an instant messaging system, wikis, forums, message boards, document management, auditing, polls, a chat system, friends lists, blogs and calendars, combined with custom development, Liferay is probably one of the best choices around. You definitely do not want to start building all these things – which have been implemented a thousand times before – yourself. You could use separate packages, such as JForum, but these would still need a lot of integration and custom work. You could go with a PHP solution such as Drupal or WordPress, but if you are a Java developer or a member of a team of Java developers, you are probably not a fan of doing all the custom development in PHP. Hence, Liferay.
To put it boldly: if I had to make a Facebook, LinkedIn or Youtube, I would make it with Liferay.
Let’s hope people are not going to fire load issue questions at me now. If they do, I am just going to point them to the Liferay Performance whitepapers.

Since I am on a portal/Liferay marketing roll anyway, people can view a list of sites that use Liferay here. Unlike what people might be thinking by now, I am not getting paid by Liferay for this article.

Enterprise websites backed by a SOA

But what if you do not need all the community stuff?
In which case is using the portlet specification over the servlet specification useful by nature, without having to fall back on other portal features to defend its use?
This can only be when the separate portlets add value over building the solution as a few classic web applications. If one portlet allows the user to select a customer, other portlets could be fed this selection as input, for example. This beats building a customer-selection view in every separate web application.
Unfortunately, this will only work when there is already a clean modularization on the backend. If the backend consists of one database and one huge model, things will get too intertwined, and such modularization on the view level will become a headache instead of an advantage.
This is why I think an enterprise portal needs to be backed by a service oriented architecture, in which there are clearly separated services that should be integrated with each other. The power of portlets comes from the ease with which they integrate disparate sources. So, they should be used when there is not one source of content but multiple sources of content.

When not to use

Most web applications do not need a community and most web applications are (arguably) not an integration of many disparate content sources. Hence, in my opinion, portals are rather a niche product than a good overall solution.

Some argue that portals offer another level of abstraction over web development and that more things are taken care of for the developer. Sure, but at the same time the developer is confronted with a lot more complexity and is constrained more, and in my experience many portal projects end up modifying the portal software itself, which ties the developed software to some particular portal, destroying the portability argument. For Liferay, for example, people usually rely on the Liferay-specific window state “exclusive” for Ajax requests, use the Liferay-specific action-url-redirect to apply the post-redirect-get pattern, and when custom authentication logic is necessary, Liferay hooks are built which rely on Liferay-specific API. Sometimes you wonder why there is only a portlet spec and not a portal spec.

Portals are not a one-size-fits-all solution. If you do not have the feeling a portal is a perfect fit for your needs, you probably should not use one.


We now know what a portlet is: a mini web application, a window on a portal page. But we didn’t dive into the technical details until now.

The portlet container

In short, the portal utilizes a portlet container to manage the lifecycle of the portlets just like a servlet container is used to manage the lifecycle of servlets. A portlet container is responsible for the initialization, request processing and destruction of portlets. The Java Portlet Specification defines the contract between a compliant portlet container and portlets. This standardization allows for portability of portlets between portal implementations.

Portlet versus servlet development

Portlet development is very similar to servlet development. The portlet API is modeled after the servlet API: Portlet, PortletContext, PortletRequest and PortletResponse are very similar to their servlet counterparts. The major difference is that portlets only render a fragment of an HTML page, instead of a whole page.

The portlet specification

However, there still are some differences between portlet and servlet development.
We discuss the 3 most important portlet specific features now.

a. Different phases

With portlets, a request has at least two distinct phases: the action phase and the render phase. The action phase is executed only once; this is the moment where any backend actions occur. In the render phase the view is rendered to the user. Unlike the action phase, the render phase can be executed multiple times for a single request. With servlets, there is no such distinction on the API level, and these phases are something a portlet developer has to get used to.

b. Portlet modes

A portlet can have different display modes. The portlet mode determines what content the portlet should generate. The Portlet API defines 3 portlet modes: view, edit and help. In view mode, a user typically views data. In edit mode, a user typically modifies data. In help mode, a user can consult help about the portlet. A portlet developer can add any number of custom portlet modes to a portlet.
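In portlet.xml, the modes a portlet supports are declared per mime type. A minimal sketch of such a declaration, assuming only the three standard modes:

```xml
<supports>
	<mime-type>text/html</mime-type>
	<!-- the three standard portlet modes defined by the Portlet API -->
	<portlet-mode>view</portlet-mode>
	<portlet-mode>edit</portlet-mode>
	<portlet-mode>help</portlet-mode>
</supports>
```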

c. Window states

A window state indicates the amount of portal page space that should be assigned to a portlet. The portlet API defines 3 window states: normal, minimized and maximized. Any portal is allowed to define additional window states. Liferay, for example, has one additional window state, “exclusive”, which just renders the page fragment coming from the portlet, without decorating it with the entire portal page. Very useful when there is a need for Ajax integration.

In the screenshot below, we see the portal page with the portlets again. This time however, some of them have the portlet window state “minimized”, while the others still have window state “normal”. Note that every portlet has some icons. These will either change the window state (to maximize it, for example), or they will change the portlet mode, from “view” mode to “edit” mode for example.

Note: There are two portlet specifications: JSR 168 (the portlet 1.0 API) and JSR 286 (the portlet 2.0 API). In JSR 286, a lot of shortcomings of JSR 168, which made writing vendor-neutral portlets difficult, were addressed. The most important new JSR 286 features are interportlet communication (next to the action and render phases, there is also an event phase in JSR 286), WSRP 2.0 alignment, support for Ajax, and portlet filters and listeners. In my opinion, JSR 286 was a big step in the right direction, but it definitely did not solve all common cases in which vendor-specific API is necessary.

Writing a portlet

Spring Portlet Mvc

The Spring Portlet Mvc framework is a mirror image of the Spring Web Mvc framework, and uses the same underlying view abstractions and integration technology. Therefore, different Web Mvc classes will be reused.
It is also important to realize that when using Spring Portlet Mvc, you will no longer write your own Portlet but use the Spring Mvc one (just like you don’t write your own servlets when using Spring Web Mvc).

Project structure

We are going to create one of the simplest Spring Portlet Mvc portlets possible.

The files we will need are the following:

  • /src/main/java/com/integratingstuff/portlets/test/SampleController.java
  • /src/main/webapp/WEB-INF/portlet.xml
  • /src/main/webapp/WEB-INF/springContextConfig.xml
  • /src/main/webapp/WEB-INF/web.xml
  • /src/main/webapp/WEB-INF/jsp/demo.jsp
  • /pom.xml

In comparison with a regular webapp, there is only one portlet specific file: portlet.xml.


The portlet.xml file is our most important file. It defines our portlet.

<?xml version="1.0" encoding="ISO-8859-1"?>
<portlet-app xmlns="http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd"
	version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
	<portlet>
		<portlet-name>portletMvcDemo</portlet-name>
		<portlet-class>org.springframework.web.portlet.DispatcherPortlet</portlet-class>
		<init-param>
			<name>contextConfigLocation</name>
			<value>/WEB-INF/springContextConfig.xml</value>
		</init-param>
		<supports>
			<mime-type>text/html</mime-type>
			<portlet-mode>view</portlet-mode>
		</supports>
		<portlet-info>
			<title>Portlet Mvc Demo</title>
		</portlet-info>
	</portlet>
</portlet-app>

Spring Portlet Mvc is designed around the DispatcherPortlet, a portlet that dispatches requests to Spring Portlet Mvc controllers. The DispatcherPortlet does more than only that, however: it also makes sure that the portlet is completely integrated with the Spring ApplicationContext, so the developer is able to use every other Spring feature.

You will probably notice that, in this tutorial, we are developing a portlet that only supports the “view” portlet mode.

Note also how we point the portlet to our Spring contextConfigLocation. If we did not add this init-param, the DispatcherPortlet would look for the default [portlet-name]-portlet.xml file in the WEB-INF directory, and if that file were not present, an exception would be thrown at deploy time.


On initialization of the DispatcherPortlet, the framework will create the bean definitions defined in this file.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

	<bean id="sampleController" class="com.integratingstuff.portlets.test.SampleController"/>

	<bean id="portletModeHandlerMapping" class="org.springframework.web.portlet.handler.PortletModeHandlerMapping">
	    <property name="portletModeMap">
	        <map>
	            <entry key="view" value-ref="sampleController"/>
	        </map>
	    </property>
	</bean>

	<bean id="viewResolver" class="org.springframework.web.servlet.view.UrlBasedViewResolver">
	    <property name="prefix" value="/WEB-INF/jsp/"/>
	    <property name="suffix" value=".jsp"/>
	    <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"/>
	</bean>
</beans>

We have one custom controller, which we will discuss later.
Note how this controller is mapped on the view portlet mode in the PortletModeHandlerMapping bean, which is the bean the DispatcherPortlet will use to decide which controller to execute.

The last bean to discuss is our viewResolver. Note that this bean is a member of the regular spring mvc framework and not a member of a portlet specific package. This is because spring portlet mvc reuses all spring mvc view technologies, as we already said when we introduced Spring Portlet mvc. How this is possible is discussed in the next section.


We also need a valid web.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<web-app id="WebApp_ID" version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">

	<servlet>
		<servlet-name>ViewRendererServlet</servlet-name>
		<servlet-class>org.springframework.web.servlet.ViewRendererServlet</servlet-class>
	</servlet>
	<servlet-mapping>
		<servlet-name>ViewRendererServlet</servlet-name>
		<url-pattern>/WEB-INF/servlet/view</url-pattern>
	</servlet-mapping>
</web-app>

When using Spring Portlet Mvc, this file will always declare the ViewRendererServlet. To be able to reuse all the view technologies from Spring Web Mvc, the PortletRequest and PortletResponse need to be converted to an HttpServletRequest and HttpServletResponse in order to execute the render method of the (regular Spring Web Mvc) View. To do this, the DispatcherPortlet uses the special ViewRendererServlet, which only exists for this purpose.


package com.integratingstuff.portlets.test;

import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

import org.springframework.web.portlet.ModelAndView;
import org.springframework.web.portlet.mvc.AbstractController;

public class SampleController extends AbstractController {
	@Override
	public ModelAndView handleRenderRequestInternal(RenderRequest request, RenderResponse response) throws Exception {
		ModelAndView mav = new ModelAndView("demo");
		mav.addObject("message", "Check it out!");
		return mav;
	}
}

Note that there are actually two methods our subclass of AbstractController can override: handleActionRequestInternal and handleRenderRequestInternal. We are not doing any backend action, we are just building and rendering the view, hence we only override the handleRenderRequestInternal method.


We are returning a “demo” view from our controller, which our viewResolver resolves to the following demo.jsp:

<%@ page contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%> 
This is our test portlet.<br/> 
Message: ${message} 

This is just a regular JSP.


This project was developed as a Maven project. The Maven pom.xml file is not essential for portlets at all, but for the people copy-pasting along, the pom is printed here for easy reference (artifact coordinates and versions are illustrative):

<project xmlns="http://maven.apache.org/POM/4.0.0"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<groupId>com.integratingstuff</groupId>
	<artifactId>portlet-mvc-demo</artifactId>
	<version>1.0-SNAPSHOT</version>
	<packaging>war</packaging>
	<dependencies>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-webmvc-portlet</artifactId>
			<version>3.0.5.RELEASE</version>
		</dependency>
		<dependency>
			<groupId>javax.portlet</groupId>
			<artifactId>portlet-api</artifactId>
			<version>2.0</version>
			<scope>provided</scope>
		</dependency>
	</dependencies>
</project>
We can now build our portlet. If we run “mvn package” on our project, a war is generated which is deployable on a portal.

Optional: liferay-portlet.xml

Usually, a Liferay portlet also has a liferay-portlet.xml. This is only really necessary if you want to use some Liferay-specific features, but when using Liferay, it is always good practice to include it. If you want, add the following liferay-portlet.xml to your portlet application:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE liferay-portlet-app PUBLIC "-//Liferay//DTD Portlet Application 6.0.0//EN" "http://www.liferay.com/dtd/liferay-portlet-app_6_0_0.dtd">

<liferay-portlet-app>
	<portlet>
		<portlet-name>portletMvcDemo</portlet-name>
		<instanceable>true</instanceable>
	</portlet>
</liferay-portlet-app>

Deploying the portlet to Liferay

1. Download Liferay

First, download Liferay Community Edition 6.0 from the Liferay website. We use the version bundled with Tomcat. Extract the downloaded zip to your hard drive.

2. Install the Eclipse Liferay IDE

Although it is not necessary to use the Liferay IDE for Eclipse, we recommend it in this article. You can read about it at Liferay IDE Overview and figure out how to install it at the Liferay IDE Installation Guide.

In short, you just have to add the Liferay IDE update site within Eclipse (Help>Install New Software in Eclipse) and install the plugins present at that location.

Note: There is also a Liferay Developer Studio, which is a bit more extended than the regular Liferay IDE, but for the purpose of this article, the Liferay IDE suffices.

Add the Liferay Server to the Eclipse Servers panel

Open the Eclipse Servers panel (Window>Show View>Servers) and right-click in this window. Choose New>Server, open the folder “Liferay, Inc.” and select Liferay v6.0 CE Server (Tomcat 6) as Server Type. Click next and enter the Liferay Tomcat directory: point to the tomcat directory within your extracted Liferay installation. Add the server.

Start the server

Right-click on the server and press “Start”. The server will now be started and you will be able to access the portal in your browser through http://localhost:8080

Note: If you have trouble starting the server, add the following VM arguments: -Xms1024M -Xmx1024M -XX:MaxPermSize=256M, and increase the server timeout (double click on the server in the Servers window).

Add the portlet to the server

Right-click on the server again and choose “Add and Remove”. You can now choose to add any projects eligible to be deployed on the server (if your project is not, you probably need to add the dynamic web module Eclipse facet to your project). Our portlet project should be in the list under “Available”. Move it to the server (“Configured”). It should automatically be published (if not, right-click on the server again and press “Publish”).

Note: Alternatively, if you are not using Eclipse, you can run the Maven package goal on the project and put the resulting war in the deploy dir of the extracted Liferay installation. Then run the Liferay server by running “startup” in the bin folder of the Tomcat folder of the Liferay folder.

Add the portlet to a portal page

Now Liferay is started, log on to Liferay (test@liferay.com with password test is the default test user on Liferay 6.0) and choose Add in the menu at the top, then More>Undefined>our portlet. The portlet we made will then be added to the page.

The portlet we developed is now deployed and visible on a portal. See how we already edited its background by using the portal “Look and Feel” feature.

Migrating from Spring Security 2 to Spring Security 3

Spring Security 3 offers a number of improvements over Spring Security 2, such as the addition of Spring EL expressions in access declarations and better support for session management. Spring Security 3 also introduces a number of breaking changes, such as the removal of the NTLM filter.

In most cases, a migration from Spring Security 2 to Spring Security 3 is rather straightforward, but when custom spring security filters are present, additional work needs to be done.

In this article we discuss all changes required to do the migration. We also recommend the order in which the changes are discussed as the order in which to do them.

1. Importing the dependencies

One of the biggest overall changes between Spring Security 2 and Spring Security 3 is that Spring Security 3 takes a more modular approach. Spring Security 3 was divided into modules. The modules you will encounter in pretty much any webapp that is secured with spring-security are:

  1. spring-security-core(core classes such as Authentication, Voter, and the like)
  2. spring-security-web(all core classes that are dependent on servlet api, such as all spring security filters)
  3. spring-security-config(needed to use the spring security xml namespace in spring applicationContext files)

There are a lot of other, more specific modules, such as spring-security-ldap and spring-security-acl.

A Maven dependency such as (versions are illustrative)

<dependency>
	<groupId>org.springframework.security</groupId>
	<artifactId>spring-security-core</artifactId>
	<version>2.0.5.RELEASE</version>
</dependency>

would change to the following dependencies:

<dependency>
	<groupId>org.springframework.security</groupId>
	<artifactId>spring-security-core</artifactId>
	<version>3.0.5.RELEASE</version>
</dependency>
<dependency>
	<groupId>org.springframework.security</groupId>
	<artifactId>spring-security-web</artifactId>
	<version>3.0.5.RELEASE</version>
</dependency>
<dependency>
	<groupId>org.springframework.security</groupId>
	<artifactId>spring-security-config</artifactId>
	<version>3.0.5.RELEASE</version>
</dependency>
2. Changing the classnames

One of the effects of the modularisation is that a lot of spring security framework classes were moved to a more specific package.

SecurityContextHolder’s package, for example, changed from org.springframework.security.context to org.springframework.security.core.context.

In order to do a successful migration, it is necessary to change the import declaration of all spring security classes in classes that make use of them.

3. Handling small changes to the API

Usually, just changing the imported spring security class’s package name is enough. However, the API of some spring security framework classes changed as well. Usually, these changes are rather small and easy to deal with.

Some of these small changes that come to mind:

  • SpringSecurityException does not exist any longer. The quickest migration for your own exceptions is to let them extend RuntimeException or NestedRuntimeException instead of SpringSecurityException. Note that the specific Spring Security exceptions, such as AccessDeniedException and AuthenticationException, still exist, but now extend RuntimeException directly.
  • The decide method of AccessDecisionVoter now takes a collection of ConfigAttribute instead of a ConfigAttributeDefinition. ConfigAttributeDefinition was basically just a decorator for a collection of ConfigAttribute elements.
  • SavedRequest is now an interface instead of a concrete class. In order to instantiate a default implementation, instantiate DefaultSavedRequest.
  • The default User implementation of the UserDetails interface does not have a setAuthorities method anymore. The authorities are passed to the constructor; in the implementation the variable is final.

4. Rewriting the filters

Usually, a migration from Spring Security 2 to Spring Security 3 is straightforward. However, if you implemented custom spring security filters, chances are you will have a bit more work. The base filter class changed from the Spring Security specific SpringSecurityFilter to the generic Spring web GenericFilterBean, and the API of the filter classes changed a lot more than the API of other classes.

Especially AbstractProcessingFilter (now AbstractAuthenticationProcessingFilter) changed. In Spring Security 3.0, the determineFailureUrl and determineTargetUrl methods were removed in favour of adding separate handler instances (AuthenticationSuccessHandler, AuthenticationFailureHandler, ...) to these classes, which makes the filters more configurable/reusable. A similar story holds for LogoutFilter (determineRedirectUrl was removed in favour of adding a LogoutSuccessHandler).
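A sketch of how such handlers are wired onto a custom filter in Spring Security 3 (the filter class and URLs are hypothetical; the two handler classes are the standard Spring Security 3 implementations):

```xml
<bean id="myAuthenticationFilter" class="com.example.CustomAuthenticationProcessingFilter">
	<property name="authenticationManager" ref="authenticationManager"/>
	<!-- replaces the old determineTargetUrl override -->
	<property name="authenticationSuccessHandler">
		<bean class="org.springframework.security.web.authentication.SimpleUrlAuthenticationSuccessHandler">
			<property name="defaultTargetUrl" value="/home"/>
		</bean>
	</property>
	<!-- replaces the old determineFailureUrl override -->
	<property name="authenticationFailureHandler">
		<bean class="org.springframework.security.web.authentication.SimpleUrlAuthenticationFailureHandler">
			<property name="defaultFailureUrl" value="/login?error=1"/>
		</bean>
	</property>
</bean>
```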

5. Changing the security related applicationContext files

Last but not least, the spring applicationContext in which all the spring security beans are defined needs to change too.

At the very least, the spring security namespace schema location will need to be changed from http://www.springframework.org/schema/security/spring-security-2.0.xsd to http://www.springframework.org/schema/security/spring-security-3.0.xsd. Forgetting to do this will result in a BeanDefinitionParsingException upon application deploy.

The spring security xml namespace underwent quite some changes too.
One of the most notable changes is the removal of custom-authentication-provider and custom-filter inside authentication provider and custom spring security filter bean declarations. Authentication providers are now injected explicitly or configured as <security:authentication-provider ref=""/> within the authenticationManager bean. Custom filters can be declared explicitly in the spring security filter chain bean declaration, or they can be included with <custom-filter> within the <http> element in case this one is used.
Another notable change within the spring security xml namespace is that authentication-provider is now declared as a child of authentication-manager.
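A sketch of what this looks like in a Spring Security 3 applicationContext (bean names are illustrative):

```xml
<security:authentication-manager alias="authenticationManager">
	<!-- authentication-provider is now a child of authentication-manager -->
	<security:authentication-provider ref="myAuthenticationProvider"/>
</security:authentication-manager>

<security:http>
	<!-- a custom filter is slotted into the chain at an explicit position -->
	<security:custom-filter ref="myAuthenticationFilter" position="FORM_LOGIN_FILTER"/>
</security:http>
```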

It is also necessary to change the classnames of the spring security framework classes that changed package in the applicationContext file(s).

And finally, some beans expect other properties than they used to. These injections need to be corrected too. For example, FilterSecurityInterceptor does not take a String anymore: it needs to be injected with a proper securityMetadataSource, or needs to be declared with a fully configured security:filter-security-metadata-source tag.

Network Adapters in VirtualBox

In this article I discuss VirtualBox as a useful tool for Java development and I cover how to configure and use the different available network adapters.


VirtualBox is a virtualization software package. It can be installed on almost any operating system (distributions for Windows, Mac and Linux are available). The operating system it is installed on is called the host. Within the application, additional operating systems, each known as a guest OS, can be added and run.
For example, if you are on an Ubuntu host machine, you can add a Windows Vista guest OS to your VirtualBox, which will enable you to run Windows Vista from within your Ubuntu OS.

VirtualBox in Java development

VirtualBox has a lot of uses. A popular one is Linux users running a Windows instance when they need the occasional Windows program that doesn’t have a Linux equivalent. Another popular one is server admins trying out a new server configuration, for example when moving from Ubuntu Server to CentOS, on a virtual machine first.

For development, VirtualBox has some interesting uses as well.
VirtualBox enables developers to share development environments.
In particular, it is a great way to share databases for development.
I’ve worked on a Java web application that connected to a MySQL database, a DB2 database, an OpenLDAP directory and an Alfresco document management system. Sharing one VirtualBox image on which all of those were already installed, configured and filled with test data turned out to be a lot less work than installing and configuring everything on each developer machine separately.

VirtualBox Network Adapters

One of the hassles with VirtualBox is configuring the network adapter. You usually want the guest OS to be reachable by the host OS, and you usually want the guest OS to have network/internet access too.
With different network adapters come different options.


Host-only Adapter

This is one of the simplest interfaces. As the name states, the guest only knows about the host and the host knows about the guest (for example, a database on the guest can be reached by the host by just using the IP of the guest in the connection string). No physical network interface is used. The guest won't have network/internet access.
Although this is supposed to work fine if you don't need internet access, I have run into problems when trying to reach an Oracle database installed on the host machine (Mac or Windows Vista host, Ubuntu or WinXP guest). The command tnsping does not work, while a regular ping does. So it seems like the Oracle instance does not listen on this type of network interface.


Internal Network

Used to create a software-based network that is not visible to applications on the host or to the outside world, but at most to other virtual machines. No physical network interface is used. I have never used this mode.


Bridged Adapter

This is probably the most advanced option. With this network adapter, VirtualBox connects to one of your installed network cards and exchanges network packets directly, circumventing the host operating system entirely. This means that the guest will ask the available DHCP server for its own IP address and appear on the network, reachable by the outside world, as if it were a separate physical machine.
However, this option is not always available. If the host does not have network access, bridged mode will never work. Also, some internet providers only allow one IP address. In this case, when the guest asks for an IP, it won't get one, since the host already holds the one available IP, and the guest won't have network access at all. It won't be able to reach the host, nor will the host be able to reach it.


NAT

Network Address Translation (NAT) is the default networking mode in VirtualBox. A guest with NAT enabled acts much like a real computer connecting to the Internet through a router, the router in this case being the VirtualBox application on the host machine. The disadvantage of NAT is that, just as with a private network behind a router, the virtual machine is invisible to the outside world, including to the host. So the guest has network access, but no one has access to the guest.

To make a database on a NAT guest available on the network, you will need to set up port forwarding. Luckily, this is quite easy.
Imagine that we have an Oracle Express Edition database at port 1521 on a guest OS with NAT enabled, and we want to access it from the host.
We configure port forwarding like this (VirtualBox->Selected Guest OS->Settings->Network->Network Adapter NAT->Advanced->Port Forwarding):

Port forwarding with NAT on VirtualBox

(where the IP shown is the address on the guest OS)
the database will be reachable through JDBC from the host with a connection string like “jdbc:oracle:thin:@localhost:1521:xe”.
In the above configuration, I also added a rule that makes it possible to SSH into the guest through port 2222 on the host machine.
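The same NAT port-forwarding rules can also be added from the host command line with VBoxManage instead of the GUI. This is a hedged sketch: the VM name "MyGuest" is a placeholder for the name of your guest in VirtualBox:

```shell
# Forward host port 1521 to guest port 1521 (the Oracle XE listener)
VBoxManage modifyvm "MyGuest" --natpf1 "oracle,tcp,,1521,,1521"

# Forward host port 2222 to guest port 22 (SSH into the guest)
VBoxManage modifyvm "MyGuest" --natpf1 "ssh,tcp,,2222,,22"
```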

Starting iPhone development as an Android developer


A while ago I posted an article about starting Android development as a Java developer.
I am posting a similar article about iPhone development now, and I thought it would be more useful if it would be written from the specific point of view of an Android developer.
First we are going to talk about Objective-C – the language in which iPhone apps are developed – explain how to unit test Objective-C code, and discuss the main obstacle when coming from a Java background: manual memory management.
Then we create a simple iPhone app: we talk about the iPhone equivalents of the layout xmls, the Activity class and the Application class in Android.
We end by mentioning that the UI design of an iPhone app is very different from that of an Android app, and that when porting from Android to iPhone, this is actually one of the bigger difficulties.

This article is aimed at people who know Java, have a decent background in programming languages and have a Mac available on which they can install the development environment for iOS – XCode and Interface Builder. XCode and Interface Builder cannot be installed on Windows or Linux operating systems.

Learning Objective-C

Objective-C is nothing more than C augmented with object-oriented capabilities. Although at first it sounds like you will have to spend a lot of time learning C, this is not the case. You will need some background, but everything in Objective-C is Java-like, and the low-level C details are hidden. To learn Objective-C thoroughly, I'd recommend reading “Learn Objective-C for Java Developers”.

If we have a Java class like:

public class Store {

	private List<Product> products;
	private String name;
	private boolean open;

	public List<Product> getProducts() {
		return products;
	}

	public void setProducts(List<Product> products) {
		this.products = products;
	}

	public String getName() {
		return name;
	}

	public void setName(String name) { = name;
	}

	public boolean isOpen() {
		return open;
	}

	public void setOpen(boolean open) { = open;
	}

	public List<String> getProductNames() {
		List<String> productNames = new ArrayList<String>();
		for (Product product : products) {
			productNames.add(product.getName());
		}
		return productNames;
	}
}

Then the corresponding objective-C class would be divided into the following two files:


#import <Foundation/Foundation.h>

@interface Store : NSObject {
	NSArray *products;
	NSString *name;
	BOOL open;
}

@property (nonatomic, retain) NSArray *products;
@property (nonatomic, retain) NSString *name;
@property BOOL open;

- (NSArray *) getProductNames;

@end



#import "Store.h"
#import "Product.h"

@implementation Store

@synthesize products;
@synthesize name;
@synthesize open;

- (NSArray *) getProductNames{
	NSArray *productNames = [NSMutableArray new];
	for (Product *product in products){
		[productNames addObject:];
	}
	return productNames;
}

- (id) init {
	self = [super init];
	if (self!=nil) {
		products = [NSMutableArray new];
	}
	return (self);
}

- (void) dealloc{
	[products release];
	[name release];
	[super dealloc];
}

@end


The first being the interface definition and the second being the implementation.

Although the syntax looks different between Java and Objective-C, there is basically a one-to-one mapping for everything in there.
The only real difference is the implemented dealloc method, which we will get to soon, and which indicates that for iPhone apps, we will have to manage our memory manually.

The interface definition

In the interface definition we declare our Store class as a subclass of NSObject, basically the equivalent of the java.lang.Object class. Then we declare our fields. Notice that we are declaring pointers to objects instead of the objects themselves (notice the use of the * symbol – once declared, however, the * symbol is not written anymore, even though we are working with the pointer!). This is only the case for objects, not for primitives like BOOL or int. Anyway, if you use the objects the standard way, you can just treat them as in Java.

After the fields definition, we declare our properties with the @property attribute. Notice that for the objects, we declare (nonatomic, retain). nonatomic means that the property (the getter and setter for the field) does not have to be threadsafe. retain comes down to keeping the object until release is explicitly called. (nonatomic, retain) is what you will use for all your properties if you want Java-like behavior. So, although you definitely should read about it, for the purpose of this article you can just take (nonatomic, retain) as is.

Then we define one instance method (indicated by the -; a + would indicate a class method) to get the names of all products.

The implementation

In the implementation file, we synthesize our properties. While @property only indicated that we were going to implement the property (the getter and setter), the @synthesize directive actually generates these methods for us, without us having to write them explicitly.

Then we implement our method getProductNames. Notice how method calls (well, messages in Objective-C terminology) are put between square brackets. [NSMutableArray new] means the class method new is called on the NSMutableArray class. [productNames addObject:] calls addObject on the productNames instance and passes the name of the product as a parameter. Note that dot notation ( is a special (Java-like) notation for properties, made possible through the @property directive.

Then we implement init – which is the constructor – and we also implement dealloc, which we will discuss later: this method releases the retained objects.

Note: Although there seems to be a lot more to learn, porting from Java is quite easy. The primitive types are easy to translate, NS(Mutable)String has roughly the same methods as String, and the collection classes – ArrayList, HashSet, HashMap – translate easily to NS(Mutable)Array, NS(Mutable)Set and NS(Mutable)Dictionary. Note that these Objective-C base classes are not mutable, but they all have a mutable subclass, so to get the mutable Java equivalent, you will have to declare [NSMutableArray new] instead of [NSArray new]. Besides small differences like these – there are more of them – I never struggled with porting any piece of code. Often, I would just copy-paste the Java code into XCode and translate almost literally line by line.
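To make the mapping concrete, here is a small sketch of the usual Java counterparts of the Foundation collection classes mentioned above. All names in it are illustrative only, not taken from the article's Store example:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch: Java counterparts of the Foundation collection classes.
public class CollectionMapping {
    public static void main(String[] args) {
        // NSMutableArray -> ArrayList
        List<String> productNames = new ArrayList<String>();
        productNames.add("Dvds");

        // NSMutableSet -> HashSet
        Set<String> categories = new HashSet<String>();
        categories.add("Media");

        // NSMutableDictionary -> HashMap
        Map<String, Integer> stock = new HashMap<String, Integer>();
        stock.put("Dvds", 10);

        // The closest Java analogue of the immutable NSArray base class is an
        // unmodifiable view: add() fails on it, just like addObject fails on
        // a plain NSArray at runtime.
        List<String> immutableNames = Collections.unmodifiableList(productNames);
        System.out.println(immutableNames.get(0));
    }
}
```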

Installing XCode and creating a Project

Talking about XCode: if you haven't already done so, this is a good time to install it.
XCode is the development environment for Mac and iOS (the iPhone operating system) software. It is the equivalent of Eclipse (or any other Java IDE) for Android developers.
To get it, you will need to register as an Apple developer and download it from the Apple Dev Center.

Once installed, fire it up and choose “Create a New XCode Project”. On the left-hand side select “iOS – Application”, on the right-hand side the “View-based Application” template, click “Choose” and give your project a name.

Create Project

The most used part of “Groups and Files” is the section right under the project name. You will see the folders “Classes” and “Resources” there. Our classes are placed in Classes, and our resources in Resources, like our application info file (the AndroidManifest.xml equivalent) and our user interface files (the equivalents of the Android layout xmls). Note that these folders are grouping folders within XCode. They do not correspond to actual folders on the file system! If you were to create a new group “images” under Resources and put an image “products.png” there, you would still reference it as “products.png” in the app, not as “images/products.png”.

Create a new group “Model” under Classes and create the previous Store class and the following Product class in this new group (New File>Cocoa Class>Objective-C Class):


#import <Foundation/Foundation.h>

@interface Product : NSObject {
	NSString *name;
}

@property (nonatomic, retain) NSString *name;

@end


#import "Product.h"

@implementation Product

@synthesize name;

@end


Unit testing our code

Something most Objective-C/iPhone development books don't mention is how to write unit tests in XCode.
Coming from a Java background, however, being used to unit testing everything, you really want to write unit tests.
Also, when learning a new language, getting your unit tests up and running is probably the fastest way to execute a piece of code and see what happens.

Luckily, the latest versions of XCode have unit test support built in. To be able to run unit tests, we will need to add a unit test bundle target to the project.
From the project root right-click menu, choose Add>New Target and then “iOS>Cocoa Touch>Unit Test Bundle”.

Create Test Bundle

Next, create a new group called “Unit Tests” under “Classes”.
Then create a unit test by choosing New File>Cocoa Class>Objective-C Test Class. Note that you could also choose New File>Cocoa Touch Class>Objective-C Test Class, but then our unit test would be supplied with stub methods for specific testing on touch devices. Since we are only going to unit test our model classes now, we don't need those.
Give it the name StoreTestCase – by convention, names of unit tests should end with “TestCase” – and make sure you uncheck the main iOS target and check the UnitTests test bundle as a target. You don't want the unit test to end up in your actual iPhone app, and you do want it to run when the test bundle is run.

Create Test Case

Now, change the content of the StoreTestCase.m file to:

#import "StoreTestCase.h"
#import "Store.h"
#import "Product.h"

@implementation StoreTestCase

-(void) testGetProductNames{
	Store *store = [Store new];
	Product *product = [Product new]; = @"Dvds";
	[store.products addObject: product];
	NSArray *productNames = [store getProductNames];
	STAssertEquals([NSNumber numberWithInt:[productNames count]],
                   [NSNumber numberWithInt:1],
                   @"there should be exactly one product name");
	STAssertEquals([productNames objectAtIndex: 0],
                   @"Dvds",
                   @"the product name should be Dvds");
}

@end


In this test, we are creating a store with one product: dvds.
In the last two lines, we first check whether the getProductNames method returns an array with 1 element for the above case, and then we check that the value of this one element is “Dvds”.
Notice that in objective-C, String literals are prefixed with the @ sign.

To run this test, right click on Targets>UnitTests in XCode and choose “Build UnitTests”. Before you do that, make sure Store.m and Product.m are added to the UnitTests target as well, or the test won't find the Product and Store classes. You can add these files to the UnitTests target by right-clicking on a file, clicking Get Info, selecting the Targets tab and checking the UnitTests target there as well.
When “Build UnitTests” is run, the build should succeed, which indicates that the test ran successfully.

Running the test

Now we are going to make the test fail. In the Store.m implementation, change the [NSMutableArray new] declaration into [NSArray new].
The array won't be mutable anymore and won't be able to respond to the addObject message, so the test should fail now.

And indeed, when run we see the following when we take a look at the Build Results:

Failed test


Logging

The equivalent of both Java log.debug/info/error and System.out.println in Objective-C is NSLog(@”%@”, @”printSomething”). The first argument is a format string; the second argument is inserted at the %@ spot in it. Note that, since the first argument is interpreted as a format string, you should always pass the text to print as the second argument, even if it is a string literal.

XCode usage

In XCode, I am missing Eclipse-like features such as setter/getter generation, automatic organization of imports and automatic overriding/implementation of methods/interfaces. It is possible to define macros that do this kind of work, but it is hard for a beginner to figure out how to add these. I really think XCode should ship with features like these using sensible defaults (since a lot fewer assumptions can be made compared to Java – for example, it is possible to define 5 classes in a file that isn't even named after any of the 5 – one could argue that a feature like this shouldn't be included, but I don't agree).

The Code Sense of XCode is nice though – the automatic completion is very user friendly and for a full list of options, you can just press F5.

Another tip: when you hold Option and double-click on a class name, you will see the short help, from which you can go directly to the API documentation – very useful when you are just starting to get to know the available methods of the framework classes.

Memory management

So far, we’ve said that Objective-C and Java are very much alike in concepts and mappings, and ports are pretty straightforward.
There is only one catch. Although Objective-C supports garbage collection, it is not available on iOS devices, which means that we will have to manage our memory manually.
This is something Java developers are not used to.

In essence, it is really easy, but still, in the beginning I regularly had leaks reported by the Allocations and Leaks instruments.

The basic rule is that if you alloc something, you have to release it (which means calling release on the object). Note that the new method of a class is a shortcut for [[-Class- alloc] init], so if you call new, you will need to release as well.
Only when you call certain framework convenience methods, for example [NSString string], is there no need to call release, because the returned NSString is autoreleased.

Although the rule is easy, it is also easy to overlook something.
For example, in the above unit test we not only have to release the store and product instances, but the productNames array as well, even though it was not created in the code snippet itself, but inside the getProductNames method of the store instance.

Luckily, there are tools available to detect memory leaks. Instead of just running your application – ours will still just render an app with an empty screen – you can choose Run>Run with Performance Tool>Leaks from within XCode. For iOS apps, this will run the application in the iPhone simulator, while leaks are reported in the Leaks tool, as in the following screenshot:

Leaks Tool

which makes it a lot easier to test your app on memory leaks, find them and remove them.

Note: Sometimes it is not possible to call release on an object. For example, when implementing the titleForRow:… method of the Picker delegate, you will sometimes have to construct a mutable string and return it from that method. It is not possible to release it in that method, because the calling framework code still needs it. In a case like this, the easiest way is to call autorelease on the object, which adds it to the autorelease pool. However, autorelease comes with a bit of a performance penalty and also means that your objects might be kept around longer than necessary, so consider that when using it. Still, there are a lot of iOS developers out there who autorelease all their objects and only tweak this after testing/determining the memory-intensive or slow parts.

Android app development vs iPhone app development

Let us take a look at the files now that are already present in the project we created before. In the Classes folder there are already two classes defined: an AppDelegate and a ViewController. In the Resources folder there are two xib files: MainWindow and one with the same name as our UIViewController. There is also a plist, which is roughly the equivalent of the AndroidManifest.xml file on Android, in which the application icon, name and bundle are defined.

AppDelegate – the equivalent of the Android Application

Just like in Android, there is a class that represents the application as a whole: the AppDelegate. This is a good place for application initialization, such as creating or updating a database, or setting shared preferences. Note that in Android, unless a custom Application class is specified in the AndroidManifest.xml file, the default Application class is used, and the project will not contain its own implementation of this class. For iOS apps, however, there will always be a custom implementation.

Note: We are not going to discuss in detail how our app delegate is linked to our view controller, but basically, in the xxx-Info.plist file, the main interface file(xib file – we are getting to those soon) “MainWindow” is specified. In this interface, the viewController of our appDelegate is linked to our single ViewController. Although those xib files are basically user interface files, they also glue main components in an app to each other.

UIViewController – the equivalent of the Android Activity

Just like in Android, an iPhone application is divided into one or more focused things a user can do. On Android, each of these things would be implemented as an Activity – one activity for every view. On iPhone, each of these things would be implemented as a UIViewController. Similar to the Activity class, the UIViewController class has lifecycle methods that can be overridden to do something when the controller is started, stopped, comes into view, disappears from view, etc.

Let’s declare the interface of our controller as follows:

#import <UIKit/UIKit.h>
#import "Store.h"

@interface ProductDisplayViewController : UIViewController {
	Store *store;
	UILabel *storeNameLabel;
	UITextField *newNameTextField;
}

@property (nonatomic, retain) Store *store;

@property (nonatomic, retain) IBOutlet UILabel *storeNameLabel;
@property (nonatomic, retain) IBOutlet UITextField *newNameTextField;

- (IBAction) changeStoreName;

@end


and implement it like:

#import "ProductDisplayViewController.h"
#import "Store.h"

@implementation ProductDisplayViewController

@synthesize store;
@synthesize storeNameLabel;
@synthesize newNameTextField;

- (void)viewDidLoad {
	store = [Store new]; = @"Integrating Stuff Store";
	storeNameLabel.text =;
	[super viewDidLoad];
}

- (void)viewDidUnload {
	self.storeNameLabel = nil;
	self.newNameTextField = nil;
	[super viewDidUnload];
}

- (void)dealloc {
	[store release];
	[storeNameLabel release];
	[newNameTextField release];
	[super dealloc];
}

- (IBAction) changeStoreName {
	//store = [Store new];	//uncomment to cause a memory leak = newNameTextField.text;
	storeNameLabel.text =;
}

@end


In the interface, the important thing to notice is the use of IBOutlet and IBAction. In the next section, we will discuss Interface Builder (IB), and these marker keywords tell Interface Builder that both the two properties and the method are available for linking. IBOutlet means that the property can be linked to an interface element within Interface Builder – a textfield and a label in our case. IBAction means that the method can be used as an event handler – for a button click in our case. We will show this in action in the next section.

xib files – the equivalent of the Android layout xml files

Although it is – just like in Android – possible to declare the user interfaces programmatically, this approach is very uncommon, and usually user interfaces are built within Interface Builder and saved as xib files. Double click on the xib file with the same name as your UIViewController. Interface Builder will open.

Play a bit with the Library, and add some labels, a rounded rect button and a text field, like this:

Interface Builder 1

Our interface is now ready. The only thing left to do to finish our sample app is to connect the interface elements we created in Interface Builder with the two properties and the action in our XCode code. In Interface Builder, in the main view, hold down Ctrl, then click and drag from File's Owner to the UITextField. It will then let you choose between the available outlets of this type: in our case, only newNameTextField is available.

Interface Builder 2

Then do the same for the label. Control-drag from File’s Owner to the “Store name will come here” label and select the single option: storeNameLabel. Then, control-drag from the rounded rect button to file’s owner and select the changeStoreName action. This links a click on the button to our action. Our app is now finished and we can run it. Choose Build and Run in XCode from the run menu and take a look at the screenshot:

Our App

Note: Just like in Android, xib files can also contain subviews. For example, just as table row layout xmls are defined for Android apps, table view cells will be defined in a separate xib file.
Note: Working with Interface Builder is a delight. If the Google guys want to look somewhere for ideas to upgrade their own development environment for Android, they should look here. Admittedly, they would face additional difficulties, since iPhone developers can make more assumptions about the screen size: the range of Apple products to take into consideration is very limited compared to the range of Android devices. In Android, RelativeLayout layouts are quite common, while on iPhone OS there is no such thing.

Android UI design vs iPhone UI design

When creating my first iPhone app, the one thing I struggled most with was the UI design.

This is because Android has more interface elements available, and they are easier to use.

Switch instead of checkbox

There is no checkbox available on iOS.
Instead of a checkbox, there is an on/off switch:


But it is not possible to put two of them next to each other on one line, because they take up so much space, and the on/off semantics do not fit all use cases.

No radiobuttons

Radio buttons are simply not available on iOS. The alternative is to use a UIPicker or a UITableView in which only one item can be selected.

Huge picker instead of dropdown

Because an application I had to port made extensive use of dropdown lists (Android pickers), which are a lot smaller than their iPhone UIPicker equivalents, it was quite hard to come up with a decent user interface (without resorting to scroll views, which would have broken the flow of the app). The main hurdle came from the fact that it is impossible to change the height of a UIPicker. They are huge:


The strange thing is that in application preferences, you can use PSMultiValueSpecifier, which looks like:


And this would have been exactly what I was looking for. It is not in the development kit, however. I really wonder why they do not offer it as an alternative to the picker. My guess is that they want to force developers into using UIPickers as much as possible. For the case where the user usually leaves the default value and only picks from the list from time to time, it would be nice to have an interface element like the PSMultiValueSpecifier out of the box, though.

Setting up Drools Guvnor


This blog article describes how to set up Drools Guvnor, the business rule management server that accompanies JBoss Drools. We describe what it is, discuss how to install and configure it, and show how to add model jars and how to add rules that act on the pojos of these model jars using the user-friendly interface. We describe how to load the Guvnor rule packages from external applications and fire their rules, or how to import them into the project workspace with the Eclipse Guvnor Tools that come with the JBoss Tools plugin. Finally, we mention the Guvnor REST API and describe one of its use cases.

What is Drools Guvnor?

Drools Guvnor is a repository for Drools rules. It keeps Drools rules and the models on which the rules act in a centralised place, and manages their versioning as well.
On top of the repository sits a web application, that provides GUIs, editors and tools to aid in the construction and management of large numbers of rules, and with which domain experts – usually non programmers – can view and edit rules.


Installation

Installation is very straightforward. After you download Guvnor, you rename the drools-5.1.1-guvnor.war to drools-guvnor.war. Then you copy the war file into the deploy folder of your application server and start that app server. For the purpose of this tutorial, I used a JBoss 5.1 server. From then on, the Guvnor web interface is available at http://localhost:8080/drools-guvnor. The basic configuration requires authentication, but user and password can be anything.
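In shell terms, and assuming a default JBoss 5.1 install at /opt/jboss-5.1.0.GA (the paths are placeholders for your own download and server locations), the steps amount to:

```shell
# Rename the downloaded war so the context path becomes /drools-guvnor
mv drools-5.1.1-guvnor.war drools-guvnor.war

# Copy it into the deploy folder of the JBoss server and start the server
cp drools-guvnor.war /opt/jboss-5.1.0.GA/server/default/deploy/
/opt/jboss-5.1.0.GA/bin/
```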

Changing the repository location

By default, the repository is kept within the app server itself. For JBoss application servers, this repository is created in their bin directory (basically, the repository is created in the folder you start your app server from). The repository.xml file is created there, together with the repository folder. The repository.xml contains all the metadata for the repository, and the repository folder contains the repository itself (well, by default – if the rules, models and the like are stored in an external database, as we will discuss next, this is not the case anymore).

Often, it is better to choose another location than the bin folder of your app server for your repository xml and folder.
Guvnor is a Seam application, and the repositoryConfiguration is a Seam Component within the application that is defined in the components.xml.
To change the location of the repository, unzip the war file, locate the components.xml file in the WEB-INF directory, and set the homeDirectory property of the repositoryConfiguration bean:

<?xml version="1.0" encoding="UTF-8"?>
<components ...>
   <core:init transaction-management-enabled="false"/>

   <component name="repositoryConfiguration">
      <property name="homeDirectory">/development/rules_repository</property>
   </component>
</components>

If we start the app server again, the repository xml and folder will be created in the new folder (/development/rules_repository here) if they are not already present.

Changing the database

Guvnor uses Apache Jackrabbit for storing its assets, such as rules and model jars. Apache Jackrabbit is configured by the repository.xml we mentioned before. By default, the rules will be persisted to a Derby database, for which all data will be persisted within the db folder of the workspace. Basically, the PersistenceManager for every workspace is configured like this by default in the repository.xml:

<PersistenceManager class="org.apache.jackrabbit.core.persistence.pool.DerbyPersistenceManager">
   <param name="url" value="jdbc:derby:${wsp.home}/db;create=true"/>
   <param name="schemaObjectPrefix" value="${}_"/>
</PersistenceManager>

If you want to use an external DB2 database for persistence instead, you change the PersistenceManager for the workspace:

<PersistenceManager class="org.apache.jackrabbit.core.persistence.bundle.BundleDbPersistenceManager">
   <param name="driver" value=""/>
   <param name="url" value="jdbc:db2://"/>
   <param name="user" value="db2inst1"/>
   <param name="password" value="test"/>
   <param name="schemaObjectPrefix" value="${}_"/>
   <param name="schema" value="db2"/>
</PersistenceManager>

Note that the configuration for the workspace in the repository.xml is just a template from which the workspace.xml for each workspace is generated. If your workspaces already exist, you will need to change the PersistenceManager in these workspace.xml files as well (they are under the workspaces folder, in the same folder where the repository.xml is located).

Uploading the business model to Guvnor

Before rules can be defined in Guvnor, a business model (a set of Java classes) jar needs to be uploaded to Guvnor. To not clutter the text, I am going to assume that you know how to create Java classes, compile them and put them in a jar, i.e. how to construct a business model. In this tutorial, I am going to use a banking business model, with classes like Person and LoanFormula, but you can use any other business model.

There are several ways to upload a business model to Guvnor. Some others will be discussed later in this article, but the easiest way is to use the web interface. Log on to Guvnor (through http://localhost:8080/drools-guvnor) and go to the “Knowledge Bases” tab. Click on “Create New” right under the tab and choose “New Package”. Give your package a name – I chose “banking” – and press Ok.
Then choose “Create new” – “Upload new Model Jar”.
Give your model a name:

Uploading model 1

and then choose and upload the jar:

Uploading model 2

after which we end up with a package like this:

Uploaded model

Note that for every fact that is declared, Drools recursively needs all the dependencies of these facts. This poses problems from time to time. For example, in enterprise Java development, chances are high that your business model classes are annotated with JPA annotations, which require the JPA libs, which need other libs in turn. There are two options to deal with this. The first is to make a special build that strips all annotations from your classes and then compiles them again, so they are really pojos that do not require any external libs at all. The second is to upload all dependencies (which is easily done by building a jar with all dependencies included, for example with the maven-assembly-plugin or a custom ant script) and then exclude all non-business-model facts from the list in the package.
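For the second option, a minimal sketch of the maven-assembly-plugin configuration could look like this, using the standard jar-with-dependencies descriptor (the execution id is a placeholder):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <!-- Builds an extra jar containing the model classes plus all
           their transitive dependencies -->
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
  <executions>
    <execution>
      <id>model-with-dependencies</id>
      <phase>package</phase>
      <goals><goal>single</goal></goals>
    </execution>
  </executions>
</plugin>
```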

Adding a sample rule

Now that the model is uploaded, we can start defining rules.
To add a rule, you first have to create a category (Administration > Category > New Category). Rules always have to reside under at least one category in Guvnor. I chose to create the category “Banking”.
Once the category is created, go back to the “Knowledge Bases” tab and this time choose Create New > New Rule.

Enter a name for the rule:

Create Rule 1

And then define the rule on the business model we just uploaded:

Create Rule 2

Note, however, that although the graphical interface is quite nice, it is still somewhat (too?) limited.
One of the shortcomings is that Guvnor seems to restrict the Drools syntax. The interface for doing a “from $collection” is there, but such a rule cannot be saved. All shortcomings can be circumvented by using free-form Drools syntax, but that rather defeats the purpose of the interface.
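For example, a rule iterating over a collection with “from” can simply be entered as free-form DRL. The sketch below is illustrative only: the getLoanFormulas property on Person is hypothetical and not part of the model we uploaded earlier:

```
rule "Person has an attractive loan formula"
when
    $person : Person( income > 5000 )
    // "from" iterates the (hypothetical) collection of formulas of this person
    $formula : LoanFormula( rate < 4.0 ) from $person.getLoanFormulas()
then
    System.out.println($person + " qualifies for " + $formula);
end
```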

Packaging our rule package

Before an external application can use the rules in our banking package, we have to make a package snapshot first.
You can do this by going to the “Package Snapshots” tab and choosing Deploy>New Deployment Snapshot.
We name our package snapshot “LATEST” and end up with:

Creating a snapshot of a package

Notice all the links. External applications will be able to load the rules from the package binary link for example, which is what we are going to do next.

Calling a Guvnor rule package from an external Java application

The following JUnit TestCase fetches and fires the rule in our package:

// imports from junit.framework, org.drools, org.drools.agent,
// org.drools.io and org.drools.runtime omitted for brevity
public class GuvnorTest extends TestCase {

	public void testDroolsWithGuvnor() throws Exception {
		KnowledgeBase knowledgeBase = createKnowledgeBase();
		StatefulKnowledgeSession session = knowledgeBase.newStatefulKnowledgeSession();
		try {
			Person person = new Person();
			person.setIncome(6000);
			session.insert(person);
			assertTrue(session.getFactCount() == 1);
			session.fireAllRules();
			assertTrue(session.getFactCount() == 2);
		} finally {
			session.dispose();
		}
	}

	private static KnowledgeBase createKnowledgeBase() {
		KnowledgeAgentConfiguration kaconf = KnowledgeAgentFactory.newKnowledgeAgentConfiguration();
		kaconf.setProperty("drools.agent.scanDirectories", "false");
		KnowledgeAgent kagent = KnowledgeAgentFactory.newKnowledgeAgent("test agent", kaconf);
		kagent.applyChangeSet(ResourceFactory.newClassPathResource("changeset-banking.xml"));
		return kagent.getKnowledgeBase();
	}
}

Note that the test expects two facts in the session after execution of the rules. If you changed the income to 4000 instead of 6000, the rule wouldn't fire anymore (check the screenshot that displays the rule we added) and the test would fail, since after execution there would still be only one fact in the session.

Note also that we are not using the package links mentioned in the previous section directly. Instead, we are using a changeset-banking.xml that has one of those links defined and looks like this:

<change-set xmlns='http://drools.org/drools-5.0/change-set'
    xmlns:xs='http://www.w3.org/2001/XMLSchema-instance'
    xs:schemaLocation='http://drools.org/drools-5.0/change-set change-set-1.0.0.xsd' >
    <add>
        <resource source='http://localhost:8080/drools-guvnor/org.drools.guvnor.Guvnor/package/banking/LATEST'
            type='DRL' basicAuthentication="enabled" username="admin" password="admin" />
    </add>
</change-set>

Basically, it contains the links to the Guvnor package resources, and the authentication configuration to access those resources.

Basic Guvnor usage

All of the previous covers basic Guvnor usage: creating a category and a package, uploading a model, defining some rules on the model and packaging them into a snapshot, so they can be called by an external application using Drools.

IDE tooling

The JBoss Tools plugin for Eclipse comes with some specific Eclipse Guvnor Tools. The most interesting of these, in my opinion, is the Guvnor Repositories view. Guvnor repositories can be added under that view and elegantly browsed.

IDE Tooling

Project resources can also be added to a Guvnor package from the right-click menu of the resource, by choosing Guvnor > Add within Eclipse. The other way around, resources can be dragged from the Guvnor Repositories view into the workspace, modified there, and then committed back to Guvnor (Guvnor > Commit). It is also possible to upload the model using this menu, or to download the most recent rule package into the workspace so it can be included in the project build instead of being fetched remotely from Guvnor at runtime.

Using the Guvnor rest API – Uploading the model

Guvnor does not come with a webservice offering all of its functionality (the inclusion of such a webservice is the number one feature on my Guvnor wishlist), but it does offer a REST API for uploading and downloading assets. This REST API has some useful use cases.

For example, the following code could be used to automate the uploading of the business model to Guvnor as part of a build process:

import;
import;

import org.apache.commons.httpclient.*;
import org.apache.commons.httpclient.auth.AuthScope;
import org.apache.commons.httpclient.methods.DeleteMethod;
import org.apache.commons.httpclient.methods.PostMethod;

public class PostModel {

	public static void main(String[] args) {
		try {
			HttpClient client = new HttpClient();
			String url = "http://localhost:8080/drools-guvnor/org.drools.guvnor.Guvnor/api/packages/banking/BankingModel.jar";
			Credentials defaultcreds = new UsernamePasswordCredentials("admin", "admin");
			client.getState().setCredentials(new AuthScope("localhost", 8080, AuthScope.ANY_REALM), defaultcreds);
			//delete old model
			DeleteMethod deleteMethod = new DeleteMethod(url);
			int statusCode1 = client.executeMethod(deleteMethod);
			System.out.println("statusLine>>>" + deleteMethod.getStatusLine());
			//post new
			PostMethod postMethod = new PostMethod(url);

			// Send the model file as the body of the POST request
			File f = new File("smeg_model.jar");
			System.out.println("File Length = " + f.length());

			postMethod.setRequestBody(new FileInputStream(f));
			postMethod.setRequestHeader("Content-type","text/xml; charset=ISO-8859-1");

			int statusCode2 = client.executeMethod(postMethod);

			System.out.println("statusLine>>>" + postMethod.getStatusLine());
		} catch (Exception e) {
			e.printStackTrace();
		}
	}
}

Calling native c-code through JNI in Android applications

When you have been developing Android applications for a while, there comes a moment when you want to leverage the power of some native C lib in one of your applications. For doing OCR, for example.
This tutorial describes how to call native c-code in an Android application using the Android Native Development Kit.


The first thing to learn is how to call native c-code from regular Java through JNI, the Java Native Interface. Most Java developers are aware of the native keyword, but most never have to use it in practice. Therefore basic JNI usage will be discussed first.

Your Java source code has to declare one or more methods with the native keyword to indicate that they are implemented through native code:

native String getJniString();

These native methods obviously don't have a Java implementation. You must provide a native shared library that contains the implementation of these methods. This library must be named according to the standard Unix convention as lib<name>.so and needs to contain the standard JNI entry point. For our method above, the implementation in C could look like:

#include <string.h>
#include <jni.h>

jstring Java_com_integratingstuff_jni_HelloJni_getJniString(JNIEnv* env, jobject thiz)
{
    return (*env)->NewStringUTF(env, "Hello from JNI!");
}
in which case our native method would have to reside in the HelloJni class in the com.integratingstuff.jni package. The naming is obvious if you take a closer look.
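The mapping from native method to C function name is mechanical: Java_, then the fully qualified class name with dots replaced by underscores, then another underscore and the method name. The little sketch below derives the symbol for our example (simplified: it ignores the escaping of underscores and unicode characters that the full JNI specification prescribes):

```java
public class JniNameDemo {

    // Derives the (simplified) JNI symbol name for a native method.
    static String jniName(String qualifiedClassName, String methodName) {
        return "Java_" + qualifiedClassName.replace('.', '_') + "_" + methodName;
    }

    public static void main(String[] args) {
        System.out.println(jniName("com.integratingstuff.jni.HelloJni", "getJniString"));
        // prints: Java_com_integratingstuff_jni_HelloJni_getJniString
    }
}
```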

Finally, your application must explicitly load the library (otherwise you will get an UnsatisfiedLinkError). To load it at application startup, add the following code:

static {
    System.loadLibrary("hello-jni");
}

In this case, the name of the shared library file would need to be The lib prefix and .so suffix should not be included in the loadLibrary call.

Note that you are not supposed to be following this tutorial in your IDE right now. If you just want to make this JNI part work, and don't care about Android for now, you would need to compile the above C file into the required .so file first, dealing with the jni.h dependency yourself. This is trivial with gcc though, especially on a Linux machine (on Windows, you probably need to install something extra: I would use Cygwin myself, but you could use a Windows C compiler like MinGW too).

The Android native development kit

If you’re going down the Android path, you need to download the Android Native Development Kit (NDK) now. Basically, the Android NDK is a C compiler/build toolchain that ships with all the important C libs present on the Android OS. It comes with one important command: ndk-build.

To follow this tutorial, you can just create a new Android project in Eclipse. When the project is created, make a new folder in the project root, called “jni”. Copy the above hello-jni.c file into this folder, together with the following file:

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

LOCAL_MODULE    := hello-jni
LOCAL_SRC_FILES := hello-jni.c

include $(BUILD_SHARED_LIBRARY)
When the ndk-build command is run, it always looks for a file called and reads it to know what variables to use and what to build. Linux users are probably familiar with these .mk makefiles. If you are not, don't worry about it.
An file always has to begin with the LOCAL_PATH definition. It is used to locate the source files. In this case, we set it to the directory the file itself resides in.
Then all local variables are cleared (LOCAL_MODULE, LOCAL_SRC_FILES etc.). This is important because all builds run in a single context, and you don't want a variable from another run to remain filled in.
Then the local module is set – this is going to be the name of the .so file (in this case – and then all the source files that belong to the module are declared.
Finally, include $(BUILD_SHARED_LIBRARY) will generate the .so lib.

Now, add the following method to your activity:

public native String getJniString();

, add the static block

static {
    System.loadLibrary("hello-jni");
}
and change your onCreate implementation to:

TextView tv = new TextView(this);
tv.setText(getJniString());
setContentView(tv);

When you try to run this, your new Android application will exit with an UnsatisfiedLinkError. This is because we did not build the .so native shared library yet.

The only thing left to do is to run the ndk-build shell script, found in the $ANDROID-NDK folder, from within the jni directory you created (the folder containing the .mk and .c files).
This will generate the output

Building the so shared lib

and create the required .so file.
When you run your application now, the string coming from the C file is displayed on the screen:

JNI Test on Android

Check out the other NDK samples too

This tutorial is a wrap-up of the hello-jni sample that the Android NDK ships with. In the samples directory of the NDK, there are a lot of other interesting examples. There is even one that shows you how to write and declare your whole activity in C: the native-activity sample. Whether doing that is a good idea is something else entirely, though.

