Nov 9, 2010

Unit Testing your Persistence Tier code

While writing unit tests is now common practice, writing effective unit tests is still a struggle.  When it comes to an ORM-based persistence tier, a variety of approaches exist.  This article discusses three common approaches, states which one I prefer, and then works toward making persistence tier tests easier to write.

While the concepts discussed here can be applied to a variety of configurations, this article was written based on JUnit 4 for unit tests, JPA with Hibernate as the ORM persistence tier framework, Oracle as the production database, and HSQLDB 2.0 as the memory-resident database for unit testing.

When it comes to writing unit tests for persistence tier code, I have seen the following three approaches:

  1. Write unit tests with your POJOs without the persistence tier framework.
  2. Write unit tests with the persistence tier framework using a database instance of the same type as production.
  3. Write unit tests with a memory-resident database.

Solution 1

Easy to use and avoids the overhead of initializing your persistence tier framework, but it does not really exercise your domain entities.  For example, named queries and associations (@OrderBy, lazy/eager fetching, cascading rules) cannot be tested.  While some unit tests can be written this way, it is an insufficient solution for completely testing your persistence tier.

Solution 2

This is the best solution for verifying your code will work in your production environment: you will be unit-testing against the exact same type of database that your application uses.  If you write native queries or Hibernate criteria with SQL clauses, you will be able to unit-test them without any difficulty.  You will also be testing with the same database dialect your application uses.  In theory, this is the best solution.

In practice, however, this solution does have some problems.

Problem 1: Schema Setup

If you are sharing a unit test database among all developers (or even running applications), the schema has to be compliant with what those other developers expect.  Even a simple change, such as making an attribute not nullable, requires coordination so that it doesn't affect other developers.  This may mean checking in code before its unit tests run, or even work.  An attribute change is a minor example; a major refactoring effort would be far more difficult.

Problem 2: Concurrency

If you are using a database similar to production, odds are you are sharing it with others, and concurrency becomes an issue.  For most tests this can easily be avoided by never committing a transaction; just roll back the transaction at the end of every test.  However, you then cannot unit-test any multi-transactional code, and side effects can still occur.  For example, Oracle sequence numbers are not restored to their previous values on a transaction rollback.

Problem 3: Performance/Connectivity

Finally, depending on your configuration, performance can be a concern.  You must be connected to the database to run your unit tests, and if that connection is over VPN, it can be dramatically slower.  With more companies allowing developers to work from home, this can have a bigger impact than you would expect.  Personally, I find the perfect time to write unit tests is when I cannot access the network.  While running a database locally is an option (you could run Oracle XE on your computer), the resources needed for an application database are large.  If you are stuck using a 32-bit Windows OS for development, the 3GB maximum of addressable memory space is reason enough to avoid this.  Even if you set up a separate database per developer, what about your continuous integration server?  Concurrent builds would have to be avoided, or your CI box could conflict with itself.

Solution 3

I have found solution 3 to be the best choice for writing your unit tests, provided you have the time to address the differences between your memory-resident database and your application database.  Developers can work independently of each other, unit tests can generate the schema automatically, and you can use transactions in your unit tests.  The drawback of solution 3 is that you are now testing your code with a different database and ORM dialect.  If you trust that your ORM vendor has tested its dialects, that may not be a concern.  However, testing your code with non-production components does add a level of uncertainty.  If you have limited integration and/or QA testing, then this solution may not be right for you.

To make solution 3 palatable, the following issues were identified along with ways to work around them.  Issues 1 and 5 also apply to solution 2, not just solution 3.

Issue 1 : Persistence tier initialization is not scalable

Initializing your persistence tier for every test (or even every test class) is not scalable.  While 5 seconds to initialize your persistence tier on application startup is trivial, 5 seconds added to every test is an eternity.  Solve this issue by initializing your persistence tier once for the entire test suite.

The following solution has been tested to work in the Eclipse IDE and under ANT 1.7.1 (it did not work under ANT 1.7.0, due to how the JUnit ANT task's class-loader was designed).

In a class separate from any JUnit class, initialize your persistence tier via static initialization.  When the class loader loads the next JUnit class, the already-loaded helper class retains the initialized persistence tier framework.

public class PersistenceHelper {
    private static final EntityManager entityManager;
    static {
        entityManager = Persistence.createEntityManagerFactory("PU").createEntityManager();
    }
    public static EntityManager getEntityManager() {
        return entityManager;
    }
}

Be sure to create PersistenceHelper in your test source folder, to avoid it accidentally being used from application code.

Now, to obtain the entity manager, simply use PersistenceHelper.getEntityManager(); the cost of initializing the persistence tier is the same for 1 test as it is for 1,000 tests (provided the framework running the test suite doesn't fork a new JVM process for every test class).  This does require the database to be cleared between tests, which leads us to the next issue.

Issue 2 : Clearing the database between tests

In Issue 1 we made running unit tests scalable by initializing the persistence tier framework once for the entire suite of tests.  We now need to clear the database after every test so that one test doesn't have side effects on another.  The easiest way to clear the database is to delete the contents of all tables after each test executes.  Foreign key constraints add an additional hurdle, so be sure to disable referential integrity before deleting.  The following code shows how this is done with the HSQL 2.0 database (HSQL 1.8 has different syntax for disabling referential integrity).

public void clearDatabase() throws SQLException {
    Connection c = ((SessionImpl) entityManager.getDelegate()).connection();
    Statement s = c.createStatement();
    Set<String> tables = new HashSet<String>();
    ResultSet rs = s.executeQuery("select table_name " +
        "from INFORMATION_SCHEMA.system_tables " +
        "where table_type='TABLE' and table_schem='PUBLIC'");
    while (rs.next()) {
        if (!rs.getString(1).startsWith("DUAL_")) {
            tables.add(rs.getString(1));
        }
    }
    rs.close();
    s.executeUpdate("SET DATABASE REFERENTIAL INTEGRITY FALSE");
    for (String table : tables) {
        s.executeUpdate("DELETE FROM " + table);
    }
    s.executeUpdate("SET DATABASE REFERENTIAL INTEGRITY TRUE");
    s.close();
}

JPA/Hibernate creates tables starting with DUAL_ to emulate Oracle-style sequencing in HSQLDB.  To avoid breaking the sequence generator, these tables are not deleted.

In your unit tests, simply add the following method.  As a convenience, it also rolls back any uncommitted transaction.

@After
public void after() throws SQLException {
    if (PersistenceHelper.getEntityManager().getTransaction().isActive()) {
        PersistenceHelper.getEntityManager().getTransaction().rollback();
    }
    clearDatabase();
}

Issue 3 : Native SQL – missing functions

HSQLDB provides the ability to write custom functions.  I have used this to write many functions that exist in Oracle but do not exist in HSQLDB.  There are two different ways to write functions in HSQLDB, and I have provided an example of each here.

Oracle function NVL2 written in HSQLDB sql syntax.

The function can be written as follows.  If your tests use additional data types, you may need to create more variations of the function to support them (functions can be overloaded).

create function nvl2(t varchar(4000), f int, s int) returns int if t is not null then return f; else return s; end if
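The semantics match Oracle's NVL2: return the second argument when the first is non-null, otherwise the third.  The following standalone sketch illustrates the same logic in plain Java (the class name is illustrative; it is not part of the HSQLDB setup):

```java
public class Nvl2Demo {
    // NVL2(t, f, s): returns f when t is not null, s otherwise.
    public static int nvl2(String t, int f, int s) {
        return t != null ? f : s;
    }

    public static void main(String[] args) {
        System.out.println(nvl2("x", 1, 0));  // 1
        System.out.println(nvl2(null, 1, 0)); // 0
    }
}
```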

Oracle function LAST_DAY written in Java and referenced from HSQLDB sql syntax.

create function last_day(d DATE) returns date no sql language java parameter style java external name 'CLASSPATH:com.objectpartners.buesing.PersistenceHelper.lastDay'

public static Date lastDay(Date date) {
    Calendar cal = Calendar.getInstance();
    cal.setTime(date);
    cal.set(Calendar.DAY_OF_MONTH, cal.getActualMaximum(Calendar.DAY_OF_MONTH));
    return cal.getTime();
}
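As a quick sanity check, the same logic can be exercised standalone, independent of HSQLDB (the demo class name is illustrative):

```java
import java.util.Calendar;
import java.util.Date;

public class LastDayDemo {
    // Same logic as PersistenceHelper.lastDay above: roll the date forward
    // to the last day of its month.
    public static Date lastDay(Date date) {
        Calendar cal = Calendar.getInstance();
        cal.setTime(date);
        cal.set(Calendar.DAY_OF_MONTH, cal.getActualMaximum(Calendar.DAY_OF_MONTH));
        return cal.getTime();
    }

    public static void main(String[] args) {
        Calendar in = Calendar.getInstance();
        in.set(2010, Calendar.FEBRUARY, 10);
        Calendar out = Calendar.getInstance();
        out.setTime(lastDay(in.getTime()));
        System.out.println(out.get(Calendar.DAY_OF_MONTH)); // 28 (February 2010)
    }
}
```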

The easiest way to add these functions is as follows:

String NVL2 = "create function nvl2(t varchar(4000), f int, s int) returns int " +
    "if t is not null then return f; else return s; end if";
String LAST_DAY = "create function last_day(d DATE) returns date " +
    "no sql language java parameter style java external name " +
    "'CLASSPATH:com.objectpartners.buesing.PersistenceHelper.lastDay'";

Connection c = ((SessionImpl) entityManager.getDelegate()).connection();
Statement s = c.createStatement();
s.executeUpdate(NVL2);
s.executeUpdate(LAST_DAY);
s.close();

Call this from within the static initialization of the persistence tier code after it has been initialized.

Issue 4 : Native SQL – different SQL syntax

There are database differences that cannot be fixed by writing functions.  Examples include Oracle's "connect by" and "rank/partition" SQL, which are not supported by HSQLDB.  This is where I apply a "mock" concept to the entity manager: I write a proxy to the entity manager and have the proxy provide canned results or alter the SQL when createNativeQuery() is called.

The proxy passes through every call to the wrapped entity manager, with the exception of the native SQL query functions.  These functions can either return canned (mocked) results or can alter the SQL to where it can work with HSQLDB.

public class EntityManagerWrapper implements EntityManager {
    private EntityManager em;
    public EntityManagerWrapper(EntityManager em) {
        this.em = em;
    }
    // ... all other methods of EntityManager delegate to the wrapped entity manager.
    public Query createNativeQuery(String query) {
        // For this specific test, remove the Oracle-specific SQL and leave it to
        // database initialization to fake "connect by".
        String query2 = query.replaceAll("connect by prior employee_id = manager_id", "");
        return em.createNativeQuery(query2);
    }
}

public class EmployeeFinder {
    @PersistenceContext(unitName = "PU")
    private EntityManager em;

    // Used by unit tests to inject the entity manager into the EJB3 SSB.
    protected void setEntityManager(EntityManager em) {
        this.em = em;
    }

    public List<Object[]> getDescendants(Employee employee) {
        Query query = em.createNativeQuery("select * from employee connect by prior employee_id = manager_id");
        return query.getResultList();
    }
}

Now, in your unit test, when you inject the entity manager into your stateless session bean, you can inject this specialized wrapped entity manager, which alters the SQL before sending it on to the HSQL database.
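The rewrite itself is plain string manipulation, so it can be verified in isolation.  The following standalone sketch mirrors the replaceAll in the wrapper above (the demo class name is illustrative):

```java
public class SqlRewriteDemo {
    // Mirrors the rewrite in EntityManagerWrapper: strip the Oracle-only
    // "connect by" clause so HSQLDB can parse the rest of the query.
    public static String rewrite(String sql) {
        return sql.replaceAll("connect by prior employee_id = manager_id", "").trim();
    }

    public static void main(String[] args) {
        String oracleSql = "select * from employee connect by prior employee_id = manager_id";
        System.out.println(rewrite(oracleSql)); // select * from employee
    }
}
```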

Issue 5:  JPA bootstrapping of META-INF/persistence.xml

For JPA, the bootstrapping of persistence.xml is not test friendly.  The file must be in a META-INF folder, and the EntityManagerFactory will inspect the jars for it automatically.  Since the application persistence.xml cannot be used for the unit tests (you are pointing to a different database, your database cannot be referenced by JNDI, etc.), you need the unit tests to find the test persistence.xml instead of the application one.

When it comes to JPA/Hibernate, I have found an "elegant" solution: I change the thread's class-loader so that it alters the location of persistence.xml.  This required inspecting how Hibernate finds and loads the file.  The solution may not work with Hibernate 4 or with other JPA implementations, but it does work with Hibernate 3, and works well.

The Hibernate code that finds persistence.xml requires the URL returned by the class-loader to end in META-INF/persistence.xml, but I am free to alter the parent path to the file without any difficulty.  I place a test version (or generate one in this process) in the alternate location jpa_unittest/META-INF/persistence.xml.

private static void wrapClassLoader() {
    ClassLoader current = Thread.currentThread().getContextClassLoader();
    Thread.currentThread().setContextClassLoader(new ClassLoader(current) {
        public Enumeration<URL> getResources(String name) throws IOException {
            if ("META-INF/persistence.xml".equals(name)) {
                Enumeration<URL> urls = super.getResources(name);
                Vector<URL> vector = new Vector<URL>();
                if (urls.hasMoreElements()) {
                    String str = urls.nextElement().toExternalForm();
                    // This replaceAll works for Eclipse; ANT/other environments need an additional replaceAll.
                    str = str.replaceAll("build/classes", "jpa_unittest");
                    vector.add(new URL(str));
                    return vector.elements();
                } else {
                    return super.getResources(name);
                }
            }
            return super.getResources(name);
        }
    });
}

In the static initialization block of PersistenceHelper, call wrapClassLoader() before initializing the entityManager.

public class PersistenceHelper {
    private static final EntityManager entityManager;
    static {
        wrapClassLoader();
        entityManager = Persistence.createEntityManagerFactory("PU").createEntityManager();
    }
    public static EntityManager getEntityManager() {
        return entityManager;
    }
}

With additional work you could generate the test persistence.xml from the real persistence.xml.  Just use a good XML reader (JDOM, for example) to read in the actual persistence.xml and write out the altered version before returning the URL from the class loader.
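To illustrate the idea, here is a minimal sketch that performs such a rewrite using the JDK's built-in DOM APIs instead of JDOM.  The property name and JDBC URLs are assumptions for illustration; substitute whatever your persistence.xml actually overrides:

```java
import java.io.ByteArrayInputStream;
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class PersistenceXmlRewriter {
    // Sketch: read the application persistence.xml and point the connection
    // URL at the in-memory test database. Property names are illustrative.
    public static String rewrite(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
            NodeList props = doc.getElementsByTagName("property");
            for (int i = 0; i < props.getLength(); i++) {
                Element p = (Element) props.item(i);
                if ("hibernate.connection.url".equals(p.getAttribute("name"))) {
                    p.setAttribute("value", "jdbc:hsqldb:mem:unittest");
                }
            }
            StringWriter out = new StringWriter();
            TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String xml = "<persistence><persistence-unit name='PU'><properties>"
            + "<property name='hibernate.connection.url' value='jdbc:oracle:thin:@db:1521:PROD'/>"
            + "</properties></persistence-unit></persistence>";
        System.out.println(rewrite(xml).contains("jdbc:hsqldb:mem:unittest")); // true
    }
}
```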

Also, the URL rewriting in the example above (str = str.replaceAll("build/classes", "jpa_unittest");) works when running unit tests from within the Eclipse IDE; additional coding would be needed to run in other environments.


However you decide to write unit tests for your persistence tier, it is important to write them.  The key to writing good tests is to make them easy to write.  Building JUnit base classes that leverage the concepts provided here has made tests easier for me to write, so I write more of them.

I have used Hibernate for over 7 years and JPA/Hibernate for almost 3. If interest exists, I plan to write additional posts about JPA and Hibernate. If there is a topic of interest, please let me know.
