
A Day at DBCI

Experiments on bringing database change management into the CI cycle.

Posted by Anadi Misra on September 27, 2008 · 13 mins read

This blog is about pointers for implementing DBCI for your applications, based on a similar objective I achieved in a project recently. It started as a pretty standard application (without a DB) and we had a full-fledged CI handling not just builds but remote deploys, Selenium tests and performance tests too. We then started getting impeded a bit when we introduced the DAO layer: we had decided not to rely on JPA’s query capabilities (with all due respect, they are good, but nothing works better than native SQL queries; you can still use the smart annotations JPA provides to map results for your native queries, but not JPQL, which is a bit overrated). So the big question was that there had to be a better way to test our SQL queries, and an automated one at that. So we went about integrating DBCI into our existing CI implementation. Here’s a summary of what I wanted to achieve:

  1. There should be a way to set up test data for your unit tests.
  2. Have complete control and traceability over the data injected for tests.
  3. Be able to populate and clean your DB in the process of running tests, so as to provide different sets of data.
  4. All of the above should be automated.

The unit tests for this should not be very different from how you have been writing other unit tests.

Normally, during the development phases, teams tend to have a shared development database instance accessed by all developers. There is nothing wrong with that in terms of sharing infrastructure, but relying on this instance for your unit tests is a bit flawed. The reason is that in such shared instances you do not have control over the data; the dude next to you might well have deleted the data on which you based your queries. So while “it worked on your machine”, the build can fail by the time it reaches the CI server. You are also better off running the tests against a database with a smaller subset of data relevant to the test scenarios, instead of a heavy, full-grown DB (which only adds to the slowness of your builds). Things get easier still if the build fires up an in-memory database, creates the schema, adds the data that you need and runs the tests. On a CI environment the same process is followed, since it is the same build script, but there you can probably make better use of a MySQL instance. This gives uniformity in the way you add test data, add your tests and execute those tests, irrespective of environment.

I’ll move to snippets for the implementation. Let’s say you have an entity as follows (updated code for annotation syntax, 2011):

@Entity
@Table(name = "releases")
@NamedNativeQueries(value = {
 @NamedNativeQuery(name = "releaseStories", query = "SELECT stories.release_id AS release_id, stories.story_id AS story_id, userstory.* "
  + "FROM release_stories stories INNER JOIN user_story userstory ON stories.story_id = userstory.id WHERE stories.release_id =?1", resultClass = UserStory.class),
 @NamedNativeQuery(name = "releaseIterations", query = "SELECT iterations.release_id AS release_id, iterations.iteration_id AS iteration_id, iteration.* FROM release_iterations iterations "
  + "INNER JOIN iteration iteration ON iterations.iteration_id = iteration.id WHERE iterations.release_id =?1", resultClass = Iteration.class),
 @NamedNativeQuery(name = "releaseProjects", query = "SELECT projects.release_id AS release_id, projects.project_id AS project_id, project.* FROM release_projects projects "
  + "INNER JOIN project ON projects.project_id = project.id WHERE projects.release_id =?1", resultClass = Project.class) })
public class Release implements Serializable {

 private static final long serialVersionUID = -843064609186009652L;

 @Id
 @GeneratedValue(strategy = GenerationType.AUTO)
 private Long id;

 @Version
 private Integer version;

 @Column(nullable = false)
 private String name;

 @Column(length = 50)
 private String description;

 @Column(nullable = false)
 @Temporal(TemporalType.DATE)
 private Date startDate;

 @Column(nullable = false)
 @Temporal(TemporalType.DATE)
 private Date endDate;

 @Column(nullable = false)
 private ReleaseStatus status;

 @OneToMany(cascade = { CascadeType.DETACH, CascadeType.MERGE, CascadeType.PERSIST, CascadeType.REFRESH })
 @JoinTable(name = "release_iterations", joinColumns = @JoinColumn(name = "release_id"), inverseJoinColumns = @JoinColumn(name = "iteration_id"))
 private Set<Iteration> iterations;

 @OneToMany(cascade = { CascadeType.DETACH, CascadeType.MERGE, CascadeType.PERSIST, CascadeType.REFRESH })
 @JoinTable(name = "release_projects", joinColumns = @JoinColumn(name = "release_id"), inverseJoinColumns = @JoinColumn(name = "project_id"))
 private Set<Project> projects;

 @OneToMany(cascade = { CascadeType.DETACH, CascadeType.MERGE, CascadeType.PERSIST, CascadeType.REFRESH })
 @JoinTable(name = "release_stories", joinColumns = @JoinColumn(name = "release_id"), inverseJoinColumns = @JoinColumn(name = "story_id"))
 private Set<UserStory> stories;
 ...

You can see here that we have this entity mapped to three others, all through join tables. I won’t get into the details of the DAO classes much, but let’s say we had a method like this in the DAO for this entity.

/**
 * Populates the given release with its associated user stories.
 *
 * @param release the release whose stories are to be loaded
 */
@SuppressWarnings("unchecked")
private void findReleaseStories(Release release) {
    // Run the native named query declared on the Release entity
    Query storiesQuery = entityManager.createNamedQuery("releaseStories").setParameter(1, release.getId()).setMaxResults(30);
    List stories = storiesQuery.getResultList();
    LOGGER.debug("Release with id " + release.getId() + " has " + stories.size() + " associated user stories");
    Set<UserStory> userStories = new HashSet<UserStory>();
    userStories.addAll(stories);
    release.setStories(userStories);
}

Assuming you have a JUnit test set up with Spring’s test annotations so that all dependencies are injected and the test has all it takes to call the DAO implementation, here’s how Maven and some of its handy plugins can help you achieve DBCI.
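For reference, a minimal sketch of what such a test could look like. The ReleaseDao interface, its findById method, the test-context.xml location and the asserted numbers are illustrative assumptions, not code from the actual project:

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "classpath:test-context.xml") // hypothetical Spring test context
public class ReleaseDaoTest {

    @Autowired
    private ReleaseDao releaseDao; // hypothetical DAO wrapping the entity manager

    @Test
    public void loadsStoriesForARelease() {
        // The id and expected count correspond to rows inserted by the test data SQL
        Release release = releaseDao.findById(1L);
        assertEquals(2, release.getStories().size());
    }
}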

Setting up a clean schema and injecting data: the all-SQL approach

The first step in our CI build is to get a clean schema into which we can inject test data. The hibernate3-maven-plugin serves well here; you can use its hbm2ddl schema tool to drop and recreate the schema and have it load test data from a provided SQL file.

<!-- Create/Update the test database instance -->
   <plugin>
   <groupId>org.codehaus.mojo</groupId>
   <artifactId>hibernate3-maven-plugin</artifactId>
   <version>2.2</version>
   <executions>
    <execution>
    <id>mysql</id>
    <phase>process-test-resources</phase>
    <goals>
     <goal>hbm2ddl</goal>
    </goals>
    </execution>
   </executions>
   <configuration>
    <components>
    <component>
     <name>hbm2ddl</name>
     <implementation>jpaconfiguration</implementation>
    </component>
    </components>
    <componentProperties>
    <drop>true</drop>
    <jdk5>true</jdk5>
    <persistenceunit>hbm2ddl-persistence</persistenceunit>
    <skip>${skipTests}</skip>
    </componentProperties>
   </configuration>
   <dependencies>
    <dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.16</version>
    </dependency>
    <dependency>
    <groupId>org.hsqldb</groupId>
    <artifactId>hsqldb</artifactId>
    <version>1.8.0.10</version>
    </dependency>
    <dependency>
    <groupId>javax.validation</groupId>
    <artifactId>validation-api</artifactId>
    <version>1.0.0.GA</version>
    </dependency>
    <dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-validator</artifactId>
    <version>4.0.0.GA</version>
    </dependency>
    <dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-entitymanager</artifactId>
    <version>3.6.6.Final</version>
    </dependency>
    <dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>3.6.6.Final</version>
    </dependency>
    <dependency>
    <groupId>org.hibernate.javax.persistence</groupId>
    <artifactId>hibernate-jpa-2.0-api</artifactId>
    <version>1.0.1.Final</version>
    </dependency>
    <dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-c3p0</artifactId>
    <version>3.6.6.Final</version>
    <exclusions>
     <exclusion>
     <artifactId>slf4j-api</artifactId>
     <groupId>org.slf4j</groupId>
     </exclusion>
    </exclusions>
    </dependency>
   </dependencies>
   </plugin>
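Since the hbm2ddl goal is bound to the process-test-resources phase and drop is set to true, a plain mvn test run (locally or on the CI server) drops and recreates the schema before the unit tests execute; no separate command is needed, and skipTests skips the schema work along with the tests.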

And here’s the JPA configuration to go with it:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
 <persistence-unit name="hbm2ddl-persistence" transaction-type="RESOURCE_LOCAL">
 <provider>org.hibernate.ejb.HibernatePersistence</provider>
 <!-- for implementation specific overrides -->
 <mapping-file>META-INF/orm.xml</mapping-file>
 <class>domain.Release</class>
 <class>domain.Project</class>
 <class>domain.Iteration</class>
 <class>domain.UserStory</class>
 <properties>
  <property name="hibernate.dialect" value="org.hibernate.dialect.MySQL5Dialect" />
  <property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver" />
  <property name="honernate.hbm2ddl.auto" value="create" />
  <property name="hibernate.hbm2ddl.import_files" value="/test-import.sql" />
  <property name="hibernate.show_sql" value="true" />
  <property name="hibernate.connection.username" value="liquibrain" />
  <property name="hibernate.connection.password" value="password" />
  <property name="hibernate.connection.url" value="jdbc:mysql://localhost:3306/liquibrain-test" />
 </properties>
 </persistence-unit>
</persistence>
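If you want local builds to use the in-memory database mentioned earlier (the plugin already pulls in HSQLDB), the same persistence unit just needs different connection properties. A sketch, where the dialect, driver, URL and credentials are simply the standard HSQLDB defaults rather than anything from the project:

  <property name="hibernate.dialect" value="org.hibernate.dialect.HSQLDialect" />
  <property name="hibernate.connection.driver_class" value="org.hsqldb.jdbcDriver" />
  <property name="hibernate.connection.url" value="jdbc:hsqldb:mem:liquibrain-test" />
  <property name="hibernate.connection.username" value="sa" />
  <property name="hibernate.connection.password" value="" />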

This SQL file lives in our src/test/resources directory, so whenever you add a query that has to be covered by a unit test, you add the corresponding insert statements to the end of this file. That gives a uniform way of feeding in test data.
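As an illustration, a few lines of what test-import.sql might contain. The table names follow from the mappings above, but the column lists are assumptions that depend on your entities and naming strategy (note that Hibernate expects one statement per line in import files):

-- a release row; column names assume Hibernate's default naming for the Release entity
INSERT INTO releases (id, version, name, description, startDate, endDate, status) VALUES (1, 0, 'Release 1.0', 'First cut', '2008-09-01', '2008-10-01', 0);
-- a user story and its link row in the release_stories join table; user_story columns are illustrative
INSERT INTO user_story (id, name) VALUES (10, 'As a user I can log in');
INSERT INTO release_stories (release_id, story_id) VALUES (1, 10);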

That’s pretty much it: the build will now clean the schema and add test data on every run, ready for your unit tests. You can of course tweak it further, for example by moving the plugin execution into a separate profile if the tests start taking too long to execute, as sketched below.
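A rough sketch of that profile-based separation; the profile id is made up, and the plugin block is just the hibernate3-maven-plugin configuration shown earlier moved inside it:

<profiles>
  <profile>
    <id>db-tests</id> <!-- hypothetical profile name -->
    <build>
      <plugins>
        <!-- the hibernate3-maven-plugin configuration from above goes here -->
      </plugins>
    </build>
  </profile>
</profiles>

The CI server would then run mvn test -Pdb-tests to rebuild the schema and inject data, while day-to-day local builds skip that step.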

Another way of injecting data is the DBUnit Maven Plugin, where you structure the data as XML files. More about that in a different blog post.