Automating integration tests with docker-compose and Makefile

I remember, a few years ago, the adventure developers faced in order to run integration tests on their machines: they had to install and configure external services (usually some RDBMS and application servers), use a shared testing environment, or something similar. Nowadays, Docker has simplified that situation a lot, since external dependencies are available as images that can be used in any environment.

However, today it’s rather common to see applications that rely on many more external services than before, like different databases. Docker Compose is another tool in the Docker universe that helps applications define all of their external dependencies, making it much easier and cleaner to run (mainly) integration tests. But sometimes these tools alone are not enough. One still needs to start the containers, run some initialisation steps (like creating databases, messaging topics, etc.) and only then run the tests. For that, one can still rely on a quite old but popular tool: Makefiles.

In this article we’ll use a very simple application that has a repository class responsible for adding and retrieving entities in/from a MySQL database. We also want to create a few integration test cases for this class using a real MySQL server. The application is written in Scala and managed with SBT, but don’t get distracted by these details. The ideas shown here can be applied to any other programming language/build tool, as they are more related to the infrastructure than to the application itself.

Without further ado, let’s start!

The application

The application is rather simple and its main goal is to insert and retrieve customers (very simple ones!) from the database. Below you can see the code (which is available here):

Customer.scala

case class Customer(id: Long, name: String)

CustomerRepository.scala

trait CustomerRepository {

  def add(customer: Customer): Customer

  def findAll: Seq[Customer]
}

CustomerRepositoryMySql.scala

import java.sql.{Connection, ResultSet}

import scala.annotation.tailrec

class CustomerRepositoryMySql(connection: Connection) extends CustomerRepository {

  private val insertStatement = connection.prepareStatement(
    """
      | INSERT INTO customers (id, name) VALUES (?, ?)
    """.stripMargin
  )

  private val findAllStatement = connection.prepareStatement(
    """
      | SELECT * FROM customers ORDER BY id
    """.stripMargin
  )

  override def add(customer: Customer): Customer = {
    insertStatement.setLong(1, customer.id)
    insertStatement.setString(2, customer.name)
    insertStatement.executeUpdate()
    customer
  }

  override def findAll: Seq[Customer] = {
    val rs = findAllStatement.executeQuery()
    rsToSeq(rs) { rs =>
      Customer(rs.getLong("id"), rs.getString("name"))
    }
  }

  private def rsToSeq[A](rs: ResultSet)(transformF: ResultSet => A): Seq[A] = {
    @tailrec
    def loop(acc: Seq[A]): Seq[A] = {
      if (!rs.next()) acc
      else loop(acc :+ transformF(rs))
    }
    loop(Seq.empty[A])
  }
}

MysqlConnection.scala

import java.sql.{Connection, DriverManager}

object MysqlConnection {

  def create(host: String, username: String, password: String): Connection = {
    val connectionUrl = s"jdbc:mysql://${host}/testing_with_docker_db"

    Class.forName("com.mysql.jdbc.Driver")

    DriverManager.getConnection(connectionUrl, username, password)
  }
}

Well, as you can see, everything is quite simple and doesn’t require further explanation. Now let’s look at the test code:

MySqlSpec.scala

import java.sql.Connection

import org.scalatest.{BeforeAndAfterAll, BeforeAndAfterEach, Suite}

trait MySqlSpec extends BeforeAndAfterEach with BeforeAndAfterAll {
  this: Suite =>

  var connection: Connection = _

  override protected def beforeEach(): Unit = {
    connection.createStatement().execute("TRUNCATE TABLE customers")
    super.beforeEach()
  }

  override protected def beforeAll(): Unit = {
    connection = MysqlConnection.create("db_server", "root", "root")
    super.beforeAll()
  }

  override protected def afterAll(): Unit = {
    super.afterAll()
    connection.close()
  }
}

CustomerRepositoryMySqlSpec.scala

import org.scalatest.{FlatSpec, Matchers}

class CustomerRepositoryMySqlSpec extends FlatSpec with Matchers with MySqlSpec {

  trait Context {
    val repository = new CustomerRepositoryMySql(connection)
  }

  "A sequence with the existing customers ordered by id" should "be returned if there are customers" in new Context {
    repository.add(Customer(1, "Joe"))
    repository.add(Customer(2, "Mary"))

    repository.findAll shouldBe Seq(
      Customer(1, "Joe"),
      Customer(2, "Mary")
    )
  }

  "An empty sequence" should "be returned if there are no customers" in new Context {
    repository.findAll shouldBe empty
  }
}

Like the application code, the test code is rather simple. One thing to note in the trait MySqlSpec is the line:

connection = MysqlConnection.create("db_server", "root", "root")

This means we are connecting to the host called db_server, using the user root with password root (not something you’d do in production, but totally fine for an integration test). But where do these values come from? That’s where Docker comes into play.

Docker configuration

In order to run our tests, we need to:

  • Create a MySQL server reachable under the name “db_server”
  • Create a database called “testing_with_docker_db”
  • Create a table called “customers”

Let’s take a look at the docker-compose.yml file:

version: '3.5'

services:
  db_server:
    image: mysql:5.7
    ports:
      - 3306
    environment:
      MYSQL_DATABASE: testing_with_docker_db
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ${PWD}/scripts/:/scripts

Let’s break it down:

  • “db_server” is the name/hostname under which this container can be reached
  • The image we are using to set up MySQL is … mysql!
  • As MySQL receives connections on port 3306, that port needs to be open
  • This Docker image can be configured through environment variables
    • MYSQL_DATABASE indicates the name of the database to be created when the container starts
    • MYSQL_ROOT_PASSWORD specifies the password of the root user
  • We still need to create the table, so we’ll have a .sql file in a folder called “scripts”. We then mount a volume to share that local folder with the container.

The aforementioned .sql script:

CREATE TABLE customers (
    id          INT PRIMARY KEY,
    name        VARCHAR(60)
);

Docker Compose offers a lot of features, so I’d highly recommend going through its documentation. One thing that is important for us here is to understand a little bit of networking. The way our docker-compose file is configured, it will create a network where all containers can reach each other by name (that’s why connecting to db_server by name works).
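If you want to see this network with your own eyes, you can inspect it once the containers are up (assuming the project folder is called testingwithdocker, since the folder name is what gives the default network its name):

docker-compose up -d
docker network ls | grep testingwithdocker
docker network inspect testingwithdocker_default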

Ok, so now we have a docker-compose file with everything we need, but we still need to start docker-compose, run the .sql script, run the tests, etc. That’s where Makefiles can help.

Makefile

According to Wikipedia, “a makefile is a file containing a set of directives used with the make build automation tool”.
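In a nutshell: a makefile is composed of targets, each target can list other targets as prerequisites, and each target has a recipe of shell commands (which must be indented with a tab). An illustrative skeleton, with made-up names:

some-target: some-prerequisite
	echo "this runs only after some-prerequisite has completed"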

Let’s take a look at our file:

# These targets are commands, not files, so mark them as phony
.PHONY: clean-containers wait-for-mysql start-containers prepare-database integration-test

pwd=$(shell pwd)
home=$(HOME)

clean-containers:
	docker-compose kill
	docker-compose rm -f

wait-for-mysql:
	docker run --rm --network 'testingwithdocker_default' busybox /bin/sh -c "until nc -z db_server 3306; do sleep 3; echo 'Waiting for DB to come up...'; done"

start-containers: clean-containers
	docker-compose up -d
	make wait-for-mysql

prepare-database: start-containers
	docker-compose exec db_server /bin/sh -c "mysql -u root -proot testing_with_docker_db < /scripts/create_tables.sql"

integration-test: prepare-database
	docker run --rm -v $(pwd):/project -v $(home)/.ivy2:/root/.ivy2 -v $(home)/.sbt:/root/.sbt -w /project --network 'testingwithdocker_default' hseeberger/scala-sbt sbt it:test
	make clean-containers

There’s quite a lot to digest here, so let’s break it down again:

variables

pwd and home are variables pointing to our current and home folders.

clean-containers

Makefiles are composed of targets. clean-containers is a very simple one, responsible for cleaning up all the docker-compose resources used during our integration tests.

wait-for-mysql

The target wait-for-mysql does what its name suggests. It’s needed because when docker-compose starts in the background (as we’ll do), it doesn’t wait for its services to be fully operational. In our case, we need to make sure MySQL is running before we attempt to establish a connection.

There are a few ways of achieving this, but here we use a very simple approach: the nc command checks whether port 3306 is accepting connections.

So our docker run command:

  • will run and then remove itself (--rm) after completion
  • connects to the default network created by docker-compose (testingwithdocker_default)
  • uses the busybox image, which is quite lightweight
  • finally issues the nc command against the host db_server, retrying until port 3306 is open

So what’s happening here is that the MySQL container (db_server) is running in a network called “testingwithdocker_default”, and we spin up a new container that we manually connect to that network. From there, the db_server host is reachable.

start-containers

This target simply starts docker-compose and waits until MySQL is available. Listing clean-containers in the target definition makes it a prerequisite, i.e., clean-containers will be executed first and only then start-containers.

prepare-database

This target needs the containers to be running, so start-containers runs as a prerequisite. Then we can run the script create_tables.sql (note how we use the “/scripts/…” folder, as specified in the volumes directive within docker-compose.yml). docker-compose exec allows one to run a command inside a running container, and that’s what we use to connect to db_server and run the .sql script.

integration-test

At this point everything is set up and the tests can be executed. As this application is built in Scala, we use an SBT image. This container connects to the network created by docker-compose and runs sbt it:test, which causes the integration tests to be executed. As we need access to the source code from inside the container, a few volumes are mounted using -v. Besides making the current folder available under /project, it also mounts some cache folders to make the test execution faster (this is SBT specific; you’ll probably end up doing something similar depending on your build tool).
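For sbt it:test to exist at all, the project needs the IntegrationTest configuration enabled. Here’s a minimal build.sbt sketch of how that could look (the project name, versions and dependency list below are assumptions for illustration, not taken from the article’s repository):

lazy val root = (project in file("."))
  .configs(IntegrationTest)
  .settings(
    name := "testing-with-docker",   // hypothetical project name
    scalaVersion := "2.12.8",        // any 2.x version the code compiles with
    Defaults.itSettings,             // wires src/it/scala into the it:test task
    libraryDependencies ++= Seq(
      "mysql" % "mysql-connector-java" % "5.1.46",     // JDBC driver behind com.mysql.jdbc.Driver
      "org.scalatest" %% "scalatest" % "3.0.5" % "it"  // ScalaTest, only for integration tests
    )
  )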

Now if you want to run your tests, all you have to do is:

make integration-test

There’s no longer any need for a set-up document listing lots of dependencies that engineers need to install on their machines.

So that’s it: with this setup one can run integration tests against external dependencies in a fully automated way. While we only talked about MySQL, the approach can be extended to whatever services are required, like Cassandra, Postgres, Kafka, etc. The complete source code can be found at GitHub.
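For example, adding a Postgres instance would be just another service entry in docker-compose.yml (the service name, image tag and credentials below are illustrative):

  postgres_server:
    image: postgres:10
    ports:
      - 5432
    environment:
      POSTGRES_DB: testing_with_docker_db
      POSTGRES_PASSWORD: root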

Alternatives/Further ideas

While the approach above works, there are other ways of achieving the same result, and ways of making things better depending on your needs. Although I won’t get into the solutions here, I’ll go through some of the ideas.

Run tests from inside the IDE

The approach above only works if your tests run in a container connected to the same network created by docker-compose (which is usually what happens in a CI environment). There are two reasons for that:

  • We are connecting to the host by its name (db_server), and that name is only resolvable inside the Docker network. You can’t connect to it from your local machine (well, perhaps you can, but that requires further changes on your machine, like editing /etc/hosts)
  • Port 3306 is not mapped to a fixed port on your local machine, so it’s only reachable through that network.

So, if you want to be able to run the tests from inside the IDE, you need to map the port to your local machine and then connect using localhost (or the IP of your docker-machine) instead of db_server.
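A minimal sketch of both changes (the DB_HOST environment variable here is my own addition for illustration, not something from the original code). First, map the container port to a fixed host port in docker-compose.yml:

    ports:
      - "3306:3306"

Then make the host configurable in MySqlSpec, defaulting to the in-network name:

connection = MysqlConnection.create(sys.env.getOrElse("DB_HOST", "db_server"), "root", "root")

When running from the IDE you’d set DB_HOST=localhost; inside the Docker network, nothing changes.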

Waiting for MySql to be ready

Waiting for MySQL (or any other external service) to be in a completely healthy state can be a little annoying, and unfortunately there isn’t much to be done to make this better. The docker-compose V2 format has healthcheck and depends_on mechanisms that help a bit, and some people prefer them. The docker-compose configuration would look like:

version: '2.3'

services:
  db_server:
    image: mysql:5.7
    ports:
      - 3306
    environment:
      MYSQL_DATABASE: testing_with_docker_db
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ${PWD}/scripts/:/scripts
    healthcheck:
      test: ["CMD-SHELL", "mysql -u root -proot testing_with_docker_db -e 'show tables;'"]
      interval: 3s
      timeout: 3s
      retries: 10

  wait_services:
    image: busybox
    depends_on:
      db_server:
        condition: service_healthy

Then you could remove the wait-for-mysql target from the Makefile. So basically here we are defining the conditions for Docker to consider the MySQL container healthy, and then we add another container that only goes into the ready state when the services it depends on (db_server in this case) are completely healthy. It means that docker-compose up -d will only return when those conditions are met, so you don’t need to manually wait for MySQL in your Makefile. However, the condition directive was removed from the V3 format.
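With that in place, the start-containers target would shrink to something like this sketch:

start-containers: clean-containers
	docker-compose up -d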

Conclusion

I hope you got an idea of how using Docker/docker-compose together with Makefiles can help automate your builds and make them more reliable and easier to run. Of course this approach can be modified in a few ways (some ideas were shown above), but the main idea stays the same. Once again, the full code can be found here.

 
