Using MySQL for Testing (on Laravel)

Tony Messias

We have a really good testing culture at madewithlove and, in one of the projects I am working on, we have a decent suite of tests. A big chunk of these tests are Feature Tests. In this article, I’ll describe the process of switching a project’s test suite from running against an in-memory database (SQLite) to a disk-based database instead (MySQL), and some tips on how it can actually speed things up.

This particular project was using an in-memory SQLite database, mainly because it’s practical. We can run phpunit without any preparation script to, for instance, create the database. Also, performance-wise, it’s just faster to work with RAM than with an actual filesystem.

However, it’s not all flowers and sunshine. There were some issues where certain SQL behaves differently between our testing and production databases. That’s bad, because a passing test can be a false positive: the query works on SQLite but fails on MySQL. If you don’t have good coverage during QA, you might only discover these nuances when your code fails in production.
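One classic example of this kind of divergence (a hypothetical illustration, not taken from the project): SQLite’s type affinity lets almost any value into any column, while MySQL 5.7 in its default strict mode rejects the same write. Python’s built-in sqlite3 module makes it easy to see:

```python
import sqlite3

# In-memory SQLite database, just like a test suite would use.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, age INTEGER)")

# SQLite's type affinity happily stores a string in an INTEGER column;
# MySQL in strict mode (the default since 5.7) rejects this same INSERT.
conn.execute("INSERT INTO users (age) VALUES (?)", ("not a number",))
row = conn.execute("SELECT age FROM users").fetchone()
print(row[0])  # the string survives untouched
```

A test asserting on that column would pass against SQLite and blow up against production MySQL.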

I was trying to solve this, and the most logical answer was to use the same database for testing as we do in production. It makes sense, but I had the feeling it would slow down our suite significantly. Additionally, I would have to create some workflow to set up a test database, which the developer would need to run occasionally before running the test suite.

Turns out it wasn’t.

Creating the test database

This project uses Docker and Docker Compose, and the database is MySQL. Unfortunately (for me), the official container image only accepts a single database name via environment variable to be created when the container boots, and I really dislike the idea of needing a setup script to create the test database before we can run phpunit.

Well, a container is actually really cheap to run (resources-wise), so I figured I could have another service described in the docker-compose.yml that is also a MySQL container, but only for testing. The docker-compose.yml file with both database services described looks like this:

  db:
    image: mysql:5.7
    volumes:
      - mysqldata:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: "${DB_PASSWORD}"
      MYSQL_DATABASE: "${DB_DATABASE}"
      MYSQL_USER: "${DB_USERNAME}"
      MYSQL_PASSWORD: "${DB_PASSWORD}"
      MYSQL_ROOT_HOST: "%"
    ports:
      - "${DOCKER_DB_PORT}:3306"
    networks:
      - appnet

  testdb:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: "${DB_PASSWORD}"
      MYSQL_DATABASE: "${DB_DATABASE}"
      MYSQL_USER: "${DB_USERNAME}"
      MYSQL_PASSWORD: "${DB_PASSWORD}"
      MYSQL_ROOT_HOST: "%"
    networks:
      - appnet

OK, that looks nice. Host environment variables can be used as values for the container’s environment variables because Docker Compose reads the .env file when it exists. That’s awesome. It means we only have to point our tests to that new service by setting a DB_HOST environment variable in our phpunit.xml.dist file, something like this:

<env name="DB_HOST" value="testdb"/>

No other DB_* environment variable is needed, because we are reusing the same ones as our local setup, which are described in the .env file.
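In context, the `<php>` section of phpunit.xml.dist ends up looking something like this (the APP_ENV override is Laravel’s usual convention; only the DB_HOST line is new):

```xml
<php>
    <env name="APP_ENV" value="testing"/>
    <!-- Point the suite at the throwaway MySQL container; every other
         DB_* variable is inherited from the .env file. -->
    <env name="DB_HOST" value="testdb"/>
</php>
```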

So, is that it? Yes. Running the tests just works:

Tests running against SQLite

$ docker-compose exec app ./vendor/bin/phpunit
PHPUnit 7.3.5 by Sebastian Bergmann and contributors.

...............................................................  63 / 710 (  8%)
............................................................... 126 / 710 ( 17%)
............................................................... 189 / 710 ( 26%)
............................................................... 252 / 710 ( 35%)
............................................................... 315 / 710 ( 44%)
............................................................... 378 / 710 ( 53%)
........................................S...................... 441 / 710 ( 62%)
............................................................... 504 / 710 ( 70%)
............................................................... 567 / 710 ( 79%)
............................................................... 630 / 710 ( 88%)
............................................................... 693 / 710 ( 97%)
.................                                               710 / 710 (100%)

Time: 1.4 minutes, Memory: 86.25MB

OK, but incomplete, skipped, or risky tests!
Tests: 710, Assertions: 1838, Skipped: 1.

Tests running against MySQL

$ docker-compose exec app ./vendor/bin/phpunit
PHPUnit 7.3.5 by Sebastian Bergmann and contributors.

...............................................................  63 / 710 (  8%)
............................................................... 126 / 710 ( 17%)
............................................................... 189 / 710 ( 26%)
............................................................... 252 / 710 ( 35%)
............................................................... 315 / 710 ( 44%)
............................................................... 378 / 710 ( 53%)
........................................S...................... 441 / 710 ( 62%)
............................................................... 504 / 710 ( 70%)
............................................................... 567 / 710 ( 79%)
............................................................... 630 / 710 ( 88%)
............................................................... 693 / 710 ( 97%)
.................                                               710 / 710 (100%)

Time: 20.65 seconds, Memory: 80.25MB

OK, but incomplete, skipped, or risky tests!
Tests: 710, Assertions: 1838, Skipped: 1.

Great! It just works. Wait, whut?! How can MySQL be faster than running the tests against an in-memory database? Turns out the RefreshDatabase trait we use is smart. For an in-memory database, it has to run the migrations in the setup of every single test, while for an actual database (one that supports transactions) it runs the migrations only once and wraps each test in a transaction.
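The transaction trick can be sketched in miniature (plain Python and SQLite here, just to show the mechanics; this is not Laravel’s actual code): migrate once, then wrap each test in a transaction that gets rolled back.

```python
import sqlite3

# "Migrate" once: the schema is created a single time, like
# RefreshDatabase does for a transactional database.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")

def run_test(conn):
    # Each test runs inside a transaction...
    conn.execute("BEGIN")
    conn.execute("INSERT INTO posts (title) VALUES ('hello')")
    assert conn.execute("SELECT COUNT(*) FROM posts").fetchone()[0] == 1
    # ...which is rolled back afterwards, so nothing persists.
    conn.execute("ROLLBACK")

for _ in range(3):  # three "tests", zero re-migrations
    run_test(conn)

print(conn.execute("SELECT COUNT(*) FROM posts").fetchone()[0])  # 0
```

Rolling back instead of re-migrating is what makes the disk-based suite fast: the expensive schema work happens once, and each test only pays for its own queries.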

That’s great! And I thought using SQLite was faster.

Using tmpfs

I was ready to call it a day when I remembered a trick I saw passing by on Twitter about using tmpfs.

tmpfs is a common name for a temporary file storage facility on many Unix-like operating systems. It is intended to appear as a mounted file system, but stored in volatile memory instead of a persistent storage device. A similar construction is a RAM disk, which appears as a virtual disk drive and hosts a disk file system.

Wikipedia (https://en.wikipedia.org/wiki/Tmpfs)

Turns out Docker (on Linux) supports tmpfs mounts. So now we can have the consistency of MySQL with the speed of an in-memory database. Sounds easy enough; our testdb service in the docker-compose.yml file looks like this:

  testdb:
    image: mysql:5.7
    tmpfs: /var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: "${DB_PASSWORD}"
      MYSQL_DATABASE: "${DB_DATABASE}"
      MYSQL_USER: "${DB_USERNAME}"
      MYSQL_PASSWORD: "${DB_PASSWORD}"
      MYSQL_ROOT_HOST: "%"
    networks:
      - appnet

Is that it?! Let’s run the tests again (after running docker-compose down && docker-compose up -d):

$ docker-compose exec app ./vendor/bin/phpunit
PHPUnit 7.3.5 by Sebastian Bergmann and contributors.

...............................................................  63 / 710 (  8%)
............................................................... 126 / 710 ( 17%)
............................................................... 189 / 710 ( 26%)
............................................................... 252 / 710 ( 35%)
............................................................... 315 / 710 ( 44%)
............................................................... 378 / 710 ( 53%)
........................................S...................... 441 / 710 ( 62%)
............................................................... 504 / 710 ( 70%)
............................................................... 567 / 710 ( 79%)
............................................................... 630 / 710 ( 88%)
............................................................... 693 / 710 ( 97%)
.................                                               710 / 710 (100%)

Time: 15.55 seconds, Memory: 78.25MB

OK, but incomplete, skipped, or risky tests!
Tests: 710, Assertions: 1838, Skipped: 1.

About a 5-second gain. It’s not much compared to the previous speed-up, but that’s because of the RefreshDatabase trick I mentioned earlier: disk I/O only happens once, when the migrations run. The tests run inside transactions that are rolled back at the end, so almost no actual I/O happens.

If you’re not convinced, let’s run a benchmark on the folder we mounted as tmpfs. We can use the dd CLI tool to test I/O operations. First, let’s get a terminal inside the testdb container:

$ docker-compose exec testdb bash
root@<container-id>:/#

Let’s write 512MB somewhere and see how long it takes:

$ # test file written in the root folder (normal filesystem)
$ time sh -c "dd if=/dev/zero of=/testfile bs=512 count=1000000 && sync"
1000000+0 records in
1000000+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 9.63401 s, 53.1 MB/s

real	0m15.321s
user	0m0.224s
sys		0m0.252s

$ # now the testfile will be written in the tmpfs mount
$ time sh -c "dd if=/dev/zero of=/var/lib/mysql/testfile bs=512 count=1000000 && sync"
1000000+0 records in
1000000+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 1.11678 s, 458 MB/s

real	0m1.143s
user	0m0.228s
sys		0m0.895s

Wow, that’s actually really fast. From ~15s down to ~1.1s. Really cool, right? I’ve got an SSD here, so if you are not on an SSD, your gains would be much, much higher.

Conclusion

Turns out using a real database can actually be way faster than running against an in-memory SQLite one. Better yet, we can even trick the MySQL container with tmpfs, so it essentially behaves as an in-memory database.

I’m a bit sad that on a Mac the speed gains were not as big. We did some tests and the SQLite version took 3.3 minutes while the MySQL version took 2.2 minutes; with SQLite and Paratest, though, the whole suite runs in 22 seconds. Still worth it for us: we can run SQLite locally while CI runs on MySQL (as a safety net). SQLite was configured locally by overriding the environment variables in the phpunit.xml file.
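That local override, assuming Laravel’s default variable names, can be as small as a git-ignored phpunit.xml copy containing:

```xml
<env name="DB_CONNECTION" value="sqlite"/>
<env name="DB_DATABASE" value=":memory:"/>
```

Since phpunit.xml takes precedence over phpunit.xml.dist, developers get the fast local run while CI keeps using the MySQL container.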

The CI build time dropped from 11 minutes to about 4 minutes in total (we do have other steps besides running PHPUnit), so that was awesome as well.

Hope you enjoyed it.
