During app development at Ackee, we have tested database-as-a-service solutions from two major players, Google Cloud SQL Second Generation and Amazon RDS for MySQL, alongside the official MariaDB Docker container running as a single instance without any replication on Google Container Engine. MariaDB was used in the test only as a baseline, to see whether replication causes any significant performance overhead and how MySQL and MariaDB differ. What are the benchmark results?
SQL Database benchmark
The workload compares multi-zone (highly available) instances of Google Cloud SQL Second Generation and Amazon RDS for MySQL with the latest offered MariaDB version running as a Docker container on Google Container Engine. The replication technology used by these three services differs significantly and has a big impact on performance and latency on big, heavy instances (16+ CPU, 104+ GB RAM, as mentioned here), but it doesn't seem so significant on smaller instances. Google Cloud SQL Second Generation uses MySQL's semi-synchronous replication, RDS for MySQL uses block-level synchronous replication, and the MariaDB Docker container doesn't use any replication.
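As a quick sanity check of the replication mode, the state of MySQL's semi-synchronous replication plugin can be inspected from any MySQL client. This is only a minimal sketch: the variables exist only when the rpl_semi_sync plugin is installed, managed services may not expose them, and $host/$user/$pw are placeholders for your own instance credentials.

# Check whether semi-synchronous replication is enabled on the primary
# (requires the rpl_semi_sync_master plugin; values may be hidden on managed services)
mysql -h "$host" -u "$user" -p"$pw" \
  -e "SHOW GLOBAL VARIABLES LIKE 'rpl_semi_sync_master_enabled'; \
      SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_clients';"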
We chose smaller instances with the same storage size (and as many IOPS as we could get) at a comparable price for the service.
Name | Configuration | HA | Disk | Price per month |
Amazon RDS | db.t2.large (2 CPU, 8 GB RAM) | Multi-AZ | 100 GB SSD (provisioned IOPS) | $219+ |
Google Cloud SQL 2nd gen | db-n1-standard-2 (2 CPU, 7.5 GB RAM) | HA failover replica | 100 GB SSD | $220 |
MariaDB Docker on GKE | 2x n1-standard-2 (2 CPU, 7.5 GB RAM) | – | 100 GB SSD | $117 |
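For reference, instances roughly matching the table above could be provisioned from the command line along these lines. This is a hedged sketch only: instance names, passwords and regions are placeholders, and the exact flags for the HA setup may differ between gcloud and AWS CLI versions.

# Google Cloud SQL Second Generation with a failover replica (sketch)
gcloud sql instances create bench-mysql \
  --tier=db-n1-standard-2 --database-version=MYSQL_5_7 \
  --storage-type=SSD --storage-size=100GB \
  --failover-replica-name=bench-mysql-failover --region=europe-west1

# Amazon RDS for MySQL, Multi-AZ with 1k provisioned IOPS (sketch)
aws rds create-db-instance \
  --db-instance-identifier bench-mysql \
  --db-instance-class db.t2.large --engine mysql --engine-version 5.7.17 \
  --allocated-storage 100 --storage-type io1 --iops 1000 \
  --multi-az --master-username bench --master-user-password "$pw"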
To determine throughput, a Sysbench OLTP workload was generated from a MySQL client in the same zone as the primary database instance. The workload is a set of step load tests that double the number of threads (connections) with each run. The data set used is almost three times larger than the total memory of the database instance, to ensure that reads go to disk.

Transactions per second (TPS) results show that Google Cloud SQL and MariaDB are faster than RDS for MySQL. Google Cloud SQL's TPS is much higher than RDS's at up to 16 threads. At 64 threads, variance and the potential for replication lag increase, causing Cloud SQL's performance to drop at higher thread counts. The workload illustrates the differences in replication technology between the three services. Unfortunately, the MariaDB running on GKE wasn't able to accept more than 100 connections (threads); otherwise it would probably have outperformed both managed services. Google Cloud SQL emphasizes performance, allowing for replication lag, which can increase failover times, but without putting data at risk.

Latency
We measured average end-to-end latency with a single client thread (i.e., “pure” latency measurement).
The latency comparison changes as additional threads are added. Google Cloud SQL exhibits lower latency than RDS for MySQL across all tests. Compared to MariaDB, Google Cloud SQL's latency is about the same.
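The single-thread latency numbers can be read directly from sysbench's per-request statistics. A minimal sketch of such a run follows, assuming the same $host/$user/$pw variables as in the benchmark script below; the exact name of the latency section in the output varies between sysbench versions.

# Single-threaded run; the per-request statistics section of the output
# reports min/avg/max latency and the approximate 95th percentile
sysbench --test=oltp --num-threads=1 --max-requests=0 --max-time=300 \
  --mysql-host=$host --mysql-user=$user --mysql-password=$pw \
  --db-driver=mysql --oltp-test-mode=complex run | grep -A 5 "per-request statistics"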


Running the benchmark
We used the following environment configuration and sysbench parameters for our testing.
Test instances:
- Google Cloud SQL v2, db-n1-standard-2 (2 CPU, 7.5 GB RAM), MySQL 5.7, 100 GB PD SSD (3k IOPS) + Failover Replica
- Amazon RDS Multi-AZ, db.t2.large (2 CPU, 8 GB RAM), MySQL 5.7.17, 100 GB SSD, 1k Provisioned IOPS + Multi-AZ Replica
- Google Container Engine with MariaDB Docker container, n1-standard-2 (2 CPU, 7.5 GB RAM), 100 GB PD SSD (3k IOPS), no failover replicas (single-instance deployment sketched below)
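As an illustration only, the single MariaDB container could be started on a GKE cluster roughly like this. It is a sketch with hypothetical names; in practice the 100 GB PD SSD would also be mounted as a persistent volume, which is omitted here.

# Hypothetical single-instance MariaDB on GKE (no replication, persistent disk omitted)
kubectl run mariadb --image=mariadb:latest --port=3306 \
  --env="MYSQL_ROOT_PASSWORD=$pw" --limits="cpu=2,memory=7Gi"
kubectl expose deployment mariadb --port=3306 --target-port=3306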
Test overview:
Sysbench runs used 1 table of 100M rows. With 100M rows, the data set size as loaded was ~25 GB, in order to ensure that the data set didn't fit in RAM. Each step run was 5 minutes, with a one-minute “cool down” period in between.
# prepare: fill the MySQL database with random data (1 table, 100M rows, ~25 GB)
sysbench --test=oltp --oltp-table-size=100000000 \
  --mysql-host=$host --mysql-user=$user --mysql-password=$pw \
  --db-driver=mysql prepare

# run OLTP for an increasing number of threads
for i in 1 2 4 8 16 32 64 128 256 512 1024 2048 4096; do
  echo "running test for $i threads"
  sysbench --test=oltp --max-requests=0 --num-threads=$i --max-time=3600 \
    --mysql-host=$host --mysql-password=$pw --mysql-user=$user \
    --db-driver=mysql --oltp-test-mode=complex run \
    > /data/set-${newDataSet}/${host}-${i}.txt
  echo "cooling down for one minute."
  sleep 60
done
First, we prepare the data (i.e., fill the MySQL database with random data). Second, we run the test for 1, 2, 4, 8 … 4096 threads (such a high number as 4096 threads will most likely fail on most platforms/services), each test running for --max-time seconds with an unlimited number of requests (the test runs without stopping for the given number of threads), and after each test there is a cooldown period for MySQL to “rest and take a breath”.
All the scripts to run and prepare the OLTP tests, the environment settings and provisioning, as well as the plotting of the data with gnuplot, can be found in this GitHub repo.
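As a rough illustration of the plotting step (not the exact scripts from the repo), the TPS value can be pulled out of each sysbench output file and fed to gnuplot; file names follow the ${host}-${i}.txt convention used in the run loop above, and the host names in the plot command are hypothetical.

# extract the "transactions: N (X per sec.)" line from each sysbench output
# into a "threads  TPS" data file (file and host names here are hypothetical)
for i in 1 2 4 8 16 32 64 128 256; do
  tps=$(grep "transactions:" ${host}-${i}.txt | sed -E 's/.*\(([0-9.]+) per sec.*/\1/')
  echo "$i $tps"
done > ${host}-tps.dat

# plot TPS against the number of threads on a log2 x-axis
gnuplot <<'EOF'
set terminal png size 800,500
set output "tps.png"
set logscale x 2
set xlabel "threads"
set ylabel "transactions per second"
plot "cloudsql-tps.dat" using 1:2 with linespoints title "Cloud SQL", \
     "rds-tps.dat"      using 1:2 with linespoints title "RDS"
EOF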
Result
Google Cloud SQL Second Generation outperformed RDS for MySQL when the active thread count is low, as is typical for many web applications. MariaDB performed slightly better than Google Cloud SQL's latest MySQL but wasn't able to accept more than 100 database connections.
Comments
Using a burstable db.t2.large instance for RDS is cheating. Once CPU credits are depleted, it becomes 3x slower. For a correct comparison, db.m4.large should be used.
And how did you get such pricing? I've tried and the numbers just don't add up.
db.t2.large, 100 GB SSD, 1k IOPS, Multi-AZ:
$0.272/h + $0.25/GB + $0.2/IOPS = $196 + $25 + $200 = $421
Even with 3-year all-upfront pricing (which doesn't fit many companies) it is $313.
Overall, this comparison seems biased towards AWS pricing-wise.
db.t2.large, 100 GB SSD, 1k IOPS, Multi-AZ:
$0.272/h + $0.25/GB + $0.2/IOPS = $196 + $25 + $200 = $421
Why is 0.25 × 100 = 25, but 0.2 × 100 = 200? :)) The price for storage is in GB-month and IOPS-month units 🙂 I didn't count any discounts or upfront payments, as that wouldn't be fair and, as you agree, it doesn't fit many companies.
db.t2.large costs $0.272/h while db.m4.large costs $0.35/h. I mentioned that I was trying to compare instances that cost the same. I ran the tests multiple times over a really long timespan; the burst is negligible. Despite that possible advantage of RDS, it lost on performance as well.
Because for 1000 IOPS I calculated it as 0.2 × 1000 and not 0.2 × 100, hence $200. Did I get it wrong?
I'm looking at pricing for both clouds and still don't understand how come most Google services are only about 30% cheaper for similar performance, but for DB IOPS it's around a 5x price difference. I have a feeling that I'm calculating something wrong and am trying to figure it out.
Hmm, I assumed the IOPS are multiplied by 100 as per the storage capacity, because for 100 GB you can get an extra 1k IOPS (you can't have less or more, just the 1k extra).
I can't find a proper explanation, and the official calculator doesn't seem to work properly: http://calculator.s3.amazonaws.com/index.html
It gives me $12.50 for the 100 GB SSD and $65 for the 1k IOPS. I'm a bit confused now.
Hmm, it looks like you are right about the IOPS pricing. According to https://aws.amazon.com/ebs/pricing/ and the calculator,
it really would be $12.50 for 100 GB SSD and $65 for 1k IOPS.
So my configuration was actually about $30 more expensive than I thought.
That was a good comparison of the database-as-a-service solutions from the two major players, and the benchmark results comparing MariaDB and MySQL were useful.