
Percona Server

This tag is associated with 4 posts

Running multiple MySQL instances in parallel

I know, I haven’t been posting much lately. The 5.5 upgrades got postponed because the new storage platform needed my immediate attention, and being a speaker at the Percona Live conference in April also demands a lot of preparation.

One of the things I wanted to try out is running multiple MySQL instances on the same machine. The concept has remained in the back of my mind ever since I attended Ryan Thiessen’s presentation at the MySQL Conference 2011, but we never had a proper use case for it. Well, with the new storage platform it would be really beneficial, so this is an excellent use case to try it out! So what have I been busy with in the past week? That’s right: running multiple MySQL instances on one single server. ;)

Even though it is not well documented and nobody describes the process in depth, it is not that complicated to get multiple instances running next to each other. However, it does involve a lot of changes in the surrounding tools, scripts and monitoring. For example, this is what I have changed so far:

1. MySQL startup script
Yes, you really want this baby to support multiple instances. I’ve learned my lesson from the wild growth of copies of the various MMM init scripts.

2. Templating of configs
If you want to maintain the instances well, you should definitely start using a fixed template that includes a defaults file. In our case I created one defaults file for all instances, and each individual instance overrides the settings from the defaults file. Also, some tuning parameters are now separated from the main config.

3. Automation of adding new instances to a host
Apart from creating a bunch of config files and a data directory, you really want some intelligence when adding another instance (see the sketch after this list). For example, only the innodb_buffer_pool_size needs to be adjusted for each new instance you add.

4. Automation of removing instances from a host
Part of the step above: if you can add instances, then you need to be able to remove an instance. This should be done with care as it is destructive. ;)
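
To make the templating and provisioning ideas above a bit more concrete, here is a minimal sketch of what an “add instance” script could look like. To be clear: the paths, file names and buffer pool sizing below are made up for illustration and are not our actual tooling.

#!/bin/bash
# Hypothetical sketch: provision a new MySQL instance on this host.
# Assumes a shared defaults file at /etc/mysql/defaults.cnf; all names
# and paths are illustrative, not the real production scripts.
PORT=$1                              # e.g. 3307
DATADIR=/var/lib/mysql-${PORT}
CONFIG=/etc/mysql/instance-${PORT}.cnf

mkdir -p "${DATADIR}"

# Per-instance config: pull in the shared defaults, then override
# everything that must be unique per instance.
cat > "${CONFIG}" <<EOF
!include /etc/mysql/defaults.cnf

[mysqld]
port      = ${PORT}
socket    = /var/run/mysqld/mysqld-${PORT}.sock
datadir   = ${DATADIR}
server-id = ${PORT}
# The buffer pool is the one tuning knob that changes per instance:
innodb_buffer_pool_size = 4G
EOF

mysql_install_db --defaults-file="${CONFIG}" --datadir="${DATADIR}"
mysqld_safe --defaults-file="${CONFIG}" &

The removal script would essentially be the inverse: stop the instance, then delete its config and data directory.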

After this there is still a long list of things to be taken care of:
1. Automation of replication setup
The plan is to keep things simple and have two hosts replicate all instances to each other. So instance 3307 on host 1 will replicate to instance 3307 on host 2 (and back), instance 3308 on host 1 will replicate to instance 3308 on host 2 (and back), etc.
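
A minimal sketch of what that pairing could look like, assuming the replication users already exist; host names, credentials and log coordinates are placeholders:

#!/bin/bash
# Hypothetical sketch: master-master replication per instance pair.
# host1/host2, the repl user and the binlog coordinates are placeholders.
for PORT in 3307 3308 3309; do
  mysql -h host1 -P "${PORT}" -e "
    CHANGE MASTER TO
      MASTER_HOST='host2', MASTER_PORT=${PORT},
      MASTER_USER='repl', MASTER_PASSWORD='secret',
      MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
    START SLAVE;"
  mysql -h host2 -P "${PORT}" -e "
    CHANGE MASTER TO
      MASTER_HOST='host1', MASTER_PORT=${PORT},
      MASTER_USER='repl', MASTER_PASSWORD='secret',
      MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
    START SLAVE;"
done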

2. HA Monitoring needs thinking/replacement
I haven’t found an HA monitoring tool that can handle multiple instances on one host.
Why is this a problem?
If only one of the MySQL instances needs maintenance, you can’t use the current tools unless you are willing to make all the other instances unavailable as well. Also, what will you do when the connection pool of one instance gets exhausted? Or when one instance dies on both servers?

4. Backup scripts need some changes
Obviously our backup tools (wrapper scripts around xtrabackup) need some alteration. We are now running multiple instances, so we need to back up more than one database.
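
For illustration, a loop over all instances could look roughly like this, assuming one config file per instance as sketched earlier; paths and naming are made up:

#!/bin/bash
# Hypothetical sketch: back up every instance on this host with xtrabackup.
# Assumes one /etc/mysql/instance-<port>.cnf per instance; paths are illustrative.
for CONFIG in /etc/mysql/instance-*.cnf; do
  PORT=$(basename "${CONFIG}" .cnf | sed 's/instance-//')
  innobackupex --defaults-file="${CONFIG}" \
    --socket="/var/run/mysqld/mysqld-${PORT}.sock" \
    "/backups/${PORT}/"
done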

4. Cloning scripts need some changes
We have a script that can clone a live database (utilizing xtrabackup) to a new host. Apart from the fact that it assumes it only needs to clone one single database, we might also go for full cloning of all instances.

5. Monitoring needs to understand multiple instances
Our current (performance) monitoring tools, like Nagios/Cacti/etc., assume only one MySQL instance per host. At best I can apply the templates multiple times, but that also multiplies the number of other checks by the same factor.

And there are obviously a lot more things I haven’t thought of yet. As you can see, I’ll be quite occupied in the upcoming period…

MySQL 5.5 upgrade blues (part one)

At the company I work for we are still running Percona Server 5.1 in production and are slowly heading towards a Percona Server 5.5 rollout. It took a lot of preparation over the past few months (writing a my.cnf conversion script, for example) and a lot of testing. A couple of machines have already been upgraded to 5.5 this week to compare performance and stability. So far the machines have proved stable enough to keep them on 5.5, and even better: we already see a couple of benefits! However, the title wouldn’t contain “blues” if everything had been a breeze, right? ;)

The first problem we ran into was that our Cacti templates broke due to the changed InnoDB status output. So I headed to the Cacti templates and checked the issue tracker to see whether the problem was already known. Apparently it was, but unfortunately not fixed yet. Luckily, writing the fixes myself wasn’t much of a problem.

Secondly, we ran into the issue that the history list grew from a “steady” 200 to 4000 after the upgrade. Searching on this topic revealed a problem with the purge operations, but it was not clear to me what exactly the problem was. According to the MySQL documentation the defaults should suffice. Uhm, right?
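
For reference, the history list length can be read from the TRANSACTIONS section of the standard InnoDB status output; a quick way to keep an eye on it:

# Watch the purge backlog: the history list length appears in the
# TRANSACTIONS section of SHOW ENGINE INNODB STATUS.
mysql -e "SHOW ENGINE INNODB STATUS\G" | grep "History list length"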

Now, I knew some things had changed in 5.5 regarding purging: a separate purge thread had already been introduced in 5.1, but could it really have changed that much? So I tried to find out what each and every purge variable does:

innodb_max_purge_lag 0
innodb_purge_batch_size 20
innodb_purge_threads 1
relay_log_purge ON

At first I assumed that increasing the batch size would make the purging more efficient: the larger the batch, the more it could handle, right?
Wrong: the larger you set it, the later it will purge! This is what the MySQL documentation says about it:

The granularity of changes, expressed in units of redo log records, that trigger a purge operation, flushing the changed buffer pool blocks to disk.

So the name is actually confusing! In our case I raised it from 20 to 40, which made things worse, and then lowered it from 40 to 10, which brought the history list down from 4000 to 1800.

Then I decided to see what the purge lag would do. Changing the purge lag as described by Peter did indeed lower the history list for a short while, but MMM also kicked the 5.5 server out of its pool because it started lagging behind in replication! So this is definitely something to keep in the back of your mind!

I did not change the purge threads to 0 since this machine runs in our production environment. Also confusing is the deprecated innodb_use_purge_thread, which could be set to values other than 0 and 1 but is marked as experimental.

This graph shows best how it worked out:
[Graph: InnoDB transactions over a week]


In the end I lowered the purge batch size to 1 and the history list went back from 4000 to its “normal” 200.
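
For completeness, this is roughly how that final change looks in the config; the file path is a placeholder, and since the variable is not dynamic in 5.5 a restart is required:

# Hypothetical sketch: persist the final setting (path is illustrative).
# innodb_purge_batch_size is not dynamic in 5.5, so restart mysqld afterwards.
cat >> /etc/mysql/my.cnf <<'EOF'
[mysqld]
innodb_purge_batch_size = 1
EOF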

I’m positive a part two will come shortly, so stay tuned. ;)

LRU followup #2

As I wrote in the previous LRU followup, the innodb_lru_dump_restore directive has the drawback that it only starts loading the LRU dump file after startup. I’m happy to say that I’m not the only one who noticed: Vadim Tkachenko noticed the same a few months back when he was integrating Galera replication, and he filed a bug report for it.

It has now been resolved as of Percona Server 5.1.59-13.0 and 5.5.16-22.0 (both released this morning) with the configuration directive innodb-blocking-lru-restore.

However, the bug report states there are two different directives: innodb-blocking-lru-restore (5.1) and innodb-blocking-buffer-pool-restore (5.5). I just tested this on our test machine, and the latter is the correct directive for 5.5; the release notes of the 5.5 release were probably copied from the 5.1 release. I’ve commented on this in the 5.5 release post, so I think it will be corrected soon.
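
On our test machine this simply means adding the directive to the config; a minimal sketch, assuming the boolean ON/OFF form and an illustrative config path:

# Hypothetical sketch: enable blocking LRU restore on Percona Server 5.5.
# Directive name as given in the bug report; restart mysqld afterwards.
cat >> /etc/mysql/my.cnf <<'EOF'
[mysqld]
innodb_blocking_buffer_pool_restore = ON
EOF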

The directive works as it should: during the InnoDB initialization it loads the LRU dump:

mysqld: 111017 10:13:12 InnoDB: Completed initialization of buffer pool
mysqld: 111017 10:13:12 InnoDB: highest supported file format is Barracuda.
mysqld: 111017 10:13:12 InnoDB: Restoring buffer pool pages from ib_lru_dump
mysqld: 111017 10:13:12 InnoDB: Waiting for the background threads to start

Works like a charm and I’m very happy with this change. :)
Of course the LRU dump of our test machine is very small (512MB buffer pool), so it was loaded instantly. On a normal machine it will take a bit longer…

LRU Dump restore

Yesterday one of my colleagues pointed me to the innodb_lru_dump_restore variable available in the Percona MySQL Server distribution. See more about it here:
innodb_lru_dump_restore
LRU means the list is kept in Least Recently Used order, which makes it easier to drop old items from the pages. Judging from the description it can be a very nice option for already running servers to maintain this list across a restart, but it did not solve my immediate problem: I had to warm up two freshly cloned read slaves on one of our busiest databases.
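
As a side note: if I recall correctly, Percona Server also lets you trigger the dump and restore manually through XtraDB’s admin command interface, which is handy for experimenting. A sketch, with the syntax as I remember it from the Percona docs, so double-check before relying on it:

# Manually trigger an LRU dump/restore via XtraDB's admin command interface
# (Percona Server; writes/reads the ib_lru_dump file in the datadir).
mysql -e "SELECT * FROM information_schema.XTRADB_ADMIN_COMMAND /*!XTRA_LRU_DUMP*/;"
mysql -e "SELECT * FROM information_schema.XTRADB_ADMIN_COMMAND /*!XTRA_LRU_RESTORE*/;"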

Normally I use mk-query-digest with the --processlist option on the (active) master, but in our setup this is seriously flawed because…
