Very short update from my side: I’ll be presenting at FOSDEM in Brussels (1-2 February 2014) and Percona Live MySQL Conference in Santa Clara (1-4 April 2014).
At FOSDEM I will present “Galera replication in real life”, which centers around two use cases for Galera: adding new clusters to our sharded environment and migrating existing clusters to a new Galera cluster.
At the Percona Live MySQL Conference I will present our globally distributed storage layers. Alongside our globally sharded environment we have built a new environment called ROAR (Read Often, Alter Rarely) that also needs to be distributed globally.
Both are interesting talks and I really look forward to presenting at these great conferences. So if you have the ways and means to attend either one: you should!
I’m currently wrapping up the last things at the office to prepare for my presentation about MySQL-Statsd at the Percona Live London conference next week. It will be a revised version of the talk I gave at the Percona Live Conference & Expo in Santa Clara, but this time more focused on the practical side and, obviously, on the MySQL-Statsd daemon we open sourced recently. So if you missed that talk or think a follow-up is needed, you should definitely attend! The slides will become available immediately after the talk.
I’m also looking forward to attending a lot of other talks, for instance The Shard Revisited: Tools and Techniques Used at Etsy and MySQL, the eBay Classifieds Way, and of course the tutorials.
It is great to see the bus trip to the community dinner organized by MariaDB. I would not really mind taking the tube for 17 minutes, but getting a bus ride in a Routemaster is obviously a lot more fun and comfortable!
So see you at the conference next Monday and Tuesday!
It is far from perfect and the todo list is still long. Since it is a public repo, you are welcome to collaborate with us and make it even better. I can’t wait to see the first pull request coming in!
You can also talk to us directly at Percona Live London, where we will bring a revised version of our Santa Clara talk. This time it will be aimed more at practicality: it will contain fewer comparisons with existing tools, be less theoretical and focus more on how to actually do it.
A bit of a shame to have to do it this way: I tried several times to add the Spil Games Engineering feed to the Planet MySQL feed adder, but it fails on the feed validation and suggests validating the feed via feedvalidator.org. However, feedvalidator.org validates it perfectly fine as an RSS 2.0 feed. I used the Planet MySQL feedback form twice but got no reply, so I think I will file a bug report later today.
It has been a bit of a rollercoaster ride for us since the Percona Live London posting. The team has expanded with two new DBAs, I was invited to give a talk at the Percona Live Conference & Expo 2013 in April, and at the same time Spil Games is organizing the second MySQL User Group NL meeting on Friday, February 22nd. I did not realize it was already next week and never posted about it, so here it is!
The meeting schedule is as follows:
17:00 Spil Games Pub Open
18:15 “MySQL User Defined Functions” by Roland Bouman
20:00 “Total cost of ownership” by Zsolt Fabian (Spil Games)
Before and after the meeting, drinks and snacks will be served in our pub. You can chat with others, mingle with the Spil Games employees or, if you are very shy, play some pool/foosball/pinball.
I’m happy we are presenting the TCO talk at this User Group meeting. Zsolt will show his findings on several things you need to keep in mind if you wish to calculate your TCO, so it will be more of a general guide on how to do it yourself. Of course we will share some of our own WTFs/facepalms and other interesting facts we found during our own investigation. 😉
In case you are attending, there are several ways to get to the Spil Games HQ:
If you travel by car, just punch in our address in your navigation:
1223 RE Hilversum
Do note that our entrance has moved to the new building on our campus, behind these nicely graffiti-painted doors:
The second option is to come by public transport.
Coming from the direction of Amsterdam/Amersfoort:
Take the train to Hilversum (central) and either walk to our new office using Google Maps (about a 15 minute walk), or take bus #2 (towards Snelliuslaan), hop off at the Minckelersstraat (ask the driver) and walk the remaining few hundred meters.
Coming from the direction of Utrecht:
Take the train to Hilversum Sportpark and walk to our new office using Google Maps (an 8 to 10 minute walk).
Hope to see you all next Friday at the Spil HQ! 🙂
Many thanks to all those who attended my talk at the Percona Live London 2012 conference!
I did put the location in the last slide, but just in case you missed the last slide (or missed my talk) you can find them here:
I received a couple of questions afterwards (in the hallways of the conference) that made me realize I forgot to clear up a couple of things.
First of all, the essence of shifting the data ownership of a specific GID to a specific datacenter while ensuring data consistency is that one Erlang process within that very same datacenter is the owner of that data. This also means that this Erlang process is the only one that can write to the data of this GID. Don’t worry: for every GID there is a process that is the data owner, and Erlang is able to cope with the enormous scale here.
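To illustrate the routing this implies (the datacenter names and the simple modulo scheme below are my own, hypothetical, and far simpler than the real Erlang implementation), the key property is that every node deterministically agrees on which datacenter owns a GID:

```python
DATACENTERS = ["ams", "sfo", "sgp"]  # hypothetical datacenter names

def owning_datacenter(gid: int) -> str:
    """Deterministically map a GID to the single datacenter that owns it."""
    return DATACENTERS[gid % len(DATACENTERS)]

# Because every node computes the same answer, all writes for a given GID
# funnel through one datacenter (and there, in the real system, through the
# single Erlang process that owns the data for that GID).
print(owning_datacenter(7))  # → sfo
```

Any node receiving a write for a GID it does not own would forward it to the owning datacenter instead of applying it locally, which is what keeps the data consistent.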
Second of all, the whole purpose of the satellite datacenter (fully virtualized) is to have a disposable datacenter, while the master datacenter (mostly virtualized, except for storage) is permanent. Imagine that, next to the existing presence (master or satellite DC) in one country, we also expect big growth due to the launch of a new game: we could then easily create a new satellite datacenter by getting a couple of machines in the cloud. This way our hybrid cloud can easily be expanded, either with virtuals or with datacenters. I thought this was a bit too off-topic for the talk, but apparently it raised some questions.
If you have any questions, don’t hesitate to ask! 🙂
On one of the clusters at Spil we noticed a sudden increase in the length of the history list and a steep increase in the size of the ibdata file in the MySQL directory.
I did post a bit about this topic earlier regarding MySQL 5.5 but this cluster is still running 5.1 and unfortunately 5.1 does not have the same configurable options to influence the purging of the undo log…
What it boils down to is that the purge lag is largely determined by the length of the history list: the longer the list, the further purging has fallen behind.
On 5.5 it is also influenced by the number of purge threads and the purge batch size. I toyed around with these settings in my earlier post and tuning them helped. However, the only setting I could change on 5.1 is the purge lag in milliseconds, and that was already set to 0. In other words: I could not fiddle around with this. This time it wasn’t an upgrade to 5.5 either, so I could not blame that again. 😉
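For reference, the history list length can be read from the TRANSACTIONS section of SHOW ENGINE INNODB STATUS; a minimal sketch of extracting it programmatically (the sample text is abbreviated and the function name is my own):

```python
import re

def history_list_length(innodb_status: str) -> int:
    """Extract the history list length from SHOW ENGINE INNODB STATUS output."""
    match = re.search(r"History list length (\d+)", innodb_status)
    if match is None:
        raise ValueError("history list length not found in status output")
    return int(match.group(1))

# Abbreviated sample of the TRANSACTIONS section (illustrative values):
sample = """
------------
TRANSACTIONS
------------
Trx id counter 0 80157601
Purge done for trx's n:o < 0 80154573 undo n:o < 0 0
History list length 6
"""
print(history_list_length(sample))  # → 6
```

Graphing this number over time is the easiest way to spot purge falling behind before the ibdata file starts growing.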
So what was different on this server then? Well, the only difference was that it had “a bit” of disk utilization: around 80% during peak hours. Since it is not used as a front end server this does not affect the end users, only the (background) job processes that process and store data on this server. However, it could be that due to the IO utilization the purging started to lag behind, and the history list became too large to catch up with under the current configuration.
How did we resolve it then? After I read this quote from Peter Zaitsev on Marco Tusa’s post, the solution became clear:
Running Very Long Transaction If you’re running very long transaction, be it even SELECT, Innodb will be unable to purge records for changes which are done after this transaction has started, in default REPEATABLE-READ isolation mode. This means very long transactions are very bad causing a lot of garbage to be accommodated in the database. It is not limited to undo slots. When we’re speaking about Long Transactions the time is a bad measure. Having transaction in read only database open for weeks does no harm, however if database has very high update rate, say 10K+ rows are modified every second even 5 minute transaction may be considered long as it will be enough to accumulate about 3 million of row changes.
The transaction isolation level defaults to REPEATABLE-READ and we favor it on many of our systems, especially because it performs better than READ-COMMITTED. However, a storage server running background jobs does not need this isolation level, especially not if it was blocking the purge!
So in the end, changing the transaction isolation level to READ-COMMITTED did the job for us.
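For completeness, a sketch of what that change looks like; the same thing can be done per session with SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED, but since this server only runs background jobs it can be set server-wide (my.cnf fragment, option name as in MySQL 5.1):

```ini
# my.cnf on the background-job storage server: with READ-COMMITTED, long
# running SELECTs no longer force InnoDB to keep old row versions around,
# so the purge thread can keep the history list short.
[mysqld]
transaction-isolation = READ-COMMITTED
```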
Some other things: tomorrow my team is attending the MySQL User Group NL meeting, and in three weeks’ time I’ll be speaking at Percona Live London:
So see you there!
It has been a while since I wrote on this blog. Basically I had too much on my mind (expanding my department, holidays, etc.) to actually write here, and I promise to post more regularly from now on. 😉
Anyway, as the title already suggests: I found out how you can use CURDATE() in a wrong way. One of the developers in my company asked me to help him out as his code all of a sudden did not work properly anymore. Or rather: it used to process several thousand rows and all of a sudden it processed none.
I looked at his code snippet and it was quite a long query with a lot of brackets:
SELECT SUM(some_count_col), logdate, loghour FROM logs
WHERE (logdate = CURDATE() AND loghour = HOUR(NOW()))
GROUP BY logdate, loghour;
Column-wise, logdate is of type DATE and loghour of type TINYINT.
note that this is, obviously, not the original query, but it is similar
Apart from the fact that his usage of brackets made the query quite unreadable, I was quickly able to simplify it.
Thank you very much if you attended my session at the Percona Live MySQL Conference!
I promised some people to share my slides, so I posted them on the page at Percona:
Spil Games: Outgrowing an internet startup (Percona Live MySQL Conference 2012) on SlideShare
Click here if you need a direct link
My opinion of the conference: it was amazing! The conference was very well organized, the atmosphere was great and I met so many great people that I had a tough time remembering all their names and companies. The content of the talks was really well balanced and most of the ones I attended were very interesting.
The most interesting talk of the conference was the Scripting MySQL with Lua and libdrizzle inside Nginx. It was a shame only a few people attended the talk and that they ran out of time before they could complete the presentation. 😦
Apart from that I had a really great time and hope to see you all next year! (or later this year in London)
Suitcase packed? Check!
I’m ready for my departure to San Francisco tomorrow morning!
I already mentioned before that I will be a speaker at the MySQL conference, but I think the session has moved since. It is now scheduled on Thursday between 1:00 and 1:50pm in ballroom E. Be there if you want to know more about what Spil Games is doing!
I also determined what the most interesting talks are going to be for me and here are some of the highlights:
One to Many: The Story of Sharding at Box (Wed 1:00 – 1:50pm)
Sounds very interesting to see how different their story is from the one at Spil Games.
The Etsy Shard Architecture: Starts with S and Ends With Hard (Wed 2:00 – 2:50)
Same as above but then with the difference that, from the description, it seems they are implementing almost the same solution as we are. 😀
Scripting MySQL with Lua and libdrizzle inside Nginx (Wed 3:30 – 4:20pm)
A very interesting idea: combining Nginx with Lua and a database connection through libdrizzle. It seems you can easily implement lightweight services this way. So definitely a recommended session!
Percona XtraDB Cluster: New HA solution (Thu 11:00 – 11:50)
Percona XtraDB Cluster came as a complete surprise to me earlier this year. I’ve been playing around with it a little bit, and now that it went GA last week I’m even more eager to attend this session. I think it could be a good candidate to become one of the building blocks for Spil Games in the future.
Common Schema: a framework for MySQL server administration (Thu 2:00 – 2:50)
I haven’t done much with Common Schema so far, but it is already available on our platform, so I think it would be a good idea to attend this session and get more practical insights.
So I’m off to SF in about 19 hours. If you are also attending the conference: see you there!
It took me a while to figure out why our new instance wasn’t graphing: the response time query was performing fine and the script was picking up its values. The rrd files were also created, but for some bizarre reason all values were set to “NaN”, in rrd/cacti terms: Not a Number. If you search on that subject you will come across a lot of (forum) postings stating that you need to change your graph type to “GAUGE” or change the MIN/MAX values for your data templates. Strange, as this was already set to a minimum of 0 and a maximum of an unsigned 64 bit integer.
After running the ss_get_mysql_stats.php script manually for these graphs I got an error stating that a -1 value was not allowed. Indeed, the output of the script contained a -1 value for the last measurement, and I quickly found the culprit: an uninitialized array key inside the script causes it to return a -1 value. Now why was this array key not initialized? Simply because the query filling the array was capped at 13 rows instead of the expected 14.
This left me with three options:
1. Change cacti templates to allow -1 value
2. Change cacti templates to only contain 13 data points instead of 14
3. Change the query in the ss_get_mysql_stats.php script
Naturally I patched the script and after 10 minutes the graphs started to get colorful again! 😉
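The shape of the bug and the fix can be sketched in a few lines (this is an illustration in Python, not the actual PHP from ss_get_mysql_stats.php, and the names are mine):

```python
EXPECTED_BUCKETS = 14  # number of data points the cacti template expects

def fill_buckets(rows):
    """rows: {bucket_index: value} as returned by the (capped) query."""
    # Initialize every expected bucket, so a bucket missing from the query
    # result yields 0 instead of an uninitialized key / -1 sentinel, which
    # falls below the GAUGE minimum of 0 and ends up as NaN in the rrd.
    return [rows.get(i, 0) for i in range(EXPECTED_BUCKETS)]

values = fill_buckets({0: 12, 1: 7})  # only 2 of the 14 buckets returned
print(len(values), min(values))  # → 14 0
```

In other words: as long as every expected data point gets a valid default, a short query result no longer poisons the whole graph.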
So if you have the same problem as we did, you can find the patch attached to my bug report:
Query response time and Query time histogram not graphing