MySQL is a complex system that requires many tools to repair, diagnose and optimise. Fortunately for admins, MySQL has attracted a vibrant community of developers who are putting out high quality open source tools to help with the complexity, performance and health of MySQL systems, most of which are available for free.
The following 10 open source tools are valuable resources for anyone using MySQL, from a standalone instance to a multiple-node environment. The list has been compiled with variety in mind. You will find tools to help back up MySQL data, increase performance, guard against data drift and log pertinent troubleshooting data when problems arise.
There are several reasons why you should consider these tools instead of creating your own in-house tools.
First, thanks to their wide use, they're mature and field tested. Second, because they are free and open source, they benefit from the knowledge and experience of the continually expanding MySQL community. Finally, these tools are actively developed, and many are professionally supported (either for free or commercially), so they continue to improve and adapt with the evolving MySQL industry.
Keep in mind that there are many more tools worthy of your attention. I have chosen to emphasise free and open source tools, and to err on the side of usefulness and usability. Also note that all but one are Unix command-line programs, in large part because MySQL is more widely deployed and developed on Unix systems. If I missed a favourite, feel free to highlight it in the comments below.
Now, let's meet the first of the 10 essential MySQL tools.
Nothing frustrates like slow MySQL performance. All too often faster hardware is thrown at the problem, a solution that works only if hardware is in fact to blame.
More often than not, poor performance can be attributed to slowly executing queries that are blocking other queries, creating a ripple effect of slow response times. Since it's a lot cheaper to optimise queries than to upgrade hardware, the logical first step in MySQL optimisation is query log analysis.
Database administrators should analyse query logs frequently, depending on the volatility of the environment. And if you've never performed query log analysis, it's time to start, even if you are relying on third party software, which is often assumed to be optimised when in fact it is not.
Today's best query log analyser is mk-query-digest. Co-written by Baron Schwartz and me, it is actively developed, fully documented and thoroughly tested. MySQL distributions include the query log analyser mysqldumpslow, but that tool is outdated, poorly documented and untested. Other query log analysers, such as mysqlsla, which I wrote several years ago, suffer the same problems as mysqldumpslow.
mk-query-digest analyses query logs and generates reports with aggregated, statistical information about execution times and other metrics. Since query logs usually contain thousands, if not millions of queries, query log analysis requires a tool.
mk-query-digest can help you find the queries that take the longest time to execute as compared to other queries. Optimising these slow queries will make MySQL run faster by reducing the greatest delays. The real art of query optimisation is more nuanced, but the basic goal is the same. Find slow queries, optimise them and increase response times.
The tool is easy to use: executing mk-query-digest slow-query.log prints the slowest queries in slow-query.log. The tool also supports "query reviews," reporting only queries you have not yet seen or approved, which makes frequent log analyses quick and efficient.
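For example, a basic run and a "query review" run look like this (the hostname, database and table names in the review DSN are placeholders for your own):

```shell
# Report the slowest queries in a slow query log
mk-query-digest slow-query.log

# Store a fingerprint of each query in a review table so that future
# runs report only queries you have not yet seen or approved
mk-query-digest --review h=localhost,D=maatkit,t=query_review slow-query.log
```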
Maintainers: Daniel Nichter and Baron Schwartz
Being able to generate data dumps quickly is vital for backups and server cloning. Unfortunately, mysqldump, which ships with MySQL distributions, is single-threaded and thus too slow for data-intensive jobs. Thankfully, the modern replacement, mydumper, uses multiple threads, making it as much as 10 times faster than mysqldump.
Also known as MySQL Data Dumper, this tool does not manage backup sets, differentials, or other parts of a complete backup plan. It just dumps data from MySQL as quickly as possible, enabling you to complete backups under tight time constraints, such as overnight, while employees are offline, or to perform backups more frequently than you would with mysqldump.
One technical point to know about mydumper is that it locks tables, so it is not the ideal tool for performing backups during operating hours. Then again, professional data recovery costs hundreds of dollars per hour, and you always get a bill even if the data isn't recoverable. mydumper is free and well worth exploring for even basic backups.
mydumper also comes in handy when cloning servers. Other tools perform complete hard drive duplications, but when all you need is MySQL data, mydumper is the fastest way to get it. Servers provisioned in a cloud are particularly suited to cloning using mydumper. Just dump your MySQL data from an existing server and copy it to the new instance.
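A clone, for example, amounts to a dump on the existing server and a load on the new one; mydumper ships with a companion loader, myloader. The directory name and thread count below are placeholders to adjust for your environment:

```shell
# On the source server: dump all databases with four worker threads
mydumper --threads 4 --outputdir /backups/dump

# On the new instance, after copying /backups/dump over: reload the data
myloader --threads 4 --directory /backups/dump --overwrite-tables
```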
Cloning is worthwhile for creating slave servers, benchmarking and profiling, but nowhere is it more vital than in testing and development. Being able to spin up a replica for quick testing before going live is essential for dynamic MySQL environments. With mydumper, you can quickly create a server that is nearly identical to your production server, enabling your test results to better mimic production results.
Maintainers: Domas Mituzas, Andrew Hutchings, Mark Leith
If your databases are in use every day, all day, giving you no "overnight" during which tables can be locked for backup, xtrabackup is your solution. Also known as Percona XtraBackup, this tool performs non-blocking backups and is the only free, open source tool that can do this. By comparison, proprietary non-blocking backup software can cost more than £3,000 per server.
xtrabackup also offers incremental backups, allowing you to back up only the data that has changed since the last full backup. Adding incremental backups to your backup process is powerful, given the reduced performance hit of these tremendously smaller backups.
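As a sketch of how this looks in practice (the directory names are placeholders, and older releases wrap these operations in the innobackupex script, so check the XtraBackup documentation for your version's exact invocation):

```shell
# Full base backup, without blocking reads and writes to InnoDB tables
xtrabackup --backup --target-dir=/backups/base

# Incremental backup: copy only the pages changed since the base backup
xtrabackup --backup --target-dir=/backups/inc1 \
           --incremental-basedir=/backups/base
```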
Furthermore, another project has grown up around xtrabackup that makes managing a full backup plan even easier: xtrabackup-manager. Although this tool is new and still in development, it holds a lot of potential, offering advanced features such as rotating backups, backup groups and backup-set expiry. Together, xtrabackup and xtrabackup-manager are a formidable, free backup solution.
Download xtrabackup: http://www.percona.com/software/percona-xtrabackup/downloads/
Download xtrabackup-manager: http://code.google.com/p/xtrabackup-manager/
Maintainer: Lachlan Mulcahy
tcprstat is probably the most esoteric of the 10 on this list. The tool monitors TCP requests and prints statistics about low-level response times. When you become familiar with the response time way of thinking about performance, the payoff of tcprstat is significant.
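Running the tool is simple; for example, the following watches MySQL's default port and prints one line of aggregate response-time statistics per second (consult the tool's documentation for the full option list):

```shell
# Monitor TCP response times on the MySQL port, printing one summary
# line per second, indefinitely (-n 0 means no iteration limit)
tcprstat -p 3306 -t 1 -n 0
```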
The principle is elaborated in the book "Optimising Oracle Performance" by Cary Millsap and Jeff Holt, and it applies equally well to MySQL. The basic idea is that a service, in this case MySQL, accepts a request, fulfills that request and responds with results. The service's response time is the time span between receiving a request and sending a response. The shorter the response time, the more requests can be served in the same amount of time.
Parallel processing and other low-level factors play a significant part here, but the simplified upshot is this: there are 28,800 seconds in an eight-hour workday, so reducing response times by just four-tenths of a second, from 0.5 to 0.1 seconds, results in 230,400 more requests served each day. tcprstat helps you achieve this.
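The arithmetic is easy to verify with a quick shell sketch of the figures above:

```shell
# Requests served in an 8-hour workday at two different response times
seconds=$((8 * 60 * 60))             # 28,800 seconds per workday
at_half_sec=$((seconds * 10 / 5))    # one request per 0.5 s = 57,600
at_tenth_sec=$((seconds * 10 / 1))   # one request per 0.1 s = 288,000
echo $((at_tenth_sec - at_half_sec)) # 230,400 more requests per day
```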
I have only enough space in this article to pique your curiosity, so I'll finish this tool's introduction by telling you the first step toward getting started with MySQL response time optimisation: read "Optimising Oracle Performance." Then start using tcprstat.
"Data drift" is a significant problem for dynamic MySQL environments. This problem, wherein slave data becomes out of sync with the master, is often caused by writing data to a slave or executing certain non-deterministic queries on the master. What's worse is that the data differences may go unnoticed until they become crippling.
Enter mk-table-checksum, a tool that performs the complex, sensitive calculations necessary to verify that the data in two or more tables is identical.
mk-table-checksum works with both standalone servers and servers in a replication hierarchy, where the tool's greatest value is most apparent. Verifying table data between a master and a slave must account for replication consistency. Because changes to the master replicate to the slaves with some delay ("lag"), simply reading data from the servers is an unreliable way to verify consistency: the data is constantly changing and is incomplete until fully replicated.
Locking tables and waiting for all data to replicate would allow consistent reads, but to do so would mean effectively halting the servers. mk-table-checksum allows you to perform non-blocking, consistent checksums of master and slave data. For technical details on how this is accomplished, see the tool's documentation.
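In outline, a replication-aware check takes two commands: one on the master to checksum tables in chunks into a table that replicates to the slaves like any other write, and one to compare the results once they have replicated. The host name, checksum table and chunk size below are placeholders:

```shell
# On the master: checksum every table, 500,000 rows at a time, storing
# results in a table that replicates to the slaves
mk-table-checksum --replicate test.checksum --chunk-size 500000 h=master-host

# Afterwards: compare the replicated checksums and report differing tables
mk-table-checksum --replicate test.checksum --replicate-check 1 h=master-host
```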
Apart from replication consistency, there are other problems with verifying data. Table size is one of them. The MySQL command CHECKSUM TABLE is sufficient for small tables, but large tables require "chunking" to avoid long locks or overloading CPU or memory resources with checksum calculations.
Chunking solves a second problem: the need for regular data consistency checks. While data drift can be a one-time occurrence, it is often recurring. mk-table-checksum is designed to check tables continuously, vetting certain chunks one run and other chunks the next, until eventually the whole table has been checked. The ongoing nature of this process helps ensure that recurring drift is corrected.
Problems have a way of occurring while you're not looking, or home sleeping, and diagnosing them after the fact is sometimes impossible without data about the state of MySQL and the server at the time of the problem. The natural inclination is to write your own script that waits for or detects a problem and then starts logging extra data because, after all, no one knows your system better than you. The problem is, you know your system when it's working; if you knew the kinds of problems the system would have, you would simply fix them rather than try to capture and analyse them.
Thankfully, those who specialise in knowing when MySQL is not working, and in fixing the problems, have written a duo of tools called stalk and collect. The first tool waits for certain conditions to become true before running an instance of the second. That sounds trivial, but the details these tools address are what make them effective.
Firstly, stalk runs collect at configurable intervals, keeping you from logging too much redundant data, which can obfuscate post-problem analysis. Secondly, collect gathers not only the standard information that MySQL can report about itself but also a lot of data you might not have thought to include: lsof, strace, tcpdump output and so on. Thus, if you end up consulting a professional who specialises in fixing MySQL problems, you will have all the data they need.
stalk and collect are configurable, so they can be used for almost any problem. The one requirement is a definable condition to establish a trigger for stalk. If multiple conditions signal the problem, then you may also need to consult with a professional for a more extensive review of your MySQL environment because problems can appear in MySQL even though the underlying cause is elsewhere.
stalk and collect can be used proactively, too. For example, if you know that there should never be more than 50 active MySQL connections at a time, you could monitor this with stalk, making these tools helpful both for problems that you know and for problems that you have not yet seen.
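As a purely illustrative sketch of the idea behind stalk (the real tool is far more robust and configurable; see its documentation for actual usage):

```shell
# Hypothetical sketch of what stalk automates: poll a status variable
# and fire the collector whenever it crosses a threshold
THRESHOLD=50
while sleep 10; do
    conns=$(mysql -N -e "SHOW GLOBAL STATUS LIKE 'Threads_connected'" \
            | awk '{print $2}')
    if [ "$conns" -gt "$THRESHOLD" ]; then
        collect   # gather SHOW STATUS, lsof, strace, tcpdump output, etc.
    fi
done
```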
Maintainer: Baron Schwartz
You don't always want to wait for something to go wrong before addressing a problem, and dashboards provide an essential way for you to monitor your MySQL environment for potential problems before they arise.
There are many free and commercial monitoring applications for MySQL, some MySQL-specific and others generic with MySQL plugins or templates. mycheckpoint is notable because it is free, open source, MySQL-specific and full featured.
mycheckpoint can be configured to monitor both MySQL and server metrics, like InnoDB buffer pool flushes, temporary tables created, operating system load, memory usage and more. If you don't like charts, mycheckpoint can also generate human-readable reports.
As with stalk, alert conditions can be defined, with email notifications, but no secondary tool like collect is run to log additional troubleshooting data. Another useful feature is mycheckpoint's ability to monitor MySQL variables in order to detect changes that can lead to problems, or that signal someone has modified MySQL when they shouldn't have.
Monitoring MySQL isn't just for data centres or large deployments. Even if you have a single MySQL server, monitoring is essential. As with your vehicle, there's a lot to know about the system while it's running to help you foresee or avoid malfunctions. mycheckpoint is one solution among many worth trying.
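Deployment typically amounts to scheduling periodic samples; a hypothetical crontab entry might look like the following (the option names should be checked against mycheckpoint's documentation, and the credentials are placeholders):

```shell
# Sample MySQL and OS metrics every five minutes into the monitoring schema
*/5 * * * * mycheckpoint --host=localhost --user=monitor --password=secret --database=mycheckpoint
```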
Maintainer: Shlomi Noach
More info: http://code.openark.org/forge/mycheckpoint
Queries against partitioned or sharded data sets can be accelerated dramatically using shard-query, which parallelises certain queries behind the scenes. Queries that use the following constructs can benefit from shard-query's parallel execution:
- Subqueries in the FROM clause
- UNION and UNION ALL
Aggregate functions SUM, COUNT, MIN and MAX can be used with those constructs, too.
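For example, a query such as the following is a candidate for parallel execution, because each member of the UNION ALL, and each subquery in the FROM clause, can run concurrently (the table and column names are illustrative):

```shell
mysql -e "
SELECT day, SUM(total)
FROM (
    SELECT day, SUM(amount) AS total FROM sales_2010 GROUP BY day
    UNION ALL
    SELECT day, SUM(amount) AS total FROM sales_2011 GROUP BY day
) AS yearly_totals
GROUP BY day;"
```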
shard-query is not a standalone tool; it requires other programs, such as Gearman, and it's relatively complex to set up. But if your data is partitioned and your queries use any of the constructs listed above, the benefits are worth the effort.
Download: (svn checkout) http://code.google.com/p/shard-query/source/checkout
Maintainer: Justin Swanhart
More info: http://code.google.com/p/shard-query/
As tables become larger, queries against them can become slower. Many factors influence response times, but if you have optimised everything else and the only remaining suspect is a very large table, then archiving rows from that table can restore fast query response times.
Unless the table is unimportant, you should not brazenly delete rows. Archiving requires finesse to ensure that data is not lost, that the table isn't excessively locked, and that the archiving process does not overload MySQL or the server. The goal is an archiving process that is reliable and unnoticeable except for its beneficial effect of reducing query times. mk-archiver achieves all this.
mk-archiver has two fundamental requirements. The first is that archivable rows must be identifiable: for example, if the table has a date column and you know that only the last N years of data are needed, then rows with older dates can be archived. The second is that a unique index must exist to help mk-archiver identify archivable rows without scanning the entire table. Scanning a large table is costly, so an index and specific SELECT statements are used to avoid table scans.
In practice, mk-archiver automatically handles the technical details. All you have to do is tell it what table to archive, how to identify archivable rows, and where to archive those rows. These rows can be purged, copied to another table or written to a dump file for future restoration if needed. Once you're comfortable with the tool, there are many options to fine-tune the archiving process. Also, mk-archiver is pluggable, so it can be used to solve complex archiving needs without patching the code.
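For example, a nightly job might archive orders older than two years, a few hundred rows per transaction, keeping a copy on disk. The host, schema, table and column names below are placeholders:

```shell
# Move old rows to an archive table, committing every 500 rows, and
# also write them to a dated file for possible future restoration
mk-archiver --source h=localhost,D=shop,t=orders \
            --dest   h=localhost,D=shop_archive,t=orders \
            --file   '/backups/orders.%Y-%m-%d.txt' \
            --where  "order_date < NOW() - INTERVAL 2 YEAR" \
            --limit  500 --commit-each
```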
When was the last time you audited the security of your MySQL servers? You're not alone if the answer is "never." Many companies provide security audits, but unless your environment never changes after an audit, the security of your MySQL environment should be checked regularly.
External threats are one obvious reason to enforce MySQL security, but internal threats like current or former employees are often more dangerous because they are (or were) trusted. Security is also important for enforcing privacy, preventing accidental access (for example, logging into the production server instead of the development server) or enabling third party programs to interact with your systems.
For those looking to increase the security of their deployments, oak-security-audit is a worthwhile, free and open source tool that performs basic MySQL security audits. It doesn't require any setup. Just run it against your MySQL servers, and it prints a report with risks and recommendations about accounts, account privileges, passwords and some general best practices, like disabling network access.
oak-security-audit focuses just on MySQL security, so it's not a replacement for a full system security audit by a human, but it's a great first line of defence that is easy to use. You could run it weekly with cron and have the reports emailed to you.
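A hypothetical crontab entry for that weekly run might look like this (the option names should be checked against the openark kit documentation; the credentials and email address are placeholders):

```shell
# Every Monday at 06:00, audit MySQL security and email the report
0 6 * * 1 oak-security-audit --user=root --password=secret | mail -s "MySQL security audit" dba@example.com
```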