Archive for the ‘MySQL’ Category.

Second update of modifying table statistics in MariaDB

Since my last post I've changed how the table statistics work quite a bit in MariaDB. I ran into a few problems with my original changes. In the TiVo 5.0 patch the show table_statistics command chose one of three hash tables to read from depending on the flags: a global hash table for global stats and two in the thd object for session and query stats. Each time a non-show query is executed the query statistics are reset. In 5.1 the implementation of the show commands changed from reading arbitrary data structures to constructing queries to run against information_schema tables. The information_schema tables are constructed on the fly, placed into a temporary table, and have the select resulting from the show command executed against them. This works fine for the show commands but broke my new information schema tables.

As part of porting my changes into MariaDB 5.2 I added a few new information schema tables called QUERY_TABLE_STATISTICS, QUERY_INDEX_STATISTICS, SESSION_TABLE_STATISTICS, and SESSION_INDEX_STATISTICS. For the most part these tables query the hash tables from the 5.0 implementation. This worked fine for the show commands in my previous update, but it broke when querying QUERY_TABLE_STATISTICS directly. In 5.0 I reset the per-query table statistics on every query that isn't a SHOW query. That works fine except that SELECT * FROM QUERY_TABLE_STATISTICS resets the very query statistics the select was supposed to retrieve.

After a conversation with serg in #mariadb on freenode we came up with a plan to fix the QUERY_TABLE_STATISTICS and QUERY_INDEX_STATISTICS tables. Instead of tracking only the most recent query, the plan is to track the most recent N queries, each identified by a QUERY_ID. I realized this is very similar to how SHOW PROFILES works, so I decided to try to integrate the query statistics with the profile. This seemed like a good idea until I realized that I want to enable the slow query log statistics without the overhead of leaving profiling on in production.

All of the profiling methods are controlled by the profiling variable. All of the table, index, and user statistics are controlled by the userstat variable. The profiling and my changes to the table statistics both use a QUERY_ID to show a list of queries. The problem is that, depending on when the profiling and userstat variables are enabled, the query ids can be inconsistent. For example, if a user enables userstat, executes a query, then enables profiling and executes another query, table statistics query_id 1 and profiling query_id 1 will refer to different queries.

Every show command that uses an information schema table puts rows into a temporary table. I've also added temporary table tracking to the table statistics. This has an interesting side effect when combined with per-query statistics tracking. Queries are only put into the queue of queries for query statistics when they read or write rows. The query_statistics_history_size variable controls how many queries to keep stats for. Since each show command now uses a temporary table, the show commands themselves are tracked in the row statistics. Here is an example using show variables.

mysql> set query_statistics_history_size=5;
Query OK, 0 rows affected (0.00 sec)

mysql> show variables;
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| aria_block_size         | 8192  |
...
| warning_count           | 0     |
+-------------------------+-------+
395 rows in set (0.00 sec)

mysql> show query table_statistics;
+----------+--------------+------------+-----------+--------------+-------------------------+
| Query_id | Table_schema | Table_name | Rows_read | Rows_changed | Rows_changed_x_#indexes |
+----------+--------------+------------+-----------+--------------+-------------------------+
|        8 | #temp#       | #temp#     |       395 |          395 |                     395 |
+----------+--------------+------------+-----------+--------------+-------------------------+
1 row in set (0.00 sec)

mysql> show query table_statistics;
+----------+--------------+------------+-----------+--------------+-------------------------+
| Query_id | Table_schema | Table_name | Rows_read | Rows_changed | Rows_changed_x_#indexes |
+----------+--------------+------------+-----------+--------------+-------------------------+
|        8 | #temp#       | #temp#     |       395 |          395 |                     395 |
|        9 | #temp#       | #temp#     |         1 |            1 |                       1 |
+----------+--------------+------------+-----------+--------------+-------------------------+
2 rows in set (0.00 sec)

mysql> show query table_statistics;
+----------+--------------+------------+-----------+--------------+-------------------------+
| Query_id | Table_schema | Table_name | Rows_read | Rows_changed | Rows_changed_x_#indexes |
+----------+--------------+------------+-----------+--------------+-------------------------+
|        8 | #temp#       | #temp#     |       395 |          395 |                     395 |
|        9 | #temp#       | #temp#     |         1 |            1 |                       1 |
|       10 | #temp#       | #temp#     |         2 |            2 |                       2 |
+----------+--------------+------------+-----------+--------------+-------------------------+
3 rows in set (0.00 sec)

See how each new show command adds another query to the queue of tracked queries? While technically correct, I don't think this is ideal. I'm open to suggestions on how to keep the temp table tracking useful without polluting the list of queries with unnecessary results.
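The bounded history described above is essentially a fixed-size FIFO of per-query stat entries keyed by query id. A minimal Python sketch of the idea (the class and method names here are hypothetical illustrations, not from the patch):

```python
from collections import OrderedDict

class QueryStatsHistory:
    """Keep table statistics for the last N queries, oldest evicted first."""

    def __init__(self, history_size):
        self.history_size = history_size
        self.entries = OrderedDict()  # query_id -> list of per-table stat rows

    def record(self, query_id, table, rows_read, rows_changed):
        # Only queries that actually touch rows are recorded, which is why
        # every show command (via its temporary table) ends up in the list.
        self.entries.setdefault(query_id, []).append(
            (table, rows_read, rows_changed))
        while len(self.entries) > self.history_size:
            self.entries.popitem(last=False)  # evict the oldest query

history = QueryStatsHistory(history_size=5)
for qid in range(1, 8):
    history.record(qid, "#temp#", qid, 0)
print(list(history.entries))  # only the 5 most recent query ids survive
```

This also shows why each show command pushes an older query out of the window: it counts as a query that read rows.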

I went back to #mariadb again and we came up with a single unified query_id per thread that both show profiles and the table statistics can use. This unifies the query ids, with the downside that a lot of ids can be wasted. Doing set profiling=1, executing a query, and running show profiles is no longer guaranteed to use query_id 1 like it did before. This is where I'm looking to the community to decide how to handle the two different commands and their query ids.

The main reason for tracking the per query statistics is to dump them to the slow query log. If userstat is enabled and a query is written to the slow query log it will include two new comments that look like:

# Time: 110712 13:12:16
# User@Host: [ebergen] @ localhost []
# Thread_id: 1 Schema: test QC_hit: No
# Query_time: 64.666276 Lock_time: 0.000113 Rows_sent: 10 Rows_examined: 719488
# Row_Stats: test:rows_read=677984,rows_changed=0,rows_changed_x_indexes=0;#temp#:rows_read=41504,rows_changed=41494,rows_changed_x_indexes=41494;
# Index_Stats:
SET timestamp=1310501536;
select * from t2 group by u order by u desc limit 10;
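Since the point of the log comments is automated analysis, the # Row_Stats: line is worth parsing mechanically. A small Python sketch, assuming the table:key=value,...; layout shown in the example above holds in general:

```python
def parse_row_stats(line):
    """Parse a '# Row_Stats:' slow query log comment into a dict of dicts."""
    stats = {}
    body = line.partition("# Row_Stats:")[2].strip()
    for entry in filter(None, body.split(";")):
        # Each entry looks like 'table:key=value,key=value,...'
        table, _, pairs = entry.partition(":")
        stats[table] = {key: int(value) for key, value in
                        (pair.split("=") for pair in pairs.split(","))}
    return stats

line = ("# Row_Stats: test:rows_read=677984,rows_changed=0,"
        "rows_changed_x_indexes=0;#temp#:rows_read=41504,"
        "rows_changed=41494,rows_changed_x_indexes=41494;")
print(parse_row_stats(line)["test"]["rows_read"])  # 677984
```

A real slow-log parser would also need to handle the surrounding header lines, but the per-table stats themselves split cleanly on ';', ':', ',', and '='.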

A few things are still missing or wrong. First, I don't have a command similar to show profiles that lists queries with their ids. I don't want to duplicate show profiles for statistics, so I'm open to suggestions on how to unify the profiling and table statistics. Some of the structures for profiling and query stats are similar. I think they can be unified, but that is more work than I want to put into the tivo branch. If mariadb is willing to accept this kind of feature I can work on unifying them.

Here is an example of where I’m stuck with show profiles vs show query statistics.

mysql> use test;
Database changed
mysql> set query_statistics_history_size=5;
Query OK, 0 rows affected (0.00 sec)

mysql> set profiling=1;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from t limit 5;
+-------+--------+
| t     | u      |
+-------+--------+
| 14515 | 282874 |
| 14521 | 258653 |
| 14573 | 113276 |
| 14577 | 826475 |
| 14585 | 444645 |
+-------+--------+
5 rows in set (0.00 sec)

mysql> show profiles;
+----------+------------+-------------------------+
| Query_ID | Duration   | Query                   |
+----------+------------+-------------------------+
|        7 | 0.00052300 | select * from t limit 5 |
+----------+------------+-------------------------+
1 row in set (0.00 sec)

mysql> set profiling=0;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from t limit 10;
+-------+--------+
| t     | u      |
+-------+--------+
| 14515 | 282874 |
| 14521 | 258653 |
| 14573 | 113276 |
| 14577 | 826475 |
| 14585 | 444645 |
| 14612 | 792545 |
| 14626 | 483300 |
| 14842 | 447267 |
| 15325 |  38865 |
| 15340 | 744424 |
+-------+--------+
10 rows in set (0.00 sec)

mysql> show query table_statistics;
+----------+--------------+------------+-----------+--------------+-------------------------+
| Query_id | Table_schema | Table_name | Rows_read | Rows_changed | Rows_changed_x_#indexes |
+----------+--------------+------------+-----------+--------------+-------------------------+
|        2 | #temp#       | #temp#     |        15 |           15 |                      15 |
|        7 | test         | t          |         5 |            0 |                       0 |
|       10 | test         | t          |        10 |            0 |                       0 |
+----------+--------------+------------+-----------+--------------+-------------------------+
3 rows in set (0.00 sec)

mysql> show profiles;
+----------+------------+-------------------------+
| Query_ID | Duration   | Query                   |
+----------+------------+-------------------------+
|        7 | 0.00052300 | select * from t limit 5 |
+----------+------------+-------------------------+
1 row in set (0.00 sec)

Notice how show query table_statistics has three queries. Query id 2 is actually the first show command. Whether profiling is enabled or disabled, show profiles doesn't list as many queries as show query table_statistics does.

A bit on SANs and system dependencies

It's fairly well known that I'm not a fan of SANs for mysql. I'm really not a fan of them in general, but most of that comes from not being a fan of them for mysql. Over the past few decades the server world has migrated from a few large expensive servers to lots of small cheap servers. For the most part this is accepted as a good thing. Large mainframe computers were slowly replaced by relatively smaller yet still expensive servers, which in turn have been replaced by smaller, cheaper servers. I apply the same logic and trend to storage: I think large centralized storage should be replaced by smaller local storage or storage services.

The idea of a SAN seems great on paper. You get a large pool of storage which can be sliced up into smaller pools as needed by different clients. If one client demands more storage it's fairly straightforward to allocate it more storage and expand the filesystem over it. If you need more storage you simply add it to the pool and dole it out as needed. When it comes to mysql there are some problems with this approach. Some of the problems are technical; others are with people and process.

People tend to think of storage as a sack that can hold N apples. When the sack is full of apples, either get another sack or a larger sack and add more apples. This is only one dimension of storage. The other is more difficult to plan for and often overlooked until it's too late: the rate at which apples can be added to or removed from the sack, and how quickly you can sort through the sack to get at specific apples. The time it takes to access data and the number of concurrent requests that can be supported matter more to the speed of a database than the amount of data that can be stored.

Not all SAN storage is created equal. It's possible that the large shared pool of storage is built from different underlying disks. Some of these disks can only support a few hundred iops (input/output operations per second) while others can support several thousand. By mixing disk types, the once large shared storage pool becomes a set of smaller pools classified by the rate at which they can read and write data. Many SANs also support caching reads or writes in front of those disks, so not only do you have to decide which clients get which speed of disks, you also have to make sure the cache isn't overloaded by any of them.

On top of all of this there is the problem of changing workloads. In almost every SAN performance conversation I have ever had, someone suggests that I benchmark the SAN vs DAS and let the benchmark drive the architecture. This is fine if the SAN performance doesn't change, but as I've already established, allocating SAN space is an ongoing task. The workload of clients can change, and the number of clients on the SAN can also change. Adding more clients or changing the workload of existing clients can change the performance for everyone else.

I understand that it is possible to allocate clients in a way that will prevent some clients from interfering with others, but in practice I've never seen this work out. Inevitably some clients interfere with others. This becomes a huge problem because people tend to think of i/o latency as a constant within some small range of variation. I've seen i/o latency go from a few milliseconds to several seconds for the same storage. Tracking down the source of these problems can be difficult, especially if the clients are separate teams or business units. Adding new clients to a SAN or changing the workload invalidates any previous benchmarks or capacity plans done against it.

Back to mysql. Most mysql installations commit transactions serially. As part of this process they run an fsync on the transaction log to make sure the transaction is synced to disk. This means that doubling the i/o latency will halve the number of transactions that can be committed. This makes mysql more sensitive to i/o latency than other types of applications. Mysql slaves perform every i/o of a transaction sequentially, not just the commit phase, which makes them even more sensitive. While it's technically possible to have a SAN function correctly, it isn't as simple as carving up the global pool of storage for clients.
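The latency sensitivity can be made concrete: with serial commits, each transaction waits on one fsync, so fsync latency puts a hard ceiling on commit throughput. A back-of-the-envelope sketch:

```python
def max_commits_per_second(fsync_latency_ms):
    """With serial commits, each transaction waits for one fsync,
    so commit throughput is bounded by 1 / fsync latency."""
    return 1000.0 / fsync_latency_ms

# A few milliseconds of latency is fine; SAN contention pushing latency
# up by an order of magnitude cuts the ceiling by the same factor.
for latency_ms in (1.0, 5.0, 50.0):
    rate = max_commits_per_second(latency_ms)
    print(f"{latency_ms:5.1f} ms fsync -> {rate:7.1f} commits/sec max")
```

Doubling the latency halves the ceiling, which is why a shared pool whose latency varies with other clients' workloads hits mysql harder than most applications.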

I have to be fair and say that there is potential for performance problems with DAS as well. Performance can degrade for various reasons, such as when a disk dies or a battery dies. In these situations performance can degrade a little, or it can degrade severely, such as the controller refusing to cache writes without a battery. The difference between these scenarios and the allocation scenarios on a SAN is that DAS failures are states that can be monitored. If a disk dies your monitoring sends an alert and you replace it. If the battery dies, simply replace it. This means your monitoring has to be in place and you need an HA pair of machines, but I don't feel like that is a tall order.

The upside of using DAS is that each machine gets its own set of disks. If it overloads those disks and causes problems, those problems won't be reflected in the other machines' i/o latency. What it does mean is that it's more difficult to add space to each machine, so you need to plan accordingly. It can also mean that there is a larger pool of unused space, because each machine needs a certain percentage of overhead. This depends on the workload of course, but it must be considered. It also means that it is more difficult to scale vertically, although SSDs and fusionio have raised the vertical scalability ceiling significantly in the past few years.

 

[ Update 2011-06-19 Daniël van Eeden has posted a good followup on his blog ]

Update on porting table_statistics to MariaDB

Over the past few weeks I've been working on porting the per table and per session row stats from the tivo patch into mariadb. This is the show table_statistics functionality. I've been pushing the code to a branch on Launchpad as I get features working. As of revision 2953 the row stats are working but the index stats aren't. I think I have a reasonable implementation of the row stats code given my level of C++ skill. I'm changing the syntax a bit from how it worked in the tivo patch to make it more compatible with the original row stats code that was ported into mariadb.

In MariaDB 5.2 there are show table_statistics, show index_statistics, and show user_statistics commands that return global stats collected by the server when the userstat variable is enabled. There are also associated information schema tables that feed these commands. So far I've changed the syntax of the show command to be

SHOW [ QUERY | SESSION | GLOBAL ] TABLE_STATISTICS

Where the default show table_statistics returns the global stats. This is different from the tivo patch against 5.0.72, which changed the default to be the last query. The new syntax keeps things backwards compatible with the original show command. The query keyword returns statistics for the previous query_statistics_history_size queries. This is a bit different from the 5.0 version, which only tracked one query.

For example if I set query_statistics_history_size=5 then mariadb will track the row stats from the previous 5 queries:

mysql> set query_statistics_history_size=5;
Query OK, 0 rows affected (0.00 sec)

mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| t              |
| t2             |
+----------------+
2 rows in set (0.00 sec)

mysql> select * from t;
+---+
| t |
+---+
| 1 |
| 2 |
| 3 |
| 4 |
| 5 |
+---+
5 rows in set (0.00 sec)

mysql> select * from t2;
+----+
| t2 |
+----+
|  1 |
|  2 |
|  3 |
+----+
3 rows in set (0.00 sec)

mysql> show query table_statistics;
+----------+--------------+------------+-----------+--------------+-------------------------+
| Query_id | Table_schema | Table_name | Rows_read | Rows_changed | Rows_changed_x_#indexes |
+----------+--------------+------------+-----------+--------------+-------------------------+
|       81 | test         | t          |         5 |            0 |                       0 |
|       82 | test         | t2         |         3 |            0 |                       0 |
+----------+--------------+------------+-----------+--------------+-------------------------+
2 rows in set (0.00 sec)

mysql> select count(*) from t,t2;
+----------+
| count(*) |
+----------+
|       15 |
+----------+
1 row in set (0.00 sec)

mysql> show query table_statistics;
+----------+--------------+------------+-----------+--------------+-------------------------+
| Query_id | Table_schema | Table_name | Rows_read | Rows_changed | Rows_changed_x_#indexes |
+----------+--------------+------------+-----------+--------------+-------------------------+
|       81 | test         | t          |         5 |            0 |                       0 |
|       82 | test         | t2         |         3 |            0 |                       0 |
|       84 | test         | t          |         5 |            0 |                       0 |
|       84 | test         | t2         |         3 |            0 |                       0 |
+----------+--------------+------------+-----------+--------------+-------------------------+
4 rows in set (0.00 sec)

Note that show query table_statistics shows stats only for the current connection. There isn't a way to get the lists from other threads. My next task is to add a separate table with the query_id and query text, similar to show profiles, that can be used to correlate a query back to the different statistics kept for it. After that I plan on adding index statistics.

The older version of this in the tivo patch has been extremely useful for optimizing queries. I hope I can make this even better and get it into mariadb for other people to use. If you have any suggestions for how I should change this please post in the comments.

Rare MyISAM failing to delete MYD file for temporary tables.

I manage a few systems where, every hundred million disk temporary table deletions or so, one of the deletions will fail. MySQL will delete the MYI file but leave the MYD behind. It's very strange. There isn't an error in the error log, but subsequent queries that try to use the same temporary table name fail because the MYD file still exists. The queries fail with an error like ERROR 1 (HY000): Can't create/write to file '/tmp/#sql_25d1_0.MYD'. Fortunately the client error gives the temporary table name, so it's easy to clean up by deleting the MYD file. I still don't know what the root cause is, but I was able to patch MySQL to fix the issue.

The old mi_delete_table function, which is responsible for deleting only the MYI and MYD files, would delete the MYI file and leave the MYD file behind: it would try to delete the MYI file, get an error back from the filesystem, and never attempt to delete the corresponding MYD file. The strange part is that the MYI file is actually deleted. The error is captured in mi_delete_table but is lost by the functions calling it. I've tried to patch some parts of the temporary table code to track down the actual filesystem error but haven't seen any errors returned. I don't think I've correctly patched all the temporary table code to capture the error. This hasn't been a priority because the patch clears up the symptom of the issue, which is the MYD file being left behind.

In my new mi_delete_table function I try to delete both the MYD and MYI files before returning an error. This has fixed the issue of MYD files being left behind, but I still don't know the root cause of why they weren't being deleted in the first place. I'm not sure about filing a bug for this because, even though I think my mi_delete_table is better than the stock one, I can't really reproduce the issue.
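The shape of the fix is simple: attempt both unlinks unconditionally and only then report the first error, instead of bailing out after the first failed unlink. A sketch of that logic in Python (the real patch is C inside mi_delete_table; delete_table_files is a hypothetical stand-in for illustration):

```python
import os

def delete_table_files(base_path):
    """Try to remove both the .MYI and .MYD file, even if one fails.

    Returns None on success, or the first OSError encountered. This mirrors
    the patched mi_delete_table: the old code returned as soon as the MYI
    unlink reported an error, leaving the MYD file behind.
    """
    first_error = None
    for ext in (".MYI", ".MYD"):
        try:
            os.remove(base_path + ext)
        except OSError as err:
            if first_error is None:
                first_error = err
    return first_error
```

The caller still sees an error when one occurs, but the MYD file no longer survives to break later queries that reuse the same temporary table name.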

Anyway here is the patch if anyone is interested.

Second draft of the per session row and index stats patch

I've taken the part of the tivo patch that includes table and index statistics and broken it out into its own patch. This patch adds the ability to do show [session | global] table_statistics and the same for index_statistics. In this version the row stats are also logged in the slow query log. To log per-query stats I had to track them separately from the per-session stats. Because the tracking was already done for the slow query log, I've modified the command to allow users to access row stats for the previous query separately from the sum for the current session. The flush commands act similarly.

Along with changing the slow query log format, I've also changed it to log the timestamp with every query. This makes it easier to do automated parsing of the slow query log.

The queries now support three different modes. I've detailed how the show commands work in a previous post. This patch has helped developers diagnose numerous query problems that weren't obvious from the row estimates in the explain output. I highly recommend it for users who run more complicated queries in mysql and have a hard time diagnosing why they are slow. It has proved particularly adept at finding misbehaving sub-queries and bad join optimizations that don't show up in explain output. As I find example queries I will try to post them here.

The patch.

max_allowed_packet replication and global variables

The max_allowed_packet variable controls the maximum size of a query sent to MySQL. Its function is fairly well defined in the manual, but there is a significant gotcha when changing max_allowed_packet while using replication.

When the replication threads are created, the global max_allowed_packet value is copied into the thread context, like doing a set session command in the slave connection. This is done because replication enforces max_allowed_packet a bit differently than other threads: it accounts for both the size of the packet and the overhead of the replication header. This makes the max_allowed_packet enforcement accurate in replication, but it means the slave thread won't account for set global max_allowed_packet=N until replication is restarted. It should be possible to do the same calculation while checking incoming packets, but I haven't looked into making a patch.
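The enforcement difference amounts to the slave budgeting for the replication event header on top of the packet itself. A hedged sketch in Python (the 19-byte header constant and function names are illustrative simplifications, not MySQL's exact logic):

```python
EVENT_HEADER_LEN = 19  # illustrative replication event header overhead

def client_packet_fits(packet_len, max_allowed_packet):
    """Ordinary client check: only the packet itself is counted."""
    return packet_len <= max_allowed_packet

def slave_packet_fits(event_len, max_allowed_packet):
    """Replication-style check: event body plus header must fit."""
    return event_len + EVENT_HEADER_LEN <= max_allowed_packet

# A packet that just fits for a client can be rejected by the slave thread,
# and the slave keeps using the limit captured when its threads started.
max_packet = 1024 * 1024
print(client_packet_fits(max_packet - 10, max_packet))  # True
print(slave_packet_fits(max_packet - 10, max_packet))   # False
```

The second wrinkle from the paragraph above is that max_allowed_packet here is the value captured at thread creation, so a later set global has no effect until the slave threads are restarted.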

I think this has flown under the radar because as soon as the slave hits a query larger than max_allowed_packet, the i/o thread dies. For example, increase max_allowed_packet on a master and slave without restarting replication, then run a query larger than the old max_allowed_packet but smaller than the new size, and replication will break on the slave. When the operator runs slave start, the slave i/o thread picks up the new global max_allowed_packet and dies with:

Last_Error: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.

This really only means that the relay log was corrupted. The fix for a corrupted relay log is to do slave stop, then a change master to back to the same Exec_master_log_pos and Relay_Master_Log_File from show slave status. Only these two fields need to be specified; the username, password, and host will be reused from the old connection. This will delete the old relay logs and download new ones. The new slave threads will have the new max_allowed_packet and everything will move along.

I think there are two things to take away here. First, mysql can behave in unexpected ways when setting global variables. Second, it's important to restart replication when increasing max_allowed_packet on a slave. I haven't looked into writing a patch for this because it's so easy to work around by remembering to restart replication.

How to be a MySQL DBA and the best MySQL book on the planet.

Recently there was a thread on the mysql mailing list discussing how to become a MySQL DBA. I'm not sure the MySQL DBA role exists in the same capacity that it does in the Oracle world. Historically the Oracle DBAs that I've met are focused purely on Oracle. They focus on maintaining Oracle based systems, including managing migrations, upgrades, table space sizes, and other tasks. MySQL DBAs tend to fall into two different buckets: people who work like developers and help with query optimization, and people who work like sys admins and focus on the operation of MySQL. There are very few people who can fill both roles, and I think that's why there are so many MySQL DBA jobs on the market. Companies are looking for one DBA when they should really be looking for two.

Jeremy's post on how to hire a MySQL DBA is still true today. These people still don't exist. I've noticed there are two groups of people with part of the skills needed to be a MySQL DBA. Good Oracle DBAs tend to be very well versed in SQL and query optimization. They're good at working with developers to write queries that play nice with the database. They have brains that think in sets of data and can handle complex query logic. The downside is that they have only been exposed to Oracle, which includes everything, and they have a hard time in the LAMP world where systems must be built out of a lot of separate components.

Sys admins, on the other hand, are used to managing daemons and working with rsync, linux, and shells. They can handle software deployment and monitoring, and they understand system performance from the operating system level. They understand the basics of configuration, and quite a few of them can handle simple MySQL tasks such as installation and basic replication configuration. They tend not to have much experience in query optimization or the specifics of how applications interact with databases. I've long held the opinion that MySQL should just be another component in the system and doesn't need specialized and isolated monitoring solutions, which makes it easier for a group of sys admins to monitor it alongside apache and other daemons. To turn a sys admin into a DBA they need to understand the special requirements MySQL has, such as i/o latency and atomicity of backups. Good sys admins can pick up these skills quickly.

This brings me to the best MySQL book on the planet, High Performance MySQL, Second Edition. Why is it the best? Because it applies to both types of DBAs and can help them develop the skills they need to become a super DBA who can handle both the sys admin tasks and the query optimization tasks. The book has been out for roughly two years and is still very relevant. I recommend it to everyone who asks me where to learn how to be a MySQL DBA, and it's never disappointed. A quick note on books: don't loan out your copy; ask the person you recommend it to to buy a copy. It helps the writers, and this book should be on the desk of anyone working with MySQL.

Table statistics draft 2, the slow query log

I've posted a new table statistics patch, the next version of the session table/index statistics patch. This version adds slow query log output. If a query is logged to the slow query log it will have row count statistics added to it.

I’m not sure about the format of the log which is why I’m posting this so early. The first format I tried was:

# Time: 100119 19:24:37
# User@Host: [ebergen] @ localhost []
# Query_time: 10 Lock_time: 0 Rows_sent: 7 Rows_examined: 3
# Rows_read: sbtest.foo:3, sbtest.bar:3,
select * from foo a, bar b where sleep(1) = 0;

Where there would be an additional line for each of rows_changed, rows_changed_x_indexes, and index_rows_read. This seemed verbose, so I tried a different format:

# Time: 100119 20:27:16
# User@Host: [ebergen] @ localhost []
# Query_time: 6 Lock_time: 0 Rows_sent: 6 Rows_examined: 14
# Rows Stats: sbtest.foo 18 0 0, sbtest.bar 15 3 3,
# Index Stats: sbtest.bar.u 6,
select * from foo a, bar b where b.u=4 order by sleep(1);

Where the row stats have three columns per table: rows_read, rows_changed, and rows_changed_x_indexes. I'm leaning towards the second format but I'm open to ideas. What do you think?

The new patch is here

First draft of per session table and index statistics

I had some free time over Thanksgiving, so I decided to work on something I have been thinking about for quite some time. I hacked up Google's show table_statistics patch to also track stats per connection. I say this is a first draft hack because I based it off of the v2 patch, which uses a straight-up hash table instead of the intermediate object cache.

I've added the global/session keyword to the existing show table_statistics command, in the same way that show status works. This means the default behavior of show table_statistics is to show session data instead of global data. This is different from the Google patch, which only works globally. This has been running in production environments for a bit and seems stable. Note that these environments don't run at the concurrency that motivated Google to update the patch to be less likely to lock a global mutex. You have been warned!

I'm planning on updating the patch with more stats and a cache for the global stats. So far it's been useful in debugging queries that have low row estimates in the explain plan but actually scan quite a few rows. Explain tends to handle row count estimates for sub-queries poorly. It's handy to copy a query from the slow query log on a production server, run it again, and use show session table_statistics to see how many rows it actually read from individual tables. I also have plans for build time tests which can keep track of row counts from a sample database, and I want to look into adding these stats directly to the slow query log.

Here is the updated patch. The patch applies against 5.0.72sp1. Here are the command descriptions.

For table statistics:

SHOW [GLOBAL | SESSION] TABLE_STATISTICS [LIKE 'pattern' | WHERE expr]

FLUSH [GLOBAL | SESSION] TABLE_STATISTICS

Index statistics:

SHOW [GLOBAL | SESSION] INDEX_STATISTICS [LIKE 'pattern' | WHERE expr]

FLUSH [GLOBAL | SESSION] INDEX_STATISTICS

Some examples.

mysql> show session table_statistics;
Empty set (0.00 sec)

mysql> show global table_statistics;
+------------+-----------+--------------+-------------------------+
| Table      | Rows_read | Rows_changed | Rows_changed_x_#indexes |
+------------+-----------+--------------+-------------------------+
| sbtest.foo |         6 |            0 |                       0 |
+------------+-----------+--------------+-------------------------+
1 row in set (0.00 sec)

mysql> select * from sbtest.foo;
+-------+
| t     |
+-------+
| 82921 |
| 24489 |
| 73681 |
+-------+
3 rows in set (0.00 sec)

mysql> show session table_statistics;
+------------+-----------+--------------+-------------------------+
| Table      | Rows_read | Rows_changed | Rows_changed_x_#indexes |
+------------+-----------+--------------+-------------------------+
| sbtest.foo |         3 |            0 |                       0 |
+------------+-----------+--------------+-------------------------+
1 row in set (0.00 sec)

mysql> show global table_statistics;
+------------+-----------+--------------+-------------------------+
| Table      | Rows_read | Rows_changed | Rows_changed_x_#indexes |
+------------+-----------+--------------+-------------------------+
| sbtest.foo |         9 |            0 |                       0 |
+------------+-----------+--------------+-------------------------+
1 row in set (0.00 sec)

pid file directory and a full disk

To continue the pid file theme, I've found another slight issue. This was unrelated to the testing in which I found the previous pid file issues. I was working on an unmonitored development mysql system and ran it out of disk space on /. The box has its mysql datadir in a separate partition which had plenty of space, but the pid file is in a directory on /. When I started mysqld_safe, mysqld exited because it couldn't create the pid file in the full partition. mysqld_safe continued to restart mysqld until I saw the problem and killed it a few minutes later. I'm not sure exactly why mysqld treats this as fatal; I didn't spend much time digging into a failure that I caused by filling up the disk.

Note: This was a stock mysqld, not one running my pid file patch.