EMT Tutorial – Installation

EMT is a monitoring tool that I’ve been developing over the past few years. Its goal is to serve as a hub for performance metrics on a single server. I’ve tried to describe what EMT is before, but I’m not a very good writer, so I thought it would be best to just show people. This tutorial is a quick overview of installing EMT from the rpm and a basic tutorial of its usage. Some of this is covered in the manual and some has changed in newer releases.

Installation
The easiest way to install EMT is to grab the latest rpm from the Google Code downloads page. After installing the rpm you will see a notice about correcting some details in the default view.

ebergen@etna:(~/gc/emt) sudo rpm -i ./emt-0.2-107.noarch.rpm
The emt_view command will likely have missing data until you specify correct interfaces and disks in /opt/emt/plugins/views/default.php.

Seeing this reminds me that EMT needs to support user defined views. I’ve filed an issue about this. Moving on.
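If your machine’s network interface or disk isn’t what the default view expects, fix that up before waiting on data. Here’s a minimal sketch of what I mean, assuming the view references device names like eth0 and sda (the exact contents of default.php will differ):

grep -n 'eth\|sd' /opt/emt/plugins/views/default.php   # see which interface and disk the view references
sudo vi /opt/emt/plugins/views/default.php             # change them to match this host, e.g. eth0 and sda
emt_view -n 2                                          # after a couple of minutes, confirm the columns fill in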

The default system stats plugin for EMT uses some tools that are probably already on your system to collect stats about system performance. This duplicates a lot of functionality from the sysstat package, which is fine because EMT isn’t just about system stats. If everything went OK, in a few minutes the emt_view command will output some stats.

Basic Usage

ebergen@etna:(~/gc/emt) emt_view -n 5
[-------emt-------] [--cpu--] [disk] [memory] [------network------] [----swap----]
Sampling Start Time Sys% Usr% Busy% [Mem%] Recv Bytes Send Bytes [In] [Out] Used
                              sda          eth0       eth0
2010-08-16 20:40:01 0    0    1     42     23K        89K        0    0    700M
2010-08-16 20:42:02 1    1    2     42     33K        110K       0    0    700M
2010-08-16 20:43:02 0    1    0     42     21K        93K        0    0    700M
2010-08-16 20:44:01 0    0    0     42     19K        144K       0    0    700M
2010-08-16 20:45:01 0    1    0     42     29K        159K       0    0    700M

The -n 5 tells emt_view to return the most recent 5 events, which in the default configuration is 5 minutes of data. EMT plugins are divided into two parts, commands and fields. Internally, every minute a series of commands is executed and one or more fields are parsed from those commands. In the output each column is one field. The power of EMT is being able to compare the results of any command with any other command side by side. In the above output there are results from at least 5 commands.

Depending on the fields there will be either two or three headings per column. The first is the namespace. In future releases it will prevent name collisions between plugins, but for now its only real use is as a grouping for headings. The second is the field name. The third is a sub field. Sub fields can be dynamically discovered each minute by a plugin; in this case the plugin discovered eth0 and the view is configured to use it.

A few different views ship with EMT. Some of these are as simple as a list of fields to display. Others create fields on the fly. To see the list of views use emt_view -v. To select a view use emt_view -s view_name. It’s possible to create custom views on the command line with the fields listed in emt_view -l. I’ll cover this in more detail in a future post.
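A quick example of switching views from the shell (a sketch; I’m assuming the stock view is named default, and your install may list different views):

emt_view -v            # list the views that ship with EMT
emt_view -s default    # select a view by name
emt_view -l            # list every field available for building a custom view
emt_view -n 5          # show the most recent samples using the selected view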

emt_view is the basic method of accessing the data provided by EMT. There are also other programs, such as emt_awk, which provides CSV output that can be piped to other commands like awk. emt_view is commonly used for analysis, and emt_awk is often used by monitoring tools to alert on thresholds. I’ll cover these and other commands in future tutorials.
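As a rough sketch of the monitoring use case (the emt_awk flags and CSV column order here are assumptions, so check your install before copying this):

# alert if swap used, assumed here to be the last CSV column in raw bytes,
# exceeded 1G in any of the last 5 samples
emt_awk -n 5 | awk -F, '$NF > 1073741824 { print "swap usage high: " $NF; bad=1 } END { exit bad }'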

How to be a MySQL DBA and the best MySQL book on the planet.

Recently there was a thread on the mysql mailing list discussing how to become a MySQL DBA. I’m not sure the MySQL DBA role exists in the same capacity that it does in Oracle. Historically, the Oracle DBAs that I’ve met are focused purely on Oracle. They focus on maintaining Oracle based systems, including managing migrations, upgrades, table space sizes, and other tasks. MySQL DBAs tend to fall into two different buckets: people who work like developers and help with query optimization, and people who work like sys admins and are focused on the operation of MySQL. There are very few people who can fill both roles, and I think that’s why there are so many MySQL DBA jobs on the market. Companies are looking for one DBA when they should really be looking for two.

Jeremy’s post on how to hire a MySQL DBA is still true today. These people still don’t exist. I’ve noticed there are two groups of people with part of the skills needed to be a MySQL DBA. Good Oracle DBAs tend to be very well versed in SQL and query optimization. They’re good at working with developers to write queries that will play nice with the database. They have brains that think in sets of data and can handle complex query logic. The downside is that they have only been exposed to Oracle, which includes everything, and they have a hard time with the LAMP world where systems must be built out of a lot of separate components.

Sys admins, on the other hand, are used to managing daemons and working with rsync, Linux, and shells. They can handle software deployment and monitoring, and they understand system performance from the operating system level. They understand the basics of configuration, and quite a few of them can handle simple MySQL tasks such as installation and basic replication configuration. They tend not to have very much experience in query optimization or the specifics of how applications interact with databases. I’ve long held the opinion that MySQL should just be another component in the system and doesn’t need specialized and isolated monitoring solutions, which makes it easier for a group of sys admins to monitor it alongside Apache and other daemons. To turn a sys admin into a DBA, they need to understand the special requirements MySQL has, such as I/O latency and atomicity of backups. Good sys admins can pick up these skills quickly.

This brings me to the best MySQL book on the planet, High Performance MySQL, Second Edition. Why is it the best? Because it applies to both types of DBAs and can help them develop the skills they need to become a super DBA who can handle both the sys admin tasks and the query optimization tasks. The book has been out for roughly two years and is still very relevant. I recommend it to everyone who asks me where they can go to learn how to be a MySQL DBA, and it has never disappointed. A quick note on books: don’t loan out your copy; ask the person you recommend it to to buy their own. It helps the writers, and this book should be on the desk of anyone working with MySQL.

EMT SVN now on Google Code

Jeremy moved the EMT svn repository to Google Code last night. This gives it better integration with the issue tracker and Google’s kick ass source browser, and it gives me the ability to grant more commit rights without giving people accounts on servers. Check out the new source tab, especially the part that lists the field objects. EMT ships with about 100 metrics, not counting dynamic sub fields, including checks for MySQL, Apache, memcache, per process memory, network, and other system stats.

First post using the ShapeWriter input method on Android

I must say this is way faster than tapping. It’s surprisingly accurate even after only a few minutes of using it.

WordPress on Android

It’s like my own little twitter. I don’t think I will be publishing much from this but it’s great for creating stub posts.

Table statistics draft 2, the slow query log

I’ve posted a new table statistics patch, which is the next version of the session table/index statistics patch. This version of the patch adds slow query log output. If a query is logged to the slow query log it will have row count statistics added to it.

I’m not sure about the format of the log, which is why I’m posting this so early. The first format I tried was:

# Time: 100119 19:24:37
# User@Host: [ebergen] @ localhost []
# Query_time: 10 Lock_time: 0 Rows_sent: 7 Rows_examined: 3
# Rows_read: sbtest.foo:3, sbtest.bar:3,
select * from foo a, bar b where sleep(1) = 0;

There would be an additional line for each of rows_changed, rows_changed_x_indexes, and index_rows_read. This seemed too verbose, so I tried a different format:

# Time: 100119 20:27:16
# User@Host: [ebergen] @ localhost []
# Query_time: 6 Lock_time: 0 Rows_sent: 6 Rows_examined: 14
# Rows Stats: sbtest.foo 18 0 0, sbtest.bar 15 3 3,
# Index Stats: sbtest.bar.u 6,
select * from foo a, bar b where b.u=4 order by sleep(1);

Here the row stats have three columns per table: rows_read, rows_changed, and rows_changed_x_index. I’m leaning towards the second format but I’m open to ideas. What do you think?

The new patch is here.

First draft of per session table and index statistics

I had some free time over Thanksgiving, so I decided to work on something I have been thinking about for quite some time. I hacked up Google’s show table_statistics patch to also track stats per connection. I say this is a first draft hack because I based it off of the v2 patch, which uses a straight up hash table instead of the intermediate object cache.

I’ve added the global/session keyword to the existing show table_statistics command, in the same way that show status works. This means that the default behavior of show table_statistics is to show session data instead of global data. This is different from the Google patch, which only works globally. This has been running in production environments for a bit and seems stable. Note that these environments don’t run at the concurrency that motivated Google to update the patch to be less likely to lock a global mutex. You have been warned!

I’m planning on updating the patch with more stats and a cache for the global stats. So far it’s been useful in debugging queries that have low row estimates in the explain plan but are actually scanning quite a few rows. Explain tends to handle row count estimates for sub queries poorly. It’s handy to copy a query from the slow query log on a production server and run it again using show session table_statistics to see how many rows it actually read from individual tables. I also plan to add build time tests which can keep track of row counts from a sample database, and I want to look into adding these stats directly into the slow query log.

Here is the updated patch. The patch applies against 5.0.72sp1. Here are the command descriptions.

For table statistics:

SHOW [GLOBAL | SESSION] TABLE_STATISTICS [LIKE 'pattern' | WHERE expr]

FLUSH [GLOBAL | SESSION] TABLE_STATISTICS

Index statistics:

SHOW [GLOBAL | SESSION] INDEX_STATISTICS [LIKE 'pattern' | WHERE expr]

FLUSH [GLOBAL | SESSION] INDEX_STATISTICS

Some examples.

mysql> show session table_statistics;
Empty set (0.00 sec)

mysql> show global table_statistics;
+------------+-----------+--------------+-------------------------+
| Table      | Rows_read | Rows_changed | Rows_changed_x_#indexes |
+------------+-----------+--------------+-------------------------+
| sbtest.foo |         6 |            0 |                       0 |
+------------+-----------+--------------+-------------------------+
1 row in set (0.00 sec)

mysql> select * from sbtest.foo;
+-------+
| t     |
+-------+
| 82921 |
| 24489 |
| 73681 |
+-------+
3 rows in set (0.00 sec)

mysql> show session table_statistics;
+------------+-----------+--------------+-------------------------+
| Table      | Rows_read | Rows_changed | Rows_changed_x_#indexes |
+------------+-----------+--------------+-------------------------+
| sbtest.foo |         3 |            0 |                       0 |
+------------+-----------+--------------+-------------------------+
1 row in set (0.00 sec)

mysql> show global table_statistics;
+------------+-----------+--------------+-------------------------+
| Table      | Rows_read | Rows_changed | Rows_changed_x_#indexes |
+------------+-----------+--------------+-------------------------+
| sbtest.foo |         9 |            0 |                       0 |
+------------+-----------+--------------+-------------------------+
1 row in set (0.00 sec)
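A typical debugging session with the patch looks something like this (just a sketch of the workflow using the syntax above, against the same sbtest table as the examples):

mysql> flush session table_statistics;   -- zero this connection's counters
mysql> select * from sbtest.foo;         -- re-run the query being debugged
mysql> show session table_statistics;    -- rows actually read, per table, by just that query
mysql> show session index_statistics;    -- and which indexes were used to read them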

pid file directory and a full disk

To continue the pid file theme, I’ve found another slight issue. This was unrelated to the testing in which I found the previous pid file issues. I was working on an unmonitored development MySQL system. While working on it I ran it out of disk space in /. The box has its MySQL datadir in a separate partition which had plenty of space. The pid file is in a dir on /. When I started mysqld_safe, mysqld exited because it couldn’t create the pid file. mysqld_safe continued to restart mysqld until I saw the problem and killed it a few minutes later. I’m not sure exactly why; I didn’t spend very much time digging into a failure that I caused by filling up the disk, but mysqld was exiting because it was trying to create a pid file in a full partition.

Note: This was a stock mysqld, not one running my pid file patch.

Attempting to unwind the tangled web of pid file creation.

Previously I wrote about how late the MySQL pid file is created in the startup process. At first glance it seemed like a relatively easy thing to fix. In main() there is a call to start_signal_handler(). The first instance of static void start_signal_handler() does only one thing: it checks !opt_bootstrap to make sure mysqld isn’t being called by mysql_install_db. I’m not sure why mysql_install_db can’t have a pid file created, but that’s getting outside the scope of my investigation. It seems simple enough to move the call to start_signal_handler() above the call to init_server_components() in main() and have the pid file created earlier. It turns out pid file creation happens differently on different arches.

For Windows and NetWare, start_signal_handler simply creates the pid file. For __EMX__ (I’m not sure what that is) start_signal_handler does nothing. By default start_signal_handler starts a signal handler thread, and this thread then creates the pid file. I think this can be cleaned up by removing the start_signal_handler functions that either do nothing or only create a pid file, and handling the pid file creation for arches that need it directly in main() with some good self documenting ifdefs right around it.

I don’t have all the environments to test that this patch really works. I’ve tested it on Linux, and it does create the pid file and delete it on shutdown. The pid file is created just after argument parsing and before the heavyweight storage engine initialization.

[Update 2009-12-07: I think the old patch broke embedded. I updated it to ifdef out the call to start_signal_handler]
Here is the patch: Create pid file sooner patch.

mysqld_safe and pid file creation race condition

mysql_safe is responsible for restarting mysqld if it crashes and will exit cleanly if mysqld was shutdown. The way it determines if mysqld shutdown correctly is if the pid file is cleaned up correctly. MySQL does quite a few things before creating the pid file like initializing storage engines. It can take quite a while to initialize innodb if it has to roll forward the transaction log. During this time if mysqld crashes mysqld_safe won’t restart it. Normally this is ok because the server would just crash again but it can mess with your head a bit if you’re testing changes to mysqld_safe. Especially if those changes involve what mysqld_safe does if mysqld crashes. I think it makes sense to create the pidfile earlier and there is a bug for it. Chime in on the bug if this has burned you.