Spanning uses MySQL for several of our products, and our Backup for Salesforce product is no exception. Over the 2015 holiday season we noticed that some of our backups were reporting MySQL deadlock errors. While this wasn’t the first time we had encountered deadlocks (more on that later), this was extremely surprising to us as we hadn’t deployed any code changes over the holidays.
Ultimately, we realized that while we had not deployed any code changes, we had modified our infrastructure. Namely, we upgraded our MySQL instances in Amazon RDS to use SSDs as opposed to HDDs (or magnetic disks, as they are called in the AWS console). The goal was to provide a performance improvement to our application without having to change any code.
![Go Speed Racer, Go!](/2016/02/02/Diagnosing-and-Resolving-MySQL-deadlocks/Go-Speed-Racer.jpg)
While the upgrade accomplished this goal, it also brought along a small handful of deadlocks. To be more precise, we had experienced deadlocks due to the same root cause several months prior, but our recently improved, self-imposed monitoring of production was now robust enough to bring the errors to our immediate attention.
Fortunately, MySQL (using the InnoDB engine) offers an extremely simple way to diagnose deadlocks, assuming you know where to look.
Locating deadlocked transactions
At a minimum, to locate the transactions (and more specifically, the deadlocked SQL statements), your MySQL user will need the PROCESS privilege. If you don’t have this privilege, you will simply get an error when you attempt to run the query below; in that case, ask your administrator to execute the query for you and forward the output. You can run the following to access the stored information about the most recent deadlock:
```sql
SHOW ENGINE INNODB STATUS \G
```
I recommend using '\G' rather than ';' as the query terminator, as the output is much easier to read. The output can be rather large, but there is a section specifically titled LATEST DETECTED DEADLOCK, which is where you should focus your efforts. Here is an example with most of the irrelevant data removed from the output:
```
------------------------
LATEST DETECTED DEADLOCK
------------------------
...
```
Here you can see two UPDATE statements: one for the record with an id of ‘ABC’ in transaction 1, and another for the record with an id of ‘XYZ’ in transaction 2. Note that these statements were running concurrently. We’ll dive into the details of this output below, after a brief overview of the table itself.
Diagnosing the source of the deadlock
To understand why the above deadlock happened (without going too far into our architecture and table structure), there are just a few things I should explain first. This is a gross simplification, but:
- We have a table in MySQL that represents every version of every record we have ever backed up for a given organization.
- Every row effectively stores the point in time when that version of the record first became relevant, and the point in time when it stopped being relevant. This means that if a version is backed up on January 1 at noon, you can think of it as being inserted with start_date='2016-01-01 12:00:00' and an arbitrarily large end_date such as end_date='9999-12-31 23:59:59'. (We don't actually use dates, but it's easier to demonstrate this way.)
So let’s suppose that the table has the following structure to store records that we have retrieved from Salesforce. Salesforce calls them SObjects, and each one has a type (Account, Contact, Attachment, etc.) and a globally unique identifier that we will store in the salesforce_id column.
```sql
CREATE TABLE `record` (
  ...
```
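As a rough sketch, the full definition might look something like this (the column types, and the exact columns in the sobject_start_end_deleted index, are illustrative assumptions rather than our actual production DDL):

```sql
-- Illustrative only: column types and index columns are assumptions, not the exact production DDL.
CREATE TABLE `record` (
  `id`            BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  `salesforce_id` VARCHAR(18)     NOT NULL,
  `sobject_type`  VARCHAR(255)    NOT NULL,
  `start_date`    DATETIME        NOT NULL,
  `end_date`      DATETIME        NOT NULL DEFAULT '9999-12-31 23:59:59',
  `deleted`       TINYINT(1)      NOT NULL DEFAULT 0,
  PRIMARY KEY (`id`),
  KEY `sobject_start_end_deleted` (`sobject_type`, `start_date`, `end_date`, `deleted`)
) ENGINE=InnoDB;
```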
Records in the database might look something like the following:
```sql
mysql> insert into record (salesforce_id, sobject_type, start_date, deleted)
    -> ...
```
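To make the example concrete, the first backup of our two example Account records might look like this (the values are purely illustrative):

```sql
-- Illustrative values only; end_date falls back to the arbitrarily large default.
INSERT INTO record (salesforce_id, sobject_type, start_date, deleted)
VALUES ('ABC', 'Account', '2016-01-01 12:00:00', 0),
       ('XYZ', 'Account', '2016-01-01 12:00:00', 0);
```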
Note that we have the key sobject_start_end_deleted, which is a non-unique index. This plays an important role in the deadlock, which (SPOILER ALERT) is caused by a technique known as a gap lock. If you aren’t familiar with gap locks, you can read an introduction to them in the MySQL docs, along with their explanation of why gap locking is used. If things still aren’t clicking, I would suggest you refer to the following blog post.
During the course of our backup, we will retrieve a page of records from the Salesforce API, and then insert the information for each record into our table. We might receive 100 rows in a page, some of which are records we have never seen before, while others might be updates to records that we have backed up in the past. Suppose new versions of both ABC and XYZ come in two days later. In this case we would need to:
- Insert the new version of each record.
- Locate the version of each record prior to the most recent one.
- Update the end_date of each previous version so that it ends just before the newest version’s start_date (see the sketch below).
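In SQL terms, and reusing the illustrative schema above, that sequence looks roughly like the following for ABC (the exact statements and values are assumptions, not our production queries):

```sql
-- 1) Insert the new version of ABC; it becomes the open-ended current version
--    (end_date falls back to the arbitrarily large default).
INSERT INTO record (salesforce_id, sobject_type, start_date, deleted)
VALUES ('ABC', 'Account', '2016-01-03 12:00:00', 0);

-- 2) + 3) Locate the previous, still open-ended version and close it out just
--    before the new version begins. The WHERE clause is resolved through the
--    non-unique sobject_start_end_deleted index.
UPDATE record
   SET end_date = '2016-01-03 11:59:59'
 WHERE sobject_type  = 'Account'
   AND start_date    = '2016-01-01 12:00:00'
   AND end_date      = '9999-12-31 23:59:59'
   AND deleted       = 0
   AND salesforce_id = 'ABC';
```

The same pair of statements runs for XYZ, so two UPDATE statements against neighboring entries in the same index can easily be in flight at the same time.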
The table might now look something like this:
```sql
mysql> select * from record;
```
Now imagine this running in our production code, which has to insert, query, and potentially update millions of records on a daily basis. We have a function in our NodeJS code to update the end_date column for the previous version of a record and, if necessary, mark it as deleted. We run this update function with a small amount of concurrency using Bluebird’s map() function, similar to the following:
```javascript
// number of milliseconds for every second
```
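Here is a rough sketch of that idea; the helper names, connection setup, and exact WHERE clause are illustrative assumptions rather than our actual code:

```javascript
// Illustrative sketch only; names and the WHERE clause are assumptions.
var Promise = require('bluebird');
var mysql = require('mysql');

var pool = mysql.createPool({ host: 'localhost', user: 'backup', database: 'backup' });
var query = Promise.promisify(pool.query, { context: pool });

var OPEN_END_DATE = '9999-12-31 23:59:59';

// Example input; in production this comes from the page of records we just backed up.
var previousVersions = [
  { salesforceId: 'ABC', sobjectType: 'Account',
    prevStartDate: '2016-01-01 12:00:00', newEndDate: '2016-01-03 11:59:59' },
  { salesforceId: 'XYZ', sobjectType: 'Account',
    prevStartDate: '2016-01-01 12:00:00', newEndDate: '2016-01-03 11:59:59' }
];

function closePreviousVersion(row) {
  // Pull the previous version's end_date back to just before the new version's
  // start_date (the real code can also flag it as deleted). The WHERE clause is
  // resolved through the non-unique sobject_start_end_deleted index.
  return query(
    'UPDATE record SET end_date = ? ' +
    'WHERE sobject_type = ? AND start_date = ? AND end_date = ? AND deleted = ? ' +
    'AND salesforce_id = ?',
    [row.newEndDate, row.sobjectType, row.prevStartDate, OPEN_END_DATE, 0, row.salesforceId]
  );
}

// Up to 5 UPDATE statements are in flight at once, in a non-deterministic order.
Promise.map(previousVersions, closePreviousVersion, { concurrency: 5 });
```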
The specified concurrency argument offers a small performance gain since multiple connections can be made to the database to update several rows in parallel. Bluebird’s map function will simply iterate over the array we pass it in a non-deterministic order, and fire off 5 promises in parallel. As one resolves, another promise is created to update another row to maintain a concurrency of 5 until the array has been completely processed. However, each one of those UPDATE statements creates a transaction that tries to update rows using the above WHERE clause, which targets the non-unique index on the table. This opens the opportunity for a deadlock due to MySQL’s gap locking mechanism.
We noticed that a gap lock was occurring from the output of SHOW ENGINE INNODB STATUS, which contained two transactions in the deadlock section (you can refer back to the sample output above). The first has a transaction id of 1418022806 and the second is 1418022805. From the output we can see that the second transaction acquired a row-level lock, but that the first transaction acquired a gap lock (trx id 1418022805 lock_mode X locks gap before rec). This confirms that we were indeed experiencing a gap lock.
Taking both of those UPDATE statements and converting them to SELECT statements that used the same WHERE clause, we found that the primary keys (id) of these rows were exactly 1 apart, meaning the rows were right next to each other in the table. Thus, not only was MySQL telling us that a gap lock was in play, it was also pretty easy to verify just by running queries and inspecting the output.
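For example, re-running one of the WHERE clauses as a SELECT (again, the exact clause here is an assumption based on the sketch above) is enough to see which row each statement was targeting:

```sql
-- Run the deadlocked UPDATE's WHERE clause as a SELECT to see which row it touches.
SELECT id, salesforce_id
  FROM record
 WHERE sobject_type  = 'Account'
   AND start_date    = '2016-01-01 12:00:00'
   AND end_date      = '9999-12-31 23:59:59'
   AND deleted       = 0
   AND salesforce_id = 'ABC';

-- Repeating this with the second statement's WHERE clause (salesforce_id = 'XYZ')
-- returned an id exactly 1 apart, i.e. the two rows sat next to each other.
```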
It is possible to avoid gap locks by ordering the statements according to the index, but unfortunately the non-deterministic order of Bluebird’s Promise.map() is out of our control; in fact, it is at the heart of how the library always maintains the specified level of concurrency, since there is no guarantee how long any individual asynchronous request will take to complete.
Determining deadlock resolution
Resolving a deadlock obviously depends on what actions are causing the deadlock. Our first encounter with deadlocks was fairly straightforward. In anticipation of our deleted item restore (which we have subsequently improved and replaced with multi-record restore to handle both updated and deleted items), we were executing a long-running process to change the way we represented deleted records in the database. This utility would find specific rows in a table, locate a previous version of each row, then copy over some of the data and delete the row that was no longer relevant. It just so happened that the utility was attempting to delete rows for a Salesforce organization that was in the process of being backed up. As a result, our backup code was modifying the very same records that the fix-up utility was trying to delete, resulting in a deadlock. The path forward in this case was quite simple: ignore it and let it run its course. The fix-up utility was a temporary process that had a very slim chance of ever conflicting with another backup again, and in fact it never did. It finished successfully after a few weeks of execution without another hiccup.
Our more recent deadlocks definitely required intervention as our backup code will continue to run as long as this product is still available. We decided to fix them in a two-step process, though depending on the severity and persistence of the problem, you may choose to only implement one. The process was:
- Remove the concurrency argument entirely so that only one row is updated at a time.
- Revisit the code at a later time and add retry logic around the database update call.
Removing concurrency was the simplest code change to make. This allowed us to push out an immediate fix without any real engineering effort and continue to fully back up data for all of our customers as quickly as possible. Unfortunately, it does incur a slight performance penalty, and we don’t like things taking longer than they have to. So we came along later and put the concurrency back into the code, but wrapped the update code in retry logic to make sure that any updates which failed due to a deadlock were retried (up to a maximum number of times). This allows us to run at maximum performance when no deadlocks occur, yet gracefully handle the problem if/when it occurs again.
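Here is a rough sketch of that second step; the helper names are carried over from the earlier illustrative snippet, and the error-handling details are assumptions (with the node mysql driver, a deadlock surfaces as ER_LOCK_DEADLOCK, errno 1213):

```javascript
// Illustrative sketch only; closePreviousVersion() and previousVersions come from
// the hypothetical snippet shown earlier in this post.
var Promise = require('bluebird');

var MAX_RETRIES = 3;

function updateWithRetry(row, attempt) {
  return closePreviousVersion(row).catch(function (err) {
    // MySQL reports "Deadlock found when trying to get lock" as errno 1213
    // (ER_LOCK_DEADLOCK). InnoDB rolls back the victim transaction (here, just
    // this one UPDATE), so retrying is safe. Anything else, or too many
    // attempts, is re-thrown.
    if (err && err.errno === 1213 && attempt < MAX_RETRIES) {
      return updateWithRetry(row, attempt + 1);
    }
    throw err;
  });
}

// Concurrency restored: deadlocks are now retried instead of failing the backup.
Promise.map(previousVersions, function (row) {
  return updateWithRetry(row, 0);
}, { concurrency: 5 });
```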
Life after deadlocks
Without any monitoring in place (or with insufficient monitoring), it’s possible that many months could pass before we encounter a problem caused by deadlocks. At that point, the transaction information may have already been purged from the InnoDB engine status log. Living with problems like these can be tricky: if something appears to be working, then we have to assume it is, as correctness can be a fuzzy thing to define. Fortunately for us, our self-imposed monitoring of our production environment was able to point out the deadlocks and allow us to investigate them quickly.
In the past, the error itself would be found in a log file as well as a production database table that most employees don’t have access to anyway. The ones that do would have to remember to check this table on a regular basis. This made production monitoring difficult, so we worked on an internal web application that uses some internal REST APIs to help us monitor the health of every backup, export, and restore operation performed on a daily basis. Using a simple dashboard there, we can aggregate the results and see which operations are successful, partial, or outright failures.
The internal-only dashboard allowed us to pinpoint the deadlocks and immediately begin investigation into the problem. This helped us out immensely in this particular case, but such a tool is probably the basis of an entirely separate blog post itself.