Delayed durability, the TURBO button for your transaction log

As noted in the previous post, this month I got my learn on at Tech Outbound’s SQL Cruise Alaska. Which was awesome in so many ways, but I’m only going to tell you about one. Because this one just might be of some value to you.

First a little background: a few months ago I was asked by Ginger Grant of my local user group (Arizona SQL Server Users Group) to present in September, so since then I’ve been quietly working on a presentation about helpful tips for optimizing data manipulation. Generally speaking, it’s a session about ways to make INSERT, UPDATE, and DELETE statements go faster. I came up with this session because (1) I haven’t seen any session covering this topic and (2) I have some colleagues who could really benefit from this kind of information.

I’ve made presentations before, but they were all given internally for coworkers. (Wait, that sounds bad, doesn’t it.) This means my presentation in a few weeks will be my first to people I don’t really know, which will be a new experience for me, so I took some of my time on the ship to get tips from experienced presenters.

Stay with me here, we’re getting to the good part.

Now, one of the presenters was Itzik Ben-Gan, who literally writes the T-SQL books for Microsoft. He knows that language better than I know English. No joke. So in conversation I noted what my presentation was about and asked if he had any suggestions for content.

“Delayed durability. You should speak about that.”

This led to some kind of a bonus learning session, which led to more conversations in subsequent days, eventually ending with Itzik eagerly showing me his delayed durability test scenarios on his laptop. Try not to be jealous.

Needless to say, delayed durability is now included in my session. But in case you won’t be in Phoenix next month I’ll cover it here.

Although delayed durability is a feature that has been available since SQL Server 2014, it seems many SQL professionals don’t really know much about it, or at least not the ways it can dramatically improve performance. It basically moves up the commit point of a transaction, resulting in seemingly faster writes.

Let me explain.

When you write a transaction – let’s say you’re inserting a record – the database engine takes your request and writes a log record to the log buffer. This is a 60 KB portion of memory, so it’s not very spacious. As your transaction progresses the data is changed on the page, and then the transaction is “hardened” by flushing the log buffer to the transaction log on disk. Bing, bang, boom.

If you turn delayed durability on then you are allowing your transaction to report a successful commit after its log records land in the log buffer but before that buffer is flushed (hardened) to the transaction log on disk. It may not sound like much, but if you have a lot of small transactions those time savings can really add up. Like all the money you spent on $4 coffees, but more so.
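For reference, delayed durability is controlled per database, and there are three possible settings. A quick sketch (TestDB here is just a placeholder database name):

```sql
-- Delayed durability is a per-database setting; DISABLED is the default.
ALTER DATABASE TestDB SET DELAYED_DURABILITY = DISABLED; -- fully durable (the default)
ALTER DATABASE TestDB SET DELAYED_DURABILITY = ALLOWED;  -- transactions may opt in at COMMIT
ALTER DATABASE TestDB SET DELAYED_DURABILITY = FORCED;   -- every commit is delayed durable
```

With ALLOWED, individual transactions opt in at commit time; with FORCED, everything gets the delayed treatment whether it asked for it or not.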

Let me show you.

So here’s an example of a simple script that writes 50,000 records to a table, one record at a time. I’ve only got one computer, but imagine you have multiple connections writing small changes to a data store. Here’s the first test, without delayed durability, which takes about 10 seconds on my laptop with a processor roughly the speed of two fat and lazy hamsters.

CREATE TABLE DelayedDurability (Id int IDENTITY (1,1) PRIMARY KEY CLUSTERED, SomeId int);

-- Simulate 50,000 transactions
DECLARE @id int = 1;
WHILE @id <= 50000
BEGIN
       INSERT DelayedDurability (SomeId)
       SELECT @id;

       SET @id += 1;
END;

Now, here is a script that turns on and uses the delayed durability feature.

It is . . . noticeably faster.


TRUNCATE TABLE DelayedDurability;

-- Allow delayed durable commits in this database
ALTER DATABASE CURRENT SET DELAYED_DURABILITY = ALLOWED;

DECLARE @id int = 1;
WHILE @id <= 50000
BEGIN
       BEGIN TRAN; -- <-- ADD THIS

       INSERT DelayedDurability (SomeId)
       SELECT @id;

       COMMIT TRAN WITH (DELAYED_DURABILITY = ON); -- <-- AND THIS

       SET @id += 1;
END;

Did you know your SQL Server transaction log came with a TURBO button?

So what’s the catch? Well, your transactions are “delayed” in writing from memory to the transaction log file, which means if you encounter a power outage then any transactions still sitting in the log buffer are lost, effectively rolled back. That would mean lost data, and that would be a bad thing.
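One mitigation worth knowing about: you can force all pending delayed durable transactions to be hardened to disk on demand, which is handy right before planned maintenance. A minimal sketch:

```sql
-- Flush every delayed durable transaction currently sitting in the
-- log buffer to the transaction log on disk, hardening them.
-- Useful just before a planned shutdown or failover.
EXEC sys.sp_flush_log;
```

It won’t save you from a surprise power outage, of course, but it shrinks the window for the outages you can see coming.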

Also, remember that the log buffer is 60 KB, so it’s not able to hold transaction log entries for massive amounts of data manipulation. But if you have a system that generates many little transactions then delayed durability can provide a huge performance boost for that kind of scenario.
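And if you want to check where the setting currently stands across your instance, sys.databases exposes it:

```sql
-- Show the delayed durability setting for every database on the instance
SELECT name, delayed_durability_desc
FROM sys.databases;
```

Anything reporting DISABLED is running fully durable, business as usual.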

And if you have even more questions, check out Aaron Bertrand’s post from a few years ago on the subject.

2 thoughts on “Delayed durability, the TURBO button for your transaction log”

  1. “which means if you encounter a power outage then your data will effectively roll back.”

    Oh, it’s WAY worse than that. Even on a graceful shutdown or planned cluster failover, this happens. Check Books Online – it’s pretty frank about this.

    I still love the feature, but it’s just so rare that someone is willing to lose data even on graceful shutdowns.

    1. Hey Brent,

      I totally agree, and I think one of the reasons it’s rarely used is because most folks see it and think about using it for EVERYTHING. Then they hear the words “potential data loss” and think about using it for NOTHING.

      Because it’s pretty much like the bus in “Speed”: you get to keep going over 55mph, but if you stop your transactions blow up and die.

But there are some situations where you don’t care if particular transactions die, like something that loops every n minutes and deletes old session connectivity info. There’s also the data store scenario I mentioned, where data is copied to a different location and in the case of an outage some date/time flag field could be reset to before the outage and the process could start over. Kinda like my run-on sentence there.

      Thanks for the comment!
