Tuesday 20 February 2018

Looking deeper into the physical & logical architecture - Transaction Log File

Beyond all doubt, it is essential to have a good understanding of the Transaction Log (T-Log) so that we can diagnose unforeseen performance issues related to it, and I am sure almost everyone has faced at least one. The T-Log is essentially a record of all transactions happening in the database. Every change is first written to the physical T-Log file; the corresponding modified (dirty) data pages are later written to the Data Files by the CHECKPOINT process (or by the Lazy Writer under memory pressure). Among other things, the T-Log records:
  • the information needed for point-in-time recovery (FULL recovery model)
  • the start and end of each transaction
  • every data modification (insert, update, delete), including those made by system stored procedures and DDL statements against any table, system tables included
  • every extent and page allocation and de-allocation operation
  • the creation and dropping of tables and indexes
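A quick way to see how large the T-Log is and how much of it is currently in use (a sketch; the query targets the current database):

```sql
-- Log size and percentage used for every database on the instance
DBCC SQLPERF(LOGSPACE);

-- Or, for the current database only (SQL Server 2012 and later)
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_bytes  / 1048576.0 AS used_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```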

More specifically, SQL Server uses a Write-Ahead Logging (WAL) algorithm whereby the Log Buffer Manager always guarantees that the change descriptions (log records) are written to the T-Log on disk before the changed data pages are written to the physical Data Files. Write-Ahead Logging is also what allows SQL Server to guarantee some of the ACID properties of database transactions, most significantly durability. It is of paramount importance to keep in mind that there is no performance benefit in having multiple physical files for the T-Log, because the log is written sequentially and its files are used in a round-robin fashion. That said, if the disk holding the T-Log runs out of space, we can temporarily add another T-Log file on a different disk so that SQL Server keeps working until a definitive solution is put in place.
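That emergency measure can be sketched as follows (database, file names and paths are illustrative):

```sql
-- Temporarily add a second log file on another disk so activity can continue
ALTER DATABASE MyDB
ADD LOG FILE (NAME = MyDB_log2,
              FILENAME = N'E:\SQLLogs\MyDB_log2.ldf',
              SIZE = 4096MB, FILEGROWTH = 4096MB);

-- Later, once log backups have freed all the VLFs held in the extra file,
-- remove it so the database goes back to a single T-Log file
ALTER DATABASE MyDB REMOVE FILE MyDB_log2;
```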

Instead of relying on multiple T-Log files, it is highly advisable to take preventive actions to optimise T-Log usage in terms of performance and availability, for instance:
  • make sure only one Log File is used
  • use a dedicated drive/array for the T-Log file (RAID 10 for log drives if possible; RAID 1 is also acceptable)
  • avoid Log Fragmentation: avoid frequent small auto-grow events and set fixed auto-grow increments of, for example, 2GB, 4GB or 8GB
A fragmented Log File (one with too many VLFs) can slow down every operation that reads the T-Log file, such as Backup Tasks, the Crash Recovery process, Database start up, Transactional Replication, Database Mirroring, the creation of a Database Snapshot, DBCC CHECKDB, DBCC LOGINFO, Change Data Capture, etc.
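As a sketch of those growth settings (database and logical file names are illustrative):

```sql
-- Set a fixed 4GB auto-grow increment for the log file, instead of a percentage
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, FILEGROWTH = 4096MB);

-- Better still, pre-size the log so auto-grow rarely fires at all
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, SIZE = 32768MB);
```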

So, how can we control the size of the T-Log file? It depends on the database recovery model. With the SIMPLE recovery model, SQL Server automatically truncates the log and reuses the space once the transactions in it have committed (this happens at each CHECKPOINT), whereas with either the FULL or the BULK_LOGGED recovery model, a Log Backup is the only action that can truncate the T-Log. Nevertheless, you may have seen that even after taking a Log Backup the log space was still not truncated; this is because some internal T-Log records may still be required by some other database operation, such as an open transaction, AlwaysOn Availability Groups, Replication, CDC, Mirroring, a running Backup, etc.

It is well worth noting that we must keep the number of VLFs as low as possible, and each VLF at an acceptable size, because the more VLFs you have the worse the performance will be. I recommend having VLFs of no more than 512MB each, but it depends entirely on the use of the T-Log file and its growth settings.
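To find out what, if anything, is preventing log truncation, we can ask SQL Server directly (database name and backup path are illustrative):

```sql
-- log_reuse_wait_desc tells you why log space cannot be reused yet
-- (LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION, AVAILABILITY_REPLICA, ...)
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDB';

-- Under FULL or BULK_LOGGED, only a log backup marks the space as reusable
BACKUP LOG MyDB TO DISK = N'E:\Backups\MyDB_log.trn';
```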

Internally, SQL Server divides a Transaction Log file into a number of sections called Virtual Log Files (VLFs). By default, a T-Log will have 4 VLFs when created. After that, further VLFs are created with each auto grow or manual grow. These are the rules SQL Server uses to decide how many VLFs each growth adds:
  • if the growth is <= 64MB, then 4 VLF's
  • if the growth is > 64MB and <= 1GB, then 8 VLF's
  • if the growth is > 1GB, then 16 VLF's.
Note that, starting with SQL Server 2014, the algorithm changed:
  • if the growth is less than 1/8 of the current log size, then only 1 VLF is created (otherwise the rules above apply).
(updated 13-04-2024) and in SQL Server 2022:
  • if the growth is <= 64MB, then 1 VLF (instead of 4 VLF's).
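As a worked example (illustrative numbers), a single 512MB growth falls in the 8-VLF band under the pre-2014 formula, so SQL Server carves it into 8 VLFs of 64MB each:

```sql
-- Assuming the log is currently 4096MB, grow it by 512MB in one operation
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, SIZE = 4608MB);
-- Pre-2014 formula: 512MB is > 64MB and <= 1GB, so 8 new VLFs of 64MB each
```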

Finally, if we need to figure out how many VLFs there are, we can use the command DBCC LOGINFO, which is undocumented but safe to run because it only reads the T-Log. The most important columns to look at are: FileSize (the VLF size in bytes), Status (0 = not in use, 2 = in use) and CreateLSN, the Log Sequence Number at which the VLF was created; since all VLFs added by the same growth operation share the same CreateLSN, this column also tells you how many VLFs (4, 8 or 16) each growth created. From SQL Server 2016 SP2 onwards, the documented DMV sys.dm_db_log_info returns the same information.
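Both options side by side (database name is illustrative):

```sql
-- Undocumented but read-only: returns one row per VLF in the current database
DBCC LOGINFO;

-- Documented alternative (SQL Server 2016 SP2 and later): count the VLFs
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID(N'MyDB'));
```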
That is all for now, taking this insight into consideration will stand you in good stead for when you have to fix some internal T-Log issues. Thanks for reading. Let me know any remarks you may have. Stay tuned.


HELLO, I'M PERCY REYES! — a book lover, healthy lifestyle lover... I've been working as a senior SQL Server Database Administrator (DBA) for over 20 years; I'm a three-time awarded Microsoft Data Platform MVP. I'm currently doing a PhD in Computer Science (cryptography) at Loughborough University, England — working on cryptographic Boolean functions, algorithmic cryptanalysis, number theory, and other algebraic aspects of cryptography.