Monday 26 December 2016

How to delete a large quantity of data with no Transaction Log growth and no blocking issues

Without a shadow of a doubt, at times while working on optimising not only database performance but also hard disk usage, we have to face challenging tasks that need to be completed rapidly without impacting the production environment. It is well known that deleting data from large tables can be a truly hard task: the bigger they are, the more time they take. Consequently, the operation will also block the tables for a long time, causing performance issues and stopping systems from working. Today's post shows a technique for deleting historic data so as to keep only recent data and boost performance without a hitch. (This technique applies from SQL Server 2005 Enterprise Edition up to SQL Server 2014 Enterprise Edition; SQL Server 2016 supports TRUNCATE TABLE with partitions.)

To begin with, the technique consists of using SQL Server partition switching, which allows us to transfer subsets of data between tables quickly and efficiently. We could alter the existing large table and make it partitioned, but in this example we are going to create a new one. Now let's say there is a need to keep only the data from the latest three months. Here is the structure of the table 'LogTracker', which will be partitioned shortly after. It includes a specific computed column named 'MonthNumber' to hold the number of the month, which at the same time will be the partition number.

CREATE TABLE [dbo].[LogTracker](
    [LogID] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime]  NULL,
    [ErrorDesc] [nvarchar](MAX)  NULL,
    [User] [nvarchar](40) NULL,
    [MonthNumber]  AS DATEPART(MONTH, [CreateDate]) PERSISTED NOT NULL,
 CONSTRAINT [PK_LogTracker] PRIMARY KEY CLUSTERED 
  (
    [LogID] ASC,
    [MonthNumber] ASC
  )
)

There must also be a non-partitioned table (with the same structure) to move the data from the partitioned one into. This table will hold the data for a very short time before being truncated. Because it does not have any foreign keys or constraints, the truncate operation will complete successfully.

CREATE TABLE [dbo].[LogTracker_Temp](
    [LogID] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime]  NULL,
    [ErrorDesc] [nvarchar](MAX)  NULL,
    [User] [nvarchar](40) NULL,
    [MonthNumber]  AS DATEPART(MONTH, [CreateDate]) PERSISTED NOT NULL,
 CONSTRAINT [PK_LogTracker_Temp] PRIMARY KEY CLUSTERED 
  (
    [LogID] ASC,
    [MonthNumber] ASC
  )
)

Now it is time to create the partition function and the partition scheme we will use to partition the table 'LogTracker'. For this particular example, the data will be split by month, which means that each partition will contain the data of one month.

CREATE PARTITION FUNCTION [upf_LogTracker](INT) AS RANGE LEFT FOR VALUES (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11)
GO
CREATE PARTITION SCHEME [ups_LogTracker] AS PARTITION [upf_LogTracker] ALL TO ([PRIMARY])
GO

Finally, we will partition the table.

ALTER TABLE dbo.LogTracker DROP CONSTRAINT PK_LogTracker
WITH (MOVE TO ups_LogTracker([MonthNumber]))
GO
ALTER TABLE dbo.LogTracker ADD CONSTRAINT PK_LogTracker PRIMARY KEY ([LogID],[MonthNumber])
GO

Having successfully completed all of the above, we are able to insert some data for the purpose of this illustration. Obviously, we could insert millions of rows and the performance of the deletion using this technique would be the same: practically instantaneous. There is no difference between deleting a few rows or millions, because partition switching is essentially a metadata-only transfer of data between two tables.

INSERT INTO LogTracker SELECT NEWID(), '2016-06-23 13:13:56', 'Error A Desc', 'preyes'
INSERT INTO LogTracker SELECT NEWID(), '2016-07-24 04:18:12', 'Error C Desc', 'preyes'
INSERT INTO LogTracker SELECT NEWID(), '2016-08-20 01:34:57', 'Error B Desc', 'preyes'
INSERT INTO LogTracker SELECT NEWID(), '2016-09-21 08:09:16', 'Error D Desc', 'preyes'
INSERT INTO LogTracker SELECT NEWID(), '2016-11-22 05:26:45', 'Error A Desc', 'preyes'
INSERT INTO LogTracker SELECT NEWID(), '2016-11-23 15:43:56', 'Error A Desc', 'preyes'
INSERT INTO LogTracker SELECT NEWID(), '2016-12-24 23:28:12', 'Error C Desc', 'preyes'
As you can see, the data is classified by month.

SELECT $PARTITION.[upf_LogTracker](MonthNumber) AS PartitionNumber, * FROM LogTracker



With the following code we are going to delete the historic data and keep only the most recent three months. It moves the data of each month out of 'LogTracker' and into the staging table 'LogTracker_Temp', which is then truncated straight away. It starts with month number one and stops at 'the current month - 3'.

DECLARE @M INT = CASE WHEN DATEPART(MONTH, GETDATE()) - 3 > 0 THEN DATEPART(MONTH, GETDATE()) - 3 ELSE 12 + DATEPART(MONTH, GETDATE()) - 3 END
DECLARE @PartitionNumberToArchive INT = 1
WHILE (@PartitionNumberToArchive <= @M)
BEGIN
    ALTER TABLE dbo.LogTracker SWITCH PARTITION @PartitionNumberToArchive TO dbo.LogTracker_Temp
    TRUNCATE TABLE dbo.LogTracker_Temp
    UPDATE STATISTICS dbo.LogTracker WITH FULLSCAN
    SET @PartitionNumberToArchive = @PartitionNumberToArchive + 1
END

The previous code will finish in a second or two, almost instantaneously.



Naturally, the code needs to be scheduled and executed via a SQL Job at the end of each month. To be perfectly honest, not only will this deletion finish in one or two seconds, but it will also not have any impact on performance, and the Transaction Log will not grow as it does with traditional techniques. I hope this post is truly useful for you and you make the most out of it. Let me know any remarks you may have. That is all for now. Thanks for reading.
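
For reference, below is a minimal sketch of such a job, assuming the WHILE loop above has been wrapped into a stored procedure named 'dbo.usp_PurgeLogTracker' in a database called 'MyDatabase' (both names are hypothetical). It schedules the purge for 01:00 on the first day of every month, that is, right after each month closes.

USE [msdb]
GO
EXEC dbo.sp_add_job @job_name = N'Purge LogTracker history' -- job name is an assumption
EXEC dbo.sp_add_jobstep @job_name = N'Purge LogTracker history',
    @step_name = N'Switch and truncate old partitions',
    @subsystem = N'TSQL',
    @database_name = N'MyDatabase',             -- hypothetical database name
    @command = N'EXEC dbo.usp_PurgeLogTracker'  -- hypothetical wrapper procedure for the loop above
EXEC dbo.sp_add_jobschedule @job_name = N'Purge LogTracker history',
    @name = N'Monthly - first day',
    @freq_type = 16,               -- monthly
    @freq_interval = 1,            -- on day 1 of the month
    @freq_recurrence_factor = 1,   -- every month
    @active_start_time = 10000     -- 01:00:00
EXEC dbo.sp_add_jobserver @job_name = N'Purge LogTracker history', @server_name = N'(local)'
GO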

Sunday 11 December 2016

SQL2016: DROP IF EXISTS and CREATE OR ALTER statements

Beyond all doubt, using old-fashioned ways of manipulating database objects is not only a waste of time but also boring. It is well known that there are situations where we run into errors while altering or dropping certain database objects because they do not exist or because they have already been created. The traditional way of completing this task is to write logic that first validates the existence of the object and then makes a decision depending on the result; for instance, we usually do it by querying the 'sys.all_objects' system view. Having said that, I am of the opinion that this way of working is a very unproductive task. Luckily, the situation has changed: with the arrival of SQL Server 2016 we are able to drop/create/alter database objects without thinking much about whether or not they already exist.

Let me expand on what I am saying. New T-SQL language features have been introduced in SQL Server 2016, such as the DROP IF EXISTS and CREATE OR ALTER statements (the latter as of SQL Server 2016 SP1). With DROP IF EXISTS we can drop several kinds of objects and validate their existence in only one statement. The objects supported by DROP IF EXISTS are PROCEDURE, TABLE (including COLUMN and CONSTRAINT), TRIGGER, VIEW, FUNCTION, INDEX, ASSEMBLY, ROLE, AGGREGATE, RULE, TYPE, DATABASE, SCHEMA, USER, DEFAULT, SECURITY POLICY, SEQUENCE, and SYNONYM. Thus we do not need to do something like this any longer.

IF EXISTS (SELECT * FROM sys.objects WHERE name = 'MyProcNumberOne')
 DROP PROCEDURE MyProcNumberOne

Here are some examples of DROP IF EXISTS statement.

DROP PROCEDURE IF EXISTS [dbo].[MyProcNumberOne]
GO
DROP TABLE IF EXISTS [dbo].[MyTableDemo]
GO
DROP VIEW IF EXISTS [dbo].[MyViewTest]
GO
DROP FUNCTION IF EXISTS [dbo].[MyFunctionTwo]
GO
DROP INDEX IF EXISTS [IX_MyTableDemo_04] ON [dbo].[MyTableDemo]

Now DROP IF EXISTS working on columns and constraints.

ALTER TABLE [dbo].[MyTableDemo] DROP COLUMN IF EXISTS Col2
GO
ALTER TABLE [dbo].[MyTableDemo] DROP CONSTRAINT IF EXISTS FK_MyTableDemo_10
GO

Now, talking of the other new statement, CREATE OR ALTER: it supports STORED PROCEDURES (including natively compiled ones), FUNCTIONS (including natively compiled ones), TRIGGERS, and VIEWS. So we are not going to need the traditional validation any more.

IF NOT EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'[dbo].[MyProcNumberOne]'))
CREATE PROCEDURE [dbo].[MyProcNumberOne] (@Par1 INT, @Par2 VARCHAR(10), ... )
AS
BEGIN
...
..

The CREATE OR ALTER statement is also easy to use: if the object exists it will be altered, otherwise it will be created. It couldn't be simpler.

CREATE OR ALTER PROCEDURE [dbo].[MyProcNumberOne] (@Par1 INT, @Par2 VARCHAR(10), ... )
AS
BEGIN
...
..
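
To round the example off, here is a complete, runnable sketch using a view; the view body and the column 'Col1' are assumptions for illustration only. Run it twice: the first run creates the view and the second one simply alters it.

CREATE OR ALTER VIEW [dbo].[MyViewTest]
AS
    SELECT Col1 FROM [dbo].[MyTableDemo] -- Col1 is a hypothetical column
GO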

To be perfectly honest, I do think that these new language features are the be-all and end-all, as they are practical and simple and allow us to improve the quality of deploying new business functionality and integrating systems. Surely you will make the most out of these features. That is all for now. Let me know any remarks you may have. Thanks for reading.

Wednesday 7 December 2016

New per-operator level performance stats included in SQL2016 SP1 and SQL2014 SP2

Traditionally, we were used to getting only basic runtime performance stats per operator regarding CPU, I/O and memory, which were not good enough to allow us to automate the collection of them. I strongly believe that DBAs are always overwhelmingly interested in diagnosing issues proactively so as to prevent badly written queries from using hardware resources inefficiently. Luckily, this convoluted situation has changed radically in SQL Server 2016 and SQL Server 2014 SP2, since Microsoft has included very useful information per operator (and also per thread in the scope of each operator) which can be seen by looking into the Showplan XML. For instance, we are going to see that information for the following query (in this case it ran under a serial plan, that is, one thread).
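
The query itself appears in a screenshot, so as a stand-in here is a representative query (an assumption on my part) against the AdventureWorks sample database that produces the same kind of operators discussed below; SET STATISTICS XML ON returns the actual plan together with the new runtime counters.

USE AdventureWorks2014
GO
SET STATISTICS XML ON -- returns the actual plan, including per-operator runtime stats
SELECT ProductID, SUM(LineTotal) AS TotalPurchased
FROM Purchasing.PurchaseOrderDetail
GROUP BY ProductID
SET STATISTICS XML OFF
GO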



This is its actual execution plan.


Everything that is new in SQL2016, SQL2016 SP1 and SQL2014 SP2 is highlighted. We can appreciate CPU and I/O runtime performance stats for the Clustered Index Scan operator on the 'PurchaseOrderDetail' table, such as I/O reads, CPU time and elapsed time. In a parallel plan you will be able to see this information per thread in each operator.



And now we will see not only CPU and I/O performance stats for the Hash operator but also Memory performance stats.



All this information can also be seen in the Properties window of each operator inside the Actual Execution Plan by using the latest version of SSMS. If you expand the detail of each statistic, the per-thread detail will be shown when working with a parallel execution plan.
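
As a starting point for such scripts, here is a minimal sketch that shreds an actual execution plan previously captured into an XML variable (for example, from SET STATISTICS XML or an Extended Events session) and lists the new counters per operator and per thread.

DECLARE @plan XML = N'...' -- paste or load an actual execution plan here (placeholder)
;WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT op.value('@NodeId', 'int')                AS NodeId,
       op.value('@PhysicalOp', 'nvarchar(60)')   AS PhysicalOp,
       th.value('@Thread', 'int')                AS Thread,
       th.value('@ActualCPUms', 'bigint')        AS ActualCPUms,
       th.value('@ActualElapsedms', 'bigint')    AS ActualElapsedms,
       th.value('@ActualLogicalReads', 'bigint') AS ActualLogicalReads
FROM @plan.nodes('//RelOp') AS r(op)
CROSS APPLY op.nodes('./RunTimeInformation/RunTimeCountersPerThread') AS t(th)
ORDER BY NodeId, Thread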



Having seen this, not only are we now able to look into useful performance stats by using the Showplan XML and SSMS, but we can also create scripts for administration and optimisation purposes in order to pick out queries with poor performance and then optimise them. That is all for now. Let me know any remarks you may have. Thanks for reading.

Monday 14 November 2016

How to create new Logins and Users for an AlwaysOn Availability Group

(This tip also applies to Log Shipping and Database Mirroring)

Surely, after setting up an AlwaysOn Availability Group (AG) we are going to need to create additional Logins and Users that must have access and the same permissions across the replica servers so that users can work on each server properly. At times this may end up being a confusing task, as SQL Logins may not be matched to the correct SQL Users, and therefore users will only have access to the SQL instance but not to the databases in the Secondary Replicas. Do not forget that we cannot create Users directly in Secondary Replicas, as they allow only read-only, read-intent or no access.

To begin with, we need to understand some basic concepts. Logins and Users in SQL Server are different things. Logins give access only to the SQL instance and allow tasks at that level such as creating/altering/dropping databases, altering configurations of the SQL instance, etc., whereas Users give access to the databases and allow work inside them such as deleting/updating/inserting data and creating/altering/dropping tables, stored procedures, triggers, etc. So, traditionally we always need to have a Login and a User mapped to each other so as to have access to databases. (Since SQL Server 2012 it is possible to create a SQL User without a SQL Login, but this aspect is not covered in this post.)

In this context, when it comes to creating new Logins and Users, not only will we need to create them in the Primary Replica, but we also have to make sure they are created in each Secondary Replica so that users and applications can log on successfully with the same permissions as they have in the Primary Replica. We might get some big issues if they are not matched correctly.

Obviously, we need to create the new Login at the Primary Replica level and then give it the permissions inside the databases. Consequently, ONLY the User will be replicated immediately to each Secondary Replica. Because the Login is NOT a database object but a server object, it will not be replicated, which means that we need to create the Login in each Secondary Replica server manually. If the Login is a Windows Login, then all we need to do is create that Windows Login in each Secondary Replica. A Windows Login has a single SID across the whole domain, so it will be automatically matched to its Windows User with the same name in each database in the Secondary Replica server; we do not need to know its SID before creating it.

USE [master]
GO
CREATE LOGIN [DOMAIN\percy.reyes] FROM WINDOWS WITH DEFAULT_DATABASE=[master]
GO

Now, when it comes to working with SQL Logins and SQL Users, we must figure out and take the correct SID of the SQL Login in order to use it to create the same SQL Login in each Secondary Replica server. We can query the SID of the SQL Login we need by executing code like the following.
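
For instance, for a SQL Login named 'preyes' (the one used in the example below), a minimal sketch would be:

SELECT name, sid
FROM sys.sql_logins
WHERE name = N'preyes'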

Having got it, we can use it to create the SQL Login in each Secondary Replica by indicating the correct SID. (Do not forget to use the same password and other properties too.)

USE [master]
GO
CREATE LOGIN [preyes] WITH  PASSWORD=N'P4s$Word&1', SID= 0x8AA780458D0F9E4F95D4204B8E022C83, CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
GO

In conclusion, only after following this technique will the SQL Login automatically match the correct User inside the database. It also means that the user will be able to log on to any Secondary Replica server with the right permissions inside the databases. That is all for now. Let me know any remarks you may have. Thanks for reading.

Friday 28 October 2016

How to monitor database file space usage in SQL Server

Undoubtedly, when it comes to monitoring database file space usage we need a simple way to do it. Fortunately, SQL Server has a considerable number of DMVs and system functions that allow us to get closely related information about it. Today's post intends to provide a practical way of warning us when the free database file space is less than a threshold of 20%. With this alert in place, we will be able to make a preventive decision in order to keep the performance of in-flight transactions from being impacted.

I have come up with a simple query to achieve that. Firstly, we need to create a physical table 'FileSpace' which will hold the related information for each database. Having successfully created the table, the following step is to execute the adequate logic, in the context of each database via 'sp_MSforeachdb', to collect the file space usage which will be reported at the end. I am getting the space usage information of each database file by using the FILEPROPERTY system function.

The execution of the entire script (including the creation of the table) can be automated via a SQL Job in order to send yourself an HTML-formatted alert by using Database Mail. This code does not include the logic to send that email, but it is not something difficult to complete (see the sketch after the script). That is all for now. Let me know any remarks you may have. Thanks for reading.

Here is the whole script.
USE [master]
GO
CREATE TABLE [dbo].[FileSpace](
    [DatabaseName] [nvarchar](128) NULL,
    [FileName] [sysname] NOT NULL,
    Type varchar(10) not null,
    [FileSizeGB] [decimal](10, 2) NULL,
    [SpaceUsedGB] [decimal](10, 1) NULL,
    [SpaceFreeGB] [decimal](10, 1) NULL,
    [SpaceFree%] [decimal](10, 1) NULL,
    Physical_Name varchar(max)
) ON [PRIMARY]
GO
 
EXEC sp_MSforeachdb '
USE [?]
insert into master.DBO.[FileSpace]([DatabaseName] ,    [FileName] , Type,    [FileSizeGB],    [SpaceUsedGB] ,    [SpaceFreeGB] ,    [SpaceFree%],Physical_Name )
select db_name() DatabaseName, name FileName, Type_Desc ,
cast(size/128.0/1024.0 as decimal(10,2)) SizeGB, 
cast(FILEPROPERTY(name,''SpaceUsed'') /128.0/1024.0  as decimal(10,1)) SpaceUsedGB, 
cast((size - FILEPROPERTY(name,''SpaceUsed'') ) /128.0/1024.0 as decimal(10,1)) SpaceFreeGB, 
cast(((size - FILEPROPERTY(name,''SpaceUsed'') )/(size*1.0)) *100 as decimal(10,1)) [SpaceFree%], physical_name
FROM sys.database_files'

SELECT * FROM FileSpace
WHERE [SpaceFree%]<=20
DROP TABLE FileSpace
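
As a starting point for the email logic, here is a sketch that would slot in just before the final SELECT and DROP TABLE; the Database Mail profile 'DBAProfile' and the recipient address are assumptions you must replace with your own.

DECLARE @html NVARCHAR(MAX)
SET @html = N'<table border="1"><tr><th>Database</th><th>File</th><th>SpaceFree%</th></tr>'
    + CAST((SELECT td = DatabaseName, '', td = [FileName], '', td = [SpaceFree%]
            FROM master.dbo.FileSpace
            WHERE [SpaceFree%] <= 20
            FOR XML PATH('tr'), TYPE) AS NVARCHAR(MAX))
    + N'</table>'

IF EXISTS (SELECT 1 FROM master.dbo.FileSpace WHERE [SpaceFree%] <= 20)
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'DBAProfile',            -- hypothetical Database Mail profile
        @recipients   = N'dba-team@mydomain.com', -- hypothetical mailbox
        @subject      = N'Warning: database file free space below 20%',
        @body         = @html,
        @body_format  = 'HTML'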

Friday 30 September 2016

Creating alerts for proactively monitoring SQL Server AlwaysOn Availability Groups

When it comes to monitoring databases, we have to do it proactively by using alerts implemented natively or via T-SQL code inside SQL Jobs running recurrently (I know we do have the AlwaysOn Dashboard, but personally I do not have time to check it all the time). Databases go into production so rapidly that we need the alerts to be working just as quickly. In this post I am going to show how to implement the most important AlwaysOn Availability Group alerts. To begin with, I would suggest creating alerts at least for these error messages.



Undoubtedly, we may need more alerts. Therefore, we will need to look into the 'sys.messages' system view and look for error messages closely related to the AlwaysOn Availability Group feature. Only after identifying them will we be able to create the corresponding alerts using the 'message_id' number. To illustrate, I will query 'sys.messages' using keywords like 'Availability', 'suspended', 'failed', 'disconnected', etc.
SELECT * FROM sys.messages  
WHERE (  [text] LIKE '%Availability%Disconnected%' OR   [text] LIKE '%Availability%not%synchroni%'
OR   [text] LIKE  '%Availability%suspended%'    OR   [text] LIKE '%Availability%failed%' OR   [text] LIKE '%Availability%chang%role%'
OR   [text] LIKE '%Availability%resumed%' )  AND language_id=1033



As you have seen, there are about 63 messages (depending on the SQL Server version you are working on). I do believe that these messages are the be-all and end-all, so we might not need to implement additional alerts; put differently, they are enough. Nevertheless, we might filter out some messages we do not consider important and include others to monitor an AlwaysOn Failover Cluster instance instead.
SELECT * FROM sys.messages  
WHERE (  [text] LIKE '%Availability%replica%' OR   [text] LIKE '%Database%replica%' OR   [text] LIKE '%primary%role%'
    OR   [text] LIKE '%secondary%role%' OR   [text] LIKE '%availability%group%' OR   [text] LIKE '%WSFC%'
    OR   [text] LIKE '%primary%replica%' OR   [text] LIKE '%secondary%replica%' OR   [text] LIKE '%alwayson%')
    AND    [text] NOT LIKE '%No user action is required.%'
    AND severity<>10
    AND language_id=1033
Having got the error number, the next step is to create the alert for the error by using the 'sp_add_alert' system stored procedure, which accepts basic parameters like the name for the alert and the 'message_id' (taken from 'sys.messages'). The rest of the parameters will be the same for each alert, so you can create the other alerts by just replacing the name and 'message_id'. The following example is based on the error number 41418. (We also create the notification to send us an email when the alert is raised. Obviously, not until you have successfully configured the SQL Agent service to use Database Mail will you be able to receive the email.)

USE [msdb]
GO
EXEC msdb.dbo.sp_add_alert @name=N'41418 HADR Alert', 
        @message_id=41418, 
        @severity=0, 
        @enabled=1, 
        @delay_between_responses=0, 
        @include_event_description_in=1, 
        @notification_message=N'Contact your Database Administrator urgently.', 
        @job_id=N'00000000-0000-0000-0000-000000000000'
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'41418 HADR Alert', @operator_name=N'DBA', @notification_method = 1
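
If you prefer to generate the whole set of alerts in one go, the following sketch loops over 'sys.messages' and creates one alert plus its notification per error. It reuses one of the keyword filters shown above, assumes the operator 'DBA' exists, and keeps only event-logged messages, since SQL Agent alerts fire only for errors written to the event log.

DECLARE @id INT, @name SYSNAME
DECLARE c CURSOR FOR
    SELECT message_id FROM sys.messages
    WHERE [text] LIKE '%availability%suspended%' -- reuse the keyword filters shown above
      AND language_id = 1033
      AND is_event_logged = 1
OPEN c
FETCH NEXT FROM c INTO @id
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @name = CAST(@id AS NVARCHAR(10)) + N' HADR Alert'
    IF NOT EXISTS (SELECT 1 FROM msdb.dbo.sysalerts WHERE message_id = @id)
    BEGIN
        EXEC msdb.dbo.sp_add_alert @name = @name, @message_id = @id,
            @severity = 0, @enabled = 1, @delay_between_responses = 0,
            @include_event_description_in = 1
        EXEC msdb.dbo.sp_add_notification @alert_name = @name,
            @operator_name = N'DBA', @notification_method = 1
    END
    FETCH NEXT FROM c INTO @id
END
CLOSE c
DEALLOCATE c
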
To test whether or not the alert is working, we can raise the error manually via the RAISERROR command with the 'LOG' option (so that the message is written to the SQL Server error log).
RAISERROR(41418,16,1) WITH LOG;

Having done that, we can see the following message in the error log:



Likewise, you will get the email.
To sum up, I am convinced that these sorts of alerts will always work for monitoring Availability Groups and more. You just need to spend some time figuring out the error messages and then go ahead and create the right alerts for you. I hope you enjoy this post. Let me know any remarks you may have. Thanks for reading again.

Thursday 29 September 2016

Using ‘sp_browsereplcmds’ to diagnose SQL Server Replication issues

While diagnosing SQL Server transactional replication issues, we may need to look into pending commands inside the Distribution database. In other words, not only do we have to monitor pending commands, but we may also have to take some actions so as to keep replication working. For instance, at times we may have to remove specific commands because of errors that do not allow other commands to be moved to the subscribers. Before doing that, we firstly need to identify which commands have to be removed from the queue by using the 'sp_browsereplcmds' system stored procedure. It accepts many input parameters, like article_id:

EXEC SP_BROWSEREPLCMDS @article_id = 1

After executing it, we get only the pending commands for the article in question. (Remember that an article in replication is directly related to a table. You can query the 'sysarticles' system table inside the published database.)
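
For instance, a quick way to map an article ID back to its table is a sketch like this, run inside the published database:

SELECT artid, name, dest_table
FROM dbo.sysarticles
ORDER BY artid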


Another parameter we can use in order to get more specific information is the transaction sequence number, which is essentially the identifier of the transaction. Luckily, when reading some errors, we can see the sequence number and command ID, which allow us to identify the exact root cause we need to work on with ease.

EXEC SP_BROWSEREPLCMDS @xact_seqno_start = '0x00000027000000B50008',@xact_seqno_end = '0x00000027000000B50008' 


There are other parameters, like the command ID to get only the command we need to look into, and the publisher database ID to get all commands for that database.

EXEC SP_BROWSEREPLCMDS @xact_seqno_start = '0x00000027000000B50008',@xact_seqno_end = '0x00000027000000B50008' , @publisher_database_id = 33, @article_id = 1,@command_id= 1

Be cautious: do not execute 'sp_browsereplcmds' without any parameters on production database environments, as they can have millions of commands inside the Distribution database; as a result we will not get what we need rapidly, while at the same time affecting the database server's performance. I hope you find this post interesting when it comes to troubleshooting replication issues. Let me know any remarks you may have. Thanks for reading.

Friday 19 August 2016

Why is the .txt history detail file of SQL Backup Job not created?

Clearly, we always need to have some kind of history for SQL Backup jobs in order to check whether or not they were executed correctly. I personally use a .txt file to write all the details of each executed step so that I can use it to diagnose any problem or error behind it. Nevertheless, at times setting it up may not be so easy, as the .txt file might not be created during the SQL Backup job execution.


There are some reasons why this may happen. The first one is that the directory where the .txt file is to be written does not exist; it should have been created manually before executing the SQL Backup job. The second reason is that the backup directory is blocked. Surprisingly, you will only realise access is blocked when you try to open it. One way to get it unblocked is by simply opening the directory and then clicking on "continue" via Windows Explorer.



The third one is that the SQL Agent account does not have permissions on that directory. It must have been given Read and Write permissions on it, and there are some cases where we will need to give explicit permissions to the SQL Agent account on that directory via the CMD command tool. Undoubtedly, this would be no problem if we were working with only one directory, but what would happen if we were implementing many SQL Backup jobs? Managing them one by one would become very tedious, so in that case we need some manner of automating it and getting it done rapidly. For instance, here is a technique to achieve it:

icacls "H:\SQLBackup\FinancialDB\Full" /grant MyDomain\sqlagentAccount:(OI)(CI)F

Now, to create the script that gives permissions on all the necessary directories, we can generate the commands by using this T-SQL (based on reading the directory of each Backup Device whose name is like 'FinancialDB-Full_Database_Backup'):

DECLARE @sn NVARCHAR(128);
EXEC master.dbo.xp_regread 'HKEY_LOCAL_MACHINE', 'SYSTEM\CurrentControlSet\services\SQLSERVERAGENT', 'ObjectName', @sn OUTPUT;
SELECT 'icacls "' + SUBSTRING(physical_name, 1, LEN(physical_name) - CHARINDEX('\', REVERSE(physical_name))) + '" /grant ' + @sn + ':(OI)(CI)F'
FROM sys.backup_devices
WHERE name LIKE '%Full_Database_Backup%' -- e.g. 'FinancialDB-Full_Database_Backup'
ORDER BY name

All the output commands must be executed in the CMD tool (as Administrator), and finally, after executing the SQL Backup job, we will verify that the .txt file was created:



And the history details are inside:


I hope this tip is useful and practical for you. Let me know any remarks you may have. Thanks for reading!

Thursday 21 July 2016

The 'SkipErrors' parameter for the Replication Distribution Agent

Having seen many errors in SQL Server Replication, I can say that the vast majority of them are reported at the replication agent level: the Distribution Agent, Log Reader Agent, Snapshot Agent, Queue Reader Agent, and so on. Unfortunately, many of them are related to Primary and Foreign Key conflicts, and solving them either one by one or by reconfiguring the replication may take much time; therefore, we need an option that allows the operation to continue while we work on fixing them as soon as possible. Luckily, this option is the 'SkipErrors' parameter, which I have used many times to deal with them. Likewise, you can use it for skipping other errors.
Today's post will show how to use the 'SkipErrors' parameter, which allows specific errors to be skipped so that the data synchronisation process is not stopped. This parameter is configurable in the profile of the Distribution Agent and takes as input the number(s) of the error(s) we want to skip.
The following picture shows an error (with code 547, related to a Foreign Key issue) in the Distribution Agent process, and we can see how the transactions are queuing up behind this error; consequently, there is a need to fix it so as to allow the rest of the pending transactions to move on. (The Distribution Agent reads the table 'msrepl_commands' sequentially to get the commands to execute in the subscribers, which means the first command into the queue is the first one out to be moved to the subscribers.)



Another common error where you can use the 'SkipErrors' parameter is when some rows do not exist in the subscriber to apply the changes to ('The row was not found at the Subscriber when applying the replicated command'). The error code for this case is 20598.
 

I mentioned before that the 'SkipErrors' parameter is configurable inside the Distribution Agent profile, and that is what we are going to do right now. Firstly, we need to create a customised profile based on the Default profile and write in the 'Value' column the numbers of the errors to be skipped, separated by colons, as we can see in the following picture: 20598:547.
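
Once the customised profile has been saved, we can double-check its parameters from the distributor with a query like the following sketch (agent_type = 3 identifies Distribution Agent profiles):

SELECT p.profile_name, par.parameter_name, par.value
FROM msdb.dbo.MSagent_profiles AS p
JOIN msdb.dbo.MSagent_parameters AS par ON par.profile_id = p.profile_id
WHERE p.agent_type = 3 -- Distribution Agent
ORDER BY p.profile_name, par.parameter_name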



Having done that, we may have to restart the Distribution Agent. The next time the Distribution Agent starts up, it will load the new customised profile with the error codes to be skipped (ignored). Finally, we will verify not only that the errors are being skipped, but also that the Distribution Agent is running with no problem.
 

Just to finish this post, keep in mind that the transactions hitting these errors are lost, which means that you will not be able to recover them, and this may affect your data consistency. In other words, we must use the 'SkipErrors' parameter with extreme caution. Let me know any remarks you may have. Thanks for reading.

Wednesday 20 July 2016

How to query the windows users that access via windows groups from SQL Server

At times, while monitoring a database server, we may need to know not only which SQL users are accessing it, but also which Windows users, for security and auditing purposes. Frankly, this task is quite simple to complete when all of those users are created and visible inside the database engine. Despite this, it is no surprise that Windows users can access via Windows groups, which means that DBAs cannot see them from SQL Server easily unless we have a manner of listing them. Luckily, there is an extended stored procedure named "xp_logininfo" that we can use to get that information. ('xp_logininfo' asks Active Directory for the Windows users.)
First of all, this system stored procedure takes two input parameters. The first one is the Windows group name, and the second one is the value 'members'. For instance, this lists the Windows users that access via the Windows group 'MyDomain\SQLProdUsrs'. (You need the "sysadmin" role to execute it.)
EXEC xp_logininfo  @acctname ='MyDomain\SQLProdUsrs',  @option='members' 
Naturally, this is perfect if we only have one Windows group to query. As time goes by, more Windows groups are likely to be created inside the database engine, and in no time we will need a specific script to figure out the rest of the Windows users. Here is that code.
SET NOCOUNT ON

CREATE TABLE #WindowGroup(
    server_name varchar(100),
    account_name varchar(300),
    type char(8),
    privilege char(9),
    mapped_login_name varchar(300),
    permission_path varchar(300)
)

DECLARE @WindowGroupName varchar(max)
DECLARE cursor_WG CURSOR FOR SELECT [name] FROM sys.server_principals WHERE [type]='G'

OPEN cursor_WG
FETCH NEXT FROM cursor_WG INTO @WindowGroupName
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT #WindowGroup(account_name, type, privilege, mapped_login_name, permission_path)
    EXEC xp_logininfo @acctname = @WindowGroupName, @option = 'members'
    FETCH NEXT FROM cursor_WG INTO @WindowGroupName
END
CLOSE cursor_WG
DEALLOCATE cursor_WG

UPDATE #WindowGroup SET server_name = @@SERVERNAME

SELECT server_name AS ServerName, account_name AS WindowsAccountName, type AS [Type],
       privilege AS Privilege, mapped_login_name AS MappedLoginName, permission_path AS WindowsGroupName
FROM #WindowGroup
ORDER BY permission_path, mapped_login_name

DROP TABLE #WindowGroup

SET NOCOUNT OFF


As you have seen, the T-SQL code is quite easy to understand. It filters the Windows groups by indicating the value 'G' for the type column of the system view "sys.server_principals", and then the cursor reads each of them. I hope you make the most out of this script to get better visibility of the Windows users accessing your databases. Thanks for reading again!

Saturday 16 July 2016

SQL Mirroring Port could stop working unexpectedly

When it comes to diagnosing and fixing some SQL Mirroring issues, we may have to face bizarre causes that can stop a database mirroring session from working, leaving it in the "disconnected" state. For instance, one of the most common problems is that the SQL mirroring port may stop working unexpectedly; consequently, the whole database mirroring functionality will stop working too. We basically configure SQL Server mirroring to use port 5022 by default, which works well enough for the vast majority of cases. To be perfectly honest, there is no urgent need to change it, since the issue is not about the port number (or the mirroring configuration) but the port itself.

While working on fixing the issue, we may think the database mirroring port has stopped working or been blocked for some security reason which we cannot pin down accurately at the beginning. Nevertheless, the reason should be investigated further by the Network team so as to prevent the same issue from happening again in the future. With the situation clear, I am going to show one way to get database mirroring working again. To begin with, we need to figure out the current port number being used by mirroring.
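
A simple way to check it is to query the mirroring endpoint, for instance with this sketch:

SELECT name, protocol_desc, port, state_desc
FROM sys.tcp_endpoints
WHERE type_desc = 'DATABASE_MIRRORING'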



We can see that the mirroring port is 5022, so we can opt to change it.

ALTER ENDPOINT [Mirroring] AS TCP (LISTENER_PORT = 5023)

Only after successfully completing that will we have to restart the database engine. Finally, we need to restore the original port and then restart the database engine again.

ALTER ENDPOINT [Mirroring] AS TCP (LISTENER_PORT = 5022)

Having done that, we will see the database mirroring working properly. That is all for now. I hope this tip also works for you. Let me know any remarks you may have. Thanks for reading.