Sunday 25 June 2017

Getting the full name of SQL Jobs including the steps in execution

While monitoring a database server, we may need to know some details about the sessions, connections and requests that might be causing performance or blocking issues so that we can take action to fix them. One very important piece of information in this task is the name of the program connected to the database engine. Broadly speaking, it is possible to see the program name by using system stored procedures or DMVs such as 'sp_who2' and 'sys.dm_exec_sessions'. Nevertheless, not every program name is easy to interpret, especially when it comes to SQL Jobs. For instance, if you detected that a SQL Job is the root of the problem and then needed to know which SQL Job it is, the program_name column of 'sp_who2' or 'sys.dm_exec_sessions' would only give you the SQL Job ID in the following format:

SQLAgent - TSQL JobStep (Job 0x2613DA812CD2D248A9BA377DE6DEF355 : Step 1)

Obviously, we cannot do much with that information because there is no SQL Job name and, even worse, no SQL Job step name. However, we can figure out the name of the SQL Job in msdb.dbo.sysjobs by using that ID.

SELECT * FROM msdb.dbo.sysjobs WHERE job_id=0x2613DA812CD2D248A9BA377DE6DEF355

Although it may be relatively easy to get the name of the SQL Job this way, it is not enough because it is of paramount importance to also know the name of the step in execution, and doing this manually every time it is needed is going to be an uphill battle, especially if there are many SQL Jobs running and causing struggles. With this situation in mind, I created a script to automate the task of figuring out these SQL Job details, that is, the job name and the name of the step currently running. To be more precise, this logic is inside a function called "ufn_GetJobStepNameDesc" that takes the value of the "program_name" column and returns the name of the SQL Job and the step in execution. Let's take a look at the following code, in which we also filter for the sessions used by SQL Jobs.

SELECT session_id, login_time, login_name, [status], writes,
       logical_reads, [language], DB_NAME(database_id) AS DatabaseName,
       dbo.ufn_GetJobStepNameDesc([program_name]) AS SQLJobDescription
FROM sys.dm_exec_sessions
WHERE [program_name] LIKE 'SQLAgent - TSQL%'

As you can see, the query uses the function and also includes other important columns to look at as part of monitoring. Using dbo.ufn_GetJobStepNameDesc([program_name]), the final outcome looks like this: SQLAgent - TSQL JobStep "<Name of the job step>" (Job: <Job name>). For instance: SQLAgent - TSQL JobStep "Updating_Accounts" (Job: SAP_Financial_Process)
Here I share with you my script so that you can check it thoroughly and then make the most out of it.

USE [master]
GO
CREATE FUNCTION [dbo].[ufn_GetJobStepNameDesc] (@step_name_desc VARCHAR(MAX))
RETURNS VARCHAR(MAX)
AS
BEGIN
       DECLARE @full_step_name_desc VARCHAR(MAX)
       DECLARE @jobstep_id_start INT
       DECLARE @jobstep_id_len INT
       DECLARE @jobstep_id INT

       -- Extract the step number that follows ': Step' and precedes the closing parenthesis
       SELECT @jobstep_id_start = CHARINDEX(': Step', @step_name_desc) + 7,
              @jobstep_id_len   = CHARINDEX(')', @step_name_desc) - @jobstep_id_start
       SET @jobstep_id = CAST(SUBSTRING(@step_name_desc, @jobstep_id_start, @jobstep_id_len) AS INT)

       DECLARE @job_id_start INT
       DECLARE @job_id_len INT
       DECLARE @hexa_job_id VARBINARY(MAX)

       -- Extract the hexadecimal job ID that follows '(Job ' and precedes the colon
       SELECT @job_id_start = CHARINDEX('(Job 0', @step_name_desc) + 5,
              @job_id_len   = CHARINDEX(':', @step_name_desc) - @job_id_start
       SET @hexa_job_id = CONVERT(VARBINARY, RTRIM(LTRIM(SUBSTRING(@step_name_desc, @job_id_start, @job_id_len))), 1)

       -- Look up the job and step names in msdb using the extracted IDs
       SELECT @full_step_name_desc = 'SQLAgent - TSQL JobStep "' + step_name + '" (Job: ' + j.[name] + ')'
       FROM msdb.dbo.sysjobsteps js
            INNER JOIN msdb.dbo.sysjobs j ON j.job_id = js.job_id
       WHERE js.step_id = @jobstep_id AND j.job_id = CAST(@hexa_job_id AS UNIQUEIDENTIFIER)

       RETURN (@full_step_name_desc)
END
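
To illustrate, here is a hypothetical call using the sample program name from earlier; the job ID shown is illustrative, so the function will return NULL unless that ID actually exists in your msdb:

SELECT dbo.ufn_GetJobStepNameDesc('SQLAgent - TSQL JobStep (Job 0x2613DA812CD2D248A9BA377DE6DEF355 : Step 1)') AS SQLJobDescription
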
That is all for now, I hope you find this post helpful. Let me know any remarks you may have. Stay tuned.

Monday 19 June 2017

Error: could not obtain information about Windows NT group/user

Without any doubt, at times there is a need to use extended stored procedures in SQL Server. For instance, the following error may be raised while using 'xp_logininfo' to get information about the domain account 'MyDomain\MyAccount' (the account you are logged in with in SQL Server). The error may also appear when a SQL Job, whose owner is a Windows account, tries to authenticate against Active Directory (AD) and this validation fails because of security settings between SQL Server and the AD.

Error: 15404, State: 19. Could not obtain information about Windows NT group/user ‘MyDomain\MyAccount’, error code 0x5.
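
For reference, this is the sort of call that can raise the error above (the account name is just an example):

EXEC xp_logininfo 'MyDomain\MyAccount'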

In order to solve this error, your Network Administrator has to enable the ‘Allowed to authenticate’ security setting on the domain controller's computer object for the account 'MyDomain\MyAccount' in the domain 'MyDomain' by following these steps:
  1. Log on to the Domain Controller of domain 'MyDomain'
  2. Open Active Directory Users and Computers (dsa.msc)
  3. Enable ‘Advanced Features’ under the ‘View’ menu
  4. Navigate to the domain controller's computer object and open the property window
  5. Click on the security tab
  6. Add the SQL Service account 'MyDomain\MyAccount' and enable the setting ‘Allowed to authenticate’
  7. Click OK to close the window
  8. Repeat steps 4-7 on each Domain Controller computer object
Having done that, 'xp_logininfo' will run successfully, bringing back the information from Active Directory. That is all for now, let me know any remarks you may have. Thanks for reading. Stay tuned.

Friday 2 June 2017

sys.dm_exec_requests: total_elapsed_time column might return inconsistent data

While developing useful scripts for administration purposes, I found a bug in the data collected by the 'sys.dm_exec_requests' dynamic management view on SQL Server 2008 R2 SP2 while analysing the total elapsed time of a particular SQL query. Let me expand on that. I detected the error in the 17th second of the execution of the request with 'session_id' 63 (see the picture). Following the sequence of results, I need to draw your attention to the second query and its second column, where the total elapsed time according to 'sys.dm_exec_requests' should be 17 seconds and not 938, since the previous reading was 16. Now, checking the value of the third column (calculated by subtracting the 'start_time' value from the GETDATE() function), you will verify that this time is accurate whereas the second one is wrong. The values of the second and third columns should be the same, but they are different.
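
This is, roughly, the check I was running; a minimal sketch that compares the DMV value (reported in milliseconds) against the elapsed time calculated from 'start_time':

SELECT session_id,
       total_elapsed_time/1000 AS [total_elapsed_time_sec],           -- as reported by the DMV
       DATEDIFF(SECOND, start_time, GETDATE()) AS [elapsed_time_calc] -- calculated from start_time
FROM sys.dm_exec_requests
WHERE session_id = 63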



To sum up, in this particular and real case, the 'total_elapsed_time' column returns inconsistent information about the elapsed execution time of a process once it exceeds 16 seconds. Despite the fact that I have not seen the same issue in other versions such as SQL Server 2012/2014/2016/2017, it is better not to trust DMVs blindly. Therefore, I suggest working with caution. That is all for now, let me know any remarks you may have. Stay tuned.

Friday 26 May 2017

How to know which non-clustered indexes are not being used any longer

It is said that indexes are extremely useful for boosting the performance of databases, especially indexes on huge tables. I am in agreement with that, and I also support the idea of having the right indexes to achieve the best performance, as long as they are created and used properly. Consequently, we have to be vigilant about them: indexes may have been created to solve a particular problem at a particular moment, but that does not mean they will be useful forever; they may only help for some days or months while the data does not change too much. So, I should also say that indexes are not the be-all and end-all for every business case, and we must always monitor index usage and consider dropping the ones we verify are not being used any longer. Keep in mind that not only can unused indexes be big and occupy a lot of space, they also impact write operations negatively. Therefore, we should avoid wasting space, and in doing so we can also optimise preventive index maintenance time.

This post shows a script to find out exactly which non-clustered indexes have not been used since the last restart of the SQL Server engine. The code also filters out indexes related to primary keys and unique constraints. As you will see in the code, we identify the indexes by checking the values of the columns user_seeks, user_scans and user_lookups: if they are all zero, the index has not been used up to this moment. Please be cautious before using this code to decide which indexes to delete; you should make sure that the database engine has not been restarted recently, otherwise you may delete the wrong indexes and impact performance tremendously. I strongly suggest using this code only if the latest restart of SQL Server took place three months ago or more.
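
Before acting on the results, it is worth confirming when the usage counters were last reset; a quick check:

SELECT sqlserver_start_time FROM sys.dm_os_sys_info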

SELECT  i.type_desc, last_user_seek, last_user_scan, last_user_lookup,
        OBJECT_NAME(s.[object_id]) AS [Table Name] ,
        i.name AS [Index Name] ,
        i.index_id ,
        user_updates AS [Total Writes] ,
        user_seeks + user_scans + user_lookups AS [Total Reads] ,
        user_updates - ( user_seeks + user_scans + user_lookups ) AS [Difference] ,
        'DROP INDEX [' + i.name + '] ON [' + SCHEMA_NAME(o.[schema_id]) + '].[' + OBJECT_NAME(s.[object_id]) + '];' AS Statement_Drop_Index ,
        i.is_unique, i.is_unique_constraint, i.is_primary_key
FROM    sys.dm_db_index_usage_stats AS s WITH ( NOLOCK )
INNER JOIN sys.indexes AS i WITH ( NOLOCK )
        ON s.[object_id] = i.[object_id] AND i.index_id = s.index_id
INNER JOIN sys.objects AS o WITH ( NOLOCK )
        ON o.[object_id] = i.[object_id]
WHERE OBJECTPROPERTY(s.[object_id], 'IsUserTable') = 1   -- user tables only
      AND s.database_id = DB_ID()                        -- current database only
      AND i.index_id > 1                                 -- non-clustered only (0 = heap, 1 = clustered)
      AND i.is_primary_key = 0 AND i.is_unique = 0 AND i.is_unique_constraint = 0
      AND ISNULL(user_seeks, 0) + ISNULL(user_scans, 0) + ISNULL(user_lookups, 0) = 0  -- never read
ORDER BY OBJECT_NAME(s.[object_id]), [Difference] DESC, [Total Writes] DESC, [Total Reads] ASC ;

When you execute it, you will also get a column containing the T-SQL code to drop each index that has not been used since the latest restart of SQL Server, which you can then run to delete them. To sum up, do not get me wrong: I am not against using indexes; on the contrary, I am in favour of using them properly, and if there are indexes that have not been used for a long time then they should be deleted. That's all for now. Let me know any remarks you may have. Thanks for reading.

Thursday 11 May 2017

ObjectExplorer error: Dedicated administrator connections are not supported

The Dedicated Administrator Connection (DAC) is a type of privileged connection that we can establish to the SQL Server database engine when it does not respond because of too much workload. In this context, the DAC is truly useful as it allows us to diagnose and solve this sort of problem without having to restart the database server. It is said that it is not possible to establish a DAC connection via the Object Explorer panel in SSMS, and indeed we get the following error when trying to do it that way:

   Cannot connect to admin:<SQLInstanceFullName>.
   Dedicated administrator connections are not supported. (ObjectExplorer)




That is true up to a point, but it does not mean that SSMS does not support DAC connections. So, the question is: can we ONLY establish a DAC connection to SQL Server by using SQLCMD? The answer is 'NO'. We can also do it via SSMS by creating a new SQL Query, which means that instead of clicking on 'Object Explorer' and 'Connect', we must click on 'New Query' and then write 'admin:<Full Name of SQL Instance>' in the 'Server Name' text box. That is all, it couldn't be simpler.
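
For completeness, here is a minimal sketch of the SQLCMD alternative mentioned above (the instance name is illustrative); the -A flag requests the dedicated administrator connection:

sqlcmd -S MyServer\MyInstance -A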



Having done that, we will have a DAC connection ready via SSMS. Let me know any remarks you may have. Thanks for reading. Stay tuned.

Sunday 30 April 2017

Converting Unix Timestamp into SQL Server DateTime

It is well known that developing and integrating applications based on different technologies may end up being a huge challenge, especially when it comes to dealing with data stored in diverse formats, such as Unix Timestamp and SQL Server DateTime. This post intends to show you a method in SQL Server to convert a Unix Timestamp into Datetime. Here it is:

CREATE FUNCTION [dbo].[UNIX_TIMESTAMP_TO_DATETIME] (
@timestamp INT
)
RETURNS DATETIME
AS
BEGIN
  DECLARE @return DATETIME
  -- Note: 18000 seconds (5 hours) are subtracted to shift the result from UTC to UTC-5;
  -- remove or adjust this line if you need the result in UTC or another time zone
  SET @timestamp = @timestamp - 18000
  -- A Unix timestamp counts the seconds elapsed since the epoch: 1970-01-01 00:00:00 UTC
  SELECT @return = DATEADD(second, @timestamp, {d '1970-01-01'});

  RETURN @return
END
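
For instance, 1498003200 corresponds to 2017-06-21 00:00:00 UTC, so with the hard-coded five-hour adjustment the call below returns 2017-06-20 19:00:00:

SELECT dbo.UNIX_TIMESTAMP_TO_DATETIME(1498003200) AS converted_datetime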

The function takes a Unix Timestamp value as a parameter and gives you the equivalent value in the SQL Server Datetime format. I know it might not be the be-all and end-all; nevertheless, I am pretty sure it will be of use to many people. I hope you make the most out of it. That's all for now. Let me know any remarks you may have. Thanks for reading. Stay tuned.

Wednesday 12 April 2017

Checking Disk Space usage and Database Files usage per partition disk

When it comes to monitoring and managing disk space, we may have some graphical tools to help us complete these tasks rapidly; nevertheless, not everybody can afford to pay for those tools, and most of them do not give some of the details we may need on the spur of the moment. For example, we may need to check how much disk space is occupied exclusively by database files so that a wise decision can be made to ensure the disk space of a database server is being used properly. We may need to see a report like this one.



In the report, all disk partitions are displayed; for each of them there is the total space used by all database files, data and log (db_file_size_on_disk_gb), including the equivalent percentage (db_file_size_on_disk_pct). Thus, we will be able to know whether or not the disk space is being used properly. Furthermore, the disk capacity and the available disk space, in gigabytes and percentage, can also be seen (disk_capacity_gb, disk_space_available_gb, disk_space_available_pct).

Now it is high time to show the code to get that report containing that information:

CREATE PROC [dbo].[sp_DBA_mon_db_files_size_on_disk] (@volume_mount_point CHAR(1) = NULL)
/*WITH ENCRYPTION*/
AS
BEGIN
  IF (@volume_mount_point IS NOT NULL)
  BEGIN
    -- Detail per database file for the given drive letter
    SELECT vs.volume_mount_point,
           vs.logical_volume_name,
           DB_NAME(f.database_id) AS database_name, f.type_desc,
           vs.total_bytes/1024/1024/1024.0 AS [disk_capacity_gb],
           CAST(f.[size]/128/1024.0 AS DECIMAL(10,2)) AS [db_file_size_on_disk_gb],   -- f.[size] is in 8 KB pages
           CAST((f.[size]*100/128/1024.0) / (vs.total_bytes/1024/1024/1024.0) AS DECIMAL(10,2)) AS [db_file_size_on_disk_pct],
           CAST(vs.available_bytes/1024/1024/1024.0 AS DECIMAL(10,2)) AS [disk_space_available_gb],
           CAST((vs.available_bytes*100.0/vs.total_bytes) AS DECIMAL(10,2)) AS [disk_space_available_pct],
           f.physical_name,
           vs.file_system_type
    FROM sys.master_files AS f
    CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.file_id) AS vs
    WHERE f.database_id < 32767    -- exclude the resource database
          AND vs.volume_mount_point = @volume_mount_point + ':\'
    ORDER BY f.[size] DESC
  END
  ELSE
  BEGIN
    -- Summary per disk partition across all database files
    SELECT vs.volume_mount_point,
           vs.logical_volume_name,
           vs.total_bytes/1024/1024/1024.0 AS [disk_capacity_gb],
           CAST(SUM(f.[size]/128/1024.0) AS DECIMAL(10,2)) AS [db_file_size_on_disk_gb],
           CAST((SUM(f.[size]/128/1024.0)*100) / (vs.total_bytes/1024/1024/1024.0) AS DECIMAL(10,2)) AS [db_file_size_on_disk_pct],
           CAST(vs.available_bytes/1024/1024/1024.0 AS DECIMAL(10,2)) AS [disk_space_available_gb],
           CAST((vs.available_bytes*100.0/vs.total_bytes) AS DECIMAL(10,2)) AS [disk_space_available_pct],
           vs.file_system_type
    FROM sys.master_files AS f
    CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.file_id) AS vs
    WHERE f.database_id < 32767
    GROUP BY vs.volume_mount_point, vs.logical_volume_name, vs.total_bytes, vs.file_system_type, vs.available_bytes
  END
END
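
Two example calls, assuming the procedure has been created and a D drive exists:

EXEC dbo.sp_DBA_mon_db_files_size_on_disk                             -- summary per disk partition
EXEC dbo.sp_DBA_mon_db_files_size_on_disk @volume_mount_point = 'D'   -- per-file detail for drive D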
    

The procedure can take the letter of a disk partition as a parameter, or the parameter can be omitted. As you can see in the definition of the stored procedure, the default value of @volume_mount_point is NULL, which means, according to its logic, that the general information will be displayed, that is, the report in the picture; otherwise, giving the letter of a partition will get you detailed information for every database file located on that disk partition. That is all for now. I hope it is useful and practical for you. Let me know any remarks you may have. Thanks for reading. Stay tuned.

Wednesday 5 April 2017

Looking into Modern_Spanish and Latin1_General

The collations Modern_Spanish and Latin1_General are both Windows collations and support the same character set (Code Page 1252), but there are some differences between them in how certain characters are treated for sorting and comparison. Some of the differences are especially noticeable when an Accent-Insensitive collation is used: in Modern_Spanish, 'n' and 'ñ' are considered different characters, whereas in Latin1_General they are considered the same character with and without an accent.

Today I am going to show an example to make this clear so that you can take it on board and pre-empt unforeseen results or collation conflicts. This example is based on Modern_Spanish_CI_AI and Latin1_General_CI_AI.

create table TempTable (
   Text_in_Latin1  varchar(100) collate Latin1_General_CI_AI,
   Text_in_Modern varchar(100) collate Modern_Spanish_CI_AI
)
go
insert into TempTable values ('Español', 'Español')
insert into TempTable values ('Espanol', 'Espanol')
insert into TempTable values ('Espanól', 'Espanól')
go
select Text_in_Latin1 from TempTable where Text_in_Latin1='Espanol'
go
select Text_in_Modern from TempTable where Text_in_Modern='Espanol'
go
drop table TempTable

Text_in_Latin1
--------------------------------------
Español
Espanol
Espanól

Text_in_Modern
--------------------------------------
Espanol
Espanól

As you can see, the first result demonstrates that Español, Espanol and Espanól are treated as the same word under the Accent-Insensitive Latin1_General collation, whereas in the second one Español and Espanol are treated as different words. It is worth noting that you must only change a collation if you really understand the behaviour change. For example, one might think that Latin1_General_CI_AS (a Windows collation) and SQL_Latin1_General_CP1_CI_AS (a SQL collation) are the same, but they are not: there is a slight difference in the sort behaviour between the two for specific special characters like '-'. Here is another example about it.

create table TempTable (
   Text_in_Latin1  varchar(100) collate Latin1_General_CI_AS,
   Text_in_SQL_Latin1 varchar(100) collate SQL_Latin1_General_CP1_CI_AS
)
go
insert into TempTable values ('es-pe', 'es-pe')
insert into TempTable values ('espe', 'espe')
go
select Text_in_Latin1 from TempTable order by Text_in_Latin1
go
select Text_in_SQL_Latin1 from TempTable order by Text_in_SQL_Latin1
go
drop table TempTable

Text_in_Latin1
---------------------------------------
espe
es-pe

Text_in_SQL_Latin1
---------------------------------------
es-pe
espe

To be honest, I personally like having the same collation for all databases, but any change has to be done carefully and, more importantly, only if you really want that change. That is all for now, let me know any remarks you may have.
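
As a final aside, before planning any collation change you can review what is currently in place; this simple check reports the server-level and current database collations:

SELECT SERVERPROPERTY('Collation') AS server_collation,
       DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS database_collation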