Tuesday 15 August 2017

Altering SQL Jobs without granting SysAdmin privilege

What would you do if you were asked to grant a few users permission to alter SQL Jobs? It is a trickier task than it looks. Many of us would think that granting the SQLAgentOperatorRole role is enough; however, when I did that many years ago I quickly realised it did not work as expected, and I suspect most of us have run into the same issue at first.
The SQLAgentOperatorRole role allows a user to alter a SQL job only as long as the user owns that job; otherwise, the user needs to be a sysadmin at the SQL instance level. What's more, one of the best security practices says that nobody but the DBA should be a sysadmin, and that Windows Authentication should be used. Nevertheless, when it comes to job ownership, SQL jobs should be owned by 'sa', which does not mean that account has to be enabled; it should remain disabled. In other words, it is highly advisable to keep the disabled 'sa' account as the owner of all SQL jobs and avoid granting the sysadmin privilege. It is also important not to use a Windows user as the owner of a SQL job, because SQL Server will always validate Windows users against Active Directory, and unforeseen errors are likely during that process.
Today I am going to share a stored procedure that enables users to alter SQL jobs without being granted the sysadmin privilege. It contains logic that allows a specific user to temporarily take ownership of a SQL job so that the user can alter it, and after making the changes the user can give ownership back to 'sa' (or to the original owner). Here is the stored procedure:

USE [msdb]
GO
CREATE PROC [dbo].[usp_change_owner_job] @jobname VARCHAR(MAX), @newowner VARCHAR(MAX)
WITH EXECUTE AS OWNER
AS
BEGIN
    -- Get the current owner of the job
    DECLARE @old_owner VARCHAR(MAX)
    SELECT @old_owner = s.name
    FROM msdb.dbo.sysjobs j INNER JOIN sys.server_principals s ON j.owner_sid = s.sid
    WHERE j.name = @jobname

    -- Change the owner of the job to the requested login
    EXEC msdb.dbo.sp_update_job @job_name = @jobname, @owner_login_name = @newowner

    -- Report what was changed
    DECLARE @msg VARCHAR(MAX)
    SET @msg = 'The owner of ' + @jobname + ' job was changed from ''' + @old_owner + ''' to ''' + @newowner + ''''
    PRINT @msg
END

So, for instance, if you have a SQL job owned by 'sa' and your user is 'User2', you will not be able to alter the job until you temporarily take ownership of it. Here are all the steps you must follow:

1. Create the stored procedure 'usp_change_owner_job' based on the code above.
2. Grant EXECUTE permission on 'usp_change_owner_job' to 'User2'.
3. Add 'User2' to the SQLAgentOperatorRole role in msdb (a T-SQL sketch for steps 2 and 3 is shown right after step 4's output).
4. Change the ownership of the SQL Job to 'User2' by using the stored procedure 'usp_change_owner_job'.

EXEC msdb.dbo.[usp_change_owner_job] 'BusinessJob01', 'User2'

The owner of BusinessJob01 job was changed from 'sa' to 'User2'
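
By the way, the permissions from steps 2 and 3 can be granted with T-SQL similar to the following. This is only a minimal sketch, assuming 'User2' already exists as a login and as a user in the msdb database:

USE [msdb]
GO
-- Step 2: allow User2 to execute the stored procedure
GRANT EXECUTE ON [dbo].[usp_change_owner_job] TO [User2]
GO
-- Step 3: add User2 to the SQLAgentOperatorRole role in msdb (ALTER ROLE works from SQL Server 2012 onwards)
ALTER ROLE [SQLAgentOperatorRole] ADD MEMBER [User2]
GO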

5. 'User2' now has ownership of the SQL job and is able to alter it.
6. After making the changes to the SQL job, 'User2' must change the ownership back to 'sa'.
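
For instance, reusing the same stored procedure (the job name 'BusinessJob01' is the one from step 4):

EXEC msdb.dbo.[usp_change_owner_job] 'BusinessJob01', 'sa'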

The owner of BusinessJob01 job was changed from 'User2' to 'sa'

Having successfully completed all the steps, users will be able to alter SQL jobs without being granted the sysadmin privilege. That is all for now. Let me know any remarks you may have.

Tuesday 8 August 2017

Dealing with physical database corruptions

Beyond all doubt, not every database in the world has a DBA monitoring it 24x7. Moreover, many database environments are implemented without any consideration for the minimum software and hardware requirements. This situation is compounded by the fact that many companies do not pay much attention to their databases at the beginning, perhaps because the databases start out quite small and serve few transactions per second. Unsurprisingly, databases grow over time, so it is not rocket science to foresee that things will get worse in terms of performance and physical integrity. In this context, it is critical to have everything in place to prevent databases from getting damaged; however, many times it is too late by the time one realises a database was damaged because of a poor implementation. The overwhelming majority of physical corruption issues are not caused by SQL Server itself but by faulty hardware or a poor hardware implementation. Thus, if you find yourself working with databases suffering from this problem, I wholeheartedly recommend rebuilding the whole server from scratch, taking the minimum requirements on board.
After having a good implementation of a database server, it is of paramount importance to carry out maintenance tasks at least once a month, which should include a full DBCC CHECKDB, and if possible DBCC CHECKDB WITH PHYSICAL_ONLY once a week. It is also understandable that DBCC CHECKDB makes intensive use of resources, especially CPU and disk; nevertheless, it is possible to use the MAXDOP option with DBCC CHECKDB to limit CPU usage and ease the pain. So, nowadays there is no feasible excuse for not executing it regularly.
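
For instance, a minimal sketch of limiting DBCC CHECKDB to two CPU cores could look like this (the MAXDOP option for DBCC CHECKDB is available from SQL Server 2014 SP2 and SQL Server 2016 onwards; 'MyEnterpriseDB' is just an example database name):

DBCC CHECKDB (MyEnterpriseDB) WITH PHYSICAL_ONLY, MAXDOP = 2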

Having said that, I would like to share with you some steps to follow in order to try to fix a physical database corruption:

1. To begin with, try to back your database up.
2. Set your database to SINGLE_USER mode, and then execute DBCC CHECKDB WITH PHYSICAL_ONLY to determine whether physical corruption exists or not.

ALTER DATABASE MyEnterpriseDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
DBCC CHECKDB(MyEnterpriseDB) WITH PHYSICAL_ONLY 

If there is physical corruption, SQL Server will display some errors like these:

Msg 8909, Level 16, State 1, Line 1
Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 0 (type Unknown),
page ID (1:219356) contains an incorrect page ID in its page header.
The PageId in the page header = (0:0).
Msg 8928, Level 16, State 1, Line 1
Object ID 501576825, index ID 1, partition ID 72057594040549376,
alloc unit ID 72057594044940288 (type In-row data): Page (1:1408252) could not be processed.

3. Sometimes only non-clustered indexes are damaged, and luckily, in that case you only need to recreate those indexes. If you want to figure out which indexes are damaged so that you can recreate them, you can have a look at this article I wrote many years ago.
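
As a complement, you can also query the msdb.dbo.suspect_pages table, where SQL Server records pages that have hit errors such as 823/824, to get an idea of which pages are damaged. A quick sketch (the database name is just an example):

SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages
WHERE database_id = DB_ID('MyEnterpriseDB')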
4. However, if the clustered index or the heap structure is damaged, then you may need to set the database to EMERGENCY mode and execute DBCC CHECKDB with the REPAIR_ALLOW_DATA_LOSS option. This is not always the best option for bringing a database back to a physically consistent state, but when the clustered index or heap structure of a table is heavily damaged there is no other choice, so do it at the cost of some data loss. It is also worth noting that if you want to find out which objects are damaged so that you only run the DBCC CHECKDB repair process on them, you can check out the same article. In this example, we are going to run it against the whole database.

ALTER DATABASE MyEnterpriseDB SET EMERGENCY;
GO
DBCC CHECKDB (MyEnterpriseDB, REPAIR_ALLOW_DATA_LOSS) -- PHYSICAL_ONLY cannot be combined with repair options
GO
ALTER DATABASE MyEnterpriseDB SET ONLINE WITH ROLLBACK IMMEDIATE;
GO
ALTER DATABASE MyEnterpriseDB SET MULTI_USER WITH ROLLBACK IMMEDIATE;

5. Executing DBCC CHECKDB with the REPAIR_ALLOW_DATA_LOSS option does not always guarantee that the database will be repaired; however, many times it is enough. After getting your database repaired, try to get to the bottom of the problem and fix it at the hardware level, because it is most likely that the disk storage is not working properly. Bear in mind that the database corruption is just the tip of the iceberg, so consider DBCC CHECKDB a quick fix, not a final solution.
6. Finally, you should take a full database backup.
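
A minimal sketch of that final backup (the path and options are just an example):

BACKUP DATABASE MyEnterpriseDB
TO DISK = N'D:\SQLBackups\MyEnterpriseDB_after_repair.bak'
WITH INIT, CHECKSUM, STATS = 10;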

That is all for now, I hope you find this post helpful. Let me know any remarks you may have. Stay tuned.

Friday 4 August 2017

Troubleshooting timeout expired errors

When .NET applications work with SQL Server, unforeseen timeout errors can be raised for two main reasons: badly written SQL code (especially ad-hoc queries) and application issues. Under these circumstances, developers will generally blame SQL Server at first glance because they assume it is a timeout misconfiguration in SQL Server. In my experience troubleshooting these issues, most of them are not related to the SQL Server configuration at all. Additionally, when the application is the root cause, you will not find any timeout errors or failed login events in the SQL Server error log. What does that mean? It simply means that the application never even tried to connect to SQL Server because it was still doing work at the application level when the error was raised. Thus, the cause of the timeout error lies inside the application, at the .NET SqlClient Data Provider level (see the CommandTimeout property configured for the application's commands). The 'CommandTimeout' property specifies the number of seconds that the provider should wait for a result set before raising a timeout error. It is well known that the default is 30 seconds, which may not be enough when the application has performance problems. These are typical errors:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
In order to get it fixed, developers should first check the .NET code within the applications. In some cases, the developers I worked with found infinite loops or slow code paths that spent too much time before sending SQL queries to the database engine; as a result, the timeout threshold was reached at the application level and, obviously, the error was raised. By the way, the following error could also be related to a timeout:
Unable to connect to SQL Server session database.
Having checked the application, you can increase the 'CommandTimeout' property value to 60 seconds (if that is not enough, then change it to 120 or 180, and also consider using 0, which means unlimited, as a workaround while the problem is being traced and fixed). That is all for now, let me know any remarks you may have. Thanks for reading. Stay tuned.

Tuesday 1 August 2017

Updating so-called 'column statistics' for the whole SQL Server database

Definitely, statistics in SQL Server are vital to ensure stable database performance as a whole. Every DBA must know the importance of keeping statistics up to date; however, not everyone who is in charge of a database knows much about them, especially those who do not officially have the role or are DBAs by accident. So, I will just mention that statistics are objects that contain information about data distribution, density, selectivity, cardinality, etc. for columns that have been queried at least once or that are linked to indexes. This information helps the query optimizer decide which indexes and resources should be used to execute and process queries as fast as possible. Put differently, statistics allow the query optimizer to create the best possible execution plan to run queries efficiently; otherwise, when statistics are out of date, bad execution plans may be created that make inefficient use of resources, bringing down the performance of the whole database and, even worse, of the whole server. So, it is a basic task to routinely update statistics at least once a month as part of your database maintenance tasks.
It is worth noting that statistics linked to indexes (so-called 'index statistics') are fully updated automatically when the indexes are rebuilt, so those statistics do not need an explicit update. Nevertheless, statistics linked to columns (so-called 'column statistics') still need an explicit update, and this is what we have to do after rebuilding all the indexes. With that in mind, today I am going to share a script that only updates column statistics:

SET NOCOUNT ON
DECLARE @schema_name VARCHAR(MAX)
DECLARE @table_name VARCHAR(MAX)
DECLARE @stat_name VARCHAR(MAX)
DECLARE @update_stat_cmd VARCHAR(MAX)
DECLARE @update_stat_msg_header VARCHAR(MAX)

-- Cursor over every statistic on user tables that is not linked to an index ('column statistics')
DECLARE update_stat_cursor CURSOR FOR
    SELECT SCHEMA_NAME(o.[schema_id]), OBJECT_NAME(s1.[object_id]), s1.name
    FROM (
        SELECT s.[object_id], s.name
        FROM sys.stats s
        LEFT JOIN sys.indexes i ON s.name = i.name
        WHERE i.name IS NULL) s1
    INNER JOIN sys.objects o ON o.[object_id] = s1.[object_id]
    WHERE o.type = 'U'
    ORDER BY SCHEMA_NAME(o.[schema_id]), OBJECT_NAME(s1.[object_id]), s1.name

OPEN update_stat_cursor
FETCH NEXT FROM update_stat_cursor INTO @schema_name, @table_name, @stat_name
WHILE (@@FETCH_STATUS = 0)
BEGIN
    DECLARE @ini DATETIME, @fin DATETIME
    SET @update_stat_msg_header = '->Updating [' + RTRIM(@schema_name) + '].[' + RTRIM(@table_name) + ']([' + @stat_name + '])'
    PRINT @update_stat_msg_header

    -- Build and run the UPDATE STATISTICS command for the current column statistic
    SET @update_stat_cmd = 'UPDATE STATISTICS [' + RTRIM(@schema_name) + '].[' + RTRIM(@table_name) + ']([' + @stat_name + '])'
    SET @ini = GETDATE()
    EXEC (@update_stat_cmd)
    SET @fin = GETDATE()
    PRINT '   Elapsed time (ms): ' + CAST(DATEDIFF(MILLISECOND, @ini, @fin) AS VARCHAR(20))

    FETCH NEXT FROM update_stat_cursor INTO @schema_name, @table_name, @stat_name
END

PRINT ' '
PRINT '----------------------------------------------------------------------------- '
SET @update_stat_msg_header = '*************  THERE ARE NO MORE STATISTICS TO BE UPDATED **************'
PRINT @update_stat_msg_header
PRINT ' '
PRINT 'All statistics not linked to any index were updated!'

CLOSE update_stat_cursor
DEALLOCATE update_stat_cursor
SET NOCOUNT OFF
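
As an aside, the script above updates each statistic with the default sampling rate. If you ever need to update one specific column statistic manually with a full scan, the command would look roughly like this (the table and statistic names here are hypothetical):

UPDATE STATISTICS [dbo].[SalesOrder] ([stats_CustomerID]) WITH FULLSCAN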

That is all for now. Thanks for reading. Let me know any remarks you may have. Stay tuned.

Tuesday 25 July 2017

Testing database connectivity by using a Universal Data Link file

After installing a SQL instance, we may need to test the database connectivity from a client host to make sure that everything is working well; for instance, sometimes Windows Firewall might block access to the service or there might be network issues. Moreover, in the likely event that there was no time to install SQL Server client tools such as SSMS or SQLCMD to carry out the test, you would be a bit surprised to learn that there is a simpler way to do it: via a Universal Data Link (.udl) file. Consequently, in this post I am going to show you how to create and use a Data Link file to test connectivity to a SQL instance. To begin with, open Notepad to create an empty .txt file and save it with the .udl extension, as you can see in the following picture.
After doing that, you must open the .udl file and you will then see the following window with four tabs. The second tab, "Connection", is where you fill in the server name (or SQL instance name) and the credentials accordingly. For instance, I am testing the connectivity to a default SQL instance using Windows Authentication. You must modify that to serve your needs.
In the first tab, "Provider", we can choose the provider to use in the test. By default, it is "Microsoft OLE DB Provider for SQL Server", and it is also a good way to test other providers. It is worth noting that SQL Native Client providers will only be listed if the SQL client tools are installed locally on the host from which the test is done. In the "Advanced" tab it is possible to set the timeout value, whereas the "All" tab shows a summary of all the settings, where we can also edit the values of some important connection parameters such as "Language", "Connect Timeout", "Packet Size", "Data Source" and "Initial Catalog".

Finally, we just have to click on "Test Connection" in the "Connection" tab to proceed with the test. If the connectivity to the SQL instance is OK, you will see the message "Test connection succeeded". It couldn't be simpler!
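
For reference, a .udl file is just a small text file; once you save the settings through the dialog, its content is an OLE DB connection string that looks roughly like this (the server name and initial catalog below are only examples):

[oledb]
; Everything after this line is an OLE DB initstring
Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=master;Data Source=MYSERVER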
That is all for now. I hope you find this post helpful and practical. Let me know any remarks you may have. Stay tuned.

Thursday 20 July 2017

Detecting excessive compilation and recompilation issues

Undoubtedly, recompilation is a big topic to reckon with, especially in database environments processing data that changes rapidly over time, compounded by ad-hoc workloads that may cause a CPU bottleneck. So, it is of paramount importance to detect excessive compilation and recompilation issues and address them to guarantee stable query performance. There are several tools that can be used to detect these issues, such as Performance Monitor, Extended Events, SQL Server Profiler traces, DMVs, etc. When it comes to Performance Monitor, we should concentrate our efforts on analysing the performance counters SQL Server: SQL Statistics: Batch Requests/sec, SQL Server: SQL Statistics: SQL Compilations/sec, and SQL Server: SQL Statistics: SQL Recompilations/sec. What's more, you can save a trace file capturing the events SP:Recompile, SQL:StmtRecompile, and CursorRecompile, and then use the following query to see all the recompilation events:

select spid, StartTime, TextData, EventSubclass, ObjectID, DatabaseID, SQLHandle 
from fn_trace_gettable('C:\RecompilationTrace_01.trc', 1)
where EventClass in (37, 75, 166) -- 37 = SP:Recompile, 75 = CursorRecompile, 166 = SQL:StmtRecompile

Furthermore, we can also capture the showplan XML for query compile, but doing that adds significant performance overhead because it is captured for each compilation or recompilation. So, only do it for a very short time, and only if you see a high value for the SQL Compilations/sec counter in Performance Monitor. Once you know where the problem is, you can use the Database Engine Tuning Advisor to see whether any indexing changes improve the compile time and the execution time of the query.
Talking of DMVs to diagnose recompilation issues, looking into 'sys.dm_exec_query_optimizer_info' is very helpful; in particular, look at the 'elapsed time' counter, which is the time elapsed due to optimizations, and also at 'final cost'. If you see that the elapsed time is very close to the CPU time, you might conclude that compilation and recompilation time is responsible for that high CPU use (a sample query against this DMV is shown right after the next code block). Another DMV to use is 'sys.dm_exec_query_stats', whose most important columns to look at are sql_handle, total_worker_time, plan_generation_num (the number of times the query has recompiled), and statement_start_offset. Here is an example to check the top 20 queries that have been recompiled most:

select top 20 SQLText.text, sql_handle, plan_generation_num, execution_count, dbid, objectid 
from sys.dm_exec_query_stats
   cross apply sys.dm_exec_sql_text(sql_handle) as SQLText
where plan_generation_num >1
order by plan_generation_num desc
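
Regarding 'sys.dm_exec_query_optimizer_info' mentioned above, a minimal query to peek at the optimizer counters could be something like this (the counter names used in the filter are the ones documented for this DMV):

select counter, occurrence, value
from sys.dm_exec_query_optimizer_info
where counter in ('optimizations', 'elapsed time', 'final cost')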

There is a plethora of recommendations to deal with recompilation and keep performance in optimum condition, but for now you can take the following options into consideration (a short T-SQL sketch of some of these query hints is shown right after this list):
  • Check whether the stored procedure was created with the WITH RECOMPILE option or whether the RECOMPILE query hint was used. If a procedure was created with the WITH RECOMPILE option, since SQL Server 2005 you may be able to take advantage of a statement-level RECOMPILE hint if only a particular statement within that procedure needs to be recompiled. Using this hint at the statement level avoids recompiling the whole procedure each time it executes, while still allowing the individual statement to be recompiled.
  • Recompilations can occur due to changes in statistics, and you can use the KEEPFIXED PLAN query hint to make recompilations occur only when there is the need to ensure correctness and not to respond to changes in statistics. For instance, in this context recompilation can only occur if the underlying table structure or its schema that is referenced by a statement changes, or if a table is marked with the sp_recompile stored procedure, all resulting in the fact that the plan no longer applies and consequently triggering the recompilation event. 
  • Using the KEEP PLAN query hint is useful to set the recompilation threshold of temporary tables to be the same as permanent tables. Take a look at the EventSubclass column which displays 'Statistics Changed' for an operation on a temporary table.
  • Turning off the automatic update of statistics for indexes and statistics defined on a table or indexed view prevents recompilations that are due to statistics changes on that object. However, turning off the auto-update statistics option is not always a good idea, because the query optimizer is no longer sensitive to data changes in those objects, which can result in suboptimal query plans. To be honest, I have never turned this option off because I prefer to trust SQL Server's judgement and work on optimising the queries instead.
  • Keep in mind that recompilation thresholds for temporary tables are lower than for normal tables, so if the recompilations on a temporary table are due to statistics changes, you can change the temporary tables to table variables. A change in the cardinality of a table variable does not cause a recompilation. The side effect of this approach is that the query optimizer does not keep track of a table variable's cardinality because statistics are not created or maintained on table variables. This can result in less optimal query plans; however, you can test the different options and choose the best one. Generally, temporary tables provide much better performance than table variables when lots of data is involved.
  • Recompilation might also occur as a result of SET option changes, so you can use SQL Server Profiler to determine which SET option changed. It is highly advisable to avoid changing SET options within stored procedures; it is much better to set them at the connection level, and for the overwhelming majority of cases the default SET options work well. It is also very important to ensure that SET options are not changed during the lifetime of the connection.
  • To avoid recompilations that are due to deferred compilations, do not interleave DML with DDL and do not create DDL as a result of conditional IF statements.
  • To avoid recompilation and also to avoid ambiguity between objects, batches should have qualified object names, for example, dbo.Table1, User1.MySP, etc.
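
To illustrate the query hints mentioned above, here is a brief sketch; the dbo.Orders table, the #OrdersStaging temporary table and the @CustomerID variable are hypothetical:

DECLARE @CustomerID INT = 1;

-- Statement-level recompile instead of marking the whole procedure WITH RECOMPILE
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = @CustomerID
OPTION (RECOMPILE);

-- Recompile only when correctness requires it, not when statistics change
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = @CustomerID
OPTION (KEEPFIXED PLAN);

-- Use the recompilation threshold of permanent tables for a temporary table
SELECT OrderID, OrderDate
FROM #OrdersStaging
OPTION (KEEP PLAN);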
That is all for now, let me know any remarks you may have.

Friday 14 July 2017

Ports and Protocols Used by Microsoft SQL Server

Naturally, I have been asked many times about the ports used by SQL Server services, and to be honest, sometimes it took me a while to reply because there is a great number of ports and protocols and it is not easy to remember them all quickly. Not all of us have to learn everything by rote, so with that in mind, I decided to share the following lists of useful ports and protocols so that you can have them at hand when needed.

Ports and Protocols Used by Microsoft SQL Server 2000

Service / Purpose | Protocol | Port
Analysis Services | TCP | 2725
Client connections when "hide server" option enabled | TCP | 2433
Clients using Named Pipes over NetBIOS | TCP | 139/445
Microsoft SQL Monitor port | UDP | 1434
OLAP Services connections from downlevel clients (OLAP Services 7.0) | TCP | 2393/2394
SQL over TCP | TCP | 1433
Standard URL for a report server (Reporting Services) | TCP | 80 (HTTP) / 443 (SSL)
Ports and Protocols Used by Microsoft SQL Server 2005

Service / Purpose | Protocol | Port
Analysis Services connections via HTTP (default) | TCP | 80
Analysis Services connections via HTTPS (default) | TCP | 443
Clients using Named Pipes over NetBIOS | TCP | 137/138/139/445
Dedicated Administrator Connection | TCP | 1434 by default (local port), although this port is assigned dynamically by SQL Server during startup
Reporting Services on Windows 2003/2008/Vista (default) | TCP | 80
Reporting Services on Windows XP SP2 | TCP | 8080
SQL Server 2005 Analysis Services | TCP | 2383
SQL Server Browser Service | TCP | 2382
SQL Server Integration Services (MSDTSServer) | TCP | 135
SQL Server Resolution Protocol | UDP | 1434
SQL over TCP (default instance) | TCP | 1433
SQL over TCP (named instances) | TCP | 1434 / 1954
Ports and Protocols Used by Microsoft SQL Server 2008/2012/2014/2016/2017

Service / Purpose | Protocol | Port
Analysis Services connections via HTTP (default) | TCP | 80
Analysis Services connections via HTTPS (default) | TCP | 443
Clustering | UDP | 135
Clustering | TCP | 135 (RPC) / 3343 (Cluster Network Driver) / 445 (SMB) / 139 (NetBIOS) / 5000-5099 (RPC) / 8011-8031 (RPC)
Database Mirroring | TCP | No default port. Use the following T-SQL statement to identify which ports are in use: SELECT name, port FROM sys.tcp_endpoints
Dedicated Administrator Connection | TCP | 1434 by default (local port), although this port is assigned dynamically by SQL Server during startup
Filestream | TCP | 139 and 445
Microsoft Distributed Transaction Coordinator (MS DTC) | TCP | 135
Reporting Services Web Services | TCP | 80
Reporting Services configured for use through HTTPS | TCP | 443
Service Broker | TCP | 4022
SQL Server Analysis Services | TCP | 2382 (SQL Server Browser Service for SSAS) / 2383 (clusters will listen only on this port)
SQL Server Browser Service (Database Engine) | UDP | 1434 (might be required when using named instances)
SQL Server Browser Service | TCP | 2382
SQL Server default instance running over an HTTPS endpoint | TCP | 443
SQL Server instance (Database Engine) running over an HTTP endpoint | TCP | 80 and 443 (SSL)
SQL Server Integration Services | TCP | 135 (DCOM)
SQL over TCP (default instance) | TCP | 1433
Transact-SQL Debugger | TCP | 135
Windows Management Instrumentation | TCP | 135 (DCOM)
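
As a complement, if you need to confirm which TCP port a database engine instance is actually listening on, a quick query against the current connections can help (run it over an existing TCP connection, otherwise no rows will be returned):

SELECT DISTINCT local_net_address, local_tcp_port
FROM sys.dm_exec_connections
WHERE net_transport = 'TCP';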

That is all for now. I hope you find this post useful. Let me know any remarks you may have. Stay tuned.

Sunday 25 June 2017

Getting the full name of SQL Jobs including the steps in execution

While monitoring a database server, we may need to know some details about the sessions, connections, and requests that might be causing performance or blocking issues so that we can take action to fix them. One very important piece of information is the name of the program connected to the database engine. Broadly, it is possible to see the program name by using system stored procedures or DMVs such as 'sp_who2' and 'sys.dm_exec_sessions'. Nevertheless, not every program name is easy to interpret, especially when it comes to SQL jobs. For instance, if you detected that a SQL job is the root of the problem and needed to know which SQL job it is, the program_name column of 'sp_who2' or 'sys.dm_exec_sessions' would only give you the SQL job ID in the following format:

SQLAgent - TSQL JobStep (Job 0x2613DA812CD2D248A9BA377DE6DEF355 : Step 1)

Obviously, we cannot do much with that information because there is no SQL job name and, even worse, no SQL job step name. However, we can figure out the name of the SQL job in msdb.dbo.sysjobs by using that ID.

SELECT * FROM msdb.dbo.sysjobs WHERE job_id=0x2613DA812CD2D248A9BA377DE6DEF355

Although it may be relatively easy to get the name of the SQL job this way, it is not enough, because it is also of paramount importance to know the name of the step in execution, and doing this manually every time it is needed is going to be an uphill battle; it is arduous and far from optimal, especially if there are many SQL jobs running and causing trouble. With this situation in mind, I created a script to automate the task of figuring out the details of SQL jobs, such as the job name and the name of the step that is running. To be more precise, this logic lives inside a function called "ufn_GetJobStepNameDesc" that takes the value of the "program_name" column and returns the name of the SQL job and the step in execution. Let's take a look at the following code, in which we also filter to show only the sessions used by SQL jobs.

SELECT session_id, login_time, login_name, [status], writes,
       logical_reads, [language], DB_NAME(database_id) DatabaseName,
          dbo.ufn_GetJobStepNameDesc([program_name]) AS SQLJobDescription
FROM sys.dm_exec_sessions where [program_name] like 'SQLAgent - TSQL%'
As you can see, the function is applied to the [program_name] column, and other important columns are included to look at as part of monitoring. So, using dbo.ufn_GetJobStepNameDesc([program_name]), the final outcome looks like this: SQLAgent - TSQL JobStep "<Name of the job step>" (Job: <Job name>). For instance: SQLAgent - TSQL JobStep "Updating_Accounts" (Job: SAP_Financial_Process)
Here I share the script with you so that you can check it thoroughly and then make the most of it.
USE [master]
GO
CREATE FUNCTION [dbo].[ufn_GetJobStepNameDesc] (@step_name_desc VARCHAR(MAX))
RETURNS VARCHAR(MAX)
AS
BEGIN
    DECLARE @full_step_name_desc VARCHAR(MAX)

    -- Extract the step id that follows ': Step' in the program name
    DECLARE @jobstep_id_start INT
    DECLARE @jobstep_id_len INT
    DECLARE @jobstep_id INT
    SELECT @jobstep_id_start = CHARINDEX(': Step', @step_name_desc) + 7,
           @jobstep_id_len = CHARINDEX(')', @step_name_desc) - @jobstep_id_start
    SET @jobstep_id = CAST(SUBSTRING(@step_name_desc, @jobstep_id_start, @jobstep_id_len) AS INT)

    -- Extract the hexadecimal job id that follows '(Job 0x'
    DECLARE @job_id_start INT
    DECLARE @job_id_len INT
    DECLARE @hexa_job_id VARBINARY(MAX)
    SELECT @job_id_start = CHARINDEX('(Job 0', @step_name_desc) + 5,
           @job_id_len = CHARINDEX(':', @step_name_desc) - @job_id_start
    SET @hexa_job_id = CONVERT(VARBINARY, RTRIM(LTRIM(SUBSTRING(@step_name_desc, @job_id_start, @job_id_len))), 1)

    -- Look up the job and step names in msdb and build the friendly description
    SELECT @full_step_name_desc = 'SQLAgent - TSQL JobStep "' + step_name + '" (Job: ' + j.[name] + ')'
    FROM msdb.dbo.sysjobsteps js
         INNER JOIN msdb.dbo.sysjobs j ON j.job_id = js.job_id
    WHERE step_id = @jobstep_id AND j.job_id = CAST(@hexa_job_id AS UNIQUEIDENTIFIER)

    RETURN (@full_step_name_desc)
END
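
To quickly test the function on its own, you can pass it the sample program name shown earlier. Run it in the master database, where the function was created; it will return NULL if that job id does not exist on your instance:

SELECT dbo.ufn_GetJobStepNameDesc('SQLAgent - TSQL JobStep (Job 0x2613DA812CD2D248A9BA377DE6DEF355 : Step 1)') AS SQLJobDescription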
That is all for now, I hope you find this post helpful. Let me know any remarks you may have. Stay tuned.