Tuesday 24 May 2016

How to change collation of all columns

Thinking about tools that can be useful for day-to-day DBA tasks, I would like to share my code to change the collation of all columns of all SQL Server user tables (not system tables). To begin with, here is a basic query to filter columns by a specific collation:

select tb.schema_id, tb.name, c.name, c.collation_name, t.name, c.max_length, c.is_nullable, c.column_id
from sys.columns c
inner join sys.types t on t.user_type_id = c.user_type_id
inner join sys.tables tb on c.object_id = tb.object_id
where c.collation_name is not null
and t.is_user_defined = 0 and tb.is_ms_shipped = 0 and tb.name <> 'sysdiagrams'
and c.collation_name <> 'SQL_Latin1_General_CP1_CI_AS'
order by tb.name, c.column_id

Executing it lists every column whose collation differs from the one you want. Next is the script that generates the ALTER statements to change the collation of those columns; take its output and run it to apply the new collation. In this example I am using SQL_Latin1_General_CP1_CI_AS as the target collation; replace it according to your requirements.

select 'ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(tb.schema_id)) + '.' + QUOTENAME(tb.name)
 + ' ALTER COLUMN ' + QUOTENAME(c.name) + ' ' + QUOTENAME(t.name)
 + '(' + case when c.max_length = -1 then 'MAX'
              when t.name in (N'nvarchar', N'nchar') then cast(c.max_length/2 as varchar(10))
              else cast(c.max_length as varchar(10)) end + ')'
 + ' COLLATE SQL_Latin1_General_CP1_CI_AS'
 + case when c.is_nullable = 1 then ' NULL;' else ' NOT NULL;' end
from sys.columns c
inner join sys.types t on t.user_type_id = c.user_type_id
inner join sys.tables tb on c.object_id = tb.object_id
where c.collation_name is not null
and t.is_user_defined = 0 and tb.is_ms_shipped = 0 and tb.name <> 'sysdiagrams'
and t.name in (N'char', N'varchar', N'nchar', N'nvarchar') -- text/ntext columns cannot be altered this way
and c.collation_name <> 'SQL_Latin1_General_CP1_CI_AS'
order by tb.name, c.column_id


Be careful: the execution of the generated statements may fail because some columns are referenced by constraints (for instance foreign keys, primary keys, unique constraints, etc.) or indexes, which you may need to drop first and then recreate after changing the collation of the column.
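Here is a minimal sketch (filtering on the same target collation as above) that lists the indexes, primary keys, and unique constraints built on the affected columns, so you know what may need to be dropped and recreated first. Foreign keys that reference these columns can be found in a similar way through sys.foreign_key_columns.

select tb.name as table_name, c.name as column_name, i.name as index_name,
       i.is_primary_key, i.is_unique_constraint
from sys.index_columns ic
inner join sys.indexes i on i.object_id = ic.object_id and i.index_id = ic.index_id
inner join sys.columns c on c.object_id = ic.object_id and c.column_id = ic.column_id
inner join sys.tables tb on tb.object_id = c.object_id
where c.collation_name is not null
and c.collation_name <> 'SQL_Latin1_General_CP1_CI_AS'
and tb.is_ms_shipped = 0 and tb.name <> 'sysdiagrams'
order by tb.name, i.name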
To finish, I highly recommend testing this code on a copy of your database first to check whether any errors appear for the reasons explained above, and then doing whatever is necessary. Only when you are completely sure there are no errors should you proceed with the execution on your production database. Please let me know any remarks or questions you may have. Thanks for reading!

Sunday 1 May 2016

AlwaysOn AG Listener: The attempt to create the network name and IP address for the listener failed

While working on a number of High Availability and Disaster Recovery solutions, I have had to deal with some complex errors that needed to be overcome as quickly as possible. Today's post shows how to solve one of them: the following error, which can be raised while setting up an AlwaysOn Availability Group Listener:

The Windows Server Failover Clustering (WSFC) resource control API returned error code 5057.  The WSFC service may not be running or may not be accessible in its current state, or the specified arguments are invalid. 
The attempt to create the network name and IP address for the listener failed. The WSFC service may not be running or may be inaccessible in its current state, or the values provided for the network name and IP address may be incorrect. Check the state of the WSFC cluster and validate the network name and IP address with the network administrator. (Microsoft SQL Server, Error: 41009)

At times it can be quite easy to fix, but it may become complicated because the message gives no further detail about the root cause and therefore offers no clue. In my experience, the most common cause is a lack of permissions for the cluster name account, so we have to make sure this account has the 'Create Computer objects' and 'Read' permissions in Active Directory:



Once you have granted the right permissions, if the error is still there, check whether the IP address you are assigning to the AlwaysOn AG Listener is actually available. This is simple to verify by pinging the IP address: it should not reply if it is free. If it is already in use, ask your network administrator for a new IP address and try again.
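If you prefer to retry the listener creation by code rather than through the wizard, a minimal T-SQL sketch looks like the following (the availability group name, listener name, IP address, and subnet mask here are hypothetical; replace them with your own values and run it on the primary replica):

ALTER AVAILABILITY GROUP [SQLAG01]
ADD LISTENER N'SQLAGL01' (
    WITH IP ((N'10.0.0.50', N'255.255.255.0')),
    PORT = 1433
);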
I hope this practical post helps you. Let me know any remarks you may have. Until next post, thanks for reading!

Saturday 30 April 2016

Reusing a SQL Job creation script to create similar jobs with a different Schedule ID

As we know, it is very important to look for new ways of being more productive every day. For instance, one of our tasks as DBAs is to implement SQL backup jobs for each database. Personally, I like reusing code to create similar SQL jobs faster, that is, create one SQL job, generate its CREATE script, replace a few values, and finally execute it to create the SQL backup jobs for the remaining databases.
After creating the next SQL jobs by reusing the complete script, you will find that they all share the same schedule ID. So, if we modify the schedule for one of them, every job is modified as well. In that situation we would have to drop the schedule and create a new one, which may not be what we wanted and takes additional time. So, how do we create SQL jobs based on the same template but with a different schedule ID? This post shows how to do it.

First of all, look at this picture.



You will see a parameter @schedule_uid, which is the SQL job schedule ID, so what we have to do is comment out this line so that SQL Server generates a new ID for the job schedule.
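For reference, the relevant fragment of an SSMS-generated job script looks roughly like the sketch below (the job name, schedule name, and frequency values are hypothetical); note the commented-out @schedule_uid line:

EXEC msdb.dbo.sp_add_jobschedule @job_id = @jobId, @name = N'DailyFullBackup',
    @enabled = 1,
    @freq_type = 4,            -- daily
    @freq_interval = 1,
    @active_start_time = 230000
    -- , @schedule_uid = N'...'   -- commented out so SQL Server generates a new schedule ID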



Having commented out that parameter for each job, the rest of the jobs will no longer inherit the schedule ID and a new one will be created instead. I hope this post is useful for you; let me know if you have any questions. Until next post, thanks for reading!

Wednesday 9 March 2016

SQL Server Analysis Services Error: The following system error occurred: (Microsoft.AnalysisServices)

When we add users as Windows administrators, we often also add them as administrators of Analysis Services (SSAS). So, if we later delete them from Windows, they become orphaned users inside SSAS. As a result, we get the following error when trying to add other users to the server administrator role in SSAS, whether via the GUI or by code.

“The following system error occurred: (Microsoft.AnalysisServices)”

This error is raised because the orphaned users no longer have a matching account in Active Directory, so only their invalid user IDs remain registered in SSAS.



To fix this issue and be able to add other users, first remove these orphaned users and then add the new ones. I hope you find this short post interesting. Thanks for reading!

Friday 4 March 2016

How to split the tempdb database into more files

Naturally, as DBAs we know it is important to adjust the data and log file settings of the tempdb database. Unless we customize its configuration, SQL Server creates only one data file and one log file by default. The reality is that we will usually need more data files in production environments. There are many recommendations not only about how to create them but also about how many files to use in OLTP environments. To be perfectly honest, I do not believe the number of data files depends only on the number of cores; it also depends on concurrency, tempdb contention issues, the workloads on your server and, clearly, the performance of your queries. So, there is no single rule for it. Moreover, depending on the SQL Server version we are working on, we may NOT need to split tempdb into many data files. There are situations where it works splendidly with only one data file, since each database environment is unique and we need to determine what is best for it.
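As a quick starting point for that decision, you can check how many logical CPUs SQL Server sees on the instance:

SELECT cpu_count, hyperthread_ratio
FROM sys.dm_os_sys_info;   -- logical CPUs visible to SQL Server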

Today we are not going to discuss this in more detail. I would just suggest splitting your tempdb database into four data files (if you have four or eight cores) or eight data files (for 16, 32, 64, or more cores), and keeping ONLY one log file. Ideally, we should also place them on separate drives (RAID 1, RAID 5, or RAID 10). With that clear, here is a small script to split the default tempdb database into eight data files while keeping the single log file.

USE [master]
GO
ALTER DATABASE [tempdb] MODIFY FILE (NAME=N'templog', NEWNAME= N'tempdev_Log')
GO
ALTER DATABASE [tempdb] MODIFY FILE (NAME=N'tempdev', NEWNAME=N'tempdev_Data01')
GO
ALTER DATABASE [tempdb] MODIFY FILE (NAME=N'tempdev_Log', FILENAME= N'D:\SQLTempDB\tempdev_Log.ldf', SIZE = 2048MB , FILEGROWTH = 2048MB)
GO
ALTER DATABASE [tempdb] MODIFY FILE (NAME=N'tempdev_Data01', FILENAME= N'D:\SQLTempDB\tempdev_Data01.mdf', SIZE = 512MB , FILEGROWTH = 1024MB )
GO
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data02', FILENAME = N'D:\SQLTempDB\tempdev_Data02.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
GO
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data03', FILENAME = N'D:\SQLTempDB\tempdev_Data03.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
GO
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data04', FILENAME = N'D:\SQLTempDB\tempdev_Data04.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
GO
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data05', FILENAME = N'D:\SQLTempDB\tempdev_Data05.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
GO
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data06', FILENAME = N'D:\SQLTempDB\tempdev_Data06.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
GO
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data07', FILENAME = N'D:\SQLTempDB\tempdev_Data07.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
GO
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data08', FILENAME = N'D:\SQLTempDB\tempdev_Data08.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
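After the restart mentioned below, you can confirm the new layout with a quick check against tempdb's catalog (assuming the file names used above):

SELECT name, physical_name, type_desc, size/128 AS size_mb
FROM tempdb.sys.database_files;   -- lists the current tempdb data and log files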

After the script has executed successfully, we must restart the database engine so that the renamed and relocated files take effect and all changes are applied. I hope you find this post interesting. Let me know if you have any remarks. Thanks for reading!