Saturday 30 April 2016

Reusing a SQL Job creation script to create similar jobs with a different Schedule ID

As we know, it is very important to look for new ways of being more productive every day. For instance, one of our tasks as DBAs is to implement SQL Backup Jobs for each database. Personally, I like reusing code to create similar SQL Jobs faster, that is, create one SQL Job, generate its creation script, replace a few values, and finally execute it to create a SQL Backup Job for each database.
After creating the next SQL Jobs by reusing the complete script, you will find that these SQL Jobs share the same SQL Schedule ID. So, if we modify the schedule for one of them, every SQL Job will be modified as well. Under this circumstance, we would have to drop the SQL Schedule and create a new one, which may not be what we wanted and may take additional time. So, how do we create SQL Jobs based on the same template but with a different SQL Schedule ID? This post shows how to do it.

First of all, look at this picture.



You will see a parameter named @schedule_uid, which is the SQL Job Schedule ID, so what we have to do now is comment out this line so that SQL Server generates a new ID for the SQL Job Schedule.
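For illustration, here is a hedged sketch of the relevant fragment of such a generated script (the schedule name and frequency values are hypothetical); the only important change is commenting out the @schedule_uid argument of msdb.dbo.sp_add_jobschedule:

EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id = @jobId,
    @name = N'BackupSchedule',
    @enabled = 1,
    @freq_type = 4,            -- daily
    @freq_interval = 1,
    @active_start_time = 10000 -- 01:00:00
    -- , @schedule_uid = N'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' -- commented out so a new Schedule ID is generated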



Having commented out that parameter for each Job, the rest of the Jobs will no longer inherit the Schedule ID and a new one will be created instead. I hope this post is useful for you; let me know if you have any questions. Until next post, thanks for reading!

Wednesday 9 March 2016

SQL Server Analysis Services Error: The following system error occurred: (Microsoft.AnalysisServices)

Quite often, when we add users as Windows Administrators, we also add them as administrators of Analysis Services (SSAS). So, if we later delete them from Windows, they become orphaned users inside SSAS. As a result, not surprisingly, we are going to get the following error when trying to add other users to the server administrator role in SSAS, whether via the GUI or by code.

“The following system error occurred: (Microsoft.AnalysisServices)”

This sort of error is raised because the orphaned users no longer have a matching account inside Active Directory, so only their invalid user IDs remain registered in SSAS.



To fix this issue and be able to add other users, we should first remove these orphaned users and then add the new ones. I hope you find this short post interesting. Thanks for reading!

Friday 4 March 2016

How to split the tempdb database into more files

Naturally, as DBAs we know it is mandatory to review the data and log file properties of the tempdb database. Unless we do a customized configuration, SQL Server creates only one data file and one log file by default. The reality is that we will mostly need to create more files in production environments. There are many recommendations not only about how to create them but also about how many files to use in OLTP environments. To be perfectly honest, I do not believe the number of data files depends only on the number of cores; it also depends on concurrency, tempdb contention issues, the workloads on your server and, clearly, the performance of your queries. So, there is no single rule for it. Moreover, depending on the SQL Server version we are working on, we may NOT need to split tempdb into many data files. There are situations where it works splendidly with only one data file; each database environment is unique and, therefore, we need to determine what is best for it.

Today we are not going to discuss it in more detail. I just would like to suggest splitting your tempdb database into four data files (if you have four or eight cores) or eight data files (for 16, 32, 64, or more cores) and ONLY one log file. Ideally, we also need to locate them on separate drives (RAID1, RAID5 or RAID10). With the situation now clear, here is a small script to split the default tempdb database into eight data files while keeping the single log file.

USE [master]
GO
ALTER DATABASE [tempdb] MODIFY FILE (NAME = N'templog', NEWNAME = N'tempdev_Log')
GO
ALTER DATABASE [tempdb] MODIFY FILE (NAME = N'tempdev', NEWNAME = N'tempdev_Data01')
GO
ALTER DATABASE [tempdb] MODIFY FILE (NAME = N'tempdev_Log', FILENAME = N'D:\SQLTempDB\tempdev_Log.ldf', SIZE = 2048MB, FILEGROWTH = 2048MB)
GO
ALTER DATABASE [tempdb] MODIFY FILE (NAME = N'tempdev_Data01', FILENAME = N'D:\SQLTempDB\tempdev_Data01.mdf', SIZE = 512MB, FILEGROWTH = 1024MB)
GO
ALTER DATABASE [tempdb] ADD FILE (NAME = N'tempdev_Data02', FILENAME = N'D:\SQLTempDB\tempdev_Data02.ndf', SIZE = 512MB, FILEGROWTH = 1024MB)
GO
ALTER DATABASE [tempdb] ADD FILE (NAME = N'tempdev_Data03', FILENAME = N'D:\SQLTempDB\tempdev_Data03.ndf', SIZE = 512MB, FILEGROWTH = 1024MB)
GO
ALTER DATABASE [tempdb] ADD FILE (NAME = N'tempdev_Data04', FILENAME = N'D:\SQLTempDB\tempdev_Data04.ndf', SIZE = 512MB, FILEGROWTH = 1024MB)
GO
ALTER DATABASE [tempdb] ADD FILE (NAME = N'tempdev_Data05', FILENAME = N'D:\SQLTempDB\tempdev_Data05.ndf', SIZE = 512MB, FILEGROWTH = 1024MB)
GO
ALTER DATABASE [tempdb] ADD FILE (NAME = N'tempdev_Data06', FILENAME = N'D:\SQLTempDB\tempdev_Data06.ndf', SIZE = 512MB, FILEGROWTH = 1024MB)
GO
ALTER DATABASE [tempdb] ADD FILE (NAME = N'tempdev_Data07', FILENAME = N'D:\SQLTempDB\tempdev_Data07.ndf', SIZE = 512MB, FILEGROWTH = 1024MB)
GO
ALTER DATABASE [tempdb] ADD FILE (NAME = N'tempdev_Data08', FILENAME = N'D:\SQLTempDB\tempdev_Data08.ndf', SIZE = 512MB, FILEGROWTH = 1024MB)

After the script has executed successfully, we must restart the database engine so that the new files are created and all the changes take effect. I hope you find this post interesting. Let me know if you have any remarks. Thanks for reading!
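As a quick check after the restart (a minimal sketch; the file names will match whatever you used above), you can list the tempdb files and their sizes:

USE [tempdb]
GO
-- size and growth are stored in 8 KB pages; divide by 128 to convert to MB
SELECT name, physical_name, size/128 AS size_mb, growth/128 AS growth_mb
FROM sys.database_files
GO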

Monday 15 February 2016

Transactional Replication and Change Data Capture: The Log Reader Agent Conflict

Behind closed doors of SQL Server, the following issue may be raised when Transactional Replication and Change Data Capture (CDC) are deployed and running together on the same database server and the CDC jobs have been managed incorrectly. We know that two SQL Jobs are created when CDC is deployed: 'cdc.MyDB_capture' and 'cdc.MyDB_cleanup'.
Looking into the first one, the 'cdc.MyDB_capture' job executes the 'sys.sp_MScdc_capture_job' system stored procedure, which invokes 'sp_cdc_scan' to read the Transaction Log internally and capture the changes made in the database via the Log Reader Agent (created originally for Transactional Replication purposes). In other words, the 'cdc.MyDB_capture' job is the agent of the CDC process that reads the Transaction Log by using the Log Reader Agent. Therefore, Transactional Replication and CDC running on the same database cannot use the same Log Reader Agent at the same time. Otherwise, we will get this error:

The capture job cannot be used by Change Data Capture to extract changes from the log when transactional replication is also enabled on the same database. When Change Data Capture and transactional replication are both enabled on a database, use the logreader agent to extract the log changes.

The error message is really clear. Put differently, it is not possible for two Log Reader Agent instances to be running on your database at the same time. When transactional replication is configured, the cdc.MyDB_capture job is (or should have been) dropped automatically, and if you uninstall Transactional Replication the cdc.MyDB_capture job is created again. To be perfectly honest, this behaviour exists because Transactional Replication has the highest priority to use the Log Reader Agent. So, if you have transactional replication running on your database and the cdc.MyDB_capture job is still enabled (and running), you will have to disable or drop it manually, since it will keep failing and raising the error above (see the sketch below).
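A minimal sketch of dropping the CDC capture job manually, using the documented sys.sp_cdc_drop_job procedure and run in the CDC-enabled database (the cleanup job is left in place; the Log Reader Agent then extracts the log changes for both replication and CDC, as the error message indicates):

USE MyDB
GO
-- Drop only the capture job; sys.sp_cdc_drop_job also accepts N'cleanup'
EXEC sys.sp_cdc_drop_job @job_type = N'capture'
GO

Thanks for reading.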

Saturday 13 February 2016

Looking into some concepts about the well-known tempdb database

Today's post is going to look into some concepts about the well-known tempdb database. As DBAs, we have to keep an eye on it not only at installation time, but also as part of our daily monitoring tasks. To do so, we really need to understand how it works and what enhancements have arrived from SQL Server 2005 onwards. To begin with, tempdb is the shared, per-instance database in SQL Server used to store and manage temporary objects. It has seen a number of changes since SQL Server 2005, that is, there are new tempdb usages and internal optimization enhancements. Nevertheless, the tempdb architecture is mostly unchanged since SQL Server 2000.
In a user database, transactions have the ACID attributes: atomicity, consistency, isolation, and durability, whereas in the tempdb database transactions lose the durability attribute, which basically means they do not persist after a SQL Server shutdown. Most internal operations on tempdb do not generate log records as there is no need to roll them back, so these operations are faster. Furthermore, some database options cannot be modified for tempdb and others are limited or restricted. Here they are:
  • Auto Shrink is not allowed for tempdb.
  • Database Shrink and File Shrink capabilities are also limited.
  • The database CHECKSUM option cannot be enabled. 
  • A database snapshot cannot be created on tempdb. 
  • DBCC CHECKALLOC and DBCC CHECKCATALOG are not supported. 
  • Only offline checking for DBCC CHECKTABLE is performed.

Tempdb Space Usage: The following types of objects can occupy tempdb space: Internal Objects, Version Stores, and User Objects.
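As a quick way of seeing how much space each of these categories is currently using, here is a minimal sketch based on the sys.dm_db_file_space_usage DMV (the page counts are 8 KB pages):

USE [tempdb]
GO
SELECT SUM(internal_object_reserved_page_count) * 8 / 1024.0 AS internal_objects_mb,
       SUM(version_store_reserved_page_count)   * 8 / 1024.0 AS version_store_mb,
       SUM(user_object_reserved_page_count)     * 8 / 1024.0 AS user_objects_mb,
       SUM(unallocated_extent_page_count)       * 8 / 1024.0 AS free_space_mb
FROM sys.dm_db_file_space_usage
GO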

Internal Objects: Internal Object metadata is stored in memory (meaning the metadata is hidden in tempdb and does not appear in catalog views such as 'sys.all_objects') and each internal object occupies at least nine pages (one IAM page and eight data pages) in tempdb. Page allocations and update operations on them do not generate log records. Internal Objects are used
  • to store intermediate runs for sort
  • to store intermediate results for hash joins and hash aggregates
  • to store XML variables or other large object (LOB) data type variables (text, image, ntext, varchar(max), varbinary(max), and all others)
  • by queries that need a spool to store intermediate results
  • by keyset cursors to store the keys
  • by static cursors to store a query result
  • by Service Broker (Query Notification and Event Notification) to store messages in transit
  • by INSTEAD OF triggers to store data for internal processing
  • by DBCC CHECKDB (it internally uses a query that may need to spool intermediate results)
Version Stores: They do not appear in catalog views such as 'sys.all_objects'. Here are more concepts:
  • Version stores are used to store row versions generated by transactions for features such as snapshot isolation, triggers, MARS (multiple active result sets), and online index build.
  • The online index build version store is for row versions from tables that have online index build operations on them.
  • The common version store is for row versions from all other tables in all databases.
  • The version store consists of append-only store units which are highly optimized for sequential inserts and random look up. Inserts into the version store do not generate log records.
  • Each unit can store many row versions. If there are versions to be stored, a new store unit is created about every minute.
In the following cases the versions of rows are generated for
  • SNAPSHOT isolation and read committed snapshot isolation (the versions are generated by DML operations in the database when the respective database options are enabled)
  • AFTER triggers (the versions are generated for all the update operations by the transaction that fired the trigger during the INSERT, DELETE, or UPDATE statement in any database, independent of database options. INSTEAD OF triggers do not generate versions)
  • MARS (the versions are generated by the UPDATE or DELETE statement when there is a pending SELECT on the same transaction and the same connection)
  • building an ONLINE index
User Objects: They do appear in catalog views such as 'sys.all_objects'. Now some details about them:
  • The 'sp_spaceused' system stored procedure can show the size occupied by these objects.
  • User Objects include both user-defined tables and indexes, and system catalog tables and indexes.
  • Operations on User Objects in tempdb are mostly logged. Bulk copy program (BCP), bulk insert, SELECT INTO, and index rebuild operations are bulk logged.
  • User-defined tables include the global temporary tables such as ##t, and local temporary tables such as #t.
  • Local temporary tables also include table variables such as @t and the mapping index for online clustered index build with the SORT_IN_TEMPDB option.
Performance Enhancements in SQL Server 2005 or later
  1. The tempdb logging optimization avoids logging the “after value” in certain log records in tempdb.
  2. Instant data file initialization works by not zeroing out the NTFS file when the file is created or when the size of the file is increased.
  3. There is less use of the UP type page latch when allocating pages and extents in tempdb. Proportional fill has been optimized to reduce UP latch contention.
  4. There is now deferred drop in tempdb.
  5. Worktable caching is improved.
  6. SQL Server 2005 or later caches the temporary table that is created by using a CREATE TABLE or SELECT INTO statement.
Recommendations for managing tempdb
  1. The tempdb files must be configured with an initial size and auto-growth based on your workloads. Do not leave them at the default sizes.
  2. The tempdb files should be located on RAID0 (for better performance), RAID1 (if you have more writes than reads) or RAID5 (if you have more reads than writes). RAID10 is the best option, but not all companies can justify it.
  3. The tempdb files should be located on separate disks to avoid contention issues and improve performance.
  4. The tempdb database should be created with one data file per physical processor if there are eight or fewer CPUs. If there are more than eight CPUs, start off with eight data files and then increase the number of files by four whenever there is PAGELATCH contention on one of the allocation bitmap pages (PFS, GAM and SGAM pages), adding four more files each time until the issue is solved (see the sketch after this list for one way to detect this contention).
  5. Do not forget that the tempdb data files or log file should not grow too much in a short time; if this happens, something is going wrong, so you need to analyse the workload and find the queries that have to be optimized.
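A minimal sketch for detecting this kind of tempdb allocation contention at a given moment (resource_description comes in dbid:fileid:pageid format and database ID 2 is tempdb):

SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%'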
That is all for now. Let me know any remarks you may have about tempdb database. Thanks for reading.

Monday 8 February 2016

Using KEEP_CDC option to keep CDC metadata

Having a database with Change Data Capture (CDC) enabled, we need to take this recommendation into consideration during the restore process. There is one option to be used as part of the RESTORE syntax: the KEEP_CDC option, which allows us to keep the CDC metadata when the database is restored to another SQL Server instance (or restored to the same instance with a different name). So, here is the example:
RESTORE DATABASE [TESTDB2] FROM DISK = N'D:\SQLBackup\TESTDB.bak' WITH FILE = 1, KEEP_CDC

When verifying that the CDC metadata was restored as well, we will see that not only do the database and tables keep the CDC option enabled, but also all the data captured by CDC is still there.
-- Checking if CDC is enabled for database TESTDB2.

select is_cdc_enabled,name from sys.databases where name='TESTDB2'

-- Checking if CDC is enabled for table MyTable.

select is_replicated, is_tracked_by_cdc, * from sys.tables where name = 'MyTable'

select * from [cdc].[change_tables]  

-- checking the data tracked for table MyTable.

select * from cdc.dbo_MyTable_CT 

This is not all. Now we must create the CDC jobs by executing:
USE TESTDB2

EXEC sys.sp_cdc_add_job 'capture'

EXEC sys.sp_cdc_add_job 'cleanup' 

Finally, you can also verify that the CDC jobs were created:
USE TESTDB2

EXEC [sys].[sp_cdc_help_jobs] 

Having done that, the restore process of database (with CDC included) has been completed successfully. Thanks for reading!

Thursday 28 January 2016

Did you get this "AuthorizationManager check failed" error working with SQL Jobs and PowerShell?

Talking of PowerShell, while working on SQL Jobs that execute PowerShell scripts, they may unexpectedly begin failing for no apparent reason, and we get this error:

AuthorizationManager check failed At line:1 char:2  + & <<<<  ‘S:\myfolder\script.ps1’      + CategoryInfo          : NotSpecified: (:) [], PSSecurityException      + FullyQualifiedErrorId : RuntimeException.  Process Exit Code 1.  The step failed.

What we should do is to check PowerShell so as to make sure the ExecutionPolicy is not set to “Restricted” by executing the following command:
Get-ExecutionPolicy
If it is then set it to “RemoteSigned” or “Unrestricted” depending on your security policy.
Set-ExecutionPolicy RemoteSigned
We not only have to make sure that the Windows Management Instrumentation (WMI) service is enabled and running, but we also have to restart it. Only after successfully doing that will your job and script run again with no error.
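A minimal sketch of that check and restart in PowerShell (run from an elevated session; Winmgmt is the name of the WMI service):

Get-Service -Name Winmgmt             # check that the WMI service is running
Restart-Service -Name Winmgmt -Force  # restart it; -Force is needed because other services depend on WMI

I do believe this will work for the vast majority of cases. I hope you find this post useful. Thanks for reading.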

Monday 25 January 2016

How to fix: The In-row data RSVD page count for object ‘SchemaName.ObjectName’, index ID ‘x’, partition ID ‘xxxxxxxxxxxxxx’

Undoubtedly, we all know that SQL Server 2000 is now out of support from Microsoft. Surprisingly, not all companies have migrated their databases yet, so many of them are planning to upgrade to one of the latest versions of SQL Server (SQL Server 2014/2016). This post aims to give a recommendation that you can adopt as a good practice in order to avoid inconsistency errors in object metadata for the upgraded databases. The inconsistency errors I am talking about can be found when you run the DBCC CHECKDB command to check logical and physical integrity:
DBCC CHECKDB(N'MyDatabase') WITH NO_INFOMSGS

And here the following inconsistency error is likely to appear:
The In-row data RSVD page count for object ‘SchemaName.ObjectName’, index ID 0, partition ID 75863107960832, alloc unit ID 75863107960832 (type In-row data) is incorrect.

Run DBCC UPDATEUSAGE.

CHECKDB found 0 allocation errors and 1 consistency errors in table ‘SchemaName.ObjectName’ (object ID 1157579162). CHECKDB found 0 allocation errors and 1 consistency errors in database ‘MyDatabase’.
The output message above tells us that there is one LOGICAL consistency error inside the database "MyDatabase" because there are page and row count inaccuracies for the table whose ID is 1157579162. Luckily, errors of this sort can be fixed by running the DBCC UPDATEUSAGE command, which reports and corrects the inaccuracies. Only after successfully running the DBCC UPDATEUSAGE(0) command in the database context, as sketched below, will you see messages like the following indicating that all the inaccuracies were fixed.
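A minimal sketch, assuming the database is the MyDatabase mentioned above:

USE MyDatabase
GO
-- 0 means the current database; WITH COUNT_ROWS can be added to correct row counts as well
DBCC UPDATEUSAGE(0)
GO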
DBCC UPDATEUSAGE: Usage counts updated for table 'SchemaName.ObjectName' (index 'ObjectName' , partition 1):

        RSVD pages (In-row Data): changed from (-275) to (145) pages.

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Finally, we are now able to run DBCC CHECKDB again without any error. The recommendation is that, as soon as you have upgraded your databases from SQL Server 2000 to a later version, you should execute DBCC UPDATEUSAGE so as to avoid these errors (or more critical issues). Thanks for reading!

Saturday 16 January 2016

Error: “The local node is not able to communicate with the WSFC cluster” – AlwaysOn Availability Group

At times, after having an AlwaysOn solution implemented, the WSFC Cluster service may not only go down but also become corrupted for whatever reason. As a result, we will need to reinstall everything. The big question now is how to uninstall and delete the Availability Group correctly if the WSFC service is not running; to make matters worse, we have no option other than forcibly deleting the Availability Groups and, therefore, ending up with a dirty and incomplete uninstallation.
After deleting the Availability Group, we will need to remove the nodes from the WSFC Cluster, and this can be done by uninstalling the WSFC Failover Clustering feature (instead of destroying the Windows Cluster, which is not possible in this case). Having done that, we have to create the Windows Cluster again, adding each node to it. Finally, we will be able to set up the AlwaysOn Availability Group. Ideally, this is the normal procedure.


Nevertheless, we cannot always follow the standard procedure, as each case is particular (and this one really is). It would not be a great surprise to get this error while trying to set up the AlwaysOn Availability Group:



Obviously, this error is due to the incorrect uninstallation we did above and, luckily, to fix it we have to disable the AlwaysOn Availability Groups feature at the database engine service level.



Now we have to enable it again and then restart the database engine service (the whole disable/enable cycle can also be scripted, as sketched below).
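A minimal PowerShell sketch of that disable/enable cycle, assuming a hypothetical named instance and the SqlServer (or SQLPS) module; both cmdlets change the service-level setting and the database engine service must be restarted for the change to take effect:

Import-Module SqlServer
Disable-SqlAlwaysOn -ServerInstance "MYSERVER\MYINSTANCE" -Force   # turn the feature off at service level
Enable-SqlAlwaysOn -ServerInstance "MYSERVER\MYINSTANCE" -Force    # turn it back on
Restart-Service -Name 'MSSQL$MYINSTANCE' -Force                    # restart the engine service if the cmdlets did not already do so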



Not until you have done all these steps will you be able to create the Availability Groups you want again. Thanks for reading!

 

Wednesday 4 February 2015

SQL Server Index Report with Included Columns, Storage and more for all Tables in a Database

Today my new tip, “SQL Server Index Report with Included Columns, Storage and more for all Tables in a Database”, has been published at MSSQLTips.com. I invite you to read it here: http://www.mssqltips.com/sqlservertip/3450/sql-server-index-report-with-included-columns-storage-and-more-for-all-tables-in-a-database/
Let me know if you have any question or comment. Thanks for reading!

Wednesday 28 January 2015

Using the SQL Server Default Trace to Audit Events

My new tip, “Using the SQL Server Default Trace to Audit Events”, has been published; you can read it at http://www.mssqltips.com/sqlservertip/3445/using-the-sql-server-default-trace-to-audit-events/.
Let me know if you have any comment or question. Thanks for reading!
