SQL Server Always On Availability Groups enable us to route read-only workloads to secondary replicas automatically in different ways. For instance, to connect .NET applications to secondary replicas, we must include "ApplicationIntent=ReadOnly" in the connection string, in addition to the AG Listener virtual name. More specifically, after configuring the Read-Only Routing list, this would be the connection string for an Availability Group with a Listener called "AGLISTENER" (these examples use Windows Authentication).
Server=tcp:AGLISTENER,1433;Integrated Security=SSPI;ApplicationIntent=ReadOnly;MultiSubnetFailover=True
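In case it helps, configuring the Read-Only Routing list mentioned above could look roughly like the following sketch, assuming an Availability Group named AG1 with two replicas called SQLNODE1 and SQLNODE2 (the AG name, server names, domain, and port are placeholders to adjust to your own environment):
-- Allow read-intent connections and set the routing URL on each replica for its secondary role
ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'SQLNODE1' WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY, READ_ONLY_ROUTING_URL = N'TCP://SQLNODE1.yourdomain.com:1433'));
ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'SQLNODE2' WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY, READ_ONLY_ROUTING_URL = N'TCP://SQLNODE2.yourdomain.com:1433'));
-- Define where read-only requests are routed while each replica acts as the primary
ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'SQLNODE1' WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (N'SQLNODE2')));
ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'SQLNODE2' WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (N'SQLNODE1')));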
But that is not all: it is also possible to establish a connection to secondary replicas via the SQLCMD tool by using the new -K parameter (and optionally -M for faster detection of, and connection to, the currently active server). If -K is not specified, the sqlcmd utility will not support connectivity to a secondary replica. So, this would be a simple example of how to use SQLCMD with the -K and -M parameters. It couldn't be simpler.
Sqlcmd -S AGLISTENER -E -K ReadOnly -M
And last but not least, we can also do it from the SSMS Connect to Server dialog by adding "ApplicationIntent=ReadOnly" in the "Additional Connection Parameters" tab. I also wrote another post about it with more details; take a look at it.
To sum up, these are the basic ways to try out read-intent connections to secondary replicas: the first one is ideal for developers, whereas the second and third are more useful if you are working as a DBA.
That is all for now. Let me know any remarks you may have. Stay tuned.
Saturday, 16 December 2017
Implementing dynamic PIVOT in SQL Server
It is well known that PIVOT is one of the features introduced in SQL Server 2005 which allows us to convert rows into columns. But what happens if we want to make it dynamic, going beyond a fixed number of columns? Today in this post I am going to show you how to do it (it applies to SQL Server 2005 through SQL Server 2017). To begin with, we are going to use the following example to analyse some data, which will be pivoted shortly after.
USE AdventureWorks
GO
SELECT CustomerID,YEAR(DueDate) [Year], TotalDue FROM Sales.SalesOrderHeader ORDER BY CustomerID
According to the result set, there are many sales per customer between the years 2001 and 2004.
In order to pivot 'TotalDue' per 'Year', we need to indicate each year in the PIVOT clause, which means we also need to know beforehand which years will be taken into account. For instance, this query will pivot 'TotalDue' for the years 2001, 2002, 2003, and 2004.
SELECT CustomerID, [2001] AS '2001', [2002] AS '2002', [2003] AS '2003', [2004] AS '2004' FROM ( SELECT CustomerID,YEAR(DueDate) [Year], TotalDue FROM Sales.SalesOrderHeader ) pvt PIVOT (SUM(TotalDue) FOR [Year] IN ([2001],[2002],[2003],[2004])) AS Child ORDER BY CustomerID
Having successfully executed the query, we will get the following result:
Up to now, everything seems to be perfect. Nevertheless, what happens if we want to pivot across many more years? Obviously, we would face a big limitation at first glance, since we would need to add the years manually inside the query, and that does not make sense for a real business case. In other words, PIVOT is not scalable because it is not 'dynamic' by design. Luckily, the purpose of this post is to show how to implement an algorithm to simulate a dynamic PIVOT in SQL Server by using the native PIVOT clause and sp_executesql.
The algorithm is quite simple and does not deserve much explanation: it builds the list of years dynamically, creates the query that pivots 'TotalDue' with all those years inside, and finally executes the resulting code via sp_executesql.
DECLARE @TableYears AS TABLE([Year] INT NOT NULL) DECLARE @Year INT, @YearsPVT NVARCHAR(MAX) INSERT INTO @TableYears SELECT DISTINCT YEAR(DueDate) AS [Year] FROM Sales.SalesOrderHeader SET @Year = (SELECT MIN([Year]) FROM @TableYears) SET @YearsPVT=N'' WHILE @Year IS NOT NULL BEGIN SET @YearsPVT = @YearsPVT + N',['+ CONVERT(NVARCHAR(10),@Year) + N']' SET @Year = (SELECT MIN([Year]) FROM @TableYears WHERE [Year]>@Year) END SET @YearsPVT = SUBSTRING(@YearsPVT,2,LEN(@YearsPVT)) PRINT @YearsPVT DECLARE @SQL NVARCHAR(MAX) SET @SQL = N'SELECT * FROM ( SELECT CustomerID,YEAR(DueDate) [Year], TotalDue FROM Sales.SalesOrderHeader ) pvt PIVOT (SUM(TotalDue) FOR [Year] IN (' + @YearsPVT + ')) AS Child ORDER by CustomerID' EXECUTE sp_executesql @SQL
Now I am going to illustrate another example by using data from the Northwind database.
SELECT P.ProductID, C.CategoryName, OD.UnitPrice * OD.Quantity AS TotalAmount FROM Products P INNER JOIN dbo.[Order Details] OD ON P.ProductID=OD.ProductID INNER JOIN Categories C ON C.CategoryID=P.CategoryID
Here is the simple result without PIVOT.
Using PIVOT in its old-fashioned way:
SELECT ProductID, [Beverages], [Condiments], [Confections], [Dairy Products], [Grains/Cereals], [Meat/Poultry],[Produce],[Seafood] FROM ( SELECT P.ProductID, C.CategoryName, (OD.UnitPrice * OD.Quantity) AS Monto FROM Products P INNER JOIN dbo.[Order Details] OD ON P.ProductID=OD.ProductID INNER JOIN Categories C on C.CategoryID=P.CategoryID ) PIV PIVOT (SUM(Monto) FOR CategoryName IN ([Beverages], [Condiments], [Confections], [Dairy Products], [Grains/Cereals], [Meat/Poultry],[Produce],[Seafood])) AS Child
Having executed the code above, we will get this result pivoted.
Finally, using dynamic PIVOT, the result will be the same.
DECLARE @CatPVT AS NVARCHAR(MAX), @Categorias AS VARCHAR(20) DECLARE @CatID INT SET @CatID=(SELECT MIN(CategoryID) FROM Categories) SET @Categorias = ( SELECT CategoryName FROM Categories WHERE CategoryID = @CatID) SET @CatPVT = N'' WHILE @Categorias IS NOT NULL BEGIN SET @CatPVT = @CatPVT + N',['+ @Categorias +N']' SET @Categorias = (SELECT TOP(1) CategoryName FROM Categories WHERE CategoryID > @CatID ORDER BY CategoryID ASC) SET @CatID=(SELECT MIN(CategoryID) FROM Categories WHERE Categoryname=@Categorias) END print @CatPVT SET @CatPVT = SUBSTRING(@CatPVT, 2, LEN(@CatPVT)) DECLARE @sql AS NVARCHAR(MAX) SET @sql = N'SELECT * FROM (SELECT P.ProductID, C.CategoryName, (OD.UnitPrice * OD.Quantity) AS Monto FROM Products P INNER JOIN dbo.[Order Details] OD ON P.ProductID=OD.ProductID INNER JOIN Categories C ON C.CategoryID=P.CategoryID ) PIV PIVOT (SUM(Monto) FOR CategoryName IN ('+ @CatPVT + ')) AS Child' EXEC sp_executesql @sql
As you have seen, dynamic PIVOT is truly useful for real business cases. Therefore, I hope you make the most out of this algorithm. That is all for now, let me know any remarks you may have. Thanks for reading. Stay tuned.
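As a final side note, if you are already on SQL Server 2017 or later, the list of pivoted values can be built without the WHILE loop by using STRING_AGG. This is only a sketch of that idea, equivalent to the year-list construction shown above:
-- Build the comma-separated, bracketed list of years in a single statement (SQL Server 2017+)
DECLARE @YearsPVT NVARCHAR(MAX)
SELECT @YearsPVT = STRING_AGG(QUOTENAME(CONVERT(NVARCHAR(10), [Year])), N',') WITHIN GROUP (ORDER BY [Year])
FROM (SELECT DISTINCT YEAR(DueDate) AS [Year] FROM Sales.SalesOrderHeader) AS y
PRINT @YearsPVT -- e.g. [2001],[2002],[2003],[2004], ready to be concatenated into the dynamic query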
Saturday, 2 December 2017
Memory + CPU support for all SQL Server versions and editions
It is by no means uncommon to think that every DBA should have a list like the following one in order to be clear about some aspects before installing or implementing new database environments. It is a compact summary of what the different versions and editions of SQL Server support in terms of memory and CPU resources. Consequently, I share my summary with you, and I hope you find it useful. Let me know any remarks you may have.
SQL Server 2016/2017
SQL Server edition | Maximum memory supported | Maximum Compute Capacity
Enterprise | OS max | OS max
Standard | 128 GB | 4 CPU or 24 cores
Web | 64 GB | 4 CPU or 16 cores
Express | 1410 MB | 1 CPU or 4 cores
Express with Advanced Services | 1410 MB | 1 CPU or 4 cores

SQL Server 2014
SQL Server edition | Maximum memory supported | Maximum Compute Capacity
Enterprise | OS max | OS max
Business Intelligence | 128 GB | 4 CPU or 16 cores
Standard | 128 GB | 4 CPU or 16 cores
Web | 64 GB | 4 CPU or 16 cores
Express | 1 GB | 1 CPU or 4 cores
Express with Tools | 1 GB | 1 CPU or 4 cores
Express with Advanced Services | 1 GB | 1 CPU or 4 cores

SQL Server 2012
SQL Server edition | Maximum memory supported | Maximum Compute Capacity
Enterprise | OS max | OS max
Business Intelligence | 64 GB | 4 CPU or 16 cores
Standard | 64 GB | 4 CPU or 16 cores
Web | 64 GB | 4 CPU or 16 cores
Express | 1 GB | 1 CPU or 4 cores
Express with Tools | 1 GB | 1 CPU or 4 cores
Express with Advanced Services | 1 GB | 1 CPU or 4 cores

SQL Server 2008 R2
SQL Server edition | Maximum memory supported | Maximum Compute Capacity
Datacenter | OS max | OS max (64 CPU or 256 logical)
Enterprise | 2 TB | 8 (or 256 logical CPU)
Developer | OS max | OS max
Standard | 64 GB | 4
Web | 64 GB | 4
Workgroup | 4 GB (64-bit), OS max (32-bit) | 2
Express | 1 GB | 1
Express with Tools | 1 GB | 1
Express with Advanced Services | 1 GB (4 GB for RS) | 1

SQL Server 2008
SQL Server edition | Maximum memory supported | Maximum Compute Capacity
Enterprise | OS max | OS max (8 CPU or 64 logical CPU)
Developer | OS max | OS max
Standard | OS max | 4
Web | OS max | 4
Workgroup | 4 GB (64-bit), OS max (32-bit) | 2
Express | 1 GB | 1
Express with Tools | 1 GB | 1
Express with Advanced Services | 1 GB | 1

SQL Server 2005
SQL Server edition | Maximum memory supported | Maximum Compute Capacity
Enterprise Edition | OS max | OS max
Developer Edition | OS max (32-bit), 32 TB (64-bit) | OS max
Standard Edition | OS max, 32 TB (64-bit) | 4
Workgroup Edition | 3 GB (32-bit) | 2 (32-bit)
Express Edition | 1 GB (32-bit) | 1 (32-bit)
Evaluation Edition | OS max | OS max
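As a quick complement to the table, the following query shows the edition and version an existing instance is running together with the CPU and memory the operating system presents to it; it is only a sketch, and the physical_memory_kb column applies to SQL Server 2012 and later:
-- Edition, version and hardware visible to the instance
SELECT SERVERPROPERTY('Edition') AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion,
       cpu_count AS LogicalCPUs,
       physical_memory_kb / 1024 AS PhysicalMemoryMB
FROM sys.dm_os_sys_info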
Friday, 1 December 2017
SQLServer Buffer Manager: Buffer cache hit ratio, Page Life Expectancy, Lazy Writes/sec, Checkpoint Pages/sec
Without any shadow of a doubt, one of the procedures to follow in order to monitor and diagnose performance issues on a database server is to analyse the meaningful performance counters related to SQL Server. As Database Administrators, we need to get used to dealing with these sorts of problems, so it is truly helpful to know how to analyse and interpret them by contrasting them with other Windows performance counters.
To begin with, we can analyse the SQLServer: Buffer Manager: Buffer cache hit ratio performance counter, which gives us the percentage of page requests satisfied from the buffer cache without having to read from disk. This percentage should mostly be above 99% for OLTP database servers, which is why nowadays 4 GB or 8 GB of RAM is not good enough for the vast majority of them. The knock-on effect of this lack of RAM is a huge increase in disk I/O activity as a result of much more paging (data pages are moved in and out of memory very frequently because there is not enough space to hold new ones). This will also cause a long disk queue which will keep the disk subsystem busy and, consequently, will have a direct and detrimental impact on database performance. In this context, we also need to look into other SQL Server performance counters such as SQLServer: Buffer Manager: Page Life Expectancy, SQLServer: Buffer Manager: Lazy Writes/sec, and SQLServer: Buffer Manager: Checkpoint Pages/sec.
As I mentioned before, there are other useful performance counters to diagnose and tune cache memory issues, such as SQLServer: Buffer Manager: Page Life Expectancy, which indicates the time in seconds that a data page is expected to stay in memory. This value should mostly be above 300 seconds, that is, 5 minutes, so values below that should be treated as an alert: SQL Server is under high memory pressure, quite possibly because badly written queries are consuming a lot of memory. Clearly, the first step is not to add more memory but to identify and optimise those problematic queries; only after that should you opt to add memory.
The other performance counter to check is SQLServer: Buffer Manager: Lazy Writes/sec, which gives the number of pages flushed out of memory per second by the lazy writer. A high value could mean not only a lack of memory but also copious amounts of checkpoint activity happening at the same time. The value of this counter should mostly be below 20, so if you notice it consistently above 20, you may need to check the SQLServer: Buffer Manager: Checkpoint Pages/sec counter as well, which reports how many dirty pages checkpoints are writing from memory to disk per second.
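If you want to take a quick look at the current values of these counters straight from T-SQL, a simple query against sys.dm_os_performance_counters can do it; this is just a sketch, and remember that 'Buffer cache hit ratio' must be divided by its base counter to get the real percentage:
-- Current values of the Buffer Manager counters discussed above
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
  AND counter_name IN ('Buffer cache hit ratio', 'Buffer cache hit ratio base',
                       'Page life expectancy', 'Lazy writes/sec', 'Checkpoint pages/sec')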
Finally, keep in mind that it is really important to carry out these diagnostic tasks regularly, which allows us to take preventive actions and improve overall performance over time. I do hope you find this post helpful. Let me know any remarks you may have. Thanks for reading. Stay tuned.
Categories:
DBA,
Performance Tuning,
Recompilation
Thursday, 23 November 2017
Dropping column statistics after migrating to SQL Server 2014/2016/2017
While migrating databases to new versions of SQL Server, there are some tasks that need to be carried out in order to guarantee the consistency and physical integrity of the databases as well as their performance. For instance, it is common to run DBCC CHECKDB, rebuild all indexes, and update column and index statistics. In some scenarios, we might consider dropping column statistics and letting SQL Server create them again according to the new algorithms, especially when the new cardinality estimator (CE) is going to be used after upgrading to SQL Server 2014/2016/2017. I personally do that: after restoring the databases on the new version of SQL Server, I proceed to drop all column statistics (always having the AUTO_CREATE_STATISTICS option enabled), and then SQL Server will create them again based on the nature of the database queries and the logic of the new CE. From a long-term perspective this technique is, in my view, the most advisable, as not only will we get fresh versions of the statistics, but we will also purge old and unused ones.
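Because this approach relies on AUTO_CREATE_STATISTICS being enabled, it is worth double-checking that setting first. A minimal sketch, where 'YourDatabase' is a placeholder for the database being migrated:
-- Check whether AUTO_CREATE_STATISTICS is enabled (1 = enabled)
SELECT name, is_auto_create_stats_on FROM sys.databases WHERE name = 'YourDatabase'
-- Enable it if needed
ALTER DATABASE [YourDatabase] SET AUTO_CREATE_STATISTICS ON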
Here I will share a script to delete column statistics. Be cautious and only drop them if you have the AUTO_CREATE_STATISTICS option enabled; otherwise, no statistics will be created and the database performance will be affected tremendously. That is all for now. Let me know any remarks you may have.
SET NOCOUNT ON IF db_name() NOT IN ('model','master','distribution','msdb','tempdb') BEGIN DECLARE @schema_name varchar(max) DECLARE @table_name varchar(max) DECLARE @stat_name varchar(max) DECLARE @update_stat_cmd varchar(max) DECLARE @update_stat_msg_header varchar(max) DECLARE update_stat_cursor CURSOR FOR select schema_name(o.[schema_id]), object_name(s1.[object_id]) , s1.name from ( select s.[object_id], s.name from sys.stats s left join sys.indexes i on s.name=i.name where i.name is null) s1 inner join sys.objects o on o.[object_id]=s1.[object_id] where o.type='U' order by schema_name(o.[schema_id]), object_name(s1.[object_id]) , s1.name OPEN update_stat_cursor FETCH NEXT FROM update_stat_cursor INTO @schema_name, @table_name, @stat_name WHILE (@@fetch_status = 0) BEGIN DECLARE @ini DATETIME, @fin DATETIME SET @update_stat_msg_header = '->Dropping ['+ RTRIM(@schema_name) +'].[' + RTRIM(@table_name) + '].[' +@stat_name + ']' PRINT @update_stat_msg_header SET @update_stat_cmd ='DROP STATISTICS ['+ RTRIM(@schema_name) +'].[' + RTRIM(@table_name) + '].[' +@stat_name + ']' SET @ini=GETDATE() EXEC (@update_stat_cmd) SET @fin=GETDATE() FETCH NEXT FROM update_stat_cursor INTO @schema_name, @table_name, @stat_name END PRINT ' ' PRINT '----------------------------------------------------------------------------- ' SET @update_stat_msg_header = '************* THERE ARE NO MORE STATISTICS TO BE UPDATED **************' PRINT @update_stat_msg_header PRINT ' ' PRINT 'All statistics not linked to any index were rebuilt!' CLOSE update_stat_cursor DEALLOCATE update_stat_cursor END SET NOCOUNT OFF
Categories:
DBA,
Indexes,
Performance Tuning,
Statistics
Saturday, 11 November 2017
How to migrate SQL Server aliases easily
Definitely, in my daily DBA life I have had to complete the migration of hundreds of SQL Server aliases many times without wasting much time. DBAs always need to carry out administrative tasks quickly and easily, and in that spirit today I am going to share with you a technique for migrating SQL Server aliases.
The technique is to use the Export/Import option of the system registry. Be cautious and do not try to modify anything else. On an x64 system, all the keys of the SQL Server aliases can be found under the following path:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo
And if you have 32-bit SQL Server on an x64 system, then the keys are found here:
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo
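If you prefer to script this step instead of using the Registry Editor, the same branch can be exported and imported with the reg.exe command-line tool from an elevated command prompt. This is only a sketch using the x64 path above (use the Wow6432Node path for 32-bit aliases, and note that C:\Temp\SQLAliases.reg is just an example file name):
rem On the source server: export the aliases branch to a .reg file
reg export "HKLM\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo" C:\Temp\SQLAliases.reg /y
rem On the destination server: import the exported file into the registry
reg import C:\Temp\SQLAliases.reg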
Now, once the path is found, we need to navigate to it in the Registry Editor and right-click it to select the Export option, which exports the branch of aliases to a registry file (.reg). Finally, copy the file to the new server and then, using the File -> Import option, import it into the registry of the new database server. Once imported, you can check the migrated aliases by using the SQL Server Configuration Manager tool. That is all for now. Let me know any remarks you may have.
Friday, 3 November 2017
AUTO_CLOSE database option and its impact on the performance
The AUTO_CLOSE database option is only one of the many options related to the performance and availability of a database in SQL Server. When AUTO_CLOSE is set to ON for a database, SQL Server closes all its files and releases the resources used for it shortly after the last connection is closed. This action reduces memory usage, but the saving is insignificant (around 12 KB or 20 KB). Furthermore, with this option turned on, the first connection will have to open the database again and, as a result, it will experience a delay. I highly recommend keeping this option disabled all the time. Here is the code to turn it off:
ALTER DATABASE [UserDBInProduction] SET AUTO_CLOSE OFF WITH NO_WAIT
This option is turned off by default, but I have found many databases with it turned on, which also impacts their performance. The other disadvantage of having it enabled is that when the last connection to the database is closed, its files become accessible for direct manipulation via Windows, which means that someone is perfectly able to delete them while the database engine is running. So, we need to work with a lot of caution when it comes to changing not only this database option but also others.
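By the way, a quick way to spot databases that currently have this option enabled across the whole instance is to query sys.databases; a minimal sketch:
-- List databases where AUTO_CLOSE is enabled
SELECT name FROM sys.databases WHERE is_auto_close_on = 1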
That is all for now, let me know any remarks you may have. Thanks for reading again. Stay tuned.
Monday, 16 October 2017
Getting information about referenced and referencing tables
At times we need to carry out tasks that require figuring out certain information about references among database objects. Today's post shows an example for tables by using T-SQL. There are two SQL Server system views we will use to query that information: 'sys.objects' and 'sys.sysreferences'. The view 'sys.objects' contains a row for each user-defined, schema-scoped object created in the database, and 'sys.sysreferences' has the following important columns, which give us the object IDs of the referenced and referencing objects:
- rkeyid: contains the ID of the object which is being referenced.
- fkeyid: contains the ID of the object which is referencing.
For instance, this query returns the tables referenced by the 'Product' table:
SELECT S.[name] AS 'Referenced Table' FROM sys.objects S INNER JOIN sys.sysreferences R ON S.OBJECT_ID = R.rkeyid WHERE S.[type] = 'U' AND R.fkeyid = OBJECT_ID('[Production].[Product]')
Having done that, we can also figure out which tables are referencing the 'Product' table:
SELECT S.[name] AS 'Referencing Table' FROM sys.objects S INNER JOIN sys.sysreferences R ON S.OBJECT_ID = R.fkeyid WHERE S.[type] = 'U' AND R.rkeyid = OBJECT_ID('[Production].[Product]')
As I said earlier, not only can we use those views for tables, but also for other objects like functions, stored procedures, views, etc. I hope you can make the most out of this tip. Let me know any remarks you may have. Stay tuned.
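As a side note, since sys.sysreferences is a compatibility view, the same information can also be obtained from the sys.foreign_keys catalog view on modern versions; here is a minimal sketch covering both directions for the 'Product' table:
-- Foreign keys where Production.Product is either the referencing or the referenced table
SELECT fk.name AS ForeignKeyName,
       OBJECT_SCHEMA_NAME(fk.parent_object_id) + '.' + OBJECT_NAME(fk.parent_object_id) AS ReferencingTable,
       OBJECT_SCHEMA_NAME(fk.referenced_object_id) + '.' + OBJECT_NAME(fk.referenced_object_id) AS ReferencedTable
FROM sys.foreign_keys fk
WHERE fk.parent_object_id = OBJECT_ID('[Production].[Product]')
   OR fk.referenced_object_id = OBJECT_ID('[Production].[Product]')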
Friday, 13 October 2017
Purging old backup files by using forfiles windows tool
It is well known that most backup strategies include a step to purge old backup files, keeping only the most recent backups on the database server so that disk space is used properly. It is of paramount importance to schedule this task inside a SQL job in order to avoid running out of space. Today I am going to share a script to do that which uses the forfiles Windows tool via xp_cmdshell. The script is wrapped in a stored procedure which takes some input parameters such as the database name, backup type, drive, and retention days.
The logic deletes old backup files located on a path following the pattern '<Drive>:\<BackupDirectory>\<DatabaseName>\<BackupType>'. For instance, if we want to delete Full + Diff + Log backup files of the database 'MyDB' older than one week, and supposing those backup files are located on drive 'G', then the full path would be 'G:\SQLBackup\MyDB\Full' for full backups, 'G:\SQLBackup\MyDB\Diff' for differential backups, and 'G:\SQLBackup\MyDB\Log' for log backups. So, using the following stored procedure and according to the example above, we should execute it with the following parameters:
That is all for now. Let me know any remarks you may have. Stay tuned.
USE [master]
GO
CREATE PROCEDURE [dbo].[sp_DBA_Backup_FilesCleanup] (
    @DatabaseName VARCHAR(200),
    @BackupType VARCHAR(100),
    @DriveName VARCHAR(1),
    @RetentionDays VARCHAR(4))
WITH ENCRYPTION
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @strcmd VARCHAR(4000)
    DECLARE @directory VARCHAR(4000)
    -- Build the path '<Drive>:\SQLBackup\<DatabaseName>\<BackupType>' described above
    SET @directory = @DriveName + ':\SQLBackup\' + @DatabaseName + '\' + @BackupType
    -- forfiles deletes every file under that path older than @RetentionDays days
    SET @strcmd = 'forfiles /p "' + @directory + '" /s /d -' + @RetentionDays + ' /c "cmd /c del /q @path"'
    -- print @strcmd
    EXEC master.dbo.xp_cmdshell @strcmd
    SET NOCOUNT OFF
END
GO
USE [master]
GO
EXEC dbo.sp_DBA_Backup_FilesCleanup @DatabaseName='MyDB', @BackupType='FULL', @DriveName='G', @RetentionDays='7'
GO
EXEC dbo.sp_DBA_Backup_FilesCleanup @DatabaseName='MyDB', @BackupType='Diff', @DriveName='G', @RetentionDays='7'
GO
EXEC dbo.sp_DBA_Backup_FilesCleanup @DatabaseName='MyDB', @BackupType='Log', @DriveName='G', @RetentionDays='7'
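Just to make it clearer what ends up being executed, with the first set of parameters above the command passed to xp_cmdshell would look roughly like this (a sketch assuming the path pattern described earlier):
forfiles /p "G:\SQLBackup\MyDB\Full" /s /d -7 /c "cmd /c del /q @path"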
Categories:
Backup/Restore,
DBA,
Tools,
Windows
Monday, 2 October 2017
Getting useful information of data & log files for all databases
It is a common DBA task to check the unused space of all database files in order to decide whether to extend their sizes and provide them with more disk space before they run out of it. Today I have two scripts I would like to share with you to get that information easily. The first one reports some very important information about every data and log file of all databases, such as name, file size in GB, space used in GB, free space in GB and as a percentage, and growth settings, together with other information at the database level such as collation, compatibility level, owner, and more.
The second one only returns the size in GB, space used in GB, and free space in GB of all data and log files for a specific database. Here are both scripts:
USE master
GO
IF EXISTS (SELECT name FROM master.sys.tables WHERE name='TmpFileSpace') DROP TABLE TmpFileSpace
CREATE TABLE [dbo].[TmpFileSpace](
 [DatabaseName] [nvarchar](128) NULL, [FileName] [sysname] NOT NULL,
 [FileSizeGB] [decimal](10, 2) NULL, [SpaceUsedGB] [decimal](10, 1) NULL,
 [SpaceFreeGB] [decimal](10, 1) NULL, [SpaceFree%] [decimal](10, 1) NULL ) ON [PRIMARY]
EXEC sp_MSforeachdb ' USE [?] INSERT INTO master.dbo.[TmpFileSpace]([DatabaseName], [FileName], [FileSizeGB], [SpaceUsedGB], [SpaceFreeGB], [SpaceFree%]) SELECT DB_NAME() DatabaseName, name FileName, CAST(size/128.0/1024.0 AS DECIMAL(10,2)) SizeGB, CAST(FILEPROPERTY(name,''SpaceUsed'') /128.0/1024.0 AS DECIMAL(10,1)) SpaceUsedGB, CAST((size - FILEPROPERTY(name,''SpaceUsed'')) /128.0/1024.0 AS DECIMAL(10,1)) SpaceFreeGB, CAST(((size - FILEPROPERTY(name,''SpaceUsed''))/(size*1.0)) *100 AS DECIMAL(10,1)) [SpaceFree%] FROM sys.database_files'
SELECT db.name DatabaseName, db.collation_name, db.compatibility_level, SUSER_SNAME(owner_sid) OwnerName, db.page_verify_option_desc, db.is_auto_close_on, db.is_auto_create_stats_on, db.is_auto_shrink_on, db.is_auto_update_stats_on, db.is_auto_update_stats_async_on, fs.FileName, fs.FileSizeGB, fs.SpaceUsedGB, fs.SpaceFreeGB, fs.[SpaceFree%], physical_name, cast(size/128.0/1024.0 as decimal(10,2)) FileSizeGB, db.state_desc, max_size, growth, is_percent_growth FROM sys.master_files mf INNER JOIN sys.databases db ON mf.database_id = db.database_id INNER JOIN TmpFileSpace FS ON mf.database_id=db_id(Fs.DatabaseName) AND mf.name=fs.FileName
DROP TABLE [TmpFileSpace]
USE [YourDatabaseName] SELECT DB_NAME() DatabaseName, name FileName, CAST(size/128.0/1024.0 AS DECIMAL(10,2)) SizeGB, CAST(FILEPROPERTY(name,'SpaceUsed') /128.0/1024.0 AS DECIMAL(10,1)) SpaceUsedGB, CAST((size - FILEPROPERTY(name,'SpaceUsed') ) /128.0/1024.0 AS DECIMAL(10,1)) SpaceFreeGB, CAST(((size - FILEPROPERTY(name,'SpaceUsed') )/(size*1.0)) *100 AS DECIMAL(10,1)) [SpaceFree%] FROM SYS.database_files
That is all for now. Let me know any remarks you may have.
HELLO, I'M PERCY REYES! I've been working as a senior SQL Server Database Engineer for over 20 years; I'm a three-time Microsoft Data Platform MVP. I'm a cryptographer conducting research on cryptographic Boolean functions and their applications.