Sunday, 21 November 2021

Missing SQL Server Configuration Manager (SSCM) from the Start Menu

 Have you encountered an issue where the SQL Server Configuration Manager console is not available among the other SQL Server-related start menu applications?

This happened to me after upgrading from Windows 10 to Windows 11.

I have two instances installed on my PC (2016 and 2019), but SSCM is missing from the Start menu for both of them.

But it's still available, and you can access it in any of the ways shown below:

1. Directly from the directory.

The relevant management console snap-in (.msc) file can be found in the C:\Windows\System32 folder, so you can browse to it and open the application directly. The file name depends on the SQL Server version; for example, SQLServerManager13.msc for SQL Server 2016 and SQLServerManager15.msc for SQL Server 2019.

2. Accessing via Computer Management Console.

SSCM is also available in the Computer Management console, under "Services and Applications".

3. Using Microsoft Management Console.

Before you can use SSCM this way, you need to add it to MMC. Open the Microsoft Management Console by running "mmc" from the Run dialog.

Once the console is open, add SQL Server Configuration Manager using File > Add/Remove Snap-in.

Hope this will be helpful for you.

Thursday, 4 November 2021

Full Backup Vs. Differential Backup Vs. Log Backup

When it comes to backing up SQL Server databases, it's important to know which backup types are available and which you should choose. To decide, you first need to understand what SQL Server includes in each of these backup files.

If your database uses the full recovery model, you can choose whether to back up the data file, the log file, or both.

In SSMS you will be offered these backup types: full, differential, and transaction log.

To illustrate, we will consider a hypothetical database that is backed up hourly.

Full Backup


As the name implies, a full backup includes all of the data every time you back up the database.

Consider a scenario where we take a full backup every hour.

Full Backup #01 will contain all the data which was there since the beginning.
Full Backup #02 will contain all the data including the changes that happened between 8:00am - 9:00am
Full Backup #03 will contain all the data including the changes that happened between 9:00am - 10:00am

The backup files keep growing in size, since each one contains everything. You need only one backup file to restore the database, and it will bring the database to the state (point in time) at which the backup was initiated.
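
A minimal T-SQL sketch of taking a full backup (the database name and the backup path are placeholders, not from the original post):

BACKUP DATABASE [YourDBName]
TO DISK = N'D:\_SQL_BACKUP\YourDBName_Full.bak'
WITH INIT, NAME = N'YourDBName full backup';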

Differential Backup

Unlike a full backup, a differential backup only contains the changes made since the last full backup.



Full Backup #01 will contain all the data which was there since the beginning.
Differential Backup #01 will contain only the data changes that happened between 8:00am - 9:00am
Differential Backup #02 will contain only the data changes that happened between 8:00am - 10:00am

A differential backup is comparatively smaller than a full backup, since it only contains changes made since the last full backup. To restore the database you need both the full backup and the most recent differential backup.
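
A sketch of the corresponding differential backup command (same placeholder names as above):

BACKUP DATABASE [YourDBName]
TO DISK = N'D:\_SQL_BACKUP\YourDBName_Diff.bak'
WITH DIFFERENTIAL;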

Log Backup


A log backup only backs up the transaction log; it does not contain any data pages from the data file. A log backup always contains the log records generated since the last log backup.

Consider a similar hourly schedule. Even though the database is fully backed up at 8:00am, the first transaction log backup taken at 9:00am will contain log records since 7:00am (the previous log backup).

Full Backup #01 will contain details from both data and log file as of 8:00am
Log Backup #01 will only contain details from the transaction log between 7:00am - 9:00am 
Log Backup #02 will only contain details from the transaction log between 9:00am - 10:00am 

With log backups you can restore the database to any point in time of the day (in this case within the hourly blocks), but you need to retain the full backup and the complete chain of log backups in order to do so.
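
A sketch of taking a log backup and of a point-in-time restore using the backup chain (the database name, paths and the STOPAT time are placeholders, not from the original post):

BACKUP LOG [YourDBName]
TO DISK = N'D:\_SQL_BACKUP\YourDBName_Log_0900.trn';

-- Restore the full backup first, then the log backups, stopping at a point in time
RESTORE DATABASE [YourDBName]
FROM DISK = N'D:\_SQL_BACKUP\YourDBName_Full.bak' WITH NORECOVERY;

RESTORE LOG [YourDBName]
FROM DISK = N'D:\_SQL_BACKUP\YourDBName_Log_0900.trn'
WITH STOPAT = N'2021-11-04 08:30:00', RECOVERY;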





Monday, 1 November 2021

Removing Invalid Logins (Both Server and Database Principals) from SQL Server

Have you ever come across a situation where you need to remove SQL logins for users who have already left the organisation and are no longer valid?

This can be a very tedious task, since you need to remove the permissions from every database the user has access to before removing the SQL login. It is even more painful when the user owns a schema in a database: you then need to assign ownership of the schema to [dbo] before dropping the user from the database.

We will see how to achieve this step by step, and finally combine everything into one script that produces the clean-up script for the SQL Server.

1. First, we need to find the logins that SQL Server considers invalid. We can use the system stored procedure sys.sp_validatelogins for this.

CREATE TABLE #InvalidUsers(
	[SID] VARBINARY(85)
	,LOGIN_NAME SYSNAME
)

INSERT INTO #InvalidUsers (
	[SID]
	,LOGIN_NAME
)
EXEC sys.sp_validatelogins

2. Next, we need to iterate over each invalid user in all the databases on the server and remove them from those databases. We will generate a script and use it to drop the users. We need to consider the following during script generation:

- Whether the user owns a schema in the database (if so, we need to change the schema ownership to dbo)

- Whether the user owns the database itself. If so, we need to change the database ownership to a different login; in this example I will change the ownership to "sa".

We will insert the details into another temporary table. I will explain the script part by part, but once it is complete you can execute it as a whole; the completed script is provided at the end.

We will use another system stored procedure, sys.sp_MSforeachdb, to execute a query against every database on the SQL Server instance.
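
The snippet below assumes the #InvalidUserDbInfo temporary table already exists. Its definition, taken from the full script at the end, is:

CREATE TABLE #InvalidUserDbInfo(
	DBNAME NVARCHAR(128)
	,[SID] VARBINARY(85)
	,LOGIN_NAME SYSNAME
	,OWNED_SCH VARCHAR(128)
)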

DECLARE @LoginName AS sysname
	,@Sql AS NVARCHAR(2000)

DECLARE cur CURSOR LOCAL READ_ONLY FORWARD_ONLY FOR
	SELECT LOGIN_NAME FROM #InvalidUsers

OPEN cur

FETCH NEXT FROM cur INTO @LoginName
WHILE @@FETCH_STATUS = 0
BEGIN
	SET @Sql = CONCAT(
		'USE ?;INSERT INTO #InvalidUserDbInfo SELECT DB_NAME(),DP.sid,DP.name, S.[name] as OwnedSchema
		FROM sys.database_principals AS DP
		LEFT JOIN sys.schemas as S on S.principal_id = DP.principal_id
		WHERE [type] = ''U'' and DP.name = ''',@LoginName,''''
	)
	EXEC sys.sp_MSforeachdb @Sql
	FETCH NEXT FROM cur INTO @LoginName
END

CLOSE cur
DEALLOCATE cur

We will use the details populated into the table #InvalidUserDbInfo to generate a script to drop users from databases, change database ownerships and change schema ownerships.

DECLARE
	@LoginName AS sysname
	,@Sql AS NVARCHAR(2000)
	,@DBName AS NVARCHAR(128)
	,@DbNameCurrent AS NVARCHAR(128) = ''
	,@SqlSyntax AS NVARCHAR(MAX) = ''
	,@OwnedSchema AS VARCHAR(128)

DECLARE cur CURSOR LOCAL READ_ONLY FORWARD_ONLY FOR
	SELECT
		IUDI.DBNAME
		,IUDI.LOGIN_NAME
		,IUDI.OWNED_SCH
	FROM
		#InvalidUserDbInfo AS IUDI WHERE IUDI.DBNAME <> 'master' ORDER BY IUDI.DBNAME

OPEN cur

FETCH NEXT FROM cur INTO @DBName,@LoginName,@OwnedSchema

WHILE @@FETCH_STATUS = 0
BEGIN
	IF @DbNameCurrent <> @DBName
	BEGIN
		IF @DbNameCurrent <> ''
			SET @SqlSyntax += CONCAT(CHAR(13),'---------------------------------',CHAR(13),CHAR(13))

		SET @SqlSyntax += CONCAT('USE [',@DBName,'];',CHAR(13),CHAR(13))
		SET @DbNameCurrent = @DBName
	END

	IF (ISNULL(@OwnedSchema,'') <> '' AND ISNULL(@OwnedSchema,'') <> @LoginName)
	BEGIN
		SET @SqlSyntax += CONCAT('ALTER AUTHORIZATION ON SCHEMA::[',@OwnedSchema,'] TO [dbo]', CHAR(13))
	END

	SET @SqlSyntax += CONCAT('DROP USER IF EXISTS [',@LoginName,']',CHAR(13))

	FETCH NEXT FROM cur INTO @DBName,@LoginName,@OwnedSchema

END

CLOSE cur
DEALLOCATE cur

SELECT CAST('<![CDATA[' + CHAR(13) + @SqlSyntax + CHAR(13) + ']]>' AS XML) AS CleanupScript_Step_2

This will return a clickable result set; clicking the XML value opens the generated script to remove the users from the databases.




3. Finally, we handle database ownership and drop the logins from the SQL Server instance.

DECLARE
	@SqlSyntax AS NVARCHAR(MAX) = ''

SET @SqlSyntax += CONCAT(' --== Changing Database Ownership ==--',CHAR(13),CHAR(13),'USE master;',CHAR(13))

SELECT
	@SqlSyntax = @SqlSyntax + CHAR(13) + CONCAT('ALTER AUTHORIZATION ON DATABASE::[',S.[name],'] TO [sa]')
FROM
	sys.databases AS S
	JOIN #InvalidUsers AS U
		ON SUSER_SNAME(S.owner_sid) = U.LOGIN_NAME

SET @SqlSyntax+= CONCAT(CHAR(13),CHAR(13))

------------------------------------------------------------------------------------------

SELECT
	@SqlSyntax = @SqlSyntax + CHAR(13) +
	CONCAT('IF EXISTS (SELECT 0 FROM [sys].[server_principals] AS [SP] WHERE [SP].[name] = N'''
		,U.LOGIN_NAME
		,''')'
		,CHAR(13),CHAR(9)
		,'DROP LOGIN [',U.LOGIN_NAME,']',CHAR(13),CHAR(13)
	)
FROM
	[sys].[server_principals] AS [SP]
	JOIN #InvalidUsers AS U
		ON U.LOGIN_NAME = SP.[name]

SELECT CAST('<![CDATA[' + CHAR(13) + @SqlSyntax + CHAR(13) + ']]>' AS XML) AS CleanupScript_Step_3

This will generate a script containing the database ownership changes and the DROP LOGIN statements.

The following is the full script, combining all of the steps, which can be executed in one go:

CREATE TABLE #InvalidUsers(
	[SID] VARBINARY(85)
	,LOGIN_NAME SYSNAME
)

INSERT INTO #InvalidUsers (
	[SID]
	,LOGIN_NAME
)
EXEC sys.sp_validatelogins

------------------------------------------------------------------------------------------

CREATE TABLE #InvalidUserDbInfo(
	DBNAME NVARCHAR(128)
	,[SID] VARBINARY(85)
	,LOGIN_NAME SYSNAME
	,OWNED_SCH VARCHAR(128)
)

DECLARE
	@LoginName AS sysname
	,@Sql AS NVARCHAR(2000)
	,@DBName AS NVARCHAR(128)
	,@DbNameCurrent AS NVARCHAR(128) = ''
	,@SqlSyntax AS NVARCHAR(MAX) = ''
	,@OwnedSchema AS VARCHAR(128)

DECLARE cur CURSOR LOCAL READ_ONLY FORWARD_ONLY FOR
	SELECT LOGIN_NAME FROM #InvalidUsers

OPEN cur

FETCH NEXT FROM cur INTO @LoginName
WHILE @@FETCH_STATUS = 0
BEGIN
	SET @Sql = CONCAT(
		'USE ?;INSERT INTO #InvalidUserDbInfo
		SELECT DB_NAME(),DP.sid,DP.name, S.[name] as OwnedSchema
		FROM sys.database_principals AS DP
		LEFT JOIN sys.schemas as S on S.principal_id = DP.principal_id
		WHERE [type] = ''U'' and DP.name = ''',@LoginName,''''
	)
	EXEC sys.sp_MSforeachdb @Sql
	FETCH NEXT FROM cur INTO @LoginName
END

CLOSE cur
DEALLOCATE cur

------------------------------------------------------------------------------------------

DECLARE cur CURSOR LOCAL READ_ONLY FORWARD_ONLY FOR
	SELECT
		IUDI.DBNAME
		,IUDI.LOGIN_NAME
		,IUDI.OWNED_SCH
	FROM
		#InvalidUserDbInfo AS IUDI WHERE IUDI.DBNAME <> 'master' ORDER BY IUDI.DBNAME

OPEN cur

FETCH NEXT FROM cur INTO @DBName,@LoginName,@OwnedSchema

WHILE @@FETCH_STATUS = 0
BEGIN
	IF @DbNameCurrent <> @DBName
	BEGIN
		IF @DbNameCurrent <> ''
			SET @SqlSyntax += CONCAT(CHAR(13),'---------------------------------',CHAR(13),CHAR(13))

		SET @SqlSyntax += CONCAT('USE [',@DBName,'];',CHAR(13),CHAR(13))
		SET @DbNameCurrent = @DBName
	END

	IF (ISNULL(@OwnedSchema,'') <> '' AND ISNULL(@OwnedSchema,'') <> @LoginName)
	BEGIN
		SET @SqlSyntax += CONCAT('ALTER AUTHORIZATION ON SCHEMA::[',@OwnedSchema,'] TO [dbo]', CHAR(13))
	END

	SET @SqlSyntax += CONCAT('DROP USER IF EXISTS [',@LoginName,']',CHAR(13))

	FETCH NEXT FROM cur INTO @DBName,@LoginName,@OwnedSchema

END

CLOSE cur
DEALLOCATE cur

------------------------------------------------------------------------------------------

SET @SqlSyntax +=
	CONCAT(CHAR(13),CHAR(13)
	,' --== Changing Database Ownership ==--',CHAR(13),CHAR(13)
	,'USE master;',CHAR(13))

SELECT
	@SqlSyntax = @SqlSyntax + CHAR(13) + CONCAT('ALTER AUTHORIZATION ON DATABASE::[',S.[name],'] TO [sa]')
FROM
	sys.databases AS S
	JOIN #InvalidUsers AS U
		ON SUSER_SNAME(S.owner_sid) = U.LOGIN_NAME

SET @SqlSyntax+= CONCAT(CHAR(13),CHAR(13))

------------------------------------------------------------------------------------------

SELECT
	@SqlSyntax = @SqlSyntax + CHAR(13) +
	CONCAT('IF EXISTS (SELECT 0 FROM [sys].[server_principals] AS [SP] WHERE [SP].[name] = N'''
		,U.LOGIN_NAME,''')'
		,CHAR(13),CHAR(9)
		,'DROP LOGIN [',U.LOGIN_NAME,']',CHAR(13),CHAR(13))
FROM
	[sys].[server_principals] AS [SP]
	JOIN #InvalidUsers AS U
		ON U.LOGIN_NAME = SP.[name]

SELECT CAST('<![CDATA[' + CHAR(13) + @SqlSyntax + CHAR(13) + ']]>' AS XML) AS CleanupScript_Step_3

Hope this will be very useful to you!



Sunday, 31 October 2021

Attaching SQL Database with FILESTREAM data

Have you ever come across a situation where you need to attach a database using existing files (mdf, ldf and FILESTREAM data), but SSMS gives you no option to specify the FILESTREAM folder?

This isn't an issue as long as the FILESTREAM folder resides in its original path.

Let's look at an example. I have SQL data, log and FILESTREAM files, which I will first attach using SSMS.

  1. Right-click on the database folder in SSMS and choose "Attach"
  2. Click the "Add" button and choose the data file (.mdf). Upon selecting the mdf file it will list the associated ldf file as well. But make sure to change the path accordingly from the second pane.
  3. Click on Ok.


In the above example, it will succeed without an issue since I have not changed the paths.

Now we will detach the database and rename the FILESTREAM folder; I will change the folder name to "SampleSqlDatabase_FSData_New". (This is the most common case: you receive the relevant files from the production environment and need to attach them to a test or development environment.)
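
A minimal sketch of the detach step (the database name matches the example above; the folder rename itself is done in the file system afterwards):

USE master;
GO

EXEC sys.sp_detach_db @dbname = N'SampleSqlDatabase';
GO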

Now, if you try to attach the database using SSMS in the same way, it will fail with an error, since SQL Server cannot find the FILESTREAM folder at its original path.

Even if you generate the attach script using SSMS, it does not provide an option to include the FILESTREAM folder:

USE [master]
GO
CREATE DATABASE [SampleSqlDatabase] ON
( FILENAME = N'D:\_SQL_DATA\SampleSqlDatabase_Data.mdf' ),
( FILENAME = N'D:\_SQL_LOG\SampleSqlDatabase_Log.ldf' )
FOR ATTACH
GO

We can extend the above snippet to include the FILESTREAM path.

USE [master]
GO
CREATE DATABASE [SampleSqlDatabase] ON
(FILENAME = N'D:\_SQL_DATA\SampleSqlDatabase_Data.mdf'),
(FILENAME = N'D:\_SQL_LOG\SampleSqlDatabase_Log.ldf'),
FILEGROUP NewFSGroup CONTAINS FILESTREAM (
	NAME = 'NewDatabaseFileStream'
	,FILENAME = 'D:\_SQL_DATA\SampleSqlDatabase_FSData_New'
)
FOR ATTACH
GO

Then you will be able to attach the database without an issue. You can use any name for the FILEGROUP and for the NAME under it; they don't need to match the source database.

Hope this will be a helpful tip to you.

SQL Server Transaction Log file behaviour in the Full Recovery Model

In a previous blog post, I explained the different recovery models available for SQL Server databases. There I stated that transaction log data is truncated by certain backup types. We will now look at this more closely and see how it works.

Note: even when records are truncated from the transaction log, the physical file size may not change unless you issue a DBCC SHRINKFILE command; the truncated space is simply marked for reuse.

We will start by creating a sample database named 'SqlTransactionLogExample'.

USE master;
GO

CREATE DATABASE [SqlTransactionLogExample];

--== Make sure the database is in the "Full" recovery model ==--

SELECT
	[name]
	,DATABASEPROPERTYEX([name],'Recovery') AS recovery_model
FROM
	sys.databases
WHERE
	[name] = 'SqlTransactionLogExample'

We will create one table and insert records (1,000,000 rows to see a significant change in the file size).

USE SqlTransactionLogExample;
GO

CREATE TABLE dbo.UidInformation(
	Id UNIQUEIDENTIFIER NOT NULL
	,PayLoad CHAR(1000) NOT NULL CONSTRAINT [DF_SampleTable] DEFAULT (SPACE(1000))
)

INSERT INTO dbo.UidInformation (
	Id
	,PayLoad
)
SELECT
	NEWID() AS Id,
	([100000].Num
	+ [10000].Num
	+ [1000].Num
	+ [100].Num
	+ [10].Num
	+ [1].Num) AS PayLoad
FROM (
	SELECT 1 AS Num UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL
	SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6
	UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9 UNION ALL
	SELECT 0
) AS [1]
CROSS JOIN (
	SELECT 10 AS Num UNION ALL SELECT 20 UNION ALL SELECT 30 UNION ALL
	SELECT 40 UNION ALL SELECT 50 UNION ALL SELECT 60 UNION ALL
	SELECT 70 UNION ALL SELECT 80 UNION ALL SELECT 90 UNION ALL
	SELECT 0
) AS [10]
CROSS JOIN (
	SELECT 100 AS Num UNION ALL SELECT 200 UNION ALL SELECT 300
	UNION ALL SELECT 400 UNION ALL SELECT 500 UNION ALL SELECT 600
	UNION ALL SELECT 700 UNION ALL SELECT 800 UNION ALL SELECT 900
	UNION ALL SELECT 0
) AS [100]
CROSS JOIN (
	SELECT 1000 AS Num UNION ALL SELECT 2000 UNION ALL SELECT 3000
	UNION ALL SELECT 4000 UNION ALL SELECT 5000 UNION ALL SELECT 6000
	UNION ALL SELECT 7000 UNION ALL SELECT 8000 UNION ALL SELECT 9000
	UNION ALL SELECT 0
) AS [1000]
CROSS JOIN (
	SELECT 10000 AS Num UNION ALL SELECT 20000 UNION ALL SELECT 30000
	UNION ALL SELECT 40000 UNION ALL SELECT 50000 UNION ALL SELECT 60000
	UNION ALL SELECT 70000 UNION ALL SELECT 80000 UNION ALL SELECT 90000
	UNION ALL SELECT 0
) AS [10000]
CROSS JOIN (
	SELECT 100000 AS Num UNION ALL SELECT 200000 UNION ALL SELECT 300000
	UNION ALL SELECT 400000 UNION ALL SELECT 500000 UNION ALL SELECT 600000
	UNION ALL SELECT 700000 UNION ALL SELECT 800000 UNION ALL SELECT 900000
	UNION ALL SELECT 0
) AS [100000]
WHERE (
	[100000].Num
	+ [10000].Num
	+ [1000].Num
	+ [100].Num
	+ [10].Num
	+ [1].Num
) BETWEEN 1 AND 1000000
ORDER BY ([100000].Num + [10000].Num + [1000].Num + [100].Num + [10].Num + [1].Num)


Now we will inspect the number of entries in the transaction log.

SELECT COUNT(*) FROM sys.fn_dblog (NULL,NULL)
--Returns 2511475


Note: the record count in your database's transaction log may vary.


We will check the size of the transaction log file 
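
A simple query for checking the file sizes (a sketch; sizes are reported in 8 KB pages, so we convert to MB):

USE SqlTransactionLogExample;
GO

SELECT
	[name]
	,[type_desc]
	,size * 8 / 1024 AS size_mb
FROM
	sys.database_files;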



Now we will take a full database backup

USE master;
GO

BACKUP DATABASE [SqlTransactionLogExample]
TO DISK = 'D:\_SQL_BACKUP\SqlTransactionLogExample_FullDB_Backup.bak'

Now we will inspect the number of entries in the transaction log file again.

USE SqlTransactionLogExample
GO

SELECT COUNT(*) FROM sys.fn_dblog (NULL,NULL)
-- Returns 2511545

Note: there is a slight increase in the record count, since new entries are written to the log file during the backup. In the full recovery model, a full database backup does not truncate the transaction log.


We will now take a backup of the transaction log 

BACKUP LOG [SqlTransactionLogExample] TO
DISK = 'D:\_SQL_BACKUP\SqlTransactionLogExample_LogBackup_AfterInsert.bak'


If we now inspect the number of entries in the log file, it has reduced drastically.

SELECT COUNT(*) FROM sys.fn_dblog (NULL,NULL)
--Returns 10


However, the space consumed by the log file is still the same; this is the behaviour mentioned in the note above. In order to release the space, we have to issue a DBCC SHRINKFILE command.


-- Find the logical name of the log file
SELECT * FROM sys.sysfiles
--SqlTransactionLogExample_log

DBCC SHRINKFILE('SqlTransactionLogExample_log',1)


Hope this will be helpful to you.



Saturday, 30 October 2021

Recovery Models in SQL Server Databases

 "Recovery Model" determines how long the data is to be kept in the transaction logs. It also allows what sort of backups and restores you can perform on the database.


Types of recovery models

There are three recovery models in SQL Server:

  1. Simple
  2. Full 
  3. Bulk-logged

How can I change the recovery model of a database?

The recovery model of a database can be changed either using the GUI (SSMS) or using a T-SQL statement.

To change it using SSMS, follow these steps:
1. Right-click the database and choose "Properties"
2. Select the "Options" page
3. Choose the appropriate model from the Recovery model drop-down list
4. Click OK






To change it using a T-SQL statement, use the following syntax:

USE master;
ALTER DATABASE YourDBName SET RECOVERY SIMPLE

You can use the options SIMPLE, FULL or BULK_LOGGED.

You can find the recovery model of the databases on your server using the following query.

SELECT
	[name]
	,DATABASEPROPERTYEX([name],'Recovery') AS recovery_model
FROM
	sys.databases
WHERE
	database_id > 4

Note: database_id > 4 excludes the system databases.

Now we will look more closely at the backup/restore options each recovery model enables.

Simple Recovery Model


  • You cannot take transaction log backups explicitly
  • A database backup will always contain both the data file and the transaction log

  • The transaction log will not grow beyond what has been defined. The space in the transaction log file is re-used (overwritten) once the transactions have been written to the data file.
  • In the case of a disaster, transactions that occurred between the last full backup and the disaster cannot be recovered. (E.g.: assume we have a database that is backed up every hour (8am, 9am, etc.). If a disaster happens at 10:45am, the transactions between 10am and 10:45am are lost.)


Bulk-logged Recovery Model

  • Works like the full recovery model, except that bulk operations (e.g. BULK INSERT, SELECT INTO, index rebuilds) are minimally logged; as a trade-off, point-in-time recovery is not possible for a log backup that contains bulk-logged operations.

Full Recovery Model

 
  • Supports transaction log backups
  • The chance of data loss is minimal (subject to the backup strategy implemented)
  • Log truncation happens only when the transaction log is backed up; the log backup marks the space as reusable so it can be overwritten by future transactions (see the sketch below)
  • The transaction log can grow large compared to the "Simple" recovery model
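
As a quick check related to the truncation bullet above, the following query (a sketch; the database name is a placeholder) shows whether a database's log is waiting on a log backup before its space can be reused:

SELECT
	[name]
	,recovery_model_desc
	,log_reuse_wait_desc   -- LOG_BACKUP means a log backup is needed before space can be reused
FROM
	sys.databases
WHERE
	[name] = 'YourDBName';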

The scope of this post is to give a brief overview of the recovery models in SQL Server. In a future post, I will explain how to restore a database to a specific date and time using the full recovery model and a good backup strategy.


Friday, 10 May 2019

Strange behavior on JSON_VALUE when table contains blank and non-blank values (JSON text is not properly formatted. Unexpected character '.' is found at position 0.)


A few days back we had a requirement to search whether a certain value exists in one of the table columns where the values are stored as JSON strings. The column does not allow NULLs, but where there is no value, a default constraint sets it to an empty string.
So basically the query would be similar to the one shown below:


SELECT 
 'x'
FROM
 [schema].[TableName] AS Src
WHERE
 JSON_VALUE(Src.ColumnName,'$.Root.AttributeName') LIKE 'SearchValue%'

However, when we ran this query we got the following error:

Msg 13609, Level 16, State 2, Line 36
JSON text is not properly formatted. Unexpected character '.' is found at position 0.


Initially we thought we had typed the attribute incorrectly, since the path is case sensitive, but in this case it was correct.

We investigated further and found out a few things. But before explaining them, we will replicate the issue. For this I will create one simple table and insert three records.


--== Create a table ==--
CREATE TABLE dbo.Employee_Information (
 Id    INT
 ,FirstName  NVARCHAR(100)
 ,LastName  NVARCHAR(100)
 ,JsonData  NVARCHAR(MAX)
)

--== Insert few rows ==--
INSERT INTO dbo.Employee_Information (
 Id
 ,FirstName
 ,LastName
 ,JsonData
)
VALUES
(1,'John','Doe','{"Employee":{"Id":1,"FirstName":"John","LastName":"Doe"}}')
,(2,'Jane','Doe','{"Employee":{"Id":2,"FirstName":"Jane","LastName":"Doe"}}')
,(3,'Luke','Skywalker','')



Now we will use the following query to find any records where the LastName is like 'Doe'.


SELECT 
 Id
FROM
 dbo.Employee_Information AS E
WHERE
 JSON_VALUE(E.JsonData,'$.Employee.LastName') LIKE 'Doe%'




Msg 13609, Level 16, State 2, Line 36
JSON text is not properly formatted. Unexpected character '.' is found at position 0.


**Note: the query will return results until the error occurs, so you will still see some rows in your results tab in SSMS.

These are the observations we made during our investigation

Observation 01

If you query the table with a predicate, and that predicate does not include any rows with blank values in the JSON field (it's an NVARCHAR column), the query executes successfully.


--== Success ==--
SELECT 
 Id
FROM
 dbo.Employee_Information AS E
WHERE
 JSON_VALUE(E.JsonData,'$.Employee.LastName') LIKE 'Doe%'
 AND Id IN (1,2)

--== Success ==--
SELECT 
 Id
FROM
 dbo.Employee_Information AS E
WHERE
 JSON_VALUE(E.JsonData,'$.Employee.LastName') LIKE 'Doe%'
 AND Id <> 3

--== Fail ==--
SELECT 
 Id
FROM
 dbo.Employee_Information AS E
WHERE
 JSON_VALUE(E.JsonData,'$.Employee.LastName') LIKE 'Doe%'
 AND Id = 3


Observation 02

If you use a filter to fetch only rows containing valid JSON, the execution is successful.


--== Success ==--
SELECT 
 Id
FROM
 dbo.Employee_Information AS E
WHERE
 ISJSON(E.JsonData) > 0
 AND JSON_VALUE(E.JsonData,'$.Employee.LastName') LIKE 'Doe%'


Observation 03

Even if you use a filter to fetch only rows containing a non-blank value in the JSON field, it will fail.


--== Fail ==--
SELECT 
 Id
FROM
 dbo.Employee_Information AS E
WHERE
 E.JsonData <> ''
 AND JSON_VALUE(E.JsonData,'$.Employee.LastName') LIKE 'Doe%'


Observation 04

If you remove records and keep only one type of row (either only blank rows or only non-blank rows), the query executes successfully.


TRUNCATE TABLE dbo.Employee_Information
INSERT INTO dbo.Employee_Information (
 Id
 ,FirstName
 ,LastName
 ,JsonData
)
VALUES
(1,'John','Doe','{"Employee":{"Id":1,"FirstName":"John","LastName":"Doe"}}')
,(2,'Jane','Doe','{"Employee":{"Id":2,"FirstName":"Jane","LastName":"Doe"}}')

--== Success ==--
SELECT 
 Id
FROM
 dbo.Employee_Information AS E
WHERE
 JSON_VALUE(E.JsonData,'$.Employee.LastName') LIKE 'Doe%'


Observation 05

If the table only has rows containing blank values in the JSON field, the query will fail.


TRUNCATE TABLE dbo.Employee_Information
INSERT INTO dbo.Employee_Information (
 Id
 ,FirstName
 ,LastName
 ,JsonData
)
VALUES
(1,'John','Doe','')
,(2,'Jane','Doe','')


--== Fail ==--
SELECT 
 Id
FROM
 dbo.Employee_Information AS E
WHERE
 JSON_VALUE(E.JsonData,'$.Employee.LastName') LIKE 'Doe%'
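
One workaround we can sketch here (not from the original post, but relying on the documented behaviour that JSON_VALUE returns NULL for a NULL input) is to turn blank strings into NULL with NULLIF, so the blank value never reaches the JSON parser:

--== Workaround: blank strings become NULL, which JSON_VALUE handles without error ==--
SELECT 
 Id
FROM
 dbo.Employee_Information AS E
WHERE
 JSON_VALUE(NULLIF(E.JsonData,''),'$.Employee.LastName') LIKE 'Doe%'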

Hope this might help you if you encounter this strange behavior during your development.

Note: all of the above queries were executed on the following SQL Server version (SELECT @@VERSION):




Microsoft SQL Server 2016 (SP1) (KB3182545) - 13.0.4001.0 (X64)
     Oct 28 2016 18:17:30
     Copyright (c) Microsoft Corporation
     Developer Edition (64-bit) on Windows Server 2012 Standard 6.2 (Build 9200: ) (Hypervisor)








Monday, 10 September 2018

Applying a database principal throughout the server (for all databases) for a particular user

Have you ever come across a requirement to give db_datareader access to a specific user across all the databases on a particular SQL Server? The task is simple as long as you don't have many databases on the server; however, if the number of databases is very high, it can be very time consuming.

This can be done either using the GUI (SSMS) or using a T-SQL script. We will consider both options.

Using SQL Server Management Studio

To illustrate this, we will create a SQL login 'db_user_read_only' with the 'public' server role, and on the user mapping we will apply the db_datareader principal.


As mentioned, the GUI is fine when you only have a small number of databases, but if the SQL Server contains a lot of databases this becomes a very time-consuming job. In that case the latter approach is very handy.

Using T-SQL

You can use the following script to apply the db_datareader principal across all the databases on a particular server.


DECLARE 
	@Sql AS NVARCHAR(MAX)
	,@UserId AS VARCHAR(MAX) = 'YourLoginId'

SET @Sql = CONCAT('
USE [?];
-- If a database user with this name already exists, re-map it to the login
IF EXISTS (SELECT 0 FROM sys.database_principals AS DP WHERE name = ''',@UserId,''')
BEGIN
	EXEC sys.sp_change_users_login ''update_one'',''',@UserId,''',''',@UserId,'''
END
ELSE
	-- Otherwise create the database user for the login
	CREATE USER [',@UserId,'] FOR LOGIN [',@UserId,']

-- In both cases, make sure the user is a member of db_datareader
ALTER ROLE [db_datareader] ADD MEMBER [',@UserId,']
')

EXEC sys.sp_MSforeachdb 
	@command1 = @Sql
	,@replacechar = '?'

Please note the following:

  • In the above code I haven't excluded the system databases.
  • If a user with the same name already exists in the database, it is re-mapped to the login using sp_change_users_login, which is deprecated (see the alternative sketch below).
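
Since sp_change_users_login is deprecated in recent SQL Server versions, the re-mapping step can be written with ALTER USER ... WITH LOGIN instead. This is a sketch, not part of the original script, and it assumes the same placeholder login name:

-- Deprecated approach used above:
-- EXEC sys.sp_change_users_login 'update_one', 'YourLoginId', 'YourLoginId';

-- Equivalent on SQL Server 2008 and later:
ALTER USER [YourLoginId] WITH LOGIN = [YourLoginId];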

Hope this might be very useful to you.

Friday, 6 July 2018

Replacing sp_depends with sys.dm_sql_referencing_entities and sys.dm_sql_referenced_entities

sp_depends has been one of the most used system stored procedures in SQL Server. In fact, many of us still use it even though Microsoft has announced that it will be removed in a future release:
https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-depends-transact-sql?view=sql-server-2017

As an alternative, Microsoft provides two dynamic management functions, sys.dm_sql_referencing_entities and sys.dm_sql_referenced_entities (introduced with SQL Server 2008), which return similar information. You can find further details on each of them in the Microsoft documentation.

However, if you have used sp_depends, you may already have noticed that the results it returns are not always accurate (most of the time they seem fine).

The other day I was going through these two functions in order to create a stored procedure similar to sp_depends, and thought of sharing the query so that it can be useful to anyone who depends on this procedure.


DECLARE
 @objname   AS NVARCHAR(100) = 'Website.SearchForPeople'
 ,@objclass   AS NVARCHAR (60) = 'OBJECT'


  SELECT 
   CONCAT(sch.[name],'.',Obj.[name]) AS [name]
   ,(CASE Obj.type
    WHEN 'C'  THEN 'CHECK constraint'
    WHEN 'D'  THEN 'DEFAULT (constraint or stand-alone)'
    WHEN 'F'  THEN 'FOREIGN KEY constraint'
    WHEN 'PK' THEN 'PRIMARY KEY constraint'
    WHEN 'R'  THEN 'Rule (old-style, stand-alone)'
    WHEN 'TA' THEN 'Assembly (CLR-integration) trigger'
    WHEN 'TR' THEN 'SQL trigger'
    WHEN 'UQ' THEN 'UNIQUE constraint'
    WHEN 'AF' THEN 'Aggregate function (CLR)'
    WHEN 'C' THEN 'CHECK constraint'
    WHEN 'D' THEN 'DEFAULT (constraint or stand-alone)'
    WHEN 'F' THEN 'FOREIGN KEY constraint'
    WHEN 'FN' THEN 'SQL scalar function'
    WHEN 'FS' THEN 'Assembly (CLR) scalar-function'
    WHEN 'FT' THEN 'Assembly (CLR) table-valued function'
    WHEN 'IF' THEN 'SQL inline table-valued function'
    WHEN 'IT' THEN 'Internal table'
    WHEN 'P' THEN 'SQL Stored Procedure'
    WHEN 'PC' THEN 'Assembly (CLR) stored-procedure'
    WHEN 'PG' THEN 'Plan guide'
    WHEN 'PK' THEN 'PRIMARY KEY constraint'
    WHEN 'R' THEN 'Rule (old-style, stand-alone)'
    WHEN 'RF' THEN 'Replication-filter-procedure'
    WHEN 'S' THEN 'System base TABLE'
    WHEN 'SN' THEN 'Synonym'
    WHEN 'SO' THEN 'Sequence OBJECT'
    WHEN 'U' THEN 'Table (user-defined)'
    WHEN 'V' THEN 'VIEW'
    WHEN 'SQ' THEN 'Service queue'
    WHEN 'TA' THEN 'Assembly (CLR) DML trigger'
    WHEN 'TF' THEN 'SQL table-valued-function'
    WHEN 'TR' THEN 'SQL DML trigger'
    WHEN 'TT' THEN 'Table type'
    WHEN 'UQ' THEN 'UNIQUE CONSTRAINT'
    WHEN 'X'  THEN 'Extended stored procedure'
    ELSE 'Undefined'
   END) AS [type]
   ,Obj.create_date
   ,Obj.modify_date
   ,src.referenced_minor_name AS [column]
   ,IIF(src.is_selected   = 1,'yes','no') AS is_selected
   ,IIF(src.is_updated    = 1,'yes','no') AS is_updated
   ,IIF(src.is_select_all = 1,'yes','no') AS is_select_all
   ,IIF(src.is_insert_all = 1,'yes','no') AS is_insert_all
  FROM 
   sys.dm_sql_referenced_entities (@objname,@objclass) AS src
   JOIN sys.objects AS Obj
    ON src.referenced_id = Obj.[object_id]
   JOIN sys.schemas AS Sch
    ON Sch.[schema_id] = Obj.[schema_id]
  WHERE 1=1
  
  SELECT 
   CONCAT(Src.referencing_schema_name,'.',Src.referencing_entity_name) AS [name]
   ,(CASE Obj.type
    WHEN 'C'  THEN 'CHECK constraint'
    WHEN 'D'  THEN 'DEFAULT (constraint or stand-alone)'
    WHEN 'F'  THEN 'FOREIGN KEY constraint'
    WHEN 'PK' THEN 'PRIMARY KEY constraint'
    WHEN 'R'  THEN 'Rule (old-style, stand-alone)'
    WHEN 'TA' THEN 'Assembly (CLR-integration) trigger'
    WHEN 'TR' THEN 'SQL trigger'
    WHEN 'UQ' THEN 'UNIQUE constraint'
    WHEN 'AF' THEN 'Aggregate function (CLR)'
    WHEN 'C' THEN 'CHECK constraint'
    WHEN 'D' THEN 'DEFAULT (constraint or stand-alone)'
    WHEN 'F' THEN 'FOREIGN KEY constraint'
    WHEN 'FN' THEN 'SQL scalar function'
    WHEN 'FS' THEN 'Assembly (CLR) scalar-function'
    WHEN 'FT' THEN 'Assembly (CLR) table-valued function'
    WHEN 'IF' THEN 'SQL inline table-valued function'
    WHEN 'IT' THEN 'Internal table'
    WHEN 'P' THEN 'SQL Stored Procedure'
    WHEN 'PC' THEN 'Assembly (CLR) stored-procedure'
    WHEN 'PG' THEN 'Plan guide'
    WHEN 'PK' THEN 'PRIMARY KEY constraint'
    WHEN 'R' THEN 'Rule (old-style, stand-alone)'
    WHEN 'RF' THEN 'Replication-filter-procedure'
    WHEN 'S' THEN 'System base TABLE'
    WHEN 'SN' THEN 'Synonym'
    WHEN 'SO' THEN 'Sequence OBJECT'
    WHEN 'U' THEN 'Table (user-defined)'
    WHEN 'V' THEN 'VIEW'
    WHEN 'SQ' THEN 'Service queue'
    WHEN 'TA' THEN 'Assembly (CLR) DML trigger'
    WHEN 'TF' THEN 'SQL table-valued-function'
    WHEN 'TR' THEN 'SQL DML trigger'
    WHEN 'TT' THEN 'Table type'
    WHEN 'UQ' THEN 'UNIQUE CONSTRAINT'
    WHEN 'X'  THEN 'Extended stored procedure'
    ELSE 'Undefined'
   END) AS [type]
   ,Obj.create_date
   ,Obj.modify_date
  FROM 
   sys.dm_sql_referencing_entities (@objname,@objclass) AS Src
   JOIN sys.objects AS Obj
    ON Obj.[object_id] = Src.referencing_id 
I have compiled a stored procedure using this syntax; it can be found in the following repository: https://github.com/manjukefernando/sp_depends_v2

Wednesday, 30 May 2018

Computed columns in SQL Server

Computed columns are columns whose values are derived from one or more other columns. The data type of a computed column therefore depends on the result of the expression.
Computed columns have been available in SQL Server since version 2000, but in my experience the feature is used far less than many others, and in discussions and interviews it is something most developers slip up on or fail to answer.

Why do we need computed columns?

First, we will consider a case where we need to store the details in a table without using computed columns.
Consider a table that contains employee details. We have two columns to store the employee's first and last names, but we also need a column that stores the full name, built by concatenating the first and last names. The straightforward way is to have a third column containing the full name; its value must be populated when the employee record is created and maintained whenever the details are updated, otherwise data integrity is lost. (One might argue that the full name can be built in the business-logic code from the first and last names, but for illustration purposes we will assume it is maintained in SQL Server.)

CREATE TABLE dbo.Employee(
    Id     INT 
    ,FirstName    VARCHAR(30)
    ,LastName    VARCHAR(30)
    ,FullName    VARCHAR(61)
)
However, we can achieve the same with a computed column and with less effort than the first approach.

CREATE TABLE dbo.Employee(
    Id     INT 
    ,FirstName    VARCHAR(30)
    ,LastName    VARCHAR(30)
    ,FullName AS CONCAT(FirstName,' ',LastName)
)

Let's insert a few records into the table we just created.

INSERT INTO dbo.Employee(Id, FirstName, LastName) 
VALUES (1,'John','Doe'),(2,'Jane','Doe')

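
Querying the table shows the derived FullName values; a quick sketch:

SELECT Id, FirstName, LastName, FullName
FROM dbo.Employee;
-- 1  John  Doe  John Doe
-- 2  Jane  Doe  Jane Doe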

PERSISTED, DETERMINISTIC or NON-DETERMINISTIC ?

A computed column's expression can be either deterministic or non-deterministic, and the column itself can optionally be persisted.
When the column is not persisted, the value is not physically saved in the table; it is always calculated during query execution. Hence the value can differ depending on the functions used in the formula. E.g.: if you use GETDATE() in the computed column, it will return a different value on each execution.

CREATE TABLE dbo.Employee2(
    Id     INT 
    ,FirstName    VARCHAR(30)
    ,LastName    VARCHAR(30)
    ,CreatedDate AS GETDATE()
)

INSERT INTO dbo.Employee2(Id, FirstName, LastName) VALUES 
    (1,'John','Doe') 


When queried, the computed column returns a different value on each execution.

**Note: the above can also be achieved using a default constraint; I have used this example strictly for illustration purposes.
You can read further on deterministic and non-deterministic functions in the following Microsoft documentation:
https://docs.microsoft.com/en-us/sql/relational-databases/user-defined-functions/deterministic-and-nondeterministic-functions?view=sql-server-2017
Computed column values can be persisted by adding the keyword PERSISTED when the column is created, either using T-SQL or via the table designer in SSMS.
We will drop the 'FullName' column and recreate it as persisted.

ALTER TABLE dbo.Employee DROP COLUMN FullName;
ALTER TABLE dbo.Employee 
 ADD FullName AS CONCAT(FirstName,' ',LastName) PERSISTED;
**Note: if you try to drop the 'CreatedDate' column on Employee2 and recreate it as PERSISTED, it will throw an error, because a computed column can only be persisted when its expression is deterministic:
Msg 4936, Level 16, State 1, Line 45
Computed column 'CreatedDate' in table 'Employee2' cannot be persisted because the column is non-deterministic.

Now, when the expression is evaluated during data modification, the 'FullName' value is saved into the table.
The data is read-only to the developer and is maintained by the engine: when the data in the columns used by the formula changes, the computed value changes with it.
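
As a side note (not from the original post), a deterministic computed column such as this one can also be indexed like any other column; a minimal sketch using the example table, with an arbitrary index name:

CREATE NONCLUSTERED INDEX IX_Employee_FullName
	ON dbo.Employee (FullName);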