Sunday, 21 November 2021

Missing SQL Server Configuration Manager (SSCM) from the Start Menu

 Have you encountered an issue where the SQL Server Configuration Manager console is not available among the other SQL Server-related start menu applications?

This happened to me after upgrading from Windows 10 to Windows 11.

I have two SQL Server instances installed on my PC (2016 and 2019), but SSCM is missing from the Start menu for both of them.

However, SSCM is still available, and you can access it in any of the ways shown below:

1. Directly from the directory.

The relevant Management Console snap-in (.msc) file can be found in the C:\Windows\System32 folder; for SQL Server 2016 the file is SQLServerManager13.msc, and for SQL Server 2019 it is SQLServerManager15.msc. You can browse to the file directly and open the application.

2. Accessing via Computer Management Console.

SSCM is available in the Computer Management Console, under "Services and Applications".

3. Using Microsoft Management Console.

Before using SSCM this way, you need to add it to MMC first. You can open the Microsoft Management Console by running "mmc" from the Run dialog.

Once the console is open, add SQL Server Configuration Manager via File > Add/Remove Snap-in.

Hope this will be helpful for you.

Thursday, 4 November 2021

Full Backup Vs. Differential Backup Vs. Log Backup

When it comes to backing up SQL Server databases, it's critical to know which backup types are available and which ones you should choose. To make that choice, you must first understand what SQL Server includes in each of these backup files.

If your database is in the "Full" recovery model, you can choose whether to back up the data file, the log file, or both.

In the backup dialog you will be given these backup type options: Full, Differential, and Transaction Log.

To illustrate, we will consider a hypothetical database that we back up hourly.

Full Backup


As the name implies, a full backup includes all of the data every time you back up the database.

Consider the case above, where we take a full backup every hour.

Full Backup #01 will contain all the data that existed up to 8:00am.
Full Backup #02 will contain all the data including the changes that happened between 8:00am - 9:00am
Full Backup #03 will contain all the data including the changes that happened between 9:00am - 10:00am

The backup file will keep growing in size, since every backup contains everything. You need only one backup file to restore the database, and it will bring the database to the state (point in time) at which the backup was initiated.
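
For reference, a full backup in this scenario could be taken with a statement similar to the following sketch (the database name and backup path are placeholders, not part of the example above):

BACKUP DATABASE [SampleDb]
TO DISK = N'D:\_SQL_BACKUP\SampleDb_Full_0900.bak'
WITH INIT, NAME = N'SampleDb full backup';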

Differential Backup

Unlike a full backup, a differential backup contains only the changes that have happened since the last full backup.



Full Backup #01 will contain all the data that existed up to 8:00am.
Differential Backup #01 will contain only the data changes that happened between 8:00am - 9:00am
Differential Backup #02 will contain only the data changes that happened between 8:00am - 10:00am

A differential backup is comparatively smaller than a full backup since it contains only the changes made since the last full backup. You need both the full backup and the latest differential backup in order to restore the database.
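
A differential backup uses the same BACKUP DATABASE statement with the DIFFERENTIAL option added; a minimal sketch (again with placeholder names) would be:

BACKUP DATABASE [SampleDb]
TO DISK = N'D:\_SQL_BACKUP\SampleDb_Diff_1000.bak'
WITH DIFFERENTIAL, INIT;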

Log Backup


A log backup backs up only the transaction log. It will not contain any data pages from the data file, and it always contains the log records generated since the last log backup.

Consider the case above. Even though the database is fully backed up at 8:00am, the first transaction log backup, taken at 9:00am, will contain log records going back to 7:00am, because a full backup does not truncate the transaction log.

Full Backup #01 will contain details from both data and log file as of 8:00am
Log Backup #01 will only contain details from the transaction log between 7:00am - 9:00am 
Log Backup #02 will only contain details from the transaction log between 9:00am - 10:00am 

With the log backups, you can restore the database to any point in time during the day (in this case, within the hourly blocks), but you need to retain the full, unbroken chain of log backups in order to do that.
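
As a rough sketch of how such a point-in-time restore could look (the file names and the STOPAT time are placeholders), you restore the full backup with NORECOVERY and then apply the log backups in sequence:

RESTORE DATABASE [SampleDb]
FROM DISK = N'D:\_SQL_BACKUP\SampleDb_Full_0800.bak'
WITH NORECOVERY;

RESTORE LOG [SampleDb]
FROM DISK = N'D:\_SQL_BACKUP\SampleDb_Log_0900.bak'
WITH NORECOVERY;

-- Stop partway through the 9:00am-10:00am log backup
RESTORE LOG [SampleDb]
FROM DISK = N'D:\_SQL_BACKUP\SampleDb_Log_1000.bak'
WITH STOPAT = '2021-11-04 09:30:00', RECOVERY;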





Monday, 1 November 2021

Removing Invalid Logins (Both Server and Database Principals) from SQL Server

Have you ever come across a situation where you need to remove SQL logins for users who have already left the organisation and are no longer valid?

This can be a very tedious task, since you need to remove the permissions from every database the user has access to before removing the SQL login. It can be even more painful when the user owns a schema in a database: you first need to transfer the ownership of the schema to [dbo] before dropping the user from the database.

We will see how to achieve this step by step, and finally we will build a script that, when run, produces another script we can use to clean up the SQL Server instance.

1. First we need to find the logins that SQL Server considers invalid. We can use the system stored procedure "sys.sp_validatelogins" for this.

  
CREATE TABLE #InvalidUsers(
	[SID]		VARBINARY(85)
	,LOGIN_NAME	SYSNAME
)

INSERT INTO #InvalidUsers (
	[SID]
	,LOGIN_NAME
)
EXEC sys.sp_validatelogins
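
You can inspect what was captured with a simple SELECT; each row holds the SID and name of a login that sp_validatelogins considers no longer valid:

SELECT [SID], LOGIN_NAME FROM #InvalidUsers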

2. Next we need to iterate over each user in every database on the server and remove them from those databases. We will generate a script and use it to drop the users. During script generation we need to consider the following:

- Whether the user owns a schema in the database (if so, we need to change the schema ownership to [dbo])

- Whether the user owns the database itself (if so, we need to change the database ownership to a different login; in this example I will change it to "sa")

We will insert the details into another temporary table. I will explain the script part by part, but once it is complete you can execute it as a whole. The completed script is provided at the end.

We will use another system stored procedure, sys.sp_MSforeachdb, to execute a query against every database in the SQL Server instance.

	
--== Holds, per database, the invalid users found along with any schema they own ==--
CREATE TABLE #InvalidUserDbInfo(
	DBNAME		NVARCHAR(128)
	,[SID]		VARBINARY(85)
	,LOGIN_NAME	SYSNAME
	,OWNED_SCH	VARCHAR(128)
)

DECLARE @LoginName AS sysname
	,@Sql AS NVARCHAR(2000)

DECLARE cur CURSOR LOCAL  READ_ONLY FORWARD_ONLY FOR 
SELECT LOGIN_NAME FROM #InvalidUsers

OPEN cur

FETCH NEXT FROM cur INTO @LoginName
WHILE @@FETCH_STATUS = 0
BEGIN
    
	SET @Sql = CONCAT(
		'USE [?];INSERT INTO #InvalidUserDbInfo SELECT DB_NAME(),DP.sid,DP.name, S.[name] as OwnedSchema 
        FROM sys.database_principals AS DP 
        LEFT JOIN sys.schemas as S on S.principal_id = DP.principal_id 
        WHERE [type] = ''U'' and DP.name = ''',@LoginName,''''
	)
	EXEC sys.sp_MSforeachdb @Sql
	FETCH NEXT FROM cur INTO @LoginName
END


CLOSE cur
DEALLOCATE cur


We will use the details populated into the #InvalidUserDbInfo table to generate a script that changes schema ownerships and drops the users from the databases (database ownership is handled in the next step).

   
DECLARE 
	@LoginName AS sysname
	,@Sql AS NVARCHAR(2000)
	,@DBName AS NVARCHAR(128)
	,@DbNameCurrent AS NVARCHAR(128) = ''
	,@SqlSyntax AS NVARCHAR(MAX) = ''
	,@OwnedSchema AS VARCHAR(128)

DECLARE cur CURSOR LOCAL  READ_ONLY FORWARD_ONLY FOR 
SELECT 
	IUDI.DBNAME
		,IUDI.LOGIN_NAME 
		,IUDI.OWNED_SCH
FROM 
	#InvalidUserDbInfo AS IUDI WHERE IUDI.DBNAME <> 'master' ORDER BY IUDI.DBNAME

OPEN cur

FETCH NEXT FROM cur INTO @DBName,@LoginName,@OwnedSchema

WHILE @@FETCH_STATUS = 0
BEGIN
	
	IF @DbNameCurrent <> @DBName
	BEGIN
		IF @DbNameCurrent <> ''
			SET @SqlSyntax += CONCAT(CHAR(13),'---------------------------------',CHAR(13),CHAR(13))

		SET @SqlSyntax += CONCAT('USE [',@DBName,'];',CHAR(13),CHAR(13))
		
		SET @DbNameCurrent = @DBName
		
	END

	IF (ISNULL(@OwnedSchema,'') <> '' AND ISNULL(@OwnedSchema,'') <> @LoginName)
	BEGIN
		SET @SqlSyntax += CONCAT('ALTER AUTHORIZATION ON SCHEMA::[',@OwnedSchema,'] TO [dbo]', CHAR(13))
	END

	SET @SqlSyntax += CONCAT('DROP USER IF EXISTS [',@LoginName,']',CHAR(13))

	FETCH NEXT FROM cur INTO @DBName,@LoginName,@OwnedSchema

END

CLOSE cur
DEALLOCATE cur

SELECT CAST('<![CDATA[' + CHAR(13) + @SqlSyntax + CHAR(13) + ']]>' AS XML) AS CleanupScript_Step_2

This will return a clickable result set; clicking it gives you the script to remove the users from the databases.
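
The generated script will look roughly like the following (the database, schema and login names here are purely illustrative):

USE [SalesDb];

ALTER AUTHORIZATION ON SCHEMA::[OldSchema] TO [dbo]
DROP USER IF EXISTS [DOMAIN\john.doe]
---------------------------------

USE [HrDb];

DROP USER IF EXISTS [DOMAIN\john.doe]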




3. Finally, we handle the database ownerships and drop the logins from the SQL Server instance.

   
DECLARE 
	@SqlSyntax AS NVARCHAR(MAX) = ''


SET @SqlSyntax += CONCAT(' --== Changing Database Ownership ==--',CHAR(13),CHAR(13),'USE master;',CHAR(13))

SELECT 
	@SqlSyntax = @SqlSyntax + CHAR(13) + CONCAT('ALTER AUTHORIZATION ON DATABASE::[',S.[name],'] TO [sa]') 
FROM 
	sys.databases AS S
	JOIN #InvalidUsers AS U
		ON SUSER_SNAME(S.owner_sid) = U.LOGIN_NAME

SET @SqlSyntax+= CONCAT(CHAR(13),CHAR(13))

------------------------------------------------------------------------------------------


SELECT 
	@SqlSyntax = @SqlSyntax + CHAR(13) + 
    	CONCAT('IF EXISTS (SELECT 0 FROM [sys].[server_principals] AS [SP] WHERE [SP].[name] = N'''
        	,U.LOGIN_NAME
            ,''')'
            ,CHAR(13),CHAR(9)
            ,'DROP LOGIN [',U.LOGIN_NAME,']',CHAR(13),CHAR(13)
        )
FROM
	[sys].[server_principals] AS [SP]
	JOIN #InvalidUsers AS U
		ON U.LOGIN_NAME = SP.[name]


SELECT CAST('<![CDATA[' + CHAR(13) + @SqlSyntax + CHAR(13) + ']]>' AS XML) AS CleanupScript_Step_3  
  

This will generate a script similar to the one shown below.
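
(The database and login names below are purely illustrative.)

 --== Changing Database Ownership ==--

USE master;

ALTER AUTHORIZATION ON DATABASE::[SalesDb] TO [sa]

IF EXISTS (SELECT 0 FROM [sys].[server_principals] AS [SP] WHERE [SP].[name] = N'DOMAIN\john.doe')
	DROP LOGIN [DOMAIN\john.doe]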

Following is the full script, which combines all the steps and can be executed in one go.

  

CREATE TABLE #InvalidUsers(
	[SID]		VARBINARY(85)
	,LOGIN_NAME	SYSNAME
)

INSERT INTO #InvalidUsers (
	[SID]
	,LOGIN_NAME
)
EXEC sys.sp_validatelogins

------------------------------------------------------------------------------------------

CREATE TABLE  #InvalidUserDbInfo(
	DBNAME		NVARCHAR(128)		
	,[SID]			VARBINARY(85)
	,LOGIN_NAME	SYSNAME
	,OWNED_SCH		VARCHAR(128)
)

DECLARE 
	@LoginName AS sysname
	,@Sql AS NVARCHAR(2000)
	,@DBName AS NVARCHAR(128)
	,@DbNameCurrent AS NVARCHAR(128) = ''
	,@SqlSyntax AS NVARCHAR(MAX) = ''
	,@OwnedSchema AS VARCHAR(128)

DECLARE cur CURSOR LOCAL  READ_ONLY FORWARD_ONLY FOR 
SELECT LOGIN_NAME FROM #InvalidUsers

		

OPEN cur

FETCH NEXT FROM cur INTO @LoginName
WHILE @@FETCH_STATUS = 0
BEGIN
    
	SET @Sql = CONCAT(
		'USE [?];INSERT INTO #InvalidUserDbInfo 
		SELECT DB_NAME(),DP.sid,DP.name, S.[name] as OwnedSchema 
		FROM sys.database_principals AS DP 
		LEFT JOIN sys.schemas as S on S.principal_id = DP.principal_id 
		WHERE [type] = ''U'' and DP.name = ''',@LoginName,''''
	)
	EXEC sys.sp_MSforeachdb @Sql
	FETCH NEXT FROM cur INTO @LoginName
END


CLOSE cur
DEALLOCATE cur

------------------------------------------------------------------------------------------

DECLARE cur CURSOR LOCAL  READ_ONLY FORWARD_ONLY FOR 
SELECT 
	IUDI.DBNAME
		,IUDI.LOGIN_NAME 
		,IUDI.OWNED_SCH
FROM 
	#InvalidUserDbInfo AS IUDI WHERE IUDI.DBNAME <> 'master' ORDER BY IUDI.DBNAME

OPEN cur

FETCH NEXT FROM cur INTO @DBName,@LoginName,@OwnedSchema

WHILE @@FETCH_STATUS = 0
BEGIN
	
	IF @DbNameCurrent <> @DBName
	BEGIN
		IF @DbNameCurrent <> ''
			SET @SqlSyntax += CONCAT(CHAR(13),'---------------------------------',CHAR(13),CHAR(13))

		SET @SqlSyntax += CONCAT('USE [',@DBName,'];',CHAR(13),CHAR(13))
		
		SET @DbNameCurrent = @DBName
		
	END

	IF (ISNULL(@OwnedSchema,'') <> '' AND ISNULL(@OwnedSchema,'') <> @LoginName)
	BEGIN
		SET @SqlSyntax += CONCAT('ALTER AUTHORIZATION ON SCHEMA::[',@OwnedSchema,'] TO [dbo]', CHAR(13))
	END

	SET @SqlSyntax += CONCAT('DROP USER IF EXISTS [',@LoginName,']',CHAR(13))

	FETCH NEXT FROM cur INTO @DBName,@LoginName,@OwnedSchema

END

CLOSE cur
DEALLOCATE cur

------------------------------------------------------------------------------------------

SET @SqlSyntax += 
	CONCAT(CHAR(13),CHAR(13)
    ,' --== Changing Database Ownership ==--',CHAR(13),CHAR(13)
    ,'USE master;',CHAR(13))

SELECT 
	@SqlSyntax = @SqlSyntax + CHAR(13) + CONCAT('ALTER AUTHORIZATION ON DATABASE::[',S.[name],'] TO [sa]') 
FROM 
	sys.databases AS S
	JOIN #InvalidUsers AS U
		ON SUSER_SNAME(S.owner_sid) = U.LOGIN_NAME

SET @SqlSyntax+= CONCAT(CHAR(13),CHAR(13))

------------------------------------------------------------------------------------------


SELECT 
	@SqlSyntax = @SqlSyntax + CHAR(13) + 
		CONCAT('IF EXISTS (SELECT 0 FROM [sys].[server_principals] AS [SP] WHERE [SP].[name] = N'''
			,U.LOGIN_NAME,''')'
			,CHAR(13),CHAR(9)
			,'DROP LOGIN [',U.LOGIN_NAME,']',CHAR(13),CHAR(13))
FROM
	[sys].[server_principals] AS [SP]
	JOIN #InvalidUsers AS U
		ON U.LOGIN_NAME = SP.[name]


SELECT CAST('<![CDATA[' + CHAR(13) + @SqlSyntax + CHAR(13) + ']]>' AS XML) AS CleanupScript_Step_3
    

Hope this will be very useful to you!



Sunday, 31 October 2021

Attaching SQL Database with FILESTREAM data

Have you ever come across a situation where you need to attach a database using existing files (mdf, ldf and FILESTREAM data), but there is no option to specify the FILESTREAM folder when you use SSMS?

This isn't an issue as long as the FILESTREAM folder resides in its original path.

Let's look at an example. I have SQL data, log and FILESTREAM files, which I will first attach using SSMS.

  1. Right-click on the database folder in SSMS and choose "Attach"
  2. Click the "Add" button and choose the data file (.mdf). Upon selecting the mdf file, the associated ldf file will be listed as well, but make sure to adjust its path accordingly in the second pane.
  3. Click OK.


In the above example, it will succeed without an issue since I have not changed the paths.

Now we will detach the database and rename the FILESTREAM folder; I will change the folder name to "SampleSqlDatabase_FSData_New". (This is a very common scenario: you receive the relevant files from a production environment and need to attach them to a test or development environment.)

Now, if you try to attach the database using SSMS in the same way, the attach will fail with an error, because SQL Server cannot find the FILESTREAM container at its original path.

Even if you generate the attach script using SSMS, it does not give you the option to include the FILESTREAM folder. The generated script looks like this:

   
USE [master]
GO
CREATE DATABASE [SampleSqlDatabase] ON 
( FILENAME = N'D:\_SQL_DATA\SampleSqlDatabase_Data.mdf' ),
( FILENAME = N'D:\_SQL_LOG\SampleSqlDatabase_Log.ldf' )
 FOR ATTACH
GO

We can take the above code snippet and extend it to include the FILESTREAM path.

    
USE [master]
GO
CREATE DATABASE [SampleSqlDatabase] ON 
(FILENAME = N'D:\_SQL_DATA\SampleSqlDatabase_Data.mdf'),
(FILENAME = N'D:\_SQL_LOG\SampleSqlDatabase_Log.ldf'),
FILEGROUP NewFSGroup CONTAINS FILESTREAM (
	NAME = 'NewDatabaseFileStream'
	,FILENAME = 'D:\_SQL_DATA\SampleSqlDatabase_FSData_New'
)
FOR ATTACH
GO

Then you will be able to attach the database without an issue. You can use any name for the FILEGROUP and for the NAME under the filegroup; they don't need to match the source database.

Hope this will be a helpful tip to you.

SQL Server Transaction Log file behaviour in Full Recovery Mode

In a previous blog post, I explained the different recovery models available for SQL Server databases. There I stated that transaction log data is truncated by certain backup types. We will now look at this more closely and see how it works.

Note: Even when records are removed from the transaction log file, its physical size may not change unless you issue a DBCC SHRINKFILE command; the truncated space inside the file is simply marked for re-use.

We will start with creating a sample database named 'SqlTransactionLogExample'

      
USE master;
GO 

CREATE DATABASE [SqlTransactionLogExample];

--== Makesure the database is in "Full" recovery model ==--

SELECT 
	[name]
	,DATABASEPROPERTYEX([name],'Recovery') AS recovery_model
FROM
	sys.databases
WHERE
	[name] = 'SqlTransactionLogExample'
    

We will create one table and insert records (1,000,000 rows to see a significant change in the file size).

   
USE SqlTransactionLogExample;
GO

CREATE TABLE dbo.UidInformation(
	Id				UNIQUEIDENTIFIER NOT NULL
	,PayLoad		CHAR(1000) NOT NULL CONSTRAINT [DF_SampleTable] DEFAULT (SPACE(1000))
)

INSERT INTO dbo.UidInformation (
	Id
	,PayLoad
)
SELECT	
	NEWID() AS Id,
	([100000].Num 
	+ [10000].Num  
	+ [1000].Num 
	+ [100].Num 
	+ [10].Num 
	+ [1].Num) AS PayLoad
FROM (
	SELECT 1 AS Num UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL 
	SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 
	UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9 UNION ALL 
	SELECT 0
) AS [1]
CROSS JOIN (
	SELECT 10 AS Num UNION ALL SELECT 20 UNION ALL SELECT 30 UNION ALL 
	SELECT 40 UNION ALL SELECT 50 UNION ALL SELECT 60 UNION ALL 
	SELECT 70 UNION ALL SELECT 80 UNION ALL SELECT 90 UNION ALL 
	SELECT 0
) AS [10]
CROSS JOIN (
	SELECT 100 AS Num UNION ALL SELECT 200 UNION ALL SELECT 300 
	UNION ALL SELECT 400 UNION ALL SELECT 500 UNION ALL SELECT 600 
	UNION ALL SELECT 700 UNION ALL SELECT 800 UNION ALL SELECT 900 
	UNION ALL SELECT 0
) AS [100]
CROSS JOIN (
	SELECT 1000 AS Num UNION ALL SELECT 2000 UNION ALL SELECT 3000 
	UNION ALL SELECT 4000 UNION ALL SELECT 5000 UNION ALL SELECT 6000 
	UNION ALL SELECT 7000 UNION ALL SELECT 8000 UNION ALL SELECT 9000 
	UNION ALL SELECT 0
) AS [1000]
CROSS JOIN (
	SELECT 10000 AS Num UNION ALL SELECT 20000 UNION ALL SELECT 30000 
	UNION ALL SELECT 40000 UNION ALL SELECT 50000 UNION ALL SELECT 60000 
	UNION ALL SELECT 70000 UNION ALL SELECT 80000 UNION ALL SELECT 90000 
	UNION ALL SELECT 0
) AS [10000]
CROSS JOIN (
	SELECT 100000 AS Num UNION ALL SELECT 200000 UNION ALL SELECT 300000 
	UNION ALL SELECT 400000 UNION ALL SELECT 500000 UNION ALL SELECT 600000 
	UNION ALL SELECT 700000 UNION ALL SELECT 800000 UNION ALL SELECT 900000 
	UNION ALL SELECT 0
) AS [100000]
WHERE(
	[100000].Num 
	+ [10000].Num  
	+ [1000].Num 
	+ [100].Num 
	+ [10].Num 
	+ [1].Num
) BETWEEN 1 AND 1000000
ORDER BY ([100000].Num + [10000].Num  + [1000].Num + [100].Num + [10].Num + [1].Num)


Now we will inspect the number of entries in the transaction log.

   
  SELECT COUNT(*) FROM sys.fn_dblog (NULL,NULL)
  --Returns 2511475


Note: The record count may vary in your database's transaction log.


We will check the size of the transaction log file 
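
One way to do this is to query sys.database_files, which reports each file's size in 8 KB pages:

USE SqlTransactionLogExample;
GO

SELECT 
	[name]
	,type_desc
	,size * 8 / 1024 AS size_mb
FROM 
	sys.database_files;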



Now we will take a full database backup

USE master;
GO

BACKUP DATABASE [SqlTransactionLogExample] TO DISK = 'D:\_SQL_BACKUP\SqlTransactionLogExample_FullDB_Backup.bak'

Now we will inspect the number of entries in the transaction log file again.

USE SqlTransactionLogExample
GO

SELECT COUNT(*) FROM sys.fn_dblog (NULL,NULL)
-- Returns 2511545

Note: There is a slight increase in the record count because new entries are written to the log file during the database backup. More importantly, the full backup has not truncated the transaction log.


We will now take a backup of the transaction log 

  
BACKUP LOG [SqlTransactionLogExample] TO 
DISK = 'D:\_SQL_BACKUP\SqlTransactionLogExample_LogBackup_AfterInsert.bak'


And if we inspect the number of entries in the log file again, it has dropped drastically.

  
SELECT COUNT(*) FROM sys.fn_dblog (NULL,NULL)
--Returns 10


However, the disk space consumed by the log file is still the same, which is the point mentioned in the note at the beginning. In order to release the space, we have to issue the DBCC SHRINKFILE command.


--This is to find the name of the logical log file name
SELECT * FROM sys.sysfiles
--SqlTransactionLogExample_log
DBCC SHRINKFILE('SqlTransactionLogExample_log',1)


Hope this will be helpful to you.



Saturday, 30 October 2021

Recovery Models in SQL Server Databases

The "Recovery Model" of a database determines how long data is kept in the transaction log. It also determines what kinds of backups and restores you can perform on the database.


Types of recovery models

There are three types of recovery models in SQL Server

  1. Simple
  2. Full 
  3. Bulk-logged

How can I change the recovery model of a database?

The recovery model of a database can be changed either using the GUI (SSMS) or using a T-SQL statement.

To change it using SSMS, follow these steps:
1. Right-click the database and choose "Properties"
2. Select the "Options" page
3. From the "Recovery model" drop-down list, choose the appropriate model
4. Click OK

To change it using a T-SQL statement, use the following syntax:

USE master;
ALTER DATABASE YourDBName SET RECOVERY SIMPLE

You can use the options SIMPLE, FULL or BULK_LOGGED.

You can find out the recovery model of the databases that reside on your server using the following query.

SELECT 
	[name]
	,DATABASEPROPERTYEX([name],'Recovery') AS recovery_model
FROM
	sys.databases
WHERE
	database_id > 4   

Note: database_id > 4 will ensure that system databases information is excluded.

Now we will look more closely at what backup and restore options each recovery model enables.

Simple Recovery Model


  • You cannot take backups of the transaction log explicitly
  • A database backup will always contain both the data file and the transaction log
  • The transaction log will generally not grow beyond what has been defined, because the space in the log file is re-used (overwritten) once the transactions have been written to the data file.
  • In the case of a disaster, transactions that occurred between the last full backup and the time of the disaster cannot be recovered. (For example, assume the database is backed up every hour, at 8am, 9am and so on. If a disaster happens at 10:45am, the transactions made between 10:00am and 10:45am are lost.)

Bulk-logged Recovery Model

  • Supports transaction log backups, like the Full recovery model
  • Bulk operations (for example BULK INSERT, SELECT INTO and index rebuilds) are minimally logged, which reduces log growth during those operations
  • Point-in-time restore is not possible with a log backup that contains minimally logged (bulk) operations

Full Recovery Model

 
  • Supports transaction log backups
  • The chance of data loss is minimal (subject to the backup strategy implemented)
  • The log is not truncated automatically; you must take transaction log backups (a log backup marks the inactive log space as reusable so it can be overwritten by future transactions)
  • The transaction log can grow large compared to the "Simple" recovery model

The scope of this post is to give a brief overview of the recovery models in SQL Server. In a future post, I will explain how to restore a database to a specific date and time using the Full recovery model and a good backup strategy.


Friday, 10 May 2019

Strange behavior on JSON_VALUE when table contains blank and non-blank values (JSON text is not properly formatted. Unexpected character '.' is found at position 0.)


A few days back we had a requirement to search whether a certain value exists in a table column where the values are stored as JSON strings. The column does not allow NULLs, and where there is no value a default constraint sets it to an empty string.
So basically the query would be similar to something shown below


SELECT 
 'x'
FROM
 [schema].[TableName] AS Src
WHERE
 JSON_VALUE(Src.ColumnName,'$.Root.AttributeName') LIKE 'SearchValue%'

However, when we ran this query we got the following error:

Msg 13609, Level 16, State 2, Line 36
JSON text is not properly formatted. Unexpected character '.' is found at position 0.


Initially we thought we had typed the attribute name incorrectly, since the JSON path is case sensitive. But in this case it was correct.

We investigated further and found a few things, but before explaining them we will replicate the issue. For this I will create a simple table and insert three records.


--== Create a table ==--
CREATE TABLE dbo.Employee_Information (
 Id    INT
 ,FirstName  NVARCHAR(100)
 ,LastName  NVARCHAR(100)
 ,JsonData  NVARCHAR(MAX)
)

--== Insert few rows ==--
INSERT INTO dbo.Employee_Information (
 Id
 ,FirstName
 ,LastName
 ,JsonData
)
VALUES
(1,'John','Doe','{"Employee":{"Id":1,"FirstName":"John","LastName":"Doe"}}')
,(2,'Jane','Doe','{"Employee":{"Id":2,"FirstName":"Jane","LastName":"Doe"}}')
,(3,'Luke','Skywalker','')



Now we will use the following query to find any records where the LastName is like 'Doe'.


SELECT 
 Id
FROM
 dbo.Employee_Information AS E
WHERE
 JSON_VALUE(E.JsonData,'$.Employee.LastName') LIKE 'Doe%'




Msg 13609, Level 16, State 2, Line 36
JSON text is not properly formatted. Unexpected character '.' is found at position 0.


**Note: The query will return results until the error occurs, so you will see some rows in the Results tab in SSMS.

These are the observations we made during our investigation

Observation 01

If you query the table with a predicate, and that predicate doesn't include any rows with blank values in the JSON column (it's an NVARCHAR column), the query executes successfully.


--== Success ==--
SELECT 
 Id
FROM
 dbo.Employee_Information AS E
WHERE
 JSON_VALUE(E.JsonData,'$.Employee.LastName') LIKE 'Doe%'
 AND Id IN (1,2)

--== Success ==--
SELECT 
 Id
FROM
 dbo.Employee_Information AS E
WHERE
 JSON_VALUE(E.JsonData,'$.Employee.LastName') LIKE 'Doe%'
 AND Id <> 3

--== Fail ==--
SELECT 
 Id
FROM
 dbo.Employee_Information AS E
WHERE
 JSON_VALUE(E.JsonData,'$.Employee.LastName') LIKE 'Doe%'
 AND Id = 3


Observation 02

If you use a filter to fetch only the rows containing valid JSON, the execution is successful.


--== Success ==--
SELECT 
 Id
FROM
 dbo.Employee_Information AS E
WHERE
 ISJSON(E.JsonData) > 0
 AND JSON_VALUE(E.JsonData,'$.Employee.LastName') LIKE 'Doe%'


Observation 03

Even if you use a filter to fetch only the rows containing a non-blank value in the JSON column, the query can still fail. This is most likely because SQL Server does not guarantee the order in which the predicates in a WHERE clause are evaluated, so JSON_VALUE can be applied to the blank rows before the other filter removes them.


--== Fail ==--
SELECT 
 Id
FROM
 dbo.Employee_Information AS E
WHERE
 E.JsonData <> ''
 AND JSON_VALUE(E.JsonData,'$.Employee.LastName') LIKE 'Doe%'


Observation 04

If you remove records and keep only one type of row (either only blank rows or only non-blank rows), the query executes successfully.


TRUNCATE TABLE dbo.Employee_Information
INSERT INTO dbo.Employee_Information (
 Id
 ,FirstName
 ,LastName
 ,JsonData
)
VALUES
(1,'John','Doe','{"Employee":{"Id":1,"FirstName":"John","LastName":"Doe"}}')
,(2,'Jane','Doe','{"Employee":{"Id":2,"FirstName":"Jane","LastName":"Doe"}}')

--== Success ==--
SELECT 
 Id
FROM
 dbo.Employee_Information AS E
WHERE
 JSON_VALUE(E.JsonData,'$.Employee.LastName') LIKE 'Doe%'


Observation 05

If the table has only rows containing blank values in the JSON column, the query will fail.


TRUNCATE TABLE dbo.Employee_Information
INSERT INTO dbo.Employee_Information (
 Id
 ,FirstName
 ,LastName
 ,JsonData
)
VALUES
(1,'John','Doe','')
,(2,'Jane','Doe','')


--== Fail ==--
SELECT 
 Id
FROM
 dbo.Employee_Information AS E
WHERE
 JSON_VALUE(E.JsonData,'$.Employee.LastName') LIKE 'Doe%'
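
One possible workaround (assuming empty strings are the only non-JSON values in the column) is to turn the blanks into NULLs with NULLIF, since JSON_VALUE simply returns NULL for a NULL input, regardless of the order in which the predicates are evaluated.


--== Possible workaround ==--
SELECT 
 Id
FROM
 dbo.Employee_Information AS E
WHERE
 JSON_VALUE(NULLIF(E.JsonData,''),'$.Employee.LastName') LIKE 'Doe%'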

Hope this might help you if you encounter this strange behavior during your development.

Note: All the above queries were executed on the following SQL Server version (SELECT @@VERSION):




Microsoft SQL Server 2016 (SP1) (KB3182545) - 13.0.4001.0 (X64)
     Oct 28 2016 18:17:30
     Copyright (c) Microsoft Corporation
     Developer Edition (64-bit) on Windows Server 2012 Standard 6.2 (Build 9200: ) (Hypervisor)








Monday, 10 September 2018

Applying database principal through out the server (for all databases) for a particular user

Have you ever come across a requirement to give db_datareader access to a specific user across all the databases on a particular SQL Server? The task is simple as long as you don't have many databases on the server. However, if the number of databases is very high, this can be a very time-consuming task.

This can be done either using the GUI (SSMS) or using a T-SQL script. We will consider both options.

Using SQL Server Management Studio

To illustrate this, we will create a SQL login ‘db_user_read_only’ with the ‘public’ server role, and on the user mapping we will apply the db_datareader role.


As mentioned, the GUI approach is fine when you have a small number of databases. But if the SQL Server contains lots of databases, it becomes a very time-consuming job, and the T-SQL approach below is much handier.

Using T-SQL

You can use the following script to apply the db_datareader principal across all the databases on a particular server.


DECLARE 
	@Sql AS NVARCHAR(MAX)
	,@UserId AS VARCHAR(MAX) = 'YourLoginId'

SET @Sql = CONCAT('
USE [?];
IF EXISTS (SELECT 0 FROM sys.database_principals AS DP WHERE name = ''',@UserId,''')
BEGIN
	-- The user already exists in this database: re-map a possibly orphaned user to the login
	EXEC sys.sp_change_users_login ''update_one'',''',@UserId,''',''',@UserId,'''
END
ELSE
	-- The user does not exist in this database yet: create it for the login
	CREATE USER [',@UserId,'] FOR LOGIN [',@UserId,']

-- In either case, make sure the user is a member of db_datareader
ALTER ROLE [db_datareader] ADD MEMBER [',@UserId,']
')
EXEC sys.sp_MSforeachdb 
	@command1 = @Sql
	,@replacechar = '?'

Please note the following:

  • In the above code I haven't excluded the system databases.
  • If the user already exists in the database, the script re-maps the database user to the login using sp_change_users_login (see the note after this list).
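
Note that sp_change_users_login is deprecated in recent SQL Server versions; the modern equivalent is ALTER USER ... WITH LOGIN. A minimal sketch (with placeholder database and login names) is:


USE [SomeDatabase];
ALTER USER [YourLoginId] WITH LOGIN = [YourLoginId];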

Hope this might be very useful to you.

Friday, 6 July 2018

Replacing sp_depends with sys.dm_sql_referencing_entities and sys.dm_sql_referenced_entities

sp_depends has been one of the most used system stored procedures in SQL Server. In fact, many of us still use it even though Microsoft has announced that it will be removed in a future release.
https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-depends-transact-sql?view=sql-server-2017
As an alternative, Microsoft has provided two dynamic management functions, sys.dm_sql_referencing_entities and sys.dm_sql_referenced_entities (introduced with SQL Server 2008), which return similar information.
You can find further details on both functions in the Microsoft documentation.
However, if you have used sp_depends, you might have already faced the issue that the results it returns are not always accurate (most of the time they seem fine).
The other day I was going through these two functions in order to create a stored procedure similar to sp_depends, and I thought of sharing the query so that it can be useful to anyone who still depends on that procedure.


DECLARE
 @objname   AS NVARCHAR(100) = 'Website.SearchForPeople'
 ,@objclass   AS NVARCHAR (60) = 'OBJECT'


  SELECT 
   CONCAT(sch.[name],'.',Obj.[name]) AS [name]
   ,(CASE Obj.type
    WHEN 'AF' THEN 'Aggregate function (CLR)'
    WHEN 'C'  THEN 'CHECK constraint'
    WHEN 'D'  THEN 'DEFAULT (constraint or stand-alone)'
    WHEN 'F'  THEN 'FOREIGN KEY constraint'
    WHEN 'FN' THEN 'SQL scalar function'
    WHEN 'FS' THEN 'Assembly (CLR) scalar-function'
    WHEN 'FT' THEN 'Assembly (CLR) table-valued function'
    WHEN 'IF' THEN 'SQL inline table-valued function'
    WHEN 'IT' THEN 'Internal table'
    WHEN 'P'  THEN 'SQL Stored Procedure'
    WHEN 'PC' THEN 'Assembly (CLR) stored-procedure'
    WHEN 'PG' THEN 'Plan guide'
    WHEN 'PK' THEN 'PRIMARY KEY constraint'
    WHEN 'R'  THEN 'Rule (old-style, stand-alone)'
    WHEN 'RF' THEN 'Replication-filter-procedure'
    WHEN 'S'  THEN 'System base TABLE'
    WHEN 'SN' THEN 'Synonym'
    WHEN 'SO' THEN 'Sequence OBJECT'
    WHEN 'SQ' THEN 'Service queue'
    WHEN 'TA' THEN 'Assembly (CLR-integration) trigger'
    WHEN 'TF' THEN 'SQL table-valued-function'
    WHEN 'TR' THEN 'SQL trigger'
    WHEN 'TT' THEN 'Table type'
    WHEN 'U'  THEN 'Table (user-defined)'
    WHEN 'UQ' THEN 'UNIQUE constraint'
    WHEN 'V'  THEN 'VIEW'
    WHEN 'X'  THEN 'Extended stored procedure'
    ELSE 'Undefined'
   END) AS [type]
   ,Obj.create_date
   ,Obj.modify_date
   ,src.referenced_minor_name AS [column]
   ,IIF(src.is_selected   = 1,'yes','no') AS is_selected
   ,IIF(src.is_updated    = 1,'yes','no') AS is_updated
   ,IIF(src.is_select_all = 1,'yes','no') AS is_select_all
   ,IIF(src.is_insert_all = 1,'yes','no') AS is_insert_all
  FROM 
   sys.dm_sql_referenced_entities (@objname,@objclass) AS src
   JOIN sys.objects AS Obj
    ON src.referenced_id = Obj.[object_id]
   JOIN sys.schemas AS Sch
    ON Sch.[schema_id] = Obj.[schema_id]
  WHERE 1=1
  
  SELECT 
   CONCAT(Src.referencing_schema_name,'.',Src.referencing_entity_name) AS [name]
   ,(CASE Obj.type
    WHEN 'AF' THEN 'Aggregate function (CLR)'
    WHEN 'C'  THEN 'CHECK constraint'
    WHEN 'D'  THEN 'DEFAULT (constraint or stand-alone)'
    WHEN 'F'  THEN 'FOREIGN KEY constraint'
    WHEN 'FN' THEN 'SQL scalar function'
    WHEN 'FS' THEN 'Assembly (CLR) scalar-function'
    WHEN 'FT' THEN 'Assembly (CLR) table-valued function'
    WHEN 'IF' THEN 'SQL inline table-valued function'
    WHEN 'IT' THEN 'Internal table'
    WHEN 'P'  THEN 'SQL Stored Procedure'
    WHEN 'PC' THEN 'Assembly (CLR) stored-procedure'
    WHEN 'PG' THEN 'Plan guide'
    WHEN 'PK' THEN 'PRIMARY KEY constraint'
    WHEN 'R'  THEN 'Rule (old-style, stand-alone)'
    WHEN 'RF' THEN 'Replication-filter-procedure'
    WHEN 'S'  THEN 'System base TABLE'
    WHEN 'SN' THEN 'Synonym'
    WHEN 'SO' THEN 'Sequence OBJECT'
    WHEN 'SQ' THEN 'Service queue'
    WHEN 'TA' THEN 'Assembly (CLR-integration) trigger'
    WHEN 'TF' THEN 'SQL table-valued-function'
    WHEN 'TR' THEN 'SQL trigger'
    WHEN 'TT' THEN 'Table type'
    WHEN 'U'  THEN 'Table (user-defined)'
    WHEN 'UQ' THEN 'UNIQUE constraint'
    WHEN 'V'  THEN 'VIEW'
    WHEN 'X'  THEN 'Extended stored procedure'
    ELSE 'Undefined'
   END) AS [type]
   ,Obj.create_date
   ,Obj.modify_date
  FROM 
   sys.dm_sql_referencing_entities (@objname,@objclass) AS Src
   JOIN sys.objects AS Obj
    ON Obj.[object_id] = Src.referencing_id 

I have compiled a stored procedure using this syntax; it can be found in the following repository: https://github.com/manjukefernando/sp_depends_v2

Wednesday, 30 May 2018

Computed columns in SQL Server

Computed columns are columns whose values are derived from one or more other columns. Hence the data type of a computed column depends on the result of the expression that derives its value.
Computed columns have been a feature of SQL Server since version 2000. But in my experience it is used far less than many other features, and during discussions and interviews it is something most developers slip up on or fail to answer.

Why do we need computed columns ?

First we will consider a case where we need to store the details in a table without using computed columns.
Assume we have a table that contains employee details, with two columns storing the employee's first and last names. We also need a column that stores the full name, built by concatenating the first and last names. The straightforward way is to add a third column containing the full name; its value must be set when the employee record is created and maintained whenever the details are updated, otherwise data integrity is lost. (One might argue that the full name can be built in the business logic using the first and last names, but for illustration purposes we will assume it is maintained in SQL Server.)

CREATE TABLE dbo.Employee(
    Id     INT 
    ,FirstName    VARCHAR(30)
    ,LastName    VARCHAR(30)
    ,FullName    VARCHAR(61)
)
However, we can achieve the same with a computed column, and with far less effort than the first approach.

CREATE TABLE dbo.Employee(
    Id     INT 
    ,FirstName    VARCHAR(30)
    ,LastName    VARCHAR(30)
    ,FullName AS CONCAT(FirstName,' ',LastName)
)

Let's insert a few records into the table we just created.

INSERT INTO dbo.Employee(Id, FirstName, LastName) 
VALUES (1,'John','Doe'),(2,'Jane','Doe')
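
Querying the table now returns the derived value (expected output shown in the comments):

SELECT Id, FirstName, LastName, FullName FROM dbo.Employee
-- 1, John, Doe, John Doe
-- 2, Jane, Doe, Jane Doe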


PERSISTED, DETERMINISTIC or NON-DETERMINISTIC ?

A computed column can be either persisted or non-persisted, and its expression can be deterministic or non-deterministic.
When the column is not persisted, its value is not saved to the table physically; instead it is calculated during query execution. Hence the value can differ depending on the functions used in the formula. For example, if you use GETDATE() in the computed column, it will return a different value on each execution.

CREATE TABLE dbo.Employee2(
    Id     INT 
    ,FirstName    VARCHAR(30)
    ,LastName    VARCHAR(30)
    ,CreatedDate AS GETDATE()
)

INSERT INTO dbo.Employee2(Id, FirstName, LastName) VALUES 
    (1,'John','Doe') 


And when queried, the computed column returns a different value on each execution.

**Note: The above can also be achieved using a default constraint; I have used this example strictly for illustration purposes.
You can further read on deterministic and non-deterministic function on the following Microsoft documentation.
https://docs.microsoft.com/en-us/sql/relational-databases/user-defined-functions/deterministic-and-nondeterministic-functions?view=sql-server-2017
Computed column values can be persisted by adding the keyword PERSISTED when the column is created using T-SQL or by the table designer in SSMS.
We will drop ‘FullName’ column and recreate the column.

ALTER TABLE dbo.Employee DROP COLUMN FullName;
ALTER TABLE dbo.Employee 
 ADD FullName AS CONCAT(FirstName,' ',LastName) PERSISTED;
**Note: If you drop the ‘CreatedDate’ column on Employee2 and try to recreate it as PERSISTED, it will throw an error, because a computed column can only be persisted when its expression is deterministic.
Msg 4936, Level 16, State 1, Line 45
Computed column 'CreatedDate' in table 'Employee2' cannot be persisted because the column is non-deterministic.

Now the value of ‘FullName’ is computed when a row is inserted or updated and is physically saved in the table.
The data is read-only to the developer and is maintained by the engine. When the data in the columns used in the formula changes, the computed values change as well.