Sunday 31 October 2021

Attaching SQL Database with FILESTREAM data

Have you ever come across a situation where you needed to attach a database using existing files (mdf, ldf and filestream data), only to find that SSMS gives you no option to specify the filestream folder?

This isn't an issue as long as the filestream folder still resides at its original path.

Let's look at an example. I have SQL data, log and filestream data which I will first attach using SSMS.

  1. Right-click the "Databases" folder in SSMS and choose "Attach..."
  2. Click the "Add" button and choose the data file (.mdf). Selecting the mdf file lists the associated ldf file as well, but make sure to correct its path in the second pane if it has changed.
  3. Click OK.


In the above example the attach succeeds without an issue, since I have not changed any of the paths.

Now we will detach the database and rename the filestream folder; I will change the folder name to "SampleSqlDatabase_FSData_New". (This is the most common case: you receive the relevant files from a production environment and need to attach them to a test or development environment.)
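For reference, the detach step can be scripted as well (a sketch, assuming the database name from this example):

```sql
USE [master]
GO
-- Detach the database so its files can be moved or renamed
EXEC sp_detach_db @dbname = N'SampleSqlDatabase';
GO
```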

Now if you try to attach the database using SSMS in the same way you will get the following error message.

Even if you generate the attach script using SSMS, it does not include an option for the filestream folder.
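Before detaching, it is worth recording the current file layout, including the filestream container, so you know which paths to use later. A query like the following (a sketch using the sys.master_files catalog view) will list them:

```sql
USE [master]
GO
-- List all files of the database; type_desc = 'FILESTREAM' marks the container
SELECT name, type_desc, physical_name
FROM sys.master_files
WHERE database_id = DB_ID(N'SampleSqlDatabase');
```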

USE [master]
GO
CREATE DATABASE [SampleSqlDatabase] ON 
( FILENAME = N'D:\_SQL_DATA\SampleSqlDatabase_Data.mdf' ),
( FILENAME = N'D:\_SQL_LOG\SampleSqlDatabase_Log.ldf' )
 FOR ATTACH
GO

We can use a snippet like the one above, extended to include the filestream path.

    
USE [master]
GO
CREATE DATABASE [SampleSqlDatabase] ON 
(FILENAME = N'D:\_SQL_DATA\SampleSqlDatabase_Data.mdf'),
(FILENAME = N'D:\_SQL_LOG\SampleSqlDatabase_Log.ldf'),
FILEGROUP NewFSGroup CONTAINS FILESTREAM (
	NAME = 'NewDatabaseFileStream'
	,FILENAME = 'D:\_SQL_DATA\SampleSqlDatabase_FSData_New'
)
FOR ATTACH
GO

Then you will be able to attach the database without an issue. You can use any name for the FILEGROUP and for the NAME within it; they don't need to match the source database.
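To confirm that the filestream filegroup came back online under the new folder, you can query the attached database (a sketch):

```sql
USE [SampleSqlDatabase]
GO
-- type_desc = 'FILESTREAM' identifies the filestream container
SELECT name, type_desc, physical_name
FROM sys.database_files
WHERE type_desc = N'FILESTREAM';
```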

Hope this will be a helpful tip to you.

SQL Server Transaction Log file behaviour in Full Recovery Mode

In a previous blog post, I explained the different recovery models available for SQL Server databases. There I stated that the transaction log is truncated by certain backup types. We will now look at this more closely and see how it works.

Note: Even though the details are removed from the transaction log file, its physical size may not change unless you issue a DBCC SHRINKFILE command. Only the space inside the file is marked for re-use once the log records are truncated.

We will start with creating a sample database named 'SqlTransactionLogExample'

      
USE master;
GO 

CREATE DATABASE [SqlTransactionLogExample];

--== Make sure the database is in the "Full" recovery model ==--

SELECT 
	[name]
	,DATABASEPROPERTYEX([name],'Recovery') AS recovery_model
FROM
	sys.databases
WHERE
	[name] = 'SqlTransactionLogExample'
    

We will create one table and insert records (1,000,000 rows to see a significant change in the file size).

   
USE SqlTransactionLogExample;
GO

CREATE TABLE dbo.UidInformation(
	Id				UNIQUEIDENTIFIER NOT NULL
	,PayLoad		CHAR(1000) NOT NULL CONSTRAINT [DF_SampleTable] DEFAULT (SPACE(1000))
)

INSERT INTO dbo.UidInformation (
	Id
	,PayLoad
)
SELECT	
	NEWID() AS Id,
	([100000].Num 
	+ [10000].Num  
	+ [1000].Num 
	+ [100].Num 
	+ [10].Num 
	+ [1].Num) AS PayLoad
FROM (
	SELECT 1 AS Num UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL 
	SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 
	UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9 UNION ALL 
	SELECT 0
) AS [1]
CROSS JOIN (
	SELECT 10 AS Num UNION ALL SELECT 20 UNION ALL SELECT 30 UNION ALL 
	SELECT 40 UNION ALL SELECT 50 UNION ALL SELECT 60 UNION ALL 
	SELECT 70 UNION ALL SELECT 80 UNION ALL SELECT 90 UNION ALL 
	SELECT 0
) AS [10]
CROSS JOIN (
	SELECT 100 AS Num UNION ALL SELECT 200 UNION ALL SELECT 300 
	UNION ALL SELECT 400 UNION ALL SELECT 500 UNION ALL SELECT 600 
	UNION ALL SELECT 700 UNION ALL SELECT 800 UNION ALL SELECT 900 
	UNION ALL SELECT 0
) AS [100]
CROSS JOIN (
	SELECT 1000 AS Num UNION ALL SELECT 2000 UNION ALL SELECT 3000 
	UNION ALL SELECT 4000 UNION ALL SELECT 5000 UNION ALL SELECT 6000 
	UNION ALL SELECT 7000 UNION ALL SELECT 8000 UNION ALL SELECT 9000 
	UNION ALL SELECT 0
) AS [1000]
CROSS JOIN (
	SELECT 10000 AS Num UNION ALL SELECT 20000 UNION ALL SELECT 30000 
	UNION ALL SELECT 40000 UNION ALL SELECT 50000 UNION ALL SELECT 60000 
	UNION ALL SELECT 70000 UNION ALL SELECT 80000 UNION ALL SELECT 90000 
	UNION ALL SELECT 0
) AS [10000]
CROSS JOIN (
	SELECT 100000 AS Num UNION ALL SELECT 200000 UNION ALL SELECT 300000 
	UNION ALL SELECT 400000 UNION ALL SELECT 500000 UNION ALL SELECT 600000 
	UNION ALL SELECT 700000 UNION ALL SELECT 800000 UNION ALL SELECT 900000 
	UNION ALL SELECT 0
) AS [100000]
WHERE(
	[100000].Num 
	+ [10000].Num  
	+ [1000].Num 
	+ [100].Num 
	+ [10].Num 
	+ [1].Num
) BETWEEN 1 AND 1000000
ORDER BY ([100000].Num + [10000].Num  + [1000].Num + [100].Num + [10].Num + [1].Num)


Now we will inspect the number of entries in the transaction log.

   
  SELECT COUNT(*) FROM sys.fn_dblog (NULL,NULL)
  --Returns 2511475


Note: The record count in your transaction log may vary.


We will check the size of the transaction log file 
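One way to check it is via sys.database_files (a sketch; size is reported in 8 KB pages, hence the conversion):

```sql
USE SqlTransactionLogExample;
GO

SELECT
	name
	,size * 8 / 1024 AS size_mb   -- size is in 8 KB pages
FROM
	sys.database_files
WHERE
	type_desc = N'LOG';
```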

Now we will take a full database backup

USE master;
GO

BACKUP DATABASE [SqlTransactionLogExample] TO DISK = 'D:\_SQL_BACKUP\SqlTransactionLogExample_FullDB_Backup.bak'

Now we will inspect the number of entries in the transaction log file again.

USE SqlTransactionLogExample
GO

SELECT COUNT(*) FROM sys.fn_dblog (NULL,NULL)
--Returns 2511545

Note: The count has increased slightly, since new entries are written to the log file during the database backup. More importantly, the full backup did not truncate the log; in the Full recovery model, only a transaction log backup does that.


We will now take a backup of the transaction log 

  
BACKUP LOG [SqlTransactionLogExample] TO 
DISK = 'D:\_SQL_BACKUP\SqlTransactionLogExample_LogBackup_AfterInsert.bak'


And if we inspect the number of entries in the log file it has drastically reduced.

  
SELECT COUNT(*) FROM sys.fn_dblog (NULL,NULL)
--Returns 10


However, the space consumed by the log file is still the same; this is the behaviour I mentioned at the beginning. To release the space back to the operating system, we have to issue the DBCC SHRINKFILE command.


--Find the logical name of the log file
SELECT name, physical_name FROM sys.database_files
--SqlTransactionLogExample_log

--Shrink the log file down towards the target size (1 MB)
DBCC SHRINKFILE('SqlTransactionLogExample_log',1)

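To verify that the space was actually released, DBCC SQLPERF can report log size and usage per database (an alternative to querying sys.database_files):

```sql
-- Reports log file size (MB) and percentage used for every database
DBCC SQLPERF(LOGSPACE);
```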

Hope this will be helpful to you.



Saturday 30 October 2021

Recovery Models in SQL Server Databases

 "Recovery Model" determines how long the data is to be kept in the transaction logs. It also allows what sort of backups and restores you can perform on the database.


Types of recovery models

There are three types of recovery models in SQL Server

  1. Simple
  2. Full 
  3. Bulk-logged

How can I change the recovery model of a database?

The recovery model of a database can be changed either using the GUI (SSMS) or using a T-SQL statement.

To change using SSMS follow these steps:
1. Right-click the database and choose "Properties"
2. Select "Options" from the pages pane
3. From the "Recovery model" drop-down list choose the appropriate model
4. Click OK

To change using T-SQL statement use the following syntax:

      
USE master;
ALTER DATABASE YourDBName SET RECOVERY SIMPLE;

The available options are SIMPLE, FULL and BULK_LOGGED.

You can find out the recovery model of the databases which resides on your server using the following query.

SELECT 
	[name]
	,DATABASEPROPERTYEX([name],'Recovery') AS recovery_model
FROM
	sys.databases
WHERE
	database_id > 4   

Note: database_id > 4 will ensure that system databases information is excluded.

Now we will look more closely at what backup/restore options each recovery model enables.

Simple Recovery Model


  • You cannot take backups of the transaction log explicitly
  • A database backup will always contain both data and transaction log file


  • Transaction log will not grow beyond what's been defined. The space in the transaction log file will be re-used (overwritten) once the transactions are written to the data file.
  • In case of a disaster, transactions that occurred between the last backup and the point of failure cannot be recovered. (E.g. assume a database is backed up every hour: 8am, 9am and so on. If a disaster happens at 10:45am, transactions between 10am and 10:45am are lost.)


Bulk-logged Recovery Model


  • Supports transaction log backup
  • Bulk operations (e.g. BULK INSERT, SELECT INTO, index rebuilds) are minimally logged, which reduces log growth during such operations
  • Point-in-time recovery is not possible from a log backup that contains bulk-logged operations

Full Recovery Model

 
  • Supports transaction log backup
  • Chances of a data loss are minimal (subject to the backup strategy implemented)
  • Log truncation happens only when you take a transaction log backup (the backup marks the inactive portion of the log as reusable, and future transactions overwrite it)
  • Transaction log could grow large compared to the "Simple" recovery model

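As a preview of what the Full recovery model makes possible, a point-in-time restore looks roughly like the following (a sketch with hypothetical file names and times):

```sql
-- Restore the last full backup, leaving the database ready for further restores
RESTORE DATABASE [YourDBName]
FROM DISK = 'D:\_SQL_BACKUP\YourDBName_Full.bak'
WITH NORECOVERY;

-- Replay the transaction log up to a specific point in time
RESTORE LOG [YourDBName]
FROM DISK = 'D:\_SQL_BACKUP\YourDBName_Log.bak'
WITH STOPAT = '2021-10-31 10:30:00', RECOVERY;
```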
The scope of this post is to give a brief idea of the recovery models in SQL Server. In a future post, I will explain how to bring a database to a certain date and time by having a Full Recovery Model and a good backup strategy.