Thursday 29 September 2016

Enable CodeLens feature on VS 2015 Community Edition

As per the Visual Studio 2015 feature comparison between the editions, Microsoft has clearly stated that certain features are only available in the Professional & Enterprise editions of VS 2015.

One such feature happens to be 'CodeLens'.

If you google it you can easily find out what CodeLens is and what it offers. This is a good article which explains CodeLens: http://www.codeproject.com/Articles/794766/What-is-CodeLens

The feature comparison for Visual Studio 2015 (https://www.visualstudio.com/vs/compare/) lists CodeLens under the Professional and Enterprise columns only.


However, installing SQL Server Data Tools (SSDT) for VS 2015 will enable this feature in the Community Edition of VS 2015.

SSDT can be obtained from this site: https://msdn.microsoft.com/en-us/mt186501.aspx


The CodeLens feature can be customized using the Options dialog.


Hope this will be useful to you.

Wednesday 7 September 2016

Native JSON Support in SQL Server 2016

When it comes to modern web development, JSON is one of the well-known technologies used when you need to exchange information between different applications. Before JSON, XML was used (and is still being used in various applications and technologies) to do this job. Compared to XML, JSON is less verbose (unlike XML, JSON doesn't have closing tags), which makes JSON data somewhat smaller in size, ultimately making the data flow faster. Perhaps the most significant advantage JSON has over XML is that JSON is a subset of JavaScript, so code to parse and package it fits very naturally into JavaScript code. This is highly beneficial for JavaScript programs and happens to be a good reason for JSON's popularity among web application developers.

However, choosing between XML and JSON comes down to personal preference and the requirement at hand.

Prior to SQL Server 2016, there’s wasn’t any support for JSON in the earlier editions. So native JSON support is one of the new features which Microsoft introduced in SQL Server 2016.

Even before SQL Server 2016, there were other databases which support JSON:

  • MongoDB
  • CouchDB
  • eXistDB
  • Elasticsearch
  • BaseX
  • MarkLogic
  • OrientDB
  • Oracle Database
  • PostgreSQL
  • Riak

But my main focus in this post will be the JSON support in SQL Server 2016.

In order to support JSON, SQL Server 2016 introduced the following built-in functions and clauses:

  • ISJSON
  • JSON_VALUE
  • JSON_QUERY
  • JSON_MODIFY
  • OPENJSON
  • FOR JSON

 

There’s no specific data type in SQL Server to be used for JSON (unlike XML). You have to use NVARCHAR when you interact with JSON in SQL Server.

 


 

This is how we assign JSON data to a variable.

 

DECLARE @varJData AS NVARCHAR(4000)
SET @varJData = 
N'{
	"OrderInfo":{
		"Tag":"#ONLORD_12546_45634",
		"HeaderInfo":{
			"CustomerNo":"CUS0001",
			"OrderDate":"04-Jun-2016",
			"OrderAmount":1200.00,
			"OrderStatus":"1",
			"Contact":["+0000 000 0000000000", "info@abccompany.com", "finance@abccompany.com"]
		},
		"LineInfo":[
			{"ProductNo":"P00025", "Qty":3, "Price":200},
			{"ProductNo":"P12548", "Qty":2, "Price":300}
		]
	}
}'

We will look closely at how the aforementioned functions can be used, with some samples.

 

ISJSON()

As the name implies, the ISJSON function is used to validate a given JSON string. The function returns an INT value: if the provided string is properly formatted JSON it returns 1, else it returns 0.

Eg:

SELECT ISJSON(@varJData)
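For instance (a quick sketch; the literals are just for illustration), a string with a missing closing brace returns 0:

SELECT ISJSON(N'{"Key":"Value"}')	/* returns 1 */
SELECT ISJSON(N'{"Key":"Value"')	/* returns 0 - missing closing brace */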

 

JSON_VALUE()

JSON_VALUE function can be used to return a scalar value from a JSON string.

Eg:

DECLARE @varJData AS NVARCHAR(4000)
SET @varJData = 
N'{
	"OrderInfo":{
		"Tag":"#ONLORD_12546_45634",
		"HeaderInfo":{
			"CustomerNo":"CUS0001",
			"OrderDate":"04-Jun-2016",
			"OrderAmount":1200.00,
			"OrderStatus":"1",
			"Contact":["+0000 000 0000000000", "info@abccompany.com", "finance@abccompany.com"]
		},
		"LineInfo":[
			{"ProductNo":"P00025", "Qty":3, "Price":200},
			{"ProductNo":"P12548", "Qty":2, "Price":300}
		]
	}
}'

SELECT JSON_VALUE(@varJData,'$.OrderInfo.Tag')
SELECT JSON_VALUE(@varJData,'$.OrderInfo.HeaderInfo.CustomerNo')

Please note that the provided key is case sensitive: if you pass 'tag' instead of 'Tag' it will return NULL, since the function cannot find the key.

SELECT JSON_VALUE(@varJData,'$.OrderInfo.tag') /* This will return NULL */

In such a case, if you need to see the exact error or the root cause, you need to specify 'strict' before the path. Eg:

SELECT JSON_VALUE(@varJData,'strict $.OrderInfo.tag') /* This will throw an Error */

This will return the following error message instead of returning a NULL value.

Msg 13608, Level 16, State 1, Line 62
Property cannot be found on the specified JSON path.

JSON_VALUE can also be used to fetch an element from a simple array (not an object array). In our sample JSON there are two arrays, 'Contact' and 'LineInfo'; the first is a simple string array and the latter is an object array.

Suppose we need to fetch only the phone number from the contact details; we can use the following query:

SELECT JSON_VALUE(@varJData,'$.OrderInfo.HeaderInfo.Contact[0]')

This can also be used when we need to fetch an attribute from an array element. Suppose we need to get the product number from the first element of 'LineInfo'; we could use:

SELECT JSON_VALUE(@varJData, '$.OrderInfo.LineInfo[0].ProductNo')
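One thing to keep in mind: JSON_VALUE returns scalar values only. If the path points to an object or an array (such as 'Contact' or 'LineInfo'), it returns NULL in the default lax mode and raises an error in strict mode. Extracting objects and arrays is the job of JSON_QUERY, described next:

SELECT JSON_VALUE(@varJData,'$.OrderInfo.HeaderInfo.Contact')		/* returns NULL - Contact is an array, not a scalar */
SELECT JSON_VALUE(@varJData,'strict $.OrderInfo.HeaderInfo.Contact')	/* throws an error */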

 

JSON_QUERY()

The JSON_QUERY function is used when you need to extract an object or an array of data from a JSON string. We can extract the contact details and the line details, which are arrays in this scenario, as follows:

SELECT JSON_QUERY(@varJData, '$.OrderInfo.HeaderInfo.Contact')
SELECT JSON_QUERY(@varJData, '$.OrderInfo.LineInfo')

This can also be used to fetch a certain element from an object array. Suppose we want to fetch details for the second product in the LineInfo section; we can use the following:

SELECT JSON_QUERY(@varJData, '$.OrderInfo.LineInfo[1]')

**Note: If the JSON text contains duplicate properties - for example, two keys with the same name on the same level - the JSON_VALUE and JSON_QUERY functions will return the first value that matches the path.
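A minimal sketch of that behavior (the literal is hypothetical):

DECLARE @varDup AS NVARCHAR(100) = N'{"Key":"First","Key":"Second"}'
SELECT JSON_VALUE(@varDup,'$.Key')	/* returns 'First' */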

 

JSON_MODIFY()

The JSON_MODIFY function updates the value of a property in a JSON string and returns the updated JSON string. The syntax for this function is as follows:

JSON_MODIFY(expression, path, new_value)

Using this function you can update, insert, delete or append a value in the JSON string. We will look at each of these operations using the above JSON string.

Updating an Existing Value

In order to update an existing value you need to provide the exact path, followed by the new value.

SET @varJData = JSON_MODIFY(@varJData,'$.OrderInfo.Tag','#NEWTAG_00001')
SELECT JSON_VALUE(@varJData,'$.OrderInfo.Tag')

 

Deleting an Existing Value

In order to delete an existing value you need to provide the exact path, followed by the value NULL.

SET @varJData = JSON_MODIFY(@varJData,'$.OrderInfo.Tag',NULL)
SELECT JSON_VALUE(@varJData,'$.OrderInfo.Tag')
PRINT @varJData

When the value is printed you can see that the ‘Tag’ attribute has been completely removed from the JSON string.


 

Inserting a Value

In order to insert an attribute along with a value you need to provide a path which isn't currently available in the JSON, followed by the value. If the provided path is already present, the existing value will be replaced by the new value. The new value will always be added at the end of the existing JSON string.

SET @varJData = JSON_MODIFY(@varJData,'$.OrderInfo.Batch','#B_100000')
SELECT JSON_VALUE(@varJData,'$.OrderInfo.Batch')
PRINT @varJData


 

Appending a Value

In order to append to an existing array in a JSON string, you need to use 'append' before the path. Suppose we need to add another element to the 'Contact' array:

SET @varJData = JSON_MODIFY(@varJData, 'append $.OrderInfo.HeaderInfo.Contact','+0010 111 1111111111')
SELECT JSON_QUERY(@varJData,'$.OrderInfo.HeaderInfo.Contact')


 

JSON_MODIFY can only manipulate a single value at a time. Therefore, if the requirement is to change multiple values within a single query, you need to nest multiple JSON_MODIFY calls. Suppose we need to change the 'ProductNo' and the 'Price' of the first product in 'LineInfo'; we could use the following syntax:

SET @varJData =
	JSON_MODIFY( 
		JSON_MODIFY(@varJData,'$.OrderInfo.LineInfo[0].ProductNo','P99999')
		,'$.OrderInfo.LineInfo[0].Price'
		,150
	)


 

FOR JSON

The FOR JSON functionality is used when we need to export SQL tabular data as JSON. This is very similar to the functionality of 'FOR XML'. Each row will be formatted as a JSON object, and values in cells will become the values of that object's properties. Column names (or aliases) will be used as key names. Based on the options provided, there are two variations of 'FOR JSON' usage.

1. FOR JSON AUTO - This will automatically create nested JSON sub-arrays based on the table hierarchy used in the query (similar to FOR XML AUTO).

2. FOR JSON PATH - This enables you to define the structure of the output JSON using the column names/aliases. If you put dot-separated names in the column aliases, JSON properties will follow that naming convention. (This is similar to FOR XML PATH, where you can use slash-separated paths.)

 

In order to illustrate the aforementioned concepts we need to prepare some sample data. Please use the following scripts to generate the necessary data.

--== Generate Required Schemas ==--
CREATE TABLE OrderHeader(
	TAG					VARCHAR(24)
	,ORD_NO				VARCHAR(10)
	,CUST_NO			VARCHAR(8)
	,ORD_DATE			DATE
	,ORD_AMOUNT			MONEY
	,ORD_STATUS			TINYINT
	
)

CREATE TABLE OrderLine(
	ORD_NO				VARCHAR(10)
	,LINE_NO			INT
	,PROD_NO			VARCHAR(8)
	,ORD_QTY			INT
	,ITEM_PRICE			MONEY
)

CREATE TABLE CustomerContact(
	CONT_ID				INT
	,CUST_NO			VARCHAR(8)
	,CONTACT_DETAILS	VARCHAR(24)
)

--== Insert Sample Data ==--
INSERT INTO dbo.OrderHeader(TAG,ORD_NO,CUST_NO,ORD_DATE,ORD_AMOUNT,ORD_STATUS)
VALUES('#ONLORD_12546_45634','ORD_1021','CUS0001','04-Jun-2016',1200.00,1)

INSERT INTO dbo.OrderLine(ORD_NO,LINE_NO,PROD_NO,ORD_QTY,ITEM_PRICE)
VALUES ('ORD_1021',1,'P00025',3,200.00), ('ORD_1021',2,'P12548',2,300.00)

INSERT INTO dbo.CustomerContact(CONT_ID, CUST_NO, CONTACT_DETAILS)
VALUES (1,'CUS0001','+0000 000 0000000000') ,(2,'CUS0001','info@abccompany.com'),(3,'CUS0001','finance@abccompany.com')

 

Extracting data as JSON using FOR JSON AUTO

SELECT 
    H.TAG
   ,H.ORD_NO
   ,H.CUST_NO
   ,H.ORD_DATE
   ,H.ORD_AMOUNT
   ,H.ORD_STATUS
   ,L.ORD_NO
   ,L.LINE_NO
   ,L.PROD_NO
   ,L.ORD_QTY
   ,L.ITEM_PRICE
FROM
	dbo.OrderHeader AS H
	JOIN dbo.OrderLine AS L
		ON L.ORD_NO = H.ORD_NO
WHERE
	H.ORD_NO = 'ORD_1021'
FOR JSON AUTO

 

You will get a result similar to the one shown below:

[
    {
        "TAG":"#ONLORD_12546_45634",
        "ORD_NO":"ORD_1021",
        "CUST_NO":"CUS0001",
        "ORD_DATE":"2016-06-04",
        "ORD_AMOUNT":1200.0000,
        "ORD_STATUS":1,
        "L":[
            {"ORD_NO":"ORD_1021","LINE_NO":1,"PROD_NO":"P00025","ORD_QTY":3,"ITEM_PRICE":200.0000},
            {"ORD_NO":"ORD_1021","LINE_NO":2,"PROD_NO":"P12548","ORD_QTY":2,"ITEM_PRICE":300.0000}
        ]
    }
]

As described previously, 'FOR JSON AUTO' will simply use the column names or aliases as keys and produce the JSON. Table aliases will be used to create sub-arrays.

But we can get a result similar to our earlier examples by tweaking the above select statement as follows:

SELECT 
    H.TAG AS Tag
   ,H.ORD_NO AS OrderNo
   ,H.CUST_NO AS CustNo
   ,H.ORD_DATE AS OrderDate
   ,H.ORD_AMOUNT AS OrderAmount
   ,H.ORD_STATUS AS  OrderStatus
   ,LineInfo.ORD_NO AS [OrderNo]
   ,LineInfo.LINE_NO AS [LineNo]
   ,LineInfo.PROD_NO AS [ProdNo]
   ,LineInfo.ORD_QTY AS [Qty]
   ,LineInfo.ITEM_PRICE AS [ItemPrice]
FROM
	dbo.OrderHeader AS H
	JOIN dbo.OrderLine AS LineInfo
		ON LineInfo.ORD_NO = H.ORD_NO
WHERE
	H.ORD_NO = 'ORD_1021'
FOR JSON AUTO, ROOT ('OrderInfo')

Then we get the following JSON string:

{
    "OrderInfo":[
        {
            "Tag":"#ONLORD_12546_45634",
            "OrderNo":"ORD_1021",
            "CustNo":"CUS0001",
            "OrderDate":"2016-06-04",
            "OrderAmount":1200.0000,
            "OrderStatus":1,
            "LineInfo":[
                {"OrderNo":"ORD_1021","LineNo":1,"ProdNo":"P00025","Qty":3,"ItemPrice":200.0000},
                {"OrderNo":"ORD_1021","LineNo":2,"ProdNo":"P12548","Qty":2,"ItemPrice":300.0000}
            ]
        }
    ]
}

 

Extracting data as JSON using FOR JSON PATH

We can use the FOR JSON PATH functionality to easily format the output JSON the way we want. But there's a restriction when using 'FOR JSON PATH' to extract data: you cannot have the same column name (or alias) duplicated among multiple columns. This will result in an error.

We will see how the details are fetched using 'FOR JSON PATH':

SELECT 
    H.TAG
   ,H.ORD_NO
   ,H.CUST_NO
   ,H.ORD_DATE
   ,H.ORD_AMOUNT
   ,H.ORD_STATUS
   --,L.ORD_NO	--If this line is uncommented it will throw an error	
   ,L.LINE_NO
   ,L.PROD_NO
   ,L.ORD_QTY
   ,L.ITEM_PRICE
FROM
	dbo.OrderHeader AS H
	JOIN dbo.OrderLine AS L
		ON L.ORD_NO = H.ORD_NO
WHERE
	H.ORD_NO = 'ORD_1021'
FOR JSON PATH

 

We will get the following JSON result.

[
    {
        "TAG":"#ONLORD_12546_45634",
        "ORD_NO":"ORD_1021",
        "CUST_NO":"CUS0001",
        "ORD_DATE":"2016-06-04",
        "ORD_AMOUNT":1200.0000,
        "ORD_STATUS":1,
        "LINE_NO":1,
        "PROD_NO":"P00025",
        "ORD_QTY":3,
        "ITEM_PRICE":200.0000
    },
    {
        "TAG":"#ONLORD_12546_45634",
        "ORD_NO":"ORD_1021",
        "CUST_NO":"CUS0001",
        "ORD_DATE":"2016-06-04",
        "ORD_AMOUNT":1200.0000,
        "ORD_STATUS":1,
        "LINE_NO":2,
        "PROD_NO":"P12548",
        "ORD_QTY":2,
        "ITEM_PRICE":300.0000
    }
]

The advantage of using 'FOR JSON PATH' is that you have the ability to control the structure using the column names/aliases. When dot-separated aliases are used, JSON properties will follow that naming convention. Please consider the below query and the results.

SELECT 
    H.TAG		AS 'HeaderInfo.Tag'
   ,H.ORD_NO		AS 'HeaderInfo.OrderNo'
   ,H.CUST_NO		AS 'HeaderInfo.CustNo'
   ,H.ORD_DATE		AS 'HeaderInfo.OrderDate'
   ,H.ORD_AMOUNT	AS 'HeaderInfo.OrderAmount'
   ,H.ORD_STATUS	AS 'HeaderInfo.OrderStatus'
   ,L.ORD_NO		AS 'LineInfo.OrderNo'
   ,L.LINE_NO		AS 'LineInfo.LineNo'
   ,L.PROD_NO		AS 'LineInfo.ProdNo'
   ,L.ORD_QTY		AS 'LineInfo.Qty'
   ,L.ITEM_PRICE	AS 'LineInfo.ItemPrice'
FROM
	dbo.OrderHeader AS H
	JOIN dbo.OrderLine AS L
		ON L.ORD_NO = H.ORD_NO
WHERE
	H.ORD_NO = 'ORD_1021'
FOR JSON PATH

You will see the following JSON result.

[
    {
        "HeaderInfo":{
            "Tag":"#ONLORD_12546_45634",
            "OrderNo":"ORD_1021",
            "CustNo":"CUS0001",
            "OrderDate":"2016-06-04",
            "OrderAmount":1200.0000,
            "OrderStatus":1
        },
        "LineInfo":{"OrderNo":"ORD_1021","LineNo":1,"ProdNo":"P00025","Qty":3,"ItemPrice":200.0000}
    },
    {
        "HeaderInfo":{
            "Tag":"#ONLORD_12546_45634",
            "OrderNo":"ORD_1021",
            "CustNo":"CUS0001",
            "OrderDate":"2016-06-04",
            "OrderAmount":1200.0000,
            "OrderStatus":1
        },
        "LineInfo":{"OrderNo":"ORD_1021","LineNo":2,"ProdNo":"P12548","Qty":2,"ItemPrice":300.0000}
    }
]
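One more option worth mentioning here: by default FOR JSON wraps the output in square brackets (a JSON array). If the query is guaranteed to return a single row and you want a plain object instead, you can add the WITHOUT_ARRAY_WRAPPER option, as in the sketch below (note that it cannot be combined with the ROOT option):

SELECT 
    H.TAG		AS 'HeaderInfo.Tag'
   ,H.ORD_NO		AS 'HeaderInfo.OrderNo'
FROM
	dbo.OrderHeader AS H
WHERE
	H.ORD_NO = 'ORD_1021'
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER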

 

OPENJSON

OPENJSON is a table-valued function which goes through a given JSON string and returns a relational table with its contents. It iterates through JSON object arrays and elements and generates a row for each element. There are two variations of this functionality.

  • Without a pre-defined schema, where the values are returned as key-value pairs, including a type to identify what sort of value is being returned.
  • With a well-defined schema. This schema is provided by us in the OPENJSON statement.

 

OPENJSON without a pre-defined schema

We will use the following JSON data string to find out the types which are returned for each kind of value.


DECLARE @vJSON AS NVARCHAR(4000) = N'{
	"Null Data":null,
	"String Data":"Some String Data",
	"Numeric Data": 1000.00,
	"Boolean Data": true,
	"Array Data":["A","B","C"],
	"Object Data":{"SomeKey":"Some Value"}
	}';  
  
SELECT * FROM OPENJSON(@vJSON) 

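Since no schema is provided, OPENJSON returns three columns: key, value and type. The type column is a TINYINT describing the JSON type of each value (0 = null, 1 = string, 2 = number, 3 = true/false, 4 = array, 5 = object). The result should therefore look roughly like this:

key             value                      type
Null Data       NULL                       0
String Data     Some String Data           1
Numeric Data    1000.00                    2
Boolean Data    true                       3
Array Data      ["A","B","C"]              4
Object Data     {"SomeKey":"Some Value"}   5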

 

Now, with a more realistic set of JSON data:

DECLARE @vJSON AS NVARCHAR(4000) = N'{
	"Tag":"#ONLORD_12546_45634",
	"OrderNo":"ORD_1021",
	"CustNo":"CUS0001",
	"OrderDate":"2016-06-04",
	"OrderAmount":1200.0000,
	"OrderStatus":1
}';  
  
SELECT * FROM OPENJSON(@vJSON)  


 

OPENJSON with a pre-defined schema

We will use the same JSON string from the previous example and generate the result set with a pre-defined schema.

DECLARE @vJSON AS NVARCHAR(4000) = N'{
	"Tag":"#ONLORD_12546_45634",
	"OrderNo":"ORD_1021",
	"CustNo":"CUS0001",
	"OrderDate":"2016-06-04",
	"OrderAmount":1200.0000,
	"OrderStatus":1
}';  
  
SELECT * FROM OPENJSON(@vJSON) WITH(
	Tag				VARCHAR(24)
	,OrderNo		VARCHAR(8)
	,CustNo			VARCHAR(8)
	,OrderDate		DATE
	,OrderAmount	MONEY
	,OrderStatus	INT
) 
 

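OPENJSON also accepts an optional second argument, a JSON path, which comes in handy when shredding a nested array into rows. A sketch reusing the 'LineInfo' array from the order JSON used earlier in this post (when a column name in the WITH clause matches the key, no explicit path is needed):

DECLARE @varJData AS NVARCHAR(4000) = N'{
	"OrderInfo":{
		"LineInfo":[
			{"ProductNo":"P00025", "Qty":3, "Price":200},
			{"ProductNo":"P12548", "Qty":2, "Price":300}
		]
	}
}';

SELECT * FROM OPENJSON(@varJData, '$.OrderInfo.LineInfo') WITH(
	ProductNo	VARCHAR(8)
	,Qty		INT
	,Price		MONEY
)

This returns one row per line item, with the keys mapped to the columns declared in the WITH clause.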

 

This is basically what has been provided to support JSON data natively in SQL Server 2016. Hope this will be helpful to you.

Tuesday 23 August 2016

DROP IF EXISTS in SQL Server 2016 (DIE)

 

Prior to SQL Server 2016, when we needed to drop a SQL object, it was best practice to check whether the respective object exists. Otherwise the operation will result in an error.


DROP TABLE [SomeTable]

If the object is not found it will return the following error.

Msg 3701, Level 11, State 5, Line 11
Cannot drop the table 'SomeTable', because it does not exist or you do not have permission.

Hence we need to change the syntax as:

IF EXISTS(SELECT 'x' FROM sys.objects AS O WHERE O.name = 'SomeTable' AND O.[type] = 'U')
    DROP TABLE [SomeTable]

   
OR

IF OBJECT_ID('dbo.SomeTable','U') IS NOT NULL
    DROP TABLE [SomeTable]

   
   
In SQL Server 2016 there is an easier way to do this, with comparatively less code.

DROP TABLE IF EXISTS [SomeTable];
DROP PROCEDURE IF EXISTS [SomeProcedure];

This can even be used when dropping columns and constraints from a table.

ALTER TABLE [TableName] DROP CONSTRAINT IF EXISTS [ConstraintName]
ALTER TABLE [TableName] DROP COLUMN IF EXISTS [ColumnName]

Eg:
CREATE TABLE SomeTable(
    Id        INT
    ,Name    VARCHAR(10)        NOT NULL CONSTRAINT [DF_SomeTable_Name] DEFAULT ('')
)

ALTER TABLE dbo.SomeTable
DROP CONSTRAINT IF EXISTS [DF_SomeTable_Name]

ALTER TABLE dbo.SomeTable
DROP COLUMN IF EXISTS [Name]


The beauty of this functionality is that even if the object does not exist, it will not fail and execution will continue.

Currently, the following objects can be dropped with the DIE functionality:

  • ASSEMBLY
  • VIEW
  • DATABASE
  • DEFAULT
  • FUNCTION
  • PROCEDURE
  • INDEX
  • AGGREGATE
  • ROLE
  • RULE
  • SCHEMA
  • SECURITY POLICY
  • SEQUENCE
  • SYNONYM
  • TABLE
  • TRIGGER
  • TYPE
  • USER

Hope this will be useful to you.

Saturday 13 August 2016

Connecting to an MS SQL Instance using NodeJS (Fixing ConnectionError: Port for SQLServer not found in ServerName & Failed to connect to localhost:undefined in 15000ms)

After a few debates and discussions on new technologies in the market and how to adapt to them, over the weekend I thought of exploring NodeJS and its applications. Since I spend most of my time designing data-centric applications at the office, as the first step I thought of connecting to MS SQL Server using NodeJS.
As a newbie to NodeJS, I went through the official documentation and managed to achieve it. However, during the course, I faced many difficulties, and by referring to many articles I managed to resolve all these hurdles.
So I thought of writing down the problems I faced and how to overcome them, hoping it will be of great help to anyone who's exploring or trying to achieve the same thing.
For this, I will be using SQL Server 2014 on an Instance (.\SQL2K14).

1. First you need to download and install NodeJS. (https://nodejs.org/en/)
2. Install MSSQL package for Node, using the following syntax: (Use windows command prompt)
npm install mssql
3. Create a file named ‘connecttosql.js’ and include the following code:

//We require mssql package for this sample
var sqlcon = require('mssql');

function GetSQLData(queryCallback){        //A callback function is taken as an argument. Once the operation is completed we will be calling this
   
    //SQL Configuration
    var config = {
        user:'###'            //SQL User Id. Please provide a valid user
        ,password:'######'    //SQL Password. Please provide a valid password
        ,server:'localhost\\SQL2K14'   
        /*
            Since my SQL is an instance I am using 'localhost\\Instance'.
            If you have SQL installed on the default instance, it should be server:'localhost'
        */
        ,database: 'master'        //You can use any database here
    }

    var connection = new sqlcon.Connection(config,function(err){
        //In case of an error print the error to the console. You can have your custom error handling
        if (err) console.log(err);
       
        //Query Database
        var dbQuery = new sqlcon.Request(connection);
        //Purposely we are delaying the results
        dbQuery.query("WAITFOR DELAY '00:00:05';SELECT * FROM INFORMATION_SCHEMA.TABLES",function(err,resultset){
            //In case of an error print the error to the console. You can have your custom error handling
            if (err) console.log(err);
           
            //Passing the resultset to the callback function
            queryCallback(resultset);
        })
    });
}

function callback (resultset){
    console.dir('Results returned and printed from the call back function');
    console.dir(resultset);
   
    //Exit the application
    console.dir('Exiting the Application');
    process.exit(0);
}

//Calling the function
console.dir('Calling GetSQLData');
GetSQLData(callback);
/*
    Once we call this function, even though there's a delay in returning the results,
    you will see the next line printing 'Waiting for callback function to get invoked...'
*/
console.dir('Waiting for callback function to get invoked...');

I have provided the relevant information as comments.
Before running the program, please make sure the following configurations on the SQL Server side are already done:

1. Enable the TCP/IP protocol in SQL Server Configuration Manager for both server and client.


Otherwise, running it will result in the error shown below:
{ [ConnectionError: Port for SQL2K14 not found in ServerName;[YourPCName];InstanceName;SQL2K14;IsClustered;No;Version;12.0.2000.8;np;\\[YourPCName]\pipe\M
SSQL$SQL2K14\sql\query;;]
  name: 'ConnectionError',
  message: 'Port for SQL2K14 not found in ServerName;[YourPCName];InstanceName;SQL2K14;IsClustered;No;Version;12.0.2000.8;np;\\\\[YourPCName]\\pipe\\MSSQL
$SQL2K14\\sql\\query;;',
  code: 'EINSTLOOKUP' }
{ [ConnectionError: Connection is closed.]
  name: 'ConnectionError',
  message: 'Connection is closed.',
  code: 'ECONNCLOSED' }

2. In SQL Server Configuration Manager, under SQL Server Services, make sure that the 'SQL Server Browser' service is running.


Otherwise, running the script will result in the error shown below:
{ [ConnectionError: Failed to connect to localhost:undefined in 15000ms]
  name: 'ConnectionError',
  message: 'Failed to connect to localhost:undefined in 15000ms',
  code: 'ETIMEOUT' }
{ [ConnectionError: Connection is closed.]
  name: 'ConnectionError',
  message: 'Connection is closed.',
  code: 'ECONNCLOSED' }

If the aforementioned issues are already addressed, execute the above file using the following syntax in a Windows command window:
node connecttosql.js
You should get a result similar to the one shown below:
'Calling GetSQLData'
'Waiting for callback function to get invoked...'
'Results returned and printed from the call back function'
[ { TABLE_CATALOG: 'master',
    TABLE_SCHEMA: 'dbo',
    TABLE_NAME: 'spt_fallback_db',
    TABLE_TYPE: 'BASE TABLE' },
  { TABLE_CATALOG: 'master',
    TABLE_SCHEMA: 'dbo',
    TABLE_NAME: 'spt_fallback_dev',
    TABLE_TYPE: 'BASE TABLE' },
  { TABLE_CATALOG: 'master',
    TABLE_SCHEMA: 'dbo',
    TABLE_NAME: 'spt_fallback_usg',
    TABLE_TYPE: 'BASE TABLE' },
  { TABLE_CATALOG: 'master',
    TABLE_SCHEMA: 'dbo',
    TABLE_NAME: 'spt_values',
    TABLE_TYPE: 'VIEW' },
  { TABLE_CATALOG: 'master',
    TABLE_SCHEMA: 'dbo',
    TABLE_NAME: 'spt_monitor',
    TABLE_TYPE: 'BASE TABLE' },
  { TABLE_CATALOG: 'master',
    TABLE_SCHEMA: 'dbo',
    TABLE_NAME: 'MSreplication_options',
    TABLE_TYPE: 'BASE TABLE' } ]
'Exiting the Application'

I hope this will help anyone who’s using node to connect to SQL and facing the aforementioned issues.

Wednesday 9 March 2016

String or binary data would be truncated / Arithmetic overflow error converting numeric to data type numeric – Workaround

 

There’s nothing more annoying than getting the error ‘String or binary data would be truncated’ or ‘Arithmetic overflow error converting numeric to data type numeric’, when you need to insert data to a table using a SELECT statement. To make it more interesting, the SQL won’t be providing us the name of the column (or columns) which is causing this issue. (This is due to the SQL architecture on how it executes queries)

To illustrate this I will use a small sample.

Suppose we have a table to store some Customer details:

CREATE TABLE Customer_Data(
    CustId        TINYINT
    ,CustFName    VARCHAR(10)
    ,CustLName    VARCHAR(10)
    ,MaxCredit    NUMERIC(6,2)
)

We will try to insert details into the above table. (In reality the SELECT statement will be very complex and could fetch lots of rows.)


INSERT INTO dbo.Customer_Data(
    CustId
    ,CustFName
    ,CustLName
    ,MaxCredit
)

SELECT 1 AS CustId,'John' AS CustFName,'Doe' AS CustLName,1000.00 AS MaxCredit UNION ALL
SELECT 2,'Jane','Doe',1000.00 UNION ALL
SELECT 3,'James','Whitacker Jr.',15000.00

 

This will result in the following error:

Msg 8152, Level 16, State 14, Line 48
String or binary data would be truncated.

The statement has been terminated.

The challenge here is to find out which columns are actually having this issue. (As mentioned, in reality the number of columns could be very large.)

However, there is a small workaround which we can use to find out the columns which are causing the insertion to fail. You need to do the following in order to find these columns.

1. First, create a table using the same select statement. (You can either create a temporary table or an actual table based on the environment and your need.) I will create two tables, one actual and one temporary, to illustrate both options.

SELECT A.*
INTO Temp_Customer_Data
FROM(
    SELECT 1 AS CustId,'John' AS CustFName,'Doe' AS CustLName,1000.00 AS MaxCredit UNION ALL
    SELECT 2,'Jane','Doe',1000.00 UNION ALL
    SELECT 3,'James','Whitacker Jr.',15000.00
) AS A


SELECT A.*
INTO #Customer_Data
FROM(
    SELECT 1 AS CustId,'John' AS CustFName,'Doe' AS CustLName,1000.00 AS MaxCredit UNION ALL
    SELECT 2,'Jane','Doe',1000.00 UNION ALL
    SELECT 3,'James','Whitacker Jr.',15000.00
) AS A

2. Use the following queries to identify the problem columns.

Actual Table:

;WITH Cte_Source AS (
SELECT
    C.COLUMN_NAME
    ,C.DATA_TYPE
    ,C.CHARACTER_MAXIMUM_LENGTH
    ,C.NUMERIC_PRECISION
    ,C.NUMERIC_SCALE
FROM
    INFORMATION_SCHEMA.TABLES AS T
    JOIN INFORMATION_SCHEMA.COLUMNS AS C
        ON C.TABLE_CATALOG = T.TABLE_CATALOG
        AND C.TABLE_NAME = T.TABLE_NAME
        AND C.TABLE_SCHEMA = T.TABLE_SCHEMA
WHERE
    T.TABLE_NAME = 'Temp_Customer_Data'        -- Source Table
    AND T.TABLE_SCHEMA = 'dbo'
)
,Cte_Destination AS (
SELECT
    C.COLUMN_NAME
    ,C.DATA_TYPE
    ,C.CHARACTER_MAXIMUM_LENGTH
    ,C.NUMERIC_PRECISION
    ,C.NUMERIC_SCALE
FROM
    INFORMATION_SCHEMA.TABLES AS T
    JOIN INFORMATION_SCHEMA.COLUMNS AS C
        ON C.TABLE_CATALOG = T.TABLE_CATALOG
        AND C.TABLE_NAME = T.TABLE_NAME
        AND C.TABLE_SCHEMA = T.TABLE_SCHEMA
WHERE
    T.TABLE_NAME = 'Customer_Data'        -- Destination Table
    AND T.TABLE_SCHEMA = 'dbo'
)
SELECT
    S.COLUMN_NAME
   ,S.DATA_TYPE
   ,S.CHARACTER_MAXIMUM_LENGTH
   ,S.NUMERIC_PRECISION
   ,S.NUMERIC_SCALE

   ,D.COLUMN_NAME
   ,D.DATA_TYPE
   ,D.CHARACTER_MAXIMUM_LENGTH
   ,D.NUMERIC_PRECISION
   ,D.NUMERIC_SCALE
FROM
    Cte_Source AS S
    JOIN Cte_Destination AS D
        ON D.COLUMN_NAME = S.COLUMN_NAME
WHERE
    S.CHARACTER_MAXIMUM_LENGTH > D.CHARACTER_MAXIMUM_LENGTH
    OR S.NUMERIC_PRECISION > D.NUMERIC_PRECISION

 

Temporary Table:

;WITH Cte_Source AS (
SELECT
    C.COLUMN_NAME
    ,C.DATA_TYPE
    ,C.CHARACTER_MAXIMUM_LENGTH
    ,C.NUMERIC_PRECISION
    ,C.NUMERIC_SCALE
FROM
    tempdb.sys.objects so
    JOIN tempdb.INFORMATION_SCHEMA.TABLES AS T
        ON so.name = T.TABLE_NAME
        AND so.[object_id] = OBJECT_ID('tempdb..#Customer_Data')
    JOIN tempdb.INFORMATION_SCHEMA.COLUMNS AS C
        ON C.TABLE_CATALOG = T.TABLE_CATALOG
        AND C.TABLE_NAME = T.TABLE_NAME
        AND C.TABLE_SCHEMA = T.TABLE_SCHEMA
   
WHERE
    T.TABLE_SCHEMA = 'dbo'
)
,Cte_Destination AS (
SELECT
    C.COLUMN_NAME
    ,C.DATA_TYPE
    ,C.CHARACTER_MAXIMUM_LENGTH
    ,C.NUMERIC_PRECISION
    ,C.NUMERIC_SCALE
FROM
    INFORMATION_SCHEMA.TABLES AS T
    JOIN INFORMATION_SCHEMA.COLUMNS AS C
        ON C.TABLE_CATALOG = T.TABLE_CATALOG
        AND C.TABLE_NAME = T.TABLE_NAME
        AND C.TABLE_SCHEMA = T.TABLE_SCHEMA
WHERE
    T.TABLE_NAME = 'Customer_Data'        -- Destination Table
    AND T.TABLE_SCHEMA = 'dbo'
)
SELECT
    S.COLUMN_NAME
   ,S.DATA_TYPE
   ,S.CHARACTER_MAXIMUM_LENGTH
   ,S.NUMERIC_PRECISION
   ,S.NUMERIC_SCALE

   ,D.COLUMN_NAME
   ,D.DATA_TYPE
   ,D.CHARACTER_MAXIMUM_LENGTH
   ,D.NUMERIC_PRECISION
   ,D.NUMERIC_SCALE
FROM
    Cte_Source AS S
    JOIN Cte_Destination AS D
        ON D.COLUMN_NAME = S.COLUMN_NAME
WHERE
    S.CHARACTER_MAXIMUM_LENGTH > D.CHARACTER_MAXIMUM_LENGTH
    OR S.NUMERIC_PRECISION > D.NUMERIC_PRECISION

 

Both of the aforementioned queries will return the same result, flagging three columns.


The reasons these three columns are returned are as follows:

1. CustId ==> In our destination table, CustId's data type is TINYINT. Even though the select query returns results within that boundary, the data type our insertion query produces is INT. So there is a possibility of large values that the destination column could not hold.

2. CustLName ==> 'Whitacker Jr.' exceeds the maximum length of 10 defined in the destination table.

3. MaxCredit ==> In the destination table the type of the column is NUMERIC(6,2), which means it can hold values only up to 9999.99. But our insertion query contains a record with the value 15000.00.
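Once the offending columns are identified, the usual fix is to either widen the destination columns or explicitly cast/trim the values in the SELECT. A sketch (the sizes are illustrative):

ALTER TABLE dbo.Customer_Data ALTER COLUMN CustLName VARCHAR(20)

-- or, if truncation is acceptable, trim the source value to fit the destination
SELECT CAST('Whitacker Jr.' AS VARCHAR(10))    -- returns 'Whitacker '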

 

Hope this might be helpful to you.

Thursday 3 March 2016

Extracting Date (Excluding Time) from a DateTime value in SQL Server

 

SQL Server supports many data types where we can store a date along with the time, such as:

  • DateTime
  • SmallDateTime
  • DateTimeOffset
  • DateTime2

But in some cases it's required to fetch only the date portion from one of the aforementioned types of fields.

There are a few ways we can achieve this task easily using T-SQL.

The easiest method is to CAST the DateTime value directly to the DATE type.

SELECT CAST(GETDATE() AS DATE)                                    --==> 2016-03-03

You can also achieve this by using the CONVERT function, providing different styles as per your requirement.

SELECT CONVERT(VARCHAR(24),GETDATE(),101)                        --==> 03/03/2016
SELECT CONVERT(VARCHAR(24),GETDATE(),102)                        --==> 2016.03.03

Please refer to the following URL for more details on the CONVERT function and the supported styles: https://msdn.microsoft.com/en-sg/library/ms187928.aspx

But if your requirement is to return a DateTime type having only the date portion, you can use the following syntax:

SELECT CAST(FLOOR(CAST(GETDATE() AS FLOAT)) AS DATETIME)        --==> 2016-03-03 00:00:00.000
SELECT CAST(CONVERT(VARCHAR(8),GETDATE(),112) AS DATETIME)      --==> 2016-03-03 00:00:00.000
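Another classic approach that returns a DateTime at midnight, while avoiding the round trip through FLOAT or a string, is the DATEADD/DATEDIFF idiom:

SELECT DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()), 0)             --==> 2016-03-03 00:00:00.000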

Tuesday 16 February 2016

Index REBUILD vs. REORGANIZE in SQL SERVER

A couple of days back an interesting statement (or rather a question) was brought up by one of my colleagues in the company. Ultimately the initial statement left us with one simple question: what is the difference between an index REBUILD and a REORGANIZE, and when exactly should each be used?

If you google the aforementioned you can find numerous posts/blogs regarding this. Therefore I will keep things very simple and easy to understand.

Rebuilding or reorganizing an index is required when index fragmentation has reached a considerable percentage. The fragmentation percentage can be identified using the dynamic management view sys.dm_db_index_physical_stats in SQL Server.

You can get more details on the view at the following link: https://msdn.microsoft.com/en-us/library/ms188917.aspx

You can get a list of fragmented indexes using the following query:

SELECT
    OBJECT_NAME(Stat.object_id)
    ,I.name
    ,Stat.index_type_desc
    ,Stat.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(),NULL,NULL,NULL,NULL) AS Stat
JOIN sys.indexes AS I
        ON Stat.index_id = I.index_id
        AND Stat.object_id = I.object_id
WHERE
    Stat.avg_fragmentation_in_percent > 30

Executing the above query will give you a list of fragmented indexes which have more than 30% fragmentation. 'index_type_desc' will give you a hint about what sort of index it is (clustered, non-clustered, heap, etc.).

As per the guidelines provided by Microsoft, it's best practice to reorganize the index if the fragmentation is more than 5% but less than or equal to 30%, and rebuild it if it's more than 30%.

 

Rebuilding Indexes

  • Should perform this if the fragmentation is more than 30%
  • Operation can be done online or offline

Index rebuilding can be done using the following syntax:

In order to rebuild all the indexes on a specific table:

USE <Database_Name>
GO

ALTER INDEX ALL ON <Table_Name> REBUILD
GO

 

In order to rebuild only a specific index:

USE <Database_Name>
GO

ALTER INDEX <Index_Name> ON <Table_Name> REBUILD
GO

 

Reorganizing Indexes

  • Should perform this if the fragmentation is more than 5% but less than or equal to 30%
  • Operation is always online

Index reorganizing can be done using the following syntax:

In order to reorganize all the indexes on a specific table:

USE <Database_Name>
GO

ALTER INDEX ALL ON <Table_Name> REORGANIZE
GO

In order to reorganize only a specific index:

USE <Database_Name>
GO

ALTER INDEX <Index_Name> ON <Table_Name> REORGANIZE
GO

 

Optionally, you can set many attributes during the rebuild or reorganize process (e.g. FILLFACTOR, SORT_IN_TEMPDB, etc.). Please check the following link for more details on the REBUILD options: https://msdn.microsoft.com/en-us/library/ms188388.aspx
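For example, a rebuild which sets a fill factor and sorts in tempdb could look like the sketch below (note that ONLINE = ON additionally requires Enterprise edition):

USE <Database_Name>
GO

ALTER INDEX <Index_Name> ON <Table_Name>
REBUILD WITH (FILLFACTOR = 80, SORT_IN_TEMPDB = ON, ONLINE = ON)
GO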

However, REBUILD or REORGANIZE will not have an effect on HEAP fragmentation. In order to remove heap fragmentation you can use the following syntax (*** NOT THE BEST PRACTICE):

USE <Database_Name>
GO

ALTER TABLE <Table_Name> REBUILD
GO

** Even though the aforementioned syntax will remove the HEAP fragmentation, it is considered as bad as creating and dropping a clustered index, which will leave behind lots of fragmentation on non-clustered indexes. The better practice would be to create a clustered index on the table to remove the HEAP fragmentation. You can find more details on this in a blog post by Paul S. Randal, where he has illustrated it nicely.

Wednesday 20 January 2016

Error accessing Oracle Database Objects via Linked Server in SQL Server (The OLE DB provider "OracleOLEDB.Oracle" for linked server reported an error. Access denied.)


In one of the projects we are working on these days, there was a requirement to fetch some details directly from an Oracle database via views. Initially everything was set up correctly on the Oracle database & server side so that we could access the relevant schemas and fetch data without any issue. Once the Oracle client was installed and the configurations were correct ("tnsnames.ora"), we were able to fetch the details using .NET code. And when we checked using the Oracle SQL Developer UI, it was evident that the details were easily fetched.
However, we faced an issue when we were asked to access and fetch the same set of details from SQL objects using OPENQUERY. Even when we tried a simple query, such as retrieving "sysdate", we got an 'Access Denied' error.


SELECT * FROM OPENQUERY ([LINKED_SERVER], 'SELECT sysdate FROM DUAL')

The OLE DB provider "OracleOLEDB.Oracle" for linked server reported an error. Access denied. Cannot get the column information from OLE DB provider "OraOLEDB.Oracle" for linked server "<Linked_Server>"


After spending some time with the configurations on both the SQL and Oracle sides, we were able to rectify this issue by enabling the "Allow inprocess" option for the provider under Linked Servers on the SQL side.
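The same option can also be set using T-SQL instead of the SSMS UI. A sketch using the undocumented sp_MSset_oledb_prop procedure (use at your own discretion, since Microsoft does not officially document it):

USE [master]
GO

-- Enable the 'Allow inprocess' option for the Oracle OLE DB provider
EXEC master.dbo.sp_MSset_oledb_prop N'OraOLEDB.Oracle', N'AllowInProcess', 1
GO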


I am sharing this hoping that it will help someone resolve a similar kind of issue without any hassle.





Wednesday 25 November 2015

Analyzing SQL Server Error Logs / Agent Logs using T-SQL

Even though you design your SQL scripts using best practices, and configure SQL Server to perform correctly and in an optimized manner, you cannot prevent things from going wrong. Luckily SQL Server does a great job of logging all the issues we encounter along the way. Things could have been worse if you had to go through the error log file using only a text editor like Notepad. Fortunately, SQL Server provides some help when you need to dig deep into the Error Log: the built-in Log File Viewer in Management Studio.


But things can get more complicated if the Error Log contains lots of records, and among those records you have to hunt for the issue you are looking for.


Even though it provides some searching and filtering capabilities, it can still be very challenging and time consuming.


However, we do have another option which might come in handy: querying the Error Logs using T-SQL. This can be done using the system procedure 'sys.sp_readerrorlog', which takes a few parameters.
USE [master]
GO
/****** Object:  StoredProcedure [sys].[sp_readerrorlog]    Script Date: 24/11/2015 7:11:36 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF
GO
ALTER proc [sys].[sp_readerrorlog](
    @p1        int = 0,
    @p2        int = NULL,
    @p3        nvarchar(4000) = NULL,
    @p4        nvarchar(4000) = NULL)
as
begin

    if (not is_srvrolemember(N'securityadmin') = 1)
    begin
       raiserror(15003,-1,-1, N'securityadmin')
       return (1)
    end
    
    if (@p2 is NULL)
        exec sys.xp_readerrorlog @p1
    else 
        exec sys.xp_readerrorlog @p1,@p2,@p3,@p4   
end



  1. @p1 –> This represents the error log which you need to inspect (0 ~ Current | 1 ~ Archive #1 etc..)
  2. @p2 –> Type of the error log which you want to inspect (NULL or 1 ~ Error Log | 2 ~ SQL Agent Log)
  3. @p3 –> 1st Search Parameter (A value which you want to search the contents for)
  4. @p4 –> 2nd Search Parameter (A value which you want to search to further refine the result set)

**Please note: the aforementioned parameters are optional. Therefore, if you don't provide any parameters, it will return the whole contents of the current/active Error Log.

Few Examples

1. This will return all entries in the current Error Log
EXEC sys.sp_readerrorlog @p1 = 0


2. This will return all the entries in the current SQL Agent Log
EXEC sys.sp_readerrorlog @p1 = 0, @p2 = 2


3. This will return all the entries in the current SQL Error Log wherever the value 'CLR' is present.
EXEC sys.sp_readerrorlog @p1=0, @p2=1, @p3='CLR'


image


4. This will return the entries in the current SQL Error Log where both the values 'CLR' and 'Framework' exist.
EXEC sys.sp_readerrorlog @p1=0, @p2=1, @p3='CLR', @p4='Framework'


image

When we execute the stored procedure 'sys.sp_readerrorlog', it internally calls an extended stored procedure, 'sys.xp_readerrorlog', which accepts 7 parameters. The parameter details are as follows:

  1. Log Number –> 0 ~ Current | 1 ~ Archive #1 | 2 ~ Archive #2 etc…
  2. Log Type –> 1 ~ SQL Error Log | 2 ~ SQL Agent Log
  3. Search Text 1 –> A search term which will be matched against the Text column
  4. Search Text 2 –> A second search term which will be matched against the Text column. **If both search texts are supplied, only rows containing both texts are returned.
  5. Start Date –> Log entries where the 'Log Date' is newer than the date provided (including the date provided)
  6. End Date –> Log entries between the Start Date and the End Date
  7. Sort Order –> ASC ~ Ascending | DESC ~ Descending

Eg:

EXEC sys.xp_readerrorlog 0,1,N'',N'', '20151124','20151125','DESC'    


I hope this information will help you when you need to query the Error Log in order to troubleshoot an issue.

Sunday 22 November 2015

Enabling Instant File Initialization in SQL Server


Every time a SQL data file or log file expands, SQL Server fills the newly allocated (expanded) space with zeros. There are a few good and bad sides to this feature (zeroing the allocated space). One downside is that this process will block all the sessions which are writing to these files (data and log files) during the initialization period. One might argue this time period is very small, but it could be extremely critical for some processes.

However, enabling the Instant File Initialization behavior will make sure that the aforementioned issue does not have any effect when a SQL data file is expanded. But there is a security risk in enabling this feature. When it is enabled, the unallocated part of the SQL data file could contain information related to previously deleted files (OS-related information). There are tools which can examine this data, and people who have access to the data file (most probably DB administrators) can easily see the underlying data of these unallocated areas.

Before enabling this we will check how SQL will behave with this option disabled:

DBCC TRACEON(3004,3605,-1)
GO

CREATE DATABASE Sample_Database
GO

EXEC sys.sp_readerrorlog
GO

DROP DATABASE Sample_Database
GO

DBCC TRACEOFF(3004,3605,-1)
GO


 




In the error log you can clearly see that SQL Server had to zero out both the data file and the log file. (I have only highlighted the data file [.mdf].)


Enabling Instant File Initialization can be done by granting the 'SE_MANAGE_VOLUME_NAME' permission (also known as 'Perform Volume Maintenance Tasks') to the SQL Server startup account. This can be done as follows:


Open the Local Security Policy management console (execute secpol.msc from the Run command or from the command line). Double-click on 'Perform Volume Maintenance Tasks', which can be found under Security Settings –> Local Policies –> User Rights Assignment.




And add the SQL Server startup account to the list.




 


The SQL Server startup account can be found on the 'Log On' tab of the 'SQL Server (<Instance>)' service in the Services console.




 


*** Please note: SQL Server checks whether this feature is enabled or disabled during startup. Therefore we need to restart the service once it's enabled or disabled. Also, this can only be enabled for SQL data files; it cannot be enabled for log files.


After restarting the SQL service we will execute the code snippet which we executed earlier. You will be able to see that SQL Server is now only zeroing the log file.

