I have been working on improvements to some of the regular ways we monitor for important changes.  We always have to be on the lookout for unexpected changes in the SQL instances we monitor, since we are often not the only team with sysadmin access.  We are usually the best trained to take care of and configure these instances, but we sometimes find that someone has changed the SQL or database configuration without telling us.  We want to know when things like this happen!

Two of the important changes I want to know about are when configuration changes are made to a SQL instance and when something about a database is changed.  Database changes include adding or removing files, new or dropped databases, changes in state, and configuration changes.  It’s easy to get immediate notification of all of these by putting a few DDL triggers in place at the server level.  I will show you how to do this on my local test instance.

Standard Disclaimer – Please make sure to test these DDL triggers in a development or test server first, and only move them to production when you are sure that things work as expected.

Requirements:  To make this work you need a SQL instance with Database Mail configured and able to send email.  Substitute your mail profile name everywhere you see “DBMail” and substitute your email address where you see somebody@somecompany.com.
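
Before creating any triggers, it is worth confirming that Database Mail can actually deliver from your profile.  Here is a quick sanity check using the same placeholders:

EXEC msdb.dbo.sp_send_dbmail
 @profile_name = 'DBMail',
 @recipients = 'somebody@somecompany.com',
 @subject = 'Database Mail test',
 @body = 'If you can read this, Database Mail is working.'
GO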

Instance Configuration Change Notification

Use the following to create the DDL trigger that will notify you when something is changed in the instance configuration:

USE master
GO
-- Trigger to notify of instance configuration changes
CREATE TRIGGER [ddl_trig_alter_inst_config]
ON ALL SERVER
FOR ALTER_INSTANCE
AS
DECLARE @DDLText VARCHAR(MAX)
DECLARE @EmailSubject VARCHAR(MAX)
DECLARE @CharInxOutput INT
SET @EmailSubject = 'Instance configuration changed on ' + @@SERVERNAME + ' by ' + SUSER_SNAME() 
SET @DDLText = (SELECT EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','NVARCHAR(MAX)'))
SET @CharInxOutput = CHARINDEX('show', @DDLText)

IF @CharInxOutput = 0  -- Use this to filter out all the 'show advanced options' statements that occur when using the GUI
	EXEC msdb.dbo.sp_send_dbmail
	 @profile_name = 'DBMail',
	 @recipients = 'somebody@somecompany.com',
	 @body = @DDLText,
	 @subject = @EmailSubject
GO

Now make a configuration change:

EXEC sys.sp_configure N'max server memory (MB)', N'7000'  -- Changes max memory to 7GB
GO
RECONFIGURE WITH OVERRIDE
GO

And watch your email:

Database Configuration Change Notifications

Use the following to create DDL triggers that will notify you when something is changed in any database in the instance:

-- Trigger to notify of new database created
CREATE TRIGGER [ddl_trig_new_db]
ON ALL SERVER
FOR CREATE_DATABASE
AS
DECLARE @DDLText VARCHAR(MAX)
DECLARE @EmailSubject VARCHAR(MAX)
SET @EmailSubject = 'Database Created on ' + @@SERVERNAME + ' by ' + SUSER_SNAME() 
SET @DDLText = (SELECT EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','NVARCHAR(MAX)'))

EXEC msdb.dbo.sp_send_dbmail
 @profile_name = 'DBMail',
 @recipients = 'somebody@somecompany.com',
 @body = @DDLText,
 @subject = @EmailSubject
GO

-- Trigger to notify of dropped database
CREATE TRIGGER [ddl_trig_drop_db]
ON ALL SERVER
FOR DROP_DATABASE
AS
DECLARE @DDLText VARCHAR(MAX)
DECLARE @EmailSubject VARCHAR(MAX)
SET @EmailSubject = 'Database dropped on ' + @@SERVERNAME + ' by ' + SUSER_SNAME() 
SET @DDLText = (SELECT EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','NVARCHAR(MAX)'))

EXEC msdb.dbo.sp_send_dbmail
 @profile_name = 'DBMail',
 @recipients = 'somebody@somecompany.com',
 @body = @DDLText,
 @subject = @EmailSubject
GO

-- Trigger to notify of changed database
CREATE TRIGGER [ddl_trig_alter_db]
ON ALL SERVER
FOR ALTER_DATABASE
AS
DECLARE @DDLText VARCHAR(MAX)
DECLARE @EmailSubject VARCHAR(MAX)
SET @EmailSubject = 'Database altered on ' + @@SERVERNAME + ' by ' + SUSER_SNAME() 
SET @DDLText = (SELECT EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','NVARCHAR(MAX)'))

EXEC msdb.dbo.sp_send_dbmail
 @profile_name = 'DBMail',
 @recipients = 'somebody@somecompany.com',
 @body = @DDLText,
 @subject = @EmailSubject
GO

-- Trigger to notify of database scoped changes
CREATE TRIGGER [ddl_trig_alter_db_scoped]
ON ALL SERVER
FOR ALTER_DATABASE_SCOPED_CONFIGURATION
AS
DECLARE @Qry VARCHAR(MAX)
DECLARE @EmailSubject VARCHAR(MAX)
DECLARE @DBName VARCHAR(255)
SET @DBName = (SELECT EVENTDATA().value('(/EVENT_INSTANCE/DatabaseName)[1]', 'VARCHAR(255)'))
SET @EmailSubject = @DBName + ' database altered on ' +  @@SERVERNAME + ' by ' + SUSER_SNAME() 
SET @Qry = (SELECT EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','NVARCHAR(MAX)'))

EXEC msdb.dbo.sp_send_dbmail
 @profile_name = 'DBMail',
 @recipients = 'somebody@somecompany.com',
 @body = @Qry,
 @subject = @EmailSubject
GO
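
To exercise the database triggers, here are a few sample changes you can run.  The TriggerTest database name is just a placeholder:

-- Hypothetical test database; each statement should generate an email
CREATE DATABASE TriggerTest
GO
ALTER DATABASE TriggerTest SET AUTO_CLOSE OFF
GO
ALTER DATABASE TriggerTest SET OFFLINE WITH ROLLBACK IMMEDIATE
GO
ALTER DATABASE TriggerTest SET ONLINE
GO
DROP DATABASE TriggerTest
GO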

I tested a bunch of different database changes, including taking a database offline, and all kinds of configuration changes, though I may not have run into a specific change that cannot be captured using these methods.  Here are some of the notifications that I received:

There is one thing you need to be aware of when a new database is created with these DDL triggers in place.  Not only will you receive the notice above, you will also receive a notification for each database configuration setting applied during creation.  So be ready to receive a bunch of emails when a new database is created; I received 28 of them covering all the configuration settings on mine.  However, I feel it is better to know about new databases being created in production environments than to find out about them later.
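
If that flood bothers you, one possible mitigation (a sketch I have not battle-tested) is to have the ALTER_DATABASE trigger skip databases created within the last few minutes:

-- Sketch: inside ddl_trig_alter_db, only notify when the database
-- was NOT created within the last 5 minutes
DECLARE @DBName VARCHAR(255)
SET @DBName = (SELECT EVENTDATA().value('(/EVENT_INSTANCE/DatabaseName)[1]', 'VARCHAR(255)'))
IF NOT EXISTS (SELECT 1 FROM sys.databases
               WHERE name = @DBName
               AND create_date > DATEADD(MINUTE, -5, GETDATE()))
	EXEC msdb.dbo.sp_send_dbmail
	 @profile_name = 'DBMail',
	 @recipients = 'somebody@somecompany.com',
	 @body = @DDLText,
	 @subject = @EmailSubject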

If you want to see what DDL triggers might be present on your system or just want to check out the triggers we just made, all you have to do is query sys.server_triggers.

SELECT * FROM sys.server_triggers
Cleanup!  Cleanup!  Everybody cleanup!!

Here is how to get rid of the triggers when you are done testing.  I always try to provide cleanup statements since I am an OCD clean freak on my SQL Servers.

--Cleanup
DROP TRIGGER ddl_trig_alter_inst_config  
ON ALL SERVER;  
GO

DROP TRIGGER ddl_trig_new_db  
ON ALL SERVER;  
GO
DROP TRIGGER ddl_trig_drop_db  
ON ALL SERVER;  
GO
DROP TRIGGER ddl_trig_alter_db  
ON ALL SERVER;  
GO
DROP TRIGGER ddl_trig_alter_db_scoped  
ON ALL SERVER;  
GO

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

The post Use DDL Triggers to get Instance and Database Change Notifications appeared first on SQLRx.


Hooray!!  We finally have our 70-764 and 70-765 certification exams done and can turn our attention back to blogging more regularly.  Those exams were pretty tough, mostly because a lot of information had to be memorized; there are always aspects of SQL that we don’t regularly use, and there is no access to internet searches during the exam.  All of us (or maybe it’s just me) are by now so used to looking up SQL help and details on the internet that it seems to me exams covering a subject as large as SQL will likely become either more general or even more specialized.  I just don’t know anyone who has managed to memorize all of the DMVs with their columns and input parameters, along with all of the applicable SQL PowerShell cmdlets and their parameters.  Okay….rant over and on to the fun stuff!

I was recently working with one of our developers on an upgrade project where we had to move a boatload of jobs to the new SQL instance, many of which call SSIS packages stored in msdb.  Since we are moving up to SQL 2016 in this particular project, we wanted to store packages in the SSIS Catalog to take advantage of its flexibility.  This meant that my developer needed a query that would give him some info on the jobs that were actually being actively run and were candidates for the move.  They also wanted to find the jobs that called SSIS packages so that they could focus their work in the right places.

Here is what I came up with.  Hey….since he liked it, I thought you might too.

USE msdb
GO

-- Just job info
SELECT j.name AS JobName, 
	(CASE WHEN j.[enabled] = 1 THEN 'Job Enabled' ELSE 'Job Disabled' END) AS JobEnabledDisabled, 
	(CASE WHEN ss.[enabled] = 1 THEN 'Schedule Enabled' WHEN ss.[enabled] = 0 THEN 'Schedule Disabled' ELSE 'Not Scheduled' END) AS JobScheduleEnabledDisabled, 
	s.last_run_date AS JobLastRunDate,
	s.last_run_time AS JobLastRunTime,
	(CASE WHEN s.last_run_outcome = 0 THEN 'Failed' WHEN s.last_run_outcome = 1 THEN 'Successful' WHEN s.last_run_outcome = 3 THEN 'Cancelled' END) AS JobLastOutcome,
	js.next_run_date AS JobNextRunDate, 
	js.next_run_time AS JobNextRunTime
FROM sysjobs j 
JOIN sysjobsteps s ON j.job_id = s.job_id
LEFT JOIN sysjobschedules js ON j.job_id = js.job_id
LEFT JOIN sysschedules ss ON js.schedule_id = ss.schedule_id
WHERE j.enabled = 1 -- 1= enabled, 0 = disabled
	--AND j.name LIKE 'Index%'  -- filter on job name
ORDER BY j.name


--  Job step with command
SELECT j.name AS JobName, 
	s.step_id AS JobStepID,
	s.step_name AS JobStepName, 
	(CASE WHEN j.[enabled] = 1 THEN 'Job Enabled' ELSE 'Job Disabled' END) AS JobEnabledDisabled, 
	(CASE WHEN ss.[enabled] = 1 THEN 'Schedule Enabled' WHEN ss.[enabled] = 0 THEN 'Schedule Disabled' ELSE 'Not Scheduled' END) AS JobScheduleEnabledDisabled, 
	--js.next_run_date AS JobNextRunDate, 
	--js.next_run_time AS JobNextRunTime,
	c.name AS JobCategory,
	s.subsystem AS JobSubsystem, 
	s.command AS JobStepCommand
FROM sysjobs j 
JOIN sysjobsteps s ON j.job_id = s.job_id
JOIN syscategories c ON j.category_id = c.category_id
LEFT JOIN sysjobschedules js ON j.job_id = js.job_id
LEFT JOIN sysschedules ss ON js.schedule_id = ss.schedule_id
WHERE j.enabled = 1 -- 1= enabled, 0 = disabled
	--AND j.name LIKE 'Index%'  -- filter on job name
	--AND c.name LIKE '%Maint%'  -- filter on category name
	--AND s.subsystem = 'CmdExec'  -- TSQL, CmdExec, Snapshot, LogReader, Distribution, Merge, QueueReader, ANALYSISQUERY, ANALYSISCOMMAND, SSIS, PowerShell
ORDER BY j.name, s.step_id
GO

I have provided a few ways to filter info if you need it.  The results are pretty straightforward…

You can run each query independently if you like.  As usual, feel free to add to these and make them your own.  But, most importantly, enjoy.
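
For the SSIS migration scenario that prompted all this, here is one way to narrow the second query down to SSIS job steps (a sketch; the '/SQL' pattern assumes the packages are invoked from msdb via dtexec’s /SQL switch):

-- Jobs with steps that run SSIS packages
SELECT j.name AS JobName,
	s.step_id AS JobStepID,
	s.step_name AS JobStepName,
	s.command AS JobStepCommand
FROM sysjobs j
JOIN sysjobsteps s ON j.job_id = s.job_id
WHERE j.enabled = 1
	AND s.subsystem = 'SSIS'
	AND s.command LIKE '%/SQL%'  -- assumption: msdb-stored packages
ORDER BY j.name, s.step_id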

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

The post And Now a Couple of Job Queries appeared first on SQLRx.

Overview

If you are looking for major performance gains, the answer is no.  However, if you are using SQL Server 2017 and developing new code, you should consider the new String_agg function, because code that uses it is much more readable and easier to write and maintain.  This article compares the performance of the two methods while explaining the differences in the code.  The examples provided employ lists of SQL Server table names that are merged into a dynamic SQL clause that can be used in where or join clauses.

Testing Strategy

Since list length is often a concern when splitting or merging items, various numbers of records were merged and timed.  A relatively small table was chosen to minimize the impact of I/O, and caches were purged before the execution of each method.  Since the individual execution times were very small, the pertinent code was placed within a 10,000-iteration loop.  The tested numbers of records to be merged into a string were as follows:  225, 450, 675, 900, and 1125, and the loop code is shown below. 

declare @NumRows int = 1125 -- # of rows to be merged
declare @strSeparator nvarchar(20) = ' or '
declare @MaxLoop int = 10000
declare @LoopInx int = 0
declare @strFilter nvarchar(max) = ''
set nocount on
while @LoopInx <= @MaxLoop 
begin
-- String merge code is inserted here 
	set @LoopInx += 1
end
set nocount off
go

Each test was performed within its own batch on an otherwise idle system.  Run-time performance was captured using an Extended Events session, which was configured to capture sqlserver.sql_batch_completed events ONLY to minimize overhead.  The session definition, start, and stop code are shown below.  This session collects information for completed batches that ran longer than 10 microseconds and were executed from database #7.

use master;
IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name='SQLRxPerformanceMonitoring')
    DROP EVENT session SQLRxPerformanceMonitoring ON SERVER
GO

 -- replace database_id(s) with list
CREATE EVENT SESSION SQLRxPerformanceMonitoring ON SERVER 
ADD EVENT sqlserver.sql_batch_completed(
SET collect_batch_text=(1)
ACTION(sqlserver.server_instance_name, sqlserver.database_id, sqlserver.session_id, sqlserver.client_pid, sqlserver.client_app_name, sqlserver.database_name, sqlserver.server_principal_name, sqlserver.client_hostname, sqlserver.tsql_frame, sqlserver.tsql_stack, sqlserver.sql_text, sqlserver.request_id, sqlserver.query_hash, sqlserver.query_plan_hash, package0.event_sequence)
    	WHERE ([duration] > (10) and (database_id = 7)) -- values must be hard-coded
) -- replace database_id
ADD TARGET package0.event_file(SET filename= N'E:\SQLRx\SQLScriptOutput\DevPerf.xel')
WITH (MAX_MEMORY=4096 KB, EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS, MAX_DISPATCH_LATENCY=5 SECONDS, MAX_EVENT_SIZE=0 KB, MEMORY_PARTITION_MODE=NONE, TRACK_CAUSALITY=OFF, STARTUP_STATE=OFF)
GO

-- entry exists when session is active
if not exists(select * from [master].[sys].[dm_xe_sessions] where [name] = 'SQLRxPerformanceMonitoring')
	alter EVENT SESSION SQLRxPerformanceMonitoring ON SERVER STATE = START

-- entry exists when session is active
if exists(select * from [master].[sys].[dm_xe_sessions] where [name] = 'SQLRxPerformanceMonitoring')
	alter EVENT SESSION SQLRxPerformanceMonitoring ON SERVER STATE = STOP

The Code

Two different T-SQL methods were used for testing.  The FOR XML PATH solution incorporates the reverse and stuff functions along with FOR XML PATH.  The String_agg solution is much simpler, but is currently only available on SQL Server 2017.

The FOR XML PATH method has been documented thoroughly over the years in many articles, so those explanations will not be repeated here.  However, two points are worth mentioning.  The first is that the @strSeparator string (in this case ' or ') will reside at the end of the constructed string (in this case @strFilter).  Since the last occurrence of @strSeparator must be removed to prevent a syntax error, characters must be dropped from the end of the string.  Although it may seem obvious that four characters (the length of @strSeparator) should be removed, only three actually need to be removed, because by that point the constructed string ends in 'r', not ' ' (the reverse/stuff step has already removed the trailing space).  Therefore, the length argument of the left function is the length of @strFilter minus three.
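
To see why only three characters remain, here is a minimal standalone illustration of the reverse/stuff behavior, using made-up values:

DECLARE @s NVARCHAR(100) = N'A or B or C or '  -- trailing ' or ' separator
-- STUFF removes the first character of the reversed string (the trailing space),
-- so after the second REVERSE only ' or' (3 characters) remains at the end
SET @s = REVERSE(STUFF(REVERSE(@s), 1, 1, ''))
SELECT @s AS TrailingOr, LEFT(@s, LEN(@s) - 3) AS Final  -- Final = 'A or B or C'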

The second point of interest concerns the parentheses that surround the select statement.  They may seem unnecessary, but they are important because without them, the following errors occur:

Msg 156, Level 15, State 1, Line 26

Incorrect syntax near the keyword ‘select’.

Msg 102, Level 15, State 1, Line 28

Incorrect syntax near ‘)’.

The second method, String_agg, is much simpler to implement, but does have a restriction, as shown below:

Msg 9829, Level 16, State 1, Line 62

STRING_AGG aggregation result exceeded the limit of 8000 bytes. Use LOB types to avoid result truncation.

This limitation can be circumvented by converting the [Full Table Name] column to an nvarchar(max) variable, so that the overall string length no longer matters.

The code for the two methods is shown below:

-- FOR XML PATH
select @strFilter = reverse( stuff( reverse(
	(select 'FullTableName = ' + '''' + [Full Table Name] + '''' +
@strSeparator as [text()]
from IndexColumnAnalysisList 
where RecID <= @NumRows for xml path('')
)
) -- first reverse
	,1,1,'') -- stuff
	) -- second reverse
Set @strFilter = left(@strFilter, len(@strFilter) - 3) -- drop off last ' or '
-- FOR XML PATH

-- String_agg
select @strFilter = string_agg('FullTableName = ' + '''' + 
cast([Full Table Name] as nvarchar(max)) + '''', @strSeparator)
	from IndexColumnAnalysisList
	where RecID <= @NumRows
-- String_agg

Test Results

Results of the tests are shown in Figure 1 below.  Clearly, there was very little difference between the two methods, although the number of elements directly affected the performance of both.  Durations were equal to processor times and proportional to the number of records that were merged.  The query plans for the two methods were quite similar and are shown in Figure 2 and Figure 3, respectively.

Figure 1: Average Execution Time

Figure 2: FOR XML PATH Execution Plan

Figure 3: String_agg Execution Plan

Conclusion

In summary, existing T-SQL code need not be replaced.  However, if possible, new development that will run on SQL Server 2017 or higher should use the String_agg function for simplicity, improved readability, and easier maintenance.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

The post Should I Replace My FOR XML PATH String Merges with String_agg? appeared first on SQLRx.

Overview

In short, no.  However, for many years there was no alternative to the T-SQL option, and even when other viable options appeared, it was unclear whether it was worth changing existing code to take advantage of the new technologies.  With the addition of the string_split function in SQL Server 2016, it is worth revisiting this topic.  This article discusses three different methods (T-SQL, C# CLR, and string_split) for breaking delimited strings into result sets and compares their performance across lists of varying lengths.  Three different T-SQL functions will be tested along with the C# CLR and string_split methods.  The examples employ numerical lists delimited by commas and return either strings or integers, to determine whether there is a significant performance difference when numbers are converted from strings to integers.

Testing Strategy

To ensure as much consistency as possible, a series of random numbers were generated and merged into varying-length lists of 20, 50, 75, 100, 150, 500, 1000, 1500, 2500, and 5000 integers.  Although the list from which the test strings were derived contained random numbers, the same list was used as input for each of the tests, i.e., the first 20 elements of all tests were identical.  Query results were discarded because allowing them to be returned warped the final run times badly, increasing them by at least a factor of FOUR, even when the results were written to a file!  An example of a list of 20 is as follows:  select * from dbo.fcn_SplitCSV ('7503, 87355, 74205, 3985, 5811, 1286, 94488, 33989, 8642, 17592, 80938, 48701, 84713, 430, 54960, 46492, 9916, 38679, 89117, 5703').  The full T-SQL invocation code for the 20-element test of fcn_SplitCSV is shown below.  As evidenced by the code, each test was performed 2,500 times to minimize the probability that any individual variations would skew the final results.

-- `20`2500`convert to int`fcn_SplitCSV`
set nocount on;
declare @inx int = 0
declare @maxinx int = 2500

while @inx < @maxinx
begin
select * from dbo.fcn_SplitCSV ('7503, 87355, 74205, 3985, 5811, 1286, 94488, 33989, 8642, 17592, 80938, 48701, 84713, 430, 54960, 46492, 9916, 38679, 89117, 5703')
set @inx += 1
end
go

In addition to the three major methods cited above, two other variations were tested:  conversion of strings to numbers in the result set and specifying a delimiter using a string variable instead of a string constant.  Each test was performed in its own batch on an otherwise idle system.  Run-time performance was captured using an Extended Events session, which was configured to capture sqlserver.sql_batch_completed events ONLY.  The session definition, start, and stop code are shown below.  This session collects information for completed batches that ran longer than 10 microseconds and were executed from database #7.

use master;
IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name='SQLRxPerformanceMonitoring')
    DROP EVENT session SQLRxPerformanceMonitoring ON SERVER
GO

 -- replace database_id(s) with list
CREATE EVENT SESSION SQLRxPerformanceMonitoring ON SERVER 
ADD EVENT sqlserver.sql_batch_completed(
SET collect_batch_text=(1)
ACTION(sqlserver.server_instance_name, sqlserver.database_id, sqlserver.session_id, sqlserver.client_pid, sqlserver.client_app_name, sqlserver.database_name, sqlserver.server_principal_name, sqlserver.client_hostname, sqlserver.tsql_frame, sqlserver.tsql_stack, sqlserver.sql_text, sqlserver.request_id, sqlserver.query_hash, sqlserver.query_plan_hash, package0.event_sequence)
    	WHERE ([duration]>(10) and (database_id = 7))
) -- replace database_id
ADD TARGET package0.event_file(SET filename= N'E:\SQLRx\SQLScriptOutput\DevPerf.xel')
WITH (MAX_MEMORY=4096 KB, EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS, MAX_DISPATCH_LATENCY=5 SECONDS, MAX_EVENT_SIZE=0 KB, MEMORY_PARTITION_MODE=NONE, TRACK_CAUSALITY=OFF, STARTUP_STATE=OFF)
GO

-- entry exists when session is active
if not exists(select * from [master].[sys].[dm_xe_sessions] where [name] = 'SQLRxPerformanceMonitoring')
	alter EVENT SESSION SQLRxPerformanceMonitoring ON SERVER STATE = START

-- entry exists when session is active
if exists(select * from [master].[sys].[dm_xe_sessions] where [name] = 'SQLRxPerformanceMonitoring')
	alter EVENT SESSION SQLRxPerformanceMonitoring ON SERVER STATE = STOP

The Code

Three different T-SQL functions were used for testing.  Each function used different T-SQL logic to parse the string, and one of them was modified to use a delimiter specified in a variable, as opposed to a string constant.  The code for the three T-SQL functions, fn_CSVToTable, fcn_SplitCSV, and fcn_SplitStringListDelimiter, is shown below:

-- fn_CSVToTable
CREATE Function [dbo].[fn_CSVToTable]
(
    @CSVList Varchar(max)
)
RETURNS @Table TABLE (ColumnData VARCHAR(100))
AS
BEGIN
    IF RIGHT(@CSVList, 1) <> ','
    SELECT @CSVList = @CSVList + ','

    DECLARE @Pos    BIGINT,
            @OldPos BIGINT
    SELECT  @Pos    = 1,
            @OldPos = 1

    WHILE   @Pos < LEN(@CSVList)
        BEGIN
            SELECT  @Pos = CHARINDEX(',', @CSVList, @OldPos)
            INSERT INTO @Table
            SELECT  LTRIM(RTRIM(SUBSTRING(@CSVList, @OldPos, @Pos - @OldPos))) Col001

            SELECT  @OldPos = @Pos + 1
        END

    RETURN
END -- fn_CSVToTable
go

-- fcn_SplitCSV
CREATE FUNCTION [dbo].[fcn_SplitCSV] ( @NumberList varchar(max))
	RETURNS @SplitList TABLE (    ListMember INT )
AS
BEGIN
      DECLARE @Pointer int, @ListMember varchar(256)

      SET @NumberList = LTRIM(RTRIM(@NumberList))
      IF (RIGHT(@NumberList, 1) != ',')
         SET @NumberList=@NumberList+ ','
      SET @Pointer = CHARINDEX(',', @NumberList, 1)
      IF REPLACE(@NumberList, ',', '') <> ''
      BEGIN
            WHILE (@Pointer > 0)
            BEGIN
                  SET @ListMember = LTRIM(RTRIM(LEFT(@NumberList, @Pointer - 1)))
                  IF (@ListMember <> '')
                  INSERT INTO @SplitList 
                        VALUES (convert(int,@ListMember)) 
                  SET @NumberList = RIGHT(@NumberList, LEN(@NumberList) - @Pointer)
                  SET @Pointer = CHARINDEX(',', @NumberList, 1)
            END
      END   
      RETURN
END
GO -- fcn_SplitCSV

-- fcn_SplitStringListDelimiter
CREATE FUNCTION [dbo].[fcn_SplitStringListDelimiter] (@StringList VARCHAR(MAX), @Delimiter varchar(2))
	RETURNS @TableList TABLE( StringLiteral VARCHAR(128))
BEGIN
    DECLARE @StartPointer INT, @EndPointer INT

    SELECT @StartPointer = 1, @EndPointer = CHARINDEX(@Delimiter, @StringList)
    WHILE (@StartPointer < LEN(@StringList) + 1) 
    BEGIN
        IF @EndPointer = 0 
            SET @EndPointer = LEN(@StringList) + 1
        INSERT INTO @TableList (StringLiteral) VALUES(LTRIM(RTRIM(SUBSTRING(@StringList, @StartPointer, @EndPointer - @StartPointer))))
        SET @StartPointer = @EndPointer + 1
        SET @EndPointer = CHARINDEX(@Delimiter, @StringList, @StartPointer)
    END -- WHILE
    RETURN
END -- fcn_SplitStringListDelimiter
GO

The C# CLR function, tvf_SplitString_Multi, is shown below followed by its T-SQL definition:

using System;
using System.Collections;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

[Microsoft.SqlServer.Server.SqlFunction(IsDeterministic = true, DataAccess = DataAccessKind.None, FillRowMethodName = "FillRow_Multi", TableDefinition = "item nvarchar(4000)")]
public static IEnumerator tvf_SplitString_Multi(
        [SqlFacet(MaxSize = -1)]
                SqlChars Input,
        [SqlFacet(MaxSize = 255)]
                SqlChars Delimiter
        )
{
    return (
        (Input.IsNull || Delimiter.IsNull) ?
        new SplitStringMulti(new char[0], new char[0]) :
        new SplitStringMulti(Input.Value, Delimiter.Value));
}
public static void FillRow_Multi(object obj, out SqlString item)
{
    item = new SqlString((string)obj);
}
public class SplitStringMulti : IEnumerator
{
    public SplitStringMulti(char[] TheString, char[] Delimiter)
    {
        theString = TheString;
        stringLen = TheString.Length;
        delimiter = Delimiter;
        delimiterLen = (byte)(Delimiter.Length);
        isSingleCharDelim = (delimiterLen == 1);
        lastPos = 0;
        nextPos = delimiterLen * -1;
    }

    #region IEnumerator Members

    public object Current
    {
        get
        {
            return new string(theString, lastPos, nextPos - lastPos);
        }
    } // Current

    public bool MoveNext()
    {
        if (nextPos >= stringLen)
            return false;
        else
        {
            lastPos = nextPos + delimiterLen;
            for (int i = lastPos; i < stringLen; i++)
            {
                bool matches = true;
                //Optimize for single-character delimiters
                if (isSingleCharDelim)
                {
                    if (theString[i] != delimiter[0])
                        matches = false;
                }
                else
                {
                    for (byte j = 0; j < delimiterLen; j++)
                    {
                        if (((i + j) >= stringLen) || (theString[i + j] != delimiter[j]))
                        {
                            matches = false;
                            break;
                        }
                    }
                }
                if (matches)
                {
                    nextPos = i;
                    //Deal with consecutive delimiters
                    if ((nextPos - lastPos) > 0)
                        return true;
                    else
                    {
                        i += (delimiterLen - 1);
                        lastPos += delimiterLen;
                    }
                }
            }
            lastPos = nextPos + delimiterLen;
            nextPos = stringLen;
            if ((nextPos - lastPos) > 0)
                return true;
            else
                return false;
        }
    } // MoveNext

    public void Reset()
    {
        lastPos = 0;
        nextPos = delimiterLen * -1;
    } // Reset
    #endregion

    private int lastPos;
    private int nextPos;
    private readonly char[] theString;
    private readonly char[] delimiter;
    private readonly int stringLen;
    private readonly byte delimiterLen;
    private readonly bool isSingleCharDelim;
} // SplitStringMulti

CREATE FUNCTION [dbo].[tvf_SplitString_Multi](@DelimitedItemList nvarchar(max), @Delimiter nvarchar(255))
RETURNS TABLE (Item NVARCHAR(4000))
AS EXTERNAL NAME [ParseTraceQueryText].[MySPs.TableValuedFunctions].[tvf_SplitString_Multi]
GO

The final piece of the testing puzzle is the simple invocation of the string_split function, which is documented here:  https://docs.microsoft.com/en-us/sql/t-sql/functions/string-split-transact-sql?view=sql-server-2016.
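
For completeness, a minimal invocation looks like this (it requires database compatibility level 130 or higher; the cast mirrors the convert-to-int variant of the tests):

-- string_split returns a single column named 'value'
SELECT LTRIM(value) AS Item
FROM STRING_SPLIT('7503, 87355, 74205, 3985, 5811', ',')

-- convert-to-int variant
SELECT CAST(LTRIM(value) AS INT) AS ListMember
FROM STRING_SPLIT('7503, 87355, 74205, 3985, 5811', ',')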

Test Results

Although the author believed that the conversion of strings to integers and specifying a delimiter using a variable instead of a string constant might result in a discernible performance difference, these hypotheses were not supported by the test results.  However, significant performance differences were observed between the T-SQL and compiled-code (C# CLR and string_split) solutions.  Interestingly, the solutions performed similarly until the number of elements exceeded 150, at which point they diverged significantly.

The first two execution time graphs below, one with normal scaling and the other with truncated scaling, highlight the divergence of the two solution families.  As these graphs illustrate, the compiled-code solutions scaled extremely well all the way up to 1,500 elements.  They increased slightly after this point, but performance was still quite good, peaking at three milliseconds.  The same cannot be said for the T-SQL solutions, all of which peaked over 70 milliseconds.  Clearly, if these routines are executed only a few times or parse very small lists, all of these solutions are acceptable.  However, if the parsing routine is executed thousands of times or the lists are long, the choice becomes very important.  To emphasize this point, 2,500 executions of fn_CSVToTable took a total of 33 seconds for a list of 1,500 elements, whereas the string_split solution required only 0.45 seconds to complete the same amount of work.  The disparity becomes even greater when the length of the parsed list reaches 5,000 elements:  fn_CSVToTable required 180 seconds to complete 2,500 iterations, whereas string_split required only eight seconds.

Two other metrics were noteworthy:  logical reads and row counts.  As shown in Figure 3, the two compiled-code solutions performed almost no logical reads, whereas the T-SQL ones performed thousands to millions.  Secondly, as shown in Figure 4, the row counts of the T-SQL methods were approximately five times higher than those of the compiled-code solutions.  The execution plans for the five routines explain why the metrics were so different.  As shown in Figure 5 through Figure 9, even though each T-SQL routine is different, all of them generated table scans in addition to the table-valued function calls, whereas the compiled-code solutions ONLY generated the table-valued function calls.

Figure 1: Average Execution Time – Normal Scaling

Figure 2: Average Execution Time – Truncated Scaling

Figure 3: Logical Reads

Figure 4: Row Counts

Figure 5: fcn_SplitCSV Execution Plan

Figure 6: fcn_SplitStringListDelimiter Execution Plan

Figure 7: fn_CSVToTable Execution Plan

Figure 8: tvf_SplitString_Multi Execution Plan

Figure 9: String_Split Execution Plan

Conclusion

In summary, existing T-SQL code need not be replaced unless it is expected to parse lists of more than 150 elements or be executed thousands of times.  However, if possible, new development that will run on SQL Server 2016 or higher should use the string_split function.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com.  We will be happy to help!  Leave a comment and feel free to track back to us.  We love to talk tech with anyone in our SQL family!

The post Are All Delimited-String Parsers Created Equal? appeared first on SQLRx.


Jobs on a SQL Server can take milliseconds or minutes to complete, depending on the size and complexity of your query and on the SQL environment where it executes.  It’s important to have a baseline for, or at least be familiar with, the normal amount of time your SQL jobs take to execute.  For a SQL job or script that is taking an excessive amount of time to execute, or one that is not completing at all, you can troubleshoot in a few different ways: checking for blocking, deadlocking, wait types, hardware contention, missing indexes, bad query plans, and a multitude of other things.

Recently we noticed a customer experiencing a very long running job on their system.  From the Job Activity Monitor, we saw a job that normally takes seconds had been running for hours:

After confirming there was no blocking, we wanted to find out the wait type and the spid of the unresponsive and sluggish job by running this script:

SELECT sessions.session_id
,requests.status
,requests.blocking_session_id 
,requests.wait_type
,requests.wait_time / (1000.0) 'wait_time(sec)'
,requests.cpu_time
,requests.total_elapsed_time / (1000.0) 'total_elapsed_time(Sec)'
,Substring(sqltext.TEXT, (requests.statement_start_offset / 2) + 1, 
((CASE requests.statement_end_offset
  WHEN -1 THEN Datalength(sqltext.TEXT)
  ELSE requests.statement_end_offset
  END - requests.statement_start_offset  ) / 2
 ) + 1) AS statement_text
,requests.command
,databases.name as database_name
,sessions.login_name
,sessions.host_name
,sessions.program_name
FROM sys.dm_exec_sessions AS sessions
INNER JOIN sys.dm_exec_requests AS requests ON requests.session_id = sessions.session_id
CROSS APPLY sys.dm_exec_sql_text(requests.sql_handle) AS sqltext
INNER JOIN sys.databases AS databases on requests.database_id = databases.database_id 
WHERE requests.session_id != @@SPID
ORDER BY requests.cpu_time DESC

*I can’t remember where I found this script.  If it looks familiar to anyone, please send me a link so I can give proper credit.

The results gave us some beneficial information for troubleshooting who was doing what on the system, and what type of waits were happening for the spid involved in the long running job.  The results showed SLEEP_TASK as the wait type for the long running query:

Ordinarily, SLEEP_TASK is a nonspecific wait type in SQL Server that occurs when a task sleeps while waiting for a generic event to occur, according to Microsoft documentation.  This wait type can usually be safely ignored; however, on some occasions it shows up when a script does not execute completely or hangs for long periods of time.

The SLEEP_TASK wait means that a thread is waiting on a resource or waiting for some event to occur, and could indicate background task scheduling, a query plan exchange operator that isn’t tracked by CXPACKET, or it could be a hashing operation that spills to tempdb.
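
As a quick first check, you can see how much SLEEP_TASK wait time the instance has accumulated since the last restart (or since wait stats were cleared):

SELECT wait_type, waiting_tasks_count,
	wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'SLEEP_TASK'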

Hash spills mean there is not enough memory to execute the query, so data has to be spilled into tempdb in order for execution to complete.  This usually results in slower performance and space consumption in tempdb.  To determine whether hash spills are occurring, you can run SQL Profiler and enable Hash Warning events.  For more information on that, click here: https://docs.microsoft.com/en-us/sql/relational-databases/event-classes/hash-warning-event-class.
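
If you would rather avoid Profiler, the same warnings can be captured with Extended Events.  A minimal sketch (the session name and ring_buffer target are my own choices):

CREATE EVENT SESSION HashWarnings ON SERVER
ADD EVENT sqlserver.hash_warning(
	ACTION(sqlserver.sql_text, sqlserver.database_name))
ADD TARGET package0.ring_buffer
GO
ALTER EVENT SESSION HashWarnings ON SERVER STATE = START
GO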

The best way to identify spills is to check the execution plans of any suspected stored procedures or queries.  Look for an operator with a yellow warning sign on it; hash spills surface on the Hash Match operator (sort spills surface on the Sort operator).  If you hover over the operator, you will see the warning message highlighted in yellow:

In the event it is a hash spill, the recommendation is to create and/or update statistics on the tables being used.  This is the most effective way to reduce hash recursions or bailouts when statistics are needed by a query using join or aggregate operations.

If hashing events are occurring, it’s a good idea to update statistics:

--Method 1 – Update Statistics on specific tables
USE YourDatabase 
GO
UPDATE STATISTICS dbo.SalesOrderDetail

-- Method 2 – Update Statistics on the whole database
USE YourDatabase 
GO
EXEC sp_updatestats

Normally, if you update statistics, plans will recompile the next time they are used.  If you find that a plan is still hash spilling, it might be because the plan is trivial and not a candidate for recompilation.  You may be forced to clear the cache or force a recompile in some way.  Here are ways to do that, in increasing order of impact.

-- recompiles a specific object
EXEC sp_recompile N'BadProc'

-- clear out plan cache for a specific database
DBCC FLUSHPROCINDB(YourDatabaseID)

-- clear all execution plans out of cache
DBCC FREEPROCCACHE

-- clear out all plan caches for the instance (This is the big one!) try to not use this in production
-- only use this if all others fail
-- can cause slowdowns while all plans for all queries are recreated
DBCC FREESYSTEMCACHE('ALL')

Once the plan has recompiled with fresh stats you should see the hash spill go away.

Conclusion

SLEEP_TASK wait types are normally ignored and are very nonspecific.  Occasionally, however, they can indicate hash spills, among other things.  Hash spills are possible to correct, so they should be investigated in the event you experience a long running query along with the SLEEP_TASK wait type.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

The post Troubleshoot SLEEP_TASK Wait Type appeared first on SQLRx.


This might be kind of basic, but since I am working on a comprehensive script to discover things that a DBA really needs to know about, I made a couple of queries that will produce a list of the databases that have multiple files, along with the locations of the physical files.  One query finds multiple data files (mdf’s and ndf’s) and the other looks for multiple transaction log files (ldf’s).  These will also find Filestream file locations.  Since I often have to take on instances without ever having seen them, it is good to know about little things like this. 

Having multiple database files is a good thing.  It allows you some flexibility with regard to file placement, and if the files are all in the same filegroup, it can help performance. 

SELECT DB_NAME(f.database_id) AS DBName, f.name AS LogicalName, f.type_desc AS FileType, f.physical_name AS PhysicalPath
FROM sys.master_files f
JOIN (select mf.database_id, COUNT(*) AS CntDBs
FROM sys.master_files mf
WHERE mf.type <> 1
GROUP BY mf.database_id
HAVING COUNT(*) > 1) AS X
ON (f.database_id = X.database_id)
WHERE f.type <> 1
ORDER BY DB_NAME(f.database_id), f.name

For transaction log files, having multiple files does not help performance.  However, I have come into places where a second log file had been necessary at some point.  An extra log file does not hurt anything, but it does not really help either, so I like to know about cases like this so that I can clean up unnecessary files if possible.  I like to keep things as simple as possible in my production environments.

SELECT DB_NAME(f.database_id) AS DBName, f.name AS LogicalName, f.physical_name AS PhysicalPath
FROM sys.master_files f
JOIN (select mf.database_id, COUNT(*) AS CntDBs
FROM sys.master_files mf
WHERE mf.type = 1
GROUP BY mf.database_id
HAVING COUNT(*) > 1) AS X
ON (f.database_id = X.database_id)
WHERE f.type = 1
ORDER BY DB_NAME(f.database_id), f.name
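
If you do find an extra log file you want to clean up, here is a hedged sketch of the removal (the database and logical file names are hypothetical; REMOVE FILE will fail until no active log remains in the file, so you may need a log backup first):

USE YourDatabase
GO
-- Empty the extra log file so no active log remains in it
DBCC SHRINKFILE (N'YourDatabase_log2', EMPTYFILE)
GO
ALTER DATABASE YourDatabase REMOVE FILE [YourDatabase_log2]
GO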

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com.  We will be happy to help!  Leave a comment and feel free to track back to us.  We love to talk tech with anyone in our SQL family!

The post Find Out Which Databases Have Multiple Files appeared first on SQLRx.


I was recently performing an in-place upgrade of SQL 2008 R2 to SQL 2014 on one of my client’s servers.  I have done a ton of successful and uneventful in-place upgrades and was surprised when this one failed with the error message:  “Failed to create a new folder ‘X:\SQLBackups’. The specified path is invalid (for example, it is on an unmapped drive).”  Over the years, this client had changed from using a local drive for all backups to having backups sent to a network share, so the X drive really no longer existed.

I cancelled out of the install and had to look up where to change the default backup folder in SQL 2008, since it had been a long time since I last installed it.  In newer versions of SQL, you can find the default backup folder in the instance properties GUI under the Database Settings page:

In SQL 2008 and 2008 R2, apparently it is only located in a facet.  In newer versions, the default backup folder is still located in a facet but is more prominently displayed in the instance properties.  You find the facets by right-clicking on your instance name and clicking on Facets.
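
If you prefer T-SQL to clicking through facets, the same default can be changed with the undocumented xp_instance_regwrite procedure.  A sketch (the path is an example, and a restart is still required):

EXEC master.dbo.xp_instance_regwrite
	N'HKEY_LOCAL_MACHINE',
	N'Software\Microsoft\MSSQLServer\MSSQLServer',
	N'BackupDirectory',
	REG_SZ,
	N'D:\SQLBackups'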

You can change the backup directory in either place and will have to restart SQL for the change to take effect.  I did all that and restarted the in-place upgrade.  However, I ran into another issue when it came time to select the features to upgrade…..I could not select any and got an error message like this one:

After some searching around, I found that I needed to check the Summary log to see what was going on.  The Summary log can be found in the Setup Bootstrap folder, at a path similar to this:

C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\Summary_xxx_xxx.txt

There, Microsoft helpfully provides a command that you can run at a command prompt to uninstall the failed upgrade.  Mine looked like this:

I had to change to the C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\SQLServer2012 folder and could then execute the command.

After that, I restarted the in-place upgrade and it completed with no errors.  Woo hoo!!  But I sure didn’t need the gray hair that the initial failure gave me.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com.  We will be happy to help!  Leave a comment and feel free to track back to us.  We love to talk tech with anyone in our SQL family!

The post Failed In-Place Upgrade – Missing Backup Directory appeared first on SQLRx.


When scrolling through the pages of a SQL Server Reporting Services (SSRS) report, it is very useful to be able to see the column headers throughout the report.  So let’s say you have successfully created an SSRS report using Visual Studio, and everything looks wonderful…except the headers on your columns disappear when you scroll down the page.  You have even set the properties of your Tablix to “Keep Headers Visible While Scrolling”, but it still doesn’t work!  Trying to keep the column headings visible while you scroll down the page of your SSRS report can be a frustrating endeavor.  The following steps will demonstrate how to make it work.

Adjust Tablix Properties
  • In Visual Studio, make sure you are on the Design tab of your report
  • Right click on the upper left corner of your Tablix to open the Tablix Properties

  • When properties dialog box opens, un-check the “Keep header visible while scrolling” checkboxes for both Row Headers and Column Headers. This doesn’t seem logical, but trust me.

Go to Advanced Mode of Grouping Section
  • In the Design tab of your report, click on drop down arrow on the far right side of the Grouping Section and select Advanced Mode.

  • A “Static” property element will show above your Row Groups as shown below.  Click on the top Static cell which represents the first column header of the report.
  • A properties box will open on the right side of the page for this Static element.

  • Change Tablix Member properties for the Static element as shown
    • Fixed Data = True
    • Keep With Group = After
    • Repeat On New Page = True
Preview and Run your Report
  • Click on the Preview tab on your report and deploy the report to Reporting Services.
  • Below you can see the report as rendered in the browser

  • As you can see below, the page header disappears when you scroll down, but the column headers remain visible.

Conclusion

Sometimes when creating a report for SQL Server Reporting Services, you want the column headers to remain visible while you scroll down the report page.  Adjusting the Tablix properties isn’t as straightforward as it should be, and can be frustrating to figure out.  Following the steps above will keep your column headers visible in your reports.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

The post SSRS – Keep Column Headers Visible appeared first on SQLRx.


A little while back I wrote a post going over sp_help as a beginning effort to find table dependency information:  https://www.sqlrx.com/using-sp_help-to-find-object-dependencies/.  That expanded to reviewing sys.dm_sql_referencing_entities & sys.dm_sql_referenced_entities (https://www.sqlrx.com/finding_things_sql_that_reference_a_table/ & https://www.sqlrx.com/how-to-find-the-objects-and-columns-a-stored-procedure-or-view-uses/) and sys.sql_expression_dependencies (https://www.sqlrx.com/using-sys-sql_expression_dependencies-as-a-single-source-to-find-referenced-and-referencing-objects/)

Now I want to loop back just a bit and expand on more of the information found in sp_help, along with alternative ways to find much the same information using queries.  My thought is that while the “help” stored procedures and other DMFs and DMVs are great, they often return multiple result sets, and sometimes they return not exactly what you need, or more than you need, so it is important to know that there are queries that get to the same info.  This will eventually allow me to use queries to build some scripting statements.

Let’s take a fast look again at sp_help:

EXEC sp_help 'Sales.Orders'

As you can see, multiple result sets are returned.  You can see the table columns, foreign keys, defaults, and things that reference the table.  This is great, but since my ultimate goal is to eventually script the things seen here, this will not work.  So let’s put together some queries that will individually provide much the same information.

I’ll start with column info for the table.  In my example, I have filtered for a single table but if you need to get info for all tables in the database, just comment out the WHERE clause and you will get everything.

-- Table column info
SELECT schema_name(t.schema_id) SchemaName
	,t.name TableName
	,c.name ColumnName
	,c.column_id
	,ct.name DataType
	,CASE CAST(c.max_length AS VARCHAR(50)) WHEN -1 THEN 'MAX' ELSE CAST(c.max_length AS VARCHAR(50)) END AS max_length
	,c.precision
	,c.scale
	,c.collation_name Collation
	,c.is_nullable
	,c.is_computed
	,c.is_identity
	,c.is_rowguidcol
	,c.is_replicated
	,c.is_merge_published
	,c.is_sparse
FROM sys.tables t
JOIN sys.columns c on (t.object_id = c.object_id)
JOIN sys.types ct on (ct.user_type_id = c.user_type_id)
WHERE schema_name(t.schema_id) = 'Sales'
AND t.name = 'Orders'
ORDER BY schema_name(t.schema_id), t.name, c.column_id

This returns more info than sp_help.  I also found that the table info in sp_help includes a couple of odd columns named TrimTrailingBlanks and FixedLenNullInSource. 

TrimTrailingBlanks is apparently populated with NO if ANSI_PADDING is set to ON.

FixedLenNullInSource is something left over from possibly as far back as pre-SQL 2000 or Sybase days.  Research suggests it used to be hard-coded to YES if columns of certain data types were nullable.  It looks like this has gone by the wayside, but it is still part of the sp_help result set.  So, I think I’ll leave these two columns out of my results.
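
If you ever do want the TrimTrailingBlanks value, it appears to be derivable from COLUMNPROPERTY.  Here is my guess at the mapping sp_help uses (treat it as an assumption):

SELECT c.name AS ColumnName,
	CASE COLUMNPROPERTY(c.object_id, c.name, 'UsesAnsiTrim')
		WHEN 1 THEN 'No' ELSE 'Yes' END AS TrimTrailingBlanks
FROM sys.columns c
WHERE c.object_id = OBJECT_ID('Sales.Orders')
ORDER BY c.column_id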

After the column info, sp_help also lists information about column defaults and primary, foreign, and unique key constraints.  You can also get this information by using sp_helpconstraint (https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-helpconstraint-transact-sql?view=sql-server-2017).

EXEC sp_helpconstraint 'Sales.Orders'

However, while this data is great, I want individual result sets.  The three queries below are designed to bring back info much like that shown by sp_helpconstraint.  You can see how this makes significant progress toward my final goal of gathering the information needed to script tables.

-- defaults
SELECT SCHEMA_NAME(t.schema_id) SchemaName
	,t.name TableName 
	,c.name ConstraintName
	,c.definition ConstraintDefinition
	,col.name ColumnName 
FROM sys.default_constraints c 
JOIN sys.columns col ON (col.default_object_id = c.object_id) 
JOIN sys.tables t ON (t.object_id = c.parent_object_id)
WHERE SCHEMA_NAME(t.schema_id) = 'Sales'
AND t.name = 'Orders'
ORDER BY SchemaName, TableName, ConstraintName

-- foreign keys
SELECT OBJECT_SCHEMA_NAME(f.parent_object_id) AS FKSchemaName
	,OBJECT_NAME(f.parent_object_id) FKBaseTable
	,CASE f.is_not_trusted WHEN 0 THEN 'WITH CHECK' WHEN 1 THEN 'NOCHECK' END AS CheckOrNoCheck
	,f.name AS ConstraintName
	,COL_NAME(fc.parent_object_id, fc.parent_column_id) AS FKBaseColumn
	,OBJECT_SCHEMA_NAME(f.referenced_object_id) AS PKRefdSchemaName
	,OBJECT_NAME (f.referenced_object_id) AS PKRefdBaseTable
	,COL_NAME(fc.referenced_object_id, fc.referenced_column_id) AS PKRefdBaseColumn
	,f.delete_referential_action_desc
	,f.update_referential_action_desc
FROM sys.foreign_keys AS f
JOIN sys.foreign_key_columns AS fc ON (f.object_id = fc.constraint_object_id)
WHERE OBJECT_SCHEMA_NAME(f.parent_object_id) = 'Sales'
AND OBJECT_NAME (f.parent_object_id) = 'Orders'
ORDER BY FKSchemaName, FKBaseTable, ConstraintName

-- primary or unique keys
SELECT SCHEMA_NAME(t.schema_id) SchemaName
	,t.name AS TableName
	,inx.name AS IndexName
	,CASE WHEN inx.is_unique_constraint = 1 THEN 'UNIQUE' ELSE '' END AS [Unique]
	,CASE WHEN inx.is_primary_key = 1 THEN 'PRIMARY' ELSE '' END AS PrimaryKey
	,inx.type_desc AS IndexType
	,CASE WHEN inx.is_padded = 1 THEN 'ON' ELSE 'OFF' END AS PadIndex
	,CASE WHEN inx.allow_page_locks=1 THEN 'ON' ELSE 'OFF' END AS AllowPageLocks
	,CASE WHEN inx.allow_row_locks=1 THEN 'ON' ELSE 'OFF' END AS AllowRowLocks
	,CASE WHEN INDEXPROPERTY(t.object_id, inx.name, 'IsStatistics') = 1 THEN 'ON' ELSE 'OFF' END AS StatisticsNoRecompute
	,CASE WHEN inx.ignore_dup_key=1 THEN 'ON' ELSE 'OFF' END AS IgnoreDupKey
	,inx.fill_factor AS [FillFactor]
	,FILEGROUP_NAME(inx.data_space_id) AS FileGroupName
FROM sys.tables t 
JOIN sys.indexes inx ON (t.object_id=inx.object_id)
WHERE  (inx.is_primary_key = 1 OR inx.is_unique_constraint = 1)
AND inx.type > 0
AND t.is_ms_shipped=0
AND SCHEMA_NAME(t.schema_id) = 'Sales'
AND t.name = 'Orders'
ORDER BY SchemaName, TableName, IndexName

So what now?  Well….we are beginning to get enough information to start building a basic script for table, key, default, and constraint statements.  My next installment will cover gathering a little more of the information needed to script a table; I will be looking at non-clustered indexes and table triggers.  Until then…. Happy Querying!!

In my examples I have used the Wide World Importers database.  This sample database replaces the old AdventureWorks databases.  You can download the Wide World Importers OLTP and Data Warehouse databases here:

https://github.com/Microsoft/sql-server-samples/releases/tag/wide-world-importers-v1.0

Find out more about them here: https://docs.microsoft.com/en-us/sql/samples/wide-world-importers-what-is?view=sql-server-2017

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com.  We will be happy to help!  Leave a comment and feel free to track back to us.  We love to talk tech with anyone in our SQL family!

The post Finding Table Defaults, Primary and Unique Keys and Foreign Keys with sp_helpconstraint appeared first on SQLRx.


I thought I would continue to expand on ways to view dependencies.  Following up on my previous posts on the subject, this post will give you an alternate way to find things that reference tables and things that are referenced by a stored procedure or view.  See these (https://www.sqlrx.com/finding_things_sql_that_reference_a_table/ & https://www.sqlrx.com/how-to-find-the-objects-and-columns-a-stored-procedure-or-view-uses/)

A different way to find those same things can be done by using the sys.sql_expression_dependencies catalog view (https://docs.microsoft.com/en-us/sql/relational-databases/system-catalog-views/sys-sql-expression-dependencies-transact-sql?view=sql-server-2017 ).

Finding Referencing Objects

For instance, you can use sys.dm_sql_referencing_entities to find the stored procedures or views that reference the Sales.Orders table.

USE WideWorldImporters
GO
SELECT *
FROM sys.dm_sql_referencing_entities ('Sales.Orders', 'OBJECT');  
GO

Or you can use sys.sql_expression_dependencies to get a similar result.

SELECT OBJECT_SCHEMA_NAME ( referencing_id ) AS referencing_schema_name,  
    OBJECT_NAME(referencing_id) AS referencing_entity_name,   
    o.type_desc AS referencing_desciption,   
    COALESCE(COL_NAME(referencing_id, referencing_minor_id), '(n/a)') AS referencing_minor_id,   
    referencing_class_desc, referenced_class_desc,  
    referenced_server_name, referenced_database_name, referenced_schema_name,  
    referenced_entity_name,   
    COALESCE(COL_NAME(referenced_id, referenced_minor_id), '(n/a)') AS referenced_column_name,  
    is_caller_dependent, is_ambiguous  
FROM sys.sql_expression_dependencies AS sed  
INNER JOIN sys.objects AS o ON sed.referencing_id = o.object_id  
WHERE referenced_id = OBJECT_ID(N'Sales.Orders');  
GO

Finding Referenced Objects

And conversely, to find the tables that are referenced by a stored procedure, you can use sys.dm_sql_referenced_entities like so…

USE WideWorldImporters
GO
SELECT referenced_entity_name AS table_name, referenced_minor_name as column_name, is_selected, is_updated, is_select_all  
FROM sys.dm_sql_referenced_entities ('Website.InvoiceCustomerOrders', 'OBJECT');  -- stored procedure

Or you can again use sys.sql_expression_dependencies to get a similar result.

SELECT OBJECT_NAME(referencing_id) AS referencing_entity_name,   
    o.type_desc AS referencing_desciption,   
    COALESCE(COL_NAME(referencing_id, referencing_minor_id), '(n/a)') AS referencing_minor_id,   
    referencing_class_desc,  
    referenced_server_name, referenced_database_name, referenced_schema_name,  
    referenced_entity_name,   
    COALESCE(COL_NAME(referenced_id, referenced_minor_id), '(n/a)') AS referenced_column_name,  
    is_caller_dependent, is_ambiguous  
FROM sys.sql_expression_dependencies AS sed  
INNER JOIN sys.objects AS o ON sed.referencing_id = o.object_id  
WHERE referencing_id = OBJECT_ID(N'Website.InvoiceCustomerOrders');  
GO

In both cases you can see that sys.sql_expression_dependencies provides the same basic info.  When it comes to the tables referenced by a stored procedure, though, sys.sql_expression_dependencies gives you less detail than sys.dm_sql_referenced_entities.  The biggest difference in both examples is that sys.sql_expression_dependencies reports cross-database and cross-server dependencies, which can be super helpful.
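
For example, to surface just the cross-database and cross-server references in the current database (a sketch):

SELECT OBJECT_SCHEMA_NAME(referencing_id) AS referencing_schema_name,
	OBJECT_NAME(referencing_id) AS referencing_entity_name,
	referenced_server_name, referenced_database_name,
	referenced_schema_name, referenced_entity_name
FROM sys.sql_expression_dependencies
WHERE referenced_database_name IS NOT NULL
	OR referenced_server_name IS NOT NULL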

However, once again I have to sound a note of caution, because even sys.sql_expression_dependencies does not catch things referenced in a dynamic SQL string.  Sales.Orders appears in dynamic SQL in the [Application].[Configuration_EnableInMemory] & [DataLoadSimulation].[Configuration_ApplyDataLoadSimulationProcedures] stored procedures, but it does not catch this.  So far, the best way I know of to find objects in dynamic SQL strings is to check for the object in sys.sql_modules.

USE WideWorldImporters
GO
SELECT * 
FROM sys.sql_modules
WHERE definition like '%Sales.Orders%'
GO

This should give you some good options for finding dependencies, and now you also know some of the limitations.

In my examples I have used the Wide World Importers database.  This sample database replaces the old AdventureWorks databases.  You can download the Wide World Importers OLTP and Data Warehouse databases here:

https://github.com/Microsoft/sql-server-samples/releases/tag/wide-world-importers-v1.0

Find out more about them here: https://docs.microsoft.com/en-us/sql/samples/wide-world-importers-what-is?view=sql-server-2017

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com.  We will be happy to help!  Leave a comment and feel free to track back to us.  We love to talk tech with anyone in our SQL family!

The post Using sys.sql_expression_dependencies as a Single Source to Find Referenced and Referencing Objects appeared first on SQLRx.
