
The SQL Installation script can be found here

The Inspector sample report has been updated here

All of the GitHub issues below can be found on the V1.4 project page

#68 Added a new table [Inspector].[ExecutionLog] which holds the elapsed time for each stored procedure executed during a collection; this table is truncated at the beginning of each day.

#88 Fixed a bug where Database growths would not show on the Inspector report with certain ModuleConfig configurations.

#89 Increased the period in which SQL Edition/Version changes are reported from current day only to the last 24 hours.

#90 Added our getting started link to the installation output within the messages tab.

#91 Revised the Power BI backups view to a working revision; we hadn't finished this view in 1.3 but needed it available so that we could develop the Power BI report during 1.4 development.

#93 Added Service Pack and CU translation of version numbers when SQL Edition/Version changes are detected.

Example of Service pack translation.

#95 Added a new column to the Growth HTML table which shows you the last 5 days of growth for the database file, to help you understand whether the growth was a one-off or whether it's a common trend.

#96 If you have the Undercover Catalogue installed in the same database as the Inspector, you can enable three new modules within the Inspector: a missing logins module, which reports logins missing between SQL Servers in an AG; a dropped tables module, which reports tables that have been dropped in the last 24 hours; and a dropped databases module, which reports databases that have been dropped in the last 24 hours. These new modules can be enabled/disabled in a new table, [Inspector].[CatalogueModules].

Email report header conditions.

[Inspector].[CatalogueModules] table, enable/disable reporting as required.
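Toggling one of these modules is a simple update against that table. A rough sketch only; the column names and module name here are assumptions, so check the table definition in your installation:

--Hypothetical sketch: column and module names assumed, check your [Inspector].[CatalogueModules] definition
UPDATE [Inspector].[CatalogueModules]
SET [IsActive] = 1 --1 = enabled, 0 = disabled (assumed convention)
WHERE [Modulename] = 'CatalogueMissingLogins';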

#98 If you have the Undercover Catalogue installed in the same database as the Inspector you will now see extra information on the Inspector report within the server headers.

#99 Added HTML colour customisation for the Warning/Advisory/Info headers, including the highlighting and text colours.


#100 Added the ability to exclude databases from the BackupsCheck module either permanently or up until a certain date/time.

[Inspector].[BackupsCheckExcludes] Table
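An exclusion is a simple insert into that table. A hedged sketch; the column names below are assumptions rather than the actual table definition:

--Hypothetical sketch: column names assumed, check the [Inspector].[BackupsCheckExcludes] definition
INSERT INTO [Inspector].[BackupsCheckExcludes] ([Servername],[Databasename],[SuspendUntil])
VALUES ('MyServer','ScratchDB',NULL), --NULL = excluded permanently (assumed)
('MyServer','MigrationDB','2019-06-01 09:00:00'); --excluded until this date/time (assumed)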

#101 Added a no clutter mode for the Inspector report. This is a new parameter for the SQLUndercoverInspectorReport stored procedure which will only show you HTML tables where thresholds/warnings/advisories have been breached, or where the table is an informational type table; this mode will not show tables that report “no issues were found”.

Tables that report no issues like this will not show with @NoClutter = 1
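Calling the report in no clutter mode looks something like this (a minimal sketch: the procedure takes other parameters which are omitted here, and we're assuming it lives in the [Inspector] schema like the other objects above):

--Run the Inspector report in no clutter mode
EXEC [Inspector].[SQLUndercoverInspectorReport] @NoClutter = 1;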

#102 We optimised the DriveSpaceInsert proc – on large instances it didn’t scale well.

#103 Added a new stored proc to show you drive space capacity increase history.


You know what it’s like, you need to fail your AG over but is it safe to fail over?

Perhaps you’ve clicked on ‘failover’ for the AG and there’s a little green tick and no data loss reported…

Or maybe you’ve checked out sys.dm_hadr_database_replica_cluster_states and ‘is_failover_ready’ is reporting a 1.
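That check usually looks something like this:

--Is every database on every replica ready to fail over without data loss?
SELECT
Replicas.replica_server_name,
States.database_name,
States.is_failover_ready
FROM sys.dm_hadr_database_replica_cluster_states States
INNER JOIN sys.availability_replicas Replicas ON States.replica_id = Replicas.replica_id
ORDER BY Replicas.replica_server_name, States.database_name;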

So, you’re cool to failover, right? But hold those horses one cotton picking minute…

You’ve failed over but now you’re getting a ton of odd error messages, or maybe things aren’t working quite as you or your users might expect.

But SQL Said The AG Was Failover Ready!

What SQL was really telling you was that there was no risk of data loss in the event of a failover, but was it ‘really’ failover ready? Are there other things that you need to think about?

Agent Jobs

What about your SQL Agent jobs? Have you got jobs that perform actions on your data? If you have, do those jobs exist on the new primary? If they don’t then I’m happy to bet that whatever function that they were playing probably isn’t happening anymore.

One thing that I always want to make sure before I failover is, do I have all the relevant jobs ready to roll on the secondary server?

But what about the jobs that you’ve got on the old primary? There’s a fair chance that, if they’re doing any sort of data manipulation, they’re going to be failing.

So not only do you want to make sure all your jobs exist on the secondary nodes, you also want some intelligence built in to check that they can run on the node in its current state (either primary or secondary), as the sketch below shows.
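A common way to do that (just a sketch, not the system I describe next) is to make the first job step bail out when the node isn't currently primary for the database the job works on:

--First job step: exit quietly if this replica isn't currently primary.
--'MyDatabase' is a placeholder for the database the job actually touches.
IF sys.fn_hadr_is_primary_replica('MyDatabase') <> 1
BEGIN
PRINT 'Not the primary replica for MyDatabase - nothing to do.';
RETURN;
END;
--...the job's real work goes here...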

I’ve built a little system for managing this for me automatically, I may have to blog about it at some point in the future.

Missing Logins

Another big thing that can trip you up is logins.

Firstly, does the login actually exist on the secondary server? If it’s not there then you can bet your bottom dollar that your users or applications are going to suddenly have issues when they try to access your databases.

Mismatched Passwords

So your login exists but your users are still having trouble! Does the password on the secondary match the password on the primary? I’ve seen plenty of times where the diligent DBA has created the login on all the nodes but for some reason, given them all a different password.

That’s not going to be helpful for your applications.

Mismatched SIDs

Finally, one more thing about logins that may trip you up: the SID.

When you create a login, it’s allocated a unique SID. This SID is used to map the server login to a database user.

Why can this be a problem?

If you just go around creating the logins on each of your nodes, you’ll find that each of your logins will have a different SID. The problem here is that when it comes to failover, your database users will end up being orphaned from the login.
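You can spot those orphaned users with a query along these lines:

--Find SQL authenticated database users whose SID has no matching server login
SELECT DatabaseUsers.name AS OrphanedUser
FROM sys.database_principals DatabaseUsers
LEFT JOIN sys.server_principals ServerLogins ON DatabaseUsers.sid = ServerLogins.sid
WHERE ServerLogins.sid IS NULL
AND DatabaseUsers.type = 'S' --SQL users only
AND DatabaseUsers.principal_id > 4; --skip dbo, guest, INFORMATION_SCHEMA, sys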

This can be fixed by running sp_change_users_login, I’ve known people to run this every minute via an Agent Job to automatically fix the issue in the event of a failover.

But why do that? Why not just make sure that the SIDs match in the first place? Instead of creating the login on each node, you can use sp_help_revlogin, this will script out the login, including the password and SID and will help you avoid the issues above.
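Once you've got sp_help_revlogin installed from Microsoft's support article, transferring a login looks something like this:

--On the current primary: script out the login complete with its password hash and SID
EXEC sp_help_revlogin @login_name = 'MyAppLogin';
--The proc prints a CREATE LOGIN statement along these lines; run that output on the other nodes:
--CREATE LOGIN [MyAppLogin] WITH PASSWORD = 0x0200... HASHED, SID = 0x62FA...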

Connection Strings

Another tripping point is your connection strings. Are these pointing to the AG listener or are they pointing at the name of the SQL Server?

If it’s the latter, when you fail over they’ll at best be hitting a read-only copy of the database or, if what was the primary is now a non-readable secondary, they won’t be able to access anything at all.

But What Can I Do About It? Before You Failover…

Missing and Mismatched Logins

It really is well worth your time making sure that your availability groups really are in a state that a failover isn’t going to result in a bunch of support calls landing on your desk.

There are a few things that I make sure that I’ve got in place as standard so that I know that I’m always in a good place to failover.

Firstly I want to make sure that there are no missing logins across my nodes and those that are there have a matching password and SID. This is made very simple with our own Undercover Catalogue, as I explained in this post (note, version 0.2 is now released and does contain the availability group information).

An alert on any mismatch in logins will also be part of the soon to be released v1.4 of our Undercover Inspector.

Missing Agent Jobs

The Undercover Catalogue also holds information on Agent jobs, so another script that I regularly run uses that to find any jobs that are missing on any of the nodes, as well as checking that the schedules match. That’ll get published soon as well.

After You’ve Failed Over…

After a failover, even with all my pre-failover checks in place, I still like to make sure that there aren’t any tell-tale signs that we’ve got an issue. These are failed logins and failed jobs on both the SQL Server that we’ve failed over from and the one we’ve failed over to.

To help me with this, I use a couple of procs that Adrian wrote: sp_FailedLogins and sp_FailedJobs.

Hopefully you’ve found this post useful and it’ll help you make your failovers as painless as possible.


This is going to be a bit of a brainstorming post that comes from an interesting question that I was asked today…

“I’ve got a table with an ID code field; some of the rows have a value in that field and some are NULL. How can I go about filling in those NULL values with a valid code but at the same time avoid introducing duplicates?”

This person had had a good crack at solving this but had hit a problem. Taking a look at how they had gone about solving this, I could see their logic but their code was overly complex and full of cursors and loops. Now, I’m not against cursors when they’re used in the right context but this was most certainly one of those classic “cursors are bad and inefficient” moments.

That got me thinking, is there a better way that this could be solved?

The Problem

Before I go any further, let’s take a quick look at the problem. Take a look at my ‘albums’ table, it holds an album name and a code for that album (the real world example was far more complex than this, but you should get the point).

As you can see, the table has a bunch of missing ID codes. Now the IDs I’ve used here are numeric, but they could be any sort of alpha numeric code.
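For reference, a minimal version of the table can be knocked up like this (the data is made up, but the ID and Title columns match the queries below):

CREATE TABLE albums
(ID INT NULL, --the ID code, with gaps that need filling in
Title VARCHAR(255) NOT NULL)

INSERT INTO albums (ID, Title)
VALUES (1,'Abbey Road'),
(NULL,'The Wall'),
(3,'Rumours'),
(NULL,'Thriller'),
(5,'Back in Black')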

So how do we go about filling in those blanks?

My Solution

For the solution that I’ve come up with, we first need to create a temp table which is going to hold enough valid codes for each row of the table. For now, don’t worry about generating codes that already exist in the ‘albums’ table, we’ll deal with those later.

CREATE TABLE #Codes
(ID INT NOT NULL IDENTITY(1,1))

--GO 10 runs the batch 10 times (an SSMS trick), inserting 10 rows; bump the number to cover your table
INSERT INTO #Codes DEFAULT VALUES
GO 10

SELECT ID FROM #Codes

So now we have a list of all possible ID codes, but notice that we’ve got codes that are already assigned in the base table. We can deal with those pretty easily…

SELECT ID
FROM #Codes
WHERE #Codes.ID NOT IN (SELECT ID FROM albums WHERE albums.ID IS NOT NULL)

But how do we go about filling in those blank codes? You’re going to need a couple of CTEs and a little trick I blogged about a while back in order to join them: https://sqlundercover.com/2017/06/26/joining-datasets-side-by-side-when-theres-no-common-key/

WITH albumsCTE (ID, Title, RowNo)
AS
(SELECT ID, Title, ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS RowNo
FROM albums
WHERE ID IS NULL),

CodesCTE (ID, RowNo)
AS 
(SELECT ID,  ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS RowNo
FROM #Codes
WHERE #Codes.ID NOT IN (SELECT ID FROM albums WHERE albums.ID IS NOT NULL))

UPDATE albumsCTE
SET ID = CodesCTE.ID
FROM CodesCTE
WHERE albumsCTE.RowNo = CodesCTE.RowNo

Run that in, and now let’s check out our original table…

And there we have it, all our missing codes have been filled in.

That’s the solution that I came up with during the drive home, if you’ve got a clever solution to the problem, I’d love to hear about it.


A while back, I was having a conversation about a deadlocking issue and suggested that an index could perhaps help solve it. The reaction I got was along the lines of, ‘What, how can an index solve a deadlocking issue?’

So, can we solve a deadlocking issue with an index?

Let’s create a rather simple, contrived deadlock situation.

I’m going to start by creating a couple of rather simple tables.

--Address Table
CREATE TABLE [dbo].[Address](
	[AddressID] [int] IDENTITY(1,1) NOT NULL,
	[Street] [varchar](255) NULL,
	[City] [varchar](255) NULL,
PRIMARY KEY CLUSTERED 
(
	[AddressID] ASC
))
GO

--Name Table
CREATE TABLE [dbo].[Name](
	[NameID] [int] IDENTITY(1,1) NOT NULL,
	[Forename] [varchar](255) NULL,
	[Surname] [varchar](255) NULL,
PRIMARY KEY CLUSTERED 
(
	[NameID] ASC
)) 
GO

I’ll then populate the pair of them with 500 rows.
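Any 500 rows will do; something along these lines is a quick way to do it (the values here are arbitrary, and the demo below filters on values like 'Thunder Bay' and 'Blankenship', so substitute data containing those values if you want to follow along exactly):

--Populate both tables with 500 arbitrary rows
INSERT INTO dbo.[Address] (Street, City)
SELECT CONCAT(Numbers.n,' The Road'), CONCAT('City',Numbers.n % 50)
FROM (SELECT TOP (500) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM sys.all_objects) Numbers

INSERT INTO dbo.[Name] (Forename, Surname)
SELECT CONCAT('Forename',Numbers.n), CONCAT('Surname',Numbers.n)
FROM (SELECT TOP (500) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM sys.all_objects) Numbers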

Create a Deadlock

Now I’ll open up two sessions on that database and create a simple deadlock situation.

On the first session, I’ll run the following code

BEGIN TRANSACTION

UPDATE Address
SET Street = '1 The Road'
WHERE City = 'Thunder Bay'

and on the second…

BEGIN TRANSACTION

UPDATE Name
SET Forename = 'Bob'
WHERE Surname = 'Blankenship'

We’re now in a place where we’ve got two sessions each holding an exclusive row lock in their respective tables. A pretty standard situation in SQL and not at all sinister.

Now, back to the first session and I’ll run the following select statement…

SELECT ForeName, Surname
FROM Name
WHERE Surname = 'Bryan'

Nothing gets returned, we’re blocked. That’s to be expected of course, session 2 is holding an exclusive lock on ‘Name’ thanks to the UPDATE that it’s not yet committed.

Let’s run the following from session 2

SELECT Street, City 
FROM Address
WHERE City = 'Karapinar'

and…… DEADLOCK!

A pretty straightforward deadlock scenario. I’m not going to explain it here; there are plenty of resources out there explaining how and why this happens.

Can An Index Help Us Here?

Now for the big question, can we solve this using an index? Before we look at that, let’s have a look at what’s going on inside our ‘name’ table during this situation.

UPDATE Address
SET Street = '1 The Road'
WHERE City = 'Thunder Bay'

The first thing that happens is the UPDATE statement takes out a lock on the row it’s updating.

Now let’s run our SELECT statement from the second session and see what happens.

SELECT Forename, Surname FROM Name
WHERE Surname = 'Bryan'

We get blocked. Let’s quickly check the execution plan and see what’s happening…

A clustered index scan. So what that means is that SQL is scanning the clustered index from top to bottom until it hits the locked row. It can’t go any further at that point so ends up getting blocked.

I wonder if we can help SQL out here. If we can make it easier for SQL to find that row, we might be able to avoid that block and in turn, avoid the deadlock situation.

What about the following index…

CREATE INDEX ix_Name_Surname_INCLUDE 
ON Name (Surname) INCLUDE(Forename)

Let’s create that and try to recreate our original deadlock situation…

WOW, no deadlock!

So what’s happening now? Let’s think about the update first. Be aware that because we’ve added an index, our update has also got to update that index too. Because of that, we’ll now see a lock on the new index as well.
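If you want to see those locks for yourself, a query along these lines shows which index each lock belongs to (run it from a third session, swapping in the updating session’s spid for 53):

--Which index does each KEY/PAGE lock held by session 53 belong to?
SELECT
Indexes.name AS IndexName,
Locks.resource_type,
Locks.request_mode,
Locks.request_status
FROM sys.dm_tran_locks Locks
INNER JOIN sys.partitions Partitions ON Locks.resource_associated_entity_id = Partitions.hobt_id
INNER JOIN sys.indexes Indexes ON Partitions.object_id = Indexes.object_id AND Partitions.index_id = Indexes.index_id
WHERE Locks.request_session_id = 53 --the updating session's spid
AND Locks.resource_type IN ('KEY','PAGE');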

But why is our SELECT not getting blocked? Let’s have a look at that execution plan now…

Notice anything different? Because we’ve built a covering index, we can now perform a seek on the index and avoid the locked record altogether.

So by adding a covering index we can avoid our session getting blocked and prevent the deadlock from occurring.

Just To Prove That The Seek Was The Cure

Just to prove that the scan is the cause of the block, we can add FORCESCAN to our query and see what happens.

SELECT * 
FROM Name WITH (FORCESCAN)
WHERE Surname = 'Burt'

So now we can see that we’re once again scanning the index and now we’re back to the blocking situation.

Hopefully the above illustrates how the use of an index can help prevent blocking and ultimately, deadlock situations from occurring.

Obviously the usual caveats around indexing apply: having too many or excessively large indexes on your tables can hurt write performance, so make sure that an index is the right way forward for you.


In this episode, Adrian looks at some of the new features of the Undercover Inspector 1.3 and special guest Sean McCown gives us a fantastic intro to the PowerShell SMO (0:23:20). All Sean’s scripts can be found on GitHub at https://github.com/SQLUndercover/UndercoverToolbox/tree/master/Undercover%20TV%20Scripts/BeginningSMO

Undercover TV - Sean McCown Joins Us For a session on Beginning Powershell SMO - YouTube

When we first published 7 Ways to Query Always On Availability Groups using SQL, we had no idea it would be so popular! So here is a quick post with 7 more ways to query Always On Availability Groups using T-SQL; it’s always handy to have a few little snippets like these stashed away for when you need them!

Check which replicas have read-only routing configured, allowing them to be readable within an AG (or AGs):


SELECT
PrimaryServer.replica_server_name AS PrimaryServer,
Groups.name AS AGname,
ReadOnlyReplica.replica_server_name AS ReadOnlyReplica,
ReadOnlyReplica.read_only_routing_url AS RoutingURL,
RoutingList.routing_priority AS RoutingPriority
FROM sys.availability_read_only_routing_lists RoutingList
INNER JOIN sys.availability_replicas PrimaryServer ON RoutingList.replica_id = PrimaryServer.replica_id
INNER JOIN sys.availability_replicas ReadOnlyReplica ON RoutingList.read_only_replica_id = ReadOnlyReplica.replica_id
INNER JOIN sys.availability_groups Groups ON Groups.group_id = PrimaryServer.group_id
WHERE PrimaryServer.replica_server_name != ReadOnlyReplica.replica_server_name
ORDER BY
PrimaryServer ASC,
AGname ASC

Is this server a primary server for any availability group?


SELECT [Groups].[name]
FROM sys.dm_hadr_availability_group_states States
INNER JOIN sys.availability_groups Groups ON States.group_id = Groups.group_id
WHERE primary_replica = @@Servername

Is this server a primary server for a specific availability group?


SELECT [Groups].[name]
FROM sys.dm_hadr_availability_group_states States
INNER JOIN sys.availability_groups Groups ON States.group_id = Groups.group_id
WHERE primary_replica = @@Servername
AND Groups.name = '[AG name here]'

How many databases are there in each availability group on this server?


SELECT
Groups.name,
COUNT([AGDatabases].[database_name]) AS DatabasesInAG
FROM master.sys.availability_groups Groups
INNER JOIN Sys.availability_databases_cluster AGDatabases ON Groups.group_id = AGDatabases.group_id
GROUP BY Groups.name
ORDER BY Groups.name ASC

Total Database size in each availability group on this server?


SELECT
Groups.name,
SUM(CAST((CAST([master_files].[size] AS BIGINT )*8) AS MONEY)/1024/1024) AS TotalDBSize_GB
FROM master.sys.availability_groups Groups
INNER JOIN Sys.availability_databases_cluster AGDatabases ON Groups.group_id = AGDatabases.group_id
INNER JOIN sys.databases ON AGDatabases.database_name = databases.name
INNER JOIN sys.master_files ON databases.database_id = master_files.database_id
GROUP BY Groups.name
ORDER BY Groups.name ASC

Check Availability group health and whether a database is suspended.


SELECT DISTINCT
Groups.name AS AGname,
Replicas.replica_server_name,
States.role_desc,
States.synchronization_health_desc,
ISNULL(ReplicaStates.suspend_reason_desc,'N/A') AS suspend_reason_desc
FROM sys.availability_groups Groups
INNER JOIN sys.dm_hadr_availability_replica_states as States ON States.group_id = Groups.group_id
INNER JOIN sys.availability_replicas as Replicas ON States.replica_id = Replicas.replica_id
INNER JOIN sys.dm_hadr_database_replica_states as ReplicaStates ON Replicas.replica_id = ReplicaStates.replica_id

Set Availability group backup preference.


USE [master];

--Set Backup preference to Primary replica only
ALTER AVAILABILITY GROUP [AG name here] SET(AUTOMATED_BACKUP_PREFERENCE = PRIMARY);

--Set Backup preference to Secondary only
ALTER AVAILABILITY GROUP [AG name here] SET(AUTOMATED_BACKUP_PREFERENCE = SECONDARY_ONLY);

--Set Backup preference to Prefer secondary
ALTER AVAILABILITY GROUP [AG name here] SET(AUTOMATED_BACKUP_PREFERENCE = SECONDARY);

--Set Backup preference to Any replica (no preference)
ALTER AVAILABILITY GROUP [AG name here] SET(AUTOMATED_BACKUP_PREFERENCE = NONE);

--The current backup preference can be queried like this
SELECT
name AS AGname,
automated_backup_preference_desc
FROM sys.availability_groups;

Thanks for reading.


Please see https://sqlundercover.com/undercover-catalogue-0-2/ for full details on the Undercover Catalogue and how to obtain it.

We’ve spotted an issue in the ‘Tables’ module where Unicode datatypes were having their size recorded as the data length (in bytes) rather than the character count. Another issue affected MAX datatypes, which were having their length mis-recorded.

Version 0.2.1 fixes both these issues.

Upgrading From 0.2.0

If you’re currently running v0.2.0, you can upgrade simply by running ‘UndercoverCatalogue 0.2.1 Hotfix.sql’ against all servers where the Catalogue is installed.

You’ll also need to update your Interrogator script to the latest ‘CatalogueInterrogation.ps1’ script.

Upgrading From 0.1.0

If you’re upgrading from v0.1.0 you’ll need to run the full ‘UndercoverCatalogue Setup.sql’ script.

You’ll also need to update your Interrogator script to the latest ‘CatalogueInterrogation.ps1’ script.

All scripts are available from our GitHub site.
