I will be the first to admit that I am what one might call a “SQL Head”.  In other words, I’m a nerd who likes to dig into the internal workings of SQL Server, sometimes down to the bits and bytes. The tools that facilitate this archaeology have slowly been evolving, and with the release of SQL Server 2019 later this year (second half of 2019), this continues to hold true.  Two new tools will be making an appearance: a dynamic management function (DMF), sys.dm_db_page_info, and a system function, sys.fn_PageResCracker.

Note: For this post I am using Azure Data Studio to get more familiar with the product.  

sys.dm_db_page_info

This new DMF returns the page header information of a specified page in a single row.  This includes vital information such as object_id, index_id, and partition_id, along with many others.  In the past, the only real tool available to retrieve this information was DBCC PAGE.  With the use of trace flag 3604, DBCC PAGE would output the contents of a page to the messages pane of SSMS.   By having this information exposed through a DMF, we can easily and quickly get information about a given page when needed.  We can even see that Azure Data Studio (shown below) already has IntelliSense available for the DMF.

In this example, I am connected to a SQL Server 2019 instance that is residing on Linux.

If I look at page ID 336 in the PagesOnLinux database on this instance, we can see information from the page header, such as the status of the differential map. We can see below that the status is set to “CHANGED” and that the DIFF_MAP page ID is 6.
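
For reference, the call behind that output looks something like the following. This is a minimal sketch rather than the exact query from the screenshot: the file ID of 1 is an assumption, and the mode value (the last parameter) simply follows the Books Online example used later in this post.

-- A minimal sketch, not taken from the original post: the file ID of 1 is an assumption,
-- and the mode value follows the Books Online example shown later in this post.
SELECT *
FROM sys.dm_db_page_info(DB_ID('PagesOnLinux'), 1, 336, 1)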

One thing to note about this particular DMF is that all of the parameters except for MODE (the last one) have NULL as the default value; however, attempting to pass a NULL value into the function when executing it will result in an error.  I find this confusing, as other DMFs allow you to pass in NULL values to return a larger subset of data.  It’s possible that this functionality will change by the time SQL Server 2019 is released since the product is still under development.

sys.fn_pagerescracker

Another new addition to SQL Server 2019 is the function sys.fn_PageResCracker.   This function consumes a page resource binary value from sys.dm_exec_requests.  This binary value is a representation of the database ID, file ID, and page ID.  The function converts it into those three values, which can then be cross applied to the sys.dm_db_page_info dynamic management function.  Note that the page_resource value from sys.dm_exec_requests will only be non-NULL in the event that the wait_resource column has a valid value.  If the column has a NULL value, there is not any type of wait resource, so the row can’t be used for this purpose.  The intent is to help diagnose active requests that are waiting on some type of page resource.  A good example of this would be page latches.

Watch for a future blog from me on how this function works internally.  Coming soon!

In Action

In our database, let’s create a table, dbo.Table1, and insert a single row into it.

CREATE TABLE dbo.Table1 (id int, fname varchar(20))
GO
INSERT dbo.Table1 (id, fname) SELECT 1, 'John'
GO

Now, let’s start a transaction and update the data in Table1.   Notice that query hints are in play to force a particular behavior: we want the first transaction to remain open and hold a lock on the page the data is on, so that a second query requesting a page lock will block.  This method is NOT recommended for production usage.   Subsequently, in another session, we will start another query and try to read the data that is locked behind the first transaction.

BEGIN TRANSACTION
UPDATE dbo.Table1
SET fname = 'John'  --I'm just setting it back to the same value; it doesn't matter what I'm setting it to
WHERE id = 1
GO

And here is our secondary query:

-- Note the PAGLOCK hint: not recommended for production usage
SELECT * FROM dbo.Table1 WITH (PAGLOCK)

Now that we have a blocking transaction, we examine sys.dm_exec_requests to obtain the page_resource value that our second query is waiting on.  Remember that the page_resource column will only have a non-NULL value if the request is waiting on a page resource.
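
Something along these lines works (a minimal sketch; on a busy instance you would filter further, for example by session_id):

-- List the requests currently waiting on a page resource.
SELECT session_id, wait_type, wait_resource, page_resource
FROM sys.dm_exec_requests
WHERE page_resource IS NOT NULL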

Now we can finish up and look at the entire query, which allows us to quickly identify what a request is waiting on.  Borrowed from Books Online:

SELECT page_info.* 
FROM sys.dm_exec_requests AS d 
CROSS APPLY sys.fn_PageResCracker (d.page_resource) AS r 
CROSS APPLY sys.dm_db_page_info(r.db_id, r.file_id, r.page_id, 1) AS page_info

Here we can see that the new function gave us the database id, file id, and page id of the corresponding page that we are waiting on.

Summary

Microsoft continues to make advancements in a number of areas with SQL Server 2019.  These two new tools will make various scenarios easier to analyze and resolve.  Make sure to add them to your tool belt so that you have them at your disposal.

© 2019, John Morehouse. All rights reserved.


One of the good things about getting new clients is that sometimes they have needs you have never heard of before.
This does not necessarily mean that those needs are complex. As a matter of fact, they can be really simple... now the question is... are they doable?

From my experience, this happens mainly for one of two reasons: they have some very specific need, or the way the application is built makes you work with features that you haven't played with yet.

SQL Server Agent Job Schedules – Scenario

The client approached me and asked, “Hey, we have an account that is the owner of our jobs, but we would like to use a different account to change the schedule of the job, mainly the start time: is that possible?”
As I was not sure about it, I jumped to the documentation.

First things first

I double-checked whether the login they mentioned had any permissions on the msdb database. In this case, the login was already part of one of the SQL Server Agent fixed database roles, namely SQLAgentOperatorRole, which has the permissions described here.

If we take a look at the first row of the grid, we can see that a login can change a job schedule if it owns it.
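
As a quick sanity check (a sketch, not from the original post), you can list the members of the SQL Server Agent fixed database roles with a query along these lines:

-- List members of the SQL Server Agent fixed database roles in msdb.
SELECT r.name AS role_name, m.name AS member_name
FROM msdb.sys.database_role_members AS drm
JOIN msdb.sys.database_principals AS r ON drm.role_principal_id = r.principal_id
JOIN msdb.sys.database_principals AS m ON drm.member_principal_id = m.principal_id
WHERE r.name LIKE 'SQLAgent%'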

Fair enough, let’s try it

With this information, I was confident that it would work.
To test it, I created a login, added it to the SQLAgentOperatorRole fixed role in the msdb database, and changed the schedule owner.

NOTE: We are talking about the schedule owner and not the job owner; these are two different things.
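
The setup described above might look something like this minimal sketch. The login name and password are placeholders, not from the original post:

-- A sketch of the test setup; the login name and password are placeholders.
USE [master]
GO
CREATE LOGIN [ScheduleEditor] WITH PASSWORD = 'Str0ngP@ssw0rd!'
GO
USE [msdb]
GO
CREATE USER [ScheduleEditor] FOR LOGIN [ScheduleEditor]
GO
ALTER ROLE [SQLAgentOperatorRole] ADD MEMBER [ScheduleEditor]
GO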

To find who is the owner of the schedule we can run the following query:

SELECT name, SUSER_SNAME(owner_sid) AS ScheduleOwner
  FROM msdb.dbo.sysschedules

Then, we need to change the owner to the login we want to use. For this, we use the sp_update_schedule stored procedure in the msdb database with the following code:

EXEC msdb.dbo.sp_update_schedule 
	@name = 'ScheduleName',
	@owner_login_name = 'NewOwnerLoginName'

Now that the login we want to use to change the schedule is the owner of it, the client can connect to the instance using SSMS and this login and edit the schedule, right? Unfortunately no.

Bug on GUI, or missing detail on documentation?

I tested in SSMS and the GUI option is disabled.

I had SSMS v17.3, which is a little bit out of date, so I upgraded to v17.9.1, the current GA (General Availability) version, but I got the same behaviour. I also installed the most recent version, v18.0 Preview 7 (at the time of this post), but again, the same behaviour.

I decided to open bug item 37230619 on SQL Server UserVoice, called “Edit Job Schedule not working when login is the schedule owner”, which you can upvote here if you agree that this should be fixed.

Workaround

Get the schedule ID from the list above and run the following command (with the login that owns the schedule) to change the schedule properties, in this case the start time, so that it runs at 1 AM.

USE [msdb]
GO
EXEC msdb.dbo.sp_update_schedule @schedule_id=0000, 
		@active_start_time=10000
GO

I agree that the @active_start_time parameter value is not the most intuitive, but if you look closer it uses the 'hh:mm:ss' format with the ':' characters removed, expressed as a number.
In this example, '01:00:00' translates to 10000.
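
If you want to sanity-check the conversion, something like this works (a sketch, not from the original post):

-- Convert a time value to the numeric format sp_update_schedule expects.
DECLARE @start time = '01:00:00'
SELECT CAST(REPLACE(CONVERT(varchar(8), @start, 108), ':', '') AS int) AS active_start_time  -- returns 10000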

At least this way it works. The client was happy to have one way to do it.

Bottom line

When the GUI doesn’t work, try to script out the action or find out what is running under the hood and run the command manually. You might get a surprise!


CTO at Clouds On Mars, former Microsoft employee as Data Insights Product Manager for Poland. In 2007 Pawel started Polish SQL Server User Group (PLSSUG), currently known as Data Community Poland, an official PASS Chapter in Poland. Pawel has been a speaker at many conferences in Poland and worldwide (e.g. SQLDay, SQLSaturday, European PASS Conference). Six times Microsoft Most Valuable...


When you start a Power BI project, you need to decide how and where you should store the data in your dataset. There are three “traditional” options:

  • Imported Model: Data is imported and compressed and stored in the PBIX file, which is then published to the Power BI Service (or Report Server if you are on-prem)
  • Live Connection: Data is stored in Analysis Services and your Power BI dataset is really a reference to the Analysis Services database.
  • DirectQuery: Data remains in the source system and Power BI stores metadata and a reference to the source data, executing live queries when a user interacts with a report

As Power BI has evolved, there are now some variations and additions to those options. Composite models allow you to combine imported data sources and DirectQuery data sources. We also now have dataflows, which allow you to use self-service data prep to define and share reusable data entities.

Each of these options has its advantages and limitations. There is no single right answer as to which one you should always pick.

If you have been struggling with this topic, or just want to double-check your thinking, please join me and Kerry Tyler (@AirborneGeek on twitter) for our Denny Cherry & Associates Consulting webcast on April 5th at 12pm Mountain / 2pm Eastern.

The webcast will review your options for where to store data and explain the factors that should be used in determining what option is right for you. Obvious requirements such as data size, license costs and management, and desired data latency will be discussed. We’ll also talk about other factors such as the desire for self-service BI and avoiding data model sprawl. We’ll have content to present, but we are also happy to take questions during the webcast.

Register for the webcast today and join us next Friday, April 5th.


Midlands PASS in Columbia, SC will welcome Microsoft Data Platform MVP Matt Gordon on April 2, 2019. Our meet and greet starts at 5:30 PM with the presentation beginning around 6 PM. If you haven’t already, you can register at the following link so we’ll know how much food to order:

https://www.eventbrite.com/e/democratizing-data-analysis-howwhy-of-social-sentiment-scoring-tickets-56039146596

Here is a description of Matt’s presentation:

The job of a data professional is evolving rapidly, driving many of us to platforms and technologies that were not on our radar screen a few months ago. I am certainly no exception to that trend. Most of us aren’t just monitoring backups and tuning queries – we are collaborating with teams throughout the company to provide them data and insights that drive decisions. Cloud providers are democratizing technologies and techniques that were complicated and proprietary just a few months ago. This presentation walks you through how a silly idea from a soccer podcast got me thinking about how Azure Logic Apps, the Azure Cognitive Services API, and Azure SQL DB combine to provide potentially powerful insights to any company with a social media and sales presence. Join me as I walk you through building a solution that can impact your company’s bottom line – and potentially yours too!


When you drop a database from a SQL Server instance the underlying files are usually removed. This doesn’t happen however if you set the database to be offline first, or if you detach the database rather than dropping it.

The scenario with offline databases is the one that occurs most often in practice. I might ask if a database is no longer in use and whether I can remove it. A common response is that people don’t think it’s in use, but that I should take it offline and we’ll see if anyone screams. I’ll often put a note in my calendar to remove it after a few weeks if no-one has complained. When I do come to remove it, hopefully I’ll remember to put it back online before I drop it so the files get removed, but sometimes I might forget, and in an environment where many people have permissions to create and drop databases you can end up with a lot of files left behind for databases that no longer exist. These are what I’m referring to as orphaned files.
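
For context, the offline-then-drop pattern looks something like the sketch below (the database name is a placeholder). As noted above, dropping the database while it is still offline leaves the data and log files on disk.

-- A sketch of the pattern described above; 'SomeOldDatabase' is a placeholder name.
ALTER DATABASE SomeOldDatabase SET OFFLINE WITH ROLLBACK IMMEDIATE
GO
-- Weeks later, if nobody screamed: bring it back online first so DROP DATABASE also removes the files.
-- Dropping it while still offline leaves the MDF and LDF files behind.
ALTER DATABASE SomeOldDatabase SET ONLINE
GO
DROP DATABASE SomeOldDatabase
GO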

Obviously this shouldn’t happen in production environments where change should be carefully controlled, but if you manage a lot of development and test environments this can certainly occur.

So I created a script I can run on an instance to identify any files in its default data and log directories that are not related to any databases on the instance. Here it is:

--Orphaned database files
DECLARE @DefaultDataPath VARCHAR(512);
DECLARE @DefaultLogPath VARCHAR(512);

SET @DefaultDataPath = CAST(SERVERPROPERTY('InstanceDefaultDataPath') AS VARCHAR(512));
SET @DefaultLogPath = CAST(SERVERPROPERTY('InstanceDefaultLogPath') AS VARCHAR(512));

IF OBJECT_ID('tempdb..#Files') IS NOT NULL
   DROP TABLE #Files;

CREATE TABLE #Files (
   Id INT IDENTITY(1,1),
   [FileName] NVARCHAR(512),
   Depth smallint,
   FileFlag bit,
   Directory VARCHAR(512) NULL,
   FullFilePath VARCHAR(512) NULL);

INSERT INTO #Files ([FileName], Depth, FileFlag)
EXEC MASTER..xp_dirtree @DefaultDataPath, 1, 1;

UPDATE #Files
SET Directory = @DefaultDataPath, FullFilePath = @DefaultDataPath + [FileName]
WHERE Directory IS NULL;

INSERT INTO #Files ([FileName], Depth, FileFlag)
EXEC MASTER..xp_dirtree @DefaultLogPath, 1, 1;

UPDATE #Files
SET Directory = @DefaultLogPath, FullFilePath = @DefaultLogPath + [FileName]
WHERE Directory IS NULL;

SELECT
   f.[FileName],
   f.Directory,
   f.FullFilePath
FROM #Files f
LEFT JOIN sys.master_files mf
   ON f.FullFilePath = REPLACE(mf.physical_name,'\\', '\')
WHERE mf.physical_name IS NULL
  AND f.FileFlag = 1
ORDER BY f.[FileName], f.Directory

DROP TABLE #Files;

I wouldn’t say that you can just go delete these once you’ve identified them, but at least now you have a list and can investigate further.

By the way, you might notice a nasty join statement in the above query. This is to deal with instances where the default directories have been defined with a double backslash at the end. SQL Server setup allows this and it doesn’t cause any day-to-day problems, but can make this sort of automation challenging. I’ve included it in this query as I’ve encountered a few people having this issue. In general I’d avoid such joins like the plague.

Making things more complicated

One complication can be where you have multiple SQL Server instances on the same server. This isn’t greatly recommended in production, but is common in dev\test. Where a database has been migrated from one instance to another, it’s possible that this wasn’t done correctly, the files still exist under the directories for the old instance, and you might then see them as orphaned. I previously posted a script to identify such files:

https://matthewmcgiffen.com/2018/04/24/database-files-down-the-wrong-path/

Also in the case of multiple instances, you might want to report across all of them at once. For that you could use a SQL Server Central Management Server (CMS). I’ve posted on that too:

https://matthewmcgiffen.com/2018/04/10/cms-effortlessly-run-queries-against-multiple-sql-servers-at-once/

Combining these three techniques makes it relatively easy to identify files that are probably no longer needed. You can get a list of all files that don’t belong to databases on the instances they live under, correlate that to any files that are down the wrong path for any of your instances, then look at what’s left over.


Heads up for SQL Server on Linux folks using availability groups and Pacemaker. Pacemaker 1.1.18 has been out for a while now, but it’s worth mentioning that there was a behaviour change in how it fails over a cluster. While the new behaviour is considered “correct”, it may affect you if you’ve configured availability groups on
-> Continue reading SQL Server on Linux – feature change in Pacemaker 1.1.18

The post SQL Server on Linux – feature change in Pacemaker 1.1.18 appeared first on Born SQL.


The SQL in the City Summits are back for 2019, and we’ve got more scheduled than ever before. You can see the complete list on our event page, and I’m lucky to be invited to all of them. Here’s the schedule.

London

We kick off the year in London on April 30. Having been busy with volleyball all this year, my first free weekend will be spent heading back to London and helping kick off our 2019 tour.

Tickets are on sale now, and if you use “SteveJones” as a code, you should get 50% off. Register today and I’ll see you at Canary Wharf.

The US Tour

It’s a short tour, but we’ll be back in the fall again, so if you’re not near one of these cities, you’ll still have a chance to attend. Use “SteveJones” as the code when you register for a ticket.

We start on May 15 with SQL in the City Summit Los Angeles. I’ll enjoy the City of Angels for a few days, and I’m looking forward to coming back for an hour at Venice Beach. I’ll also be speaking, along with Kendra, our top engineer, Arneh, and the amazing Ike Ellis. It will be a great time, so brave the traffic and come see us at the Microsoft office in LA.

Next we head to Austin, TX on May 22 for the next Summit. Austin is a great town, and full of SQL Server professionals. We came through here a few years ago and I’m looking forward to coming back. It will be a quick trip for me as my daughter graduates that week, so I’m jetting back to Colorado after the event.

SQL in the City Down Under

Redgate recently opened an office in Australia and as part of our expansion, we’re doing a SQL in the City Tour in June. I’m very excited as I’ve never been to this part of the world, though this will be a long trip with some vacation taken to enjoy the trip.

We start in Brisbane, on May 31 as the first stop. We coincide with SQL Saturday #838 – Brisbane on June 1, so be sure you register for both. Warwick Rudd, of SQL Masters Consulting, will be joining me to talk about Redgate and database DevOps. We also have Hamish Watson, Michael Noonan, and Dr. Greg Low joining us. A special thanks to Warwick for moving the SQL Saturday date to align here and help me out.

Next I’m taking a few days off and making my way to Christchurch, NZ for our second event on June 7, with SQL Saturday #831 on June 8. Hamish Watson is an exciting speaker and he’ll be hosting us. Looking forward to a fun NZ winter, perhaps with some skiing before or after.

We round out the tour in Melbourne on June 14, with SQL Saturday #865 on June 15. I’ve heard wonderful things about the city and I’m hoping I get to enjoy a little time there. If you say hi, let me know what you like to do around town.

It’s going to be a hectic few months, but I’m looking forward to delivering some sessions and meeting some wonderful people to talk databases.


The first critical task any data professional should ever learn is how to connect to SQL Server. Without a connection to SQL Server, there is barely anything one can do to be productive in the data professional world (for SQL Server).

Yet, despite the commonality of this requirement and the ease of the task, I find that there is a frequent need to retrain professionals on how to connect to SQL Server. The connection could be attempted from any of the current tools, but for some reason it still perplexes many.

Let’s look at just how easy it is (should be) to connect to SQL Server (using both SQL Auth and Windows Auth).

Simplicity

First let’s take a look at the dialog box we would see when we try to connect to a server in SQL Server Management Studio (SSMS).

Circled in red we can see the two types of authentication I would like to focus on: “Windows Authentication” and “SQL Server Authentication”. These are both available from the dropdown box called Authentication. The default value here is “Windows Authentication”.

If I choose the default value for authentication or “Windows Authentication”, I only need to click on the connect button at the bottom of that same window. Upon closer inspection, it becomes apparent that the fields “User name:” and “Password:” are disabled and cannot be edited when this default value is selected. This is illustrated in the next image.

Notice that the fields circled in red are greyed out along with their corresponding text boxes. This is normal and is a GOOD thing. Simply click the connect button circled in green and then you will be connected if you have permissions to the specific server in the connection dialog.

Complicating things a touch is “SQL Server Authentication”. This is where many people get confused; I often see attempts to enter Windows credentials for this authentication type. Sure, we are authenticating to SQL Server, but the login used in this method is not a Windows account. The account used for this type of authentication is strictly one that is created explicitly inside of SQL Server: a login where the principal name and the password are both managed by SQL Server.

Let’s take a look at an example here.

Notice here that the text boxes are no longer greyed out and I can type a login and password into the respective boxes. Once done, I then click the “Connect” button and I will be connected (again, presuming I have been granted access and I typed the user name and password correctly).

What if I attempt to type a windows username and password into these boxes?

If I click connect on the preceding image, I will see the following error.

This is by design and to be expected. The authentication methods are different. We should never be able to authenticate to SQL Server when selecting “SQL Server Authentication” and then providing Windows credentials. Windows credentials must be used with the “Windows Authentication” option. If you must run SSMS as a different Windows user, then I recommend reading this article.
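
Once connected, a quick way to confirm which method you actually authenticated with is a query like this (a sketch, not from the original post):

-- Confirm how the current session authenticated.
SELECT SUSER_SNAME() AS login_name,
       auth_scheme  -- 'SQL' for SQL Server Authentication, 'NTLM' or 'KERBEROS' for Windows Authentication
FROM sys.dm_exec_connections
WHERE session_id = @@SPID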

The Wrap

Sometimes what may be ridiculously easy for some of us is mind-blowing to others. Sometimes we use what we think are common terms only to see eyes start to glaze over and roll to the backs of people’s heads. This just so happens to be one of those cases where launching an app as a different principal may be entirely new to the intended audience. In that vein, it is worthwhile to take a step back and “document” how the task can be accomplished.

Connecting to SQL Server is ridiculously easy. Despite the ease, I find myself helping even “Senior” level development and/or database staff make these connections.

If you feel the need to read more connection related articles, here is an article and another on the topic.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.


Brent recently did a post called In Azure SQL DB, what does “The connection is broken and recovery is not possible” mean? and really the main point of the post was this:

All it really means is, “Click Execute again, and you will be fine.”

FYI you can also get the same error when connecting to an on-premises instance and it still means the same thing.

Also along those lines here is another similar error that also means the same thing:

Translated as: “Click Execute again, and you will be fine.”

You can get additional information here.

I do want to give one warning though. When you see these errors, trying again will almost always reconnect you. However, unless your connection is told to go directly to the database you need (as would happen with an Azure SQL DB connection), you may end up back in your default database.
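
A quick way to confirm where a reconnected session ended up (a sketch, not from the original post):

-- Check which database the session landed in after reconnecting.
SELECT DB_NAME() AS current_database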
