The goal

Keeping two databases in sync is a task we often encounter when working on SQL database development and maintenance. One particular case is this – the versions of a database present on testing and development environments need to be consistently synchronized to make sure that tests are running on the most recent version. This should be done by detecting schema changes in the development version of a database and then automatically synchronizing them with the version from the testing environment on a scheduled basis.

How to achieve it

We’ll look into how automated synchronization of two SQL databases can be achieved by combining a schema comparison tool with PowerShell scripts and scheduling.

In the example below, the databases will be synchronized every 15 minutes. Since we’re using a schema comparison tool, the database will only be updated if there are actual differences between the development and testing versions of the database – this prevents the execution of unnecessary resource-heavy operations.

We’ll use Devart Schema Compare as the utility of choice to diff and synchronize SQL Server databases. It’s a tool that allows you to compare and sync live SQL Server databases, snapshots, and backups. Let’s look at how we can implement our project below.

Setup

First of all, we’ll need to set some things up:

  1. Run Schema Compare
  2. Click the ‘New Schema Comparison‘ button on the toolbar or, alternatively, in the top right part of the start page:
  3. In the Source and Target tab of the New Schema Comparison window, choose the desired source and target databases:
  4. In the Options, Schema Mapping, Table Mapping and Object Filter tabs, you can set up the necessary comparison and synchronization options:
  5. When everything is set up, you can press the ‘Compare‘ button in the bottom right corner to start the comparison process.
  6. When the comparison is finished, you can see all objects that have been compared and their respective difference status in the top part of the window, while the detailed script for each of those objects will be located at the bottom:
  7. Check all objects to include them into the synchronization process and click the ‘Save‘ button or press File > Save. This will save the project with all the information we need to start the synchronization process.
  8. Once the project is saved, click the ‘Synchronize objects‘ button:
  9. Schema Synchronization Wizard will be opened. First, select ‘Execute the script directly against the target database‘ in the Output tab:
  10. You can choose the necessary settings in the Options tab:
  11. You can check all objects that are going to be synchronized in the Summary tab:
  12. Click ‘Synchronize‘ in the bottom-right corner to test the synchronization process.
  13. You will see the results displayed in the bottom pane of the window:
Automating the process

Now that the schema synchronization was successful and we have a project file with all the necessary info, let’s automate the synchronization process by using a PowerShell script.

In the following example, it is assumed that you use Integrated security; if database credentials are used instead, they are stored encrypted in the project file, along with repository credentials, and can still be easily accessed from it.

We’ll look at some parts of the script creation process that may be particularly interesting, but feel free to skip this section to download and test the finished script right away.

Creating a script

First, we’ll need to create a function that checks if the Outputs folder exists. This folder’s purpose is to store date-stamped output summaries. The function should look like this:

#check if the Outputs folder exists
function CheckAndCreateFolder($rootFolder, [switch]$Outputs)
{
    $location = $rootFolder

    #setting up its location
    if($Outputs -eq $true)
    {
        $location += "\Outputs"
    }

    #creating the folder if it doesn't currently exist
    if(-not (Test-Path $location))
    { mkdir $location -Force:$true -Confirm:$false | Out-Null }

    return $location
}

Next, we’ll define the root folder and the location of schema output summaries:

#defining the root folder for the synchronization process
$rootFolder = "d:\temp\SchemaSync"

#defining the location of schema output summaries 
$outsLoc = CheckAndCreateFolder $rootFolder -Outputs

Now, we’ll need to define Schema Compare’s location, the date stamp variable and parameters of the application:

#Schema Compare location (adjust the path to your installed comparison tool), the date stamp variable, and the tool's parameters
$diffLoc = "C:\Program Files\Devart\dbForge Studio for MySQL\dbforgemysql.com"
$stamp = (Get-Date -Format "MMddyyyy_HHmmss")

With this in place, we can set the path to the output log file:

#output log file path
$logPath = "$outsLoc\SchemaOutput_$stamp.txt"
$Params = "/schemacompare /compfile:""D:\temp\SchemaSync\Project\ALLA1vsALLA2.scomp"" /log:""$logPath"""
$sync = " /sync"

Next, we’ll call Schema Compare and let it execute its synchronization parameters:

#initiate the schema comparison and synchronization process
(Invoke-Expression ("& `"" + $diffLoc +"`" " +$Params))
$returnCode = $LASTEXITCODE

$message = ""

The final part of the script will serve to define all possible outcomes:

  1. Schema changes are detected – return code 101; the synchronization is then executed and, on success, finishes with return code 0 – Success
  2. No schema changes are detected – return code 100; the job is aborted
  3. Any other return code means an error was encountered, and the output summary will be opened.
if ($returnCode -notin (100, 101))
{
    #an error is encountered
    $logPath = "$outsLoc\SchemaOutput_error.txt"

    $message >> $logPath
    clear-content $logPath
    $message = "`r`n $returnCode - An error is encountered"

    #output file is opened when an error is encountered
    Invoke-Item "$logPath"
}
else
{
    if ($returnCode -eq 101)
    {
        #schema changes are detected
        clear-content $logPath
        (Invoke-Expression ("& `"" + $diffLoc +"`" " +$Params+$sync))
        $returnCode = $LASTEXITCODE
    }
    if($returnCode -eq 0)
    {
        $message = "`r`n $returnCode - Schema changes were successfully synchronized"
    }
    else
    {
        #there are no schema changes
        if($returnCode -eq 100)
        {
            $message = "`r`n $returnCode - There are no schema changes. Job aborted"
        }
    }
}
$message >> $logPath
Scheduling

With the PowerShell script ready and the process automated, we can schedule this in a few different ways, e.g. via the Windows Scheduler.

Viewing results

Now that the scheduled job is up and running, you can view schema output summaries anytime you need. In the example we’ve just seen, the $outsLoc variable defined the location of schema output summaries. As a result, such summaries will be saved to $rootFolder\Outputs – in this particular case, SchemaSync\Outputs:

If any error is encountered, e.g. a project file’s extension is entered incorrectly, the corresponding error message will be displayed in the output summary.

The list of return error codes will help us to better understand the specific error we encountered.

FULL SCRIPT
#check if the Outputs folder exists
function CheckAndCreateFolder($rootFolder, [switch]$Outputs)
{
    $location = $rootFolder

    #setting up its location
    if($Outputs -eq $true)
    {
        $location += "\Outputs"
    }

    #creating the folder if it doesn't currently exist
    if(-not (Test-Path $location))
    { mkdir $location -Force:$true -Confirm:$false | Out-Null }

    return $location
}

#defining the root folder for the synchronization process
$rootFolder = "d:\temp\SchemaSync"

#defining the location of schema output summaries 
$outsLoc = CheckAndCreateFolder $rootFolder -Outputs 

#Schema Compare location (adjust the path to your installed comparison tool), the date stamp variable, and the tool's parameters
$diffLoc = "C:\Program Files\Devart\dbForge Studio for MySQL\dbforgemysql.com"
$stamp = (Get-Date -Format "MMddyyyy_HHmmss")

#output log file path
$logPath = "$outsLoc\SchemaOutput_$stamp.txt"
$Params = "/schemacompare /compfile:""D:\temp\SchemaSync\Project\ALLA1vsALLA2.scomp"" /log:""$logPath"""
$sync = " /sync"

#initiate the schema comparison and synchronization process
(Invoke-Expression ("& `"" + $diffLoc +"`" " +$Params))
$returnCode = $LASTEXITCODE

$message = ""

if ($returnCode -notin (100, 101))
{
    #an error is encountered
    $logPath = "$outsLoc\SchemaOutput_error.txt"

    $message >> $logPath
    clear-content $logPath
    $message = "`r`n $returnCode - An error is encountered"

    #output file is opened when an error is encountered
    Invoke-Item "$logPath"
}
else
{
    if ($returnCode -eq 101)
    {
        #schema changes are detected
        clear-content $logPath
        (Invoke-Expression ("& `"" + $diffLoc +"`" " +$Params+$sync))
        $returnCode = $LASTEXITCODE
    }
    if($returnCode -eq 0)
    {
        $message = "`r`n $returnCode - Schema changes were successfully synchronized"
    }
    else
    {
        #there are no schema changes
        if($returnCode -eq 100)
        {
            $message = "`r`n $returnCode - There are no schema changes. Job aborted"
        }
    }
}
$message >> $logPath

If any questions or issues arise during the process of setting this up, feel free to contact us anytime at support@devart.com

The post How to Automate the Process of SQL Server Database Schema Synchronization appeared first on {coding}Sight.


SQL Vulnerability Assessment is a feature available in the latest versions of SQL Server Management Studio (SSMS). This feature is very easy to use and it will show you all the security vulnerabilities and deviations in your SQL database. This is something you can run on your most critical databases to ensure you’re properly following strict security practices and that your client’s databases are in safe hands. In this article, we will describe the process of running these scans against your databases. With the amount of data growing each year, database security is an important aspect every DBA needs to take care of. The consequences of data breaches are severe – they may affect your future as a DBA and severely damage your firm’s reputation.

Download the latest SQL Server Management Studio (SSMS)

Before we begin with SQL vulnerability assessment, it is important to download the latest version of SQL Server Management Studio (SSMS). SSMS is free to download and no license is required. As you know, SSMS is no longer a part of the SQL Server installation, and new releases of the product come out frequently with newer and upgraded features. As SQL vulnerability assessment is an internal SSMS tool, it is recommended to always keep your SSMS version up to date. At the time of this writing, the latest SSMS general availability (GA) version is 17.9.1, which can be downloaded here. Make sure to close all versions of SSMS on your server before installing the new one. Once the installation is finished, restart your server and proceed with the following steps.

Restore a Sample database

In order to test the new built-in feature in SSMS, you could use one of the sample SQL Server databases available for download. I used the “WideWorldImporters” sample database in this case. You could download the sample backup file and restore it on your server. Refer to this link for the backup file. You can download the file as shown on the figure below:

Once the backup file is downloaded, you can restore the database by using the script below. Remember to change the folder paths according to your needs.

restore database WideWorldImporters
from disk='C:\WideWorldImporters-Full.bak'
with move 'WWI_Primary' to 'C:\SQLData\WideWorldImporters.mdf',
     move 'WWI_UserData' to 'C:\SQLData\WideWorldImporters_UserData.ndf',
	 move 'WWI_Log' to 'C:\SQLData\WideWorldImporters.ldf',
	 move 'WWI_InMemory_Data_1' to 'C:\SQLData\WideWorldImporters_InMemory_Data_1',
	 stats
Vulnerability Assessment

Once the database is restored, right-click on the database in SSMS, go to “Tasks”, then to “Vulnerability Assessment” and click “Scan for Vulnerabilities” as shown below:

In the next window, you will need to specify where the scans should be saved. Select the desired folder and click “Ok” to start the scan.

The scan will take some time to be completed – usually, it is quite quick. Once it is finished, you will be able to view the scan reports in SSMS. The details of the performed security checks, failed checks, and other information are presented in an easy-to-read format. You can see a sample vulnerability assessment report below:

You can see that a security check with the VA1219 ID has a “Medium” risk rating and has failed. You can click on that security check to view more details:

Here, you can see that the Microsoft recommendation for this setting is “True” but it is set to “False” in your database, which means TDE is disabled on your database. Once you click “Open in Query Editor Window”, the query will be opened directly in SSMS. The screenshot below displays this query being run in SSMS:

Here, you can see the query that is used in the back-end to return the security violation as either true or false. If you don’t intend to enable TDE on your database, you may just approve the baseline by clicking “Approve as Baseline” and then “Yes”. This is the message you get when you click the “Approve as Baseline” option:

Once you click “Yes”, you will see the following message displayed in the vulnerability scan:

Right now, the total number of failed checks is still 4. You need to re-run the vulnerability scan in order to view the updated results. Once you run the scan again, the TDE-related check will no longer fail and the total number of failed checks will change to 3:

Upon further investigation, you will notice that the total number of successful checks has changed to 49 – in comparison with the previous run’s 48 successful checks. The TDE check is making the difference here:
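
Independently of the assessment report, you can check the TDE state of a database yourself by querying the sys.databases catalog view. The following is just an illustrative check – not the query the assessment tool itself runs:

-- check whether Transparent Data Encryption is enabled for the database
SELECT name, is_encrypted
FROM sys.databases
WHERE name = 'WideWorldImporters'
-- is_encrypted = 1 means TDE is enabled, 0 means it is disabled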

View older vulnerability scans

You can also view the vulnerability scan reports you ran earlier on your database. This can be done by opening these reports in the scan report folder. You can get to the older reports by right-clicking on the database and going to “Tasks”, then to “Vulnerability Assessment” and, finally, to “Open Existing Scan”.

Once you click “Open Existing Scan”, you can view the scan reports that were saved earlier when you ran the scans:

On the screenshot, you can see the list of previously run scan reports. You can just click on the one you need, select the .json file and then click “Open”. Once this is done, you will be able to view the report – and notice the “Read only” attribute:

You will see that this report is for statistical purposes only, so you won’t be able to make any changes to the security checks – no matter if they passed or failed. For example, when you click on the “VA1245” security check related to the “Surface Area Reduction” category, you won’t see an option to change the baseline setting.

On the screenshot, you can see that both the “Approve as Baseline” and “Clear Baseline” options are grayed out and cannot be changed.

Server-Level Scan – Run the scan on the master database

In the previous example, we ran the vulnerability scan to view the security checks on a user database. You can also run the scan to view the security checks on the server level. To do this, just right-click on the master database and run the vulnerability assessment scan. The steps you will need to follow are the same you went through when running the scan on a user database.

Once you click “Scan for vulnerabilities”, you will get the option to save the scan in a folder of your choice. Click “Ok” to complete the scan.

Here, you can view the scan results at the level of your SQL instance. The total number of security checks is larger when compared to that on the user database. In our case, there are some checks with a “High” risk level, so it is recommended to review them. Review each of those failed security checks and perform the required actions by either accepting them as a baseline or resolving the security issues. Once this is done, re-run the vulnerability assessment scans to confirm that all security issues are addressed and the corresponding security checks have passed.

Vulnerability Assessment Scan – MSDB database

In the previous example, we ran the assessment on the master database. In this example, we will run it on the msdb database. Just follow the same steps as before, and you will get the following window:

You can review the scan report, perform the necessary actions and run the scan again to ensure that all security risks are addressed.

Vulnerability Assessment Scan – Model database

Below is a sample scan report from the model database. As any new database you create on the SQL instance will be structured similarly to this database, it is important to address its security flags.

You will see that the scan reports yield different results when run on different system databases. The information may overlap, but it is still a good practice to run the vulnerability assessment scans on each individual database to get a general picture of the security risks that may exist at the instance level. This will help us ensure that all security requirements are met and that high security standards are maintained on the SQL Server instances and databases we manage.

Vulnerability Scan on the Azure SQL Database

The same features we described earlier are also available for Azure SQL databases. However, this article is mainly focused on running the vulnerability assessment scans against on-premises SQL databases, so we won’t get too much into the details of scanning Azure SQL databases. The scan reports from Azure SQL databases look similar to any other scan reports. However, one benefit of the cloud version is that the vulnerability assessment scans can be scheduled to occur periodically, with the reports being sent to a chosen email address. This makes the tool much easier to use by automating a process we would otherwise need to perform manually. For on-premises SQL databases, we still need to run the assessment scans manually and review the reports, as there seems to be no way to schedule and automate the scans in SSMS.

Load on the systems while running the vulnerability scan

This vulnerability assessment tool is really lightweight – you may have noticed it when running the scans. Even for large databases, it doesn’t take long to create a report. Also, no changes are made to the databases we run these scans on – basically, the vulnerability assessment scan validates a database against the best practices recommended by Microsoft and highlights any discrepancies. Overall, this is a great free tool which you can run periodically on the most critical databases to make sure there are no security issues. You can also develop your own custom security baseline and keep track of how well your databases and SQL Server instances adhere to it.

Conclusion

In this article, we learned:

  • Why it’s important to keep your SSMS version up-to-date
  • How to easily test new features by restoring a sample SQL Server database from a backup
  • How to run the vulnerability assessment scans on user and system databases
  • How to open previous scan reports and review archived scan reports
  • How lightweight and robust the SQL Vulnerability Assessment feature is

The post SQL Vulnerability Assessment appeared first on {coding}Sight.

Introduction

This tutorial includes information about SQL (DDL, DML) that I have gathered during my professional life. This is the minimum you need to know while working with databases. If there is a need to use complex SQL constructions, I usually refer to the MSDN library, which can be easily found on the Internet. To my mind, it is very difficult to keep everything in your head and, by the way, there is no need for this. I recommend that you know all the main constructions used in most relational databases such as Oracle, MySQL, and Firebird. Still, they may differ in data types.

For example, to create objects (tables, constraints, indexes, etc.), you may simply use scripts in any integrated development environment (IDE) for working with databases, and there is no need to study the visual tools of a particular database type (MS SQL, Oracle, MySQL, Firebird, etc.). This is convenient because you can see the whole text, and you do not need to look through numerous tabs to create, for example, an index or a constraint. If you are constantly working with databases, creating, modifying, and especially rebuilding an object using scripts is much faster than in a visual mode. Besides, in my opinion, in the script mode (with due precision), it is easier to specify and control the rules for naming objects. In addition, it is convenient to use scripts when you need to transfer database changes from a test database to a production database.

SQL is divided into several parts. In my article, I will review the most important ones:

DDL – Data Definition Language

DML – Data Manipulation Language, which includes the following constructions:

  • SELECT – data selection
  • INSERT – new data insertion
  • UPDATE – data update
  • DELETE – data deletion
  • MERGE – data merging

I will explain all the constructions using study cases. In addition, I think that a programming language, and especially SQL, should be studied in practice for better understanding.

This is a step-by-step tutorial, where you need to work through the examples while reading it. However, if you need to know a command in detail, then search the Internet, for example, MSDN.

When creating this tutorial, I have used the MS SQL Server database, version 2014, and MS SQL Server Management Studio (SSMS) to execute scripts.

Briefly about MS SQL Server Management Studio (SSMS)

SQL Server Management Studio (SSMS) is the Microsoft SQL Server utility for configuring, managing, and administering database components. It includes a script editor and a graphical tool for working with server objects and settings. The main tool of SQL Server Management Studio is Object Explorer, which allows a user to view, retrieve, and manage server objects. This text is partially taken from Wikipedia.

To create a new script editor, use the New Query button:

To switch from the current database, you can use the drop-down menu:

To execute a particular command or set of commands, highlight it and press the Execute button or F5. If there is only one command in the editor or you need to execute all commands, then do not highlight anything.

After you have executed scripts that create objects (tables, columns, indexes), select the corresponding object (for example, Tables or Columns) and then click Refresh on the shortcut menu to see the changes.

Actually, this is all you need to know to execute the examples provided herein.

Theory

A relational database is a set of tables linked together. In general, a database is a file that stores structured data.

Database Management System (DBMS) is a set of tools to work with particular database types (MS SQL, Oracle, MySQL, Firebird, etc.).

Note: Just as in our daily lives we say “Oracle DB” or simply “Oracle” while actually meaning “Oracle DBMS”, in this tutorial I will use the term “database” in the same way.

A table is a set of columns. Very often, you will hear other names for these concepts – fields (for columns) and records (for rows) – which mean the same things.

A table is the main object of the relational database. All the data is stored row by row in table columns.

For each table as well as for its columns, you need to specify a name, according to which you can find a required item.

The name of an object – a table, column, or index – may have a maximum length of 128 symbols.

Note: In Oracle databases, an object name may have a maximum length of 30 symbols. Thus, in a particular database, it is necessary to create custom rules for object names.

SQL is a language that allows executing queries in databases via DBMS. In a particular DBMS, an SQL language may have its own dialect.

DDL and DML – the SQL sublanguages:

  • The DDL language serves for creating and modifying the database structure (including the creation and deletion of tables and relationships);
  • The DML language allows manipulating the data in tables, i.e. their rows. It serves for selecting data from tables, adding new data, as well as updating and deleting existing data.

It is possible to use two types of comments in SQL (single-line and delimited):

-- single-line comment

and

/* delimited comment */

That’s all as to the theory.

DDL – Data Definition Language

Let’s consider a sample table with data about employees represented in a manner familiar to a person who is not a programmer.

Employee ID | Full Name | Birthdate  | E-mail         | Position          | Department
1000        | John      | 19.02.1955 | john@test.tt   | CEO               | Administration
1001        | Daniel    | 03.12.1983 | daniel@test.tt | programmer        | IT
1002        | Mike      | 07.06.1976 | mike@test.tt   | Accountant        | Accounts dept
1003        | Jordan    | 17.04.1982 | jordan@test.tt | Senior programmer | IT

In this case, the columns have the following titles: Employee ID, Full Name, Birthdate, E-mail, Position, and Department.

We can describe each column of this table by its data type:

  • Employee ID – integer
  • Full Name – string
  • Birthdate – date
  • E-mail – string
  • Position – string
  • Department – string

A column type is a property that specifies what data type each column can store.

To start with, you need to remember the main data types used in MS SQL:

  • Variable-length string – varchar(N) and nvarchar(N). Using the N number, we can specify the maximum possible string length for a particular column. For example, if we want to say that the value of the Full Name column can contain 30 symbols (at the most), then it is necessary to specify the type of nvarchar(30). The difference between varchar and nvarchar is that varchar allows storing strings in the ASCII format, while nvarchar stores strings in the Unicode format, where each symbol takes 2 bytes. I recommend using the varchar type only if you are sure that you will not need to store Unicode symbols in the given field. For example, you can use varchar to store email contacts.
  • Fixed-length string – char(N) and nchar(N). This type differs from the variable-length string in the following: if the string length is less than N symbols, then spaces are always added on the right to pad it to the N length. Thus, in a database, it takes exactly N symbols, where one symbol takes 1 byte for char and 2 bytes for nchar. In my practice, this type is not used much. Still, if anyone uses it, then usually it has the char(1) format, i.e. when a field is defined by 1 symbol.
  • Integer – int. This type allows us to use only integers (both positive and negative) in a column. Note: the number range for this type is from -2 147 483 648 to 2 147 483 647. Usually, it is the main type used to define identifiers.
  • Floating-point number – float. Numbers with a decimal point.
  • Date – date. It is used to store only a date (day, month, and year) in a column. For example, 02/15/2014. This type can be used for the following columns: receipt date, birthdate, etc., when you need to specify only a date or when the time is not important and can be dropped.
  • Time – time. You can use this type if it is necessary to store only time: hours, minutes, seconds, and milliseconds. For example, 17:38:31.3231603 or a flight departure time.
  • Date and time – datetime. This type allows storing both date and time. For example, 02/15/2014 17:38:31.323.
  • Indicator – bit. You can use this type to store values such as ‘Yes’/’No’, where ‘Yes’ is 1, and ‘No’ is 0.

In addition, a field does not have to contain a value unless this is explicitly forbidden; in that case, the field stores NULL.

To execute examples, we will create a test database named ‘Test’.

To create a simple database without any additional properties, run the following command:

CREATE DATABASE Test

To delete a database, execute this command:

DROP DATABASE Test

To switch to our database, use the command:

USE Test

Alternatively, you can select the Test database from the drop-down menu in the SSMS menu area.

Now, we can create a table in our database using column names that contain special characters (such as the hyphen in E-mail):

CREATE TABLE [Employees]( 
[EmployeeID] int, 
[FullName] nvarchar(30), 
[Birthdate] date, 
[E-mail] nvarchar(30), 
[Position] nvarchar(30), 
[Department] nvarchar(30) 
)

In this case, we need to wrap names in square brackets […].

Still, it is better to specify all object names in Latin and not to use spaces in the names. In this case, every word starts with a capital letter. For example, for the “EmployeeID” field, we could specify the PersonnelNumber name. You can also use numbers in the name, for example, PhoneNumber1.

Note: In some DBMSs, it is more convenient to use the following name format «PHONE_NUMBER». For example, you can see this format in ORACLE databases. In addition, the field name should not coincide with the keywords used in DBMS.

For this reason, you can forget about the square brackets syntax and can delete the Employees table:

DROP TABLE [Employees]

For example, you can name the table with employees as “Employees” and set the following names for its fields:

  • ID
  • Name
  • Birthday
  • Email
  • Position
  • Department

Very often, we use ‘ID’ for the identifier field.

Now, let’s create a table:

CREATE TABLE Employees( 
ID int, 
Name nvarchar(30), 
Birthday date, 
Email nvarchar(30), 
Position nvarchar(30), 
Department nvarchar(30) 
)

To set the mandatory columns, you can use the NOT NULL option.

For the current table, you can redefine the fields using the following commands:

-- ID field update

ALTER TABLE Employees ALTER COLUMN ID int NOT NULL

-- Name field update

ALTER TABLE Employees ALTER COLUMN Name nvarchar(30) NOT NULL

Note: The general concept of the SQL language for most DBMSs is the same (from my own experience). The difference among DDLs in different DBMSs is mainly in the data types (they can differ not only by their names but also by their specific implementation). In addition, the specific SQL implementation (commands) are the same, but there may be slight differences in the dialect. Knowing SQL basics, you can easily switch from one DBMS to another. In this case, you will only need to understand the specifics of implementing commands in a new DBMS.

Compare the same commands in the ORACLE DBMS:

-- create table 
CREATE TABLE Employees( 
ID int, -- In ORACLE the int type is a value for number(38) 
Name nvarchar2(30), -- in ORACLE nvarchar2 is identical to nvarchar in MS SQL
Birthday date, 
Email nvarchar2(30), 
Position nvarchar2(30), 
Department nvarchar2(30) 
); 

-- ID and Name field update (here we use MODIFY(…) instead of ALTER COLUMN)
ALTER TABLE Employees MODIFY(ID int NOT NULL,Name nvarchar2(30) NOT NULL); 
-- add PK (in this case the construction is the same as in the MS SQL) 
ALTER TABLE Employees ADD CONSTRAINT PK_Employees PRIMARY KEY(ID);

ORACLE differs in implementing the varchar2 type. Its format depends on the DB settings, and you can store text, for example, in UTF-8. In addition, you can specify the field length both in bytes and in symbols. To do this, you add the BYTE or CHAR keyword after the length value. For example:

NAME varchar2(30 BYTE) -- field capacity equals 30 bytes 
NAME varchar2(30 CHAR) -- field capacity equals 30 symbols

The value (BYTE or CHAR) to be used by default when you merely indicate varchar2(30) in ORACLE will depend on the DB settings. Often, you can be easily confused. Thus, I recommend explicitly specifying CHAR when you use the varchar2 type (for example, with UTF-8) in ORACLE (since it is more convenient to read the string length in symbols).

However, in this case, if there is any data in the table, then to successfully execute commands, it is necessary to fill in the ID and Name fields in all the table rows.

I will show it in a particular example.

Let’s insert data in the ID, Position, and Department fields using the following script:

INSERT Employees(ID,Position,Department) VALUES 
(1000,N'CEO',N'Administration'), 
(1001,N'Programmer',N'IT'), 
(1002,N'Accountant',N'Accounts dept'), 
(1003,N'Senior Programmer',N'IT')

In this case, the INSERT command also returns an error. This happens because we have not specified the value for the mandatory field Name.

If there were some data in the original table, then the “ALTER TABLE Employees ALTER COLUMN ID int NOT NULL” command would work, while the “ALTER TABLE Employees ALTER COLUMN Name int NOT NULL” command would return an error that the Name field has NULL values.

Let’s add values in the Name field:

INSERT Employees(ID,Position,Department,Name) VALUES (1000,N'CEO',N'Administration',N'John'), 
(1001,N'Programmer',N'IT',N'Daniel'), 
(1002,N'Accountant',N'Accounts dept',N'Mike'), 
(1003,N'Senior Programmer',N'IT',N'Jordan')

In addition, you can use NOT NULL when creating a new table with the CREATE TABLE statement.

At first, let’s delete a table:

DROP TABLE Employees

Now, we are going to create a table with the ID and Name mandatory fields:

CREATE TABLE Employees( 
ID int NOT NULL, 
Name nvarchar(30) NOT NULL, 
Birthday date, 
Email nvarchar(30), 
Position nvarchar(30), 
Department nvarchar(30) 
)

Also, you can specify NULL after a column name implying that NULL values are allowed. This is not obligatory, as this option is set by default.

If you need to make the current column non-mandatory, use the following syntax:

ALTER TABLE Employees ALTER COLUMN Name nvarchar(30) NULL

Alternatively, you can use this command:

ALTER TABLE Employees ALTER COLUMN Name nvarchar(30)

In addition, with this command, we can either modify the field type to another compatible one or change its length. For example, let’s extend the Name field to 50 symbols:

ALTER TABLE Employees ALTER COLUMN Name nvarchar(50)

Primary key

When creating a table, you need to specify a column or a set of columns whose values are unique for each row. Using this unique value, you can identify a record. This value is called the primary key. For our Employees table, the ID column (which contains «an employee’s personal number» – a value that is unique for each employee and cannot be duplicated) can serve as the primary key.

You can use the following command to create a primary key for the table:

ALTER TABLE Employees ADD CONSTRAINT PK_Employees PRIMARY KEY(ID)

‘PK_Employees’ is a constraint name defining the primary key. Usually, the name of a primary key consists of the ‘PK_’ prefix and the table name.

If the primary key contains several fields, then you need to list these fields in brackets separated by a comma:

ALTER TABLE table_name ADD CONSTRAINT constraint_name PRIMARY KEY(field1,field2,…)

Keep in mind that in MS SQL, all fields of the primary key should be NOT NULL.

Besides, you can define a primary key when creating a table. Let’s delete the table:

DROP TABLE Employees

Then, create a table using the following syntax:

CREATE TABLE Employees( 
ID int NOT NULL, 
Name nvarchar(30) NOT NULL, 
Birthday date, 
Email nvarchar(30), 
Position nvarchar(30), 
Department nvarchar(30), 
CONSTRAINT PK_Employees PRIMARY KEY(ID) -- describe the PK after all the fields as a constraint 
)

Add data to the table:

INSERT Employees(ID,Position,Department,Name) VALUES (1000,N'CEO',N'Administration',N'John'), 
(1001,N'Programmer',N'IT',N'Daniel'), 
(1002,N'Accountant',N'Accounts dept',N'Mike'), 
(1003,N'Senior programmer',N'IT',N'Jordan')

Actually, you do not need to specify the constraint name. In this case, a system name will be assigned. For example, «PK__Employee__3214EC278DA42077»:

CREATE TABLE Employees( 
ID int NOT NULL, 
Name nvarchar(30) NOT NULL, 
Birthday date, 
Email nvarchar(30), 
Position nvarchar(30), 
Department nvarchar(30), 
PRIMARY KEY(ID) )

or

CREATE TABLE Employees( 
ID int NOT NULL PRIMARY KEY, 
Name nvarchar(30) NOT NULL, 
Birthday date, 
Email nvarchar(30), 
Position nvarchar(30), 
Department nvarchar(30) 
)

Personally, I would recommend explicitly specifying the constraint name for permanent tables, as it is easier to work with or delete an explicitly defined and clear value in the future. For example:

ALTER TABLE Employees DROP CONSTRAINT PK_Employees

Still, it is more convenient to apply this short syntax, without constraint names, when creating temporary database tables (the name of a temporary table begins with # or ##).

Summary:

We have already analyzed the following commands:

  • CREATE TABLE table_name (listing of fields and their types, as well as constraints) – serves for creating a new table in the current database;
  • DROP TABLE table_name – serves for deleting a table from the current database;
  • ALTER TABLE table_name ALTER COLUMN column_name … – serves for updating the column type or for modifying its settings (for example, when you need to set NULL or NOT NULL);
  • ALTER TABLE table_name ADD CONSTRAINT constraint_name PRIMARY KEY(field1, field2,…) – used to add a primary key to the current table;
  • ALTER TABLE table_name DROP CONSTRAINT constraint_name – used to delete a constraint from the table.

Temporary tables

Abstract from MSDN. There are two types of temporary tables in MS SQL Server: local (#) and global (##). Local temporary tables are visible only to their creators and only until the session that created them is disconnected from the instance of SQL Server; they are automatically deleted after the user disconnects. Global temporary tables are visible to all users in all sessions after they are created, and they are deleted once all users referencing these tables are disconnected from the instance of SQL Server.

Temporary tables are created in the tempdb system database, which means that we do not flood the main database. Additionally, you can delete them using the DROP TABLE command. Very often, local (#) temporary tables are used.

To create a temporary table, you can use the CREATE TABLE command:

CREATE TABLE #Temp( ID int, Name nvarchar(30) )

You can delete the temporary table with the DROP TABLE command:

DROP TABLE #Temp

In addition, you can create a temporary table and fill it in with the data using the SELECT … INTO syntax:

SELECT ID,Name INTO #Temp FROM Employees

Note: In different DBMSs, the implementation of temporary tables can vary. For example, in the ORACLE and Firebird DBMSs, the structure of temporary tables should be defined in advance with the CREATE GLOBAL TEMPORARY TABLE command, which also specifies how the data is stored. After this, a user sees it among common tables and works with it as with a conventional table.
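
For illustration, an ORACLE global temporary table might be declared as follows (a rough sketch with made-up names; the ON COMMIT clause controls how long the rows are kept):

CREATE GLOBAL TEMPORARY TABLE TempEmployees( 
ID int, 
Name nvarchar2(30) 
) ON COMMIT DELETE ROWS; -- rows are removed at commit; ON COMMIT PRESERVE ROWS keeps them until the session ends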

Database normalization: splitting into subtables (reference tables) and defining table relationships

Our current Employees table has a drawback: a user can type any text in the Position and Department fields, which may lead to mistakes – for one employee, a user can specify “IT” as the department, while for another employee, “IT department”. As a result, it will be unclear what the user meant: whether these employees work for the same department, or whether there is a misspelling and there are 2 different departments. Moreover, in this case, we will not be able to correctly group the data for a report where we need to show the number of employees for each department.

Another drawback is the storage volume and data duplication: you need to store the full name of the department for each employee, which requires database space for each symbol of the department name.

The third disadvantage is the complexity of updating the data when you need to change the name of a position – for example, from Programmer to Junior Programmer. In this case, you would have to update every table row where the Position is “Programmer”.

To avoid such situations, it is recommended to use database normalization – splitting into subtables – reference tables.

Let’s create 2 reference tables “Positions” and “Departments”:
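
A minimal sketch of what these two reference tables might look like (the exact definitions may differ from the original article), with the Employees table then referencing them by ID instead of storing the names:

-- reference table for positions
CREATE TABLE Positions( 
ID int NOT NULL, 
Name nvarchar(30) NOT NULL, 
CONSTRAINT PK_Positions PRIMARY KEY(ID) 
)

-- reference table for departments
CREATE TABLE Departments( 
ID int NOT NULL, 
Name nvarchar(30) NOT NULL, 
CONSTRAINT PK_Departments PRIMARY KEY(ID) 
)

-- the Employees table then stores only the IDs of the position and department
ALTER TABLE Employees ADD PositionID int, DepartmentID int
ALTER TABLE Employees ADD CONSTRAINT FK_Employees_PositionID FOREIGN KEY(PositionID) REFERENCES Positions(ID)
ALTER TABLE Employees ADD CONSTRAINT FK_Employees_DepartmentID FOREIGN KEY(DepartmentID) REFERENCES Departments(ID)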


With the recent data protection laws and the implementation of GDPR last year, it is imperative for us to know the type of data our databases are storing. This will help us classify data based on the regulations and help us meet the data privacy standards. It is quite difficult for DBAs to know the type of data within a database. However, there is a new built-in tool within SQL Server Management Studio (SSMS) that can help us to easily discover the data within the database. With this tool, we will be able to classify the data and work on reporting any sensitive data.

General Data Protection Regulation aka GDPR

GDPR was first adopted in 2016 and came into enforcement in 2018. Basically, it is a new framework for data protection in Europe. However, it affects not only organizations located within the EU but anything remotely related to it. It is important for organizations to know the implications of not adhering to these regulations, as the consequences can be damaging. To get an idea of the consequences, refer to paragraph 5 here: https://gdpr-info.eu/art-83-gdpr/, “Infringements of the following provisions shall, in accordance with paragraph 2, be subject to administrative fines up to 20 000 000 EUR, or in the case of an undertaking, up to 4 % of the total worldwide annual turnover of the preceding financial year, whichever is higher”. This is one of the reasons you would notice a number of companies scrambling to meet the regulatory requirements last year. Even major companies, including Microsoft, have implemented GDPR for all their customers worldwide irrespective of whether they are located within the EU or outside. As DBAs or data professionals, we need to be aware of the consequences of our actions and understand the importance of the data that is stored within our databases.

Download the latest SQL Server Management Studio (SSMS)

Before we start the demo for data discovery and classification, it is important to download the latest version of SQL Server Management Studio (SSMS). As you know, SSMS is no longer part of the SQL Server installation, and new releases of the product come out frequently with newer and upgraded features. As data discovery and classification is a built-in tool within SSMS, it is recommended to always have the latest version of SSMS. At the time of this writing, the latest SSMS version is 17.9.1 and can be downloaded here. Make sure to close all versions of SSMS on your server before installing, as the installation stops when SSMS is running. Once installed, restart your server and proceed with the next steps.

Restore a Sample database

To demonstrate the new built-in data classification feature in SSMS, you can use the sample SQL Server database that is available for download. I used the “WideWorldImporters” sample database in this demo. You can download a sample backup file and restore it on your server. Refer to this link for the backup file. You can download the selected file as shown below.

Once the backup file is downloaded, restore the database using it. You can use this sample script for performing the restoration. You may need to update the folder paths as per the location on your server.

restore database WideWorldImporters
from disk='C:\WideWorldImporters-Full.bak'
with move 'WWI_Primary' to 'C:\SQLData\WideWorldImporters.mdf',
     move 'WWI_UserData' to 'C:\SQLData\WideWorldImporters_UserData.ndf',
	 move 'WWI_Log' to 'C:\SQLData\WideWorldImporters.ldf',
	 move 'WWI_InMemory_Data_1' to 'C:\SQLData\WideWorldImporters_InMemory_Data_1',
	 stats

Classify Data

Once the database is restored, right-click the database in SSMS, go to Tasks and click Classify Data, as shown below.

Once you click Classify Data,  a dialog will open where you can view the classification recommendations.

Once you click Click to view, you will be able to view the detailed list of columns that are recommended for data classification. On the screenshot below, you can see a list of 92 columns with classification recommendations. The tool was able to perform the scan for you and provide recommendations with no user input.

Select several columns in the recommended list

Out of the 92 columns that were recommended, we can first select several columns in the list and view the changes that occur. In our demo, we will select the EmailAddress column in the People table. On the screenshot below, you can see that the EmailAddress column is selected on the left-hand side. Once that is done, you will see the Accept selected recommendations option. Otherwise, this option is greyed out.

In the sensitivity label column, you can also see whether the column needs to adhere to the GDPR or not. Once the column is selected, you will see that the recommendation needs to be saved.

Click Save, and you will see the confirmation that the changes have been updated successfully.

Now, you can click View Report to generate a report for the SQL Data Classification for the WideWorldImporters database. The report is shown below.

In the report, you can see the total count of tables, columns, and the data distribution. Under the data distribution, you can see whether the data is labeled as GDPR or not. Using this tool, you get a better understanding of the importance and the kind of data being stored in your database. To make things easier, you can also export this report as an Excel, PDF, or Word file and email it to your business manager or CIO for review. Just right-click anywhere in the report and use any of the options as shown below.

Select all columns in the recommended list

In the previous example, you applied only one column from the list of recommended columns. However, you have the option to select all the columns in the recommended list and make your database compliant in terms of data security policies. Just select the Checkbox option thereby selecting all the columns in the recommended list, as shown on the screenshot below.

With this option, all the columns get selected and are ready to be applied. In order to make the changes to your database, click Accept selected recommendations. On the screenshot, you can see 91 columns with classification recommendations. With just a click of a button, you can apply the recommendations for all those columns. Just like you did in the previous example for a single column, go ahead and click Save to complete the process.

Once you click Save, the classification changes get updated successfully. Now, click View Report to view the detailed data classification report.

From this report, you can see that all the classified columns belonged to the tables within 4 schemas. To get details of the table names, click the Table column to view. The report has a pie chart with details of GDPR sensitivity. There is also another useful pie chart for the information type distribution.

Add classification option

You may have noticed that a number of tables were labeled as containing sensitive data in the generated data classification report. These labels came from the recommendations produced by the data classification tool. However, you may want to review the report further and add your own data classification based on your understanding of the data. This is a manual process, where we select the columns on our own. From the data classification report below, you can see that only one column, TaxRate in the OrderLines table, is classified as containing confidential data.

However, you realize that other columns in this table need to be in this report too. You can review the OrderLines table and plan to include the Description column in the data classification report.

In this case, right-click the database and perform the same operations as you did earlier for classifying data. You will see the Add Classification option.

Once done, you can select the table and column as shown below. Select the options that are available in the drop-down menu to classify the data. On the screenshot below, you can see that the Description column in the OrderLines table is marked as Confidential. Add the data classification by clicking the Add button.

Once done, click Save to update the changes.

In order to confirm that the changes have been applied successfully, you can re-run the data classification report as you did in the previous steps. After that, you can verify that the column was added successfully.

In the report, you can confirm that the Description column was indeed added successfully and is labeled as confidential. With this approach, you can label other columns of your choice based on certain requirements or directions from the management.

Extended properties in columns

When you check the columns further, you can get details of the extended properties catalog view related to the columns labeled for data classification. For example, when you right-click the Description column in the Sales.OrderLines table, you will be able to view the details in the Extended Properties section for that column.

The sys_information_type_name and sys_sensitivity_label_name entries are extended properties of the classified column; the metadata for these data classifications is stored in the sys.extended_properties catalog view.
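
If you prefer T-SQL over the Properties dialog, you can query this catalog view directly. The sketch below is an assumption about a convenient join shape rather than an official query; it lists all columns in the current database that carry these two classification properties:

SELECT s.name AS SchemaName,
       t.name AS TableName,
       c.name AS ColumnName,
       ep.name AS PropertyName,
       ep.value AS PropertyValue
FROM sys.extended_properties ep
JOIN sys.tables t ON ep.major_id = t.object_id
JOIN sys.schemas s ON t.schema_id = s.schema_id
JOIN sys.columns c ON ep.major_id = c.object_id AND ep.minor_id = c.column_id
WHERE ep.name IN ('sys_information_type_name', 'sys_sensitivity_label_name')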

Additional Comments

With this new tool, you will notice that the Microsoft product team has made this task of data classification so simple and easy-to-use. To be honest, there is no real excuse for any lack of understanding of the underlying data within your database. Considering the importance of user data protection and the concerns of various global agencies regarding compromising user data by well-known organizations, it is highly important for DBAs and data professionals to take data protection seriously and keep up to speed with the latest technology offering. Considering that this tool is free and easy-to-use, there is no real reason not to use it across your production systems.

Conclusion
  • In this article, you read about the use of a new cool feature within SQL Server Management Studio (SSMS)
  • You briefly learned a number of important items regarding data protection
  • You understood the importance of keeping your SSMS version up to date with the latest versions so that you don’t miss out any new features or upgrades
  • You saw how you can easily test out new features by restoring a sample SQL Server database
  • You learned how to classify the data using SSMS
  • You learned how to generate reports to view details of the labeled columns
  • You learned how to manually add your own data classifications based on your understanding of the underlying data
  • You also know that all of the data classifications can be done based on this simple tool and you can generate amazing reports and get a better understanding of the data and the database you are managing.

The post SQL Data Discovery and Classification appeared first on {coding}Sight.


This article is focused on the T-SQL Not Equal comparison operator (<>) and its uses in basic to slightly complicated SQL scripting tasks.

The article also highlights the importance of understanding the correct use of Not Equal comparative operator with expressions.

The Not Equal comparative operator is a very common operator used in T-SQL, however, it is important to understand how to use it effectively in different scenarios.

Basics of Not Equal <> Comparison Operator

Let us first get familiar with the Not Equal T-SQL operator in the light of Microsoft documentation.

Microsoft Definition

A Not Equal To <> is a comparison operator which compares two expressions and returns a Boolean type result.

Boolean Type Result

Boolean type result is the type of result which returns True or False.

In the context of T-SQL under normal circumstances, Boolean type returns one of the three values:

  1. True
  2. False
  3. NULL
Comparison Operator

A comparison operator, in simple words, compares two expressions and, based on the comparison, tells us whether they are the same or not.

Please note that Not Equal comparison operator (<>) is a binary operator which means an operator which compares two expressions.

Expression

According to Microsoft documentation, an expression is a combination of symbols and operators that the SQL Server Database Engine evaluates to obtain a single data value.

Simple expressions can be a single constant, variable, column, or scalar function.

In other words, the simplest form of expression is a constant (fixed number) such as 1.

Example – Operator and Expressions

The simplest example of an expression with operators is an arithmetic operation such as 1+2.

Would you like to guess which part is the operator and which parts are the expressions in the above example?

Please refer to Not Equal comparison operator (<>) definition which states, “Not Equal comparison operator (<>) compares two expressions…” so + (addition) is the operator in this example and 1 (constant) and 2 (constant) are expressions.

Example – Comparison Operator and Expressions

Considering the example of arithmetic operation 1+2, simply replacing the arithmetic operator + with Not Equal <> comparison operator, it becomes a comparison operation in T-SQL.

For example, 1<>2 is a comparison operation with comparison operator <> between two expressions 1 and 2.
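
If you want to see SQL Server evaluate such comparisons, you can wrap them in a CASE expression. This is only an illustrative query; note that a comparison involving NULL evaluates to neither True nor False:

SELECT
  CASE WHEN 1 <> 2 THEN 'True' ELSE 'False or Unknown' END AS Result_1_NotEqual_2,
  CASE WHEN 1 <> NULL THEN 'True' ELSE 'False or Unknown' END AS Result_1_NotEqual_NULL
-- 1 <> 2 evaluates to True; 1 <> NULL evaluates to Unknown (NULL), so the ELSE branch is returned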

Not Equal Operator “<>” vs “!=”

There is another way to express the Not Equal comparison operator, and that is by using an exclamation mark followed by an equal sign (!=).

Please note that both “<>” and “!=” represent Not Equal comparison operator, except that the <> sign complies with the ISO standard, whereas the Not Equal comparison operator != does not comply with the ISO standard.

Compatibility

According to Microsoft documentation, the Not Equal comparison operator (<>) is compatible with the following SQL database versions:

  1. SQL Server 2008 and later versions
  2. Azure SQL Database
  3. Azure SQL Data Warehouse
  4. Parallel Data Warehouse
Syntax

expression (constant, variable, column etc.) <> expression (constant, variable, column etc.)

Pre-requisites

This article is basically written for beginners and there are some pre-requisites before basic and complex uses of Not Equal comparison operator (<>) are discussed.

T-SQL Familiarity

This article assumes that the readers have basic knowledge of  T-SQL and are capable of writing and running basic SQL scripts.

Setup Sample Database (SQLDevBlogV2)

This article requires the following sample database to run examples of basic and advanced uses of the Not Equal comparison operator (<>).

The sample database consists of three tables:

  1. Article
  2. Author
  3. Category

Please set up the sample database named SQLDevBlogV2 by writing the following code:

-- Create sample database (SQLDevBlog)
CREATE DATABASE SQLDevBlogV2;
GO


USE SQLDevBlogV2;

-- (1) Create Author table in the sample database
CREATE TABLE Author (
  AuthorId INT PRIMARY KEY IDENTITY (1, 1)
 ,Name VARCHAR(40)
 ,RegistrationDate DATETIME2
 ,Notes VARCHAR(400)
)

-- (2) Create Article Category table in the sample database
CREATE TABLE Category (
  CategoryId INT PRIMARY KEY IDENTITY (1, 1)
 ,Name VARCHAR(50)
 ,Notes VARCHAR(400)
)

-- (3) Create Article table in the sample database
CREATE TABLE Article (
  ArticleId INT PRIMARY KEY IDENTITY (1, 1)
 ,CategoryId INT
 ,AuthorId INT
 ,Title VARCHAR(150)
 ,Published DATETIME2
 ,Notes VARCHAR(400)  
)

-- Adding foreign keys for author and article category
ALTER TABLE Article ADD CONSTRAINT FK_Category_CategoryId FOREIGN KEY (CategoryId) REFERENCES Category (CategoryId)
ALTER TABLE Article ADD CONSTRAINT FK_Author_AuthorId FOREIGN KEY (AuthorId) REFERENCES Author (AuthorId)

GO


-- (5) Populating Author table
INSERT INTO Author (Name, RegistrationDate, Notes)
  VALUES ('Sam', '2017-01-01', 'Database Analyst'),
  ('Asif', '2017-01-02', 'Database and Business Intelligence Developer'),
  ('Sadaf', '2018-01-01', 'Database Analyst Programmer')

-- (6) Populating Category table
INSERT INTO Category (Name, Notes)
  VALUES ('Development', 'Articles about database development'),
  ('Testing', 'Database testing related articles'),
  ('DLM', 'Database lifecycle management')

-- (7) Populating Article 
INSERT INTO Article (CategoryId, AuthorId, Title, Published, Notes)
  VALUES (1, 1, 'Fundamentals of SQL Database Development', '02-01-2018', ''),
  (1, 2, 'Advanced Database Development', '02-01-2018', ''),
  (2, 3, 'All About Database Testing', '03-01-2018', '');
GO

Basic Uses of Not Equal Operator (<>)

In this section, we are going to explore some basic uses of the Not Equal comparison operator (<>) in T-SQL.

Please remember that the upcoming examples run against the SQLDevBlogV2 sample database, which must be created (as described in the prerequisites) if you would like to run the example scripts.

Authors list excluding the first author

We have the Author table in the sample database which contains a list of all the registered authors.

Let us first view all the authors as follows:

-- View all authors
SELECT [AuthorId], [Name], [RegistrationDate] FROM dbo.Author

Now, if we want to view all the authors except the first one, we are going to use the Not Equal comparison operator (<>) as follows:

-- View all authors excluding first one
SELECT [AuthorId], [Name], [RegistrationDate]
FROM Author where AuthorId<>1

Here, the Not Equal comparison operator (<>) is used to filter the results based on the desired criteria.

Please note that in the WHERE clause of the SQL script (where AuthorId<>1), AuthorId is a column and therefore qualifies as an expression, while 1 is a constant, which is also an expression, so the usage matches the syntax (expression <> expression).

Authors list not registered in 2017

Let us say we want to see the list of authors who were not registered in 2017.

This can be achieved by using the Not Equal comparison operator (<>) as follows:

--Authors list not registered in 2017.sql
SELECT [AuthorId], [Name], [RegistrationDate] FROM Author where Year(RegistrationDate)<>2017

Keeping the expression <> expression syntax in mind, this time the expression on the left, Year(RegistrationDate), is a function that returns the year (a number), and it is compared with the expression on the right, which is 2017 (a number).

Please remember that, as mentioned earlier, the simplest form of an expression is a number, and this example shows that in action.

Articles list excluding testing category

If you want to see the list of all the articles stored in the sample database except those in the testing category, you will need the Not Equal comparison operator (<>).

If you refer to the sample database diagram at the beginning of this article, you can see that the Article table is connected to Category through a foreign key relationship. So, first of all, we need to see the categories with their articles by using an INNER JOIN as follows:

-- Articles with categories list
SELECT art.ArticleId, cat.Name AS Category, art.Title,art.Published FROM  Article art INNER JOIN Category cat
on art.CategoryId=cat.CategoryId

Now, we need to exclude the Testing category to see all the articles except the testing articles using the Not Equal comparison operator (<>) as follows:

-- All articles with categories excluding testing category
SELECT art.ArticleId, art.Title,cat.Name AS Category, art.Published FROM  Article art INNER JOIN Category cat
on art.CategoryId=cat.CategoryId
WHERE cat.Name<>'Testing'

Running the script produces the following output:

Complex Uses of Not Equal comparison operator (<>)

We are now going to look at some slightly complex examples, which involve the use of the Not Equal comparison operator (<>).

Authors list excluding current year authors

A slightly complex scenario is when you are asked to get all the authors except those who were registered in the current year.

Since the current year is dynamic and changes from year to year, the script becomes slightly more complex, as we cannot simply use a fixed year number with the comparison operator.

The solution is to get the current year from the current date and then get the list of all the authors who were not registered in the current year.

This is achieved by the following code:

-- Authors list excluding current year authors
SELECT a.AuthorId,a.Name,a.RegistrationDate FROM Author a
WHERE YEAR(a.RegistrationDate)<>YEAR(GETDATE())

The result is as follows:

Please note that since all the authors in the sample database were registered in 2017 and 2018, and none in the current year, all the authors are shown without any exclusion.

Authors list excluding the author(s) with most articles

The requirement to view the authors' list without the top author(s) makes the script slightly more complex.

We have to divide the problem into smaller parts:

  1. Getting top author(s) (author with most articles)
  2. Excluding top author(s) using the Not Equal comparison operator (<>)
  3. Viewing all authors excluding top author(s)

The top author is the author who has written more articles than any other author.

Let us first add one more article for the author Sadaf to make her the top author by running the following script:

--Add one more article for Sadaf (AuthorId: 3)
INSERT INTO Article (CategoryId, AuthorId, Title, Published, Notes)
  VALUES (1, 3, 'Database Development with SQL Server Data Tools (SSDT)', '10-01-2019', '')

Now, build a script to view authors with the total number of articles ordered by authors with most articles first:

--Authors with total number of articles order by author with most articles first
SELECT ath.name as Author_NAME,count(*) as Total_Articles FROM Article art
INNER JOIN Author ath
on art.AuthorId=ath.AuthorId
GROUP BY ath.AuthorId,ath.Name
ORDER BY COUNT(*) DESC

The output of the script is as follows:

Now, first we need to find the highest number of articles written by a single author and put the result into a variable; then we can list all the authors excluding the one(s) with the most articles by using the Not Equal comparison operator (<>) with that variable, as follows:

--Authors list excluding the author(s) with most articles
DECLARE @MaxArticles INT=(SELECT TOP 1 count(*) as Total_Articles FROM Article art
INNER JOIN Author ath
on art.AuthorId=ath.AuthorId
GROUP BY ath.AuthorId,ath.Name
ORDER BY COUNT(*) DESC)

-- Authors excluding the author with most articles
SELECT ath.Name,ath.RegistrationDate,Count(*) as Total_Articles FROM Author ath
INNER JOIN Article art
ON ath.AuthorId=art.AuthorId
GROUP BY ath.Name,ath.RegistrationDate
HAVING COUNT(*)<>@MaxArticles

Running the script shows us the following result set:

Congratulations, you have successfully learned how to use the Not Equal comparison operator (<>) in simple and slightly complex scenarios.

Things to do

Now that you are familiar with basic and complex uses of the Not Equal comparison operator (<>), you can improve your skills further by trying the following things:

  1. Please try to find the list of all the articles excluding the articles written last year, where last year should be calculated dynamically.
  2. Please try to create a SQL script to get the list of all the authors who were not registered last year, where last year should be calculated dynamically (a hint sketch follows this list).
  3. Please try to find the list of all the articles where the category is not Development, using the sample database created in this article.
  4. Please try to build a SQL script to get all the categories excluding the top category (the category with the most articles written).
  5. Please try creating a scalar-valued function and using it with the Not Equal comparison operator (<>) to explore further.
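
As a hint for the second item, a minimal sketch against the SQLDevBlogV2 sample database (assuming "last year" means the calendar year before the current one) could look like this:

-- Authors not registered last year (last year calculated dynamically)
SELECT AuthorId, Name, RegistrationDate
FROM Author
WHERE YEAR(RegistrationDate) <> YEAR(GETDATE()) - 1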

The post Basic and Complex Uses of Not Equal Comparison Operator in T-SQL appeared first on {coding}Sight.


In SQL Server, we can combine the same type of data from multiple tables using SET operators. After combining the results of multiple SQL statements, a SET operator returns a single result set. Following is the list of T-SQL SET operators:

  1. UNION
  2. UNION ALL
  3. INTERSECT
  4. EXCEPT

To use SET operators, we must follow a number of rules:

  1. The result sets of both queries must have the same number of columns.
  2. The data types of the columns retrieved by the top and bottom queries must be the same.
  3. If we want to sort the final result set, the ORDER BY clause must be placed at the end of the query (see the sketch after this list).
  4. The positional ordering of the columns returned by the top and bottom queries must be the same.
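
For example, a minimal sketch (using the STUDENT_GRADE_A and STUDENT_GRADE_B tables that are created later in the demo setup) shows that ORDER BY sorts the combined result and appears only once, at the very end:

SELECT GRADE, 
       PERCENTAGE 
FROM   STUDENT_GRADE_A 
UNION 
SELECT GRADE, 
       PERCENTAGE 
FROM   STUDENT_GRADE_B 
ORDER  BY PERCENTAGE DESC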

In this article, I am going to explain the following:

  1. UNION and UNION ALL operator.
  2. Difference between UNION and UNION ALL.
  3. Performance comparison between UNION and UNION ALL.
  4. Performance comparison of UNION and UNION ALL with SELECT Distinct.
What is UNION

UNION is one of the SET operators. The UNION operator combines results generated by multiple SQL queries or multiple tables and returns a single result set. The final result set contains all the rows returned by all the queries in the UNION, and duplicate rows are removed.

Following is the syntax of the UNION operator.

SELECT COLUMN1, 
       COLUMN2, 
       COLUMN3, 
       COLUMN4..FROM TABLE1 
UNION 
SELECT COLUMN1, 
       COLUMN2, 
       COLUMN3, 
       COLUMN4..FROM TABLE2

What is UNION ALL

UNION ALL is also a SET operator. Similar to UNION, it combines the results generated by multiple SQL queries or tables and returns a single result set. The final result set contains all the rows returned by all the queries in the UNION ALL, including duplicate records. The following image illustrates UNION ALL.

Following is the syntax of the UNION ALL operator.

SELECT COLUMN1, 
       COLUMN2, 
       COLUMN3, 
       COLUMN4.FROM TABLE1 
UNION ALL
SELECT COLUMN1, 
       COLUMN2, 
       COLUMN3, 
       COLUMN4.FROM TABLE2

Difference between UNION and UNION ALL

  1. UNION retrieves only distinct records from all queries or tables, whereas UNION ALL returns all the records retrieved by the queries.
  2. The performance of UNION ALL is higher than that of UNION.

In the following demonstration, I will briefly explain the difference between UNION and UNION ALL.

Prepare Demo Setup

To demonstrate the syntax of the UNION and UNION ALL operators, I have created the following setup.

Firstly, create two tables named STUDENT_GRADE_A and STUDENT_GRADE_B in DemoDatabase. To do that, execute the following query:

CREATE TABLE STUDENT_GRADE_A 
  ( 
     ID          INT IDENTITY(1, 1), 
     STUDENTNAME VARCHAR(50), 
     GRADE       CHAR(1), 
     PERCENTAGE  INT 
  ) 
GO 

CREATE TABLE STUDENT_GRADE_B 
  ( 
     ID          INT IDENTITY(1, 1), 
     STUDENTNAME VARCHAR(50), 
     GRADE       CHAR(1), 
     PERCENTAGE  INT 
  ) 

GO

Add some dummy data by executing the following query:

INSERT INTO STUDENT_GRADE_A 
VALUES      ('KEN J SÁNCHEZ', 
             'A', 
             90), 
            ('TERRI LEE DUFFY', 
             'A', 
             80), 
            ('ROBERTO TAMBURELLO', 
             'B', 
             55), 
            ('ROB WALTERS', 
             'B', 
             60) 

GO 

INSERT INTO STUDENT_GRADE_B 
VALUES      ('GAIL A ERICKSON', 
             'A', 
             90), 
            ('JOSSEF H GOLDBERG', 
             'A', 
             50), 
            ('DIANE L MARGHEIM', 
             'B', 
             60), 
            ('GIGI N MATTHEW', 
             'C', 
             35) 

GO

Execute the following query to see the data in both tables:
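
The exact query behind the original screenshot is an assumption; a simple pair of SELECT statements such as the following displays the contents of both tables:

USE DEMODATABASE 
GO 

SELECT * FROM STUDENT_GRADE_A 
SELECT * FROM STUDENT_GRADE_B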

Now, let us combine the result set of both queries using UNION. To do that, execute the following query:

USE DEMODATABASE 
GO 

SELECT GRADE, 
       PERCENTAGE 
FROM   STUDENT_GRADE_A 
UNION 
SELECT GRADE, 
       PERCENTAGE 
FROM   STUDENT_GRADE_B

Following is the output:

As you can see in the above image, UNION returned 6 rows instead of 8, which means it combined the output of both queries but removed the duplicate records.

Now let’s take a look at an execution plan of the above query. Following is a screenshot of the execution plan.

As you can see, the UNION operator first combines the output generated by both queries using the Concatenation operator (red box) and then performs the distinct operation (green box) on the result set.

Now, let’s join both the tables using UNION ALL. To do that, execute the following query.

USE DEMODATABASE 
GO 

SELECT GRADE, 
       PERCENTAGE 
FROM   STUDENT_GRADE_A 
UNION ALL 
SELECT GRADE, 
       PERCENTAGE 
FROM   STUDENT_GRADE_B

As I explained above, UNION ALL returns all records, including duplicates. Following is the output:

As you can see in the above screenshot, the query returned 8 rows, and the final result set contains duplicate records.

Now let’s take a look at an execution plan of the above query. Following is a screenshot of the execution plan.

As you can see, the UNION ALL operator combines the output generated by both queries using the Concatenation operator (red box).

Performance comparison of UNION and UNION ALL

Now, as I mentioned, the UNION operator combines the results and performs a distinct sort while generating the final result set, whereas UNION ALL simply combines the result sets of both queries or tables. So, when we use UNION ALL to combine the result sets, it returns results faster.

To demonstrate that, execute the following queries:

USE DEMODATABASE 
GO 

SELECT GRADE, 
       PERCENTAGE 
FROM   STUDENT_GRADE_A 
UNION 
SELECT GRADE, 
       PERCENTAGE 
FROM   STUDENT_GRADE_B 

SELECT GRADE, 
       PERCENTAGE 
FROM   STUDENT_GRADE_A 
UNION ALL 
SELECT GRADE, 
       PERCENTAGE 
FROM   STUDENT_GRADE_B

Following is the execution plan of the above queries:

As you can see in the above image:

  1. UNION performs an expensive distinct SORT operation, which reduces performance. The query cost relative to the batch is 73%.
  2. UNION ALL does not perform a distinct sort. The query cost relative to the batch is 27%.

Now let’s try to perform UNION ALL on the result set generated by  SELECT DISTINCT and compare its execution plan. To do that, execute the following query:

/*Query with UNION*/
SELECT GRADE, 
       PERCENTAGE 
FROM   STUDENT_GRADE_A 
UNION 
SELECT GRADE, 
       PERCENTAGE 
FROM   STUDENT_GRADE_B 
GO 
/*Query with UNION All and Select Distinct*/
SELECT DISTINCT GRADE, 
                PERCENTAGE 
FROM   STUDENT_GRADE_A 
UNION ALL 
SELECT DISTINCT GRADE, 
                PERCENTAGE 
FROM   STUDENT_GRADE_B

Following is the execution plan:

As you can see in the above image:

  1. UNION: query cost relative to the batch is 38%.
  2. UNION ALL with Select distinct: query cost relative to the batch is 62%.

So, combining UNION ALL with SELECT DISTINCT performs two distinct sorts; this does not give any performance benefit and, in fact, reduces performance.

The above scenario proves that:

  1. UNION ALL is faster and more optimized than UNION. But we cannot use it in all scenarios.
  2. UNION ALL with SELECT DISTINCT is not equivalent to UNION.
Summary

In this article, I have covered:

  1. T-SQL SET operators.
  2. What are UNION and UNION ALL
  3. Performance comparison of UNION and UNION ALL.

The post T-SQL SET Operators Part 1: UNION and UNION ALL appeared first on {coding}Sight.


Microsoft has recently released the preview version of Visual Studio 2019. Like all previous editions, the latest Visual Studio comes with lots of improvements and new features that are focused on faster execution, more productivity for developers and team collaboration.

If you haven’t downloaded it yet, you can do it here for free: Visual Studio 2019 preview. You can then easily install and run it alongside any other edition of Visual Studio – they won’t interfere with each other and this won’t require any considerable upgrades of your OS. You can also download Visual Studio 2019 RC (Release Candidate) from the official website.

Now, let’s dig into the most important features and improvements in Visual Studio 2019.

New Start Screen

The first thing we notice after opening VS 2019 is the start screen’s completely new look. This new start screen provides us with the following four options on the right side:

  1. Clone or check out code
  2. Open a project or solution
  3. Open a local folder and
  4. Create a new project

With the help of these options presented in a simple dialog box, we will be able to directly perform the following  activities:

  • Clone repositories
  • Open previous projects
  • Create new projects
  • Navigate and open folders

At the bottom right corner, you can find an option called "Continue without code". When chosen, it dismisses the window and opens Visual Studio 2019 without loading any solution. Alternatively, you can hit the ESC button to do the same thing.

Create Project

Visual Studio 2019 has a completely new project creation window. However, it provides almost the same functionality present in VS 2017.

Here, we have multiple filtering options. Let's look at them in more detail:

  1. Text search box
  2. Language (C++, C#, Java, JavaScript, etc.)
  3. Platform (Android, Azure, iOS, Linux, Xbox, etc.)
  4. Project type (Cloud, Console, Machine Learning, etc.)

Here, you will be able to find and install new templates and select the project type. Once this is done, Visual Studio will take you to the new project configuration screen. You will need to choose the project’s location and name. Based on which project you selected, different project configuration screens can be shown.

For example:

Let’s create a console application.

First, search ‘console’, select “Console App” and then click the “Next” button:

Then the configuration screen will be opened. Here, we have to set the project name and target framework and then click Create.

When the project is created, the Visual Studio 2019 user interface will be opened. Here is how Visual Studio looks when loaded with a .NET Core console application setting.

Better Search

Another new thing worth mentioning is the updated search box. It allows us to quickly access nearly anything in Visual Studio. This feature is simple, fast and more effective than in the IDE’s previous editions. Now, search results are updated in real time as you type. You can also see the corresponding keyboard shortcuts while searching, so you  will be able to memorize and effectively retain them for future use.

We can also search for settings, components, documentation, and project code.

Visual Studio Live Share

This feature allows us to easily share our codebase and collaborate with our teammates when developing applications. No matter how far away the members of your team are located from each other, you can use Live Share to achieve real-time collaborative development.

To use this feature, you and your teammates must sign into Visual Studio. Through Live Share, a teammate can read, navigate, edit, and debug the codebase you shared with your team, and do so seamlessly and securely. In Visual Studio 2019 Preview, this feature is installed by default.

Once the Live Share session is ready, you will see a "Sharing" menu and an invitation link. You can copy this link and share it with your teammates so that they can join the live session by navigating to File > Join Collaboration Session and entering the shareable invitation link.

One-click code cleanup

Another new option can be found while a document is opened in Visual Studio – this option indicates the current document’s health. It will also display the summary of all warnings, errors, and suggestions when you hover over the yellow bulb icon.

You can run code cleanup and configure its settings by right-clicking the broom icon and selecting the corresponding option.

When you choose Configure Code Cleanup, the configuration window will be opened. There, you can see two profiles to which you can add or remove filters, which allows you to maintain multiple cleanup configurations at the same time. You can choose whichever profile you want, but don't add multiple profiles at this time. This feature will be useful when we set up multiple filters for a solution that includes both back-end and front-end projects.

Select any available filter you need, click the ^ up arrow to add it and then click OK.

Now you can run Code Cleanup for the profile you selected. Click the broom icon again and select Run Code Cleanup (Profile 1).

Here, we will be able to sort the usings at the top of the auto-generated file and remove the unnecessary ones. This is probably one of the most exciting new features in Visual Studio 2019.

Debugger improvement

In Visual Studio 2019, we can quickly find our variables and their properties by using a new feature called search in the Watch, Autos, and Locals windows while debugging the code.

With the help of this newly added search feature, we can highlight and navigate to particular values contained within the name, type, and value columns of each watch window.

We will also be able to change how a value is displayed within the Auto, Locals, and Watch windows. To do that, double-click any item in any of these windows and add a “,” (comma) to get to the drop-down list of feasible format specifiers, each of them coming with a tooltip that lets you know about their functionality.

Manage Pull Requests (PRs)

Microsoft has introduced a new extension that we can download and use with VS 2019. Through this new extension, we can review the code and run and debug our team's pull requests without leaving the Visual Studio IDE.

Currently, this feature supports code from Azure Repos, but we can expect support for GitHub and better overall experience. To try this feature out, download the Pull Requests for Visual Studio extension from the VS Marketplace.

In Visual Studio, we can find this pull request feature under the Team Explorer. Click on Pull Request – a new window with more details will be opened.

Develop with .NET Core 3 Preview

The Visual Studio 2019 preview supports .NET Core 3 application building for any platform. To test and work with this feature, you would first need to download the .NET Core 3.0 SDK. Microsoft would like to continue supporting this and improve cross-platform C++ development and .NET mobile development for Android and iOS with Xamarin.

PMA (Per Monitor Aware) Rendering

This feature will allow Visual Studio to render properly while working with a remote machine or on a projector. We can enable this feature in the Tools – Preview Feature submenu. This feature’s minimum requirements are Windows 10 Version 1803 and .NET Framework 4.8.

Visual Studio IntelliCode

IntelliCode is a VS 2019 extension that improves the software development process by using advanced algorithms that tailor the completion list to best serve your particular style of coding conventions and promote common practices. When installed, IntelliCode will draw its power from thousands of highly rated open-source GitHub projects to enhance and streamline your coding efforts.

Summary

The features and improvements we discussed will enhance developer productivity and team collaboration in Visual Studio 2019 compared to previous editions. There is a new start screen which allows you to decide on the specifics of how you want to start working – and you will be able to do it quickly and easily. Also, the user interface is now much cleaner, with an updated search bar that provides an ability to reach the place or information you’re looking for in a flash. The project creation wizard was completely re-imagined. Debugging is now up to 50% faster than before and we have access to a highly functional search bar in the Watch, Autos, and Locals windows.

I tried my best to distill the most important features introduced in Visual Studio 2019. However, there are many other improvements in various areas, and covering their entirety would be difficult. To check them all out for yourself and to get acquainted with Visual Studio’s new edition in general, you can download the latest version from the official website. Feel free to share what new features you thought are the most exciting!

The post New Features in Visual Studio 2019 appeared first on {coding}Sight.


At present, gathering SQL Server configuration information is no longer a problem, and the opportunities to add value by keeping configurations up to date and building a stable database environment are always there. Most third-party tools provide this kind of functionality in order to capture the software market. In this guide, you'll figure out ways to capture a few SQL Server configuration settings and manipulate the data by using PowerShell.

Moreover, you will get the answers to the following questions:

  • How to gather SQL Server configuration
  • How to automatically discover SQL instances
  • How to manually input the SQL Server instances to speed up the process
  • How to transform data by using PowerShell
  • And more…

Getting Started

In this article, we'll go over a working example of using PowerShell with SQL Server. Properly tuning the configuration parameters will generally produce better performance. In the appendix section, you will find a full-length script to query the configuration settings. The output can be used as a baseline for your instance and your databases when troubleshooting performance.

Introduction

In a moment, we'll see how to audit the SQL Server configuration details of the database environment. In our case, we will look at how to list the XP_CMDSHELL, SQL Server memory (max and min memory configurations), and trace flag property values across the SQL instances.

XP_CMDSHELL

xp_cmdshell is a server configuration option that enables the xp_cmdshell extended stored procedure, which executes operating system commands from within SQL Server.

Note: xp_cmdshell is disabled by default. There are several cases where a legacy application may require this feature to be enabled. It is critical to weigh the decision to enable this option carefully, because it carries several security implications and risks.
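
For reference, the setting itself is controlled from T-SQL through sp_configure; a minimal sketch to enable it (only if a legacy application genuinely requires it) looks like this:

-- Enable xp_cmdshell (requires 'show advanced options')
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;
GO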

In the following PowerShell code, the Configuration class of SQL Server Management Objects (SMO) is instantiated in order to get the ConfigValue parameter of each SQL Server instance.

Note: the XPCmdShellEnabled property is used to get the ConfigProperty (ConfigValue) object, which in our case reflects the XP_CMDSHELL configuration.

The following PowerShell code lists the configuration value for the XP_CMDSHELL property of the listed server ‘hqdbt01’:

'hqdbt01'|
ForEach-Object {
    Get-ChildItem -Path "SQLSERVER:\SQL\$_"} | Select-Object -Property ComputerNamePhysicalNetBIOS, Name, DisplayName, InstanceName |
%{
    #$_.ComputerNamePhysicalNetBIOS
    #$_.DisplayName
    If (($_.DisplayName -eq 'Default') -or ($_.DisplayName -eq 'MSSQLSERVER')) {
        $SQL = $_.name
    }
    elseif ($_.InstanceName -ne $NULL)
    {
        #$SQLInstance = "$($_.ComputerNamePhysicalNetBIOS)\$($_.InstanceName)"
        $SQL = $_.name
    }
    New-Object ('Microsoft.SqlServer.Management.Smo.Server') $SQL |
    SELECT @{label='ServerName';expression={$_.ComputerNamePhysicalNetBIOS}}, @{label='InstanceName';expression={$_.Name}},@{label='xp_cmdshell';expression= {$_.Configuration.XPCmdShellEnabled.configValue}} |SELECT ServerName,InstanceName,xp_cmdshell
}

In this section, you'll learn how to display a custom header named xp_cmdshell along with a customized value for that property, and the xp_cmdshell configuration is explained in more detail.

Note: To get a property of the Configuration class, use $_, which represents the current SMO server object ('Microsoft.SqlServer.Management.Smo.Server') in the pipeline; the Configuration class is then navigated with the dot membership operator to reach the xp_cmdshell property.

SELECT @{label='InstanceName';expression={$_.Name}},@{label='xp_cmdshell';expression= {$_.Configuration.XPCmdShellEnabled.configValue}}

Next, you'll learn how to use a conditional statement inside the expression. In the following snippet, an IF statement evaluates $_.Configuration.XPCmdShellEnabled.configValue: when it equals 1, the expression returns 'enabled'; for any other value, it returns 'disabled'.

@{label='xp_cmdshellDesc';expression= { if($_.Configuration.XPCmdShellEnabled.configValue -eq 1) { 'enabled' } else {'disabled'}}}

Let's see the full PowerShell code to retrieve the xp_cmdshell description using the "if" statement in the expression:
'hqdbt01'|
ForEach-Object {
    Get-ChildItem -Path "SQLSERVER:\SQL\$_"} | Select-Object -Property ComputerNamePhysicalNetBIOS, Name, DisplayName, InstanceName |
%{
    #$_.ComputerNamePhysicalNetBIOS
    #$_.DisplayName
    If (($_.DisplayName -eq 'Default') -or ($_.DisplayName -eq 'MSSQLSERVER')) {
        $SQL = $_.name
    }
    elseif ($_.InstanceName -ne $NULL)
    {
        #$SQLInstance = "$($_.ComputerNamePhysicalNetBIOS)\$($_.InstanceName)"
        $SQL = $_.name
    }
    New-Object ('Microsoft.SqlServer.Management.Smo.Server') $SQL |
    SELECT @{label='ServerName';expression={$_.ComputerNamePhysicalNetBIOS}},
    @{label='InstanceName';expression={$_.Name}},@{label='xp_cmdshell';expression= {$_.Configuration.XPCmdShellEnabled.configValue}},
    @{label='xp_cmdshellDesc';expression= { if($_.Configuration.XPCmdShellEnabled.configValue -eq 1) { 'enabled' } else {'disabled'}}}`
    |SELECT ServerName,InstanceName,xp_cmdshell,xp_cmdshellDesc
}
SQL Server Memory Settings

SQL Server memory settings have two options: "max server memory" and "min server memory". Now, let's see how to list the configured values of these memory parameters for all the listed servers.

Note: By default, min server memory is configured with a value of 0, and max server memory is configured with 2,147,483,647 MB. With these defaults, the SQL Server engine manages its memory requirements dynamically.
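
For reference, both values can also be inspected or changed from T-SQL with sp_configure; a minimal sketch follows, where the 8192 MB figure is only a placeholder and not a recommendation:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
EXEC sp_configure 'max server memory (MB)', 8192;  -- placeholder value, size to your workload
EXEC sp_configure 'min server memory (MB)', 0;     -- default value
RECONFIGURE;
GO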

The following PowerShell code gathers the memory setting of all the SQL instances that are listed in the input servers.

'hqdbt01'|
ForEach-Object {
    Get-ChildItem -Path "SQLSERVER:\SQL\$_"} | Select-Object -Property ComputerNamePhysicalNetBIOS, Name, DisplayName, InstanceName |
%{
    #$_.ComputerNamePhysicalNetBIOS
    #$_.DisplayName
    If (($_.DisplayName -eq 'Default') -or ($_.DisplayName -eq 'MSSQLSERVER')) {
        $SQL = $_.name
    }
    elseif ($_.InstanceName -ne $NULL)
    {
        #$SQLInstance = "$($_.ComputerNamePhysicalNetBIOS)\$($_.InstanceName)"
        $SQL = $_.name
    }
    New-Object ('Microsoft.SqlServer.Management.Smo.Server') $SQL |
    SELECT @{label='ServerName';expression={$_.ComputerNamePhysicalNetBIOS}},
    @{label='InstanceName';expression={$_.Name}},
    @{label='Max';expression={$_.Configuration.MaxServerMemory.configValue}},
    @{label='MaxDesc';expression={ if ($_.Configuration.MaxServerMemory.configValue -eq 2147483647) {'Default'} else {'Custom'}}},
    @{label='Min';expression={$_.Configuration.MinServerMemory.configValue}},
    @{label='MinDesc';expression={ if ($_.Configuration.MinServerMemory.configValue -eq 0) {'Default'} else {'Custom'}}}`
    |SELECT ServerName,InstanceName,Max,MaxDesc,Min,MinDesc
}

In the following screenshot, you can see that the memory-related properties MaxServerMemory.configValue and MinServerMemory.configValue are gathered using the same method as described in the XP_CMDSHELL section.

OS and SQL Wildcard expression

In this section, we'll see how to use a PowerShell wildcard expression together with a "switch" statement inside the expression element. Wildcard characters are a simple form of regular expression. Here, the wildcard character is the asterisk (*), which matches any sequence of characters, and it is used together with the "switch" statement.

'hqdbt01'|
ForEach-Object {
    Get-ChildItem -Path "SQLSERVER:\SQL\$_"} | Select-Object -Property ComputerNamePhysicalNetBIOS, Name, DisplayName, InstanceName |
%{
    #$_.ComputerNamePhysicalNetBIOS
    #$_.DisplayName
    If (($_.DisplayName -eq 'Default') -or ($_.DisplayName -eq 'MSSQLSERVER')) {
        $SQL = $_.name
    }
    elseif ($_.InstanceName -ne $NULL)
    {
        #$SQLInstance = "$($_.ComputerNamePhysicalNetBIOS)\$($_.InstanceName)"
        $SQL = $_.name
    }
    New-Object ('Microsoft.SqlServer.Management.Smo.Server') $SQL |
    SELECT @{label='ServerName';expression={$_.ComputerNamePhysicalNetBIOS}},
    @{label='InstanceName';expression={$_.Name}},
    @{label='SQLVersion';expression={ switch -wildcard( $_.VersionString )`
    {`
        "8*" { 'SQL Server 2000' }
        "9*" { 'SQL Server 2005' }
        "10*" { 'SQL Server 2008/R2' }
        "10.5*" { 'SQL Server 2008 R2' }
        "11*" { 'SQL Server 2012' }
        "12*" { 'SQL Server 2014' }
        "13*" { 'SQL Server 2016' }
        "14*" { 'SQL Server 2017' }
    `
    }}},
    @{label='OSVersion';expression={
    switch -wildcard( $_.OSVersion )
    {
        "5.0*" { 'Windows Server 2000' }
        "5.1*" { 'Windows XP' }
        "5.2*" { 'Windows Server 2003' }
        "6.0*" { 'Windows Server 2008' }
        "6.1*" { 'Windows Server 2008 R2' }
        "6.2*" { 'Windows Server 2012' }
        "6.3*" { 'Windows Server 2012 R2' }
        "10.0*" { 'Windows 10/2019/2016' }
    }
    }
    }
}

The "switch" statement is an extension of the IF-ELSE conditional statement. In this demo, it reads the VersionString and OSVersion properties of the SQL Server object and compares their values to print the corresponding SQL Server and OS versions. The '*' wildcard works just like the LIKE operator, matching the remainder of each version string.

Note: The switch statement is essentially a nested IF-ELSE statement.

Next, let's put it all together to retrieve a set of configuration values using PowerShell. In the following code, replace the input array with your own server names.

Code
#An array to hold the output values

$Results=@()

#Input server names for the configuration listing

'hqmesrp01','hqmesrp02','hqdbt01'|

ForEach-Object {

Get-ChildItem -Path "SQLSERVER:\SQL\$_"} | Select-Object -Property ComputerNamePhysicalNetBIOS, Name, DisplayName, InstanceName |

%{

#$_.ComputerNamePhysicalNetBIOS

#$_.DisplayName

If (($_.DisplayName -eq 'Default') -or ($_.DisplayName -eq 'MSSQLSERVER')) {

$SQL = $_.name

}

elseif ($_.InstanceName -ne $NULL)

{

#$SQLInstance = "$($_.ComputerNamePhysicalNetBIOS)\$($_.InstanceName)"

$SQL = $_.name

}

$a=''

$inv=New-Object ('Microsoft.SqlServer.Management.Smo.Server') $SQL |

SELECT @{label='ServerName';expression={$_.ComputerNamePhysicalNetBIOS}}, @{label='InstanceName';expression={$_.Name}} ,@{label='Max';expression={$_.Configuration.MaxServerMemory.configValue}},

@{label='Min';expression={$_.Configuration.MinServerMemory.configValue}},

@{label='xp_cmdshell';expression= {$_.Configuration.XPCmdShellEnabled.configValue}},

@{label='xp_cmdshell1';expression= { if($_.Configuration.XPCmdShellEnabled.configValue -eq 1) { 'enabled' } else {'disabled'}}},

@{label='SQLVersion';expression={ switch -wildcard( $_.VersionString )`

{`

"8*" { 'SQL Server 2000' }`

"9*" { 'SQL Server 2005' }`

"10*" { 'SQL Server 2008/R2' }`

"10.5*" { 'SQL Server 2008 R2' }`

"11*" { 'SQL Server 2012' }`

"12*" { 'SQL Server 2014' }`

"13*" { 'SQL Server 2016' }`

"14*" { 'SQL Server 2017' }`

`

}}},

@{label='OSVersion';expression={

switch -wildcard( $_.OSVersion )

{

"5.0*" { 'Windows Server 2000' }

"5.1*" { 'Windows XP' }

"5.2*" { 'Windows Server 2003' }

"6.0*" { 'Windows Server 2008' }

"6.1*" { 'Windows Server 2008 R2' }

"6.2*" { 'Windows Server 2012' }

"6.3*" { 'Windows Server 2012 R2/8.1' }

"10.0*" { 'Windows 10/2019/2016' }

}

}

},

@{label='Traceflag';expression={ $_.EnumActiveGlobalTraceFlags()| % {

[string]$b=$_.status

[string]$c=$_.TraceFlag

$a+=$c.ToString() +'->'+ $b.ToString()+ ' '

}

$a

}

}

ForEach($i in $inv)

{

$Properties = @{Name=$i.ServerName

InstanceName =$i.InstanceName

MaxMemory=$i.Max

MinMemory=$i.Min

xp_cmdshell=$i.xp_cmdshell

xp_cmdshellStatus=$i.xp_cmdshell1

SQLVersion=$i.SQLVersion

OSVersion=$i.OSVersion

Traceflag=$i.Traceflag

}

$Results += New-Object psobject -Property $properties

}

}

$Results |Select Name,InstanceName,OSVersion,SQLVersion,xp_cmdshell,xp_cmdshellStatus,Traceflag,MinMemory,MaxMemory| format-table -AutoSize

The output is a formatted list of configuration values generated using SQL Server Management Objects, along with data transformation done with PowerShell techniques.

Summary

This guide outlined how to gather some SQL Server information and configuration settings, such as xp_cmdshell, memory settings, trace flags, OS version, and SQL Server version. You can use the above snippets to gather other properties of your choice through the SMO (SQL Server Management Objects) Configuration class. We also covered tips for using conditional statements, the switch statement, and looping inside expression elements with PowerShell.

I hope you liked this article. Please, leave your feedback in the comment section.

Appendix Code
#Output file to hold the list

$Outputfile='c:\output.csv'

#An array to hold the output values

$Results=@()

'hqmesrp01','hqmesrp02','hqdbt01'|

ForEach-Object {

Get-ChildItem -Path "SQLSERVER:\SQL\$_"} | Select-Object -Property ComputerNamePhysicalNetBIOS, Name, DisplayName, InstanceName |

%{

#$_.ComputerNamePhysicalNetBIOS

#$_.DisplayName

If (($_.DisplayName -eq 'Default') -or ($_.DisplayName -eq 'MSSQLSERVER')) {

$SQL = $_.name

}

elseif ($_.InstanceName -ne $NULL)

{

#$SQLInstance = "$($_.ComputerNamePhysicalNetBIOS)\$($_.InstanceName)"

$SQL = $_.name

}

$inv=New-Object ('Microsoft.SqlServer.Management.Smo.Server') $SQL |

SELECT @{label='ServerName';expression={$_.ComputerNamePhysicalNetBIOS}}, @{label='InstanceName';expression={$_.Name}} ,@{label='Max';expression={$_.Configuration.MaxServerMemory.configValue}},

@{label='Min';expression={$_.Configuration.MinServerMemory.configValue}},

@{label='AdHocDistributedQueriesEnabled';expression={$_.Configuration.AdHocDistributedQueriesEnabled.configValue}},

@{label='Affinity64IOMask';expression={$_.Configuration.Affinity64IOMask.configValue}},

@{label='Affinity64Mask';expression={$_.Configuration.Affinity64Mask.configValue}},

@{label='xp_cmdshell';expression= {$_.Configuration.XPCmdShellEnabled.configValue}},

@{label='xp_cmdshell1';expression= { if($_.Configuration.XPCmdShellEnabled.configValue -eq 1) { 'enabled' } else {'disabled'}}},

@{label='AgentXPsEnabled';expression={$_.Configuration.AgentXPsEnabled.configValue}},

@{label='AllowUpdates';expression={$_.Configuration.AllowUpdates.configValue}},

@{label='AweEnabled';expression={$_.Configuration.AweEnabled.configValue}},

@{label='BlockedProcessThreshold';expression={$_.Configuration.BlockedProcessThreshold.configValue}},

@{label='C2AuditMode';expression={$_.Configuration.C2AuditMode.configValue}},

@{label='CommonCriteriaComplianceEnabled';expression={$_.Configuration.CommonCriteriaComplianceEnabled.configValue}},

@{label='ContainmentEnabled';expression={$_.Configuration.ContainmentEnabled.configValue}},

@{label='CostThresholdForParallelism';expression={$_.Configuration.CostThresholdForParallelism.configValue}},

@{label='CrossDBOwnershipChaining';expression={$_.Configuration.CrossDBOwnershipChaining.configValue}},

@{label='CursorThreshold';expression={$_.Configuration.CursorThreshold.configValue}},

@{label='DatabaseMailEnabled';expression={$_.Configuration.DatabaseMailEnabled.configValue}},

@{label='DefaultBackupCompression';expression={$_.Configuration.DefaultBackupCompression.configValue}},

@{label='DefaultFullTextLanguage';expression={$_.Configuration.DefaultFullTextLanguage.configValue}},

@{label='DefaultLanguage';expression={$_.Configuration.DefaultLanguage.configValue}},

@{label='DefaultTraceEnabled';expression={$_.Configuration.DefaultTraceEnabled.configValue}},

@{label='DisallowResultsFromTriggers';expression={$_.Configuration.DisallowResultsFromTriggers.configValue}},

@{label='ExtensibleKeyManagementEnabled';expression={$_.Configuration.ExtensibleKeyManagementEnabled.configValue}},

@{label='FilestreamAccessLevel';expression={$_.Configuration.FilestreamAccessLevel.configValue}},

@{label='FillFactor';expression={$_.Configuration.FillFactor.configValue}},

@{label='FullTextCrawlBandwidthMax';expression={$_.Configuration.FullTextCrawlBandwidthMax.configValue}},

@{label='FullTextCrawlBandwidthMin';expression={$_.Configuration.FullTextCrawlBandwidthMin.configValue}},

@{label='FullTextCrawlRangeMax';expression={$_.Configuration.FullTextCrawlRangeMax.configValue}},

@{label='FullTextNotifyBandwidthMax';expression={$_.Configuration.FullTextNotifyBandwidthMax.configValue}},

@{label='FullTextNotifyBandwidthMin';expression={$_.Configuration.FullTextNotifyBandwidthMin.configValue}},

@{label='IndexCreateMemory';expression={$_.Configuration.IndexCreateMemory.configValue}},

@{label='InDoubtTransactionResolution';expression={$_.Configuration.InDoubtTransactionResolution.configValue}},

@{label='IsSqlClrEnabled';expression={$_.Configuration.IsSqlClrEnabled.configValue}},

@{label='LightweightPooling';expression={$_.Configuration.LightweightPooling.configValue}},

@{label='Locks';expression={$_.Configuration.Locks.configValue}},

@{label='MaxDegreeOfParallelism';expression={$_.Configuration.MaxDegreeOfParallelism.configValue}},

@{label='MaxServerMemory';expression={$_.Configuration.MaxServerMemory.configValue}},

@{label='MaxWorkerThreads';expression={$_.Configuration.MaxWorkerThreads.configValue}},

@{label='MediaRetention';expression={$_.Configuration.MediaRetention.configValue}},

@{label='MinMemoryPerQuery';expression={$_.Configuration.MinMemoryPerQuery.configValue}},

@{label='MinServerMemory';expression={$_.Configuration.MinServerMemory.configValue}},

@{label='NestedTriggers';expression={$_.Configuration.NestedTriggers.configValue}},

@{label='NetworkPacketSize';expression={$_.Configuration.NetworkPacketSize.configValue}},

@{label='OleAutomationProceduresEnabled';expression={$_.Configuration.OleAutomationProceduresEnabled.configValue}},

@{label='OpenObjects';expression={$_.Configuration.OpenObjects.configValue}},

@{label='OptimizeAdhocWorkloads';expression={$_.Configuration.OptimizeAdhocWorkloads.configValue}},

@{label='Parent';expression={$_.Configuration.Parent.configValue}},

@{label='PrecomputeRank';expression={$_.Configuration.PrecomputeRank.configValue}},

@{label='PriorityBoost';expression={$_.Configuration.PriorityBoost.configValue}},

@{label='Properties';expression={$_.Configuration.Properties.configValue}},

@{label='ProtocolHandlerTimeout';expression={$_.Configuration.ProtocolHandlerTimeout.configValue}},

@{label='QueryGovernorCostLimit';expression={$_.Configuration.QueryGovernorCostLimit.configValue}},

@{label='QueryWait';expression={$_.Configuration.QueryWait.configValue}},

@{label='RecoveryInterval';expression={$_.Configuration.RecoveryInterval.configValue}},

@{label='RemoteAccess';expression={$_.Configuration.RemoteAccess.configValue}},

@{label='RemoteDacConnectionsEnabled';expression={$_.Configuration.RemoteDacConnectionsEnabled.configValue}},

@{label='RemoteDataArchiveEnabled';expression={$_.Configuration.RemoteDataArchiveEnabled.configValue}},

@{label='RemoteLoginTimeout';expression={$_.Configuration.RemoteLoginTimeout.configValue}},

@{label='RemoteProcTrans';expression={$_.Configuration.RemoteProcTrans.configValue}},

@{label='RemoteQueryTimeout';expression={$_.Configuration.RemoteQueryTimeout.configValue}},

@{label='ReplicationMaxTextSize';expression={$_.Configuration.ReplicationMaxTextSize.configValue}},

@{label='ReplicationXPsEnabled';expression={$_.Configuration.ReplicationXPsEnabled.configValue}},

@{label='ScanForStartupProcedures';expression={$_.Configuration.ScanForStartupProcedures.configValue}},

@{label='ServerTriggerRecursionEnabled';expression={$_.Configuration.ServerTriggerRecursionEnabled.configValue}},

@{label='SetWorkingSetSize';expression={$_.Configuration.SetWorkingSetSize.configValue}},

@{label='ShowAdvancedOptions';expression={$_.Configuration.ShowAdvancedOptions.configValue}},

@{label='SmoAndDmoXPsEnabled';expression={$_.Configuration.SmoAndDmoXPsEnabled.configValue}},

@{label='SqlMailXPsEnabled';expression={$_.Configuration.SqlMailXPsEnabled.configValue}},

@{label='TransformNoiseWords';expression={$_.Configuration.TransformNoiseWords.configValue}},

@{label='TwoDigitYearCutoff';expression={$_.Configuration.TwoDigitYearCutoff.configValue}},

@{label='UserConnections';expression={$_.Configuration.UserConnections.configValue}},

@{label='UserInstancesEnabled';expression={$_.Configuration.UserInstancesEnabled.configValue}},

@{label='UserInstanceTimeout';expression={$_.Configuration.UserInstanceTimeout.configValue}},

@{label='UserOptions';expression={$_.Configuration.UserOptions.configValue}},

@{label='WebXPsEnabled';expression={$_.Configuration.WebXPsEnabled.configValue}},

@{label='XPCmdShellEnabled';expression={$_.Configuration.XPCmdShellEnabled.configValue}},

@{label='SQLVersion';expression={ switch -wildcard( $_.VersionString )`

{`

"8*" { 'SQL Server 2000' }`

"9*" { 'SQL Server 2005' }`

"10*" { 'SQL Server 2008/R2' }`

"10.5*" { 'SQL Server 2008 R2' }`

"11*" { 'SQL Server 2012' }`

"12*" { 'SQL Server 2014' }`

"13*" { 'SQL Server 2016' }`

"14*" { 'SQL Server 2017' }`

`

}}},

@{label='OSVersion';expression={

switch -wildcard( $_.OSVersion )

{

"5.0*" { 'Windows Server 2000' }

"5.1*" { 'Windows XP' }

"5.2*" { 'Windows Server 2003' }

"6.0*" { 'Windows Server 2008' }

"6.1*" { 'Windows Server 2008 R2' }

"6.2*" { 'Windows Server 2012' }

"6.3*" { 'Windows Server 2012 R2/8.1' }

"10.0*" { 'Windows 10/2019/2016' }

}

}

},

@{label='Traceflag';expression={ $_.EnumActiveGlobalTraceFlags()| % {

[string]$b=$_.status

[string]$c=$_.TraceFlag

$a+=$c.ToString() +'->'+ $b.ToString()+ ' '

}

$a

}

}

ForEach($i in $inv)

{

$Properties = @{Name=$i.ServerName

InstanceName..
Read Full Article

When working as a SQL Server database administrator or developer, you cannot live in your isolated SQL Server world without communicating with other data sources. For example, there is rarely a day when you won’t be requested to import data from an Excel, Access or CSV file into your SQL Server table. Or, conversely, export one of your SQL Server database tables to an external file in order for this table to be used in another database engine or to be analyzed externally by the corresponding team.

SQL Server provides us with a number of methods that can be used to export an existing SQL Server database table to an Excel or text file, and import data from an external data source to a new or existing SQL Server database table. In this article, we will gradually cover all these methods.

BCP Utility

The Bulk Copy Program utility, also known as BCP, is a command line utility that can be used to import a large number of rows from an operating system data file into a SQL Server database table, or to export an existing SQL Server database table to an operating system data file.

BCP tool’s syntax contains a large variety of options. These are the most commonly-used ones:

  • The name of the SQL Server database table or view – this acts as the data source during a data export and as the destination during a data import.
  • The data direction, where IN indicates data import and OUT indicates data export.
  • The name of the local data file that acts as the source during a data import and as the destination during a data export.
  • -S server\instance – this is used to specify the SQL Server instance name.
  • -d – this is used to specify the database that contains the source or destination table.
  • -T – this is used to specify that Windows Authentication will be used to connect to the SQL Server instance.
  • -U username and -P password – these specify the username and the password used to connect to the SQL Server instance.
  • -c, -t field_term, and -r row_term – these are used to specify the format of the source or destination data file and the delimiters used in that file.

For example, the BCP command below is used to export the contents of the Employee_Main database table in bulk into a CSV file, as follows:

bcp AdventureWorks2016CTP3.dbo.Employee_Main out C:\Test\Employees.csv -S MININT-QQBE87A -T -c -t , -r \n

This data will be quickly copied into a CSV file, as shown in the result below:

In order to import the contents of a CSV file into a database table, the following BCP command can be used to do this quickly in bulk:

bcp AdventureWorks.dbo.Employee_Main in C:\Test\Employees.csv -S MININT-QQBE87A -T -c -t , -r \n

And the data will be inserted into the database table, as shown in the result below:

By spending a few minutes preparing these commands with proper options, you can easily use the BCP utility to import data from or export it to text data files.

BULK INSERT

The BULK INSERT T-SQL statement works similarly to the BCP IN command and is used to import data from a data file that is accessible by the SQL Server service into a SQL Server database table – all done directly within SQL Server. Unlike the BCP IN command, the BULK INSERT statement can be grouped with other operations in a single SQL Server transaction that runs under your control.

The BULK INSERT T-SQL statement below can be used to import data from a CSV file into an existing database table, specifying both the column and row delimiters, as shown below:

BULK INSERT AdventureWorks.dbo.Employee_Main
FROM 'C:\Test\Employees.csv'
WITH
(
FIELDTERMINATOR =',',
ROWTERMINATOR ='\n'
);
GO
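
Because BULK INSERT executes inside the Database Engine, it can be combined with other statements in one explicit transaction under your control. A minimal sketch of that pattern is shown below; the preliminary DELETE is only an illustrative placeholder:

BEGIN TRANSACTION;
DELETE FROM AdventureWorks.dbo.Employee_Main;  -- illustrative cleanup before the load
BULK INSERT AdventureWorks.dbo.Employee_Main
FROM 'C:\Test\Employees.csv'
WITH
(
FIELDTERMINATOR =',',
ROWTERMINATOR ='\n'
);
COMMIT TRANSACTION;
GO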

OPENROWSET

The OPENROWSET function is used to connect to data sources using an OLEDB connection provider and then use the T-SQL query specified to retrieve data from that data source. The key advantages of the OPENROWSET function over the BCP and BULK INSERT methods are the following:

  • you can filter the data retrieved by the OPENROWSET statement using a WHERE clause
  • you can use it in a SELECT statement for reading the data from the data source – not only with the INSERT statement

To use the OPENROWSET statement, you will need to enable the ad hoc distributed queries system configuration option first by using the script below:

sp_configure 'show advanced options', 1;  
RECONFIGURE;
GO 
sp_configure 'Ad Hoc Distributed Queries', 1;  
RECONFIGURE;  
GO

After enabling the ad hoc distributed queries system configuration option, you can use the OPENROWSET function to import data into SQL Server database tables from CSV files with a previously-defined data format, as shown in the T-SQL query below:

INSERT INTO [AdventureWorks].[dbo].[Employee_Main]
([EMP_FirsrName],[EMP_LastName],[EMP_BirthDate],[EMP_PhoneNumber])
SELECT [EMP_FirsrName],[EMP_LastName],[EMP_BirthDate],[EMP_PhoneNumber] 
FROM OPENROWSET(
   BULK 'C:\Test\Employees.csv',
      FORMATFILE = 'C:\Test\Fmt.xml'
) AS rows;
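
Since OPENROWSET can also be used in a plain SELECT, you can preview or filter the file's rows before inserting anything; a minimal sketch reusing the same file and format file, with a purely illustrative filter condition:

SELECT [EMP_FirsrName],[EMP_LastName],[EMP_BirthDate],[EMP_PhoneNumber] 
FROM OPENROWSET(
   BULK 'C:\Test\Employees.csv',
      FORMATFILE = 'C:\Test\Fmt.xml'
) AS rows
WHERE rows.[EMP_LastName] <> '';  -- illustrative filter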

Linked Server

The SQL Server Linked Server is a server object that is configured to allow the SQL Server Database Engine to connect to different types of OLE DB data sources outside of the SQL Server instance. The Linked Server can be easily created using the SQL Server Management Studio or T-SQL Commands, as shown below:
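
As one possible way to do this in T-SQL, the following hedged sketch creates a linked server named TEST pointing at an Excel workbook (the provider, workbook path, and the linked server name itself are assumptions for illustration, chosen to match the four-part name used in the query below):

EXEC master.dbo.sp_addlinkedserver
    @server = N'TEST',                        -- assumed linked server name
    @srvproduct = N'Excel',
    @provider = N'Microsoft.ACE.OLEDB.12.0',  -- assumed OLE DB provider for Excel
    @datasrc = N'C:\Test\Employees.xlsx',     -- hypothetical workbook path
    @provstr = N'Excel 12.0';
GO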

Once created, the linked server can be easily used to access the remote data source and import the corresponding data into the SQL Server database table as shown in the T-SQL query below:

INSERT INTO [AdventureWorks].[dbo].[Employee_Main] 
([EMP_FirsrName],[EMP_LastName],[EMP_BirthDate],[EMP_PhoneNumber],[EMP_Address])
SELECT [EMP_FirsrName],[EMP_LastName],[EMP_BirthDate],[EMP_PhoneNumber],[EMP_Address] FROM TEST...Employee_Main$

SQL Server Export/Import Wizard

The SQL Server Export and Import wizard provides us with a simple interface for creating SQL Server Integration Services packages that can be used right away or saved for later use. These packages allow importing data from and exporting it to a wide variety of data sources, ranging from simple flat files to more complex database engines.

The Export and Import wizard can be used, for example, to import data from an Excel data source into an existing database table. The following steps describe this process:

  1. In SQL Server Management Studio’s Object Explorer, right-click on the AdventureWorks database that contains the table to which you will write the data. Then, choose the Import Data option from the Tasks submenu:
  2. The SQL Server Import and Export Wizard will be opened. In the Choose a Data Source window, specify the type of the data source. In this particular example, we will choose Microsoft Excel, and the full path for the desired Excel file. After specifying it, click the Next button:
  3. In the Choose a Destination window, you will be requested to specify the type of destination to which the source data will be written. In our example, it’s SQL Server database. You will also be asked to specify the server name, credentials and the name of the database in which the destination table is located. After providing all of this information, click Next to proceed:
  4. In the Specify Table Copy or Query window, you will have an option to either copy all data from the source to an existing or new table, or write a customized query to retrieve a specific range of data. When this is decided, click the Next button:
  5. In the Select Source Tables and Views window, specify the source sheet or table from the data source and the destination table to which the source data will be written. You will have the ability to preview the source, check the mapping of the columns between the source and destination tables, and specify if the data should be appended to an existing table or to a newly-generated table. Click the Next button to proceed:
  6. In the Save and Run Package Window, you have the option to save the SSIS package that is created within the Import and Export Wizard to run it immediately or save it for scheduling or running it later. Click the Next button to proceed:
  7. In the Complete the Wizard window, you will see a summary for the selected options and provided settings. Click the Finish button to start copying the data:
  8. After completing the copying process, the wizard will display the number of records copied to each table. If there are no errors, you can click the Close button to close the window.

You can see from the previous example that the Import and Export wizard can be easily used to copy data between different types of data sources and destinations, without considering the relationships between the tables. If a table has a Foreign Key constraint that references one of the columns of a parent table, you need to manage the order of the data copying process manually. In addition, to start a new data import process, you will need to close the wizard and open it again.

dbForge Data Pump

dbForge Data Pump is a third-party tool included with the SQL Server Management Studio as an add-in. It can be used to easily migrate data between SQL Server and different external data sources. It allows you to import and export from 10+ commonly-used data formats (Text, MS Excel, XML, CSV, JSON etc.) with a number of advanced options and templates that can be utilized for recurring scenarios.

After downloading dbForge Data Pump from the Devart download page, you will need to install it to the machine as an SSMS tool add-in, as shown below:

Data Export

dbForge Data Pump can be used within SSMS to export data from a SQL Server database table or to view data in 13 supported types of formats including HTML, Text, MS Excel, MS Excel 2007, MS Access, RTF, PDF, XML, CSV, ODBC, DBF (Foxpro, dBase III, dBase IV, dBase 7), SQL and JSON.

To export data from a specific SQL Server table, follow the steps below:

  1. Connect to the SQL Server instance through SSMS. From the Objects Explorer, right-click on the database that contains the source table, and choose Export Data from the Data Pump submenu:
  2. In the Export Format window, choose the format of the file to which the source table data will be exported, or load a previously saved template. In our example, we will choose MS Excel 2007. Click the Next button to proceed:
  3. In the Source window, review the connection details and select the table(s) that contain the source data to be exported. Click Next to proceed:
  4. In the Options window, check and set the different customizable options for the exported data’s table grid. Then, click Next to proceed:
  5. In the Data Formats window, under the Columns tab, select the columns that will be exported and review their data types. Also, in the Formats tab, set up the format of different data types for the columns you selected. Click the Next button to proceed:
  6. In the Page Print Settings window, you can set the different options for the exported file pages for printing purposes. Click Next to proceed:
  7. In the Exported Rows window, specify a range of rows to be exported from the source table or choose to export all table rows (a quick way to preview such a range with T-SQL is sketched after this list). Click Next to proceed:
  8. The Error Handling window provides a number of options for specifying how errors should be processed if they occur and where to log the execution results. Click Save Template to save the settings as a template for future use, or press the Export button to start the data export process, as shown below: During the export, Data Pump displays informational messages with the number of records copied from the current table, as shown below: When the export completes successfully, a summary of the export process is displayed, with options to open the generated file or its containing folder and to export more data from the same page without closing and reopening the tool:
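If you are not sure which range to choose in the Exported Rows window, a quick T-SQL check (with a hypothetical dbo.SourceTable and its Id key column) can show how many rows the source holds and preview a specific slice before exporting:

-- Total number of rows in the source table
SELECT COUNT(*) AS TotalRows FROM dbo.SourceTable;

-- Preview rows 101-200, ordered by the key column
SELECT *
FROM dbo.SourceTable
ORDER BY Id
OFFSET 100 ROWS FETCH NEXT 100 ROWS ONLY;
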
Data Import

dbForge Data Pump can be used within SSMS to import data into a SQL Server database table. The following 9 widely-used data formats are supported: Text, MS Excel, MS Excel 2007, MS Access, XML, CSV, ODBC, DBF (Foxpro, dBase III, dBase IV, dBase 7) and JSON.

To import data from an Excel sheet to a specific SQL Server table, follow the steps below:

  1. Connect to the SQL Server instance using SSMS. From Object Explorer, right-click the database that contains the destination table, and choose Import Data from the Data Pump submenu:
  2. In the Source File window, choose the format of the source file or simply load a previously-saved template to perform the import process. In our example, we will use Microsoft Excel 2007 and click the Browse button to locate the Excel sheet. Click the Next button to proceed:
  3. In the Destination window, review the connection information and specify whether the data should be imported to a new table or an existing table from the provided list. Click Next to proceed:
Read Full Article
Introduction

Every backup operation in SQL Server is written to the SQL Server error log. This includes transaction log backups, even when they occur as part of a transaction log shipping configuration. Logging every single log backup can clutter the SQL Server error log and may need to be managed. Trace flag 3226 is used to suppress such logging, and in this article we will demonstrate how this can be done.
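To get an idea of how many of these entries accumulate, you can filter the current error log for log backup messages. The sketch below uses xp_readerrorlog, an undocumented but widely used extended stored procedure; the search string is an assumption based on the usual wording of these messages:

-- Read the current SQL Server error log (file 0, log type 1 = error log),
-- filtering for entries that contain the assumed "Log was backed up" wording
EXEC master.dbo.xp_readerrorlog 0, 1, N'Log was backed up';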

Configuring Transaction Log Shipping

In order to demonstrate the value of this trace flag, we will implement a small log shipping configuration using a SQL Server 2017 database called Practice2017. Our primary instance is EPG-KIGIRI\I2017, and we ship this database's transaction logs to the SQL Server 2019 instance EPG-KIGIRI\I2019 (see Fig. 1). The entire configuration script is shown in Listing 1.

Fig. 1 Log Shipping Configuration on Primary

Code
-- Listing 1: Transaction Log Shipping Configuration Script

-- Execute the following statements on the primary to configure log shipping 
-- for database [EPG-KIGIRI\I2017].[Practice2017],
-- The script is to be run on the primary in the context of the [msdb] database.  
------------------------------------------------------------------------------------- 
-- Adding the log shipping configuration 

-- ****** Begin: Script to be run on the primary: [EPG-KIGIRI\I2017] ******


DECLARE @LS_BackupJobId	AS uniqueidentifier 
DECLARE @LS_PrimaryId	AS uniqueidentifier 
DECLARE @SP_Add_RetCode	As int 
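-- Note (added for clarity): in the call below, @backup_retention_period is in minutes
-- (1440 = 24 hours), @backup_compression = 2 means "use the server-level default",
-- and @backup_threshold = 60 raises an alert if no log backup occurs within 60 minutes.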


EXEC @SP_Add_RetCode = master.dbo.sp_add_log_shipping_primary_database 
		@database = N'Practice2017' 
		,@backup_directory = N'G:\Backup\LogShip\' 
		,@backup_share = N'\\Epg-kigiri\g$\Backup\LogShip\' 
		,@backup_job_name = N'LSBackup_Practice2017' 
		,@backup_retention_period = 1440
		,@backup_compression = 2
		,@monitor_server = N'EPG-KIGIRI\I2017' 
		,@monitor_server_security_mode = 1 
		,@backup_threshold = 60 
		,@threshold_alert_enabled = 1
		,@history_retention_period = 2880 
		,@backup_job_id = @LS_BackupJobId OUTPUT 
		,@primary_id = @LS_PrimaryId OUTPUT 
		,@overwrite = 1 


IF (@@ERROR = 0 AND @SP_Add_RetCode = 0) 
BEGIN 

DECLARE @LS_BackUpScheduleUID	As uniqueidentifier 
DECLARE @LS_BackUpScheduleID	AS int 
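-- Note (added for clarity): in sp_add_schedule, @freq_type = 4 means a daily schedule,
-- and @freq_subday_type = 4 (minutes) with @freq_subday_interval = 5 runs the backup job
-- every 5 minutes; the copy and restore schedules further down use a 15-minute interval.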


EXEC msdb.dbo.sp_add_schedule 
		@schedule_name =N'LSBackupSchedule_EPG-KIGIRI\I20171' 
		,@enabled = 1 
		,@freq_type = 4 
		,@freq_interval = 1 
		,@freq_subday_type = 4 
		,@freq_subday_interval = 5 
		,@freq_recurrence_factor = 0 
		,@active_start_date = 20190113 
		,@active_end_date = 99991231 
		,@active_start_time = 0 
		,@active_end_time = 235900 
		,@schedule_uid = @LS_BackUpScheduleUID OUTPUT 
		,@schedule_id = @LS_BackUpScheduleID OUTPUT 

EXEC msdb.dbo.sp_attach_schedule 
		@job_id = @LS_BackupJobId 
		,@schedule_id = @LS_BackUpScheduleID  

EXEC msdb.dbo.sp_update_job 
		@job_id = @LS_BackupJobId 
		,@enabled = 1 


END 


EXEC master.dbo.sp_add_log_shipping_primary_secondary 
		@primary_database = N'Practice2017' 
		,@secondary_server = N'EPG-KIGIRI\I2019' 
		,@secondary_database = N'Practice2017' 
		,@overwrite = 1 

-- ****** End: Script to be run on the primary: [EPG-KIGIRI\I2017] ******


-- Execute the following statements on the secondary to configure log shipping 
-- for database [EPG-KIGIRI\I2019].[Practice2017],
-- the script to be run on the secondary in the context of the [msdb] database. 
------------------------------------------------------------------------------------- 
-- Adding the log shipping configuration 

-- ****** Begin: Script to be run on the secondary: [EPG-KIGIRI\I2019] ******


DECLARE @LS_Secondary__CopyJobId	AS uniqueidentifier 
DECLARE @LS_Secondary__RestoreJobId	AS uniqueidentifier 
DECLARE @LS_Secondary__SecondaryId	AS uniqueidentifier 
DECLARE @LS_Add_RetCode	As int 


EXEC @LS_Add_RetCode = master.dbo.sp_add_log_shipping_secondary_primary 
		@primary_server = N'EPG-KIGIRI\I2017' 
		,@primary_database = N'Practice2017' 
		,@backup_source_directory = N'\\Epg-kigiri\g$\Backup\LogShip\' 
		,@backup_destination_directory = N'G:\Backup\LogShipCopy\' 
		,@copy_job_name = N'LSCopy_EPG-KIGIRI\I2017_Practice2017' 
		,@restore_job_name = N'LSRestore_EPG-KIGIRI\I2017_Practice2017' 
		,@file_retention_period = 1440 
		,@monitor_server = N'EPG-KIGIRI\I2017' 
		,@monitor_server_security_mode = 1 
		,@overwrite = 1 
		,@copy_job_id = @LS_Secondary__CopyJobId OUTPUT 
		,@restore_job_id = @LS_Secondary__RestoreJobId OUTPUT 
		,@secondary_id = @LS_Secondary__SecondaryId OUTPUT 

IF (@@ERROR = 0 AND @LS_Add_RetCode = 0) 
BEGIN 

DECLARE @LS_SecondaryCopyJobScheduleUID	As uniqueidentifier 
DECLARE @LS_SecondaryCopyJobScheduleID	AS int 


EXEC msdb.dbo.sp_add_schedule 
		@schedule_name =N'DefaultCopyJobSchedule' 
		,@enabled = 1 
		,@freq_type = 4 
		,@freq_interval = 1 
		,@freq_subday_type = 4 
		,@freq_subday_interval = 15 
		,@freq_recurrence_factor = 0 
		,@active_start_date = 20190114 
		,@active_end_date = 99991231 
		,@active_start_time = 0 
		,@active_end_time = 235900 
		,@schedule_uid = @LS_SecondaryCopyJobScheduleUID OUTPUT 
		,@schedule_id = @LS_SecondaryCopyJobScheduleID OUTPUT 

EXEC msdb.dbo.sp_attach_schedule 
		@job_id = @LS_Secondary__CopyJobId 
		,@schedule_id = @LS_SecondaryCopyJobScheduleID  

DECLARE @LS_SecondaryRestoreJobScheduleUID	As uniqueidentifier 
DECLARE @LS_SecondaryRestoreJobScheduleID	AS int 


EXEC msdb.dbo.sp_add_schedule 
		@schedule_name =N'DefaultRestoreJobSchedule' 
		,@enabled = 1 
		,@freq_type = 4 
		,@freq_interval = 1 
		,@freq_subday_type = 4 
		,@freq_subday_interval = 15 
		,@freq_recurrence_factor = 0 
		,@active_start_date = 20190114 
		,@active_end_date = 99991231 
		,@active_start_time = 0 
		,@active_end_time = 235900 
		,@schedule_uid = @LS_SecondaryRestoreJobScheduleUID OUTPUT 
		,@schedule_id = @LS_SecondaryRestoreJobScheduleID OUTPUT 

EXEC msdb.dbo.sp_attach_schedule 
		@job_id = @LS_Secondary__RestoreJobId 
		,@schedule_id = @LS_SecondaryRestoreJobScheduleID  


END 


DECLARE @LS_Add_RetCode2	As int 


IF (@@ERROR = 0 AND @LS_Add_RetCode = 0) 
BEGIN 

EXEC @LS_Add_RetCode2 = master.dbo.sp_add_log_shipping_secondary_database 
		@secondary_database = N'Practice2017' 
		,@primary_server = N'EPG-KIGIRI\I2017' 
		,@primary_database = N'Practice2017' 
		,@restore_delay = 0 
		,@restore_mode = 0 
		,@disconnect_users	= 0 
		,@restore_threshold = 45   
		,@threshold_alert_enabled = 1 
		,@history_retention_period	= 2880 
		,@overwrite = 1 

END 


IF (@@error = 0 AND @LS_Add_RetCode = 0) 
BEGIN 

EXEC msdb.dbo.sp_update_job 
		@job_id = @LS_Secondary__CopyJobId 
		,@enabled = 1 

EXEC msdb.dbo.sp_update_job 
		@job_id = @LS_Secondary__RestoreJobId 
		,@enabled = 1 

END 


-- ****** End: Script to be run on the secondary: [EPG-KIGIRI\I2019] ******

The backup job is scheduled to run every five minutes, and the copy and restore jobs every fifteen minutes; each time a log backup (or a restore on the secondary) runs, the database engine also writes an entry to the error log. This may be considered superfluous, as we can easily track the log backups using the SQL Agent job history or the backup history in msdb, as sketched below.
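A minimal sketch of such a check, run against msdb on the primary, lists the most recent transaction log backups of the log-shipped database:

-- Most recent transaction log backups of the Practice2017 database
SELECT TOP (10)
    bs.database_name,
    bs.backup_start_date,
    bs.backup_finish_date,
    bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = N'Practice2017'
  AND bs.type = 'L'                              -- 'L' = transaction log backup
ORDER BY bs.backup_finish_date DESC;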

Fig. 2 Log Shipping Backup Entries in SQL Error Log

Enabling Trace Flag 3226

Typically, we can enable trace flags either for the current connection or globally. We can enable them with T-SQL or add them to the SQL Server startup parameters. The startup parameters approach is recommended for trace flags that should apply to the whole instance. To add trace flag 3226 to the SQL Server startup parameters, follow these steps:

  1. Run SQL Server Configuration Manager as Administrator

Fig. 3 Run SQL Server Configuration Manager as Administrator

  2. Right-click the desired instance and click Properties.

Fig. 4 Open Instance Properties

  3. Select the Startup Parameters tab.

Fig. 5 Startup Parameters

  4. In the text box labeled Specify a startup parameter, type -T3226 and click Add.

Fig. 6 Adding Trace Flag 3226 as a Startup Parameter

  5. Once -T3226 has been added to the list of Existing Parameters, click OK. Restart the SQL Server service so that the new startup parameter takes effect. Alternatively, you can enable the trace flag with T-SQL, as shown in Listing 2:

-- Listing 2: Enable a Trace Flag

-- Turn on trace flag 3226 for the current connection
DBCC TRACEON (3226);
GO

-- Turn on trace flag 3226 globally
-- (trace flags enabled with DBCC TRACEON do not persist across a restart;
-- use the -T startup parameter for a permanent, instance-wide setting)
DBCC TRACEON (3226, -1);
GO

The SQL Server error log shows that the trace flag is enabled on startup. (See Fig. 8)

Fig. 8 Startup Parameters Indicated in the SQL Server error log

Viewing the Results

Once it is confirmed that the trace flag is working, we find that log backups associated with log shipping (or any other log backups) are no longer written to the SQL Server error log. Pay close attention to Fig. 9: the log backups recorded in the backup job history stop appearing in the error log from the point at which we enabled trace flag 3226 (about 8:15 PM).

Fig. 9 No Log Backups Recorded in SQL Server Error Log

If we also enable trace flag 3226 on the secondary instance EPG-KIGIRI\I2019 (at about 8:30 PM), we find that the log restore operations are no longer written to the error log either. (See Fig. 10)

Conclusion

In this article, we have demonstrated how to use trace flag 3226 to suppress the logging of transaction log backups on the primary instance and of the transaction log restores performed by log shipping on the secondary instance. This is very useful for avoiding unnecessary logging that could “hide” real issues popping up in the error log.

References

DBCC TRACEON - Trace Flags (Transact-SQL)

The post Using Trace Flag 3226 to Suppress Log Backup Logging appeared first on {coding}Sight.
