
Microsoft SQL Server can be one of the most resource-intensive server applications out there. Depending on the number of databases and the load on them, resource utilization can be heavy. If you don't routinely monitor the important metrics around that utilization, you risk slow performance and service downtime. Here are a few pointers for creating a PowerShell script to monitor SQL Server.

Windows has long had a great built-in resource for getting insights into performance metrics, both for the operating system itself and for many other applications: performance counters.

The Windows Performance Monitor is a staple of many IT professionals out there. Although the Windows Performance Monitor is a great tool to visually see performance statistics, it's not necessarily great for automation. This is where PowerShell comes in.

By using PowerShell to query performance statistics, you can use these numbers as triggers for other actions, such as automatically moving databases to faster storage or adding CPU and memory.

In this article, let's go over some of the most common performance metrics for SQL and how to query them with PowerShell.

Using Get-Counter

The PowerShell cmdlet you're going to get the most comfortable with is Get-Counter. Get-Counter is a cmdlet that allows you to query any number of performance metrics from Windows. Run it by itself and you'll get a list of various common performance counters and their current values.

Get-Counter output

Used without parameters, Get-Counter queries the local computer. We're going to be querying a SQL Server machine, so use the ComputerName parameter to specify your remote server.

PS> Get-Counter -ComputerName SQLSRV
Finding important performance counters

Against a remote machine, Get-Counter returns the same default counter values. However, we're going to look for some counters that are important to SQL Server. We'll be querying:

  • Avg. CPU Queue Length
  • Avg. Disk Queue Length
  • Memory Pages/Sec
  • Latch Wait Time
  • Buffer Page Life Expectancy
  • Average Lock Wait Time

To get the current values for each of these counters, we'll first need to determine which counter set each one belongs to and its proper name. For any metric that can have multiple instances, I'm using an asterisk to query every available instance or _total to get a sum across all instances.

  • Avg. CPU Queue Length = \System\Processor Queue Length
  • Avg. Disk Queue Length = \PhysicalDisk(*)\Avg. Disk Queue Length
  • Memory Pages/Sec = \Memory\Pages/sec
  • Latch Wait Time = \SQLServer:Latches\Average Latch Wait Time (ms)
  • Buffer Page Life Expectancy = \SQLServer:Buffer Manager\Page life expectancy
  • Average Lock Wait Time = \SQLServer:Locks(_total)\Average Wait Time (ms)

Now that we have the correct counter names and the counter sets they belong to, we can begin to build some PowerShell code to query them.

Querying the SQL performance counters

To get an overall picture of performance and remove the possibility of a one-time spike in any of the counters, I'm going to query each of the counters ten times. I'll use the default sample interval of one second, which means I'm going to query each counter ten times over ten seconds. This should give me a more realistic number than simply querying each counter a single time.

To do this, I'll use the MaxSamples parameter on Get-Counter. This will allow me to specify the maximum number of times each counter will be queried. I'll gather up all of these figures and then take an average of each when I'm finished. Also, because I just want the values and don't necessarily care about the formatting, I'll specify the CounterSamples property directly and the CookedValue property as part of that. This gives me only the actual integer values.
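As a quick sanity check, here's a minimal sketch of querying a single counter this way on the local machine (add the ComputerName parameter to target your remote SQL server):

```powershell
# Query one counter ten times at the default one-second interval, keeping
# only the raw numeric readings via CounterSamples.CookedValue
$samples = (Get-Counter -Counter '\Memory\Pages/sec' -MaxSamples 10).CounterSamples.CookedValue

# Average the readings to smooth out momentary spikes
($samples | Measure-Object -Average).Average
```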

Because I'll be querying multiple performance counters, it's good practice to group them into an array. I've done so below, assigning it to the variable $counters. I've chosen to use hashtables with the "friendly" name of the counter and the actual counter name. I then have something I can read with a foreach loop.

$counters = @(
    @{
        'Name'        = 'Avg. CPU Queue Length'
        'CounterName' = '\System\Processor Queue Length'
    }
    @{
        'Name'        = 'Avg. Disk Queue Length'
        'CounterName' = '\PhysicalDisk(*)\Avg. Disk Queue Length'
    }
    @{
        'Name'        = 'Memory Pages/Sec'
        'CounterName' = '\Memory\Pages/sec'
    }
    @{
        'Name'        = 'Latch Wait Time'
        'CounterName' = '\SQLServer:Latches\Average Latch Wait Time (ms)'
    }
    @{
        'Name'        = 'Buffer Page Life Expectancy'
        'CounterName' = '\SQLServer:Buffer Manager\Page life expectancy'
    }
    @{
        'Name'        = 'Average Lock Wait Time'
        'CounterName' = '\SQLServer:Locks(_total)\Average Wait Time (ms)'
    }
)

Once I have the array of hashtables defined, I'll then begin to read each counter, gather up all ten samples and add the average value to the counter instance itself.

Finally, I'm creating a PowerShell custom object so the output can be easily used for other purposes, if necessary.

$sqlServerName = 'LABSQL'
foreach ($counter in $counters) {
    $values = (Get-Counter -ComputerName $sqlServerName -Counter $counter.CounterName -MaxSamples 10).CounterSamples.CookedValue
    $counter.Add('Value', ($values | Measure-Object -Average).Average)
    [pscustomobject]$counter
}

This ends up getting us a nice report showing performance counter statistics for our SQL server!
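As mentioned earlier, these numbers become useful as triggers for other actions. Below is a rough, self-contained sketch of that idea; the hashtables stand in for the values the loop above computed, and the threshold numbers are purely illustrative, so tune them for your environment.

```powershell
# Sample values standing in for the averages computed by the loop above
$counters = @(
    @{ 'Name' = 'Avg. Disk Queue Length';      'Value' = 3.2 }
    @{ 'Name' = 'Buffer Page Life Expectancy'; 'Value' = 250 }
)

# Compare each average against an illustrative threshold and collect alerts
$alerts = foreach ($counter in $counters) {
    switch ($counter.Name) {
        'Avg. Disk Queue Length'      { if ($counter.Value -gt 2)   { "$($counter.Name) is high: $($counter.Value)" } }
        'Buffer Page Life Expectancy' { if ($counter.Value -lt 300) { "$($counter.Name) is low: $($counter.Value)" } }
    }
}

# Surface the alerts; this is where you might send an email or kick off remediation
$alerts | ForEach-Object { Write-Warning $_ }
```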

Summary

You can see that working with performance counters in PowerShell is pretty straightforward. The hardest part for me was just trying to find the counter names themselves!
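If you find yourself hunting for counter names too, Get-Counter can list them for you. The sketch below assumes a default SQL Server instance; on a named instance, the counter sets show up as MSSQL$InstanceName:* instead.

```powershell
# List every counter set whose name mentions SQL Server
(Get-Counter -ListSet 'SQLServer:*').CounterSetName

# Drill into one set to see the exact counter paths it exposes
(Get-Counter -ListSet 'SQLServer:Buffer Manager').Counter
```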


PowerShell's real power comes through in its tool-making abilities. What do I mean by a tool? A tool, in this context, is a PowerShell script, a module, a function, whatever that helps you perform some management task. That task could be creating a report, gathering information about a computer, creating a company user account, and so on.

PowerShell shines as an automation language. In this huge blog post mini-course built as a PowerShell tutorial, you'll learn how to build robust automation tools using PowerShell and a little time. This PowerShell tutorial will teach you from having nothing all the way up to a PowerShell script that can query and return information across all of your servers.

Not only will you learn how to write a PowerShell script, but you'll also learn the mindset behind writing a script like this. You'll see how a PowerShell expert thinks and how they approach writing a PowerShell tool.

PowerShell Tutorial Instructions and Structure

This mini-course is meant to be followed along from top to bottom. If you plan on building the tool completely, please don't skip a section, because all sections depend on one another. I also highly encourage you to follow along in this PowerShell tutorial mini-course even if you're on mobile.

You will also find asides called challenges. These challenges will ask you to take the task one step further on your own. These challenges give you the opportunity to experiment on your own and try to make this tool even better than the examples we're covering here.

Look for challenges using the CHALLENGE keyword like so:

CHALLENGE: Try to write MOAR PowerShell!

Finally, you'll occasionally see asides that stop to explain an important concept but have no bearing on the tutorial itself. These asides are pointers to take into consideration. Asides will look similar to challenges but will be prefaced with ASIDE:.

ASIDE: Some informative piece of information.
Table of Contents

This mini-course will be broken down into the following sections. These sections represent the typical manner in which someone sits down to build a real-life PowerShell tool.

Here's a table of contents to get an idea as to what you're in for. Click each link to navigate to that section.

Each post will contain code snippets which will always represent the PowerShell script you'll be working with. There may be instances where you will see output the script returns. In those cases, it will be clearly defined.

ASIDE: This PowerShell tutorial mini-course is a sample from my book PowerShell for SysAdmins. If you want to learn PowerShell or pick up some tricks of the trade, be sure to check it out!

Prerequisites

This blog post series is going to be hands-on all of the way. To follow along to the T, make sure you've got a few prereqs in order:

  • Beginner to beginner-intermediate knowledge of PowerShell
  • Your scripting computer is a member of an Active Directory (AD) domain
  • Windows PowerShell 5.1 (All code was tested using this version)
  • You are logged in with an AD user that has rights to query computer accounts in AD. If you can run Get-AdUser -Identity $env:USERNAME without an error, you're good.
  • You can run Get-ChildItem -Path "\\<someserver>\c$" on all servers you intend to query
  • You can run Get-CimInstance -ComputerName <someserver> -ClassName 'Win32_OperatingSystem' on all servers you intend to query
  • You can run Get-Service -ComputerName <someserver> on all servers you intend to query
  • You have RSAT installed. On Windows Server, you can get this by running Install-WindowsFeature RSAT-AD-PowerShell; on Windows 10, enable the RSAT: Active Directory optional feature.
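Before diving in, a quick smoke test of these prerequisites against a single server might save you some debugging later. This is just a sketch; SRV1 below is a placeholder, so substitute one of your own server names.

```powershell
$server = 'SRV1'   # placeholder -- use one of your own servers

# Each line exercises one prerequisite; any failure throws an error
Get-ADUser -Identity $env:USERNAME | Out-Null
Get-ChildItem -Path "\\$server\c$" | Out-Null
Get-CimInstance -ComputerName $server -ClassName 'Win32_OperatingSystem' | Out-Null
Get-Service -ComputerName $server | Out-Null
Write-Host 'All prerequisite checks passed.'
```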
Script Scaffolding

Since we're going to be building scripts in this post and not just executing code at the console, we first need to create a new PowerShell script. Go ahead and create a script called Get-ServerInformation.ps1. I've placed mine in a folder at C:\ServerInventory. We'll be continually adding code to this script throughout the post.

Defining the Final Output

Before we get started coding, it's always important to make a "back of the napkin" plan of what you want the output to look like when you're done. This simple sketch is a great way to measure progress, especially when building large scripts.

For this server inventory script, you've decided that when the script ends, you'd like an output to the PowerShell console that looks like this with an example server.

ServerName    IPAddress    OperatingSystem    AvailableDriveSpace (GB)   Memory (GB)    UserProfilesSize (GB)    StoppedServices
MYSERVER      x.x.x.x      Windows.....       10                         4              50.4                     service1,service2,service3

Now that you know what you want to see let's now dive into how to make it happen.

Discovery and Script Input

Before we can start gathering information, we first need to decide how to tell our script what to query. For this project, we're going to be collecting information from multiple servers.

Because I'm assuming most readers have Active Directory in their environment, I'll be querying Active Directory for server names. You could instead pull server names from text files, an array in the PowerShell script, the registry, WMI, databases, or anywhere else you can think of. It doesn't matter: as long as you can get an array of strings representing server names into your script somehow, you're good to go.

We've got all of our servers in a single OU. If you don't, that's OK; you will just have to read computer objects in each of them with a loop. Our first task is reading all of the computer objects in the OU.

In my environment, all of my servers are in the Servers OU, and my domain is called powerlab.local.

To retrieve computer objects from AD, I'll use the Get-AdComputer command. This command will return all of the AD computer objects for the servers I'm interested in.

$serversOuPath = 'OU=Servers,DC=powerlab,DC=local'
$servers = Get-ADComputer -SearchBase $serversOuPath -Filter *

The $servers variable will contain AD objects for each computer account as shown below.

DistinguishedName : CN=SQLSRV1,OU=Servers,DC=Powerlab,DC=local
DNSHostName       : SQLSRV1.Powerlab.local
Enabled           : True
Name              : SQLSRV1
ObjectClass       : computer
ObjectGUID        : c288d6c1-56d4-4405-ab03-80142ac04b40
SamAccountName    : SQLSRV1$
SID               : S-1-5-21-763434571-1107771424-1976677938-1105
UserPrincipalName :

DistinguishedName : CN=WEBSRV1,OU=Servers,DC=Powerlab,DC=local
DNSHostName       : WEBSRV1.Powerlab.local
Enabled           : True
Name              : WEBSRV1
ObjectClass       : computer
ObjectGUID        : 3bd2da11-4abb-4eb6-9c71-7f2c58594a98
SamAccountName    : WEBSRV1$
SID               : S-1-5-21-763434571-1107771424-1976677938-1106
UserPrincipalName :
$servers variable value with AD computer accounts
CHALLENGE: Instead of using Active Directory to pull computer names, create a text file or CSV file of server names and see if you can use that to return the appropriate server names assigned to the $servers variable. If you do it right, you should be able to switch out Active Directory for a text file with a single line of code.

Notice that instead of setting the SearchBase parameter argument directly, I've defined a variable. Get used to doing this. Every time you've got a specific configuration item like this, it's always a good idea to put it into a variable.

You never know when you'll need to use that value again somewhere else. Also, notice that I'm returning the output of Get-AdComputer to a variable as well. Since we're going to be doing some other things against these servers, we'll want to reference all of the server names later in the script.

This returns the AD objects, but we're just looking for the server name. We can narrow this down by only returning the Name property by using Select-Object.

$servers = Get-ADComputer -SearchBase $serversOuPath -Filter * |
Select-Object -ExpandProperty Name

The $servers value now looks like:

SQLSRV1
WEBSRV1
Querying Each Server

Now that we have a way to gather up a list of the servers we're interested in, we can soon begin to iterate over each of these servers to gather some information to match the output we've defined earlier. But first, we need to create a loop to make it possible to query every server in our array without repeating ourselves.

To not make any assumptions that your code will work immediately (it usually doesn't), I like to start slow and "test" each piece as I'm building it. In this example, instead of diving in and figuring out all the other code to make this happen, I'll put a Write-Host reference in just to ensure the script is returning the server name as I expect.

foreach ($server in $servers) {
    Write-Host $server
}
ASIDE: Normally it's not recommended to use Write-Host but it works great when scaffolding out code like this or testing. Here's a great article explaining why.

Once I run my script, I can see that the value of $server gets returned for each server in my array.

PS> C:\ServerInventory\Get-ServerInformation.ps1

SQLSRV1
WEBSRV1

Great! We've got a loop set up that's iterating over each of the server names in our array. One task complete!

Building a Common Object Type

When most people first start writing PowerShell code, they lack context. They don't know what could happen by the time they finally get that perfect script built. As a result, they create scripts in a zig-zag pattern: bouncing all over the place, linking things together, going back, doing it over again, and so on. If I didn't take a moment now to explain the purposeful progression of code in this post, I'd be doing you a disservice.

I know from experience just by looking at the output, we're going to have to use a few different commands that pull information from various sources like WMI, the file system, and Windows services. Each of these sources is going to return a different kind of object that would look terrible if combined.

If we'd just try to jump in and start writing code to gather all of this stuff, the output would look something like this example below.

Below I'm querying a service and trying to get memory from a server at the same time. The objects are different, the properties on those objects are different, and it looks terrible if you attempt to merge all of that output.

Status   Name               DisplayName
------   ----               -----------
Running  wuauserv           Windows Update

__GENUS              : 2
__CLASS              : Win32_PhysicalMemory
__SUPERCLASS         : CIM_PhysicalMemory
__DYNASTY            : CIM_ManagedSystemElement
__RELPATH            : Win32_PhysicalMemory.Tag="Physical Memory 0"
__PROPERTY_COUNT     : 30
__DERIVATION         : {CIM_PhysicalMemory, CIM_Chip...
__SERVER             : DC
__NAMESPACE          : root\cimv2
__PATH               : \\DC\root\cimv2:Win32_PhysicalMemory...

Let me save you some time now, and hopefully in the future, by getting you to think ahead before diving in.

Since we'll be combining different kinds of output, we have to create our own type of output. And no, it's not going to be as complicated as you may think. Objects of the PSCustomObject type are generic objects in PowerShell that allow you to add your own properties easily and are perfect for what we're doing here.

To get an output like we want for each server, every object that's returned from our script must be of the same type. Since one command can't pull all of this information and "convert" these items to a particular object type, we can do it on our own.

We know the headers of the output we need and, by now, I hope you understand that these "headers" will always be object properties. Let's create a custom object with the properties we'd like to see in the output.

I've called this object $output because this is the object that our script is going to return after we've populated all of its properties.

Below I'm creating a hashtable and casting it to a pscustomobject object to show you what the final output will be. However, in the script you'll be creating, you'll first create a hashtable, add some things to it and you'll then cast it to a pscustomobject object to be returned to the console.

$output = [pscustomobject]@{
    'ServerName'                  = $null
    'IPAddress'                   = $null
    'OperatingSystem'             = $null
    'AvailableDriveSpace (GB)'    = $null
    'Memory (GB)'                 = $null
    'UserProfilesSize (GB)'       = $null
    'StoppedServices'             = $null
}
ASIDE: Casting is a common programming term that refers to "converting" one object type to another. In this instance, you are taking a hashtable with key/value pairs and making it an object of type pscustomobject, with the hashtable keys as object properties and the hashtable values as object property values. For a breakdown of how casting works, check out Using PowerShell to cast objects.

If we copy this to the console and then return it with the formatting cmdlet Format-Table, we can see the headers we're looking for.

PS> $output | Format-Table -AutoSize

ServerName IPAddress OperatingSystem AvailableDriveSpace (GB) Memory (GB) UserProfilesSize (GB) StoppedServices
---------- --------- --------------- ------------------------ ----------- --------------------- ---------------
The Format-Table command is one of a few format commands in PowerShell that are meant to be used as the last command in the pipeline. They transform current output and display it differently. In this instance, I'm telling PowerShell to transform my object output into a table format and auto size the rows based on the width of the console.

Once we've got our custom output object defined, we can now add this inside of our loop to make every server return one. Since we already know the server name, we can already set this object property.

$serversOuPath = 'OU=Servers,DC=powerlab,DC=local'
$servers = Get-ADComputer -SearchBase $serversOuPath -Filter * |
Select-Object -ExpandProperty Name

foreach ($server in $servers) {
    $output = [ordered]@{
        'ServerName'                  = $null
        'IPAddress'                   = $null
        'OperatingSystem'             = $null
        'AvailableDriveSpace (GB)'    = $null
        'Memory (GB)'                 = $null
        'UserProfilesSize (GB)'       = $null
        'StoppedServices'             = $null
    }
    $output.ServerName = $server
    [pscustomobject]$output
}

Notice that instead of immediately casting the hashtable to a pscustomobject object via [pscustomobject]@{}, I've created the hashtable first; when I'm finished modifying it, I then cast it to a pscustomobject.

ASIDE: Notice the [ordered] type accelerator in front of the hashtable. When you create a hashtable with just @{}, PowerShell will not maintain the order of the keys if the hashtable is modified in any way. To guarantee the keys stay in the order you initially defined them, you can preface the @{} hashtable declaration with [ordered]. This ensures all keys stay in the same order.
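Here's a tiny demonstration of that difference; nothing here is specific to our script:

```powershell
# With [ordered], keys always enumerate exactly as declared
$ordered = [ordered]@{ 'First' = 1; 'Second' = 2; 'Third' = 3 }
@($ordered.Keys)   # First, Second, Third -- guaranteed order

# With a plain @{} hashtable, key order is an implementation detail
$plain = @{ 'First' = 1; 'Second' = 2; 'Third' = 3 }
@($plain.Keys)     # order not guaranteed
```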

Since we only care about the object being that type when it's output, it's simpler to keep the property values in a hashtable first and then convert it to the pscustomobject at the end of each loop iteration.

I can now rerun the script and you can see we've already got some information in there. We are well on our way.

PS> C:\ServerInventory\Get-ServerInformation.ps1 | Format-Table -AutoSize

ServerName UserProfilesSize (GB) AvailableDriveSpace (GB) OperatingSystem StoppedServices IPAddress Memory (GB)
---------- --------------------- ------------------------ --------------- --------------- --------- -----------
SQLSRV1
WEBSRV1
The Tool (Thus Far)

If you've followed all instructions in this section, your server inventory script will look like the below example:

$serversOuPath = 'OU=Servers,DC=powerlab,DC=local'
$servers = Get-ADComputer -SearchBase $serversOuPath -Filter * |
Select-Object -ExpandProperty Name

foreach ($server in $servers) {
    $output = [ordered]@{
        'ServerName'                  = $null
        'IPAddress'                   = $null
        'OperatingSystem'             = $null
        'AvailableDriveSpace (GB)'    = $null
        'Memory (GB)'                 = $null
        'UserProfilesSize (GB)'       = $null
        'StoppedServices'             = $null
    }
    $output.ServerName = $server
    [pscustomobject]$output
}
C:\ServerInventory\Get-ServerInformation.ps1

You've now scaffolded out a PowerShell script that will allow you to query various information from servers and return them all in a common object type (pscustomobject).

Next up, let's begin plugging in functionality to your script by enumerating files and returning folder size information from your servers.

Enumerating User Profiles

Now that we've got our foundation built, it's now a matter of figuring out how to pull the information we need from each server and return the appropriate properties. Let's now focus on getting the value for UserProfilesSize (GB) for each server.

Perhaps I know some Citrix or Remote Desktop Services servers have large user profiles. I'd like to see how much space is being consumed by all of these profiles located in C:\Users of each server.

Before we can gather this information for all servers and add it to the script, we must first figure out how to do it with one server. Since I know the folder path, I'll first see if I can query all files under all of the user profile folders on just one of my servers. When I run Get-ChildItem -Path \\WEBSRV1\c$\Users -Recurse -File, I can immediately see it's returning all of the files in all user profiles, but I don't see anything related to size.

PS> Get-ChildItem -Path \\WEBSRV1\c$\Users -Recurse -File

PSPath            : Microsoft.PowerShell.Core\FileSystem::...
PSParentPath      : Microsoft.PowerShell.Core\FileSystem::...
PSChildName       : file.log
PSProvider        : Microsoft.PowerShell.Core\FileSystem
PSIsContainer     : False
Mode              : -a----
VersionInfo       : File:             \\WEBSRV1\c$\Users\Adam\file.log
                    InternalName:
                    OriginalFilename:
                    FileVersion:
                    FileDescription:
                    Product:
                    ProductVersion:
                    Debug:            False
                    Patched:          False
                    PreRelease:       False
                    PrivateBuild:     False
                    SpecialBuild:     False
                    Language:

<SNIP>

To get all of the properties Get-ChildItem returns, you can pipe the output to Select-Object and specify all properties by using an asterisk as the value for Property, as shown below.

PS> Get-ChildItem -Path \\WEBSRV1\c$\Users -Recurse -File |
Select-Object -Property *

When you run the above command, you'll see a Length property. This is how large the file is in bytes. Knowing this, you'll now have to figure out how to add up all of these Length property values for all files in each server's C:\Users folder.

Luckily, PowerShell makes this easy with the Measure-Object cmdlet. This cmdlet accepts input from the pipeline and will automatically add up a value for a specific property.

PS> Get-ChildItem -Path '\\WEBSRV1\c$\Users\' -File -Recurse | Measure-Object -Property Length -Sum

Count    : 15
Average  :
Sum      : 6000000554
Maximum  :
Minimum  :
Property : Length

We now have a property (Sum) we can use to represent the total user profile size in our output.

At this point, we will incorporate this code into our loop and set the appropriate property in our $output hashtable. Since we just need the Sum property, we'll enclose the command in parentheses and just reference the Sum property as shown below.

foreach ($server in $servers) {
    $output = [ordered]@{
        'ServerName'                  = $null
        'IPAddress'                   = $null
        'OperatingSystem'             = $null
        'AvailableDriveSpace (GB)'    = $null
        'Memory (GB)'                 = $null
        'UserProfilesSize (GB)'       = $null
        'StoppedServices'             = $null
    }
    $output.ServerName = $server
    $output.'UserProfilesSize (GB)' = (Get-ChildItem -Path "\\$server\c$\Users\" -File -Recurse | Measure-Object -Property Length -Sum).Sum
    [pscustomobject]$output
}
Adding code for the UserProfilesSize (GB) property
ASIDE: Note that $output.ServerName and $output.'UserProfilesSize (GB)' are a little different. 'UserProfilesSize (GB)' is surrounded by single quotes and ServerName is not. Why is that? PowerShell allows you to create and reference object properties with spaces in them only if surrounded by single or double quotes. This tells PowerShell to treat anything inside of the quotes as the object property.

Your script now outputs:

PS> C:\ServerInventory\Get-ServerInformation.ps1 | Format-Table -AutoSize

ServerName UserProfilesSize (GB) AvailableDriveSpace (GB) OperatingSystem StoppedServices IPAddress Memory (GB)
---------- --------------------- ------------------------ --------------- --------------- --------- -----------
SQLSRV1                   6000036245
WEBSRV1                   6000000554
Finding user profile size

I now see the value for the user profile size property, but it's not in GB. We've calculated the sum of Length, and Length is in bytes. Converting measurements like this is easy in PowerShell: simply divide bytes by 1GB, and you've got your number.

$userProfileSize = (Get-ChildItem -Path "\\$server\c$\Users\" -File -Recurse | Measure-Object -Property Length -Sum).Sum

$output.'UserProfilesSize (GB)' = $userProfileSize / 1GB
Assigning the UserProfilesSize (GB) property

When this runs, you'll see that the values are now represented in GB but have a whole lot of decimals.

You don't need to see the total user profile size to 12 decimal places, so you can do a little rounding using the Round() method on the [Math] class, making the output look much better. The [Math] class is a .NET class (not specific to PowerShell) that you can reference in PowerShell. Type [Math] in your console followed by :: and start hitting the Tab key. You'll discover all kinds of methods for performing many different math operations.
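As a sketch, rounding the earlier example value to two decimal places might look like this:

```powershell
# Example byte count taken from the script output above
$userProfileSize = 6000000554

# Divide by the 1GB constant, then round to two decimal places
[Math]::Round($userProfileSize / 1GB, 2)   # → 5.59
```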

Once I round the output found in..


Graph is Microsoft's RESTful API that allows you to interface directly with Azure AD, Office 365, Intune, SharePoint, Teams, OneNote, and a whole lot more. By using the Invoke-RestMethod PowerShell cmdlet, we can connect and interact directly with the Graph API. The Invoke-RestMethod cmdlet sends HTTP and HTTPS requests to Representational State Transfer (REST) web services that return richly structured data. PowerShell formats the response based on the data type: for an RSS or ATOM feed, PowerShell returns the Item or Entry XML nodes; for JavaScript Object Notation (JSON) or XML, PowerShell converts (or deserializes) the content into objects.

In this article, I will walk you through setting up the Azure application, assigning the proper permissions, authenticating, and finally running queries against the Graph API. Once you understand how to properly authenticate and format queries, you will see how powerful Graph can be for you and your organization.
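To give a flavor of what's ahead, here's a minimal, hypothetical sketch of querying Graph with Invoke-RestMethod; $token is a placeholder for the OAuth access token the walkthrough shows you how to obtain.

```powershell
# $token is assumed to hold a valid OAuth access token for Graph
$headers = @{ Authorization = "Bearer $token" }

# Ask the v1.0 endpoint for users; Invoke-RestMethod deserializes the JSON
$response = Invoke-RestMethod -Uri 'https://graph.microsoft.com/v1.0/users' -Headers $headers -Method Get

# Collections come back under a 'value' property
$response.value | Select-Object displayName, userPrincipalName
```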

1. Application …

The post Connect and Navigate the Microsoft Graph API with PowerShell appeared first on The Lazy Administrator.


Hey folks...

It's been a little while since I have posted anything on my own blog and many of you may be wondering... why?

For the last six months or so, I have been up to my eye balls in projects and those projects have kept me from publishing blog posts for my website. I am certainly not complaining though, because the last six months have been super fulfilling. But, it's time that I get back to tending to my website and sharing snippets of code again.

What the heck have I been up to? Where do I begin...?

First off, many of you might be saying, "I see you all the time on Twitter, LinkedIn, Reddit, and Facebook PowerShell groups." Yes, you are right; I am very active in the PowerShell community. My absence has been mostly from my own blog. But it's MY BLOG, and my readers mean so much to me... I feel completely guilty for not being able to post on a regular basis, so I figured it's time to explain why I haven't been posting much lately.

Late last year I had a number of opportunities come my way, and I basically said YES to all of them. First, it started with being accepted as a speaker for the PowerShell Summit in April 2019. That was an amazing experience for me and one I will never forget. I presented on an open-source toolkit called PSADHealth that I built with some really talented individuals, and the code-writing process leading up to the Summit was a huge undertaking. If you're curious about the module, you can watch my talk on YouTube or check out the module we built in the PowerShell Gallery. The talk and the toolkit are two things I am super proud of, and I can't believe I actually did them!

After the Summit ended, I accepted a position with PowerShell.org as their new Director of Community Engagement. I'll be focusing on organizing PowerShell Saturdays and helping user groups expand. You'll see more from me with PowerShell.org towards the middle of August. My goal is to help user groups with big things, like getting more people to attend their local meetings, but also the smaller details, like how to run their groups effectively and market themselves. As I also mentioned, I'll be helping groups who wish to put on their own PowerShell Saturday events and being an advocate for the community. I am very excited to get started helping groups. If you are a user group organizer or have a desire to get involved with a user group, make sure you say hello! I would love to help you get started.

You may recall last year I participated in the PowerShell Conference Book vol 1 as a contributing author. It was a very exciting project and I was honored to be a part of it. This year, I signed up to once again contribute a chapter to vol 2 of the book. I'll be writing about PowerShell Remoting and Logging. I'll discuss all the options and configurations available and offer advice on best practices. Look for the book to be out in early Fall 2019!

On August 10th, I'll be presenting at the PowerShell on the River conference in Chattanooga, TN. My topic for this event will be a shortened version of the book topic I mentioned previously. The event is made up of 15 speakers who will be discussing all things PowerShell. It is a two-day event and is a fantastic way to get out and be a part of this great community. If you live on the East Coast, I encourage you to join us. See the link above for more info.

Many of you may know that I am also the co-leader of the Research Triangle PowerShell users group in Raleigh, NC. Our group has grown considerably over the last 18 months and we now have 1100 registered members. Phil Bossman is one of the other co-leaders of the group and we attended the PowerShell on the River event together last year. On the way home from that event, Phil was really motivated to host our own PowerShell Saturday event in 2019.

I am not sure how we pulled this together, but on September 21st, we'll be hosting our own PowerShell Saturday (and Sunday) at the NC State campus in Raleigh. We have 18 speakers lined up to present on Saturday. We follow that up with a special Sunday six-hour deep dive on cybersecurity with Fernando Tomlinson, who is a CyberOps pro for the US Dept of Defense.

In addition to all of that, I still write regularly for 4sysops.com and I am getting ready to start writing for other sites as well. So as you can now see, I have been really busy with special projects! Those projects will soon start to wind down and when they do, the articles on my site will start to resume. I hope to have something to share with everyone later this week.

I've been busy with projects, but I have also been creating a lot of great tools that I will share with everyone in the near future. These are tools I built to help me do my job better, and I am happy to share them with the community and my readers. Also, I have a new idea for a series of posts that I think readers will really enjoy!

There's lots more to come in the next few weeks! Thanks for being a loyal reader and sticking with me.


If you're a system administrator, one of your jobs is to install, upgrade and remove software from lots of systems. What if I told you you don't have to connect to each machine and check for installed software manually anymore? What if you could simply write a little PowerShell free of charge and get a nice-looking inventory report in minutes? Read on! In this blog post, I'm going to show you how to use PowerShell to get installed software on lots of computers at once.

Managing software on a single system is not a big deal but what if you've got hundreds of systems that you must maintain? Managing software at this scale soon becomes a nightmare if you don't have the proper tools.

Why PowerShell?

A lot of products exist in the marketplace to help you report on and manage software on multiple systems at once. Microsoft's System Center Configuration Manager, Dell KACE and Altiris products come to mind. These products work great but can sometimes be overkill. Perhaps you simply need a quick way to perform a software inventory of a few systems. In this case, I'd advise you to use PowerShell.

By using a PowerShell script, you can easily reach out to each of these systems, pull a real-time software inventory and generate a report in any fashion you'd like.

In this article, I'll show you a function that you can use today that allows you to point to one or more systems and generate a list of all the software that's installed on each.

Where Installed software lives

For reference, installed software exists in three locations:

  • the 32-bit system uninstall registry key
  • the 64-bit system uninstall registry key
  • each user profile's uninstall registry key.

Each software entry is typically defined by the software's globally unique identifier (GUID). Inside the GUID key is all the information about that particular piece of software. To get a complete list, PowerShell must enumerate each of these keys, read each registry value and parse through the results.

Since the code to correctly parse these values is way more than a single article can hold, I've prebuilt a function called Get-InstalledSoftware that wraps all of that code up for you as you can see below.

function Get-InstalledSoftware
{
	<#
	.SYNOPSIS
		Retrieves a list of all software installed on a Windows computer.
	.EXAMPLE
		PS> Get-InstalledSoftware
		
		This example retrieves all software installed on the local computer.
	.PARAMETER ComputerName
		If querying a remote computer, use the computer name here.
	
	.PARAMETER Name
		The software title you'd like to limit the query to.
	
	.PARAMETER Guid
		The software GUID you'd like to limit the query to.
	#>
	[CmdletBinding()]
	param (
		
		[Parameter()]
		[ValidateNotNullOrEmpty()]
		[string]$ComputerName = $env:COMPUTERNAME,
		
		[Parameter()]
		[ValidateNotNullOrEmpty()]
		[string]$Name,
		
		[Parameter()]
		[ValidatePattern('\b[A-F0-9]{8}(?:-[A-F0-9]{4}){3}-[A-F0-9]{12}\b')]
		[string]$Guid
	)
	process
	{
		try
		{
			$scriptBlock = {
				$args[0].GetEnumerator() | ForEach-Object { New-Variable -Name $_.Key -Value $_.Value }
				
				$UninstallKeys = "HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall", "HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall"
				New-PSDrive -Name HKU -PSProvider Registry -Root Registry::HKEY_USERS | Out-Null
				$UninstallKeys += Get-ChildItem HKU: | where { $_.Name -match 'S-\d-\d+-(\d+-){1,14}\d+$' } | foreach { "HKU:\$($_.PSChildName)\Software\Microsoft\Windows\CurrentVersion\Uninstall" }
				if (-not $UninstallKeys)
				{
					Write-Warning -Message 'No software registry keys found'
				}
				else
				{
					foreach ($UninstallKey in $UninstallKeys)
					{
						$friendlyNames = @{
							'DisplayName' = 'Name'
							'DisplayVersion' = 'Version'
						}
						Write-Verbose -Message "Checking uninstall key [$($UninstallKey)]"
						if ($Name)
						{
							$WhereBlock = { $_.GetValue('DisplayName') -like "$Name*" }
						}
						elseif ($GUID)
						{
							$WhereBlock = { $_.PsChildName -eq $Guid }
						}
						else
						{
							$WhereBlock = { $_.GetValue('DisplayName') }
						}
						$SwKeys = Get-ChildItem -Path $UninstallKey -ErrorAction SilentlyContinue | Where-Object $WhereBlock
						if (-not $SwKeys)
						{
							Write-Verbose -Message "No software keys in uninstall key $UninstallKey"
						}
						else
						{
							foreach ($SwKey in $SwKeys)
							{
								$output = @{ InstallLocation = '' } ## Initialize once so later registry values don't reset it
								foreach ($ValName in $SwKey.GetValueNames())
								{
									if ($ValName -ne 'Version')
									{
										if ($ValName -eq 'InstallLocation' -and ($SwKey.GetValue($ValName)) -and (@('C:', 'C:\Windows', 'C:\Windows\System32', 'C:\Windows\SysWOW64') -notcontains $SwKey.GetValue($ValName).TrimEnd('\')))
										{
											$output.InstallLocation = $SwKey.GetValue($ValName).TrimEnd('\')
										}
										[string]$ValData = $SwKey.GetValue($ValName)
										if ($friendlyNames[$ValName])
										{
											$output[$friendlyNames[$ValName]] = $ValData.Trim() ## Some registry values have trailing spaces.
										}
										else
										{
											$output[$ValName] = $ValData.Trim() ## Some registry values have trailing spaces.
										}
									}
								}
								$output.GUID = ''
								if ($SwKey.PSChildName -match '\b[A-F0-9]{8}(?:-[A-F0-9]{4}){3}-[A-F0-9]{12}\b')
								{
									$output.GUID = $SwKey.PSChildName
								}
								New-Object -TypeName PSObject -Property $output
							}
						}
					}
				}
			}
			
			if ($ComputerName -eq $env:COMPUTERNAME)
			{
				& $scriptBlock $PSBoundParameters
			}
			else
			{
				Invoke-Command -ComputerName $ComputerName -ScriptBlock $scriptBlock -ArgumentList $PSBoundParameters
			}
		}
		catch
		{
			Write-Error -Message "Error: $($_.Exception.Message) - Line Number: $($_.InvocationInfo.ScriptLineNumber)"
		}
	}
}

Once you copy and paste this function into your PowerShell console or add it to your script, you can call it against a particular computer with the ComputerName parameter.

Using the Get-InstalledSoftware function

PS> Get-InstalledSoftware -ComputerName XXXXX

When you do this, you will get an object back for each piece of software that's installed. You are able to get a wealth of information about whatever software is installed.

If you know the software title ahead of time you can also use the Name parameter to limit only to the software that matches that value.

For example, perhaps you'd only like to check if Microsoft Visual C++ 2005 Redistributable (x64) is installed. You'd simply use this as the Name parameter value as shown below.

PS> Get-InstalledSoftware -ComputerName MYCOMPUTER -Name 'Microsoft Visual C++ 2005 Redistributable (x64)'
Summary

Using PowerShell to get installed software, you can build a completely free tool that you and your team can use to easily find installed software on many Windows computers at once!
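As a sketch of how you might scale this out, assuming the Get-InstalledSoftware function above is loaded in your session (the computer names and output path below are hypothetical examples), you could loop over a list of computers and export everything to one CSV report:

```powershell
# Hypothetical computer names; substitute your own
$computers = 'PC01', 'PC02', 'PC03'

$report = foreach ($computer in $computers) {
    # Tag each software entry with the computer it came from
    Get-InstalledSoftware -ComputerName $computer |
        Select-Object @{ Name = 'ComputerName'; Expression = { $computer } }, Name, Version, GUID
}

# Export the combined inventory to a single CSV file
$report | Export-Csv -Path 'C:\SoftwareInventory.csv' -NoTypeInformation
```

From there, the CSV opens cleanly in Excel or can feed another script that compares installed versions against a baseline.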


VNC is one of the most popular remote desktop control products out there. It's free and comes in many different flavors that administrators can pick from. In this post, we're going to cover how to silently install UltraVNC, a flavor of VNC, on Windows.

If you're a desktop administrator, there may have been a time in your career where you needed to connect to an UltraVNC server running on a user's desktop to find that it was not installed. At that point, you might have talked the user through installing the UltraVNC server from a file share somewhere on the network. This takes time and resources.

It's much better to automate this process and install the UltraVNC server remotely with no user interaction at all! We can build a PowerShell script to do just that, allowing you to remotely install UltraVNC server on as many computers as you need.

Finding UltraVNC silent install switches

Before we get too far, we'll first need to figure out the switches required to make the install silent. Depending on the software, this might be an easy or incredibly frustrating experience.

To silently install UltraVNC server on a Windows machine, you'll first need to create an INF file. The INF file is like an answer file providing all of the installation information the UltraVNC server needs.

You create this INF file by performing a typical install of UltraVNC on a single machine. Instead of simply launching the installer, you'll use the /saveinf switch and provide the location of the INF file as shown below.

setup.exe /saveinf="C:\silentinstall.inf"

The example above will save a file called silentinstall.inf to the root of your C drive.

Next, you need to figure out how to use this answer file when installing UltraVNC on another computer. This is possible using the /loadinf switch and passing the file name in the same manner.

setup.exe /loadinf="C:\silentinstall.inf"

This works but doesn't make the install completely silent like you need. To do that, you'll need to add another switch: /verysilent.

setup.exe /verysilent /loadinf="C:\silentinstall.inf"

When executed, this syntax will silently install UltraVNC with all of the same answers I initially provided manually, saved in the INF file.

Setting up remote install capability

We now have the ability to silently install this software on a machine. However, we have no way to do this to remote computers. As is, you have to manually copy the setup.exe file and the INF file to any computer you'd like to remotely install this on. This is unacceptable! Let's automate all of this with a PowerShell function.

Since you've already got everything necessary to perform the install locally, you'll now need to figure out first where you're going to store the setup.exe file and the INF file. These files need to be accessible to every computer you'll be installing UltraVNC to. You'll be providing these files on a common file share to deliver those bits to remote computers.

Because you'll probably need to reference the installer bits over and over again, it's a good idea to place them on a file share somewhere on the network. I'll put them on a share called ToolShare on my MEMBERSRV1 server. Below is where I'm starting to build a deployment PowerShell script.

$InstallerFolder = '\\MEMBERSRV1\ToolShare\VNC'

Next, you'll need to copy the setup.exe and the INF file to remote computers on demand. To do this, use Copy-Item to do a simple file copy of the entire VNC folder. Below I'm copying the contents of my installer folder to the root of the C drive on the remote computer.

$ClientName = 'MYCLIENT'
Copy-Item -Path $InstallerFolder -Destination \\$ClientName\c$ -Recurse

Next, I'll need to invoke the install remotely. PowerShell remoting is an excellent way to make this happen. I'll use the Invoke-Command command to remotely execute the silent install from my computer. To do this, I'll first need to wrap the command I need to run in a script block.

$scriptBlock = {
    Start-Process "C:\VNC\setup.exe" -Args "/verysilent /loadinf=`"C:\VNC\silentinstall.inf`"" -Wait -NoNewWindow
}

You can see you're using the Start-Process cmdlet to kick off the installer on the remote computer, passing the silent arguments you came up with earlier.

Next, you'll need to execute this script on the remote computer with the Invoke-Command command.

PS> Invoke-Command -ComputerName $ClientName -ScriptBlock $scriptBlock

This sends instructions to $ClientName to run the code inside of the script block. After this is complete, we're done! UltraVNC is installed.

Cleaning up installer remnants

However, we've now left that C:\VNC folder on the remote computer. Let's clean that up.

Remove-Item \\$ClientName\c$\VNC -Recurse

We've now installed UltraVNC and cleaned up any remnants we've left behind. If you've liked this approach, feel free to download the Deploy-VNC script from my GitHub repo. It encapsulates all of this code into a PowerShell function and adds some additional functionality to make remotely installing UltraVNC server on Windows machines a piece of cake!
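Putting the pieces together, a minimal wrapper might look like the sketch below. It is not the actual Deploy-VNC script; the function name is my own invention, and the share path is the example one from above:

```powershell
function Install-UltraVnc {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)]
        [string]$ComputerName,

        # Example share path from this article; adjust for your environment
        [string]$InstallerFolder = '\\MEMBERSRV1\ToolShare\VNC'
    )

    # Copy the installer and INF answer file to the remote computer
    Copy-Item -Path $InstallerFolder -Destination "\\$ComputerName\c$" -Recurse

    # Run the silent install remotely
    Invoke-Command -ComputerName $ComputerName -ScriptBlock {
        Start-Process 'C:\VNC\setup.exe' -Args '/verysilent /loadinf="C:\VNC\silentinstall.inf"' -Wait -NoNewWindow
    }

    # Clean up the installer remnants
    Remove-Item "\\$ComputerName\c$\VNC" -Recurse
}
```

With this in place, installing on many machines becomes a one-liner: `'PC01','PC02' | ForEach-Object { Install-UltraVnc -ComputerName $_ }`.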


The string type is probably the most popular variable type in PowerShell. I don't think I've ever written a script that didn't contain at least one string. Strings are great for defining a simple set of characters like a computer name, a path to a text file, a registry key path, a URL and so on. By expanding variables inside of strings and using PowerShell's string formatting capabilities, you can do just about anything.

Strings are defined by enclosing one or more characters in single or double quotes like:

$Url = 'http://www.bing.com'
$FilePath = 'C:\thisissomefile.txt'

Defining strings like this is straightforward but soon you'll find yourself needing to get the value of a variable inside strings.

For example, perhaps you have a script with a ComputerName parameter and in your script, you need to concatenate a standard prefix to it like SRV- as shown below.

$ComputerName = 'URANUS'
$ComputerName = "SRV-$ComputerName"

The $ComputerName variable's value is now SRV-URANUS but enclosing $ComputerName inside of the string with double quotes is only one way to insert variables in strings. There are a few different ways to make that happen. Let's take a look at two examples.

Expanding Strings

The first method is one you are probably already familiar with, called expanding strings, or simply expanding variables inside of existing strings. This is the most common way of getting the value of a variable to show up in a string.

To do this requires placing the variable inside of a string with double quotes. Single quotes will not work because they treat all characters literally.

For example, maybe I need to specify the path to a file. I know the base folder path but not the actual file name just yet. I might create a variable called $FullPath and statically input the folder path, yet make the file name a variable.

PS> $FileName = 'MyFile.txt'
PS> $FullPath = "C:\Folder\subfolder\$FileName"
PS> $FullPath
C:\Folder\subfolder\MyFile.txt

What if you need to specify an object property inside of a string? This looks a little different. In this case, I need to surround the property with parentheses and prepend a dollar sign to the front. This tells PowerShell to expand the value first.

PS> $File = Get-Item -Path 'C:\MyFile.txt'
PS> $FullPath = "C:\Folder\subfolder\$($File.Name)"
PS> $FullPath
C:\Folder\subfolder\MyFile.txt

Expanding strings is very common, but what if you have a string that is more complicated?

Using the PowerShell String Format Operator

Another way to ensure variable values are inserted into your strings is through PowerShell's format operator (-f).

Using the format operator, you don't have to place the variable or the property inside of a string surrounded by double quotes. You can create a string in single quotes and specify the variables that will go into that string outside of it as shown below.

PS> $File = Get-Item -Path 'C:\MyFile.txt'
PS> 'C:\Folder\subfolder\{0}' -f $File.Name
C:\Folder\subfolder\MyFile.txt

You can see it has the exact same effect. Although this time you didn't have to use double quotes or worry about enclosing the object property in parentheses and another dollar sign. It looks a little cleaner but might be harder to understand. Using the format operator forces you to replace the {0} with $File.Name in your mind.

To add other variables, simply increment the placeholder index by 1 and continue adding variables separated by commas after the format operator. As long as each value's position after the format operator matches the index in the string's placeholder, you can add as many references as you'd like.

PS> $File = Get-Item -Path 'C:\MyFile.txt'
PS> $SubFolderName = 'folder1'
PS> 'C:\Folder\subfolder\{0}\{1}' -f $SubFolderName, $File.Name
C:\Folder\subfolder\folder1\MyFile.txt
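Beyond plain substitution, the format operator also accepts .NET format specifiers after the index, which is handy for alignment and number formatting. A couple of quick sketches (the values here are arbitrary examples):

```powershell
# {0,10} right-aligns the value in a 10-character-wide column
'[{0,10}]' -f 'disk'      # → [      disk]

# {0:D3} zero-pads an integer to three digits
'File{0:D3}.txt' -f 7     # → File007.txt

# {0:N2} rounds a number to two decimal places
# (the separator characters depend on the current culture)
'{0:N2} GB free' -f 123.4567
```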
When to Use the Format Operator

I usually prefer to place variables inside of the string and expand them because I don't have to match an index with a value. However, there are exceptions, as in the case of special characters that must be escaped.

For example, let's say you need to execute a command-line utility that requires you to pass a filename to it. This file path may contain spaces so you'll need to enclose that reference with double quotes. Without those double quotes, utility.exe could not interpret the file argument correctly.

Utility.exe -file "C:\Program Files\SomeFolder\a subfolder here"

Now let's say you want to do this in a PowerShell script. If you're using expanding strings, it would look a little messy because you must use a backtick to escape the double quotes.

$FileName = 'C:\Program Files\SomeFolder\a subfolder here'
Start-Process -FilePath 'utility.exe' -Args "`"$FileName`""

Now, let's get the same result with the format operator.

$FileName = 'C:\Program Files\SomeFolder\a subfolder here'
$Args = '"{0}"' -f $FileName
Start-Process -FilePath 'utility.exe' -Args $Args

I don't have to use any escape characters. This may seem like a simple example but as soon as you come across large strings that require a lot of escaping you will soon see the real benefit of using the format operator!

Summary

The next time you find yourself having to substitute variables inside of a string, think about which method would be most readable. It will save you time in the long run.

For more posts related to working with strings, be sure to check out The PowerShell Substring: Finding a string inside a string and PowerShell Variables in Strings.


As your team begins to provision and configure more Windows Azure virtual machines (VMs), at some point, you'll get tired of reinventing the wheel. You need automation! In this article, come with me as I walk you through, step by step, how to begin using Azure custom script extensions for Windows to speed up deployment and configuration.

If your team is in a Windows environment, one of the best tools to automate server configuration is PowerShell. By using the Azure PowerShell module your team can leverage the power of PowerShell to not only automate on-prem tasks but also run commands on Azure VMs in bulk rather than one at a time.

One feature Microsoft has provided us is called the Azure custom script extension. The custom script extension is an Azure virtual machine extension that the VM agent runs to execute arbitrary PowerShell code against your VMs by using the Azure API rather than consoling into the VM or using PowerShell remoting.

Running commands this way provides several benefits. Running commands using the Azure custom extension in Windows:

  • provides increased security by not needing to open network ports for PowerShell remoting
  • allows for easily executing PowerShell code at VM startup
  • automatically transfers resources from Azure storage to your VM as part of the provisioning process
  • provides an easy way of running PowerShell scripts stored in various Azure storage accounts

Enabling an Azure custom script extension for Windows can be done in a few ways. In this article, we'll be focusing on enabling the custom script extension via PowerShell, but you may also enable the extension via an Azure Resource Manager (ARM) template.

As a simple example, let's say you'd like to ensure PowerShell remoting is enabled on an Azure VM in your subscription. To do this, you'd need to run the following command locally on each VM:

Enable-PSRemoting -Force

Let's build a custom script extension to do this.

Building an Azure Custom Script Extension for Windows

First, create a PowerShell script called Enable-PSRemoting.ps1 on your local computer with the command above inside. This script needs to run on an Azure VM. To do this, we'll build another small PowerShell script called New-CustomScriptExtension.ps1 to get it uploaded into Azure and a custom script extension created to execute it. Before we get too far, you'll need a couple items:

  • The Azure resource group and storage account name that will store the script
  • The Azure storage container name
  • The VM name
  • The Azure resource group name that the VM is in

This script can be broken down into two sections: getting the PowerShell script uploaded to Azure, and creating the custom script extension. Everything else is required to glue these two processes together.

Uploading a PowerShell Script to Azure

First, we'll upload the Enable-PSRemoting.ps1 script to the Azure storage account ($StorageAccountName) inside of the container ($ContainerName) in the resource group ($ResourceGroupName).

$saParams = @{
    'ResourceGroupName' = $ResourceGroupName
    'Name' = $StorageAccountName
}
$storageContainer = Get-AzureRmStorageAccount @saParams | Get-AzureStorageContainer -Container $ContainerName
$bcParams = @{
    'File' = 'C:\Enable-PSRemoting.ps1'
    'BlobType' = 'Block'
    'Blob' = 'Enable-PSRemoting.ps1'
}
$storageContainer | Set-AzureStorageBlobContent @bcParams
Copying Enable-PSRemoting.ps1 to Azure

Running the Custom Script Extension

When you run New-CustomScriptExtension.ps1, this script will upload the Enable-PSRemoting.ps1 script to the Azure storage account specified.

Now that the script is stored in Azure, you can execute it via the custom script extension for the VM:

  • Name: $VMName
  • Resource Group: $rgName
  • Storage Account: $saName
  • Storage Container: $scName

Open up a text editor for New-CustomScriptExtension.ps1 and paste in the below example. When you run this, it will execute the Azure custom script extension for Windows which will execute the Enable-PSRemoting.ps1 PowerShell script you uploaded earlier.

# Get the VM we need to configure
$vm = Get-AzureRmVM -ResourceGroupName $rgName -Name $VMName
# Get storage account key
$key = (Get-AzureRmStorageAccountKey -Name $saName -ResourceGroupName $rgName).Key1
## Enable the custom script extension and run the script
$scriptParams = @{
    'ResourceGroupName' = $rgName
    'VMName' = $VMName
    'Name' = 'Enable-PSRemoting.ps1'
    'Location' = $vm.Location
    'StorageAccountName' = $saName
    'StorageAccountKey' = $key
    'FileName' = 'Enable-PSRemoting.ps1'
    'ContainerName' = $scName
    'Run' = 'Enable-PSRemoting.ps1'
}
Set-AzureRmVMCustomScriptExtension @scriptParams
New-CustomScriptExtension.ps1

Summary

Once this script completes, you can verify that Enable-PSRemoting.ps1 has executed on the VM and PowerShell remoting has been successfully enabled. You should now be able to use Invoke-Command against your Azure VM.

By leveraging the Azure custom script extension in Windows, you now have the ability to remotely run any kind of PowerShell scripts on your Azure VMs.
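If you need this on more than one machine, the same Set-AzureRmVMCustomScriptExtension call can be wrapped in a loop over every VM in the resource group. A sketch, assuming the $rgName, $saName, $scName, and $key variables from the earlier examples are already populated:

```powershell
# Get every VM in the resource group
$vms = Get-AzureRmVM -ResourceGroupName $rgName

foreach ($vm in $vms) {
    $scriptParams = @{
        'ResourceGroupName'  = $rgName
        'VMName'             = $vm.Name
        'Name'               = 'Enable-PSRemoting.ps1'
        'Location'           = $vm.Location
        'StorageAccountName' = $saName
        'StorageAccountKey'  = $key
        'FileName'           = 'Enable-PSRemoting.ps1'
        'ContainerName'      = $scName
        'Run'                = 'Enable-PSRemoting.ps1'
    }
    Set-AzureRmVMCustomScriptExtension @scriptParams
}
```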


Using .NET, PowerShell has some powerful regex capabilities. In this article, I'm going to go through, step by step, how to use regex to capture groups with PowerShell using the [regex] type accelerator.

Unlike the Bash shell in Linux, which only works with strings, in PowerShell we work with objects that typically already have properties ready for us to use. We don't have to parse nearly as many strings as you do in the Linux world.

However, there still are times when you must be able to read a string and grab pieces of that string matching a specific pattern. This is where regular expressions (regex) come in.

There are a couple of different ways to use regex, and thus regex groups, with PowerShell.

  • The [regex] type accelerator
  • The automatic variable $matches

In this article, you'll be getting an introduction to the [regex] type accelerator to pull text from strings.

The [regex] type accelerator

One method that PowerShell can process regex is through the [regex] type accelerator. Without going into a lot of development speak, the [regex] type accelerator is a way to use a .NET class with a short expression.

To check out what you can do with this type accelerator, bring up a PowerShell console and simply type [regex] followed by two colons and begin hitting your tab key.

[regex]::

You'll notice that you'll be cycling through a lot of options. In this article, you'll only use the Match() method.

Let's say you have a string "My dog is named spot". Place this string into a variable called $aVariable.

$aVariable = 'My dog is named spot'

In a script, you might not know what the value of this variable is so you'd like to use regex to pull out all of the characters to the left of the word "dog". This can be done using regex groups.

In a regex pattern, a group is signified by a pair of parentheses. Anything in these parentheses will be part of that particular group. Since I want to pull out all characters to the left of the word "dog", I will use the regex pattern .+, which essentially acts as a wildcard matching one or more of any character.

My regex pattern would look like this:

(.+)dog

Now that you have the pattern created, you can use the Match() method to attempt the match on the variable.

PS> $aVariable = 'My dog is named spot'
PS> [regex]::Match($aVariable,'(.+)dog')
Regex output

Finding regex capture groups

You'll notice a few properties came out of that. The one we're interested in is called Groups. The Groups property will contain the characters from the string to the left of the word "dog". We'll need to dive a little deeper into this object to find those.

First, append .Groups to the end of the match result to view only the Groups property. You'll see that this returns two objects.

PS> ([regex]::Match($aVariable,'(.+)dog')).Groups

Groups   : {0, 1}
Success  : True
Name     : 0
Captures : {0}
Index    : 0
Length   : 6
Value    : My dog

Success  : True
Name     : 1
Captures : {1}
Index    : 0
Length   : 3
Value    : My

The second object, the first capture group, contains the text we're looking for. Knowing this, you can get more specific and pick it from the two objects by referencing index 1.

PS> [regex]::Match($aVariable,'(.+)dog').Groups[1]

Success  : True
Name     : 1
Captures : {1}
Index    : 0
Length   : 3
Value    : My

You're now just one step away! Now all you need to do is reference the Value property and you will get back the text you're looking for.

PS> [regex]::Match($aVariable,'(.+)dog').Groups[1].Value
My

Notice that since there was a space right before the word "dog", the captured group includes that trailing space. You can confirm this by assigning the output to a variable and checking its Length property, which reports three characters instead of two.
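The automatic $Matches variable mentioned earlier gets you to the same capture group without the type accelerator: the -match operator returns $true or $false and, on a successful match, fills a hashtable keyed by group number. A quick sketch with the same string:

```powershell
$aVariable = 'My dog is named spot'

# -match returns a boolean and populates the automatic $Matches hashtable
if ($aVariable -match '(.+)dog') {
    $Matches[0]   # the full match: 'My dog'
    $Matches[1]   # the first capture group: 'My ' (note the trailing space)
}
```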

Summary

The [regex] type accelerator and regex capture groups in PowerShell are a powerful way to match and parse strings. The next time you need to pull out a string inside of a string, give the [regex] type accelerator a try and add some parentheses around that regex to capture the values inside of the string.

If you'd like to learn more about some of the other features that PowerShell can do with regex, check out my other post Creating Custom Objects from Regex Matches.


So you bring up a new Windows Server machine, you're done installing and configuring it how you'd like it to look, and perhaps you've deployed it into your test environment and run a few PowerShell scripts against it. You're done, right? Not so fast. Are you sure all of the items on your standard checklist have been applied correctly? Validate that Server 2016 configuration with the testing framework, Pester!

Pester is a testing framework built in PowerShell that comes natively installed with Windows Server 2016, although I recommend getting the latest version by running Install-Module Pester.

Pester's goal is to confirm your expected configuration ensuring your code executes the way you think it should and makes changes to your environment the way you think it should (unit and infrastructure tests).

In this article, we're going to apply the Pester testing framework to Windows Server 2016 in particular and show you how it can be leveraged to ensure your Windows Server 2016 configuration is set exactly how you intend it to be.

Defining configuration items to validate

Prior to actually creating a test, we need to decide what to test in the first place, so I'll make a list of a couple of components I'd like to test after my server is provisioned. Here I'm only testing two items, but you can add as many Windows Server 2016 configuration items as you'd prefer.

  • Ensure your VM (or physical server) is running Windows Server 2016
  • Ensure the Containers Windows feature is enabled

Building a Pester test scaffold script

Now that we know what to test, I'll start by creating a single Pester test script called WinSrv16.Tests.ps1 that contains two individual tests in a single describe block called WinSrv2016 Tests.

describe 'WinSrv2016 Tests' {
    it 'is Server Windows Server 2016' {
    
    }
	
    it 'has the Containers Windows feature enabled' {

    }
}

If I run this test now using Invoke-Pester, you can see that Pester does recognize the tests but does not show any results. This is because we don't have any actual test code in our it blocks yet.

Scaffolding Pester tests

Build code to test and to confirm expectations

Let's first add some code to determine the OS. During my investigation, I discovered that I could use the Win32_OperatingSystem WMI class to find this. If the Caption property is equal to Microsoft Windows Server 2016 then it has the OS I'm expecting. I'll add this check in my first test.

describe 'WinSrv2016 Tests' {
    it 'is Server Windows Server 2016' {
    	(Get-CimInstance -ClassName Win32_OperatingSystem -Property Caption).Caption | should be 'Microsoft Windows Server 2016'
    }
	
    it 'has the Containers Windows feature enabled' {

    }
}

You can now see that test passes.

Server is running Windows Server 2016

Now you need to figure out how to test for the Containers Windows feature. Note that you could also test server roles this way. When you run Get-WindowsFeature -Name Containers, you'll see that it returns an object with an Installed property whose value is True. You need to write a test for that condition.

When you run (Get-WindowsFeature -Name Containers).Installed, you'll get back a boolean True or False. This gives you a value to key off of when checking it against the expected output of True.

it 'has the Containers Windows feature enabled' {
    (Get-WindowsFeature -Name Containers).Installed | should be $true
}

Run your tests again and you can now see that both tests pass!

All tests pass

Your end result test script should now look like the below example:

describe 'WinSrv2016 Tests' {
    it 'is Server Windows Server 2016' {
    	(Get-CimInstance -ClassName Win32_OperatingSystem -Property Caption).Caption | should be 'Microsoft Windows Server 2016'
    }
	
    it 'has the Containers Windows feature enabled' {
        (Get-WindowsFeature -Name Containers).Installed | should be $true
    }
}
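If you want to use these results in automation, say, failing a provisioning pipeline when the server isn't configured correctly, Invoke-Pester's -PassThru switch returns a results object you can inspect. A minimal sketch (the file path is just whatever you saved the script as):

```powershell
# Run the tests and capture the results object instead of only console output
$result = Invoke-Pester -Path .\WinSrv16.Tests.ps1 -PassThru

# Fail the calling script or pipeline if any configuration test failed
if ($result.FailedCount -gt 0) {
    throw "$($result.FailedCount) configuration test(s) failed."
}
```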
Using Context blocks

Another great way to build infrastructure tests is with Pester's context blocks. Context blocks allow you to separate tests within a describe block. For the example you're working with, perhaps you have many different Windows features you need to test, along with files that need to be on the server, registry keys that need to be set, and so on.

You should group these related tests into context blocks to keep your tests organized.

Your tests could end up looking something like below. Notice also that I introduced a way to provide many different feature names and test them all without having to replicate the it block. Since Pester is PowerShell, you can do a lot with it.

describe 'WinSrv2016 Tests' {

    context 'Windows Features' {
        $features = 'Containers', 'Web-Server'
        foreach ($feature in $features) {
            it "has the $feature Windows feature enabled" {
                (Get-WindowsFeature -Name $feature).Installed | should be $true
            }
        }
    }
    
    context 'Registry settings' {
        it 'has registry key foo in place' {

        }
    }

    context 'Misc settings' {
        it 'is Server Windows Server 2016' {
            (Get-CimInstance -ClassName Win32_OperatingSystem -Property Caption).Caption | should be 'Microsoft Windows Server 2016'
        }
    }
}
Learn more about Pester

If you'd like to learn more about Pester, be sure to check out my 200+ page book, The Pester Book, to learn the ins and outs of how to use Pester to test all the things!

You can check out a full two chapters from the book for free at my blog post Write PowerShell Tests with Pester: Getting Started.

Summary

These examples cover only a small subset of the items you'll likely need to validate when setting up a Windows Server 2016 configuration. However, they should give you the foundation to think through each component and build a fully-featured Pester test that covers a wide range of configuration values for your new server.

Take what you've learned here and add more tests to your Server 2016 configuration or any other server configuration for that matter. The techniques you've learned here will apply across many different operating systems and contexts.
