   How can I use Windows PowerShell to enumerate all certificates on my Windows computer?

If you have Windows 7 or later, you can use the Get-ChildItem cmdlet to enumerate all certificates on a local system. For example:
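
Get-ChildItem -Path Cert:\LocalMachine -Recurse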

Summary: Certificate management is always challenging. Let’s explore how to use PowerShell to export local certificate information to a comma-separated values (CSV) file on Windows 7 (or later) computers.

Q: Hey, Scripting Guy!

How can I get all my certificate info into a CSV on my Windows computers?

—SH

A: Hello SH,

Patrick Mercier here, with my first “Hey, Scripting Guy!” post. This question has come up at multiple customer sites, as they plan a new PKI infrastructure or a revamp of their current one!

There’s tons of resources on using PowerShell for querying certificates, but questions around finding expiring certificates, self-signed certificates, or certs issued by specific roots keep coming up when I meet with customers. My current customer needed to find self-signed certificates, so we took this local scan example and wrapped it in Invoke-Parallel to scan targeted systems! Thanks to Joel Mueller, a fellow Premier Field Engineer (PFE) at Microsoft who got me started on this, and to the rest of the “Hey, Scripting Guy!” community for providing a starting point.

As I’m sure you’ve seen in other posts here, the whole thing starts with the Get-ChildItem cmdlet.  At its most basic level, the following command lists all the certificates on your local system:
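
Get-ChildItem -Path Cert:\LocalMachine -Recurse | Where-Object {$_.PSIsContainer -eq $false} | Format-List -Property *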

Let’s break it down:

  • We’re asking for the child items of the certificate branch of the local machine (Get-ChildItem -path Cert:\LocalMachine). “Wait a minute!” you say. “I’ve only ever used the Get-ChildItem cmdlet with a file path to get a list of files and folders. Where do you get this cert:\localmachine business?” Simply put, this “path” is available due to the presence of a PowerShell Provider. For more info, check out Ed’s post: Find and use Windows PowerShell Providers. But the basics for today are that in providing CERT: as the path, I’m calling on the certificate provider in order to access specific information on my system.
  • We’re doing this recursively (-Recurse), to get every child object below this point.
  • We’re filtering out the containers (Where-Object {$_.PSIsContainer -eq $false}).
  • We’re ensuring that we’re grabbing all the attributes available (Format-List -Property *).

Running this command displays all the certificates installed on your local system, conveniently including a list of available attributes.

This example shows the GlobalSign Root CA in the root store of my machine. You should be able to find this cert on your system too. Alternatively, if you like doing things the hard way, you can bring up an MMC, load the certificates snap-in, and browse to the trusted root store. There you can find the GlobalSign Root CA – R1 certificate, and then copy each attribute value to Excel.

You would think that piping that command to a CSV would make for a happy day, wouldn’t you? Sadly, not so. Directly outputting this by using Export-CSV doesn’t give us the expected result.
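
For example, a direct export like the following (the output path is just an illustration) falls short:

Get-ChildItem -Path Cert:\LocalMachine -Recurse | Where-Object {$_.PSIsContainer -eq $false} | Export-Csv -Path C:\Temp\LocalCerts.csv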

Getting it all into a format we can manipulate is going to take a bit more effort. Enter the array and the PSObject.

So, my script now starts with defining an empty array, conveniently called $array.
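
$array = @()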

Now, we see the familiar Get-ChildItem command. But instead of piping it directly out by using Export-CSV, we’ll use the ForEach-Object loop, and break down the output (the full loop is sketched after this list). Ultimately, what this does is:

  • Create a new PSObject for each certificate found by the get-childitem cmdlet. Think of the PSObject as a row inside your data table or, ultimately, your Excel sheet. (New-Object -TypeName PSObject)
  • Add the value of our selected attributes into “columns”. In this case, PSPath, FriendlyName, Issuer, NotAfter, NotBefore, SerialNumber, Thumbprint, DNSNameList, Subject, and Version are all populated. (Add-Member –MemberType NoteProperty -Name “%attrib%” -Value $_.%attrib%)
  • Add the object to your array as a new row. ($array += $obj)
  • Clear out the object, so that no data carries over on the next iteration of the loop. ($obj=$null)
  • Export your array to your CSV. (Export-Csv)
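
Put together, the loop looks like this (a sketch reconstructed from the steps above, using the $array we defined earlier; the CSV path is just an example):

Get-ChildItem -Path Cert:\LocalMachine -Recurse | Where-Object {$_.PSIsContainer -eq $false} | ForEach-Object {
    $obj = New-Object -TypeName PSObject
    $obj | Add-Member -MemberType NoteProperty -Name "PSPath" -Value $_.PSPath
    $obj | Add-Member -MemberType NoteProperty -Name "FriendlyName" -Value $_.FriendlyName
    $obj | Add-Member -MemberType NoteProperty -Name "Issuer" -Value $_.Issuer
    $obj | Add-Member -MemberType NoteProperty -Name "NotAfter" -Value $_.NotAfter
    $obj | Add-Member -MemberType NoteProperty -Name "NotBefore" -Value $_.NotBefore
    $obj | Add-Member -MemberType NoteProperty -Name "SerialNumber" -Value $_.SerialNumber
    $obj | Add-Member -MemberType NoteProperty -Name "Thumbprint" -Value $_.Thumbprint
    $obj | Add-Member -MemberType NoteProperty -Name "DNSNameList" -Value $_.DNSNameList
    $obj | Add-Member -MemberType NoteProperty -Name "Subject" -Value $_.Subject
    $obj | Add-Member -MemberType NoteProperty -Name "Version" -Value $_.Version
    $array += $obj
    $obj = $null
}
$array | Export-Csv -Path C:\Temp\LocalCerts.csv -NoTypeInformation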

As you see, we can then pipe our array out to the CSV file.

If all went well, you now have a CSV that contains the certificate information on your local machine! I would not be surprised if, after having done this, you discover expired certificates on your system. I’ll leave it to you to find the well-known one that I keep finding.

If the attributes included above don’t meet your needs, you can easily add (or remove) one from the loop simply by inserting an additional Add-Member line. Say you decide you need to include the PSProvider attribute. Simply insert the following above the $array += $obj in the loop:

$obj | Add-Member -MemberType NoteProperty -Name "PSProvider" -value $_.PSProvider

To see what attributes are available, run the first command provided above, and read the output!

I suspect that many of you will want to see how to scale this to scanning remote systems, so watch for a future post that will do just that.

I invite you to follow the Scripting Guys on Twitter and Facebook. If you have any questions, send email to them at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum.

Patrick Mercier, Premier Field Engineer

Microsoft


Summary: Desired State Configuration is a great deployment tool to meet your organization’s infrastructure-as-code goals. I recently came across a situation for a project that uses the Push Service (as opposed to the Pull Service). It required me to be able to apply a new partial configuration to a node, without any knowledge of what partial configurations were already in place. This led to the development of the DscLcm module, which solved that problem for my team.

DscLcm module

The DscLcm PowerShell module is a utility that interacts with the Local Configuration Manager (LCM) of a target machine, so that you don’t have to define your configuration as a script block. This functionality is useful in both push and pull scenarios, but the full benefit comes from a push model, because a lot of this functionality is built into the pull model already.

My goal with this module is to provide standard PowerShell functions that allow you to interact with the LCM. At the moment, DscLcm allows you to:

  • Modify an LCM Setting (for example, RebootNodeIfNeeded).
  • Add a partial configuration to the LCM.
  • Remove a partial configuration from the LCM.
  • Modify properties of an existing partial configuration block on the LCM.
  • Reset the LCM to a default state.

Before now, you would have to define an LCM setting in a configuration script block, as seen here:

[DSCLocalConfigurationManager()]

configuration LCMConfig

{

Node localhost

{

Settings

{

RefreshMode = 'Push'

}

}

}

With the new DscLcm PowerShell module, this same setting can be applied with the following command:

Set-LcmSetting -RefreshMode Push

This format is more conventional for working directly with the LCM, versus having to set up an entire configuration block for potentially only one setting change. In the following example, notice that modifying a few settings with the Set-LcmSetting command does not alter any of the already existing settings!
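
A sketch of such a change (assuming Set-LcmSetting exposes LCM settings such as RebootNodeIfNeeded as parameters, per the feature list above):

Set-LcmSetting -RebootNodeIfNeeded $true

# Verify with the built-in DSC cmdlet that the remaining LCM settings kept their values
Get-DscLocalConfigurationManager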

As I mentioned, one of the cool features of DscLcm is that it gives you the ability to append a DSC partial configuration to an LCM, without losing any of its current settings. Traditionally, one would have to re-define all the partial configurations and other LCM settings in the LCM configuration block, before deploying the resulting Managed Object Format (.mof) file. The main benefit of this functionality is that it gives you the ability to apply a new partial configuration, without having to know what partial configurations are already on the target.

In the following example, suppose that the localhost LCM configuration already knows about a partial configuration called ‘ServiceAccountConfig’. In order to apply a new partial to that LCM, you would have to define both ‘ServiceAccountConfig’ and the new partial, ‘SharePointConfig’, in the meta configuration.

[DSCLocalConfigurationManager()]

configuration PartialConfigDemo

{

Node localhost

{

PartialConfiguration ServiceAccountConfig

{

Description = 'Configuration to add the SharePoint service account to the Administrators group.'

RefreshMode = 'Push'

}

PartialConfiguration SharePointConfig

{

Description = 'Configuration for the SharePoint server'

RefreshMode = 'Push'

}

}

}

PartialConfigDemo

With DscLcm, this same function can be performed with the following command:

Add-LcmPartialConfiguration `

-PartialName SharePointConfig `

-RefreshMode Push `

-Description 'Configuration for the SharePoint server'

For the exact opposite scenario, we can also remove individual partial configurations by name. In addition to removing the partial configuration object, this function removes any dependencies on that partial. The next time the consistency check runs, the LCM will also automatically remove the partial configuration .mof for you.

Remove-LcmPartialConfiguration -PartialName ServiceAccountConfig

For those times when you have a defined partial configuration on a target and just want to adjust one of its settings, you can modify those settings as follows:

Set-LcmPartialConfiguration `

-PartialName SharePointConfig `

-DependsOn "ServiceAccountConfig" `

-Description "New Description"

The last cmdlet in this version of the module lets you reset the LCM to a blank state. This comes in handy for just about any scenario when you need to scrap a configuration altogether.

Reset-LcmConfiguration

As you can see, these functions greatly reduce the overhead for defining your LCM settings in any environment. Keep in mind, this is only one step in the DSC publishing process. Even though we are adding a partial configuration to the LCM, we still need to publish a partial configuration .mof in order for the full process to be completed. I have found these functions to be very handy as I work with DSC, and I hope you will too. Please feel free to leave any feedback or suggestions at either of the links below.

The module can be installed directly with PowerShell, by using the PowerShell Gallery repository with:
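
Install-Module -Name DscLcm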

https://github.com/aromano2/DscLcm

https://www.powershellgallery.com/packages/DscLcm

Anthony Romano

Consultant, Microsoft


Summary: You can use Windows PowerShell to authenticate to the Text-to-Speech REST API.

Q: Hey, Scripting Guy!

I was reading up on the REST API for the Text-to-Speech component of Cognitive Services. I'm just starting to learn how to use REST and PowerShell. Could you spend a little bit of time and show me how to authenticate to the service?

—SH

A: Hello SH,

Authentication is one of the biggest pieces you'll want to learn. Telling a system at the back end who you are, and knowing how to communicate with it, is very critical before you can do anything fun!

Talking to the Text-to-Speech API is pretty easy once you know the basics. If we were to pretend the web service was a door with a really cool VIP in the back and a bodyguard watching it, the interaction might go like this:

Knocking on the door with no credentials or invalid credentials.

"Hello, I want to walk in."

Strange look on the bodyguard's face, and he says "Huh?" or "Grrrr" (or roars like Chewbacca). That would be your error code.

You can keep doing this as many times as you like, and you're still not going into that door.

But provide the correct "secret phrase" (or, in our case, key) and the interaction goes like this:

"Hello, I want to walk in. The secret phrase is 'crustless frozen peanut butter sandwiches'."

The bodyguard looks at the list, sees that secret phrase beside your name, and nods. He then calls up on his two-way radio and gets a new, second secret phrase, with instructions to a second door.

Now you meet the second bodyguard, who is much meaner than the first one. (Missed getting coffee that morning, I suppose.) This one wants your second phrase, and after validating it's good, the second bodyguard starts a stopwatch.

You can run in and out of the door to do what you need with that new phrase, but after a certain amount of time he scratches that phrase off the list.

So, you run back to the first bodyguard, hand him your first passphrase, and the process repeats until you're done for the day.

That's pretty much what the authentication piece for the REST API does, and how it works.

We talk to a REST API, and pass it one of the keys we generated in the last article. The REST API generates a token, which is a long string of characters, and you need to use that token with the second REST API. This token is only good for a short term, and you need to go back and request a new one every so often.

Let's get some basics together to make this happen.

This first "door" is an endpoint to the REST API that handles the authentication. The documentation on the use of this endpoint can be found under the authentication header at Bing Text-to-Speech API.

We are provided the following information immediately:

POST https://api.cognitive.microsoft.com/sts/v1.0/issueToken

Content-Length: 0

From this information, we can see we need to use a POST method, and the endpoint is https://api.cognitive.microsoft.com/sts/v1.0/issueToken.

Let's start to build that into some Windows PowerShell variables.

# Rest API Method

$Method='POST'

# Rest API Endpoint

$Uri='https://api.cognitive.microsoft.com/sts/v1.0/issueToken'

The next piece we need to supply is the header information. We need to pass our key to a value named Ocp-Apim-Subscription-Key.

The value of the key is one of the two authentication keys you produced last time, when you initially created the Cognitive Services account for Bing.Speech. I'm going to use a fictitious one for our example.

Here, we'll populate the header. The header in this case is pretty simple, containing only one value.

# Authentication Key

$AuthenticationKey='13775361233908722041033142028212'

# Headers to pass to Rest API

$Headers=@{'Ocp-Apim-Subscription-Key' = $AuthenticationKey }

We then call up the REST endpoint directly, to see if everything worked.

Invoke-RestMethod -Method $Method -Uri $Uri -Headers $Headers

If it worked properly, and everything was formatted the way it should be, the output is a long string of characters. This is your token for temporary access to the second endpoint.

We would then modify our Invoke-RestMethod call to capture the output in a PowerShell variable, so we can reuse it later.

# Get Authentication Token to communicate with Text to Speech Rest API

$Token=Invoke-RestMethod -Method $Method -Uri $Uri -Headers $Headers

On the other hand, if you didn't supply a valid authentication key, the request fails with a web exception.

So, with this knowledge, we can even trap for this in our script.

Try

{

[string]$Token=$NULL

# Rest API Method

[string]$Method='POST'

# Rest API Endpoint

[string]$Uri='https://api.cognitive.microsoft.com/sts/v1.0/issueToken'

# Authentication Key

[string]$AuthenticationKey='13775361233908722041033142028212'

# Headers to pass to Rest API

$Headers=@{'Ocp-Apim-Subscription-Key' = $AuthenticationKey }

# Get Authentication Token to communicate with Text to Speech Rest API

[string]$Token=Invoke-RestMethod -Method $Method -Uri $Uri -Headers $Headers

}

Catch [System.Net.WebException]

{

Write-Output 'Failed to Authenticate'

}

Now we know how to get a token to communicate with the Bing.Speech API. Pop in again next time, when we'll show you how to start putting the building blocks together to use the service!

I invite you to follow the Scripting Guys on Twitter and Facebook. If you have any questions, send email to them at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum.

Sean Kearney, Premier Field Engineer, Microsoft

Frequent contributor to Hey, Scripting Guy! 


Summary: Change the keys to authenticate to Azure RM Cognitive Services, by using Windows PowerShell.

  Hey, Scripting Guy! I created the keys for my Rest API. I know I can change them in the web portal, but is there a faster way of doing it through Windows PowerShell?

  There absolutely is! Just use the New-AzureRMCognitiveServicesAccountKey cmdlet to reset either Key1 or Key2 (or both). Here is an example, where we generate a new sequence for Key1:

New-AzureRMCognitiveServicesAccountKey -ResourceGroup 'HSG' -Name 'Sample' -KeyName Key1


Summary: You can use Windows PowerShell to authenticate to the Microsoft Cognitive Services Text-to-Speech component through the Rest API.

Q: Hey, Scripting Guy!

I heard about the cool Microsoft Cognitive Services, and had heard they have a REST API. Does that mean I can use PowerShell to consume them? Could you show me how to authenticate to it?

—SH

A: Hello SH,

I just love waking up and saying "YES YOU CAN!" when people ask "Can you do that with Windows PowerShell?"

So… ahem…. "Yes you can!"

For those who didn't know, Cognitive Services are hosted in the Azure cloud, and they allow you to do many things easily. With very little work in PowerShell or any other programming language, we can moderate content visually.

We can easily use it to search the internet for content or, like we'll do over the next series of articles, make use of the Text-to-Speech component.

First, sign up for the trial of Cognitive Services, which is part of the Azure Subscription.

If you don't already have an Azure account, select FREE ACCOUNT to get yourself started. We'll wait for you to finish.

Once that process is done, you'll see some selections. Select Login to authenticate and add the trial. Then choose Speech to get a trial started with the Speech API.

Next, the most important button to select is Get API Key, beside the Bing Speech API. This will generate the application keys you'll need to talk to the API.

IMPORTANT: If you already have an Azure subscription, and just want to get going under production (that is, not the trial), the steps are different. Please follow these instructions if you do not want to use the trial version, or you want to move to production from your trial.

"But…but Scripting Guy!" I can hear you say. "This is a PowerShell article…. Where's the PowerShell?"

Well I was wondering when you were going to ask. I like to show both halves of the solution whenever I can.

First, you'll need the updated AzureRM cmdlets from the PowerShell gallery to make sure you have the AzureRM.CognitiveServices module. If you haven't done this before, just run the following cmdlet to get the complete set of cmdlets for managing Azure:

Install-Module AzureRM

If you are running an older version and need to update, just run this:

Update-Module AzureRM

You can confirm if the new module is available by running this:

Get-Module -ListAvailable AzureRM.CognitiveServices

On my system, I have a total of seven new cmdlets available to me for provisioning and managing these resources.

We would first authenticate by using the Login-AzureRMAccount cmdlet. Once connected, we can examine the properties of that newly created resource.

To list all Cognitive Service accounts we've created, we can run the following cmdlet. In this example, we have only the one created.

Get-AzureRMCognitiveServicesAccount

But if we'd like to re-create this, we can store and grab the properties by targeting the name of the account and the resource group it's stored within. We will place this within the object called $Account.

$Account=Get-AzureRmCognitiveServicesAccount -Name 'HSG-CognitiveService' -ResourceGroupName 'HSG-ResourceGroup'

We'll need some data from this object we captured, to rebuild or duplicate this process in the future.

To begin with, let's obtain the first two: the Location and the AccountType. The location should not be mistaken for an Azure datacenter location.

We can obtain this from the object we created earlier, called $Account. It is stored within the property called Location, and its value is global.

$Location=$Account.Location

An additional value we'll need is the type of account we created. In our case, it was a connection to Bing.Speech. You can see this attached to the AccountType property, which we will obtain from the $Account object we created earlier.

$AccountType=$Account.AccountType

Another property we'll need to obtain is the list of objects Azure uses to identify the pricing tiers. To find this, we can pipe the $Account object into the Get-AzureRMCognitiveServicesAccountSkus cmdlet. We will need to grab the values property specifically from this object, and expand it.

$Skus=$Account | Get-AzureRmCognitiveServicesAccountSkus | Select-Object -expandproperty Value

If you examine the newly created object, you'll see results that don’t seem all that useful.

However, we can do a little PowerShell magic, and expand the Sku property further by doing this:

$Skus.Sku

In our case, we only need to worry about the SKU used by the object we most recently created. Our goal is simply to duplicate the process. We can access the SKU directly again from the $Account object.

$Account.Sku

This will give us output similar to what we saw before (when we grabbed all the SKUs). In our case, to rebuild the resource, we only need the property called Name.

$Account.Sku.Name

To capture any of these properties, and avoid re-typing, we can just pipe to Set-Clipboard:

$Account.Sku.Name | Set-Clipboard

To re-create this resource from a new Azure subscription, we would just run the following script (after logging in of course):

# AzureRM Resource Group

$ResourceGroup='HSG-ResourceGroup'

# Azure Cognitive Services Account SKU

$Sku='F0'

# Azure Cognitive Services Account Type

$AccountType='Bing.Speech'

# Unique Name to our Azure Cognitive Services Account

$AccountName='HSG-AzureRMSpeech'

New-AzureRmCognitiveServicesAccount -ResourceGroupName $ResourceGroup -Name $AccountName -Type $AccountType -SkuName $Sku -Location 'global' -Force

Be sure to visit again as we start to look into how to leverage this amazing resource by using the REST API!

I invite you to follow the Scripting Guys on Twitter and Facebook. If you have any questions, send email to them at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum.

Sean Kearney, Premier Field Engineer

Enterprise Services Delivery, Secure Infrastructure


Summary: “Hey, Scripting Guy!” shows you how to use Invoke-RestMethod to read a list of entries from an RSS feed.

 How can I use Windows PowerShell to see the list of articles from an RSS feed?

      Just use the Invoke-RestMethod and provide the full path to the link to the RSS feed. Here is an example:

Invoke-RestMethod -Uri 'https://blogs.technet.microsoft.com/heyscriptingguy/rss.aspx'


Summary: This post provides a quick introduction to what the REST API is, and how it applies to Windows PowerShell.

Q: Hey, Scripting Guy!
I can see there is this cool cmdlet called Invoke-RestMethod. I've been told REST APIs are all around, and this cmdlet allows me to consume that data. Could you give me a hand getting started?
—SH

A: Hello SH,
Glad to help out! I remember hearing about REST APIs the first time, thinking they might be a way to take a nap at work. Was I wrong on that one!

What a "REST API" is at the most BASIC level is really just a very fancy web Endpoint. You could, if you were really creative, type in everything you need to connect to one in your browser. But that wouldn't be a very productive use of time.

What REST stands for is "Representational State Transfer." It's a connectionless protocol, which means it shouldn't care if there is a temporary break in the internet.

You can connect, ask it a question, and even in some cases send data. It will think about that question and can return content back (if so designed).

Generally, when you are contacting a REST API, you will need to provide some information. You also need to understand the "buzzwords" when you're reading documentation for a REST Endpoint.

A URI or Endpoint

This will be an HTTP or HTTPS endpoint. It could be as detailed as this:
https://speech.platform.bing.com/speech/recognition/interactive/cognitiveservices/v1?language=en-US&format=detailed HTTP/1.1

Or it could be as simple as this:
https://blogs.msdn.microsoft.com/powershell/feed/

Method

In all cases, you will be providing a "method." This is similar to the verb in PowerShell. With REST, there are a few pretty common ones like PUT, GET, or POST. There are others like DELETE and PATCH. Which method you use is defined by the documentation of the owner of the REST API.

Authentication

Some REST APIs will not require authentication. A weather one might be an example, since no critical data is passing over the wires.

A REST API hosted by a Human Resources application would more than likely require authentication. It would need to know who is accessing that data, as part of its access-control mechanism.

Authentication could be a regular authentication pop-up for an ID and password. It could also be something like an access token: a temporary key that is generated initially, and used for short-term access.

Headers and the body

Headers and the body contain parameters and data we need to send up to the API. A good example of a header parameter might be the UserAgent string to identify your browser to the API. The body could be the raw data you need sent to a Translation API.

Knowing how these values can be consumed by Windows PowerShell, and how you can find which ones to use, is the trick to using a REST API.

For some excellent examples that we are going to work with in upcoming articles, see the Azure Cognitive Services REST API.

When we are building values for a header in PowerShell for Invoke-RestMethod, the format will look like this for the most part:
@{'Valuename' = 'SomeValue' }

An example you will see early on is passing the header needed for the authentication component of the REST API. It will look like this:
$Header=@{'Ocp-Apim-Subscription-Key' = $APIKey }

Or, a more complex one would look like this:
$Header=@{ `
'Content-Type' = 'application/ssml+xml'; `
'X-Microsoft-OutputFormat' = $AudioOutputType; `
'X-Search-AppId' = $XSearchAppId; `
'X-Search-ClientId' = $XSearchClientId; `
'Authorization' = $AccessToken `
}

Another hint you can use to learn what a REST method wants is the example "Responses" documented for REST APIs. Take a look at the following example:
POST /synthesize HTTP/1.1
Host: speech.platform.bing.com

X-Microsoft-OutputFormat: riff-8khz-8bit-mono-mulaw
Content-Type: application/ssml+xml
Content-Length: 197
Authorization: Bearer [Base64 access_token]

<speak version='1.0' xml:lang='en-US'><voice xml:lang='en-US' xml:gender='Female' name='Microsoft Server Speech Text to Speech Voice (en-US, ZiraRUS)'>Microsoft Bing Voice Output API</voice></speak>

Reading down line by line, you can see this particular operation is calling for a "POST" method. If you read the documentation on this particular function, you would notice that Content-Type is an actual value that must be supplied, as is X-Microsoft-OutputFormat.

Over the next few articles, we will be using PowerShell to consume the Azure Cognitive Services Text to Speech API, by using Invoke-RestMethod. My hope is that not only will you learn something cool, but you'll have a bit of fun having Azure talk for you.

Stay tuned until next time, when we look at Azure Cognitive Services and getting some basic authentication happening for our little project.

I invite you to follow “Hey, Scripting Guy!” on Twitter and Facebook. If you have any questions, send email to them at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum.

Sean Kearney, Premier Field Engineer

Enterprise Services Delivery, Secure Infrastructure


Summary: Learn how to configure and use cross-platform PowerShell remoting in PowerShell Core.

I’m Christoph Bergmeister, a London-based full stack .NET software developer with a passion for DevOps. To enable product testing of .NET Core apps across non-Windows machines in CI, I was one of the first people to use the new cross-platform remoting capabilities based on an integration of OpenSSH with PowerShell Core. In the early days, setup was very tedious, and as part of my journey I had to experiment a lot to get everything working together nicely. Today, I want to share with you some of what I learned.

Introduction

Some of you might have heard that the next generation of PowerShell is cross-platform. Currently it is known as PowerShell Core, has a version number of 6, and is available as a Beta on GitHub. At the moment, it offers a subset of cmdlet coverage compared to Windows PowerShell. But Microsoft has shifted their development effort towards PowerShell Core, and therefore at least the engine is already a superset. As part of making it cross-platform, the goal is also to allow remoting from any operating system to any operating system, by using a similar syntax and experience of using PSSessions and the Invoke-Command cmdlet. I have used this to create a cross-platform CI testing system. It executes PowerShell deployment scripts in an agnostic way, against remote machines that can be either Windows or Linux. I will showcase what is needed to wire everything up. Disclaimer: Although I learned quite a bit about OpenSSH and how it works, I am no expert, and all I will show you is how to configure it such that it works. I welcome comments on my setup procedure.

 Configure PowerShell remoting

This example shows how to configure remoting from a Windows client to a Linux host, which is the most common scenario. The setup is similar in other configurations of Windows/Linux as a client/host.

Apart from installing PowerShell Core on the client and host machine, we also need to install OpenSSH on both machines. OpenSSH on Linux can be installed on Ubuntu/Debian machines as ‘sudo apt-get install openssh-server openssh-client’ or on RHEL/CentOS/Fedora using ‘yum -y install openssh-server openssh-client’. On Windows, the PowerShell team has created a port named Win32-OpenSSH, which is in pre-release state as well. See the detailed instructions here or here. (For example, you can use Chocolatey, although Chocolatey is a third-party tool, not officially supported by Microsoft.) When I did it the first time, I followed the whole manual process to understand the components better. But if you just want to install everything that you probably need, the following chocolatey command should do:

choco install -y openssh -params '"/SSHServerFeature /SSHAgentFeature"'

Now we still need to configure OpenSSH on the client and host side, by using RSA key based authentication.

Edit the file ‘sshd_config’ as an Administrator in the OpenSSH installation folder (which is something like ‘C:\Program Files\OpenSSH-Win64’). Uncomment the following lines (by removing the hash character):

  • RSAAuthentication yes
  • PubkeyAuthentication yes
  • PasswordAuthentication yes

Also add PowerShell Core as a subsystem in sshd_config by adding the following line (you can get the path to your PowerShell Core executable by using Resolve-Path "$($env:ProgramFiles)\PowerShell\*\*.exe"):

Subsystem powershell C:\Program Files\PowerShell\6.0.0-beta.9\pwsh.exe -sshs -NoLogo -NoProfile

Then, restart the sshd process (that is, the ssh daemon):

Restart-Service sshd

Now we need to generate a pair of RSA keys, as follows:

ssh-keygen -t rsa -f ReplaceThisWithYourDesiredRsaKeyFileName

This generates two files: one with the ending ‘.pub’, and one without. The former is the public key that you will need to distribute, and the latter is the private key.

On the remote Linux machine, you need to configure OpenSSH as well. Edit the config file /etc/ssh/sshd_config, and, similar to the above, enable the three authentication methods (PasswordAuthentication, RSAAuthentication, and PubkeyAuthentication). Adding the subsystem has a slightly different syntax:

Subsystem powershell /usr/bin/pwsh -sshs -NoLogo -NoProfile

Then append the content of the public key that you generated before to the .ssh/authorized_keys file, and optionally create a folder and set the correct permissions. The following lines take care of everything, and all you need to do is insert the path to your public key file.

mkdir -p .ssh

chmod 700 .ssh

cat PathToPublicKeyOfSpecificWindowsMachineToAllowPasswordLessRemoting.pub >> .ssh/authorized_keys

chmod 640 .ssh/authorized_keys

sudo service sshd restart

Now open PowerShell Core, and let’s test remoting the first time by using the new parameter set of ‘Invoke-Command’ for OpenSSH remoting:

Invoke-Command -ScriptBlock { "Hello from $(hostname)" } -UserName $remoteMachineLogonUserName -HostName $IpAddressOfRemoteMachine -KeyFilePath $PathToPrivateRsaKeyFile

The first time you run this command, you will be prompted to confirm that you trust the connection. Choose ‘yes’, and this will add the connection to the known_hosts file. Should your remoting client get locked down after the first configuration, you can make it add a new machine to the known_hosts file via the command line, by using:

ssh -o StrictHostKeyChecking=no username@hostname

You will have noticed that you also needed to specify the full path to the private RSA key, which is a bit annoying. We can get rid of that parameter, however, by using:

ssh-add.exe $PathToPrivateRsaKeyFile

One important note is that this command and the RSA key file generation command have to be executed as the user who will execute the PowerShell remoting commands. That is, if you want your co-workers or the build agent account to be able to use PowerShell OpenSSH remoting, you need to configure the public and private keys both on the client and host side for every user.

If you want to set up remoting in other configurations of Windows/Linux as client/host, the process is very similar. There is a lot of documentation already out there, especially on the Linux side.

Wrap up OpenSSH remoting in Windows PowerShell

Now that we solved the remoting problem, let’s write a wrapper so that we can use PowerShell Core from Windows PowerShell, which will run on the build agent. The first problem to be solved is hopping into PowerShell Core from a Windows PowerShell task on the Windows build agent:

<#

.Synopsis

Looks for the latest pre-release installation of PS6, starts it as a new process and passes the scriptblock to be executed.

.DESCRIPTION

The returned result is an output string because it is a different process. Note that you can only pass in the string value of variables but not the objects themselves.

In order to use variables in the passed-in scriptblock, use [scriptblock]::Create("Write-Output $stringVariablefromouterScope; `$variableToBeDefinedHere = 'myvalue'; Write-Host `$variableToBeDefinedHere")

.EXAMPLE

Invoke-CommandInNewPowerShell6Process ([scriptblock]::Create("Write-Output $stringVariablefromouterScope; `$variableToBeDefinedHere = 'myvalue'; Write-Host `$variableToBeDefinedHere"))

#>

Function Invoke-CommandInNewPowerShell6Process

{

[CmdletBinding()]

Param

(

[Parameter(Mandatory=$true)]

[scriptblock]$ScriptBlock,

[Parameter(Mandatory=$false)]

$WorkingDirectory

)

$powerShell6 = Resolve-path "$env:ProgramFiles\PowerShell\*\*.exe" | Sort-Object -Descending | Select-Object -First 1 -ExpandProperty Path

$psi = New-object System.Diagnostics.ProcessStartInfo

$psi.CreateNoWindow = $true

$psi.UseShellExecute = $false

$psi.RedirectStandardOutput = $true

$psi.RedirectStandardError = $true

$psi.FileName = $powerShell6

$psi.WorkingDirectory = $WorkingDirectory

# To pass double quotes correctly when using ProcessStartInfo, one needs to replace double quotes with 3 double quotes. See: https://msdn.microsoft.com/en-us/library/system.diagnostics.processstartinfo.arguments(v=vs.110).aspx

$ScriptBlock = [scriptblock]::Create($ScriptBlock.ToString().Replace("`"", "`"`"`""))

if ($powerShell6.contains('6.0.0-alpha'))

{

$psi.Arguments = $ScriptBlock

}

else

{

$psi.Arguments = "-noprofile -command & {$ScriptBlock}"

}

$process = New-Object System.Diagnostics.Process

$process.StartInfo = $psi

Write-Verbose "Invoking PowerShell 6 $powerShell6 with scriptblock $ScriptBlock"

# Creating string builders to store stdout and stderr.

$stdOutBuilder = New-Object -TypeName System.Text.StringBuilder

$stdErrBuilder = New-Object -TypeName System.Text.StringBuilder

# Adding event handlers for stdout and stderr.

$eventHandler = {

if (! [String]::IsNullOrEmpty($EventArgs.Data)) {

$Event.MessageData.AppendLine($EventArgs.Data)

}

}

$stdOutEvent = Register-ObjectEvent -InputObject $process `

-Action $eventHandler -EventName 'OutputDataReceived' `

-MessageData $stdOutBuilder

$stdErrEvent = Register-ObjectEvent -InputObject $process `

-Action $eventHandler -EventName 'ErrorDataReceived' `

-MessageData $stdErrBuilder

[void]$process.Start()

# begin reading stdout and stderr asynchronously to avoid deadlocks: https://msdn.microsoft.com/en-us/library/system.diagnostics.process.standardoutput%28v=vs.110%29.aspx?f=255&MSPPError=-2147217396

$process.BeginOutputReadLine()

$process.BeginErrorReadLine()

$process.WaitForExit()

Unregister-Event -SourceIdentifier $stdOutEvent.Name

Unregister-Event -SourceIdentifier $stdErrEvent.Name

$stdOutput = $stdOutBuilder.ToString().TrimEnd("`r", "`n"); # remove last newline in case only one string/line gets returned

$stdError = $stdErrBuilder.ToString()

Write-Verbose "StandardOutput:"

Write-Output $stdOutput

If (![string]::IsNullOrWhiteSpace($stdError))

{

# 'Continue' is the default error preference

If ($ErrorActionPreference -ne [System.Management.Automation.ActionPreference]::Continue)

{

Write-Output "StandardError (suppressed due to ActionPreference $ErrorActionPreference): $stdError"

}

else

{

Write-Error "StandardError: $stdError"

}

}

Write-Verbose "PowerShell 6 invocation finished"

}

The function above is complex because it uses the ProcessStartInfo .NET class, to be able to retrieve stderr and stdout without deadlocks, and to pass double quotes to it correctly. I decided not to use Start-Process, because this cmdlet writes to disk for capturing stderr and stdout.
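
For example, a quick smoke test of the wrapper is to ask the new process for its version:

Invoke-CommandInNewPowerShell6Process -ScriptBlock { $PSVersionTable.PSVersion.ToString() }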

Execute platform-specific commands

Using PowerShell Core remoting, we can now start writing PowerShell code that can be executed from any platform on any other platform. You don't even need to know what the remote platform is! It’s like Xamarin for PowerShell. However, sometimes you will want to do something very specific on a certain platform (for example, I decided to fall back to WinRM-based remoting for Windows hosts, but also needed to execute commands as ‘sudo’ on Linux). So, I first needed to figure out what type of platform the remote machine is, which I did by using TTL (TimeToLive) values. It might not be the ideal method, but it worked reliably for me and was fast to implement. It is based on the fact that Linux systems have a default TTL of around 64, while Windows has a default TTL of around 128. It should work for most modern and commonly used operating systems, but I am sure there are special cases where it does not. So just experiment to see what works for you.

Enum OS

{

Linux = 1

Windows = 2

}

Function Get-OperatingSystemOfRemoteMachine

{

[CmdletBinding()]

Param

(

$remoteHost

)

[int]$responseTimeToLive = Test-Connection $remoteHost -Count 1 | Select-Object -ExpandProperty ResponseTimeToLive

$os = [System.math]::Round($responseTimeToLive/64) # TTL values are not 100% accurate -> round to get a range of +/-32

if($os -eq 1) #Linux (TTL should be around 64)

{

return [OS]::Linux

}

elseif($os -eq 2) #Windows (TTL should be around 128)

{

return [OS]::Windows

}

else

{

Throw "OS of remote machine $remoteHost could not be determined by TTL value. TTL value was: $responseTimeToLive"

}

}
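
Used as a branch, this might look like the following (the host name is just an example):

if ((Get-OperatingSystemOfRemoteMachine -remoteHost 'testvm01') -eq [OS]::Linux)
{
# Create an OpenSSH-based PSSession here
}
else
{
# Fall back to WinRM-based remoting here
}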

Execute commands as sudo

Armed with this knowledge, we can now make platform-specific decisions, and, for example, build up our scriptblocks. But how can we execute sudo commands? PowerShell Core itself supports native Linux commands when executed locally, but executing commands by using sudo rights remotely is not fully baked yet (see the tracking issue). So, putting ‘sudo whoami’ in your ScriptBlock will give you an error. But I found a workaround, which is based on the fact that the sudo password can be piped into sudo using the -S option. Therefore, the following command works, executed remotely:

echo InsertSudoPasswordHere | sudo -S whoami

Yes, you need to be careful about security here, but depending on your use case, this might be OK.

Practical tips

Most of the remoting is based on scriptblocks. You can inject client-side variables (as strings) into a scriptblock by using the scriptblock constructor, but take care to escape the dollar sign of any variable that should be evaluated on the remote side:

[scriptblock]::Create("Write-Host $variableNameThatIsDefinedOnTheClient; `$meaningOfTheUniverseAndEverything = 40+2; Write-Host `$meaningOfTheUniverseAndEverything")

Should your code get more complex, then I suggest defining a PowerShell function that takes a PSSession as an argument. This is because you can also create a PSSession by using the new parameter set shown above. The idea is that all the scriptblock does is re-import the necessary modules, and then execute a top-level function that takes a PSSession:

$myscriptBlock = [scriptblock]::Create("Import-Module $FullPathToMyRequiredModule; Invoke-MyCommand -PSSession `$session")

$scriptBlockToCreateSession = [scriptblock]::Create("`$VerbosePreference = '$VerbosePreference'; `$session = New-PSSession -HostName $HostName -UserName $UserName")

$scriptBlockMain = [scriptblock]::Create("$scriptBlockToCreateSession; Invoke-Command -ScriptBlock { $ScriptBlock } -Session `$session;")

The above example also shows how to correctly propagate the $VerbosePreference, which Invoke-Command currently does not do (see this GitHub issue for tracking).

In our builds, we need to copy our deliverables to our system under test, but I did not want the deployment/installation scripts to be platform specific. I needed to solve problems such as finding a common path. I sniff the home directory, and then create the path on the remote machine:

$homeDirectoryOnRemoteMachine = Invoke-Command -Session $Session -ScriptBlock { (Get-Location).Path }

$destinationPathLocalToRemoteMachine = [System.IO.Path]::Combine($homeDirectoryOnRemoteMachine, $FolderNameOnRemoteMachine)

Conclusion

We have seen several useful pieces that you can wire together for your needs, which could be:

  • Setting up OpenSSH remoting, without passwords, to be able to use it for CI purposes, for example.
  • Calling PowerShell Core from Windows PowerShell. This could also be used for CI machines, for example, or for convenience to do cross-platform remoting from Windows PowerShell.
  • Determining the operating system type of a remote machine to decide whether an OpenSSH or WinRM PSSession should be created. I have used this to write an Invoke-CommandCrossPlatform cmdlet that also wraps the complex logic of concatenating various scriptblocks.
  • Overcoming current limitations of OpenSSH remoting, to execute remote commands as sudo.

If you have any questions, suggestions, or want to share your experience, comment below, or feel free to contact me.

Christoph Bergmeister, guest blogger
