To be fair, Python's REPL mode lets you explore objects pretty easily. But since PowerShell was my first language, I often find myself missing a similar experience.

P.S. - I do know that the PowerShell language spec picked up quite a few things from Python. Python is also my go-to language on the Linux platform.

So, back to what I miss: mostly the exploration aspects of PowerShell, e.g. Get-Member and the Format-* cmdlets. That is, until one day I sat down and wrote a few functions in Python to give me a similar experience.


Getting Members
Enter GetMember.py - not a replacement for dir() in Python, but more useful to me coming from a PowerShell background.



Below is how I use it from my REPL. Note that my function uses the inspect module, and I use the GetMember function to inspect that module itself :D





Format Object properties
Another thing I miss most from PowerShell is that you can just pipe objects into a Format-* cmdlet.

FormatList.py - not a hotshot function, but it gets the work done.



See the usage below; it makes life easy when you want to see the members (methods and properties) on an object.



Well, below are my notes on using account shared access signatures (SAS) in Azure with the Azure PowerShell modules.

Theory
Let's get the basics out of the way first.

A shared access signature is a way to delegate access to resources in a storage account, without sharing the storage account keys.

SAS gives us granular control over the delegated access by:
  • Specifying the start and expiry time.
  • Specifying the permissions granted, e.g. Read/Write/Delete.
  • Specifying the source IP address(es) the requests will originate from.
  • Specifying the protocol to be used, e.g. HTTP/HTTPS.


There are two types of SAS.
  1. Service SAS: This type of SAS delegates access to resources in a single storage service. Note - Azure storage is made of Blob, Queue, Table and File services.
  2. Account SAS: This type of SAS delegates access to resources in one or more storage services. In addition, it can also delegate access to the operations that apply to a given service.
So, in a nutshell, a SAS is a signed URI that delegates access to one or more storage resources. Note that this URI carries all of the delegation information within it.

Now the SAS can take two forms.
  1. Ad-hoc SAS: This type of SAS contains/implies all the constraints in the SAS URI e.g. start time, end time, and permissions. Both Service and Account SAS can take this form.
  2. SAS with stored access policy: A stored access policy can be used to define the above constraints e.g. start/end time and permissions on a resource container (blob container, table, queue, or file share). So when a SAS is applied on the resource container it inherits the above constraints from the stored access policy.

    Note - currently only a Service SAS can take this form.
One more important point: while creating Service SAS tokens, it is a best practice to have a stored access policy associated with the resource container, because the SAS can then simply be revoked (if needed) by deleting the stored access policy.

If you do not follow the above, then to revoke the SAS you have to regenerate the storage account key which was used to generate the SAS token.

Example: Create and use an account SAS
For this post, I will show how to create an account SAS that grants service-level API access to the Blob and File storage services, and then use it from a client to update the service properties.

Following the .NET code samples listed here, my test setup looks like below:

 +azureprep (resource group)
   \-azprepstore (storage account)
       \-testblobcontainer1 (blob container)
           \- docker.png (blob)


Create an Account SAS token

The first step is to create the account SAS token using the AzureRM and Azure (Service Management) PowerShell modules.
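The embedded snippet from the original post is not shown here, so below is a minimal sketch of this step, assuming the resource group and storage account names from the layout above and the Azure.Storage cmdlets New-AzureStorageContext / New-AzureStorageAccountSASToken; the permissions, protocol and expiry values are illustrative only, and the shape of the Get-AzureRmStorageAccountKey output differs slightly across AzureRM module versions.

# Log in first (Login-AzureRmAccount) and select the right subscription if needed
# Fetch a storage account key and build a storage context (Azure.Storage module)
$key = (Get-AzureRmStorageAccountKey -ResourceGroupName 'azureprep' -Name 'azprepstore')[0].Value
$ctx = New-AzureStorageContext -StorageAccountName 'azprepstore' -StorageAccountKey $key

# Create an account SAS token scoped to the Blob and File services,
# allowing read/write service-level operations over HTTPS for the next 8 hours
$sasToken = New-AzureStorageAccountSASToken -Service Blob,File `
                -ResourceType Service `
                -Permission 'rw' `
                -Protocol HttpsOnly `
                -StartTime (Get-Date) `
                -ExpiryTime (Get-Date).AddHours(8) `
                -Context $ctx

# The token is just a query string; hand it over to the client
$sasToken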




Use Account SAS token (created above)

Open another PowerShell console; this will act as the client. The intent here is to show that, using the SAS token alone, one can access the storage resource independently from another client.
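The client-side snippet is also not embedded, so here is a rough sketch, assuming the SAS token produced above and that Get-AzureStorageServiceProperty is available in your Azure.Storage module version:

# On the client, only the storage account name and the SAS token are needed - no keys
$sasToken  = '<paste the SAS token generated above>'
$clientCtx = New-AzureStorageContext -StorageAccountName 'azprepstore' -SasToken $sasToken

# A service-level call made with the SAS-based context, e.g. reading the Blob service properties
Get-AzureStorageServiceProperty -ServiceType Blob -Context $clientCtx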





Hope this is useful.

References:

Using Shared access signatures

Create and use an account SAS (.NET)
Problem
We have a problem statement at hand which requires the field engineer to validate that any engineered solution, when deployed at a customer site, is running the supported versions of the firmware and drivers.



Background
Well, we already have Pester tests written and placed inside a validation kit, which test whether a deployment follows the practices outlined in our deployment guide.
So it was only natural to add the driver version validation to this kit (firmware validation will come in a future release).

We use Pester (the PowerShell unit testing framework) for the infrastructure/ops validation and PSRemotely to target all the nodes in our solution with these tests, so the code samples in this post correspond to that.



Solution
Where to store the supported version info?
First of all, we decouple the environment details (inputs) to our validation kit into a file named EnvironmentConfigData.psd1. This file follows the PowerShell DSC-style configuration data rules.
Inside this configuration data, we maintain a list of all supported versions under the PnPDeviceMetadata key. See the sample EnvironmentConfigData.psd1 below:

@{
    AllNodes = @(
        @{
            # common node information hashtable
            NodeName = '*'; # do not edit
            Features = @('Hyper-V','Failover-Clustering', 'Data-Center-Bridging') # Do not edit
            SETTeamName = 'S2DSwitch'; # <Edit> name of the SET team
            PnPDeviceMetadata = @( # do not edit, PNPDevice metadata, used for driver version validation
                @{
                    Class = 'SCSIAdapter' #PNP device class
                    FriendlyName = 'Dell HBA330 Mini' # PNP device friendly name
                    Versions = @('2.51.15.0') # PnP device driver supported versions. Add more values if applicable
                },
                @{
                    Class = 'System'
                    FriendlyName = '*chipset*'
                    Versions = @('10.1.2.85') #Add more values if applicable
                },
                @{
                    Class = 'Net'
                    FriendlyName = '*Mellanox*'
                    Versions = @('5.35.12978.0','1.50.15998.0') # CX3,CX4 (Dell Q2 versions) #Add more values if applicable
                }
            )
        },
        @{
            # Individual node information hashtable
            NodeName = 'Node1'
        }
   )
}

Now this metadata inside the configuration file serves as the source of truth for the supported versions in our solution.
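As an aside, if you want to eyeball this metadata outside of a test run, a quick way (on WMF 5.0 and above) is to load the .psd1 with Import-PowerShellDataFile; a small sketch below, assuming the file sits in the current directory:

# Load the environment configuration data and list the supported driver versions
$configData = Import-PowerShellDataFile -Path .\EnvironmentConfigData.psd1
$commonNode = $configData.AllNodes.Where({$PSItem.NodeName -eq '*'})
$commonNode.PnPDeviceMetadata | ForEach-Object {
    '{0} ({1}) -> {2}' -f $PSItem.FriendlyName, $PSItem.Class, ($PSItem.Versions -join ', ')
}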


Test the node
The above metadata is used by a generic Pester test block, shown below, which runs on each node in our solution.

Note that the $Node variable below is a placeholder for all the information specific to a node in our solution, and is generated by the PSRemotely framework after reading the above environment configuration data:














PSRemotely -Path $PSScriptRoot\EnvironmentConfigData.psd1 {
    Node $AllNodes.NodeName {
       Describe "PnP Device driver version validation" {
            $Node.PnPDeviceMetadata.Foreach({
                Context "PnP Device with class $($PSitem.Class) and Friendlyname $($PSitem.FriendlyName) driver version check" {
                    $PnPDevices =  Get-PnpDevice -Class $PSitem.Class -FriendlyName $PSitem.FriendlyName
                    Foreach ($PnPDevice in $PnPDevices) {
                        $DriverVersion = [version]($PnPDevice | Get-PnpDeviceProperty -KeyName DEVPKEY_Device_DriverVersion | Select-Object -ExpandProperty Data)
                
                        It "[PnP Device $($PnPDevice.FriendlyName) driver version check] `
                           Should have only the supported versions v$($PSitem.versions)" {
                            $Driverversion -in $PSitem.versions | Should Be $True
                        }
                    }
                }
            })
        }
    }
}



Decoupling the supported driver versions from the test code and making the PnP device driver version validation a generic Pester test lets us quickly add validation for new components shipped with our solution, e.g. S2D, CPS etc.


We are also working on extending this to validate firmware versions.



What is environment configuration data?
Well, you might have heard the term 'configuration data' used with PowerShell DSC. The case for using configuration data is that all the input arguments are abstracted away from the code being written, so that the configuration data can be generated on the fly and passed to the underlying scripts or a framework like DSC.

For some of our solutions deployed at customer sites, we require a lot of input parameters, e.g. different network subnets for the management and storage networks, AD/DNS information etc.

Adding all these parameters to our input-argument collector script was an error-prone and tedious task, since there were far too many input arguments, so having a file to specify all the input arguments became the preferred method.

This also helped us while troubleshooting deployments, since a local copy of the input arguments always persisted.




.PSD1 vs .JSON ?

We started with JSON files first but later realized the below salient points of using .psd1 files:


  1. .PSD1 files are native and first-class citizens in PowerShell.
  2. The ability to put comments in .psd1 files.
  3. PowerShell ISE (native to the Windows Server OS) is able to edit .psd1 files with ease.

Below I would like to take a moment to expand a bit more on the above points.

1) .PSD1 file support in PowerShell
There used to be an ugly way of reading .psd1 files via the Import-LocalizedData cmdlet, but since WMF 5.0 a cmdlet named Import-PowerShellDataFile has been available. Read the PowerShell Magazine article by Ravi here.
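To illustrate the difference, a small sketch below; the file paths are hypothetical:

# Old, ugly way - (ab)using Import-LocalizedData to read a .psd1 file
Import-LocalizedData -BindingVariable ConfigData `
    -BaseDirectory C:\Deploy `
    -FileName EnvironmentConfigData.psd1

# WMF 5.0+ way - purpose-built cmdlet, returns a hashtable
$ConfigData = Import-PowerShellDataFile -Path C:\Deploy\EnvironmentConfigData.psd1

# JSON, for comparison, comes back as a PSCustomObject instead of a hashtable
$JsonData = Get-Content -Path C:\Deploy\EnvironmentConfigData.json -Raw | ConvertFrom-Json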

2) .PSD1 files support inline comments
In a .psd1 file you can place inline comments describing what an input field represents. Along with this, we also use comments to indicate whether a field needs to be modified in our .psd1 files.

3) PowerShell ISE support for editing .PSD1 files
Since our engineered solutions are mostly run from a Windows box, PowerShell ISE is present out of the box, which allows deployment engineers to edit the files and highlights any syntax errors.




All that being said, we do support consuming .JSON files for input in our engineered solutions too, since both the .psd1 and .json file formats are meant for data storage and consuming them is straightforward in PowerShell; e.g. our PSRemotely module supports passing configuration data in both .psd1 and .json file formats.
Why should I use WMI, when there is a PowerShell module available for Configuration Manager (CM Module) already?

Well, the cmdlets behind the scenes interact with the WMI layer, and knowing which WMI classes the corresponding cmdlets work with can help in the future by:

  1. Letting you switch to native WMI calls when the CM cmdlets fail for some reason (probably a bug in the CM module).
  2. Making your scripts more efficient by optimizing the WMI (WQL) queries; the cmdlets query all the properties of an object (select *), whereas you can select only the ones you need.
  3. Lastly, removing the dependency on the CM module; you can run these automation scripts from a machine that does not have the CM console installed (which the CM module requires).
Moreover, ConfigMgr uses WMI extensively; you already have that knowledge, so leveraging it with PowerShell shouldn't surprise you. This post assumes you have been working with the CM cmdlets (and are already versed with PowerShell), know where the WMI namespace for ConfigMgr resides, and know the basics of WMI.


Example Problem:
I will use one of the problems people have been commenting about a lot on the below post:
PowerShell + SCCM 2012 R2 : Create an Application (from MSI) & Deploy it

What they want to do is specify multiple app categories for an application while creating these apps using PowerShell.

This seemed trivial at first, as the help for the Set-CMApplication cmdlet (which is used to set the app category for an application) shows it accepts a string array. It is probably a bug in the cmdlet (this seems to be working in the most recent CM module). See the comment screenshot from the post below:



This is strange.
So what do you do now? Don't worry, you can always fall back to using PowerShell and WMI until the bug is fixed (it seems to be fixed in the latest version).

So what I am going to show you now is:

  1. How to start exploring the associated WMI class.
  2. How to read the documentation.
  3. How to use PowerShell + WMI to automate it.


For the above scenario,

I have an application named Notepad++ and two application categories named "PSCreatedApps" and "OpenSource". I want to add these two categories to the application via WMI only (remember, my CM cmdlet has a bug).
Get the WMI class name:
This shouldn't be too hard to find. In the post we were using the Set-CMApplication cmdlet to set multiple app categories, so the first and easiest way to find the WMI class you are playing with is the corresponding Get-CMApplication cmdlet (there are other ways to get this using the ConfigMgr console too - go find them).

Pipe the output of the Get-CMApplication cmdlet to Get-Member to see the WMI class you have been fiddling with all along:
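Something along these lines (the application name and site code are from this post's example environment):

# From a console with the ConfigMgr module loaded and the site drive selected (e.g. cd DEX:)
Get-CMApplication -Name 'Notepad++' | Get-Member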



The TypeName says IResultObject#SMS_Application (not a raw WMI object) because the CM cmdlets use the IResultObject interface to expose data for result objects (don't worry about that part much). SMS_Application is the WMI class here.

Another way would be to closely observe SMSProv.log while you execute the CM cmdlet; reading this log is of utmost importance when scripting against ConfigMgr.

Reading SMSProv.log takes some time and practice, but a good aid is to dump the verbose stream from the cmdlet, as it shows the WQL queries being run behind the scenes; you can then map them to SMSProv.log and understand what might be causing the failure.
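For example, something like the below captures the verbose stream to a file (the -AppCategories parameter name may differ across CM module versions, and the output path is hypothetical):

# Run the failing cmdlet with -Verbose and redirect the verbose stream (4) to a file,
# then map the WQL queries you see in it to entries in SMSProv.log
Set-CMApplication -Name 'Notepad++' -AppCategories 'PSCreatedApps','OpenSource' -Verbose 4> C:\Temp\SetCMApplication.verbose.txt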

Just to show how it is done, the first verbose stream message showing the WQL is below, along with where it shows up in the log:

The log is a mine of information, and one should invest time in interpreting it while playing with the cmdlets and WMI (even actions taken in the console show up here - a neat trick for exploration too).


Read the WMI class documentation
In the documentation for the SMS_Application class you will find that its base class is SMS_ConfigurationItemBaseClass. Base class means that the SMS_Application (child) class inherits from SMS_ConfigurationItemBaseClass, so we actually need to look at the documentation for both classes.



Also do a search on the page for all the properties having the word "Category" in them; below is a snip of all such properties from the page:

Now at this point we are looking to set a property on the application object that has something to do with the category, so only read/write properties should interest us; drop the LocalizedCategoryInstanceNames property from the list :)
Take a moment to notice that the read/write properties named CategoryInstance_UniqueIDs and PlatformCategoryInstance_UniqueIDs point us to the base class documentation (highlighted in yellow). Click on the link and it should take you to the base class, where for both properties you will see:


The documentation for CategoryInstance_UniqueIDs looks promising. Observe that the data type is a string array, which clearly means more than one category unique ID can be assigned. But how do we find these category unique instance IDs?

I am leaving the exercise of finding these via WMI only to you, and taking a shortcut here by using the Get-CMCategory cmdlet:
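Roughly like this (a sketch - verify the -CategoryType value against your CM module version):

# Fetch the unique IDs for the two app categories via the CM module (the shortcut)
Get-CMCategory -CategoryType AppCategories |
    Where-Object { $PSItem.LocalizedCategoryInstanceName -in 'PSCreatedApps','OpenSource' } |
    Select-Object -Property LocalizedCategoryInstanceName, CategoryInstance_UniqueID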

We have all the key pieces together:
  1. The class name - SMS_Application.
  2. The writable property corresponding to the categories - CategoryInstance_UniqueIDs.
  3. The category unique IDs for our two categories.

Let's get to the final phase of using WMI purely to set the two app categories on the application. Below is the same operation done via WMI which was intended to be done by the Set-CMApplication cmdlet:

# Import-Module configurationmanager
# No need for this we are using WMI. Set the default parameters for the WMI Calls ( personal preference)

$PSDefaultParameterValues = @{
    'Get-WMiObject:ComputerName'='DexSCCM'; # point the Get-WMIObject to the ConfigMgr server having WMI namespace installed
    'Get-WMiObject:NameSpace'='root/SMS/Site_DEX'; # Point to the correct WMI namespace for CM automation
}

# get the SMS_Application object instance for application Notepad++
$Application = Get-WmiObject -Query "SELECT * from SMS_Application WHERE LocalizedDisplayName = 'Notepad++' AND IsLatest = 1 AND IsHidden = 0"

# Get the UniqueIds for the categories - PSCreatedApps and OpenSource
$CategoryIDs = Get-WmiObject -Query "SELECT CategoryInstance_UniqueID FROM SMS_CategoryInstance WHERE CategoryTypeName='AppCategories' and LocalizedCategoryInstanceName in ('PSCreatedApps','OpenSource')" |
                    Select-Object -ExpandProperty CategoryInstance_UniqueID

# Let's modify the Object in the memory
$Application.CategoryInstance_UniqueIDs = $CategoryIDs

# Sync the changes to the ConfigMgr Server
$Application.Put()

Et voilà! (Check SMSProv.log too when you use this approach, to troubleshoot.)

Below is a gif showing this in action. I tried showing that all the things done via the console, the CM cmdlets or PowerShell actually interface with the WMI layer (all actions get recorded in SMSProv.log):




Looking for more ConfigMgr + PowerShell stuff? Below is a link to all my posts around the topic:
http://www.dexterposh.com/p/collection-of-all-my-configmgr.html

Resources:
My friend Stephane has a few posts on troubleshooting WMI functions and a list of things you need to know when scripting against the SCCM WMI provider.
http://powershelldistrict.com/troubleshoot-wmi-functions/

http://powershelldistrict.com/top-6-things-you-need-to-know-when-scripting-with-sccm/
This is a long overdue post (previous one here) on how to use certificates to do an automated login to Azure Resource Manager. It is not rocket science and is easy to set up, so that a certificate is used to authenticate to Azure RM automatically.


It seems the Azure docs are already up to date on how to do a few of the bits involved; please read the section 'Create service principal with a certificate' in the docs.

The process is almost the same as in the docs, except that when we do the role assignment, we assign the Contributor role definition to the service principal, since we want the ability to manage resources in Azure RM.
Also, we will author a function and add it to our profile so that PowerShell authenticates automatically to Azure RM each time it opens.
So let's begin with it:
  1. Create the self-signed certificate.

    If you are running this on Windows 8.1 (or below), then you have to use the script by MVP Vadims Podans from the gallery.


    # For OS below Windows 10, download the script and use that to generate the self-signed cert.
    Import-Module .\New-SelfSignedCertificateEx.ps1
    New-SelfSignedCertificateEx -StoreLocation CurrentUser -StoreName My -Subject "CN=AutomateLogin" -KeySpec Exchange
    $cert = Get-ChildItem -path Cert:\CurrentUser\my | where {$PSitem.Subject -eq 'CN=AutomateLogin' }

    Otherwise, if you are running Windows 10, the built-in PKI module will suffice. Note - the cert created below is marked with a non-exportable private key.

    Run below:


    $cert = New-SelfSignedCertificate -CertStoreLocation "cert:\CurrentUser\My" -Subject "CN=AutomateLogin" -KeySpec KeyExchange -KeyExportPolicy NonExportable


  2. Create the AD app and service principal

    In order to create the Azure AD application, we first need to read the raw certificate data as a base64-encoded string (the help for New-AzureRmADApplication clearly states this).


    # Get the certificate data as base64 encoded string
    $keyValue = [System.Convert]::ToBase64String($cert.GetRawCertData())

    # Create a new Azure AD application with the associated certificate
    $app = New-AzureRmADApplication -DisplayName "AutomatedCertLoginApp" -HomePage "https://www.dexterposh.com" `
            -IdentifierUris "http://www.dexterposh.com/p/about-me.html" `
            -CertValue $keyValue -EndDate $cert.NotAfter -StartDate $cert.NotBefore

    Once the Azure AD application is created, it is time to add a corresponding AD service principal. Make a note of the application ID (echoed on the host for reference), since it will be passed as a value to our function later.


    # Also create a corresponding service principal for the Azure AD application
    New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId
    Write-Host -ForeGround Cyan -Object $app.ApplicationId
    Start-Sleep -Seconds 15
  3. Role assignment to the service principal

    Now it is time to deviate from the article referenced for this post: grant Contributor access to the AD service principal created above.


    # Assign the Contributor role definition to the service prinicipal
    New-AzureRmRoleAssignment -RoleDefinitionName Contributor -ServicePrincipalName $app.ApplicationId


  4. Author the automated login function

    I am calling this function Connect-ToAzureRM. In order to author it, we need the tenant ID, which can be fetched using the Get-AzureRmSubscription cmdlet (note the TenantId field in its output).

    Below is how the function is added to my $PROFILE and invoked at the end; you will have to place your tenant ID and application ID along with the certificate subject name (created above).

    Now, this function is very crude and runs each time PowerShell opens; you might want to customize it by setting default parameter values and invoking the function on demand. You can also get more creative with this concept.


    # Author a new function, add it to $PROFILE and call it everytime PS opens
    Function Connect-ToAzureRM {
        [CmdletBinding()]
        param(
            [Parameter()]
            [String]$TenantId,

            [Parameter()]
            [String]$ApplicationId,

            [Parameter()]
            [String]$CertificateSubjectName
        )
        # first fetch the certificate from the Cert store
        $cert = Get-ChildItem -path Cert:\CurrentUser\my | where {$PSitem.Subject -eq $CertificateSubjectName }

        # now use the above cert to authenticate to Azure
        Add-AzureRmAccount -ServicePrincipal -CertificateThumbprint $cert.Thumbprint -ApplicationId $ApplicationId -TenantId $TenantId
    }
    Connect-ToAzureRM -TenantId '<place your tenant ID>' -ApplicationId '<place your app ID>' -CertificateSubjectName CN=AutomateLogin
Do you remember ?
In the older Azure Service Management model, we had an option to import the publish settings file and use the certificate for authenticating. It saved a lot of hassle.

That method is being deprecated now, but we have something better which we can use in the newer ARM model.

BTW, for the record, I find it really annoying to enter credentials each time I want to quickly try something out on Azure, so I have been using two techniques for automated login to Azure RM.





This post is about the easier and cruder (less secure) way to set up automated login using a service principal; in this approach we store the service principal credentials (encrypted) in an XML file.
You will need the AzureRM PowerShell module installed. The whole code snippet is placed at the end of the post.

Below are the steps for creating an Azure AD app, tying it to a service principal, and using the service principal creds to do an automated login:

  1. Log in to Azure RM using the Login-AzureRmAccount cmdlet.


    # Login to the Azure Account first                                                            
    Login-AzureRMAccount



    Once done, the current context is displayed i.e. Account, TenantID, SubscriptionID etc.

  2. If you have multiple subscriptions, then you need to select the Azure subscription in which you want to create the service principal account and set up the automated login. If you only have one subscription, you can skip this step.


    # Select the right Subscription in which the Azure AD application and Service Principal are to be created
    Get-AzureRmSubscription | Out-GridView -OutputMode Single -Title 'Select the Azure Subscription!' | Set-AzureRmContext

  3. Now we need to create an Azure AD application; this creates a directory services record that identifies an application to Azure AD.

    The homepage & identifier URI can be any valid URL. Note that the identifier URI or the application ID is used as the username while building the credentials for the automated login later, along with the password specified in this step.


    # Create the Azure AD App now, this is the directory services record which identifies an application to AAD
    $CMDistrictAADApp = New-AzureRmADApplication -DisplayName "AppForCMDistrict" `
                            -HomePage "http://www.dexterposh.com" `
                            -IdentifierUris "http://www.dexterposh.com/p/about-me.html" `
                            -Password "P@ssW0rd#1234"
  4. Create a service principal in Azure AD; this is an instance of an application in Azure AD which needs access to other resources. In plain words, an application manifests itself as a service principal in the directory in order to gain access to other resources.


    # Create a Service Principal in Azure AD                                                          
    New-AzureRmADServicePrincipal -ApplicationId $CMDistrictAADApp.ApplicationID
  5. Using RBAC, grant the above service principal access to a resource group (CMDistrict_RG in this case).

    Note that since this is a less secure approach, you can be extra careful and give limited access to a resource group rather than the entire subscription.


    # Grant access to Service Principal for accessing resources in my CMDistrict RG
    New-AzureRmRoleAssignment -RoleDefinitionName Contributor `
        -ServicePrincipalName $CMDistrictAADApp.ApplicationId `
        -ResourceGroupName CMDistrict_RG
  6. Now it is time to save the service principal credentials locally; the easiest way to do this is to use Get-Credential and then pipe the object to Export-Clixml.


    # Export creds to disk (encrypted using DAPI)
    Get-Credential -UserName $CMDistrictAADApp.ApplicationId -Message 'Enter App password' | 
        Export-CLixml -Path "$(Split-Path -path $profile -Parent)\CMDistrictAADApp.xml"

  7. Use the exported credentials the next time you want to quickly do something in the resource group. For example, I use a function to start/stop the VMs on demand, so before I run the function I import the creds and authenticate.
    Check below that I create the creds using Import-Clixml and then use those with Add-AzureRmAccount; the -ServicePrincipal switch marks that a service principal account is authenticating.
    Note - you can take the below lines, hard-code your tenant ID (get it using Get-AzureRmContext or Get-AzureRmSubscription), and put this in your profile or wrap it in a function.


    # Authenticate now using the new Service Principal
    $cred = Import-Clixml -Path "$(Split-Path -path $profile -Parent)\CMDistrictAADApp.xml"  

    # Authenticate using the Service Principal now
    Add-AzureRmAccount -ServicePrincipal -Credential $cred -TenantId '<Place your tenant id here>'


BTW, if you are wondering why we can't simply create a credential object for a user and pass it to Add-AzureRmAccount (Login-AzureRmAccount is an alias for it), the MSFT documentation notes that only organizational accounts support that.

So one would have to go and create a user in Azure AD and use that account here (another way of doing this), but in the next post you will see that with service principals we can have certificate-based logins too (more secure).

Watch out for the upcoming article on that subject.



Below is the entire PowerShell code snippet :

#region Automated login using the Service Principal
# Login to the Azure Account first
Login-AzureRMAccount

# Select the right Subscription in which the Azure AD application and Service Principal are to be created
Get-AzureRmSubscription | Out-GridView -OutputMode Single -Title 'Select the Azure Subscription!' | Set-AzureRmContext

# Create the Azure AD App now, this is the directory services record which identifies an application to AAD
$CMDistrictAADApp = New-AzureRmADApplication -DisplayName "AppForCMDistrict" `
                        -HomePage "http://www.dexterposh.com" `
                        -IdentifierUris "http://www.dexterposh.com/p/about-me.html" `
                        -Password "Passw0rd#1234"

# store the applicationID for the above AD App created
$Appid = $CMDistrictAADApp | Select -ExpandProperty ApplicationID

#- A Service Principal is an instance of an application in a directory that needs to access other resources.
# Create a Service Principal in Azure AD
New-AzureRmADServicePrincipal -ApplicationId $CMDistrictAADApp.ApplicationID

# Grant access to Service Principal for accessing resources in my CMDistrict RG
New-AzureRmRoleAssignment -RoleDefinitionName Contributor `
    -ServicePrincipalName $CMDistrictAADApp.ApplicationId `
    -ResourceGroupName CMDistrict_RG

# Export creds to disk (encrypted using DAPI)
Get-Credential -UserName $CMDistrictAADApp.ApplicationId -Message 'Enter App password' |
    Export-CLixml -Path "$(Split-Path -path $profile -Parent)\CMDistrictAADApp.xml"

# Authenticate now using the new Service Principal
$cred = Import-Clixml -Path "$(Split-Path -path $profile -Parent)\CMDistrictAADApp.xml"

# Authenticate using the Service Principal now
Add-AzureRmAccount -ServicePrincipal -Credential $cred -TenantId '<Place your tenant Id here>'
#endregion
Problem

Do you have a central network share where you store all your scripts or PowerShell modules?
What happens if you try to run a script from that network share? Or if you have local scripts which invoke scripts or import PowerShell modules stored on this network share?

Well, you would see a security warning like the one below (note - I have set the execution policy to 'Unrestricted', not 'Bypass', here):
Run a .ps1 from the network share


Well, this is similar to the warning you get when you download scripts from the Internet.
As the message says, run the Unblock-File cmdlet to unblock the script and then run it. Let's try it.
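Something like the below, with a hypothetical share path:

# Unblock the script sitting on the share and invoke it again
Unblock-File -Path '\\dexserver\Scripts\Get-Something.ps1'
& '\\dexserver\Scripts\Get-Something.ps1'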




Using Unblock-File does not help; invoking the script still presents the same security warning.


Import a PowerShell module from the network share
You would see a similar warning if you try to import a PowerShell module from the network share using Import-Module.




Use the network resources in your local scripts
So if you have scripts which try to import or reference a module or script placed on this network share, then again the security warning would be displayed each time they run. Not good for your unattended automation workflows.



Solution

So you get an idea of the problem at hand. The solution is to trust the network location for files, which can be done manually using IE.
Old manual way using IE


Below is an animated gif showing this in action.




Trust network share using PowerShell
Well, this is no rocket science: the above method of using IE to trust a network share actually just writes to the registry. So below is a quick function which adds the required registry entries:
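The original gist is not embedded here, so below is a minimal sketch of such a function, assuming the standard ZoneMap location under HKCU (zone 1 = Local intranet); the server name is hypothetical, and on servers with IE Enhanced Security Configuration the matching EscDomains key may need the same entry.

Function Add-TrustedNetworkShare {
    [CmdletBinding()]
    param(
        # NetBIOS name/FQDN of the file server hosting the share, e.g. 'dexserver'
        [Parameter(Mandatory)]
        [String]$ServerName
    )
    # IE/Windows zone mappings live here; a 'file' value of 1 under the server's key
    # places file:// access to that server in the Local intranet zone
    $domainsKey = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains'
    $serverKey  = Join-Path -Path $domainsKey -ChildPath $ServerName

    if (-not (Test-Path -Path $serverKey)) {
        $null = New-Item -Path $serverKey -Force
    }
    # 1 = Local intranet zone
    Set-ItemProperty -Path $serverKey -Name 'file' -Value 1 -Type DWord
}

# Usage (hypothetical server name)
Add-TrustedNetworkShare -ServerName 'dexserver'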



Further Reading

https://blogs.msdn.microsoft.com/permanenttan/2008/06/05/giving-full-trust-to-a-network-share/


Continuous Integration, huh ?

Simply put, CI is running all the tests (against your code, system etc.) frequently in order to validate the code and see that everything is integrating well. For example: if I check in code, then CI runs all the tests to see whether the commit broke anything.

Why are we doing this CI stuff anyway ?

To catch failures on a regular basis, so that they are easy to fix at an early stage.

Note
- I am a mere mortal and a follower of DevOps (a much broader term) but have started to appreciate the simplicity all these concepts bring in. Don't mistake me for an expert here ;)
A little background on why I explored using Jenkins as the CI solution: the project I recently started working on requires me to code in Python/PowerShell, and the team already uses Jenkins for other projects in Python, Java, Ruby etc., so we needed to integrate running Pester tests from Jenkins for our PowerShell codebase.


With all the CI stuff cleared up, time to move on to the task at hand for this post.
In this post, I have a Jenkins server installed on an Azure VM. The installation is pretty straightforward; I was drafting a post on it from scratch but then stumbled across a tweet by Matthew Hodgkins, and his posts do a superb job. Check out the Resources section at the bottom for a link to his posts.

Below is the tweet :




So, moving on, this post will only revolve around integrating Pester with Jenkins.
We need to perform a few housekeeping steps to make the Pester integration easier for us.

  1. Install the PowerShell plugin & the NUnit plugin. Click on Manage Jenkins > Manage Plugins > Available tab, search for 'PowerShell' and 'NUnit' respectively, and install them:

  2. Once done, come back to the home page, click 'New Item' and create a freestyle project.

  3.  Your new project should appear in the dashboard now, hover over it and click on 'Configure'. Notice that

  4. For this post I am going to dump a PS1 file and its associated Pester tests in a directory and add a build step which runs the Pester tests (a minimal sample script and test file are sketched after this list). One can also integrate version control tools like Git, Subversion etc. with Jenkins. So let's configure our new project to use a folder, say E:\PowerShell_Project. Below is a gif showing that:

  5.  Now, on the same page, scroll down to Build steps and add a simple build action to show you a possible gotcha. Note - we added the PowerShell plugin to Jenkins to get the option to add a build step using PowerShell natively.
    Let's add a few test PS statements to it, like:
    $env:UserName
    Get-Module Pester
    Get-Location


    Note - You can use $env:PSModulePath in the above snippet (or a normal PS console) to see which folders PowerShell searches during module discovery.

  6.  Click on "Build Now" for the project to see a possible pitfall.

  7.  Below is the console output of the above build run :
    Started by user anonymous
    Building in workspace E:\PowerShell_Project
    [PowerShell_Project] $ powershell.exe "& 'C:\Windows\TEMP\hudson3182214357221040941.ps1'"
    DEXCLIENT$

    Path
    ----
    E:\PowerShell_Project


    Finished: SUCCESS
    A few important things to note here:
    • When running PowerShell code as part of a build step, be aware of which user account is being used. In my case it is the SYSTEM account (my machine name is DexClient).
    • Based on the above, check whether the module is discoverable by PowerShell; notice that Get-Module Pester in the build step returns nothing (Pester was placed in my user's Modules folder).
    • If you are using a custom workspace (step 4), the default location for the PowerShell host that runs our code (added in the build step) is set to that folder.
    • Check out how Jenkins runs the PowerShell code specified in the build step: it wraps it in a temporary .ps1 and invokes powershell.exe, as seen in the console output above.
  8. Now one can definitely configure Jenkins to handle this in a better way, but that would make this post lengthy. The quick fix here is to load the Pester module explicitly with its full path. For example: Import-Module 'C:\Users\dexterposh\WindowsPowerShell\Modules\Pester\Pester.psd1'
  9. Once you have taken care of how to load the module, you can add another build step or modify the existing one to run the Pester tests. I modified the existing build step to look like below:
    Import-Module 'C:\Users\dexterposh\WindowsPowerShell\Modules\Pester\Pester.psd1'
    Invoke-Pester -EnableExit -OutputFile PSTestresults.xml -OutputFormat NUnitXml

    Take note of the parameters used here: -OutputFile, -OutputFormat and the -EnableExit switch.
    Pester is really awesome, as it supports integrating with almost all CI solutions out there.
    Read more here.
  10.  As a last step, we add a post-build action (from the NUnit plugin) that consumes our PSTestresults.xml and publishes the test results.
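For completeness, here is a minimal, hypothetical example of the kind of script and Pester (v3-style) test file that could sit in E:\PowerShell_Project and feed the build step above:

# E:\PowerShell_Project\Get-Greeting.ps1
Function Get-Greeting {
    param([String]$Name = 'Jenkins')
    "Hello, $Name!"
}

# E:\PowerShell_Project\Get-Greeting.Tests.ps1 - picked up automatically by Invoke-Pester
. "$PSScriptRoot\Get-Greeting.ps1"

Describe 'Get-Greeting' {
    It 'greets Jenkins by default' {
        Get-Greeting | Should Be 'Hello, Jenkins!'
    }
    It 'greets the supplied name' {
        Get-Greeting -Name 'Dexter' | Should Be 'Hello, Dexter!'
    }
}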



Resources:
Matthew Hodgkins - post on installing Jenkins and automating with PowerShell
https://www.hodgkins.net.au/powershell/automating-with-jenkins-and-powershell-on-windows-part-1/

https://github.com/pester/Pester#continuous-integration-with-pester
https://wiki.jenkins-ci.org/display/JENKINS/Installing+Jenkins
Making a quick note to document a version gotcha encountered while running the puppet client on Windows.

I downloaded the latest and greatest available version of the puppet client on a Windows Server 2012 R2 box, but when running the puppet agent interactively for the first time, to generate the certificate request to the puppet master server, it blew up with the below error message.


  1. The quickest way to get all the puppet binaries accessible is the "Start Command Prompt with Puppet" shortcut.

  2. Once in the cmd prompt, run puppet_interactive. This runs the puppet agent on demand and, when run for the first time, issues a certificate request for the puppet master to sign. But this threw up the below error:

    Error: Could not request certificate: Error 400 on SERVER: The environment must be purely alphanumeric, not 'puppet-ca'



Wow! That is really descriptive about what went wrong. I was able to find a useful answer here.

It appears that I was running incompatible versions of the puppet master (v3.8.7) and client (v4.7.0).
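A quick way to confirm what is running on each end is to check the versions before going further:

# On the Windows agent (from "Start Command Prompt with Puppet")
puppet --version

# On the puppet master
puppet --version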




So I went to the puppet website, downloaded the puppet agent for v3.8.7, removed the incompatible version and installed v3.8.7. Once that was done, I ran the puppet agent again and could see the certificate request for the node showing up on the puppet master.

