I was recently featured on the Latest Shiny Podcast (@l8istsh9y) hosted by Rob Hirschfeld and Stephen Spector. I came across their podcast a while ago and listened to their last two episodes.

A few weeks ago, Rob (whom I know from his Dell days) tweeted about a probable topic for an upcoming episode.

Who wants to rant on @l8istsh9y about infrastructure as code #IaC? Seems fraught, so perfect podcast topic.

— Rob Hirschfeld (@zehicle) June 19, 2019

Having written a couple of published books on PowerShell DSC and worked in the infrastructure automation space, I hold Infrastructure as Code (IaC) close to my heart. So I jumped right in and said count me in!

We started with a discussion on IaC, but it eventually led to Rob naming what we discussed a vision for the continuously integrated data center! Indeed, that is (or should be) the goal. The rest is what you will hear in this episode of the podcast.

A Vision for the Continuous Integrated Data Center - SoundCloud (about 35 minutes)

This was a fun episode. Let me know what your thoughts are. I will certainly find some time to write about this vision and the objectives.


In the earlier parts of this series, I introduced you to the concepts and design of the Garuda framework. I demonstrated a proof-of-concept version of it at PowerShell Conference Europe.

The recording of that session is available.

Ravikanth Chaganti - Designing a distributed flexible validation framework for Test in Production - YouTube

Instead of writing another article about how the POC works, I thought it would be easier for you to see it in action.

I am working on a complete overhaul of the framework and will have a new version soon on GitHub. Stay tuned!


I have been talking to several automation engineers (for a vacant position) and realized that many women have been doing great work in the area of infrastructure automation. However, very few women have attended or spoken at the user group meetings and conferences I have attended.

While there may be many reasons for this, the organizing committee of PowerShell Conference Asia decided to invite women in tech (infrastructure automation, cloud, and DevOps) to this year's edition of our conference.

We have opened registration of intent to attend the conference. All you have to do is provide your details. We will select five registrations at random and give each a full 3-day pass to the conference at no cost. For five more, we will offer a steep discount; the organizing committee will decide the percentage.

The free or discounted entry covers the conference pass only. If you need to travel to Bangalore to attend, you must bear the travel and accommodation expenses yourself.

Registration ends on 15th August 2019. We will announce the selected registrations on 20th August 2019.

Please share this registration information and help us enable women in the infrastructure automation, cloud, and DevOps space to attend PowerShell Conference Asia 2019!

Posts in this series
  1. Distributed and Flexible Operations Validation Framework - Introduction
  2. Garuda - Architecture and Plan

In the first part of this series, I mentioned the reasoning behind starting development of a new framework for operations validation. Towards the end, I introduced Garuda, a distributed and flexible operations validation framework. Three principles drove its design: distributed, flexible, and secure.

In this part of the series, you will see the proposed architecture and my plan to implement these features.

Architecture

To support these principles, the framework needs a few moving parts. These moving parts provide the flexibility needed and give you a choice of tooling.

At a high level, there are five components in this framework.

Test Library

The test library is a collection of parameterized Pester test scripts; parameterization helps with reuse. Using the Garuda core, you can group these tests and then use the publish engine to push them to remote targets. Tags within the tests are used during test execution to control which tests in a published group run on the remote target.
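
To make this concrete, here is a minimal sketch of what a parameterized test script in the library could look like; the parameter names, tag, and threshold are illustrative, not part of the framework (Pester v4 syntax).

param (
    [Parameter(Mandatory = $true)]
    [String]
    $ComputerName,

    [Int]
    $MinimumFreeSpaceGB = 10
)

Describe 'Operating system disk' -Tag 'Storage' {
    It "has at least $MinimumFreeSpaceGB GB free on the system drive" {
        $disk = Get-CimInstance -ClassName Win32_LogicalDisk -ComputerName $ComputerName -Filter "DeviceID='C:'"
        ($disk.FreeSpace / 1GB) | Should -BeGreaterThan $MinimumFreeSpaceGB
    }
}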

Garuda Core

This is the main module that glues the remaining pieces together, and it is what you use to manage the test library. The core module parses the test library and generates parameter information for each test script, which is eventually used to generate a parameter manifest (configuration data). One of the requirements for this framework is the ability to group tests: the Garuda core lets you generate test groups from what is in the library and then generate the parameter manifest (configuration data) template for the test group you want to publish. Once the configuration data for the test group is ready, you can publish the tests to the remote targets.
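
As a rough sketch of the intended workflow, assuming cmdlet names that are placeholders rather than a published API:

# Placeholder cmdlet names; the framework is still a proof of concept
# Group a set of tests from the library
$group = New-GarudaTestGroup -Name 'SqlCluster' -Path 'C:\GarudaLibrary\Sql'

# Generate a parameter manifest (configuration data) template for the group
New-GarudaParameterManifest -TestGroup $group -Path 'C:\GarudaLibrary\SqlCluster.psd1'

# After filling in the configuration data, publish the group to remote targets
Publish-GarudaTestGroup -TestGroup $group -ConfigurationData 'C:\GarudaLibrary\SqlCluster.psd1' -ComputerName 'sql01', 'sql02'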

Publish Engine

The publish engine is responsible for several things.

This module generates the necessary configuration or deploy script that does the following:

  1. Install necessary dependent modules (for test scripts from a local repository or PowerShell Gallery)
  2. Copy test scripts from the selected test group to the remote target
  3. Copy the configuration data (sans the sensitive data) to the remote target
  4. As needed, store credentials to a credential vault on the remote target
  5. Copy the Chakra engine to the remote targets
  6. If selected, create the JEA endpoints for operators to retrieve test results from the remote targets
  7. If selected, create scheduled tasks on the remote target for recurring test script execution

Once the configuration or the deploy script is ready, the publish engine can enact it directly on the remote targets or just return the script for you to enact it yourself.

The publish engine is extensible and by default will support PowerShell DSC and PSDeploy for publishing tests to the remote targets. Eventually, I hope the community will write providers for other configuration management platforms / tools. There will be abstractions within the engine to add these providers in a transparent manner.
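
For the DSC provider, for instance, the generated configuration might look roughly like the sketch below; the paths and resource choices are assumptions for illustration, not the framework's actual output.

Configuration PublishSqlClusterTests
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node $AllNodes.NodeName
    {
        # Copy the test scripts in the selected test group to the remote target
        File TestGroup
        {
            SourcePath      = '\\build01\GarudaLibrary\SqlCluster'
            DestinationPath = 'C:\Garuda\TestGroups\SqlCluster'
            Type            = 'Directory'
            Recurse         = $true
            Ensure          = 'Present'
        }

        # Copy the configuration data, minus the sensitive values
        File ConfigurationData
        {
            SourcePath      = '\\build01\GarudaLibrary\SqlCluster.psd1'
            DestinationPath = 'C:\Garuda\TestGroups\SqlCluster.psd1'
            Type            = 'File'
            Ensure          = 'Present'
        }
    }
}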

The publish engine helps secure test execution by storing sensitive configuration data in a vault. It also runs the scheduled tasks as either the SYSTEM account or a specified user. The JEA endpoints configured on the remote targets let us retrieve the test results with the least privileges needed.

You can publish multiple test groups to the same remote target. This provides the flexibility IT teams need in the operations space for a given infrastructure or application workload. There can be multiple JEA endpoints, one for each team publishing test groups.

Chakra

Chakra executes the tests in the published test group(s) on the remote targets. It is aware of the test parameters and can use the published configuration data, along with the sensitive data stored in the vault, for unattended test execution. Chakra is also responsible for result consolidation and can be configured to retain results for a given number of days. The test results for each group are stored as timestamped JSON files. The scheduled tasks created on the remote targets invoke Chakra at the specified intervals. Chakra also contains the cmdlets exposed within the JEA endpoints; using these endpoints, test operators can retrieve the test results from all remote targets at a central system.
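
A minimal sketch of the kind of result consolidation Chakra performs, using Pester v4 cmdlets; the paths and file-naming convention are assumptions.

# Run the published test group and capture the results (Pester v4 syntax)
$result = Invoke-Pester -Script 'C:\Garuda\TestGroups\SqlCluster' -Tag 'Storage' -PassThru

# Persist the results as a timestamped JSON file for later retrieval
$timestamp = Get-Date -Format 'yyyyMMdd-HHmmss'
$result | ConvertTo-Json -Depth 5 |
    Out-File -FilePath "C:\Garuda\Results\SqlCluster-$timestamp.json"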

Report Engine

The report engine is the final piece of this framework. It retrieves the results from the remote targets and transforms them into something meaningful for IT managers. By default, there will be providers for reports based on PSHTML, ImportExcel, and UniversalDashboard, and the report engine provides the abstractions for the community to add more reporting options.

The Plan

The initial release of the framework, which is what I demonstrated at PowerShell Conference Europe, was just a proof of concept. I am planning to break the framework down into the core components mentioned above. The GitHub repository for the framework will have issues created for each of these components, and I will start implementing the basics. The 1.0 release of this framework will support every detail mentioned above and will be completely usable and functional for production use.

What about the naming?

The names Garuda and Chakra come from Hindu mythology, and their meanings connect to the concepts I am proposing for this framework. Garuda is a bird from Hindu mythology with a mix of human and bird features. Deemed powerful and considered the king of birds, it can travel anywhere and is the vehicle of the Hindu god Vishnu. The Chakra is the weapon that Lord Vishnu carries; it is used to eliminate evil and is also known as the wheel of time. Garuda is the vehicle that transports Lord Vishnu and his Chakra to wherever evil resides.

The Garuda core combined with the publish engine takes your operational validation tests to your remote targets, and Chakra performs the operations validation to ensure that your infrastructure is always healthy and functional.

In the next article in this series, you will see the framework in action.


The Public APIs repository on GitHub has a list of free APIs that you can use in software and web development. This is a great resource for finding out if there is a free public API for a specific task at hand. For example, if your application requires weather data, you can take a look at several free API options available and select the one that works for you. I have been following this repository and they have recently added something useful — a public API to query for public APIs!

I quickly created a new PowerShell module that wraps around the public API for the public APIs!

You can install this module from the gallery as well.

Install-Module -Name PSPublicAPI -Force

There are four commands in this module.

Get-PSPublicAPICategory – Gets the list of categories from the public API service.

Get-PSPublicAPIHealth – Gets the health state of the public API service.

Get-PSPublicAPIEntry – Gets specific APIs or all API entries from the public API service.

Get-PSPublicAPIRandomEntry – Gets a random API entry from the public API service, optionally matching specific criteria.

The commands are pretty much self-explanatory, and you can find the docs for each command here.
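
For example, to explore the categories and pull matching entries (the -Category parameter name is my assumption; check Get-Help for the exact syntax):

# Parameter names are illustrative; see Get-Help for the exact syntax
Get-PSPublicAPICategory
Get-PSPublicAPIEntry -Category 'Weather'
Get-PSPublicAPIRandomEntry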

Posts in this series
  1. Distributed and Flexible Operations Validation Framework - Introduction
  2. Garuda - Architecture and Plan

Operations validation using PowerShell and Pester has been one of my favorite topics, and I have both personal and professional interest in this area. I have invested a good amount of time experimenting with the existing frameworks and creating a couple of my own. One of my PowerShell Conference EU sessions was about this topic and a new framework I am developing. The session was well received, and there was a good amount of interest in the new framework.

In this series of articles, I will write about the need for a new framework and introduce the framework I demonstrated at PowerShell Conference Europe. There is still a lot of work to be done, and this series will act as a way to express where I want this framework to go and what its end goals are.

In this part, you will see the options available today for operations validation and similar use cases, along with their limitations. Towards the end, I will talk about the desired-state requirements for a distributed and flexible operations validation framework.

The Current State

My work with operations validation started back in 2016, when I first demonstrated using Pester to validate the functional state of clusters. That first implementation was tightly coupled with PowerShell DSC resource modules and needed configuration data supplied with DSC configuration documents to perform the validations. The model worked very well for infrastructure configured using DSC, but it is not a generic or flexible framework for running operations validation.

Microsoft Operations Validation Framework (OVF)

Around the time I started working on operations validations bundled with infrastructure blueprints, the PowerShell team published an open source framework for operations validation. It implements operational tests bundled within regular PowerShell modules. The framework's cmdlets can discover the operations validation tests packaged in installed modules and invoke them, and you can pass a hashtable of values as the test script parameters. This is a distributed test execution model: tests can be copied to all nodes and invoked locally. This is certainly what I wanted to start with, but the tight coupling between modules and tests is not. Instead, I want to be able to distribute chosen tests, as groups, to any node. I could have written a wrapper script around OVF to achieve this, but there are other limitations.
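
For reference, the framework is published on the PowerShell Gallery as the OperationValidation module, and its two main cmdlets discover and run the packaged tests. The module name in this sketch is hypothetical:

# Discover operation validation tests packaged in an installed module
Get-OperationValidation -ModuleName 'MyAppChecks'

# Invoke the discovered tests on the local node
Get-OperationValidation -ModuleName 'MyAppChecks' | Invoke-OperationValidation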

Packaging tests as modules is unnecessary overhead. If you have a huge library of tests and need to determine dynamically which tests run on which remote targets, you also need to be able to generate modules dynamically. And then you need a way to distribute those modules to the target nodes and to keep them up to date as you update the central repository.

The test parameters are passed as a hashtable, so if you need to invoke the tests unattended (for example, from a scheduled task), you need a wrapper script that reads some sort of configuration data and translates it into the relevant parameters. But then you also need a way to publish that configuration data to the remote targets.

PSHealthz

PSHealthz by Brandon Olin provides a web service endpoint for invoking tests packaged or published using OVF. It is an implementation of the Health Endpoint Monitoring pattern using PowerShell. The available tests can be retrieved from the /health endpoint, and tests can be executed on the target node using query parameters on that endpoint. PSHealthz is more a way to list and invoke tests on target nodes over a REST API; the OVF limitations mentioned above still apply.
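
Interacting with the endpoint comes down to simple REST calls. The port and query parameter below are illustrative assumptions, not documented PSHealthz defaults:

# List the available tests exposed by the /health endpoint
Invoke-RestMethod -Uri 'http://localhost:1938/health'

# Execute a specific test via a query parameter
Invoke-RestMethod -Uri 'http://localhost:1938/health?test=Services'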

Remotely and PSRemotely

Remotely is an open source PowerShell module from Microsoft for running Pester tests remotely (no surprises there!). You specify a set of remote targets in a file called machineConfig.csv and then use the Remotely keyword inside the It script block to run tests on the remote targets. The module has several drawbacks and has been more experimental than anything remotely useful (pun intended!); in fact, it has been more than three years since its last update. Although the tests run on the remote nodes (using PowerShell remoting), they are essentially triggered from a central location in a fan-out manner. Remotely therefore implements centralized test execution and reporting.

PSRemotely was born out of the need to run tests on a bunch of remote nodes while eliminating Remotely's drawbacks and providing better control over what runs when and where. It uses DSC-style configuration data to provide test parameters for each remote node. In fact, we implemented a complete validation suite using PSRemotely before writing yet another internal framework for cluster operations validation. The major drawback of this module is the need to enable CredSSP so that delegated credentials, when needed, can be used on the remote targets. PSRemotely also has no infrastructure awareness: the number and type of tests that run is determined by the configuration data, we had no control over grouping tests based on the type of infrastructure, and Pester tags are the only way to separate tests into groups. With PSRemotely, test execution is distributed and reporting is centralized, so it implements a hybrid model.

DBAChecks and pChecksAD

Both DBAChecks and pChecksAD implement more centralized test execution and reporting. All tests stay on the local system; you can design the tests to target remote systems using a cmdlet-provided mechanism or write them to use PowerShell remoting. These are purpose-built modules, and you can take cues from how they are implemented and write one for your specific use case. They are great at what they do, but they do not satisfy my requirements for a distributed and flexible operations validation framework.

The Desired State

So far, you have seen the options available for performing operations validation and read about the limitations these frameworks and modules pose. I will now translate those limitations into the requirements for a new operations validation framework.

Distributed

The new framework needs to support distribution (publishing) of tests to remote targets and should offer different methods of test distribution. For example, I should be able to publish tests to remote targets using PowerShell DSC, Ansible, Chef, or Puppet.

The new framework should also support distributed test execution. I want to be able to invoke tests on the remote targets on demand or on a schedule. The input parameters or configuration data needed by the tests should be local to the target, and the framework should provide a way to publish that configuration data as well, with any secrets within it encrypted.

Flexible

The new framework should be flexible enough to integrate with various other modules and technologies. For example, as already mentioned, test distribution should support more than one method.

Within infrastructure management, more than one team is usually involved in bringing up the infrastructure. For example, for a managed SQL cluster, one team may be solely responsible for OS deployment and management while another takes care of SQL management. Each of these teams will have its own operations validation tests, so the new framework should provide a way to publish multiple test groups to the remote targets and to execute and report on them independently.

From a reporting point of view, the new framework should be capable of supporting multiple reporting methods such as HTML, Excel, and so on.

Secure

The tests running on the remote targets need input parameters, which may include secure strings and secrets. Since this configuration data needs to reside on the target nodes, the sensitive data should be stored safely; for example, credentials should go into a vault such as Windows Credential Manager. The new framework should support this.
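
As an illustration, storing and retrieving a credential with the community CredentialManager module (one possible vault implementation, not necessarily what the new framework will use) could look like this:

# Requires the CredentialManager module from the PowerShell Gallery
Install-Module -Name CredentialManager

# Store a credential in Windows Credential Manager on the target node
New-StoredCredential -Target 'OpsValidation/SqlCluster' -UserName 'contoso\svc-validation' `
    -Password 'P@ssw0rd!' -Persist LocalMachine

# Retrieve it later for unattended test execution
$credential = Get-StoredCredential -Target 'OpsValidation/SqlCluster'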

Result retrieval from the remote targets happens at a central console. For this, the test operators need access only to invoke test result retrieval on the remote targets. The framework should support a least-privilege way of doing this, such as implementing a JEA endpoint.
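
A minimal JEA sketch for this follows; Get-ValidationResult is a placeholder for whatever retrieval cmdlet such a framework would expose, and the paths and group name are illustrative:

# Role capability exposing only the result retrieval cmdlet (placeholder name)
New-PSRoleCapabilityFile -Path 'C:\JEA\TestOperator.psrc' -VisibleCmdlets 'Get-ValidationResult'

# Session configuration mapping a group of test operators to that role
New-PSSessionConfigurationFile -Path 'C:\JEA\OpsValidation.pssc' -SessionType RestrictedRemoteServer `
    -RoleDefinitions @{ 'contoso\TestOperators' = @{ RoleCapabilityFiles = 'C:\JEA\TestOperator.psrc' } }

# Register the JEA endpoint on the remote target
Register-PSSessionConfiguration -Name 'OpsValidation' -Path 'C:\JEA\OpsValidation.pssc'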

Introducing Garuda

I have been experimenting with a totally new framework that satisfies most, if not all, of the desired-state requirements. It is still in a proof-of-concept phase, and there is not even documentation on how to use it yet. This is what I demonstrated at PowerShell Conference EU 2019 a week ago. I named this framework Garuda; I will write about the naming choice in the next post.

Today's article is an introduction to the thought process behind Garuda. In the next post, I will explain the architecture of Garuda and talk about how some of the desired-state requirements are implemented.

BTW, just before my session at the EU conference, I had a couple of hours to kill and created a logo for the Garuda framework. You will learn its meaning in the next post.


While working on a module that interacts with a REST API, I came across a situation where I had to generate query strings for HTTP GET operations. The number of parameters varies based on what is supplied to the function, which makes this a bit tricky since HTTP query strings have a specific format.

For example, https://localhost:443/?name=test&age=25&company=powershell+magazine.

As you can see in the above example, the first parameter in the query string is prefixed with a question mark, the subsequent parameters are separated by ampersands, and spaces in a parameter value are replaced with a plus sign. This can be coded easily in PowerShell, but there is a better way using the System.Web.HttpUtility class in .NET.

The ParseQueryString method of the HttpUtility class parses a URL and gives us a key-value collection. To start with, we can pass this method an empty string.

$nvCollection = [System.Web.HttpUtility]::ParseQueryString([String]::Empty)

We can then add the key-value pairs to this collection.

$nvCollection.Add('name','powershell')
$nvCollection.Add('age',13)
$nvCollection.Add('company','automate inc')

Once the parameters are added to the collection, we can build the URI and retrieve the query string.

$uriRequest = [System.UriBuilder]'https://localhost'
$uriRequest.Query = $nvCollection.ToString()
$uriRequest.Uri.OriginalString

This is it. I created a function out of this for reuse.

function New-HttpQueryString
{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory = $true)]
        [String]
        $Uri,

        [Parameter(Mandatory = $true)]
        [Hashtable]
        $QueryParameter
    )

    # Load the System.Web assembly for HttpUtility
    Add-Type -AssemblyName System.Web

    # Create an HTTP name-value collection from an empty string
    $nvCollection = [System.Web.HttpUtility]::ParseQueryString([String]::Empty)

    foreach ($key in $QueryParameter.Keys)
    {
        $nvCollection.Add($key, $QueryParameter.$key)
    }

    # Build the URI with the query string attached
    $uriRequest = [System.UriBuilder]$Uri
    $uriRequest.Query = $nvCollection.ToString()

    return $uriRequest.Uri.OriginalString
}
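
Using the function is straightforward. Note that a regular hashtable does not preserve key order, so the parameters may appear in a different order than written:

New-HttpQueryString -Uri 'https://localhost' -QueryParameter @{
    name    = 'test'
    age     = 25
    company = 'powershell magazine'
}

# Returns something like: https://localhost/?name=test&age=25&company=powershell+magazine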


PowerShell Conference Asia 2019 will be held in Bangalore, India. We closed our CFP towards the end of February and have (partially) finalized a great set of international PowerShell experts. This conference, as always, will feature PowerShell product team members from Redmond.

psconf.asia has been updated to feature the confirmed speakers, a list that includes experts from the USA, Australia, Europe, and India. As I write this, we are yet to finalize a few more speakers, and I am sure we will have a fully loaded agenda for all three days. The pre-conf workshops include content from level 100 (PowerShell 101) to CI/CD for PowerShell professionals. The pre-conf day also has a track dedicated to deep-dive sessions for attendees who are already comfortable writing PowerShell scripts and modules.

We have opened the early bird discount sale (15% off the original ticket price). The tickets are priced in INR, and a 3-day pass currently costs less than 100 USD.

You can get the 3-day (includes pre-conf day) early bird pass at https://imjo.in/N8t8G7 and the 2-day (full-conf days only) early bird pass at https://www.instamojo.com/@tecoholic/l2f610b18b43c4deb9f534b1082b7f414/

If you have a group of five or more interested in attending this year's conference, reach out to us at info@psconf.asia. We will let you know what we can do for your group.

We have already seen a few ticket sales and are very happy with the progress over the last couple of days. Can't wait to see you all in September.


Last month, I published a PowerShell DSC resource module called WindowsAdminCenterDsc that uses the PowerShell module made available with Windows Admin Center version 1812. That module makes use of the REST API that comes with Windows Admin Center to manage connections, feeds, and extensions.

I verified right away that the API is available in version 1809.5 as well. So, I wanted to build another PowerShell module with features similar to, or beyond, the module that ships with version 1812. The goal was also to use this module in my build process to add newly deployed servers and clusters to Windows Admin Center in an automated manner.

Note: This module works with Windows Admin Center 1809.5 and above.

This module can be installed from the PowerShell Gallery:

Install-Module -Name PSWindowsAdminCenter
The module contains the following commands:

Get-WacConnection – Gets connections added to Windows Admin Center for management.

Add-WacConnection – Adds a new connection to Windows Admin Center for management.

Get-WacFeed – Gets all extension feeds available in Windows Admin Center.

Add-WacFeed – Adds an extension feed to Windows Admin Center.

Remove-WacFeed – Removes an extension feed from Windows Admin Center.

Get-WacExtension – Gets all extensions available or installed in Windows Admin Center.

Install-WacExtension – Installs an extension.

Uninstall-WacExtension – Uninstalls an extension.

Update-WacExtension – Updates an extension.
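
For example, adding a freshly deployed server as a WAC connection might look like the following; the parameter names here are my assumptions based on the command descriptions, so verify with Get-Help:

# Parameter names are assumptions; verify with Get-Help Add-WacConnection
Add-WacConnection -GatewayUri 'https://wac.contoso.com' `
    -ConnectionName 'sql01.contoso.com' -ConnectionType WindowsServer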

This project is available in my GitHub repository. I have a few TODOs:

• Add an Export option to the Get-WacConnection command so that you can export connection details to a CSV file.
• Add an Import option to the Add-WacConnection command so that you can import connections from a CSV file.
• Update the WindowsAdminCenterDsc module to use PSWindowsAdminCenter instead of the module that ships with WAC.

If you see any issues or would like to see new features, feel free to create an issue.


Windows Admin Center (WAC) is the new web-based management application for managing Windows Servers, Failover Clusters, Hyper-Converged Infrastructure (HCI) clusters, and Windows 10 PCs. It is a free application and can be installed on Windows Server 2016, Windows Server 2019, or Windows 10. Unlike System Center Operations Manager (SCOM), WAC does not store any monitoring data locally, so its view is near real-time only.

Ever since WAC was released, I have wanted to automatically onboard the servers and clusters I manage into WAC right after their deployment completes. Earlier, there was no API available to do this.

With the release of WAC version 1812 (insider preview), a couple of PowerShell modules are bundled with WAC. These wrap the internal REST API for a few management tasks.

When I saw this, I immediately explored a design to implement DSC resources for WAC install/uninstall and configuration. The result is here: https://github.com/rchaganti/WindowsAdminCenterDsc

This works only with Windows Admin Center 1812 insider preview and above.

The REST API is available in 1809.5 as well, and I am working on a PowerShell module that wraps that API as a set of cmdlets. I will then update this DSC resource module without breaking the current DSC resource design.

This module contains a set of resources to install Windows Admin Center (WAC) and configure WAC feeds, extensions, and connections.

WacSetup – Installs Windows Admin Center. This is a composite resource and provides options to change the port and to use a local certificate (by thumbprint) instead of a self-signed certificate.

WacFeed – Supports adding and removing WAC extension feeds.

WacExtension – Supports installing and uninstalling WAC extensions.

WacServerConnection – Adds a Windows Server for management within WAC.

WacHciConnection – Adds an HCI cluster for management within WAC.

WacClusterConnection – Adds a Windows failover cluster for management within WAC.

For complete documentation, see https://windowsadmincenterdsc.readthedocs.io/en/latest/.
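
As a sketch, a configuration using these resources might look like the following; the property names are assumptions based on the resource descriptions above, so refer to the documentation for the actual schema.

Configuration WacGateway
{
    Import-DscResource -ModuleName WindowsAdminCenterDsc

    Node 'localhost'
    {
        # Property names are assumptions; see the documentation for the real schema
        WacFeed DefaultFeed
        {
            FeedUri = 'https://aka.ms/sme-extension-feed'
            Ensure  = 'Present'
        }

        WacServerConnection Sql01
        {
            ConnectionName = 'sql01.contoso.com'
            Ensure         = 'Present'
        }
    }
}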
