Did you know you can store your ARM template output in a VSTS variable and parse that data for later usage in your VSTS Release Pipeline?

If you are using the Azure Resource Group Deployment task in your VSTS release definitions, you now have the option to store the output values from your ARM template through the Deployment outputs setting.

In this blog post I explain how you can use the Deployments Output values in the rest of your Release Pipeline.

ARM Template

Before we can start, we first need an ARM template with some output. In this example I don't deploy any Azure resources; I just want to output something.

armtemplate.json

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "firstName": {
      "type": "string",
      "metadata": {
        "description": "The first name of user"
      }
    },
      "lastName": {
        "type": "string",
        "metadata": {
          "description": "The last name of user"
        }
      }
    },
  "variables": {},
  "resources": [],
  "outputs": {
    "firstNameOutput": {
      "value": "[parameters('firstName')]",
      "type": "string"
    },
    "lastNameOutput": {
      "value": "[parameters('lastName')]",
      "type": "string"
    }
  }
}

This ARM template has two parameters: firstName and lastName.

armtemplate.parameters.json

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "firstName": {
      "value": "Stefan"
    },
    "lastName": {
      "value": "Stranger"
    }
  }
}

When you deploy the above ARM template, you will see the following output in the Azure Portal.
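That output pane corresponds to the outputs section of the deployment result, which looks roughly like this (values taken from the parameters file above; the exact envelope can vary by API version):

```json
{
  "firstNameOutput": {
    "type": "String",
    "value": "Stefan"
  },
  "lastNameOutput": {
    "type": "String",
    "value": "Stranger"
  }
}
```

This is also the JSON string that ends up in the Deployment outputs variable in VSTS.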

PowerShell script

For parsing the output of the ARM Template we are going to create a PowerShell script.

Folder view in Visual Studio:

Parse-ARMOutput.ps1:

param (
    [Parameter(Mandatory=$true)][string]$ARMOutput
    )

#region Convert from json
$json = $ARMOutput | convertfrom-json
#endregion

#region Parse ARM Template Output
Write-Output -InputObject ('Hello {0} {1}' -f $json.firstNameOutput.value, $json.lastNameOutput.value)
#endregion
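For readers who prefer to see the parsing logic outside PowerShell, here is the same idea sketched in Python (illustrative only; the sample JSON mimics the shape of the Deployment outputs variable, with values assumed from the parameters file above):

```python
import json

# Sample of what the Deployment outputs variable contains; the shape follows
# the outputs section of the ARM template above (two string outputs).
arm_output = """
{
  "firstNameOutput": { "type": "String", "value": "Stefan" },
  "lastNameOutput": { "type": "String", "value": "Stranger" }
}
"""

# Convert from JSON and read the .value of each output
outputs = json.loads(arm_output)
greeting = "Hello {0} {1}".format(
    outputs["firstNameOutput"]["value"],
    outputs["lastNameOutput"]["value"],
)
print(greeting)  # Hello Stefan Stranger
```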

Build and Release Definition

After committing the ARM Template and script files to your Repository it's time to create a Build and Release Definition in Visual Studio Team Services (VSTS).

Build Definition:

Release Definition:

Make sure you define the Deployment outputs variable in the Azure Resource Group Deployment VSTS task. This value is used in the next step of the Release Pipeline.

Create a new Release Task for parsing the output from the Deployment Outputs section using the PowerShell script created earlier.

PowerShell VSTS Task

For parsing the Deployment Outputs from the Azure Resource Group Task we are using the PowerShell Task from VSTS.
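Assuming the Deployment outputs variable was named ARMOutputs in the previous task, the script arguments field of the PowerShell task would look something like this (the variable name is an assumption; use whatever name you configured):

```powershell
# Script Arguments field of the PowerShell VSTS task
# 'ARMOutputs' is the Deployment outputs variable name configured earlier (assumed)
-ARMOutput '$(ARMOutputs)'
```

The single quotes matter: the variable expands to a JSON string full of double quotes, which could otherwise be mangled before reaching the script.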

When you now trigger a release you see that the output from the ARM Template deployment is being parsed in the PowerShell Script VSTS task.

Hope you find some more scenarios in which you can use the Deployment Outputs option of the Azure Resource Group task.


Yesterday I tweeted that I was able to access my Docker for Windows Kubernetes cluster from Debian WSL without exposing the Docker daemon with TLS, and I got quite a few responses.

W00t! I'm able to access my #Docker for Windows #Kubernetes Cluster from #Debian WSL without exposing the #Docker Daemon with TLS. Anyone interested to know how I did this? pic.twitter.com/Dgu3a6aWrO

— Stefan Stranger (@sstranger) April 1, 2018

It seems quite a few people are interested in how I did this.

Background information

Let me start with some background information about why I wanted to manage my Docker for Windows Client Kubernetes cluster from (Debian) WSL. Last week I visited the Dutch Azure Meetup in Amsterdam where Erik St. Martin talked about Azure Containers.

One of the tools he talked about was Helm. Helm is a package manager for Kubernetes.

Helm

Helm helps you manage Kubernetes applications — Helm Charts helps you define, install, and upgrade even the most complex Kubernetes application.
Charts are easy to create, version, share, and publish — so start using Helm and stop the copy-and-paste madness.

I wanted to play around with Helm on my Docker for Windows Kubernetes Cluster. Please read my earlier blog post called "Running Kubernetes Cluster in Docker for Windows" to learn more on how to get this running on your Windows 10 machine.

There are different versions of Helm:

I tried to install the Windows version of Helm but was not able to get this working. Then I thought I would try to manage my Docker for Windows Kubernetes cluster from (Debian) WSL.

We just released the Debian GNU/Linux for WSL. More information can be found at the following "Debian GNU/Linux for WSL now available in the Windows Store" blog post.

While investigating how I could connect from Debian WSL to my Docker for Windows Kubernetes cluster I stumbled on the following blog post "[Cross Post] WSL Interoperability with Docker" from Craig Wilhite.

The Docker client for Windows offers a configuration option to expose the Docker daemon.

If you enable this option, you expose your system to potential attack vectors for malicious code.

And that's where the tool npiperelay can help. This is a tool built by John Starks.

WSL Interoperability with Docker

Please follow the steps in Craig Wilhite's blog post to install npiperelay, socat, and the Docker client.

At a high level, I ran the following steps:

  1. Install npiperelay in Debian WSL
    • Install Aptitude on Debian WSL. Aptitude is an ncurses-based front end to APT, the Debian package manager.

      sudo apt-get install aptitude

    • Install Go
      #Make sure we have the latest package lists
      sudo apt-get update

      #Download Go. You should change the version if there's a newer one. Check at: https://golang.org/dl/

      sudo wget https://storage.googleapis.com/golang/go1.10.1.linux-amd64.tar.gz

      #unzip Go

      sudo tar -C /usr/local -xzf go1.10.1.linux-amd64.tar.gz
      #Put it in the path

      export PATH=$PATH:/usr/local/go/bin

    • Build the relay (see the blog post from Craig Wilhite)
  2. Install socat
    sudo aptitude install socat
  3. Install Docker CE for Debian (https://docs.docker.com/install/linux/docker-ce/debian/#install-docker-ce-1)
    sudo aptitude install docker-ce
  4. Stitch everything together. See the blog post from Craig Wilhite.
  5. Install kubectl (https://kubernetes.io/docs/tasks/tools/install-kubectl/)
  6. Configure the kubectl configuration. Copy the Docker for Windows Kubernetes kube config files to the Debian WSL kube configuration folder:

    cp -R /mnt/c/Users/[username]/.kube/ ~/

    This copies the Kubernetes cluster configuration created by the Docker for Windows client to the Debian WSL user.

  7. Install Helm on Debian WSL
    curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash

You should now be able to access your Docker for Windows Kubernetes cluster from Debian WSL.

#start relay
sudo ~/docker-relay &
#test docker client
docker version
#test access to the Kubernetes cluster
kubectl version
kubectl cluster-info
kubectl get nodes --all-namespaces
kubectl get pods --all-namespaces
#test Helm
helm version
helm list

Have fun!


Last week I needed to deploy a Windows 2016 Data Science Virtual Machine from the Azure Marketplace.

The Azure Marketplace is an online applications and services marketplace that enables start-ups and independent software vendors (ISVs) to offer their solutions to Azure customers around the world.

The Azure Marketplace combines Microsoft Azure partner ecosystems into a single, unified platform to better serve our customers and partners. This will improve existing experiences and make it easier to search, purchase, and deploy a wide range of applications and services in just a few clicks.

During this process I learned some lessons I wanted to share in this blog post.

Lesson 1 - Accept MarketPlace terms

Before you can deploy a Marketplace item like the Windows 2016 Data Science Virtual Machine, you first need to accept the Marketplace terms.

If you don't accept the Marketplace terms, you will see the following error message during the deployment.

There were errors in your deployment. Error code: MarketplacePurchaseEligibilityFailed.
Marketplace purchase eligibilty check returned errors. See inner errors for details. 
Details:
BadRequest: Offer with PublisherId: Microsoft-ads, OfferId: windows-data-science-vm cannot be purchased due to validation errors. See details for more information.[{"Legal terms have not been accepted for this item on this subscription. To accept legal terms using PowerShell, please use Get-AzureRmMarketplaceTerms and Set-AzureRmMarketplaceTerms API(https://go.microsoft.com/fwlink/?linkid=862451) or deploy via the Azure portal to accept the terms":"StoreApi"}] undefined
Task failed while creating or updating the template deployment.

According to the error message you need the Get-AzureRmMarketplaceTerms and Set-AzureRmMarketplaceTerms PowerShell cmdlets to accept the Marketplace terms.

But how do you find the correct parameter values for the Get-AzureRmMarketplaceTerms cmdlet?

Here is an example for another MarketPlace item, the Ubuntu CIS Hardened image.

You can use the following PowerShell commands to retrieve the values needed to accept the MarketPlace Terms:

#region Connect to Azure
Add-AzureRmAccount
 
#Select Azure Subscription
$subscription = 
(Get-AzureRmSubscription |
        Out-GridView `
        -Title 'Select an Azure Subscription ...' `
        -PassThru)
 
Set-AzureRmContext -SubscriptionId $subscription.Id -TenantId $subscription.TenantID
#endregion

#region get parameter values to Accept MarketPlace terms for Windows 2016 Data Science VM
Get-AzureRmVMImageOffer -PublisherName 'Microsoft-ads' -Location 'westeurope' | Get-AzureRmVMImageSku
#endregion

Next step is to retrieve the MarketPlace terms and accept those using the following commands:

Get-AzureRmMarketplaceTerms -Publisher 'Microsoft-ads' -Product 'windows-data-science-vm' -Name 'windows2016' |
    Set-AzureRmMarketplaceTerms -Accept

You have now accepted the Marketplace terms for the Windows 2016 Data Science Virtual Machine and can continue with the deployment.

Lesson 2 - Azure Resource Manager parameters are case-sensitive

After accepting the MarketPlace terms using PowerShell you can also verify this using the Azure Portal.

Go to your Subscription and select 'Programmatic deployment'

The final step is to deploy the Windows Data Science Virtual Machine. I used ARM Templates with a VSTS (Visual Studio Team Services) CI/CD Pipeline.

In my azuredeploy.parameters.json file I configured the following parameter values for the Windows 2016 Data Science VM:

"sku": {
  "value": "windows2016"
},
"publisher": {
  "value": "Microsoft-ads"
},
"offer": {
  "value": "windows-data-science-vm"
},
"version": {
  "value": "latest"
},

Don't forget to add a plan attribute in the azuredeploy.json file for the Virtual Machine Resource.

"plan": {
        "name": "[parameters('sku')]",
        "publisher": "[parameters('publisher')]",
        "product": "[parameters('offer')]"
      },

I thought I did everything correctly but the deployment failed with the following error:

{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.","details":[{"code":"Conflict","message":"{\r\n \"status\": \"Failed\",\r\n \"error\": {\r\n \"code\": \"ResourceDeploymentFailure\",\r\n \"message\": \"The resource operation completed with terminal provisioning state 'Failed'.\",\r\n \"details\": [\r\n {\r\n \"code\": \"VMMarketplaceInvalidInput\",\r\n \"message\": \"The purchase information does not match. Unable to deploy from the Marketplace image. OS disk name is 'ds-d-vm-01_OsDisk_1_'.\"\r\n }\r\n ]\r\n }\r\n}"}]}

The important part of this error message is the following:

The purchase information does not match

It turns out that ARM template parameter values are case-sensitive when deploying a Marketplace item!

You need to change the parameter input value for publisher to lower-case: 'Microsoft-ads' needs to become 'microsoft-ads':

"sku": {
  "value": "windows2016"
},
"publisher": {
  "value": "microsoft-ads"
},
"offer": {
  "value": "windows-data-science-vm"
},
"version": {
  "value": "latest"
},
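The fix works because the Marketplace purchase check is an exact, case-sensitive string comparison; a tiny Python sketch of the effect (publisher strings taken from the parameter files above):

```python
# The Marketplace validates the plan's publisher as an exact string
expected_publisher = "microsoft-ads"   # casing the Marketplace expects
supplied_publisher = "Microsoft-ads"   # casing used in the failing parameters file

print(supplied_publisher == expected_publisher)          # case-sensitive comparison fails
print(supplied_publisher.lower() == expected_publisher)  # lower-casing makes it match
```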

After this change I was able to deploy the Windows 2016 Data Science Virtual Machine successfully.

Hope you find this blog post useful.


We just released the Debian GNU/Linux for WSL. More information can be found at the following "Debian GNU/Linux for WSL now available in the Windows Store" blog post.

After downloading the Debian WSL from the Microsoft Store I wanted to first install the Azure CLI tools. According to the Azure CLI installation documentation one of the first steps is modifying your sources list:

AZ_REPO=$(lsb_release -cs)
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | \
     sudo tee /etc/apt/sources.list.d/azure-cli.list

But when you run lsb_release -cs on Debian WSL, you do not get the correct result back.

Because lsb_release -cs does not return a value on Debian WSL, you should modify the sources list as follows:

echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ stretch main" | \
     sudo tee /etc/apt/sources.list.d/azure-cli.list

Now you should be able to continue with the installation of the Azure CLI tools on your Debian WSL.

Hope this helps!


In the latest Edge version (18.02 CE Edge) of Docker for Windows, support for Kubernetes has been added.

If you want to start playing with Kubernetes you need to do the following:

  1. Download the latest (18.02 CE Edge) Docker for Windows Edge version from here.
  2. Install the Docker for Windows 18.02 CE Edge version you downloaded in step 1.
  3. Enable Kubernetes in the settings of the Docker for Windows Edge version.

Here are some screenshots made during the configuration of Kubernetes in the Docker for Windows Edge Client.

Wait until the Kubernetes cluster is deployed.

The Kubernetes cluster is now running.

The Kubernetes client command, kubectl, is included and configured to connect to the local Kubernetes server. If you have kubectl already installed and pointing to some other environment, such as minikube or a GKE cluster, be sure to change context so that kubectl is pointing to docker-for-desktop:

kubectl config get-contexts
kubectl config use-context docker-for-desktop

Let's check our Kubernetes Cluster.

NGINX Deployment

Let's deploy NGINX using the following deployment yaml file:

apiVersion: apps/v1beta2 # for versions before 1.9.0 use apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

We see that NGINX is now running.

You can easily parse the kubectl JSON output using the following PowerShell Code:

#region get k8s pods info
(kubectl get pods --all-namespaces -o=json | convertfrom-json).items | select @{L='nodeName';E={$_.spec.nodeName}}, @{L='podIP';E={$_.status.podIP}}, @{L='hostIP';E={$_.status.hostIP}}, @{L='podName';E={$_.metadata.name}}, @{L='states';E={$_.status.containerStatuses.state}} | format-Table *
#endregion

Interested in Kubernetes Nodes info?

#region get k8s nodes info
(kubectl get nodes --all-namespaces -o=json | convertfrom-json).items | select @{L='IPAddress';E={$_.status.Addresses[0].address}}, @{L='Name';E={$_.metadata.name}}, @{L='Role';E={$_.metadata.labels.'kubernetes.io/role'}} | Format-Table *
#endregion
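The same extraction works in any language that can read kubectl's JSON output; here is an illustrative Python sketch against an abbreviated, assumed sample of the pods document (a real document has many more fields):

```python
import json

# Abbreviated sample shaped like `kubectl get pods --all-namespaces -o json`
pods_json = """
{
  "items": [
    {
      "metadata": { "name": "nginx-deployment-abc123" },
      "spec": { "nodeName": "docker-for-desktop" },
      "status": { "podIP": "10.1.0.5", "hostIP": "192.168.65.3" }
    }
  ]
}
"""

# Pick out the same properties the PowerShell one-liner selects
pods = [
    {
        "podName": item["metadata"]["name"],
        "nodeName": item["spec"]["nodeName"],
        "podIP": item["status"]["podIP"],
        "hostIP": item["status"]["hostIP"],
    }
    for item in json.loads(pods_json)["items"]
]
print(pods)
```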

Expose Deployment NGINX

Retrieve Deployment(s)

kubectl get deployments

Expose NGINX deployment:

kubectl expose deployment nginx-deployment --type=NodePort

Have fun playing with Kubernetes in Docker for Windows.


This sprint I've been busy with the implementation of running Kubernetes in Azure.

What is Kubernetes?

Kubernetes is an open-source platform designed to automate deploying, scaling, and operating application containers.
With Kubernetes, you are able to quickly and efficiently respond to customer demand:

  • Deploy your applications quickly and predictably.
  • Scale your applications on the fly.
  • Roll out new features seamlessly.
  • Limit hardware usage to required resources only.

Kubernetes on Azure

Azure Container Service (ACS) allows you to quickly deploy a production ready Kubernetes cluster.
ACS for Kubernetes makes it simple to create, configure, and manage a cluster of virtual machines that are preconfigured to run containerized applications.

Remark:

Azure Container Service (AKS) is being updated to add new deployment options, enhanced management capabilities, and cost benefit to Kubernetes on Azure. Visit the AKS documentation to start working with these preview features.

What is the difference between ACS and AKS?

ACS is the current Azure Container Service, which is the unmanaged version compared to the managed Azure Container Service (AKS).

By using AKS, you can take advantage of the enterprise-grade features of Azure, while still maintaining application portability through Kubernetes and the Docker image format.

Managed Kubernetes in Azure
AKS reduces the complexity and operational overhead of managing a Kubernetes cluster by offloading much of that responsibility to Azure. As a hosted Kubernetes service, Azure handles critical tasks like health monitoring and maintenance for you. In addition, you pay only for the agent nodes within your clusters, not for the masters. As a managed Kubernetes service, AKS provides:

  • Automated Kubernetes version upgrades and patching
  • Easy cluster scaling
  • Self-healing hosted control plane (masters)
  • Cost savings - pay only for running agent pool nodes

With Azure handling the management of the nodes in your AKS cluster, you no longer need to perform many tasks manually, like cluster upgrades. Because Azure handles these critical maintenance tasks for you, AKS does not provide direct access (such as with SSH) to the cluster.

Deployment options AKS

You can deploy AKS via the following options:

  • Azure CLI
  • Azure Portal (search the Marketplace for Azure Container Service)
  • ACS-Engine

ACS-Engine

Because we needed more control over the Azure Resource Manager templates, we used the open source acs-engine project to build our own custom Kubernetes cluster and deploy it via PowerShell.

The Azure Container Service Engine (acs-engine) generates ARM (Azure Resource Manager) templates for Docker enabled clusters on Microsoft Azure with your choice of DC/OS, Kubernetes, Swarm Mode, or Swarm orchestrators. The input to the tool is a cluster definition. The cluster definition is very similar to (in many cases the same as) the ARM template syntax used to deploy a Microsoft Azure Container Service cluster.

And now we are getting close to the title of this blog post.

The default installation option for the ACS-Engine is the following:
Download the latest version of acs-engine from here for your operating system, extract the binary, and copy it to your $PATH.

But my colleague Alessandro Vozza created a Dockerfile and image for the ACS-Engine, which makes it very easy to use the ACS-Engine tool.

Running ACS-Engine in Docker for Windows

To get the ACS-Engine running in Docker for Windows you need to do the following:

  1. Install Docker for Windows
  2. Configure a Shared Drive in Docker for Windows.
  3. Pull Docker image ams0/acs-engine-light-autobuild from Docker Hub
  4. Create an ACS-Engine Cluster Definition file
  5. Run the ACS-Engine Docker instance and generate ARM (Azure Resource Manager) templates from the Cluster Definition input.
  6. Deploy Kubernetes Cluster to Azure via PowerShell or Azure CLI.

Step 1. Install Docker for Windows

Just follow the steps described here to install Docker for Windows.

Step 2. Configure a Shared Drive in Docker for Windows

Because we are going to create and edit the ACS-Engine Cluster definition on our Windows machine we need to make sure we can access this file from within the Docker Engine on Windows.

Open the settings of the Docker for Windows client and configure the Shared Drive.

Step 3. Pull Docker image ams0/acs-engine-light-autobuild from Docker Hub

Open your favorite shell (PowerShell Core?) and type the following:

docker pull ams0/acs-engine-light-autobuild

You can verify the Docker image download by running the following command:

docker images

Step 4. Create an ACS-Engine Cluster Definition file

Open your favorite editor (VSCode) and create a Cluster Definition file and store that file on the Docker Shared Drive.

You can find some examples of Cluster Definition files on the GitHub page.

For more detailed configuration of your Cluster Definition, have a look at the Cluster Definition documentation on GitHub.

Empty example of a kubernetes.json Cluster Definition file:

{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes"
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "",
      "vmSize": "Standard_D2_v2"
    },
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 3,
        "vmSize": "Standard_D2_v2",
        "availabilityProfile": "AvailabilitySet"
      }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": {
        "publicKeys": [
          {
            "keyData": ""
          }
        ]
      }
    },
    "servicePrincipalProfile": {
      "clientId": "",
      "secret": ""
    }
  }
}
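Since the empty example still has its placeholders blank (dnsPrefix, keyData, clientId, secret), a quick sanity check before running acs-engine can save a failed generation; a small, hypothetical Python sketch:

```python
import json

# The empty cluster definition from above, with its placeholders still blank
definition = json.loads("""
{
  "apiVersion": "vlabs",
  "properties": {
    "masterProfile": { "count": 1, "dnsPrefix": "", "vmSize": "Standard_D2_v2" },
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": { "publicKeys": [ { "keyData": "" } ] }
    },
    "servicePrincipalProfile": { "clientId": "", "secret": "" }
  }
}
""")

# Flag every required value that is still empty
props = definition["properties"]
required = {
    "masterProfile.dnsPrefix": props["masterProfile"]["dnsPrefix"],
    "linuxProfile.ssh.keyData": props["linuxProfile"]["ssh"]["publicKeys"][0]["keyData"],
    "servicePrincipalProfile.clientId": props["servicePrincipalProfile"]["clientId"],
    "servicePrincipalProfile.secret": props["servicePrincipalProfile"]["secret"],
}
missing = [name for name, value in required.items() if not value]
print("Still empty:", missing)
```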

Step 5. Run the ACS-Engine Docker instance and generate ARM (Azure Resource Manager) templates from the Cluster Definition input

Go back to the shell where you have docker running.

Start the acs-engine Docker container with the shared drive attached where the kubernetes.json file is stored. The output is stored in the c:\temp\_output folder:

docker run -it --rm -v c:/Temp:/acs -w /acs ams0/acs-engine-light-autobuild:latest /acs-engine generate kubernetes.json

You should now see all kinds of files being created in the output folder of the ACS-Engine.

Step 6. Deploy Kubernetes Cluster to Azure via PowerShell or Azure CLI.

The final step is to deploy the ARM Templates created in the output folder from step 5.

If you use PowerShell you can do the following:

#region Variables
$ResourceGroupName = '[configure resource group name for Kubernetes Cluster]'
$Location = '[configure Azure Location]'
#endregion

#region Connect to Azure
Add-AzureRmAccount
 
#Select Azure Subscription
$subscription = 
(Get-AzureRmSubscription |
        Out-GridView `
        -Title 'Select an Azure Subscription ...' `
        -PassThru)
 
Set-AzureRmContext -SubscriptionId $subscription.Id -TenantId $subscription.TenantID
#endregion

#region create Resource Group to test Azure Template Functions
If (!(Get-AzureRMResourceGroup -name $ResourceGroupName -Location $Location -ErrorAction SilentlyContinue)) {
    New-AzureRmResourceGroup -Name $ResourceGroupName -Location $Location
}
#endregion

#region variables
$ARMTemplateFile = 'C:\Temp\_output\dts\azuredeploy.json'
$ParameterFile = 'C:\Temp\_output\dtc\azuredeploy.parameters.json'
#endregion

#region Test ARM Template
Test-AzureRmResourceGroupDeployment -ResourceGroupName $ResourceGroupName `
    -TemplateFile $ARMTemplateFile `
    -TemplateParameterFile $ParameterFile `
    -OutVariable testarmtemplate
#endregion

#region Deploy ARM Template with local Parameter file
$result = (New-AzureRmResourceGroupDeployment -ResourceGroupName $ResourceGroupName `
        -TemplateFile $ARMTemplateFile `
        -TemplateParameterFile $ParameterFile -Verbose -DeploymentDebugLogLevel All)
$result
#endregion

Hope you found this blog post interesting.


I've been using Visual Studio Team Services (VSTS) for almost all my development projects.

VSTS is a cloud service for collaborating on code development. It provides an integrated set of features that you access through your web browser or IDE client, including:

  • Git repositories for source control of your code
  • Build and release management to support continuous integration and delivery of your apps
  • Agile tools to support planning and tracking your work, code defects, and issues using Kanban and Scrum methods
  • A variety of tools to test your apps, including manual/exploratory testing, load testing, and continuous testing
  • Highly customizable dashboards for sharing progress and trends
  • Built-in wiki for sharing information with your team

Most developers would probably manage VSTS from the Git command line (source control) and the VSTS portal.

VSTeam PowerShell Module

VSTeam is a PowerShell module that exposes portions of the REST API for Visual Studio Team Services and Team Foundation Server.
It is written in pure PowerShell and can be used on Mac, Linux or Windows to connect to TFS or VSTS. To install VSTeam you can use the Install-Module cmdlet. Just make sure on Windows you run PowerShell as administrator and on Mac or Linux you sudo your PowerShell session. Then simply issue the following command.

Install-Module VSTeam

With this PowerShell module you are able to manage your VSTS and TFS server from the PowerShell command prompt.

While this is already a pretty cool way to manage your VSTS/TFS projects, builds, releases, etc., it can be even cooler and easier to manage your projects. Meet Simple Hierarchy in PowerShell (SHiPS).

Simple Hierarchy in PowerShell (SHiPS)
SHiPS is a PowerShell provider that allows any data store to be exposed like a file system as if it were a mounted drive. In other words, the data in your data store can be treated like files and directories so that a user can navigate data via cd or dir. SHiPS is a PowerShell provider. To be more precise it's a provider utility that simplifies developing PowerShell providers.

Would it not be cool to navigate your VSTS/TFS projects from the command prompt using SHiPS on top of the VSTeam PowerShell module?
Meet the new VSTeam module which integrates SHiPS functionality with the PowerShell VSTeam module.

SHiPS and VSTeam PowerShell module

To get started with the VSTeam PowerShell Module with SHiPS functionality download the latest version from the PowerShell Gallery:

Install-Module VSTeam -scope CurrentUser

The VSTeam PowerShell Module needs a personal access token for VSTS or TFS. More information on how to create a Personal Access Token can be found here.

High-Level steps to get started with VSTeam and SHiPS:

  1. Open PowerShell host
  2. Import VSTeam PowerShell Module
  3. Create VSTeam Profile
  4. Add VSTeam Account (and create SHiPS drive)
  5. Navigate the SHiPS VSTeam Drive

Step 1 and 2. Open PowerShell host and import VSTeam PowerShell Module.

Step 3. Create VSTeam Profile

Add-Profile -Account '[VSTSOrTFSAccountName]' -PersonalAccessToken '[personalaccesstoken]' -Name '[ProfileName]'

Step 4. Add VSTeam Account and create SHiPS Drive

Add-VSTeamAccount -Profile [profilename] -Drive vsteam

Copy the yellow output (New-PSDrive -Name vsteam -PSProvider SHiPS -Root 'VSTeam#VSAccount') to your host and run the command.

Step 5. Navigate the SHiPS VSTeam Drive

#region navigate to you VSTeam SHiPS drive
cd vsteam:
#endregion

#region list vsteam account projects
Get-ChildItem
#endregion

#region navigate to project
cd OperationsDay2017
#endregion

#region list folders for Project
Get-ChildItem
#endregion

#region list Builds for Project
cd Builds
#endregion

#region list Build properties
Get-ChildItem .\117 | Select *
#endregion

#region list Build properties using Get-Item
Get-Item .\117 | Select *
#endregion

#region list Unsuccessful Builds 
Get-ChildItem | Where-Object {$_.result -ne 'succeeded'}  | Format-List *
#endregion

#region list Release for Project
Get-ChildItem ..\Releases
#endregion

#region list Release properties
Get-ChildItem ..\Releases\Release-51 | select *
#endregion

#region find all rejected releases for specific requestor
Get-ChildItem ..\Releases | Where-Object {$_.createdByUser -eq 'Stefan Stranger'} | 
    Where-Object {$_.Environments.status -eq 'rejected'} |
    Select-Object createdByUser |
    Group-Object -Property createdByUser -NoElement
#endregion

#region find all rejected releases grouped by creator
Get-ChildItem ..\Releases |  
    Where-Object {$_.Environments.status -eq 'rejected'} |
    Select-Object createdByUser |
    Group-Object -Property createdByUser -NoElement |
    Sort-Object -Property Count
#endregion

#region overview of failed releases per release definition
Get-ChildItem ..\Releases |
    Where-Object {$_.Environments.status -eq 'rejected'} |
    Select-Object createdByUser, @{'L' = 'Name'; E = {$_.Environments.releasedefinition.name[0]}} |
    Group-Object -Property Name |
    Sort-Object -Property Count -Descending
#endregion

Remark:
Currently the SHiPS Module only supports the following cmdlets:

  • Get-Item
  • Get-ChildItem

Screenshot Navigate VSTeam Account with VSTeam PowerShell Module with SHiPS functionality

Screenshot Build Properties

You can check the announcement from Donovan Brown here.

Universal Dashboard

Adam Driscoll, the creator of the PowerShell Pro Tools for Visual Studio, has created a new feature: Universal Dashboard.
The PowerShell Pro Tools Universal Dashboard PowerShell module allows for the creation of web-based dashboards. The client and server side code for the dashboard is authored entirely in PowerShell. Charts, monitors, tables, and grids can easily be created with the cmdlets included in the module. The module is cross-platform and will run anywhere PowerShell Core can run.

With this PowerShell module you can easily create awesome dashboards. After creating a few, I wanted to test whether you could also run the Universal Dashboard module in a Docker instance.

Docker

I have a Windows 10 machine so I used the Docker for Windows Client to create Docker Containers on my Windows machine.

Docker is a software technology providing containers, promoted by the company Docker, Inc. Docker provides an additional layer of abstraction and automation of operating-system-level virtualization on Windows and Linux. [from Wikipedia] Please check the references if you want to learn more about Docker.

Example Universal Dashboard

With the script below we can create a dashboard that retrieves Microsoft stock prices by calling a web service.

<#
    Example Dashboard for showing Microsoft Stock Value last 6 months.
    Links: 
    - Universal Dashboard: https://adamdriscoll.gitbooks.io/powershell-tools-documentation/content/powershell-pro-tools-documentation/universal-dashboard.html
    - Stock API: https://iextrading.com/developer/
#>
$Dashboard = Start-Dashboard -Content { 
    New-Dashboard -Title "Stockprice Dashboard" -Color '#FF050F7F' -Content {
        #Insert HTML Code
        New-Html -Markup '<h1>Running Universal Dashboard in Container!</h1>'
        New-Row {
            New-Column -Size 12 -Content {
                New-Chart -Type Line -Title "Stock Values - 6 months" -Endpoint {
                    (Invoke-RestMethod -Uri 'https://api.iextrading.com/1.0/stock/msft/chart/6m' -Method Get) | 
                        Out-ChartData -LabelProperty "date" -DataProperty "close"                    
                }
            }
        }
        New-Row {
            New-Column -Size 12 {
                New-Grid -Title "StockPrice MSFT" `
                    -Headers @('Date', 'Close Stock Value', 'Low Stock Value', 'High Stock Value') -Properties @('date', 'close', 'low', 'high') `
                    -DefaultSortColumn 'Date' -DefaultSortDescending  `
                    -Endpoint {
                    $StockData = Invoke-RestMethod -Uri 'https://api.iextrading.com/1.0/stock/msft/chart/6m' -Method Get 
                    $StockData | Out-GridData 
                }
            }
        }
        
    }
} -Port 9292

#Open Dashboard
Start-Process http://localhost:9292


#Stop Dashboard
#Stop-Dashboard -Server $Dashboard

If we run the above script after installing the UniversalDashboard PowerShell module and setting the Universal Dashboard license, we get the following dashboard.

Running Universal Dashboard in Docker Instance

We are going to use the microsoft/powershell image to run our Universal Dashboard. This Docker image contains PowerShell Core.

Make sure you have installed Docker for Windows and have shared your C: drive in the Docker settings; we will store the Universal Dashboard script on the C: drive.

High-level steps:

  1. Install the Docker for Windows Client
  2. Configure the Shared Drives (Map C-drive)
  3. Start Docker Windows Client
  4. Create Dashboard Script and store on your Windows C-drive.
  5. Download and install microsoft/powershell Docker image
  6. Start Docker microsoft/powershell instance
  7. Start Dashboard PowerShell script on Docker microsoft/powershell running instance.

Steps 1 to 3 should be straightforward to execute yourself, so I'll continue with what your Dashboard script needs to look like.

<#
    Example Dashboard for showing Microsoft Stock Value last 6 months.
    Links: 
    - Universal Dashboard: https://adamdriscoll.gitbooks.io/powershell-tools-documentation/content/powershell-pro-tools-documentation/universal-dashboard.html
    - Stock API: https://iextrading.com/developer/
#>

#region install Universal Dashboard Module from PSGallery
Install-Module UniversalDashboard -Scope AllUsers -Force
#endregion

#region Set License
$License = Get-Content -Path '/data/license.txt'
Set-UDLicense -License $License
#endregion

#region Universal Dashboard
Start-Dashboard -Content { 
    New-Dashboard -Title "Stockprice Dashboard" -Color '#FF050F7F' -Content {
        #Insert HTML Code
        New-Html -Markup '<h1>Running Universal Dashboard on Docker Instance!</h1>'
        New-Row {
            New-Column -Size 12 -Content {
                New-Chart -Type Line -Title "Stock Values - 6 months" -Endpoint {
                    (Invoke-RestMethod -Uri 'https://api.iextrading.com/1.0/stock/msft/chart/6m' -Method Get) | 
                        Out-ChartData -LabelProperty "date" -DataProperty "close"                    
                }
            }
        }
        New-Row {
            New-Column -Size 12 {
                New-Grid -Title "StockPrice MSFT" `
                    -Headers @('Date', 'Close Stock Value', 'Low Stock Value', 'High Stock Value') -Properties @('date', 'close', 'low', 'high') `
                    -DefaultSortColumn 'Date' -DefaultSortDescending  `
                    -Endpoint {
                    $StockData = Invoke-RestMethod -Uri 'https://api.iextrading.com/1.0/stock/msft/chart/6m' -Method Get 
                    $StockData | Out-GridData 
                }
            }
        }
        
    }
} -Port 9090 -Wait
#endregion

This script downloads and installs the UniversalDashboard module from the PSGallery, configures the license, and starts the dashboard listening on port 9090.

Save the dockeruniversaldashboard.ps1 file on your c-drive.

Now it's time to start Docker and download the latest microsoft/powershell Docker image from the Docker Hub.

Open your PowerShell Command prompt (as Administrator) and search for the microsoft/powershell image.

If you have not downloaded the microsoft/powershell image yet, run

docker pull microsoft/powershell

in the PowerShell Console.

With the command

docker images

you can see your installed Docker images.

Now we can run the following docker commands to start our Universal Dashboard hosted in the Docker microsoft/powershell instance.

docker run -d -p 9090:9090 --rm --name ud -i -v c:/Temp:/data microsoft/powershell 
docker exec -i ud  powershell -file ./data/dockeruniversaldashboard.ps1

The above docker commands start the container with port 9090 published and the c:\Temp folder (where we saved the dockeruniversaldashboard.ps1 file we created earlier) mounted at /data.

The second command starts the PowerShell script in the Docker instance.

An animated gif shows the end result.

Go buy your PowerShell Pro License and start creating cool Dashboards.


There are many ways to start an Azure Automation Runbook: from the Azure Portal, with PowerShell cmdlets, via a webhook, and more.

If you want to call a Runbook from another tool, like OMS or any other tooling that supports web requests, you can create a webhook for the Runbook that the external tool can call.

A Webhook allows you to start a particular runbook in Azure Automation through a single HTTP request. This allows external services such as Visual Studio Team Services, GitHub, Microsoft Operations Management Suite Log Analytics, or custom applications to start runbooks without implementing a full solution using the Azure Automation API.

A disadvantage of using a Webhook for an Azure Automation Runbook is the lack of authentication for calling the Webhook. The only security available for calling the Azure Automation Runbook Webhook is the secret token that is generated during the creation of the Webhook.

After creation the URL can no longer be viewed, but anyone who knows the webhook URL (and the required parameter inputs) can call the Runbook through it.
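For illustration, a webhook invocation is nothing more than an HTTP POST against that URL; the URI and parameter names below are hypothetical, and the live curl call is shown commented out since no real webhook exists in this sketch:

```shell
# Hypothetical webhook URI; the token query string is the only thing protecting it
WEBHOOK_URI='https://s2events.azure-automation.net/webhooks?token=<secret-token>'
# Runbook parameters travel in the request body
BODY='{"FirstName":"Stefan","LastName":"Stranger"}'
# The live call would be:
#   curl --request POST "$WEBHOOK_URI" --header 'Content-Type: application/json' --data "$BODY"
echo "POST $WEBHOOK_URI"
```

Anyone who obtains that URI can make the same request, which is the limitation discussed above.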

So how could you call an Azure Automation Runbook via a web request using a username and password?

Azure REST API

Azure Resource Manager provides a way for you to deploy and manage the services that make up your applications. For an introduction to deploying and managing resources with Resource Manager, see Azure Resource Manager Overview. Most, but not all, services support Resource Manager, and some services support Resource Manager only partially. Microsoft will enable Resource Manager for every service that is important for future solutions, but until the support is consistent, you need to know the current status for each service. For information about the available services and how to work with them, see Resource Manager providers, regions, API versions and schemas. [*from Azure Resource Manager REST API Reference]

How to call an Azure Automation Runbook with Azure ARM REST API?

So how does authentication work when you want to do a web request call against the Azure ARM REST API? You need to supply a bearer access token in the request header of the web request. But how do you get that access token? You can retrieve it by creating an Azure Active Directory application and service principal and using a ClientID and ClientSecret. We will use PowerShell to create the Service Principal to access resources in Azure.

Create a service principal to access resources

  1. Create the AD application with a password
  2. Create the service principal
  3. Assign the Contributor role to the service principal

#region variables
$ADApplicationName = 'demowebrequest'
$HomePage = 'https://www.stranger.nl/demowebrequest'
$ADApplicationPassword = 'P@ssw0rd!'
#endregion

#region Login to Azure
Add-AzureRmAccount
 
#Select Azure Subscription
$subscription = 
(Get-AzureRmSubscription |
        Out-GridView `
        -Title 'Select an Azure Subscription ...' `
        -PassThru)
 
Set-AzureRmContext -SubscriptionId $subscription.Id -TenantId $subscription.TenantID

Select-AzureRmSubscription -SubscriptionName $subscription.Name
#endregion

#region create SPN with Password
New-AzureRmADApplication -DisplayName $ADApplicationName -HomePage $HomePage -IdentifierUris $HomePage -Password $ADApplicationPassword -OutVariable app
New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId
New-AzureRmRoleAssignment -RoleDefinitionName Contributor -ServicePrincipalName $app.ApplicationId.Guid

Get-AzureRmADApplication -DisplayNameStartWith 'demowebrequest' -OutVariable app
Get-AzureRmADServicePrincipal -ServicePrincipalName $app.ApplicationId.Guid -OutVariable SPN
#endregion

Remark:

If you want the Service Principal to manage only Automation Runbooks, give this service account the "Automation Operator" role and limit its access to the scope where the Automation Account is created.

You now need to follow the steps described in the blog post Using the Azure ARM REST API – Get Access Token.

If you followed the steps described there you should have a ClientId and ClientSecret which are going to be used to Authenticate against the Azure ARM REST API.

You can verify correct authentication using the following commands from bash. An access token is returned.

curl --request POST "https://login.windows.net/[tennantid]/oauth2/token" --data-urlencode "resource=https://management.core.windows.net" --data-urlencode "client_id=[clientid]" --data-urlencode "grant_type=client_credentials" --data-urlencode "client_secret=[clientsecret]"
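The response is a JSON document; the access_token field is what goes into the Authorization header later on. A quick sketch of extracting it with jq (the response below is a fake with the same shape; jq is assumed to be installed, as in the bash scripts further down):

```shell
# Fake token response mimicking the shape the token endpoint returns
response='{"token_type":"Bearer","expires_on":"1700000000","access_token":"eyJ0eXAi.fake.token"}'
# Pull out the bearer token; -r strips the surrounding JSON quotes
accesstoken=$(echo "$response" | jq -r '.access_token')
echo "$accesstoken"
```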

Or if you prefer PowerShell you can use the following commands:

#Azure Authentication Token

#requires -Version 3
#SPN ClientId and Secret
$ClientID       = "clientid" #ApplicationID
$ClientSecret   = "ClientSecret"  #key from Application
$tenantid       = "TenantID"

$TokenEndpoint = "https://login.windows.net/{0}/oauth2/token" -f $tenantid
$ARMResource = "https://management.core.windows.net/"

$Body = @{
        'resource'= $ARMResource
        'client_id' = $ClientID
        'grant_type' = 'client_credentials'
        'client_secret' = $ClientSecret
}

$params = @{
    ContentType = 'application/x-www-form-urlencoded'
    Headers = @{'accept'='application/json'}
    Body = $Body
    Method = 'Post'
    URI = $TokenEndpoint
}

$token = Invoke-RestMethod @params

$token | select access_token, @{L='Expires';E={[timezone]::CurrentTimeZone.ToLocalTime(([datetime]'1/1/1970').AddSeconds($_.expires_on))}} | fl *
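The expires_on value in that last line is a Unix epoch (seconds since 1970-01-01 UTC), which is what the [timezone] conversion decodes. The same conversion from bash, assuming GNU date:

```shell
# expires_on comes back as a Unix epoch timestamp
expires_on=1700000000
# Render it as a readable UTC timestamp (GNU date syntax)
date -u -d "@${expires_on}" +"%Y-%m-%dT%H:%M:%SZ"
```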

The next step is to call an Azure Automation Runbook using a web request against the Azure REST API with the earlier retrieved access token.

Azure Automation Runbook Web Request

Let's first start with retrieving Azure Automation Runbook information.

Method Request URI
GET 'https://management.azure.com/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.Automation/automationAccounts/{AutomationAccountName}/runbooks?api-version={ApiVersion}'

Example web request call using curl from bash:

#!/bin/bash

# bash script to retrieve Azure Runbook information using plain Azure ARM REST API web requests

#Azure Subscription variables
ClientID="[application clientid]" #ApplicationID
ClientSecret="[application client secret]"  #key from Application
TenantID="[azure tenantid]"
SubscriptionID="[azure subscriptionid]"
ResourceGroupName="[resourcegroup name for azure automation account]"
AutomationAccountName="[azure automation account name]"
APIVersion="2015-10-31"

accesstoken=$(curl -s --header "accept: application/json" --request POST "https://login.windows.net/$TenantID/oauth2/token" --data-urlencode "resource=https://management.core.windows.net/" --data-urlencode "client_id=$ClientID" --data-urlencode "grant_type=client_credentials" --data-urlencode "client_secret=$ClientSecret" | jq -r '.access_token')

#Use AccessToken in Azure ARM REST API call for Runbook Info
runbookURI="https://management.azure.com/subscriptions/$SubscriptionID/resourceGroups/$ResourceGroupName/providers/Microsoft.Automation/automationAccounts/$AutomationAccountName/runbooks?api-version=$APIVersion"

curl -s --header "authorization: Bearer $accesstoken" --request GET "$runbookURI" | jq .

Result of running the script in WSL (Bash on Windows):

Trigger Azure Automation Runbook with web request:

Simple Hello World Runbook PowerShell Script (HelloWorld.ps1):

[CmdletBinding()]
param(
    $firstname,
    $lastname
)

Write-Output "Hello $firstname $lastname"

This Runbook has two parameters, FirstName and LastName.

If we now want to trigger this Runbook using a web request we need the following information.

Method Request URI
PUT 'https://management.azure.com/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.Automation/automationAccounts/{AutomationAccountName}/jobs/{GUID}?api-version={ApiVersion}'
BODY
{"tags":{},"properties":{"runbook":{"name":"{RunbookName}"},"parameters":{"FirstName":"{FirstName}","LastName":"{LastName}"}}}

Example web request using curl:

#!/bin/bash

# bash script to start an Azure Automation Runbook job using plain Azure ARM REST API web requests

#Azure Subscription variables
ClientID="[application clientid]" #ApplicationID
ClientSecret="[application client secret]"  #key from Application
TenantID="[azure tenantid]"
SubscriptionID="[azure subscriptionid]"
ResourceGroupName="[resourcegroup name for azure automation account]"
AutomationAccountName="[azure automation account name]"
RunbookName="[runbook name]"
APIVersion="2015-10-31"
GUID=$(uuidgen)

accesstoken=$(curl -s --header "accept: application/json" --request POST "https://login.windows.net/$TenantID/oauth2/token" --data-urlencode "resource=https://management.core.windows.net/" --data-urlencode "client_id=$ClientID" --data-urlencode "grant_type=client_credentials" --data-urlencode "client_secret=$ClientSecret" | jq -r '.access_token')

#Use AccessToken in Azure ARM REST API call for Runbook Info
runbookURI="https://management.azure.com/subscriptions/$SubscriptionID/resourceGroups/$ResourceGroupName/providers/Microsoft.Automation/automationAccounts/$AutomationAccountName/jobs/$GUID?api-version=$APIVersion"

curl -s --header "authorization: Bearer $accesstoken" --header "Content-Type: application/json" -d '{"tags":{},"properties":{"runbook":{"name":"'"$RunbookName"'"},"parameters":{"LastName":"Stranger","FirstName":"Stefan"}}}' --request PUT "$runbookURI" | jq .

Result output:

You can also check the Runbook output in the Azure Portal.
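The bash examples above list runbooks and start a job, but do not poll the job afterwards. Job status sits behind the same jobs/{GUID} URI with a GET request; here is a sketch of building that URI (every ID below is a placeholder):

```shell
# Placeholder values; in a real script these come from your subscription
SubscriptionID='00000000-0000-0000-0000-000000000000'
ResourceGroupName='automation-rg'
AutomationAccountName='demo-automation'
GUID='11111111-1111-1111-1111-111111111111'   # the job id used in the PUT request
jobURI="https://management.azure.com/subscriptions/$SubscriptionID/resourceGroups/$ResourceGroupName/providers/Microsoft.Automation/automationAccounts/$AutomationAccountName/jobs/$GUID?api-version=2015-10-31"
echo "$jobURI"
# With a live access token:
#   curl -s --header "authorization: Bearer $accesstoken" --request GET "$jobURI" | jq '.properties.status'
```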

If you prefer to use PowerShell to call the Azure Automation Runbook via the Azure REST API you can use the following code:

#requires -Version 3

# ---------------------------------------------------
# Script: CallRunbookFromRESTAPI.ps1
# Version:
# Author: Stefan Stranger
# Date: 09/08/2017 15:16:25
# Description: Call Azure Automation Runbook using Azure ARM REST API calls.
# Comments: https://docs.microsoft.com/en-us/rest/api/automation/
# Changes:  
# Disclaimer: 
# This example is provided "AS IS" with no warranty expressed or implied. Run at your own risk. 
# **Always test in your lab first**  Do this at your own risk!! 
# The author will not be held responsible for any damage you incur when making these changes!
# ---------------------------------------------------


#region variables
$ClientID       = '[ClientID]' #ApplicationID
$ClientSecret   = '[ClientSecret]'  #key from Application
$tenantid      = '[Azure Tenant Id]'
$SubscriptionId = '[Azure Subscription Id]'
$resourcegroupname = '[Resource Group Automation Account]'
$AutomationAccountName = '[Automation Account Name]'
$RunbookName = '[Runbook Name]'
$APIVersion = '2015-10-31'
#endregion

#region Get Access Token
$TokenEndpoint = "https://login.windows.net/{0}/oauth2/token" -f $tenantid
$ARMResource = "https://management.core.windows.net/"

$Body = @{
        'resource'= $ARMResource
        'client_id' = $ClientID
        'grant_type' = 'client_credentials'
        'client_secret' = $ClientSecret
}

$params = @{
    ContentType = 'application/x-www-form-urlencoded'
    Headers = @{'accept'='application/json'}
    Body = $Body
    Method = 'Post'
    URI = $TokenEndpoint
}

$token = Invoke-RestMethod @params
#endregion


#region get Runbooks
$Uri = 'https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Automation/automationAccounts/{2}/runbooks?api-version={3}' -f $SubscriptionId, $resourcegroupname, $AutomationAccountName, $APIVersion
$params = @{
  ContentType = 'application/x-www-form-urlencoded'
  Headers     = @{
    'authorization' = "Bearer $($token.Access_Token)"
  }
  Method      = 'Get'
  URI         = $Uri
}
Invoke-RestMethod @params -OutVariable Runbooks
#endregion

#region Start Runbook
$Uri = 'https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Automation/automationAccounts/{2}/jobs/{3}?api-version={4}' -f $SubscriptionId, $resourcegroupname, $AutomationAccountName, $((New-Guid).guid), $APIVersion
$body = @{
  'properties' = @{
    'runbook'  = @{
      'name' = $RunbookName
    }
    'parameters' = @{
      'FirstName' = 'Stefan'
      'LastName' = 'Stranger'
    }
  }
  'tags'     = @{}
} | ConvertTo-Json
$body

$params = @{
  ContentType = 'application/json'
  Headers     = @{
    'authorization' = "Bearer $($token.Access_Token)"
  }
  Method      = 'Put'
  URI         = $Uri
  Body        = $body
}

Invoke-RestMethod @params -OutVariable Runbook
$Runbook.properties
#endregion

#region get Runbook Status
$Uri ='https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Automation/automationAccounts/{2}/Jobs/{3}?api-version=2015-10-31' -f $SubscriptionId, $resourcegroupname, $AutomationAccountName, $($Runbook.properties.jobId)
$params = @{
  ContentType = 'application/json'
  Headers     = @{
    'authorization' = "Bearer $($token.Access_Token)"
  }
  Method      = 'Get'
  URI         = $Uri
}
Invoke-RestMethod @params -OutVariable Status
$Status.properties
#endregion

Have fun with calling your Runbooks using web requests against the Azure ARM REST API!

