This is part 2 of the REST API blog post. In part 1 we successfully set up two REST API endpoints using the UniversalDashboard PowerShell module. In this part we are going to create a simple module that supports some CRUD operations against our API. As we are trying to keep things as simple as possible, we will not use any fancy framework (like Plaster) to build our module. We are also going to skip a very important step you should familiarize yourself with, Pester tests. Let's get to it.


The module
We will build a module called FilesAPI. The module folder will look like this:



In the functions folder I have already added the two helper functions from part 1, Get-AuthorizationHeader and ConvertTo-Base64. The other folders are just placeholders for important stuff: classes, private functions that you do not want to make available to the module consumer, and Pester tests. For such a small module as the one we are going to create, one could argue that it is much easier to just add the functions directly in the module definition file (psm1). However, I strongly urge you to create a folder structure for your modules. It is a huge benefit to have each function, test and class in a separate file. If you use the folder structure, Visual Studio shines compared to ISE as your module grows. It will also make the transition to module frameworks like Plaster much easier.

Since we are using a single file for each function, this is what we will put in our module file:

The $ModuleScript variable contains the script that will load each function file, class and private function in our folder structure. As we add functions to the folders, it will only export the functions in the functions folder. It is a generic template that I started to use quite recently; the inspiration came from Kieran Jacobsen and his Plaster template. Go ahead and execute the script to create the FilesApi.psm1 module file.
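A minimal sketch of such a loader, assuming folders named classes, private and functions, could look like this:

    # FilesAPI.psm1 - sketch of a loader for the folder structure described above
    $moduleRoot = $PSScriptRoot

    # Dot-source classes, private functions and public functions
    foreach ($folder in 'classes', 'private', 'functions') {
        $folderPath = Join-Path -Path $moduleRoot -ChildPath $folder
        if (Test-Path -Path $folderPath) {
            Get-ChildItem -Path $folderPath -Filter *.ps1 | ForEach-Object { . $_.FullName }
        }
    }

    # Export only the public functions (one file per function in the functions folder)
    $publicFunctions = Get-ChildItem -Path (Join-Path -Path $moduleRoot -ChildPath 'functions') -Filter *.ps1 |
        Select-Object -ExpandProperty BaseName
    Export-ModuleMember -Function $publicFunctions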

First API function – Get-FileApi
We are going to start with the first simple function, which we will name Get-FileApi. As you might recall from the first part, the API needs an Authorization header for all calls, so we need that as a parameter for the function. It might also be a good idea to add a Name parameter so we can filter on specific files. Below is my function:

Basically we are just wrapping Invoke-RestMethod, supplying the URI, the method and the header containing the Authorization key/hashtable. We mark the Authorization parameter as mandatory and leave the Name parameter optional. Save this function in the functions directory and name the file Get-FileApi.ps1.
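A sketch of such a function could look like the following (the URI and port simply follow the part 1 example and are assumptions):

    # Sketch of Get-FileApi; URI/port assumed from part 1, Name filtering done client-side here
    function Get-FileApi {
        [CmdletBinding()]
        param (
            [Parameter(Mandatory)]
            [hashtable]$Authorization,

            [string]$Name
        )

        $result = Invoke-RestMethod -Uri 'http://localhost:11000/api/file/' -Method Get -Headers $Authorization

        if ($Name) {
            # Filter locally until the endpoint supports a Name header
            $result = $result | Where-Object { $_.Name -like $Name }
        }

        $result
    }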


Second API function – Set-FileApi
Next we are going to create a function for the POST method of the API. The endpoint accepts 3 parameters:
  • Authorization
  • FileName
  • Content
Our function needs to implement these parameters. Since this function uses the verb Set, we should be nice to the consumers of our module and implement WhatIf, more commonly known as SupportsShouldProcess. I really hate it when a destructive command does not implement the WhatIf parameter (here's looking at you, Azure PowerShell). Again we are marking the Authorization parameter as Mandatory for our function:
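A sketch of a Set-FileApi along those lines, with the URI again assumed from part 1:

    # Sketch of Set-FileApi with WhatIf support; URI and body layout are assumptions
    function Set-FileApi {
        [CmdletBinding(SupportsShouldProcess)]
        param (
            [Parameter(Mandatory)]
            [hashtable]$Authorization,

            [Parameter(Mandatory)]
            [string]$FileName,

            [string]$Content
        )

        $body = @{
            Authorization = $Authorization.Authorization
            FileName      = $FileName
            Content       = $Content
        }

        if ($PSCmdlet.ShouldProcess($FileName, 'Create file through the API')) {
            Invoke-RestMethod -Uri 'http://localhost:11000/api/file/' -Method Post -Body $body
        }
    }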

Pretty cool: with a few extra lines of code our function supports the WhatIf parameter, and we have created two functions that talk to an API. Copy the function and save it in the functions folder as Set-FileApi.ps1.


Turning it up a notch
At this point we have two working functions, however this is not really how APIs work. UniversalDashboard helps us quite a bit since the objects returned by Invoke-RestMethod are converted to real PowerShell objects. A raw web request (Invoke-WebRequest, for instance) returns a response object with two important properties: Content and StatusCode. To make it a little more interesting, we will try to get as close as possible to a real API. We are going to update our endpoints to return an object containing two properties, Content and StatusCode. Content will contain a JSON string of the object previously returned from the API. StatusCode is an HTTP code indicating the status of our request, which we will set to 200 if authorization is okay. We are also going to add a Name header to our endpoint to simulate server-side filtering. After the modifications, the endpoint definition looks like this:
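As a rough illustration of the idea (the way the Authorization and Name values reach the scriptblock is simplified here, and the decoding is an assumption), the updated GET endpoint could be sketched like this:

    # Sketch of the updated GET endpoint: wrap the result in StatusCode/Content
    $getEndpoint = New-UDEndpoint -Url '/file/' -Method 'GET' -Endpoint {
        param($Authorization, $Name)   # in the original definition these come from the request headers

        $path = 'c:\temp\api'

        # Decode the Base64 "user:password" key (decoding details follow the original definition)
        $decodedKey = [System.Text.Encoding]::UTF8.GetString(
            [System.Convert]::FromBase64String(($Authorization -replace '^Basic\s+', ''))
        )

        if ($decodedKey -eq 'foo:bar') {
            $files = Get-ChildItem -Path $path -File
            if ($Name) {
                $files = $files | Where-Object { $_.Name -like $Name }   # simulated server-side filtering
            }

            @{
                StatusCode = 200
                Content    = ($files | Select-Object -Property Name, Length, LastWriteTime | ConvertTo-Json)
            } | ConvertTo-Json
        }
    }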

Go ahead and register the new endpoint.

Running our Get-FileApi function now results in this output:



We can see the StatusCode and the Content property containing a JSON string, which is what we are after. So we have to check the StatusCode to verify that the request was OK (200), and then convert the Content string to an object using ConvertFrom-Json. After the modifications we have the following:
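A sketch of the updated function, with the Name header and status handling as assumptions:

    # Sketch of the updated Get-FileApi: check StatusCode, then expand the JSON Content string
    function Get-FileApi {
        [CmdletBinding()]
        param (
            [Parameter(Mandatory)]
            [hashtable]$Authorization,

            [string]$Name
        )

        $headers = $Authorization
        if ($Name) {
            $headers = $Authorization + @{ Name = $Name }   # Name header for server-side filtering
        }

        $response = Invoke-RestMethod -Uri 'http://localhost:11000/api/file/' -Method Get -Headers $headers

        if ($response.StatusCode -eq 200) {
            $response.Content | ConvertFrom-Json
        }
        else {
            Write-Warning "The API returned status code $($response.StatusCode)"
        }
    }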


Running the Get-FileApi function gives us a nice object, and filtering by Name works too, even with wildcards:



Applying the same logic to our CreateFileEndpoint, we update the definition to this:

After registering the new endpoint definition and running the unmodified Set-FileApi function, we get the new response from our API:



Updating the Set-FileApi function to convert the JSON string to an object, it should look something like this:
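A sketch of what that update amounts to (same assumptions as before):

    # Sketch of the updated Set-FileApi; same parameters as before, now unwrapping the response
    function Set-FileApi {
        [CmdletBinding(SupportsShouldProcess)]
        param (
            [Parameter(Mandatory)][hashtable]$Authorization,
            [Parameter(Mandatory)][string]$FileName,
            [string]$Content
        )

        $body = @{
            Authorization = $Authorization.Authorization
            FileName      = $FileName
            Content       = $Content
        }

        if ($PSCmdlet.ShouldProcess($FileName, 'Create file through the API')) {
            $response = Invoke-RestMethod -Uri 'http://localhost:11000/api/file/' -Method Post -Body $body

            if ($response.StatusCode -eq 200) {
                $response.Content | ConvertFrom-Json
            }
            else {
                Write-Warning "The API returned status code $($response.StatusCode)"
            }
        }
    }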

Trying the Set-FileApi function after updating, we again get a nice object output:




Final module work
If you have followed along until now, your functions directory should look like this:



Your root folder should just have the folders and a single file FilesApi.psm1 (we created that in the beginning):



Now we are going to create the FilesApi.psd1, or what is known as the manifest. Here is the script I will be using:
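As a rough illustration, a manifest-creation script along those lines could look like this (GUID, author and paths are placeholders):

    # Sketch of a manifest-creation script; GUID, author and paths are placeholders
    $manifestPath = '.\FilesAPI\FilesAPI.psd1'

    $manifest = @{
        Path              = $manifestPath
        RootModule        = 'FilesAPI.psm1'
        ModuleVersion     = '0.1.0'
        Guid              = '11111111-2222-3333-4444-555555555555'   # keep the same GUID between runs
        Author            = 'Tore Groneng'
        FunctionsToExport = (Get-ChildItem -Path '.\FilesAPI\functions' -Filter *.ps1).BaseName
    }
    New-ModuleManifest @manifest

    # Re-encode the manifest as UTF8 for source control (see the notes that follow)
    $content = Get-Content -Path $manifestPath -Raw
    [System.IO.File]::WriteAllText($manifestPath, $content, [System.Text.Encoding]::UTF8)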

A couple of notes here. If you plan to run this script several times, for instance when you add more functions, best practice is to keep the GUID for the module; if you just rerun the script, the manifest will get a new GUID each time. Also, if you are checking the manifest into source control, it is always nice that the manifest is encoded in UTF8. New-ModuleManifest encodes it in ASCII format, which you probably do not want. In the last two lines I am converting the manifest to UTF8. After you have created the manifest, you may go ahead and start a new PowerShell session and import the module:



Let's take it for a spin. First we need to create an authorization variable:




After getting an authorization object, we go ahead and run Get-FileApi with the parameter:



Well, I’ll be darned, it works. Who would have thought.


What is missing?
A couple of things, really.

Help
As a best practice you should add help to your functions.

Tests
Test Driven Development, TDD for short, is something you might want to look into. In essence you write your unit tests before you create the function. As I have mentioned previously, you can use Pester for this. It is built in and included with PowerShell as a module.

Authorization
Different APIs have a couple of ways of adding authentication to the requests. I have only shown one method. 

URL query parameters

I chose not to include it in this guide. If you come across such an API, like I have, I strongly suggest you add a function that can build the URL for you. If the interest is there, I can show the Join-Url function I wrote, which will build a URL for you with child paths and URL query parameters.

BaseURL problem for production/test
In our example, we have hard-coded the URL in the functions. It is quite common to have a test/dev environment for the API and a production environment. This requires that your module supports two different base URLs. You can solve this quite easily by adding a module-level variable in the psm1 file ([bool]$script:IsProduction = $false). Then you add two new functions to the module, Get-ApiBaseURL and Set-ApiEnvironment. Get-ApiBaseURL contains just an if-block and will return the correct URL for the environment you want to work with. Set-ApiEnvironment will flip the module-level boolean variable to match your environment. You might also want to add a Get-ApiEnvironment function that will return the value of the $Script:IsProduction variable.
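A minimal sketch of that approach, with placeholder URLs:

    # Sketch of environment handling with a module-level variable; URLs are placeholders
    [bool]$script:IsProduction = $false

    function Get-ApiBaseURL {
        if ($script:IsProduction) {
            'https://api.example.com'       # production base URL (placeholder)
        }
        else {
            'https://api-test.example.com'  # test/dev base URL (placeholder)
        }
    }

    function Set-ApiEnvironment {
        param (
            [Parameter(Mandatory)]
            [ValidateSet('Production', 'Test')]
            [string]$Environment
        )
        $script:IsProduction = $Environment -eq 'Production'
    }

    function Get-ApiEnvironment {
        if ($script:IsProduction) { 'Production' } else { 'Test' }
    }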

Key takeaway
The goal of creating this guide is twofold. Firstly, I wanted you to become aware of the awesome UniversalDashboard module and how easy it is to create a REST API using only PowerShell. Secondly, I wanted to show that it is quite trivial to create a module that consumes a REST API. I hope you have enjoyed reading or doing the tutorial as much as I did.

You can find part 1 of this tutorial here.

Cheers

Tore


Over the years I have developed different PowerShell modules for different web APIs. I thought it would be a good idea to write a two-part series about how you could go about doing this. We will run through the entire process of building a module for a REST API. I will try my best to keep this as simple as possible and leave the more advanced stuff for a follow-up post if the interest is there.

What you need
Depending on your experience with source control and PowerShell in general, you might want to use Git or some other repository for the code. In addition we are going to create a test REST API using the splendid UniversalDashboard PowerShell module created by Adam Driscoll. It is available on the PowerShell Gallery. Other prerequisites are built into PowerShell. I will assume that you are following along using PowerShell version 5 or greater.
HTTP methods for REST APIs
The primary or most common HTTP verbs used are POST, GET, PUT, PATCH and DELETE.

  • POST - Create something
  • GET - Read or get something
  • PUT - Update or replace something
  • PATCH - Update or modify something
  • DELETE - Delete something

To keep it as simple as possible, I will focus on using POST and GET in this guide.
Authorization
APIs usually have some sort of requirement for authentication using a key. The UniversalDashboard module does not yet support authorization for REST endpoints, however we are going to fake it. My pseudo-authentication method is not something you should ever use in production.
The REST API
We will create an API that enables you to list the files in a folder on your drive (sorry, it was the best I could come up with). The API will also let you create files with content. I know this is not really exciting, however the overall goal is to keep it simple, and everybody understands files.
REST Endpoints
First, if you don't have the UniversalDashboard module installed, fire up PowerShell and download the module from the Gallery:

    Install-Module -Name UniversalDashboard -Force

Now we are going to create our first GET endpoint.

We create a New-UDEndpoint with the partial URL of "/file/" and we will support the GET method. The endpoint itself is just a scriptblock that is executed in a separate runspace on your machine. Currently a GET endpoint does not support parameters (Authorization) in the scriptblock like a POST endpoint does, however I included it anyway. Notice that we have to extract the Authorization value from the request header inside the runspace. The endpoint assumes that you have a folder ($path) on your computer with the path "c:\temp\api". If the folder does not exist, you will have to create it. If the request contains the supersecret key "foo:bar", the request will get all the files in c:\temp\api and write them to the pipeline as JSON objects. If you do not understand the authorization key concept, do not worry; we will create a helper function, actually two helper functions, that will help you create the key.
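A simplified sketch of such an endpoint (the header handling and decoding are condensed and partly assumed):

    # Sketch of the GET endpoint; in the original definition the Authorization value is read from the request headers
    $getEndpoint = New-UDEndpoint -Url '/file/' -Method 'GET' -Endpoint {
        param($Authorization)

        $path = 'c:\temp\api'

        # Decode the Base64 "user:password" key (decoding details follow the original definition)
        $decodedKey = [System.Text.Encoding]::UTF8.GetString(
            [System.Convert]::FromBase64String(($Authorization -replace '^Basic\s+', ''))
        )

        if ($decodedKey -eq 'foo:bar') {
            Get-ChildItem -Path $path -File |
                Select-Object -Property Name, Length, LastWriteTime |
                ConvertTo-Json
        }
    }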

Next up is our POST endpoint which is quite similar.

The POST method supports scriptblock parameters, and we need Authorization, FileName and Content parameters to be able to create files. As you may notice, we have the same decoding of the Authorization key.
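A simplified sketch of the POST endpoint, with the same assumptions as above:

    # Sketch of the POST endpoint; the scriptblock parameters match the list above
    $postEndpoint = New-UDEndpoint -Url '/file/' -Method 'POST' -Endpoint {
        param($Authorization, $FileName, $Content)

        $path = 'c:\temp\api'

        # Same decoding of the Authorization key as in the GET endpoint
        $decodedKey = [System.Text.Encoding]::UTF8.GetString(
            [System.Convert]::FromBase64String(($Authorization -replace '^Basic\s+', ''))
        )

        if ($decodedKey -eq 'foo:bar') {
            Set-Content -Path (Join-Path -Path $path -ChildPath $FileName) -Value $Content
            @{ FileName = $FileName; Created = $true } | ConvertTo-Json
        }
    }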
Helper functions
To aid us in working with the API, we need two helper functions to be able to create an Authorization key. Normally the key is added as a header to a request or in the actual body of the request. We will have to add the key to the body of the request for the POST endpoint.


The Get-AuthorizationHeader is the main function that will create the key for us. The ConvertTo-Base64 function is used by Get-AuthorizationHeader to encode the key.
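A sketch of what the two helpers boil down to (the parameter names and the returned hashtable are assumptions):

    # Sketch of the two helper functions; parameter names and output shape are assumptions
    function ConvertTo-Base64 {
        param (
            [Parameter(Mandatory, ValueFromPipeline)]
            [string]$String
        )
        process {
            [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($String))
        }
    }

    function Get-AuthorizationHeader {
        param (
            [Parameter(Mandatory, ValueFromPipeline)]
            [pscredential]$Credential
        )
        process {
            # Build a "user:password" key and encode it with ConvertTo-Base64
            $plainKey = '{0}:{1}' -f $Credential.UserName, $Credential.GetNetworkCredential().Password
            @{ Authorization = ($plainKey | ConvertTo-Base64) }
        }
    }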
First test
To make sure everything is working, we will test the GET endpoint. Copy the helper functions and the endpoint definition to your favorite PowerShell editor. Then we will start the endpoint and create a key that can authenticate us against the API:
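A sketch of that test sequence, assuming the REST endpoints are exposed under /api/ and port 11000 is used:

    # Sketch of the first test, following the steps described below
    Start-UDRestApi -Port 11000 -Endpoint @($getEndpoint, $postEndpoint)

    $password   = ConvertTo-SecureString -String 'bar' -AsPlainText -Force
    $credential = [pscredential]::new('foo', $password)
    $auth       = $credential | Get-AuthorizationHeader

    Invoke-RestMethod -Uri 'http://localhost:11000/api/file/' -Method Get -Headers $auth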

First we start the endpoints by running Start-UDRestApi. You may get a firewall warning when you run the cmdlet; you have to allow traffic to the port you have selected for this to work. I selected port 11000 as the listening port for my API, however you may choose another port on your computer. Next we create a securestring containing our password "bar". Then we create a PSCredential object and pipe that to our helper function Get-AuthorizationHeader. To test the endpoint, we run the Invoke-RestMethod cmdlet and use the key just created. If everything works, you should have something like this in your window:



Those are the 3 files I have in my c:\temp\api folder.

That is it for part 1. In part 2 we will start to create a small PowerShell module and make it even more awesome. Stay tuned!

Cheers

Tore




This year's Microsoft Ignite conference was all about transforming your business with technology. Here is a techy summary for business minds.


Going forward, IT-Pros must prepare to both answer tricky business questions and leverage new tools to meet business demands. I imagine questions like these:
 
  • What are the needs of our business?
  • How can we empower our users to apply the cloud to gain competitive advantages?
  • How can we innovate with greater agility and optimize our IT resources?
  • How can we migrate from the traditional model where IT is just a cost-center, to a lean/mean machine where IT is the engine that powers our business strategy with increased earnings?


A model of the traditional business case

We live in a traditional world with traditional problems. Simplified, a business consists of a few silos:
  • Internal users
  • Your customers
  • Your suppliers and partners
  • The remainder of the universe

All of these are connected directly and indirectly through processes, some of them manual and some perhaps automated. The job of the IT department is to deliver services, preferably in the most cost-effective way possible. Generally, if you change a process through a tool or automation (PowerShell) and you save time or cost, you become the hero. Cost and time savings are always welcome, however the possible impact is far greater when IT is driving your revenue, like in the new model.



The new model for IT

In the new world, everything is about processes, data and applications. In other words, algorithms. Everything is moving and changing at a higher speed than we have ever experienced before. Silos probably still exist, however they are interconnected and data-aware. Your CRM application will have access to and understand other applications and their data structures. It will empower your employees and provide you with just-in-time insights. With the new PowerApps and Flow applications, which implement the CDM (Common Data Model), you have this available today as a preview service. Throw Azure Functions into the picture, and you have a pretty robust and extendable model which is highly customizable and scalable.

In addition, Azure has implemented predictive analytics and machine learning (ML) in the different APIs, like Storage, Azure SQL, Hadoop etc. They are enabling ML for the masses by implementing it across their datacenters and in the Azure model. Your developer is not responsible for implementing intelligence in your application; he consumes predictive data from the Azure machine learning API, possibly through the integration with the Storage API. You no longer consider IT a cost center, but a business enabler that helps you increase revenue by applying analysis of big data through algorithms that are constantly updated to provide perfect information just in time. Theoretically possible, however immensely difficult to implement in practice if you are not in Azure.



What do you need?



Speed and agility: If you have a clear understanding of your needs, your market and your competitors, why not move as fast and agilely as you can? If you can change faster than your competitors, you have an advantage and a head start. Let me illustrate with an example: you have probably heard about robot trading in the stock market? The robots move very fast because the first person/robot that receives and understands specific market information is the winning party and walks away with the profits. In our business case, it is the same thing. Rapid changes to your algorithms and IT systems, so that you understand the business and receive correct information just in time, are essential to becoming the leader and increasing profits.

Scale: Your IT system needs to be able to scale, up and down. You should not have to worry about it, as the cloud does this for you within the limits you have defined. The cloud empowers businesses of all sizes to use scaling technology that previously was the privilege of large enterprises with expensive dedicated appliances. Committing to services and applications that handle scaling is key in the new world. Relying on old legacy applications and services will prevent you from becoming a new force in your market. Startups in your market will become your new IT system performance benchmark, and they probably do not consider legacy systems a match for their agile needs.

Knowledge – close the gap: The adoption of cloud resources and the hybrid cloud is just the beginning of the disruptive change that is here. Hybrid cloud is just a stepping stone towards the connected cloud with unlimited resources at your fingertips. That does not imply that private clouds will not exist; they just need to be connected to the public cloud and empower it by bringing some added value. If a private cloud is not connected, it will be a relic and an edge case for very special circumstances. In this scenario, knowledge will be important. New features and services are launched on an almost weekly basis. Products are migrating from private preview, to public preview and finally to general availability in a matter of months. If you do not take advantage, someone else will, perhaps your competitors.

New people and Organization 2.0: Best case scenario, you need a huge amount of training and designing. If ordering a new web server or virtual machine takes longer than the time usually needed to create/deploy it automatically, trust me, you have to do something. Your organization is already changing, perhaps you just have not noticed it yet. Ever heard about Shadow IT, the evil from within? If it is not knocking on your door, it is because it is already inside. Shadow IT is a real problem that you need to take seriously. In the emerging world, people want things yesterday, like always. The problem is that if you do not deliver, someone else can, and asking for forgiveness beats asking for permission 9 out of 10 times, especially if it yielded a positive result. Rules, policies and guidelines are nice, however immediate results are king.

DevOps is a "must": The new world relies on DevOps. DevOps is a merger between developers and IT-Pros where you bring the knowledge of both parties together and apply that knowledge to your business and culture in a series of new processes. DevOps is not automation; however, automation is a key part of DevOps.

Security: You do know that hackers target IT-Pros because they normally have access to everything? The tools to handle this are available and have been for quite some time now. Microsoft Identity Manager comes with PAM (Privileged Access Management), which audits privileged access with time constraints. When your privileged access token expires, your access is revoked. The PowerShell team has created a toolkit called Just Enough Administration (JEA) which is very similar to the Identity Manager solution. Both solutions should be designed with a "break the glass" option for that time when you really don't care about the security, but need to fix the issue. If you break the glass, all kinds of things happen and you should probably expect to face some sort of hearing where you have to justify the action, which is a good thing.

With Windows Server 2016 a new Hyper-V feature was launched giving us Shielded VMs. With shielded VMs, the tenant of a shared resource owns the VM completely. The entity responsible for the platform it is running on has the ability to manage it to a certain degree (like start, stop and make a backup). The backup of a shielded VM is encrypted, if you were wondering.

Last but not least, security starts at the operating system level. In general, reducing the attack surface is regarded as the first line of defense. Windows Server 2016 Nano is the new operating system for the cloud and will change the way you work and handle datacenter workloads. Nano Server has a tiny footprint and a small attack surface and is blazingly fast, which makes it a perfect match for a fast-moving and agile business.

Help – private cloud or hybrid cloud: Even with a new organization and knowledge, it is highly likely that you will need some consultancy. According to Gartner, 95% of all attempts to create a private cloud fail or fail to yield the expected outcome. Building and implementing a private cloud is very hard, and you should be very confident in your organization's abilities before you embark on such a journey. Microsoft is the only public cloud provider that will provide you with a key-ready solution to run your hybrid cloud. If you have not heard about Microsoft AzureStack, you should probably read up on it. Basically it is Azure wrapped up in a hyper-converged, ready-to-deploy solution for your datacenter, delivered by OEM vendors like Dell, Lenovo, HP et al. New features initiated in Azure will most likely migrate to AzureStack, ready for usage in your hybrid cloud.

AzureStack is targeted for release some time mid 2017 or later that year. That is almost a year away. The good thing is that AzureStack is based upon Azure. It has the same underlying technology that powers Azure like the portal and the Azure Resource Manager (ARM). Microsoft is delivering a consistent experience across the public and hybrid cloud with the ARM technology. To prepare yourself for AzureStack, you should invest time and effort into learning Azure and that knowledge will empower you if you decide to implement AzureStack next year.




All in - or not

Do you need to go all in on the private cloud, or should you just integrate with the public cloud? It depends on your organization and your business needs. One thing is for certain: you probably have to do something. Implementing your own version of ready-to-consume public cloud features in your own private datacenter is not an option you should consider. It would require a tremendous effort, tie down your resources and, in effect, make you static. You need to rub DevOps and business strategy into your business and culture. There are some really smart people out there that can help you with that, and like everything else, it is an ongoing process that requires your constant attention.

The change is here. How will you empower your organization and become the new star? I am happy to discuss opportunities if you reach out by sending me an email.

Cheers

Tore




PowerShell is getting increasing attention and gaining followers each day. That is a good thing in my book. I saw a tweet about Citrix OctoBlu automation where Dave Brett (@dbretty) was using a PowerShell script (full post here) to power VMs on and off and save money. I reached out to him and asked if he would like a little help with his PowerShell script. To my delight, he happily accepted, and this post is about how I transformed his scripts to take advantage of the full power of The Shell. A fair warning is in order, since I have never used or touched an OctoBlu solution.


Starting scripts


(shutdownScript.ps1)



(StartupScript.ps1)



What we would like to change

First off, a PowerShell function should do one thing and do it well. My first goal was to split the script into two parts: one function that handles both the startup and the shutdown of the VM guests, with the notification handled separately. Secondly, I wanted to move the mail notification out of the function and either put it in a separate function or use the built-in cmdlet Send-MailMessage, which has been available since PowerShell version 3.0. Nothing wrong with using the .NET class, however I like to use cmdlets if they provide similar functionality.

Secondly I changed the function to an advanced function to leverage WhatIf and all the streams (debug, verbose, information etc). I also added some Write-Verbose statements. The difference between a regular function and an advanced function can be as simple as adding [cmdletbinding()] to your function. If you do, you have to use a Param() section to define your parameters.

Third I added parameters to the function. From the scripts I decided to create the following parameters:

  • Credential as [PScredential]
  • XenServerUrl as [string]
  • VMname as [string[]]
  • Shutdown as [switch]

Fourth, I added Begin, Process and End blocks to enable pipeline input for the VMname parameter, and to take advantage of configuring requirements like Import-Module and Connect-XenServer in the Begin block.

Fifth, I added an output object to the function, in which I output the VMname and the action taken on the VM (startup or shutdown). The reason for that becomes clear when we start to set up notification.

Those are the 5 big changes I have made to the initial scripts. Other than that I added some personal features related to the use of Write-Verbose and other minor stuff.


How to handle credentials

Every time you add a parameter called username or password to your function, you should stop and think. You should most likely use a PSCredential object instead. So how do you access those credentials at runtime? This script needs credentials, and you cannot prompt the OctoBlu automation engine to provide them. Perhaps OctoBlu has a credential store, however I do not know.

A secure and easy solution to this problem is to use DPAPI, the built-in data protection API. The same logic can be applied to any service or automation solution that needs specific credentials to execute your scripts, including scheduled tasks. We will leverage three cmdlets to accomplish this:

  • Get-Credential
  • Export-CliXml
  • Import-CliXml


First you need to start a PowerShell host as the user that needs to use your credentials. Then we run these commands:
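Something along these lines (the path is a placeholder):

    # Run once in a PowerShell host started as the account that will use the credentials
    Get-Credential -Message 'XenServer credentials' | Export-Clixml -Path 'C:\scripts\XenCred.xml'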




This will create a PSCredential object, and Export-CliXml will protect the password with DPAPI when it creates the XenCred.xml file. That file can only be decrypted with Import-CliXml running under the account it was created with. So when you need to access those credentials you run:


(ImportCred.ps1)
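Which boils down to a one-liner like this (path as above):

    # Read the credentials back; only the account that exported the file can decrypt the password
    $xenCredential = Import-Clixml -Path 'C:\scripts\XenCred.xml'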


The updated script


(Set-LabPowerState.ps1)
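Structurally, the function follows the five changes described earlier; a skeleton of that shape, with the XenServer-specific calls left as placeholder comments, looks roughly like this:

    # Structural sketch of Set-LabPowerState; XenServer-specific calls are placeholders
    function Set-LabPowerState {
        [CmdletBinding(SupportsShouldProcess)]
        param (
            [Parameter(Mandatory)]
            [pscredential]$Credential,

            [Parameter(Mandatory)]
            [string]$XenServerUrl,

            [Parameter(Mandatory, ValueFromPipeline)]
            [string[]]$VMname,

            [switch]$Shutdown
        )

        begin {
            # Fail fast if the prerequisites are missing (see the ErrorAction note further down)
            Import-Module -Name XenServerPSModule -ErrorAction Stop
            # Connect-XenServer with $XenServerUrl and $Credential goes here, also with -ErrorAction Stop
        }

        process {
            foreach ($vm in $VMname) {
                $action = if ($Shutdown) { 'Shutdown' } else { 'Startup' }

                if ($PSCmdlet.ShouldProcess($vm, $action)) {
                    Write-Verbose -Message "Performing $action on $vm"
                    # ... start or shut down the VM with the XenServerPSModule cmdlets here ...

                    # Output an object describing what was done; used later for the email notification
                    [pscustomobject]@{
                        VMname = $vm
                        Action = $action
                    }
                }
            }
        }

        end {
            # Disconnect/clean up the XenServer session here if needed
        }
    }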




The Shell Thing




(Screenshot of OctoBlu, image by Dave Brett)


Dave Brett uses the profiles.ps1 script to make functions available in OctoBlu. That is fine, however it makes it hard for people that don't know PowerShell to figure out where the function (Lab-Shutdown) comes from. I would suggest adding something like this in the script box:


(TheScriptThing.ps1)


This is just a suggestion, which in my opinion makes it easier to follow what is happening. Since the VMName parameter of Set-LabPowerState takes an array of strings, we could take the content of the file holding the names of the VMs and use that directly. I decided to use a foreach loop for readability reasons.

I probably need to say something about a technique called splatting in PowerShell. Have a look at this line:

Set-LabPowerState @setLabPower -VMname $vm

A few lines up, you can see that I create a variable, $SetLabPower, which is a hashtable. The keys in the hashtable match the names of the parameters of the function Set-LabPowerState. This makes it easier to read when you call functions or cmdlets that have many parameters. We can then provide those key/value pairs to the function by using a @ in front of the variable name.
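For illustration, the splatting pattern looks like this (the values are placeholders):

    # Sketch of the splatting hashtable; keys match the parameter names of Set-LabPowerState
    $setLabPower = @{
        Credential   = Import-Clixml -Path 'C:\scripts\XenCred.xml'
        XenServerUrl = 'http://xenserver01'   # placeholder URL
        Shutdown     = $true
    }

    Set-LabPowerState @setLabPower -VMname $vm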

The other thing to note is that I am using dot-sourcing to make the Set-LabPowerState function available in the Script Thing session. I am assuming that the content of my new function is saved in the c:\scripts\Set-LabPowerState.ps1 file.

Since my function outputs an object for each VM it processes, we can leverage that in the email notification setting and provide feedback on the VMs we have messed with. The output for the foreach loop is saved in the $results object. We convert this object to a string representation with the Out-String cmdlet and use that string object as the body of the email.


A note about ErrorAction

Since this script needs access to the XenServerPSModule module and needs to connect to a XenServer, I am using -ErrorAction Stop on the Import-Module and Connect-XenServer statements. This will prevent the script from continuing if either prerequisite is not met. In addition, the user is presented with a nice message explaining what the issue is.


Benefits of the new script

  1. We have a function that does a single task even if it can start and shutdown VMs.
  2. The function accepts parameters so we can reuse it later
  3. The function is discoverable by the PowerShell help engine since we have added help in the function
  4. The automation task in OctoBlu is easier to understand. Think of the next guy
  5. We can execute the function without actually making changes since it is an advanced function and we have implemented ShouldProcess (WhatIf)
  6. The function outputs an object which we can reuse in the email notification scenario

So the only thing that is needed is for someone to test my improved solution on an OctoBlu server. I have no idea if it works or if you think this is a better solution. I think it is.
Cheers
Tore
Currently I am working on a big new module. In this module, I need to persist data to disk and reprocess it at some point, even if the module/PowerShell session has been closed. I needed to serialize objects and save them to disk, and it needed to be very efficient to support a high volume of objects. Hence I decided to turn this serializer into a module called HashData.



Other Serializing methods

In PowerShell we have several possibilities to serialize objects. There are two cmdlets you can use which are built in:
  • Export-CliXml
  • ConvertTo-JSON

Both are excellent options if you do not care about the size of the file. In my case I needed something lean and mean in terms of the on-disk size of the serialized object. Let's do some tests to compare the different types:


(Hashdata.Object.ps1)

You might be curious why I do not use the Export-CliXml cmdlet and instead use the [System.Management.Automation.PSSerializer]::Serialize static method. The static method generates the same XML, however we do not need to read back the content of the file the cmdlet creates.
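A sketch of such a comparison (how ConvertTo-HashString is invoked here is an assumption):

    # Sketch of the size comparison between XML, JSON and HashData serialization
    $object = [pscustomobject]@{
        Name    = 'HashData'
        Created = Get-Date
        Count   = 42
    }

    $xml  = [System.Management.Automation.PSSerializer]::Serialize($object)
    $json = $object | ConvertTo-Json
    $hash = $object | ConvertTo-HashString   # from the HashData module; usage is an assumption

    $xml.Length
    $json.Length
    $hash.Length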

If we compare the length of the string we get this:



As you can see, the XML serialization is very bloated with metadata, however the JSON serialization is much better. The winner is the HashData module with a 30% smaller size compared to a JSON string.


HashData module

Currently the module implements these cmdlets:

  • Assert-ScriptString
  • ConvertTo-HashString
  • ConvertTo-Hashtable
  • Export-HashData    
  • Import-HashData    
  • New-Date 

Like for Import-CliXml and Export-CliXml, the logic for serialization and deserialization is implemented in Import-HashData and Export-HashData. I chose to also include and export the helper functions ConvertTo-Hashtable and ConvertTo-HashString from the module; those could be useful in other scenarios as well. The New-Date function is probably the smallest function I have ever published. Its purpose is to convert datetime values when deserializing objects.

Let's inspect the object we created above and look at its string representation:

(HashTextObject.ps1)

As you can see, the datetime object is converted to a [long] ticks value, which the New-Date function converts back to a datetime object on deserialization.


Currently implemented property-types

In this version, your object may have properties of the following type:


  • String
  • Integer
  • Boolean
  • Double
  • DateTime
  • Array of String
  • Array of Integers


Currently supported and tested object depth is 1. That might change in the future. You may pipe or supply an array of PSCustomObject to the Export-HashData function.

I have deliberately chosen not to convert the objects from Import-Hashdata to PSCustomObject in this release. Depending on feedback and the need, I will consider adding this at a later stage.


Security

The Assert-ScriptString function is a security boundary and is implemented and used in the Import-HashData function. The reason is that when you serialize an object as a hashtable string, you are in essence generating a script file, which in this instance will behave like a scriptblock. Since importing means invoking a scriptblock, Assert-ScriptString will make sure nothing evil will ever execute. The only function allowed in the serialized object, currently, is the New-Date function.

The Import-HashData function has a switch parameter (UnsafeMode) that lets you override this security feature. Use it with care.


PowershellGallery and GitHub

The module is published to the PowerShell Gallery at https://www.powershellgallery.com/packages/hashdata, and here is the link to the GitHub repo: https://github.com/torgro/HashData.

Please reach out to me on twitter or leave a comment. I love feedback both good and bad.


Cheers


Tore
I needed a project for my Xmas holiday, and I needed something remotely work related. Thus the dubious PoshARM PowerShell module was born and brought to life during my Xmas holiday. Simply put, it is a module that lets you build – for now – simple Azure Resource Manager (ARM) templates with PowerShell.

The module can also import templates from a file or from the clipboard/string. Your partial or ready-made template can be exported as a PowerShell script. This blog post will walk you through how to use it and the features that are currently implemented.



Update 08.02.2017:

The module is now published to the PowerShell Gallery (https://www.powershellgallery.com/packages/posharm). It is still a beta version, however test coverage has increased and some bugs have been squashed during the testing. Help is also present, however somewhat lacking here and there.

Update 18.01.2017:

The module is now on GitHub. Here is the link to the repo (PoshARM on GitHub).



What is an ARM template?
It is a text file, or more correctly a JSON text file. Here is a sample template which is empty:



The ARM template is an input to the Azure Resource Manager, which is responsible for deploying your resource definition (your ARM template) onto an Azure subscription. There are multiple ways you can make or build your template:


  • Any pure text editor (Notepad, Notepad++)
  • Visual Studio
  • Visual Studio Code
  • PoshARM (this module)

To summarize an ARM template consists of these main building blocks:

  • Parameters
  • Variables
  • Resources
  • Outputs

In addition you should also have a metadata.json file associated with your template. You can find the complete Microsoft documentation of an ARM template on this link: Authoring ARM-templates


Why PoshARM?
Good question. In my experience this will probably not be the primary way of creating an ARM template for the professionals. For them it will probably be quicker to manually copy/paste and edit the template in a text editor or in Visual Studio. Trouble is, when your template expands it can get quite big. In addition, I have yet to say hello to any IT-pro (with very few exceptions) that embraces and understands big JSON files, much less IT-pros that build their own ARM templates. If only a single person finds it useful, or any part of this module is useful, I will be happy.



Module status
This is a public alpha preview. There are bugs in the module and it is not feature complete in any way. Currently I have Pester coverage for most of the cmdlets, however the current ARM-template test file just creates a simple VM in Azure, and it contains 6 resources, some parameters and variables. As always, help is missing everywhere, and this is the reason I have not published it to the PowerShell Gallery yet.

There is currently no cmdlet for working with the template outputs property. It is handled and imported if you use the Import-ARMtemplate cmdlet, however it will be missing if you export the template.



ARM Variables
To interact with variables we have these cmdlets:

  • Get-ARMvariable
  • New-ARMvariable
  • Add-ARMvariable
  • Get-ARMvariableScript
  • Set-ARMvariable

Creating a new variable is straightforward, and we can pipe the output to Add-ARMvariable to add it to the template:
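For example (the parameter names are assumptions based on the description):

    # Hypothetical usage; parameter names are assumptions
    New-ARMvariable -Name 'storageAccountName' -Value 'armstorage01' | Add-ARMvariable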




The Set-ARMvariable and Get-ARMvariable cmdlets implement a dynamic parameter for the Name of the variable. This makes it impossible to set or get the value of a variable if it does not exist:




ARM Parameters
A parameter has many more properties than a variable, however you need to specify at least the Name and the Type of the parameter. These are the cmdlets we have:


  • Get-ARMparameter
  • Get-ARMparameterScript
  • New-ARMparameter
  • Add-ARMparameter
  • Set-ARMparameter

Creating a parameter for adminUserName can be as simple as this:
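For example (again, the parameter names are assumptions):

    # Hypothetical usage; parameter names are assumptions
    New-ARMparameter -Name 'adminUserName' -Type 'string' | Add-ARMparameter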




As with the variable cmdlets, we have a dynamic parameter for the name both for Get-ARMparameter and Set-ARMparameter.



ARM Resources
This is where it gets rather complicated. The resources property of the ARM template expects an array of resources, which in turn can have nested resources, which again can have nested resources. As you would expect, we have a few cmdlets to work with resources as well:


  • Get-ARMresourceList
  • Update-ARMresourceList
  • Get-ARMresourceScript
  • New-ARMresource
  • Add-ARMresource

Get-ARMresourceList provides the dynamic resource type parameter for New-ARMresource. The Update-ARMresourceList cmdlet is used to update the cached version of the resource providers that are available in Azure. Currently the cached resource list is saved in the module path (.\Data\AllResources.json), however it should probably be moved to AppData.

Creating a new resource is straightforward. Currently it does not support lookup of variables and parameters, however that feature could be added later. Here is an example that creates a new Storage Account on Azure:


The New-ARMresource cmdlet implements a Dynamic parameter named Type. The value for this parameter is generated by the Get-ARMresourceList command. 



ARM template metadata
Each template should have some metadata that helps to identify the template. There is a Set-ARMmetadata cmdlet that will create the metadata.json file for you. Here is an example metadata.json file:




Importing existing ARM templates
On GitHub you can find loads of quickstart templates that you can modify and update. It would be pretty useless if this module did not let you import these templates and work with them. Import-ARMtemplate will import a template from the clipboard/string or from a file on your computer. Here is how you can use it:



ARM template
For working with ARM templates, we have the following cmdlets:


  • Get-ARMtemplate
  • Get-ARMtemplateScript
  • New-ARMtemplate

The New-ARMtemplate cmdlet will create a new empty ARM template in the current Powershell session. Currently it will overwrite the current template if you have started creating one. This will change and will require you to specify the Force parameter if a template exists.

Get-ARMtemplate executed without any parameters will return the template, which is stored in a module variable called $Script:Template. It also has two switch parameters:


  • Get-ARMtemplate -AsJSON
  • Get-ARMtemplate -AsHashTableString

The hashtable string version is easier on the eye compared to the JSON version, however that depends on your JSON experience level and your hashtable fondness level.



Helper functions
There are two helper functions available in the module. Both of them are used heavily in the Script cmdlets, which we will talk about next.


ConvertTo-Hash
If you have worked with PowerShell, it should be pretty simple to understand what this cmdlet does. It converts an InputObject to a hashtable; actually, it outputs an ordered hashtable. It will chew through the InputObject and create an ordered hashtable even for nested objects and arrays. Let's take it for a spin:






Out-HashString
Give this cmdlet a hashtable or an ordered hashtable, and it will output a text version of it that you can paste into a PowerShell host and execute. Let's use the $fileObject hashtable and see if we can get back the text representation of the object:



Yes, there it is, with proper indentation and everything.




Get-ARM*Script
You may have noticed that there is a cmdlet for each property following the Get-ARM*Script naming syntax. The purpose of those cmdlets is to generate the PowerShell script for each property in the template. Here is how you use it:

In the example we have created 2 variables, a parameter and a resource. These have been added to our template as we have moved along. Now we introduce the Get-ARMtemplateScript cmdlet which will give you the template as a script. Here are the commands we have executed:



Now we are going to run Get-ARMtemplateScript and see what we get back:



There we have it. We just created an ARM template with PowerShell and converted the template back to a PowerShell script. This also works with imported templates, which enables you to copy snippets of code to create templates. The observant reader may spot the bug in the screenshot above: the SKU key is "System.Collections.Hashtable", which is not correct. Did I mention that it is not ready yet? Well, it is not, but it is almost working.



Planned features

Depending on the reception of the module, I have planned some enhancements for the module:


  • Add help
  • Improve Pester coverage
  • Add cmdlets for creating outputs
  • Add support for template functions and keywords ([variables()], [parameters()], [concat()], [resourceId()] etc)
  • Template linking

Please contact me if you have other suggestions or ideas. I cannot think of everything.



Final thoughts


There is a very small amount of work left to make this module function at the current level. Please leave feedback here on my blog or reach out to me on Twitter (@ToreGroneng). The module will be published on PowerShellGallery.com, and the link to the repo is here (link to PoshARM).

Cheers

Tore


From time to time I find myself needing a notification tool. Normally a simple message box will suffice, however the trouble with that is that it blocks until someone clicks the OK button. I have seen workarounds that use wscript and other things, however that is just meh.

There is a function in my general purpose repo on BitBucket called Show-Message. Here is the function:



The “good” thing about this is that it also works on Windows Server 2016 with the GUI experience, however who is using a GUI on a server nowadays? Everybody should be using Nano or Server Core, right?

In addition, the function will fall back to a regular good old MessageBox if the Windows.UI.Notification namespace is not available. Please note that in those scenarios, the function will block execution until the OK button is clicked.

Here is what a notification looks like:



You can also control for how long the notification should be shown to the user with the DisplayDuration parameter.

If you simply want to display a regular MessageBox, just run Show-Message -Message "Test message" and it will show you a message box:



That is it.

Tore

After spending last week at the IT Dev Connections conference, I thought I should share some insight.

This was the first time I attended the conference, and I must admit it was a nice substitute for Microsoft Ignite, which I was not able to attend. Key takeaways from the sessions and content presented are:


  • Automation and integration – do more work with less effort
  • Hybrid cloud is here to stay
  • Azure is removing dependencies (System Center) ASR
  • OnPrem presence is still substantial
  • Containers – Docker and Docker for Windows
  • IT Dev Connections – A great conference




IT Dev Connections
First off, this was held in a great place (Las Vegas). The venue (Aria) was perfect, with nice auditoriums (not too big) and a very good internet connection (Aruba rules). By my estimate there were around 1,500 participants (I do not have an exact figure). Compared to Ignite, which is around 20,000, it is more personal than those big conferences. They also managed to mix the agenda with a nice blend of Cloud/Azure/VMware, Office 365 and pure developer content.
They also created a smartphone app where you could build your own schedule and give your feedback to the sessions you attended! They even included a map of the conference area. I loved the app!!
My only complaint was that sessions I wanted to attend were scheduled to run at the same time, which is inevitable for such a conference. The only way to prevent this is to repeat sessions, which I understand can be a challenge. Anyway, all the sessions were recorded and can be watched online if you bought the premium (executive insight) package.


Automation and Integration



Powershell and Azure Automation
In full disclosure, I like Powershell and the things you can do with it, however it is my unbiased and objective opinion that you learn and/or ramp up your skills on tools/things like:


  • Powershell and Azure Automation
  • Source control (GitHub, BitBucket or Visual Studio Online (VSO))
  • JSON
  • Windows Powershell Desired State Configuration (DSC)
  • Unit Testing/Test-Driven Development (TDD)
  • Any Cloud consumption (Azure, AWS, Google, Rackspace etc)


Some person I spoke to (sorry I do not remember his name) was amazed that almost every session he attended spoke of Powershell and integration with some product/cloud service. I think that is a perfect summary of where we are going.

When Microsoft released Windows 10, they included a preview of production-ready and supported content in Windows Management Framework 5 (WMF5). It includes a whopping list of 70-ish modules for your pleasure. Also worth noting is that they included the Pester unit test module, which is a module created by the community. Version 5 of the Management Framework is still in preview and is expected to reach production with the release of Windows Server 2016 later this year or early next year.

The day before the conference started, Microsoft announced that Azure Automation now supports pure/native PowerShell. I feel for the speakers at the conference, because even if this had been an announced feature, the exact timing was not known and they had to change their presentations at the last minute to be up to speed. Kudos to the speakers for being able to implement the changes (Trevor Sullivan, David O'Brien and Aleksandar Nikolić and other people too).

Native PowerShell support in Azure Automation is a small but very important step to enable the world to embrace automation in the cloud. Combine that with the fact that it supports on-prem worker nodes (or runbook/PowerShell runspace hosts if you like), and you have a nice package that should give you a nice ROI on your current PowerShell skills, without having to worry about the annoyances in PowerShell workflows. Furthermore, Azure Automation now supports source control like GitHub, which should make you feel warm and fuzzy.



Unit testing

Test-Driven Development process

Yep, it is coming to the IT-Pros as well. Traditionally this has been something software developers have cherished or ignored, depending on their company policy and personal preference. As I mentioned above, Microsoft now includes a unit test framework for PowerShell in their supported version of Windows 10. Needless to say, this module is also present in Windows Server 2016. This begs the question: why?


Apparently, Microsoft thinks that you should unit test your PowerShell scripts. If your PowerShell script makes changes to your production infrastructure, that makes perfect sense. Just saying "I have tested this, it works" does not cut it any more. Prove it with unit tests and you have confirmed your statement. Which person would you hire to do some PowerShell work that affects your production platform: the guy that says it works, or the guy with the unit tests that prove it works?


Who is JSON?

JSON is a way of representing data structure in a way that is easy to consume (my definition not the official one). It became a standard in javascript and has almost completely replaced XML in asynchronous browser/server communication. It is also the data format the Azure Resource Manager (ARM) understands and consumes in its quest to provide you with cloud resources.

The cloud community has created a nice “Quick starter JSON templates” repository for you on GitHub. In there you will find ready-made templates that will configure the cloud resource for you. They are also a great starting point if you want to create your own templates to spin up or scale resources you will have running on Azure.


(Powershell can convert TO/from JSON)
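For example:

    # PowerShell's built-in JSON cmdlets
    $json   = [pscustomobject]@{ Name = 'webserver01'; Size = 'Standard_A1' } | ConvertTo-Json
    $object = $json | ConvertFrom-Json
    $object.Name   # -> webserver01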



Source control – Why should you use repositories


Source control tracks changes made to files/content, tags the change with the person who made it and commits it to the active branch (Master/Development/whatever) of your repository. I have been using source control for my PowerShell scripts since 2012 and it has saved my bacon a couple of times. It is a perfect match for agile and quickly changing environments: it gives you control of the source and at the same time documents the changes that have been made.

The screenshot above is from Atlassian's SourceTree application, which runs on Windows and Mac. GitHub has an application that is called GitHub. You can use both to connect to BitBucket or GitHub repositories.

The most popular source control systems out there, and the ones that I have used, are:


  • BitBucket
  • GitHub
  • Visual Studio Online(VSO)


They have slightly different pricing strategies, so use the one that fits your needs. BitBucket is free for up to 5 users, and GitHub is free if you only use publicly available repositories.

Writing scripts, JSON template files or unit tests is like writing a program. The process is very similar. You have a "Development/Test" branch that is your work in progress and a "Production/Master" branch that holds production-quality code/content that should run or is running in your environment. As your work progresses, things flow from one branch to another, as they should; however, remember:

"Before you move a feature/fix/patch from the Development branch into Master, make sure that the unit tests are up to date to reflect the changes and that they pass."

This is where it all comes together, and you might understand why Microsoft ships a unit test module for your pleasure in their OS. It just makes perfect sense in a DevOps world to prevent unwanted downtime and to have a quick way of identifying the changes that broke your production system or application.



Windows Powershell Desired State Configuration (DSC)

Linux has been doing configuration management for years with software like Chef and Puppet. People think that DSC is a PowerShell feature, however it is a Windows feature first implemented in PowerShell. It is built upon WSMAN, CIM and OMI (WMI for Linux). DSC now also supports Linux, and Chef has integrated their management tools to support DSC on Windows. Currently there are over 100 DSC resources that you can use to build your infrastructure and make it idempotent and scalable. You can even create your own DSC resources using PowerShell scripts or C# code if you prefer.
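To make the idea concrete, a minimal configuration using two built-in resources looks roughly like this:

    # A minimal DSC configuration using the built-in WindowsFeature and File resources
    Configuration WebServerBaseline {
        Node 'localhost' {
            WindowsFeature IIS {
                Name   = 'Web-Server'
                Ensure = 'Present'
            }

            File WebContent {
                DestinationPath = 'C:\inetpub\wwwroot\index.html'
                Contents        = '<h1>Configured by DSC</h1>'
                Ensure          = 'Present'
            }
        }
    }

    # Compile the configuration into a MOF file and apply it
    WebServerBaseline -OutputPath 'C:\DSC\WebServerBaseline'
    Start-DscConfiguration -Path 'C:\DSC\WebServerBaseline' -Wait -Verbose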


There is currently a handful of resources built into Windows 10, however the majority of the resources created by the PowerShell team are marked experimental and sadly not supported by Microsoft, yet. Sometime in the future this will change, nevertheless I dare not speculate on when they will be officially supported by Microsoft. That said, I know Stack Exchange (the website) currently relies on DSC for their production infrastructure. So what is your excuse for not implementing this in your dev/test/staging area, if not production?

There are a number of repositories on GitHub that contain resources for DSC. Microsoft's repository for experimental DSC resources is here (not officially supported by Microsoft):

https://github.com/PowerShell/DscResources

PowerShellOrg also has a repo on GitHub with their community resources. There you will find some nice features created by Steven Murawski, who was the lead architect when Stack Exchange implemented DSC in their production environment:

https://github.com/PowerShellOrg


Why should software companies/vendors care about DSC? 

They should care about DSC the same way they care about their customers consuming/buying their software. I cannot remember how many times in the past I spent more time collecting all the dependency software for a particular system than installing the software itself. Not to mention the time-consuming process of configuring software after it has been installed and making sure it does not change unless you want it to.

Would it not be better if you provided a Desired State Configuration for your system that collected all dependencies, scaled the system to the customer's needs and presented a configuration for the base setup of the system? Makes perfect sense to me. I realize that it is a big investment for anyone wanting to provide this for their customers, however think about the competitive edge you will build in the market. I would be very surprised if you did not receive a good ROI after making the right decision.

A fair warning if you plan to embark on the DSC journey: currently the official documentation regarding DSC is, to put it mildly, lacking. I am confident that this also will change and receive a massive update, thanks to the constant requests and references to the missing elements by my friend Trevor Sullivan (you should follow him on twitter @pcgeek86).


My automation insight

If you have a cloud strategy and/or a hybrid cloud strategy, you are doing it wrong if automation and integration are not key elements of your implementation strategy. The cloud is built to scale and is designed to be easily automated. That is why it has several APIs for you to work with (REST, Python, Ruby, .NET and of course PowerShell).

If you want to create a successful career or business going forward, you need the skills/tools I mentioned above. As with any huge shift in technology, you will have early adopters, mainstream adopters and late adopters. A competitive edge and/or success is usually not created by the contenders in the latter category.

In part 2 we will jump into Hybrid Cloud, talk about the missing link (see my previous blog entries about the subject) and take a deep dive into MAS (AzureStack). Is it still missing? Stay tuned!

Cheers

Tore
This is part 2 of a multi-part series around topics discussed in part 1. Next up is the hybrid cloud, how it will impact us and why it is the way of the future.


Hybrid cloud

Microsoft changed their focus a couple of years ago, from being the primary solution provider for enterprise software running in datacenters to a Mobile-first/Cloud-first strategy. Their bet was that the cloud would be more important and that the economic growth would be created in the emerging cloud market. Needless to say, it looks like their bet is paying off tenfold.



Azure

Over the last year there have been a significant number of announcements that connect Azure to your datacenter, thus enabling enterprises to utilize cloud resources with On-Prem datasources or local hybrid runbook workers. It should not come as any surprise that this trend is accelerating in preparation for the next big shift in hybrid technology, which is expected to be released with Windows Server 2016 – Microsoft Azure Stack. Before we go down that rabbit hole, a short history summary of the hybrid cloud efforts by Microsoft.


Azure Pack – The ugly

In my honest opinion, this was a disaster. It was Microsoft's first attempt at building a hybrid cloud using workarounds (Service Provider Foundation, anyone) to enable a multi-tenant-aware solution. It relied heavily on System Center technology and was a beast to configure, set up and troubleshoot. Although there was/is integration with Azure, it relies on the old API, Azure Service Manager, and it is not a consistent experience. Sometimes the early adopters pay the ultimate price, and this was one of those times. Currently no upgrade path from Azure Pack to the new Azure Stack solution has been announced, and I doubt there ever will be one.

That being said, it works and provides added value for the enterprise. On the downside it does not scale like Azure and requires expert knowledge to manage. My advice, if you are considering a Private or Hybrid Cloud, is to wait until Windows Server 2016 is released and have a look at Azure Stack instead.



CPS (Cloud Platform System) – The bad

This is Microsoft's first all-in-a-box cloud solution, powered by Dell hardware. The entire system runs and scales very nicely. When you want more capacity, you buy a new all-in-one box and hook it up to the first box. It was built upon the first attempt at creating a Private Cloud in a “box” running Windows Azure Pack. The initial configuration of CPS is done by a massive single Powershell script, and it was planned and released before the new Azure Resource Manager (ARM) technology hit the ground running.

Why is it bad? Well, because in its current release it is powered by Azure Pack, and it fits in nicely with the Clint Eastwood analogy I lined up. I would be very surprised if it is not bundled with Azure Stack when that is released later this year or early next year. Time will show.

Just in case you were wondering: the price tag for this solution with hardware, software and software assurance would run you something in the region of $2.5 million. That is for the first box. You may get a discount if you buy several boxes at the same time.



MAS (Microsoft Azure Stack) – The good

Fast forward to Microsoft Ignite 2015 and MAS was announced. It is currently in limited preview (the same as for the Windows Server 2016 preview program) and is expected to be released to the market when Windows Server 2016 reaches RTM.

MAS is the software-defined datacenter you can install in your datacenter to create your own private cloud. It is identical to Azure, behaves like Azure in every respect, and it can run in your datacenter, giving you a consistent experience across boundaries. Think about that for a minute and reflect on how this will change your world going forward.

A true Hybrid Cloud will manage and scale your resources using technology built and enabled by the cloud. Resource templates (JSON ARM templates) you create to build services in MAS can, with the flip of a switch, be deployed to Azure instead, and the other way around.
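To make that concrete, here is a rough sketch of how you would push such a template with Powershell and the AzureRM cmdlets; the resource group name, location, template path and the assumption that the same cmdlets also target your MAS endpoint are mine, not gospel:

    # Log in, create a resource group and deploy the same JSON template
    Login-AzureRmAccount                                            # interactive login
    New-AzureRmResourceGroup -Name 'Demo-RG' -Location 'West Europe'
    New-AzureRmResourceGroupDeployment -ResourceGroupName 'Demo-RG' `
                                       -TemplateFile 'C:\Templates\service.json'

Pointing the same deployment at MAS is, in principle, just a matter of logging in against a different endpoint.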


MAS – Overview


This is an image I borrowed from a presentation held by Jeffrey Snover during the Powershell Summit held in Stockholm this year (full video here). MAS does not rely on any System Center components and is built to be a true multi-tenant solution. There will be providers that support the different System Center products, which is probably a good idea.

The MAS structure is strikingly similar to something we all know very well. It contains the conceptual building blocks of an operating system or a server if you like.


MAS - Hardware and Abstraction layer

The hardware layer explains itself. It is the common components that a server is built of, like CPU, storage, network and other components. Above this we have the Abstraction layer, which consists of Plug-n-Play and a driver stack. This layer is there to assist you when you “plug in” new hardware in your datacenter, add more storage and so on. This is also the layer the MAS kernel communicates with.

Big progress has been made in creating a Datacenter Abstraction Layer (DAL, the datacenter equivalent of the Hardware Abstraction Layer (HAL) on Windows) that conforms to standards that hardware vendors implement. These are:


  • System Management Architecture for Server Hardware (SMASH)
  • Common Information Model (CIM, or WMI on earlier versions of Windows)
  • Storage Management Initiative (SMI-S)





The main goal of DAL is to create a unified management of hardware resources. Microsoft has created an open source standard for this, called Open Management Infrastructure (OMI). OMI has been adopted and implemented by Cisco, Arista, HP, Huawei, IBM and different Linux distros. This is why you can run Linux in Azure and why MAS can talk to and configure hardware resources like network, storage and other devices for you.

For Server and Rack Management there will be something called Redfish, which implements an OData endpoint that supports paging, server-side filtering and request headers. There will be Powershell cmdlets you can use to interact with Redfish, however at this time it is uncertain whether they will be ready by Windows Server 2016 RTM.


MAS - Initial System Load

The initial setup of MAS is entirely done and enforced by Desired State Configuration (DSC), not plain Powershell scripts like you might expect. This has a number of implied consequences you might want to reflect on:

  1. If DSC is used in MAS, is Azure also under the hood using “DSC”?
  2. If DSC is used in MAS, would it be fair to say that Microsoft has made a deep commitment into DSC?

The answer to no. 1 is: “I do not know, yet”. For no. 2, it is a big fat YES.

The Azure Resource Manager (ARM) in Azure and MAS bears a striking resemblance to Desired State Configuration:


  • They are both idempotent
  • Both use resource or resource providers
  • They both run in parallel 
  • They are both declarative
  • ARM uses JSON and DSC uses MOF/text files
  • A DSC configuration or a JSON template file can be re-applied several times, and only missing elements or new configuration are applied to the underlying system.


MAS Kernel

You only want secure and trustworthy software to be running here. It is the heart and soul of MAS, and it is protected and run by Microsoft's new Cloud OS – Nano Server. Nano Server is the new scaled-down Windows Server 2016 flavor that is built for the cloud. Its footprint is more than 20 times smaller than Server Core, and it boots in less than 6 seconds.

There have been a number of security enhancements that directly apply to the MAS kernel:


  • Enhanced security logging – Every Powershell command is logged, no exceptions
  • Protected event logging – You can now encrypt your event log with a public key and forward them to a server holding the matching private key that can decrypt the log.
  • Assume breached – This implies that there has been a mindset change at Microsoft. They now assume that the server will be breached, and the security measures/plan are implemented accordingly.
  • Just Enough Admin (xJea) – JEA is about locking down your infrastructure with the JEA toolkit and thus limiting the exposure of privileged access to the core infrastructure/systems. It now also supports a big red panic button for those cases that require emergency access to the core to solve a critical problem that otherwise would have to be approved through appropriate channels.


To show developers that Microsoft is serious about Powershell, they have made some changes to Visual Studio to increase the support for Powershell and included some nice tools for you:


  • Static Powershell code analysis with Script Analyzer
  • Unit testing for Powershell with Pester (see Part 1)
  • Support for classes in Powershell, like in C# (a small sketch follows below)
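
A tiny sketch of the class syntax, purely illustrative; the class name and members are made up:

    class FileItem {
        [string]$Name
        [int]$SizeKB

        FileItem ([string]$Name, [int]$SizeKB) {
            $this.Name   = $Name
            $this.SizeKB = $SizeKB
        }

        [string] ToString () {
            return ('{0} ({1} KB)' -f $this.Name, $this.SizeKB)
        }
    }

    [FileItem]::new('readme.txt', 12).ToString()   # -> "readme.txt (12 KB)"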


MAS - User space

This is where the tenant Portal, Gallery and Resource providers live. Yes, MAS will have a gallery for the services your tenants can consume. This is where the DevOps lifestyle comes into play, like we talked about in Part 1.

In addition, Microsoft has proved it cherishes Linux with the announcement that they will implement OpenSSH in Windows. Furthermore, they have started to port DSC to Linux, spinning off from their OMI commitment in the open source community.


Shadow IT

Everybody has a shadow IT problem. People that say they do not just do not realize it yet. It has become so easy to consume cloud resources that solve line-of-business problems that IT can't or is unable to solve in a timely manner. There could be any number of reasons for this; commonly it is related to legacy requirements, budget constraints or pure resistance towards any change not initiated by IT themselves.

One of the goals of implementing a hybrid/private cloud should be to use the technology to re-enable IT as a strategic tool for management, one that creates competitive advantages that drive economic growth. In my opinion Executive Management has for too long regarded IT as a cost center and not as an instrument they can use to achieve business goals, strategic advancement and financial progression.



Missing automation link

A year and a half ago I wrote a 2-part blog (Part 1 and Part 2) about the missing automation link. Basically it was a rant where I could not understand why DSC was not used more to enable the Hybrid Cloud. Windows Azure Pack just did not feel right, and it turns out I was right. Well, now we have the answer and it is Microsoft Azure Stack. It runs Microsoft Azure, and it will perhaps one day run in your datacenter too.



Will the pure datacenter survive?

For the time being, I think they will, however they will be greatly outnumbered by the hybrid clouds running in conjunction with the cloud and not in spite of the cloud. Currently we are closing in on a Kodak moment. It does not matter if your datacenter is perfect in the eyes of whoever is in charge. If it does not solve the LOB problems in your organization, the cloud will win if it provides the right solution at the right time.



Why should you implement a Hybrid Cloud?

The question is more like: why not? I know it is a bit arrogant, however Microsoft has made a serious commitment to a consistent experience whether you are spinning up resources in the Cloud or in your private Hybrid Cloud. Why would you not be prepared to utilize the elasticity and scalability of the cloud? With the Hybrid Cloud you get the best of both worlds, in addition to most of the innovation Microsoft does in the Cloud.

As Azure merges closer and closer with On-Prem datacenters, it should become obvious that not implementing a hybrid cloud is the wrong path. Even if Azure merges nicely with On-Prem, it will not compare to the integration between Azure and MAS.

Two more important things will accelerate the shift in IT. Containers/Azure Container Service and the new cloud operating system Nano Server will change the world due to their portability and light weight. For the first time I see opportunities for a Cloud Broker that trades computing power in an open market. Computing power or capacity will become a commodity, like pork bellies on the stock exchange.

How do you manage Nano Server and Containers? Glad you asked – with Powershell of course. Do you still think that Powershell is an optional skill going forward?

In part 3 we will talk in more depth about the game changers: Nano Server and Containers.


Cheers

Tore


I needed to dedicate a full blog post to Windows Server 2016 and the way it will impact you going forward. At some point some of these features will apply to you too, as your infrastructure starts to run the new server bits. Here are the highlights from Microsoft Ignite.

> Highlights
  • Installation
  • Development
  • Packaging and deployment
  • Configuration
  • Containers
  • Operation Validation and Pester Testing
  • Operating security

> Installation
Server 2016 comes in three flavors. You have the “Desktop Experience” server, intended for management of the other flavors of 2016 or as a terminal server. Next is Server Core, which is the same full server without the desktop and is headless, intended to be managed from Powershell or from a server running the Desktop Experience. Then there is the new kid on the block, Nano Server. It is the new Cloud OS, born in the cloud and the workhorse for everyone serious about creating modern, lean, super-fast and easy-to-manage applications.

Installation of the Desktop Experience and Server Core is just like the installations you are familiar with. For Nano Server you need to use a new GUI tool that guides you through the process of creating an image, or you can just use Powershell. The GUI tool is currently not in the Evaluation version of Server 2016, however it will be available when it reaches general availability in mid October.
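For the Powershell route, the sketch below uses the NanoServerImageGenerator module that ships on the installation media. Parameter names have changed between preview builds, so treat the paths, computer name and switches as assumptions rather than a recipe:

    # Load the image generator module from the installation media (D:\ here)
    Import-Module 'D:\NanoServer\NanoServerImageGenerator\NanoServerImageGenerator.psm1'

    # Build a VHDX for a Nano Server guest VM
    New-NanoServerImage -MediaPath 'D:\' `
                        -BasePath 'C:\NanoBase' `
                        -TargetPath 'C:\NanoImages\NANO01.vhdx' `
                        -ComputerName 'NANO01' `
                        -DeploymentType Guest -Edition Standard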

Nano-Server
It is really small compared to Server Core, not to mention the full Desktop Experience server. Here are some key metrics for you to think about:









How do you manage Nano Server and/or Server Core?
There are quite a few options for you. Nano Server is headless and only has a very simplistic local GUI, which is text-based. To manage your server, you need to use one of the following:

  1. Install a remote management Gateway and use the Web-GUI in the Azure Portal
  2. Install a Desktop Experience 2016 server and use all your regular tools like:
  • MMC in general
  • Event Viewer MMC
  • Registry
  • Services MMC
  • Server Manager MMC
  • Powershell ISE (remote file editing)
  3. Powershell and Powershell Remoting (see the sketch after this list)
  4. Local text-based GUI (very rough and few settings available)
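
Here is the sketch for option 3; the computer name is made up, and the TrustedHosts step is only needed when the Nano Server is not domain joined:

    # From the management machine (workgroup/non-domain scenarios only)
    Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'NANO01' -Force

    # Interactive remote session to the Nano Server
    Enter-PSSession -ComputerName 'NANO01' -Credential (Get-Credential)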

You can still have the System Center VMM agent and the System Center Operations Manager agent on your Nano Server. Those are packages you will have to install during image creation or add later with Powershell and PackageManagement.

The intended workloads for Nano Server are:

  • Clustering
  • Hyper-V
  • Storage – Scale-Out File Server (SoFS)
  • DNS server
  • IIS (.NET Core and ASP.NET Core)
  • Containers, both Windows Containers and Hyper-V Containers

Nano Server is a first-class Powershell citizen with support for Desired State Configuration and classes in Management Framework 5. Nano Server runs Powershell Core, which is a subset of the full Powershell version you have on Server Core and Desktop Experience servers.


> Development
Nano Server has a full developer experience; Server Core does not. You have support for the Windows SDK, and Visual Studio 2015 can target Nano Server. You even have full remote debugging capabilities from Visual Studio.


> Packaging and Deployment
Nano Server does not support MSI installers. The reason for that is the custom actions that are allowed in MSI. Instead, Microsoft has created a new app installer built upon AppX, called the WSA (Windows Server App) installer. WSA extends the AppX schema and becomes a declarative server installer. You still have support for server-specific extensions in WSA, like NT services, performance counters, WMI providers and ETW events. Of course, WSA does not support custom actions!

Package management architecture:




This might look a little complex, however it is quite simple. You have some core PackageManagement cmdlets, which rely upon Package Management Providers that are responsible for sourcing packages from Package Sources. This is really great because, as the end user, you use the same cmdlets against all package sources (NuGet, Powershell Gallery, Chocolatey etc.). The middleware is handled by the Package Management Providers. So the end user just needs these cmdlets to work with packages:
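The screenshot listing the cmdlets did not survive, but a sketch of the kind of end-user cmdlets I am talking about looks like this (the package name and source are just examples):

    Get-Command -Module PackageManagement               # list everything the module exposes

    Find-Package      -Name 'Pester' -Source PSGallery  # search a package source
    Install-Package   -Name 'Pester' -Source PSGallery  # install from that source
    Get-Package       -Name 'Pester'                    # inspect what is installed
    Uninstall-Package -Name 'Pester'                    # remove it again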




So to install a new package provider, you just use the PackageManagement module:
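The screenshot is missing here as well; something along these lines is what that step typically looks like (the NuGet provider is just an example):

    Find-PackageProvider                          # discover providers you can install
    Install-PackageProvider -Name NuGet -Force    # pull down and register a provider
    Get-PackageProvider                           # verify which providers are loaded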




Here are some of the providers you can install. Notice that you have a separate provider for Nano Server, which you can use to install the VMM/SCOM agents:




> Configuration
Since Nano Server is small and has a cloud-friendly footprint, you will most likely have a lot of them running. To configure them, make sure the configuration does not drift and make it easy to update their configuration, you use something called Desired State Configuration (DSC). This was introduced in WMF 4 and is a declarative way of specifying the configuration of a server or a collection of servers.
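
A minimal sketch of pushing a compiled configuration to a couple of nodes and reading it back; the path and computer names are placeholders:

    # Push the compiled MOF files to the target nodes
    Start-DscConfiguration -Path 'C:\DSC\NanoBaseline' -ComputerName 'NANO01','NANO02' -Wait -Verbose

    # Read the currently applied configuration back from one of the nodes
    Get-DscConfiguration -CimSession (New-CimSession -ComputerName 'NANO01')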

There are tools out there you can use to leverage management of your configuration. Look up Chef, Puppet or even Azure Automation for how to do that. This is a big concept and would require a separate blog post to get into the details. Please also contact me if you have any further questions about DSC.


> Containers
This is also a big topic and something that has been around for ages in the Linux part of the world. Basically it is just another form of virtualization, where the operating system and application are packaged into a single unit that is small and runs super-fast. If you have ever heard about Docker, you have heard about containers. Docker is containers. Docker is supported in the new Windows Server 2016, hence you can run Docker containers on it.
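Assuming you have the Docker engine and client installed on a 2016 host, the workflow from a Powershell prompt is the familiar one from Linux; the image name below is what the early Windows base images were called, so treat it as an assumption:

    docker pull microsoft/nanoserver                   # pull a Windows base image
    docker run -it microsoft/nanoserver powershell     # start a container and drop into Powershell
    docker ps                                          # list running containers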

One of the core concepts of containers is that you may have many of them running on the same host at the same time. This is possible because the containers share the same kernel/operating system.




In Windows we will have two flavors of containers:
  • Windows Containers
  • Hyper-V Containers



So with Hyper-V containers we get isolation with performance, since the containers do not share the kernel but have their own copy of it. This is important for multi-tenant scenarios and for regulatory requirements. Auditors usually do not like systems that have “shared kernel” in the description, someone told me.


> Operation Validation Testing
This is one of my babies and the coolest thing about how we embrace the future. Microsoft has created a small framework on top of the Pester unit testing framework/module shipped with Windows 10 and Windows Server 2016. The concept is very simple and very powerful: create unit tests that verify your infrastructure. These tests can be very simple or extremely detailed. You will have to figure out what you are comfortable with and what suits your environment.

The Pester module enables us to write declarative statements and execute those tests to verify that the infrastructure is operating according to our needs.
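
The screenshots of the test and its output are gone, so here is a minimal sketch of the style of test I am talking about; the server and service names are made up:

    Describe 'SQL server operational checks' {

        It 'SQL01 responds to ping' {
            Test-Connection -ComputerName 'SQL01' -Count 1 -Quiet | Should Be $true
        }

        It 'The SQL Server service is running' {
            (Get-Service -Name 'MSSQLSERVER' -ComputerName 'SQL01').Status | Should Be 'Running'
        }
    }

Invoke-Pester (or the OperationValidation module's Invoke-OperationValidation, if you use that wrapper) will then execute the *.Tests.ps1 files and give you the red/green result.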




When you invoke the test, you will see something like this:




This is something I have been working with for the last 2 years, and I can tell you that it has saved my bacon quite a few times. It has also enabled me to notify my clients about issues with their infrastructure which they were not aware of until I told them. This could be something as simple as a SQL service account that has an expired password or that has been locked out somehow.

I have created a GitHub repo which contains Pester/Operation Validation tests for Identity Manager, VMM and Active Directory, among other things. This is a community repo which accepts pull requests from people who have created tests for other applications and services. Please contact me if you need further information or want to discuss this in detail.


> Operating Security
Just after Snowden shared his knowledge with the world, Microsoft launched a new concept called JEA – Just Enough Administration. It is a new Powershell framework that secures administrative privileges by only issuing admin privileges in a constrained way and for a limited amount of time.
You can find more information about JEA here:

https://github.com/PowerShell/JEA
https://gallery.technet.microsoft.com/Just-Enough-Administration-6b5ad370
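
To give you a flavor of it, here is a rough sketch of how a constrained endpoint is put together in WMF 5; the role name, the allowed cmdlets and the group are all hypothetical:

    # Role capability: what the connecting operator is allowed to do
    # (role capability files normally live in a module's RoleCapabilities folder)
    New-PSRoleCapabilityFile -Path '.\DnsOperator.psrc' `
                             -VisibleCmdlets 'Restart-Service', 'Get-DnsServerZone'

    # Session configuration: run as a temporary virtual account, expose only the role above
    New-PSSessionConfigurationFile -Path '.\DnsOperator.pssc' `
                                   -SessionType RestrictedRemoteServer `
                                   -RunAsVirtualAccount `
                                   -RoleDefinitions @{ 'CONTOSO\DnsOperators' = @{ RoleCapabilities = 'DnsOperator' } }

    # Publish the constrained endpoint on the server
    Register-PSSessionConfiguration -Name 'JEA.DnsOperator' -Path '.\DnsOperator.pssc'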


> Other things
There are a couple of things you should be aware of. First off, if you plan to use Containers in your infrastructure, you must run them on Server Core or Nano Server. Containers are not supported on servers installed with the Desktop Experience. This implies that you should probably consider installing your Hyper-V servers with the Nano Server OS or the Server Core option. Also, all the new cool features like SoFS and Storage Replica with Storage Spaces Direct require the Datacenter licensing option.

Cheers

Tore