Mike F Robbins – Scripting | Automation | Efficiency
Mike F Robbins is a Microsoft MVP on Windows PowerShell and a SAPIEN Technologies MVP. He is a co-author of Windows PowerShell TFM 4th Edition and is a contributing author of a chapter in the PowerShell Deep Dives book.
Recently, one of the companies that I provide support for switched from using ESET to a new antivirus vendor. The problem is that all of their servers had both ESET File Security and the ESET Remote Administrator Agent installed, which needed to be uninstalled before installing the new antivirus agent. I determined that the following commands could be used to uninstall the applications. Running msiexec.exe /? shows the available options. Based on this information, it appears that /x is to uninstall and /qn is for no user input. The uninstall of ESET File Security using the previous commands causes the system to reboot automatically. There appears to be a switch for msiexec.exe to suppress the reboot, but it's not something that I tried since the removal process does indeed require a restart.

I initially wrapped those commands inside of the Invoke-Command cmdlet to remotely remove those two applications, but the problem that I ran into is that the remoting session didn't wait long enough for the uninstall to complete before ending. The solution was to use Get-Process piped to Wait-Process inside of Invoke-Command to allow the uninstall to complete before the remoting session ended. You could use Get-Content to read from a list of server names in a text file or Get-ADComputer to read server names from Active Directory.

You could also query the event logs of those remote servers to verify that the applications were indeed uninstalled. While Get-WinEvent has a ComputerName parameter, it's much more likely to be blocked by a firewall on your network or that the necessary ports won't be open on the server that you're querying. You'll avoid these problems by wrapping it inside of Invoke-Command instead. This also allows all of the remote systems to be queried at once (up to 32 at a time by default) instead of one at a time.
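The general approach can be sketched as follows. This is a hedged illustration, not the exact commands from the article: the product GUID and server-list path are placeholders, and waiting on the msiexec process with Wait-Process is one way to keep the remoting session open until the uninstall finishes.

```powershell
# Sketch only: the product GUID and file path below are placeholders
$Servers = Get-Content -Path C:\tmp\Servers.txt

Invoke-Command -ComputerName $Servers {
    # /x uninstalls the specified product, /qn suppresses all user interaction
    msiexec.exe /x '{11111111-2222-3333-4444-555555555555}' /qn

    # Keep the remoting session open until the uninstall completes
    Get-Process -Name msiexec -ErrorAction SilentlyContinue | Wait-Process
}

# Afterwards, verify the removal by querying the MsiInstaller events remotely
Invoke-Command -ComputerName $Servers {
    Get-WinEvent -FilterHashtable @{LogName = 'Application'; ProviderName = 'MsiInstaller'}
} | Select-Object -Property PSComputerName, TimeCreated, Message
```

Wrapping Get-WinEvent in Invoke-Command also fans the query out to multiple computers at once instead of querying them one at a time.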
Sometimes you need to add more than one parameter set to a function you're creating. If that's not something you're familiar with, it can be a little confusing at first. In the following example, I want to specify either the Name or Module parameter, but not both at the same time. I also want the Path parameter to be available when using either of the parameter sets.

Taking a look at the syntax shows that the function shown in the previous example does indeed have two different parameter sets and that the Path parameter exists in both of them. The only problem is that both the Name and Module parameters are mandatory, and it would be nice to have Name available positionally. Simply specifying Name as being in position zero solves that problem. Notice that "Name" is now enclosed in square brackets when viewing the syntax for the function. This means that it's a positional parameter and that specifying the parameter name is not required as long as its value is specified in the correct position. Keep in mind that you should always use full command and parameter names in any code that you share.

While continuing to work on the parameters for this function, I decided to make the Path parameter available positionally as well as adding pipeline input support for it. I've seen others add those requirements similar to what's shown in the following example. This might initially seem to work, but what actually happens is that the Parameter blocks that reference the Name and Module parameter set names for the Path parameter are ignored because they are effectively blank. This is because another totally separate Parameter block is specified for the Path parameter. Looking at the help for the Path parameter shows that it accepts pipeline input, but looking at the individual parameter sets seems to suggest that it doesn't. It's confusing to say the least.
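A minimal sketch of a function with two parameter sets that share a Path parameter; the function name and parameter types here are illustrative, not the exact code from the article:

```powershell
function Test-MrParameterSet {
    [CmdletBinding(DefaultParameterSetName='Name')]
    param (
        # Mandatory in the Name set only; position zero makes it positional
        [Parameter(ParameterSetName='Name',
                   Mandatory,
                   Position=0)]
        [string[]]$Name,

        # Mandatory in the Module set only
        [Parameter(ParameterSetName='Module',
                   Mandatory)]
        [string[]]$Module,

        # No parameter set specified, so Path exists in both sets
        [string[]]$Path
    )

    $PSCmdlet.ParameterSetName
}
```

Running `Get-Command -Name Test-MrParameterSet -Syntax` shows the two parameter sets, with Path appearing in both.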
There's honestly no reason to specify the individual parameter sets for the Path parameter if all of the options are going to be the same for all of the parameter sets. Removing those two empty parameter declarations above the Path parameter that reference the individual parameter sets clears up the problems. If you want to specify different options for the Path parameter to be used in different parameter sets, then you would need to explicitly specify those options as shown in the following example. To demonstrate this, I've omitted pipeline input by property name when the Module parameter set is used. Now everything looks correct. For more information about using multiple parameter sets in your functions, see the about_Functions_Advanced_Parameters help topic.
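A sketch of fully specified per-set options for Path, omitting pipeline input by property name for the Module set as described above (function and parameter names are illustrative):

```powershell
function Test-MrParameterSet {
    [CmdletBinding(DefaultParameterSetName='Name')]
    param (
        [Parameter(ParameterSetName='Name', Mandatory, Position=0)]
        [string[]]$Name,

        [Parameter(ParameterSetName='Module', Mandatory)]
        [string[]]$Module,

        # One fully specified Parameter block per set: property-name
        # pipeline input is only accepted in the Name parameter set
        [Parameter(ParameterSetName='Name',
                   Position=1,
                   ValueFromPipeline,
                   ValueFromPipelineByPropertyName)]
        [Parameter(ParameterSetName='Module',
                   Position=1,
                   ValueFromPipeline)]
        [string[]]$Path
    )

    process { $PSCmdlet.ParameterSetName }
}
```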
I have a function in my MrToolkit module named New-MrScriptModule that creates the scaffolding for a new PowerShell script module. It creates a PSM1 file and a module manifest (PSD1 file) along with the folder structure for a script module. To reduce the learning curve of Plaster as much as possible, I'm simply going to replace that existing functionality with Plaster in this blog article. Then, moving forward, I'll add additional functionality.

For those of you who aren't familiar with Plaster, per their GitHub readme, "Plaster is a template-based file and project generator written in PowerShell. Its purpose is to streamline the creation of PowerShell module projects, Pester tests, DSC configurations, and more. File generation is performed using crafted templates which allow the user to fill in details and choose from options to get their desired output."

First, start out by installing the Plaster module. Create the initial Plaster template along with the metadata section, which contains data about the manifest itself. You could also run the same command without splatting, although you'll need to make sure the folder structure already exists if you use the following example. That creates an XML file that looks like the one in the following example.

Based on the information from the about_Plaster_CreatingAManifest help file, which can be viewed in PowerShell or in their GitHub repository: Name is mandatory; ID is a GUID that's automatically generated if one isn't provided; Version defaults to 1.0.0 unless otherwise specified; Title defaults to the same value as Name unless specified; and Description, Tags, and Author are all optional. Description will be added, but blank, if not specified. Tags and Author won't be added unless specified. Notice that there's a parameters and content section in the previous example that was created. In this scenario, the parameters section is for adding values that may be different each time a new PowerShell script module is created.
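Those first steps might look like the following sketch. The paths, template name, and author are example values of my own, and note that New-PlasterManifest expects the manifest file to be named plasterManifest.xml:

```powershell
# Install Plaster from the PowerShell Gallery
Install-Module -Name Plaster -Force

# Create the initial template manifest with splatting; all values are examples
$manifestParams = @{
    Path            = "$env:USERPROFILE\Documents\Plaster\ScriptModuleTemplate\plasterManifest.xml"
    TemplateName    = 'ScriptModuleTemplate'
    TemplateVersion = '1.0.0'
    Author          = 'Mike F Robbins'
}
New-PlasterManifest @manifestParams
```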
It's the same concept and no different than adding parameters to a PowerShell function. These are almost the same items that are parameters for my New-MrScriptModule function. I've added a Version parameter for the module version. Path will be specified when creating the script module instead of at this point. Also notice that Author is set to "type=user-fullname". In my opinion, this is better than setting a default because if a module author isn't specified, this information will be retrieved from the user's Git config. This also allows conditional defaults for the author depending on whether or not you're using the "IncludeIf" functionality in your Git config file. IncludeIf was introduced in Git version 2.13.

Next is the content section. I've placed my standard PSM1 file in my Plaster template folder, and now I'm all set to create a new PowerShell script module. The completed Plaster template is shown below. The verbose parameter provides additional information which can be helpful in troubleshooting problems. A hash table can also be used to create the module without being prompted. The parameter names are TemplatePath, DestinationPath, and …
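Invoking the finished template might look like the following sketch. The paths are placeholders, and the Name, Description, and Version keys assume the template defines parameters with those names (template parameters surface as dynamic parameters on Invoke-Plaster):

```powershell
# All values below are illustrative; adjust to your template and environment
$plasterParams = @{
    TemplatePath    = "$env:USERPROFILE\Documents\Plaster\ScriptModuleTemplate"
    DestinationPath = "$env:USERPROFILE\Documents\MyModule"
    Name            = 'MyModule'
    Description     = 'An example script module'
    Version         = '0.1.0'
}

# Verbose output helps when troubleshooting template problems
Invoke-Plaster @plasterParams -Verbose
```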
You've implemented Azure AD Connect to synchronize accounts in your on-premises Active Directory environment to Azure AD. If you took the defaults while running the setup wizard for Azure AD Connect, then everything in your Active Directory environment is synchronized. If you decided to filter the synchronization later to only specific OUs (Organizational Units) in your Active Directory environment, you could run into a scenario where the number of deletions exceeds the default threshold of 500 objects. If this occurs, you'll receive an email stating the following:

The Identity synchronization service detected that the number of deletions exceeded the configured deletion threshold. A total of 1004 objects were sent for deletion in this Identity synchronization run. This met or exceeded the configured deletion threshold value of 500 objects. We need you to provide confirmation that these deletions should be processed before we will proceed. Please see Preventing Accidental Deletions for more information about the error listed in this email message.

There's a way to see which objects are about to be deleted, as shown in the support article referenced in that email. You can run the necessary commands directly from the machine that Azure AD Connect is installed on. Log into a server with RDP? Are you kidding me? While I could simply use the Enter-PSSession cmdlet to establish a PowerShell one-to-one remoting session, I decided to use implicit remoting to accomplish this task instead. In addition to using implicit remoting, why not use PowerShell Core 6.0 while we're at it, although this particular version of PowerShell certainly isn't required. First, I'll store my Azure credentials that have the necessary rights to perform this task in a variable. I'll also store admin credentials for the on-premises server running Azure AD Connect in a variable.
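Storing the two sets of credentials is straightforward; the variable names are arbitrary:

```powershell
# Azure AD credentials with the necessary rights
$AzureCredential = Get-Credential

# Admin credentials for the on-premises server running Azure AD Connect
$DomainCredential = Get-Credential
```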
I'll use a PowerShell one-liner to both create a PSSession to my on-premises server running Azure AD Connect and import the ADSync module with implicit remoting. My recommendation is to always check the value of an item before making a change to it so you can always get back to where you started. Either increase the threshold or disable the setting altogether while the mass deletion occurs. Proceed at your own risk. Verify the setting is indeed disabled. Once the deletions have completed, be sure to re-enable the protection; otherwise, it's an accident waiting for a place to happen. Last, but not least, verify the protection is indeed enabled.
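A sketch of the whole sequence. The server name is a placeholder, the credentials are assumed to have been stored in variables beforehand, and the Get-/Disable-/Enable-ADSyncExportDeletionThreshold cmdlet names are the ones documented for the accidental-delete-prevention feature, so verify them against your Azure AD Connect version:

```powershell
# 'azconnect01' is a placeholder server name
Import-Module (Import-PSSession -Session (
    New-PSSession -ComputerName azconnect01 -Credential $DomainCredential
) -Module ADSync) -Global

# Check the current value before changing anything
Get-ADSyncExportDeletionThreshold -AADCredential $AzureCredential

# Disable the protection while the mass deletion runs, then verify it's off
Disable-ADSyncExportDeletionThreshold -AADCredential $AzureCredential
Get-ADSyncExportDeletionThreshold -AADCredential $AzureCredential

# Once the deletions have completed, re-enable the protection and verify it
Enable-ADSyncExportDeletionThreshold -DeletionThreshold 500 -AADCredential $AzureCredential
Get-ADSyncExportDeletionThreshold -AADCredential $AzureCredential
```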
I've been working through the Iron Scripter 2018 prequel puzzles, which can be found on PowerShell.org's website. In puzzle 3, you're asked to create a "reusable PowerShell artifact". To me, that almost always means a PowerShell function. One requirement is to pull information from the PowerShell.org RSS feed. Invoke-RestMethod, which was introduced in PowerShell version 3.0, is the easiest way to accomplish that task.

You're also asked to display the returned information in a way that allows the user to select an item from the feed. My first thought was to use Out-GridView to build a GUI-based menu for selecting the item, but that won't work in PowerShell Core since it doesn't have the same GUI-based commands as Windows PowerShell. This means I'll have to build a text-based menu in PowerShell. Luckily, I've recently read "The PowerShell Scripting and Toolmaking Book" by Don Jones and Jeff Hicks, which has some examples of building text-based menus, although I designed mine a little differently than theirs. Consuming content from others often sparks your own creativity.

The next part was actually the most challenging: give the user the ability to display the selected item from the RSS feed in either their browser or in plain text. While the browser part was easy enough and the text part was also easy for Windows PowerShell, returning the content of the feed as text in PowerShell Core was more difficult. This was due to the ParsedHtml property not existing for Invoke-WebRequest in PowerShell Core. One of the drawbacks to using Invoke-WebRequest in Windows PowerShell is that it's dependent on Internet Explorer, although that isn't the case in PowerShell Core running on a Windows system. Making it work for any RSS feed was a bonus, and mine does work against a couple of different sites I tested it against (PowerShell.org and my mikefrobbins.com blog site). Your results may vary when using it for RSS feeds on other websites.
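A rough sketch of the feed retrieval and a text-based menu. The menu layout is my own illustration, not the exact one from the function, and the feed URL is an example:

```powershell
# Retrieve the RSS feed; each returned item is an XML element with title and link properties
$Feed = Invoke-RestMethod -Uri 'https://powershell.org/feed/'

# Build a numbered text-based menu that works in both Windows PowerShell and PowerShell Core
for ($i = 0; $i -lt $Feed.Count; $i++) {
    '{0}: {1}' -f $i, $Feed[$i].title
}

$Selection = Read-Host -Prompt 'Enter the number of the item to open'
Start-Process -FilePath $Feed[[int]$Selection].link   # opens in the default browser
```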
I use a regular expression to parse the text if PowerShell Core is used. That regular expression could probably use some more tweaking, and I'll post an update if I decide to work on it more. The other things that could be added are error handling and verbose output. One of the rules I've broken in the PowerShell function shown in this blog article is that a function should do one thing and do it well. You can read more about that rule in this Hey, Scripting Guy! blog article. This function could be broken up into multiple functions to better adhere to that rule.

The function shown in this blog article can be downloaded from my IronScripter repository on GitHub. It can also be installed from the PowerShell Gallery. See the scripting games category on this blog site to view my other Iron Scripter entries along with my previous Scripting Games entries. Other recommended reading: the PowerShell Core 6.1 Web Cmdlets Roadmap blog article by Microsoft MVP Mark Kraus. Learn to write award winning …
You've decided to install PowerShell Core on your Windows system. First of all, keep in mind that PowerShell Core version 6.0 is not an upgrade or replacement for Windows PowerShell version 5.1. It installs side by side on Windows systems. Being aware of this makes what is shown in this blog article make more sense; otherwise it can be confusing. Based on the response to a tweet of mine from Don Jones, it appears that I'm not the only one who thought PowerShell Core should have been version 1.0 instead of 6.0 to help differentiate it and eliminate some of the confusion.

Update-Help has been run in Windows PowerShell to make sure the help system is 100% up to date. Trying to retrieve the help information for a command in PowerShell Core returns a message stating only partial help is displayed because the help files don't exist and need to be downloaded with Update-Help. This is because the help files for PowerShell Core are located in a different location than the ones for Windows PowerShell. This means Update-Help has to be run in PowerShell Core to update its help files independently of Windows PowerShell. As I previously mentioned, PowerShell Core is indeed a totally separate side-by-side installation from Windows PowerShell.

Other differences also exist. In Windows PowerShell, the first time you ask for help on a command, it prompts you to download the help files. This occurs only if you use Get-Help and not the Help function in Windows PowerShell, but it never prompts to download the help in PowerShell Core, or at least it didn't for me. If you're running Windows PowerShell and PowerShell Core on your Windows system, don't forget to update help in both of them.
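The routine is simply to run the same commands in both hosts:

```powershell
# Run once in Windows PowerShell (powershell.exe) and once in PowerShell Core (pwsh.exe),
# each from an elevated session; each updates its own separate set of help files
Update-Help

# Confirm full help is now available in the current host
Get-Help -Name Get-Command -Full
```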
As I mentioned in my previous blog article, each week leading up to the PowerShell + DevOps Global Summit 2018, PowerShell.org will be posting an Iron Scripter prequel puzzle on their website. As their website states, think of the Iron Scripter as the successor to the Scripting Games. If you haven't done so already, I recommend reading my solution to the Iron Scripter prequel puzzle 1 because some things that were covered in detail in that previous article are glossed over in this one.

Prequel puzzle 2 provides you with an older script that looks like it was written back in the VBScript days. It retrieves information about the operating system, memory, and logical disks. This entry is fairly similar to my previous one in that it queries all of the remote computers at once using a CIM session, which is created with the New-CimSession cmdlet that was introduced in PowerShell version 3.0. Using the CIM cmdlets instead of the older WMI ones allows it to run in PowerShell Core 6.0. The built-in DriveType enumeration is used for parameter validation and tab expansion / IntelliSense of the DriveType parameter. I also decided to add the operating system ReleaseId, which has to be retrieved from the registry.

The function shown in the previous example outputs the raw data returned by the cmdlets as a single type of object with a custom name. None of the disk or memory sizes are converted in case the person working with this function wants to use that raw information. The only exception is that the locale is converted to decimal instead of being returned as the default hexadecimal value. Who really wants to see drives or memory returned in bytes or kilobytes when they run a PowerShell command? A types.ps1xml file is used to extend the types of the returned object so the default output is much more user friendly. If you're interested in learning more about types in PowerShell, I recommend taking a look at the About_Types.ps1xml help topic.
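A trimmed-down sketch of the disk portion, showing the DriveType enumeration idea; the function name and property list are illustrative, not the full function from my entry:

```powershell
function Get-MrDiskInfo {
    [CmdletBinding()]
    param (
        # One or more CIM sessions created with New-CimSession
        [Parameter(Mandatory)]
        [Microsoft.Management.Infrastructure.CimSession[]]$CimSession,

        # The built-in System.IO.DriveType enumeration provides both parameter
        # validation and tab expansion (Fixed, Removable, CDRom, Network, ...)
        [System.IO.DriveType]$DriveType = 'Fixed'
    )

    Get-CimInstance -ClassName Win32_LogicalDisk -CimSession $CimSession -Filter "DriveType = $($DriveType.value__)" -Property DeviceID, Size, FreeSpace
}
```

Something like `Get-MrDiskInfo -CimSession (New-CimSession -ComputerName Server01, Server02)` then queries all of the remote computers at once.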
Even though I've specified the default properties to return in the types.ps1xml file that was previously listed, I overwrite those defaults in a format.ps1xml file for its table view. I could have also provided a list view in the format.ps1xml file, which would have eliminated the need to list the default ones in the types.ps1xml file, but I wanted to show how one of these files could be used to overwrite the other in one scenario, but not another. I've only shown the pertinent portion of the format.ps1xml file in the following example. See my IronScripter repository on GitHub for the entire file and module. As with learning more about types, if you're interested in learning more about modifying the default display of objects in PowerShell, I recommend taking a look at the About_Format.ps1xml help topic. The types.ps1xml file has to be specified in the TypesToProcess section of the module manifest and the format.ps1xml file must be specified in the FormatsToProcess section. Once again, I'm only showing the relevant portion of …
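The wiring in the manifest looks something like the following fragment; the file names here are examples, not the exact ones from my module:

```powershell
# Relevant portion of the module manifest (PSD1)
@{
    RootModule       = 'MySystemInfo.psm1'
    ModuleVersion    = '1.0'
    TypesToProcess   = 'MySystemInfo.Types.ps1xml'
    FormatsToProcess = 'MySystemInfo.Format.ps1xml'
}
```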
Each week leading up to the PowerShell + DevOps Global Summit 2018, PowerShell.org will be posting an Iron Scripter prequel puzzle on their website. As their website states, think of the Iron Scripter as the successor to the Scripting Games. I've taken a look at the different factions, and it was a difficult choice for me to choose between the Daybreak and Flawless factions. While I try to write code that's flawless, perfection is in the eye of the beholder, and it's also a never-ending moving target. Today's perfect code is tomorrow's hot mess because one's knowledge and experience are both constantly increasing, or at least they should be if you want to remain relevant in this industry. I used some of the comments I saw in a tweet from Joel Bennett to help me choose between the two, and I ended up choosing the Daybreak faction.

In the first puzzle, you're given some code that simply does not work due to numerous errors. The instructions even state there are errors in the code. I started out by cleaning up the supplied code a bit to at least make it work so I'd have a better understanding of what it's trying to accomplish. If I were part of the Battle faction, I would probably quit here and say it works, so that's good enough.

Throwing a prototype function together to query the local system was simple enough. Using the CIM cmdlets instead of the older WMI ones allows it to run on PowerShell Core 6.0 in addition to Windows PowerShell 4.0+. Although the CIM cmdlets were introduced in PowerShell version 3.0, I chose to use the ForEach method, which wasn't introduced until PowerShell version 4.0. In case you didn't already know, PowerShell Core version 6.0 is not an upgrade or replacement for Windows PowerShell version 5.1. It installs side by side on Windows systems.
Based on the response to a tweet of mine from Don Jones, it appears that I’m not the only one who thought PowerShell Core should have been version 1.0 instead of 6.0 to help differentiate it and eliminate some of the confusion. Specifying the Property parameter with Get-CimInstance to limit the properties returned makes my Get-MonitorInfo function shown in the following example more efficient. After all, there’s no reason whatsoever to retrieve data that’s never going to be used. I decided to keep things as simple as possible and write a regular function instead of an advanced one. It could be turned into an advanced function by simply adding cmdlet binding which also requires a param block even if it’s empty. This function works as expected against a local system with multiple monitors, but it’s run against a VM with a single monitor in this example. While the function seems to meet the requirements of the solution, I wanted to take it a step further and write something that I would use in a production environment. After all, if you’re going to go to all of this effort, …
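A sketch of such a prototype, assuming the WmiMonitorID class in the root/WMI namespace and decoding its character arrays with the PowerShell 4.0+ ForEach method; the property names are real ones on that class, but treat the decoding details as illustrative rather than my exact code:

```powershell
function Get-MonitorInfo {
    # Limiting the properties retrieved avoids pulling data that's never used
    Get-CimInstance -ClassName WmiMonitorID -Namespace root/WMI -Property ManufacturerName, UserFriendlyName, SerialNumberID |
    ForEach-Object {
        # The class stores strings as numeric arrays; the ForEach method (PS 4.0+)
        # converts each element to a char, and trailing nulls are filtered out
        [pscustomobject]@{
            Manufacturer = -join $_.ManufacturerName.ForEach([char]).Where({$_ -ne [char]0})
            Name         = -join $_.UserFriendlyName.ForEach([char]).Where({$_ -ne [char]0})
            SerialNumber = -join $_.SerialNumberID.ForEach([char]).Where({$_ -ne [char]0})
        }
    }
}
```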
It seems as if every time I need to reload a physical system, I'm searching the Internet for a way to create a bootable USB drive from a Windows 10 or Windows Server 2016 ISO. I always seem to find tutorials that use a process that's almost 20 years old: they have me using the diskpart command line utility. Diskpart, which initially shipped with Windows 2000, reminds me way too much of its predecessor, the fdisk command line utility.

The PowerShell commands in this blog article are written to be compatible with PowerShell version 4.0 and higher. Windows 8 or Windows Server 2012 with a GUI or higher is also required because, although you can install PowerShell version 4.0+ on older operating systems, many of the commands used in this blog article won't exist on them. This is due to the newer WMI namespaces which the cmdlets rely on not existing on older operating systems. Out-GridView, which is used in this blog article, isn't supported on Server Core (Windows Server without a graphical user interface).

To create a bootable USB drive from an ISO, insert the USB drive into your computer, launch PowerShell as an administrator, and run the following PowerShell one-liner. Warning: This command will permanently delete all of the data on the selected USB drive. Proceed at your own risk. You've been warned.

Walking through the previous command: the first line gets a list of all the disks attached to the system. The second filters them to only the ones that are USB drives. Those results are sent to Out-GridView for the user to select the USB drive to format in case more than one USB drive is attached to the system (you can only blame yourself if you select the wrong one). The fourth clears all data and partitions off of the disk. The fifth creates a new partition using all of the available space on the USB drive and assigns a drive letter to it. The last line formats the USB drive.
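A sketch of such a one-liner; each pipeline stage matches one step of the walkthrough above:

```powershell
# WARNING: this permanently erases all data on the selected USB drive
Get-Disk |
Where-Object -Property BusType -eq USB |
Out-GridView -Title 'Select the USB drive to format' -OutputMode Single |
Clear-Disk -RemoveData -Confirm:$false -PassThru |
New-Partition -UseMaximumSize -AssignDriveLetter |
Format-Volume -FileSystem FAT32
```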
While many tutorials use NTFS as the type of file system, I found that my Lenovo ThinkPad P50 will not boot from a USB drive formatted with NTFS. It boots fine from one formatted with FAT32. Mount the ISO. The problem I ran into is there’s no easy way to determine what drive letter is assigned to an ISO once it’s mounted. The simplest way I found is to compare the drive letters before and after mounting it. While this sounds simple, another problem I ran into was that Compare-Object doesn’t handle null values. Change directory into the boot folder on the drive of the mounted ISO. Make the drive bootable and copy the contents of the ISO to the USB drive. While I’ve tried to do as much of this process as possible in PowerShell, there’s still a need to use a few command line utilities to accomplish this task. Thankfully, it’s something that you shouldn’t have to do very often. For ease …
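The mount-and-compare steps can be sketched as follows. The ISO path is a placeholder, $USBDrive stands in for the letter assigned during the earlier formatting step, and bootsect.exe is one of the command line utilities mentioned above (it lives in the boot folder of the mounted ISO):

```powershell
# Determine the ISO's drive letter by comparing letters before and after mounting;
# volumes without a drive letter are filtered out since Compare-Object can't handle nulls
$Before = (Get-Volume | Where-Object DriveLetter).DriveLetter
Mount-DiskImage -ImagePath 'C:\ISOs\WindowsServer2016.iso'   # placeholder path
$After = (Get-Volume | Where-Object DriveLetter).DriveLetter
$ISODrive = (Compare-Object -ReferenceObject $Before -DifferenceObject $After).InputObject

# Make the USB drive bootable, then copy the ISO contents to it
Set-Location -Path "${ISODrive}:\boot"
.\bootsect.exe /nt60 "${USBDrive}:"
Copy-Item -Path "${ISODrive}:\*" -Destination "${USBDrive}:" -Recurse
```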
I thought I'd run into a bug with the Compare-Object cmdlet in PowerShell version 5.1 earlier today.

Compare-Object : Cannot bind argument to parameter 'ReferenceObject' because it is null.
At line:1 char:33
+ Compare-Object -ReferenceObject $DriveLetters -DifferenceObject $Driv …
+                                 ~~~~~~~~~~~~~
    + CategoryInfo          : InvalidData: (:) [Compare-Object], ParameterBindingValidationException
    + FullyQualifiedErrorId : ParameterArgumentValidationErrorNullNotAllowed,Microsoft.PowerShell.Commands.CompareObjectCommand

Running the same commands on a VM with PowerShell version 5.0 completed without issue, so it initially appeared to be a problem with the Compare-Object cmdlet in PowerShell version 5.1. The error was actually due to Get-Volume returning several results that didn't have a drive letter on the system running PowerShell version 5.1. There just happened to be no drives without letters on the VM that the results were being compared to. In other words, all drives had drive letters on the comparison VM, which explains why it succeeded without error on that particular system. Once the entries without drive letters were filtered out on the system running PowerShell version 5.1, the command completed successfully.

I would like to thank both Jeff Hicks and Aleksandar Nikolic for their assistance in helping determine the source of the problem. I thought I'd share this problem because I'm sure it's something that others will run into.
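One way to apply that filter, sketched here as a single line:

```powershell
# Exclude volumes with a null drive letter so the value passed to
# Compare-Object's ReferenceObject parameter is never null
$DriveLetters = (Get-Volume | Where-Object DriveLetter).DriveLetter
```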