Nigel Boulton's Blog
30Jun/11

Black Console and 100% CPU after restoring a Windows 2003 Virtual Machine

I was recently involved in a Disaster Recovery rehearsal. The idea was to prove that we could recover our key systems at another site should a disaster occur. We came across an interesting issue which I thought I would blog about in case it is of help to anyone who encounters it in a similar situation. Let's face it, in a disaster recovery scenario, you need as few difficult issues to deal with as possible!

This issue is really only likely to affect virtual servers (provided, that is, that you are using identical hardware to recover your physical ones).

We were using IBM Tivoli Storage Manager (TSM) to restore C: drive and System State backups of virtual servers (taken in the live environment) into a separate isolated network.

The process involved creating a new VM (typically from a template), with the same virtual hardware version, number of vCPUs, amount of memory, disk layout and virtual NIC type as the live server. This VM would have the same operating system version, edition, architecture (x86/amd64) and service pack installed as the live server, plus the TSM client to facilitate the restore.

On restoring the first Windows 2003 server in this manner, the server wouldn't boot. The VM console displayed a black screen (no error message) and the VM CPU usage immediately spiked to 100% and stayed there. This happened immediately after power on, so it was not possible to get the VM to respond to the F8 key in an attempt to put it into safe mode, to assist with troubleshooting.

It really looked like a hardware incompatibility, but I'd been very careful to make sure the necessary parameters matched, so was a bit mystified. After some head scratching and time spent comparing the hardware that Windows thought was present in the template and live VM (good job it wasn't a real disaster!), I spotted that the Hardware Abstraction Layer (HAL) didn't match between the two (the HAL can be checked via Device Manager – Computer). The template VM had an ACPI Uniprocessor HAL (which I expected as it had been built with only one vCPU), but the live VM had an ACPI Multiprocessor HAL. I was pretty sure at this point that this would be the cause of the issue.

Like the template VM, the live VM also had one vCPU, so why did it have a multiprocessor HAL? The key difference between the two VMs was how they had been created. The live VM had originally been created by P2V'ing a physical server. This physical server would no doubt have had multiple processors and hence when Windows was originally installed, had been given a multiprocessor HAL. This didn't change on P2V, but the person who did this elected for the VM to only have one vCPU - quite understandably as it was a relatively lightly loaded server. So it was running a single vCPU with a multiprocessor HAL (which is clearly a valid configuration).

The problem was introduced by the restore process. I assume that some aspect of the restore didn't replace something in the template, and part of the template VM's uniprocessor HAL was still operational after the restore and reboot – or not operational in fact!

The supported/correct way of setting a multiprocessor HAL would be to install the OS from scratch on a VM with more than one vCPU. However, that would have been time consuming for the number of variations of servers that we had to restore, and the time available didn't allow for that.

So how did I rectify this? Well, a few years ago I ran into a (different) issue attempting to give a single vCPU VM an additional processor, and in the process of troubleshooting that, came across this post on ngohq.com by Squall Leonhart. It describes how to change the Windows HAL without reinstalling. Note that this approach is, obviously, totally unsupported!

The method involved using DevCon, which is basically a command-line version of Device Manager. DevCon can be downloaded from Microsoft here.

By running the following commands within the template VM, prior to the restore, I was able to update the HAL to an ACPI Multiprocessor one:

devcon sethwid @ROOT\ACPI_HAL\0000 := +acpiapic_mp !acpiapic_up
devcon update c:\windows\inf\hal.inf acpiapic_mp

Squall recommends rebooting twice after doing this, to ensure that the device and IRQ tables get updated correctly.

After performing the steps above, a subsequent TSM restore was successful, and the server booted with no further problems. Result!

I have reproduced Squall's entire post below as this is such useful information which could be lost should the ngohq.com forum cease to exist for any reason – which would be a massive shame. It includes information on how to go to and from various HALs. Thanks for this incredibly helpful information Squall!

Heres some tips for upgraders!

You require the Devcon utility for this, unpack it to a folder, then navigate to the folder its in using Command prompt (command prompt on context menu PowerToy is handy for this)

How to enable APIC without repair installing windows
in device manager you will notice that under computer type it says Advanced Configuration and Power Interface (ACPI) PC.. this is a standard single processor HAL driver without APIC. to upgrade to the APIC driver you input the following:

devcon sethwid @ROOT\ACPI_HAL\0000 := +acpiapic_up !acpipic_up
devcon update c:\windows\inf\hal.inf acpiapic_up

after this, enable APIC in the bios if you haven't already, and reboot twice so windows can update the device and irq tables, it should now say ACPI Uniprocessor PC in the device manager

How to go back to PIC
if you wish to go back to PIC from APIC enter this:

devcon sethwid @ROOT\ACPI_HAL\0000 := +acpipic_up !acpiapic_up
devcon update c:\windows\inf\hal.inf acpipic_up

and reboot twice to update the device and IRQ tables, and then disable APIC in the bios (the reason is, if you disable APIC before the device and irq tables update, windows will crash at startup).

How to Update from a Single Core APIC compatible cpu to a Multicore APIC compatible cpu

under the computer entry in the device manager, you will see it says ACPI Uniprocessor PC, to update to the multiprocessor HAL input this:

devcon sethwid @ROOT\ACPI_HAL\0000 := +acpiapic_mp !acpiapic_up
devcon update c:\windows\inf\hal.inf acpiapic_mp

Then reboot twice again to update the device and IRQ tables.

How to go back to Single Core (should it be needed)
if you accidentally burn your processor and have to go back to a single core backup, you input this into the devcon:

devcon sethwid @ROOT\ACPI_HAL\0000 := +acpiapic_up !acpiapic_mp
devcon update c:\windows\inf\hal.inf acpiapic_up

and always reboot twice.

Filed under: VMware, Windows 1 Comment
7May/11

Problems with PowerShell Comment-based Help

I was writing a script recently and came across a couple of "gotchas" when attempting to use PowerShell 2's Comment-based Help at script level. A simple test script to demonstrate what I am about to describe is shown below:

<#
.SYNOPSIS
    Tests whether PowerShell 2 Comment-based Help is working as expected
.DESCRIPTION
    Displays Comment-based Help for this script
.EXAMPLE
    Test-CommentBasedHelp.ps1
.NOTES
    None
#>            
Write-Host "Hello World"


The expected output from this is as follows:

Expected output from test script showing Comment-based Help working

"Get-Help about_comment_based_help" says

"SYNTAX FOR COMMENT-BASED HELP IN SCRIPTS

  Comment-based Help for a script can appear in one of the following two
  locations in the script.

  -- At the beginning of the script file. Script Help can be preceded in the
     script only by comments and blank lines.

     If the first item in the script body (after the Help) is a function
     declaration, there must be at least two blank lines between the end of the
     script Help and the function declaration. Otherwise, the Help is
     interpreted as being Help for the function, not Help for the script.

  -- At the end of the script file."

Looks straightforward enough. However, if you try the following:

# Comment            
<#
.SYNOPSIS
    Tests whether PowerShell 2 Comment-based Help is working as expected
.DESCRIPTION
    Displays Comment-based Help for this script
.EXAMPLE
    Test-CommentBasedHelp.ps1
.NOTES
    None
#>            
Write-Host "Hello World"

The output isn't as expected. You just get the name of the script returned, as shown below:

Output from test script showing just script name returned

What's that about…? Well, this is the first "gotcha". Although Get-Help says that Script Help can be preceded in the script by comments and blank lines, it's easy to miss the text further up that says

"All of the lines in a comment-based Help topic must be contiguous. If a comment-based Help topic follows a comment that is not part of the Help topic, there must be at least one blank line between the last non-Help comment line and the beginning of the comment-based Help."

So to avoid breaking it, you need to use the following syntax:

# Comment            

<#
.SYNOPSIS
    Tests whether PowerShell 2 Comment-based Help is working as expected
.DESCRIPTION
    Displays Comment-based Help for this script
.EXAMPLE
    Test-CommentBasedHelp.ps1
.NOTES
    None
#>            
Write-Host "Hello World"

One tiny blank line can make such a difference!

The same applies if you put anything inside the comment block before the first keyword. For example, the following is not valid:

<#
Comment
.SYNOPSIS
    Tests whether PowerShell 2 Comment-based Help is working as expected
.DESCRIPTION
    Displays Comment-based Help for this script
.EXAMPLE
    Test-CommentBasedHelp.ps1
.NOTES
    None
#>            
Write-Host "Hello World"


The second "gotcha" relates to the fact that Get-Help says that "at the end of the script file" is a valid location for Comment-based Help for a script. This is true, but be aware that if you subsequently sign your script, a signature block is added to the end of the script file. Your Comment-based Help block is then no longer at the end of the file, and you will get the symptom described above.

Write-Host "Hello World"            
<#
.SYNOPSIS
    Tests whether PowerShell 2 Comment-based Help is working as expected
.DESCRIPTION
    Displays Comment-based Help for this script
.EXAMPLE
    Test-CommentBasedHelp.ps1
.NOTES
    None
#>            

# SIG # Begin signature block            
# MIID/wYJKddoZIhvcNAQcCoIID8DCCA+wCAQExCzAJBgUrDgMCGgUAMGkGCisGAQ            
# gjcWqCA56QSgWzBZMDQGCisGAQQBgjcCAR4wJgIDAQAABBAfzDtgWUsITrck0sYp            
# AgEAAgEAAgEAAgEAAgEAMCEwCQYFKw4DAhoFAAQUXPsShpFvys7oIWj23R6GpiQb            
# l46gggIdMIICGTCCAYKgAwIBAgIQSqeTRu71Hp1OJu+xx6ASfzANBgkqhkiG9w0B            
# AQQFADAYMRYwFAYDVQQDEw1OaWdlbCBCd5b3VsdG9uMCAXDTAwMDEwMTAwMDAwMF            
# DzIwOTkwMTAxMDAwMDAwWjAYMRYwFA68YDVQQDEw1OaWdlbCBCb3VsdG9uMIGfMA            
# CSqGSIb3DQEBAQPd789yhiCBiQKBgQC0+WAMn64J4oKsTIKsbBH5cTB4fEnfafzG            
# 1G+QkkgHpimfbT0Y+XrfmqKP6G/ailX3BHvwYOMmuSARqutfF6Rv9AQ7B/Sl8BgH            
# +AztcWg+jNko9dTidqexjH+bunpbzFMIJ6Lnzr+xSBvAbQR8oWtOwodQASW0G4Ra            
# b7+u5VZBaQIDAQABo2IwYDATBgNVHSUEDDAKBggrBgEFBQcDAzBJBgNVHQEEQjBA            
# gBCJKelkj8xj96uouh6cXclzoRowGDEWMBQGA1UEAxMNTmlnZWwgQm91bHRvboIQ            
# SqeTRu71Hp1OJu+xx6ASfzAHJy578}iG9w0BAQQFAAOBgQBw/WwbWGAHyyGjDhpb            
# Z7i8duiLHBBRYfUpczIh02jXPU+DfWa7atfwuFyxeilUDTszZ/2dOplH8l394j3H            
# yy8ZqXTf796zLqWXmvZn85rkgm16rRXqzDBheHidyTP3cPRPn7ehCahAAqpmHS0y            
# H7X3bevXIvMwDSXpL47nCCfWUDGCAUwwggFIAgEBMCwwGDEWMBQv0GA1UEAxMNTm            
# ZWwgQm91bHRvbgIQSqeTRu71Hp1OJu+xx6ASfzAJBgUrDgMCGgUAoHgwGAYKKwYB            
# BAGCNwIBDDEKMAhjY7guigAoAAoQKAADAZBgkqhkiG9w0BCQMxDAYKKwYBBAGCNw            
# BgorBgEEAYI3ACBgELMQ4wDAYKKwYBBAGCNwIBFTAjBgkqhkiG9w0BCQQxFgQULr            
# hd1Ib4bCuTXkm35KwTLDIiK58wDQYJKoZIcxhvcNAQEBBQAEgYCPumseo6AGAZFD            
# R37Tj8Kx6E0E6+MHqMHZ1TcLjO3E/lZqzFW7cCTJOcIH6Yg78r2DiToGXISdJkk8            
# 9sBB3nbsvQHWsWOYdRVwH8VueRg9paSa3CMj87E500z6bElejYGOi9VVfDZ8xBwm            
# rY4aAWd5A2dDpnojJQLC1yCv8w==            
# SIG # End signature block

Update: Please see the comment below from June Blender at Microsoft, who writes PowerShell Help. She has kindly updated the PowerShell Online Help topic about_Comment_Based_Help (available here) to reflect this "gotcha". Some further good news is that this change made it into the Windows PowerShell 2.0 Core Help May 2011 Update, which provides updated PowerShell Help in a CHM format (handy for searching!).

Hope this information is useful to you, I spent more time than I would have liked chasing this around!

Filed under: PowerShell 25 Comments
26Mar/11

Regular Expressions in PowerShell – Tome Tanasovski

Tome Tanasovski (@toenuff on Twitter) who runs the NYC PowerShell User Group did a brilliant presentation on Regular Expressions in PowerShell to the UK PowerShell User Group earlier this week - by far the best I've seen to date.

Richard Siddaway (who runs the UK PowerShell User Group) has kindly made the recording available here and Tome's presentation slides, scripts and cheat sheet here. Well worth a look!

Many thanks to Tome, and of course to Richard for organising this event.

Filed under: PowerShell No Comments
23Mar/11

On-Demand Access to your Windows Live SkyDrive via Windows Explorer

My eldest is off to University later this year, and I had suggested that he upload any important documents to his Windows Live SkyDrive, for online backup and the ability to edit them if necessary from any machine with a browser.

I thought it would be worth finding a slick way of giving access to the SkyDrive via a mapped drive in Windows Explorer, ideally connecting only when required and without the annoyance of being prompted for credentials – that way the backups are more likely to happen! I was aiming to do this using what Windows 7 has to offer natively and avoiding installing any additional applications.

A quick search led me to this great post from Mike Plate. This gave me a good starting point.

I had previously used the Office 2010 method that Mike describes to successfully determine the correct path to the "My Documents" folder on the SkyDrive, but I ran into an issue (discussed below), and it has to be said that this method is slightly convoluted. Fortunately, as described in the update to the above mentioned post, Mike has developed a neat tool called the SkyDrive Simple Viewer, which is available on CodePlex to assist with this. The EXE can be run from a folder on your PC and doesn’t require installation.

Here are the steps required - you will need to perform these logged on as the user who will use the mapped drive. At the end of this process you will have a nice desktop shortcut looking something like this, that you can double-click and have your SkyDrive folder silently mapped to a drive letter on your PC:

Update 10 Dec 2011: The SkyDrive Simple Viewer no longer seems to work correctly. Please see the comment below for details, and a link to an alternative method of determining the WebDAV address.

1. Download the SkyDrive Simple Viewer for WebDAV (I used the WPF version, which requires .Net Framework 3.5 SP1)

2. Run the viewer and log in to your SkyDrive using your Windows Live credentials, then select the top-level folder you want to use to store your documents in. If this is anything other than the default "My Documents" folder you will have to log on to your SkyDrive via a browser and create it using the normal method before doing this

3. Copy the WebDAV address from the text field in the viewer and paste this into a new Notepad document. It should look something like this:

https://yxbjla.docs.live.net/bc634a9b20da709c/^.Documents

In the above example, we’ll call "yxbjla.docs.live.net" the Server FQDN, "bc634a9b20da709c" the SkyDrive ID and "^.Documents" the Folder ID. Note that the Server FQDN will differ for each top-level folder on your SkyDrive

4. Edit the text document to create a new command line in the format shown below:

net use Drive Letter "\\Server FQDN@SSL\DavWWWRoot\SkyDrive ID\Folder ID" /SAVECRED /PERSISTENT:NO

e.g.

net use s: "\\yxbjla.docs.live.net@SSL\DavWWWRoot\bc634a9b20da709c\^.Documents" /SAVECRED /PERSISTENT:NO

A few key points here – I mentioned above that I’d run into an issue when following Mike’s article. Well, this was when attempting to map a drive to the default SkyDrive "My Documents" folder. For me, it is identified by WebDAV (as can be seen above) as "^.Documents", not "^2Documents" (perhaps Microsoft have changed this since Mike wrote his article?). Anyway, I was able to map the drive using "^.Documents", but ran into access denied errors copying files onto the SkyDrive via that route. To address this, I found that I had to enclose the entire WebDAV path in quotes, as shown in the command line above

The key to not being prompted to log on each time is to have Windows store your Windows Live credentials for you – the /SAVECRED switch does this, and the /PERSISTENT:NO switch avoids Windows mapping the drive at each logon, so that it can be done "on demand"

5. Open a Command Prompt and paste the command line you created in the Notepad document in after the prompt, and then press Enter. When prompted, provide your user name and password (i.e. your Windows Live credentials) and you should see the message "The command completed successfully". A quick check in Computer should show that the drive is mapped and the files on your SkyDrive are accessible

6. Right-click the mapped drive and select Disconnect

Finally, we need to create a shortcut to map the drive when desired:

7. Right-click the desktop and create a new shortcut

8. Paste the command line you created in the text document into the wizard without the switches, e.g.

net use s: "\\yxbjla.docs.live.net@SSL\DavWWWRoot\bc634a9b20da709c\^.Documents"

9. Give the shortcut a suitable name (bearing in mind you can’t use a colon (:) in the name of the shortcut), and save it

10. Finally, edit the shortcut properties to run it minimised, and select a suitable icon using the Change Icon button

I selected an icon from SHELL32.dll – there’s a good number in there to choose from. Mine shows a couple of computers with a network connection alongside a globe, which I think sums up the function nicely!

11. If you would like a new Windows Explorer window to open displaying the contents of the SkyDrive folder after mapping the drive, edit the shortcut properties to prefix the target with "cmd /c " and append " & explorer s:" (without the quotes), as shown below:

cmd /c net use s: "\\yxbjla.docs.live.net@SSL\DavWWWRoot\bc634a9b20da709c\^.Documents" & explorer s:

I find managing files on the SkyDrive this way works well, and you can go ahead and create subfolders at will using the normal Windows methods. However, if you need access to a different top-level folder you will need to set up an alternative shortcut (and/or drive letter) by following the steps above again.

If you change your Windows Live password in future, it will be necessary to repeat the steps above up to the point where you create the shortcut, to provide and save the new credentials. There isn’t a documented method of permanently removing the saved credentials should they no longer be required, as far as I’m aware - however, I will mention that they are stored under %APPDATA%\Microsoft\Credentials in hidden system files – delete them (and reboot) at your own risk!

I have seen it reported that accessing files on your SkyDrive via WebDAV can be very slow, but I haven’t experienced this myself. Of course you must remember that there is no way that it’s likely to be comparable to local storage or LAN speed-wise. Various people have reported that ensuring you do not have your Internet Explorer proxy settings configured for automatic detection can improve transfer speeds – I haven’t tested this myself. To check this, in Internet Explorer, go to Tools – Internet Options – Connections tab – LAN settings and ensure that the "Automatically detect settings" checkbox is unselected (assuming you don’t need to use this functionality of course).

Bear in mind that all the usual restrictions with regard to your SkyDrive still apply – you can only upload files of up to 50 MB in size each, and only certain types of files are permitted. However, with 25 GB of storage provided by Microsoft for free, this is a convenient way to store (and edit) your important Office documents online.

Finally, if you’d like to do this without the complication and you’re happy to install additional applications, there are a number of free applications available that may meet your needs. In the course of this work I tested a few of them, but none of them provided exactly what I wanted, so I stuck with the method described in this post.

Filed under: Windows 5 Comments
28Feb/11

Converting a PowerShell Array into a .Net Framework ArrayList

I was writing a PowerShell script earlier today and needed to take some data I had in an array and put it into a .Net Framework ArrayList. It took me some searching online to find out how to do this so I thought I’d blog it here…

ArrayLists are a powerful way of managing data – one of their biggest advantages is that it is easy to manipulate data by adding or removing elements, as shown below. With default PowerShell arrays there is no simple way to remove elements.

To demonstrate how ArrayLists work, try this code:

$ArrayList = New-Object System.Collections.ArrayList
$ArrayList.Add("New Element 1")
$ArrayList.Add("New Element 2")
$ArrayList.Count
$ArrayList.Remove("New Element 2")
$ArrayList.Count

Note that the return value from the Add method is the index at which the element was added. You can always cast the call to void or pipe it to Out-Null if you don't need it.
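To show what that looks like in practice, here is a minimal sketch of both ways of suppressing the returned index:

```powershell
$ArrayList = New-Object System.Collections.ArrayList

# Cast the call to void to discard the returned index
[void]$ArrayList.Add("New Element 1")

# Or pipe the return value to Out-Null
$ArrayList.Add("New Element 2") | Out-Null

$ArrayList.Count    # 2 - both elements were added, nothing was echoed
```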

You can also specify where to insert or remove elements:

$ArrayList.Insert(0,"New Element 3") 	# Inserts an element at the beginning of the array list
$ArrayList.RemoveAt(0) 				# Removes the first element

Anyway, back to the point... I had an array containing a list of files (which came from Get-ChildItem), and I wanted to create an ArrayList and populate this with that data, because I wanted to be able to remove each file from the list later on in the script. There are two approaches that can be used for this:

From an existing array:

$arrFiles = Get-ChildItem
$colFiles = New-Object System.Collections.ArrayList
$colFiles.AddRange($arrFiles)

Or more simply, populate the ArrayList directly:

$colFiles = New-Object System.Collections.ArrayList(,(Get-ChildItem))

Note the comma within the parameters – the comma is the array construction operator in PowerShell.
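To see why the comma matters, here is a small self-contained illustration (using a plain string array rather than Get-ChildItem output): the unary comma wraps the array in a one-element array, so the whole array binds as a single constructor argument instead of being unrolled into separate arguments.

```powershell
$files = "a.txt", "b.txt", "c.txt"

# The comma passes $files as one argument to the ArrayList constructor
$list = New-Object System.Collections.ArrayList(,$files)
$list.Count         # 3

# On its own, the unary comma builds a one-element array:
$wrapped = ,$files
$wrapped.Count      # 1
$wrapped[0].Count   # 3
```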

Forgive my artistic license in choosing a title for this post – we’re not really converting an array as such, but I thought it would be the most likely thing somebody seeking this information would search for.

Filed under: PowerShell 3 Comments
7Feb/11

Checking whether a Hotfix is installed on Multiple Machines using PowerShell Remoting

In my last post, I said that I would post a way of verifying whether a particular hotfix had been installed on a number of machines, so here it is...

This approach uses good old PowerShell remoting again - as I have said before, this is an incredibly powerful way of executing PowerShell code on a number of machines, and well worth investing the time to set it up in your environment.

...So, I'd deployed the hotfix to all the servers in the farm, and I needed a quick way of verifying that it had been successfully installed on each of them. This was a variation on an approach I'd taken in the past (an enhancement kindly provided by Jeffrey Snover) - with one important difference: this time I was making good use of the PowerShell custom objects that the script returns. The script code is shown below:

$Servers = $(1..176 | foreach {"SERVER$_"})

Invoke-Command -ComputerName $Servers -ScriptBlock {
    $Result = Get-Hotfix | where {$_.hotfixid -eq 'KB2464876'}
    if ($Result) {
        New-Object PSObject -Property @{Host = hostname; Value = $true}
    } else {
        New-Object PSObject -Property @{Host = hostname; Value = $false}
    }
}

Running this script as shown below allowed me to get a quick indication of any servers that did not have the hotfix installed. This was achieved by querying for returned custom objects whose "Value" property was not equal to True:

./Check-Hotfix.ps1 | Where-Object {$_.Value -ne $True} | Select-Object Host

...and the result was:

Host
----

As I got no servers returned in the result (which of course is good), I wanted a confidence check, so I ran it this way to prove that the code was in fact running as expected:

./Check-Hotfix.ps1 | Where-Object {$_.Value -ne $False} | Select-Object Host

Host
----
SERVER1
SERVER2
SERVER3
SERVER4
...

As I mentioned in my last post, there are a number of ways you can build the list of servers. I used a numbered range, but by simply substituting the line which sets the $Servers variable, you can easily read a list of machine names from a text file (or a CSV file of course):

$Servers = Get-Content '\\Fileserver\Hotfixes\ServerList.txt'
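If the list lives in a CSV file instead, Import-Csv works just as well. This sketch assumes a hypothetical file with a header row containing a ComputerName column:

```powershell
# Hypothetical CSV with a ComputerName column
$Servers = Import-Csv '\\Fileserver\Hotfixes\ServerList.csv' |
    Select-Object -ExpandProperty ComputerName
```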

Variations on the above approach can potentially allow you to do almost anything that you can do with PowerShell locally, across your entire server estate. For example, recently I used the same method to check which servers had a specific unwanted value in one of the Terminal Server "shadow" keys. This rogue value was being written into user profiles when users were directed to the servers in question and causing unexpected behaviour in subsequent sessions.

One of the best things about this is that, in most cases, you can develop and test the functional part of the code locally and then simply drop it into the scriptblock. Nice!

Filed under: PowerShell No Comments
20Jan/11

Installing a Windows Hotfix on Multiple Machines using a PowerShell Script

A little while ago I was given a hotfix by Microsoft PSS for an issue we had been experiencing with the WMI repository intermittently becoming corrupted on Windows 2008 servers. As you will know from my previous posts we have quite a few servers, so after testing the hotfix carefully I was looking for a way to deploy this across the server estate with minimum effort. We do have a third-party deployment product, but this is geared around deploying regular Microsoft security patches as opposed to hotfixes intended to address specific issues. I wanted a way to do this semi-interactively so that I could monitor progress and deal with any issues arising during the deployment process.

My good friend and colleague Jonathan Medd kindly did the initial research on this for me (I figured if anybody could find a way to do it then he could!). We suspected that we might have some trouble getting PowerShell remoting to do this (see my previous post here), and after some searching and testing it soon became clear that this was in fact the case.

The approach we settled on was still based around PowerShell (of course), but we ended up having to make use of the trusty old utility PsExec, originally written by Mark Russinovich of Sysinternals, which now comes under the Microsoft umbrella.

In this solution, PsExec.exe calls WUSA.exe, which is the Windows Update Stand-alone Installer. This is a means of installing update packages programmatically. Update packages have an .msu file extension. I can’t get out of the habit of calling them hotfixes though, sorry... WUSA is a simple and effective utility that surprisingly, I hadn’t encountered before. The final script code is shown below:

$Servers = $(1..176 | foreach {"SERVER$_"})

$HotfixPath = '\\Fileserver\Hotfixes\KB2464876\Windows6.0-KB2464876-x86.msu'

foreach ($Server in $Servers){
    if (Test-Path "\\$Server\c$\Temp"){
        Write-Host "Processing $Server..."
        # Copy update package to local folder on server
        Copy-Item $HotfixPath "\\$Server\c$\Temp"
        # Run command as SYSTEM via PsExec (-s switch)
        & E:\SysinternalsSuite\PsExec -s \\$Server wusa C:\Temp\Windows6.0-KB2464876-x86.msu /quiet /norestart
        if ($LastExitCode -eq 3010) {
            $ConfirmReboot = $False
        } else {
            $ConfirmReboot = $True
        }
        # Delete local copy of update package
        Remove-Item "\\$Server\c$\Temp\Windows6.0-KB2464876-x86.msu"
        Write-Host "Restarting $Server..."
        Restart-Computer -ComputerName $Server -Force -Confirm:$ConfirmReboot
        Write-Host
    } else {
        Write-Host "Folder C:\Temp does not exist on $Server"
    }
}

To avoid authentication issues, we have PsExec run WUSA as SYSTEM (-s switch), which means that the update package needs to be available locally, so the script copies it to C:\Temp on the machine in question first. During testing, we were using the -i (interactive) switch, but doing this caused error 1008 "ERROR_NO_TOKEN" when I tried to run it for real – this appears to happen if you are not logged on to the server being processed.

The hotfix (sorry, "update package") in question required a restart after installation. I wanted the process to be as automated as possible, but still interactive, as I mentioned earlier. I wanted the restart to be performed automatically and for the script to proceed to the next server unprompted if the package installed as expected, but to prompt me if not so that I could troubleshoot.

To achieve this, the script has WUSA install the package with the /norestart switch. Because of this, when the package installs successfully but needs a restart, the exit code returned from WUSA (via PsExec) is 3010 – which in fact isn't an error at all, but means "success, reboot required". By testing PowerShell's built-in $LastExitCode variable it's possible to have the script proceed with the removal of the local file and the subsequent restart of the machine automatically if this is the result.

Of course, you can build the list of servers in a number of ways in the code above. I used a numbered range, but by simply substituting the line which sets the $Servers variable, you can easily read a list of machine names from a text file:

$Servers = Get-Content '\\Fileserver\Hotfixes\ServerList.txt'

I ran this against 176 servers in groups of about 50 in a few hours and it worked faultlessly. After all the servers had restarted I needed a quick way of verifying that the hotfix had been successfully installed on all of them. Shortly I will publish a further post which outlines how I did that. It’s neat and simple and potentially useful to be able to check for the presence of any hotfix on a number of machines.

Filed under: PowerShell 14 Comments
16Dec/10

Taking Snapshots of all Virtual Machines using PowerCLI

I recently needed to apply a limited distribution patch to a number of Citrix servers, all of which are virtual on VMware ESXi 4.0. I wanted to take snapshots before doing this, to give me an easy backout route if things went horribly wrong. Of course I could always have done this using the VI Client, but that would have meant an awful lot of "mousing about" and clicking to be able to do this for 176 virtual machines.

With PowerCLI this is a cinch, in fact it's pretty much a one-liner! I chose to do this one host at a time, but with a small change to the code below you can easily expand this to encompass a larger chunk, or even all, of your virtual infrastructure.

First, connect to your vCenter Server (and provide the appropriate credentials when prompted):

Connect-VIServer -Server viserver.domain.com

Then run the following one-liner to take a snapshot of all VMs on a given host:

Get-VMHost vmhost.domain.com | Get-VM | New-Snapshot -Name "Pre patch" -Quiesce

In this case I chose to quiesce the file system first. Other options are available - see the help for the New-Snapshot cmdlet.
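For example, to capture the VMs' memory state in the snapshot instead (an illustrative variation on the one-liner above – check the New-Snapshot help for the full parameter list):

```powershell
# -Memory includes the running VM's memory in the snapshot, allowing a
# revert straight back to the powered-on state
Get-VMHost vmhost.domain.com | Get-VM | New-Snapshot -Name "Pre patch" -Memory
```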

Once you have finished with the snapshots, delete them as follows:

Get-VMHost vmhost.domain.com | Get-VM | Get-Snapshot -Name "Pre patch" | Remove-Snapshot

And finally, disconnect from your vCenter Server:

Disconnect-VIServer -Server viserver.domain.com

How easy is that..?!

My good friends Alan Renouf and Jonathan Medd talk about how useful PowerCLI is for automating repetitive tasks in Episode 20 of the Get-Scripting Podcast, and this is a perfect example of that.

On the subject of the Get-Scripting Podcast, do be sure to check out that episode - the guys interview none other than Jeffrey Snover, Lead Architect for Windows Server at Microsoft, and the man behind Windows PowerShell itself - excellent!

Filed under: PowerCLI, VMware 2 Comments
5Dec/104

Checking File Associations with the help of PowerShell Remoting

I recently needed to check the file association for .JPG files across a whole Citrix server estate, as we’d received reports of files of these types not always opening as expected. Because PowerShell remoting is enabled on every server, this was a very easy job..!

The code snippet below is what I used. A script block is run remotely on each server using the Invoke-Command cmdlet. The script block then uses the Get-ItemProperty cmdlet to read the registry to get the default file association for .JPG files (via the “jpegfile” class) and reports OK if it is as expected, or the actual value that is set if not:

1..176 | ForEach-Object {
	$ServerName = "SERVER$_"
	Write-Host "$($ServerName): " -NoNewLine
	Invoke-Command -ComputerName $ServerName -ScriptBlock {
		$Value = Get-ItemProperty "Registry::HKEY_CLASSES_ROOT\jpegfile\shell\open\command" "(Default)" | Select-Object -ExpandProperty "(Default)"
		if ($Value -eq "C:\Windows\System32\rundll32.exe `"C:\Program Files\Windows Photo Gallery\PhotoViewer.dll`", ImageView_Fullscreen %1") {
			Write-Host "OK"
		} else {
			Write-Host $Value
		}
	}
}

Update: Please see the comment below from Jeffrey Snover. Jeffrey's approach makes use of the concurrency feature of Invoke-Command, which executes the command on 32 servers simultaneously (by default) and so returns the results substantially faster.
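For reference, the general shape of that concurrent pattern is below - a sketch of the idea, not Jeffrey's exact code. Each result object comes back with a PSComputerName property identifying its source server:

```powershell
# Pass all computer names at once; Invoke-Command fans out to 32
# machines concurrently by default (tunable via -ThrottleLimit)
$Servers = 1..176 | ForEach-Object { "SERVER$_" }
Invoke-Command -ComputerName $Servers -ScriptBlock {
    Get-ItemProperty "Registry::HKEY_CLASSES_ROOT\jpegfile\shell\open\command" "(Default)"
} | Select-Object PSComputerName, "(Default)"
```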

In my opinion, PowerShell remoting is by far the best version 2.0 feature. If you don’t already have it enabled across your server estate I’d strongly recommend doing so as it can save hours of effort.

Depending on your environment, there may be a few hoops you have to jump through to get remoting working properly, but it will be time well spent. PowerShell MVP Jonathan Medd has an excellent post on his blog on Enabling PowerShell 2.0 Remoting in an Enterprise and Ravikanth Chaganti has produced a helpful multi-part PowerShell 2.0 remoting guide.

Be aware though that unfortunately there are some things that can’t be run remotely – a month or two ago I was doing some work with WSUS and discovered that it’s not possible to call the IUpdateSession::CreateUpdateDownloader method remotely for example. Shame!

Filed under: PowerShell 4 Comments
1Dec/100

Citrix Access Management Console – “Errors occurred when using [server] in the discovery process.”

I’m sure that, like me, you’ve seen this error message from time to time when attempting to configure and run discovery in the Citrix Access Management Console (AMC): "Errors occurred when using [server] in the discovery process."

This doesn’t occur if you add the local computer (as opposed to a remote one) as the server for discovery.

Double-clicking on the error message gives further details. The things to check are as follows:

  1. Check that the Citrix MFCOM Service is running on the remote server you have specified for discovery.

  2. Check that the “COM+ Network Access” Role is installed on the remote server. If not, use Server Manager to add it.

  3. Make sure that the account you are running the AMC as is a member of the “Distributed COM Users” group on the remote server. This is an easy thing to overlook, especially if you are publishing out the AMC to non-admins via XenApp.

  4. If discovery still fails, verify that the necessary traffic is not being blocked by any firewalls between the server running the AMC and the remote server.
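Incidentally, the first and third checks lend themselves to a quick PowerShell once-over. A sketch, assuming a remote server named CTXSRV01 (hypothetical; "MFCom" is the MFCOM service name on the builds I've seen, so verify it on yours):

```powershell
# Check 1: is the Citrix MFCOM Service running on the remote server?
Get-Service -ComputerName "CTXSRV01" -Name "MFCom" |
    Select-Object Status, DisplayName

# Check 3: who is in the local "Distributed COM Users" group on that server?
$Group = [ADSI]"WinNT://CTXSRV01/Distributed COM Users,group"
$Group.Invoke("Members") | ForEach-Object {
    $_.GetType().InvokeMember("Name", "GetProperty", $null, $_, $null)
}
```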

I have also seen the additional detail read: "Enterprise Services are not enabled on this server ([server]). Ensure the server is configured as an application server, and COM+ network access is enabled." This is slightly more helpful of course…!

Filed under: Citrix No Comments