Finding Locked Out Domain Accounts, Or At Least, How Not To

by Ryan 19. July 2013 13:00

I hadn't posted in a little while, so I thought I'd do a two-fer today.

You might see some advice on the internet about using the userAccountControl attribute to identify locked out domain accounts.  More specifically, the following LDAP filter:

(&(objectCategory=User)(userAccountControl:1.2.840.113556.1.4.803:=16))

The :1.2.840.113556.1.4.803: part is the bitwise AND matching rule, and the =16 part should mean "locked out" (ADS_UF_LOCKOUT), per the attribute's documentation, keeping in mind that 0x10 in hex is 16 in decimal.

DON'T USE IT.  It doesn't work. I don't think it has ever worked. Apparently it was just an idea that some person on the AD design team had that never got implemented. If anyone has any history on this bit, and if it has ever worked in the past, I would love to hear about it. All I know is that it does not work now.

You can easily verify for yourself that it doesn't work with Powershell:

Get-ADUser -LDAPFilter "(&(objectCategory=User)(userAccountControl:1.2.840.113556.1.4.803:=16))"

You'll probably get no results at all, or at best a very inaccurate set. (Other userAccountControl flags, however, definitely work and can be used reliably. Just not this one.)
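For contrast, here is the same style of filter using a bit that the server does honor - bit 2, ACCOUNTDISABLE - which accurately returns all disabled accounts:

# Bit 2 (ACCOUNTDISABLE) is evaluated server-side, unlike bit 16 (LOCKOUT):
Get-ADUser -LDAPFilter "(&(objectCategory=User)(userAccountControl:1.2.840.113556.1.4.803:=2))"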

Here is another LDAP filter you will often see on the web for finding locked out accounts:

(&(objectCategory=Person)(objectClass=User)(lockoutTime>=1))

DON'T USE THAT EITHER. That will return too many results. The reason is that lockoutTime is not reset until the next time the person successfully logs on. So when an account is locked out, its lockoutTime attribute gets set; then, once the domain's lockout duration expires, the account is no longer technically locked out, but lockoutTime remains populated until the next time that user logs on. Now if you're thinking that we should filter this list down to only the users whose lockoutTime is less than [domain lockout duration] minutes in the past, then you're on the right track. Those would be the users who are still really locked out.

When I type  Search-ADAccount -LockedOut , however, I am given what seems to be an accurate count of the users that are currently locked out. I should point out that when working in a large AD environment, I think it's best to query your PDC emulator directly whenever possible, because the PDC emulator will always have the most up-to-date information about account lockouts. From a Microsoft article about urgent replication:

... account lockout is urgently replicated to the primary domain controller (PDC) emulator role owner and is then urgently replicated to the following:

• Domain controllers in the same domain that are located in the same site as the PDC emulator.

• Domain controllers in the same domain that are located in the same site as the domain controller that handled the account lockout.

• Domain controllers in the same domain that are located in sites that have been configured to allow change notification between sites (and, therefore, urgent replication) with the site that contains the PDC emulator or with the site where the account lockout was handled. These sites include any site that is included in the same site link as the site that contains the PDC emulator or in the same site link as the site that contains the domain controller that handled the account lockout.

In addition, when authentication fails at a domain controller other than the PDC emulator, the authentication is retried at the PDC emulator. For this reason, the PDC emulator locks the account before the domain controller that handled the failed-password attempt if the bad-password-attempt threshold is reached.
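So when in doubt, aim the query directly at the PDC emulator. A quick sketch, assuming the ActiveDirectory module is loaded:

# The PDC emulator always has the freshest lockout information:
$PDC = (Get-ADDomain).PDCEmulator
Search-ADAccount -LockedOut -Server $PDC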

If you follow my earlier instructions on how to peek inside the Search-ADAccount cmdlet itself, you'll see that Microsoft itself is keying on the account lockout time to perform this search.

Finally, I can reproduce the same behavior of  Search-ADAccount -LockedOut  with the following bit of Powershell, given that I know my domain's account lockout duration:

# You can read the lockout duration right out of the default domain policy:
$AccountLockoutDuration = (Get-ADDefaultDomainPasswordPolicy).LockoutDuration.TotalMinutes
Get-ADUser -LDAPFilter "(&(objectCategory=Person)(objectClass=User)(lockoutTime>=1))" -Properties LockoutTime | 
Select Name, @{n="LockoutTime";e={[DateTime]::FromFileTime($_.LockoutTime)}} | 
Sort LockoutTime -Descending | ? { $_.LockoutTime -gt (Get-Date).AddMinutes($AccountLockoutDuration * -1) }

That gives the exact same results as  Search-ADAccount -LockedOut .

So Long, TechNet Subscription.

by Ryan 1. July 2013 18:45

It's been fun.  Got the email this afternoon.  I'm not sure that I'll be able to do much lab stuff anymore.  Which means less content for this blog.  Less ability to answer questions on Server Fault, since I could quickly verify things in the lab.  Less ability for me to take the things I've learned and tested and use them for the benefit of my employers.  Less ability to try out the extremely atypical scenarios that I'd get asked about in the usual tricky Microsoft exam yet never see in a production environment.

I guess I can still get stuff from the TechNet Evaluation Center, but as far as I can tell I'll have to promptly rebuild my entire lab every 6 months, which makes me less inclined to bother.

I'll think of something.  Man, I never thought I'd be saying this, but sometimes I feel like things would be a lot easier on me if I just specialized in Linux stuff.

Psst, You Want A Script To Backup Your Lab VMs?

by Ryan 23. June 2013 09:33

I can hook you up...

So I'm always doing a lot of lab work with Hyper-V virtual machines. Every once in a while I want to just save the state of the entire lab all at once and back it up to a safe storage volume.  I suppose I could set up Windows Server Backup on each of the VMs, and find some disk to use as a pass-through disk for one of the virtual machines and then share that so that the VMs could back up to the network share... but that's a ton of hassle.

How about I just save the state of all the VMs, and export them directly to my backup volume, then resume the VMs, all from the hypervisor?  As a scheduled task, perhaps?

About 10 minutes in the Powershell ISE and I've done just that.  A couple of things to be warned of: first, don't do this in production. The virtual machines are frozen while they're being exported, and it can take several minutes to export a VM. Second, make sure you are running with full Administrator privileges, or else cmdlets such as Get-VM will silently return nothing.

 

# Ryan Ries, 2013
# Backs up some lab VMs. Takes several minutes at least.

[String]$BackupPath = "D:\Backups\Hyper-V"
[String]$ErrorLog   = "D:\Backups\Hyper-VBackups.log"
$Start = Get-Date
"$(Get-Date) - Hyper-V Backups Starting." | Out-File $ErrorLog -Append
Try
{
    Get-Childitem $BackupPath -Recurse -Force | Remove-Item -Recurse -Force -ErrorAction Stop
}
Catch
{
    "$(Get-Date) - Error during Get-Childitem or Remove-Item: $($_.Exception.Message)" | Out-File $ErrorLog -Append
    Return
}
Try
{
    Get-VM -ErrorAction Stop | Stop-VM -ErrorAction Stop -Save
}
Catch
{
    "$(Get-Date) - Error during Get-VM Stop-VM -Save: $($_.Exception.Message)" | Out-File $ErrorLog -Append
    Return
}
Try
{
    Get-VM -ErrorAction Stop | Export-VM -ErrorAction Stop -Path $BackupPath
}
Catch
{
    "$(Get-Date) - Error during Export-VM: $($_.Exception.Message)" | Out-File $ErrorLog -Append
    Return
}
Try
{
    Get-VM -ErrorAction Stop | Start-VM -ErrorAction Stop
}
Catch
{    
    "$(Get-Date) - Error during Start-VM: $($_.Exception.Message)" | Out-File $ErrorLog -Append
    Return
}
$End = Get-Date
"$(Get-Date) - Hyper-V Backups completed in $([Math]::Round((New-TimeSpan $Start $End).TotalMinutes)) minutes." | Out-File $ErrorLog -Append

A Few Powershell Commands That Have Been Useful To Me Lately

by Ryan 16. June 2013 19:30

I've been building lots of new Server 2012 machines lately, which means lots of Server Core, which means lots of command line interface, which means lots of Powershell.

So, a few quick tricks I've found useful the past couple days.

foreach($Computer In Get-ADComputer -Filter *) { Invoke-Command -ComputerName $Computer.Name { Set-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Control\Filesystem -Name NtfsDisable8dot3NameCreation -Value 1 } }

This nifty one-liner grabs all the computer names from Active Directory and remotely disables 8.3 file name creation on each machine.  It's good for filesystem performance, as Windows no longer needs to maintain records of old DOS-style names like FILENA~1.TXT for every file with a long name. Better yet, the Best Practices Analyzer will stop complaining about it once you disable 8.3 file name creation. Unfortunately, MAX_PATH in Windows is still 260 characters. When you hit that limit, you will be extremely annoyed. .NET, and thus Powershell, are especially flummoxed by really long file paths. The Windows API does technically allow you to exceed MAX_PATH by using the \\?\ path prefix, but you also lose a lot of sanitizing and security features when you perform that bypass.  Note that you need to reboot the machine after changing the 8.3 file name policy.
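You can verify that the setting took on a given machine with fsutil, which has had an 8dot3name subcommand since Windows 7/Server 2008 R2:

# Query the current 8.3 name creation behavior, globally and per-volume:
fsutil 8dot3name query
fsutil 8dot3name query C: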

New-NetIPAddress -AddressFamily IPv6 -InterfaceIndex 13 -IPAddress "2001:2c98:ee9c:279b::3" -PrefixLength 64

 Get used to setting your IP configs with Powershell. Not just IPv4, but IPv6 too. Hmm, speaking of TCP/IP configurations, what else do I need besides an IP address? Oh, yeah:

Set-DnsClientServerAddress -Addresses fd58:2c98:ee9c:279b::1,fd58:2c98:ee9c:279b::2 -InterfaceIndex 13

DNS servers! And of course, if you need to know the index of the network adapter you're working on, it's as simple as Get-NetAdapter.
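For example, this shows you everything you need to plug into the commands above:

# ifIndex is the InterfaceIndex value the other Net* cmdlets want:
Get-NetAdapter | Select-Object Name, InterfaceDescription, ifIndex, Status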

 

Which Hyper-V VM Worker Process Belongs To Which VM, PART 2! (And also about IDE and SCSI controllers)

by Ryan 3. June 2013 17:38

It is with great shame that I admit that the information in yesterday's post is not accurate.  Yesterday, I spoke about the "Hyper-V Virtual IDE Controller (Emulated)" performance counters as though they would give you I/O statistics for virtual machines using the virtual IDE controller.

Virtual IDE Controller

Au contraire!

There are only two scenarios in which those counters will show anything useful: when the VM very first boots up, for about one second before the OS is completely loaded, or when the Hyper-V VM integration tools are not installed.

See, when the virtual machine has its integration tools installed, then even though the configuration screen above says "IDE Controller," it's no longer an emulated IDE controller. It's a synthetic virtual controller, just like you would get if you used the SCSI controller on the virtual machine.

If the VM did not have the integration tools installed, and was thus using the actual emulated IDE controller, the I/O would need extra processing steps and would travel through the VM's vmwp.exe process on the Hyper-V host.  However, if the enlightenments are installed on the VM, the I/O travels through the VMBus instead.  This makes the I/O faster, and it explains why, when using synthetic devices as opposed to emulated ones, you can no longer see I/O happening in vmwp.exe (it goes straight to the "System" process on the root partition), nor will you see anything but zeros in the "Hyper-V Virtual IDE Controller (Emulated)" performance counters.

To reassure you that I'm not just making s*$@ up this time, this MSDN blog post says basically the same thing. And so does this even better one.

"The next thing to notice is the “Fast Path Filter”.  This is a filter driver that gets installed on all disk objects in the virtual machine – whether they are IDE or SCSI.  It allows us to pass directly to the VMBUS based path for everything except low level disk operations (like partitioning a disk)."

So there you have it. Unfortunately, this means those counters I mentioned are pretty much worthless now. But all is not lost!  You can still use the "Hyper-V Virtual Storage Device" counters; however, those counters measure how much I/O is being done against the virtual machine's VHD/VHDX files, not the precise I/O being done by the virtual machine itself.
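To poke at those counters yourself (the instance names correspond to the attached VHD/VHDX files, as far as I can tell):

# List the available instances in the counter set:
(Get-Counter -ListSet "Hyper-V Virtual Storage Device").PathsWithInstances
# Sample aggregate read/write throughput across all virtual storage devices:
Get-Counter "\Hyper-V Virtual Storage Device(*)\Read Bytes/sec", "\Hyper-V Virtual Storage Device(*)\Write Bytes/sec"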

Not as good, in my opinion, but it's still something.

Also, I'd like to thank Chris S from Serverfault for enlightening me (pun fully intended) and putting me on the right path.

Which Hyper-V VM Worker Process Belongs To Which VM?

by Ryan 2. June 2013 15:13

Warning! This info is not totally accurate! Please read the next day's post for corrections.

I wanted to track down which virtual machine on my Hyper-V host was causing an inordinate amount of disk I/O on my host without logging in to each one.

In Hyper-V, on the root partition you will see one instance of vmms.exe (the VM Management Service), and then a separate instance of vmwp.exe (VM Worker Process) for each virtual machine that is currently running.

Notice that the vmwp.exe processes run under a special user context which contains the GUID that you would find in that virtual machine's configuration files on disk. The same GUID is also supplied to vmwp.exe as an argument as the process is created, like so:

"C:\Windows\System32\vmwp.exe" c83cdee4-1a6d-4f51-9d05-e57df8403ed4

It's not immediately apparent which vmwp.exe belongs to which VM. Furthermore, the I/O charged against each individual vmwp.exe process is not necessarily indicative of what's actually happening on the virtual machine that it's hosting. So we'll need to go to the performance counters instead. The "Hyper-V Virtual IDE Controller (Emulated)" set of counters should do the trick, assuming you're using the IDE controller on your VMs.

I have all the information I need now to determine which virtual machine is responsible for the large amount of I/O... but I didn't want to just do it manually. Why not write a reusable tool that can also be run on Core servers with no GUI?

So this simple script, when run on a Hyper-V host, does just that. The output looks like this:

Get-VMPidAndIO | Out-GridView

My VMs were all idle then, hence all the zeros. The screenshot loses all its dramatic flair, but whatever.

The script could easily be enhanced by supporting remote Hyper-V hosts, alternate credentials, etc. But what do you want for 30 minutes?

#Requires -Version 3
function Get-VMPidAndIO
{
<#
.SYNOPSIS
	Gets the Process ID and I/O statistics of each virtual machine running on the Hyper-V host.
.DESCRIPTION
	Gets the Process ID and I/O statistics of each virtual machine running on the Hyper-V host.
    Currently only works for VMs using virtual IDE controllers.
    Requires Powershell 3 at a minimum.
.LINK
    http://myotherpcisacloud.com
.NOTES
    Written by Ryan Ries, June 2013.
    ryan@myotherpcisacloud.com
#>
    BEGIN
    {
        Try
        {
            $VMProcesses = Get-CimInstance -Query "Select ProcessId,CommandLine From Win32_Process Where Name ='vmwp.exe'" -ErrorAction Stop
        }
        Catch
        {
            Write-Error $_.Exception.Message
            Return
        }
    }
    PROCESS
    {

    }
    END
    {
        Foreach($Process In $VMProcesses) 
        {
            # The VM's GUID is the last argument on the vmwp.exe command line:
            $VMName = $((Get-VM | Where Id -EQ $Process.CommandLine.Split(' ')[-1]).Name)            
            [PSCustomObject]@{PID=$Process.ProcessId;
                              VMName=$VMName; 
                              ReadBytesPerSec=[Math]::Round($(Get-Counter "\Hyper-V Virtual IDE Controller (Emulated)($VMName`:Ide Controller)\Read Bytes/sec").CounterSamples.CookedValue, 2);
                              WriteBytesPerSec=[Math]::Round($(Get-Counter "\Hyper-V Virtual IDE Controller (Emulated)($VMName`:Ide Controller)\Write Bytes/sec").CounterSamples.CookedValue, 2); }
        }

    }
}

Set Windows Update Schedule on Server 2012 Core with Local Policy

by Ryan 31. May 2013 16:46

I'm seeing a lot more Server Core deployments as 2012 adoption increases. Which I think is awesome - I love Server Core. But there are still a couple things that were completely trivial on a GUI edition of Windows that are a tad tricky on Core.

For instance, here I am on Server 2012 Core using the sconfig utility to set up Windows Automatic Updates:

sconfig

This is just a lab environment, so I don't mind automatic updates and reboots. But I have two Active Directory domain controllers in this lab, and I don't want them rebooting at the same time. So how do I change the "every day at 3:00 AM" schedule so that I can stagger the patching and reboots? On a GUI install it would be trivial, of course. Here, not so much.

The first thing I did was briefly look for a registry entry in HKLM:\Software\Microsoft\Windows\CurrentVersion\WindowsUpdate\, but I didn't see anything that looked like it would help me modify the Windows Update schedule like I wanted to.

These are domain members, so of course I could use Group Policy to do it, but I didn't want to create and link separate GPOs for each server that I wanted to have a different Windows Update schedule. Plus I wanted to figure out how to do it on non domain-joined machines as well.

Ah - Local Policies! To edit local policies on Core servers, we'll need to connect to them remotely from what I like to call "a tools machine." When you have a bunch of Server Core machines running out there, you should also keep one machine around that has a full GUI install. I personally like to install all my tools (like RSAT) on that one machine, and use it to centrally manage all of the Core machines remotely.

Here I am using mmc on my tools machine to add Group Policy Object Editor snapins for several of the Server Core computers: 

Local Policies

Again, the name of the snapin is Group Policy Object Editor; target the remote machine as you add the snapin. You'll of course need RPC over TCP connectivity to the remote machine, and you'll need to modify Windows Firewall on the remote machine to allow the incoming connection. (I like to use domain Group Policies for that, so all my machines have consistent firewall settings.)

All that's left to do now is navigate to Computer Configuration > Administrative Templates > Windows Components > Windows Update, and configure the "Configure Automatic Updates" setting for the server. It allows you to modify the hour and the day of the week that Windows Update will download and install updates on that machine.
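As an aside, that policy setting just writes a few registry values under the WindowsUpdate policy key, so if you'd rather skip the mmc snapin entirely, you can set the same values directly. A sketch, run against the Core machine (e.g., via Invoke-Command):

# AUOptions 4 = auto download and schedule the install;
# ScheduledInstallDay 0 = every day (1 = Sunday ... 7 = Saturday);
# ScheduledInstallTime is the hour of the day, 0-23.
$AU = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"
New-Item -Path $AU -Force | Out-Null
Set-ItemProperty -Path $AU -Name AUOptions -Value 4
Set-ItemProperty -Path $AU -Name ScheduledInstallDay -Value 0
Set-ItemProperty -Path $AU -Name ScheduledInstallTime -Value 3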

As per Active Directory Group Policy application precedence, remember that any conflicting domain-based GPO will override settings you make to a machine's Local Policy. 

Lastly - don't forget that automatic update option 4 - "Auto download and schedule the install" - is the only option here that applies to Server Core. The others won't work because Server Core can't "notify" the user of updates the way it could were the GUI installed.

My Entry for the Advanced Event #4 of the 2013 Scripting Games

by Ryan 21. May 2013 12:35

We're on the downhill stretch now. Honestly, I'm kind of glad.  These scripts are fun to write, and great practice, but it's work.  I can tell that I'm not the only one losing steam, as the number of votes on other people's entries has gone way down.  Anyway, about the script I wrote: I like that the #Requires -Modules statement at the top automatically loads the AD module for you if it's not already loaded. I still didn't do the BEGIN/PROCESS/END blocks this time either, though I fail to see how that matters at all, since I'm not dealing with pipeline input... but I'm sure I'll still get crowd scores of 1 and 2 stars for it.  That and dudes with 640x480 monitors going "some of your code goes off the screen why don't you splat!?"  :P

#Requires -Version 3
#Requires -Modules ActiveDirectory
Function Get-RandomADUser
{
<#
.SYNOPSIS
    Retrieves random users from Active Directory and generates an HTML report.
.DESCRIPTION
    Retrieves random users from Active Directory, generates an HTML report,
    and then returns the users to the pipeline. 
    Use the -Verbose switch if you want to see console output.
    This Cmdlet requires PS 3 and the Active Directory module. The AD module
    will be loaded automatically if it isn't already.
.PARAMETER Count
    The number of random users to get from Active Directory. Minimum is 1,
    maximum is Int16.MaxValue (32767) and the default is 20.
.PARAMETER Filename
    The filename to write the HTML report to. The filename must end in
    html or htm. The default is .\RandomADUsers.html.
.EXAMPLE
    Get-RandomADUser
 
    Gets 20 random users from AD, outputs a report to .\RandomADUsers.html.
.EXAMPLE
    Get-RandomADUser -Count 100 -Filename C:\reports\rpt.html
 
    Gets 100 random users from AD, outputs a report to C:\reports\rpt.html.
#>
 
    [CmdletBinding()]
    Param([Parameter()]
            [ValidateRange(1, [Int16]::MaxValue)]
            [Int16]$Count = 20,
          [Parameter()]
            [ValidateScript({($_.ToLower().Split('.')[-1] -EQ "html" -OR $_.ToLower().Split('.')[-1] -EQ "htm") -AND (Test-Path -IsValid $_)})]
            [String]$Filename = ".\RandomADUsers.html") 
 
    Try
    {
        Write-Verbose "Retrieving users from Active Directory..."
        $Users = Get-ADUser -Filter * -Properties Department, Title, LastLogonDate, PasswordLastSet, Enabled, LockedOut -ErrorAction Stop | Get-Random -Count $Count
        Write-Verbose "$($Users.Count) users retrieved from Active Directory."
    }
    Catch
    {
        Write-Error "Unable to retrieve users from Active Directory: $($_.Exception.Message)"
        Return
    }   
    Try
    {
        Write-Verbose "Generating report $Filename..."
        $Header = @'
        <title>Random Active Directory User Audit</title>
            <style type=""text/css"">
                <!--
                    TABLE { border-width: 1px; border-style: solid;  border-color: black; }
                    TD    { border-width: 1px; border-style: dotted; border-color: black; }
                -->
            </style>
'@
        $Pre  = "<p><h2>Random Active Directory User Audit for $Env:USERDNSDOMAIN</h2></p>"
        $Post = "<hr><p style=`"font-size: 10px; font-style: italic;`">This report was generated on $(Get-Date)</p>"
        $Users | ConvertTo-HTML -Property SamAccountName, Department, Title, LastLogonDate, PasswordLastSet, Enabled, LockedOut -Head $Header -PreContent $Pre -PostContent $Post | Out-File $Filename     
        Return $Users
    }
    Catch
    {
        Write-Error "Unable to generate report: $($_.Exception.Message)"
    }
}

Active Directory List Object Mode

by Ryan 20. May 2013 12:00

This is something I've been wanting to blog about for a long time, but have been putting it off because I knew it might turn into a long, time-consuming post. Well, it's time to bite the bullet and get started.

We were facing a bit of a problem in one of our managed hosting environments. We had a high-volume, multitenant Active Directory being used by dozens of different customers. There was a business requirement in this domain that customers not be able to read from one another's organizational units, for the sake of the customers' mutual privacy. Things seemed to be working well for a while, but one day it appeared that customer users logging on to many of the client computers were failing to process Group Policy upon logon:

Event ID: 1101
Source: Userenv
User: NT Authority\System
Description: Windows cannot access the object OU=Customers, DC=contoso, DC=com in Active Directory. The access to the object may be denied. Group Policy processing aborted.

To start troubleshooting, I copied one of the affected user accounts and used it to log in to one of their machines, and I was able to reproduce the issue. Upon trying to update Group Policy with gpupdate.exe, I noticed that the computer configuration was updating fine, while only the user portion of the update failed, and the event 1101 was produced.

The basic layout of the OU structure in the domain was this:

    
CONTOSO.COM
    |
    + Customers (OU)
          |
          + Customer1 (OU)
          |
          + Customer2 (OU)
          |
          + ...

Still using my customer-level user account, I noticed that I was able to browse the contents of my own Customer1 OU, but I was not able to browse the contents of any other OU. The permissions on these OUs had certainly been modified.

In fact, the Read permission for the Authenticated Users security group had been removed from the access control list on the Customers OU. That explains the event 1101s and the GPO processing failures. From Microsoft:

[GPO processing fails] when the Group Policy engine cannot read one of the OUs.

The Group Policy engine must be able to read all OUs from the level of the user object or the computer object to the level of the domain root object. Also, the Group Policy engine must be able to read the domain root object and the site object of the computer. This is because these objects may contain links to group policies. If the Group Policy engine cannot read one of these OUs, the events that are mentioned in the "Symptoms" section will be logged.

So in satisfying the business requirement that no customer be allowed to list the contents of another customer's OU, Group Policy processing had been broken. But if we simply gave Authenticated Users their Read permission back on the Customers OU, they would get to browse all the other customers' OUs as well.

We needed the best of both worlds.

This Microsoft article would lead you to believe that if a security principal just has the Read gpLink and Read gpOptions access control entries, then GPO processing should work fine.

But that's not enough. The four ACEs that were needed on the Customers OU were:

  • Read gpLink
  • Read gpOptions
  • Read cn
  • Read distinguishedName

Now we're making progress, but we're still not out of the woods. Giving Authenticated Users the List Contents permission on the Customers OU would allow them to see the names of all the other customers' OUs, although they'd show up as "Unknown" object types and couldn't have their respective contents listed. But that's a messy solution in my opinion and doesn't fully satisfy the requirement. Customer1 shouldn't even be aware of Customer2's existence.

There's one last piece of the puzzle missing, and that brings me to List Object Mode.

List Object Mode is one strategy available to Active Directory administrators for hiding certain bits of data from certain users. List Object mode has to be enabled manually; it's turned off by default. To enable it, set the value of the dsHeuristics attribute (found on the CN=Directory Service,CN=Windows NT,CN=Services object in the Configuration partition) to 001 using ADSI Edit - the third character is the one that controls List Object mode - like so:

dsHeuristics
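If you'd rather flip the bit from PowerShell than from ADSI Edit, here's a sketch. (Careful: if dsHeuristics already contains other characters in your forest, you only want to change the third one, not clobber the whole string.)

# dsHeuristics lives on the Directory Service object in the Configuration partition:
$DS = "CN=Directory Service,CN=Windows NT,CN=Services,$((Get-ADRootDSE).configurationNamingContext)"
Set-ADObject -Identity $DS -Replace @{dSHeuristics='001'}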

Now you will have a new access control entry in the list on objects in your forest: List Object. The ACE was actually there before, but Active Directory doesn't enforce it by default.

List Object Mode is a form of access-based enumeration (not to be confused with file system ABE), where items are not displayed to users who do not have List Object permissions on them. By default, when a user has the List Contents permission on an OU and queries that OU, he or she is given a list of all child OUs in that parent OU, even if the user doesn't have read access to those child OUs.  They show up in ADUC as "Unknown" object types and get that little blank page for an icon, which is the Microsoft universal symbol for "wth is this?"

With List Object permissions in use after enabling the mode as just described, Active Directory evaluates the permissions of all the child objects under the queried object before returning the results to the user. Unless the user has the List Object permission on a child object, it is omitted from the results. So now we have a customer user who is able to read just his or her own OU, and the other customer OUs are completely hidden from view.
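You can grant the right in the GUI ACL editor, or with dsacls, where LO is the List Object right. (The OU path and group name below are just examples for illustration.)

# Grant Customer1's own group the List Object right on its OU:
dsacls "OU=Customer1,OU=Customers,DC=contoso,DC=com" /G "CONTOSO\Customer1-Users:LO"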

And no more Group Policy failures due to access denied, either.

So are there disadvantages to enabling and using List Object mode in your domain? Yes, there are. Even though it may be appropriate for your environment, List Object Mode is not for everybody, and it's not a decision that should be made lightly:

  • Significantly increased access control checks on LDAP queries = busier domain controllers.
  • You may need to rethink your entire User and Computer organization strategy to accommodate for how the new permissions work.
  • It's a less common configuration that fewer people are familiar with. Administrative complexity++. You need to fully document the change and make sure every administrator is aware of it.

So there you have it. Now go impress your friends with your knowledge of AD List Object Mode!

My Entry for the Advanced Event #3 of the 2013 Scripting Games

by Ryan 14. May 2013 09:09

Halfway done.  Here's my third entry for this year's Powershell games.  I used a workflow this time, mostly in an attempt to garner favor from the voters by using new features exclusive to PS3.  Even though the multithreading with jobs that I did in the last event is a neat idea, it really doesn't perform very well.  The workflow will likely perform better, though I don't know how well it will throttle thread creation if I hand it a list of 500 computers.

#Requires -Version 3
Function New-DiskSpaceReport
{
	<#
		.SYNOPSIS
			Gets hard drive information from one or more computers and saves it as HTML reports.
		.DESCRIPTION
			Gets hard drive information from one or more computers and saves it as HTML reports.
			The reports are saved to the specified directory with the name of the computer in
			the filename. The list of computers is processed in parallel for increased speed.
			Use the -Verbose switch if you want to see console output, which is very useful if you
			are having problems generating all the desired reports.
		.PARAMETER ComputerName
			One or more computer names from which to get information. This can be a
			comma-separated list, or a file of computer names one per line. The alias
			of this parameter is -Computer. The default value is the local computer.
		.PARAMETER Directory
			The directory to write the HTML files to. E.g., C:\Reports. The directory
			must exist. The default is the current working directory.
		.INPUTS
			[String[]]$ComputerName
			This is an array of strings representing the hostnames of the computers
			for which you want to retrieve information. This can also be supplied by
			(Get-Content file.txt). This can be piped into the cmdlet.
		.INPUTS
			[String]$Directory
			The directory to save the HTML reports to. The directory must exist.
		.OUTPUTS
			HTML files representing the information obtained from all
			the computers supplied to the cmdlet.
		.EXAMPLE
			New-DiskSpaceReport
			
			This will generate a report for the local computer and output the HTML file to
			the current working directory.			
		.EXAMPLE
			New-DiskSpaceReport -ComputerName server01,server02,server03 -Directory C:\Reports
			
			This will generate three HTML reports for the servers and save them in the C:\Reports
			directory.
		.EXAMPLE
			New-DiskSpaceReport -Computer (Get-Content .\computers.txt)
			
			This will generate HTML reports for all the computers in the computers.txt file and
			save the reports in the current working directory.
		.EXAMPLE
			,(Get-Content .\computers.txt) | New-DiskSpaceReport -Directory C:\Reports
			
			This will generate HTML reports for all the computers in the computers.txt file and
			save the reports in C:\Reports. Please note the leading comma in this example.
		.NOTES
			Scripting Games 2013 Advanced Event 3
	#>
	[CmdletBinding()]
	Param([Parameter(ValueFromPipeline=$True)]
			[Alias('Computer')]
			[String[]]$ComputerName = $Env:Computername,
		  [Parameter()]
			[ValidateScript({Test-Path $_ -PathType Container})]
			[String]$Directory = (Get-Location).Path)
	
	Write-Verbose -Message "Writing reports to $Directory..."
	
	WorkFlow BuildReports
	{
		Param([String[]]$Computers, [String]$Directory)
		ForEach -Parallel ($Computer In $Computers)
		{			
			InlineScript
			{				
				Write-Verbose -Message "Generating report for $Using:Computer..."
				$Header = @'
				<title>Disk Free Space Report</title>
				<style type=""text/css"">
					<!--
						TABLE { border-width: 1px; border-style: solid;  border-color: black; }
						TD    { border-width: 1px; border-style: dotted; border-color: black; }
					-->
				</style>
'@
				$Pre  = "<p><h2>Local Fixed Disk Report for $Using:Computer</h2></p>"
				$Post = "<hr><p style=`"font-size: 10px; font-style: italic;`">This report was generated on $(Get-Date)</p>"
				Try
				{					
					$LogicalDisks = Get-WMIObject -Query "SELECT * FROM Win32_LogicalDisk WHERE DriveType = 3" -ComputerName $Using:Computer -ErrorAction Stop | Select-Object -Property DeviceID,@{Label='SizeGB';Expression={"{0:N2}" -F ($_.Size/1GB)}},@{Label='FreeMB';Expression={"{0:N2}" -F ($_.FreeSpace/1MB)}},@{Label='PercentFree';Expression={"{0:N2}" -F (($_.Freespace/$_.Size)*100)}};
					$LogicalDisks | ConvertTo-HTML -Property DeviceID, SizeGB, FreeMB, PercentFree -Head $Header -PreContent $Pre -PostContent $Post | Out-File -FilePath (Join-Path -Path $Using:Directory -ChildPath "$Using:Computer.html")
					Write-Verbose -Message "Report generated for $Using:Computer."
				}
				Catch
				{
					Write-Verbose -Message "Cannot build report for $Using:Computer. $($_.Exception.Message)"
				}
			}
		}
	}
	
	If($PSBoundParameters['Verbose'])
	{
		BuildReports -Computers $ComputerName -Directory $Directory -Verbose
	}
	Else
	{
		BuildReports -Computers $ComputerName -Directory $Directory
	}
}

About Me

Name: Ryan Ries
Location: Texas, USA
Occupation: Systems Engineer 

I am a Windows engineer and Microsoft advocate, but I can run with pretty much any system that uses electricity.  I'm all about getting closer to the cutting edge of technology while using the right tool for the job.

This blog is about exploring IT and documenting the journey.


Blog Posts (or Vids) You Must Read (or See):

Pushing the Limits of Windows by Mark Russinovich
Mysteries of Windows Memory Management by Mark Russinovich
Accelerating Your IT Career by Ned Pyle
Post-Graduate AD Studies by Ned Pyle
MCM: Active Directory Series by PFE Platforms Team
Encodings And Character Sets by David C. Zentgraf
Active Directory Maximum Limits by Microsoft
How Kerberos Works in AD by Microsoft
How Active Directory Replication Topology Works by Microsoft
Hardcore Debugging by Andrew Richards
The NIST Definition of Cloud by NIST


MCITP: Enterprise Administrator

VCP5-DCV

Server Fault profile: Ryan Ries

LOPSA

GitHub: github.com/ryanries

 

I do not discuss my employers on this blog and all opinions expressed are mine and do not reflect the opinions of my employers.