Powershell Code That Literally Writes Itself - Automaception

by Ryan 13. August 2014 17:08

I amused myself with Powershell today and thought I'd share. I might have also named today's post, "Write-Host does have a use after all!"

Today, I had the task of synchronizing the country attributes of thousands of users from a non-Microsoft LDAP server into multiple Active Directories.  This non-Microsoft LDAP server stored the country attribute ("c" in LDAP parlance) of each user as an ISO 3166 alpha-3 three-letter abbreviation.  I wanted to convert that into the ISO 3166 alpha-2 two-letter notation before I imported it into Active Directory, as well as fill out the rest of the country-related attributes at the same time.

As most AD administrators know, when you want to programmatically set a user's country, you have to make the change in three different places if you want to be thorough.  You have to change co (Text-Country), c (Country-Name), and countryCode (Country-Code).  Those three fields are a free-form text entry, an ISO 3166 alpha-2 or alpha-3 abbreviation, and a numeric value, respectively.
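For a single user, setting all three in agreement looks like this sketch (the United States is just an example; $User is a placeholder identity):

```powershell
# Sketch: keep all three country attributes in agreement for one user.
# 'US' is the ISO 3166 alpha-2 code and 840 the ISO 3166 numeric code for the United States.
Set-ADUser -Identity $User -Replace @{ co = 'United States'; c = 'US'; countryCode = 840 }
```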

So the first thing I do is track down the ISO 3166 list.  It looks like this:

AALAND ISLANDS                                  AX      ALA     248
AFGHANISTAN                                     AF      AFG     004
ALBANIA                                         AL      ALB     008
ALGERIA                                         DZ      DZA     012
...

And on and on... for 240 countries.

I was thinking I'd want a Switch statement in my script... the idea is that I Switch($x) where $x is the three-letter country abbreviation that came from the LDAP server. Visions flashed through my mind of me staying up all night writing this monotonous switch block with 240 cases in it.  And then I thought, "Hey, Powershell is the most powerful automation framework there is. Surely I can use it to automate the automation!"  And so I set out to have my Powershell script literally write itself.

First, save that ISO 3166 country code list to a text file. That the list is already in a fixed-width format is going to make this extremely simple.

$Countries = Get-Content C:\Users\Ryan\Desktop\3166.txt

Write-Host "Switch (`$Transaction.NewValue.ToUpper().Trim()) #E.g. 'USA' or 'BEL'"
Write-Host "{"
Foreach ($Line In $Countries)
{
    [String]$CountryCode = $Line[64] + $Line[65] + $Line[66]
    [String]$ThreeLetter = $Line[56] + $Line[57] + $Line[58]
    [String]$TwoLetter   = $Line[48] + $Line[49]
    [String]$FreeForm    = $Line.Substring(0, 46).Trim().Replace("'", $Null)

    Write-Host "    '$ThreeLetter'"
    Write-Host "    {"
    Write-Host "        Set-ADUser -Identity `$Transaction.ObjectGUID -Replace @{ countryCode = $CountryCode } -ErrorAction Stop"
    Write-Host "        Set-ADUser -Identity `$Transaction.ObjectGUID -Replace @{ co = '$FreeForm' } -ErrorAction Stop"
    Write-Host "        Set-ADUser -Identity `$Transaction.ObjectGUID -Replace @{ c = '$TwoLetter' } -ErrorAction Stop"
    Write-Host "    }"
}
Write-Host "    Default { Write-Error `"`$(`$Transaction.NewValue) was not recognized as a country.`" }"
Write-Host "}"

When I ran the above script, it printed out all the code for my massive switch block, so that all I had to do was copy the text out of the Powershell window, and paste it into the middle of the script I was working on.  It came out looking like this:

Switch ($Transaction.NewValue.ToUpper().Trim()) #E.g. 'USA' or  'BEL'
{
    'ALA'
    {
        Set-ADUser -Identity $Transaction.ObjectGUID -Replace @{ countryCode = 248 } -ErrorAction Stop
        Set-ADUser -Identity $Transaction.ObjectGUID -Replace @{ co = 'AALAND ISLANDS' } -ErrorAction Stop
        Set-ADUser -Identity $Transaction.ObjectGUID -Replace @{ c = 'AX' } -ErrorAction Stop
    }
    #
    # ... 238 more countries ...
    #
    'ZWE'
    {
        Set-ADUser -Identity $Transaction.ObjectGUID -Replace @{ countryCode = 716 } -ErrorAction Stop
        Set-ADUser -Identity $Transaction.ObjectGUID -Replace @{ co = 'ZIMBABWE' } -ErrorAction Stop
        Set-ADUser -Identity $Transaction.ObjectGUID -Replace @{ c = 'ZW' } -ErrorAction Stop
    }
    Default { Write-Error "$($Transaction.NewValue) was not recognized as a country." }
}

And there we have it.  Now I don't have any employees from Burkina Faso, but it's nice to know that I'd be able to classify them if I did.  Now with all the time I saved myself from having to type all that out by hand, I figured I'd write a blog entry about it.
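In hindsight, there's also a data-driven alternative that skips the code generation entirely: parse the same fixed-width file at runtime into a hashtable keyed on the three-letter code, and feed the matching entry straight to -Replace. A sketch, using the same column offsets as the generator above:

```powershell
# Sketch: build a lookup table from the fixed-width ISO 3166 file at runtime.
$CountryTable = @{}
Foreach ($Line In (Get-Content C:\Users\Ryan\Desktop\3166.txt))
{
    $CountryTable[$Line.Substring(56, 3)] = @{
        co          = $Line.Substring(0, 46).Trim().Replace("'", '')
        c           = $Line.Substring(48, 2)
        countryCode = [Int]$Line.Substring(64, 3)
    }
}
# Then the entire 240-case switch block collapses to a single line:
# Set-ADUser -Identity $Transaction.ObjectGUID -Replace $CountryTable[$Transaction.NewValue.ToUpper().Trim()] -ErrorAction Stop
```

The generated switch is arguably easier to eyeball, but the lookup table means you never have to regenerate code when the ISO list changes.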

Be the Master of the LastLogonTimestamp Attribute with S4U2Self

by Ryan 19. July 2014 09:07

I've written a little bit about the LastLogonTimestamp/LastLogonDate attribute here, and of course there is AskDS's notorious article on the subject here, but today I'm going to give you a handy little tip that I don't think I have mentioned before.

If you're an Active Directory administrator, chances are you're interested or have been interested in knowing if a given account is "stale," meaning that the account's owner has not logged in to the domain in some time.  (Keep in mind that an account could be either a user or a computer as it relates to Active Directory.)  You, like many sysadmins, might have some script or automated process that checks for stale accounts using a command-line approach, such as:

dsquery user -inactive 10

(note that dsquery's -inactive parameter is measured in weeks), or Powershell's extremely flexible:

Get-ADUser -Filter * -Properties LastLogonDate |
    ? { $_.Enabled -AND $_.LastLogonDate -LT (Get-Date).AddDays(-90) }

And then you take action on those inactive accounts, such as moving them to an "Inactive Users" OU, or disabling their accounts, or sending a reminder email to the account holder reminding them that they have an account in this domain, etc.
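As a concrete sketch of that kind of cleanup (the OU path is a made-up example, and Search-ADAccount does the date math for you):

```powershell
# Sketch: disable user accounts inactive for 90+ days and park them in a holding OU.
# The target OU below is a placeholder - substitute your own.
Search-ADAccount -AccountInactive -TimeSpan '90.00:00:00' -UsersOnly |
    Where-Object { $_.Enabled } |
    ForEach-Object {
        Disable-ADAccount -Identity $_
        Move-ADObject -Identity $_ -TargetPath 'OU=Inactive Users,DC=contoso,DC=com'
    }
```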

It might be handy for you to "artificially" update the lastLogonTimeStamp of another user though.  Maybe you know that this user is on vacation and you don't want their user account to get trashed by the "garbage collector" for being inactive.  According to the documentation, lastLogonTimeStamp is only editable by the system, so forget about directly modifying the attribute the way that you would other LDAP attributes.  And of course "LastLogonDate" is not a real attribute at all - merely a calculated attribute that Powershell gives you to be helpful by converting lastLogonTimestamp into a friendly .NET DateTime object.

The S4U2Self (Service for User to Self) Kerberos extension can help us here.

Just right-click on any object, such as an OU in Active Directory or a folder in a file share, go to its Properties, then the Security tab.  Click the Advanced button.  Now go to the Effective Permissions tab.  Click the Select... button, and choose the user whose lastLogonTimestamp you want to update.  We are going to calculate the effective permissions of this inactive user:

By doing this, you are invoking the S4U2Self Kerberos extension, whereby the system will "go through the motions of Kerberos authentication and obtain a logon for the client, but without providing the client's credentials. Thus, you're not authenticating the client in this case, only making the rounds to collect the group security identifiers (SIDs) for the client."[1]

And just like that, you have updated the "Last Logon Time" on another user's behalf, without that user having to actually log on themselves.
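If clicking through the GUI feels tedious, the same kind of logon can, as far as I can tell, be triggered from Powershell: the .NET WindowsIdentity constructor that takes a UPN performs an S4U logon on the user's behalf, no password required. A sketch (the UPN is a placeholder):

```powershell
# Sketch: constructing a WindowsIdentity from a UPN invokes S4U2Self under the hood,
# collecting the user's group SIDs without their credentials. UPN is a made-up example.
$Upn = 'vacationing.user@contoso.com'
$Identity = New-Object System.Security.Principal.WindowsIdentity -ArgumentList $Upn
# The resulting token carries the user's group memberships:
$Identity.Groups.Count
```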

Disable RC4-HMAC (And Others) in Active Directory

by Ryan 15. July 2014 14:07

My long-distance pal Mathias Jessen pointed out this article today on Twitter, in which the article's author attempts to make us all shudder in fright at the idea that Microsoft Active Directory has a terrifying security vulnerability that will cause the world's corporate infrastructure to crumble and shatter humanity's socio-political status quo as script-kiddies take over the now destabilized Earth.

OK, it's not all that bad...

Of the several encryption types that AD supports for Kerberos authentication, RC4-HMAC is among the oldest and the weakest.  The reason the algorithm is still supported?  You guessed it... backwards compatibility.  The problem is that in this (perhaps ill-conceived, but hindsight's 20/20) implementation of RC4-HMAC, as outlined in RFC 4757, the encryption key that is used is the user's NT/MD4 hash itself!  What this means is that all I need is the NT hash of another user, and by forcing an AD domain controller to negotiate down to RC4-HMAC, I can be granted a Kerberos ticket as that other user.  (Getting another user's NT hash means I've probably already owned some random domain-joined workstation, for instance.)

As you probably already know, an NT hash is essentially password-equivalent and should be treated with the same level of sensitivity as the password itself.  And if you didn't know that, then you should read my earlier blog post where I talk a lot more about this stuff.

This was a deliberate design decision - that is, Microsoft is not going to just patch this away.  The reason they chose to do it this way was to ease the transition from NTLM to Kerberos back around the release of Windows 2000.  Newer versions of Windows such as Vista, 2008, 2008 R2, etc., use newer, better algorithms such as AES256_HMAC_SHA1 that do not use an NT hash.  Newer versions of Windows on a domain will automatically use these newer encryption types by default, but the older types such as the dreaded RC4-HMAC are still supported and can be used by down-level clients... or malicious users pretending to be down-level domain members.

As an administrator, you're free to turn the encryption type off entirely if you do not need the backwards compatibility.  Which you probably don't unless you're still rockin' some NT 4.0 servers or some other legacy application from the '90s.

(Which probably means most companies...)

Edit the Default Domain Controllers (or equivalent) Group Policy and look for the setting:

Computer Configuration > Policies > Windows Settings > Security Settings > Local Policies > Security Options > Network Security: Configure encryption types allowed for Kerberos.

This setting corresponds to the msDS-SupportedEncryptionTypes LDAP attribute on the domain controller computer objects in AD.  Enable the policy setting and uncheck the first three encryption types.

And of course, test in a lab first to ensure all your apps and equipment that uses AD authentication can deal with the new setting before applying it to production.
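For the scripting-inclined, the policy setting ultimately just writes a DWORD under the Kerberos parameters key. A sketch of the equivalent direct change (prefer the GPO for manageability; the bit values are my reading of the setting: DES-CBC-CRC = 0x1, DES-CBC-MD5 = 0x2, RC4-HMAC = 0x4, AES128 = 0x8, AES256 = 0x10, future types = 0x7FFFFFE0):

```powershell
# Sketch: allow only AES128, AES256, and future encryption types.
# 0x8 + 0x10 + 0x7FFFFFE0 = 0x7FFFFFF8 (decimal 2147483640)
$Key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters'
If (-Not (Test-Path $Key)) { New-Item -Path $Key -Force | Out-Null }
Set-ItemProperty -Path $Key -Name 'SupportedEncryptionTypes' -Value 0x7FFFFFF8 -Type DWord
```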

Writing and Distributing a Powershell Module via GPO

by Ryan 6. June 2014 13:06

I recently wrote a bunch of Powershell Cmdlets that were related to each other, so I decided to package them all up together as a Powershell module.  These particular Cmdlets were tightly integrated with Active Directory, so it made sense that they would often be run on a domain controller, or against a domain controller, usually by a domain administrator.

First, to create a Powershell script module, you create a folder, and then place your *.psm1 and *.psd1 files in that folder and they must have exactly the same name as that folder. (You can also compile your Powershell module into a DLL, but let's just talk about script modules today.)  So for instance, if you name your module "CloudModule," you would create a directory such as:

C:\Program Files\PSModules\CloudModule\

And then you'd place your files of the same name in that folder:

C:\Program Files\PSModules\CloudModule\CloudModule.psm1
C:\Program Files\PSModules\CloudModule\CloudModule.psd1

I think that a subdirectory of Program Files works well, because the Program Files directory is protected from modification by non-administrators and low-integrity processes.

The psd1 file is the manifest for the PS module. You can easily create a new manifest template with the New-ModuleManifest cmdlet, and customize the template to suit your needs. The module's manifest is simply the "metadata" for the Powershell module, such as what version of Powershell it requires, other prerequisite modules it requires, the module's author, etc. The cool thing is that with current versions of Powershell, just by saying:

RequiredModules = @('ActiveDirectory')

in your manifest file, Powershell will automatically load the ActiveDirectory module for you the first time any cmdlet from your module is run. So you really don't need to put an Import-Module or a #Requires -Module anywhere.
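For instance, a starter manifest for the hypothetical CloudModule above might be generated like this (parameter names per Powershell 3.0+; -RootModule was called -ModuleToProcess in 2.0):

```powershell
# Sketch: generate a manifest for the example CloudModule, then edit it to taste.
New-ModuleManifest -Path 'C:\Program Files\PSModules\CloudModule\CloudModule.psd1' `
    -RootModule 'CloudModule.psm1' `
    -ModuleVersion '1.0' `
    -Author 'Ryan' `
    -RequiredModules @('ActiveDirectory')
```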

The psm1 file is the collection of Powershell Advanced Functions (cmdlets) that make up your module. I would recommend naming all of your cmdlets with a common theme, using supported verbs (Get-Verb) and a common prefix. You know how the Active Directory module does Get-ADUser, Remove-ADObject, Set-ADGroup, etc.? You should do that too. For example, if you work for McDonald's, do Show-McSalary, Deny-McWorkersComp, Stop-McStrike, etc.

Now that you've got your module directory and files laid out, you need to add that path to your PSModulePath environment variable so that Powershell knows where to look for it every time it starts. Do not try to cheat and put your custom module in the same System32 directory where Microsoft puts their standard PS modules.
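On a single machine, that might look like this sketch (the GPO approach below is how I actually scaled it out):

```powershell
# Sketch: append the module root to the machine-wide PSModulePath, once.
$ModuleRoot = 'C:\Program Files\PSModules'
$Current = [Environment]::GetEnvironmentVariable('PSModulePath', 'Machine')
If (($Current -Split ';') -NotContains $ModuleRoot)
{
    [Environment]::SetEnvironmentVariable('PSModulePath', "$Current;$ModuleRoot", 'Machine')
}
```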

I decided that I wanted all my domain controllers to automatically load this custom module. And I don't want to go install and maintain this thing on 50 separate machines.  So for a centrally-managed solution, let's utilize Group Policy and SYSVOL.

First, let's put something like this in the Default Domain Controllers (or equivalent) GPO:

[Screenshot: a Group Policy setting that appends the SYSVOL module path to the PSModulePath environment variable]

And secondly, let's put our Powershell module in:

C:\Windows\SYSVOL\sysvol\Contoso.com\scripts\PSModules\CloudModule\

Of course, since this directory is replicated amongst all domain controllers, we only need to do this on one domain controller for the files to become available on all domain controllers.  Likewise, any time you update the module, you only need to update it on one DC, and all DCs will get the update.

Lastly, since this Powershell module is only for Domain Administrators, and SYSVOL by design is a very public place, let's protect our module's directory from prying eyes:

Careful that you only modify the permissions of that one specific module directory... you don't want to modify the permissions of anything else in Sysvol.
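Scripted, that lockdown might look something like this sketch (the path and group names are examples from this post; adjust for your own domain):

```powershell
# Sketch: break inheritance on the module folder and grant access only to
# Domain Admins, SYSTEM, and local Administrators. Names are placeholders.
$ModulePath = 'C:\Windows\SYSVOL\sysvol\Contoso.com\scripts\PSModules\CloudModule'
icacls $ModulePath /inheritance:r
icacls $ModulePath /grant 'CONTOSO\Domain Admins:(OI)(CI)F' 'NT AUTHORITY\SYSTEM:(OI)(CI)F' 'BUILTIN\Administrators:(OI)(CI)F'
```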

Parse-DNSDebugLogFile

by Ryan 30. May 2014 16:05

Happy Friday, nerds.

This is just going to be a quick post wherein I paste in a Powershell script I wrote about a year ago that parses a Windows Server DNS debug log file, and converts the log file from flat boring hard-to-read text, into lovely Powershell object output.  This makes sorting, analyzing and data mining your DNS log files with Powershell a breeze.

It's not my best work, but I'm posting it now because I'd forgotten about it until today, when I wanted to parse a DNS log file, searched for the script here, and realized I'd never posted it.

#Requires -Version 2
Function Parse-DNSDebugLogFile
{
<#
.SYNOPSIS
    This function reads a Windows Server DNS debug log file, and turns it from
    a big flat file of text, into a useful collection of objects, with each
    object in the collection representing one send or receive packet processed
    by the DNS server.
.DESCRIPTION
    This function reads a Windows Server DNS debug log file, and turns it from
    a big flat file of text, into a useful collection of objects, with each
    object in the collection representing one send or receive packet processed
    by the DNS server. This function takes two parameters - DNSLog and ErrorLog.
    Defaults are used if no parameters are supplied.
.NOTES
    Written by Ryan Ries, June 2013.
#>
    Param([Parameter(Mandatory=$True, Position=1)]
            [ValidateScript({Test-Path $_ -PathType Leaf})]
            [String]$DNSLog,
          [Parameter()]
            [String]$ErrorLog = "$Env:USERPROFILE\Desktop\Parse-DNSDebugLogFile.Errors.log")

    # This is the collection of DNS Query objects that this function will eventually return.
    $DNSQueriesCollection = @()
    # Try to read in the DNS debug log file. If error occurs, log error and return null.
    Try
    {
        [System.Array]$DNSLogFile = Get-Content $DNSLog -ReadCount 0 -ErrorAction Stop
        If($DNSLogFile.Count -LT 30)
        {
            Throw "File was empty or too short to be a DNS debug log."
        }
    }
    Catch
    {
        "$(Get-Date) - $($_.Exception.Message)" | Out-File $ErrorLog -Append
        Write-Verbose $_.Exception.Message
        Return
    }
    
    # Cut off the header of the DNS debug log file. It's about 30 lines long and we don't want it.
    $DNSLogFile = $DNSLogFile[30..($DNSLogFile.Count - 1)]
    # Create a temporary buffer for storing only lines that look valid.
    $Buffer = @()
    # Loop through log file. If the line is not blank and it contains what looks like a numerical date,
    # then we can be reasonably sure that it's a valid log file line. If the line is valid,
    # then add it to the buffer.    
    Foreach($_ In $DNSLogFile)
    {
        If($_.Length -GT 0 -AND $_ -Match "\d+/\d+/")
        {
            $Buffer += $_
        }
    }
    # Dump the buffer back into the DNSLogFile variable, and clear the buffer.
    $DNSLogFile = $Buffer
    $Buffer     = $Null
    # Now we parse text and use it to assemble a Query object. If all goes well,
    # then we add the Query object to the Query object collection. This is nasty-looking
# stuff that we have to do to turn a flat text file into a beautiful collection of
    # objects with members.
    Foreach($_ In $DNSLogFile)
    {
        If($_.Contains(" PACKET "))
        {
            $Query = New-Object System.Object
            $Query | Add-Member -Type NoteProperty -Name Date      -Value $_.Split(' ')[0]
            $Query | Add-Member -Type NoteProperty -Name Time      -Value $_.Split(' ')[1]
            $Query | Add-Member -Type NoteProperty -Name AMPM      -Value $_.Split(' ')[2]
            $Query | Add-Member -Type NoteProperty -Name ThreadID  -Value $_.Split(' ')[3]
            $Query | Add-Member -Type NoteProperty -Name PacketID  -Value $_.Split(' ')[6]
            $Query | Add-Member -Type NoteProperty -Name Protocol  -Value $_.Split(' ')[7]
            $Query | Add-Member -Type NoteProperty -Name Direction -Value $_.Split(' ')[8]
            $Query | Add-Member -Type NoteProperty -Name RemoteIP  -Value $_.Split(' ')[9]
            $BracketLeft  = $_.Split('[')[0]
            $BracketRight = $_.Split('[')[1]
            $Query | Add-Member -Type NoteProperty -Name XID       -Value $BracketLeft.Substring($BracketLeft.Length - 9, 4)
            If($BracketLeft.Substring($BracketLeft.Length - 4, 1) -EQ "R")
            {
                $Query | Add-Member -Type NoteProperty -Name Response -Value $True
            }
            Else
            {
                $Query | Add-Member -Type NoteProperty -Name Response -Value $False
            }
            $Query | Add-Member -Type NoteProperty -Name OpCode -Value $BracketLeft.Substring($BracketLeft.Length - 2, 1)
            $Query | Add-Member -Type NoteProperty -Name ResponseCode -Value $BracketRight.SubString(10, 8).Trim()
            $Query | Add-Member -Type NoteProperty -Name QuestionType -Value $_.Split(']')[1].Substring(1,5).Trim()
            $Query | Add-Member -Type NoteProperty -Name QuestionName -Value $_.Split(' ')[-1]
            
            # Just doing some more sanity checks here to make sure we parsed all that text correctly.
            # If something looks wrong, we'll just discard this whole line and continue on
            # with the next line in the log file.
            If($Query.QuestionName.Length -LT 1)
            {
                Continue
            }
            If($Query.XID.Length -NE 4)
            {
                "$(Get-Date) - XID Parse Error. The line was: $_" | Out-File $ErrorLog -Append
                Continue
            }
            If($Query.Protocol.ToUpper() -NE "UDP" -AND $Query.Protocol.ToUpper() -NE "TCP")
            {
                "$(Get-Date) - Protocol Parse Error. The line was: $_" | Out-File $ErrorLog -Append
                Continue
            }
            If($Query.Direction.ToUpper() -NE "SND" -AND $Query.Direction.ToUpper() -NE "RCV")
            {
                "$(Get-Date) - Direction Parse Error. The line was: $_" | Out-File $ErrorLog -Append
                Continue
            }
            
            # Let's change the query format back from RFC 1035 section 4.1.2 style, to the 
            # dot notation that we're used to. (8)computer(6)domain(3)com(0) = computer.domain.com.
            $Query.QuestionName = $($Query.QuestionName -Replace "\(\d*\)", ".").TrimStart('.')
            
            # Finally let's add one more property to the object; it might come in handy:
            $Query | Add-Member -Type NoteProperty -Name HostName -Value $Query.QuestionName.Split('.')[0].ToLower()
            
            # The line, which we've now converted to a query object, looks good, so let's
            # add it to the query objects collection.
            $DNSQueriesCollection += $Query
        }
    }
    $DNSLogFile = $Null
    
    # Clear the error log if it's too big.
    If(Test-Path $ErrorLog)
    {
        If((Get-ChildItem $ErrorLog).Length -GT 1MB)
        {
            Clear-Content $ErrorLog
        }
    }
    
    Return $DNSQueriesCollection
}
# END FUNCTION Parse-DNSDebugLogFile

And the output looks like this:

Date         : 6/17/2013
Time         : 11:51:41
AMPM         : AM
ThreadID     : 19BC
PacketID     : 0000000010146F10
Protocol     : UDP
Direction    : Snd
RemoteIP     : 10.173.16.82
XID          : e230
Response     : True
OpCode       : Q
ResponseCode : NOERROR
QuestionType : A
QuestionName : host01.domain.com.
HostName     : host01

Date         : 6/17/2013
Time         : 11:51:41
AMPM         : AM
ThreadID     : 19BC
PacketID     : 000000000811A6C0
Protocol     : UDP
Direction    : Snd
RemoteIP     : 10.173.16.82
XID          : 36df
Response     : True
OpCode       : Q
ResponseCode : NXDOMAIN
QuestionType : SRV
QuestionName : _kerberos-master._udp.domain.com.
HostName     : _kerberos-master
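Once the log is a collection of objects, the usual Powershell verbs apply. For example, a quick sketch to find the names your server gets asked about the most (the log file path is an example):

```powershell
# Sketch: tally the ten most common question names among received queries.
Parse-DNSDebugLogFile -DNSLog 'C:\Windows\System32\dns\dns.log' |
    Where-Object { $_.Direction -EQ 'Rcv' } |
    Group-Object QuestionName |
    Sort-Object Count -Descending |
    Select-Object -First 10 Count, Name
```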

About Me

Ryan Ries
Texas, USA
Systems Engineer
ryan@myotherpcisacloud.com

I am a systems engineer with a focus on Microsoft tech, but I can run with pretty much any system that uses electricity.  I'm all about getting closer to the cutting edge of technology while using the right tool for the job.

This blog is about exploring IT and documenting the journey.


Blog Posts (or Vids) You Must Read (or See):

Pushing the Limits of Windows by Mark Russinovich
Mysteries of Windows Memory Management by Mark Russinovich
Accelerating Your IT Career by Ned Pyle
Post-Graduate AD Studies by Ned Pyle
MCM: Active Directory Series by PFE Platforms Team
Encodings And Character Sets by David C. Zentgraf
Active Directory Maximum Limits by Microsoft
How Kerberos Works in AD by Microsoft
How Active Directory Replication Topology Works by Microsoft
Hardcore Debugging by Andrew Richards
The NIST Definition of Cloud by NIST



MCITP: Enterprise Administrator
VCP5-DCV
LOPSA
Server Fault: Ryan Ries
GitHub: github.com/ryanries

I do not discuss my employers on this blog and all opinions expressed are mine and do not reflect the opinions of my employers.