Generating Certificate Requests With Certreq

by Ryan 18. September 2013 08:01

Hey there,

SSL/TLS and the certificates that come with it are becoming more ubiquitous every day.  The system is not without its flaws (BEAST, hash collision attacks, etc.), but it's still generally regarded as "pretty good," and it's downright mandatory in any network that needs even a modicum of security.

One major downside is the administrative burden of having to keep track of and renew all those certificates, but Active Directory Certificate Services does a wonderful job of automating a lot of that away.  Many Windows administrators' lives would be a living hell if it weren't for Active Directory-integrated auto-enrollment.

But you don't always have the pleasure of working with an Enterprise CA. Sometimes you need to manually request a certificate from a non-Microsoft certificate authority, or from a CA that is kept offline, etc.  Most people immediately start thinking about OpenSSL, which is a fine, multiplatform open-source tool.  But I usually seek out native tools that I already have on my Windows servers before I go download something off the internet that duplicates functionality that already comes with Windows.

Which brings me to certreq.  I use this guy to generate CSRs (certificate signing requests) when I need to submit one to a CA that isn't part of my AD forest or cannot otherwise be used in an auto-enrollment scenario. First, paste something like this into an *.inf file:

;----------------- csr.inf -----------------
[Version]
Signature="$Windows NT$

[NewRequest]
Subject = "CN=web01.contoso.com, O=Contoso LLC, L=Redmond, S=Washington, C=US" 
KeySpec = 1
KeyLength = 2048
; Can be 1024, 2048, 4096, 8192, or 16384.
; Larger key sizes are more secure, but have
; a greater impact on performance.
Exportable = TRUE
MachineKeySet = TRUE
SMIME = False
PrivateKeyArchive = FALSE
UserProtected = FALSE
UseExistingKeySet = FALSE
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
RequestType = PKCS10
KeyUsage = 0xa0

[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.1 ; this is for Server Authentication
;-----------------------------------------------

Then, run the command:

C:\> certreq -new csr.inf web01.req

And certreq will take the settings from the INF file that you created and turn them into a CSR with a .req extension.  The certreq reference, including the syntax and all the various parameters that you can include in your INF file, is right here. It's at this moment that the private key associated with this request is generated and stored; the private key is not part of the CSR itself, so you don't have to worry about securely transporting the CSR.
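As a quick sanity check before you send the request off, certutil (also in the box) can decode the CSR so you can eyeball the subject and key length:

C:\> certutil -dump web01.req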

Now you can submit that CSR to the certificate authority. Once the certificate authority has approved your request, they'll give you back a PEM or a CER file. If your CA gives you a PEM file, just rename it to CER.  The format is the same.  Remember that only the computer that generated the CSR has the private key for this certificate, so the request can only be completed on that computer.  To install the certificate, run:

C:\> certreq -Accept certificate.cer

Now you should see the certificate in the computer's certificate store, and the little key on the icon verifies that you do have the associated private key to go along with it.
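If you'd rather verify from the command line than squint at icons, a quick check against the Cert: drive works too (adjust the subject filter to match your own certificate; MachineKeySet = TRUE means the certificate lands in the LocalMachine store):

PS C:\> Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like "*web01*" } | Select-Object Subject, HasPrivateKey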

So there you have it.  See you next time!

Today's Thoughts on Windows 8.1 (Will Do Server 2012 R2 Next)

by Ryan 13. September 2013 19:40

Guten Abend!

So thankfully, Microsoft reversed their earlier decision to not release Windows 8.1 and Server 2012 R2 RTM on TechNet or MSDN until October 18th. Both products popped up on TechNet a few days ago. So, I downloaded both and have been playing with them in my lab the past few days. (Which is likely the last good thing I will be able to get from TechNet.  Rest in peace, you final bastion of good will from Microsoft to IT professionals.)

Windows 8.1 has gone onto the following test machine:

  • Intel Core i5-2500k
  • 16GB RAM
  • 256GB Samsung SSD
  • NVidia GTX 670 2GB

Needless to say, it screams. My experience has been that you will typically have a better time with Win 8 if you set it up with your Microsoft Live ID from the beginning, and not a domain account. In fact, it's almost impossible to install Windows 8.1 with anything other than your Microsoft Live ID. (Although you're free to join a domain later, after the install. But good luck installing with a local account.) I would say that this will be a barrier for Windows 8 adoption in the enterprise; however, the actual Win 8.1 Enterprise SKU has not been released yet, so the installer for that edition should be tweaked for easier installation in an AD domain. (And I admittedly have not even tried custom deployable images as you would use in an enterprise deployment.)


But in a home setting, here is why I think it's awesome to go ahead and use your Live ID to install Windows 8.1:

  • Your Skydrive sets itself up. It's already there waiting for you. It's integrated into Explorer already, and the coolest part is it initially takes up no room on your hard drive. It all stays online but browsable from within Explorer, and you only pull a file down from the cloud when you open it. But if you have some need to have it available offline? Just right-click the file, folder, or your entire Skydrive and choose "Make available offline" and it will all be downloaded locally. If you used Skydrive before 8.1, you should love this improvement. If you did not use Skydrive before 8.1 then you may find that this added feature only gets in the way. 
  • All your OS settings from Windows 8 are synchronized and brought into 8.1, even if you performed a clean install of 8.1. As soon as the installation finished, I landed on a Windows desktop and my wallpaper is already what I had on my last PC, because the wallpaper was stored on Skydrive. Furthermore, all my settings like 'folder view settings' were automatically sucked into the new installation as well. Ever since Windows 95, every time I would install the OS on a new machine, the first thing I did was go to the folder view settings and uncheck the "Hide File Extensions" option. I always hated that Windows would hide the file extension of files. Well, now that setting stays with me on every Win 8 machine I move to and I no longer have to worry about it.
  • IE11 seems great so far. Very fast, although that could also be attributed to my beefy hardware. However, I have experienced one compatibility problem so far with IE11. I know that the user agent string, for one thing, changed dramatically in IE11. But in a pinch, hit F12 for the developer tools and you can emulate any down-level version of IE that you need. No big deal. I'll resist the urge to rant against web developers here.
  • (Though seriously, web developers, if you're listening, you are ruining the web.)
  • Boot to desktop and the ability to show your desktop wallpaper as your Start Screen background are welcome features. The resurrection of the classic Start Button on the taskbar, however, I don't care about one way or the other. I never really missed the old Start Menu from old versions of Windows. I pretty much don't care about the 'Modern,' 'Metro' interface either way, but I'm not bitter about it, because I know it wasn't made for me. It was made for phones and tablets. I have a desktop PC, and as such, I have no need for the Modern UI. End of story. Use what works for you. The OS has a new feature that I'm not really interested in, but who cares; the rest of the underlying OS is still there, and it's still good.
  • The Remote Server Administration Tools for Win 8.1 Preview installs on and works in Win 8.1 RTM, which I am using to set up a full Server 2012 R2 lab environment, which I shall talk about shortly in an upcoming blog post!

Powershell RemoteSigned and AllSigned Policies, Revisited

by Ryan 4. September 2013 12:35

A while back I talked about Powershell signing policies here.  Today I'm revisiting them.

Enabling script signing is generally a good idea for production environments. Things like script signing policies, using Windows Firewall, etc., all add up to provide a more thorough defense-in-depth strategy.

On the other hand, one negative side-effect that I've seen from enforcing Powershell signing policies is that it breaks the Best Practices Analyzers in Windows Server, since the BPAs apparently use scripts that are unsigned. Which is strange, since Microsoft is usually very good about signing everything that they release. I assume that they've since fixed that.

I'd consider the biggest obstacle to using only signed Powershell scripts to be one of discipline. But maybe that in itself is a good thing - if only the admins and engineers who are disciplined enough to put their digital signature on something are allowed to run PS scripts in production, perhaps that will cut down on the incidents of wild cowboys running ill-prepared scripts in your environment, or making a quick change on the fly to an important production script, etc. Could you imagine having to re-sign your script every time you changed a single line? That seems like the perfect way to encourage the behavior that scripts are first perfected in a lab, and only brought to production once they're fully baked and signed.

The next obstacle is getting everyone their own code signing certificate. This means you either need to spend some money getting a boat-load of certs from a public CA for all your employees, or you need to maintain your own PKI in your own Active Directory forest(s) for internal-use-only certificates.  This part alone is going to disqualify many organizations. Very rare is the organization that cares about properly signing things in their IT infrastructure.  Even rarer is the organization that actually does it, as opposed to just saying they want everything to be properly signed.  "It's just too much hassle... and now you're asking me to sign all my scripts, too?"

I also want to reinforce this point: Like UAC, Powershell script execution policies are not meant to be relied upon as a strong security measure. Microsoft does not tout them as such. They're meant to prevent you from making mistakes and doing things by accident. Things like UAC and PS script execution policies will keep honest people honest and keep non-administrators from tearing stuff up.  An AllSigned execution policy can also thwart unsophisticated attempts to compromise your security, such as someone modifying your PS profile without your knowledge to execute malicious code the next time you launch Powershell. But execution policies are no silver bullet. They are simply one more thin layer in your overall security onion.

So now let's play Devil's advocate. We already know the RemoteSigned policy should be a breeze to bypass just by clearing the Zone.Identifier alternate NTFS data stream (a one-liner, shown below). But how do we bypass an AllSigned policy?
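First, that RemoteSigned bypass, for reference. Unblock-File has shipped in the box since Powershell 3 and simply deletes the stream, or you can remove the stream directly:

PS C:\> Unblock-File -Path .\script.ps1
PS C:\> Remove-Item -Path .\script.ps1 -Stream Zone.Identifier

AllSigned is less forgiving. Here's what it looks like when the policy blocks you: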

PS C:\> .\script.ps1
.\script.ps1 : File C:\script.ps1 cannot be loaded. The file C:\script.ps1 is not digitally signed. The script will not execute on the system.

Script won't execute?  Do your administrator's script execution policies get you down?  No problem:

  • Open Notepad.
  • Paste the following line into Notepad:
Powershell.exe -NoProfile -Command "Powershell.exe -NoProfile -EncodedCommand ([Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes((Get-Content %1 | %% {$_} | Out-String))))"
  • Save the file as bypass.bat.
  • Run your script by passing it as a parameter to bypass.bat.

PS C:\> .\bypass.bat .\script.ps1

... And congratulations, you just bypassed Powershell's execution policy as a non-elevated user. (The trick is that -EncodedCommand runs your script's contents as a command rather than loading it from a script file, and the execution policy only governs script files.)

So in conclusion, even after showing you how easy it is to bypass the AllSigned execution policy, I still recommend that good execution policies be enforced in production environments. Powershell execution policies are not meant to foil hackers.

  1. They're meant to be a safeguard against well-intentioned admins accidentally running scripts that cause unintended consequences.
  2. They verify the authenticity of a script. We know who wrote it and we know that it has not been altered since they signed it.
  3. They encourage good behavior by making it difficult for admins and engineers to lackadaisically run scripts that could damage a sensitive environment.
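And if you do go the AllSigned route, the signing itself is only a couple of lines. Here is a minimal sketch, assuming you already have a code-signing certificate in your personal store (the timestamp server URL is just an example; use whichever one your CA recommends):

# Grab the first code-signing cert from the current user's personal store:
$Cert = @(Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert)[0]

# Sign the script. Timestamping is optional, but it keeps the signature
# valid even after the certificate itself expires:
Set-AuthenticodeSignature -FilePath .\MyScript.ps1 -Certificate $Cert -TimestampServer "http://timestamp.digicert.com"

# And verify:
Get-AuthenticodeSignature -FilePath .\MyScript.ps1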

Microsoft MCM and MCA Certifications Are Dead

by Ryan 31. August 2013 12:37

First they trash TechNet subscriptions, and now I'm hearing that Microsoft Certified Master and Architect certifications are officially dying as well. (Note: the following email was not sent to me. It came from this guy.)

We are contacting you to let you know we are making a change to the Microsoft Certified Master, Microsoft Certified Solutions Master, and Microsoft Certified Architect certifications. As technology changes so do Microsoft certifications and as such, we are continuing to evolve the Microsoft certification program. Microsoft will no longer offer Masters and Architect level training rotations and will be retiring the Masters level certification exams as of October 1, 2013. The IT industry is changing rapidly and we will continue to evaluate the certification and training needs of the industry to determine if there's a different certification needed for the pinnacle of our program.

As a Microsoft Certified Master, Microsoft Certified Solutions Master, or Microsoft Certified Architect, you have earned one of the highest certifications available through the Microsoft Certification program. Although individuals will no longer be able to earn these certifications, you will continue to hold the credential and you will not be required to recertify your credential in the future. You will continue to have access to the logos through the MCP site, and your certifications will continue to show in the appropriate section of your transcript, according to Microsoft technology retirement dates. If you are a Charter Member, you will continue to hold the Charter Member designation on your transcript.

Also as a Microsoft Certified Master, Microsoft Certified Solutions Master, or Microsoft Certified Architect, you are a member of an exclusive, highly technical community and you've told us this community is one of the biggest benefits of your certification. We encourage you to stay connected with your peers through the main community distribution lists. Although we won't be adding more people to this community, you continue to be a valued member of it. Over time, Microsoft plans to transition the distribution lists to the community, and, with your consent, will include your information so that it can continue to be a valuable resource for your ongoing technical discussions.

Within the coming weeks, you will receive invitations to an updated community site. This community site will require you to sign in with a Microsoft Account and will replace the need for a Microsoft Partner account as is required today. From this site, you will be able to manage service requests for the Masters and Architects communities – such as ordering welcome kits and managing your contact information for the distribution lists and directory - and accessing training rotation and other community content (if applicable).

If you have not ordered your Welcome Kit, the last day to do so is October 31, 2013. To order your Welcome Kit, please contact the Advanced Cert team at advcert@microsoft.com.

We thank you for your commitment to Microsoft technologies.

This is extraordinarily depressing for me, as Microsoft Certified Master certification has been one of my biggest goals for the past three years. And now that's no longer a possibility.

I don't know why Microsoft would make such a decision, or if there will ever be a new equivalent certification to take the place of the MCM and MCA.

Microsoft, many of us do not understand your recent decisions that appear to be squarely anti-IT Pro. Microsoft Certified Masters and Architects were your strongest supporters and evangelists. They helped advocate your products to customers and drove sales for you, Microsoft.  They spent the time and effort on that Masters or Architect certification because of a sincere passion for your products. I can't think of any other reason for you to make this decision unless you just don't want highly skilled and trained people advocating your products.

There are pockets of activism showing up already.  Here is a petition of sorts on Microsoft Connect that you can participate in. If you find any other similar petitions, please let me know.

Anatomy of a Powershell Advanced Function

by Ryan 28. August 2013 22:10

This is the template I'm using for my Powershell 301: Anatomy of a Powershell Advanced Function class. In this online virtual class, I'll be discussing the Powershell Advanced Function feature line by line. Hopefully this will help instill good scripting techniques for scripters who are new to Advanced Functions or Powershell in general.


#Requires -Version 3
#Requires -Modules ActiveDirectory
Function New-Cmdlet
{
<#
.SYNOPSIS
	This Cmdlet uses an approved verb from Get-Verb, comment-based help, #Requires
    statements, and it follows the format of an advanced Powershell function.
.DESCRIPTION
	This Cmdlet uses an approved verb from Get-Verb, comment-based help, #Requires
    statements, and it follows the format of an advanced Powershell function.
    This Cmdlet is designed only as a teaching tool to show good form for 
    writing Powershell v3 Cmdlets, also known as "Advanced Functions."
.PARAMETER ComputerName
    This describes the ComputerName parameter. If there are any defaults or extra
    attributes of the parameter as listed in the Param() block below, they will
    automatically be populated here. Notice that we use ComputerName here, and not
    ComputerNames or Hostname. That's because every other Cmdlet uses the parameter
    ComputerName with no S, even if we intend to supply multiple names, so we should
    stay consistent with that. Make it feel as much like an official Powershell
    Cmdlet as you can.
.EXAMPLE
    Get-Content C:\Hosts.txt | New-Cmdlet -Verbose

    This runs New-Cmdlet on each hostname in the Hosts.txt file.
.EXAMPLE
    "Server1","Server2","Server3" | New-Cmdlet

    This runs New-Cmdlet on Server1, Server2 and Server3.
.EXAMPLE
    New-Cmdlet -ComputerName Server1,Server2 -WhatIf -Verbose

    This tells what would happen if you ran New-Cmdlet on Server1 and Server2.
.INPUTS
    System.String[]

    This should be an array (a list) of one or more computer hostnames that you
    want to run this Cmdlet on. Pipeline input is accepted.
.OUTPUTS
    System.String. (For now, this Cmdlet just echoes each computer name back to the pipeline.)
.LINK
    If any relevant links, put 'em here.
.NOTES
    Notes. Author, date, whatever. You don't have to include every single one of 
    these comment-based help sections, but it never hurts.
#>
	[CmdletBinding(SupportsShouldProcess=$True)]
	Param(
	    [Parameter(Mandatory=$True,
	               ValueFromPipeline=$True,
	               ValueFromPipelineByPropertyName=$True)]
	    [ValidateLength(3,30)]
	    [String[]]$ComputerName
	)

    BEGIN
    {
        # Run one-time set-up tasks here, like defining variables, etc.
        Set-StrictMode -Version Latest
        Write-Verbose -Message "Cmdlet is starting. You won't see this message unless you used -Verbose."
    }
    PROCESS
    {
        # The process block can be executed multiple times as objects are passed through the pipeline into it.
        ForEach($computer In $ComputerName)
        {
            If($PSCmdlet.ShouldProcess($computer))
            {
                Write-Verbose -Message "Beginning Process block on $computer"          
                $computer
            }        
        }
    }
    END
    {        
        # Finally, run one-time tear-down tasks here.
        Write-Verbose -Message "Running End block."
    }
}


My Powershell Profile Just Went Full-Glitz

by Ryan 24. August 2013 14:48

My cat woke me up extremely early this Saturday morning with the incessant meowing and carpet-scratching that signals either her boredom, or an empty food dish.

So I got up, made some coffee, put some meat crackers into kitty's bowl, and then started tinkering with my Powershell profile... now it looks like this every time I launch PS: 

[Screenshot: my new Powershell profile banner]

It all started a couple weeks ago when I watched a Channel9 video where Jeffrey Snover was playing with Powershell, and I noticed that he had changed his error text color to green. I'm guessing like so:

$Host.PrivateData.ErrorForegroundColor = "Green"

I don't know why he configured his error messages to be green. Maybe it's just because it's easier to see than the default red.  But I like to imagine the idea is to promote positive feedback... like elementary school teachers marking their students' incorrect homework answers with another color of pen besides red... because red ink makes the kids feel bad.
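The warning and verbose streams have the same kind of knobs hanging off of $Host.PrivateData (on the stock console host, at least), and the title bar text is just one property away:

$Host.PrivateData.WarningForegroundColor = "Magenta"
$Host.PrivateData.VerboseForegroundColor = "Gray"
$Host.UI.RawUI.WindowTitle = "Powershell - $Env:USERNAME on $Env:COMPUTERNAME"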

Anyway, as I started playing with text colors and title bar text and whatnot, it occurred to me that all these settings would just revert to defaults after I closed this Powershell session. So how do we make such changes permanent?

The Powershell Profile!

Just type $Profile into Powershell right now, and it will tell you the full path of your very own PS profile. It should be something like this:

C:\Users\Ryan\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1

That script gets executed first thing every time you launch PS. It may not exist yet - you have to create it.  Just type Notepad $Profile and Notepad will open that file up, or prompt you to create it if it doesn't already exist.
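Or, if you want to script even that part, something like this creates the profile (and its parent directory, which may not exist yet on a fresh machine) before opening it:

If(-Not (Test-Path $Profile))
{
    New-Item -Path $Profile -ItemType File -Force | Out-Null
}
Notepad $Profile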

I'm still thinking of more neat gizmos to throw in here, but this is good for now. The weather information comes from the Yahoo Weather web API, and the ServerFault rep information comes from the StackExchange API. *Swoon...* REST APIs are so dreamy...

The StackExchange API gives you 300 anonymous calls per day per IP (more if you authenticate).  There is a basic amount of error handling, so if you can't connect to one or the other of the web APIs to get the data for whatever reason, it will just replace the appropriate string with [Error connecting to weather API], and so on. You'd want to put a short timeout on the API calls, too... Powershell doesn't need any help being slow to load!

And without further ado, here's the code:

Set-StrictMode -Version Latest
[String]$WOEID           = "2355944" # Where on earth ID for Arlington TX
[String]$WelcomeName     = "Ryan"
[Xml]$WeatherAPIResponse = $Null
$StackExAPIResponse      = $Null
[String]$WelcomeBanner   = [String]::Empty
[String]$WeatherString   = [String]::Empty
[String]$StackExString   = [String]::Empty

Try
{
    $WeatherAPIResponse = Invoke-WebRequest http://weather.yahooapis.com/forecastrss?w=$WOEID -TimeoutSec 3 -ErrorAction Stop
    If($WeatherAPIResponse -NE $Null -AND $WeatherAPIResponse.PSObject.Properties.Match('rss').Count)
    {
        $WeatherString = "Current weather in $($WeatherAPIResponse.rss.channel.location.city), $($WeatherAPIResponse.rss.channel.location.Region): $($WeatherAPIResponse.rss.channel.item.condition.temp)°, $($WeatherAPIResponse.rss.channel.item.condition.text), $($WeatherAPIResponse.rss.channel.atmosphere.humidity)% humidity"
    }
    Else
    {
        Throw
    }
}
Catch
{
    $WeatherString = "[Error connecting to weather service.]"
}

Try
{
    $StackExAPIResponse = Invoke-WebRequest https://api.stackexchange.com/users/104624?site=serverfault -TimeoutSec 3 -ErrorAction Stop
    If($StackExAPIResponse -NE $Null -AND $StackExAPIResponse.PSObject.Properties.Match('Content').Count)
    {
        # Parse the JSON once instead of three times:
        $StackExItems = (ConvertFrom-Json $StackExAPIResponse.Content).Items
        $StackExString = "Current ServerFault rep: $($StackExItems.Reputation) total, $($StackExItems.reputation_change_day) today, $($StackExItems.reputation_change_week) this week"
    }
    Else
    {
        Throw
    }
}
Catch
{
    $StackExString = "[Error connecting to StackExchange.  ]"
}


$WelcomeBanner      = @"
            .ooooooo            Welcome back, $WelcomeName!
          oooooooooooo          $WeatherString
        ooooo      ooooo        $StackExString
       oooo          oooo       
       ooo            .oo       
   oooooooooo          ooo      
  ooooooo.oooo.        oo.      
 ooo        .o.        ooooo    
ooo                    ooooooo  
oo                    .oooooooo 
oo                    oo     ooo
ooo                           oo
.oo                          ooo
 oooo                        oo.
  .oo myotherpcisacloud.com oo 
    .oooooooooooooooooooooooo  

"@

Write-Host $WelcomeBanner -ForegroundColor Cyan

SRV Record for NTP? In *MY* Active Directory?

by Ryan 12. August 2013 15:00

Howdy fellow IT goons. I am probably not going to talk about Powershell today... but no promises.

Good ole' RFC 2782, the great fireside reading that it is, spells out the concept behind DNS SRV records and using them to locate services within a domain. The Microsoft article "How DNS Support for Active Directory Works", which is more than just a heart-warming story and is required reading if you're a Windows admin, mentions that Active Directory is pretty much, more or less, compliant with the aforementioned RFC:

"When a domain controller is added to a forest, a DNS zone hosted on a DNS server is updated with the Locator DNS resource records for that domain controller. For this reason, the DNS zone must allow dynamic updates (RFC 2136), and the DNS server hosting that zone must support the SRV resource records (RFC 2782) to advertise the Active Directory directory service."

It goes _<service>._<protocol>.domain.com, so if I wanted to locate LDAP services in a domain I'd issue a DNS query for _ldap._tcp.domain.com, or if I wanted to find Kerberos service I'd do _kerberos._tcp.domain.com.
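You can watch this in action from any machine with nslookup, or with Resolve-DnsName on Windows 8/Server 2012 and later (substitute your own domain name, naturally):

C:\> nslookup -type=SRV _ldap._tcp.domain.com

PS C:\> Resolve-DnsName -Name _kerberos._tcp.domain.com -Type SRV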

But no one ever said that Active Directory uses every type of SRV there is by default. Not even close. Take NTP, Network Time Protocol, as an example.  Given the above logic I might issue a DNS query for _ntp._udp.domain.com, searching for NTP time service in that domain. Assuming I'm in a Microsoft Active Directory domain, odds are that I will not find it.

An SRV record is not created by default for the NTP service.  This is because Windows clients in an AD domain already know to use domain controllers for time service, and the domain controllers already have their own SRV records, so separate NTP records would be redundant and unnecessary.

In fact, the only Microsoft-centric scenario I know of where the SRV record _ntp._udp.domain.com comes into play is smart phones and devices using Microsoft Office Communicator or Lync - and even then it's optional, since they'll fall back to time.windows.com if the SRV record is not found. You can find those examples here and here. If you know of any other situations where Windows-based applications use such an SRV record, please let me know.

But maybe you have a heterogeneous IT environment and want to add these records yourself in order to support Unix/Linux clients and their applications that make such DNS queries.  It's very easy:

  • Open the DNS Manager console/MMC snap-in.
  • Drill down into your Forward Lookup Zones.
  • Locate the _udp subdomain, since the NTP service operates over the UDP protocol.
  • You should see a list of _kerberos and _kpasswd SRV records there already, representing the domain controllers currently in your domain.
  • Right-click in that white space and choose "Other New Records..."
  • Select "Service Location (SRV)" from the list.
  • Configure your new record like this screenshot:

[Screenshot: the new SRV record properties dialog]

Mind the underscores, and notice the trailing period at the end of your domain name. You will probably want to add one of these for each domain controller you have, and you can play around with the weights and priorities however you like. NTP uses port 123, of course. There will be some options in the drop-down list that they give you as examples. Don't confuse it with _nntp, unless you host the Network News Transfer Protocol service in your domain too.
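If clicking through that dialog once for every domain controller sounds tedious, the same record can be created from the command line. A sketch with placeholder names - dnscmd works on older servers, and Server 2012's DnsServer module gives you a Cmdlet:

C:\> dnscmd . /recordadd domain.com _ntp._udp SRV 0 100 123 dc01.domain.com

PS C:\> Add-DnsServerResourceRecord -Srv -ZoneName "domain.com" -Name "_ntp._udp" -DomainName "dc01.domain.com" -Priority 0 -Weight 100 -Port 123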

Powershell: Set-StrictMode -Version Latest

by Ryan 30. July 2013 13:13

Set-StrictMode is a wonderful Cmdlet that you should use in every script you write.  It will save you hours of debugging and frustration from things like fat-fingered variable and property names.  Here is what version 2 of Set-StrictMode does:

-- Prohibits references to uninitialized variables (including uninitialized variables in strings).
-- Prohibits references to non-existent properties of an object.
-- Prohibits function calls that use the syntax for calling methods.
-- Prohibits a variable without a name (${}).
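A quick illustration of the first two rules - both of these lines run "successfully" without strict mode, and both throw with it:

Set-StrictMode -Version 2

$Total = $TotalCuont + 1  # Fat-fingered variable name: throws instead of quietly treating it as $null.
(Get-Date).NoSuchProperty # Throws 'Property cannot be found' instead of quietly returning $null.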

If you apply Set-StrictMode to some of your old scripts, you'll be surprised at how many new errors it will throw at you. This is good though, because it encourages you to write safer, cleaner code with fewer bugs.  Number 2 on the list above was giving me fits today, however.  Let me give you an example.

$Users = Get-ADUser -Filter * -Properties *

Now let's say that I am interested in the EmployeeId attribute of the users. Even though I specified -Properties *, the user objects in the collection may or may not have a property called EmployeeId. A user object will never have an EmployeeId property that is merely blank or null; if the attribute is not populated in Active Directory, the Cmdlet omits the property entirely.  So in a script without Strict Mode, I could just do

Foreach($User In $Users) { If($User.EmployeeId) { ... } }

And it would function as expected, running the code block if the user had an EmployeeId, and skip it otherwise. But with Strict Mode, you'll see a lot of this:

Property 'EmployeeId' cannot be found on this object; make sure it exists.
At line:1 char:9
+ $User. <<<< EmployeeId
    + CategoryInfo          : InvalidOperation: (.:OperatorToken) [], RuntimeException
    + FullyQualifiedErrorId : PropertyNotFoundStrict

Even if I did

$Users = Get-ADUser -Filter { EmployeeId -NE $Null } -Properties *

or

$Users = Get-ADUser -Filter * -Properties * | Where-Object { $_.EmployeeId -NE $Null }

I would still get the errors when running through the Foreach loop, even though that should have given me only the users with EmployeeIDs. So I need a way to test for the existence of an object property, rather than just assuming that a null value returned when the property is referenced is good enough.

If($User -NE $Null -AND $User.PSObject.Properties.Match('EmployeeId').Count) { ... }

This works. The -AND operator in Powershell works like a "short circuit" && operator in most programming languages, meaning that if the first expression evaluates to false, the second expression is not evaluated at all. This is perfect, because if I accidentally fed a null user object to the code above without the short circuit, I would get the same kind of error about the $User object not having a PSObject property.
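Putting it all together, the Foreach loop from earlier becomes:

Foreach($User In $Users)
{
    If($User -NE $Null -AND $User.PSObject.Properties.Match('EmployeeId').Count)
    {
        # It is now safe to touch $User.EmployeeId, even under Set-StrictMode.
        $User.EmployeeId
    }
}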

Alright, back to scripting!

Tell Me Which Active Directory Security Groups Are Not Applying Inheritable Permissions

by Ryan 22. July 2013 14:05

I encounter many Active Directory forests that were built and maintained for years by other organizations, and then through mergers or acquisitions or business reorgs, I need to help bring them into the fold with the rest of my portfolio of AD forests.

AD permissions can be a deep rabbit hole, especially when sitting down to a new directory sight unseen. Administrators make subtle changes to AD objects over the years and a lot of entropy happens.  Entropy that's not always easy to see or keep track of.

In this particular instance, we had an issue where a delegation of control was not working correctly and/or consistently.  It was the common IT task of delegating the ability to reset passwords and unlock accounts (but nothing else) to a special "help desk" sort of security group. It was allowing members of the "help desk" group to reset the passwords of certain users, but not others.  None of them were administrative users, or members of the Domain Admins group, the Account Operators group, etc. etc.

Turns out, the problem was that some security groups were not including inheritable permissions from the domain root object, so users who were members of these certain groups were immune to the effects of the delegation. 

[Screenshot: the Advanced Security Settings dialog, with the "include inheritable permissions from this object's parent" checkbox]

At first I actually thought to go and click on every single security group in the domain, checking on whether they were applying inheritable permissions or not.  Then a few seconds later I realized, "Don't be an idiot, Ryan. The GUI is not your friend! Don't succumb to its siren song!"

After glancing at this excellent SDDL reference here for about 5 minutes, I whipped this up:

Foreach($Group In Get-ADGroup -Filter * -Properties nTSecurityDescriptor)
{
	# Groups that are not including inheritable permissions from their parent
	# have no ACEs carrying inheritance flags, so their SDDL contains no 'OI'.
	If(-Not $Group.nTSecurityDescriptor.Sddl.Contains('OI'))
	{
		$Group
	}
}

And presto - a list of all security groups in the domain that are not applying inheritable permissions from their parents.

PS - I admit, I still have to use a cheat sheet when reading SDDL.  :P

Finding Locked Out Domain Accounts, Or At Least, How Not To

by Ryan 19. July 2013 13:00

I hadn't posted in a little while, so I thought I'd do a two-fer today.

You might see some advice on the internet about using the userAccountControl attribute to identify locked out domain accounts.  More specifically, the following LDAP filter:

(&(objectCategory=User)(userAccountControl:1.2.840.113556.1.4.803:=16))

The =16 part should mean "locked out", per the attribute's documentation, keeping in mind that 0x10 in hex is 16 in decimal.

DON'T USE IT.  It doesn't work. I don't think it has ever worked. Apparently it was just an idea that some person on the AD design team had that never got implemented. If anyone has any history on this bit, and if it has ever worked in the past, I would love to hear about it. All I know is that it does not work now.

You can easily verify for yourself that it doesn't work with Powershell:

Get-ADUser -LDAPFilter "(&(objectCategory=User)(userAccountControl:1.2.840.113556.1.4.803:=16))"

You'll probably get 0 results, or some other very inaccurate value. (Other userAccountControl flags, however, definitely do work and can be used reliably. Just not this one.)
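For contrast, here is a userAccountControl bit that does work: 0x2 is ACCOUNTDISABLE, so this filter reliably finds disabled accounts:

Get-ADUser -LDAPFilter "(&(objectCategory=User)(userAccountControl:1.2.840.113556.1.4.803:=2))"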

Here is another LDAP filter you will often see on the web for finding locked out accounts:

(&(objectCategory=Person)(objectClass=User)(lockoutTime>=1))

DON'T USE THAT EITHER. That will return too many results.  The reason why is that lockoutTime is not reset until the next time that person successfully logs in. So if an account is locked out, then their lockoutTime attribute gets set, then if the domain lockout duration expires, the account is no longer technically locked out, but lockoutTime remains populated until the next time that user logs in. Now if you're thinking that we should filter this list by only the users who have a lockoutTime that is less than [domain lockout duration] minutes in the past, then you're on the right track. Those would be the users who are still really locked out.

When I type Search-ADAccount -LockedOut, however, I am given what seems to be an accurate number of users that are currently locked out. I should point out that when working in a large AD environment, I think it's best to point directly at your PDC emulator whenever possible, because the PDC emulator will always have the most up-to-date information about account lockouts. From a Microsoft article about urgent replication:

... account lockout is urgently replicated to the primary domain controller (PDC) emulator role owner and is then urgently replicated to the following:

• Domain controllers in the same domain that are located in the same site as the PDC emulator.

• Domain controllers in the same domain that are located in the same site as the domain controller that handled the account lockout.

• Domain controllers in the same domain that are located in sites that have been configured to allow change notification between sites (and, therefore, urgent replication) with the site that contains the PDC emulator or with the site where the account lockout was handled. These sites include any site that is included in the same site link as the site that contains the PDC emulator or in the same site link as the site that contains the domain controller that handled the account lockout.

In addition, when authentication fails at a domain controller other than the PDC emulator, the authentication is retried at the PDC emulator. For this reason, the PDC emulator locks the account before the domain controller that handled the failed-password attempt if the bad-password-attempt threshold is reached.

If you follow my earlier instructions on how to peek inside the Search-ADAccount cmdlet itself, you'll see that Microsoft itself keys on the account lockout time to perform this search.

Finally, I can reproduce the same behavior as Search-ADAccount -LockedOut with the following bit of Powershell, given that I know my domain's account lockout duration:

# Your domain's lockout duration, in minutes (or just hard-code it if you know it):
$AccountLockoutDuration = (Get-ADDefaultDomainPasswordPolicy).LockoutDuration.TotalMinutes

Get-ADUser -LDAPFilter "(&(objectCategory=Person)(objectClass=User)(lockoutTime>=1))" -Properties LockoutTime |
Select-Object Name, @{n="LockoutTime";e={[DateTime]::FromFileTime($_.LockoutTime)}} |
Sort-Object LockoutTime -Descending |
Where-Object { $_.LockoutTime -gt (Get-Date).AddMinutes($AccountLockoutDuration * -1) }

That gives the exact same results as Search-ADAccount -LockedOut.

About Me

Name: Ryan Ries
Location: Texas, USA
Occupation: Systems Engineer 

I am a Windows engineer and Microsoft advocate, but I can run with pretty much any system that uses electricity.  I'm all about getting closer to the cutting edge of technology while using the right tool for the job.

This blog is about exploring IT and documenting the journey.


Blog Posts (or Vids) You Must Read (or See):

Pushing the Limits of Windows by Mark Russinovich
Mysteries of Windows Memory Management by Mark Russinovich
Accelerating Your IT Career by Ned Pyle
Post-Graduate AD Studies by Ned Pyle
MCM: Active Directory Series by PFE Platforms Team
Encodings And Character Sets by David C. Zentgraf
Active Directory Maximum Limits by Microsoft
How Kerberos Works in AD by Microsoft
How Active Directory Replication Topology Works by Microsoft
Hardcore Debugging by Andrew Richards
The NIST Definition of Cloud by NIST


  • MCITP: Enterprise Administrator
  • VCP5-DCV
  • Ryan Ries on Server Fault (Q&A for system administrators)
  • LOPSA
  • GitHub: github.com/ryanries

I do not discuss my employers on this blog and all opinions expressed are mine and do not reflect the opinions of my employers.