SharpTLSScan v1.0

by Ryan 10. August 2014 20:08

Update 08/13/2014: v1.1 is here.

SSL and TLS have been getting a lot of attention from me lately, and recently I found myself in want of a tool that would tell me precisely which protocol versions and cipher suites a given server supported.

Sure, there's SSLScan and SSLScan-Win, but those tools haven't been updated in 5 years, and thus don't support the newer versions of TLS, 1.1 and 1.2.  And of course there are nice websites like SSL Labs that do a fine job, but I wanted to use this tool to audit internal/private systems too, not just internet web servers.

So I created a new tool and called it SharpTLSScan.  It's pure C# with no reliance on outside libraries (such as OpenSSL), and I managed to avoid the pain of directly interfacing with the SChannel API as well.

SharpTLSScan comes with the "It works on my machine (tm)" guarantee.  It's free, and the source will probably show up on Github pretty soon.

Here are some screenshots:

Usage is simple - SharpTLSScan myhost:636

First, the server's certificate is inspected and validated.  Next, a default connection is negotiated, which is useful for seeing what kind of connection your system would negotiate on its own.  Then, all protocol versions and all cipher suites are tested to see what the server will support.  (This can take a couple of minutes.)  Things that are obviously good (such as the certificate validating) are highlighted in green, while things that are obviously bad (such as SSL v2 support) are highlighted in red.  Things that are fair, but not great, (such as MD5 hashes) are in yellow.
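For the curious, the per-protocol testing can be sketched in a few lines.  This is my own illustrative Powershell sketch (the function name and structure are invented, not SharpTLSScan's actual C# internals): attempt a handshake restricted to a single protocol version and see whether the server accepts it.

```powershell
# Hypothetical sketch of per-protocol probing, analogous to what a scanner
# like SharpTLSScan does. Function and variable names are my own invention.
function Test-TlsProtocol {
    param(
        [string]$TargetHost,
        [int]$Port,
        [System.Security.Authentication.SslProtocols]$Protocol
    )
    $client = New-Object System.Net.Sockets.TcpClient
    try {
        $client.Connect($TargetHost, $Port)
        # Accept any certificate; here we only care whether the handshake succeeds.
        $ssl = New-Object System.Net.Security.SslStream($client.GetStream(), $false, { $true })
        $ssl.AuthenticateAsClient($TargetHost, $null, $Protocol, $false)
        return $true    # server accepted this protocol version
    }
    catch { return $false }    # handshake refused or failed
    finally { $client.Close() }
}

# The versions a full scan would iterate over:
$versions = 'Ssl2', 'Ssl3', 'Tls', 'Tls11', 'Tls12'
```

A full scan also has to enumerate cipher suites, which .NET's SslStream doesn't let you pick individually, so that part of a tool like this presumably has to work closer to the raw handshake.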

*Oh dear...*

The protocol versions appear interleaved in the output as a side-effect of the multithreading in the program.  I'll likely fix it in the next update.

Here you go: (14.3KB)

Override Write-Debug to Avoid 80-Character Word-Wrapping in Transcript

by Ryan 4. August 2014 07:08

I was writing some long, complicated Powershell scripts recently, and complicated scripts call for plenty of tracing/debug logging.  These scripts were to be run as scheduled tasks, and it was important that I saved this debug logging information to a file, so I figured that instead of re-inventing the wheel, I'd just use Start-Transcript in conjunction with Write-* cmdlets to automatically capture all the output.

This almost worked fine, except that the lines in the resulting transcript file were word-wrapped at 80 characters.  This made the transcripts ugly and hard to read.  The only thing worse than reading debug logs while trying to diagnose a subtle bug is having to do it when the debug logs are ugly and hard to read.

Hm, would it help if I resized the Powershell host window and buffer at the beginning of the script, like so? 

$Host.UI.RawUI.BufferSize = New-Object System.Management.Automation.Host.Size 180, 3000
$Host.UI.RawUI.WindowSize = New-Object System.Management.Automation.Host.Size 180, 50

No, that did not help.

As it turns out, the Write-Host cmdlet does not exhibit the same word-wrapping limitation as its Write-Debug and Write-Warning brethren.  So instead of changing the 900 Write-Debug cmdlets already in our script, let's just "override" the Write-Debug cmdlet:

Function Global:Write-Debug([Parameter(Mandatory=$True, ValueFromPipeline=$True)][String]$Message) {
    If ($DebugPreference -NE 'SilentlyContinue') {
        Write-Host "DEBUG: $Message" -ForegroundColor 'Yellow'
    }
}

Ah, no more needless word wrapping in my transcripts now.  Just be careful about putting stuff in the Global scope if you're working with other actors in the same environment.



Powershell: If You Want to Catch, Don't Forget to Throw

by Ryan 30. July 2014 12:07

I made a Powershell scripting error the other day, and wanted to share my lesson learned.  In hindsight it seems obvious, but I made the mistake nevertheless, then cursed when my script presented with an apparent bug.

I had a script that iterated through a collection of items using a ForEach loop - an extremely common task in Powershell.  Each iteration called a function that had a chance of failing, so I wrapped the call in a Try/Catch block.  The rest of the ForEach loop performed further processing that depended on that function succeeding, so if it failed, I wanted to use the Continue statement to skip the remainder of that iteration and go directly to the next one.

Let me illustrate:

:NextUser ForEach ($User In $Users) {
    Try {
        Write-Debug "I'm about to try something risky..."
        Pre-Process $User
    }
    Catch {
        Write-Error $_.Exception
        Continue NextUser
    }
    Write-Debug "Since pre-processing for $User was successful, I'm going to do more stuff here..."
    Post-Processing $User
}

And the Pre-Process function looked like this:

Function Pre-Process ($User) {
    Try { <# ...something that might fail... #> }
    Catch { Write-Error $_.Exception }
}

So what actually ended up happening was that when Pre-Process failed, it flashed an error message on the screen... but then the rest of the ForEach iteration continued to execute in an unpredictable and undesired way.  Why did the Continue statement in my Catch block not take me to the next ForEach iteration?

Well, as it turns out, the Catch block never ran.  Just because you catch an exception and use the Write-Error cmdlet doesn't mean you've thrown a terminating error to the caller - and you need a terminating error to exit a Try block and enter its Catch block.  Write-Error produces a non-terminating error, and since the error was coming from inside my own custom function, -ErrorAction Stop and $ErrorActionPreference didn't do anything for me.  What I needed was a Throw statement to generate the terminating error.  What was throwing me off (no pun intended) was confusion over which of the two Write-Error statements was actually executing: the one in the custom function, or the one in the Catch block of the ForEach loop that called it.

In retrospect, I should probably stop using Try/Catch blocks altogether in custom Powershell functions.  You can easily suppress a needed exception with a Try/Catch and cause unintended bugs.

But for now, simply adding a Throw statement to my custom function was all I needed to activate the Catch block in the calling script:

Function Pre-Process ($User) {
    Try { <# ...something that might fail... #> }
    Catch { Write-Error $_.Exception; Throw $_.Exception }
}
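The difference is easy to demonstrate in isolation.  In this toy example (the function names are mine), the non-terminating Write-Error inside a function never triggers the caller's Catch block, while Throw does:

```powershell
# A non-terminating error: Catch in the caller will NOT fire.
function Invoke-WithWriteError {
    Write-Error "non-terminating error"
}

# A terminating error: Catch in the caller WILL fire.
function Invoke-WithThrow {
    throw "terminating error"
}

$caughtWriteError = $false
Try { Invoke-WithWriteError 2>$null } Catch { $caughtWriteError = $true }

$caughtThrow = $false
Try { Invoke-WithThrow } Catch { $caughtThrow = $true }

"Write-Error caught: $caughtWriteError"   # Write-Error caught: False
"Throw caught: $caughtThrow"              # Throw caught: True
```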

Be the Master of the LastLogonTimestamp Attribute with S4U2Self

by Ryan 19. July 2014 09:07

I've written a little bit about the LastLogonTimestamp/LastLogonDate attribute here, and of course there is AskDS's notorious article on the subject here, but today I'm going to give you a handy little tip that I don't think I have mentioned before.

If you're an Active Directory administrator, chances are you're interested or have been interested in knowing if a given account is "stale," meaning that the account's owner has not logged in to the domain in some time.  (Keep in mind that an account could be either a user or a computer as it relates to Active Directory.)  You, like many sysadmins, might have some script or automated process that checks for stale accounts using a command-line approach, such as:

dsquery user -inactive 10

or Powershell's extremely flexible:

Get-ADUser -Filter * -Properties LastLogonDate |
    ? { $_.Enabled -AND $_.LastLogonDate -LT (Get-Date).AddDays(-90) }

And then you take action on those inactive accounts, such as moving them to an "Inactive Users" OU, or disabling their accounts, or sending a reminder email to the account holder reminding them that they have an account in this domain, etc.

It might be handy for you to "artificially" update the lastLogonTimeStamp of another user though.  Maybe you know that this user is on vacation and you don't want their user account to get trashed by the "garbage collector" for being inactive.  According to the documentation, lastLogonTimeStamp is only editable by the system, so forget about directly modifying the attribute the way that you would other LDAP attributes.  And of course "LastLogonDate" is not a real attribute at all - merely a calculated attribute that Powershell gives you to be helpful by converting lastLogonTimestamp into a friendly .NET DateTime object.
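For reference, the conversion Powershell does for you is just a FILETIME-to-DateTime conversion: lastLogonTimestamp stores the number of 100-nanosecond intervals since January 1, 1601 (UTC).  You can reproduce it yourself on a raw attribute value (the value below is an arbitrary example):

```powershell
# lastLogonTimestamp is a Windows FILETIME: 100-ns ticks since 1601-01-01 UTC.
$rawLastLogonTimestamp = 130500000000000000    # arbitrary example value
$asDate = [DateTime]::FromFileTimeUtc($rawLastLogonTimestamp)
$asDate.ToString('yyyy-MM-dd HH:mm')    # → 2014-07-16 16:00
```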

The S4U2Self (Service for User to Self) Kerberos extension can help us here.

Just right-click on any object (such as an OU in Active Directory or a folder in a file share), go to its Properties, then the Security tab.  Click the Advanced button.  Now go to the Effective Permissions tab.  Click the Select... button, and choose the user whose lastLogonTimestamp you want to update.  We are going to calculate the effective permissions of this inactive user:

By doing this, you are invoking the S4U2Self Kerberos extension, whereby the system will "go through the motions of Kerberos authentication and obtain a logon for the client, but without providing the client's credentials. Thus, you're not authenticating the client in this case, only making the rounds to collect the group security identifiers (SIDs) for the client."[1]

And just like that, you have updated the "Last Logon Time" on another user's behalf, without that user having to actually log on themselves.
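If you'd rather script it than click through the GUI, the same S4U2Self logon can be triggered from Powershell: constructing a WindowsIdentity from a UPN alone (no password) performs an S4U logon under the hood.  A sketch - the function name is my own, and this naturally only works from a domain-joined Windows machine with a real UPN:

```powershell
# Trigger an S4U2Self logon on a user's behalf, refreshing their logon
# timestamp the same way the Effective Permissions GUI trick does.
# Requires a domain-joined Windows host; the UPN below is hypothetical.
function Invoke-S4USelfLogon {
    param([string]$UserPrincipalName)    # e.g. 'bob@contoso.com'
    # The UPN-only constructor performs the S4U logon - no credentials needed.
    $identity = New-Object System.Security.Principal.WindowsIdentity($UserPrincipalName)
    try {
        "S4U logon obtained for $($identity.Name) with $($identity.Groups.Count) group SIDs"
    }
    finally { $identity.Dispose() }
}
```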

Disable RC4-HMAC (And Others) in Active Directory

by Ryan 15. July 2014 14:07

My long-distance pal Mathias Jessen pointed out this article today on Twitter, in which the article's author attempts to make us all shudder in fright at the idea that Microsoft Active Directory has a terrifying security vulnerability that will cause the world's corporate infrastructure to crumble and shatter humanity's socio-political status quo as script-kiddies take over the now destabilized Earth.

OK, it's not all that bad...

Of the several encryption types that AD supports for Kerberos authentication, RC4-HMAC is among the oldest and the weakest.  The reason the algorithm is still supported?  You guessed it... backwards compatibility.  The problem is that in this (perhaps ill-conceived, but hindsight's 20/20) implementation of RC4-HMAC, as outlined in RFC 4757, the encryption key that is used is the user's NT/MD4 hash itself!  What this means is that all I need is the NT hash of another user, and by forcing an AD domain controller to negotiate down to RC4-HMAC, I can be granted a Kerberos ticket as that other user.  (Getting another user's NT hash means I've probably already owned some random domain-joined workstation, for instance.)

As you probably already know, an NT hash is essentially password-equivalent and should be treated with the same level of sensitivity as the password itself.  And if you didn't know that, then you should read my earlier blog post where I talk a lot more about this stuff.

This was a deliberate design decision - that is, Microsoft is not going to just patch this away.  The reason they chose to do it this way was to ease the transition from NTLM to Kerberos back around the release of Windows 2000.  Newer versions of Windows such as Vista, 2008, 2008 R2, etc., use newer, better algorithms such as AES256_HMAC_SHA1 that do not use an NT hash.  Newer versions of Windows on a domain will automatically use these newer encryption types by default, but the older types such as the dreaded RC4-HMAC are still supported and can be used by down-level clients... or malicious users pretending to be down-level domain members.

As an administrator, you're free to turn the encryption type off entirely if you do not need the backwards compatibility.  Which you probably don't unless you're still rockin' some NT 4.0 servers or some other legacy application from the '90s.

(Which probably means most companies...)

Edit the Default Domain Controllers (or equivalent) Group Policy and look for the setting:

Computer Configuration > Policies > Windows Settings > Security Settings > Local Policies > Security Options > Network Security: Configure encryption types allowed for Kerberos.

This setting corresponds to the msDS-SupportedEncryptionTypes LDAP attribute on the domain controller computer objects in AD.  Enable the policy setting and uncheck the first three encryption types.
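Since the attribute is a bit mask, "uncheck the first three" translates to a specific number.  A quick Powershell illustration using the flag values documented for msDS-SupportedEncryptionTypes:

```powershell
# msDS-SupportedEncryptionTypes is a bit mask with these documented flags:
$flags = [ordered]@{
    'DES-CBC-CRC'          = 0x01
    'DES-CBC-MD5'          = 0x02
    'RC4-HMAC'             = 0x04
    'AES128-CTS-HMAC-SHA1' = 0x08
    'AES256-CTS-HMAC-SHA1' = 0x10
}

# Unchecking the first three types leaves only the two AES types:
$aesOnly = 0x08 -bor 0x10    # = 24
$enabled = $flags.Keys | Where-Object { $flags[$_] -band $aesOnly }
"msDS-SupportedEncryptionTypes = $aesOnly ($($enabled -join ', '))"
```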

And of course, test in a lab first to ensure all your apps and equipment that use AD authentication can deal with the new setting before applying it to production.

About Me

Ryan Ries
Texas, USA
Systems Engineer

I am a systems engineer with a focus on Microsoft tech, but I can run with pretty much any system that uses electricity.  I'm all about getting closer to the cutting edge of technology while using the right tool for the job.

This blog is about exploring IT and documenting the journey.

Blog Posts (or Vids) You Must Read (or See):

Pushing the Limits of Windows by Mark Russinovich
Mysteries of Windows Memory Management by Mark Russinovich
Accelerating Your IT Career by Ned Pyle
Post-Graduate AD Studies by Ned Pyle
MCM: Active Directory Series by PFE Platforms Team
Encodings And Character Sets by David C. Zentgraf
Active Directory Maximum Limits by Microsoft
How Kerberos Works in AD by Microsoft
How Active Directory Replication Topology Works by Microsoft
Hardcore Debugging by Andrew Richards
The NIST Definition of Cloud by NIST

MCITP: Enterprise Administrator


I do not discuss my employers on this blog and all opinions expressed are mine and do not reflect the opinions of my employers.